Compare commits


283 Commits

Author SHA1 Message Date
psychedelicious
b7d439c295 fix(mm): model names with periods borked
When we provide a config object during a model install, the config can override individual fields that would otherwise be derived programmatically. We use this to install starter models w/ a given name, description, etc.

This logic used `pathlib` to append a suffix to the model's name. When we provide a model name that has a period in it, `pathlib` splits the name at the period and replaces everything after it with the suffix. This is then used to determine the output path of the model.

As a result, some starter model paths are incorrect. For example, `IP Adapter SD1.5 Image Encoder` gets installed to `sd-1/clip_vision/IP Adapter SD1`.
2024-10-15 17:00:08 +10:00
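The failure mode described above can be reproduced directly with `pathlib` (the model name and output path come from the commit message; the `.safetensors` suffix is an illustrative assumption):

```python
from pathlib import Path

# pathlib treats everything after the last "." in the final path component as
# a file suffix, even when that "." is part of the model name.
p = Path("sd-1/clip_vision") / "IP Adapter SD1.5 Image Encoder"

print(p.suffix)  # '.5 Image Encoder' -- misidentified as an extension
print(p.stem)    # 'IP Adapter SD1'   -- the rest of the name is dropped

# Appending a suffix via with_suffix() therefore truncates the name:
print(p.with_suffix(".safetensors"))  # sd-1/clip_vision/IP Adapter SD1.safetensors

# Plain string concatenation keeps the full name intact:
print(str(p) + ".safetensors")  # .../IP Adapter SD1.5 Image Encoder.safetensors
```

This is why `IP Adapter SD1.5 Image Encoder` ended up at `sd-1/clip_vision/IP Adapter SD1`: the derived output path came from the truncated stem.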
Brandon Rising
3da8076a2b fix: Pin onnx versions to builds that don't require rare dlls 2024-10-12 10:36:51 -04:00
Mary Hipp
80360a8abb fix(api): update enum usage to work for python 3.11 2024-10-12 10:21:26 -04:00
Mary Hipp
acfeb4a276 undo changes that made category optional 2024-10-11 17:37:57 -04:00
Mary Hipp
b33dbfc95f prefix share link with window location 2024-10-11 17:25:58 -04:00
Mary Hipp
f9bc29203b ruff 2024-10-11 17:23:34 -04:00
Mary Hipp
cbe7717409 make sure combobox is not searchable 2024-10-11 17:23:34 -04:00
Mary Hipp
d6add93901 lint 2024-10-11 17:23:34 -04:00
Mary Hipp
ea45dce9dc (ui) add board sorting UI to board settings popover 2024-10-11 17:23:34 -04:00
Mary Hipp
8d44363d49 (ui): update boards list queries to only use sort params for list, and make sure archived boards are included in most places we are searching 2024-10-11 17:23:34 -04:00
Mary Hipp
9933cdb6b7 (api) fix missing sort params being drilled down, add case insensitivity to name sorting 2024-10-11 17:23:34 -04:00
Mary Hipp
e3e9d1f27c (ui) break out boards settings from gallery/image settings 2024-10-11 17:23:34 -04:00
psychedelicious
bb59ad438a docs(ui): add comments to ImageContextMenu 2024-10-11 09:36:23 -04:00
psychedelicious
e38f5b1576 fix(ui): safari doesn't have find on iterators 2024-10-11 09:36:23 -04:00
psychedelicious
1bb49b698f perf(ui): efficient gallery image hover state 2024-10-11 09:36:23 -04:00
psychedelicious
fa1fbd89fe tidy(ui): remove extraneous prop extraction 2024-10-11 09:36:23 -04:00
psychedelicious
190ef6732c perf(ui): properly memoize gallery image icon components 2024-10-11 09:36:23 -04:00
psychedelicious
947cd4694b perf(ui): use single event for all image context menus
Image elements register their target ref in a map, which is used to look up the image that was clicked on. Substantial perf improvement.
2024-10-11 09:36:23 -04:00
psychedelicious
ee32d0666d perf(ui): memoize gallery page buttons 2024-10-11 09:36:23 -04:00
psychedelicious
bc8ad9ccbf perf(ui): remove another extraneous useCallback 2024-10-11 09:36:23 -04:00
psychedelicious
e96b290fa9 perf(ui): remove extraneous useCallbacks 2024-10-11 09:36:23 -04:00
psychedelicious
b9f83eae6a perf(ui): do not call upload hook unless upload is needed 2024-10-11 09:36:23 -04:00
psychedelicious
9868e23235 feat(ui): use singleton context menu
This improves render perf for the image component by 10-20%.
2024-10-11 09:36:23 -04:00
psychedelicious
0060cae17c build(ui): set package mode target to ES2015 2024-10-11 07:54:44 -04:00
psychedelicious
56f0845552 tidy(ui): consistent naming for selector builder util 2024-10-11 07:51:55 -04:00
psychedelicious
da3f85dd8b fix(ui): edge case where entity isn't visible until interacting with canvas
To trigger the edge case:
- Have an empty layer and non-empty layer
- Select the non-empty layer
- Refresh the page
- Select the empty layer without doing any other action
- You may be unable to draw on the layer
- Zoom in/out slightly
- You can now draw on it

The problem was not syncing visibility when a layer is selected, leaving the layer hidden. This indirectly disabled interactions.

The fix is to listen for changes to the layer's selected status and sync visibility when that changes.
2024-10-11 07:51:55 -04:00
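A minimal sketch of the fix pattern (all names hypothetical; the real code is the canvas entity adapter): visibility is re-synced as a side effect of selection changes, so a selected-but-hidden layer can no longer occur.

```python
# Hypothetical model of the bug and fix described above.
class Layer:
    def __init__(self, is_empty: bool):
        self.is_empty = is_empty
        self.selected = False
        self.visible = not is_empty  # empty layers start hidden

    def set_selected(self, selected: bool) -> None:
        self.selected = selected
        self.sync_visibility()  # the fix: sync whenever selection changes

    def sync_visibility(self) -> None:
        # A selected layer must be visible to be interactable.
        self.visible = self.selected or not self.is_empty

layer = Layer(is_empty=True)
layer.set_selected(True)
print(layer.visible)  # True -- previously it stayed hidden until a zoom forced a re-sync
```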
psychedelicious
7185363f17 fix(ui): edge case where control adapter added counts could be off
We were:
- Incrementing `addedControlNets` or `addedT2IAdapters`
- Then attempting to add the adapter, which could fail and be skipped

We need to swap the order of operations to prevent misreporting of added ControlNets/T2I Adapters.

I don't think this would ever actually cause problems.
2024-10-11 10:37:30 +11:00
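The order-of-operations fix, as a minimal sketch (function and field names are hypothetical stand-ins for the real UI code):

```python
# Count only after a successful add, not before attempting it.
def add_control_adapters(adapters: list[dict]) -> int:
    added = 0
    for adapter in adapters:
        if not try_add(adapter):  # may fail and skip this adapter
            continue
        added += 1  # fixed: increment only on success
    return added

def try_add(adapter: dict) -> bool:
    # Stand-in for the real add logic, which can reject an adapter.
    return adapter.get("valid", False)

print(add_control_adapters([{"valid": True}, {"valid": False}]))  # 1
```

The buggy version incremented `added` before calling `try_add`, so a skipped adapter was still counted.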
Ryan Dick
ac08c31fbc Remove unnecessary hasattr checks for scaled_dot_product_attention. We pin the torch version, so there should be no concern that this function does not exist. 2024-10-10 19:23:45 -04:00
Ryan Dick
ea54a2655a Add a workaround for broken sliced attention on MPS with torch 2.4.1. 2024-10-10 19:23:45 -04:00
psychedelicious
cc83dede9f chore: bump version to v5.2.0rc1 2024-10-11 10:11:47 +11:00
Riccardo Giovanetti
8464fd2ced translationBot(ui): update translation (Italian)
Currently translated at 98.5% (1462 of 1483 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2024-10-11 09:41:45 +11:00
Васянатор
c3316368d9 translationBot(ui): update translation (Russian)
Currently translated at 100.0% (1479 of 1479 strings)

Co-authored-by: Васянатор <ilabulanov339@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/
Translation: InvokeAI/Web UI
2024-10-11 09:41:45 +11:00
Riku
8b2d5ab28a translationBot(ui): update translation (German)
Currently translated at 70.3% (1048 of 1490 strings)

translationBot(ui): update translation (German)

Currently translated at 69.4% (1027 of 1479 strings)

Co-authored-by: Riku <riku.block@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
2024-10-11 09:41:45 +11:00
psychedelicious
3f6acdc2d3 fix(ui): use non-icon version of delete menu item on canvas context menu 2024-10-10 18:23:32 -04:00
psychedelicious
4aa20a95b2 feat(ui): consolidate img2img canvas flow
Make the `New Canvas From Image` button do what the `New Img2Img From Image` button does.
2024-10-11 09:03:44 +11:00
Ryan Dick
2d82e69a33 Add support for FLUX ControlNet models (XLabs and InstantX) (#7070)
## Summary

Add support for FLUX ControlNet models (XLabs and InstantX).

## QA Instructions

- [x] SD1 and SDXL ControlNets, since the ModelLoaderRegistry calls were
changed.
- [x] Single Xlabs controlnet
- [x] Single InstantX union controlnet
- [x] Single InstantX controlnet
- [x] Single Shakker Labs Union controlnet
- [x] Multiple controlnets
- [x] Weight, start, end params all work as expected
- [x] Can be used with image-to-image and inpainting.
- [x] Clear error message if no VAE is passed when using InstantX
controlnet.
- [x] Install InstantX ControlNet in diffusers format from HF repo
(`InstantX/FLUX.1-dev-Controlnet-Union`)
- [x] Test all FLUX ControlNet starter models
## Merge Plan

No special instructions.

## Checklist

- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [x] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
2024-10-10 12:37:09 -04:00
Ryan Dick
683f9a70e7 Restore instantx_control_mode field on FLUX ControlNet invocation. 2024-10-10 15:25:30 +00:00
Ryan Dick
bb6d073828 Use the Shakker-Labs ControlNet union model as the only FLUX ControlNet starter model. 2024-10-10 13:59:59 +00:00
Kent Keirsey
7f7d8e5177 Merge branch 'ryan/flux-controlnet-xlabs-instantx' of https://github.com/invoke-ai/InvokeAI into ryan/flux-controlnet-xlabs-instantx 2024-10-10 08:06:25 -04:00
Ryan Dick
f37c5011f4 Reduce peak memory utilization when preparing FLUX controlnet inputs. 2024-10-10 07:59:29 -04:00
Ryan Dick
bb947c6162 Make FLUX controlnet node API more like SD API and get it working with linear UI. 2024-10-10 07:59:29 -04:00
Ryan Dick
a654dad20f Remove instantx_control_mode from FLUX ControlNet node. 2024-10-10 07:59:29 -04:00
Mary Hipp
2bd44662f3 possibly a working FLUX controlnet graph 2024-10-10 07:59:29 -04:00
Ryan Dick
e7f9086006 Fix bug with InstantX input image range. 2024-10-10 07:59:29 -04:00
Mary Hipp
5141be8009 hide Control Mode for FLUX control net layer 2024-10-10 07:59:29 -04:00
Mary Hipp
eacdfc660b ui: enable controlnet controls when FLUX is main model, update schema 2024-10-10 07:59:29 -04:00
maryhipp
5fd3c39431 update preprocessor logic to be more resilient 2024-10-10 07:59:29 -04:00
maryhipp
7daf3b7d4a update starter models to include FLUX controlnets 2024-10-10 07:59:29 -04:00
Ryan Dick
908f65698d Fix support for InstantX non-union models (with no single blocks). 2024-10-10 07:59:29 -04:00
Ryan Dick
63c4ac58e9 Support installing InstantX ControlNet models from diffusers directory format. 2024-10-10 07:59:29 -04:00
Ryan Dick
8c125681ea Skip tests that are failing on MacOS CI runners (for now). 2024-10-10 07:59:29 -04:00
Ryan Dick
118f0ba3bf Revert "Try to fix test failures affecting MacOS CI runners."
This reverts commit 216b36c75d.
2024-10-10 07:59:29 -04:00
Ryan Dick
b3b7d084d0 Try to fix test failures affecting MacOS CI runners. 2024-10-10 07:59:29 -04:00
Ryan Dick
812940eb95 (minor) Add comment about future memory optimization. 2024-10-10 07:59:29 -04:00
Ryan Dick
0559480dd6 Shift the controlnet-type-specific logic into the specific ControlNet extensions and make the FLUX model controlnet-type-agnostic. 2024-10-10 07:59:29 -04:00
Ryan Dick
d99e7dd4e4 Add instantx_control_mode param to FLUX ControlNet invocation. 2024-10-10 07:59:29 -04:00
Ryan Dick
e854181417 Create a dedicated FLUX ControlNet invocation. 2024-10-10 07:59:29 -04:00
Ryan Dick
de414c09fd Bugfixes to get InstantX ControlNet working. 2024-10-10 07:59:29 -04:00
Ryan Dick
ce4624f72b Update ControlNetCheckpointProbe.get_base_type() to work with InstantX. 2024-10-10 07:59:29 -04:00
Ryan Dick
47c7df3476 Fix circular imports related to XLabsControlNetFluxOutput and InstantXControlNetFluxOutput. 2024-10-10 07:59:29 -04:00
Ryan Dick
4289b5e6c3 Add instantx controlnet logic to FLUX model forward(). 2024-10-10 07:59:29 -04:00
Ryan Dick
c8d1d14662 Work on integrating InstantX into denoise process. 2024-10-10 07:59:29 -04:00
Ryan Dick
44c588d778 Rename DiffusersControlNetFlux -> InstantXControlNetFlux. 2024-10-10 07:59:29 -04:00
Ryan Dick
d75ac56d00 Create flux/extensions directory. 2024-10-10 07:59:29 -04:00
Ryan Dick
714dd5f0be Update FluxControlnetModel to work with both XLabs and InstantX. 2024-10-10 07:59:29 -04:00
Ryan Dick
2f4d3cb5e6 Add unit test to test the full flow of loading an InstantX ControlNet from a state dict. 2024-10-10 07:59:29 -04:00
Ryan Dick
b76555bda9 Add unit test for infer_instantx_num_control_modes_from_state_dict(). 2024-10-10 07:59:29 -04:00
Ryan Dick
1cdd501a0a Add unit test for infer_flux_params_from_state_dict(...). 2024-10-10 07:59:29 -04:00
Ryan Dick
1125218bc5 Update FLUX ControlNet unit test state dicts to include shapes. 2024-10-10 07:59:29 -04:00
Ryan Dick
683504bfb5 Add scripts/extract_sd_keys_and_shapes.py 2024-10-10 07:59:29 -04:00
Ryan Dick
03cf953398 First pass of utility function to infer the FluxParams from a state dict. 2024-10-10 07:59:29 -04:00
Ryan Dick
24c115663d Add unit test for convert_diffusers_instantx_state_dict_to_bfl_format(...) and fix a few bugs. 2024-10-10 07:59:29 -04:00
Ryan Dick
a9e7ecad49 Finish first draft of convert_diffusers_instantx_state_dict_to_bfl_format(...). 2024-10-10 07:59:29 -04:00
Ryan Dick
76f4766324 WIP - implement convert_diffusers_instantx_state_dict_to_bfl_format(...). 2024-10-10 07:59:29 -04:00
Ryan Dick
3dfc242f77 (minor) rename other_forward() -> forward() 2024-10-10 07:59:29 -04:00
Ryan Dick
1e43389cb4 Add utils for detecting XLabs ControlNet vs. InstantX ControlNet from
state dict.
2024-10-10 07:59:29 -04:00
Ryan Dick
cb33de34f7 Migrate DiffusersControlNetFlux from diffusers-style to BFL-style. 2024-10-10 07:59:29 -04:00
Ryan Dick
7562ea48dc Improve typing of zero_module(). 2024-10-10 07:59:29 -04:00
Ryan Dick
83f4700f5a Use top-level torch import for all torch stuff. 2024-10-10 07:59:29 -04:00
Ryan Dick
704e7479b2 Remove DiffusersControlNetFlux.from_transformer(...). 2024-10-10 07:59:29 -04:00
Ryan Dick
5f44559f30 Fixup typing around DiffusersControlNetFluxOutput. 2024-10-10 07:59:29 -04:00
Ryan Dick
7a22819100 Remove gradient checkpointing from DiffusersControlNetFlux. 2024-10-10 07:59:29 -04:00
Ryan Dick
70495665c5 Remove FluxMultiControlNetModel 2024-10-10 07:59:29 -04:00
Ryan Dick
ca30acc5b4 Remove LoRA stuff from DiffusersControlNetFlux. 2024-10-10 07:59:29 -04:00
Ryan Dick
8121843d86 Remove logic for modifying attn processors from DiffusersControlNetFlux. 2024-10-10 07:59:29 -04:00
Ryan Dick
bc0ded0a23 Rename FluxControlNetModel -> DiffusersControlNetFlux 2024-10-10 07:59:29 -04:00
Ryan Dick
30f6034f88 Start updating imports for FluxControlNetModel 2024-10-10 07:59:29 -04:00
Ryan Dick
7d56a8ce54 Copy model from 99f608218c/src/diffusers/models/controlnet_flux.py 2024-10-10 07:59:29 -04:00
Ryan Dick
e7dc439006 Rename ControlNetFlux -> XLabsControlNetFlux 2024-10-10 07:59:29 -04:00
Ryan Dick
bce5a93eb1 Add InstantX FLUX ControlNet state dict for unit testing. 2024-10-10 07:59:29 -04:00
Ryan Dick
93e98a1f63 Add support for FLUX controlnet weight, begin_step_percent and end_step_percent. 2024-10-10 07:59:29 -04:00
Ryan Dick
0f93deab3b First pass at integrating FLUX ControlNets into the FLUX Denoise invocation. 2024-10-10 07:59:29 -04:00
Ryan Dick
3f3aba8b10 Add FLUX XLabs ControlNet model probing. 2024-10-10 07:59:29 -04:00
Ryan Dick
0b84f567f1 Fix type errors and improve docs for ControlNetFlux. 2024-10-10 07:59:29 -04:00
Ryan Dick
69c0d7dcc9 Remove gradient checkpointing from ControlNetFlux. 2024-10-10 07:59:29 -04:00
Ryan Dick
5307248fcf Remove ControlNetFlux logic related to attn processor overrides. 2024-10-10 07:59:29 -04:00
Ryan Dick
2efaea8f79 Remove duplicate FluxParams class. 2024-10-10 07:59:29 -04:00
Ryan Dick
c1dfd9b7d9 Fix FLUX module imports for ControlNetFlux. 2024-10-10 07:59:29 -04:00
Ryan Dick
c594ef89d2 Copy ControlNetFlux model from 47495425db/src/flux/controlnet.py. 2024-10-10 07:59:29 -04:00
Ryan Dick
563db67b80 Add XLabs FLUX controlnet state dict key file to be used for development/testing. 2024-10-10 07:59:29 -04:00
psychedelicious
236c065edd fix(ui): respect grid size when fitting layer to box 2024-10-10 07:43:46 -04:00
psychedelicious
1f5d744d01 fix(ui): disable canvas-related image context menu items when canvas is busy 2024-10-10 07:43:46 -04:00
psychedelicious
b36c6af0ae feat(ui): add new img2img canvas from image functionality
This replicates the img2img flow:
- Reset the canvas
- Resize the bbox to the image's aspect ratio at the optimal size for the selected model
- Add the image as a raster layer
- Resize the layer to fit the bbox using the 'fill' strategy

After this completes, the user can immediately click Invoke and it will do img2img.
2024-10-10 07:43:46 -04:00
psychedelicious
4e431a9d5f feat(ui): add a mutex to CanvasEntityTransformer to prevent concurrent operations 2024-10-10 07:43:46 -04:00
psychedelicious
48a8232285 feat(ui): add entity adapter init callbacks
If an entity needs to do something after init, it can use this system. For example, if a layer should be transformed immediately after initializing, it can use an init callback.
2024-10-10 07:43:46 -04:00
psychedelicious
94007fef5b tidy(ui): remove unused reducer 2024-10-10 07:43:46 -04:00
psychedelicious
9e6fb3bd3f feat(ui): add hooks for new layer/canvas from image & use them 2024-10-10 07:43:46 -04:00
psychedelicious
8522129639 tidy(ui): "syncCache" -> "syncKonvaCache"
Reduce confusion w/ the many other caches
2024-10-10 17:45:05 +11:00
psychedelicious
15033b1a9d fix(ui): prevent edge case where layers get cached while hidden 2024-10-10 17:45:05 +11:00
psychedelicious
743d78f82b feat(ui): more debug info for canvas adapters 2024-10-10 17:45:05 +11:00
psychedelicious
06a434b0a2 tidy(ui): clean up awkward selector in CanvasEntityAdapterBase 2024-10-10 17:45:05 +11:00
psychedelicious
7f2fdae870 perf(ui): optimized object rendering
- Throttle opacity and compositing fill rendering to 100ms
- Reduce compositing rect rendering to minimum
2024-10-10 17:45:05 +11:00
psychedelicious
00be03b5b9 perf(ui): hide offscreen & uninteractable layers 2024-10-10 17:45:05 +11:00
psychedelicious
0f98806a25 fix(ui): deprecated konva attr 2024-10-10 17:45:05 +11:00
psychedelicious
0f1541d091 perf(ui): disable perfect draw for all shapes
This feature involves a certain amount of extra work to ensure stroke and fill with partial opacity render correctly together. However, none of our shapes actually use that combination of attributes, so we can disable this for a minor perf boost.
2024-10-10 17:45:05 +11:00
psychedelicious
c49bbb22e5 feat(ui): track whether entities intersect the bbox 2024-10-10 17:45:05 +11:00
psychedelicious
7bd4b586a6 feat(ui): track whether entities are on-screen or off-screen 2024-10-10 17:45:05 +11:00
psychedelicious
754f049f54 feat(ui): getScaledStageRect returns snapped values 2024-10-10 17:45:05 +11:00
psychedelicious
883beb90eb refactor(ui): do not rely on konva internal canvas cache for layer previews
Instead of pulling the preview canvas from the konva internals, use the canvas created for bbox calculations as the preview canvas.

This doesn't change perf characteristics, because we were already creating this canvas. It just means we don't need to dip into the konva internals.

It fixes an issue where the layer preview didn't update or show when a layer is disabled or otherwise hidden.
2024-10-10 17:45:05 +11:00
psychedelicious
ad76399702 feat(ui): add getRectIntersection util 2024-10-10 17:45:05 +11:00
psychedelicious
69773a791d feat(ui): use useAssertSingleton for all singleton modals
footgun insurance
2024-10-10 15:49:09 +11:00
psychedelicious
99e88e601d fix(ui): edge case where you get stuck w/ the workflow list menu open, or it opens unexpectedly
- When resetting workflows, retain the current mode state
- Remove the useEffect that reacted to the `isCleanEditor` flag to prevent the menu getting locked open
2024-10-10 15:49:09 +11:00
psychedelicious
4050f7deae feat(ui): make workflow support link work like a link 2024-10-10 15:49:09 +11:00
psychedelicious
0399b04f29 fix(ui): workflows marked touched on first load 2024-10-10 15:49:09 +11:00
psychedelicious
3b349b2686 chore(ui): lint 2024-10-10 15:49:09 +11:00
psychedelicious
aa34dbe1e1 feat(ui): "CopyWorkflowLinkModal" -> "ShareWorkflowModal" 2024-10-10 15:49:09 +11:00
psychedelicious
ac2476c63c fix(ui): use modal overlay for workflow share modal 2024-10-10 15:49:09 +11:00
psychedelicious
f16489f1ce feat(ui): split out delete style preset dialog logic into singleton 2024-10-10 15:49:09 +11:00
psychedelicious
3b38b69192 feat(ui): split out copy workflow link dialog logic into singleton 2024-10-10 15:49:09 +11:00
psychedelicious
2c601438eb feat(ui): split out delete workflow dialog logic into singleton 2024-10-10 15:49:09 +11:00
psychedelicious
5d6a2a3709 fix(ui): use Text component in style preset delete dialog 2024-10-10 15:49:09 +11:00
psychedelicious
1d7a264050 feat(ui): workflow share icon only for non-user workflows 2024-10-10 15:49:09 +11:00
psychedelicious
c494e0642a feat(ui): split out new workflow dialog logic, use it in list menu, restore new workflow dialog 2024-10-10 15:49:09 +11:00
psychedelicious
849b9e8d86 fix(ui): duplicate copy workflow link modals
The component state is a global singleton, but each workflow had an instance of the modal, so when you opened one, they _all_ opened.
2024-10-10 15:49:09 +11:00
psychedelicious
4a66b7ac83 chore(ui): bump @invoke-ai/ui-library
Brings in a fix where ConfirmationAlertDialog rest props weren't used correctly.
2024-10-10 15:49:09 +11:00
psychedelicious
751eb59afa fix(ui): issues with workflow list state
- Tooltips on buttons for a list item getting stuck
- List item action buttons should not propagate clicks
2024-10-10 15:49:09 +11:00
psychedelicious
f537cf1916 fix(ui): downloading workflow loads it 2024-10-10 15:49:09 +11:00
psychedelicious
0cc6f67bb1 feat(ui): use buildUseDisclosure for workflow list menu 2024-10-10 15:49:09 +11:00
psychedelicious
b2bf03fd37 feat(ui): use own useDisclosure for workflow delete confirm dialog 2024-10-10 15:49:09 +11:00
psychedelicious
14bc06ab66 feat(ui): add our own useDisclosure hook 2024-10-10 15:49:09 +11:00
psychedelicious
9c82cc7fcb feat(ui): use buildUseDisclosure for workflow copy link modal 2024-10-10 15:49:09 +11:00
psychedelicious
c60cab97a7 feat(ui): add buildUseDisclosure 2024-10-10 15:49:09 +11:00
psychedelicious
eda979341a feat(installer): use torch extra index on all cuda install pathways 2024-10-09 22:46:18 -04:00
Eugene Brodsky
b6c7949bb7 feat(backend): prefer xformers based on cuda compute capability 2024-10-09 22:46:18 -04:00
Eugene Brodsky
d691f672a2 feat(docker): upgrade to CUDA 12.4 in container 2024-10-09 22:46:18 -04:00
Eugene Brodsky
8deeac1372 feat(installer): add options to include or exclude xFormers based on the GPU model 2024-10-09 22:46:18 -04:00
Ryan Dick
4aace24f1f Reduce peak memory utilization when preparing FLUX controlnet inputs. 2024-10-10 00:18:46 +00:00
Ryan Dick
b1567fe0e4 Make FLUX controlnet node API more like SD API and get it working with linear UI. 2024-10-09 23:38:31 +00:00
Ryan Dick
3953e60a4f Remove instantx_control_mode from FLUX ControlNet node. 2024-10-09 22:00:54 +00:00
Mary Hipp
3c46522595 feat(ui): add option to copy share link for workflows if projectURL is defined (commercial) 2024-10-10 08:42:37 +11:00
Mary Hipp
63a2e17f6b possibly a working FLUX controlnet graph 2024-10-09 15:42:02 -04:00
Ryan Dick
8b1ef4b902 Fix bug with InstantX input image range. 2024-10-09 19:38:30 +00:00
Mary Hipp
5f2279c984 hide Control Mode for FLUX control net layer 2024-10-09 15:31:44 -04:00
Mary Hipp
e82d67849c ui: enable controlnet controls when FLUX is main model, update schema 2024-10-09 15:05:29 -04:00
maryhipp
3977ffaa3e update preprocessor logic to be more resilient 2024-10-09 14:57:14 -04:00
maryhipp
9a8a858fe4 update starter models to include FLUX controlnets 2024-10-09 14:57:14 -04:00
Ryan Dick
859944f848 Fix support for InstantX non-union models (with no single blocks). 2024-10-09 18:51:53 +00:00
Ryan Dick
8d1a45863c Support installing InstantX ControlNet models from diffusers directory format. 2024-10-09 17:04:10 +00:00
Ryan Dick
6798bbab26 Skip tests that are failing on MacOS CI runners (for now). 2024-10-09 16:34:42 +00:00
Ryan Dick
2c92e8a495 Revert "Try to fix test failures affecting MacOS CI runners."
This reverts commit 216b36c75d.
2024-10-09 16:30:40 +00:00
Ryan Dick
216b36c75d Try to fix test failures affecting MacOS CI runners. 2024-10-09 16:21:52 +00:00
Ryan Dick
8bf8742984 (minor) Add comment about future memory optimization. 2024-10-09 16:16:04 +00:00
Ryan Dick
c78eeb1645 Shift the controlnet-type-specific logic into the specific ControlNet extensions and make the FLUX model controlnet-type-agnostic. 2024-10-09 16:12:09 +00:00
Ryan Dick
cd88723a80 Add instantx_control_mode param to FLUX ControlNet invocation. 2024-10-09 14:17:42 +00:00
Ryan Dick
dea6cbd599 Create a dedicated FLUX ControlNet invocation. 2024-10-09 14:17:42 +00:00
Ryan Dick
0dd9f1f772 Bugfixes to get InstantX ControlNet working. 2024-10-09 14:17:42 +00:00
Ryan Dick
5d11c30ce6 Update ControlNetCheckpointProbe.get_base_type() to work with InstantX. 2024-10-09 14:17:42 +00:00
Ryan Dick
a783539cd2 Fix circular imports related to XLabsControlNetFluxOutput and InstantXControlNetFluxOutput. 2024-10-09 14:17:42 +00:00
Ryan Dick
2f8f30b497 Add instantx controlnet logic to FLUX model forward(). 2024-10-09 14:17:42 +00:00
Ryan Dick
f878e5e74e Work on integrating InstantX into denoise process. 2024-10-09 14:17:42 +00:00
Ryan Dick
bfc460a5c6 Rename DiffusersControlNetFlux -> InstantXControlNetFlux. 2024-10-09 14:17:42 +00:00
Ryan Dick
a24581ede2 Create flux/extensions directory. 2024-10-09 14:17:42 +00:00
Ryan Dick
56731766ca Update FluxControlnetModel to work with both XLabs and InstantX. 2024-10-09 14:17:42 +00:00
Ryan Dick
80bc4ebee3 Add unit test to test the full flow of loading an InstantX ControlNet from a state dict. 2024-10-09 14:17:42 +00:00
Ryan Dick
745b6dbd5d Add unit test for infer_instantx_num_control_modes_from_state_dict(). 2024-10-09 14:17:42 +00:00
Ryan Dick
c7628945c4 Add unit test for infer_flux_params_from_state_dict(...). 2024-10-09 14:17:42 +00:00
Ryan Dick
728927ecff Update FLUX ControlNet unit test state dicts to include shapes. 2024-10-09 14:17:42 +00:00
Ryan Dick
1a7eece695 Add scripts/extract_sd_keys_and_shapes.py 2024-10-09 14:17:42 +00:00
Ryan Dick
2cd14dd066 First pass of utility function to infer the FluxParams from a state dict. 2024-10-09 14:17:42 +00:00
Ryan Dick
5872f05342 Add unit test for convert_diffusers_instantx_state_dict_to_bfl_format(...) and fix a few bugs. 2024-10-09 14:17:42 +00:00
Ryan Dick
4ad135c6ae Finish first draft of convert_diffusers_instantx_state_dict_to_bfl_format(...). 2024-10-09 14:17:42 +00:00
Ryan Dick
c72c2770fe WIP - implement convert_diffusers_instantx_state_dict_to_bfl_format(...). 2024-10-09 14:17:42 +00:00
Ryan Dick
e733a1f30e (minor) rename other_forward() -> forward() 2024-10-09 14:17:42 +00:00
Ryan Dick
4be3a33744 Add utils for detecting XLabs ControlNet vs. InstantX ControlNet from
state dict.
2024-10-09 14:17:42 +00:00
Ryan Dick
1751c380db Migrate DiffusersControlNetFlux from diffusers-style to BFL-style. 2024-10-09 14:17:42 +00:00
Ryan Dick
16cda33025 Improve typing of zero_module(). 2024-10-09 14:17:42 +00:00
Ryan Dick
8308e7d186 Use top-level torch import for all torch stuff. 2024-10-09 14:17:42 +00:00
Ryan Dick
c0aab56d08 Remove DiffusersControlNetFlux.from_transformer(...). 2024-10-09 14:17:42 +00:00
Ryan Dick
1795f4f8a2 Fixup typing around DiffusersControlNetFluxOutput. 2024-10-09 14:17:42 +00:00
Ryan Dick
5bfd2ec6b7 Remove gradient checkpointing from DiffusersControlNetFlux. 2024-10-09 14:17:42 +00:00
Ryan Dick
a35b229a9d Remove FluxMultiControlNetModel 2024-10-09 14:17:42 +00:00
Ryan Dick
e93da5d4b2 Remove LoRA stuff from DiffusersControlNetFlux. 2024-10-09 14:17:42 +00:00
Ryan Dick
a17ea9bfad Remove logic for modifying attn processors from DiffusersControlNetFlux. 2024-10-09 14:17:42 +00:00
Ryan Dick
3578010ba4 Rename FluxControlNetModel -> DiffusersControlNetFlux 2024-10-09 14:17:42 +00:00
Ryan Dick
459cf52043 Start updating imports for FluxControlNetModel 2024-10-09 14:17:42 +00:00
Ryan Dick
9bcb93f575 Copy model from 99f608218c/src/diffusers/models/controlnet_flux.py 2024-10-09 14:17:42 +00:00
Ryan Dick
d1a0e99701 Rename ControlNetFlux -> XLabsControlNetFlux 2024-10-09 14:17:42 +00:00
Ryan Dick
92b1515d9d Add InstantX FLUX ControlNet state dict for unit testing. 2024-10-09 14:17:42 +00:00
Ryan Dick
36515e1e2a Add support for FLUX controlnet weight, begin_step_percent and end_step_percent. 2024-10-09 14:17:42 +00:00
Ryan Dick
c81bb761ed First pass at integrating FLUX ControlNets into the FLUX Denoise invocation. 2024-10-09 14:17:42 +00:00
Ryan Dick
1d4a58e52b Add FLUX XLabs ControlNet model probing. 2024-10-09 14:17:42 +00:00
Ryan Dick
62d12e6468 Fix type errors and improve docs for ControlNetFlux. 2024-10-09 14:17:41 +00:00
Ryan Dick
9541156ce5 Remove gradient checkpointing from ControlNetFlux. 2024-10-09 14:17:41 +00:00
Ryan Dick
eb5b6625ea Remove ControlNetFlux logic related to attn processor overrides. 2024-10-09 14:17:41 +00:00
Ryan Dick
9758e5a622 Remove duplicate FluxParams class. 2024-10-09 14:17:41 +00:00
Ryan Dick
58eba8bdbd Fix FLUX module imports for ControlNetFlux. 2024-10-09 14:17:41 +00:00
Ryan Dick
2821ba8967 Copy ControlNetFlux model from 47495425db/src/flux/controlnet.py. 2024-10-09 14:17:41 +00:00
Ryan Dick
2cc72b19bc Add XLabs FLUX controlnet state dict key file to be used for development/testing. 2024-10-09 14:17:41 +00:00
psychedelicious
8544ba3798 feat(ui): add fit to bbox context menu item
This immediately fits the selected layer to the bbox, maintaining its aspect ratio.
2024-10-09 23:13:08 +11:00
psychedelicious
65fe79fa0e feat(ui): add silent option to transformer.startTransform
A "silent" transformation executes without any user feedback.
2024-10-09 23:13:08 +11:00
psychedelicious
c99852657e feat(ui): disable transformer controls while applying transform 2024-10-09 23:13:08 +11:00
psychedelicious
ed54b89e9e fix(ui): edge case where transforms don't do anything due to caching
This could be triggered by transforming a layer, undoing, then transforming again. The simple fix is to ignore the rasterization cache for all transforms.
2024-10-09 23:13:08 +11:00
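A minimal sketch of a rasterization cache with such an escape hatch (all names are hypothetical; the real cache keys on canvas state): transforms pass `ignore_cache=True` so an undo/redo cycle cannot be served a stale raster.

```python
# Hypothetical rasterization cache keyed per layer.
cache: dict[str, str] = {}

def rasterize(layer_id: str, contents: str, ignore_cache: bool = False) -> str:
    if not ignore_cache and layer_id in cache:
        return cache[layer_id]
    raster = f"raster({contents})"  # stand-in for the expensive render
    cache[layer_id] = raster
    return raster

rasterize("layer-1", "old")
print(rasterize("layer-1", "new"))                     # raster(old) -- stale cache hit
print(rasterize("layer-1", "new", ignore_cache=True))  # raster(new) -- cache bypassed
```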
psychedelicious
d56c80af8e feat(ui): add ability to ignore rasterization cache 2024-10-09 23:13:08 +11:00
psychedelicious
0a65a01db8 feat(ui): use icons for layer menu common actions 2024-10-09 23:13:08 +11:00
psychedelicious
5f416ee4fa feat(ui): add IconMenuItem component 2024-10-09 23:13:08 +11:00
psychedelicious
115c82231b fix(ui): type signature for abstract sync method 2024-10-09 23:13:08 +11:00
psychedelicious
ccc1d4417e feat(ui): add "contain" and "cover" fit modes to transform 2024-10-09 23:13:08 +11:00
psychedelicious
5806a4bc73 chore: bump version to v5.1.1 2024-10-09 14:43:55 +11:00
psychedelicious
734631bfe4 feat(app): update example config file comment 2024-10-09 14:23:06 +11:00
psychedelicious
8d6996cdf0 fix(ui): sync pointer position on pointerdown
There's a Konva bug where `pointerenter` & `pointerleave` events aren't fired correctly on the stage.

In 87fdea4cc6 I made a change that surfaced this bug, breaking touch and Apple Pencil interactions, because the cursor position doesn't get updated.

Simple fix - ensure we update the cursor on `pointerdown` events, even though we shouldn't need to.

Will make a bug report upstream
2024-10-09 13:59:20 +11:00
psychedelicious
965d6be1f4 fix(ui): validate edges on paste
Closes #7058
2024-10-09 13:49:31 +11:00
psychedelicious
e31f253b90 fix(ui): canvas sliders
- Set an empty title to prevent browsers from showing "Please match the requested format." when hovering the number input
- Fix issue w/ `z-index` that prevented the popover button from being clicked while the input was focused
2024-10-09 13:45:36 +11:00
psychedelicious
5a94575603 chore(ui): lint 2024-10-09 13:43:22 +11:00
psychedelicious
1c3d06dc83 fix(ui): remove straggling onPointerUp handlers 2024-10-09 13:43:22 +11:00
psychedelicious
09b19e3640 fix(ui): formatting in translation source 2024-10-09 11:37:21 +11:00
Thomas Bolteau
1e0a4dfa3c translationBot(ui): update translation (French)
Currently translated at 55.6% (822 of 1477 strings)

Co-authored-by: Thomas Bolteau <thomas.bolteau50@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/fr/
Translation: InvokeAI/Web UI
2024-10-09 11:37:21 +11:00
Riccardo Giovanetti
5a1ab4aa9c translationBot(ui): update translation (Italian)
Currently translated at 98.7% (1461 of 1479 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.7% (1460 of 1479 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.5% (1458 of 1479 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.7% (1459 of 1477 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.7% (1453 of 1471 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2024-10-09 11:37:21 +11:00
Anonymous
d5c872292f translationBot(ui): update translation (Russian)
Currently translated at 99.9% (1470 of 1471 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.7% (1452 of 1471 strings)

translationBot(ui): update translation (English)

Currently translated at 99.9% (1470 of 1471 strings)

Co-authored-by: Anonymous <noreply@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/en/
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/
Translation: InvokeAI/Web UI
2024-10-09 11:37:21 +11:00
Mary Hipp Rogers
0d7edbce25 add missing translations (#7073)
Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2024-10-08 20:07:00 -04:00
psychedelicious
e20d964b59 chore(ui): lint 2024-10-09 08:02:11 +11:00
psychedelicious
ee95321801 fix(ui): edge case where board edit button doesn't disappear 2024-10-09 08:02:11 +11:00
psychedelicious
179c6d206c tweak(ui): edit board title button layout 2024-10-09 08:02:11 +11:00
psychedelicious
ffecd83815 fix(ui): typo 2024-10-09 07:32:01 +11:00
psychedelicious
f1c538fafc fix(ui): workflow sort popover behaviour 2024-10-09 07:32:01 +11:00
Mary Hipp
ed88b096f3 (ui) update so that default list does not sort 2024-10-09 07:32:01 +11:00
Mary Hipp
a28cabdf97 restore sorting UI for workflow library 2024-10-09 07:32:01 +11:00
Mary Hipp
db25be3ba2 (ui): add opened/created/updated details to tooltip, default sort by opened (OSS) and created (non-OSS) 2024-10-09 07:32:01 +11:00
Mary Hipp Rogers
3b9d1e8218 misc(ui): image/asset tab tooltips, icon to rename board, getting started text (#7067)
* add tooltips for images/assets tabs

* add icon by board name that can be used to activate editable

* update getting started text

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2024-10-08 15:46:08 -04:00
Mary Hipp
05d9ba8fa0 PR review feedback 2024-10-08 10:08:50 -04:00
Mary Hipp
3eee1ba113 remove prints 2024-10-08 10:08:50 -04:00
psychedelicious
7882e9beae feat(ui): WorkflowListItem simplify layout 2024-10-08 10:08:50 -04:00
Mary Hipp
7c9779b496 (ui) handle empty state 2024-10-08 10:08:50 -04:00
Mary Hipp
5832228fea lint and cleanup 2024-10-08 10:08:50 -04:00
Mary Hipp
1d32e70a75 (ui): clean up old workflow library 2024-10-08 10:08:50 -04:00
Mary Hipp
9092280583 (ui) new menu list of workflows 2024-10-08 10:08:50 -04:00
Mary Hipp
96dd1d5102 (api) update workflow list route to work with certain params optional so we can get all at once 2024-10-08 10:08:50 -04:00
Kent Keirsey
969f8b8e8d ruff update 2024-10-08 08:56:26 -04:00
David Burnett
ccb5f90556 Get Flux working on MPS when torch 2.5.0 test or nightlies are installed. 2024-10-08 08:56:26 -04:00
Alex Ameen
4770d9895d update flake (#7032)
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2024-10-08 10:55:49 +11:00
Elias Rad
aeb2275bd8 Update LOCAL_DEVELOPMENT.md 2024-10-08 10:08:24 +11:00
Elias Rad
aff5524457 Update INVOCATIONS.md 2024-10-08 10:08:24 +11:00
Elias Rad
825c564089 Update tutorials.md 2024-10-08 10:08:24 +11:00
Elias Rad
9b97c57f00 Update development.md 2024-10-08 10:08:24 +11:00
skunkworxdark
4b3a201790 Add Enhance Detail to communityNodes.md
- Add Enhance Detail node
- Fix some broken github image links.
2024-10-08 09:56:15 +11:00
psychedelicious
7e1b9567c1 chore: bump version to v5.1.0 2024-10-08 09:50:17 +11:00
psychedelicious
56ef754292 fix(ui): duplicate translation string for "layer" 2024-10-08 08:11:07 +11:00
Phrixus2023
2de99ec32d translationBot(ui): update translation (Chinese (Simplified Han script))
Currently translated at 65.0% (957 of 1471 strings)

Co-authored-by: Phrixus2023 <920414016@qq.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/zh_Hans/
Translation: InvokeAI/Web UI
2024-10-08 07:56:57 +11:00
Riccardo Giovanetti
889e63d585 translationBot(ui): update translation (Italian)
Currently translated at 98.7% (1453 of 1471 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.7% (1453 of 1471 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.7% (1452 of 1471 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2024-10-08 07:56:57 +11:00
Riku
56de2b3a51 feat(ui): allow for a broader range of guidance values for flux models 2024-10-08 07:51:20 +11:00
Alex Ameen
eb40bdb810 docs: list FLUX as supported
Adds FLUX to the list of supported models.
2024-10-07 10:27:56 -04:00
psychedelicious
0840e5fa65 fix(ui): missing translations for canvas drop area 2024-10-07 07:55:28 -04:00
Riccardo Giovanetti
b79f2a4e4f translationBot(ui): update translation (Italian)
Currently translated at 90.6% (1334 of 1471 strings)

translationBot(ui): update translation (Italian)

Currently translated at 85.9% (1265 of 1471 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2024-10-07 11:44:02 +11:00
Васянатор
76a533e67e translationBot(ui): update translation (Russian)
Currently translated at 100.0% (1471 of 1471 strings)

Co-authored-by: Васянатор <ilabulanov339@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/
Translation: InvokeAI/Web UI
2024-10-07 11:44:02 +11:00
Thomas Bolteau
188974988c translationBot(ui): update translation (French)
Currently translated at 55.5% (817 of 1471 strings)

Co-authored-by: Thomas Bolteau <thomas.bolteau50@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/fr/
Translation: InvokeAI/Web UI
2024-10-07 11:44:02 +11:00
Riku
b47aae2165 translationBot(ui): update translation (German)
Currently translated at 67.2% (989 of 1471 strings)

Co-authored-by: Riku <riku.block@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
2024-10-07 11:44:02 +11:00
psychedelicious
7105a22e0f chore(ui): bump @invoke-ai/ui-library
- Reverts the `onClick -> onPointerUp` changes, which fixed Apple Pencil interactions of buttons with tooltips but broke things in other subtle ways.
- Adds a default `openDelay` of 500ms on tooltips. This is another way to fix Apple Pencil interactions and, according to some searching online, is the best practice for tooltips anyway. The default behaviour should be for there to be a delay, and only in specific circumstances should there be none. So we'll see how this is received.
2024-10-07 10:05:20 +11:00
psychedelicious
eee4175e4d Revert "fix(ui): Apple Pencil requires onPointerUp instead of onClick"
This reverts commit 2a90f4f59e.
2024-10-07 10:05:20 +11:00
psychedelicious
e0b63559d0 docs(ui): getColorAtCoordinate 2024-10-05 23:41:33 -04:00
psychedelicious
aa54c1f969 feat(ui): fix color picker wrong color, improved perf
The color picker takes some time to sample the color from the canvas state. This could cause a race condition where the cursor position changes between when sampling starts and when it completes, resulting in the picker showing the wrong color. Sometimes it picks up the color picker tool preview!

To resolve this, the color picker's color syncing is now throttled to once per animation frame. Besides fixing the incorrect color issue, this improves perf substantially by reducing the number of samples we take.
2024-10-05 23:41:33 -04:00
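The fix described above coalesces color sampling to at most once per animation frame. A minimal Python sketch of the same throttling idea (hypothetical names; the actual fix lives in the TypeScript canvas code):

```python
import time


class FrameThrottledSampler:
    """Coalesce rapid sample requests so the expensive sample runs at most once per frame."""

    def __init__(self, sample_fn, frame_s: float = 1 / 60):
        self.sample_fn = sample_fn
        self.frame_s = frame_s
        self._last_sample_time = 0.0
        self._latest_coord = None
        self._value = None

    def request(self, coord):
        # Always remember only the newest cursor position. If sampling is
        # skipped this frame, a stale position can never overwrite a newer
        # one, which is the race the commit message describes.
        self._latest_coord = coord
        now = time.monotonic()
        if now - self._last_sample_time >= self.frame_s:
            self._last_sample_time = now
            self._value = self.sample_fn(self._latest_coord)
        return self._value
```

Requests arriving within the same frame window return the last sampled value; only the newest coordinate is ever sampled.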
psychedelicious
87fdea4cc6 feat(ui): updated cursor position tracking
- Record both absolute and relative positions
- Use simpler method to get relative position
- Generalize getColorUnderCursor to be getColorAtCoordinate
2024-10-05 23:41:33 -04:00
psychedelicious
53443084c5 tidy(ui): move getColorUnderCursor to utils 2024-10-05 23:41:33 -04:00
psychedelicious
8d2e5bfd77 tidy(ui): use constants for keys 2024-10-05 23:41:33 -04:00
psychedelicious
05e285c95a tidy(ui): getCanDraw code style 2024-10-05 23:41:33 -04:00
psychedelicious
25f19a35d7 tidy(ui): use entity isInteractable in tool module 2024-10-05 23:41:33 -04:00
psychedelicious
01bbd32598 fix(ui): board drop targets
We just changed all buttons to use `onPointerUp` events to fix Apple Pencil behaviour. This, plus the specific DOM layout of boards, resulted in `onPointerUp` being triggered on a board before the drop event fired.

The app saw this as selecting the board, which reset the gallery selection to the first image in that board. By the time the drop completed, the gallery selection had already been reset.

The DOM layout was slightly altered to work around this.
2024-10-06 08:15:53 +11:00
Thomas Bolteau
0e2761d5c6 translationBot(ui): update translation (French)
Currently translated at 54.1% (796 of 1470 strings)

Co-authored-by: Thomas Bolteau <thomas.bolteau50@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/fr/
Translation: InvokeAI/Web UI
2024-10-05 15:12:51 +10:00
psychedelicious
d5b51cca56 chore: bump version to v5.1.0rc5 2024-10-04 22:17:41 -04:00
psychedelicious
a303777777 fix(ui): image context menu buttons don't close menu
Need to render as a `MenuItem` to trigger the close behaviour
2024-10-04 21:33:01 -04:00
psychedelicious
e90b3de706 feat(ui): error state for missing ip adapter image 2024-10-04 21:30:38 -04:00
psychedelicious
3ce94e5b84 feat(ui): improved node image drop target & error state 2024-10-04 21:30:38 -04:00
psychedelicious
42e5ec3916 fix(ui): fix wonky drop target layouts 2024-10-04 21:30:38 -04:00
psychedelicious
ffa00d1d9a chore(ui): lint 2024-10-05 09:47:22 +10:00
psychedelicious
1648a2af6e fix(ui): board title editable 2024-10-05 09:47:22 +10:00
364 changed files with 7540 additions and 2784 deletions

View File

@@ -105,7 +105,7 @@ Invoke features an organized gallery system for easily storing, accessing, and r
### Other features
- Support for both ckpt and diffusers models
- SD1.5, SD2.0, and SDXL support
- SD1.5, SD2.0, SDXL, and FLUX support
- Upscaling Tools
- Embedding Manager & Support
- Model Manager & Support

View File

@@ -40,7 +40,7 @@ RUN --mount=type=cache,target=/root/.cache/pip \
elif [ "$GPU_DRIVER" = "rocm" ]; then \
extra_index_url_arg="--extra-index-url https://download.pytorch.org/whl/rocm5.6"; \
else \
extra_index_url_arg="--extra-index-url https://download.pytorch.org/whl/cu121"; \
extra_index_url_arg="--extra-index-url https://download.pytorch.org/whl/cu124"; \
fi &&\
# xformers + triton fails to install on arm64

View File

@@ -144,7 +144,7 @@ As you might have noticed, we added two new arguments to the `InputField`
definition for `width` and `height`, called `gt` and `le`. They stand for
_greater than or equal to_ and _less than or equal to_.
These impose contraints on those fields, and will raise an exception if the
These impose constraints on those fields, and will raise an exception if the
values do not meet the constraints. Field constraints are provided by
**pydantic**, so anything you see in the **pydantic docs** will work.
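The doc excerpt above describes pydantic numeric field constraints. A minimal sketch with a hypothetical model (note that in pydantic `gt` is strictly greater-than, while `ge`/`le` are the inclusive bounds):

```python
from pydantic import BaseModel, Field, ValidationError


class ImageSize(BaseModel):
    # gt=0: strictly greater than zero; le=4096: less than or equal to 4096.
    width: int = Field(default=512, gt=0, le=4096)
    height: int = Field(default=512, gt=0, le=4096)


ImageSize(width=1024, height=768)  # validates fine
try:
    ImageSize(width=0)  # violates gt=0
except ValidationError as err:
    print("rejected:", err.errors()[0]["msg"])
```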

View File

@@ -239,7 +239,7 @@ Consult the
get it set up.
Suggest using VSCode's included settings sync so that your remote dev host has
all the same app settings and extensions automagically.
all the same app settings and extensions automatically.
##### One remote dev gotcha

View File

@@ -2,7 +2,7 @@
## **What do I need to know to help?**
If you are looking to help to with a code contribution, InvokeAI uses several different technologies under the hood: Python (Pydantic, FastAPI, diffusers) and Typescript (React, Redux Toolkit, ChakraUI, Mantine, Konva). Familiarity with StableDiffusion and image generation concepts is helpful, but not essential.
If you are looking to help with a code contribution, InvokeAI uses several different technologies under the hood: Python (Pydantic, FastAPI, diffusers) and Typescript (React, Redux Toolkit, ChakraUI, Mantine, Konva). Familiarity with StableDiffusion and image generation concepts is helpful, but not essential.
## **Get Started**

View File

@@ -1,6 +1,6 @@
# Tutorials
Tutorials help new & existing users expand their abilty to use InvokeAI to the full extent of our features and services.
Tutorials help new & existing users expand their ability to use InvokeAI to the full extent of our features and services.
Currently, we have a set of tutorials available on our [YouTube channel](https://www.youtube.com/@invokeai), but as InvokeAI continues to evolve with new updates, we want to ensure that we are giving our users the resources they need to succeed.
@@ -8,4 +8,4 @@ Tutorials can be in the form of videos or article walkthroughs on a subject of y
## Contributing
Please reach out to @imic or @hipsterusername on [Discord](https://discord.gg/ZmtBAhwWhy) to help create tutorials for InvokeAI.
Please reach out to @imic or @hipsterusername on [Discord](https://discord.gg/ZmtBAhwWhy) to help create tutorials for InvokeAI.

View File

@@ -21,6 +21,7 @@ To use a community workflow, download the `.json` node graph file and load it in
+ [Clothing Mask](#clothing-mask)
+ [Contrast Limited Adaptive Histogram Equalization](#contrast-limited-adaptive-histogram-equalization)
+ [Depth Map from Wavefront OBJ](#depth-map-from-wavefront-obj)
+ [Enhance Detail](#enhance-detail)
+ [Film Grain](#film-grain)
+ [Generative Grammar-Based Prompt Nodes](#generative-grammar-based-prompt-nodes)
+ [GPT2RandomPromptMaker](#gpt2randompromptmaker)
@@ -81,7 +82,7 @@ Note: These are inherited from the core nodes so any update to the core nodes sh
**Example Usage:**
</br>
<img src="https://github.com/skunkworxdark/autostereogram_nodes/blob/main/images/spider.png" width="200" /> -> <img src="https://github.com/skunkworxdark/autostereogram_nodes/blob/main/images/spider-depth.png" width="200" /> -> <img src="https://github.com/skunkworxdark/autostereogram_nodes/raw/main/images/spider-dots.png" width="200" /> <img src="https://github.com/skunkworxdark/autostereogram_nodes/raw/main/images/spider-pattern.png" width="200" />
<img src="https://raw.githubusercontent.com/skunkworxdark/autostereogram_nodes/refs/heads/main/images/spider.png" width="200" /> -> <img src="https://raw.githubusercontent.com/skunkworxdark/autostereogram_nodes/refs/heads/main/images/spider-depth.png" width="200" /> -> <img src="https://raw.githubusercontent.com/skunkworxdark/autostereogram_nodes/refs/heads/main/images/spider-dots.png" width="200" /> <img src="https://raw.githubusercontent.com/skunkworxdark/autostereogram_nodes/refs/heads/main/images/spider-pattern.png" width="200" />
--------------------------------
### Average Images
@@ -142,6 +143,17 @@ To be imported, an .obj must use triangulated meshes, so make sure to enable tha
**Example Usage:**
</br><img src="https://raw.githubusercontent.com/dwringer/depth-from-obj-node/main/depth_from_obj_usage.jpg" width="500" />
--------------------------------
### Enhance Detail
**Description:** A single node that can enhance the detail in an image. Increase or decrease details in an image using a guided filter (as opposed to the typical Gaussian blur used by most sharpening filters.) Based on the `Enhance Detail` ComfyUI node from https://github.com/spacepxl/ComfyUI-Image-Filters
**Node Link:** https://github.com/skunkworxdark/enhance-detail-node
**Example Usage:**
</br>
<img src="https://raw.githubusercontent.com/skunkworxdark/enhance-detail-node/refs/heads/main/images/Comparison.png" />
--------------------------------
### Film Grain
@@ -308,7 +320,7 @@ View:
**Node Link:** https://github.com/helix4u/load_video_frame
**Output Example:**
<img src="https://raw.githubusercontent.com/helix4u/load_video_frame/main/_git_assets/testmp4_embed_converted.gif" width="500" />
<img src="https://raw.githubusercontent.com/helix4u/load_video_frame/refs/heads/main/_git_assets/dance1736978273.gif" width="500" />
--------------------------------
### Make 3D
@@ -349,7 +361,7 @@ See full docs here: https://github.com/skunkworxdark/Prompt-tools-nodes/edit/mai
**Output Examples**
<img src="https://github.com/skunkworxdark/match_histogram/assets/21961335/ed12f329-a0ef-444a-9bae-129ed60d6097" width="300" />
<img src="https://github.com/skunkworxdark/match_histogram/assets/21961335/ed12f329-a0ef-444a-9bae-129ed60d6097" />
--------------------------------
### Metadata Linked Nodes
@@ -407,7 +419,7 @@ View:
--------------------------------
### One Button Prompt
<img src="https://github.com/AIrjen/OneButtonPrompt_X_InvokeAI/blob/main/images/background.png" width="800" />
<img src="https://raw.githubusercontent.com/AIrjen/OneButtonPrompt_X_InvokeAI/refs/heads/main/images/background.png" width="800" />
**Description:** an extensive suite of auto prompt generation and prompt helper nodes based on extensive logic. Get creative with the best prompt generator in the world.
@@ -417,7 +429,7 @@ The main node generates interesting prompts based on a set of parameters. There
**Nodes:**
<img src="https://github.com/AIrjen/OneButtonPrompt_X_InvokeAI/blob/main/images/OBP_nodes_invokeai.png" width="800" />
<img src="https://raw.githubusercontent.com/AIrjen/OneButtonPrompt_X_InvokeAI/refs/heads/main/images/OBP_nodes_invokeai.png" width="800" />
--------------------------------
### Oobabooga
@@ -470,7 +482,7 @@ See full docs here: https://github.com/skunkworxdark/Prompt-tools-nodes/edit/mai
**Workflow Examples**
<img src="https://github.com/skunkworxdark/prompt-tools/blob/main/images/CSVToIndexStringNode.png" width="300" />
<img src="https://raw.githubusercontent.com/skunkworxdark/prompt-tools/refs/heads/main/images/CSVToIndexStringNode.png"/>
--------------------------------
### Remote Image
@@ -608,7 +620,7 @@ See full docs here: https://github.com/skunkworxdark/XYGrid_nodes/edit/main/READ
**Output Examples**
<img src="https://github.com/skunkworxdark/XYGrid_nodes/blob/main/images/collage.png" width="300" />
<img src="https://raw.githubusercontent.com/skunkworxdark/XYGrid_nodes/refs/heads/main/images/collage.png" />
--------------------------------

flake.lock (generated)
View File

@@ -2,11 +2,11 @@
"nodes": {
"nixpkgs": {
"locked": {
"lastModified": 1690630721,
"narHash": "sha256-Y04onHyBQT4Erfr2fc82dbJTfXGYrf4V0ysLUYnPOP8=",
"lastModified": 1727955264,
"narHash": "sha256-lrd+7mmb5NauRoMa8+J1jFKYVa+rc8aq2qc9+CxPDKc=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "d2b52322f35597c62abf56de91b0236746b2a03d",
"rev": "71cd616696bd199ef18de62524f3df3ffe8b9333",
"type": "github"
},
"original": {

View File

@@ -34,7 +34,7 @@
cudaPackages.cudnn
cudaPackages.cuda_nvrtc
cudatoolkit
pkgconfig
pkg-config
libconfig
cmake
blas
@@ -66,7 +66,7 @@
black
# Frontend.
yarn
pnpm_8
nodejs
];
LD_LIBRARY_PATH = pkgs.lib.makeLibraryPath buildInputs;

View File

@@ -282,12 +282,6 @@ class InvokeAiInstance:
shutil.copy(src, dest)
os.chmod(dest, 0o0755)
def update(self):
pass
def remove(self):
pass
### Utility functions ###
@@ -402,7 +396,7 @@ def get_torch_source() -> Tuple[str | None, str | None]:
:rtype: list
"""
from messages import select_gpu
from messages import GpuType, select_gpu
# device can be one of: "cuda", "rocm", "cpu", "cuda_and_dml, autodetect"
device = select_gpu()
@@ -412,15 +406,21 @@ def get_torch_source() -> Tuple[str | None, str | None]:
url = None
optional_modules: str | None = None
if OS == "Linux":
if device.value == "rocm":
if device == GpuType.ROCM:
url = "https://download.pytorch.org/whl/rocm5.6"
elif device.value == "cpu":
elif device == GpuType.CPU:
url = "https://download.pytorch.org/whl/cpu"
elif device.value == "cuda":
# CUDA uses the default PyPi index
elif device == GpuType.CUDA:
url = "https://download.pytorch.org/whl/cu124"
optional_modules = "[onnx-cuda]"
elif device == GpuType.CUDA_WITH_XFORMERS:
url = "https://download.pytorch.org/whl/cu124"
optional_modules = "[xformers,onnx-cuda]"
elif OS == "Windows":
if device.value == "cuda":
if device == GpuType.CUDA:
url = "https://download.pytorch.org/whl/cu124"
optional_modules = "[onnx-cuda]"
elif device == GpuType.CUDA_WITH_XFORMERS:
url = "https://download.pytorch.org/whl/cu124"
optional_modules = "[xformers,onnx-cuda]"
elif device.value == "cpu":

View File

@@ -206,6 +206,7 @@ def dest_path(dest: Optional[str | Path] = None) -> Path | None:
class GpuType(Enum):
CUDA_WITH_XFORMERS = "xformers"
CUDA = "cuda"
ROCM = "rocm"
CPU = "cpu"
@@ -221,11 +222,15 @@ def select_gpu() -> GpuType:
return GpuType.CPU
nvidia = (
"an [gold1 b]NVIDIA[/] GPU (using CUDA™)",
"an [gold1 b]NVIDIA[/] RTX 3060 or newer GPU using CUDA",
GpuType.CUDA,
)
vintage_nvidia = (
"an [gold1 b]NVIDIA[/] RTX 20xx or older GPU using CUDA+xFormers",
GpuType.CUDA_WITH_XFORMERS,
)
amd = (
"an [gold1 b]AMD[/] GPU (using ROCm™)",
"an [gold1 b]AMD[/] GPU using ROCm",
GpuType.ROCM,
)
cpu = (
@@ -235,14 +240,13 @@ def select_gpu() -> GpuType:
options = []
if OS == "Windows":
options = [nvidia, cpu]
options = [nvidia, vintage_nvidia, cpu]
if OS == "Linux":
options = [nvidia, amd, cpu]
options = [nvidia, vintage_nvidia, amd, cpu]
elif OS == "Darwin":
options = [cpu]
if len(options) == 1:
print(f'Your platform [gold1]{OS}-{ARCH}[/] only supports the "{options[0][1]}" driver. Proceeding with that.')
return options[0][1]
options = {str(i): opt for i, opt in enumerate(options, 1)}

View File

@@ -5,9 +5,10 @@ from fastapi.routing import APIRouter
from pydantic import BaseModel, Field
from invokeai.app.api.dependencies import ApiDependencies
from invokeai.app.services.board_records.board_records_common import BoardChanges
from invokeai.app.services.board_records.board_records_common import BoardChanges, BoardRecordOrderBy
from invokeai.app.services.boards.boards_common import BoardDTO
from invokeai.app.services.shared.pagination import OffsetPaginatedResults
from invokeai.app.services.shared.sqlite.sqlite_common import SQLiteDirection
boards_router = APIRouter(prefix="/v1/boards", tags=["boards"])
@@ -115,6 +116,8 @@ async def delete_board(
response_model=Union[OffsetPaginatedResults[BoardDTO], list[BoardDTO]],
)
async def list_boards(
order_by: BoardRecordOrderBy = Query(default=BoardRecordOrderBy.CreatedAt, description="The attribute to order by"),
direction: SQLiteDirection = Query(default=SQLiteDirection.Descending, description="The direction to order by"),
all: Optional[bool] = Query(default=None, description="Whether to list all boards"),
offset: Optional[int] = Query(default=None, description="The page offset"),
limit: Optional[int] = Query(default=None, description="The number of boards per page"),
@@ -122,9 +125,9 @@ async def list_boards(
) -> Union[OffsetPaginatedResults[BoardDTO], list[BoardDTO]]:
"""Gets a list of boards"""
if all:
return ApiDependencies.invoker.services.boards.get_all(include_archived)
return ApiDependencies.invoker.services.boards.get_all(order_by, direction, include_archived)
elif offset is not None and limit is not None:
return ApiDependencies.invoker.services.boards.get_many(offset, limit, include_archived)
return ApiDependencies.invoker.services.boards.get_many(order_by, direction, offset, limit, include_archived)
else:
raise HTTPException(
status_code=400,

View File

@@ -83,7 +83,7 @@ async def create_workflow(
)
async def list_workflows(
page: int = Query(default=0, description="The page to get"),
per_page: int = Query(default=10, description="The number of workflows per page"),
per_page: Optional[int] = Query(default=None, description="The number of workflows per page"),
order_by: WorkflowRecordOrderBy = Query(
default=WorkflowRecordOrderBy.Name, description="The attribute to order by"
),
@@ -93,5 +93,5 @@ async def list_workflows(
) -> PaginatedResults[WorkflowRecordListItemDTO]:
"""Gets a page of workflows"""
return ApiDependencies.invoker.services.workflow_records.get_many(
page=page, per_page=per_page, order_by=order_by, direction=direction, query=query, category=category
order_by=order_by, direction=direction, page=page, per_page=per_page, query=query, category=category
)

View File

@@ -192,6 +192,7 @@ class FieldDescriptions:
freeu_s2 = 'Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to mitigate the "oversmoothing effect" in the enhanced denoising process.'
freeu_b1 = "Scaling factor for stage 1 to amplify the contributions of backbone features."
freeu_b2 = "Scaling factor for stage 2 to amplify the contributions of backbone features."
instantx_control_mode = "The control mode for InstantX ControlNet union models. Ignored for other ControlNet models. The standard mapping is: canny (0), tile (1), depth (2), blur (3), pose (4), gray (5), low quality (6). Negative values will be treated as 'None'."
class ImageField(BaseModel):

View File

@@ -0,0 +1,99 @@
from pydantic import BaseModel, Field, field_validator, model_validator
from invokeai.app.invocations.baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
Classification,
invocation,
invocation_output,
)
from invokeai.app.invocations.fields import FieldDescriptions, ImageField, InputField, OutputField, UIType
from invokeai.app.invocations.model import ModelIdentifierField
from invokeai.app.invocations.util import validate_begin_end_step, validate_weights
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.app.util.controlnet_utils import CONTROLNET_RESIZE_VALUES
class FluxControlNetField(BaseModel):
image: ImageField = Field(description="The control image")
control_model: ModelIdentifierField = Field(description="The ControlNet model to use")
control_weight: float | list[float] = Field(default=1, description="The weight given to the ControlNet")
begin_step_percent: float = Field(
default=0, ge=0, le=1, description="When the ControlNet is first applied (% of total steps)"
)
end_step_percent: float = Field(
default=1, ge=0, le=1, description="When the ControlNet is last applied (% of total steps)"
)
resize_mode: CONTROLNET_RESIZE_VALUES = Field(default="just_resize", description="The resize mode to use")
instantx_control_mode: int | None = Field(default=-1, description=FieldDescriptions.instantx_control_mode)
@field_validator("control_weight")
@classmethod
def validate_control_weight(cls, v: float | list[float]) -> float | list[float]:
validate_weights(v)
return v
@model_validator(mode="after")
def validate_begin_end_step_percent(self):
validate_begin_end_step(self.begin_step_percent, self.end_step_percent)
return self
@invocation_output("flux_controlnet_output")
class FluxControlNetOutput(BaseInvocationOutput):
"""FLUX ControlNet info"""
control: FluxControlNetField = OutputField(description=FieldDescriptions.control)
@invocation(
"flux_controlnet",
title="FLUX ControlNet",
tags=["controlnet", "flux"],
category="controlnet",
version="1.0.0",
classification=Classification.Prototype,
)
class FluxControlNetInvocation(BaseInvocation):
"""Collect FLUX ControlNet info to pass to other nodes."""
image: ImageField = InputField(description="The control image")
control_model: ModelIdentifierField = InputField(
description=FieldDescriptions.controlnet_model, ui_type=UIType.ControlNetModel
)
control_weight: float | list[float] = InputField(
default=1.0, ge=-1, le=2, description="The weight given to the ControlNet"
)
begin_step_percent: float = InputField(
default=0, ge=0, le=1, description="When the ControlNet is first applied (% of total steps)"
)
end_step_percent: float = InputField(
default=1, ge=0, le=1, description="When the ControlNet is last applied (% of total steps)"
)
resize_mode: CONTROLNET_RESIZE_VALUES = InputField(default="just_resize", description="The resize mode used")
# Note: We default to -1 instead of None, because in the workflow editor UI None is not currently supported.
instantx_control_mode: int | None = InputField(default=-1, description=FieldDescriptions.instantx_control_mode)
@field_validator("control_weight")
@classmethod
def validate_control_weight(cls, v: float | list[float]) -> float | list[float]:
validate_weights(v)
return v
@model_validator(mode="after")
def validate_begin_end_step_percent(self):
validate_begin_end_step(self.begin_step_percent, self.end_step_percent)
return self
def invoke(self, context: InvocationContext) -> FluxControlNetOutput:
return FluxControlNetOutput(
control=FluxControlNetField(
image=self.image,
control_model=self.control_model,
control_weight=self.control_weight,
begin_step_percent=self.begin_step_percent,
end_step_percent=self.end_step_percent,
resize_mode=self.resize_mode,
instantx_control_mode=self.instantx_control_mode,
),
)

View File

@@ -6,7 +6,6 @@ import torchvision.transforms as tv_transforms
from torchvision.transforms.functional import resize as tv_resize
from invokeai.app.invocations.baseinvocation import BaseInvocation, Classification, invocation
from invokeai.app.invocations.controlnet_image_processors import ControlField
from invokeai.app.invocations.fields import (
DenoiseMaskField,
FieldDescriptions,
@@ -17,13 +16,16 @@ from invokeai.app.invocations.fields import (
WithBoard,
WithMetadata,
)
from invokeai.app.invocations.model import TransformerField
from invokeai.app.invocations.flux_controlnet import FluxControlNetField
from invokeai.app.invocations.model import TransformerField, VAEField
from invokeai.app.invocations.primitives import LatentsOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.flux.controlnet.controlnet_flux import ControlNetFlux
from invokeai.backend.flux.controlnet_extension import ControlNetExtension
from invokeai.backend.flux.controlnet.instantx_controlnet_flux import InstantXControlNetFlux
from invokeai.backend.flux.controlnet.xlabs_controlnet_flux import XLabsControlNetFlux
from invokeai.backend.flux.denoise import denoise
from invokeai.backend.flux.inpaint_extension import InpaintExtension
from invokeai.backend.flux.extensions.inpaint_extension import InpaintExtension
from invokeai.backend.flux.extensions.instantx_controlnet_extension import InstantXControlNetExtension
from invokeai.backend.flux.extensions.xlabs_controlnet_extension import XLabsControlNetExtension
from invokeai.backend.flux.model import Flux
from invokeai.backend.flux.sampling_utils import (
clip_timestep_schedule_fractional,
@@ -90,9 +92,13 @@ class FluxDenoiseInvocation(BaseInvocation, WithMetadata, WithBoard):
description="The guidance strength. Higher values adhere more strictly to the prompt, and will produce less diverse images. FLUX dev only, ignored for schnell.",
)
seed: int = InputField(default=0, description="Randomness seed for reproducibility.")
controlnet: ControlField | list[ControlField] | None = InputField(
control: FluxControlNetField | list[FluxControlNetField] | None = InputField(
default=None, input=Input.Connection, description="ControlNet models."
)
controlnet_vae: VAEField | None = InputField(
description=FieldDescriptions.vae,
input=Input.Connection,
)
@torch.no_grad()
def invoke(self, context: InvocationContext) -> LatentsOutput:
@@ -198,12 +204,21 @@ class FluxDenoiseInvocation(BaseInvocation, WithMetadata, WithBoard):
noise=noise,
)
with (
transformer_info.model_on_device() as (cached_weights, transformer),
ExitStack() as exit_stack,
):
assert isinstance(transformer, Flux)
with ExitStack() as exit_stack:
# Prepare ControlNet extensions.
# Note: We do this before loading the transformer model to minimize peak memory (see implementation).
controlnet_extensions = self._prep_controlnet_extensions(
context=context,
exit_stack=exit_stack,
latent_height=latent_h,
latent_width=latent_w,
dtype=inference_dtype,
device=x.device,
)
# Load the transformer model.
(cached_weights, transformer) = exit_stack.enter_context(transformer_info.model_on_device())
assert isinstance(transformer, Flux)
config = transformer_info.config
assert config is not None
@@ -237,16 +252,6 @@ class FluxDenoiseInvocation(BaseInvocation, WithMetadata, WithBoard):
else:
raise ValueError(f"Unsupported model format: {config.format}")
# Prepare ControlNet extensions.
controlnet_extensions = self._prep_controlnet_extensions(
context=context,
exit_stack=exit_stack,
latent_height=latent_h,
latent_width=latent_w,
dtype=inference_dtype,
device=x.device,
)
x = denoise(
model=transformer,
img=x,
@@ -313,39 +318,93 @@ class FluxDenoiseInvocation(BaseInvocation, WithMetadata, WithBoard):
latent_width: int,
dtype: torch.dtype,
device: torch.device,
) -> list[ControlNetExtension] | None:
) -> list[XLabsControlNetExtension | InstantXControlNetExtension]:
# Normalize the controlnet input to list[ControlField].
controlnets: list[ControlField]
if self.controlnet is None:
return None
elif isinstance(self.controlnet, ControlField):
controlnets = [self.controlnet]
elif isinstance(self.controlnet, list):
controlnets = self.controlnet
controlnets: list[FluxControlNetField]
if self.control is None:
controlnets = []
elif isinstance(self.control, FluxControlNetField):
controlnets = [self.control]
elif isinstance(self.control, list):
controlnets = self.control
else:
raise ValueError(f"Unsupported controlnet type: {type(self.controlnet)}")
raise ValueError(f"Unsupported controlnet type: {type(self.control)}")
controlnet_extensions: list[ControlNetExtension] = []
for controlnet in controlnets:
model = exit_stack.enter_context(context.models.load(controlnet.control_model))
assert isinstance(model, ControlNetFlux)
# TODO(ryand): Add a field to the model config so that we can distinguish between XLabs and InstantX ControlNets
# before loading the models. Then make sure that all VAE encoding is done before loading the ControlNets to
# minimize peak memory.
# First, load the ControlNet models so that we can determine the ControlNet types.
controlnet_models = [context.models.load(controlnet.control_model) for controlnet in controlnets]
# Calculate the controlnet conditioning tensors.
# We do this before loading the ControlNet models because it may require running the VAE, and we are trying to
# keep peak memory down.
controlnet_conds: list[torch.Tensor] = []
for controlnet, controlnet_model in zip(controlnets, controlnet_models, strict=True):
image = context.images.get_pil(controlnet.image.image_name)
controlnet_extensions.append(
ControlNetExtension.from_controlnet_image(
model=model,
controlnet_image=image,
latent_height=latent_height,
latent_width=latent_width,
dtype=dtype,
device=device,
control_mode=controlnet.control_mode,
resize_mode=controlnet.resize_mode,
weight=controlnet.control_weight,
begin_step_percent=controlnet.begin_step_percent,
end_step_percent=controlnet.end_step_percent,
if isinstance(controlnet_model.model, InstantXControlNetFlux):
if self.controlnet_vae is None:
raise ValueError("A ControlNet VAE is required when using an InstantX FLUX ControlNet.")
vae_info = context.models.load(self.controlnet_vae.vae)
controlnet_conds.append(
InstantXControlNetExtension.prepare_controlnet_cond(
controlnet_image=image,
vae_info=vae_info,
latent_height=latent_height,
latent_width=latent_width,
dtype=dtype,
device=device,
resize_mode=controlnet.resize_mode,
)
)
)
elif isinstance(controlnet_model.model, XLabsControlNetFlux):
controlnet_conds.append(
XLabsControlNetExtension.prepare_controlnet_cond(
controlnet_image=image,
latent_height=latent_height,
latent_width=latent_width,
dtype=dtype,
device=device,
resize_mode=controlnet.resize_mode,
)
)
# Finally, load the ControlNet models and initialize the ControlNet extensions.
controlnet_extensions: list[XLabsControlNetExtension | InstantXControlNetExtension] = []
for controlnet, controlnet_cond, controlnet_model in zip(
controlnets, controlnet_conds, controlnet_models, strict=True
):
model = exit_stack.enter_context(controlnet_model)
if isinstance(model, XLabsControlNetFlux):
controlnet_extensions.append(
XLabsControlNetExtension(
model=model,
controlnet_cond=controlnet_cond,
weight=controlnet.control_weight,
begin_step_percent=controlnet.begin_step_percent,
end_step_percent=controlnet.end_step_percent,
)
)
elif isinstance(model, InstantXControlNetFlux):
instantx_control_mode: torch.Tensor | None = None
if controlnet.instantx_control_mode is not None and controlnet.instantx_control_mode >= 0:
instantx_control_mode = torch.tensor(controlnet.instantx_control_mode, dtype=torch.long)
instantx_control_mode = instantx_control_mode.reshape([-1, 1])
controlnet_extensions.append(
InstantXControlNetExtension(
model=model,
controlnet_cond=controlnet_cond,
instantx_control_mode=instantx_control_mode,
weight=controlnet.control_weight,
begin_step_percent=controlnet.begin_step_percent,
end_step_percent=controlnet.end_step_percent,
)
)
else:
raise ValueError(f"Unsupported ControlNet model type: {type(model)}")
return controlnet_extensions


@@ -1,7 +1,8 @@
from abc import ABC, abstractmethod
from invokeai.app.services.board_records.board_records_common import BoardChanges, BoardRecord
from invokeai.app.services.board_records.board_records_common import BoardChanges, BoardRecord, BoardRecordOrderBy
from invokeai.app.services.shared.pagination import OffsetPaginatedResults
from invokeai.app.services.shared.sqlite.sqlite_common import SQLiteDirection
class BoardRecordStorageBase(ABC):
@@ -39,12 +40,19 @@ class BoardRecordStorageBase(ABC):
@abstractmethod
def get_many(
self, offset: int = 0, limit: int = 10, include_archived: bool = False
self,
order_by: BoardRecordOrderBy,
direction: SQLiteDirection,
offset: int = 0,
limit: int = 10,
include_archived: bool = False,
) -> OffsetPaginatedResults[BoardRecord]:
"""Gets many board records."""
pass
@abstractmethod
def get_all(self, include_archived: bool = False) -> list[BoardRecord]:
def get_all(
self, order_by: BoardRecordOrderBy, direction: SQLiteDirection, include_archived: bool = False
) -> list[BoardRecord]:
"""Gets all board records."""
pass


@@ -1,8 +1,10 @@
from datetime import datetime
from enum import Enum
from typing import Optional, Union
from pydantic import BaseModel, Field
from invokeai.app.util.metaenum import MetaEnum
from invokeai.app.util.misc import get_iso_timestamp
from invokeai.app.util.model_exclude_null import BaseModelExcludeNull
@@ -60,6 +62,13 @@ class BoardChanges(BaseModel, extra="forbid"):
archived: Optional[bool] = Field(default=None, description="Whether or not the board is archived")
class BoardRecordOrderBy(str, Enum, metaclass=MetaEnum):
"""The order by options for board records"""
CreatedAt = "created_at"
Name = "board_name"
class BoardRecordNotFoundException(Exception):
"""Raised when an board record is not found."""


@@ -8,10 +8,12 @@ from invokeai.app.services.board_records.board_records_common import (
BoardRecord,
BoardRecordDeleteException,
BoardRecordNotFoundException,
BoardRecordOrderBy,
BoardRecordSaveException,
deserialize_board_record,
)
from invokeai.app.services.shared.pagination import OffsetPaginatedResults
from invokeai.app.services.shared.sqlite.sqlite_common import SQLiteDirection
from invokeai.app.services.shared.sqlite.sqlite_database import SqliteDatabase
from invokeai.app.util.misc import uuid_string
@@ -144,7 +146,12 @@ class SqliteBoardRecordStorage(BoardRecordStorageBase):
return self.get(board_id)
def get_many(
self, offset: int = 0, limit: int = 10, include_archived: bool = False
self,
order_by: BoardRecordOrderBy,
direction: SQLiteDirection,
offset: int = 0,
limit: int = 10,
include_archived: bool = False,
) -> OffsetPaginatedResults[BoardRecord]:
try:
self._lock.acquire()
@@ -154,17 +161,16 @@ class SqliteBoardRecordStorage(BoardRecordStorageBase):
SELECT *
FROM boards
{archived_filter}
ORDER BY created_at DESC
ORDER BY {order_by} {direction}
LIMIT ? OFFSET ?;
"""
# Determine archived filter condition
if include_archived:
archived_filter = ""
else:
archived_filter = "WHERE archived = 0"
archived_filter = "" if include_archived else "WHERE archived = 0"
final_query = base_query.format(archived_filter=archived_filter)
final_query = base_query.format(
archived_filter=archived_filter, order_by=order_by.value, direction=direction.value
)
# Execute query to fetch boards
self._cursor.execute(final_query, (limit, offset))
@@ -198,23 +204,32 @@ class SqliteBoardRecordStorage(BoardRecordStorageBase):
finally:
self._lock.release()
def get_all(self, include_archived: bool = False) -> list[BoardRecord]:
def get_all(
self, order_by: BoardRecordOrderBy, direction: SQLiteDirection, include_archived: bool = False
) -> list[BoardRecord]:
try:
self._lock.acquire()
base_query = """
SELECT *
FROM boards
{archived_filter}
ORDER BY created_at DESC
"""
if include_archived:
archived_filter = ""
if order_by == BoardRecordOrderBy.Name:
base_query = """
SELECT *
FROM boards
{archived_filter}
ORDER BY LOWER(board_name) {direction}
"""
else:
archived_filter = "WHERE archived = 0"
base_query = """
SELECT *
FROM boards
{archived_filter}
ORDER BY {order_by} {direction}
"""
final_query = base_query.format(archived_filter=archived_filter)
archived_filter = "" if include_archived else "WHERE archived = 0"
final_query = base_query.format(
archived_filter=archived_filter, order_by=order_by.value, direction=direction.value
)
self._cursor.execute(final_query)
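Interpolating `order_by.value` and `direction.value` into the query text is safe here because both come from closed enums rather than user input. A minimal, self-contained sketch of the case-insensitive name ordering (the `SQLiteDirection` stand-in and its member values are assumed for illustration):

```python
import sqlite3
from enum import Enum

# Stand-in for the imported SQLiteDirection enum; member values are assumed.
class SQLiteDirection(str, Enum):
    ASC = "ASC"
    DESC = "DESC"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE boards (board_name TEXT, archived INTEGER DEFAULT 0)")
conn.executemany(
    "INSERT INTO boards (board_name) VALUES (?)",
    [("cherry",), ("apple",), ("Banana",)],
)

direction = SQLiteDirection.ASC
# Interpolating the enum value is safe: it comes from a closed set, not user input.
query = f"SELECT board_name FROM boards ORDER BY LOWER(board_name) {direction.value}"
names = [row[0] for row in conn.execute(query)]
print(names)  # ['apple', 'Banana', 'cherry']

# Without LOWER(), SQLite's default BINARY collation sorts uppercase bytes first.
plain = [row[0] for row in conn.execute("SELECT board_name FROM boards ORDER BY board_name ASC")]
print(plain)  # ['Banana', 'apple', 'cherry']
```

This is why `get_all` special-cases `BoardRecordOrderBy.Name` with `LOWER(board_name)` rather than ordering on the raw column.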


@@ -1,8 +1,9 @@
from abc import ABC, abstractmethod
from invokeai.app.services.board_records.board_records_common import BoardChanges
from invokeai.app.services.board_records.board_records_common import BoardChanges, BoardRecordOrderBy
from invokeai.app.services.boards.boards_common import BoardDTO
from invokeai.app.services.shared.pagination import OffsetPaginatedResults
from invokeai.app.services.shared.sqlite.sqlite_common import SQLiteDirection
class BoardServiceABC(ABC):
@@ -43,12 +44,19 @@ class BoardServiceABC(ABC):
@abstractmethod
def get_many(
self, offset: int = 0, limit: int = 10, include_archived: bool = False
self,
order_by: BoardRecordOrderBy,
direction: SQLiteDirection,
offset: int = 0,
limit: int = 10,
include_archived: bool = False,
) -> OffsetPaginatedResults[BoardDTO]:
"""Gets many boards."""
pass
@abstractmethod
def get_all(self, include_archived: bool = False) -> list[BoardDTO]:
def get_all(
self, order_by: BoardRecordOrderBy, direction: SQLiteDirection, include_archived: bool = False
) -> list[BoardDTO]:
"""Gets all boards."""
pass


@@ -1,8 +1,9 @@
from invokeai.app.services.board_records.board_records_common import BoardChanges
from invokeai.app.services.board_records.board_records_common import BoardChanges, BoardRecordOrderBy
from invokeai.app.services.boards.boards_base import BoardServiceABC
from invokeai.app.services.boards.boards_common import BoardDTO, board_record_to_dto
from invokeai.app.services.invoker import Invoker
from invokeai.app.services.shared.pagination import OffsetPaginatedResults
from invokeai.app.services.shared.sqlite.sqlite_common import SQLiteDirection
class BoardService(BoardServiceABC):
@@ -47,9 +48,16 @@ class BoardService(BoardServiceABC):
self.__invoker.services.board_records.delete(board_id)
def get_many(
self, offset: int = 0, limit: int = 10, include_archived: bool = False
self,
order_by: BoardRecordOrderBy,
direction: SQLiteDirection,
offset: int = 0,
limit: int = 10,
include_archived: bool = False,
) -> OffsetPaginatedResults[BoardDTO]:
board_records = self.__invoker.services.board_records.get_many(offset, limit, include_archived)
board_records = self.__invoker.services.board_records.get_many(
order_by, direction, offset, limit, include_archived
)
board_dtos = []
for r in board_records.items:
cover_image = self.__invoker.services.image_records.get_most_recent_image_for_board(r.board_id)
@@ -63,8 +71,10 @@ class BoardService(BoardServiceABC):
return OffsetPaginatedResults[BoardDTO](items=board_dtos, offset=offset, limit=limit, total=len(board_dtos))
def get_all(self, include_archived: bool = False) -> list[BoardDTO]:
board_records = self.__invoker.services.board_records.get_all(include_archived)
def get_all(
self, order_by: BoardRecordOrderBy, direction: SQLiteDirection, include_archived: bool = False
) -> list[BoardDTO]:
board_records = self.__invoker.services.board_records.get_all(order_by, direction, include_archived)
board_dtos = []
for r in board_records:
cover_image = self.__invoker.services.image_records.get_most_recent_image_for_board(r.board_id)


@@ -250,9 +250,9 @@ class InvokeAIAppConfig(BaseSettings):
)
if as_example:
file.write(
"# This is an example file with default and example settings. Use the values here as a baseline.\n\n"
)
file.write("# This is an example file with default and example settings.\n")
file.write("# You should not copy this whole file into your config.\n")
file.write("# Only add the settings you need to change to your config file.\n\n")
file.write("# Internal metadata - do not edit:\n")
file.write(yaml.dump(meta_dict, sort_keys=False))
file.write("\n")


@@ -184,7 +184,8 @@ class ModelInstallService(ModelInstallServiceBase):
) # type: ignore
if preferred_name := config.name:
preferred_name = Path(preferred_name).with_suffix(model_path.suffix)
if model_path.suffix:
preferred_name = f"{preferred_name}.{model_path.suffix}"
dest_path = (
self.app_config.models_path / info.base.value / info.type.value / (preferred_name or model_path.name)
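The motivation for this change is described in the commit message above: `Path.with_suffix` treats everything after the last period in a name as the suffix, so model names containing periods get truncated. A minimal sketch of the failure mode, using the example name from the commit message (note that `Path.suffix` already includes the leading period, so plain string concatenation is enough):

```python
from pathlib import Path

# `with_suffix` replaces everything after the last period in the final path
# component, so a model name containing a period gets truncated.
name = "IP Adapter SD1.5 Image Encoder"  # example name from the commit message
broken = Path(name).with_suffix(".safetensors")
print(broken)  # IP Adapter SD1.safetensors -- ".5 Image Encoder" was treated as a suffix

# Plain string concatenation preserves the full name; `Path.suffix` already
# includes the leading period (e.g. ".safetensors").
model_path = Path("some-model.safetensors")
fixed = f"{name}{model_path.suffix}" if model_path.suffix else name
print(fixed)  # IP Adapter SD1.5 Image Encoder.safetensors
```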


@@ -39,11 +39,11 @@ class WorkflowRecordsStorageBase(ABC):
@abstractmethod
def get_many(
self,
page: int,
per_page: int,
order_by: WorkflowRecordOrderBy,
direction: SQLiteDirection,
category: WorkflowCategory,
page: int,
per_page: Optional[int],
query: Optional[str],
) -> PaginatedResults[WorkflowRecordListItemDTO]:
"""Gets many workflows."""


@@ -125,11 +125,11 @@ class SqliteWorkflowRecordsStorage(WorkflowRecordsStorageBase):
def get_many(
self,
page: int,
per_page: int,
order_by: WorkflowRecordOrderBy,
direction: SQLiteDirection,
category: WorkflowCategory,
page: int = 0,
per_page: Optional[int] = None,
query: Optional[str] = None,
) -> PaginatedResults[WorkflowRecordListItemDTO]:
try:
@@ -138,7 +138,7 @@ class SqliteWorkflowRecordsStorage(WorkflowRecordsStorageBase):
assert order_by in WorkflowRecordOrderBy
assert direction in SQLiteDirection
assert category in WorkflowCategory
count_query = "SELECT COUNT(*) FROM workflow_library WHERE category = ?"
count_query = "SELECT COUNT(*) FROM workflow_library"
main_query = """
SELECT
workflow_id,
@@ -153,6 +153,7 @@ class SqliteWorkflowRecordsStorage(WorkflowRecordsStorageBase):
"""
main_params: list[int | str] = [category.value]
count_params: list[int | str] = [category.value]
stripped_query = query.strip() if query else None
if stripped_query:
wildcard_query = "%" + stripped_query + "%"
@@ -161,20 +162,28 @@ class SqliteWorkflowRecordsStorage(WorkflowRecordsStorageBase):
main_params.extend([wildcard_query, wildcard_query])
count_params.extend([wildcard_query, wildcard_query])
main_query += f" ORDER BY {order_by.value} {direction.value} LIMIT ? OFFSET ?;"
main_params.extend([per_page, page * per_page])
main_query += f" ORDER BY {order_by.value} {direction.value}"
if per_page:
main_query += " LIMIT ? OFFSET ?"
main_params.extend([per_page, page * per_page])
self._cursor.execute(main_query, main_params)
rows = self._cursor.fetchall()
workflows = [WorkflowRecordListItemDTOValidator.validate_python(dict(row)) for row in rows]
self._cursor.execute(count_query, count_params)
total = self._cursor.fetchone()[0]
pages = total // per_page + (total % per_page > 0)
if per_page:
pages = total // per_page + (total % per_page > 0)
else:
pages = 1 # If no pagination, there is only one page
return PaginatedResults(
items=workflows,
page=page,
per_page=per_page,
per_page=per_page if per_page else total,
pages=pages,
total=total,
)


@@ -0,0 +1,58 @@
from dataclasses import dataclass
import torch
@dataclass
class ControlNetFluxOutput:
single_block_residuals: list[torch.Tensor] | None
double_block_residuals: list[torch.Tensor] | None
def apply_weight(self, weight: float):
if self.single_block_residuals is not None:
for i in range(len(self.single_block_residuals)):
self.single_block_residuals[i] = self.single_block_residuals[i] * weight
if self.double_block_residuals is not None:
for i in range(len(self.double_block_residuals)):
self.double_block_residuals[i] = self.double_block_residuals[i] * weight
def add_tensor_lists_elementwise(
list1: list[torch.Tensor] | None, list2: list[torch.Tensor] | None
) -> list[torch.Tensor] | None:
"""Add two tensor lists elementwise that could be None."""
if list1 is None and list2 is None:
return None
if list1 is None:
return list2
if list2 is None:
return list1
new_list: list[torch.Tensor] = []
for list1_tensor, list2_tensor in zip(list1, list2, strict=True):
new_list.append(list1_tensor + list2_tensor)
return new_list
def add_controlnet_flux_outputs(
controlnet_output_1: ControlNetFluxOutput, controlnet_output_2: ControlNetFluxOutput
) -> ControlNetFluxOutput:
return ControlNetFluxOutput(
single_block_residuals=add_tensor_lists_elementwise(
controlnet_output_1.single_block_residuals, controlnet_output_2.single_block_residuals
),
double_block_residuals=add_tensor_lists_elementwise(
controlnet_output_1.double_block_residuals, controlnet_output_2.double_block_residuals
),
)
def sum_controlnet_flux_outputs(
controlnet_outputs: list[ControlNetFluxOutput],
) -> ControlNetFluxOutput:
controlnet_output_sum = ControlNetFluxOutput(single_block_residuals=None, double_block_residuals=None)
for controlnet_output in controlnet_outputs:
controlnet_output_sum = add_controlnet_flux_outputs(controlnet_output_sum, controlnet_output)
return controlnet_output_sum


@@ -0,0 +1,180 @@
# This file was initially copied from:
# https://github.com/huggingface/diffusers/blob/99f608218caa069a2f16dcf9efab46959b15aec0/src/diffusers/models/controlnet_flux.py
from dataclasses import dataclass
import torch
import torch.nn as nn
from invokeai.backend.flux.controlnet.zero_module import zero_module
from invokeai.backend.flux.model import FluxParams
from invokeai.backend.flux.modules.layers import (
DoubleStreamBlock,
EmbedND,
MLPEmbedder,
SingleStreamBlock,
timestep_embedding,
)
@dataclass
class InstantXControlNetFluxOutput:
controlnet_block_samples: list[torch.Tensor] | None
controlnet_single_block_samples: list[torch.Tensor] | None
# NOTE(ryand): Mapping between diffusers FLUX transformer params and BFL FLUX transformer params:
# - Diffusers: BFL
# - in_channels: in_channels
# - num_layers: depth
# - num_single_layers: depth_single_blocks
# - attention_head_dim: hidden_size // num_heads
# - num_attention_heads: num_heads
# - joint_attention_dim: context_in_dim
# - pooled_projection_dim: vec_in_dim
# - guidance_embeds: guidance_embed
# - axes_dims_rope: axes_dim
class InstantXControlNetFlux(torch.nn.Module):
def __init__(self, params: FluxParams, num_control_modes: int | None = None):
"""
Args:
params (FluxParams): The parameters for the FLUX model.
num_control_modes (int | None, optional): The number of controlnet modes. If non-None, then the model is a
'union controlnet' model and expects a mode conditioning input at runtime.
"""
super().__init__()
# The following modules mirror the base FLUX transformer model.
# -------------------------------------------------------------
self.params = params
self.in_channels = params.in_channels
self.out_channels = self.in_channels
if params.hidden_size % params.num_heads != 0:
raise ValueError(f"Hidden size {params.hidden_size} must be divisible by num_heads {params.num_heads}")
pe_dim = params.hidden_size // params.num_heads
if sum(params.axes_dim) != pe_dim:
raise ValueError(f"Got {params.axes_dim} but expected positional dim {pe_dim}")
self.hidden_size = params.hidden_size
self.num_heads = params.num_heads
self.pe_embedder = EmbedND(dim=pe_dim, theta=params.theta, axes_dim=params.axes_dim)
self.img_in = nn.Linear(self.in_channels, self.hidden_size, bias=True)
self.time_in = MLPEmbedder(in_dim=256, hidden_dim=self.hidden_size)
self.vector_in = MLPEmbedder(params.vec_in_dim, self.hidden_size)
self.guidance_in = (
MLPEmbedder(in_dim=256, hidden_dim=self.hidden_size) if params.guidance_embed else nn.Identity()
)
self.txt_in = nn.Linear(params.context_in_dim, self.hidden_size)
self.double_blocks = nn.ModuleList(
[
DoubleStreamBlock(
self.hidden_size,
self.num_heads,
mlp_ratio=params.mlp_ratio,
qkv_bias=params.qkv_bias,
)
for _ in range(params.depth)
]
)
self.single_blocks = nn.ModuleList(
[
SingleStreamBlock(self.hidden_size, self.num_heads, mlp_ratio=params.mlp_ratio)
for _ in range(params.depth_single_blocks)
]
)
# The following modules are specific to the ControlNet model.
# -----------------------------------------------------------
self.controlnet_blocks = nn.ModuleList([])
for _ in range(len(self.double_blocks)):
self.controlnet_blocks.append(zero_module(nn.Linear(self.hidden_size, self.hidden_size)))
self.controlnet_single_blocks = nn.ModuleList([])
for _ in range(len(self.single_blocks)):
self.controlnet_single_blocks.append(zero_module(nn.Linear(self.hidden_size, self.hidden_size)))
self.is_union = False
if num_control_modes is not None:
self.is_union = True
self.controlnet_mode_embedder = nn.Embedding(num_control_modes, self.hidden_size)
self.controlnet_x_embedder = zero_module(torch.nn.Linear(self.in_channels, self.hidden_size))
def forward(
self,
controlnet_cond: torch.Tensor,
controlnet_mode: torch.Tensor | None,
img: torch.Tensor,
img_ids: torch.Tensor,
txt: torch.Tensor,
txt_ids: torch.Tensor,
timesteps: torch.Tensor,
y: torch.Tensor,
guidance: torch.Tensor | None = None,
) -> InstantXControlNetFluxOutput:
if img.ndim != 3 or txt.ndim != 3:
raise ValueError("Input img and txt tensors must have 3 dimensions.")
img = self.img_in(img)
# Add controlnet_cond embedding.
img = img + self.controlnet_x_embedder(controlnet_cond)
vec = self.time_in(timestep_embedding(timesteps, 256))
if self.params.guidance_embed:
if guidance is None:
raise ValueError("Didn't get guidance strength for guidance distilled model.")
vec = vec + self.guidance_in(timestep_embedding(guidance, 256))
vec = vec + self.vector_in(y)
txt = self.txt_in(txt)
# If this is a union ControlNet, then concat the control mode embedding to the T5 text embedding.
if self.is_union:
if controlnet_mode is None:
# We allow users to enter 'None' as the controlnet_mode if they don't want to worry about this input.
# We've chosen to use a zero-embedding in this case.
zero_index = torch.zeros([1, 1], dtype=torch.long, device=txt.device)
controlnet_mode_emb = torch.zeros_like(self.controlnet_mode_embedder(zero_index))
else:
controlnet_mode_emb = self.controlnet_mode_embedder(controlnet_mode)
txt = torch.cat([controlnet_mode_emb, txt], dim=1)
txt_ids = torch.cat([txt_ids[:, :1, :], txt_ids], dim=1)
else:
assert controlnet_mode is None
ids = torch.cat((txt_ids, img_ids), dim=1)
pe = self.pe_embedder(ids)
double_block_samples: list[torch.Tensor] = []
for block in self.double_blocks:
img, txt = block(img=img, txt=txt, vec=vec, pe=pe)
double_block_samples.append(img)
img = torch.cat((txt, img), 1)
single_block_samples: list[torch.Tensor] = []
for block in self.single_blocks:
img = block(img, vec=vec, pe=pe)
single_block_samples.append(img[:, txt.shape[1] :])
# ControlNet Block
controlnet_double_block_samples: list[torch.Tensor] = []
for double_block_sample, controlnet_block in zip(double_block_samples, self.controlnet_blocks, strict=True):
double_block_sample = controlnet_block(double_block_sample)
controlnet_double_block_samples.append(double_block_sample)
controlnet_single_block_samples: list[torch.Tensor] = []
for single_block_sample, controlnet_block in zip(
single_block_samples, self.controlnet_single_blocks, strict=True
):
single_block_sample = controlnet_block(single_block_sample)
controlnet_single_block_samples.append(single_block_sample)
return InstantXControlNetFluxOutput(
controlnet_block_samples=controlnet_double_block_samples or None,
controlnet_single_block_samples=controlnet_single_block_samples or None,
)


@@ -0,0 +1,295 @@
from typing import Any, Dict
import torch
from invokeai.backend.flux.model import FluxParams
def is_state_dict_xlabs_controlnet(sd: Dict[str, Any]) -> bool:
"""Is the state dict for an XLabs ControlNet model?
This is intended to be a reasonably high-precision detector, but it is not guaranteed to have perfect precision.
"""
# If all of the expected keys are present, then this is very likely an XLabs ControlNet model.
expected_keys = {
"controlnet_blocks.0.bias",
"controlnet_blocks.0.weight",
"input_hint_block.0.bias",
"input_hint_block.0.weight",
"pos_embed_input.bias",
"pos_embed_input.weight",
}
if expected_keys.issubset(sd.keys()):
return True
return False
def is_state_dict_instantx_controlnet(sd: Dict[str, Any]) -> bool:
"""Is the state dict for an InstantX ControlNet model?
This is intended to be a reasonably high-precision detector, but it is not guaranteed to have perfect precision.
"""
# If all of the expected keys are present, then this is very likely an InstantX ControlNet model.
expected_keys = {
"controlnet_blocks.0.bias",
"controlnet_blocks.0.weight",
"controlnet_x_embedder.bias",
"controlnet_x_embedder.weight",
}
if expected_keys.issubset(sd.keys()):
return True
return False
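Both detectors rely only on key membership, not tensor values, so they can be exercised with dummy state dicts. A condensed sketch of the subset check (the function name here is a stand-in for `is_state_dict_xlabs_controlnet` above):

```python
def looks_like_xlabs_controlnet(sd: dict) -> bool:
    # Same key-subset test as is_state_dict_xlabs_controlnet above.
    expected_keys = {
        "controlnet_blocks.0.bias",
        "controlnet_blocks.0.weight",
        "input_hint_block.0.bias",
        "input_hint_block.0.weight",
        "pos_embed_input.bias",
        "pos_embed_input.weight",
    }
    return expected_keys.issubset(sd.keys())

# Values are irrelevant to the check, so placeholders stand in for tensors.
xlabs_sd = {k: None for k in [
    "controlnet_blocks.0.bias", "controlnet_blocks.0.weight",
    "input_hint_block.0.bias", "input_hint_block.0.weight",
    "pos_embed_input.bias", "pos_embed_input.weight",
    "double_blocks.0.img_attn.qkv.weight",  # extra keys don't matter
]}
print(looks_like_xlabs_controlnet(xlabs_sd))  # True
print(looks_like_xlabs_controlnet({"pos_embed_input.bias": None}))  # False
```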
def _fuse_weights(*t: torch.Tensor) -> torch.Tensor:
"""Fuse weights along dimension 0.
Used to fuse q, k, v attention weights into a single qkv tensor when converting from diffusers to BFL format.
"""
# TODO(ryand): Double check dim=0 is correct.
return torch.cat(t, dim=0)
def _convert_flux_double_block_sd_from_diffusers_to_bfl_format(
sd: Dict[str, torch.Tensor], double_block_index: int
) -> Dict[str, torch.Tensor]:
"""Convert the state dict for a double block from diffusers format to BFL format."""
to_prefix = f"double_blocks.{double_block_index}"
from_prefix = f"transformer_blocks.{double_block_index}"
new_sd: dict[str, torch.Tensor] = {}
# Check one key to determine if this block exists.
if f"{from_prefix}.attn.add_q_proj.bias" not in sd:
return new_sd
# txt_attn.qkv
new_sd[f"{to_prefix}.txt_attn.qkv.bias"] = _fuse_weights(
sd.pop(f"{from_prefix}.attn.add_q_proj.bias"),
sd.pop(f"{from_prefix}.attn.add_k_proj.bias"),
sd.pop(f"{from_prefix}.attn.add_v_proj.bias"),
)
new_sd[f"{to_prefix}.txt_attn.qkv.weight"] = _fuse_weights(
sd.pop(f"{from_prefix}.attn.add_q_proj.weight"),
sd.pop(f"{from_prefix}.attn.add_k_proj.weight"),
sd.pop(f"{from_prefix}.attn.add_v_proj.weight"),
)
# img_attn.qkv
new_sd[f"{to_prefix}.img_attn.qkv.bias"] = _fuse_weights(
sd.pop(f"{from_prefix}.attn.to_q.bias"),
sd.pop(f"{from_prefix}.attn.to_k.bias"),
sd.pop(f"{from_prefix}.attn.to_v.bias"),
)
new_sd[f"{to_prefix}.img_attn.qkv.weight"] = _fuse_weights(
sd.pop(f"{from_prefix}.attn.to_q.weight"),
sd.pop(f"{from_prefix}.attn.to_k.weight"),
sd.pop(f"{from_prefix}.attn.to_v.weight"),
)
# Handle basic 1-to-1 key conversions.
key_map = {
# img_attn
"attn.norm_k.weight": "img_attn.norm.key_norm.scale",
"attn.norm_q.weight": "img_attn.norm.query_norm.scale",
"attn.to_out.0.weight": "img_attn.proj.weight",
"attn.to_out.0.bias": "img_attn.proj.bias",
# img_mlp
"ff.net.0.proj.weight": "img_mlp.0.weight",
"ff.net.0.proj.bias": "img_mlp.0.bias",
"ff.net.2.weight": "img_mlp.2.weight",
"ff.net.2.bias": "img_mlp.2.bias",
# img_mod
"norm1.linear.weight": "img_mod.lin.weight",
"norm1.linear.bias": "img_mod.lin.bias",
# txt_attn
"attn.norm_added_q.weight": "txt_attn.norm.query_norm.scale",
"attn.norm_added_k.weight": "txt_attn.norm.key_norm.scale",
"attn.to_add_out.weight": "txt_attn.proj.weight",
"attn.to_add_out.bias": "txt_attn.proj.bias",
# txt_mlp
"ff_context.net.0.proj.weight": "txt_mlp.0.weight",
"ff_context.net.0.proj.bias": "txt_mlp.0.bias",
"ff_context.net.2.weight": "txt_mlp.2.weight",
"ff_context.net.2.bias": "txt_mlp.2.bias",
# txt_mod
"norm1_context.linear.weight": "txt_mod.lin.weight",
"norm1_context.linear.bias": "txt_mod.lin.bias",
}
for from_key, to_key in key_map.items():
new_sd[f"{to_prefix}.{to_key}"] = sd.pop(f"{from_prefix}.{from_key}")
return new_sd
def _convert_flux_single_block_sd_from_diffusers_to_bfl_format(
sd: Dict[str, torch.Tensor], single_block_index: int
) -> Dict[str, torch.Tensor]:
"""Convert the state dict for a single block from diffusers format to BFL format."""
to_prefix = f"single_blocks.{single_block_index}"
from_prefix = f"single_transformer_blocks.{single_block_index}"
new_sd: dict[str, torch.Tensor] = {}
# Check one key to determine if this block exists.
if f"{from_prefix}.attn.to_q.bias" not in sd:
return new_sd
# linear1 (qkv)
new_sd[f"{to_prefix}.linear1.bias"] = _fuse_weights(
sd.pop(f"{from_prefix}.attn.to_q.bias"),
sd.pop(f"{from_prefix}.attn.to_k.bias"),
sd.pop(f"{from_prefix}.attn.to_v.bias"),
sd.pop(f"{from_prefix}.proj_mlp.bias"),
)
new_sd[f"{to_prefix}.linear1.weight"] = _fuse_weights(
sd.pop(f"{from_prefix}.attn.to_q.weight"),
sd.pop(f"{from_prefix}.attn.to_k.weight"),
sd.pop(f"{from_prefix}.attn.to_v.weight"),
sd.pop(f"{from_prefix}.proj_mlp.weight"),
)
# Handle basic 1-to-1 key conversions.
key_map = {
# linear2
"proj_out.weight": "linear2.weight",
"proj_out.bias": "linear2.bias",
# modulation
"norm.linear.weight": "modulation.lin.weight",
"norm.linear.bias": "modulation.lin.bias",
# norm
"attn.norm_k.weight": "norm.key_norm.scale",
"attn.norm_q.weight": "norm.query_norm.scale",
}
for from_key, to_key in key_map.items():
new_sd[f"{to_prefix}.{to_key}"] = sd.pop(f"{from_prefix}.{from_key}")
return new_sd
def convert_diffusers_instantx_state_dict_to_bfl_format(sd: Dict[str, torch.Tensor]) -> Dict[str, torch.Tensor]:
"""Convert an InstantX ControlNet state dict to the format that can be loaded by our internal
InstantXControlNetFlux model.
The original InstantX ControlNet model was developed to be used in diffusers. We have ported the original
implementation to InstantXControlNetFlux to make it compatible with BFL-style models. This function converts the
original state dict to the format expected by InstantXControlNetFlux.
"""
# Shallow copy sd so that we can pop keys from it without modifying the original.
sd = sd.copy()
new_sd: dict[str, torch.Tensor] = {}
# Handle basic 1-to-1 key conversions.
basic_key_map = {
# Base model keys.
# ----------------
# txt_in keys.
"context_embedder.bias": "txt_in.bias",
"context_embedder.weight": "txt_in.weight",
# guidance_in MLPEmbedder keys.
"time_text_embed.guidance_embedder.linear_1.bias": "guidance_in.in_layer.bias",
"time_text_embed.guidance_embedder.linear_1.weight": "guidance_in.in_layer.weight",
"time_text_embed.guidance_embedder.linear_2.bias": "guidance_in.out_layer.bias",
"time_text_embed.guidance_embedder.linear_2.weight": "guidance_in.out_layer.weight",
# vector_in MLPEmbedder keys.
"time_text_embed.text_embedder.linear_1.bias": "vector_in.in_layer.bias",
"time_text_embed.text_embedder.linear_1.weight": "vector_in.in_layer.weight",
"time_text_embed.text_embedder.linear_2.bias": "vector_in.out_layer.bias",
"time_text_embed.text_embedder.linear_2.weight": "vector_in.out_layer.weight",
# time_in MLPEmbedder keys.
"time_text_embed.timestep_embedder.linear_1.bias": "time_in.in_layer.bias",
"time_text_embed.timestep_embedder.linear_1.weight": "time_in.in_layer.weight",
"time_text_embed.timestep_embedder.linear_2.bias": "time_in.out_layer.bias",
"time_text_embed.timestep_embedder.linear_2.weight": "time_in.out_layer.weight",
# img_in keys.
"x_embedder.bias": "img_in.bias",
"x_embedder.weight": "img_in.weight",
}
for old_key, new_key in basic_key_map.items():
v = sd.pop(old_key, None)
if v is not None:
new_sd[new_key] = v
# Handle the double_blocks.
block_index = 0
while True:
converted_double_block_sd = _convert_flux_double_block_sd_from_diffusers_to_bfl_format(sd, block_index)
if len(converted_double_block_sd) == 0:
break
new_sd.update(converted_double_block_sd)
block_index += 1
# Handle the single_blocks.
block_index = 0
while True:
converted_single_block_sd = _convert_flux_single_block_sd_from_diffusers_to_bfl_format(sd, block_index)
if len(converted_single_block_sd) == 0:
break
new_sd.update(converted_single_block_sd)
block_index += 1
# Transfer controlnet keys as-is.
for k in list(sd.keys()):
if k.startswith("controlnet_"):
new_sd[k] = sd.pop(k)
# Assert that all keys have been handled.
assert len(sd) == 0
return new_sd
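The two `while True` loops above walk block indices until a probe key is missing, popping handled keys as they go. A plain-dict sketch of the pattern (key names simplified for illustration):

```python
def convert_blocks(sd: dict) -> dict:
    """Pop per-block keys from sd, stopping at the first absent index."""
    new_sd = {}
    index = 0
    while True:
        probe = f"transformer_blocks.{index}.weight"  # simplified probe key
        if probe not in sd:
            break  # no block at this index -> all blocks handled
        new_sd[f"double_blocks.{index}.weight"] = sd.pop(probe)
        index += 1
    return new_sd

sd = {"transformer_blocks.0.weight": "w0", "transformer_blocks.1.weight": "w1", "other": "x"}
converted = convert_blocks(sd)
print(sorted(converted))  # ['double_blocks.0.weight', 'double_blocks.1.weight']
print(sd)                 # {'other': 'x'} -- handled keys were popped, leftovers remain
```

Popping as it converts is what makes the final `assert len(sd) == 0` meaningful: any leftover key signals an unhandled parameter.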
def infer_flux_params_from_state_dict(sd: Dict[str, torch.Tensor]) -> FluxParams:
"""Infer the FluxParams from the shape of a FLUX state dict. When a model is distributed in diffusers format, this
information is all contained in the config.json file that accompanies the model. However, being able to infer the
params from the state dict enables us to load models (e.g. an InstantX ControlNet) from a single weight file.
"""
hidden_size = sd["img_in.weight"].shape[0]
mlp_hidden_dim = sd["double_blocks.0.img_mlp.0.weight"].shape[0]
# mlp_ratio is a float, but we treat it as an int here to avoid having to think about possible float precision
# issues. In practice, mlp_ratio is usually 4.
mlp_ratio = mlp_hidden_dim // hidden_size
head_dim = sd["double_blocks.0.img_attn.norm.query_norm.scale"].shape[0]
num_heads = hidden_size // head_dim
# Count the number of double blocks.
double_block_index = 0
while f"double_blocks.{double_block_index}.img_attn.qkv.weight" in sd:
double_block_index += 1
# Count the number of single blocks.
single_block_index = 0
while f"single_blocks.{single_block_index}.linear1.weight" in sd:
single_block_index += 1
return FluxParams(
in_channels=sd["img_in.weight"].shape[1],
vec_in_dim=sd["vector_in.in_layer.weight"].shape[1],
context_in_dim=sd["txt_in.weight"].shape[1],
hidden_size=hidden_size,
mlp_ratio=mlp_ratio,
num_heads=num_heads,
depth=double_block_index,
depth_single_blocks=single_block_index,
# axes_dim cannot be inferred from the state dict. The hard-coded value is correct for dev/schnell models.
axes_dim=[16, 56, 56],
# theta cannot be inferred from the state dict. The hard-coded value is correct for dev/schnell models.
theta=10_000,
qkv_bias="double_blocks.0.img_attn.qkv.bias" in sd,
guidance_embed="guidance_in.in_layer.weight" in sd,
)
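The two counting loops above probe consecutive block indices until a key is missing; a sketch of that pattern (a hypothetical `count_blocks` helper, with a set of key names standing in for the state dict):

```python
def count_blocks(keys: set, pattern: str) -> int:
    """Count consecutive block indices (0, 1, 2, ...) present in a set of key names (sketch)."""
    index = 0
    while pattern.format(index) in keys:
        index += 1
    return index

# e.g. a dev/schnell-sized model with 19 double blocks:
keys = {f"double_blocks.{i}.img_attn.qkv.weight" for i in range(19)}
depth = count_blocks(keys, "double_blocks.{}.img_attn.qkv.weight")
```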
def infer_instantx_num_control_modes_from_state_dict(sd: Dict[str, torch.Tensor]) -> int | None:
"""Infer the number of ControlNet Union modes from the shape of a InstantX ControlNet state dict.
Returns None if the model is not a ControlNet Union model. Otherwise returns the number of modes.
"""
mode_embedder_key = "controlnet_mode_embedder.weight"
if mode_embedder_key not in sd:
return None
return sd[mode_embedder_key].shape[0]

View File

@@ -2,22 +2,22 @@
# https://github.com/XLabs-AI/x-flux/blob/47495425dbed499be1e8e5a6e52628b07349cba2/src/flux/controlnet.py
from dataclasses import dataclass
import torch
from einops import rearrange
from torch import Tensor, nn
from invokeai.backend.flux.controlnet.zero_module import zero_module
from invokeai.backend.flux.model import FluxParams
from invokeai.backend.flux.modules.layers import DoubleStreamBlock, EmbedND, MLPEmbedder, timestep_embedding
def _zero_module(module: torch.nn.Module) -> torch.nn.Module:
"""Initialize the parameters of a module to zero."""
for p in module.parameters():
nn.init.zeros_(p)
return module
@dataclass
class XLabsControlNetFluxOutput:
controlnet_double_block_residuals: list[torch.Tensor] | None
class ControlNetFlux(nn.Module):
class XLabsControlNetFlux(torch.nn.Module):
"""A ControlNet model for FLUX.
The architecture is very similar to the base FLUX model, with the following differences:
@@ -40,15 +40,15 @@ class ControlNetFlux(nn.Module):
self.hidden_size = params.hidden_size
self.num_heads = params.num_heads
self.pe_embedder = EmbedND(dim=pe_dim, theta=params.theta, axes_dim=params.axes_dim)
self.img_in = nn.Linear(self.in_channels, self.hidden_size, bias=True)
self.img_in = torch.nn.Linear(self.in_channels, self.hidden_size, bias=True)
self.time_in = MLPEmbedder(in_dim=256, hidden_dim=self.hidden_size)
self.vector_in = MLPEmbedder(params.vec_in_dim, self.hidden_size)
self.guidance_in = (
MLPEmbedder(in_dim=256, hidden_dim=self.hidden_size) if params.guidance_embed else nn.Identity()
MLPEmbedder(in_dim=256, hidden_dim=self.hidden_size) if params.guidance_embed else torch.nn.Identity()
)
self.txt_in = nn.Linear(params.context_in_dim, self.hidden_size)
self.txt_in = torch.nn.Linear(params.context_in_dim, self.hidden_size)
self.double_blocks = nn.ModuleList(
self.double_blocks = torch.nn.ModuleList(
[
DoubleStreamBlock(
self.hidden_size,
@@ -61,41 +61,41 @@ class ControlNetFlux(nn.Module):
)
# Add ControlNet blocks.
self.controlnet_blocks = nn.ModuleList([])
self.controlnet_blocks = torch.nn.ModuleList([])
for _ in range(controlnet_depth):
controlnet_block = nn.Linear(self.hidden_size, self.hidden_size)
controlnet_block = _zero_module(controlnet_block)
controlnet_block = torch.nn.Linear(self.hidden_size, self.hidden_size)
controlnet_block = zero_module(controlnet_block)
self.controlnet_blocks.append(controlnet_block)
self.pos_embed_input = nn.Linear(self.in_channels, self.hidden_size, bias=True)
self.input_hint_block = nn.Sequential(
nn.Conv2d(3, 16, 3, padding=1),
nn.SiLU(),
nn.Conv2d(16, 16, 3, padding=1),
nn.SiLU(),
nn.Conv2d(16, 16, 3, padding=1, stride=2),
nn.SiLU(),
nn.Conv2d(16, 16, 3, padding=1),
nn.SiLU(),
nn.Conv2d(16, 16, 3, padding=1, stride=2),
nn.SiLU(),
nn.Conv2d(16, 16, 3, padding=1),
nn.SiLU(),
nn.Conv2d(16, 16, 3, padding=1, stride=2),
nn.SiLU(),
_zero_module(nn.Conv2d(16, 16, 3, padding=1)),
self.pos_embed_input = torch.nn.Linear(self.in_channels, self.hidden_size, bias=True)
self.input_hint_block = torch.nn.Sequential(
torch.nn.Conv2d(3, 16, 3, padding=1),
torch.nn.SiLU(),
torch.nn.Conv2d(16, 16, 3, padding=1),
torch.nn.SiLU(),
torch.nn.Conv2d(16, 16, 3, padding=1, stride=2),
torch.nn.SiLU(),
torch.nn.Conv2d(16, 16, 3, padding=1),
torch.nn.SiLU(),
torch.nn.Conv2d(16, 16, 3, padding=1, stride=2),
torch.nn.SiLU(),
torch.nn.Conv2d(16, 16, 3, padding=1),
torch.nn.SiLU(),
torch.nn.Conv2d(16, 16, 3, padding=1, stride=2),
torch.nn.SiLU(),
zero_module(torch.nn.Conv2d(16, 16, 3, padding=1)),
)
def forward(
self,
img: Tensor,
img_ids: Tensor,
controlnet_cond: Tensor,
txt: Tensor,
txt_ids: Tensor,
timesteps: Tensor,
y: Tensor,
guidance: Tensor | None = None,
) -> list[Tensor]:
img: torch.Tensor,
img_ids: torch.Tensor,
controlnet_cond: torch.Tensor,
txt: torch.Tensor,
txt_ids: torch.Tensor,
timesteps: torch.Tensor,
y: torch.Tensor,
guidance: torch.Tensor | None = None,
) -> XLabsControlNetFluxOutput:
if img.ndim != 3 or txt.ndim != 3:
raise ValueError("Input img and txt tensors must have 3 dimensions.")
@@ -127,4 +127,4 @@ class ControlNetFlux(nn.Module):
block_res_sample = controlnet_block(block_res_sample)
controlnet_block_res_samples.append(block_res_sample)
return controlnet_block_res_samples
return XLabsControlNetFluxOutput(controlnet_double_block_residuals=controlnet_block_res_samples)

View File

@@ -0,0 +1,12 @@
from typing import TypeVar
import torch
T = TypeVar("T", bound=torch.nn.Module)
def zero_module(module: T) -> T:
"""Initialize the parameters of a module to zero."""
for p in module.parameters():
torch.nn.init.zeros_(p)
return module

View File

@@ -1,103 +0,0 @@
import math
from typing import List, Union
import torch
from PIL.Image import Image
from invokeai.app.invocations.constants import LATENT_SCALE_FACTOR
from invokeai.app.util.controlnet_utils import CONTROLNET_MODE_VALUES, CONTROLNET_RESIZE_VALUES, prepare_control_image
from invokeai.backend.flux.controlnet.controlnet_flux import ControlNetFlux
class ControlNetExtension:
def __init__(
self,
model: ControlNetFlux,
controlnet_cond: torch.Tensor,
weight: Union[float, List[float]],
begin_step_percent: float,
end_step_percent: float,
):
self._model = model
# _controlnet_cond is the control image passed to the ControlNet model.
# Pixel values are in the range [-1, 1]. Shape: (batch_size, 3, height, width).
self._controlnet_cond = controlnet_cond
self._weight = weight
self._begin_step_percent = begin_step_percent
self._end_step_percent = end_step_percent
@classmethod
def from_controlnet_image(
cls,
model: ControlNetFlux,
controlnet_image: Image,
latent_height: int,
latent_width: int,
dtype: torch.dtype,
device: torch.device,
control_mode: CONTROLNET_MODE_VALUES,
resize_mode: CONTROLNET_RESIZE_VALUES,
weight: Union[float, List[float]],
begin_step_percent: float,
end_step_percent: float,
):
image_height = latent_height * LATENT_SCALE_FACTOR
image_width = latent_width * LATENT_SCALE_FACTOR
controlnet_cond = prepare_control_image(
image=controlnet_image,
do_classifier_free_guidance=False,
width=image_width,
height=image_height,
device=device,
dtype=dtype,
control_mode=control_mode,
resize_mode=resize_mode,
)
# Map pixel values from [0, 1] to [-1, 1].
controlnet_cond = controlnet_cond * 2 - 1
return cls(
model=model,
controlnet_cond=controlnet_cond,
weight=weight,
begin_step_percent=begin_step_percent,
end_step_percent=end_step_percent,
)
def run_controlnet(
self,
timestep_index: int,
total_num_timesteps: int,
img: torch.Tensor,
img_ids: torch.Tensor,
txt: torch.Tensor,
txt_ids: torch.Tensor,
y: torch.Tensor,
timesteps: torch.Tensor,
guidance: torch.Tensor | None,
) -> list[torch.Tensor] | None:
first_step = math.floor(self._begin_step_percent * total_num_timesteps)
last_step = math.ceil(self._end_step_percent * total_num_timesteps)
if timestep_index < first_step or timestep_index > last_step:
return
weight = self._weight
controlnet_block_res_samples = self._model(
img=img,
img_ids=img_ids,
controlnet_cond=self._controlnet_cond,
txt=txt,
txt_ids=txt_ids,
timesteps=timesteps,
y=y,
guidance=guidance,
)
# Apply weight to the residuals.
for block_res_sample in controlnet_block_res_samples:
block_res_sample *= weight
return controlnet_block_res_samples

View File

@@ -3,8 +3,10 @@ from typing import Callable
import torch
from tqdm import tqdm
from invokeai.backend.flux.controlnet_extension import ControlNetExtension
from invokeai.backend.flux.inpaint_extension import InpaintExtension
from invokeai.backend.flux.controlnet.controlnet_flux_output import ControlNetFluxOutput, sum_controlnet_flux_outputs
from invokeai.backend.flux.extensions.inpaint_extension import InpaintExtension
from invokeai.backend.flux.extensions.instantx_controlnet_extension import InstantXControlNetExtension
from invokeai.backend.flux.extensions.xlabs_controlnet_extension import XLabsControlNetExtension
from invokeai.backend.flux.model import Flux
from invokeai.backend.stable_diffusion.diffusers_pipeline import PipelineIntermediateState
@@ -22,7 +24,7 @@ def denoise(
step_callback: Callable[[PipelineIntermediateState], None],
guidance: float,
inpaint_extension: InpaintExtension | None,
controlnet_extensions: list[ControlNetExtension] | None,
controlnet_extensions: list[XLabsControlNetExtension | InstantXControlNetExtension],
):
# step 0 is the initial state
total_steps = len(timesteps) - 1
@@ -42,10 +44,9 @@ def denoise(
t_vec = torch.full((img.shape[0],), t_curr, dtype=img.dtype, device=img.device)
# Run ControlNet models.
# controlnet_block_residuals[i][j] is the residual of the j-th block of the i-th ControlNet model.
controlnet_block_residuals: list[list[torch.Tensor] | None] = []
for controlnet_extension in controlnet_extensions or []:
controlnet_block_residuals.append(
controlnet_residuals: list[ControlNetFluxOutput] = []
for controlnet_extension in controlnet_extensions:
controlnet_residuals.append(
controlnet_extension.run_controlnet(
timestep_index=step - 1,
total_num_timesteps=total_steps,
@@ -59,6 +60,12 @@ def denoise(
)
)
# Merge the ControlNet residuals from multiple ControlNets.
# TODO(ryand): We may want to calculate the sum just-in-time to keep peak memory low. Keep in mind that the
# controlnet_residuals data structure is efficient in that it likely contains multiple references to the same
# tensors. Calculating the sum materializes each tensor into its own instance.
merged_controlnet_residuals = sum_controlnet_flux_outputs(controlnet_residuals)
pred = model(
img=img,
img_ids=img_ids,
@@ -67,7 +74,8 @@ def denoise(
y=vec,
timesteps=t_vec,
guidance=guidance_vec,
controlnet_block_residuals=controlnet_block_residuals,
controlnet_double_block_residuals=merged_controlnet_residuals.double_block_residuals,
controlnet_single_block_residuals=merged_controlnet_residuals.single_block_residuals,
)
preview_img = img - t_curr * pred

View File

@@ -0,0 +1,45 @@
import math
from abc import ABC, abstractmethod
from typing import List, Union
import torch
from invokeai.backend.flux.controlnet.controlnet_flux_output import ControlNetFluxOutput
class BaseControlNetExtension(ABC):
def __init__(
self,
weight: Union[float, List[float]],
begin_step_percent: float,
end_step_percent: float,
):
self._weight = weight
self._begin_step_percent = begin_step_percent
self._end_step_percent = end_step_percent
def _get_weight(self, timestep_index: int, total_num_timesteps: int) -> float:
first_step = math.floor(self._begin_step_percent * total_num_timesteps)
last_step = math.ceil(self._end_step_percent * total_num_timesteps)
if timestep_index < first_step or timestep_index > last_step:
return 0.0
if isinstance(self._weight, list):
return self._weight[timestep_index]
return self._weight
@abstractmethod
def run_controlnet(
self,
timestep_index: int,
total_num_timesteps: int,
img: torch.Tensor,
img_ids: torch.Tensor,
txt: torch.Tensor,
txt_ids: torch.Tensor,
y: torch.Tensor,
timesteps: torch.Tensor,
guidance: torch.Tensor | None,
) -> ControlNetFluxOutput: ...
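The `_get_weight` schedule above zeroes the ControlNet outside the `[begin_step_percent, end_step_percent]` window and supports a per-step weight list; a standalone sketch of that logic:

```python
import math

def get_weight(weight, begin_step_percent, end_step_percent, timestep_index, total_num_timesteps):
    """Zero outside the [begin, end] step window; index per-step weight lists inside it (sketch)."""
    first_step = math.floor(begin_step_percent * total_num_timesteps)
    last_step = math.ceil(end_step_percent * total_num_timesteps)
    if timestep_index < first_step or timestep_index > last_step:
        return 0.0
    return weight[timestep_index] if isinstance(weight, list) else weight

before = get_weight(0.8, 0.25, 0.75, 0, 20)   # step 0 is before the window -> 0.0
inside = get_weight(0.8, 0.25, 0.75, 10, 20)  # step 10 is inside the window -> 0.8
```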

View File

@@ -0,0 +1,194 @@
import math
from typing import List, Union
import torch
from PIL.Image import Image
from invokeai.app.invocations.constants import LATENT_SCALE_FACTOR
from invokeai.app.invocations.flux_vae_encode import FluxVaeEncodeInvocation
from invokeai.app.util.controlnet_utils import CONTROLNET_RESIZE_VALUES, prepare_control_image
from invokeai.backend.flux.controlnet.controlnet_flux_output import ControlNetFluxOutput
from invokeai.backend.flux.controlnet.instantx_controlnet_flux import (
InstantXControlNetFlux,
InstantXControlNetFluxOutput,
)
from invokeai.backend.flux.extensions.base_controlnet_extension import BaseControlNetExtension
from invokeai.backend.flux.sampling_utils import pack
from invokeai.backend.model_manager.load.load_base import LoadedModel
class InstantXControlNetExtension(BaseControlNetExtension):
def __init__(
self,
model: InstantXControlNetFlux,
controlnet_cond: torch.Tensor,
instantx_control_mode: torch.Tensor | None,
weight: Union[float, List[float]],
begin_step_percent: float,
end_step_percent: float,
):
super().__init__(
weight=weight,
begin_step_percent=begin_step_percent,
end_step_percent=end_step_percent,
)
self._model = model
# The VAE-encoded and 'packed' control image to pass to the ControlNet model.
self._controlnet_cond = controlnet_cond
# TODO(ryand): Should we define an enum for the instantx_control_mode? Is it likely to change for future models?
# The control mode for InstantX ControlNet union models.
# See the values defined here: https://huggingface.co/InstantX/FLUX.1-dev-Controlnet-Union#control-mode
# Expected shape: (batch_size, 1), Expected dtype: torch.long
# If None, a zero-embedding will be used.
self._instantx_control_mode = instantx_control_mode
# TODO(ryand): Pass in these params if a new base transformer / InstantX ControlNet pair gets released.
self._flux_transformer_num_double_blocks = 19
self._flux_transformer_num_single_blocks = 38
@classmethod
def prepare_controlnet_cond(
cls,
controlnet_image: Image,
vae_info: LoadedModel,
latent_height: int,
latent_width: int,
dtype: torch.dtype,
device: torch.device,
resize_mode: CONTROLNET_RESIZE_VALUES,
):
image_height = latent_height * LATENT_SCALE_FACTOR
image_width = latent_width * LATENT_SCALE_FACTOR
resized_controlnet_image = prepare_control_image(
image=controlnet_image,
do_classifier_free_guidance=False,
width=image_width,
height=image_height,
device=device,
dtype=dtype,
control_mode="balanced",
resize_mode=resize_mode,
)
# Shift the image from [0, 1] to [-1, 1].
resized_controlnet_image = resized_controlnet_image * 2 - 1
# Run VAE encoder.
controlnet_cond = FluxVaeEncodeInvocation.vae_encode(vae_info=vae_info, image_tensor=resized_controlnet_image)
controlnet_cond = pack(controlnet_cond)
return controlnet_cond
@classmethod
def from_controlnet_image(
cls,
model: InstantXControlNetFlux,
controlnet_image: Image,
instantx_control_mode: torch.Tensor | None,
vae_info: LoadedModel,
latent_height: int,
latent_width: int,
dtype: torch.dtype,
device: torch.device,
resize_mode: CONTROLNET_RESIZE_VALUES,
weight: Union[float, List[float]],
begin_step_percent: float,
end_step_percent: float,
):
image_height = latent_height * LATENT_SCALE_FACTOR
image_width = latent_width * LATENT_SCALE_FACTOR
resized_controlnet_image = prepare_control_image(
image=controlnet_image,
do_classifier_free_guidance=False,
width=image_width,
height=image_height,
device=device,
dtype=dtype,
control_mode="balanced",
resize_mode=resize_mode,
)
# Shift the image from [0, 1] to [-1, 1].
resized_controlnet_image = resized_controlnet_image * 2 - 1
# Run VAE encoder.
controlnet_cond = FluxVaeEncodeInvocation.vae_encode(vae_info=vae_info, image_tensor=resized_controlnet_image)
controlnet_cond = pack(controlnet_cond)
return cls(
model=model,
controlnet_cond=controlnet_cond,
instantx_control_mode=instantx_control_mode,
weight=weight,
begin_step_percent=begin_step_percent,
end_step_percent=end_step_percent,
)
def _instantx_output_to_controlnet_output(
self, instantx_output: InstantXControlNetFluxOutput
) -> ControlNetFluxOutput:
# The `interval_control` logic here is based on
# https://github.com/huggingface/diffusers/blob/31058cdaef63ca660a1a045281d156239fba8192/src/diffusers/models/transformers/transformer_flux.py#L507-L511
# Handle double block residuals.
double_block_residuals: list[torch.Tensor] = []
double_block_samples = instantx_output.controlnet_block_samples
if double_block_samples:
interval_control = self._flux_transformer_num_double_blocks / len(double_block_samples)
interval_control = int(math.ceil(interval_control))
for i in range(self._flux_transformer_num_double_blocks):
double_block_residuals.append(double_block_samples[i // interval_control])
# Handle single block residuals.
single_block_residuals: list[torch.Tensor] = []
single_block_samples = instantx_output.controlnet_single_block_samples
if single_block_samples:
interval_control = self._flux_transformer_num_single_blocks / len(single_block_samples)
interval_control = int(math.ceil(interval_control))
for i in range(self._flux_transformer_num_single_blocks):
single_block_residuals.append(single_block_samples[i // interval_control])
return ControlNetFluxOutput(
double_block_residuals=double_block_residuals or None,
single_block_residuals=single_block_residuals or None,
)
def run_controlnet(
self,
timestep_index: int,
total_num_timesteps: int,
img: torch.Tensor,
img_ids: torch.Tensor,
txt: torch.Tensor,
txt_ids: torch.Tensor,
y: torch.Tensor,
timesteps: torch.Tensor,
guidance: torch.Tensor | None,
) -> ControlNetFluxOutput:
weight = self._get_weight(timestep_index=timestep_index, total_num_timesteps=total_num_timesteps)
if weight < 1e-6:
return ControlNetFluxOutput(single_block_residuals=None, double_block_residuals=None)
# Make sure inputs have correct device and dtype.
self._controlnet_cond = self._controlnet_cond.to(device=img.device, dtype=img.dtype)
self._instantx_control_mode = (
self._instantx_control_mode.to(device=img.device) if self._instantx_control_mode is not None else None
)
instantx_output: InstantXControlNetFluxOutput = self._model(
controlnet_cond=self._controlnet_cond,
controlnet_mode=self._instantx_control_mode,
img=img,
img_ids=img_ids,
txt=txt,
txt_ids=txt_ids,
timesteps=timesteps,
y=y,
guidance=guidance,
)
controlnet_output = self._instantx_output_to_controlnet_output(instantx_output)
controlnet_output.apply_weight(weight)
return controlnet_output
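The `interval_control` logic in `_instantx_output_to_controlnet_output` above stretches N ControlNet samples over the transformer's 19 double blocks (or 38 single blocks) by repeating each sample over `ceil(blocks / N)` consecutive blocks. A sketch of that index mapping (hypothetical helper, plain values standing in for tensors):

```python
import math

def expand_residuals(samples: list, num_blocks: int = 19) -> list:
    """Repeat each sample over ceil(num_blocks / len(samples)) consecutive blocks (sketch)."""
    interval = int(math.ceil(num_blocks / len(samples)))
    return [samples[i // interval] for i in range(num_blocks)]

# 10 samples over 19 double blocks: interval 2, so each sample covers two consecutive blocks.
expanded = expand_residuals(list(range(10)))
```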

View File

@@ -0,0 +1,150 @@
from typing import List, Union
import torch
from PIL.Image import Image
from invokeai.app.invocations.constants import LATENT_SCALE_FACTOR
from invokeai.app.util.controlnet_utils import CONTROLNET_RESIZE_VALUES, prepare_control_image
from invokeai.backend.flux.controlnet.controlnet_flux_output import ControlNetFluxOutput
from invokeai.backend.flux.controlnet.xlabs_controlnet_flux import XLabsControlNetFlux, XLabsControlNetFluxOutput
from invokeai.backend.flux.extensions.base_controlnet_extension import BaseControlNetExtension
class XLabsControlNetExtension(BaseControlNetExtension):
def __init__(
self,
model: XLabsControlNetFlux,
controlnet_cond: torch.Tensor,
weight: Union[float, List[float]],
begin_step_percent: float,
end_step_percent: float,
):
super().__init__(
weight=weight,
begin_step_percent=begin_step_percent,
end_step_percent=end_step_percent,
)
self._model = model
# _controlnet_cond is the control image passed to the ControlNet model.
# Pixel values are in the range [-1, 1]. Shape: (batch_size, 3, height, width).
self._controlnet_cond = controlnet_cond
# TODO(ryand): Pass in these params if a new base transformer / XLabs ControlNet pair gets released.
self._flux_transformer_num_double_blocks = 19
self._flux_transformer_num_single_blocks = 38
@classmethod
def prepare_controlnet_cond(
cls,
controlnet_image: Image,
latent_height: int,
latent_width: int,
dtype: torch.dtype,
device: torch.device,
resize_mode: CONTROLNET_RESIZE_VALUES,
):
image_height = latent_height * LATENT_SCALE_FACTOR
image_width = latent_width * LATENT_SCALE_FACTOR
controlnet_cond = prepare_control_image(
image=controlnet_image,
do_classifier_free_guidance=False,
width=image_width,
height=image_height,
device=device,
dtype=dtype,
control_mode="balanced",
resize_mode=resize_mode,
)
# Map pixel values from [0, 1] to [-1, 1].
controlnet_cond = controlnet_cond * 2 - 1
return controlnet_cond
@classmethod
def from_controlnet_image(
cls,
model: XLabsControlNetFlux,
controlnet_image: Image,
latent_height: int,
latent_width: int,
dtype: torch.dtype,
device: torch.device,
resize_mode: CONTROLNET_RESIZE_VALUES,
weight: Union[float, List[float]],
begin_step_percent: float,
end_step_percent: float,
):
image_height = latent_height * LATENT_SCALE_FACTOR
image_width = latent_width * LATENT_SCALE_FACTOR
controlnet_cond = prepare_control_image(
image=controlnet_image,
do_classifier_free_guidance=False,
width=image_width,
height=image_height,
device=device,
dtype=dtype,
control_mode="balanced",
resize_mode=resize_mode,
)
# Map pixel values from [0, 1] to [-1, 1].
controlnet_cond = controlnet_cond * 2 - 1
return cls(
model=model,
controlnet_cond=controlnet_cond,
weight=weight,
begin_step_percent=begin_step_percent,
end_step_percent=end_step_percent,
)
def _xlabs_output_to_controlnet_output(self, xlabs_output: XLabsControlNetFluxOutput) -> ControlNetFluxOutput:
# The modulo index logic used here is based on:
# https://github.com/XLabs-AI/x-flux/blob/47495425dbed499be1e8e5a6e52628b07349cba2/src/flux/model.py#L198-L200
# Handle double block residuals.
double_block_residuals: list[torch.Tensor] = []
xlabs_double_block_residuals = xlabs_output.controlnet_double_block_residuals
if xlabs_double_block_residuals is not None:
for i in range(self._flux_transformer_num_double_blocks):
double_block_residuals.append(xlabs_double_block_residuals[i % len(xlabs_double_block_residuals)])
return ControlNetFluxOutput(
double_block_residuals=double_block_residuals,
single_block_residuals=None,
)
def run_controlnet(
self,
timestep_index: int,
total_num_timesteps: int,
img: torch.Tensor,
img_ids: torch.Tensor,
txt: torch.Tensor,
txt_ids: torch.Tensor,
y: torch.Tensor,
timesteps: torch.Tensor,
guidance: torch.Tensor | None,
) -> ControlNetFluxOutput:
weight = self._get_weight(timestep_index=timestep_index, total_num_timesteps=total_num_timesteps)
if weight < 1e-6:
return ControlNetFluxOutput(single_block_residuals=None, double_block_residuals=None)
xlabs_output: XLabsControlNetFluxOutput = self._model(
img=img,
img_ids=img_ids,
controlnet_cond=self._controlnet_cond,
txt=txt,
txt_ids=txt_ids,
timesteps=timesteps,
y=y,
guidance=guidance,
)
controlnet_output = self._xlabs_output_to_controlnet_output(xlabs_output)
controlnet_output.apply_weight(weight)
return controlnet_output
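By contrast with the InstantX interval mapping, `_xlabs_output_to_controlnet_output` above cycles its residuals over the double blocks with a modulo index; a sketch (hypothetical helper, plain values standing in for tensors):

```python
def expand_residuals_modulo(residuals: list, num_blocks: int = 19) -> list:
    """Cycle the available residuals over all double blocks (sketch)."""
    return [residuals[i % len(residuals)] for i in range(num_blocks)]

# Two residuals over 19 blocks simply alternate.
expanded = expand_residuals_modulo(["r0", "r1"])
```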

View File

@@ -16,7 +16,10 @@ def attention(q: Tensor, k: Tensor, v: Tensor, pe: Tensor) -> Tensor:
def rope(pos: Tensor, dim: int, theta: int) -> Tensor:
assert dim % 2 == 0
scale = torch.arange(0, dim, 2, dtype=torch.float64, device=pos.device) / dim
scale = (
torch.arange(0, dim, 2, dtype=torch.float32 if pos.device.type == "mps" else torch.float64, device=pos.device)
/ dim
)
omega = 1.0 / (theta**scale)
out = torch.einsum("...n,d->...nd", pos, omega)
out = torch.stack([torch.cos(out), -torch.sin(out), torch.sin(out), torch.cos(out)], dim=-1)

View File

@@ -87,8 +87,9 @@ class Flux(nn.Module):
txt_ids: Tensor,
timesteps: Tensor,
y: Tensor,
guidance: Tensor | None = None,
controlnet_block_residuals: list[list[Tensor] | None] | None = None,
guidance: Tensor | None,
controlnet_double_block_residuals: list[Tensor] | None,
controlnet_single_block_residuals: list[Tensor] | None,
) -> Tensor:
if img.ndim != 3 or txt.ndim != 3:
raise ValueError("Input img and txt tensors must have 3 dimensions.")
@@ -106,18 +107,27 @@ class Flux(nn.Module):
ids = torch.cat((txt_ids, img_ids), dim=1)
pe = self.pe_embedder(ids)
# Validate double_block_residuals shape.
if controlnet_double_block_residuals is not None:
assert len(controlnet_double_block_residuals) == len(self.double_blocks)
for block_index, block in enumerate(self.double_blocks):
img, txt = block(img=img, txt=txt, vec=vec, pe=pe)
# Apply ControlNet residuals.
if controlnet_block_residuals is not None:
for single_controlnet_block_residuals in controlnet_block_residuals:
if single_controlnet_block_residuals:
img += single_controlnet_block_residuals[block_index % len(single_controlnet_block_residuals)]
if controlnet_double_block_residuals is not None:
img += controlnet_double_block_residuals[block_index]
img = torch.cat((txt, img), 1)
for block in self.single_blocks:
# Validate single_block_residuals shape.
if controlnet_single_block_residuals is not None:
assert len(controlnet_single_block_residuals) == len(self.single_blocks)
for block_index, block in enumerate(self.single_blocks):
img = block(img, vec=vec, pe=pe)
if controlnet_single_block_residuals is not None:
img[:, txt.shape[1] :, ...] += controlnet_single_block_residuals[block_index]
img = img[:, txt.shape[1] :, ...]
img = self.final_layer(img, vec) # (N, T, patch_size ** 2 * out_channels)
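The single-block residuals above are added only to the image tokens: after `torch.cat((txt, img), 1)` the sequence is text tokens followed by image tokens, and the `[:, txt.shape[1]:, ...]` slice skips the text prefix. A toy sketch of that slicing with plain lists standing in for tensors:

```python
# 2 text tokens followed by 3 image tokens, as a flat list.
txt_len = 2
seq = [10, 10, 1, 2, 3]
residual = [100, 100, 100]  # one residual value per image token

# Equivalent of img[:, txt.shape[1]:, ...] += residual: only the image slice is shifted.
seq[txt_len:] = [token + r for token, r in zip(seq[txt_len:], residual)]
```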

View File

@@ -10,7 +10,15 @@ from safetensors.torch import load_file
from transformers import AutoConfig, AutoModelForTextEncoding, CLIPTextModel, CLIPTokenizer, T5EncoderModel, T5Tokenizer
from invokeai.app.services.config.config_default import get_config
from invokeai.backend.flux.controlnet.controlnet_flux import ControlNetFlux
from invokeai.backend.flux.controlnet.instantx_controlnet_flux import InstantXControlNetFlux
from invokeai.backend.flux.controlnet.state_dict_utils import (
convert_diffusers_instantx_state_dict_to_bfl_format,
infer_flux_params_from_state_dict,
infer_instantx_num_control_modes_from_state_dict,
is_state_dict_instantx_controlnet,
is_state_dict_xlabs_controlnet,
)
from invokeai.backend.flux.controlnet.xlabs_controlnet_flux import XLabsControlNetFlux
from invokeai.backend.flux.model import Flux
from invokeai.backend.flux.modules.autoencoder import AutoEncoder
from invokeai.backend.flux.util import ae_params, params
@@ -26,6 +34,7 @@ from invokeai.backend.model_manager.config import (
CheckpointConfigBase,
CLIPEmbedDiffusersConfig,
ControlNetCheckpointConfig,
ControlNetDiffusersConfig,
MainBnbQuantized4bCheckpointConfig,
MainCheckpointConfig,
MainGGUFCheckpointConfig,
@@ -298,6 +307,7 @@ class FluxBnbQuantizednf4bCheckpointModel(ModelLoader):
@ModelLoaderRegistry.register(base=BaseModelType.Flux, type=ModelType.ControlNet, format=ModelFormat.Checkpoint)
@ModelLoaderRegistry.register(base=BaseModelType.Flux, type=ModelType.ControlNet, format=ModelFormat.Diffusers)
class FluxControlnetModel(ModelLoader):
"""Class to load FLUX ControlNet models."""
@@ -306,13 +316,39 @@ class FluxControlnetModel(ModelLoader):
config: AnyModelConfig,
submodel_type: Optional[SubModelType] = None,
) -> AnyModel:
assert isinstance(config, ControlNetCheckpointConfig)
model_path = Path(config.path)
with accelerate.init_empty_weights():
# HACK(ryand): Is it safe to assume dev here?
model = ControlNetFlux(params["flux-dev"])
if isinstance(config, ControlNetCheckpointConfig):
model_path = Path(config.path)
elif isinstance(config, ControlNetDiffusersConfig):
# If this is a diffusers directory, we simply ignore the config file and load from the weight file.
model_path = Path(config.path) / "diffusion_pytorch_model.safetensors"
else:
raise ValueError(f"Unexpected ControlNet model config type: {type(config)}")
sd = load_file(model_path)
# Detect the FLUX ControlNet model type from the state dict.
if is_state_dict_xlabs_controlnet(sd):
return self._load_xlabs_controlnet(sd)
elif is_state_dict_instantx_controlnet(sd):
return self._load_instantx_controlnet(sd)
else:
raise ValueError("Do not recognize the state dict as an XLabs or InstantX ControlNet model.")
def _load_xlabs_controlnet(self, sd: dict[str, torch.Tensor]) -> AnyModel:
with accelerate.init_empty_weights():
# HACK(ryand): Is it safe to assume dev here?
model = XLabsControlNetFlux(params["flux-dev"])
model.load_state_dict(sd, assign=True)
return model
def _load_instantx_controlnet(self, sd: dict[str, torch.Tensor]) -> AnyModel:
sd = convert_diffusers_instantx_state_dict_to_bfl_format(sd)
flux_params = infer_flux_params_from_state_dict(sd)
num_control_modes = infer_instantx_num_control_modes_from_state_dict(sd)
with accelerate.init_empty_weights():
model = InstantXControlNetFlux(flux_params, num_control_modes)
model.load_state_dict(sd, assign=True)
return model

View File

@@ -10,6 +10,10 @@ from picklescan.scanner import scan_file_path
import invokeai.backend.util.logging as logger
from invokeai.app.util.misc import uuid_string
from invokeai.backend.flux.controlnet.state_dict_utils import (
is_state_dict_instantx_controlnet,
is_state_dict_xlabs_controlnet,
)
from invokeai.backend.lora.conversions.flux_diffusers_lora_conversion_utils import (
is_state_dict_likely_in_flux_diffusers_format,
)
@@ -116,6 +120,7 @@ class ModelProbe(object):
"CLIPModel": ModelType.CLIPEmbed,
"CLIPTextModel": ModelType.CLIPEmbed,
"T5EncoderModel": ModelType.T5Encoder,
"FluxControlNetModel": ModelType.ControlNet,
}
@classmethod
@@ -450,6 +455,7 @@ MODEL_NAME_TO_PREPROCESSOR = {
"lineart": "lineart_image_processor",
"lineart_anime": "lineart_anime_image_processor",
"softedge": "hed_image_processor",
"hed": "hed_image_processor",
"shuffle": "content_shuffle_image_processor",
"pose": "dw_openpose_image_processor",
"mediapipe": "mediapipe_face_processor",
@@ -461,7 +467,8 @@ MODEL_NAME_TO_PREPROCESSOR = {
def get_default_settings_controlnet_t2i_adapter(model_name: str) -> Optional[ControlAdapterDefaultSettings]:
for k, v in MODEL_NAME_TO_PREPROCESSOR.items():
if k in model_name:
model_name_lower = model_name.lower()
if k in model_name_lower:
return ControlAdapterDefaultSettings(preprocessor=v)
return None
@@ -635,8 +642,7 @@ class ControlNetCheckpointProbe(CheckpointProbeBase):
def get_base_type(self) -> BaseModelType:
checkpoint = self.checkpoint
if "double_blocks.0.img_attn.qkv.weight" in checkpoint:
if is_state_dict_xlabs_controlnet(checkpoint) or is_state_dict_instantx_controlnet(checkpoint):
# TODO(ryand): Should I distinguish between XLabs, InstantX and other ControlNet models by implementing
# get_format()?
return BaseModelType.Flux
@@ -862,22 +868,19 @@ class ControlNetFolderProbe(FolderProbeBase):
raise InvalidModelConfigException(f"Cannot determine base type for {self.model_path}")
with open(config_file, "r") as file:
config = json.load(file)
if config.get("_class_name", None) == "FluxControlNetModel":
return BaseModelType.Flux
# no obvious way to distinguish between sd2-base and sd2-768
dimension = config["cross_attention_dim"]
base_model = (
BaseModelType.StableDiffusion1
if dimension == 768
else (
BaseModelType.StableDiffusion2
if dimension == 1024
else BaseModelType.StableDiffusionXL
if dimension == 2048
else None
)
)
if not base_model:
raise InvalidModelConfigException(f"Unable to determine model base for {self.model_path}")
return base_model
if dimension == 768:
return BaseModelType.StableDiffusion1
if dimension == 1024:
return BaseModelType.StableDiffusion2
if dimension == 2048:
return BaseModelType.StableDiffusionXL
raise InvalidModelConfigException(f"Unable to determine model base for {self.model_path}")
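The `cross_attention_dim` dispatch above can be read as a simple lookup; a sketch (hypothetical helper, with string labels standing in for the `BaseModelType` enum members):

```python
def base_from_cross_attention_dim(dim: int) -> str:
    """Map a diffusers ControlNet cross_attention_dim to a base model family (sketch)."""
    mapping = {768: "sd-1", 1024: "sd-2", 2048: "sdxl"}
    if dim not in mapping:
        raise ValueError(f"Unable to determine model base for cross_attention_dim={dim}")
    return mapping[dim]

base = base_from_cross_attention_dim(1024)
```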
class LoRAFolderProbe(FolderProbeBase):

View File

@@ -422,6 +422,13 @@ STARTER_MODELS: list[StarterModel] = [
description="ControlNet weights trained on sdxl-1.0 with tiled image conditioning",
type=ModelType.ControlNet,
),
StarterModel(
name="FLUX.1-dev-Controlnet-Union-Pro",
base=BaseModelType.Flux,
source="Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro",
description="A unified ControlNet for FLUX.1-dev model that supports 7 control modes, including canny (0), tile (1), depth (2), blur (3), pose (4), gray (5), low quality (6)",
type=ModelType.ControlNet,
),
# endregion
# region T2I Adapter
StarterModel(


@@ -171,8 +171,19 @@ class StableDiffusionGeneratorPipeline(StableDiffusionPipeline):
"""
if xformers is available, use it, otherwise use sliced attention.
"""
# On 30xx and 40xx series GPUs, `torch-sdp` is faster than `xformers`. This corresponds to a CUDA major
# version of 8 or higher. So, for major version 7 or below, we prefer `xformers`.
# See:
# - https://developer.nvidia.com/cuda-gpus
# - https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#compute-capabilities
try:
prefer_xformers = torch.cuda.is_available() and torch.cuda.get_device_properties("cuda").major <= 7 # type: ignore # Type of "get_device_properties" is partially unknown
except Exception:
prefer_xformers = False
config = get_config()
if config.attention_type == "xformers":
if config.attention_type == "xformers" and is_xformers_available() and prefer_xformers:
self.enable_xformers_memory_efficient_attention()
return
elif config.attention_type == "sliced":
@@ -187,20 +198,24 @@ class StableDiffusionGeneratorPipeline(StableDiffusionPipeline):
self.disable_attention_slicing()
return
elif config.attention_type == "torch-sdp":
if hasattr(torch.nn.functional, "scaled_dot_product_attention"):
# diffusers enables sdp automatically
return
else:
raise Exception("torch-sdp attention slicing not available")
# torch-sdp is the default in diffusers.
return
# the remainder of this code is called when attention_type=='auto'
# See https://github.com/invoke-ai/InvokeAI/issues/7049 for context.
# Bumping torch from 2.2.2 to 2.4.1 caused the sliced attention implementation to produce incorrect results.
# For now, if a user is on an MPS device and has not explicitly set the attention_type, then we select the
# non-sliced torch-sdp implementation. This keeps things working on MPS at the cost of increased peak memory
# utilization.
if torch.backends.mps.is_available():
return
# The remainder of this code is called when attention_type=='auto'.
if self.unet.device.type == "cuda":
if is_xformers_available():
if is_xformers_available() and prefer_xformers:
self.enable_xformers_memory_efficient_attention()
return
elif hasattr(torch.nn.functional, "scaled_dot_product_attention"):
# diffusers enables sdp automatically
return
# torch-sdp is the default in diffusers.
return
if self.unet.device.type == "cpu" or self.unet.device.type == "mps":
mem_free = psutil.virtual_memory().free
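The attention-selection hunk above gates xformers on the GPU's CUDA compute capability: on major version 8+ (30xx/40xx series) `torch-sdp` is faster, so xformers is only preferred on 7.x and below. A hedged sketch of that gate, with the device query abstracted into a callable so it can be exercised without a GPU (`get_major_cc` is a stand-in for `torch.cuda.get_device_properties("cuda").major`):

```python
from typing import Callable

def prefer_xformers(cuda_available: bool, get_major_cc: Callable[[], int]) -> bool:
    # Prefer xformers only on pre-Ampere CUDA devices (compute capability
    # major version <= 7). Any failure querying the device means we fall
    # back to torch-sdp, mirroring the try/except in the diff above.
    try:
        return cuda_available and get_major_cc() <= 7
    except Exception:
        return False
```

In the real code this boolean is then AND-ed with `is_xformers_available()` and the user's configured `attention_type` before xformers is actually enabled.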


@@ -58,7 +58,7 @@
"@dnd-kit/sortable": "^8.0.0",
"@dnd-kit/utilities": "^3.2.2",
"@fontsource-variable/inter": "^5.1.0",
"@invoke-ai/ui-library": "^0.0.40",
"@invoke-ai/ui-library": "^0.0.42",
"@nanostores/react": "^0.7.3",
"@reduxjs/toolkit": "2.2.3",
"@roarr/browser-log-writer": "^1.3.0",
@@ -83,6 +83,7 @@
"overlayscrollbars-react": "^0.5.6",
"perfect-freehand": "^1.2.2",
"query-string": "^9.1.0",
"raf-throttle": "^2.0.6",
"react": "^18.3.1",
"react-colorful": "^5.6.1",
"react-dom": "^18.3.1",


@@ -24,8 +24,8 @@ dependencies:
specifier: ^5.1.0
version: 5.1.0
'@invoke-ai/ui-library':
specifier: ^0.0.40
version: 0.0.40(@chakra-ui/form-control@2.2.0)(@chakra-ui/icon@3.2.0)(@chakra-ui/media-query@3.3.0)(@chakra-ui/menu@2.2.1)(@chakra-ui/spinner@2.1.0)(@chakra-ui/system@2.6.2)(@fontsource-variable/inter@5.1.0)(@types/react@18.3.11)(i18next@23.15.1)(react-dom@18.3.1)(react@18.3.1)
specifier: ^0.0.42
version: 0.0.42(@chakra-ui/form-control@2.2.0)(@chakra-ui/icon@3.2.0)(@chakra-ui/media-query@3.3.0)(@chakra-ui/menu@2.2.1)(@chakra-ui/spinner@2.1.0)(@chakra-ui/system@2.6.2)(@fontsource-variable/inter@5.1.0)(@types/react@18.3.11)(i18next@23.15.1)(react-dom@18.3.1)(react@18.3.1)
'@nanostores/react':
specifier: ^0.7.3
version: 0.7.3(nanostores@0.11.3)(react@18.3.1)
@@ -98,6 +98,9 @@ dependencies:
query-string:
specifier: ^9.1.0
version: 9.1.0
raf-throttle:
specifier: ^2.0.6
version: 2.0.6
react:
specifier: ^18.3.1
version: 18.3.1
@@ -490,8 +493,8 @@ packages:
resolution: {integrity: sha512-MV6D4VLRIHr4PkW4zMyqfrNS1mPlCTiCXwvYGtDFQYr+xHFfonhAuf9WjsSc0nyp2m0OdkSLnzmVKkZFLo25Tg==}
dev: false
/@chakra-ui/anatomy@2.3.3:
resolution: {integrity: sha512-Sy2VAG0WrzkQE40Y0fY406c6AlyqFxAc7j6fDz8Wwotz9veAvm+y5UgFUyhZ6FoYNAjDMPQ7JCcN7OGz74pNlA==}
/@chakra-ui/anatomy@2.3.4:
resolution: {integrity: sha512-fFIYN7L276gw0Q7/ikMMlZxP7mvnjRaWJ7f3Jsf9VtDOi6eAYIBRrhQe6+SZ0PGmoOkRaBc7gSE5oeIbgFFyrw==}
dev: false
/@chakra-ui/breakpoint-utils@2.0.8:
@@ -548,12 +551,12 @@ packages:
react: 18.3.1
dev: false
/@chakra-ui/hooks@2.3.3(react@18.3.1):
resolution: {integrity: sha512-nvqQfR+u0qAJ2/mdGF1XTrnfW9WahSsOc62E/xtRm5hPClfkxPCIXDuw0C17lZ2RYfg/hxsYKLJCGnQWZcC/7w==}
/@chakra-ui/hooks@2.4.2(react@18.3.1):
resolution: {integrity: sha512-LRKiVE1oA7afT5tbbSKAy7Uas2xFHE6IkrQdbhWCHmkHBUtPvjQQDgwtnd4IRZPmoEfNGwoJ/MQpwOM/NRTTwA==}
peerDependencies:
react: '>=18'
dependencies:
'@chakra-ui/utils': 2.1.3(react@18.3.1)
'@chakra-ui/utils': 2.2.2(react@18.3.1)
'@zag-js/element-size': 0.31.1
copy-to-clipboard: 3.3.3
framesync: 6.1.2
@@ -571,13 +574,13 @@ packages:
react: 18.3.1
dev: false
/@chakra-ui/icons@2.2.3(@chakra-ui/react@2.9.4)(react@18.3.1):
resolution: {integrity: sha512-BihIvFvAKq+9/U3sI47Vdo3Mmr9VxTvWcFBl3qZsbJSBpqK7GYaakNWADyPvgsCRGo2be72AZgcOAYaAqWDThQ==}
/@chakra-ui/icons@2.2.4(@chakra-ui/react@2.10.2)(react@18.3.1):
resolution: {integrity: sha512-l5QdBgwrAg3Sc2BRqtNkJpfuLw/pWRDwwT58J6c4PqQT6wzXxyNa8Q0PForu1ltB5qEiFb1kxr/F/HO1EwNa6g==}
peerDependencies:
'@chakra-ui/react': '>=2.0.0'
react: '>=18'
dependencies:
'@chakra-ui/react': 2.9.4(@emotion/react@11.13.3)(@emotion/styled@11.13.0)(@types/react@18.3.11)(framer-motion@11.10.0)(react-dom@18.3.1)(react@18.3.1)
'@chakra-ui/react': 2.10.2(@emotion/react@11.13.3)(@emotion/styled@11.13.0)(@types/react@18.3.11)(framer-motion@11.10.0)(react-dom@18.3.1)(react@18.3.1)
react: 18.3.1
dev: false
@@ -800,8 +803,8 @@ packages:
react: 18.3.1
dev: false
/@chakra-ui/react@2.9.4(@emotion/react@11.13.3)(@emotion/styled@11.13.0)(@types/react@18.3.11)(framer-motion@11.10.0)(react-dom@18.3.1)(react@18.3.1):
resolution: {integrity: sha512-e7fMItdoUjZsQuVsq4DSvrX/dpmYHEwJD2UM5dkHvR2Vzsrili0EWfXrT9R+4kCDHtc6vlECbHA13RHru7XUUg==}
/@chakra-ui/react@2.10.2(@emotion/react@11.13.3)(@emotion/styled@11.13.0)(@types/react@18.3.11)(framer-motion@11.10.0)(react-dom@18.3.1)(react@18.3.1):
resolution: {integrity: sha512-TfIHTqTlxTHYJZBtpiR5EZasPUrLYKJxdbHkdOJb5G1OQ+2c5kKl5XA7c2pMtsEptzb7KxAAIB62t3hxdfWp1w==}
peerDependencies:
'@emotion/react': '>=11'
'@emotion/styled': '>=11'
@@ -809,10 +812,10 @@ packages:
react: '>=18'
react-dom: '>=18'
dependencies:
'@chakra-ui/hooks': 2.3.3(react@18.3.1)
'@chakra-ui/styled-system': 2.10.3(react@18.3.1)
'@chakra-ui/theme': 3.4.3(@chakra-ui/styled-system@2.10.3)(react@18.3.1)
'@chakra-ui/utils': 2.1.3(react@18.3.1)
'@chakra-ui/hooks': 2.4.2(react@18.3.1)
'@chakra-ui/styled-system': 2.11.2(react@18.3.1)
'@chakra-ui/theme': 3.4.6(@chakra-ui/styled-system@2.11.2)(react@18.3.1)
'@chakra-ui/utils': 2.2.2(react@18.3.1)
'@emotion/react': 11.13.3(@types/react@18.3.11)(react@18.3.1)
'@emotion/styled': 11.13.0(@emotion/react@11.13.3)(@types/react@18.3.11)(react@18.3.1)
'@popperjs/core': 2.11.8
@@ -823,7 +826,6 @@ packages:
react-dom: 18.3.1(react@18.3.1)
react-fast-compare: 3.2.2
react-focus-lock: 2.13.2(@types/react@18.3.11)(react@18.3.1)
react-lorem-component: 0.13.0(react@18.3.1)
react-remove-scroll: 2.6.0(@types/react@18.3.11)(react@18.3.1)
transitivePeerDependencies:
- '@types/react'
@@ -844,10 +846,10 @@ packages:
react: 18.3.1
dev: false
/@chakra-ui/styled-system@2.10.3(react@18.3.1):
resolution: {integrity: sha512-rU4sG712pnp3Qrc8XT5AKcMhZjByXp1IrErLJ8wmiez2v8hAl/Dv8roK2BTqd4GfkJOrtkyfq2e2ZcDWjbd9Dw==}
/@chakra-ui/styled-system@2.11.2(react@18.3.1):
resolution: {integrity: sha512-y++z2Uop+hjfZX9mbH88F1ikazPv32asD2er56zMJBemUAzweXnHTpiCQbluEDSUDhqmghVZAdb+5L4XLbsRxA==}
dependencies:
'@chakra-ui/utils': 2.1.3(react@18.3.1)
'@chakra-ui/utils': 2.2.2(react@18.3.1)
csstype: 3.1.3
transitivePeerDependencies:
- react
@@ -891,14 +893,14 @@ packages:
color2k: 2.0.3
dev: false
/@chakra-ui/theme-tools@2.2.3(@chakra-ui/styled-system@2.10.3)(react@18.3.1):
resolution: {integrity: sha512-9fbBh4YaF8k1puovMnvdZtoVxQd1IKlRvWQBmIzXoae3KSJi9p1znRLzEX+Qjvph15dFCa2Q4h1gynI+HOh8oQ==}
/@chakra-ui/theme-tools@2.2.6(@chakra-ui/styled-system@2.11.2)(react@18.3.1):
resolution: {integrity: sha512-3UhKPyzKbV3l/bg1iQN9PBvffYp+EBOoYMUaeTUdieQRPFzo2jbYR0lNCxqv8h5aGM/k54nCHU2M/GStyi9F2A==}
peerDependencies:
'@chakra-ui/styled-system': '>=2.0.0'
dependencies:
'@chakra-ui/anatomy': 2.3.3
'@chakra-ui/styled-system': 2.10.3(react@18.3.1)
'@chakra-ui/utils': 2.1.3(react@18.3.1)
'@chakra-ui/anatomy': 2.3.4
'@chakra-ui/styled-system': 2.11.2(react@18.3.1)
'@chakra-ui/utils': 2.2.2(react@18.3.1)
color2k: 2.0.3
transitivePeerDependencies:
- react
@@ -924,15 +926,15 @@ packages:
'@chakra-ui/theme-tools': 2.1.2(@chakra-ui/styled-system@2.9.2)
dev: false
/@chakra-ui/theme@3.4.3(@chakra-ui/styled-system@2.10.3)(react@18.3.1):
resolution: {integrity: sha512-WxGk5wEMr8x/YmR99TfVcnt+qsHt9qy5FJycPgcKoL8blQiZ+v/rLhdWhXvu8K03DyfAoLkQDh2guVl+wKFfHA==}
/@chakra-ui/theme@3.4.6(@chakra-ui/styled-system@2.11.2)(react@18.3.1):
resolution: {integrity: sha512-ZwFBLfiMC3URwaO31ONXoKH9k0TX0OW3UjdPF3EQkQpYyrk/fm36GkkzajjtdpWEd7rzDLRsQjPmvwNaSoNDtg==}
peerDependencies:
'@chakra-ui/styled-system': '>=2.8.0'
dependencies:
'@chakra-ui/anatomy': 2.3.3
'@chakra-ui/styled-system': 2.10.3(react@18.3.1)
'@chakra-ui/theme-tools': 2.2.3(@chakra-ui/styled-system@2.10.3)(react@18.3.1)
'@chakra-ui/utils': 2.1.3(react@18.3.1)
'@chakra-ui/anatomy': 2.3.4
'@chakra-ui/styled-system': 2.11.2(react@18.3.1)
'@chakra-ui/theme-tools': 2.2.6(@chakra-ui/styled-system@2.11.2)(react@18.3.1)
'@chakra-ui/utils': 2.2.2(react@18.3.1)
transitivePeerDependencies:
- react
dev: false
@@ -957,8 +959,8 @@ packages:
lodash.mergewith: 4.6.2
dev: false
/@chakra-ui/utils@2.1.3(react@18.3.1):
resolution: {integrity: sha512-qIuyEg1ThVrUAnkV5nOngMDxUVCKavC04LfuraOCS1PHU4zhU4urJC2FURriALIQSgy6LpegASjvRzi7CIDDQQ==}
/@chakra-ui/utils@2.2.2(react@18.3.1):
resolution: {integrity: sha512-jUPLT0JzRMWxpdzH6c+t0YMJYrvc5CLericgITV3zDSXblkfx3DsYXqU11DJTSGZI9dUKzM1Wd0Wswn4eJwvFQ==}
peerDependencies:
react: '>=16.8.0'
dependencies:
@@ -1694,18 +1696,18 @@ packages:
prettier: 3.3.3
dev: true
/@invoke-ai/ui-library@0.0.40(@chakra-ui/form-control@2.2.0)(@chakra-ui/icon@3.2.0)(@chakra-ui/media-query@3.3.0)(@chakra-ui/menu@2.2.1)(@chakra-ui/spinner@2.1.0)(@chakra-ui/system@2.6.2)(@fontsource-variable/inter@5.1.0)(@types/react@18.3.11)(i18next@23.15.1)(react-dom@18.3.1)(react@18.3.1):
resolution: {integrity: sha512-GoqihMV1uaHPRgJ/GAmtt5+0ES1S3YpWUAkXAdRFqRWBoMs7i6mWddAY+qB9r5dWUR+LTESrGLKADHJBYjtVEQ==}
/@invoke-ai/ui-library@0.0.42(@chakra-ui/form-control@2.2.0)(@chakra-ui/icon@3.2.0)(@chakra-ui/media-query@3.3.0)(@chakra-ui/menu@2.2.1)(@chakra-ui/spinner@2.1.0)(@chakra-ui/system@2.6.2)(@fontsource-variable/inter@5.1.0)(@types/react@18.3.11)(i18next@23.15.1)(react-dom@18.3.1)(react@18.3.1):
resolution: {integrity: sha512-OuDXRipBO5mu+Nv4qN8cd8MiwiGBdq6h4PirVgPI9/ltbdcIzePgUJ0dJns26lflHSTRWW38I16wl4YTw3mNWA==}
peerDependencies:
'@fontsource-variable/inter': ^5.0.16
react: ^18.2.0
react-dom: ^18.2.0
dependencies:
'@chakra-ui/anatomy': 2.2.2
'@chakra-ui/icons': 2.2.3(@chakra-ui/react@2.9.4)(react@18.3.1)
'@chakra-ui/icons': 2.2.4(@chakra-ui/react@2.10.2)(react@18.3.1)
'@chakra-ui/layout': 2.3.1(@chakra-ui/system@2.6.2)(react@18.3.1)
'@chakra-ui/portal': 2.1.0(react-dom@18.3.1)(react@18.3.1)
'@chakra-ui/react': 2.9.4(@emotion/react@11.13.3)(@emotion/styled@11.13.0)(@types/react@18.3.11)(framer-motion@11.10.0)(react-dom@18.3.1)(react@18.3.1)
'@chakra-ui/react': 2.10.2(@emotion/react@11.13.3)(@emotion/styled@11.13.0)(@types/react@18.3.11)(framer-motion@11.10.0)(react-dom@18.3.1)(react@18.3.1)
'@chakra-ui/styled-system': 2.9.2
'@chakra-ui/theme-tools': 2.1.2(@chakra-ui/styled-system@2.9.2)
'@emotion/react': 11.13.3(@types/react@18.3.11)(react@18.3.1)
@@ -4688,13 +4690,6 @@ packages:
yaml: 1.10.2
dev: false
/create-react-class@15.7.0:
resolution: {integrity: sha512-QZv4sFWG9S5RUvkTYWbflxeZX+JG7Cz0Tn33rQBJ+WFQTqTfUTjMjiv9tnfXazjsO5r0KhPs+AqCjyrQX6h2ng==}
dependencies:
loose-envify: 1.4.0
object-assign: 4.1.1
dev: false
/cross-fetch@4.0.0:
resolution: {integrity: sha512-e4a5N8lVvuLgAWgnCrLr2PP0YyDOTHa9H/Rj54dirp61qXnNq46m82bRhNqIA5VccJtWBvPTFRV3TtvHUKPB1g==}
dependencies:
@@ -6813,13 +6808,6 @@ packages:
dependencies:
js-tokens: 4.0.0
/lorem-ipsum@1.0.6:
resolution: {integrity: sha512-Rx4XH8X4KSDCKAVvWGYlhAfNqdUP5ZdT4rRyf0jjrvWgtViZimDIlopWNfn/y3lGM5K4uuiAoY28TaD+7YKFrQ==}
hasBin: true
dependencies:
minimist: 1.2.8
dev: false
/loupe@2.3.7:
resolution: {integrity: sha512-zSMINGVYkdpYSOBmLi0D1Uo7JU9nVdQKrHxC8eYlV+9YKK9WePqAlL7lSlorG/U2Fw1w0hTBmaa/jrQ3UbPHtA==}
dependencies:
@@ -7013,6 +7001,7 @@ packages:
/minimist@1.2.8:
resolution: {integrity: sha512-2yyAR8qBkN3YuheJanUpWC5U3bb5osDywNB8RzDVlDwDHbocAJveqqj1u8+SVD7jkWT4yvsHCpWqqWqAxb0zCA==}
dev: true
/minipass@7.1.2:
resolution: {integrity: sha512-qOOzS1cBTWYF4BH8fVePDBOO9iptMnGUEZwNc/cMWnTV2nVLZ7VoNWEPHkYczZA0pdoA7dl6e7FL659nX9S2aw==}
@@ -7568,6 +7557,10 @@ packages:
resolution: {integrity: sha512-NuaNSa6flKT5JaSYQzJok04JzTL1CA6aGhv5rfLW3PgqA+M2ChpZQnAC8h8i4ZFkBS8X5RqkDBHA7r4hej3K9A==}
dev: true
/raf-throttle@2.0.6:
resolution: {integrity: sha512-C7W6hy78A+vMmk5a/B6C5szjBHrUzWJkVyakjKCK59Uy2CcA7KhO1JUvvH32IXYFIcyJ3FMKP3ZzCc2/71I6Vg==}
dev: false
/railroad-diagrams@1.0.0:
resolution: {integrity: sha512-cz93DjNeLY0idrCNOH6PviZGRN9GJhsdm9hpn1YCS879fj4W+x5IFJhhkRZcwVgMmFF7R82UA/7Oh+R8lLZg6A==}
dev: false
@@ -7767,18 +7760,6 @@ packages:
resolution: {integrity: sha512-/LLMVyas0ljjAtoYiPqYiL8VWXzUUdThrmU5+n20DZv+a+ClRoevUzw5JxU+Ieh5/c87ytoTBV9G1FiKfNJdmg==}
dev: true
/react-lorem-component@0.13.0(react@18.3.1):
resolution: {integrity: sha512-4mWjxmcG/DJJwdxdKwXWyP2N9zohbJg/yYaC+7JffQNrKj3LYDpA/A4u/Dju1v1ZF6Jew2gbFKGb5Z6CL+UNTw==}
peerDependencies:
react: 16.x
dependencies:
create-react-class: 15.7.0
lorem-ipsum: 1.0.6
object-assign: 4.1.1
react: 18.3.1
seedable-random: 0.0.1
dev: false
/react-redux@9.1.2(@types/react@18.3.11)(react@18.3.1)(redux@5.0.1):
resolution: {integrity: sha512-0OA4dhM1W48l3uzmv6B7TXPCGmokUU4p1M44DGN2/D9a1FjVPukVjER1PcPX97jIg6aUeLq1XJo1IpfbgULn0w==}
peerDependencies:
@@ -8293,10 +8274,6 @@ packages:
engines: {node: '>=0.10.0'}
dev: false
/seedable-random@0.0.1:
resolution: {integrity: sha512-uZWbEfz3BQdBl4QlUPELPqhInGEO1Q6zjzqrTDkd3j7mHaWWJo7h4ydr2g24a2WtTLk3imTLc8mPbBdQqdsbGw==}
dev: false
/semver-compare@1.0.0:
resolution: {integrity: sha512-YM3/ITh2MJ5MtzaM429anh+x2jiLVjqILF4m4oyQB18W7Ggea7BfqdH/wGMK7dDiMghv/6WG7znWMwUDzJiXow==}
dev: false


@@ -651,7 +651,8 @@
"imageCopied": "Bild kopiert",
"parametersNotSet": "Parameter nicht festgelegt",
"addedToBoard": "Dem Board hinzugefügt",
"loadedWithWarnings": "Workflow mit Warnungen geladen"
"loadedWithWarnings": "Workflow mit Warnungen geladen",
"imageSaved": "Bild gespeichert"
},
"accessibility": {
"uploadImage": "Bild hochladen",
@@ -664,7 +665,9 @@
"resetUI": "$t(accessibility.reset) von UI",
"createIssue": "Ticket erstellen",
"about": "Über",
"submitSupportTicket": "Support-Ticket senden"
"submitSupportTicket": "Support-Ticket senden",
"toggleRightPanel": "Rechtes Bedienfeld umschalten (G)",
"toggleLeftPanel": "Linkes Bedienfeld umschalten (T)"
},
"boards": {
"autoAddBoard": "Board automatisch erstellen",
@@ -933,7 +936,8 @@
},
"paramScheduler": {
"paragraphs": [
"\"Planer\" definiert, wie iterativ Rauschen zu einem Bild hinzugefügt wird, oder wie ein Sample bei der Ausgabe eines Modells aktualisiert wird."
"Verwendeter Planer während des Generierungsprozesses.",
"Jeder Planer definiert, wie einem Bild iterativ Rauschen hinzugefügt wird, oder wie ein Sample basierend auf der Ausgabe eines Modells aktualisiert wird."
],
"heading": "Planer"
},
@@ -959,6 +963,61 @@
},
"ipAdapterMethod": {
"heading": "Methode"
},
"refinerScheduler": {
"heading": "Planer",
"paragraphs": [
"Planer, der während der Veredelungsphase des Generierungsprozesses verwendet wird.",
"Ähnlich wie der Generierungsplaner."
]
},
"compositingCoherenceMode": {
"paragraphs": [
"Verwendete Methode zur Erstellung eines kohärenten Bildes mit dem neu generierten maskierten Bereich."
],
"heading": "Modus"
},
"compositingCoherencePass": {
"heading": "Kohärenzdurchlauf"
},
"controlNet": {
"heading": "ControlNet"
},
"compositingMaskAdjustments": {
"paragraphs": [
"Die Maske anpassen."
],
"heading": "Maskenanpassungen"
},
"compositingMaskBlur": {
"paragraphs": [
"Der Unschärferadius der Maske."
],
"heading": "Maskenunschärfe"
},
"compositingBlurMethod": {
"paragraphs": [
"Die auf den maskierten Bereich angewendete Unschärfemethode."
],
"heading": "Unschärfemethode"
},
"controlNetResizeMode": {
"heading": "Größenänderungsmodus"
},
"paramWidth": {
"heading": "Breite",
"paragraphs": [
"Breite des generierten Bildes. Muss ein Vielfaches von 8 sein."
]
},
"controlNetControlMode": {
"heading": "Kontrollmodus"
},
"controlNetProcessor": {
"heading": "Prozessor"
},
"patchmatchDownScaleSize": {
"heading": "Herunterskalieren"
}
},
"invocationCache": {
@@ -1062,7 +1121,23 @@
"missingFieldTemplate": "Fehlende Feldvorlage",
"missingNode": "Fehlender Aufrufknoten",
"missingInvocationTemplate": "Fehlende Aufrufvorlage",
"edit": "Bearbeiten"
"edit": "Bearbeiten",
"workflowAuthor": "Autor",
"graph": "Graph",
"workflowDescription": "Kurze Beschreibung",
"versionUnknown": " Version unbekannt",
"workflow": "Arbeitsablauf",
"noGraph": "Kein Graph",
"version": "Version",
"zoomInNodes": "Hineinzoomen",
"zoomOutNodes": "Herauszoomen",
"workflowName": "Name",
"unknownNode": "Unbekannter Knoten",
"workflowContact": "Kontaktdaten",
"workflowNotes": "Notizen",
"workflowTags": "Tags",
"workflowVersion": "Version",
"saveToGallery": "In Galerie speichern"
},
"hrf": {
"enableHrf": "Korrektur für hohe Auflösungen",
@@ -1232,7 +1307,16 @@
"searchByName": "Nach Name suchen",
"promptTemplateCleared": "Promptvorlage gelöscht",
"preview": "Vorschau",
"positivePrompt": "Positiv-Prompt"
"positivePrompt": "Positiv-Prompt",
"active": "Aktiv",
"deleteTemplate2": "Sind Sie sicher, dass Sie diese Vorlage löschen möchten? Dies kann nicht rückgängig gemacht werden.",
"deleteTemplate": "Vorlage löschen",
"copyTemplate": "Vorlage kopieren",
"editTemplate": "Vorlage bearbeiten",
"deleteImage": "Bild löschen",
"defaultTemplates": "Standardvorlagen",
"nameColumn": "'name'",
"exportDownloaded": "Export heruntergeladen"
},
"newUserExperience": {
"gettingStartedSeries": "Wünschen Sie weitere Anleitungen? In unserer <LinkComponent>Einführungsserie</LinkComponent> finden Sie Tipps, wie Sie das Potenzial von Invoke Studio voll ausschöpfen können.",
@@ -1245,13 +1329,22 @@
"bbox": "Bbox"
},
"transform": {
"fitToBbox": "An Bbox anpassen"
"fitToBbox": "An Bbox anpassen",
"reset": "Zurücksetzen",
"apply": "Anwenden",
"cancel": "Abbrechen"
},
"pullBboxIntoLayerError": "Problem, Bbox in die Ebene zu ziehen",
"pullBboxIntoLayer": "Bbox in Ebene ziehen",
"HUD": {
"bbox": "Bbox",
"scaledBbox": "Skalierte Bbox"
"scaledBbox": "Skalierte Bbox",
"entityStatus": {
"isHidden": "{{title}} ist ausgeblendet",
"isDisabled": "{{title}} ist deaktiviert",
"isLocked": "{{title}} ist gesperrt",
"isEmpty": "{{title}} ist leer"
}
},
"fitBboxToLayers": "Bbox an Ebenen anpassen",
"pullBboxIntoReferenceImage": "Bbox ins Referenzbild ziehen",
@@ -1261,7 +1354,12 @@
"clipToBbox": "Pinselstriche auf Bbox beschränken",
"canvasContextMenu": {
"saveBboxToGallery": "Bbox in Galerie speichern",
"bboxGroup": "Aus Bbox erstellen"
"bboxGroup": "Aus Bbox erstellen",
"canvasGroup": "Leinwand",
"newGlobalReferenceImage": "Neues globales Referenzbild",
"newRegionalReferenceImage": "Neues regionales Referenzbild",
"newControlLayer": "Neue Kontroll-Ebene",
"newRasterLayer": "Neue Raster-Ebene"
},
"rectangle": "Rechteck",
"saveCanvasToGallery": "Leinwand in Galerie speichern",
@@ -1292,7 +1390,7 @@
"regional": "Regional",
"newGlobalReferenceImageOk": "Globales Referenzbild erstellt",
"savedToGalleryError": "Fehler beim Speichern in der Galerie",
"savedToGalleryOk": "In Galerie speichern",
"savedToGalleryOk": "In Galerie gespeichert",
"newGlobalReferenceImageError": "Problem beim Erstellen eines globalen Referenzbilds",
"newRegionalReferenceImageOk": "Regionales Referenzbild erstellt",
"duplicate": "Duplizieren",
@@ -1325,12 +1423,39 @@
"showProgressOnCanvas": "Fortschritt auf Leinwand anzeigen",
"controlMode": {
"balanced": "Ausgewogen"
}
},
"globalReferenceImages_withCount_hidden": "Globale Referenzbilder ({{count}} ausgeblendet)",
"sendToGallery": "An Galerie senden",
"stagingArea": {
"accept": "Annehmen",
"next": "Nächste",
"discardAll": "Alle verwerfen",
"discard": "Verwerfen",
"previous": "Vorherige"
},
"regionalGuidance_withCount_visible": "Regionale Führung ({{count}})",
"regionalGuidance_withCount_hidden": "Regionale Führung ({{count}} ausgeblendet)",
"settings": {
"snapToGrid": {
"on": "Ein",
"off": "Aus",
"label": "Am Raster ausrichten"
}
},
"layer_one": "Ebene",
"layer_other": "Ebenen",
"layer_withCount_one": "Ebene ({{count}})",
"layer_withCount_other": "Ebenen ({{count}})"
},
"upsell": {
"shareAccess": "Zugang teilen",
"professional": "Professionell",
"inviteTeammates": "Teamkollegen einladen",
"professionalUpsell": "Verfügbar in der Professional Edition von Invoke. Klicken Sie hier oder besuchen Sie invoke.com/pricing für weitere Details."
},
"upscaling": {
"creativity": "Kreativität",
"structure": "Struktur",
"scale": "Maßstab"
}
}


@@ -53,7 +53,8 @@
"imagesWithCount_one": "{{count}} image",
"imagesWithCount_other": "{{count}} images",
"assetsWithCount_one": "{{count}} asset",
"assetsWithCount_other": "{{count}} assets"
"assetsWithCount_other": "{{count}} assets",
"updateBoardError": "Error updating board"
},
"accordions": {
"generation": {
@@ -89,6 +90,7 @@
"batch": "Batch Manager",
"beta": "Beta",
"cancel": "Cancel",
"close": "Close",
"copy": "Copy",
"copyError": "$t(gallery.copy) Error",
"on": "On",
@@ -280,8 +282,10 @@
"gallery": "Gallery",
"alwaysShowImageSizeBadge": "Always Show Image Size Badge",
"assets": "Assets",
"assetsTab": "Files you've uploaded for use in your projects.",
"autoAssignBoardOnClick": "Auto-Assign Board on Click",
"autoSwitchNewImages": "Auto-Switch to New Images",
"boardsSettings": "Boards Settings",
"copy": "Copy",
"currentlyInUse": "This image is currently in use in the following features:",
"drop": "Drop",
@@ -300,6 +304,8 @@
"gallerySettings": "Gallery Settings",
"go": "Go",
"image": "image",
"imagesTab": "Images you've created and saved within Invoke.",
"imagesSettings": "Gallery Images Settings",
"jump": "Jump",
"loading": "Loading",
"newestFirst": "Newest First",
@@ -698,7 +704,7 @@
"convert": "Convert",
"convertingModelBegin": "Converting Model. Please wait.",
"convertToDiffusers": "Convert To Diffusers",
"convertToDiffusersHelpText1": "This model will be converted to the \ud83e\udde8 Diffusers format.",
"convertToDiffusersHelpText1": "This model will be converted to the 🧨 Diffusers format.",
"convertToDiffusersHelpText2": "This process will replace your Model Manager entry with the Diffusers version of the same model.",
"convertToDiffusersHelpText3": "Your checkpoint file on disk WILL be deleted if it is in InvokeAI root folder. If it is in a custom location, then it WILL NOT be deleted.",
"convertToDiffusersHelpText4": "This is a one time process only. It might take around 30s-60s depending on the specifications of your computer.",
@@ -853,6 +859,8 @@
"ipAdapter": "IP-Adapter",
"loadingNodes": "Loading Nodes...",
"loadWorkflow": "Load Workflow",
"noWorkflows": "No Workflows",
"noMatchingWorkflows": "No Matching Workflows",
"noWorkflow": "No Workflow",
"mismatchedVersion": "Invalid node: node {{node}} of type {{type}} has mismatched version (try updating?)",
"missingTemplate": "Invalid node: node {{node}} of type {{type}} missing template (not installed?)",
@@ -869,6 +877,7 @@
"nodeType": "Node Type",
"noFieldsLinearview": "No fields added to Linear View",
"noFieldsViewMode": "This workflow has no selected fields to display. View the full workflow to configure values.",
"workflowHelpText": "Need Help? Check out our guide to <LinkComponent>Getting Started with Workflows</LinkComponent>.",
"noNodeSelected": "No node selected",
"nodeOpacity": "Node Opacity",
"nodeVersion": "Node Version",
@@ -1119,6 +1128,7 @@
"canceled": "Processing Canceled",
"connected": "Connected to Server",
"imageCopied": "Image Copied",
"linkCopied": "Link Copied",
"unableToLoadImage": "Unable to Load Image",
"unableToLoadImageMetadata": "Unable to Load Image Metadata",
"unableToLoadStylePreset": "Unable to Load Style Preset",
@@ -1515,6 +1525,7 @@
}
},
"workflows": {
"chooseWorkflowFromLibrary": "Choose Workflow from Library",
"defaultWorkflows": "Default Workflows",
"userWorkflows": "User Workflows",
"projectWorkflows": "Project Workflows",
@@ -1527,7 +1538,9 @@
"openWorkflow": "Open Workflow",
"updated": "Updated",
"uploadWorkflow": "Load from File",
"uploadAndSaveWorkflow": "Upload to Library",
"deleteWorkflow": "Delete Workflow",
"deleteWorkflow2": "Are you sure you want to delete this workflow? This cannot be undone.",
"unnamedWorkflow": "Unnamed Workflow",
"downloadWorkflow": "Save to File",
"saveWorkflow": "Save Workflow",
@@ -1550,9 +1563,13 @@
"loadFromGraph": "Load Workflow from Graph",
"convertGraph": "Convert Graph",
"loadWorkflow": "$t(common.load) Workflow",
"autoLayout": "Auto Layout"
"autoLayout": "Auto Layout",
"edit": "Edit",
"download": "Download",
"copyShareLink": "Copy Share Link",
"copyShareLinkForWorkflow": "Copy Share Link for Workflow",
"delete": "Delete"
},
"app": {},
"controlLayers": {
"regional": "Regional",
"global": "Global",
@@ -1626,19 +1643,20 @@
"sendToCanvas": "Send To Canvas",
"newLayerFromImage": "New Layer from Image",
"newCanvasFromImage": "New Canvas from Image",
"newImg2ImgCanvasFromImage": "New Img2Img from Image",
"copyToClipboard": "Copy to Clipboard",
"sendToCanvasDesc": "Pressing Invoke stages your work in progress on the canvas.",
"viewProgressInViewer": "View progress and outputs in the <Btn>Image Viewer</Btn>.",
"viewProgressOnCanvas": "View progress and stage outputs on the <Btn>Canvas</Btn>.",
"rasterLayer_withCount_one": "$t(controlLayers.rasterLayer)",
"controlLayer_withCount_one": "$t(controlLayers.controlLayer)",
"inpaintMask_withCount_one": "$t(controlLayers.inpaintMask)",
"regionalGuidance_withCount_one": "$t(controlLayers.regionalGuidance)",
"globalReferenceImage_withCount_one": "$t(controlLayers.globalReferenceImage)",
"rasterLayer_withCount_other": "Raster Layers",
"controlLayer_withCount_one": "$t(controlLayers.controlLayer)",
"controlLayer_withCount_other": "Control Layers",
"inpaintMask_withCount_one": "$t(controlLayers.inpaintMask)",
"inpaintMask_withCount_other": "Inpaint Masks",
"regionalGuidance_withCount_one": "$t(controlLayers.regionalGuidance)",
"regionalGuidance_withCount_other": "Regional Guidance",
"globalReferenceImage_withCount_one": "$t(controlLayers.globalReferenceImage)",
"globalReferenceImage_withCount_other": "Global Reference Images",
"opacity": "Opacity",
"regionalGuidance_withCount_hidden": "Regional Guidance ({{count}} hidden)",
@@ -1651,7 +1669,6 @@
"rasterLayers_withCount_visible": "Raster Layers ({{count}})",
"globalReferenceImages_withCount_visible": "Global Reference Images ({{count}})",
"inpaintMasks_withCount_visible": "Inpaint Masks ({{count}})",
"layer": "Layer",
"layer_one": "Layer",
"layer_other": "Layers",
"layer_withCount_one": "Layer ({{count}})",
@@ -1802,6 +1819,10 @@
"transform": {
"transform": "Transform",
"fitToBbox": "Fit to Bbox",
"fitMode": "Fit Mode",
"fitModeContain": "Contain",
"fitModeCover": "Cover",
"fitModeFill": "Fill",
"reset": "Reset",
"apply": "Apply",
"cancel": "Cancel"
@@ -1968,7 +1989,7 @@
}
},
"newUserExperience": {
"toGetStarted": "To get started, enter a prompt in the box and click <StrongComponent>Invoke</StrongComponent> to generate your first image. You can choose to save your images directly to the <StrongComponent>Gallery</StrongComponent> or edit them to the <StrongComponent>Canvas</StrongComponent>.",
"toGetStarted": "To get started, enter a prompt in the box and click <StrongComponent>Invoke</StrongComponent> to generate your first image. Select a prompt template to improve results. You can choose to save your images directly to the <StrongComponent>Gallery</StrongComponent> or edit them to the <StrongComponent>Canvas</StrongComponent>.",
"gettingStartedSeries": "Want more guidance? Check out our <LinkComponent>Getting Started Series</LinkComponent> for tips on unlocking the full potential of the Invoke Studio."
},
"whatsNew": {


@@ -207,7 +207,9 @@
"settings": "Paramètres",
"predictionType": "Type de Prédiction",
"advanced": "Avancé",
"modelType": "Type de modèle"
"modelType": "Type de modèle",
"vaePrecision": "Précision VAE",
"noModelSelected": "Aucun modèle sélectionné"
},
"parameters": {
"images": "Images",
@@ -268,7 +270,16 @@
"cancel": "Annuler"
},
"useCpuNoise": "Utiliser le bruit du CPU",
"imageActions": "Actions d'image"
"imageActions": "Actions d'image",
"setToOptimalSize": "Optimiser la taille pour le modèle",
"setToOptimalSizeTooSmall": "$t(parameters.setToOptimalSize) (peut être trop petit)",
"swapDimensions": "Échanger les dimensions",
"aspect": "Aspect",
"cfgRescaleMultiplier": "Multiplicateur de mise à l'échelle CFG",
"setToOptimalSizeTooLarge": "$t(parameters.setToOptimalSize) (peut être trop grand)",
"useSize": "Utiliser la taille",
"remixImage": "Remixer l'image",
"lockAspectRatio": "Verrouiller le rapport hauteur/largeur"
},
"settings": {
"models": "Modèles",
@@ -284,7 +295,23 @@
"beta": "Bêta",
"generation": "Génération",
"ui": "Interface Utilisateur",
"developer": "Développeur"
"developer": "Développeur",
"enableNSFWChecker": "Activer le vérificateur NSFW",
"clearIntermediatesDesc2": "Les images intermédiaires sont des sous-produits de la génération, différentes des images de résultat dans la galerie. La suppression des intermédiaires libérera de l'espace disque.",
"clearIntermediatesDisabled": "La file d'attente doit être vide pour effacer les intermédiaires.",
"reloadingIn": "Rechargement dans",
"intermediatesClearedFailed": "Problème de suppression des intermédiaires",
"clearIntermediates": "Effacer les intermédiaires",
"enableInvisibleWatermark": "Activer le Filigrane Invisible",
"clearIntermediatesDesc1": "Effacer les intermédiaires réinitialisera votre Toile et votre ControlNet.",
"enableInformationalPopovers": "Activer les infobulles d'information",
"intermediatesCleared_one": "Effacé {{count}} Intermédiaire",
"intermediatesCleared_many": "Effacé {{count}} Intermédiaires",
"intermediatesCleared_other": "Effacé {{count}} Intermédiaires",
"clearIntermediatesDesc3": "Vos images de galerie ne seront pas supprimées.",
"clearIntermediatesWithCount_one": "Effacé {{count}} Intermédiaire",
"clearIntermediatesWithCount_many": "Effacé {{count}} Intermédiaires",
"clearIntermediatesWithCount_other": "Effacé {{count}} Intermédiaires"
},
"toast": {
"uploadFailed": "Téléchargement échoué",
@@ -304,7 +331,15 @@
"loadedWithWarnings": "Processus chargé avec des avertissements",
"imageUploaded": "Image importée",
"modelAddedSimple": "Modèle ajouté à la file d'attente",
"setControlImage": "Définir comme image de contrôle"
"setControlImage": "Définir comme image de contrôle",
"workflowDeleted": "Processus supprimé",
"baseModelChangedCleared_one": "Effacé ou désactivé {{count}} sous-modèle incompatible",
"baseModelChangedCleared_many": "Effacé ou désactivé {{count}} sous-modèles incompatibles",
"baseModelChangedCleared_other": "Effacé ou désactivé {{count}} sous-modèles incompatibles",
"invalidUpload": "Téléchargement invalide",
"problemDownloadingImage": "Impossible de télécharger l'image",
"problemRetrievingWorkflow": "Problème de récupération du processus",
"problemDeletingWorkflow": "Problème de suppression du processus"
},
"accessibility": {
"uploadImage": "Charger une image",
@@ -630,7 +665,11 @@
"heading": "Prompt Négatif"
},
"paramVAEPrecision": {
"heading": "Précision du VAE"
"heading": "Précision du VAE",
"paragraphs": [
"La précision utilisée lors de l'encodage et du décodage VAE.",
"La précision Fp16/Half est plus efficace, au détriment de légères variations d'image."
]
},
"controlNetWeight": {
"heading": "Poids",
@@ -660,7 +699,8 @@
"paramScheduler": {
"heading": "Planificateur",
"paragraphs": [
"Planificateur utilisé pendant le processus de génération."
"Planificateur utilisé pendant le processus de génération.",
"Chaque planificateur définit comment ajouter de manière itérative du bruit à une image ou comment mettre à jour un échantillon en fonction de la sortie d'un modèle."
]
},
"controlNet": {
@@ -670,7 +710,11 @@
"heading": "ControlNet"
},
"paramSteps": {
"heading": "Étapes"
"heading": "Étapes",
"paragraphs": [
"Nombre d'étapes qui seront effectuées à chaque génération.",
"Des nombres d'étapes plus élevés créeront généralement de meilleures images, mais nécessiteront plus de temps de génération."
]
},
"controlNetBeginEnd": {
"heading": "Pourcentage de début / de fin d'étape",
@@ -695,7 +739,10 @@
]
},
"paramVAE": {
"heading": "VAE"
"heading": "VAE",
"paragraphs": [
"Modèle utilisé pour convertir la sortie de l'IA en l'image finale."
]
},
"compositingCoherenceMode": {
"heading": "Mode",
@@ -704,7 +751,11 @@
]
},
"paramIterations": {
"heading": "Itérations"
"heading": "Itérations",
"paragraphs": [
"Le nombre d'images à générer.",
"Si les prompts dynamiques sont activés, chaque prompt sera généré autant de fois."
]
},
"dynamicPrompts": {
"paragraphs": [
@@ -715,7 +766,10 @@
"heading": "Prompts Dynamiques"
},
"paramModel": {
"heading": "Modèle"
"heading": "Modèle",
"paragraphs": [
"Modèle utilisé pour la génération. Différents modèles sont entraînés pour se spécialiser dans la production de résultats esthétiques et de contenus variés."
]
},
"compositingCoherencePass": {
"heading": "Passe de cohérence",
@@ -724,13 +778,24 @@
]
},
"paramRatio": {
"heading": "Rapport hauteur/largeur"
"heading": "Rapport hauteur/largeur",
"paragraphs": [
"Le rapport hauteur/largeur de l'image générée.",
"Une taille d'image (en nombre de pixels) équivalente à 512x512 est recommandée pour les modèles SD1.5 et une taille équivalente à 1024x1024 est recommandée pour les modèles SDXL."
]
},
"paramSeed": {
"heading": "Graine"
"heading": "Graine",
"paragraphs": [
"Contrôle le bruit de départ utilisé pour la génération.",
"Désactivez l'option \"Aléatoire\" pour produire des résultats identiques avec les mêmes paramètres de génération."
]
},
"scaleBeforeProcessing": {
"heading": "Échelle avant traitement"
"heading": "Échelle avant traitement",
"paragraphs": [
"\"Auto\" ajuste la zone sélectionnée à la taille la mieux adaptée au modèle avant le processus de génération d'image."
]
},
"compositingBlurMethod": {
"heading": "Méthode de flou",
@@ -751,7 +816,11 @@
]
},
"paramDenoisingStrength": {
"heading": "Force de débruitage"
"heading": "Force de débruitage",
"paragraphs": [
"Intensité du bruit ajouté à l'image d'entrée.",
"0 produira une image identique, tandis que 1 produira une image complètement différente."
]
},
"lora": {
"heading": "LoRA",
@@ -760,10 +829,104 @@
]
},
"noiseUseCPU": {
"heading": "Utiliser le bruit du CPU"
"heading": "Utiliser le bruit du CPU",
"paragraphs": [
"Contrôle si le bruit est généré sur le CPU ou le GPU.",
"Avec le bruit du CPU activé, une graine particulière produira la même image sur n'importe quelle machine.",
"Il n'y a aucun impact sur les performances à activer le bruit du CPU."
]
},
"paramCFGScale": {
"heading": "Échelle CFG"
"heading": "Échelle CFG",
"paragraphs": [
"Contrôle l'influence du prompt sur le processus de génération.",
"Des valeurs élevées de l'échelle CFG peuvent entraîner une saturation excessive et des distorsions."
]
},
"loraWeight": {
"heading": "Poids",
"paragraphs": [
"Poids du LoRA. Un poids plus élevé aura un impact plus important sur l'image finale."
]
},
"imageFit": {
"heading": "Ajuster l'image initiale à la taille de sortie",
"paragraphs": [
"Redimensionne l'image initiale à la largeur et à la hauteur de l'image de sortie. Il est recommandé de l'activer."
]
},
"paramCFGRescaleMultiplier": {
"heading": "Multiplicateur de mise à l'échelle CFG",
"paragraphs": [
"Multiplicateur de mise à l'échelle pour le guidage CFG, utilisé pour les modèles entraînés en utilisant le zero-terminal SNR (ztsnr).",
"Une valeur de 0.7 est suggérée pour ces modèles."
]
},
"controlNetProcessor": {
"heading": "Processeur",
"paragraphs": [
"Méthode de traitement de l'image d'entrée pour guider le processus de génération. Différents processeurs fourniront différents effets ou styles dans vos images générées."
]
},
"paramUpscaleMethod": {
"paragraphs": [
"Méthode utilisée pour améliorer l'image pour la correction de haute résolution."
],
"heading": "Méthode d'agrandissement"
},
"refinerModel": {
"heading": "Modèle de Raffinage",
"paragraphs": [
"Modèle utilisé pendant la partie raffinage du processus de génération.",
"Similaire au Modèle de Génération."
]
},
"paramWidth": {
"paragraphs": [
"Largeur de l'image générée. Doit être un multiple de 8."
],
"heading": "Largeur"
},
"paramHeight": {
"heading": "Hauteur",
"paragraphs": [
"Hauteur de l'image générée. Doit être un multiple de 8."
]
},
"paramHrf": {
"heading": "Activer la correction haute résolution",
"paragraphs": [
"Générez des images de haute qualité à une résolution plus grande que celle qui est optimale pour le modèle. Cela est généralement utilisé pour prévenir la duplication dans l'image générée."
]
},
"patchmatchDownScaleSize": {
"paragraphs": [
"Intensité du sous-échantillonnage qui se produit avant le remplissage.",
"Un sous-échantillonnage plus élevé améliorera les performances et réduira la qualité."
],
"heading": "Sous-échantillonnage"
},
"paramAspect": {
"paragraphs": [
"Rapport hauteur/largeur de l'image générée. Changer le rapport mettra à jour la largeur et la hauteur en conséquence.",
"\"Optimiser\" définira la largeur et la hauteur aux dimensions optimales pour le modèle choisi."
],
"heading": "Aspect"
},
"refinerScheduler": {
"heading": "Planificateur"
},
"refinerPositiveAestheticScore": {
"paragraphs": [
"Ajoute un biais envers les générations pour qu'elles soient plus similaires aux images ayant un score esthétique élevé, en fonction des données d'entraînement."
],
"heading": "Score Esthétique Positif"
},
"refinerNegativeAestheticScore": {
"heading": "Score Esthétique Négatif",
"paragraphs": [
"Ajoute un biais envers les générations pour qu'elles soient plus similaires aux images ayant un faible score esthétique, en fonction des données d'entraînement."
]
}
},
"dynamicPrompts": {
@@ -800,7 +963,11 @@
"generationMode": "Mode Génération",
"height": "Hauteur",
"createdBy": "Créé par",
"strength": "Force d'image à image"
"strength": "Force d'image à image",
"vae": "VAE",
"noRecallParameters": "Aucun paramètre à rappeler trouvé.",
"cfgRescaleMultiplier": "$t(parameters.cfgRescaleMultiplier)",
"recallParameters": "Rappeler les paramètres"
},
"sdxl": {
"freePromptStyle": "Écriture de Prompt manuelle",
@@ -830,9 +997,9 @@
"downloadWorkflow": "Télécharger processus en JSON",
"loadWorkflow": "Charger le processus",
"reloadNodeTemplates": "Recharger les modèles de nœuds",
"animatedEdges": "Bords animés",
"animatedEdges": "Connexions animées",
"cannotConnectToSelf": "Impossible de se connecter à soi-même",
"edge": "Bord",
"edge": "Connexion",
"workflowAuthor": "Auteur",
"enum": "Énumération",
"integer": "Entier",
@@ -844,8 +1011,8 @@
"version": "Version",
"boolean": "Booléens",
"executionStateCompleted": "Terminé",
"colorCodeEdges": "Code de couleur des bords",
"colorCodeEdgesHelp": "Code couleur des arêtes en fonction de leurs champs connectés.",
"colorCodeEdges": "Code de couleur des connexions",
"colorCodeEdgesHelp": "Code couleur des connexions en fonction de leurs champs connectés.",
"currentImage": "Image actuelle",
"noFieldsLinearview": "Aucun champ ajouté à la vue linéaire",
"float": "Flottant",
@@ -874,7 +1041,7 @@
"unknownField": "Champ inconnu",
"workflowNotes": "Notes",
"workflowTags": "Tags",
"animatedEdgesHelp": "Animer les arêtes sélectionnées et les arêtes connectées aux nœuds sélectionnés.",
"animatedEdgesHelp": "Animer les connexions sélectionnées et les connexions associées aux nœuds sélectionnés.",
"nodeTemplate": "Modèle de nœud",
"fieldTypesMustMatch": "Les types de champs doivent correspondre.",
"fullyContainNodes": "Contient complètement les nœuds à sélectionner",
@@ -897,13 +1064,86 @@
"workflowName": "Nom",
"snapToGridHelp": "Aligner les nœuds sur la grille lorsqu'ils sont déplacés.",
"unableToValidateWorkflow": "Impossible de valider le processus",
"validateConnections": "Valider les connexions et le graphique"
"validateConnections": "Valider les connexions et le graphique",
"unableToUpdateNodes_one": "Impossible de mettre à jour {{count}} nœud",
"unableToUpdateNodes_many": "Impossible de mettre à jour {{count}} nœuds",
"unableToUpdateNodes_other": "Impossible de mettre à jour {{count}} nœuds",
"cannotDuplicateConnection": "Impossible de créer des connexions en double.",
"resetToDefaultValue": "Réinitialiser à la valeur par défaut",
"unknownNodeType": "Type de nœud inconnu",
"unknownInput": "Entrée inconnue : {{name}}",
"prototypeDesc": "Cette invocation est un prototype. Elle peut subir des modifications majeures lors des mises à jour de l'application et peut être supprimée à tout moment.",
"nodePack": "Paquet de nœuds",
"sourceNodeDoesNotExist": "Connexion invalide : le nœud source/de sortie {{node}} n'existe pas.",
"sourceNodeFieldDoesNotExist": "Connexion invalide : {{node}}.{{field}} n'existe pas",
"unableToGetWorkflowVersion": "Impossible d'obtenir la version du schéma de processus",
"newWorkflowDesc2": "Votre processus actuel comporte des modifications non enregistrées.",
"deletedInvalidEdge": "Connexion invalide supprimée : {{source}} -> {{target}}",
"targetNodeDoesNotExist": "Connexion invalide : le nœud cible/entrée {{node}} n'existe pas.",
"targetNodeFieldDoesNotExist": "Connexion invalide : le champ {{node}}.{{field}} n'existe pas.",
"nodeVersion": "Version du nœud",
"clearWorkflowDesc2": "Votre processus actuel comporte des modifications non enregistrées.",
"clearWorkflow": "Effacer le Processus",
"clearWorkflowDesc": "Effacer ce processus et en commencer un nouveau?",
"unsupportedArrayItemType": "type d'élément de tableau non pris en charge \"{{type}}\"",
"addLinearView": "Ajouter à la vue linéaire",
"collectionOrScalarFieldType": "{{name}} (Unique ou Collection)",
"unableToExtractEnumOptions": "impossible d'extraire les options d'énumération",
"unsupportedAnyOfLength": "trop de membres dans l'union ({{count}})",
"ipAdapter": "IP-Adapter",
"viewMode": "Utiliser en vue linéaire",
"collectionFieldType": "{{name}} (Collection)",
"newWorkflow": "Nouveau processus",
"reorderLinearView": "Réorganiser la vue linéaire",
"unknownOutput": "Sortie inconnue : {{name}}",
"outputFieldTypeParseError": "Impossible d'analyser le type du champ de sortie {{node}}.{{field}} ({{message}})",
"unsupportedMismatchedUnion": "type CollectionOrScalar non concordant avec les types de base {{firstType}} et {{secondType}}",
"unableToParseFieldType": "impossible d'analyser le type de champ",
"betaDesc": "Cette invocation est en version bêta. Tant qu'elle n'est pas stable, elle peut avoir des changements majeurs lors des mises à jour de l'application. Nous prévoyons de soutenir cette invocation à long terme.",
"unknownFieldType": "$t(nodes.unknownField) type : {{type}}",
"inputFieldTypeParseError": "Impossible d'analyser le type du champ d'entrée {{node}}.{{field}} ({{message}})",
"unableToExtractSchemaNameFromRef": "impossible d'extraire le nom du schéma à partir de la référence",
"editMode": "Modifier dans l'éditeur de processus",
"unknownErrorValidatingWorkflow": "Erreur inconnue lors de la validation du processus.",
"updateAllNodes": "Mettre à jour les nœuds",
"allNodesUpdated": "Tous les nœuds mis à jour",
"newWorkflowDesc": "Créer un nouveau processus?"
},
"models": {
"noMatchingModels": "Aucun modèle correspondant",
"noModelsAvailable": "Aucun modèle disponible",
"loading": "Chargement",
"selectModel": "Sélectionner un modèle",
"noMatchingLoRAs": "Aucun LoRA correspondant"
"noMatchingLoRAs": "Aucun LoRA correspondant",
"lora": "LoRA",
"noRefinerModelsInstalled": "Aucun modèle SDXL Refiner installé",
"noLoRAsInstalled": "Aucun LoRA installé",
"addLora": "Ajouter LoRA",
"defaultVAE": "VAE par défaut"
},
"workflows": {
"workflowLibrary": "Bibliothèque",
"loading": "Chargement des processus",
"searchWorkflows": "Rechercher des processus",
"workflowCleared": "Processus effacé",
"noDescription": "Aucune description",
"deleteWorkflow": "Supprimer le processus",
"openWorkflow": "Ouvrir le processus",
"uploadWorkflow": "Charger à partir du fichier",
"workflowName": "Nom du processus",
"unnamedWorkflow": "Processus sans nom",
"saveWorkflowAs": "Enregistrer le processus sous",
"workflows": "Processus",
"savingWorkflow": "Enregistrement du processus...",
"saveWorkflowToProject": "Enregistrer le processus dans le projet",
"downloadWorkflow": "Enregistrer dans le fichier",
"saveWorkflow": "Enregistrer le processus",
"problemSavingWorkflow": "Problème de sauvegarde du processus",
"workflowEditorMenu": "Menu de l'Éditeur de Processus",
"newWorkflowCreated": "Nouveau processus créé",
"clearWorkflowSearchFilter": "Réinitialiser le filtre de recherche de processus",
"problemLoading": "Problème de chargement des processus",
"workflowSaved": "Processus enregistré",
"noWorkflows": "Aucun processus"
}
}


@@ -65,7 +65,7 @@
"blue": "Blu",
"alpha": "Alfa",
"copy": "Copia",
"on": "Attivato",
"on": "Acceso",
"checkpoint": "Checkpoint",
"safetensors": "Safetensors",
"ai": "ia",
@@ -85,7 +85,7 @@
"openInViewer": "Apri nel visualizzatore",
"apply": "Applica",
"loadingImage": "Caricamento immagine",
"off": "Disattivato",
"off": "Spento",
"edit": "Modifica",
"placeholderSelectAModel": "Seleziona un modello",
"reset": "Reimposta",
@@ -155,7 +155,9 @@
"move": "Sposta",
"gallery": "Galleria",
"openViewer": "Apri visualizzatore",
"closeViewer": "Chiudi visualizzatore"
"closeViewer": "Chiudi visualizzatore",
"imagesTab": "Immagini create e salvate in Invoke.",
"assetsTab": "File che hai caricato per usarli nei tuoi progetti."
},
"hotkeys": {
"searchHotkeys": "Cerca tasti di scelta rapida",
@@ -321,6 +323,22 @@
"selectViewTool": {
"title": "Strumento Visualizza",
"desc": "Seleziona lo strumento Visualizza."
},
"applyFilter": {
"title": "Applica filtro",
"desc": "Applica il filtro in sospeso al livello selezionato."
},
"cancelFilter": {
"title": "Annulla filtro",
"desc": "Annulla il filtro in sospeso."
},
"cancelTransform": {
"desc": "Annulla la trasformazione in sospeso.",
"title": "Annulla Trasforma"
},
"applyTransform": {
"title": "Applica trasformazione",
"desc": "Applica la trasformazione in sospeso al livello selezionato."
}
},
"workflows": {
@@ -574,8 +592,8 @@
"scale": "Scala",
"imageFit": "Adatta l'immagine iniziale alle dimensioni di output",
"scaleBeforeProcessing": "Scala prima dell'elaborazione",
"scaledWidth": "Larghezza ridimensionata",
"scaledHeight": "Altezza ridimensionata",
"scaledWidth": "Larghezza scalata",
"scaledHeight": "Altezza scalata",
"infillMethod": "Metodo di riempimento",
"tileSize": "Dimensione piastrella",
"downloadImage": "Scarica l'immagine",
@@ -617,7 +635,11 @@
"ipAdapterIncompatibleBaseModel": "Il modello base dell'adattatore IP non è compatibile",
"ipAdapterNoImageSelected": "Nessuna immagine dell'adattatore IP selezionata",
"rgNoPromptsOrIPAdapters": "Nessun prompt o adattatore IP",
"rgNoRegion": "Nessuna regione selezionata"
"rgNoRegion": "Nessuna regione selezionata",
"t2iAdapterIncompatibleBboxWidth": "$t(parameters.invoke.layer.t2iAdapterRequiresDimensionsToBeMultipleOf) {{multiple}}, larghezza riquadro è {{width}}",
"t2iAdapterIncompatibleBboxHeight": "$t(parameters.invoke.layer.t2iAdapterRequiresDimensionsToBeMultipleOf) {{multiple}}, altezza riquadro è {{height}}",
"t2iAdapterIncompatibleScaledBboxWidth": "$t(parameters.invoke.layer.t2iAdapterRequiresDimensionsToBeMultipleOf) {{multiple}}, larghezza del riquadro scalato è {{width}}",
"t2iAdapterIncompatibleScaledBboxHeight": "$t(parameters.invoke.layer.t2iAdapterRequiresDimensionsToBeMultipleOf) {{multiple}}, altezza del riquadro scalato è {{height}}"
},
"fluxModelIncompatibleBboxHeight": "$t(parameters.invoke.fluxRequiresDimensionsToBeMultipleOf16), altezza riquadro è {{height}}",
"fluxModelIncompatibleBboxWidth": "$t(parameters.invoke.fluxRequiresDimensionsToBeMultipleOf16), larghezza riquadro è {{width}}",
@@ -625,7 +647,11 @@
"fluxModelIncompatibleScaledBboxHeight": "$t(parameters.invoke.fluxRequiresDimensionsToBeMultipleOf16), altezza del riquadro scalato è {{height}}",
"noT5EncoderModelSelected": "Nessun modello di encoder T5 selezionato per la generazione con FLUX",
"noCLIPEmbedModelSelected": "Nessun modello CLIP Embed selezionato per la generazione con FLUX",
"noFLUXVAEModelSelected": "Nessun modello VAE selezionato per la generazione con FLUX"
"noFLUXVAEModelSelected": "Nessun modello VAE selezionato per la generazione con FLUX",
"canvasIsTransforming": "La tela sta trasformando",
"canvasIsRasterizing": "La tela sta rasterizzando",
"canvasIsCompositing": "La tela è in fase di composizione",
"canvasIsFiltering": "La tela sta filtrando"
},
"useCpuNoise": "Usa la CPU per generare rumore",
"iterations": "Iterazioni",
@@ -644,7 +670,12 @@
"processImage": "Elabora Immagine",
"sendToUpscale": "Invia a Amplia",
"postProcessing": "Post-elaborazione (Shift + U)",
"guidance": "Guida"
"guidance": "Guida",
"gaussianBlur": "Sfocatura Gaussiana",
"boxBlur": "Sfocatura Box",
"staged": "Maschera espansa",
"optimizedImageToImage": "Immagine-a-immagine ottimizzata",
"sendToCanvas": "Invia alla Tela"
},
"settings": {
"models": "Modelli",
@@ -678,7 +709,8 @@
"enableInformationalPopovers": "Abilita testo informativo a comparsa",
"reloadingIn": "Ricaricando in",
"informationalPopoversDisabled": "Testo informativo a comparsa disabilitato",
"informationalPopoversDisabledDesc": "I testi informativi a comparsa sono disabilitati. Attivali nelle impostazioni."
"informationalPopoversDisabledDesc": "I testi informativi a comparsa sono disabilitati. Attivali nelle impostazioni.",
"confirmOnNewSession": "Conferma su nuova sessione"
},
"toast": {
"uploadFailed": "Caricamento fallito",
@@ -721,7 +753,20 @@
"somethingWentWrong": "Qualcosa è andato storto",
"outOfMemoryErrorDesc": "Le impostazioni della generazione attuale superano la capacità del sistema. Modifica le impostazioni e riprova.",
"importFailed": "Importazione non riuscita",
"importSuccessful": "Importazione riuscita"
"importSuccessful": "Importazione riuscita",
"layerSavedToAssets": "Livello salvato nelle risorse",
"problemSavingLayer": "Impossibile salvare il livello",
"unableToLoadImage": "Impossibile caricare l'immagine",
"problemCopyingLayer": "Impossibile copiare il livello",
"sentToCanvas": "Inviato alla Tela",
"sentToUpscale": "Inviato ad Amplia",
"unableToLoadStylePreset": "Impossibile caricare lo stile predefinito",
"stylePresetLoaded": "Stile predefinito caricato",
"unableToLoadImageMetadata": "Impossibile caricare i metadati dell'immagine",
"imageSaved": "Immagine salvata",
"imageSavingFailed": "Salvataggio dell'immagine non riuscito",
"layerCopiedToClipboard": "Livello copiato negli appunti",
"imageNotLoadedDesc": "Impossibile trovare l'immagine"
},
"accessibility": {
"invokeProgressBar": "Barra di avanzamento generazione",
@@ -734,7 +779,9 @@
"resetUI": "$t(accessibility.reset) l'Interfaccia Utente",
"createIssue": "Segnala un problema",
"about": "Informazioni",
"submitSupportTicket": "Invia ticket di supporto"
"submitSupportTicket": "Invia ticket di supporto",
"toggleLeftPanel": "Attiva/disattiva il pannello sinistro (T)",
"toggleRightPanel": "Attiva/disattiva il pannello destro (G)"
},
"nodes": {
"zoomOutNodes": "Rimpicciolire",
@@ -854,7 +901,7 @@
"clearWorkflowDesc": "Cancellare questo flusso di lavoro e avviarne uno nuovo?",
"clearWorkflow": "Cancella il flusso di lavoro",
"clearWorkflowDesc2": "Il tuo flusso di lavoro attuale presenta modifiche non salvate.",
"viewMode": "Utilizzare nella vista lineare",
"viewMode": "Usa la vista lineare",
"reorderLinearView": "Riordina la vista lineare",
"editMode": "Modifica nell'editor del flusso di lavoro",
"resetToDefaultValue": "Ripristina il valore predefinito",
@@ -872,7 +919,10 @@
"imageAccessError": "Impossibile trovare l'immagine {{image_name}}, ripristino ai valori predefiniti",
"boardAccessError": "Impossibile trovare la bacheca {{board_id}}, ripristino ai valori predefiniti",
"modelAccessError": "Impossibile trovare il modello {{key}}, ripristino ai valori predefiniti",
"saveToGallery": "Salva nella Galleria"
"saveToGallery": "Salva nella Galleria",
"noMatchingWorkflows": "Nessun flusso di lavoro corrispondente",
"noWorkflows": "Nessun flusso di lavoro",
"workflowHelpText": "Hai bisogno di aiuto? Consulta la nostra guida <LinkComponent>Introduzione ai flussi di lavoro</LinkComponent>"
},
"boards": {
"autoAddBoard": "Aggiungi automaticamente bacheca",
@@ -916,7 +966,8 @@
"noBoards": "Nessuna bacheca {{boardType}}",
"hideBoards": "Nascondi bacheche",
"viewBoards": "Visualizza bacheche",
"deletedPrivateBoardsCannotbeRestored": "Le bacheche cancellate non possono essere ripristinate. Selezionando 'Cancella solo bacheca', le immagini verranno spostate nella bacheca \"Non categorizzato\" privata dell'autore dell'immagine."
"deletedPrivateBoardsCannotbeRestored": "Le bacheche cancellate non possono essere ripristinate. Selezionando 'Cancella solo bacheca', le immagini verranno spostate nella bacheca \"Non categorizzato\" privata dell'autore dell'immagine.",
"updateBoardError": "Errore durante l'aggiornamento della bacheca"
},
"queue": {
"queueFront": "Aggiungi all'inizio della coda",
@@ -1401,6 +1452,25 @@
"paragraphs": [
"La struttura determina quanto l'immagine finale rispecchierà il layout dell'originale. Una struttura bassa permette cambiamenti significativi, mentre una struttura alta conserva la composizione e il layout originali."
]
},
"fluxDevLicense": {
"heading": "Licenza non commerciale",
"paragraphs": [
"I modelli FLUX.1 [dev] sono concessi in licenza con la licenza non commerciale FLUX [dev]. Per utilizzare questo tipo di modello per scopi commerciali in Invoke, visita il nostro sito Web per saperne di più."
]
},
"optimizedDenoising": {
"heading": "Immagine-a-immagine ottimizzata",
"paragraphs": [
"Abilita 'Immagine-a-immagine ottimizzata' per una scala di riduzione del rumore più graduale per le trasformazioni da immagine a immagine e di inpainting con modelli Flux. Questa impostazione migliora la capacità di controllare la quantità di modifica applicata a un'immagine, ma può essere disattivata se preferisci usare la scala di riduzione rumore standard. Questa impostazione è ancora in fase di messa a punto ed è in stato beta."
]
},
"paramGuidance": {
"heading": "Guida",
"paragraphs": [
"Controlla quanto il prompt influenza il processo di generazione.",
"Valori di guida elevati possono causare sovrasaturazione e una guida elevata o bassa può causare risultati di generazione distorti. La guida si applica solo ai modelli FLUX DEV."
]
}
},
"sdxl": {
@@ -1494,7 +1564,13 @@
"convertGraph": "Converti grafico",
"loadWorkflow": "$t(common.load) Flusso di lavoro",
"autoLayout": "Disposizione automatica",
"loadFromGraph": "Carica il flusso di lavoro dal grafico"
"loadFromGraph": "Carica il flusso di lavoro dal grafico",
"userWorkflows": "Flussi di lavoro utente",
"projectWorkflows": "Flussi di lavoro del progetto",
"defaultWorkflows": "Flussi di lavoro predefiniti",
"uploadAndSaveWorkflow": "Carica nella libreria",
"chooseWorkflowFromLibrary": "Scegli il flusso di lavoro dalla libreria",
"deleteWorkflow2": "Vuoi davvero eliminare questo flusso di lavoro? Questa operazione non può essere annullata."
},
"accordions": {
"compositing": {
@@ -1533,7 +1609,303 @@
"addPositivePrompt": "Aggiungi $t(controlLayers.prompt)",
"addNegativePrompt": "Aggiungi $t(controlLayers.negativePrompt)",
"regionalGuidance": "Guida regionale",
"opacity": "Opacità"
"opacity": "Opacità",
"mergeVisible": "Fondi il visibile",
"mergeVisibleOk": "Livelli visibili uniti",
"deleteReferenceImage": "Elimina l'immagine di riferimento",
"referenceImage": "Immagine di riferimento",
"fitBboxToLayers": "Adatta il riquadro di delimitazione ai livelli",
"mergeVisibleError": "Errore durante l'unione dei livelli visibili",
"regionalReferenceImage": "Immagine di riferimento Regionale",
"newLayerFromImage": "Nuovo livello da immagine",
"newCanvasFromImage": "Nuova tela da immagine",
"globalReferenceImage": "Immagine di riferimento Globale",
"copyToClipboard": "Copia negli appunti",
"sendingToCanvas": "Effettua le generazioni nella Tela",
"clearHistory": "Cancella la cronologia",
"inpaintMask": "Maschera Inpaint",
"sendToGallery": "Invia alla Galleria",
"controlLayer": "Livello di Controllo",
"rasterLayer_withCount_one": "$t(controlLayers.rasterLayer)",
"rasterLayer_withCount_many": "Livelli Raster",
"rasterLayer_withCount_other": "Livelli Raster",
"controlLayer_withCount_one": "$t(controlLayers.controlLayer)",
"controlLayer_withCount_many": "Livelli di controllo",
"controlLayer_withCount_other": "Livelli di controllo",
"clipToBbox": "Ritaglia i tratti al riquadro",
"duplicate": "Duplica",
"width": "Larghezza",
"addControlLayer": "Aggiungi $t(controlLayers.controlLayer)",
"addInpaintMask": "Aggiungi $t(controlLayers.inpaintMask)",
"addRegionalGuidance": "Aggiungi $t(controlLayers.regionalGuidance)",
"sendToCanvasDesc": "Premendo Invoke il lavoro in corso viene visualizzato sulla tela.",
"addRasterLayer": "Aggiungi $t(controlLayers.rasterLayer)",
"clearCaches": "Svuota le cache",
"regionIsEmpty": "La regione selezionata è vuota",
"recalculateRects": "Ricalcola rettangoli",
"removeBookmark": "Rimuovi segnalibro",
"saveCanvasToGallery": "Salva la tela nella Galleria",
"regional": "Regionale",
"global": "Globale",
"canvas": "Tela",
"bookmark": "Segnalibro per cambio rapido",
"newRegionalReferenceImageOk": "Immagine di riferimento regionale creata",
"newRegionalReferenceImageError": "Problema nella creazione dell'immagine di riferimento regionale",
"newControlLayerOk": "Livello di controllo creato",
"bboxOverlay": "Mostra sovrapposizione riquadro",
"resetCanvas": "Reimposta la tela",
"outputOnlyMaskedRegions": "Solo regioni mascherate in uscita",
"enableAutoNegative": "Abilita Auto Negativo",
"disableAutoNegative": "Disabilita Auto Negativo",
"showHUD": "Mostra HUD",
"maskFill": "Riempimento maschera",
"addReferenceImage": "Aggiungi $t(controlLayers.referenceImage)",
"addGlobalReferenceImage": "Aggiungi $t(controlLayers.globalReferenceImage)",
"sendingToGallery": "Inviare generazioni alla Galleria",
"sendToGalleryDesc": "Premendo Invoke viene generata e salvata un'immagine unica nella tua galleria.",
"sendToCanvas": "Invia alla Tela",
"viewProgressInViewer": "Visualizza i progressi e i risultati nel <Btn>Visualizzatore immagini</Btn>.",
"viewProgressOnCanvas": "Visualizza i progressi e i risultati nella <Btn>Tela</Btn>.",
"saveBboxToGallery": "Salva il riquadro di delimitazione nella Galleria",
"cropLayerToBbox": "Ritaglia il livello al riquadro di delimitazione",
"savedToGalleryError": "Errore durante il salvataggio nella galleria",
"rasterLayer": "Livello Raster",
"regionalGuidance_withCount_one": "$t(controlLayers.regionalGuidance)",
"regionalGuidance_withCount_many": "Guide regionali",
"regionalGuidance_withCount_other": "Guide regionali",
"inpaintMask_withCount_one": "$t(controlLayers.inpaintMask)",
"inpaintMask_withCount_many": "Maschere Inpaint",
"inpaintMask_withCount_other": "Maschere Inpaint",
"savedToGalleryOk": "Salvato nella Galleria",
"newGlobalReferenceImageOk": "Immagine di riferimento globale creata",
"newGlobalReferenceImageError": "Problema nella creazione dell'immagine di riferimento globale",
"newControlLayerError": "Problema nella creazione del livello di controllo",
"newRasterLayerOk": "Livello raster creato",
"newRasterLayerError": "Problema nella creazione del livello raster",
"saveLayerToAssets": "Salva il livello nelle Risorse",
"pullBboxIntoLayerError": "Problema nel caricare il riquadro nel livello",
"pullBboxIntoReferenceImageOk": "Contenuto del riquadro inserito nell'immagine di riferimento",
"pullBboxIntoLayerOk": "Riquadro caricato nel livello",
"pullBboxIntoReferenceImageError": "Problema nell'inserimento del contenuto del riquadro nell'immagine di riferimento",
"globalReferenceImage_withCount_one": "$t(controlLayers.globalReferenceImage)",
"globalReferenceImage_withCount_many": "Immagini di riferimento Globali",
"globalReferenceImage_withCount_other": "Immagini di riferimento Globali",
"controlMode": {
"balanced": "Bilanciato",
"controlMode": "Modalità di controllo",
"prompt": "Prompt",
"control": "Controllo",
"megaControl": "Mega Controllo"
},
"negativePrompt": "Prompt Negativo",
"prompt": "Prompt Positivo",
"beginEndStepPercentShort": "Inizio/Fine %",
"stagingOnCanvas": "Genera immagini nella",
"ipAdapterMethod": {
"full": "Completo",
"style": "Solo Stile",
"composition": "Solo Composizione",
"ipAdapterMethod": "Metodo Adattatore IP"
},
"showingType": "Mostrare {{type}}",
"dynamicGrid": "Griglia dinamica",
"tool": {
"view": "Muovi",
"colorPicker": "Selettore Colore",
"rectangle": "Rettangolo",
"bbox": "Riquadro di delimitazione",
"move": "Sposta",
"brush": "Pennello",
"eraser": "Cancellino"
},
"filter": {
"apply": "Applica",
"reset": "Reimposta",
"process": "Elabora",
"cancel": "Annulla",
"autoProcess": "Elaborazione automatica",
"filterType": "Tipo Filtro",
"filter": "Filtro",
"filters": "Filtri",
"mlsd_detection": {
"score_threshold": "Soglia di punteggio",
"distance_threshold": "Soglia di distanza",
"description": "Genera una mappa dei segmenti di linea dal livello selezionato utilizzando il modello di rilevamento dei segmenti di linea MLSD.",
"label": "Rilevamento segmenti di linea"
},
"content_shuffle": {
"label": "Mescola contenuto",
"scale_factor": "Fattore di scala",
"description": "Mescola il contenuto del livello selezionato, in modo simile all'effetto \"liquefa\"."
},
"mediapipe_face_detection": {
"min_confidence": "Confidenza minima",
"label": "Rilevamento del volto MediaPipe",
"max_faces": "Max volti",
"description": "Rileva i volti nel livello selezionato utilizzando il modello di rilevamento dei volti MediaPipe."
},
"dw_openpose_detection": {
"draw_face": "Disegna il volto",
"description": "Rileva le pose umane nel livello selezionato utilizzando il modello DW Openpose.",
"label": "Rilevamento DW Openpose",
"draw_hands": "Disegna le mani",
"draw_body": "Disegna il corpo"
},
"normal_map": {
"description": "Genera una mappa delle normali dal livello selezionato.",
"label": "Mappa delle normali"
},
"lineart_edge_detection": {
"label": "Rilevamento bordi Lineart",
"coarse": "Grossolano",
"description": "Genera una mappa dei bordi dal livello selezionato utilizzando il modello di rilevamento dei bordi Lineart."
},
"depth_anything_depth_estimation": {
"model_size_small": "Piccolo",
"model_size_small_v2": "Piccolo v2",
"model_size": "Dimensioni modello",
"model_size_large": "Grande",
"model_size_base": "Base",
"description": "Genera una mappa di profondità dal livello selezionato utilizzando un modello Depth Anything."
},
"color_map": {
"label": "Mappa colore",
"description": "Crea una mappa dei colori dal livello selezionato.",
"tile_size": "Dimens. Piastrella"
},
"canny_edge_detection": {
"high_threshold": "Soglia superiore",
"low_threshold": "Soglia inferiore",
"description": "Genera una mappa dei bordi dal livello selezionato utilizzando l'algoritmo di rilevamento dei bordi Canny.",
"label": "Rilevamento bordi Canny"
},
"spandrel_filter": {
"scale": "Scala di destinazione",
"autoScaleDesc": "Il modello selezionato verrà eseguito fino al raggiungimento della scala di destinazione.",
"description": "Esegue un modello immagine-a-immagine sul livello selezionato.",
"label": "Modello Immagine-a-Immagine",
"model": "Modello",
"autoScale": "Auto Scala"
},
"pidi_edge_detection": {
"quantize_edges": "Quantizza i bordi",
"scribble": "Scarabocchio",
"description": "Genera una mappa dei bordi dal livello selezionato utilizzando il modello di rilevamento dei bordi PiDiNet.",
"label": "Rilevamento bordi PiDiNet"
},
"hed_edge_detection": {
"label": "Rilevamento bordi HED",
"description": "Genera una mappa dei bordi dal livello selezionato utilizzando il modello di rilevamento dei bordi HED.",
"scribble": "Scarabocchio"
},
"lineart_anime_edge_detection": {
"description": "Genera una mappa dei bordi dal livello selezionato utilizzando il modello di rilevamento dei bordi Lineart Anime.",
"label": "Rilevamento bordi Lineart Anime"
}
},
"controlLayers_withCount_hidden": "Livelli di controllo ({{count}} nascosti)",
"regionalGuidance_withCount_hidden": "Guida regionale ({{count}} nascosti)",
"fill": {
"grid": "Griglia",
"crosshatch": "Tratteggio incrociato",
"fillColor": "Colore di riempimento",
"fillStyle": "Stile riempimento",
"solid": "Solido",
"vertical": "Verticale",
"horizontal": "Orizzontale",
"diagonal": "Diagonale"
},
"rasterLayers_withCount_hidden": "Livelli raster ({{count}} nascosti)",
"inpaintMasks_withCount_hidden": "Maschere Inpaint ({{count}} nascoste)",
"regionalGuidance_withCount_visible": "Guide regionali ({{count}})",
"locked": "Bloccato",
"hidingType": "Nascondere {{type}}",
"logDebugInfo": "Registro Info Debug",
"inpaintMasks_withCount_visible": "Maschere Inpaint ({{count}})",
"layer_one": "Livello",
"layer_many": "Livelli",
"layer_other": "Livelli",
"disableTransparencyEffect": "Disabilita l'effetto trasparenza",
"controlLayers_withCount_visible": "Livelli di controllo ({{count}})",
"transparency": "Trasparenza",
"newCanvasSessionDesc": "Questo cancellerà la tela e tutte le impostazioni, eccetto la selezione del modello. Le generazioni saranno effettuate sulla tela.",
"rasterLayers_withCount_visible": "Livelli raster ({{count}})",
"globalReferenceImages_withCount_visible": "Immagini di riferimento Globali ({{count}})",
"globalReferenceImages_withCount_hidden": "Immagini di riferimento globali ({{count}} nascoste)",
"layer_withCount_one": "Livello ({{count}})",
"layer_withCount_many": "Livelli ({{count}})",
"layer_withCount_other": "Livelli ({{count}})",
"convertToControlLayer": "Converti in livello di controllo",
"convertToRasterLayer": "Converti in livello raster",
"unlocked": "Sbloccato",
"enableTransparencyEffect": "Abilita l'effetto trasparenza",
"replaceLayer": "Sostituisci livello",
"pullBboxIntoLayer": "Carica l'immagine delimitata nel riquadro",
"pullBboxIntoReferenceImage": "Carica l'immagine delimitata nel riquadro",
"showProgressOnCanvas": "Mostra i progressi sulla Tela",
"weight": "Peso",
"newGallerySession": "Nuova sessione Galleria",
"newGallerySessionDesc": "Questo cancellerà la tela e tutte le impostazioni, eccetto la selezione del modello. Le generazioni saranno inviate alla galleria.",
"newCanvasSession": "Nuova sessione Tela",
"deleteSelected": "Elimina selezione",
"settings": {
"isolatedFilteringPreview": "Anteprima del filtraggio isolata",
"isolatedStagingPreview": "Anteprima di generazione isolata",
"isolatedTransformingPreview": "Anteprima di trasformazione isolata",
"isolatedPreview": "Anteprima isolata",
"invertBrushSizeScrollDirection": "Inverti scorrimento per dimensione pennello",
"snapToGrid": {
"label": "Aggancia alla griglia",
"on": "Acceso",
"off": "Spento"
},
"pressureSensitivity": "Sensibilità alla pressione",
"preserveMask": {
"alert": "Preservare la regione mascherata",
"label": "Preserva la regione mascherata"
}
},
"transform": {
"reset": "Reimposta",
"fitToBbox": "Adatta al Riquadro",
"transform": "Trasforma",
"apply": "Applica",
"cancel": "Annulla"
},
"stagingArea": {
"next": "Successiva",
"discard": "Scarta",
"discardAll": "Scarta tutto",
"accept": "Accetta",
"saveToGallery": "Salva nella Galleria",
"previous": "Precedente",
"showResultsOn": "Risultati visualizzati",
"showResultsOff": "Risultati nascosti"
},
"HUD": {
"bbox": "Riquadro di delimitazione",
"entityStatus": {
"isHidden": "{{title}} è nascosto",
"isLocked": "{{title}} è bloccato",
"isTransforming": "{{title}} sta trasformando",
"isFiltering": "{{title}} sta filtrando",
"isEmpty": "{{title}} è vuoto",
"isDisabled": "{{title}} è disabilitato"
},
"scaledBbox": "Riquadro scalato"
},
"canvasContextMenu": {
"newControlLayer": "Nuovo Livello di Controllo",
"newRegionalReferenceImage": "Nuova immagine di riferimento Regionale",
"newGlobalReferenceImage": "Nuova immagine di riferimento Globale",
"bboxGroup": "Crea dal riquadro di delimitazione",
"saveBboxToGallery": "Salva il riquadro nella Galleria",
"cropCanvasToBbox": "Ritaglia la Tela al riquadro",
"canvasGroup": "Tela",
"newRasterLayer": "Nuovo Livello Raster",
"saveCanvasToGallery": "Salva la Tela nella Galleria",
"saveToGalleryGroup": "Salva nella Galleria"
}
},
"ui": {
"tabs": {
@@ -1545,7 +1917,8 @@
"modelsTab": "$t(ui.tabs.models) $t(common.tab)",
"queue": "Coda",
"upscaling": "Amplia",
"upscalingTab": "$t(ui.tabs.upscaling) $t(common.tab)"
"upscalingTab": "$t(ui.tabs.upscaling) $t(common.tab)",
"gallery": "Galleria"
}
},
"upscaling": {
@@ -1615,5 +1988,45 @@
"noTemplates": "Nessun modello",
"acceptedColumnsKeys": "Colonne/chiavi accettate:",
"promptTemplateCleared": "Modello di prompt cancellato"
},
"newUserExperience": {
"gettingStartedSeries": "Desideri maggiori informazioni? Consulta la nostra <LinkComponent>Getting Started Series</LinkComponent> per suggerimenti su come sfruttare appieno il potenziale di Invoke Studio.",
"toGetStarted": "Per iniziare, inserisci un prompt nella casella e fai clic su <StrongComponent>Invoke</StrongComponent> per generare la tua prima immagine. Seleziona un modello di prompt per migliorare i risultati. Puoi scegliere di salvare le tue immagini direttamente nella <StrongComponent>Galleria</StrongComponent> o modificarle nella <StrongComponent>Tela</StrongComponent>."
},
"whatsNew": {
"canvasV2Announcement": {
"readReleaseNotes": "Leggi le Note di Rilascio",
"fluxSupport": "Supporto per la famiglia di modelli Flux",
"newCanvas": "Una nuova potente tela di controllo",
"watchReleaseVideo": "Guarda il video di rilascio",
"watchUiUpdatesOverview": "Guarda le novità dell'interfaccia",
"newLayerTypes": "Nuovi tipi di livello per un miglior controllo"
},
"whatsNewInInvoke": "Novità in Invoke"
},
"system": {
"logLevel": {
"info": "Info",
"warn": "Avviso",
"fatal": "Fatale",
"error": "Errore",
"debug": "Debug",
"trace": "Traccia",
"logLevel": "Livello di registro"
},
"logNamespaces": {
"workflows": "Flussi di lavoro",
"generation": "Generazione",
"canvas": "Tela",
"config": "Configurazione",
"models": "Modelli",
"gallery": "Galleria",
"queue": "Coda",
"events": "Eventi",
"system": "Sistema",
"metadata": "Metadati",
"logNamespaces": "Elementi del registro"
},
"enableLogging": "Abilita la registrazione"
}
}


@@ -93,7 +93,8 @@
"placeholderSelectAModel": "Выбрать модель",
"reset": "Сброс",
"none": "Ничего",
"new": "Новый"
"new": "Новый",
"ok": "Ok"
},
"gallery": {
"galleryImageSize": "Размер изображений",
@@ -157,7 +158,9 @@
"move": "Двигать",
"gallery": "Галерея",
"openViewer": "Открыть просмотрщик",
"closeViewer": "Закрыть просмотрщик"
"closeViewer": "Закрыть просмотрщик",
"imagesTab": "Изображения, созданные и сохраненные в Invoke.",
"assetsTab": "Файлы, которые вы загрузили для использования в своих проектах."
},
"hotkeys": {
"searchHotkeys": "Поиск горячих клавиш",
@@ -227,6 +230,118 @@
"selectBrushTool": {
"title": "Инструмент кисть",
"desc": "Выбирает кисть."
},
"selectBboxTool": {
"title": "Инструмент рамка",
"desc": "Выбрать инструмент «Ограничительная рамка»."
},
"incrementToolWidth": {
"desc": "Increment the brush or eraser tool width, whichever is selected.",
"title": "Increment Tool Width"
},
"selectColorPickerTool": {
"title": "Color Picker Tool",
"desc": "Select the color picker tool."
},
"prevEntity": {
"title": "Prev Layer",
"desc": "Select the previous layer in the list."
},
"filterSelected": {
"title": "Filter",
"desc": "Filter the selected layer. Only applies to Raster and Control layers."
},
"undo": {
"desc": "Отменяет последнее действие на холсте.",
"title": "Отменить"
},
"transformSelected": {
"title": "Transform",
"desc": "Transform the selected layer."
},
"setZoomTo400Percent": {
"title": "Zoom to 400%",
"desc": "Set the canvas zoom to 400%."
},
"setZoomTo200Percent": {
"title": "Zoom to 200%",
"desc": "Set the canvas zoom to 200%."
},
"deleteSelected": {
"desc": "Delete the selected layer.",
"title": "Delete Layer"
},
"resetSelected": {
"title": "Reset Layer",
"desc": "Reset the selected layer. Only applies to Inpaint Mask and Regional Guidance."
},
"redo": {
"desc": "Возвращает последнее отмененное действие.",
"title": "Вернуть"
},
"nextEntity": {
"title": "Next Layer",
"desc": "Select the next layer in the list."
},
"setFillToWhite": {
"title": "Set Color to White",
"desc": "Set the current tool color to white."
},
"applyFilter": {
"title": "Apply Filter",
"desc": "Apply the pending filter to the selected layer."
},
"cancelFilter": {
"title": "Cancel Filter",
"desc": "Cancel the pending filter."
},
"applyTransform": {
"desc": "Apply the pending transform to the selected layer.",
"title": "Apply Transform"
},
"cancelTransform": {
"title": "Cancel Transform",
"desc": "Cancel the pending transform."
},
"selectEraserTool": {
"title": "Eraser Tool",
"desc": "Select the eraser tool."
},
"fitLayersToCanvas": {
"desc": "Scale and position the view to fit all visible layers.",
"title": "Fit Layers to Canvas"
},
"decrementToolWidth": {
"title": "Decrement Tool Width",
"desc": "Decrement the brush or eraser tool width, whichever is selected."
},
"setZoomTo800Percent": {
"title": "Zoom to 800%",
"desc": "Set the canvas zoom to 800%."
},
"quickSwitch": {
"title": "Layer Quick Switch",
"desc": "Switch between the last two selected layers. If a layer is bookmarked, always switch between it and the last non-bookmarked layer."
},
"fitBboxToCanvas": {
"title": "Fit Bbox to Canvas",
"desc": "Scale and position the view to fit the bbox."
},
"setZoomTo100Percent": {
"title": "Zoom to 100%",
"desc": "Set the canvas zoom to 100%."
},
"selectMoveTool": {
"desc": "Select the move tool.",
"title": "Move Tool"
},
"selectRectTool": {
"title": "Rect Tool",
"desc": "Select the rect tool."
},
"selectViewTool": {
"title": "View Tool",
"desc": "Select the view tool."
}
},
"hotkeys": "Горячие клавиши",
@@ -236,11 +351,33 @@
"desc": "Отменить последнее действие в рабочем процессе."
},
"deleteSelection": {
"desc": "Удалить выделенные узлы и ребра."
"desc": "Удалить выделенные узлы и ребра.",
"title": "Delete"
},
"redo": {
"title": "Вернуть",
"desc": "Вернуть последнее действие в рабочем процессе."
},
"copySelection": {
"title": "Copy",
"desc": "Copy selected nodes and edges."
},
"pasteSelection": {
"title": "Paste",
"desc": "Paste copied nodes and edges."
},
"addNode": {
"desc": "Open the add node menu.",
"title": "Add Node"
},
"title": "Workflows",
"pasteSelectionWithEdges": {
"title": "Paste with Edges",
"desc": "Paste copied nodes, edges, and all edges connected to copied nodes."
},
"selectAll": {
"desc": "Select all nodes and edges.",
"title": "Select All"
}
},
"viewer": {
@@ -257,12 +394,84 @@
"title": "Восстановить все метаданные"
},
"swapImages": {
"desc": "Поменять местами сравниваемые изображения."
"desc": "Поменять местами сравниваемые изображения.",
"title": "Swap Comparison Images"
},
"title": "Просмотрщик изображений",
"toggleViewer": {
"title": "Открыть/закрыть просмотрщик",
"desc": "Показать или скрыть просмотрщик изображений. Доступно только на вкладке «Холст»."
},
"recallSeed": {
"title": "Recall Seed",
"desc": "Recall the seed for the current image."
},
"recallPrompts": {
"desc": "Recall the positive and negative prompts for the current image.",
"title": "Recall Prompts"
},
"remix": {
"title": "Remix",
"desc": "Recall all metadata except for the seed for the current image."
},
"useSize": {
"desc": "Use the current image's size as the bbox size.",
"title": "Use Size"
},
"runPostprocessing": {
"title": "Run Postprocessing",
"desc": "Run the selected postprocessing on the current image."
},
"toggleMetadata": {
"title": "Show/Hide Metadata",
"desc": "Show or hide the current image's metadata overlay."
}
},
"gallery": {
"galleryNavRightAlt": {
"desc": "Same as Navigate Right, but selects the compare image, opening compare mode if it isn't already open.",
"title": "Navigate Right (Compare Image)"
},
"galleryNavRight": {
"desc": "Navigate right in the gallery grid, selecting that image. If at the last image of the row, go to the next row. If at the last image of the page, go to the next page.",
"title": "Navigate Right"
},
"galleryNavUp": {
"desc": "Navigate up in the gallery grid, selecting that image. If at the top of the page, go to the previous page.",
"title": "Navigate Up"
},
"galleryNavDown": {
"title": "Navigate Down",
"desc": "Navigate down in the gallery grid, selecting that image. If at the bottom of the page, go to the next page."
},
"galleryNavLeft": {
"title": "Navigate Left",
"desc": "Navigate left in the gallery grid, selecting that image. If at the first image of the row, go to the previous row. If at the first image of the page, go to the previous page."
},
"galleryNavDownAlt": {
"title": "Navigate Down (Compare Image)",
"desc": "Same as Navigate Down, but selects the compare image, opening compare mode if it isn't already open."
},
"galleryNavLeftAlt": {
"desc": "Same as Navigate Left, but selects the compare image, opening compare mode if it isn't already open.",
"title": "Navigate Left (Compare Image)"
},
"clearSelection": {
"desc": "Clear the current selection, if any.",
"title": "Clear Selection"
},
"deleteSelection": {
"title": "Delete",
"desc": "Delete all selected images. By default, you will be prompted to confirm deletion. If the images are currently in use in the app, you will be warned."
},
"galleryNavUpAlt": {
"title": "Navigate Up (Compare Image)",
"desc": "Same as Navigate Up, but selects the compare image, opening compare mode if it isn't already open."
},
"title": "Gallery",
"selectAllOnPage": {
"title": "Select All On Page",
"desc": "Select all images on the current page."
}
}
},
@@ -372,7 +581,9 @@
"ipAdapters": "IP адаптеры",
"starterModelsInModelManager": "Стартовые модели можно найти в Менеджере моделей",
"learnMoreAboutSupportedModels": "Подробнее о поддерживаемых моделях",
"t5Encoder": "T5 энкодер"
"t5Encoder": "T5 энкодер",
"spandrelImageToImage": "Image to Image (Spandrel)",
"clipEmbed": "CLIP Embed"
},
"parameters": {
"images": "Изображения",
@@ -432,12 +643,16 @@
"rgNoRegion": "регион не выбран",
"rgNoPromptsOrIPAdapters": "нет текстовых запросов или IP-адаптеров",
"ipAdapterIncompatibleBaseModel": "несовместимая базовая модель IP-адаптера",
"ipAdapterNoImageSelected": "изображение IP-адаптера не выбрано"
"ipAdapterNoImageSelected": "изображение IP-адаптера не выбрано",
"t2iAdapterIncompatibleScaledBboxWidth": "$t(parameters.invoke.layer.t2iAdapterRequiresDimensionsToBeMultipleOf) {{multiple}}, масштабированная ширина рамки {{width}}",
"t2iAdapterIncompatibleBboxHeight": "$t(parameters.invoke.layer.t2iAdapterRequiresDimensionsToBeMultipleOf) {{multiple}}, высота рамки {{height}}",
"t2iAdapterIncompatibleBboxWidth": "$t(parameters.invoke.layer.t2iAdapterRequiresDimensionsToBeMultipleOf) {{multiple}}, ширина рамки {{width}}",
"t2iAdapterIncompatibleScaledBboxHeight": "$t(parameters.invoke.layer.t2iAdapterRequiresDimensionsToBeMultipleOf) {{multiple}}, масштабированная высота рамки {{height}}"
},
"fluxModelIncompatibleBboxWidth": "$t(parameters.invoke.fluxRequiresDimensionsToBeMultipleOf16), ширина bbox {{width}}",
"fluxModelIncompatibleBboxHeight": "$t(parameters.invoke.fluxRequiresDimensionsToBeMultipleOf16), высота bbox {{height}}",
"fluxModelIncompatibleScaledBboxHeight": "$t(parameters.invoke.fluxRequiresDimensionsToBeMultipleOf16), масштабированная высота bbox {{height}}",
"fluxModelIncompatibleScaledBboxWidth": "$t(parameters.invoke.fluxRequiresDimensionsToBeMultipleOf16) масштабированная ширина bbox {{width}}",
"fluxModelIncompatibleBboxWidth": "$t(parameters.invoke.fluxRequiresDimensionsToBeMultipleOf16), ширина рамки {{width}}",
"fluxModelIncompatibleBboxHeight": "$t(parameters.invoke.fluxRequiresDimensionsToBeMultipleOf16), высота рамки {{height}}",
"fluxModelIncompatibleScaledBboxHeight": "$t(parameters.invoke.fluxRequiresDimensionsToBeMultipleOf16), масштабированная высота рамки {{height}}",
"fluxModelIncompatibleScaledBboxWidth": "$t(parameters.invoke.fluxRequiresDimensionsToBeMultipleOf16) масштабированная ширина рамки {{width}}",
"noFLUXVAEModelSelected": "Для генерации FLUX не выбрана модель VAE",
"noT5EncoderModelSelected": "Для генерации FLUX не выбрана модель T5 энкодера",
"canvasIsFiltering": "Холст фильтруется",
@@ -470,7 +685,8 @@
"staged": "Инсценировка",
"optimizedImageToImage": "Оптимизированное img2img",
"sendToCanvas": "Отправить на холст",
"guidance": "Точность"
"guidance": "Точность",
"boxBlur": "Box Blur"
},
"settings": {
"models": "Модели",
@@ -504,7 +720,8 @@
"intermediatesClearedFailed": "Проблема очистки промежуточных",
"reloadingIn": "Перезагрузка через",
"informationalPopoversDisabled": "Информационные всплывающие окна отключены",
"informationalPopoversDisabledDesc": "Информационные всплывающие окна были отключены. Включите их в Настройках."
"informationalPopoversDisabledDesc": "Информационные всплывающие окна были отключены. Включите их в Настройках.",
"confirmOnNewSession": "Подтверждение нового сеанса"
},
"toast": {
"uploadFailed": "Загрузка не удалась",
@@ -518,8 +735,8 @@
"parameterSet": "Параметр задан",
"problemCopyingImage": "Не удается скопировать изображение",
"baseModelChangedCleared_one": "Очищена или отключена {{count}} несовместимая подмодель",
"baseModelChangedCleared_few": "Очищены или отключены {{count}} несовместимые подмодели",
"baseModelChangedCleared_many": "Очищены или отключены {{count}} несовместимых подмоделей",
"baseModelChangedCleared_few": "Очищено или отключено {{count}} несовместимых подмодели",
"baseModelChangedCleared_many": "Очищено или отключено {{count}} несовместимых подмоделей",
"loadedWithWarnings": "Рабочий процесс загружен с предупреждениями",
"setControlImage": "Установить как контрольное изображение",
"setNodeField": "Установить как поле узла",
@@ -573,7 +790,9 @@
"resetUI": "$t(accessibility.reset) интерфейс",
"createIssue": "Сообщить о проблеме",
"about": "Об этом",
"submitSupportTicket": "Отправить тикет в службу поддержки"
"submitSupportTicket": "Отправить тикет в службу поддержки",
"toggleRightPanel": "Переключить правую панель (G)",
"toggleLeftPanel": "Переключить левую панель (T)"
},
"nodes": {
"zoomInNodes": "Увеличьте масштаб",
@@ -711,7 +930,10 @@
"imageAccessError": "Невозможно найти изображение {{image_name}}, сбрасываем на значение по умолчанию",
"boardAccessError": "Невозможно найти доску {{board_id}}, сбрасываем на значение по умолчанию",
"modelAccessError": "Невозможно найти модель {{key}}, сброс на модель по умолчанию",
"saveToGallery": "Сохранить в галерею"
"saveToGallery": "Сохранить в галерею",
"noWorkflows": "Нет рабочих процессов",
"noMatchingWorkflows": "Нет совпадающих рабочих процессов",
"workflowHelpText": "Нужна помощь? Ознакомьтесь с нашим руководством <LinkComponent>Getting Started with Workflows</LinkComponent>"
},
"boards": {
"autoAddBoard": "Авто добавление Доски",
@@ -730,16 +952,16 @@
"loading": "Загрузка...",
"clearSearch": "Очистить поиск",
"deleteBoardOnly": "Удалить только доску",
"movingImagesToBoard_one": "Перемещаем {{count}} изображение на доску:",
"movingImagesToBoard_few": "Перемещаем {{count}} изображения на доску:",
"movingImagesToBoard_many": "Перемещаем {{count}} изображений на доску:",
"movingImagesToBoard_one": "Перемещение {{count}} изображения на доску:",
"movingImagesToBoard_few": "Перемещение {{count}} изображений на доску:",
"movingImagesToBoard_many": "Перемещение {{count}} изображений на доску:",
"downloadBoard": "Скачать доску",
"deleteBoard": "Удалить доску",
"deleteBoardAndImages": "Удалить доску и изображения",
"deletedBoardsCannotbeRestored": "Удаленные доски не могут быть восстановлены. Выбор «Удалить только доску» переведет изображения в состояние без категории.",
"assetsWithCount_one": "{{count}} ассет",
"assetsWithCount_few": "{{count}} ассета",
"assetsWithCount_many": "{{count}} ассетов",
"assetsWithCount_one": "{{count}} актив",
"assetsWithCount_few": "{{count}} актива",
"assetsWithCount_many": "{{count}} активов",
"imagesWithCount_one": "{{count}} изображение",
"imagesWithCount_few": "{{count}} изображения",
"imagesWithCount_many": "{{count}} изображений",
@@ -755,7 +977,8 @@
"hideBoards": "Скрыть доски",
"viewBoards": "Просмотреть доски",
"noBoards": "Нет досок {{boardType}}",
"deletedPrivateBoardsCannotbeRestored": "Удаленные доски не могут быть восстановлены. Выбор «Удалить только доску» переведет изображения в приватное состояние без категории для создателя изображения."
"deletedPrivateBoardsCannotbeRestored": "Удаленные доски не могут быть восстановлены. Выбор «Удалить только доску» переведет изображения в приватное состояние без категории для создателя изображения.",
"updateBoardError": "Ошибка обновления доски"
},
"dynamicPrompts": {
"seedBehaviour": {
@@ -1335,7 +1558,10 @@
"autoLayout": "Автоматическое расположение",
"userWorkflows": "Пользовательские рабочие процессы",
"projectWorkflows": "Рабочие процессы проекта",
"defaultWorkflows": "Стандартные рабочие процессы"
"defaultWorkflows": "Стандартные рабочие процессы",
"deleteWorkflow2": "Вы уверены, что хотите удалить этот рабочий процесс? Это нельзя отменить.",
"chooseWorkflowFromLibrary": "Выбрать рабочий процесс из библиотеки",
"uploadAndSaveWorkflow": "Загрузить в библиотеку"
},
"hrf": {
"enableHrf": "Включить исправление высокого разрешения",
@@ -1392,15 +1618,15 @@
"autoNegative": "Авто негатив",
"deletePrompt": "Удалить запрос",
"rectangle": "Прямоугольник",
"addNegativePrompt": "Добавить $t(common.negativePrompt)",
"addNegativePrompt": "Добавить $t(controlLayers.negativePrompt)",
"regionalGuidance": "Региональная точность",
"opacity": "Непрозрачность",
"addLayer": "Добавить слой",
"moveToFront": "На передний план",
"addPositivePrompt": "Добавить $t(common.positivePrompt)",
"addPositivePrompt": "Добавить $t(controlLayers.prompt)",
"regional": "Региональный",
"bookmark": "Закладка для быстрого переключения",
"fitBboxToLayers": "Подогнать Bbox к слоям",
"fitBboxToLayers": "Подогнать рамку к слоям",
"mergeVisibleOk": "Объединенные видимые слои",
"mergeVisibleError": "Ошибка объединения видимых слоев",
"clearHistory": "Очистить историю",
@@ -1409,7 +1635,7 @@
"saveLayerToAssets": "Сохранить слой в активы",
"clearCaches": "Очистить кэши",
"recalculateRects": "Пересчитать прямоугольники",
"saveBboxToGallery": "Сохранить Bbox в галерею",
"saveBboxToGallery": "Сохранить рамку в галерею",
"resetCanvas": "Сбросить холст",
"canvas": "Холст",
"global": "Глобальный",
@@ -1421,15 +1647,280 @@
"newRasterLayerOk": "Создан растровый слой",
"newRasterLayerError": "Ошибка создания растрового слоя",
"newGlobalReferenceImageOk": "Создано глобальное эталонное изображение",
"bboxOverlay": "Показать наложение Bbox",
"bboxOverlay": "Показать наложение ограничительной рамки",
"saveCanvasToGallery": "Сохранить холст в галерею",
"pullBboxIntoReferenceImageOk": "Bbox перенесен в эталонное изображение",
"pullBboxIntoReferenceImageError": "Ошибка переноса BBox в эталонное изображение",
"pullBboxIntoReferenceImageOk": "рамка перенесена в эталонное изображение",
"pullBboxIntoReferenceImageError": "Ошибка переноса рамки в эталонное изображение",
"regionIsEmpty": "Выбранный регион пуст",
"savedToGalleryOk": "Сохранено в галерею",
"savedToGalleryError": "Ошибка сохранения в галерею",
"pullBboxIntoLayerOk": "Bbox перенесен в слой",
"pullBboxIntoLayerError": "Проблема с переносом BBox в слой"
"pullBboxIntoLayerOk": "Рамка перенесена в слой",
"pullBboxIntoLayerError": "Проблема с переносом рамки в слой",
"newLayerFromImage": "Новый слой из изображения",
"filter": {
"lineart_anime_edge_detection": {
"label": "Обнаружение краев Lineart Anime",
"description": "Создает карту краев выбранного слоя с помощью модели обнаружения краев Lineart Anime."
},
"hed_edge_detection": {
"scribble": "Штрих",
"label": "обнаружение границ HED",
"description": "Создает карту границ из выбранного слоя с использованием модели обнаружения границ HED."
},
"mlsd_detection": {
"description": "Генерирует карту сегментов линий из выбранного слоя с помощью модели обнаружения сегментов линий MLSD.",
"score_threshold": "Пороговый балл",
"distance_threshold": "Порог расстояния",
"label": "Обнаружение сегментов линии"
},
"canny_edge_detection": {
"low_threshold": "Низкий порог",
"high_threshold": "Высокий порог",
"label": "Обнаружение краев",
"description": "Создает карту краев выбранного слоя с помощью алгоритма обнаружения краев Canny."
},
"color_map": {
"description": "Создайте цветовую карту из выбранного слоя.",
"label": "Цветная карта",
"tile_size": "Размер плитки"
},
"depth_anything_depth_estimation": {
"model_size_base": "Базовая",
"model_size_large": "Большая",
"label": "Анализ глубины",
"model_size_small": "Маленькая",
"model_size_small_v2": "Маленькая v2",
"description": "Создает карту глубины из выбранного слоя с использованием модели Depth Anything.",
"model_size": "Размер модели"
},
"mediapipe_face_detection": {
"min_confidence": "Минимальная уверенность",
"label": "Распознавание лиц MediaPipe",
"description": "Обнаруживает лица в выбранном слое с помощью модели обнаружения лиц MediaPipe.",
"max_faces": "Максимум лиц"
},
"lineart_edge_detection": {
"label": "Обнаружение краев Lineart",
"description": "Создает карту краев выбранного слоя с помощью модели обнаружения краев Lineart.",
"coarse": "Грубый"
},
"filterType": "Тип фильтра",
"autoProcess": "Автообработка",
"reset": "Сбросить",
"content_shuffle": {
"scale_factor": "Коэффициент",
"label": "Перетасовка контента",
"description": "Перемешивает содержимое выбранного слоя, аналогично эффекту «сжижения»."
},
"dw_openpose_detection": {
"label": "Обнаружение DW Openpose",
"draw_hands": "Рисовать руки",
"description": "Обнаруживает позы человека в выбранном слое с помощью модели DW Openpose.",
"draw_face": "Рисовать лицо",
"draw_body": "Рисовать тело"
},
"normal_map": {
"label": "Карта нормалей",
"description": "Создает карту нормалей для выбранного слоя."
},
"spandrel_filter": {
"model": "Модель",
"label": "Модель img2img",
"autoScale": "Авто масштабирование",
"scale": "Целевой масштаб",
"description": "Запустить модель изображения к изображению на выбранном слое.",
"autoScaleDesc": "Выбранная модель будет работать до тех пор, пока не будет достигнут целевой масштаб."
},
"pidi_edge_detection": {
"scribble": "Штрих",
"description": "Генерирует карту краев из выбранного слоя с помощью модели обнаружения краев PiDiNet.",
"label": "Обнаружение краев PiDiNet",
"quantize_edges": "Квантизация краев"
},
"process": "Обработать",
"apply": "Применить",
"cancel": "Отменить",
"filter": "Фильтр",
"filters": "Фильтры"
},
"HUD": {
"entityStatus": {
"isHidden": "{{title}} скрыт",
"isLocked": "{{title}} заблокирован",
"isDisabled": "{{title}} отключен",
"isEmpty": "{{title}} пуст",
"isFiltering": "{{title}} фильтруется",
"isTransforming": "{{title}} трансформируется"
},
"scaledBbox": "Масштабированная рамка",
"bbox": "Ограничительная рамка"
},
"canvasContextMenu": {
"saveBboxToGallery": "Сохранить рамку в галерею",
"newGlobalReferenceImage": "Новое глобальное эталонное изображение",
"bboxGroup": "Сохдать из рамки",
"canvasGroup": "Холст",
"newControlLayer": "Новый контрольный слой",
"newRasterLayer": "Новый растровый слой",
"saveToGalleryGroup": "Сохранить в галерею",
"saveCanvasToGallery": "Сохранить холст в галерею",
"cropCanvasToBbox": "Обрезать холст по рамке",
"newRegionalReferenceImage": "Новое региональное эталонное изображение"
},
"fill": {
"solid": "Сплошной",
"fillStyle": "Стиль заполнения",
"fillColor": "Цвет заполнения",
"grid": "Сетка",
"horizontal": "Горизонтальная",
"diagonal": "Диагональная",
"crosshatch": "Штриховка",
"vertical": "Вертикальная"
},
"showHUD": "Показать HUD",
"copyToClipboard": "Копировать в буфер обмена",
"ipAdapterMethod": {
"composition": "Только композиция",
"style": "Только стиль",
"ipAdapterMethod": "Метод IP адаптера",
"full": "Полный"
},
"addReferenceImage": "Добавить $t(controlLayers.referenceImage)",
"inpaintMask": "Маска перерисовки",
"sendToGalleryDesc": "При нажатии кнопки Invoke создается изображение и сохраняется в вашей галерее.",
"sendToCanvas": "Отправить на холст",
"regionalGuidance_withCount_one": "$t(controlLayers.regionalGuidance)",
"regionalGuidance_withCount_few": "Региональных точности",
"regionalGuidance_withCount_many": "Региональных точностей",
"controlLayer_withCount_one": "$t(controlLayers.controlLayer)",
"controlLayer_withCount_few": "Контрольных слоя",
"controlLayer_withCount_many": "Контрольных слоев",
"newCanvasFromImage": "Новый холст из изображения",
"inpaintMask_withCount_one": "$t(controlLayers.inpaintMask)",
"inpaintMask_withCount_few": "Маски перерисовки",
"inpaintMask_withCount_many": "Масок перерисовки",
"globalReferenceImages_withCount_visible": "Глобальные эталонные изображения ({{count}})",
"controlMode": {
"prompt": "Запрос",
"controlMode": "Режим контроля",
"megaControl": "Мега контроль",
"balanced": "Сбалансированный",
"control": "Контроль"
},
"settings": {
"isolatedPreview": "Изолированный предпросмотр",
"isolatedTransformingPreview": "Изолированный предпросмотр преобразования",
"invertBrushSizeScrollDirection": "Инвертировать прокрутку для размера кисти",
"snapToGrid": {
"label": "Привязка к сетке",
"on": "Вкл",
"off": "Выкл"
},
"isolatedFilteringPreview": "Изолированный предпросмотр фильтрации",
"pressureSensitivity": "Чувствительность к давлению",
"isolatedStagingPreview": "Изолированный предпросмотр на промежуточной стадии",
"preserveMask": {
"label": "Сохранить замаскированную область",
"alert": "Сохранение замаскированной области"
}
},
"stagingArea": {
"discardAll": "Отбросить все",
"discard": "Отбросить",
"accept": "Принять",
"previous": "Предыдущий",
"next": "Следующий",
"saveToGallery": "Сохранить в галерею",
"showResultsOn": "Показать результаты",
"showResultsOff": "Скрыть результаты"
},
"pullBboxIntoReferenceImage": "Поместить рамку в эталонное изображение",
"enableAutoNegative": "Включить авто негатив",
"maskFill": "Заполнение маски",
"viewProgressInViewer": "Просматривайте прогресс и результаты в <Btn>Просмотрщике изображений</Btn>.",
"convertToRasterLayer": "Конвертировать в растровый слой",
"tool": {
"move": "Двигать",
"bbox": "Ограничительная рамка",
"view": "Смотреть",
"brush": "Кисть",
"eraser": "Ластик",
"rectangle": "Прямоугольник",
"colorPicker": "Подборщик цветов"
},
"rasterLayer": "Растровый слой",
"sendingToCanvas": "Постановка генераций на холст",
"rasterLayers_withCount_visible": "Растровые слои ({{count}})",
"regionalGuidance_withCount_hidden": "Региональная точность ({{count}} скрыто)",
"enableTransparencyEffect": "Включить эффект прозрачности",
"hidingType": "Скрыть {{type}}",
"addRegionalGuidance": "Добавить $t(controlLayers.regionalGuidance)",
"sendingToGallery": "Отправка генераций в галерею",
"viewProgressOnCanvas": "Просматривайте прогресс и результаты этапов на <Btn>Холсте</Btn>.",
"controlLayers_withCount_hidden": "Контрольные слои ({{count}} скрыто)",
"rasterLayers_withCount_hidden": "Растровые слои ({{count}} скрыто)",
"deleteSelected": "Удалить выбранное",
"stagingOnCanvas": "Постановка изображений на",
"pullBboxIntoLayer": "Поместить рамку в слой",
"locked": "Заблокировано",
"replaceLayer": "Заменить слой",
"width": "Ширина",
"controlLayer": "Слой управления",
"addRasterLayer": "Добавить $t(controlLayers.rasterLayer)",
"addControlLayer": "Добавить $t(controlLayers.controlLayer)",
"addInpaintMask": "Добавить $t(controlLayers.inpaintMask)",
"inpaintMasks_withCount_hidden": "Маски перерисовки ({{count}} скрыто)",
"regionalGuidance_withCount_visible": "Региональная точность ({{count}})",
"newGallerySessionDesc": "Это очистит холст и все настройки, кроме выбранной модели. Генерации будут отправлены в галерею.",
"newCanvasSession": "Новая сессия холста",
"newCanvasSessionDesc": "Это очистит холст и все настройки, кроме выбора модели. Генерации будут размещены на холсте.",
"cropLayerToBbox": "Обрезать слой по ограничительной рамке",
"clipToBbox": "Обрезка штрихов в рамке",
"outputOnlyMaskedRegions": "Вывод только маскированных областей",
"duplicate": "Дублировать",
"inpaintMasks_withCount_visible": "Маски перерисовки ({{count}})",
"layer_one": "Слой",
"layer_few": "Слоя",
"layer_many": "Слоев",
"prompt": "Запрос",
"negativePrompt": "Исключающий запрос",
"beginEndStepPercentShort": "Начало/конец %",
"transform": {
"transform": "Трансформировать",
"fitToBbox": "Вместить в рамку",
"reset": "Сбросить",
"apply": "Применить",
"cancel": "Отменить"
},
"disableAutoNegative": "Отключить авто негатив",
"deleteReferenceImage": "Удалить эталонное изображение",
"controlLayers_withCount_visible": "Контрольные слои ({{count}})",
"rasterLayer_withCount_one": "$t(controlLayers.rasterLayer)",
"rasterLayer_withCount_few": "Растровых слоя",
"rasterLayer_withCount_many": "Растровых слоев",
"transparency": "Прозрачность",
"weight": "Вес",
"newGallerySession": "Новая сессия галереи",
"sendToCanvasDesc": "Нажатие кнопки Invoke отображает вашу текущую работу на холсте.",
"globalReferenceImages_withCount_hidden": "Глобальные эталонные изображения ({{count}} скрыто)",
"convertToControlLayer": "Конвертировать в контрольный слой",
"layer_withCount_one": "Слой ({{count}})",
"layer_withCount_few": "Слои ({{count}})",
"layer_withCount_many": "Слои ({{count}})",
"disableTransparencyEffect": "Отключить эффект прозрачности",
"showingType": "Показать {{type}}",
"dynamicGrid": "Динамическая сетка",
"logDebugInfo": "Писать отладочную информацию",
"unlocked": "Разблокировано",
"showProgressOnCanvas": "Показать прогресс на холсте",
"globalReferenceImage_withCount_one": "$t(controlLayers.globalReferenceImage)",
"globalReferenceImage_withCount_few": "Глобальных эталонных изображения",
"globalReferenceImage_withCount_many": "Глобальных эталонных изображений",
"regionalReferenceImage": "Региональное эталонное изображение",
"globalReferenceImage": "Глобальное эталонное изображение",
"sendToGallery": "Отправить в галерею",
"referenceImage": "Эталонное изображение",
"addGlobalReferenceImage": "Добавить $t(controlLayers.globalReferenceImage)"
},
"ui": {
"tabs": {
@@ -1441,7 +1932,8 @@
"modelsTab": "$t(ui.tabs.models) $t(common.tab)",
"queue": "Очередь",
"upscaling": "Увеличение",
"upscalingTab": "$t(ui.tabs.upscaling) $t(common.tab)"
"upscalingTab": "$t(ui.tabs.upscaling) $t(common.tab)",
"gallery": "Галерея"
}
},
"upscaling": {
@@ -1513,5 +2005,45 @@
"professional": "Профессионал",
"professionalUpsell": "Доступно в профессиональной версии Invoke. Нажмите здесь или посетите invoke.com/pricing для получения более подробной информации.",
"shareAccess": "Поделиться доступом"
},
"system": {
"logNamespaces": {
"canvas": "Холст",
"config": "Конфигурация",
"generation": "Генерация",
"workflows": "Рабочие процессы",
"gallery": "Галерея",
"models": "Модели",
"logNamespaces": "Пространства имен логов",
"events": "События",
"system": "Система",
"queue": "Очередь",
"metadata": "Метаданные"
},
"enableLogging": "Включить логи",
"logLevel": {
"logLevel": "Уровень логов",
"fatal": "Фатальное",
"debug": "Отладка",
"info": "Инфо",
"warn": "Предупреждение",
"error": "Ошибки",
"trace": "Трассировка"
}
},
"whatsNew": {
"canvasV2Announcement": {
"newLayerTypes": "Новые типы слоев для еще большего контроля",
"readReleaseNotes": "Прочитать информацию о выпуске",
"watchReleaseVideo": "Смотреть видео о выпуске",
"fluxSupport": "Поддержка семейства моделей Flux",
"newCanvas": "Новый мощный холст управления",
"watchUiUpdatesOverview": "Обзор обновлений пользовательского интерфейса"
},
"whatsNewInInvoke": "Что нового в Invoke"
},
"newUserExperience": {
"toGetStarted": "Чтобы начать работу, введите в поле запрос и нажмите <StrongComponent>Invoke</StrongComponent>, чтобы сгенерировать первое изображение. Выберите шаблон запроса, чтобы улучшить результаты. Вы можете сохранить изображения непосредственно в <StrongComponent>Галерею</StrongComponent> или отредактировать их на <StrongComponent>Холсте</StrongComponent>.",
"gettingStartedSeries": "Хотите получить больше рекомендаций? Ознакомьтесь с нашей серией <LinkComponent>Getting Started Series</LinkComponent> для получения советов по раскрытию всего потенциала Invoke Studio."
}
}
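The `layer_one` / `layer_few` / `layer_many` keys in the locale above follow i18next's CLDR plural suffixes; for Russian, the chosen suffix depends on the trailing digits of the count. A minimal sketch of the lookup using the standard `Intl.PluralRules` API (the `pluralSuffix` helper name is illustrative, not part of the app):

```typescript
// Russian CLDR plural categories, which i18next uses to pick between the
// _one / _few / _many variants defined in the locale file above.
const russianPlurals = new Intl.PluralRules('ru');

export function pluralSuffix(count: number): string {
  // Per CLDR: 1, 21, 31... -> 'one'; 2-4, 22-24... -> 'few';
  // 0, 5-20, 25-30... -> 'many'; fractions -> 'other'.
  return russianPlurals.select(count);
}
```

This is why three variants of each counted string are required, where an English locale only needs `_one` and `_other`.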


@@ -415,7 +415,8 @@
"resetUI": "$t(accessibility.reset) UI",
"createIssue": "创建问题",
"about": "关于",
"submitSupportTicket": "提交支持工单"
"submitSupportTicket": "提交支持工单",
"toggleRightPanel": "切换右侧面板(G)"
},
"nodes": {
"zoomInNodes": "放大",


@@ -20,13 +20,18 @@ import {
import DeleteImageModal from 'features/deleteImageModal/components/DeleteImageModal';
import { DynamicPromptsModal } from 'features/dynamicPrompts/components/DynamicPromptsPreviewModal';
import DeleteBoardModal from 'features/gallery/components/Boards/DeleteBoardModal';
import { ImageContextMenu } from 'features/gallery/components/ImageContextMenu/ImageContextMenu';
import { useStarterModelsToast } from 'features/modelManagerV2/hooks/useStarterModelsToast';
import { ShareWorkflowModal } from 'features/nodes/components/sidePanel/WorkflowListMenu/ShareWorkflowModal';
import { ClearQueueConfirmationsAlertDialog } from 'features/queue/components/ClearQueueConfirmationAlertDialog';
import { DeleteStylePresetDialog } from 'features/stylePresets/components/DeleteStylePresetDialog';
import { StylePresetModal } from 'features/stylePresets/components/StylePresetForm/StylePresetModal';
import RefreshAfterResetModal from 'features/system/components/SettingsModal/RefreshAfterResetModal';
import { configChanged } from 'features/system/store/configSlice';
import { selectLanguage } from 'features/system/store/systemSelectors';
import { AppContent } from 'features/ui/components/AppContent';
import { DeleteWorkflowDialog } from 'features/workflowLibrary/components/DeleteLibraryWorkflowConfirmationAlertDialog';
import { NewWorkflowConfirmationAlertDialog } from 'features/workflowLibrary/components/NewWorkflowConfirmationAlertDialog';
import { AnimatePresence } from 'framer-motion';
import i18n from 'i18n';
import { size } from 'lodash-es';
@@ -107,11 +112,16 @@ const App = ({ config = DEFAULT_CONFIG, studioInitAction }: Props) => {
<DynamicPromptsModal />
<StylePresetModal />
<ClearQueueConfirmationsAlertDialog />
<NewWorkflowConfirmationAlertDialog />
<DeleteStylePresetDialog />
<DeleteWorkflowDialog />
<ShareWorkflowModal />
<RefreshAfterResetModal />
<DeleteBoardModal />
<GlobalImageHotkeys />
<NewGallerySessionDialog />
<NewCanvasSessionDialog />
<ImageContextMenu />
</ErrorBoundary>
);
};


@@ -65,10 +65,10 @@ const AppErrorBoundaryFallback = ({ error, resetErrorBoundary }: Props) => {
</Text>
</Flex>
<Flex gap={4}>
<Button leftIcon={<PiArrowCounterClockwiseBold />} onPointerUp={resetErrorBoundary}>
<Button leftIcon={<PiArrowCounterClockwiseBold />} onClick={resetErrorBoundary}>
{t('accessibility.resetUI')}
</Button>
<Button leftIcon={<PiCopyBold />} onPointerUp={handleCopy}>
<Button leftIcon={<PiCopyBold />} onClick={handleCopy}>
{t('common.copyError')}
</Button>
<Link href={url} isExternal>


@@ -1,6 +1,7 @@
import { skipToken } from '@reduxjs/toolkit/query';
import { useAppSelector } from 'app/store/storeHooks';
import { useIsRegionFocused } from 'common/hooks/focus';
import { useAssertSingleton } from 'common/hooks/useAssertSingleton';
import { selectIsStaging } from 'features/controlLayers/store/canvasStagingAreaSlice';
import { useImageActions } from 'features/gallery/hooks/useImageActions';
import { selectLastSelectedImage } from 'features/gallery/store/gallerySelectors';
@@ -11,6 +12,7 @@ import { useGetImageDTOQuery } from 'services/api/endpoints/images';
import type { ImageDTO } from 'services/api/types';
export const GlobalImageHotkeys = memo(() => {
useAssertSingleton('GlobalImageHotkeys');
const lastSelectedImage = useAppSelector(selectLastSelectedImage);
const { currentData: imageDTO } = useGetImageDTOQuery(lastSelectedImage?.image_name ?? skipToken);


@@ -9,11 +9,11 @@ import { imageDTOToImageObject } from 'features/controlLayers/store/util';
import { $imageViewer } from 'features/gallery/components/ImageViewer/useImageViewer';
import { sentImageToCanvas } from 'features/gallery/store/actions';
import { parseAndRecallAllMetadata } from 'features/metadata/util/handlers';
import { $isWorkflowListMenuIsOpen } from 'features/nodes/store/workflowListMenu';
import { $isStylePresetsMenuOpen, activeStylePresetIdChanged } from 'features/stylePresets/store/stylePresetSlice';
import { toast } from 'features/toast/toast';
import { setActiveTab } from 'features/ui/store/uiSlice';
import { useGetAndLoadLibraryWorkflow } from 'features/workflowLibrary/hooks/useGetAndLoadLibraryWorkflow';
import { $workflowLibraryModal } from 'features/workflowLibrary/store/isWorkflowLibraryModalOpen';
import { useCallback, useEffect, useRef } from 'react';
import { useTranslation } from 'react-i18next';
import { getImageDTO, getImageMetadata } from 'services/api/endpoints/images';
@@ -160,7 +160,7 @@ export const useStudioInitAction = (action?: StudioInitAction) => {
case 'viewAllWorkflows':
// Go to the workflows tab and open the workflow library modal
store.dispatch(setActiveTab('workflows'));
$workflowLibraryModal.set(true);
$isWorkflowListMenuIsOpen.set(true);
break;
case 'viewAllStylePresets':
// Go to the canvas tab and open the style presets menu


@@ -1,3 +1,4 @@
export const STORAGE_PREFIX = '@@invokeai-';
export const EMPTY_ARRAY = [];
/** @knipignore */
export const EMPTY_OBJECT = {};


@@ -98,8 +98,8 @@ const RgbColorPicker = (props: Props) => {
export default memo(RgbColorPicker);
const ColorSwatch = ({ color, onChange }: { color: RgbColor; onChange: (color: RgbColor) => void }) => {
const onPointerUp = useCallback(() => {
const onClick = useCallback(() => {
onChange(color);
}, [color, onChange]);
return <Box role="button" onPointerUp={onPointerUp} h={8} w={8} bg={rgbColorToString(color)} borderRadius="base" />;
return <Box role="button" onClick={onClick} h={8} w={8} bg={rgbColorToString(color)} borderRadius="base" />;
};


@@ -109,8 +109,8 @@ const RgbaColorPicker = (props: Props) => {
export default memo(RgbaColorPicker);
const ColorSwatch = ({ color, onChange }: { color: RgbaColor; onChange: (color: RgbaColor) => void }) => {
const onPointerUp = useCallback(() => {
const onClick = useCallback(() => {
onChange(color);
}, [color, onChange]);
return <Box role="button" onPointerUp={onPointerUp} h={8} w={8} bg={rgbaColorToString(color)} borderRadius="base" />;
return <Box role="button" onClick={onClick} h={8} w={8} bg={rgbaColorToString(color)} borderRadius="base" />;
};


@@ -4,9 +4,9 @@ import { IAILoadingImageFallback, IAINoContentFallback } from 'common/components
import ImageMetadataOverlay from 'common/components/ImageMetadataOverlay';
import { useImageUploadButton } from 'common/hooks/useImageUploadButton';
import type { TypesafeDraggableData, TypesafeDroppableData } from 'features/dnd/types';
import ImageContextMenu from 'features/gallery/components/ImageContextMenu/ImageContextMenu';
import { useImageContextMenu } from 'features/gallery/components/ImageContextMenu/ImageContextMenu';
import type { MouseEvent, ReactElement, ReactNode, SyntheticEvent } from 'react';
import { memo, useCallback, useMemo } from 'react';
import { memo, useCallback, useMemo, useRef } from 'react';
import { PiImageBold, PiUploadSimpleBold } from 'react-icons/pi';
import type { ImageDTO, PostUploadAction } from 'services/api/types';
@@ -17,7 +17,14 @@ const defaultUploadElement = <Icon as={PiUploadSimpleBold} boxSize={16} />;
const defaultNoContentFallback = <IAINoContentFallback icon={PiImageBold} />;
const baseStyles: SystemStyleObject = {
touchAction: 'none',
userSelect: 'none',
WebkitUserSelect: 'none',
};
const sx: SystemStyleObject = {
...baseStyles,
'.gallery-image-container::before': {
content: '""',
display: 'inline-block',
@@ -55,7 +62,7 @@ type IAIDndImageProps = FlexProps & {
imageDTO: ImageDTO | undefined;
onError?: (event: SyntheticEvent<HTMLImageElement>) => void;
onLoad?: (event: SyntheticEvent<HTMLImageElement>) => void;
onPointerUp?: (event: MouseEvent<HTMLDivElement>) => void;
onClick?: (event: MouseEvent<HTMLDivElement>) => void;
withMetadataOverlay?: boolean;
isDragDisabled?: boolean;
isDropDisabled?: boolean;
@@ -82,7 +89,7 @@ const IAIDndImage = (props: IAIDndImageProps) => {
const {
imageDTO,
onError,
onPointerUp,
onClick,
withMetadataOverlay = false,
isDropDisabled = false,
isDragDisabled = false,
@@ -102,59 +109,10 @@ const IAIDndImage = (props: IAIDndImageProps) => {
useThumbailFallback,
withHoverOverlay = false,
children,
onMouseOver,
onMouseOut,
dataTestId,
...rest
} = props;
const handleMouseOver = useCallback(
(e: MouseEvent<HTMLDivElement>) => {
if (onMouseOver) {
onMouseOver(e);
}
},
[onMouseOver]
);
const handleMouseOut = useCallback(
(e: MouseEvent<HTMLDivElement>) => {
if (onMouseOut) {
onMouseOut(e);
}
},
[onMouseOut]
);
const { getUploadButtonProps, getUploadInputProps } = useImageUploadButton({
postUploadAction,
isDisabled: isUploadDisabled,
});
const uploadButtonStyles = useMemo<SystemStyleObject>(() => {
const styles: SystemStyleObject = {
minH: minSize,
w: 'full',
h: 'full',
alignItems: 'center',
justifyContent: 'center',
borderRadius: 'base',
transitionProperty: 'common',
transitionDuration: '0.1s',
color: 'base.500',
};
if (!isUploadDisabled) {
Object.assign(styles, {
cursor: 'pointer',
bg: 'base.700',
_hover: {
bg: 'base.650',
color: 'base.300',
},
});
}
return styles;
}, [isUploadDisabled, minSize]);
const openInNewTab = useCallback(
(e: MouseEvent) => {
if (!imageDTO) {
@@ -168,76 +126,126 @@ const IAIDndImage = (props: IAIDndImageProps) => {
[imageDTO]
);
const ref = useRef<HTMLDivElement>(null);
useImageContextMenu(imageDTO, ref);
return (
<ImageContextMenu imageDTO={imageDTO}>
{(ref) => (
<Flex
ref={ref}
width="full"
height="full"
alignItems="center"
justifyContent="center"
position="relative"
minW={minSize ? minSize : undefined}
minH={minSize ? minSize : undefined}
userSelect="none"
cursor={isDragDisabled || !imageDTO ? 'default' : 'pointer'}
sx={withHoverOverlay ? sx : baseStyles}
data-selected={isSelectedForCompare ? 'selectedForCompare' : isSelected ? 'selected' : undefined}
{...rest}
>
{imageDTO && (
<Flex
ref={ref}
onMouseOver={handleMouseOver}
onMouseOut={handleMouseOut}
width="full"
height="full"
className="gallery-image-container"
w="full"
h="full"
position={fitContainer ? 'absolute' : 'relative'}
alignItems="center"
justifyContent="center"
position="relative"
minW={minSize ? minSize : undefined}
minH={minSize ? minSize : undefined}
userSelect="none"
cursor={isDragDisabled || !imageDTO ? 'default' : 'pointer'}
sx={withHoverOverlay ? sx : undefined}
data-selected={isSelectedForCompare ? 'selectedForCompare' : isSelected ? 'selected' : undefined}
{...rest}
>
{imageDTO && (
<Flex
className="gallery-image-container"
w="full"
h="full"
position={fitContainer ? 'absolute' : 'relative'}
alignItems="center"
justifyContent="center"
>
<Image
src={thumbnail ? imageDTO.thumbnail_url : imageDTO.image_url}
fallbackStrategy="beforeLoadOrError"
fallbackSrc={useThumbailFallback ? imageDTO.thumbnail_url : undefined}
fallback={useThumbailFallback ? undefined : <IAILoadingImageFallback image={imageDTO} />}
onError={onError}
draggable={false}
w={imageDTO.width}
objectFit="contain"
maxW="full"
maxH="full"
borderRadius="base"
sx={imageSx}
data-testid={dataTestId}
/>
{withMetadataOverlay && <ImageMetadataOverlay imageDTO={imageDTO} />}
</Flex>
)}
{!imageDTO && !isUploadDisabled && (
<>
<Flex sx={uploadButtonStyles} {...getUploadButtonProps()}>
<input {...getUploadInputProps()} />
{uploadElement}
</Flex>
</>
)}
{!imageDTO && isUploadDisabled && noContentFallback}
{imageDTO && !isDragDisabled && (
<IAIDraggable
data={draggableData}
disabled={isDragDisabled || !imageDTO}
onPointerUp={onPointerUp}
onAuxClick={openInNewTab}
/>
)}
{children}
{!isDropDisabled && <IAIDroppable data={droppableData} disabled={isDropDisabled} dropLabel={dropLabel} />}
<Image
src={thumbnail ? imageDTO.thumbnail_url : imageDTO.image_url}
fallbackStrategy="beforeLoadOrError"
fallbackSrc={useThumbailFallback ? imageDTO.thumbnail_url : undefined}
fallback={useThumbailFallback ? undefined : <IAILoadingImageFallback image={imageDTO} />}
onError={onError}
draggable={false}
w={imageDTO.width}
objectFit="contain"
maxW="full"
maxH="full"
borderRadius="base"
sx={imageSx}
data-testid={dataTestId}
/>
{withMetadataOverlay && <ImageMetadataOverlay imageDTO={imageDTO} />}
</Flex>
)}
</ImageContextMenu>
{!imageDTO && !isUploadDisabled && (
<UploadButton
isUploadDisabled={isUploadDisabled}
postUploadAction={postUploadAction}
uploadElement={uploadElement}
minSize={minSize}
/>
)}
{!imageDTO && isUploadDisabled && noContentFallback}
{imageDTO && !isDragDisabled && (
<IAIDraggable
data={draggableData}
disabled={isDragDisabled || !imageDTO}
onClick={onClick}
onAuxClick={openInNewTab}
/>
)}
{children}
{!isDropDisabled && <IAIDroppable data={droppableData} disabled={isDropDisabled} dropLabel={dropLabel} />}
</Flex>
);
};
export default memo(IAIDndImage);
const UploadButton = memo(
({
isUploadDisabled,
postUploadAction,
uploadElement,
minSize,
}: {
isUploadDisabled: boolean;
postUploadAction?: PostUploadAction;
uploadElement: ReactNode;
minSize: number;
}) => {
const { getUploadButtonProps, getUploadInputProps } = useImageUploadButton({
postUploadAction,
isDisabled: isUploadDisabled,
});
const uploadButtonStyles = useMemo<SystemStyleObject>(() => {
const styles: SystemStyleObject = {
minH: minSize,
w: 'full',
h: 'full',
alignItems: 'center',
justifyContent: 'center',
borderRadius: 'base',
transitionProperty: 'common',
transitionDuration: '0.1s',
color: 'base.500',
};
if (!isUploadDisabled) {
Object.assign(styles, {
cursor: 'pointer',
bg: 'base.700',
_hover: {
bg: 'base.650',
color: 'base.300',
},
});
}
return styles;
}, [isUploadDisabled, minSize]);
return (
<Flex sx={uploadButtonStyles} {...getUploadButtonProps()}>
<input {...getUploadInputProps()} />
{uploadElement}
</Flex>
);
}
);
UploadButton.displayName = 'UploadButton';


@@ -16,17 +16,17 @@ const sx: SystemStyleObject = {
},
};
type Props = Omit<IconButtonProps, 'aria-label' | 'onPointerUp' | 'tooltip'> & {
onPointerUp: (event: MouseEvent<HTMLButtonElement>) => void;
type Props = Omit<IconButtonProps, 'aria-label' | 'onClick' | 'tooltip'> & {
onClick: (event: MouseEvent<HTMLButtonElement>) => void;
tooltip: string;
};
const IAIDndImageIcon = (props: Props) => {
const { onPointerUp, tooltip, icon, ...rest } = props;
const { onClick, tooltip, icon, ...rest } = props;
return (
<IconButton
onPointerUp={onPointerUp}
onClick={onClick}
aria-label={tooltip}
icon={icon}
variant="link"


@@ -1,80 +1,62 @@
import { Flex, Text } from '@invoke-ai/ui-library';
import type { AnimationProps } from 'framer-motion';
import { motion } from 'framer-motion';
import { memo, useRef } from 'react';
import { memo } from 'react';
import { useTranslation } from 'react-i18next';
import { v4 as uuidv4 } from 'uuid';
type Props = {
isOver: boolean;
label?: string;
};
const initial: AnimationProps['initial'] = {
opacity: 0,
};
const animate: AnimationProps['animate'] = {
opacity: 1,
transition: { duration: 0.1 },
};
const exit: AnimationProps['exit'] = {
opacity: 0,
transition: { duration: 0.1 },
};
const IAIDropOverlay = (props: Props) => {
const { t } = useTranslation();
const { isOver, label = t('gallery.drop') } = props;
const motionId = useRef(uuidv4());
return (
<motion.div key={motionId.current} initial={initial} animate={animate} exit={exit}>
<Flex position="absolute" top={0} right={0} bottom={0} left={0}>
<Flex
position="absolute"
top={0}
right={0}
bottom={0}
left={0}
w="full"
h="full"
bg="base.900"
opacity={0.7}
borderRadius="base"
alignItems="center"
justifyContent="center"
transitionProperty="common"
transitionDuration="0.1s"
/>
<Flex position="absolute" top={0} right={0} bottom={0} left={0}>
<Flex
position="absolute"
top={0}
right={0}
bottom={0}
left={0}
w="full"
h="full"
bg="base.900"
opacity={0.7}
borderRadius="base"
alignItems="center"
justifyContent="center"
transitionProperty="common"
transitionDuration="0.1s"
/>
<Flex
position="absolute"
top={0.5}
right={0.5}
bottom={0.5}
left={0.5}
opacity={1}
borderWidth={1.5}
borderColor={isOver ? 'invokeYellow.300' : 'base.500'}
borderRadius="base"
borderStyle="dashed"
<Flex
position="absolute"
top={0.5}
right={0.5}
bottom={0.5}
left={0.5}
opacity={1}
borderWidth={1.5}
borderColor={isOver ? 'invokeYellow.300' : 'base.500'}
borderRadius="base"
borderStyle="dashed"
transitionProperty="common"
transitionDuration="0.1s"
alignItems="center"
justifyContent="center"
>
<Text
fontSize="lg"
fontWeight="semibold"
color={isOver ? 'invokeYellow.300' : 'base.500'}
transitionProperty="common"
transitionDuration="0.1s"
alignItems="center"
justifyContent="center"
p={4}
textAlign="center"
>
<Text
fontSize="lg"
fontWeight="semibold"
color={isOver ? 'invokeYellow.300' : 'base.500'}
transitionProperty="common"
transitionDuration="0.1s"
textAlign="center"
>
{label}
</Text>
</Flex>
{label}
</Text>
</Flex>
</motion.div>
</Flex>
);
};


@@ -0,0 +1,30 @@
import type { MenuItemProps } from '@invoke-ai/ui-library';
import { Flex, MenuItem, Tooltip } from '@invoke-ai/ui-library';
import type { ReactNode } from 'react';
type Props = MenuItemProps & {
tooltip?: ReactNode;
icon: ReactNode;
};
export const IconMenuItem = ({ tooltip, icon, ...props }: Props) => {
return (
<Tooltip label={tooltip} placement="top" gutter={12}>
<MenuItem
display="flex"
alignItems="center"
justifyContent="center"
w="min-content"
aspectRatio="1"
borderRadius="base"
{...props}
>
{icon}
</MenuItem>
</Tooltip>
);
};
export const IconMenuItemGroup = ({ children }: { children: ReactNode }) => {
return <Flex gap={2}>{children}</Flex>;
};


@@ -95,14 +95,14 @@ const Content = ({ data, feature, hideDisable }: ContentProps) => {
[feature, t]
);
const onPointerUpLearnMore = useCallback(() => {
const onClickLearnMore = useCallback(() => {
if (!data?.href) {
return;
}
window.open(data.href);
}, [data?.href]);
const onPointerUpDontShowMeThese = useCallback(() => {
const onClickDontShowMeThese = useCallback(() => {
dispatch(setShouldEnableInformationalPopovers(false));
toast({
title: t('settings.informationalPopoversDisabled'),
@@ -135,13 +135,13 @@ const Content = ({ data, feature, hideDisable }: ContentProps) => {
<Divider />
<Flex alignItems="center" justifyContent="space-between" w="full">
{!hideDisable && (
<Button onPointerUp={onPointerUpDontShowMeThese} variant="link" size="sm">
<Button onClick={onClickDontShowMeThese} variant="link" size="sm">
{t('common.dontShowMeThese')}
</Button>
)}
<Spacer />
{data?.href && (
<Button onPointerUp={onPointerUpLearnMore} leftIcon={<PiArrowSquareOutBold />} variant="link" size="sm">
<Button onClick={onClickLearnMore} leftIcon={<PiArrowSquareOutBold />} variant="link" size="sm">
{t('common.learnMore') ?? heading}
</Button>
)}


@@ -51,7 +51,7 @@ export const buildUseBoolean = (initialValue: boolean): [() => UseBoolean, Writa
* Hook to manage a boolean state. Use this for a local boolean state.
* @param initialValue Initial value of the boolean
*/
export const useBoolean = (initialValue: boolean) => {
export const useBoolean = (initialValue: boolean): UseBoolean => {
const [isTrue, set] = useState(initialValue);
const setTrue = useCallback(() => {
@@ -72,3 +72,82 @@ export const useBoolean = (initialValue: boolean) => {
toggle,
};
};
type UseDisclosure = {
isOpen: boolean;
open: () => void;
close: () => void;
set: (isOpen: boolean) => void;
toggle: () => void;
};
/**
* This is the same as `buildUseBoolean`, but the method names are more descriptive,
* serving the semantics of a disclosure state.
*
* Creates a hook to manage a boolean state. The boolean is stored in a nanostores atom.
* Returns a tuple containing the hook and the atom. Use this for global boolean state.
*
* @param defaultIsOpen Initial state of the disclosure
*/
export const buildUseDisclosure = (defaultIsOpen: boolean): [() => UseDisclosure, WritableAtom<boolean>] => {
const $isOpen = atom(defaultIsOpen);
const open = () => {
$isOpen.set(true);
};
const close = () => {
$isOpen.set(false);
};
const set = (isOpen: boolean) => {
$isOpen.set(isOpen);
};
const toggle = () => {
$isOpen.set(!$isOpen.get());
};
const useDisclosure = () => {
const isOpen = useStore($isOpen);
return {
isOpen,
open,
close,
set,
toggle,
};
};
return [useDisclosure, $isOpen] as const;
};
/**
* This is the same as `useBoolean`, but the method names are more descriptive,
* serving the semantics of a disclosure state.
*
* Hook to manage a boolean state. Use this for a local boolean state.
* @param defaultIsOpen Initial state of the disclosure
*
* @knipignore
*/
export const useDisclosure = (defaultIsOpen: boolean): UseDisclosure => {
const [isOpen, set] = useState(defaultIsOpen);
const open = useCallback(() => {
set(true);
}, [set]);
const close = useCallback(() => {
set(false);
}, [set]);
const toggle = useCallback(() => {
set((val) => !val);
}, [set]);
return {
isOpen,
open,
close,
set,
toggle,
};
};
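The disclosure helpers above pair a nanostores atom with stable `open`/`close`/`toggle` callbacks and return both as a tuple, so non-React code can read or drive the same state. A framework-free sketch of that tuple pattern, with a hand-rolled atom standing in for nanostores (the names here are illustrative, not the app's actual API):

```typescript
// Minimal stand-in for a nanostores atom: holds one value behind get/set.
type Atom<T> = { get: () => T; set: (value: T) => void };

const atom = <T>(initial: T): Atom<T> => {
  let value = initial;
  return {
    get: () => value,
    set: (next) => {
      value = next;
    },
  };
};

// Same shape as buildUseDisclosure: disclosure methods plus the underlying
// atom, so state can be shared between React components and plain listeners.
export const buildDisclosure = (defaultIsOpen: boolean) => {
  const $isOpen = atom(defaultIsOpen);
  return [
    {
      open: () => $isOpen.set(true),
      close: () => $isOpen.set(false),
      set: (isOpen: boolean) => $isOpen.set(isOpen),
      toggle: () => $isOpen.set(!$isOpen.get()),
    },
    $isOpen,
  ] as const;
};
```

In the real hooks, `useStore($isOpen)` subscribes a component to the atom; everything else is the same closure over the shared store.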


@@ -3,12 +3,12 @@ import { Combobox, ConfirmationAlertDialog, Flex, FormControl, Text } from '@inv
import { createSelector } from '@reduxjs/toolkit';
import { createMemoizedSelector } from 'app/store/createMemoizedSelector';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { useAssertSingleton } from 'common/hooks/useAssertSingleton';
import {
changeBoardReset,
isModalOpenChanged,
selectChangeBoardModalSlice,
} from 'features/changeBoardModal/store/slice';
import { selectListBoardsQueryArgs } from 'features/gallery/store/gallerySelectors';
import { memo, useCallback, useMemo, useState } from 'react';
import { useTranslation } from 'react-i18next';
import { useListAllBoardsQuery } from 'services/api/endpoints/boards';
@@ -25,10 +25,10 @@ const selectIsModalOpen = createSelector(
);
const ChangeBoardModal = () => {
useAssertSingleton('ChangeBoardModal');
const dispatch = useAppDispatch();
const [selectedBoard, setSelectedBoard] = useState<string | null>();
const queryArgs = useAppSelector(selectListBoardsQueryArgs);
const { data: boards, isFetching } = useListAllBoardsQuery(queryArgs);
const { data: boards, isFetching } = useListAllBoardsQuery({ include_archived: true });
const isModalOpen = useAppSelector(selectIsModalOpen);
const imagesToChange = useAppSelector(selectImagesToChange);
const [addImagesToBoard] = useAddImagesToBoardMutation();


@@ -33,7 +33,7 @@ export const CanvasAddEntityButtons = memo(() => {
variant="ghost"
justifyContent="flex-start"
leftIcon={<PiPlusBold />}
onPointerUp={addGlobalReferenceImage}
onClick={addGlobalReferenceImage}
isDisabled={isFLUX}
>
{t('controlLayers.globalReferenceImage')}
@@ -46,7 +46,7 @@ export const CanvasAddEntityButtons = memo(() => {
variant="ghost"
justifyContent="flex-start"
leftIcon={<PiPlusBold />}
onPointerUp={addInpaintMask}
onClick={addInpaintMask}
>
{t('controlLayers.inpaintMask')}
</Button>
@@ -55,7 +55,7 @@ export const CanvasAddEntityButtons = memo(() => {
variant="ghost"
justifyContent="flex-start"
leftIcon={<PiPlusBold />}
onPointerUp={addRegionalGuidance}
onClick={addRegionalGuidance}
isDisabled={isFLUX}
>
{t('controlLayers.regionalGuidance')}
@@ -65,7 +65,7 @@ export const CanvasAddEntityButtons = memo(() => {
variant="ghost"
justifyContent="flex-start"
leftIcon={<PiPlusBold />}
onPointerUp={addRegionalReferenceImage}
onClick={addRegionalReferenceImage}
isDisabled={isFLUX}
>
{t('controlLayers.regionalReferenceImage')}
@@ -79,8 +79,7 @@ export const CanvasAddEntityButtons = memo(() => {
variant="ghost"
justifyContent="flex-start"
leftIcon={<PiPlusBold />}
onPointerUp={addControlLayer}
isDisabled={isFLUX}
onClick={addControlLayer}
>
{t('controlLayers.controlLayer')}
</Button>
@@ -89,7 +88,7 @@ export const CanvasAddEntityButtons = memo(() => {
variant="ghost"
justifyContent="flex-start"
leftIcon={<PiPlusBold />}
onPointerUp={addRasterLayer}
onClick={addRasterLayer}
>
{t('controlLayers.rasterLayer')}
</Button>


@@ -17,12 +17,12 @@ import { Trans, useTranslation } from 'react-i18next';
const ActivateImageViewerButton = (props: PropsWithChildren) => {
const imageViewer = useImageViewer();
const onPointerUp = useCallback(() => {
const onClick = useCallback(() => {
imageViewer.open();
selectCanvasRightPanelGalleryTab();
}, [imageViewer]);
return (
<Button onPointerUp={onPointerUp} size="sm" variant="link" color="base.50">
<Button onClick={onClick} size="sm" variant="link" color="base.50">
{props.children}
</Button>
);
@@ -58,13 +58,13 @@ export const CanvasAlertsSendingToGallery = () => {
const ActivateCanvasButton = (props: PropsWithChildren) => {
const dispatch = useAppDispatch();
const imageViewer = useImageViewer();
const onPointerUp = useCallback(() => {
const onClick = useCallback(() => {
dispatch(setActiveTab('canvas'));
selectCanvasRightPanelLayersTab();
imageViewer.close();
}, [dispatch, imageViewer]);
return (
<Button onPointerUp={onPointerUp} size="sm" variant="link" color="base.50">
<Button onClick={onClick} size="sm" variant="link" color="base.50">
{props.children}
</Button>
);


@@ -30,24 +30,24 @@ export const CanvasContextMenuGlobalMenuItems = memo(() => {
<CanvasContextMenuItemsCropCanvasToBbox />
</MenuGroup>
<MenuGroup title={t('controlLayers.canvasContextMenu.saveToGalleryGroup')}>
<MenuItem icon={<PiFloppyDiskBold />} isDisabled={isBusy} onPointerUp={saveCanvasToGallery}>
<MenuItem icon={<PiFloppyDiskBold />} isDisabled={isBusy} onClick={saveCanvasToGallery}>
{t('controlLayers.canvasContextMenu.saveCanvasToGallery')}
</MenuItem>
<MenuItem icon={<PiFloppyDiskBold />} isDisabled={isBusy} onPointerUp={saveBboxToGallery}>
<MenuItem icon={<PiFloppyDiskBold />} isDisabled={isBusy} onClick={saveBboxToGallery}>
{t('controlLayers.canvasContextMenu.saveBboxToGallery')}
</MenuItem>
</MenuGroup>
<MenuGroup title={t('controlLayers.canvasContextMenu.bboxGroup')}>
<MenuItem icon={<NewLayerIcon />} isDisabled={isBusy} onPointerUp={newGlobalReferenceImageFromBbox}>
<MenuItem icon={<NewLayerIcon />} isDisabled={isBusy} onClick={newGlobalReferenceImageFromBbox}>
{t('controlLayers.canvasContextMenu.newGlobalReferenceImage')}
</MenuItem>
<MenuItem icon={<NewLayerIcon />} isDisabled={isBusy} onPointerUp={newRegionalReferenceImageFromBbox}>
<MenuItem icon={<NewLayerIcon />} isDisabled={isBusy} onClick={newRegionalReferenceImageFromBbox}>
{t('controlLayers.canvasContextMenu.newRegionalReferenceImage')}
</MenuItem>
<MenuItem icon={<NewLayerIcon />} isDisabled={isBusy} onPointerUp={newControlLayerFromBbox}>
<MenuItem icon={<NewLayerIcon />} isDisabled={isBusy} onClick={newControlLayerFromBbox}>
{t('controlLayers.canvasContextMenu.newControlLayer')}
</MenuItem>
<MenuItem icon={<NewLayerIcon />} isDisabled={isBusy} onPointerUp={newRasterLayerFromBbox}>
<MenuItem icon={<NewLayerIcon />} isDisabled={isBusy} onClick={newRasterLayerFromBbox}>
{t('controlLayers.canvasContextMenu.newRasterLayer')}
</MenuItem>
</MenuGroup>


@@ -17,7 +17,7 @@ export const CanvasContextMenuItemsCropCanvasToBbox = memo(() => {
}, [canvasManager]);
return (
<MenuItem icon={<PiCropBold />} isDisabled={isBusy} onPointerUp={cropCanvasToBbox}>
<MenuItem icon={<PiCropBold />} isDisabled={isBusy} onClick={cropCanvasToBbox}>
{t('controlLayers.canvasContextMenu.cropCanvasToBbox')}
</MenuItem>
);


@@ -8,6 +8,7 @@ import type {
} from 'features/dnd/types';
import { useImageViewer } from 'features/gallery/components/ImageViewer/useImageViewer';
import { memo } from 'react';
import { useTranslation } from 'react-i18next';
const addRasterLayerFromImageDropData: AddRasterLayerFromImageDropData = {
id: 'add-raster-layer-from-image-drop-data',
@@ -30,6 +31,7 @@ const addGlobalReferenceImageFromImageDropData: AddGlobalReferenceImageFromImage
};
export const CanvasDropArea = memo(() => {
const { t } = useTranslation();
const imageViewer = useImageViewer();
if (imageViewer.isOpen) {
@@ -49,16 +51,28 @@ export const CanvasDropArea = memo(() => {
pointerEvents="none"
>
<GridItem position="relative">
<IAIDroppable dropLabel="New Raster Layer" data={addRasterLayerFromImageDropData} />
<IAIDroppable
dropLabel={t('controlLayers.canvasContextMenu.newRasterLayer')}
data={addRasterLayerFromImageDropData}
/>
</GridItem>
<GridItem position="relative">
<IAIDroppable dropLabel="New Control Layer" data={addControlLayerFromImageDropData} />
<IAIDroppable
dropLabel={t('controlLayers.canvasContextMenu.newControlLayer')}
data={addControlLayerFromImageDropData}
/>
</GridItem>
<GridItem position="relative">
<IAIDroppable dropLabel="New Regional Reference Image" data={addRegionalReferenceImageFromImageDropData} />
<IAIDroppable
dropLabel={t('controlLayers.canvasContextMenu.newRegionalReferenceImage')}
data={addRegionalReferenceImageFromImageDropData}
/>
</GridItem>
<GridItem position="relative">
<IAIDroppable dropLabel="New Global Reference Image" data={addGlobalReferenceImageFromImageDropData} />
<IAIDroppable
dropLabel={t('controlLayers.canvasContextMenu.newGlobalReferenceImage')}
data={addGlobalReferenceImageFromImageDropData}
/>
</GridItem>
</Grid>
</>


@@ -40,26 +40,26 @@ export const EntityListGlobalActionBarAddLayerMenu = memo(() => {
/>
<MenuList>
<MenuGroup title={t('controlLayers.global')}>
<MenuItem icon={<PiPlusBold />} onPointerUp={addGlobalReferenceImage} isDisabled={isFLUX}>
<MenuItem icon={<PiPlusBold />} onClick={addGlobalReferenceImage} isDisabled={isFLUX}>
{t('controlLayers.globalReferenceImage')}
</MenuItem>
</MenuGroup>
<MenuGroup title={t('controlLayers.regional')}>
<MenuItem icon={<PiPlusBold />} onPointerUp={addInpaintMask}>
<MenuItem icon={<PiPlusBold />} onClick={addInpaintMask}>
{t('controlLayers.inpaintMask')}
</MenuItem>
<MenuItem icon={<PiPlusBold />} onPointerUp={addRegionalGuidance} isDisabled={isFLUX}>
<MenuItem icon={<PiPlusBold />} onClick={addRegionalGuidance} isDisabled={isFLUX}>
{t('controlLayers.regionalGuidance')}
</MenuItem>
<MenuItem icon={<PiPlusBold />} onPointerUp={addRegionalReferenceImage} isDisabled={isFLUX}>
<MenuItem icon={<PiPlusBold />} onClick={addRegionalReferenceImage} isDisabled={isFLUX}>
{t('controlLayers.regionalReferenceImage')}
</MenuItem>
</MenuGroup>
<MenuGroup title={t('controlLayers.layer_other')}>
<MenuItem icon={<PiPlusBold />} onPointerUp={addControlLayer} isDisabled={isFLUX}>
<MenuItem icon={<PiPlusBold />} onClick={addControlLayer}>
{t('controlLayers.controlLayer')}
</MenuItem>
<MenuItem icon={<PiPlusBold />} onPointerUp={addRasterLayer}>
<MenuItem icon={<PiPlusBold />} onClick={addRasterLayer}>
{t('controlLayers.rasterLayer')}
</MenuItem>
</MenuGroup>


@@ -12,7 +12,7 @@ export const EntityListSelectedEntityActionBarDuplicateButton = memo(() => {
const dispatch = useAppDispatch();
const isBusy = useCanvasIsBusy();
const selectedEntityIdentifier = useAppSelector(selectSelectedEntityIdentifier);
const onPointerUp = useCallback(() => {
const onClick = useCallback(() => {
if (!selectedEntityIdentifier) {
return;
}
@@ -21,7 +21,7 @@ export const EntityListSelectedEntityActionBarDuplicateButton = memo(() => {
return (
<IconButton
onPointerUp={onPointerUp}
onClick={onClick}
isDisabled={!selectedEntityIdentifier || isBusy}
size="sm"
variant="link"


@@ -22,7 +22,7 @@ export const EntityListSelectedEntityActionBarFilterButton = memo(() => {
return (
<IconButton
onPointerUp={filter.start}
onClick={filter.start}
isDisabled={filter.isDisabled}
size="sm"
variant="link"


@@ -157,7 +157,7 @@ export const EntityListSelectedEntityActionBarOpacity = memo(() => {
clampValueOnBlur={false}
variant="outline"
>
<NumberInputField paddingInlineEnd={7} _focusVisible={{ zIndex: 0 }} />
<NumberInputField paddingInlineEnd={7} _focusVisible={{ zIndex: 0 }} title="" />
<PopoverTrigger>
<IconButton
aria-label="open-slider"


@@ -15,7 +15,7 @@ export const EntityListSelectedEntityActionBarSaveToAssetsButton = memo(() => {
const selectedEntityIdentifier = useAppSelector(selectSelectedEntityIdentifier);
const adapter = useEntityAdapterSafe(selectedEntityIdentifier);
const saveLayerToAssets = useSaveLayerToAssets();
const onPointerUp = useCallback(() => {
const onClick = useCallback(() => {
saveLayerToAssets(adapter);
}, [saveLayerToAssets, adapter]);
@@ -29,7 +29,7 @@ export const EntityListSelectedEntityActionBarSaveToAssetsButton = memo(() => {
return (
<IconButton
onPointerUp={onPointerUp}
onClick={onClick}
isDisabled={!selectedEntityIdentifier || isBusy}
size="sm"
variant="link"


@@ -22,7 +22,7 @@ export const EntityListSelectedEntityActionBarTransformButton = memo(() => {
return (
<IconButton
onPointerUp={transform.start}
onClick={transform.start}
isDisabled={transform.isDisabled}
size="sm"
variant="link"


@@ -20,7 +20,7 @@ export const CanvasRightPanel = memo(() => {
const { t } = useTranslation();
const tabIndex = useStore($canvasRightPanelTabIndex);
const imageViewer = useImageViewer();
const onPointerUpViewerToggleButton = useCallback(() => {
const onClickViewerToggleButton = useCallback(() => {
if ($canvasRightPanelTabIndex.get() !== 1) {
$canvasRightPanelTabIndex.set(1);
}
@@ -38,7 +38,7 @@ export const CanvasRightPanel = memo(() => {
<TabList alignItems="center">
<PanelTabs />
<Spacer />
<Button size="sm" variant="ghost" onPointerUp={onPointerUpViewerToggleButton}>
<Button size="sm" variant="ghost" onClick={onClickViewerToggleButton}>
{imageViewer.isOpen ? t('gallery.closeViewer') : t('gallery.openViewer')}
</Button>
</TabList>


@@ -16,6 +16,7 @@ import {
controlLayerModelChanged,
controlLayerWeightChanged,
} from 'features/controlLayers/store/canvasSlice';
import { selectIsFLUX } from 'features/controlLayers/store/paramsSlice';
import { selectCanvasSlice, selectEntityOrThrow } from 'features/controlLayers/store/selectors';
import type { CanvasEntityIdentifier, ControlModeV2 } from 'features/controlLayers/store/types';
import { memo, useCallback, useMemo } from 'react';
@@ -42,6 +43,7 @@ export const ControlLayerControlAdapter = memo(() => {
const entityIdentifier = useEntityIdentifierContext('control_layer');
const controlAdapter = useControlLayerControlAdapter(entityIdentifier);
const filter = useEntityFilter(entityIdentifier);
const isFLUX = useAppSelector(selectIsFLUX);
const onChangeBeginEndStepPct = useCallback(
(beginEndStepPct: [number, number]) => {
@@ -84,7 +86,7 @@ export const ControlLayerControlAdapter = memo(() => {
<Flex w="full" gap={2}>
<ControlLayerControlAdapterModel modelKey={controlAdapter.model?.key ?? null} onChange={onChangeModel} />
<IconButton
onPointerUp={filter.start}
onClick={filter.start}
isDisabled={filter.isDisabled}
size="sm"
alignSelf="stretch"
@@ -94,7 +96,7 @@ export const ControlLayerControlAdapter = memo(() => {
icon={<PiShootingStarBold />}
/>
<IconButton
onPointerUp={pullBboxIntoLayer}
onClick={pullBboxIntoLayer}
isDisabled={isBusy}
size="sm"
alignSelf="stretch"
@@ -117,7 +119,7 @@ export const ControlLayerControlAdapter = memo(() => {
</Flex>
<Weight weight={controlAdapter.weight} onChange={onChangeWeight} />
<BeginEndStepPct beginEndStepPct={controlAdapter.beginEndStepPct} onChange={onChangeBeginEndStepPct} />
{controlAdapter.type === 'controlnet' && (
{controlAdapter.type === 'controlnet' && !isFLUX && (
<ControlLayerControlAdapterControlMode
controlMode={controlAdapter.controlMode}
onChange={onChangeControlMode}


@@ -1,4 +1,5 @@
import { MenuDivider } from '@invoke-ai/ui-library';
import { IconMenuItemGroup } from 'common/components/IconMenuItem';
import { CanvasEntityMenuItemsArrange } from 'features/controlLayers/components/common/CanvasEntityMenuItemsArrange';
import { CanvasEntityMenuItemsCopyToClipboard } from 'features/controlLayers/components/common/CanvasEntityMenuItemsCopyToClipboard';
import { CanvasEntityMenuItemsCropToBbox } from 'features/controlLayers/components/common/CanvasEntityMenuItemsCropToBbox';
@@ -14,18 +15,20 @@ import { memo } from 'react';
export const ControlLayerMenuItems = memo(() => {
return (
<>
<IconMenuItemGroup>
<CanvasEntityMenuItemsArrange />
<CanvasEntityMenuItemsDuplicate />
<CanvasEntityMenuItemsDelete asIcon />
</IconMenuItemGroup>
<MenuDivider />
<CanvasEntityMenuItemsTransform />
<CanvasEntityMenuItemsFilter />
<ControlLayerMenuItemsConvertControlToRaster />
<ControlLayerMenuItemsTransparencyEffect />
<MenuDivider />
<CanvasEntityMenuItemsArrange />
<MenuDivider />
<CanvasEntityMenuItemsCropToBbox />
<CanvasEntityMenuItemsDuplicate />
<CanvasEntityMenuItemsCopyToClipboard />
<CanvasEntityMenuItemsSave />
<CanvasEntityMenuItemsDelete />
</>
);
});


@@ -18,7 +18,7 @@ export const ControlLayerMenuItemsConvertControlToRaster = memo(() => {
}, [dispatch, entityIdentifier]);
return (
<MenuItem onPointerUp={convertControlLayerToRasterLayer} icon={<PiLightningBold />} isDisabled={!isInteractable}>
<MenuItem onClick={convertControlLayerToRasterLayer} icon={<PiLightningBold />} isDisabled={!isInteractable}>
{t('controlLayers.convertToRasterLayer')}
</MenuItem>
);


@@ -28,7 +28,7 @@ export const ControlLayerMenuItemsTransparencyEffect = memo(() => {
}, [dispatch, entityIdentifier]);
return (
<MenuItem onPointerUp={onToggle} icon={<PiDropHalfBold />} isDisabled={!isInteractable}>
<MenuItem onClick={onToggle} icon={<PiDropHalfBold />} isDisabled={!isInteractable}>
{withTransparencyEffect
? t('controlLayers.disableTransparencyEffect')
: t('controlLayers.enableTransparencyEffect')}


@@ -109,7 +109,7 @@ const FilterContent = memo(
<Button
variant="ghost"
leftIcon={<PiShootingStarBold />}
onPointerUp={adapter.filterer.processImmediate}
onClick={adapter.filterer.processImmediate}
isLoading={isProcessing}
loadingText={t('controlLayers.filter.process')}
isDisabled={!isValid || autoProcessFilter}
@@ -119,7 +119,7 @@ const FilterContent = memo(
<Spacer />
<Button
leftIcon={<PiArrowsCounterClockwiseBold />}
onPointerUp={adapter.filterer.reset}
onClick={adapter.filterer.reset}
isLoading={isProcessing}
loadingText={t('controlLayers.filter.reset')}
variant="ghost"
@@ -129,7 +129,7 @@ const FilterContent = memo(
<Button
variant="ghost"
leftIcon={<PiCheckBold />}
onPointerUp={adapter.filterer.apply}
onClick={adapter.filterer.apply}
isLoading={isProcessing}
loadingText={t('controlLayers.filter.apply')}
isDisabled={!isValid || !hasProcessed}
@@ -139,7 +139,7 @@ const FilterContent = memo(
<Button
variant="ghost"
leftIcon={<PiXBold />}
onPointerUp={adapter.filterer.cancel}
onClick={adapter.filterer.cancel}
loadingText={t('controlLayers.filter.cancel')}
>
{t('controlLayers.filter.cancel')}


@@ -49,7 +49,16 @@ export const IPAdapterImagePreview = memo(({ image, onChangeImage, droppableData
}, [handleResetControlImage, isConnected, isErrorControlImage]);
return (
<Flex position="relative" w="full" h="full" alignItems="center">
<Flex
position="relative"
w="full"
h="full"
alignItems="center"
borderColor="error.500"
borderStyle="solid"
borderWidth={controlImage ? 0 : 1}
borderRadius="base"
>
<IAIDndImage
draggableData={draggableData}
droppableData={droppableData}
@@ -60,7 +69,7 @@ export const IPAdapterImagePreview = memo(({ image, onChangeImage, droppableData
{controlImage && (
<Flex position="absolute" flexDir="column" top={2} insetInlineEnd={2} gap={1}>
<IAIDndImageIcon
onPointerUp={handleResetControlImage}
onClick={handleResetControlImage}
icon={<PiArrowCounterClockwiseBold size={16} />}
tooltip={t('common.reset')}
/>


@@ -1,4 +1,4 @@
import { MenuDivider } from '@invoke-ai/ui-library';
import { IconMenuItemGroup } from 'common/components/IconMenuItem';
import { CanvasEntityMenuItemsArrange } from 'features/controlLayers/components/common/CanvasEntityMenuItemsArrange';
import { CanvasEntityMenuItemsDelete } from 'features/controlLayers/components/common/CanvasEntityMenuItemsDelete';
import { CanvasEntityMenuItemsDuplicate } from 'features/controlLayers/components/common/CanvasEntityMenuItemsDuplicate';
@@ -6,12 +6,11 @@ import { memo } from 'react';
export const IPAdapterMenuItems = memo(() => {
return (
<>
<IconMenuItemGroup>
<CanvasEntityMenuItemsArrange />
<MenuDivider />
<CanvasEntityMenuItemsDuplicate />
<CanvasEntityMenuItemsDelete />
</>
<CanvasEntityMenuItemsDelete asIcon />
</IconMenuItemGroup>
);
});


@@ -103,7 +103,7 @@ export const IPAdapterSettings = memo(() => {
/>
</Box>
<IconButton
onPointerUp={pullBboxIntoIPAdapter}
onClick={pullBboxIntoIPAdapter}
isDisabled={isBusy}
variant="ghost"
aria-label={t('controlLayers.pullBboxIntoReferenceImage')}


@@ -1,4 +1,5 @@
import { MenuDivider } from '@invoke-ai/ui-library';
import { IconMenuItemGroup } from 'common/components/IconMenuItem';
import { CanvasEntityMenuItemsArrange } from 'features/controlLayers/components/common/CanvasEntityMenuItemsArrange';
import { CanvasEntityMenuItemsCropToBbox } from 'features/controlLayers/components/common/CanvasEntityMenuItemsCropToBbox';
import { CanvasEntityMenuItemsDelete } from 'features/controlLayers/components/common/CanvasEntityMenuItemsDelete';
@@ -9,13 +10,15 @@ import { memo } from 'react';
export const InpaintMaskMenuItems = memo(() => {
return (
<>
<IconMenuItemGroup>
<CanvasEntityMenuItemsArrange />
<CanvasEntityMenuItemsDuplicate />
<CanvasEntityMenuItemsDelete asIcon />
</IconMenuItemGroup>
<MenuDivider />
<CanvasEntityMenuItemsTransform />
<MenuDivider />
<CanvasEntityMenuItemsArrange />
<MenuDivider />
<CanvasEntityMenuItemsCropToBbox />
<CanvasEntityMenuItemsDuplicate />
<CanvasEntityMenuItemsDelete />
</>
);
});


@@ -1,5 +1,6 @@
import { Checkbox, ConfirmationAlertDialog, Flex, FormControl, FormLabel, Text } from '@invoke-ai/ui-library';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { useAssertSingleton } from 'common/hooks/useAssertSingleton';
import { buildUseBoolean } from 'common/hooks/useBoolean';
import { newCanvasSessionRequested, newGallerySessionRequested } from 'features/controlLayers/store/actions';
import {
@@ -65,6 +66,7 @@ export const useNewCanvasSession = () => {
};
export const NewGallerySessionDialog = memo(() => {
useAssertSingleton('NewGallerySessionDialog');
const { t } = useTranslation();
const dispatch = useAppDispatch();
@@ -100,6 +102,7 @@ export const NewGallerySessionDialog = memo(() => {
NewGallerySessionDialog.displayName = 'NewGallerySessionDialog';
export const NewCanvasSessionDialog = memo(() => {
useAssertSingleton('NewCanvasSessionDialog');
const { t } = useTranslation();
const dispatch = useAppDispatch();


@@ -1,4 +1,5 @@
import { MenuDivider } from '@invoke-ai/ui-library';
import { IconMenuItemGroup } from 'common/components/IconMenuItem';
import { CanvasEntityMenuItemsArrange } from 'features/controlLayers/components/common/CanvasEntityMenuItemsArrange';
import { CanvasEntityMenuItemsCopyToClipboard } from 'features/controlLayers/components/common/CanvasEntityMenuItemsCopyToClipboard';
import { CanvasEntityMenuItemsCropToBbox } from 'features/controlLayers/components/common/CanvasEntityMenuItemsCropToBbox';
@@ -13,17 +14,19 @@ import { memo } from 'react';
export const RasterLayerMenuItems = memo(() => {
return (
<>
<IconMenuItemGroup>
<CanvasEntityMenuItemsArrange />
<CanvasEntityMenuItemsDuplicate />
<CanvasEntityMenuItemsDelete asIcon />
</IconMenuItemGroup>
<MenuDivider />
<CanvasEntityMenuItemsTransform />
<CanvasEntityMenuItemsFilter />
<RasterLayerMenuItemsConvertRasterToControl />
<MenuDivider />
<CanvasEntityMenuItemsArrange />
<MenuDivider />
<CanvasEntityMenuItemsCropToBbox />
<CanvasEntityMenuItemsDuplicate />
<CanvasEntityMenuItemsCopyToClipboard />
<CanvasEntityMenuItemsSave />
<CanvasEntityMenuItemsDelete />
</>
);
});


@@ -15,7 +15,7 @@ export const RasterLayerMenuItemsConvertRasterToControl = memo(() => {
const defaultControlAdapter = useAppSelector(selectDefaultControlAdapter);
const isInteractable = useIsEntityInteractable(entityIdentifier);
const onPointerUp = useCallback(() => {
const onClick = useCallback(() => {
dispatch(
rasterLayerConvertedToControlLayer({
entityIdentifier,
@@ -27,7 +27,7 @@ export const RasterLayerMenuItemsConvertRasterToControl = memo(() => {
}, [defaultControlAdapter, dispatch, entityIdentifier]);
return (
<MenuItem onPointerUp={onPointerUp} icon={<PiLightningBold />} isDisabled={!isInteractable}>
<MenuItem onClick={onClick} icon={<PiLightningBold />} isDisabled={!isInteractable}>
{t('controlLayers.convertToControlLayer')}
</MenuItem>
);


@@ -30,7 +30,7 @@ export const RegionalGuidanceAddPromptsIPAdapterButtons = () => {
size="sm"
variant="ghost"
leftIcon={<PiPlusBold />}
onPointerUp={addRegionalGuidancePositivePrompt}
onClick={addRegionalGuidancePositivePrompt}
isDisabled={!validActions.canAddPositivePrompt}
>
{t('controlLayers.prompt')}
@@ -39,12 +39,12 @@ export const RegionalGuidanceAddPromptsIPAdapterButtons = () => {
size="sm"
variant="ghost"
leftIcon={<PiPlusBold />}
onPointerUp={addRegionalGuidanceNegativePrompt}
onClick={addRegionalGuidanceNegativePrompt}
isDisabled={!validActions.canAddNegativePrompt}
>
{t('controlLayers.negativePrompt')}
</Button>
<Button size="sm" variant="ghost" leftIcon={<PiPlusBold />} onPointerUp={addRegionalGuidanceIPAdapter}>
<Button size="sm" variant="ghost" leftIcon={<PiPlusBold />} onClick={addRegionalGuidanceIPAdapter}>
{t('controlLayers.referenceImage')}
</Button>
</Flex>


@@ -15,7 +15,7 @@ export const RegionalGuidanceDeletePromptButton = memo(({ onDelete }: Props) =>
variant="link"
aria-label={t('controlLayers.deletePrompt')}
icon={<PiTrashSimpleFill />}
onPointerUp={onDelete}
onClick={onDelete}
flexGrow={0}
size="sm"
p={0}


@@ -120,7 +120,7 @@ export const RegionalGuidanceIPAdapterSettings = memo(({ referenceImageId }: Pro
icon={<PiTrashSimpleFill />}
tooltip={t('controlLayers.deleteReferenceImage')}
aria-label={t('controlLayers.deleteReferenceImage')}
onPointerUp={onDeleteIPAdapter}
onClick={onDeleteIPAdapter}
colorScheme="error"
/>
</Flex>
@@ -135,7 +135,7 @@ export const RegionalGuidanceIPAdapterSettings = memo(({ referenceImageId }: Pro
/>
</Box>
<IconButton
onPointerUp={pullBboxIntoIPAdapter}
onClick={pullBboxIntoIPAdapter}
isDisabled={isBusy}
variant="ghost"
aria-label={t('controlLayers.pullBboxIntoReferenceImage')}


@@ -1,4 +1,4 @@
import { MenuDivider } from '@invoke-ai/ui-library';
import { Flex, MenuDivider } from '@invoke-ai/ui-library';
import { CanvasEntityMenuItemsArrange } from 'features/controlLayers/components/common/CanvasEntityMenuItemsArrange';
import { CanvasEntityMenuItemsCropToBbox } from 'features/controlLayers/components/common/CanvasEntityMenuItemsCropToBbox';
import { CanvasEntityMenuItemsDelete } from 'features/controlLayers/components/common/CanvasEntityMenuItemsDelete';
@@ -11,16 +11,18 @@ import { memo } from 'react';
export const RegionalGuidanceMenuItems = memo(() => {
return (
<>
<Flex gap={2}>
<CanvasEntityMenuItemsArrange />
<CanvasEntityMenuItemsDuplicate />
<CanvasEntityMenuItemsDelete asIcon />
</Flex>
<MenuDivider />
<RegionalGuidanceMenuItemsAddPromptsAndIPAdapter />
<MenuDivider />
<CanvasEntityMenuItemsTransform />
<RegionalGuidanceMenuItemsAutoNegative />
<MenuDivider />
<CanvasEntityMenuItemsArrange />
<MenuDivider />
<CanvasEntityMenuItemsCropToBbox />
<CanvasEntityMenuItemsDuplicate />
<CanvasEntityMenuItemsDelete />
</>
);
});


@@ -26,19 +26,13 @@ export const RegionalGuidanceMenuItemsAddPromptsAndIPAdapter = memo(() => {
return (
<>
<MenuItem
onPointerUp={addRegionalGuidancePositivePrompt}
isDisabled={!validActions.canAddPositivePrompt || isBusy}
>
<MenuItem onClick={addRegionalGuidancePositivePrompt} isDisabled={!validActions.canAddPositivePrompt || isBusy}>
{t('controlLayers.addPositivePrompt')}
</MenuItem>
<MenuItem
onPointerUp={addRegionalGuidanceNegativePrompt}
isDisabled={!validActions.canAddNegativePrompt || isBusy}
>
<MenuItem onClick={addRegionalGuidanceNegativePrompt} isDisabled={!validActions.canAddNegativePrompt || isBusy}>
{t('controlLayers.addNegativePrompt')}
</MenuItem>
<MenuItem onPointerUp={addRegionalGuidanceIPAdapter} isDisabled={isBusy}>
<MenuItem onClick={addRegionalGuidanceIPAdapter} isDisabled={isBusy}>
{t('controlLayers.addReferenceImage')}
</MenuItem>
</>


@@ -17,12 +17,12 @@ export const RegionalGuidanceMenuItemsAutoNegative = memo(() => {
[entityIdentifier]
);
const autoNegative = useAppSelector(selectAutoNegative);
const onPointerUp = useCallback(() => {
const onClick = useCallback(() => {
dispatch(rgAutoNegativeToggled({ entityIdentifier }));
}, [dispatch, entityIdentifier]);
return (
<MenuItem icon={<PiSelectionInverseBold />} onPointerUp={onPointerUp}>
<MenuItem icon={<PiSelectionInverseBold />} onClick={onClick}>
{autoNegative ? t('controlLayers.disableAutoNegative') : t('controlLayers.enableAutoNegative')}
</MenuItem>
);


@@ -10,7 +10,7 @@ export const CanvasSettingsClearCachesButton = memo(() => {
canvasManager.cache.clearAll();
}, [canvasManager]);
return (
<Button onPointerUp={clearCaches} size="sm" colorScheme="warning">
<Button onClick={clearCaches} size="sm" colorScheme="warning">
{t('controlLayers.clearCaches')}
</Button>
);


@@ -7,11 +7,11 @@ import { useTranslation } from 'react-i18next';
export const CanvasSettingsClearHistoryButton = memo(() => {
const { t } = useTranslation();
const dispatch = useAppDispatch();
const onPointerUp = useCallback(() => {
const onClick = useCallback(() => {
dispatch(canvasClearHistory());
}, [dispatch]);
return (
<Button onPointerUp={onPointerUp} size="sm">
<Button onClick={onClick} size="sm">
{t('controlLayers.clearHistory')}
</Button>
);


@@ -6,11 +6,11 @@ import { useTranslation } from 'react-i18next';
export const CanvasSettingsLogDebugInfoButton = memo(() => {
const { t } = useTranslation();
const canvasManager = useCanvasManager();
const onPointerUp = useCallback(() => {
const onClick = useCallback(() => {
canvasManager.logDebugInfo();
}, [canvasManager]);
return (
<Button onPointerUp={onPointerUp} size="sm">
<Button onClick={onClick} size="sm">
{t('controlLayers.logDebugInfo')}
</Button>
);


@@ -6,14 +6,14 @@ import { useTranslation } from 'react-i18next';
export const CanvasSettingsRecalculateRectsButton = memo(() => {
const { t } = useTranslation();
const canvasManager = useCanvasManager();
const onPointerUp = useCallback(() => {
const onClick = useCallback(() => {
for (const adapter of canvasManager.getAllAdapters()) {
adapter.transformer.requestRectCalculation();
}
}, [canvasManager]);
return (
<Button onPointerUp={onPointerUp} size="sm" colorScheme="warning">
<Button onClick={onClick} size="sm" colorScheme="warning">
{t('controlLayers.recalculateRects')}
</Button>
);


@@ -60,7 +60,7 @@ export const StagingAreaToolbarAcceptButton = memo(() => {
tooltip={`${t('common.accept')} (Enter)`}
aria-label={`${t('common.accept')} (Enter)`}
icon={<PiCheckBold />}
onPointerUp={acceptSelected}
onClick={acceptSelected}
colorScheme="invokeBlue"
isDisabled={!selectedImage}
/>

Some files were not shown because too many files have changed in this diff.
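The recurring change across these hunks replaces `onPointerUp` handlers with `onClick` on buttons and menu items. A minimal sketch of the likely rationale (an assumption; the diff does not state it): browsers synthesize a `click` event for mouse, touch, and keyboard (Enter/Space) activation of a button, whereas `pointerup` only fires for pointer input, so a `pointerup`-only handler silently does nothing for keyboard users.

```typescript
// Hypothetical model (illustration only, not code from this diff) of which
// activation methods trigger each handler on a <button>.
type Activation = 'mouse' | 'touch' | 'keyboard';

// "click" is the canonical activation event: fired for every method,
// including keyboard activation via Enter/Space.
const firesClick = (_activation: Activation): boolean => true;

// "pointerup" is pointer-only; keyboard activation never fires it.
const firesPointerUp = (activation: Activation): boolean =>
  activation !== 'keyboard';

for (const method of ['mouse', 'touch', 'keyboard'] as const) {
  console.log(`${method}: click=${firesClick(method)} pointerup=${firesPointerUp(method)}`);
}
```

Under this model, switching the components to `onClick` restores keyboard accessibility without changing mouse or touch behavior.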