Compare commits


358 Commits

Author SHA1 Message Date
psychedelicious
ceae1dc04f chore: bump version to v5.4.1 2024-11-15 11:21:24 +11:00
psychedelicious
4b390906bc fix(ui): multiple selection dnd sometimes doesn't get full selection
Turns out a gallery image's `imageDTO` object can actually be a different object by reference. I thought this was not possible thanks to our quasi-normalized cache.

We need to check against the image name instead of reference equality when deciding whether to use the single image or the gallery selection for the dnd payload (sketched below).
2024-11-15 11:21:03 +11:00
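
A minimal sketch of the check described in the commit above; `buildDndPayload` and the argument shapes are illustrative, not the actual code:

```ts
type ImageDTO = { image_name: string };

// Compare by image_name, not by object reference - the gallery may hold a
// different object instance for the "same" image.
const buildDndPayload = (draggedImage: ImageDTO, gallerySelection: ImageDTO[]): ImageDTO[] => {
  const isInSelection = gallerySelection.some((dto) => dto.image_name === draggedImage.image_name);
  return isInSelection ? gallerySelection : [draggedImage];
};
```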
psychedelicious
c5b8efe03b fix(ui): unable to use text inputs within draggable 2024-11-15 10:25:30 +11:00
psychedelicious
4d08d00ad8 chore(ui): knip 2024-11-14 13:38:40 -08:00
psychedelicious
9b0130262b fix(ui): use silent upload for single-image upload buttons 2024-11-14 13:38:40 -08:00
psychedelicious
878093f64e fix(ui): image uploading handling
Rework uploadImage and uploadImages helpers and the RTK listener, ensuring gallery view isn't changed unexpectedly and preventing extraneous toasts.

Fix staging area save to gallery button to essentially make a copy of the image, instead of changing its intermediate status.
2024-11-14 13:38:40 -08:00
psychedelicious
d5ff7ef250 feat(ui): update output only masked regions
- New name: "Output only Generated Regions"
- New default: true (this was the intention, but at some point the behaviour of the setting was inverted without the default being changed)
2024-11-14 13:35:55 -08:00
psychedelicious
f36583f866 feat(ui): tweak image selection/hover styling
The styling in gallery for selected vs hovered was very similar, leading users to think that the hovered image was also selected.

Reducing the borders for hovered images to a single pixel makes it easier to distinguish between selected and hovered.
2024-11-14 16:28:53 -05:00
psychedelicious
829bc1bc7d feat(ui): progress alert config setting
- Add `invocationProgressAlert` as a disable-able feature. Hide the alert and the setting in system settings when disabled.
- Fix merge conflict
2024-11-15 05:49:05 +11:00
Mary Hipp
17c7b57145 (ui): make detailed progress view a setting that can be hidden 2024-11-15 05:49:05 +11:00
psychedelicious
6a12189542 feat(ui): updated progress event display
- Tweak layout/styling of alerts for consistent spacing
- Append the percentage to the message when one is available
- Only show events when the destination is canvas (so workflow events, for example, are hidden)
2024-11-15 05:49:05 +11:00
psychedelicious
96a31a5563 feat(app): add more events when loading/running models 2024-11-15 05:49:05 +11:00
psychedelicious
067747eca9 feat(app): tweak model load events
- Pass in the `UtilInterface` to the `ModelsInterface` so we can call the simple `signal_progress` method instead of the complicated `emit_invocation_progress` method.
- Only emit load events when starting to load - not after.
- Add more detail to the messages, like submodel type
2024-11-15 05:49:05 +11:00
Mary Hipp
c7878fddc6 (pytest) mock emit_invocation_progress on events service 2024-11-15 05:49:05 +11:00
maryhipp
54c51e0a06 (worker) add progress images for downloading remote models 2024-11-15 05:49:05 +11:00
Mary Hipp
1640ea0298 (pytest) add missing arg for mocked context 2024-11-15 05:49:05 +11:00
Mary Hipp
0c32ae9775 (pytest) fix import 2024-11-15 05:49:05 +11:00
maryhipp
fdb8ca5165 (worker) use source if name is not available 2024-11-15 05:49:05 +11:00
Mary Hipp
571faf6d7c (pytest) add queue_item and invocation to data in context for test 2024-11-15 05:49:05 +11:00
Mary Hipp
bdbdb22b74 (ui) add Canvas Alert for invocation progress messages 2024-11-15 05:49:05 +11:00
maryhipp
9bbb5644af (worker) add invocation_progress events to model loading 2024-11-15 05:49:05 +11:00
Mary Hipp
e90ad19f22 (ui): update en string for full IP adapter 2024-11-14 10:07:42 -08:00
Ryan Dick
0ba11e8f73 SD3 Image-to-Image and Inpainting (#7295)
## Summary

Add support for SD3 image-to-image and inpainting. Similar to FLUX, the
implementation supports fractional denoise_start/denoise_end for more
fine-grained denoise strength control, and a gradient mask adjustment
schedule for smoother inpainting seams.

## Example
Workflow
![image](https://github.com/user-attachments/assets/ee598d77-be80-4ca7-9355-c3cbefa2ef43)

Result

![image](https://github.com/user-attachments/assets/43953fa7-0e4e-42b5-84e8-85cfeeeee00b)

## QA Instructions

- [x] Regression test of text-to-image
- [x] Test image-to-image without mask
- [x] Test that adjusting denoising_start allows fine-grained control of
amount of change in image-to-image
- [x] Test inpainting with mask
- [x] Smoke test SD1, SDXL, FLUX image-to-image to make sure there was
no regression with the frontend changes.

## Merge Plan


## Checklist

- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [x] _Tests added / updated (if applicable)_
- [x] _Documentation added / updated (if applicable)_
- [ ] _Updated `What's New` copy (if doing a release after this PR)_
2024-11-14 09:33:51 -08:00
Ryan Dick
1cf7600f5b Merge branch 'main' into ryan/sd3-image-to-image 2024-11-14 09:25:23 -08:00
Ryan Dick
4f9d12b872 Fix FLUX diffusers LoRA models with no .proj_mlp layers (#7313)
## Summary

Add support for FLUX diffusers LoRA models without `.proj_mlp` layers.

## Related Issues / Discussions

Closes #7129 

## QA Instructions

- [x] FLUX diffusers LoRA **without .proj_mlp** layers
- [x] FLUX diffusers LoRA **with .proj_mlp** layers
- [x] FLUX diffusers LoRA **without .proj_mlp** layers, quantized base
model
- [x] FLUX diffusers LoRA **with .proj_mlp** layers, quantized base
model

## Checklist

- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [x] _Tests added / updated (if applicable)_
- [x] _Documentation added / updated (if applicable)_
- [ ] _Updated `What's New` copy (if doing a release after this PR)_
2024-11-14 09:09:10 -08:00
Ryan Dick
68c3b0649b Add unit tests for FLUX diffusers LoRA without .proj_mlp layers. 2024-11-14 16:53:49 +00:00
Ryan Dick
8ef8bd4261 Add state dict tensor shapes for existing LoRA unit tests. 2024-11-14 16:53:49 +00:00
Ryan Dick
50897ba066 Add flag to optionally allow missing layer keys in FLUX lora loader. 2024-11-14 16:53:49 +00:00
Ryan Dick
3510643870 Support FLUX LoRAs without .proj_mlp layers. 2024-11-14 16:53:49 +00:00
Ryan Dick
ca9cb1c9ef Flux VAE broke for float16, force bfloat16 or float32 where compatible (#7213)
## Summary

The Flux VAE, like many VAEs, is broken if run using float16 inputs, returning black images due to NaNs.
This will fix the issue by forcing the VAE to run in bfloat16 or float32 where compatible.

## Related Issues / Discussions

Fix for issue https://github.com/invoke-ai/InvokeAI/issues/7208

## QA Instructions

Tested on macOS: the VAE works with float16 set in invoke.yaml and when left at the default.
I also briefly forced it down the float32 route to check that too.
Needs testing on CUDA / ROCm

## Merge Plan

It should be a straightforward merge.
2024-11-13 15:51:40 -08:00
Ryan Dick
b89caa02bd Merge branch 'main' into flux_vae_fp16_broke 2024-11-13 15:33:43 -08:00
Ryan Dick
eaf4e08c44 Use vae.parameters() for more efficient access of the first model parameter. 2024-11-13 23:32:40 +00:00
Darrell
fb19621361 Updated link to flux ip adapter model 2024-11-12 08:11:40 -05:00
Mary Hipp
9179619077 actually use optimized denoising 2024-11-08 20:46:08 -05:00
Mary Hipp
13cb5f0ba2 Merge remote-tracking branch 'origin/main' into ryan/sd3-image-to-image 2024-11-08 20:29:56 -05:00
Mary Hipp
7e52fc1c17 Merge branch 'ryan/sd3-image-to-image' of https://github.com/invoke-ai/InvokeAI into ryan/sd3-image-to-image 2024-11-08 20:14:24 -05:00
Mary Hipp
7f60a4a282 (ui): update more generation settings for SD3 linear UI 2024-11-08 20:14:13 -05:00
psychedelicious
3f880496f7 feat(ui): clarify denoising strength badge text 2024-11-09 08:38:41 +11:00
Ryan Dick
f05efd3270 Fix import for getInfill. 2024-11-08 20:42:44 +00:00
psychedelicious
79eb8172b6 feat(ui): update warnings on upscaling tab based on model arch
When an unsupported model architecture is selected, show that warning only, without the extra warnings (i.e. no "missing tile controlnet" warning)

Update Invoke tooltip warnings accordingly

Closes #7239
Closes #7177
2024-11-09 07:34:03 +11:00
Ryan Dick
7732b5d478 Fix bug related to i2l nodes during graph construction of image-to-image workflows. 2024-11-08 20:15:34 +00:00
Mary Hipp
a2a1934b66 Merge branch 'ryan/sd3-image-to-image' of https://github.com/invoke-ai/InvokeAI into ryan/sd3-image-to-image 2024-11-08 13:43:19 -05:00
Mary Hipp
dff6570078 (ui) SD3 support in linear UI 2024-11-08 13:42:57 -05:00
maryhipp
04e4fb63af add SD3 generation modes for metadata validation 2024-11-08 13:13:58 -05:00
Vargol
83609d5008 Merge branch 'invoke-ai:main' into flux_vae_fp16_broke 2024-11-08 10:37:31 +00:00
David Burnett
2618ed0ae7 ruff complained 2024-11-08 10:31:53 +00:00
David Burnett
bb3cedddd5 Rework change based on comments 2024-11-08 10:27:47 +00:00
psychedelicious
5b3e1593ca fix(ui): restore missing image paste handler
Missed migrating this logic over during dnd migration.
2024-11-08 16:42:39 +11:00
psychedelicious
2d08078a7d fix(ui): fit bbox to layers math 2024-11-08 16:40:24 +11:00
psychedelicious
75acece1f1 fix(ui): excessive toasts when generating on canvas
- Add `withToast` flag to `uploadImage` util
- Skip the toast if this is not set
- Use the flag to disable toasts when canvas does internal image-uploading stuff that should be invisible to user
2024-11-08 10:30:04 +11:00
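
A rough sketch of the flag described above, assuming stand-in upload and toast helpers; the real `uploadImage` util has more options:

```ts
// Stand-ins for the app's real upload and toast utilities.
declare const uploadToServer: (file: File) => Promise<{ image_name: string }>;
declare const toast: (opts: { title: string }) => void;

type UploadImageArg = {
  file: File;
  // When false, skip the toast - used for internal canvas uploads that
  // should be invisible to the user.
  withToast?: boolean;
};

const uploadImage = async ({ file, withToast = true }: UploadImageArg) => {
  const imageDTO = await uploadToServer(file);
  if (withToast) {
    toast({ title: 'Image uploaded' });
  }
  return imageDTO;
};
```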
psychedelicious
a9db2ffefd fix(ui): ensure clip vision model is set correctly for FLUX IP Adapters 2024-11-08 10:02:41 +11:00
psychedelicious
cdd148b4d1 feat(ui): add toast for graph building errors 2024-11-08 10:02:41 +11:00
psychedelicious
730fabe2de feat(ui): add util to extract message from a tsafe AssertionError 2024-11-08 10:02:41 +11:00
psychedelicious
6c59790a7f chore: bump version to v5.4.1rc2 2024-11-08 10:00:20 +11:00
Ryan Dick
0e6cb91863 Update SD3 InpaintExtension with gradient adjustment to match FLUX. 2024-11-07 22:55:30 +00:00
Ryan Dick
a0fefcd43f Switch to using a custom scheduler implementation for SD3 rather than the diffusers FlowMatchEulerDiscreteScheduler. It is easier to work with and enables us to re-use the clip_timestep_schedule_fractional() utility from FLUX. 2024-11-07 22:46:52 +00:00
psychedelicious
c37251d6f7 tweak(ui): workflow linear field styling 2024-11-08 07:39:09 +11:00
psychedelicious
2854210162 fix(ui): dnd autoscroll on elements w/ custom scrollbar
Have to do a bit of finagling to get it to work and to get `pragmatic-drag-and-drop` to not complain.
2024-11-08 07:39:09 +11:00
psychedelicious
5545b980af fix(ui): workflow field sorting doesn't use unique identifier for fields 2024-11-08 07:39:09 +11:00
psychedelicious
0c9434c464 chore(ui): lint 2024-11-08 07:39:09 +11:00
psychedelicious
8771de917d feat(ui): migrate fullscreen drop zone to pdnd 2024-11-08 07:39:09 +11:00
psychedelicious
122946ef4c feat(ui): DndDropOverlay supports react node for label 2024-11-08 07:39:09 +11:00
psychedelicious
2d974f670c feat(ui): restore missing upload buttons 2024-11-08 07:39:09 +11:00
psychedelicious
75f0da9c35 fix(ui): use revised uploader for CL empty state 2024-11-08 07:39:09 +11:00
psychedelicious
5df3c00e28 feat(ui): remove SerializableObject, use type-fest's JsonObject 2024-11-08 07:39:09 +11:00
psychedelicious
b049880502 fix(ui): uploads initiated from canvas 2024-11-08 07:39:09 +11:00
psychedelicious
e5293fdd1a fix(ui): match new default controlnet behaviour 2024-11-08 07:39:09 +11:00
psychedelicious
8883775762 feat(ui): rework image uploads (wip) 2024-11-08 07:39:09 +11:00
psychedelicious
cfadb313d2 fix(ui): ts issues 2024-11-08 07:39:09 +11:00
psychedelicious
b5cadd9a1a fix(ui): scroll issue w/ boards list 2024-11-08 07:39:09 +11:00
psychedelicious
5361b6e014 refactor(ui): image actions sep of concerns 2024-11-08 07:39:09 +11:00
psychedelicious
ff346172af feat(ui): use new image actions system for image menu 2024-11-08 07:39:09 +11:00
psychedelicious
92f660018b refactor(ui): dnd actions to image actions
We don't need a "dnd" image system. We need an "image action" system. We need to execute specific flows with images from various "origins":
- internal dnd e.g. from gallery
- external dnd e.g. user drags an image file into the browser
- direct file upload e.g. user clicks an upload button
- some other internal app button e.g. a context menu

The actions are now generalized to better support these various use-cases.
2024-11-08 07:39:09 +11:00
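
One way to model the generalized actions described above; the type and member names below are hypothetical:

```ts
// Where an image can come from when an action is initiated.
type ImageActionOrigin =
  | { type: 'internal-dnd'; imageNames: string[] } // e.g. dragged from the gallery
  | { type: 'external-dnd'; files: File[] } // e.g. an image file dragged into the browser
  | { type: 'upload-button'; files: File[] } // e.g. a direct file upload
  | { type: 'context-menu'; imageNames: string[] }; // e.g. some other internal app button

// What to do with the image(s), independent of where they came from.
type ImageActionTarget = 'new-raster-layer' | 'new-control-layer' | 'add-to-board';

type ImageAction = { origin: ImageActionOrigin; target: ImageActionTarget };
```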
psychedelicious
1afc2cba4e feat(ui): support different labels for external drop targets (e.g. uploads) 2024-11-08 07:39:09 +11:00
psychedelicious
ee8359242c feat(ui): more dnd cleanup and tidy 2024-11-08 07:39:09 +11:00
psychedelicious
f0c80a8d7a tidy(ui): dnd stuff 2024-11-08 07:39:09 +11:00
psychedelicious
8da9e7c1f6 fix(ui): min height for workflow image field drop target 2024-11-08 07:39:09 +11:00
psychedelicious
6d7a486e5b feat(ui): restore dnd to workflow fields 2024-11-08 07:39:09 +11:00
psychedelicious
57122c6aa3 feat(ui): layer reordering styling 2024-11-08 07:39:09 +11:00
psychedelicious
54abd8d4d1 feat(ui): dnd layer reordering (wip) 2024-11-08 07:39:09 +11:00
psychedelicious
06283cffed feat(ui): use custom drag previews for images 2024-11-08 07:39:09 +11:00
psychedelicious
27fa0e1140 tidy(ui): more efficient dnd overlay styling 2024-11-08 07:39:09 +11:00
psychedelicious
533d48abdb feat(ui): multi-image drag preview 2024-11-08 07:39:09 +11:00
psychedelicious
6845cae4c9 tidy(ui): move new dnd impl into features/dnd 2024-11-08 07:39:09 +11:00
psychedelicious
31c9acb1fa tidy(ui): clean up old dnd stuff 2024-11-08 07:39:09 +11:00
psychedelicious
fb5e462300 tidy(ui): document & clean up dnd 2024-11-08 07:39:09 +11:00
psychedelicious
2f3abc29b1 feat(ui): better types for getData 2024-11-08 07:39:09 +11:00
psychedelicious
c5c071f285 feat(ui): better type name 2024-11-08 07:39:09 +11:00
psychedelicious
93a3ed56e7 feat(ui): simpler dnd typing implementation 2024-11-08 07:39:09 +11:00
psychedelicious
406fc58889 feat(ui): migrate to pragmatic-drag-and-drop (wip 4) 2024-11-08 07:39:09 +11:00
psychedelicious
cf67d084fd feat(ui): migrate to pragmatic-drag-and-drop (wip 3) 2024-11-08 07:39:09 +11:00
psychedelicious
d4a95af14f perf(ui): more gallery perf improvements 2024-11-08 07:39:09 +11:00
psychedelicious
8c8e7102c2 perf(ui): improved gallery perf 2024-11-08 07:39:09 +11:00
psychedelicious
b6b9ea9d70 feat(ui): migrate to pragmatic-drag-and-drop (wip 2) 2024-11-08 07:39:09 +11:00
psychedelicious
63126950bc feat(ui): migrate to pragmatic-drag-and-drop (wip) 2024-11-08 07:39:09 +11:00
psychedelicious
29d63d5dea fix(app): silence pydantic protected namespace warning
Closes #7287
2024-11-08 07:36:50 +11:00
Ryan Dick
a5f8c23dee Add inpainting support for SD3. 2024-11-07 20:21:43 +00:00
Ryan Dick
7bb4ea57c6 Add SD3ImageToLatentsInvocation. 2024-11-07 16:07:57 +00:00
Ryan Dick
75dc961bcb Add image-to-image support for SD3 - WIP. 2024-11-07 15:48:35 +00:00
Vargol
a9a1f6ef21 Merge branch 'invoke-ai:main' into flux_vae_fp16_broke 2024-11-07 14:02:51 +00:00
Jonathan
aa40161f26 Update flux_denoise.py
Added a bool to allow the node user to add noise into the initial latents (default) or to leave them alone.
2024-11-07 14:02:20 +00:00
psychedelicious
6efa812874 chore(ui): bump version to v5.4.1rc1 2024-11-07 14:02:20 +00:00
psychedelicious
8a683f5a3c feat(ui): updated whats new handling and v5.4.1 items 2024-11-07 14:02:20 +00:00
Brandon Rising
f4b0b6a93d fix: Look in known subfolders for configs for clip variants 2024-11-07 14:02:20 +00:00
Brandon Rising
1337c33ad3 fix: Avoid downloading unsafe .bin files if a safetensors file is available 2024-11-07 14:02:20 +00:00
Jonathan
2f6b035138 Update flux_denoise.py
Added a bool to allow the node user to add noise into the initial latents (default) or to leave them alone.
2024-11-07 08:44:10 -05:00
psychedelicious
4f9ae44472 chore(ui): bump version to v5.4.1rc1 2024-11-07 12:19:28 +11:00
psychedelicious
c682330852 feat(ui): updated whats new handling and v5.4.1 items 2024-11-07 12:19:28 +11:00
Brandon Rising
c064257759 fix: Look in known subfolders for configs for clip variants 2024-11-07 12:01:02 +11:00
Brandon Rising
8a4c629576 fix: Avoid downloading unsafe .bin files if a safetensors file is available 2024-11-06 19:31:18 -05:00
David Burnett
496b02a3bc Same issue affects image2image, so do the same again 2024-11-06 17:47:22 -05:00
David Burnett
7b5efc2203 Flux VAE broke for float16, force bfloat16 or float32 where compatible 2024-11-06 17:47:22 -05:00
psychedelicious
a01d44f813 chore(ui): lint 2024-11-06 10:25:46 -05:00
psychedelicious
63fb3a15e9 feat(ui): default to no control model selected for control layers 2024-11-06 10:25:46 -05:00
psychedelicious
4d0837541b feat(ui): add simple mode filtering 2024-11-06 10:25:46 -05:00
psychedelicious
999809b4c7 fix(ui): minor viewer close button styling 2024-11-06 10:25:46 -05:00
psychedelicious
c452edfb9f feat(ui): add control layer empty state 2024-11-06 10:25:46 -05:00
psychedelicious
ad2cdbd8a2 feat(ui): tooltip for canvas preview image 2024-11-06 10:25:46 -05:00
psychedelicious
f15c24bfa7 feat(ui): add " (recommended)" to balanced control mode label 2024-11-06 10:25:46 -05:00
psychedelicious
d1f653f28c feat(ui): make default control end step 0.75 2024-11-06 10:25:46 -05:00
psychedelicious
244465d3a6 feat(ui): make default control weight 0.75 2024-11-06 10:25:46 -05:00
psychedelicious
c6236ab70c feat(ui): add menubar-ish header on comparison 2024-11-06 10:25:46 -05:00
psychedelicious
644d5cb411 feat(ui): add menubar-ish header on viewer 2024-11-06 10:25:46 -05:00
Riku
bb0a630416 fix(ui): adjust knip config to ignore parameter schema exports 2024-11-06 22:51:17 +11:00
Riku
2148ae9287 feat(ui): simplify parameter schema declaration and type inference 2024-11-06 22:51:17 +11:00
psychedelicious
42d242609c chore(gh): update pr template w/ reminder for what's new copy 2024-11-06 19:03:31 +11:00
psychedelicious
fd0a52392b feat(ui): added line about when denoising str is disabled 2024-11-06 19:01:33 +11:00
psychedelicious
e64415d59a feat(ui): revised logic to disable denoising str 2024-11-06 19:01:33 +11:00
psychedelicious
1871e0bdbf feat(ui): tweaked denoise str styling 2024-11-06 19:01:33 +11:00
Mary Hipp
3ae9a965c2 lint 2024-11-06 19:01:33 +11:00
Mary Hipp
85932e35a7 update copy again 2024-11-06 19:01:33 +11:00
Mary Hipp
41b07a56cc update popover copy and add image 2024-11-06 19:01:33 +11:00
Mary Hipp
54064c0cb8 fix(ui): match badge height to slider height so layout does not shift 2024-11-06 19:01:33 +11:00
Mary Hipp
68284b37fa remove opacity logic from WavyLine, add badge explaining disabled state, add translations 2024-11-06 19:01:33 +11:00
Mary Hipp
ae5bc6f5d6 feat(ui): move denoising strength to layers panel w/ visualization of how much change will be applied, only enable if 1+ enabled raster layer 2024-11-06 19:01:33 +11:00
Mary Hipp
6dc16c9f54 wip 2024-11-06 19:01:33 +11:00
Brandon Rising
faa9ac4e15 fix: get_clip_variant_type should never return None 2024-11-06 09:59:50 +11:00
Mary Hipp Rogers
d0460849b0 fix bad merge conflict (#7273)
Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2024-11-05 16:02:03 -05:00
Mary Hipp Rogers
bed3c2dd77 update Whats New for 5.3.1 (#7272)
Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2024-11-05 15:43:16 -05:00
Mary Hipp
916ddd17d7 fix(ui): fix link for infill method popover 2024-11-05 15:39:03 -05:00
Mary Hipp
accfa7407f fix undefined 2024-11-05 15:30:17 -05:00
Mary Hipp
908db31e48 feat(api,ui): allow Whats New module to get content from back-end 2024-11-05 15:30:17 -05:00
Mary Hipp
b70f632b26 fix(ui): add some feedback while layers are merging 2024-11-05 12:38:50 -05:00
Brandon Rising
d07a6385ab Always default to ClipVariantType.L instead of None 2024-11-05 12:03:40 -05:00
Brandon Rising
68df612fa1 fix: Never throw an exception when finding the clip variant type 2024-11-05 12:03:40 -05:00
psychedelicious
3b96c79461 chore: bump version to v5.4.0 2024-11-05 10:09:21 +11:00
psychedelicious
89bda5b983 Ryan/sd3 diffusers (#7222)
## Summary

Nodes to support SD3.5 txt2img generations
* adds SD3.5 to starter models
* adds default workflow for SD3.5 txt2img

## Related Issues / Discussions


## QA Instructions


## Merge Plan


## Checklist

- [ ] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
2024-11-05 08:21:28 +11:00
Brandon Rising
22bff1fb22 Fix conditional within filter_by_variant to not read all candidates as default 2024-11-04 12:42:09 -05:00
Mary Hipp
55ba6488d1 fix up types file 2024-11-04 12:42:09 -05:00
brandonrising
2d78859171 Create bespoke latents to image node for sd3 2024-11-04 12:42:09 -05:00
Mary Hipp
3a661bac34 fix(ui): exclude submodels from model manager 2024-11-04 12:42:09 -05:00
Mary Hipp
bb8a02de18 update schema 2024-11-04 12:42:09 -05:00
maryhipp
78155344f6 update node fields for SD3 to match other SD nodes 2024-11-04 12:42:09 -05:00
Brandon Rising
391a24b0f6 Re-add erroneously removed hash code 2024-11-04 12:42:09 -05:00
Brandon Rising
e75903389f Run ruff, fix bug in hf downloading code which failed to download parts of a model 2024-11-04 12:42:09 -05:00
Brandon Rising
27567052f2 Create new latent factors for sd35 2024-11-04 12:42:09 -05:00
Brandon Rising
6f447f7169 Rather than .fp16., some repos start the suffix with .fp16... for weights spread across multiple files 2024-11-04 12:42:09 -05:00
Mary Hipp
8b370cc182 (ui): dont show SD3 in main model dropdown yet 2024-11-04 12:42:09 -05:00
maryhipp
af583d2971 ruff format 2024-11-04 12:42:09 -05:00
Mary Hipp
0ebe8fb1bd (ui): add required/optional logic to other submodel fields 2024-11-04 12:42:09 -05:00
maryhipp
befb629f46 add default workflow 2024-11-04 12:42:09 -05:00
maryhipp
874d67cb37 add SD3.5 to starter models 2024-11-04 12:42:09 -05:00
Mary Hipp
19f7a1295a (ui): add fields for CLIP-L and CLIP-G, remove MainModelConfig type changes 2024-11-04 12:42:09 -05:00
maryhipp
78bd605617 (nodes,api): expose the submodels on SD3 model loader as optional, add types needed for CLIP-L and CLIP-G fields 2024-11-04 12:42:09 -05:00
Brandon Rising
b87f4e59a5 Create clip variant type, create new functions for discerning clipL and clipG in the frontend 2024-11-04 12:42:09 -05:00
Ryan Dick
1eca4f12c8 Make T5 encoder optional in SD3 workflows. 2024-11-04 12:42:09 -05:00
Ryan Dick
f1de11d6bf Set the default CFG for SD3 to 3.5. 2024-11-04 12:42:09 -05:00
Ryan Dick
9361ed9d70 Add progress images to SD3 and make denoising cancellable. 2024-11-04 12:42:09 -05:00
Brandon Rising
ebabf4f7a8 Setup Model and T5 Encoder selection fields for sd3 nodes 2024-11-04 12:42:09 -05:00
Brandon Rising
606f3321f5 Initial wave of frontend updates for sd-3 node inputs 2024-11-04 12:42:09 -05:00
Brandon Rising
3970aa30fb define submodels on sd3 models during probe 2024-11-04 12:42:09 -05:00
Ryan Dick
678436e07c Add tqdm progress bar for SD3. 2024-11-04 12:42:09 -05:00
Ryan Dick
c620581699 Bug fixes to get SD3 text-to-image workflow running. 2024-11-04 12:42:09 -05:00
Ryan Dick
c331d42ce4 Temporary hack for testing SD3 model loader. 2024-11-04 12:42:09 -05:00
Ryan Dick
1ac9b502f1 Fix Sd3TextEncoderInvocation output type. 2024-11-04 12:42:09 -05:00
Ryan Dick
3fa478a12f Initial draft of SD3DenoiseInvocation. 2024-11-04 12:42:09 -05:00
Ryan Dick
2d86298b7f Add first draft of Sd3TextEncoderInvocation. 2024-11-04 12:42:09 -05:00
Ryan Dick
009cdb714c Add Sd3ModelLoaderInvocation. 2024-11-04 12:42:09 -05:00
Ryan Dick
9d3f5427b4 Move FluxModelLoaderInvocation to its own file. model.py was getting bloated. 2024-11-04 12:42:09 -05:00
Ryan Dick
e4b17f019a Get diffusers SD3 model probing working. 2024-11-04 12:42:09 -05:00
Ryan Dick
586c00bc02 (minor) Remove unused dict. 2024-11-04 12:42:09 -05:00
Eugene Brodsky
0f11fda65a fix(deps): pin mediapipe strictly to a known working version 2024-11-04 10:16:19 -05:00
psychedelicious
3e75331ef7 fix(ui): load workflow from file
In a8de6406c5 a change was made to many menus in an effort to improve performance. The menus were made to be lazy, so that they are mounted only while open.

This causes unexpected behaviour when there is some logic in the menu that may need to execute after the user selects a menu item.

In this case, when you click to load a workflow from file, the file picker opens but then the menu item unmounts, taking the input element and all uploading logic with it. When you select a file, nothing happens because we've nuked the handlers by unmounting everything.

Easy fix - un-lazy-fy the menu.

Closes #7240
2024-11-04 08:02:55 -05:00
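
The gist of the fix, assuming a Chakra-style `Menu` with an `isLazy` prop; the component and handler below are simplified stand-ins for the real workflow menu:

```tsx
import { Menu, MenuButton, MenuItem, MenuList } from '@chakra-ui/react';
import { useRef } from 'react';

export const LoadWorkflowMenu = () => {
  const inputRef = useRef<HTMLInputElement>(null);
  return (
    // isLazy={false}: keep the menu (and the hidden file input) mounted after a
    // menu item is clicked, so the onChange handler still exists when the user
    // finishes picking a file.
    <Menu isLazy={false}>
      <MenuButton>Workflow</MenuButton>
      <MenuList>
        <MenuItem onClick={() => inputRef.current?.click()}>Load from file</MenuItem>
      </MenuList>
      <input ref={inputRef} type="file" hidden onChange={(e) => console.log(e.target.files)} />
    </Menu>
  );
};
```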
psychedelicious
be133408ac fix(nodes): relaxed validation for segment anything
The validation on this node causes graph validation to fail. It must be validated _after_ instantiation.

Also, it was a bit too strict. The only case we explicitly do not handle is when both bboxes and points are provided. It's acceptable if neither are provided.

Closes #7248
2024-11-04 08:00:52 -05:00
psychedelicious
7e1e0d6928 fix(ui): non-default filters can erase layer
When filtering, we use a listener to trigger processing the image whenever a filter setting changes. For example, if the user changes from canny to depth, and auto-process is enabled, we re-process the layer with new filter settings.

The filterer has a method to reset its ephemeral state. This includes the filter settings, so resetting the ephemeral state is expected to trigger processing of the filter.

When we exit filtering, we reset the ephemeral state before resetting everything else, like the listeners.

This can cause a problem when we exit filtering. The sequence:
- Start filtering a layer.
- Auto-process the filter in response to starting the filter process.
- Change the filter settings.
- Auto-process the filter in response to the changed settings.
- Apply the filter.
- Exit filtering, first by resetting the ephemeral state.
- Auto-process the filter in response to the reset settings.*
- Finish exiting, including unsubscribing from listeners.

*Whoops! That last auto-process has now borked the layer's rendering by processing a filter when we shouldn't be processing a filter.

We need to first unsubscribe from listeners, so we don't react to that change to the filter settings and erroneously process the layer.

Also, add a check to the `processImmediate` method to prevent processing if that method is accidentally called without first starting the filterer.

The same issue could affect the segment anything module - the same fixes are implemented there.
2024-11-04 07:11:20 -05:00
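
A sketch of the teardown order and the guard described above; the class and method names are hypothetical:

```ts
class FilterModule {
  private unsubscribes: Array<() => void> = [];
  private isStarted = false;

  exit = () => {
    // Unsubscribe first, so resetting the ephemeral state below cannot
    // trigger one last auto-process run.
    this.unsubscribes.forEach((unsubscribe) => unsubscribe());
    this.unsubscribes = [];
    this.resetEphemeralState();
    this.isStarted = false;
  };

  processImmediate = () => {
    // Guard: never process if the filterer was not started (or has exited).
    if (!this.isStarted) {
      return;
    }
    // ...process the layer with the current filter settings...
  };

  private resetEphemeralState = () => {
    // ...reset filter settings to defaults...
  };
}
```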
psychedelicious
cd3d8df5a8 fix(ui): save canvas to gallery does nothing
The root issue is the compositing cache. When we save the canvas to gallery, we need to first composite raster layers together and then upload the image.

The compositor makes extensive use of caching to reduce the number of images created and improve performance. There are two "layers" of caching:
1. Caching the composite canvas element, which is used both for uploading the canvas and for generation mode analysis.
2. Caching the uploaded composite canvas element as an image.

The combination of these caches allows for the various processes that require composite canvases to do minimal work.

But this causes a problem in this situation, because the user expects a new image to be uploaded when they click save to gallery.

For example, suppose we have already composited and uploaded the raster layer state for use in a generation. Then, we ask the compositor to save the canvas to gallery.

The compositor sees that we are requesting an image for the current canvas state, and instead of recompositing and uploading the image again, it just returns the cached image.

In this case, no image is uploaded and the button does nothing.

We need to be able to opt out of the caching at some level, for certain actions. A `forceUpload` arg is added to the compositor's high-level `getCompositeImageDTO` method to do this.

When true, we ignore the uppermost caching layer (the uploaded image layer), but still use the lower caching layer (the canvas element layer). So we don't recompute the canvas element, but we do upload it as a new image to the server.
2024-11-04 07:11:20 -05:00
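
A rough sketch of the two cache layers and the `forceUpload` escape hatch; apart from `getCompositeImageDTO` and `forceUpload`, which the commit names, everything here is illustrative:

```ts
type ImageDTO = { image_name: string };

class Compositor {
  private canvasCache = new Map<string, HTMLCanvasElement>(); // lower layer
  private imageDTOCache = new Map<string, ImageDTO>(); // upper layer

  constructor(
    private composite: (stateHash: string) => HTMLCanvasElement,
    private upload: (canvas: HTMLCanvasElement) => Promise<ImageDTO>,
  ) {}

  async getCompositeImageDTO(stateHash: string, forceUpload = false): Promise<ImageDTO> {
    // Upper cache layer: skip it only when a fresh upload is explicitly
    // requested, e.g. "Save Canvas to Gallery".
    if (!forceUpload) {
      const cached = this.imageDTOCache.get(stateHash);
      if (cached) {
        return cached;
      }
    }
    // Lower cache layer: re-use the composited canvas element either way.
    let canvas = this.canvasCache.get(stateHash);
    if (!canvas) {
      canvas = this.composite(stateHash);
      this.canvasCache.set(stateHash, canvas);
    }
    const dto = await this.upload(canvas);
    this.imageDTOCache.set(stateHash, dto);
    return dto;
  }
}
```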
psychedelicious
24d3c22017 fix(ui): temp fix for stuck tooltips 2024-11-04 07:11:20 -05:00
psychedelicious
b0d37f4e51 fix(ui): progress image does not reset when canceling generation
Previously, we cleared the canvas progress image when the canvas had no active generations. This allowed for a brief flash of canvas state between the last progress image for a given generation, and when the output image for that generation rendered. Here's the sequence:
- Progress images are received and rendered
- Generation completes - no active canvas generations
- Clear the progress image -> canvas layers visible unexpectedly, creating an awkward jarring change
- Generation output image is rendered -> output image overlaid on canvas layers

In 83538c4b2b I attempted to fix this by only clearing the progress image while we were not staging.

This isn't quite right, though. We are often staging with no active generations - for example, you have a few images completed and are waiting to choose one.

In this situation, if you cancel a pending generation, the logic to clear the progress image doesn't fire because it sees staging is in progress.

What we really need is:
- Staging area module clears the progress image once it has rendered an output image.
- Progress image module clears the progress image when a generation is canceled or failed, in which case there will be no output image.

To do this, we can add an event listener to the progress image module to listen for queue item status changes, and when we get a cancelation or failure, clear the progress image.
2024-11-04 07:11:20 -05:00
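
A sketch of the listener described above; the status values mirror typical queue item states, and the module and method names are assumptions:

```ts
type QueueItemStatus = 'pending' | 'in_progress' | 'completed' | 'canceled' | 'failed';

class ProgressImageModule {
  private progressImageUrl: string | null = null;

  onQueueItemStatusChanged = (status: QueueItemStatus) => {
    // A canceled or failed generation never produces an output image, so clear
    // the progress image here; the staging area module handles the completed
    // case once the output image has rendered.
    if (status === 'canceled' || status === 'failed') {
      this.progressImageUrl = null;
    }
  };
}
```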
psychedelicious
3559124674 feat(ui): use nanostores in CanvasProgressImageModule for internal state 2024-11-04 07:11:20 -05:00
Eugene Brodsky
6c33e02141 fix(pkg): pin torch to <2.5.0 to prevent unnecessary downloads
pip's dependency resolution doesn't take into account transitive
dependencies when choosing package versions for download.
dependencies when choosing package versions for download.
Even though `torch~=2.4.1` is required by `diffusers`, pip will
download 2.5.0 and higher, but only install 2.4.1.
Pinning torch to <2.5.0 prevents this behaviour.
2024-11-01 12:27:28 -04:00
psychedelicious
8cf94d602f chore: bump version to v5.3.1 2024-11-01 13:31:51 +11:00
psychedelicious
016a6f182f Make T2I Adapters work with any resolution supported by the models (#7215)
## Summary

This change mimics the unet padding strategy to align T2I featuremaps
with the latents during denoising. It also slightly adjusts the crop and
scale logic so that the control will match the input image without
shifting when it needs to pad.

## Related Issues / Discussions


## QA Instructions

Image generated at 1032x1024

![image](https://github.com/user-attachments/assets/7ea579e4-61dc-4b6b-aa84-33d676d160c6)

Image generated at 1080x1040 to prove feature alignment.

![image](https://github.com/user-attachments/assets/ee6e5b6a-d0d5-474d-9fc4-f65c104964bd)

Edge artifacts on the bottom and right are a result of SDXL's unet
padding, and t2i influence will be cut off in those regions.

## Merge Plan

Contingent on #7205 
Currently the Canvas UI prevents users from generating at resolutions that
aren't multiples of 64 while t2i adapter layers are active. Will leave this as
a draft until fixing that.

## Checklist

- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
2024-11-01 13:22:00 +11:00
Kent Keirsey
6fbc019142 Merge branch 'main' into t2i_resolution_hack 2024-10-31 22:08:38 -04:00
psychedelicious
26f95d6a97 fix(ui): disable move tool when staging 2024-10-31 22:08:16 -04:00
psychedelicious
40f7b0d171 fix(ui): cursor disappearing on empty layers 2024-10-31 22:08:16 -04:00
psychedelicious
4904700751 feat(ui): more info in state module repr 2024-10-31 22:08:16 -04:00
psychedelicious
83538c4b2b fix(ui): flash of canvas state between last progress image and generation result 2024-10-31 22:08:16 -04:00
psychedelicious
eb7b559529 fix(ui): sync canvas layer visibility when staging state changes 2024-10-31 22:08:16 -04:00
Kent Keirsey
4945465cf0 Merge branch 'main' into t2i_resolution_hack 2024-10-31 21:17:06 -04:00
Will
7eed7282a9 removing periods from update link to prevent page not found error 2024-11-01 07:42:31 +11:00
psychedelicious
47f0781822 fix(ui): add missing translations
Closes #7229
2024-11-01 07:40:52 +11:00
Eugene Brodsky
88b8e3e3d5 chore(deps): adjust pins for torch, numpy, other dependencies, to satisfy stricter dependency resolution 2024-10-31 16:26:53 -04:00
dunkeroni
47c3ab9214 Remove UI restrictions for T2I resolutions 2024-10-31 16:07:46 -04:00
dunkeroni
d6d436b59c Merge branch 'invoke-ai:main' into t2i_resolution_hack 2024-10-31 15:52:24 -04:00
Hippalectryon
6ff7057967 fix broken link in installer 2024-10-31 09:50:08 -04:00
psychedelicious
e032ab1179 fix(ui): ensure compositing rect is rendered correctly
This fixes an issue uncovered by the previous commit in which we do not exit filter/select-object on save-as.
2024-10-31 08:57:10 -04:00
psychedelicious
65bddfcd93 feat(ui): filter/select-object do not exit on save-as 2024-10-31 08:57:10 -04:00
aidawanglion
2d3ce418dd translationBot(ui): update translation (Chinese (Simplified Han script))
Currently translated at 73.7% (1160 of 1573 strings)

Translation: InvokeAI/Web UI
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/zh_Hans/
2024-10-31 17:18:35 +11:00
Hosted Weblate
548d72f7b9 translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

Translation: InvokeAI/Web UI
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
2024-10-31 17:18:35 +11:00
aidawanglion
19837a0f29 translationBot(ui): update translation (Chinese (Simplified Han script))
Currently translated at 73.3% (1146 of 1563 strings)

Translation: InvokeAI/Web UI
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/zh_Hans/
2024-10-31 17:18:35 +11:00
aidawanglion
483b65a1dc translationBot(ui): update translation (Chinese (Simplified Han script))
Currently translated at 69.4% (1086 of 1563 strings)

Translation: InvokeAI/Web UI
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/zh_Hans/
2024-10-31 17:18:35 +11:00
Riccardo Giovanetti
b85931c7ab translationBot(ui): update translation (Italian)
Currently translated at 99.4% (1554 of 1563 strings)

Translation: InvokeAI/Web UI
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
2024-10-31 17:18:35 +11:00
Hosted Weblate
9225f47338 translationBot(ui): update translation files
Updated by "Remove blank strings" hook in Weblate.

Translation: InvokeAI/Web UI
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
2024-10-31 17:18:35 +11:00
Riccardo Giovanetti
bccac5e4a6 translationBot(ui): update translation (Italian)
Currently translated at 99.4% (1553 of 1562 strings)

Translation: InvokeAI/Web UI
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
2024-10-31 17:18:35 +11:00
Hosted Weblate
7cb07fdc04 translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

Translation: InvokeAI/Web UI
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
2024-10-31 17:18:35 +11:00
dakota2472
b137450026 translationBot(ui): update translation (Italian)
Currently translated at 100.0% (1562 of 1562 strings)

Translation: InvokeAI/Web UI
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
2024-10-31 17:18:35 +11:00
Hosted Weblate
dc5090469a translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

Translation: InvokeAI/Web UI
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
2024-10-31 17:18:35 +11:00
Thomas Bolteau
e0ae2ace89 translationBot(ui): update translation (French)
Currently translated at 100.0% (1561 of 1561 strings)

Translation: InvokeAI/Web UI
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/fr/
2024-10-31 17:18:35 +11:00
Riku
269faae04b translationBot(ui): update translation (German)
Currently translated at 71.4% (1115 of 1561 strings)

Translation: InvokeAI/Web UI
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
2024-10-31 17:18:35 +11:00
Riccardo Giovanetti
e282acd41c translationBot(ui): update translation (Italian)
Currently translated at 98.8% (1543 of 1561 strings)

Translation: InvokeAI/Web UI
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
2024-10-31 17:18:35 +11:00
Ettore Atalan
a266668348 translationBot(ui): update translation (German)
Currently translated at 69.3% (1083 of 1561 strings)

Translation: InvokeAI/Web UI
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
2024-10-31 17:18:35 +11:00
Riccardo Giovanetti
3bb3e142fc translationBot(ui): update translation (Italian)
Currently translated at 98.8% (1543 of 1561 strings)

Translation: InvokeAI/Web UI
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
2024-10-31 17:18:35 +11:00
Hosted Weblate
6ac6d70a22 translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

translationBot(ui): update translation files

Updated by "Cleanup translation files" hook in Weblate.

translationBot(ui): update translation files

Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2024-10-31 17:18:35 +11:00
Riccardo Giovanetti
b0acf33ba5 translationBot(ui): update translation (Italian)
Currently translated at 98.5% (1496 of 1518 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2024-10-31 17:18:35 +11:00
qyouqme
b3eb64b64c translationBot(ui): update translation (Chinese (Simplified Han script))
Currently translated at 66.0% (1003 of 1518 strings)

Co-authored-by: qyouqme <camtasiacn@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/zh_Hans/
Translation: InvokeAI/Web UI
2024-10-31 17:18:35 +11:00
Riku
95f8ab1a29 translationBot(ui): update translation (German)
Currently translated at 71.3% (1083 of 1518 strings)

Co-authored-by: Riku <riku.block@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
2024-10-31 17:18:35 +11:00
Hosted Weblate
4e043384db translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2024-10-31 17:18:35 +11:00
psychedelicious
0f5df8ba17 chore(ui): lint 2024-10-31 16:54:31 +11:00
psychedelicious
2826ab48a2 refactor(ui): layer interaction locking
Previously we maintained an `isInteractable` flag, which was derived from these layer flags:
- Locked/unlocked
- Enabled/disabled
- Layer's type visible/hidden

When a layer was not interactable, we blocked all layer actions.

After comparing to the behaviour in Affinity and considering user feedback, I've loosened these restrictions while maintaining safety. First, some definitions.

There are two kinds of layer actions - mutating actions and non-mutating actions.
- Mutating actions are drawing on the layer, cropping it, filtering it, converting it, etc. Anything that changes the layer.
- Non-mutating actions are copying the layer, saving the layer to gallery, etc. Anything that _uses_ the layer.

Then, there are two broad canvas states - busy and not busy. "Busy" means the canvas is actively filtering, staging, compositing layers together, etc - something that is "single-threaded" by nature.

And here are the revised restrictions:
- When canvas is busy, you cannot initiate any layer actions.
- When the canvas is not busy and the layer is locked, you cannot initiate mutating actions, but non-mutating actions are still allowed.
- When the canvas is not busy and the layer is not locked, you can initiate any layer action.

Besides safely giving users more freedom, it also fixes an issue where the context menu for a layer was disabled if it was not the selected layer.
2024-10-31 16:54:31 +11:00
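
The revised restrictions boil down to a small predicate, sketched here with hypothetical names:

```ts
type LayerAction = 'mutating' | 'non-mutating';

// Returns true if the action may be initiated under the revised rules.
const canInitiateLayerAction = (isCanvasBusy: boolean, isLayerLocked: boolean, action: LayerAction): boolean => {
  if (isCanvasBusy) {
    return false; // a busy canvas blocks all layer actions
  }
  if (isLayerLocked) {
    return action === 'non-mutating'; // locked layers can still be copied, saved, etc.
  }
  return true; // not busy, not locked: anything goes
};
```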
psychedelicious
7ff1b635c8 docs: clarify comments for invoke method return annotation validation 2024-10-31 16:21:07 +11:00
psychedelicious
dfb5e8b5e5 tests: add invoke method & output annotation tests 2024-10-31 16:21:07 +11:00
psychedelicious
7259da799c feat(nodes): attempt to look up invoke return types by name 2024-10-31 16:21:07 +11:00
psychedelicious
965069fce1 tests: fix nodes tests
they now require a valid output
2024-10-31 16:21:07 +11:00
psychedelicious
90232806d9 feat(nodes): add validation for invoke method return types 2024-10-31 16:21:07 +11:00
Hippalectryon
81bc153399 Fix link in dev docs 2024-10-31 16:06:44 +11:00
Jonathan
c63e526f99 Update FAQ.md
Fixed typo
2024-10-31 16:04:23 +11:00
nirmal0001
2b74263007 Update patchmatch.md
add required Install dependencies for arch linux
2024-10-31 16:01:57 +11:00
psychedelicious
d3a82f7119 feat(ui): do not show hftoken error until user attempts to set it 2024-10-31 15:47:14 +11:00
Mary Hipp
291c5a0341 lint 2024-10-31 15:47:14 +11:00
Mary Hipp
bcb41399ca feat(ui,api): support for HF tokens in UI, handle Unauthorized and Forbidden errors 2024-10-31 15:47:14 +11:00
psychedelicious
6f0f53849b tests: reset config changes in test_deny_nodes when finished testing 2024-10-31 15:22:14 +11:00
psychedelicious
4e7d63761a fix(nodes): nodes denylist handling
- Add method to force a rebuild of the pydantic type adapter for the union of invocations, which is used to validate graphs.
- Update the xfail'd test.
2024-10-31 15:22:14 +11:00
psychedelicious
198c84105d fix(ui): compositor not setting processing flag when cleaning up 2024-10-30 16:27:36 +11:00
psychedelicious
2453b9f443 chore: bump version to v5.3.0rc1 2024-10-30 13:11:41 +11:00
psychedelicious
b091aca986 chore(ui): lint 2024-10-30 11:05:46 +11:00
psychedelicious
8f02ce54a0 perf(ui): cache image data & transparency mode during generation mode calculation
This gives a perf boost and reduces the number of images we create on the backend.
2024-10-30 11:05:46 +11:00
psychedelicious
f4b7c63002 feat(ui): omit non-render-impacting keys when hashing entities
I had missed several of these, which means we were invalidating caches far too often. For example, when you changed an RG prompt, we were invalidating the cached canvas for that entity, even though changing the prompt doesn't affect the canvas at all.
2024-10-30 11:05:46 +11:00
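
The idea, roughly - the entity shape and the choice of omitted keys below are illustrative:

```ts
type RegionalGuidanceEntity = {
  id: string;
  name: string;
  positivePrompt: string;
  isEnabled: boolean;
  objects: unknown[];
};

// Hash only the keys that affect how the entity renders on canvas; the name
// and prompt do not, so changing them should not invalidate the cached canvas.
const getRenderHashSource = (entity: RegionalGuidanceEntity): string => {
  const { name, positivePrompt, ...renderImpacting } = entity;
  return JSON.stringify(renderImpacting);
};
```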
psychedelicious
a4629280b5 feat(ui): use typeguard instead of string comparison 2024-10-30 11:05:46 +11:00
psychedelicious
855fb007da tidy(ui): minor type fix 2024-10-30 11:05:46 +11:00
psychedelicious
d805b52c1f feat(ui): merge down deletes merged entities 2024-10-30 11:05:46 +11:00
psychedelicious
2ea55685bb feat(ui): add save to assets for inpaint & rg 2024-10-30 11:05:46 +11:00
psychedelicious
bd6ff3deaa feat(ui): add merge down for all entity types 2024-10-30 11:05:46 +11:00
psychedelicious
82dd53ec88 tidy(ui): clean up merge visible logic 2024-10-30 11:05:46 +11:00
psychedelicious
71d749541d feat(ui): control layers supports merge visible
The "lighter" GlobalCompositeOperation is used. This seems to be the best one when merging control layers, as it retains edge maps.
2024-10-30 11:05:46 +11:00
psychedelicious
48a57fc4b9 feat(ui): support globalCompositeOperation when compositing canvas 2024-10-30 11:05:46 +11:00
psychedelicious
530e0910fc feat(ui): regional guidance supports merge visible 2024-10-30 11:05:46 +11:00
psychedelicious
2fdf8fc0a2 feat(ui): merge visible creates new layer
Previously, merge visible deleted all other visible layers. This is not how Affinity works; I should have confirmed before making it work like this in the first place.
2024-10-30 11:05:46 +11:00
psychedelicious
91db9c9300 refactor(ui): generalize compositor methods
`CanvasCompositorModule` had a fairly inflexible API, only supporting compositing all raster layers or inpaint masks.

The API has been generalized to work with a list of canvas entities. This enables `Merge Down` and `Merge Selected` functionality (though `Merge Selected` is not part of this set of changes).
2024-10-30 11:05:46 +11:00
psychedelicious
bc42205593 fix(ui): remember to disable isFiltering when finishing filtering 2024-10-30 09:19:30 +11:00
psychedelicious
2e3cba6416 fix(ui): flash of original layer when applying filter/segment
Let the parent module adopt the filtered/segmented image instead of destroying it and making the parent re-create it, which results in a brief flash of the parent layer's original objects before the new image is rendered.
2024-10-30 09:19:30 +11:00
psychedelicious
7852aacd11 fix(ui): track whether graph succeeded in runGraphAndReturnImageOutput
This prevents extraneous graph cancel requests when cleaning up the abort signal after a successful run of a graph.
2024-10-30 09:19:30 +11:00
psychedelicious
6cccd67ecd feat(ui): update SAM module to w/ minor improvements from filter module 2024-10-30 09:19:30 +11:00
psychedelicious
a7a89c9de1 feat(ui): use more resilient logic in canvas filter module, same as in SAM module 2024-10-30 09:19:30 +11:00
psychedelicious
5ca8eed89e tidy(ui): remove all buffer renderer interactions in SAM module
We don't use the buffer renderer in this module; there's no reason to clear it.
2024-10-30 09:19:30 +11:00
psychedelicious
c885c3c9a6 fix(ui): filter layer data pushed to parent rendered when saving as 2024-10-30 09:19:30 +11:00
Mary Hipp
d81c38c350 update announcements 2024-10-29 09:53:13 -04:00
Riku
92d5b73215 fix(ui): seamless zod parameter cleanup 2024-10-29 20:43:44 +11:00
Riku
097e92db6a fix(ui): always write seamless metadata
Ensure images without seamless enabled correctly reset the setting
when all parameters are recalled
2024-10-29 20:43:44 +11:00
Riku
84c6209a45 feat(ui): display seamless values in metadata viewer 2024-10-29 20:43:44 +11:00
Riku
107e48808a fix(ui): recall seamless settings 2024-10-29 20:43:44 +11:00
dunkeroni
47168b5505 chore: make ruff 2024-10-29 14:07:20 +11:00
dunkeroni
58152ec981 fix preview progress bar pre-denoise 2024-10-29 14:07:20 +11:00
dunkeroni
c74afbf332 convert to bgr on sdxl t2i 2024-10-29 14:07:20 +11:00
psychedelicious
7cdda00a54 feat(ui): rearrange canvas paste back nodes to save an image step
We were scaling the unscaled image and mask down before doing the paste-back, but this adds an extraneous step & image output.

We can do the paste-back first, then scale to output size after. So instead of 2 resizes before the paste-back, we have 1 resize after.

The end result is the same.
2024-10-29 11:13:31 +11:00
psychedelicious
a74282bce6 feat(ui): graph builders use objects for arg instead of many args 2024-10-29 11:13:31 +11:00
psychedelicious
107f048c7a feat(ui): extract canvas output node prefix to constant 2024-10-29 11:13:31 +11:00
Ryan Dick
a2486a5f06 Remove unused prediction_type and upcast_attention from from_single_file(...) calls. 2024-10-28 13:05:17 -04:00
Ryan Dick
07ab116efb Remove load_safety_checker=False from calls to from_single_file(...).
This param has been deprecated, and by including it (even when set to
False) the safety checker automatically gets downloaded.
2024-10-28 13:05:17 -04:00
Ryan Dick
1a13af3c7a Fix huggingface_hub.errors imports after version bump. 2024-10-28 13:05:17 -04:00
Ryan Dick
f2966a2594 Fix changed import for FromOriginalControlNetMixin after diffusers bump. 2024-10-28 13:05:17 -04:00
Ryan Dick
58bb97e3c6 Bump diffusers, accelerate, and huggingface-hub. 2024-10-28 13:05:17 -04:00
dunkeroni
34569a2410 Make T2I Adapters compatible with x8 resolutions 2024-10-27 15:38:22 -04:00
psychedelicious
a84aa5c049 fix(ui): canvas alerts blocking metadata panel 2024-10-27 09:46:01 +11:00
dunkeroni
acfa9c87ef Merge branch 'main' into sdxl_t2i_bgr 2024-10-25 23:44:13 -04:00
dunkeroni
f245d8e429 chore: make ruff 2024-10-25 23:43:33 -04:00
dunkeroni
62cf0f54e0 fix preview progress bar pre-denoise 2024-10-25 23:22:06 -04:00
dunkeroni
5f015e76ba convert to bgr on sdxl t2i 2024-10-25 23:04:17 -04:00
psychedelicious
aebcec28e0 chore: bump version to v5.3.0 2024-10-25 22:37:59 -04:00
psychedelicious
db1c5a94f7 feat(ui): image ctx -> New from Image -> Canvas as Raster/Control Layer 2024-10-25 22:27:00 -04:00
psychedelicious
56222a8493 feat(ui): organize layer context menu items 2024-10-25 22:27:00 -04:00
psychedelicious
b7510ce709 feat(ui): filter, select object and transform UI buttons
- Restore dedicated `Apply` buttons
- Remove icons from the buttons, too much noise when the words are short and clear
- Update loading state to show a spinner next to the `Process` button instead of on _every_ button
2024-10-25 22:27:00 -04:00
psychedelicious
5739799e2e fix(ui): close viewer when transforming 2024-10-25 22:27:00 -04:00
psychedelicious
813cf87920 feat(ui): move canvas alerts to top-left corner 2024-10-25 22:27:00 -04:00
psychedelicious
c95b151daf feat(ui): add layer title heading for canvas ctx menu 2024-10-25 22:27:00 -04:00
psychedelicious
a0f823a3cf feat(ui): reset shouldShowStagedImage flag when starting staging 2024-10-25 22:27:00 -04:00
Hippalectryon
64e0f6d688 Improve dev install docs
Fix numbering
2024-10-25 08:27:26 -04:00
psychedelicious
ddd5b1087c fix(nodes): return copies of objects in invocation ctx
Closes #6820
2024-10-25 08:26:09 -04:00
psychedelicious
008be9b846 feat(ui): add all save as options to filter 2024-10-25 08:12:14 -04:00
psychedelicious
8e7cabdc04 feat(ui): add Replace Current open to Select Object -> Save As 2024-10-25 08:12:14 -04:00
psychedelicious
a4c4237f99 feat(ui): use PiPlayFill for process buttons for filter & select object 2024-10-25 08:12:14 -04:00
psychedelicious
bda3740dcd feat(ui): use fill style icons for Filter 2024-10-25 08:12:14 -04:00
psychedelicious
5b4633baa9 feat(ui): use PiShapesFill icon for Select Object 2024-10-25 08:12:14 -04:00
psychedelicious
96351181cb feat(ui): make canvas layer toolbar icons a bit larger 2024-10-25 08:12:14 -04:00
psychedelicious
957d591d99 feat(ui): "Auto-Mask" -> "Select Object" 2024-10-25 08:12:14 -04:00
psychedelicious
75f605ba1a feat(ui): support inverted selection in auto-mask 2024-10-25 08:12:14 -04:00
psychedelicious
ab898a7180 chore(ui): typegen 2024-10-25 08:12:14 -04:00
psychedelicious
c9a4516ab1 feat(nodes): add invert to apply_tensor_mask_to_image 2024-10-25 08:12:14 -04:00
psychedelicious
fe97c0d5eb tweak(ui): default settings verbiage 2024-10-25 16:09:59 +11:00
psychedelicious
6056764840 feat(ui): disable default settings button when synced
A blue button is begging to be clicked, but clicking it will do nothing. Instead, we should communicate that no action is needed by disabling the button when the default settings are already in use.
2024-10-25 16:09:59 +11:00
psychedelicious
8747c0dbb0 fix(ui): handle no model selection in default settings tooltip 2024-10-25 16:09:59 +11:00
psychedelicious
c5cdd5f9c6 fix(ui): use const EMPTY_OBJECT to prevent rerenders 2024-10-25 16:09:59 +11:00
psychedelicious
abc5d53159 fix(ui): use explicit null check when comparing default settings
Using `&&` will result in false negatives for settings where a falsy value might be valid. For example, any setting for which 0 is a valid number. To be on the safe side, just use an explicit null check on all values.
2024-10-25 16:09:59 +11:00
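
For example (a simplified illustration, not the actual settings-comparison code):

```ts
const currentValue = 0;
const defaultValue: number | null = 0; // falsy, but a perfectly valid default

// Truthiness check: 0 is falsy, so this wrongly reports "no match".
const matchesWithTruthiness = Boolean(defaultValue && defaultValue === currentValue); // false

// Explicit null check: compares correctly even when the default is 0.
const matchesWithNullCheck = defaultValue !== null && defaultValue === currentValue; // true
```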
psychedelicious
2f76019a89 tweak(ui): defaults sync tooltip styling 2024-10-25 16:09:59 +11:00
Mary Hipp
3f45beb1ed feat(ui): add out of sync details to model default settings button 2024-10-25 16:09:59 +11:00
Mary Hipp
bc1126a85b (ui): add setting for showing model descriptions in dropdown defaulted to true 2024-10-25 14:52:33 +11:00
psychedelicious
380017041e fix(app): mutating an image also changes the in-memory cached image
We use an in-memory cache for PIL images to reduce I/O. If a node mutates the image in any way, the cached image object is also updated (but the on-disk image file is not).

We've lucked out that this hasn't caused major issues in the past (well, maybe it has but we didn't understand them?) mainly because of a happy accident. When you call `context.images.get_pil` in a node, if you provide an image mode (e.g. `mode="RGB"`), we call `convert`  on the image. This returns a copy. The node can do whatever it wants to that copy and nothing breaks.

However, when mode is not specified, we return the image directly. This is where we get in trouble - nodes that load the image like this, and then mutate the image, update the cache. Other nodes that reference that same image will now get the mutated version of it.

The fix is super simple - we make sure to return only copies from `get_pil`.
2024-10-25 10:22:22 +11:00
psychedelicious
ab7cdbb7e0 fix(ui): do not delete point on right-mouse click 2024-10-25 10:22:22 +11:00
psychedelicious
e5b78d0221 fix(ui): canvas drop area grid layout 2024-10-25 10:22:22 +11:00
psychedelicious
1acaa6c486 chore: bump version to v5.3.0rc2 2024-10-25 07:50:58 +11:00
psychedelicious
b0381076b7 revert(ui): drop targets for inpaint mask and rg 2024-10-25 07:42:46 +11:00
psychedelicious
ffff2d6dbb feat(ui): add New from Image submenu for image ctx menu 2024-10-25 07:42:46 +11:00
psychedelicious
afa9f07649 fix(ui): missing cursor when transforming 2024-10-25 07:42:46 +11:00
psychedelicious
addb5c49ea feat(ui): support dnd images onto inpaint mask/rg entities 2024-10-25 07:42:46 +11:00
psychedelicious
a112d2d55b feat(ui): add logging to useCopyLayerToClipboard 2024-10-25 07:42:46 +11:00
psychedelicious
619a271c8a feat(ui): disable copy to clipboard when layer is empty 2024-10-25 07:42:46 +11:00
psychedelicious
909f2ee36d feat(ui): add help tooltip to automask 2024-10-25 07:42:46 +11:00
psychedelicious
b4cf3d9d03 fix(ui): canvas context menu w/ eraser tool erases 2024-10-25 07:42:46 +11:00
psychedelicious
e6ab6e0293 chore(ui): lint 2024-10-24 08:39:29 -04:00
psychedelicious
66d9c7c631 fix(ui): icon for automask save as 2024-10-24 08:39:29 -04:00
psychedelicious
fec45f3eb6 feat(ui): animate automask preview overlay 2024-10-24 08:39:29 -04:00
psychedelicious
7211d1a6fc feat(ui): add context menu options for layer type convert/copy 2024-10-24 08:39:29 -04:00
psychedelicious
f3069754a9 feat(ui): add logic to convert/copy between all layer types 2024-10-24 08:39:29 -04:00
psychedelicious
4f43152aeb fix(ui): handle pen/touch events on submenu 2024-10-24 08:39:29 -04:00
psychedelicious
7125055d02 fix(ui): icon menu item group spacing 2024-10-24 08:39:29 -04:00
psychedelicious
c91a9ce390 feat(ui): add pull bbox to global ref image ctx menu 2024-10-24 08:39:29 -04:00
psychedelicious
3e7b73da2c feat(ui): add entity context menu as canvas context menu sub-menu 2024-10-24 08:39:29 -04:00
psychedelicious
61ac50c00d feat(ui): use sub-menu for image metadata recall 2024-10-24 08:39:29 -04:00
psychedelicious
c1201f0bce feat(ui): add useSubMenu hook to abstract logic for sub-menus 2024-10-24 08:39:29 -04:00
psychedelicious
acdffac5ad feat(ui): close viewer when filtering/transforming/automasking 2024-10-24 08:39:29 -04:00
psychedelicious
e420300fa4 feat(ui): replace automask apply w/ save as menu 2024-10-24 08:39:29 -04:00
psychedelicious
260a5a4f9a feat(ui): add automask button to toolbar 2024-10-24 08:39:29 -04:00
psychedelicious
ed0c2006fe feat(ui): rename "foreground"/"background" -> "include"/"exclude" 2024-10-24 08:39:29 -04:00
psychedelicious
9ffd888c86 feat(ui): remove neutral points 2024-10-24 08:39:29 -04:00
psychedelicious
175a9dc28d feat(ui): more resilient auto-masking processing
- Use a hash of the last processed points instead of a `hasProcessed` flag to determine whether or not we should re-process a given set of points.
- Store point coords in state instead of pulling them out of the konva node positions. This makes moving a point a more explicit action in code.
- Add a `roundCoord` util to round the x and y values of a coordinate.
- Ensure we always re-process when $points changes.
2024-10-24 08:39:29 -04:00
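A language-agnostic sketch of the two ideas above (the real logic lives in the TypeScript UI; the helper names here are hypothetical): round coordinates explicitly, and re-process only when a hash of the rounded point set differs from the last processed hash.

```python
import hashlib
import json


def round_coord(coord: dict[str, float]) -> dict[str, int]:
    """Round the x and y values of a coordinate."""
    return {"x": round(coord["x"]), "y": round(coord["y"])}


def points_hash(points: list[dict[str, float]]) -> str:
    """Stable hash of a set of rounded points."""
    rounded = [round_coord(p) for p in points]
    return hashlib.sha1(json.dumps(rounded, sort_keys=True).encode()).hexdigest()


last_processed_hash: str | None = None


def maybe_process(points: list[dict[str, float]]) -> None:
    """Re-run auto-mask processing only if the points actually changed."""
    global last_processed_hash
    current = points_hash(points)
    if current == last_processed_hash:
        return  # same points as last time - nothing to do
    # ... run the auto-masking processing here ...
    last_processed_hash = current
```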
psychedelicious
5764e4f7f2 chore(ui): lint 2024-10-24 23:34:06 +11:00
psychedelicious
4275a494b9 tweak(ui): bundle info icon 2024-10-24 23:34:06 +11:00
psychedelicious
a3deb8d30d tweak(ui): bundle tooltip styling 2024-10-24 23:34:06 +11:00
Mary Hipp
aafdb0a37b update popover copy 2024-10-24 23:34:06 +11:00
Mary Hipp
56a815719a update schema 2024-10-24 23:34:06 +11:00
Mary Hipp
4db26bfa3a (ui): add information popovers for other layer types 2024-10-24 23:34:06 +11:00
Mary Hipp
8d84ccb12b bump UI dep for combobox descriptions 2024-10-24 23:34:06 +11:00
Mary Hipp
3321d14997 undo show descriptions for now 2024-10-24 23:34:06 +11:00
maryhipp
43cc4684e1 (api) make sure all controlnet starter models will still have pre-processors correctly assigned when probed based on name 2024-10-24 23:34:06 +11:00
Mary Hipp
afa5a4b17c (ui): add informational popover for controlnet layers 2024-10-24 23:34:06 +11:00
Mary Hipp
33c433fe59 (ui): show models in starter bundles on hover, use previous_names for isInstalled logic, allow grouped model combobox to optionally show descriptions 2024-10-24 23:34:06 +11:00
maryhipp
9cd47fa857 (api): update names of starter models, add ability to track previous_names so it does not mess up logic that prevents dupe starter model installs 2024-10-24 23:34:06 +11:00
psychedelicious
32d9abe802 tweak(ui): prevent show/hide boards button cutoff
The use of hard 25% widths caused issues for some translations. Adjusted styling to not rely on any hard numbers. Tested with a project name and URL.
2024-10-24 08:21:16 -04:00
psychedelicious
3947d4a165 fix(ui): normalize infill alpha to 0-255 when building infill nodes
The browser/UI uses a float in 0-1 for alpha, while the backend uses 0-255. We need to normalize the value when building the infill nodes for outpainting.
2024-10-24 19:22:36 +11:00
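The conversion itself is a one-liner; a hedged sketch (the function name is hypothetical, and the actual UI code is TypeScript):

```python
def normalize_alpha(ui_alpha: float) -> int:
    """Map a UI alpha in [0, 1] to the backend's [0, 255] integer range."""
    return max(0, min(255, round(ui_alpha * 255)))


assert normalize_alpha(0.0) == 0
assert normalize_alpha(0.5) == 128
assert normalize_alpha(1.0) == 255
```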
399 changed files with 17261 additions and 12951 deletions

View File

@@ -19,3 +19,4 @@
- [ ] _The PR has a short but descriptive title, suitable for a changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
- [ ] _Updated `What's New` copy (if doing a release after this PR)_

View File

@@ -5,7 +5,7 @@ If you're a new contributor to InvokeAI or Open Source Projects, this is the gui
## New Contributor Checklist
- [x] Set up your local development environment & fork of InvokAI by following [the steps outlined here](../dev-environment.md)
- [x] Set up your local tooling with [this guide](InvokeAI/contributing/LOCAL_DEVELOPMENT/#developing-invokeai-in-vscode). Feel free to skip this step if you already have tooling you're comfortable with.
- [x] Set up your local tooling with [this guide](../LOCAL_DEVELOPMENT.md). Feel free to skip this step if you already have tooling you're comfortable with.
- [x] Familiarize yourself with [Git](https://www.atlassian.com/git) & our project structure by reading through the [development documentation](development.md)
- [x] Join the [#dev-chat](https://discord.com/channels/1020123559063990373/1049495067846524939) channel of the Discord
- [x] Choose an issue to work on! This can be achieved by asking in the #dev-chat channel, tackling a [good first issue](https://github.com/invoke-ai/InvokeAI/contribute) or finding an item on the [roadmap](https://github.com/orgs/invoke-ai/projects/7). If nothing in any of those places catches your eye, feel free to work on something of interest to you!

View File

@@ -17,46 +17,49 @@ If you just want to use Invoke, you should use the [installer][installer link].
## Setup
1. Run through the [requirements][requirements link].
1. [Fork and clone][forking link] the [InvokeAI repo][repo link].
1. Create an directory for user data (images, models, db, etc). This is typically at `~/invokeai`, but if you already have a non-dev install, you may want to create a separate directory for the dev install.
1. Create a python virtual environment inside the directory you just created:
2. [Fork and clone][forking link] the [InvokeAI repo][repo link].
3. Create an directory for user data (images, models, db, etc). This is typically at `~/invokeai`, but if you already have a non-dev install, you may want to create a separate directory for the dev install.
4. Create a python virtual environment inside the directory you just created:
```sh
python3 -m venv .venv --prompt InvokeAI-Dev
```
```sh
python3 -m venv .venv --prompt InvokeAI-Dev
```
1. Activate the venv (you'll need to do this every time you want to run the app):
5. Activate the venv (you'll need to do this every time you want to run the app):
```sh
source .venv/bin/activate
```
```sh
source .venv/bin/activate
```
1. Install the repo as an [editable install][editable install link]:
6. Install the repo as an [editable install][editable install link]:
```sh
pip install -e ".[dev,test,xformers]" --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu121
```
```sh
pip install -e ".[dev,test,xformers]" --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu121
```
Refer to the [manual installation][manual install link]] instructions for more determining the correct install options. `xformers` is optional, but `dev` and `test` are not.
Refer to the [manual installation][manual install link]] instructions for more determining the correct install options. `xformers` is optional, but `dev` and `test` are not.
1. Install the frontend dev toolchain:
7. Install the frontend dev toolchain:
- [`nodejs`](https://nodejs.org/) (recommend v20 LTS)
- [`pnpm`](https://pnpm.io/installation#installing-a-specific-version) (must be v8 - not v9!)
- [`pnpm`](https://pnpm.io/8.x/installation) (must be v8 - not v9!)
1. Do a production build of the frontend:
8. Do a production build of the frontend:
```sh
pnpm build
```
```sh
cd PATH_TO_INVOKEAI_REPO/invokeai/frontend/web
pnpm i
pnpm build
```
1. Start the application:
9. Start the application:
```sh
python scripts/invokeai-web.py
```
```sh
cd PATH_TO_INVOKEAI_REPO
python scripts/invokeai-web.py
```
1. Access the UI at `localhost:9090`.
10. Access the UI at `localhost:9090`.
## Updating the UI

View File

@@ -209,7 +209,7 @@ checkpoint models.
To solve this, go to the Model Manager tab (the cube), select the
checkpoint model that's giving you trouble, and press the "Convert"
button in the upper right of your browser window. This will conver the
button in the upper right of your browser window. This will convert the
checkpoint into a diffusers model, after which loading should be
faster and less memory-intensive.

View File

@@ -97,16 +97,16 @@ Prior to installing PyPatchMatch, you need to take the following steps:
sudo pacman -S --needed base-devel
```
2. Install `opencv` and `blas`:
2. Install `opencv`, `blas`, and required dependencies:
```sh
sudo pacman -S opencv blas
sudo pacman -S opencv blas fmt glew vtk hdf5
```
or for CUDA support
```sh
sudo pacman -S opencv-cuda blas
sudo pacman -S opencv-cuda blas fmt glew vtk hdf5
```
3. Fix the naming of the `opencv` package configuration file:

View File

@@ -259,7 +259,7 @@ def select_gpu() -> GpuType:
[
f"Detected the [gold1]{OS}-{ARCH}[/] platform",
"",
"See [deep_sky_blue1]https://invoke-ai.github.io/InvokeAI/#system[/] to ensure your system meets the minimum requirements.",
"See [deep_sky_blue1]https://invoke-ai.github.io/InvokeAI/installation/requirements/[/] to ensure your system meets the minimum requirements.",
"",
"[red3]🠶[/] [b]Your GPU drivers must be correctly installed before using InvokeAI![/] [red3]🠴[/]",
]

View File

@@ -68,7 +68,7 @@ do_line_input() {
printf "2: Open the developer console\n"
printf "3: Command-line help\n"
printf "Q: Quit\n\n"
printf "To update, download and run the installer from https://github.com/invoke-ai/InvokeAI/releases/latest.\n\n"
printf "To update, download and run the installer from https://github.com/invoke-ai/InvokeAI/releases/latest\n\n"
read -p "Please enter 1-4, Q: [1] " yn
choice=${yn:='1'}
do_choice $choice

View File

@@ -40,6 +40,8 @@ class AppVersion(BaseModel):
version: str = Field(description="App version")
highlights: Optional[list[str]] = Field(default=None, description="Highlights of release")
class AppDependencyVersions(BaseModel):
"""App depencency Versions Response"""

View File

@@ -1,6 +1,7 @@
# Copyright (c) 2023 Lincoln D. Stein
"""FastAPI route for model configuration records."""
import contextlib
import io
import pathlib
import shutil
@@ -10,6 +11,7 @@ from enum import Enum
from tempfile import TemporaryDirectory
from typing import List, Optional, Type
import huggingface_hub
from fastapi import Body, Path, Query, Response, UploadFile
from fastapi.responses import FileResponse, HTMLResponse
from fastapi.routing import APIRouter
@@ -27,6 +29,7 @@ from invokeai.app.services.model_records import (
ModelRecordChanges,
UnknownModelException,
)
from invokeai.app.util.suppress_output import SuppressOutput
from invokeai.backend.model_manager.config import (
AnyModelConfig,
BaseModelType,
@@ -808,7 +811,11 @@ def get_is_installed(
for model in installed_models:
if model.source == starter_model.source:
return True
if model.name == starter_model.name and model.base == starter_model.base and model.type == starter_model.type:
if (
(model.name == starter_model.name or model.name in starter_model.previous_names)
and model.base == starter_model.base
and model.type == starter_model.type
):
return True
return False
@@ -919,3 +926,51 @@ async def get_stats() -> Optional[CacheStats]:
"""Return performance statistics on the model manager's RAM cache. Will return null if no models have been loaded."""
return ApiDependencies.invoker.services.model_manager.load.ram_cache.stats
class HFTokenStatus(str, Enum):
VALID = "valid"
INVALID = "invalid"
UNKNOWN = "unknown"
class HFTokenHelper:
@classmethod
def get_status(cls) -> HFTokenStatus:
try:
if huggingface_hub.get_token_permission(huggingface_hub.get_token()):
# Valid token!
return HFTokenStatus.VALID
# No token set
return HFTokenStatus.INVALID
except Exception:
return HFTokenStatus.UNKNOWN
@classmethod
def set_token(cls, token: str) -> HFTokenStatus:
with SuppressOutput(), contextlib.suppress(Exception):
huggingface_hub.login(token=token, add_to_git_credential=False)
return cls.get_status()
@model_manager_router.get("/hf_login", operation_id="get_hf_login_status", response_model=HFTokenStatus)
async def get_hf_login_status() -> HFTokenStatus:
token_status = HFTokenHelper.get_status()
if token_status is HFTokenStatus.UNKNOWN:
ApiDependencies.invoker.services.logger.warning("Unable to verify HF token")
return token_status
@model_manager_router.post("/hf_login", operation_id="do_hf_login", response_model=HFTokenStatus)
async def do_hf_login(
token: str = Body(description="Hugging Face token to use for login", embed=True),
) -> HFTokenStatus:
HFTokenHelper.set_token(token)
token_status = HFTokenHelper.get_status()
if token_status is HFTokenStatus.UNKNOWN:
ApiDependencies.invoker.services.logger.warning("Unable to verify HF token")
return token_status
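For reference, a hypothetical client sketch for the two new endpoints; the host, port, and the `/api/v2/models` route prefix are assumptions, not taken from this diff, and `hf_xxx` is a placeholder token:

```python
import requests

BASE = "http://localhost:9090/api/v2/models"  # assumed prefix for model_manager_router

# GET returns the current token status as a JSON string: "valid", "invalid" or "unknown".
status = requests.get(f"{BASE}/hf_login").json()
print(f"HF token status: {status}")

if status != "valid":
    # Because the token is declared with Body(..., embed=True), it is sent as
    # a JSON object field rather than a bare string.
    status = requests.post(f"{BASE}/hf_login", json={"token": "hf_xxx"}).json()
    print(f"HF token status after login: {status}")
```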

View File

@@ -4,6 +4,7 @@ from __future__ import annotations
import inspect
import re
import sys
import warnings
from abc import ABC, abstractmethod
from enum import Enum
@@ -192,12 +193,19 @@ class BaseInvocation(ABC, BaseModel):
"""Gets a pydantc TypeAdapter for the union of all invocation types."""
if not cls._typeadapter or cls._typeadapter_needs_update:
AnyInvocation = TypeAliasType(
"AnyInvocation", Annotated[Union[tuple(cls._invocation_classes)], Field(discriminator="type")]
"AnyInvocation", Annotated[Union[tuple(cls.get_invocations())], Field(discriminator="type")]
)
cls._typeadapter = TypeAdapter(AnyInvocation)
cls._typeadapter_needs_update = False
return cls._typeadapter
@classmethod
def invalidate_typeadapter(cls) -> None:
"""Invalidates the typeadapter, forcing it to be rebuilt on next access. If the invocation allowlist or
denylist is changed, this should be called to ensure the typeadapter is updated and validation respects
the updated allowlist and denylist."""
cls._typeadapter_needs_update = True
@classmethod
def get_invocations(cls) -> Iterable[BaseInvocation]:
"""Gets all invocations, respecting the allowlist and denylist."""
@@ -479,6 +487,26 @@ def invocation(
title="type", default=invocation_type, json_schema_extra={"field_kind": FieldKind.NodeAttribute}
)
# Validate the `invoke()` method is implemented
if "invoke" in cls.__abstractmethods__:
raise ValueError(f'Invocation "{invocation_type}" must implement the "invoke" method')
# And validate that `invoke()` returns a subclass of `BaseInvocationOutput
invoke_return_annotation = signature(cls.invoke).return_annotation
try:
# TODO(psyche): If `invoke()` is not defined, `return_annotation` ends up as the string "BaseInvocationOutput"
# instead of the class `BaseInvocationOutput`. This may be a pydantic bug: https://github.com/pydantic/pydantic/issues/7978
if isinstance(invoke_return_annotation, str):
invoke_return_annotation = getattr(sys.modules[cls.__module__], invoke_return_annotation)
assert invoke_return_annotation is not BaseInvocationOutput
assert issubclass(invoke_return_annotation, BaseInvocationOutput)
except Exception:
raise ValueError(
f'Invocation "{invocation_type}" must have a return annotation of a subclass of BaseInvocationOutput (got "{invoke_return_annotation}")'
)
docstring = cls.__doc__
cls = create_model(
cls.__qualname__,

View File

@@ -95,6 +95,7 @@ class CompelInvocation(BaseInvocation):
ti_manager,
),
):
context.util.signal_progress("Building conditioning")
assert isinstance(text_encoder, CLIPTextModel)
assert isinstance(tokenizer, CLIPTokenizer)
compel = Compel(
@@ -191,6 +192,7 @@ class SDXLPromptInvocationBase:
ti_manager,
),
):
context.util.signal_progress("Building conditioning")
assert isinstance(text_encoder, (CLIPTextModel, CLIPTextModelWithProjection))
assert isinstance(tokenizer, CLIPTokenizer)

View File

@@ -65,6 +65,7 @@ class CreateDenoiseMaskInvocation(BaseInvocation):
img_mask = tv_resize(mask, image_tensor.shape[-2:], T.InterpolationMode.BILINEAR, antialias=False)
masked_image = image_tensor * torch.where(img_mask < 0.5, 0.0, 1.0)
# TODO:
context.util.signal_progress("Running VAE encoder")
masked_latents = ImageToLatentsInvocation.vae_encode(vae_info, self.fp32, self.tiled, masked_image.clone())
masked_latents_name = context.tensors.save(tensor=masked_latents)

View File

@@ -131,6 +131,7 @@ class CreateGradientMaskInvocation(BaseInvocation):
image_tensor = image_tensor.unsqueeze(0)
img_mask = tv_resize(mask, image_tensor.shape[-2:], T.InterpolationMode.BILINEAR, antialias=False)
masked_image = image_tensor * torch.where(img_mask < 0.5, 0.0, 1.0)
context.util.signal_progress("Running VAE encoder")
masked_latents = ImageToLatentsInvocation.vae_encode(
vae_info, self.fp32, self.tiled, masked_image.clone()
)

View File

@@ -13,6 +13,7 @@ from diffusers.models.unets.unet_2d_condition import UNet2DConditionModel
from diffusers.schedulers.scheduling_dpmsolver_sde import DPMSolverSDEScheduler
from diffusers.schedulers.scheduling_tcd import TCDScheduler
from diffusers.schedulers.scheduling_utils import SchedulerMixin as Scheduler
from PIL import Image
from pydantic import field_validator
from torchvision.transforms.functional import resize as tv_resize
from transformers import CLIPVisionModelWithProjection
@@ -510,6 +511,7 @@ class DenoiseLatentsInvocation(BaseInvocation):
context: InvocationContext,
t2i_adapters: Optional[Union[T2IAdapterField, list[T2IAdapterField]]],
ext_manager: ExtensionsManager,
bgr_mode: bool = False,
) -> None:
if t2i_adapters is None:
return
@@ -519,6 +521,10 @@ class DenoiseLatentsInvocation(BaseInvocation):
t2i_adapters = [t2i_adapters]
for t2i_adapter_field in t2i_adapters:
image = context.images.get_pil(t2i_adapter_field.image.image_name)
if bgr_mode: # SDXL t2i trained on cv2's BGR outputs, but PIL won't convert straight to BGR
r, g, b = image.split()
image = Image.merge("RGB", (b, g, r))
ext_manager.add_extension(
T2IAdapterExt(
node_context=context,
@@ -616,13 +622,17 @@ class DenoiseLatentsInvocation(BaseInvocation):
for t2i_adapter_field in t2i_adapter:
t2i_adapter_model_config = context.models.get_config(t2i_adapter_field.t2i_adapter_model.key)
t2i_adapter_loaded_model = context.models.load(t2i_adapter_field.t2i_adapter_model)
image = context.images.get_pil(t2i_adapter_field.image.image_name)
image = context.images.get_pil(t2i_adapter_field.image.image_name, mode="RGB")
# The max_unet_downscale is the maximum amount that the UNet model downscales the latent image internally.
if t2i_adapter_model_config.base == BaseModelType.StableDiffusion1:
max_unet_downscale = 8
elif t2i_adapter_model_config.base == BaseModelType.StableDiffusionXL:
max_unet_downscale = 4
# SDXL adapters are trained on cv2's BGR outputs
r, g, b = image.split()
image = Image.merge("RGB", (b, g, r))
else:
raise ValueError(f"Unexpected T2I-Adapter base model type: '{t2i_adapter_model_config.base}'.")
@@ -630,29 +640,39 @@ class DenoiseLatentsInvocation(BaseInvocation):
with t2i_adapter_loaded_model as t2i_adapter_model:
total_downscale_factor = t2i_adapter_model.total_downscale_factor
# Resize the T2I-Adapter input image.
# We select the resize dimensions so that after the T2I-Adapter's total_downscale_factor is applied, the
# result will match the latent image's dimensions after max_unet_downscale is applied.
t2i_input_height = latents_shape[2] // max_unet_downscale * total_downscale_factor
t2i_input_width = latents_shape[3] // max_unet_downscale * total_downscale_factor
# Note: We have hard-coded `do_classifier_free_guidance=False`. This is because we only want to prepare
# a single image. If CFG is enabled, we will duplicate the resultant tensor after applying the
# T2I-Adapter model.
#
# Note: We re-use the `prepare_control_image(...)` from ControlNet for T2I-Adapter, because it has many
# of the same requirements (e.g. preserving binary masks during resize).
# Assuming fixed dimensional scaling of LATENT_SCALE_FACTOR.
_, _, latent_height, latent_width = latents_shape
control_height_resize = latent_height * LATENT_SCALE_FACTOR
control_width_resize = latent_width * LATENT_SCALE_FACTOR
t2i_image = prepare_control_image(
image=image,
do_classifier_free_guidance=False,
width=t2i_input_width,
height=t2i_input_height,
width=control_width_resize,
height=control_height_resize,
num_channels=t2i_adapter_model.config["in_channels"], # mypy treats this as a FrozenDict
device=t2i_adapter_model.device,
dtype=t2i_adapter_model.dtype,
resize_mode=t2i_adapter_field.resize_mode,
)
# Resize the T2I-Adapter input image.
# We select the resize dimensions so that after the T2I-Adapter's total_downscale_factor is applied, the
# result will match the latent image's dimensions after max_unet_downscale is applied.
# We crop the image to this size so that the positions match the input image on non-standard resolutions
t2i_input_height = latents_shape[2] // max_unet_downscale * total_downscale_factor
t2i_input_width = latents_shape[3] // max_unet_downscale * total_downscale_factor
if t2i_image.shape[2] > t2i_input_height or t2i_image.shape[3] > t2i_input_width:
t2i_image = t2i_image[
:, :, : min(t2i_image.shape[2], t2i_input_height), : min(t2i_image.shape[3], t2i_input_width)
]
adapter_state = t2i_adapter_model(t2i_image)
if do_classifier_free_guidance:
@@ -900,7 +920,8 @@ class DenoiseLatentsInvocation(BaseInvocation):
# ext = extension_field.to_extension(exit_stack, context, ext_manager)
# ext_manager.add_extension(ext)
self.parse_controlnet_field(exit_stack, context, self.control, ext_manager)
self.parse_t2i_adapter_field(exit_stack, context, self.t2i_adapter, ext_manager)
bgr_mode = self.unet.unet.base == BaseModelType.StableDiffusionXL
self.parse_t2i_adapter_field(exit_stack, context, self.t2i_adapter, ext_manager, bgr_mode)
# ext: t2i/ip adapter
ext_manager.run_callback(ExtensionCallbackType.SETUP, denoise_ctx)

View File

@@ -41,6 +41,7 @@ class UIType(str, Enum, metaclass=MetaEnum):
# region Model Field Types
MainModel = "MainModelField"
FluxMainModel = "FluxMainModelField"
SD3MainModel = "SD3MainModelField"
SDXLMainModel = "SDXLMainModelField"
SDXLRefinerModel = "SDXLRefinerModelField"
ONNXModel = "ONNXModelField"
@@ -52,6 +53,8 @@ class UIType(str, Enum, metaclass=MetaEnum):
T2IAdapterModel = "T2IAdapterModelField"
T5EncoderModel = "T5EncoderModelField"
CLIPEmbedModel = "CLIPEmbedModelField"
CLIPLEmbedModel = "CLIPLEmbedModelField"
CLIPGEmbedModel = "CLIPGEmbedModelField"
SpandrelImageToImageModel = "SpandrelImageToImageModelField"
# endregion
@@ -131,6 +134,7 @@ class FieldDescriptions:
clip = "CLIP (tokenizer, text encoder, LoRAs) and skipped layer count"
t5_encoder = "T5 tokenizer and text encoder"
clip_embed_model = "CLIP Embed loader"
clip_g_model = "CLIP-G Embed loader"
unet = "UNet (scheduler, LoRAs)"
transformer = "Transformer"
mmditx = "MMDiTX"
@@ -248,6 +252,12 @@ class FluxConditioningField(BaseModel):
conditioning_name: str = Field(description="The name of conditioning tensor")
class SD3ConditioningField(BaseModel):
"""A conditioning tensor primitive value"""
conditioning_name: str = Field(description="The name of conditioning tensor")
class ConditioningField(BaseModel):
"""A conditioning tensor primitive value"""

View File

@@ -56,7 +56,7 @@ from invokeai.backend.util.devices import TorchDevice
title="FLUX Denoise",
tags=["image", "flux"],
category="image",
version="3.2.0",
version="3.2.1",
classification=Classification.Prototype,
)
class FluxDenoiseInvocation(BaseInvocation, WithMetadata, WithBoard):
@@ -81,6 +81,7 @@ class FluxDenoiseInvocation(BaseInvocation, WithMetadata, WithBoard):
description=FieldDescriptions.denoising_start,
)
denoising_end: float = InputField(default=1.0, ge=0, le=1, description=FieldDescriptions.denoising_end)
add_noise: bool = InputField(default=True, description="Add noise based on denoising start.")
transformer: TransformerField = InputField(
description=FieldDescriptions.flux_model,
input=Input.Connection,
@@ -207,9 +208,12 @@ class FluxDenoiseInvocation(BaseInvocation, WithMetadata, WithBoard):
"to be poor. Consider using a FLUX dev model instead."
)
# Noise the orig_latents by the appropriate amount for the first timestep.
t_0 = timesteps[0]
x = t_0 * noise + (1.0 - t_0) * init_latents
if self.add_noise:
# Noise the orig_latents by the appropriate amount for the first timestep.
t_0 = timesteps[0]
x = t_0 * noise + (1.0 - t_0) * init_latents
else:
x = init_latents
else:
# init_latents are not provided, so we are not doing image-to-image (i.e. we are starting from pure noise).
if self.denoising_start > 1e-5:

View File

@@ -11,7 +11,10 @@ from invokeai.app.invocations.fields import FieldDescriptions, Input, InputField
from invokeai.app.invocations.model import CLIPField, ModelIdentifierField, T5EncoderField, TransformerField, VAEField
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.flux.util import max_seq_lengths
from invokeai.backend.model_manager.config import CheckpointConfigBase, SubModelType
from invokeai.backend.model_manager.config import (
CheckpointConfigBase,
SubModelType,
)
@invocation_output("flux_model_loader_output")

View File

@@ -71,6 +71,7 @@ class FluxTextEncoderInvocation(BaseInvocation):
t5_encoder = HFEncoder(t5_text_encoder, t5_tokenizer, False, self.t5_max_seq_len)
context.util.signal_progress("Running T5 encoder")
prompt_embeds = t5_encoder(prompt)
assert isinstance(prompt_embeds, torch.Tensor)
@@ -111,6 +112,7 @@ class FluxTextEncoderInvocation(BaseInvocation):
clip_encoder = HFEncoder(clip_text_encoder, clip_tokenizer, True, 77)
context.util.signal_progress("Running CLIP encoder")
pooled_prompt_embeds = clip_encoder(prompt)
assert isinstance(pooled_prompt_embeds, torch.Tensor)

View File

@@ -41,7 +41,8 @@ class FluxVaeDecodeInvocation(BaseInvocation, WithMetadata, WithBoard):
def _vae_decode(self, vae_info: LoadedModel, latents: torch.Tensor) -> Image.Image:
with vae_info as vae:
assert isinstance(vae, AutoEncoder)
latents = latents.to(device=TorchDevice.choose_torch_device(), dtype=TorchDevice.choose_torch_dtype())
vae_dtype = next(iter(vae.parameters())).dtype
latents = latents.to(device=TorchDevice.choose_torch_device(), dtype=vae_dtype)
img = vae.decode(latents)
img = img.clamp(-1, 1)
@@ -53,6 +54,7 @@ class FluxVaeDecodeInvocation(BaseInvocation, WithMetadata, WithBoard):
def invoke(self, context: InvocationContext) -> ImageOutput:
latents = context.tensors.load(self.latents.latents_name)
vae_info = context.models.load(self.vae.vae)
context.util.signal_progress("Running VAE")
image = self._vae_decode(vae_info=vae_info, latents=latents)
TorchDevice.empty_cache()

View File

@@ -44,9 +44,8 @@ class FluxVaeEncodeInvocation(BaseInvocation):
generator = torch.Generator(device=TorchDevice.choose_torch_device()).manual_seed(0)
with vae_info as vae:
assert isinstance(vae, AutoEncoder)
image_tensor = image_tensor.to(
device=TorchDevice.choose_torch_device(), dtype=TorchDevice.choose_torch_dtype()
)
vae_dtype = next(iter(vae.parameters())).dtype
image_tensor = image_tensor.to(device=TorchDevice.choose_torch_device(), dtype=vae_dtype)
latents = vae.encode(image_tensor, sample=True, generator=generator)
return latents
@@ -60,6 +59,7 @@ class FluxVaeEncodeInvocation(BaseInvocation):
if image_tensor.dim() == 3:
image_tensor = einops.rearrange(image_tensor, "c h w -> 1 c h w")
context.util.signal_progress("Running VAE")
latents = self.vae_encode(vae_info=vae_info, image_tensor=image_tensor)
latents = latents.to("cpu")

View File

@@ -117,6 +117,7 @@ class ImageToLatentsInvocation(BaseInvocation):
if image_tensor.dim() == 3:
image_tensor = einops.rearrange(image_tensor, "c h w -> 1 c h w")
context.util.signal_progress("Running VAE encoder")
latents = self.vae_encode(
vae_info=vae_info, upcast=self.fp32, tiled=self.tiled, image_tensor=image_tensor, tile_size=self.tile_size
)

View File

@@ -60,6 +60,7 @@ class LatentsToImageInvocation(BaseInvocation, WithMetadata, WithBoard):
vae_info = context.models.load(self.vae.vae)
assert isinstance(vae_info.model, (AutoencoderKL, AutoencoderTiny))
with SeamlessExt.static_patch_model(vae_info.model, self.vae.seamless_axes), vae_info as vae:
context.util.signal_progress("Running VAE decoder")
assert isinstance(vae, (AutoencoderKL, AutoencoderTiny))
latents = latents.to(vae.device)
if self.fp32:

View File

@@ -165,6 +165,7 @@ class ApplyMaskTensorToImageInvocation(BaseInvocation, WithMetadata, WithBoard):
mask: TensorField = InputField(description="The mask tensor to apply.")
image: ImageField = InputField(description="The image to apply the mask to.")
invert: bool = InputField(default=False, description="Whether to invert the mask.")
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.images.get_pil(self.image.image_name, mode="RGBA")
@@ -179,6 +180,9 @@ class ApplyMaskTensorToImageInvocation(BaseInvocation, WithMetadata, WithBoard):
mask = mask > 0.5
mask_np = (mask.float() * 255).byte().cpu().numpy().astype(np.uint8)
if self.invert:
mask_np = 255 - mask_np
# Apply the mask only to the alpha channel where the original alpha is non-zero. This preserves the original
# image's transparency - else the transparent regions would end up as opaque black.

View File

@@ -147,6 +147,10 @@ GENERATION_MODES = Literal[
"flux_img2img",
"flux_inpaint",
"flux_outpaint",
"sd3_txt2img",
"sd3_img2img",
"sd3_inpaint",
"sd3_outpaint",
]

View File

@@ -18,6 +18,7 @@ from invokeai.app.invocations.fields import (
InputField,
LatentsField,
OutputField,
SD3ConditioningField,
TensorField,
UIComponent,
)
@@ -426,6 +427,17 @@ class FluxConditioningOutput(BaseInvocationOutput):
return cls(conditioning=FluxConditioningField(conditioning_name=conditioning_name))
@invocation_output("sd3_conditioning_output")
class SD3ConditioningOutput(BaseInvocationOutput):
"""Base class for nodes that output a single SD3 conditioning tensor"""
conditioning: SD3ConditioningField = OutputField(description=FieldDescriptions.cond)
@classmethod
def build(cls, conditioning_name: str) -> "SD3ConditioningOutput":
return cls(conditioning=SD3ConditioningField(conditioning_name=conditioning_name))
@invocation_output("conditioning_output")
class ConditioningOutput(BaseInvocationOutput):
"""Base class for nodes that output a single conditioning tensor"""

View File

@@ -0,0 +1,338 @@
from typing import Callable, Optional, Tuple
import torch
import torchvision.transforms as tv_transforms
from diffusers.models.transformers.transformer_sd3 import SD3Transformer2DModel
from torchvision.transforms.functional import resize as tv_resize
from tqdm import tqdm
from invokeai.app.invocations.baseinvocation import BaseInvocation, Classification, invocation
from invokeai.app.invocations.constants import LATENT_SCALE_FACTOR
from invokeai.app.invocations.fields import (
DenoiseMaskField,
FieldDescriptions,
Input,
InputField,
LatentsField,
SD3ConditioningField,
WithBoard,
WithMetadata,
)
from invokeai.app.invocations.model import TransformerField
from invokeai.app.invocations.primitives import LatentsOutput
from invokeai.app.invocations.sd3_text_encoder import SD3_T5_MAX_SEQ_LEN
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.flux.sampling_utils import clip_timestep_schedule_fractional
from invokeai.backend.model_manager.config import BaseModelType
from invokeai.backend.sd3.extensions.inpaint_extension import InpaintExtension
from invokeai.backend.stable_diffusion.diffusers_pipeline import PipelineIntermediateState
from invokeai.backend.stable_diffusion.diffusion.conditioning_data import SD3ConditioningInfo
from invokeai.backend.util.devices import TorchDevice
@invocation(
"sd3_denoise",
title="SD3 Denoise",
tags=["image", "sd3"],
category="image",
version="1.1.0",
classification=Classification.Prototype,
)
class SD3DenoiseInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Run denoising process with a SD3 model."""
# If latents is provided, this means we are doing image-to-image.
latents: Optional[LatentsField] = InputField(
default=None, description=FieldDescriptions.latents, input=Input.Connection
)
# denoise_mask is used for image-to-image inpainting. Only the masked region is modified.
denoise_mask: Optional[DenoiseMaskField] = InputField(
default=None, description=FieldDescriptions.denoise_mask, input=Input.Connection
)
denoising_start: float = InputField(default=0.0, ge=0, le=1, description=FieldDescriptions.denoising_start)
denoising_end: float = InputField(default=1.0, ge=0, le=1, description=FieldDescriptions.denoising_end)
transformer: TransformerField = InputField(
description=FieldDescriptions.sd3_model, input=Input.Connection, title="Transformer"
)
positive_conditioning: SD3ConditioningField = InputField(
description=FieldDescriptions.positive_cond, input=Input.Connection
)
negative_conditioning: SD3ConditioningField = InputField(
description=FieldDescriptions.negative_cond, input=Input.Connection
)
cfg_scale: float | list[float] = InputField(default=3.5, description=FieldDescriptions.cfg_scale, title="CFG Scale")
width: int = InputField(default=1024, multiple_of=16, description="Width of the generated image.")
height: int = InputField(default=1024, multiple_of=16, description="Height of the generated image.")
steps: int = InputField(default=10, gt=0, description=FieldDescriptions.steps)
seed: int = InputField(default=0, description="Randomness seed for reproducibility.")
@torch.no_grad()
def invoke(self, context: InvocationContext) -> LatentsOutput:
latents = self._run_diffusion(context)
latents = latents.detach().to("cpu")
name = context.tensors.save(tensor=latents)
return LatentsOutput.build(latents_name=name, latents=latents, seed=None)
def _prep_inpaint_mask(self, context: InvocationContext, latents: torch.Tensor) -> torch.Tensor | None:
"""Prepare the inpaint mask.
- Loads the mask
- Resizes if necessary
- Casts to same device/dtype as latents
Args:
context (InvocationContext): The invocation context, for loading the inpaint mask.
latents (torch.Tensor): A latent image tensor. Used to determine the target shape, device, and dtype for the
inpaint mask.
Returns:
torch.Tensor | None: Inpaint mask. Values of 0.0 represent the regions to be fully denoised, and 1.0
represent the regions to be preserved.
"""
if self.denoise_mask is None:
return None
mask = context.tensors.load(self.denoise_mask.mask_name)
# The input denoise_mask contains values in [0, 1], where 0.0 represents the regions to be fully denoised, and
# 1.0 represents the regions to be preserved.
# We invert the mask so that the regions to be preserved are 0.0 and the regions to be denoised are 1.0.
mask = 1.0 - mask
_, _, latent_height, latent_width = latents.shape
mask = tv_resize(
img=mask,
size=[latent_height, latent_width],
interpolation=tv_transforms.InterpolationMode.BILINEAR,
antialias=False,
)
mask = mask.to(device=latents.device, dtype=latents.dtype)
return mask
def _load_text_conditioning(
self,
context: InvocationContext,
conditioning_name: str,
joint_attention_dim: int,
dtype: torch.dtype,
device: torch.device,
) -> Tuple[torch.Tensor, torch.Tensor]:
# Load the conditioning data.
cond_data = context.conditioning.load(conditioning_name)
assert len(cond_data.conditionings) == 1
sd3_conditioning = cond_data.conditionings[0]
assert isinstance(sd3_conditioning, SD3ConditioningInfo)
sd3_conditioning = sd3_conditioning.to(dtype=dtype, device=device)
t5_embeds = sd3_conditioning.t5_embeds
if t5_embeds is None:
t5_embeds = torch.zeros(
(1, SD3_T5_MAX_SEQ_LEN, joint_attention_dim),
device=device,
dtype=dtype,
)
clip_prompt_embeds = torch.cat([sd3_conditioning.clip_l_embeds, sd3_conditioning.clip_g_embeds], dim=-1)
clip_prompt_embeds = torch.nn.functional.pad(
clip_prompt_embeds, (0, t5_embeds.shape[-1] - clip_prompt_embeds.shape[-1])
)
prompt_embeds = torch.cat([clip_prompt_embeds, t5_embeds], dim=-2)
pooled_prompt_embeds = torch.cat(
[sd3_conditioning.clip_l_pooled_embeds, sd3_conditioning.clip_g_pooled_embeds], dim=-1
)
return prompt_embeds, pooled_prompt_embeds
def _get_noise(
self,
num_samples: int,
num_channels_latents: int,
height: int,
width: int,
dtype: torch.dtype,
device: torch.device,
seed: int,
) -> torch.Tensor:
# We always generate noise on the same device and dtype then cast to ensure consistency across devices/dtypes.
rand_device = "cpu"
rand_dtype = torch.float16
return torch.randn(
num_samples,
num_channels_latents,
int(height) // LATENT_SCALE_FACTOR,
int(width) // LATENT_SCALE_FACTOR,
device=rand_device,
dtype=rand_dtype,
generator=torch.Generator(device=rand_device).manual_seed(seed),
).to(device=device, dtype=dtype)
def _prepare_cfg_scale(self, num_timesteps: int) -> list[float]:
"""Prepare the CFG scale list.
Args:
num_timesteps (int): The number of timesteps in the scheduler. Could be different from num_steps depending
on the scheduler used (e.g. higher order schedulers).
Returns:
list[float]: _description_
"""
if isinstance(self.cfg_scale, float):
cfg_scale = [self.cfg_scale] * num_timesteps
elif isinstance(self.cfg_scale, list):
assert len(self.cfg_scale) == num_timesteps
cfg_scale = self.cfg_scale
else:
raise ValueError(f"Invalid CFG scale type: {type(self.cfg_scale)}")
return cfg_scale
def _run_diffusion(
self,
context: InvocationContext,
):
inference_dtype = TorchDevice.choose_torch_dtype()
device = TorchDevice.choose_torch_device()
transformer_info = context.models.load(self.transformer.transformer)
# Load/process the conditioning data.
# TODO(ryand): Make CFG optional.
do_classifier_free_guidance = True
pos_prompt_embeds, pos_pooled_prompt_embeds = self._load_text_conditioning(
context=context,
conditioning_name=self.positive_conditioning.conditioning_name,
joint_attention_dim=transformer_info.model.config.joint_attention_dim,
dtype=inference_dtype,
device=device,
)
neg_prompt_embeds, neg_pooled_prompt_embeds = self._load_text_conditioning(
context=context,
conditioning_name=self.negative_conditioning.conditioning_name,
joint_attention_dim=transformer_info.model.config.joint_attention_dim,
dtype=inference_dtype,
device=device,
)
# TODO(ryand): Support both sequential and batched CFG inference.
prompt_embeds = torch.cat([neg_prompt_embeds, pos_prompt_embeds], dim=0)
pooled_prompt_embeds = torch.cat([neg_pooled_prompt_embeds, pos_pooled_prompt_embeds], dim=0)
# Prepare the timestep schedule.
# We add an extra step to the end to account for the final timestep of 0.0.
timesteps: list[float] = torch.linspace(1, 0, self.steps + 1).tolist()
# Clip the timesteps schedule based on denoising_start and denoising_end.
timesteps = clip_timestep_schedule_fractional(timesteps, self.denoising_start, self.denoising_end)
total_steps = len(timesteps) - 1
# Prepare the CFG scale list.
cfg_scale = self._prepare_cfg_scale(total_steps)
# Load the input latents, if provided.
init_latents = context.tensors.load(self.latents.latents_name) if self.latents else None
if init_latents is not None:
init_latents = init_latents.to(device=device, dtype=inference_dtype)
# Generate initial latent noise.
num_channels_latents = transformer_info.model.config.in_channels
assert isinstance(num_channels_latents, int)
noise = self._get_noise(
num_samples=1,
num_channels_latents=num_channels_latents,
height=self.height,
width=self.width,
dtype=inference_dtype,
device=device,
seed=self.seed,
)
# Prepare input latent image.
if init_latents is not None:
# Noise the init_latents by the appropriate amount for the first timestep.
t_0 = timesteps[0]
latents = t_0 * noise + (1.0 - t_0) * init_latents
else:
# init_latents are not provided, so we are not doing image-to-image (i.e. we are starting from pure noise).
if self.denoising_start > 1e-5:
raise ValueError("denoising_start should be 0 when initial latents are not provided.")
latents = noise
# If len(timesteps) == 1, then short-circuit. We are just noising the input latents, but not taking any
# denoising steps.
if len(timesteps) <= 1:
return latents
# Prepare inpaint extension.
inpaint_mask = self._prep_inpaint_mask(context, latents)
inpaint_extension: InpaintExtension | None = None
if inpaint_mask is not None:
assert init_latents is not None
inpaint_extension = InpaintExtension(
init_latents=init_latents,
inpaint_mask=inpaint_mask,
noise=noise,
)
step_callback = self._build_step_callback(context)
step_callback(
PipelineIntermediateState(
step=0,
order=1,
total_steps=total_steps,
timestep=int(timesteps[0]),
latents=latents,
),
)
with transformer_info.model_on_device() as (cached_weights, transformer):
assert isinstance(transformer, SD3Transformer2DModel)
# 6. Denoising loop
for step_idx, (t_curr, t_prev) in tqdm(list(enumerate(zip(timesteps[:-1], timesteps[1:], strict=True)))):
# Expand the latents if we are doing CFG.
latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
# Expand the timestep to match the latent model input.
# Multiply by 1000 to match the default FlowMatchEulerDiscreteScheduler num_train_timesteps.
timestep = torch.tensor([t_curr * 1000], device=device).expand(latent_model_input.shape[0])
noise_pred = transformer(
hidden_states=latent_model_input,
timestep=timestep,
encoder_hidden_states=prompt_embeds,
pooled_projections=pooled_prompt_embeds,
joint_attention_kwargs=None,
return_dict=False,
)[0]
# Apply CFG.
if do_classifier_free_guidance:
noise_pred_uncond, noise_pred_cond = noise_pred.chunk(2)
noise_pred = noise_pred_uncond + cfg_scale[step_idx] * (noise_pred_cond - noise_pred_uncond)
# Compute the previous noisy sample x_t -> x_t-1.
latents_dtype = latents.dtype
latents = latents.to(dtype=torch.float32)
latents = latents + (t_prev - t_curr) * noise_pred
latents = latents.to(dtype=latents_dtype)
if inpaint_extension is not None:
latents = inpaint_extension.merge_intermediate_latents_with_init_latents(latents, t_prev)
step_callback(
PipelineIntermediateState(
step=step_idx + 1,
order=1,
total_steps=total_steps,
timestep=int(t_curr),
latents=latents,
),
)
return latents
def _build_step_callback(self, context: InvocationContext) -> Callable[[PipelineIntermediateState], None]:
def step_callback(state: PipelineIntermediateState) -> None:
context.util.sd_step_callback(state, BaseModelType.StableDiffusion3)
return step_callback

View File

@@ -0,0 +1,65 @@
import einops
import torch
from diffusers.models.autoencoders.autoencoder_kl import AutoencoderKL
from invokeai.app.invocations.baseinvocation import BaseInvocation, Classification, invocation
from invokeai.app.invocations.fields import (
FieldDescriptions,
ImageField,
Input,
InputField,
WithBoard,
WithMetadata,
)
from invokeai.app.invocations.model import VAEField
from invokeai.app.invocations.primitives import LatentsOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.model_manager.load.load_base import LoadedModel
from invokeai.backend.stable_diffusion.diffusers_pipeline import image_resized_to_grid_as_tensor
@invocation(
"sd3_i2l",
title="SD3 Image to Latents",
tags=["image", "latents", "vae", "i2l", "sd3"],
category="image",
version="1.0.0",
classification=Classification.Prototype,
)
class SD3ImageToLatentsInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Generates latents from an image."""
image: ImageField = InputField(description="The image to encode")
vae: VAEField = InputField(description=FieldDescriptions.vae, input=Input.Connection)
@staticmethod
def vae_encode(vae_info: LoadedModel, image_tensor: torch.Tensor) -> torch.Tensor:
with vae_info as vae:
assert isinstance(vae, AutoencoderKL)
vae.disable_tiling()
image_tensor = image_tensor.to(device=vae.device, dtype=vae.dtype)
with torch.inference_mode():
image_tensor_dist = vae.encode(image_tensor).latent_dist
# TODO: Use seed to make sampling reproducible.
latents: torch.Tensor = image_tensor_dist.sample().to(dtype=vae.dtype)
latents = vae.config.scaling_factor * latents
return latents
@torch.no_grad()
def invoke(self, context: InvocationContext) -> LatentsOutput:
image = context.images.get_pil(self.image.image_name)
image_tensor = image_resized_to_grid_as_tensor(image.convert("RGB"))
if image_tensor.dim() == 3:
image_tensor = einops.rearrange(image_tensor, "c h w -> 1 c h w")
vae_info = context.models.load(self.vae.vae)
latents = self.vae_encode(vae_info=vae_info, image_tensor=image_tensor)
latents = latents.to("cpu")
name = context.tensors.save(tensor=latents)
return LatentsOutput.build(latents_name=name, latents=latents, seed=None)

View File

@@ -0,0 +1,74 @@
from contextlib import nullcontext
import torch
from diffusers.models.autoencoders.autoencoder_kl import AutoencoderKL
from einops import rearrange
from PIL import Image
from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
from invokeai.app.invocations.fields import (
FieldDescriptions,
Input,
InputField,
LatentsField,
WithBoard,
WithMetadata,
)
from invokeai.app.invocations.model import VAEField
from invokeai.app.invocations.primitives import ImageOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.stable_diffusion.extensions.seamless import SeamlessExt
from invokeai.backend.util.devices import TorchDevice
@invocation(
"sd3_l2i",
title="SD3 Latents to Image",
tags=["latents", "image", "vae", "l2i", "sd3"],
category="latents",
version="1.3.0",
)
class SD3LatentsToImageInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Generates an image from latents."""
latents: LatentsField = InputField(
description=FieldDescriptions.latents,
input=Input.Connection,
)
vae: VAEField = InputField(
description=FieldDescriptions.vae,
input=Input.Connection,
)
@torch.no_grad()
def invoke(self, context: InvocationContext) -> ImageOutput:
latents = context.tensors.load(self.latents.latents_name)
vae_info = context.models.load(self.vae.vae)
assert isinstance(vae_info.model, (AutoencoderKL))
with SeamlessExt.static_patch_model(vae_info.model, self.vae.seamless_axes), vae_info as vae:
context.util.signal_progress("Running VAE")
assert isinstance(vae, (AutoencoderKL))
latents = latents.to(vae.device)
vae.disable_tiling()
tiling_context = nullcontext()
# clear memory as vae decode can request a lot
TorchDevice.empty_cache()
with torch.inference_mode(), tiling_context:
# copied from diffusers pipeline
latents = latents / vae.config.scaling_factor
img = vae.decode(latents, return_dict=False)[0]
img = img.clamp(-1, 1)
img = rearrange(img[0], "c h w -> h w c") # noqa: F821
img_pil = Image.fromarray((127.5 * (img + 1.0)).byte().cpu().numpy())
TorchDevice.empty_cache()
image_dto = context.images.save(image=img_pil)
return ImageOutput.build(image_dto)

View File

@@ -1,3 +1,5 @@
from typing import Optional
from invokeai.app.invocations.baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
@@ -8,14 +10,14 @@ from invokeai.app.invocations.baseinvocation import (
from invokeai.app.invocations.fields import FieldDescriptions, Input, InputField, OutputField, UIType
from invokeai.app.invocations.model import CLIPField, ModelIdentifierField, T5EncoderField, TransformerField, VAEField
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.model_manager.config import CheckpointConfigBase, SubModelType
from invokeai.backend.model_manager.config import SubModelType
@invocation_output("sd3_model_loader_output")
class Sd3ModelLoaderOutput(BaseInvocationOutput):
"""SD3 base model loader output."""
mmditx: TransformerField = OutputField(description=FieldDescriptions.mmditx, title="MMDiTX")
transformer: TransformerField = OutputField(description=FieldDescriptions.transformer, title="Transformer")
clip_l: CLIPField = OutputField(description=FieldDescriptions.clip, title="CLIP L")
clip_g: CLIPField = OutputField(description=FieldDescriptions.clip, title="CLIP G")
t5_encoder: T5EncoderField = OutputField(description=FieldDescriptions.t5_encoder, title="T5 Encoder")
@@ -33,68 +35,72 @@ class Sd3ModelLoaderOutput(BaseInvocationOutput):
class Sd3ModelLoaderInvocation(BaseInvocation):
"""Loads a SD3 base model, outputting its submodels."""
# TODO(ryand): Create a UIType.Sd3MainModelField to use here.
model: ModelIdentifierField = InputField(
description=FieldDescriptions.sd3_model,
ui_type=UIType.MainModel,
ui_type=UIType.SD3MainModel,
input=Input.Direct,
)
# TODO(ryand): Make the text encoders optional.
# Note: The text encoders are optional for SD3. The model was trained with dropout, so any can be left out at
# inference time. Typically, only the T5 encoder is omitted, since it is the largest by far.
t5_encoder_model: ModelIdentifierField = InputField(
description=FieldDescriptions.t5_encoder, ui_type=UIType.T5EncoderModel, input=Input.Direct, title="T5 Encoder"
t5_encoder_model: Optional[ModelIdentifierField] = InputField(
description=FieldDescriptions.t5_encoder,
ui_type=UIType.T5EncoderModel,
input=Input.Direct,
title="T5 Encoder",
default=None,
)
clip_l_embed_model: ModelIdentifierField = InputField(
clip_l_model: Optional[ModelIdentifierField] = InputField(
description=FieldDescriptions.clip_embed_model,
ui_type=UIType.CLIPEmbedModel,
ui_type=UIType.CLIPLEmbedModel,
input=Input.Direct,
title="CLIP L Embed",
title="CLIP L Encoder",
default=None,
)
clip_g_embed_model: ModelIdentifierField = InputField(
description=FieldDescriptions.clip_embed_model,
ui_type=UIType.CLIPEmbedModel,
clip_g_model: Optional[ModelIdentifierField] = InputField(
description=FieldDescriptions.clip_g_model,
ui_type=UIType.CLIPGEmbedModel,
input=Input.Direct,
title="CLIP G Embed",
title="CLIP G Encoder",
default=None,
)
# TODO(ryand): Create a UIType.Sd3VaModelField to use here.
vae_model: ModelIdentifierField = InputField(
description=FieldDescriptions.vae_model, ui_type=UIType.VAEModel, title="VAE"
vae_model: Optional[ModelIdentifierField] = InputField(
description=FieldDescriptions.vae_model, ui_type=UIType.VAEModel, title="VAE", default=None
)
def invoke(self, context: InvocationContext) -> Sd3ModelLoaderOutput:
for key in [
self.model.key,
self.t5_encoder_model.key,
self.clip_l_embed_model.key,
self.clip_g_embed_model.key,
self.vae_model.key,
]:
if not context.models.exists(key):
raise ValueError(f"Unknown model: {key}")
# TODO(ryand): Figure out the sub-model types for SD3.
mmditx = self.model.model_copy(update={"submodel_type": SubModelType.Transformer})
vae = self.vae_model.model_copy(update={"submodel_type": SubModelType.VAE})
tokenizer_l = self.clip_l_embed_model.model_copy(update={"submodel_type": SubModelType.Tokenizer})
clip_encoder_l = self.clip_l_embed_model.model_copy(update={"submodel_type": SubModelType.TextEncoder})
tokenizer_g = self.clip_g_embed_model.model_copy(update={"submodel_type": SubModelType.Tokenizer})
clip_encoder_g = self.clip_g_embed_model.model_copy(update={"submodel_type": SubModelType.TextEncoder})
tokenizer_t5 = self.t5_encoder_model.model_copy(update={"submodel_type": SubModelType.Tokenizer2})
t5_encoder = self.t5_encoder_model.model_copy(update={"submodel_type": SubModelType.TextEncoder2})
transformer_config = context.models.get_config(mmditx)
assert isinstance(transformer_config, CheckpointConfigBase)
transformer = self.model.model_copy(update={"submodel_type": SubModelType.Transformer})
vae = (
self.vae_model.model_copy(update={"submodel_type": SubModelType.VAE})
if self.vae_model
else self.model.model_copy(update={"submodel_type": SubModelType.VAE})
)
tokenizer_l = self.model.model_copy(update={"submodel_type": SubModelType.Tokenizer})
clip_encoder_l = (
self.clip_l_model.model_copy(update={"submodel_type": SubModelType.TextEncoder})
if self.clip_l_model
else self.model.model_copy(update={"submodel_type": SubModelType.TextEncoder})
)
tokenizer_g = self.model.model_copy(update={"submodel_type": SubModelType.Tokenizer2})
clip_encoder_g = (
self.clip_g_model.model_copy(update={"submodel_type": SubModelType.TextEncoder2})
if self.clip_g_model
else self.model.model_copy(update={"submodel_type": SubModelType.TextEncoder2})
)
tokenizer_t5 = (
self.t5_encoder_model.model_copy(update={"submodel_type": SubModelType.Tokenizer3})
if self.t5_encoder_model
else self.model.model_copy(update={"submodel_type": SubModelType.Tokenizer3})
)
t5_encoder = (
self.t5_encoder_model.model_copy(update={"submodel_type": SubModelType.TextEncoder3})
if self.t5_encoder_model
else self.model.model_copy(update={"submodel_type": SubModelType.TextEncoder3})
)
return Sd3ModelLoaderOutput(
mmditx=TransformerField(transformer=mmditx, loras=[]),
transformer=TransformerField(transformer=transformer, loras=[]),
clip_l=CLIPField(tokenizer=tokenizer_l, text_encoder=clip_encoder_l, loras=[], skipped_layers=0),
clip_g=CLIPField(tokenizer=tokenizer_g, text_encoder=clip_encoder_g, loras=[], skipped_layers=0),
t5_encoder=T5EncoderField(tokenizer=tokenizer_t5, text_encoder=t5_encoder),

View File

@@ -0,0 +1,201 @@
from contextlib import ExitStack
from typing import Iterator, Tuple
import torch
from transformers import (
CLIPTextModel,
CLIPTextModelWithProjection,
CLIPTokenizer,
T5EncoderModel,
T5Tokenizer,
T5TokenizerFast,
)
from invokeai.app.invocations.baseinvocation import BaseInvocation, Classification, invocation
from invokeai.app.invocations.fields import FieldDescriptions, Input, InputField
from invokeai.app.invocations.model import CLIPField, T5EncoderField
from invokeai.app.invocations.primitives import SD3ConditioningOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.lora.conversions.flux_lora_constants import FLUX_LORA_CLIP_PREFIX
from invokeai.backend.lora.lora_model_raw import LoRAModelRaw
from invokeai.backend.lora.lora_patcher import LoRAPatcher
from invokeai.backend.model_manager.config import ModelFormat
from invokeai.backend.stable_diffusion.diffusion.conditioning_data import ConditioningFieldData, SD3ConditioningInfo
# The SD3 T5 Max Sequence Length set based on the default in diffusers.
SD3_T5_MAX_SEQ_LEN = 256
@invocation(
"sd3_text_encoder",
title="SD3 Text Encoding",
tags=["prompt", "conditioning", "sd3"],
category="conditioning",
version="1.0.0",
classification=Classification.Prototype,
)
class Sd3TextEncoderInvocation(BaseInvocation):
"""Encodes and preps a prompt for a SD3 image."""
clip_l: CLIPField = InputField(
title="CLIP L",
description=FieldDescriptions.clip,
input=Input.Connection,
)
clip_g: CLIPField = InputField(
title="CLIP G",
description=FieldDescriptions.clip,
input=Input.Connection,
)
# The SD3 models were trained with text encoder dropout, so the T5 encoder can be omitted to save time/memory.
t5_encoder: T5EncoderField | None = InputField(
title="T5Encoder",
default=None,
description=FieldDescriptions.t5_encoder,
input=Input.Connection,
)
prompt: str = InputField(description="Text prompt to encode.")
@torch.no_grad()
def invoke(self, context: InvocationContext) -> SD3ConditioningOutput:
# Note: The text encoding model are run in separate functions to ensure that all model references are locally
# scoped. This ensures that earlier models can be freed and gc'd before loading later models (if necessary).
clip_l_embeddings, clip_l_pooled_embeddings = self._clip_encode(context, self.clip_l)
clip_g_embeddings, clip_g_pooled_embeddings = self._clip_encode(context, self.clip_g)
t5_embeddings: torch.Tensor | None = None
if self.t5_encoder is not None:
t5_embeddings = self._t5_encode(context, SD3_T5_MAX_SEQ_LEN)
conditioning_data = ConditioningFieldData(
conditionings=[
SD3ConditioningInfo(
clip_l_embeds=clip_l_embeddings,
clip_l_pooled_embeds=clip_l_pooled_embeddings,
clip_g_embeds=clip_g_embeddings,
clip_g_pooled_embeds=clip_g_pooled_embeddings,
t5_embeds=t5_embeddings,
)
]
)
conditioning_name = context.conditioning.save(conditioning_data)
return SD3ConditioningOutput.build(conditioning_name)
def _t5_encode(self, context: InvocationContext, max_seq_len: int) -> torch.Tensor:
assert self.t5_encoder is not None
t5_tokenizer_info = context.models.load(self.t5_encoder.tokenizer)
t5_text_encoder_info = context.models.load(self.t5_encoder.text_encoder)
prompt = [self.prompt]
with (
t5_text_encoder_info as t5_text_encoder,
t5_tokenizer_info as t5_tokenizer,
):
context.util.signal_progress("Running T5 encoder")
assert isinstance(t5_text_encoder, T5EncoderModel)
assert isinstance(t5_tokenizer, (T5Tokenizer, T5TokenizerFast))
text_inputs = t5_tokenizer(
prompt,
padding="max_length",
max_length=max_seq_len,
truncation=True,
add_special_tokens=True,
return_tensors="pt",
)
text_input_ids = text_inputs.input_ids
untruncated_ids = t5_tokenizer(prompt, padding="longest", return_tensors="pt").input_ids
assert isinstance(text_input_ids, torch.Tensor)
assert isinstance(untruncated_ids, torch.Tensor)
if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal(
text_input_ids, untruncated_ids
):
removed_text = t5_tokenizer.batch_decode(untruncated_ids[:, max_seq_len - 1 : -1])
context.logger.warning(
"The following part of your input was truncated because `max_sequence_length` is set to "
f" {max_seq_len} tokens: {removed_text}"
)
prompt_embeds = t5_text_encoder(text_input_ids.to(t5_text_encoder.device))[0]
assert isinstance(prompt_embeds, torch.Tensor)
return prompt_embeds
def _clip_encode(
self, context: InvocationContext, clip_model: CLIPField, tokenizer_max_length: int = 77
) -> Tuple[torch.Tensor, torch.Tensor]:
clip_tokenizer_info = context.models.load(clip_model.tokenizer)
clip_text_encoder_info = context.models.load(clip_model.text_encoder)
prompt = [self.prompt]
with (
clip_text_encoder_info.model_on_device() as (cached_weights, clip_text_encoder),
clip_tokenizer_info as clip_tokenizer,
ExitStack() as exit_stack,
):
context.util.signal_progress("Running CLIP encoder")
assert isinstance(clip_text_encoder, (CLIPTextModel, CLIPTextModelWithProjection))
assert isinstance(clip_tokenizer, CLIPTokenizer)
clip_text_encoder_config = clip_text_encoder_info.config
assert clip_text_encoder_config is not None
# Apply LoRA models to the CLIP encoder.
# Note: We apply the LoRA after the transformer has been moved to its target device for faster patching.
if clip_text_encoder_config.format in [ModelFormat.Diffusers]:
# The model is non-quantized, so we can apply the LoRA weights directly into the model.
exit_stack.enter_context(
LoRAPatcher.apply_lora_patches(
model=clip_text_encoder,
patches=self._clip_lora_iterator(context, clip_model),
prefix=FLUX_LORA_CLIP_PREFIX,
cached_weights=cached_weights,
)
)
else:
# There are currently no supported CLIP quantized models. Add support here if needed.
raise ValueError(f"Unsupported model format: {clip_text_encoder_config.format}")
clip_text_encoder = clip_text_encoder.eval().requires_grad_(False)
text_inputs = clip_tokenizer(
prompt,
padding="max_length",
max_length=tokenizer_max_length,
truncation=True,
return_tensors="pt",
)
text_input_ids = text_inputs.input_ids
untruncated_ids = clip_tokenizer(prompt, padding="longest", return_tensors="pt").input_ids
assert isinstance(text_input_ids, torch.Tensor)
assert isinstance(untruncated_ids, torch.Tensor)
if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal(
text_input_ids, untruncated_ids
):
removed_text = clip_tokenizer.batch_decode(untruncated_ids[:, tokenizer_max_length - 1 : -1])
context.logger.warning(
"The following part of your input was truncated because CLIP can only handle sequences up to"
f" {tokenizer_max_length} tokens: {removed_text}"
)
prompt_embeds = clip_text_encoder(
input_ids=text_input_ids.to(clip_text_encoder.device), output_hidden_states=True
)
pooled_prompt_embeds = prompt_embeds[0]
prompt_embeds = prompt_embeds.hidden_states[-2]
return prompt_embeds, pooled_prompt_embeds
def _clip_lora_iterator(
self, context: InvocationContext, clip_model: CLIPField
) -> Iterator[Tuple[LoRAModelRaw, float]]:
for lora in clip_model.loras:
lora_info = context.models.load(lora.lora)
assert isinstance(lora_info.model, LoRAModelRaw)
yield (lora_info.model, lora.weight)
del lora_info
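Both `_t5_encode` and `_clip_encode` above use the same truncation check: tokenize once with max-length padding, once with longest padding, and warn with the decoded tail if they differ. A minimal standalone sketch of that pattern (the tokenizer id below is a placeholder, not the one resolved by the model manager):

import torch
from transformers import CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")  # placeholder model id
prompt = ["an extremely long prompt " * 40]
max_len = 77

padded = tokenizer(prompt, padding="max_length", max_length=max_len, truncation=True, return_tensors="pt")
untruncated = tokenizer(prompt, padding="longest", return_tensors="pt")

if untruncated.input_ids.shape[-1] >= padded.input_ids.shape[-1] and not torch.equal(
    padded.input_ids, untruncated.input_ids
):
    # Everything past position max_len - 1 (excluding the trailing special token) was dropped.
    removed_text = tokenizer.batch_decode(untruncated.input_ids[:, max_len - 1 : -1])
    print(f"Truncated: {removed_text}")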

View File

@@ -5,7 +5,7 @@ from typing import Literal
import numpy as np
import torch
from PIL import Image
from pydantic import BaseModel, Field, model_validator
from pydantic import BaseModel, Field
from transformers import AutoModelForMaskGeneration, AutoProcessor
from transformers.models.sam import SamModel
from transformers.models.sam.processing_sam import SamProcessor
@@ -77,19 +77,14 @@ class SegmentAnythingInvocation(BaseInvocation):
default="all",
)
@model_validator(mode="after")
def check_point_lists_or_bounding_box(self):
if self.point_lists is None and self.bounding_boxes is None:
raise ValueError("Either point_lists or bounding_box must be provided.")
elif self.point_lists is not None and self.bounding_boxes is not None:
raise ValueError("Only one of point_lists or bounding_box can be provided.")
return self
@torch.no_grad()
def invoke(self, context: InvocationContext) -> MaskOutput:
# The models expect a 3-channel RGB image.
image_pil = context.images.get_pil(self.image.image_name, mode="RGB")
if self.point_lists is not None and self.bounding_boxes is not None:
raise ValueError("Only one of point_lists or bounding_box can be provided.")
if (not self.bounding_boxes or len(self.bounding_boxes) == 0) and (
not self.point_lists or len(self.point_lists) == 0
):

View File

@@ -15,6 +15,7 @@ from invokeai.app.util.model_exclude_null import BaseModelExcludeNull
from invokeai.backend.model_manager.config import (
AnyModelConfig,
BaseModelType,
ClipVariantType,
ControlAdapterDefaultSettings,
MainModelDefaultSettings,
ModelFormat,
@@ -85,7 +86,7 @@ class ModelRecordChanges(BaseModelExcludeNull):
# Checkpoint-specific changes
# TODO(MM2): Should we expose these? Feels footgun-y...
variant: Optional[ModelVariantType] = Field(description="The variant of the model.", default=None)
variant: Optional[ModelVariantType | ClipVariantType] = Field(description="The variant of the model.", default=None)
prediction_type: Optional[SchedulerPredictionType] = Field(
description="The prediction type of the model.", default=None
)

View File

@@ -1,3 +1,4 @@
from copy import deepcopy
from dataclasses import dataclass
from pathlib import Path
from typing import TYPE_CHECKING, Callable, Optional, Union
@@ -159,6 +160,10 @@ class LoggerInterface(InvocationContextInterface):
class ImagesInterface(InvocationContextInterface):
def __init__(self, services: InvocationServices, data: InvocationContextData, util: "UtilInterface") -> None:
super().__init__(services, data)
self._util = util
def save(
self,
image: Image,
@@ -185,6 +190,8 @@ class ImagesInterface(InvocationContextInterface):
The saved image DTO.
"""
self._util.signal_progress("Saving image")
# If `metadata` is provided directly, use that. Else, use the metadata provided by `WithMetadata`, falling back to None.
metadata_ = None
if metadata:
@@ -221,7 +228,7 @@ class ImagesInterface(InvocationContextInterface):
)
def get_pil(self, image_name: str, mode: IMAGE_MODES | None = None) -> Image:
"""Gets an image as a PIL Image object.
"""Gets an image as a PIL Image object. This method returns a copy of the image.
Args:
image_name: The name of the image to get.
@@ -233,11 +240,15 @@ class ImagesInterface(InvocationContextInterface):
image = self._services.images.get_pil_image(image_name)
if mode and mode != image.mode:
try:
# convert makes a copy!
image = image.convert(mode)
except ValueError:
self._services.logger.warning(
f"Could not convert image from {image.mode} to {mode}. Using original mode instead."
)
else:
# copy the image to prevent the user from modifying the original
image = image.copy()
return image
def get_metadata(self, image_name: str) -> Optional[MetadataField]:
@@ -290,15 +301,15 @@ class TensorsInterface(InvocationContextInterface):
return name
def load(self, name: str) -> Tensor:
"""Loads a tensor by name.
"""Loads a tensor by name. This method returns a copy of the tensor.
Args:
name: The name of the tensor to load.
Returns:
The loaded tensor.
The tensor.
"""
return self._services.tensors.load(name)
return self._services.tensors.load(name).clone()
class ConditioningInterface(InvocationContextInterface):
@@ -316,21 +327,25 @@ class ConditioningInterface(InvocationContextInterface):
return name
def load(self, name: str) -> ConditioningFieldData:
"""Loads conditioning data by name.
"""Loads conditioning data by name. This method returns a copy of the conditioning data.
Args:
name: The name of the conditioning data to load.
Returns:
The loaded conditioning data.
The conditioning data.
"""
return self._services.conditioning.load(name)
return deepcopy(self._services.conditioning.load(name))
class ModelsInterface(InvocationContextInterface):
"""Common API for loading, downloading and managing models."""
def __init__(self, services: InvocationServices, data: InvocationContextData, util: "UtilInterface") -> None:
super().__init__(services, data)
self._util = util
def exists(self, identifier: Union[str, "ModelIdentifierField"]) -> bool:
"""Check if a model exists.
@@ -363,11 +378,15 @@ class ModelsInterface(InvocationContextInterface):
if isinstance(identifier, str):
model = self._services.model_manager.store.get_model(identifier)
return self._services.model_manager.load.load_model(model, submodel_type)
else:
_submodel_type = submodel_type or identifier.submodel_type
submodel_type = submodel_type or identifier.submodel_type
model = self._services.model_manager.store.get_model(identifier.key)
return self._services.model_manager.load.load_model(model, _submodel_type)
message = f"Loading model {model.name}"
if submodel_type:
message += f" ({submodel_type.value})"
self._util.signal_progress(message)
return self._services.model_manager.load.load_model(model, submodel_type)
def load_by_attrs(
self, name: str, base: BaseModelType, type: ModelType, submodel_type: Optional[SubModelType] = None
@@ -392,6 +411,10 @@ class ModelsInterface(InvocationContextInterface):
if len(configs) > 1:
raise ValueError(f"More than one model found with name {name}, base {base}, and type {type}")
message = f"Loading model {name}"
if submodel_type:
message += f" ({submodel_type.value})"
self._util.signal_progress(message)
return self._services.model_manager.load.load_model(configs[0], submodel_type)
def get_config(self, identifier: Union[str, "ModelIdentifierField"]) -> AnyModelConfig:
@@ -462,6 +485,7 @@ class ModelsInterface(InvocationContextInterface):
Returns:
Path to the downloaded model
"""
self._util.signal_progress(f"Downloading model {source}")
return self._services.model_manager.install.download_and_cache_model(source=source)
def load_local_model(
@@ -484,6 +508,8 @@ class ModelsInterface(InvocationContextInterface):
Returns:
A LoadedModelWithoutConfig object.
"""
self._util.signal_progress(f"Loading model {model_path.name}")
return self._services.model_manager.load.load_model_from_path(model_path=model_path, loader=loader)
def load_remote_model(
@@ -509,6 +535,8 @@ class ModelsInterface(InvocationContextInterface):
A LoadedModelWithoutConfig object.
"""
model_path = self._services.model_manager.install.download_and_cache_model(source=str(source))
self._util.signal_progress(f"Loading model {source}")
return self._services.model_manager.load.load_model_from_path(model_path=model_path, loader=loader)
@@ -702,12 +730,12 @@ def build_invocation_context(
"""
logger = LoggerInterface(services=services, data=data)
images = ImagesInterface(services=services, data=data)
tensors = TensorsInterface(services=services, data=data)
models = ModelsInterface(services=services, data=data)
config = ConfigInterface(services=services, data=data)
util = UtilInterface(services=services, data=data, is_canceled=is_canceled)
conditioning = ConditioningInterface(services=services, data=data)
models = ModelsInterface(services=services, data=data, util=util)
images = ImagesInterface(services=services, data=data, util=util)
boards = BoardsInterface(services=services, data=data)
ctx = InvocationContext(
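The reordering above exists because `ModelsInterface` and `ImagesInterface` now report progress through `UtilInterface.signal_progress`, so `util` must be constructed first and passed in. A stripped-down sketch of that dependency order (class bodies are illustrative stand-ins, not the real service wiring):

class UtilInterface:
    def signal_progress(self, message: str, percentage: float | None = None) -> None:
        # Stand-in for emitting an invocation progress event.
        print(message if percentage is None else f"{message} ({percentage:.0%})")

class ModelsInterface:
    def __init__(self, util: UtilInterface) -> None:
        self._util = util

    def load(self, name: str) -> None:
        self._util.signal_progress(f"Loading model {name}")
        # ...delegate to the model manager here...

util = UtilInterface()
models = ModelsInterface(util=util)
models.load("stable-diffusion-3.5-medium")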

View File

@@ -0,0 +1,382 @@
{
"name": "SD3.5 Text to Image",
"author": "InvokeAI",
"description": "Sample text to image workflow for Stable Diffusion 3.5",
"version": "1.0.0",
"contact": "invoke@invoke.ai",
"tags": "text2image, SD3.5, default",
"notes": "",
"exposedFields": [
{
"nodeId": "3f22f668-0e02-4fde-a2bb-c339586ceb4c",
"fieldName": "model"
},
{
"nodeId": "e17d34e7-6ed1-493c-9a85-4fcd291cb084",
"fieldName": "prompt"
}
],
"meta": {
"version": "3.0.0",
"category": "default"
},
"id": "e3a51d6b-8208-4d6d-b187-fcfe8b32934c",
"nodes": [
{
"id": "3f22f668-0e02-4fde-a2bb-c339586ceb4c",
"type": "invocation",
"data": {
"id": "3f22f668-0e02-4fde-a2bb-c339586ceb4c",
"type": "sd3_model_loader",
"version": "1.0.0",
"label": "",
"notes": "",
"isOpen": true,
"isIntermediate": true,
"useCache": true,
"nodePack": "invokeai",
"inputs": {
"model": {
"name": "model",
"label": "",
"value": {
"key": "f7b20be9-92a8-4cfb-bca4-6c3b5535c10b",
"hash": "placeholder",
"name": "stable-diffusion-3.5-medium",
"base": "sd-3",
"type": "main"
}
},
"t5_encoder_model": {
"name": "t5_encoder_model",
"label": ""
},
"clip_l_model": {
"name": "clip_l_model",
"label": ""
},
"clip_g_model": {
"name": "clip_g_model",
"label": ""
},
"vae_model": {
"name": "vae_model",
"label": ""
}
}
},
"position": {
"x": -55.58689609637031,
"y": -111.53602444662268
}
},
{
"id": "f7e394ac-6394-4096-abcb-de0d346506b3",
"type": "invocation",
"data": {
"id": "f7e394ac-6394-4096-abcb-de0d346506b3",
"type": "rand_int",
"version": "1.0.1",
"label": "",
"notes": "",
"isOpen": true,
"isIntermediate": true,
"useCache": false,
"nodePack": "invokeai",
"inputs": {
"low": {
"name": "low",
"label": "",
"value": 0
},
"high": {
"name": "high",
"label": "",
"value": 2147483647
}
}
},
"position": {
"x": 470.45870147220353,
"y": 350.3141781644303
}
},
{
"id": "9eb72af0-dd9e-4ec5-ad87-d65e3c01f48b",
"type": "invocation",
"data": {
"id": "9eb72af0-dd9e-4ec5-ad87-d65e3c01f48b",
"type": "sd3_l2i",
"version": "1.3.0",
"label": "",
"notes": "",
"isOpen": true,
"isIntermediate": false,
"useCache": true,
"nodePack": "invokeai",
"inputs": {
"board": {
"name": "board",
"label": ""
},
"metadata": {
"name": "metadata",
"label": ""
},
"latents": {
"name": "latents",
"label": ""
},
"vae": {
"name": "vae",
"label": ""
}
}
},
"position": {
"x": 1192.3097009334897,
"y": -366.0994675072209
}
},
{
"id": "3b4f7f27-cfc0-4373-a009-99c5290d0cd6",
"type": "invocation",
"data": {
"id": "3b4f7f27-cfc0-4373-a009-99c5290d0cd6",
"type": "sd3_text_encoder",
"version": "1.0.0",
"label": "",
"notes": "",
"isOpen": true,
"isIntermediate": true,
"useCache": true,
"nodePack": "invokeai",
"inputs": {
"clip_l": {
"name": "clip_l",
"label": ""
},
"clip_g": {
"name": "clip_g",
"label": ""
},
"t5_encoder": {
"name": "t5_encoder",
"label": ""
},
"prompt": {
"name": "prompt",
"label": "",
"value": ""
}
}
},
"position": {
"x": 408.16054647924784,
"y": 65.06415352118786
}
},
{
"id": "e17d34e7-6ed1-493c-9a85-4fcd291cb084",
"type": "invocation",
"data": {
"id": "e17d34e7-6ed1-493c-9a85-4fcd291cb084",
"type": "sd3_text_encoder",
"version": "1.0.0",
"label": "",
"notes": "",
"isOpen": true,
"isIntermediate": true,
"useCache": true,
"nodePack": "invokeai",
"inputs": {
"clip_l": {
"name": "clip_l",
"label": ""
},
"clip_g": {
"name": "clip_g",
"label": ""
},
"t5_encoder": {
"name": "t5_encoder",
"label": ""
},
"prompt": {
"name": "prompt",
"label": "",
"value": ""
}
}
},
"position": {
"x": 378.9283412440941,
"y": -302.65777497352553
}
},
{
"id": "c7539f7b-7ac5-49b9-93eb-87ede611409f",
"type": "invocation",
"data": {
"id": "c7539f7b-7ac5-49b9-93eb-87ede611409f",
"type": "sd3_denoise",
"version": "1.0.0",
"label": "",
"notes": "",
"isOpen": true,
"isIntermediate": true,
"useCache": true,
"nodePack": "invokeai",
"inputs": {
"board": {
"name": "board",
"label": ""
},
"metadata": {
"name": "metadata",
"label": ""
},
"transformer": {
"name": "transformer",
"label": ""
},
"positive_conditioning": {
"name": "positive_conditioning",
"label": ""
},
"negative_conditioning": {
"name": "negative_conditioning",
"label": ""
},
"cfg_scale": {
"name": "cfg_scale",
"label": "",
"value": 3.5
},
"width": {
"name": "width",
"label": "",
"value": 1024
},
"height": {
"name": "height",
"label": "",
"value": 1024
},
"steps": {
"name": "steps",
"label": "",
"value": 30
},
"seed": {
"name": "seed",
"label": "",
"value": 0
}
}
},
"position": {
"x": 813.7814762740603,
"y": -142.20529727605867
}
}
],
"edges": [
{
"id": "reactflow__edge-3f22f668-0e02-4fde-a2bb-c339586ceb4cvae-9eb72af0-dd9e-4ec5-ad87-d65e3c01f48bvae",
"type": "default",
"source": "3f22f668-0e02-4fde-a2bb-c339586ceb4c",
"target": "9eb72af0-dd9e-4ec5-ad87-d65e3c01f48b",
"sourceHandle": "vae",
"targetHandle": "vae"
},
{
"id": "reactflow__edge-3f22f668-0e02-4fde-a2bb-c339586ceb4ct5_encoder-3b4f7f27-cfc0-4373-a009-99c5290d0cd6t5_encoder",
"type": "default",
"source": "3f22f668-0e02-4fde-a2bb-c339586ceb4c",
"target": "3b4f7f27-cfc0-4373-a009-99c5290d0cd6",
"sourceHandle": "t5_encoder",
"targetHandle": "t5_encoder"
},
{
"id": "reactflow__edge-3f22f668-0e02-4fde-a2bb-c339586ceb4ct5_encoder-e17d34e7-6ed1-493c-9a85-4fcd291cb084t5_encoder",
"type": "default",
"source": "3f22f668-0e02-4fde-a2bb-c339586ceb4c",
"target": "e17d34e7-6ed1-493c-9a85-4fcd291cb084",
"sourceHandle": "t5_encoder",
"targetHandle": "t5_encoder"
},
{
"id": "reactflow__edge-3f22f668-0e02-4fde-a2bb-c339586ceb4cclip_g-3b4f7f27-cfc0-4373-a009-99c5290d0cd6clip_g",
"type": "default",
"source": "3f22f668-0e02-4fde-a2bb-c339586ceb4c",
"target": "3b4f7f27-cfc0-4373-a009-99c5290d0cd6",
"sourceHandle": "clip_g",
"targetHandle": "clip_g"
},
{
"id": "reactflow__edge-3f22f668-0e02-4fde-a2bb-c339586ceb4cclip_g-e17d34e7-6ed1-493c-9a85-4fcd291cb084clip_g",
"type": "default",
"source": "3f22f668-0e02-4fde-a2bb-c339586ceb4c",
"target": "e17d34e7-6ed1-493c-9a85-4fcd291cb084",
"sourceHandle": "clip_g",
"targetHandle": "clip_g"
},
{
"id": "reactflow__edge-3f22f668-0e02-4fde-a2bb-c339586ceb4cclip_l-3b4f7f27-cfc0-4373-a009-99c5290d0cd6clip_l",
"type": "default",
"source": "3f22f668-0e02-4fde-a2bb-c339586ceb4c",
"target": "3b4f7f27-cfc0-4373-a009-99c5290d0cd6",
"sourceHandle": "clip_l",
"targetHandle": "clip_l"
},
{
"id": "reactflow__edge-3f22f668-0e02-4fde-a2bb-c339586ceb4cclip_l-e17d34e7-6ed1-493c-9a85-4fcd291cb084clip_l",
"type": "default",
"source": "3f22f668-0e02-4fde-a2bb-c339586ceb4c",
"target": "e17d34e7-6ed1-493c-9a85-4fcd291cb084",
"sourceHandle": "clip_l",
"targetHandle": "clip_l"
},
{
"id": "reactflow__edge-3f22f668-0e02-4fde-a2bb-c339586ceb4ctransformer-c7539f7b-7ac5-49b9-93eb-87ede611409ftransformer",
"type": "default",
"source": "3f22f668-0e02-4fde-a2bb-c339586ceb4c",
"target": "c7539f7b-7ac5-49b9-93eb-87ede611409f",
"sourceHandle": "transformer",
"targetHandle": "transformer"
},
{
"id": "reactflow__edge-f7e394ac-6394-4096-abcb-de0d346506b3value-c7539f7b-7ac5-49b9-93eb-87ede611409fseed",
"type": "default",
"source": "f7e394ac-6394-4096-abcb-de0d346506b3",
"target": "c7539f7b-7ac5-49b9-93eb-87ede611409f",
"sourceHandle": "value",
"targetHandle": "seed"
},
{
"id": "reactflow__edge-c7539f7b-7ac5-49b9-93eb-87ede611409flatents-9eb72af0-dd9e-4ec5-ad87-d65e3c01f48blatents",
"type": "default",
"source": "c7539f7b-7ac5-49b9-93eb-87ede611409f",
"target": "9eb72af0-dd9e-4ec5-ad87-d65e3c01f48b",
"sourceHandle": "latents",
"targetHandle": "latents"
},
{
"id": "reactflow__edge-e17d34e7-6ed1-493c-9a85-4fcd291cb084conditioning-c7539f7b-7ac5-49b9-93eb-87ede611409fpositive_conditioning",
"type": "default",
"source": "e17d34e7-6ed1-493c-9a85-4fcd291cb084",
"target": "c7539f7b-7ac5-49b9-93eb-87ede611409f",
"sourceHandle": "conditioning",
"targetHandle": "positive_conditioning"
},
{
"id": "reactflow__edge-3b4f7f27-cfc0-4373-a009-99c5290d0cd6conditioning-c7539f7b-7ac5-49b9-93eb-87ede611409fnegative_conditioning",
"type": "default",
"source": "3b4f7f27-cfc0-4373-a009-99c5290d0cd6",
"target": "c7539f7b-7ac5-49b9-93eb-87ede611409f",
"sourceHandle": "conditioning",
"targetHandle": "negative_conditioning"
}
]
}
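In the workflow above, `exposedFields` is what the linear UI surfaces: each entry points at a node id plus an input field name on that node. A small hedged sketch of resolving those references against the graph (assumes the JSON has been saved to the placeholder filename below):

import json

with open("sd3_5_text_to_image.json") as f:  # placeholder filename
    workflow = json.load(f)

nodes_by_id = {node["id"]: node for node in workflow["nodes"]}
for exposed in workflow["exposedFields"]:
    node = nodes_by_id[exposed["nodeId"]]
    field = node["data"]["inputs"][exposed["fieldName"]]
    print(node["data"]["type"], exposed["fieldName"], field.get("value"))
# e.g. sd3_model_loader model {...} / sd3_text_encoder prompt ''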

View File

@@ -34,6 +34,25 @@ SD1_5_LATENT_RGB_FACTORS = [
[-0.1307, -0.1874, -0.7445], # L4
]
SD3_5_LATENT_RGB_FACTORS = [
[-0.05240681, 0.03251581, 0.0749016],
[-0.0580572, 0.00759826, 0.05729818],
[0.16144888, 0.01270368, -0.03768577],
[0.14418615, 0.08460266, 0.15941818],
[0.04894035, 0.0056485, -0.06686988],
[0.05187166, 0.19222395, 0.06261094],
[0.1539433, 0.04818359, 0.07103094],
[-0.08601796, 0.09013458, 0.10893912],
[-0.12398469, -0.06766567, 0.0033688],
[-0.0439737, 0.07825329, 0.02258823],
[0.03101129, 0.06382551, 0.07753657],
[-0.01315361, 0.08554491, -0.08772475],
[0.06464487, 0.05914605, 0.13262741],
[-0.07863674, -0.02261737, -0.12761454],
[-0.09923835, -0.08010759, -0.06264447],
[-0.03392309, -0.0804029, -0.06078822],
]
FLUX_LATENT_RGB_FACTORS = [
[-0.0412, 0.0149, 0.0521],
[0.0056, 0.0291, 0.0768],
@@ -110,6 +129,9 @@ def stable_diffusion_step_callback(
sdxl_latent_rgb_factors = torch.tensor(SDXL_LATENT_RGB_FACTORS, dtype=sample.dtype, device=sample.device)
sdxl_smooth_matrix = torch.tensor(SDXL_SMOOTH_MATRIX, dtype=sample.dtype, device=sample.device)
image = sample_to_lowres_estimated_image(sample, sdxl_latent_rgb_factors, sdxl_smooth_matrix)
elif base_model == BaseModelType.StableDiffusion3:
sd3_latent_rgb_factors = torch.tensor(SD3_5_LATENT_RGB_FACTORS, dtype=sample.dtype, device=sample.device)
image = sample_to_lowres_estimated_image(sample, sd3_latent_rgb_factors)
else:
v1_5_latent_rgb_factors = torch.tensor(SD1_5_LATENT_RGB_FACTORS, dtype=sample.dtype, device=sample.device)
image = sample_to_lowres_estimated_image(sample, v1_5_latent_rgb_factors)
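The 16x3 table above plays the same role for SD3 as the SD1.5/SDXL/FLUX factor tables: each latent channel contributes linearly to R, G and B, giving a cheap low-res preview during denoising. A hedged sketch of that projection (the real helper is `sample_to_lowres_estimated_image`; the normalization here is only illustrative):

import torch

def latent_to_rgb_preview(sample: torch.Tensor, factors: torch.Tensor) -> torch.Tensor:
    # sample: (C, H, W) latent; factors: (C, 3) per-channel RGB contribution.
    rgb = torch.einsum("chw,cr->rhw", sample, factors)
    rgb = (rgb - rgb.min()) / (rgb.max() - rgb.min() + 1e-8)  # scale to [0, 1] for display
    return rgb

factors = torch.randn(16, 3)      # stand-in for SD3_5_LATENT_RGB_FACTORS
latent = torch.randn(16, 64, 64)  # stand-in for an SD3 latent sample
preview = latent_to_rgb_preview(latent, factors)  # (3, 64, 64)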

View File

@@ -45,8 +45,9 @@ def lora_model_from_flux_diffusers_state_dict(state_dict: Dict[str, torch.Tensor
# Constants for FLUX.1
num_double_layers = 19
num_single_layers = 38
# inner_dim = 3072
# mlp_ratio = 4.0
hidden_size = 3072
mlp_ratio = 4.0
mlp_hidden_dim = int(hidden_size * mlp_ratio)
layers: dict[str, AnyLoRALayer] = {}
@@ -62,30 +63,43 @@ def lora_model_from_flux_diffusers_state_dict(state_dict: Dict[str, torch.Tensor
layers[dst_key] = LoRALayer.from_state_dict_values(values=value)
assert len(src_layer_dict) == 0
def add_qkv_lora_layer_if_present(src_keys: list[str], dst_qkv_key: str) -> None:
def add_qkv_lora_layer_if_present(
src_keys: list[str],
src_weight_shapes: list[tuple[int, int]],
dst_qkv_key: str,
allow_missing_keys: bool = False,
) -> None:
"""Handle the Q, K, V matrices for a transformer block. We need special handling because the diffusers format
stores them in separate matrices, whereas the BFL format used internally by InvokeAI concatenates them.
"""
# We expect that either all src keys are present or none of them are. Verify this.
keys_present = [key in grouped_state_dict for key in src_keys]
assert all(keys_present) or not any(keys_present)
# If none of the keys are present, return early.
keys_present = [key in grouped_state_dict for key in src_keys]
if not any(keys_present):
return
src_layer_dicts = [grouped_state_dict.pop(key) for key in src_keys]
sub_layers: list[LoRALayer] = []
for src_layer_dict in src_layer_dicts:
values = {
"lora_down.weight": src_layer_dict.pop("lora_A.weight"),
"lora_up.weight": src_layer_dict.pop("lora_B.weight"),
}
if alpha is not None:
values["alpha"] = torch.tensor(alpha)
sub_layers.append(LoRALayer.from_state_dict_values(values=values))
assert len(src_layer_dict) == 0
layers[dst_qkv_key] = ConcatenatedLoRALayer(lora_layers=sub_layers, concat_axis=0)
for src_key, src_weight_shape in zip(src_keys, src_weight_shapes, strict=True):
src_layer_dict = grouped_state_dict.pop(src_key, None)
if src_layer_dict is not None:
values = {
"lora_down.weight": src_layer_dict.pop("lora_A.weight"),
"lora_up.weight": src_layer_dict.pop("lora_B.weight"),
}
if alpha is not None:
values["alpha"] = torch.tensor(alpha)
assert values["lora_down.weight"].shape[1] == src_weight_shape[1]
assert values["lora_up.weight"].shape[0] == src_weight_shape[0]
sub_layers.append(LoRALayer.from_state_dict_values(values=values))
assert len(src_layer_dict) == 0
else:
if not allow_missing_keys:
raise ValueError(f"Missing LoRA layer: '{src_key}'.")
values = {
"lora_up.weight": torch.zeros((src_weight_shape[0], 1)),
"lora_down.weight": torch.zeros((1, src_weight_shape[1])),
}
sub_layers.append(LoRALayer.from_state_dict_values(values=values))
layers[dst_qkv_key] = ConcatenatedLoRALayer(lora_layers=sub_layers)
# time_text_embed.timestep_embedder -> time_in.
add_lora_layer_if_present("time_text_embed.timestep_embedder.linear_1", "time_in.in_layer")
@@ -118,6 +132,7 @@ def lora_model_from_flux_diffusers_state_dict(state_dict: Dict[str, torch.Tensor
f"transformer_blocks.{i}.attn.to_k",
f"transformer_blocks.{i}.attn.to_v",
],
[(hidden_size, hidden_size), (hidden_size, hidden_size), (hidden_size, hidden_size)],
f"double_blocks.{i}.img_attn.qkv",
)
add_qkv_lora_layer_if_present(
@@ -126,6 +141,7 @@ def lora_model_from_flux_diffusers_state_dict(state_dict: Dict[str, torch.Tensor
f"transformer_blocks.{i}.attn.add_k_proj",
f"transformer_blocks.{i}.attn.add_v_proj",
],
[(hidden_size, hidden_size), (hidden_size, hidden_size), (hidden_size, hidden_size)],
f"double_blocks.{i}.txt_attn.qkv",
)
@@ -175,7 +191,14 @@ def lora_model_from_flux_diffusers_state_dict(state_dict: Dict[str, torch.Tensor
f"single_transformer_blocks.{i}.attn.to_v",
f"single_transformer_blocks.{i}.proj_mlp",
],
[
(hidden_size, hidden_size),
(hidden_size, hidden_size),
(hidden_size, hidden_size),
(mlp_hidden_dim, hidden_size),
],
f"single_blocks.{i}.linear1",
allow_missing_keys=True,
)
# Output projections.
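The reworked add_qkv_lora_layer_if_present above fuses the separate diffusers Q/K/V (and proj_mlp) LoRAs into one layer, and substitutes a rank-1 zero LoRA when a key is missing and allow_missing_keys is set. A hedged sketch of the underlying math, expressed in terms of the weight deltas the concatenated layer produces:

import torch

def fused_qkv_delta(
    loras: list[tuple[torch.Tensor, torch.Tensor] | None],
    shapes: list[tuple[int, int]],
) -> torch.Tensor:
    """Stack per-projection LoRA weight deltas along the output axis of the fused qkv matrix."""
    deltas = []
    for lora, (out_dim, in_dim) in zip(loras, shapes, strict=True):
        if lora is None:
            deltas.append(torch.zeros(out_dim, in_dim))  # missing projection contributes nothing
        else:
            up, down = lora  # (out_dim, rank) and (rank, in_dim)
            deltas.append(up @ down)
    return torch.cat(deltas, dim=0)

hidden = 3072
q = (torch.randn(hidden, 8), torch.randn(8, hidden))
delta = fused_qkv_delta([q, None, None], [(hidden, hidden)] * 3)  # (3 * hidden, hidden)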

View File

@@ -53,8 +53,7 @@ class BaseModelType(str, Enum):
Any = "any"
StableDiffusion1 = "sd-1"
StableDiffusion2 = "sd-2"
# TODO(ryand): Should this just be StableDiffusion3?
StableDiffusion35 = "sd-3.5"
StableDiffusion3 = "sd-3"
StableDiffusionXL = "sdxl"
StableDiffusionXLRefiner = "sdxl-refiner"
Flux = "flux"
@@ -85,8 +84,10 @@ class SubModelType(str, Enum):
Transformer = "transformer"
TextEncoder = "text_encoder"
TextEncoder2 = "text_encoder_2"
TextEncoder3 = "text_encoder_3"
Tokenizer = "tokenizer"
Tokenizer2 = "tokenizer_2"
Tokenizer3 = "tokenizer_3"
VAE = "vae"
VAEDecoder = "vae_decoder"
VAEEncoder = "vae_encoder"
@@ -94,6 +95,13 @@ class SubModelType(str, Enum):
SafetyChecker = "safety_checker"
class ClipVariantType(str, Enum):
"""Variant type."""
L = "large"
G = "gigantic"
class ModelVariantType(str, Enum):
"""Variant type."""
@@ -149,6 +157,17 @@ class ModelSourceType(str, Enum):
DEFAULTS_PRECISION = Literal["fp16", "fp32"]
AnyVariant: TypeAlias = Union[ModelVariantType, ClipVariantType, None]
class SubmodelDefinition(BaseModel):
path_or_prefix: str
model_type: ModelType
variant: AnyVariant = None
model_config = ConfigDict(protected_namespaces=())
class MainModelDefaultSettings(BaseModel):
vae: str | None = Field(default=None, description="Default VAE for this model (model key)")
vae_precision: DEFAULTS_PRECISION | None = Field(default=None, description="Default VAE precision for this model")
@@ -195,6 +214,9 @@ class ModelConfigBase(BaseModel):
schema["required"].extend(["key", "type", "format"])
model_config = ConfigDict(validate_assignment=True, json_schema_extra=json_schema_extra)
submodels: Optional[Dict[SubModelType, SubmodelDefinition]] = Field(
description="Loadable submodels in this model", default=None
)
class CheckpointConfigBase(ModelConfigBase):
@@ -337,7 +359,7 @@ class MainConfigBase(ModelConfigBase):
default_settings: Optional[MainModelDefaultSettings] = Field(
description="Default settings for this model", default=None
)
variant: ModelVariantType = ModelVariantType.Normal
variant: AnyVariant = ModelVariantType.Normal
class MainCheckpointConfig(CheckpointConfigBase, MainConfigBase):
@@ -421,12 +443,33 @@ class CLIPEmbedDiffusersConfig(DiffusersConfigBase):
type: Literal[ModelType.CLIPEmbed] = ModelType.CLIPEmbed
format: Literal[ModelFormat.Diffusers] = ModelFormat.Diffusers
variant: ClipVariantType = ClipVariantType.L
@staticmethod
def get_tag() -> Tag:
return Tag(f"{ModelType.CLIPEmbed.value}.{ModelFormat.Diffusers.value}")
class CLIPGEmbedDiffusersConfig(CLIPEmbedDiffusersConfig):
"""Model config for CLIP-G Embeddings."""
variant: ClipVariantType = ClipVariantType.G
@staticmethod
def get_tag() -> Tag:
return Tag(f"{ModelType.CLIPEmbed.value}.{ModelFormat.Diffusers.value}.{ClipVariantType.G}")
class CLIPLEmbedDiffusersConfig(CLIPEmbedDiffusersConfig):
"""Model config for CLIP-L Embeddings."""
variant: ClipVariantType = ClipVariantType.L
@staticmethod
def get_tag() -> Tag:
return Tag(f"{ModelType.CLIPEmbed.value}.{ModelFormat.Diffusers.value}.{ClipVariantType.L}")
class CLIPVisionDiffusersConfig(DiffusersConfigBase):
"""Model config for CLIPVision."""
@@ -503,6 +546,8 @@ AnyModelConfig = Annotated[
Annotated[SpandrelImageToImageConfig, SpandrelImageToImageConfig.get_tag()],
Annotated[CLIPVisionDiffusersConfig, CLIPVisionDiffusersConfig.get_tag()],
Annotated[CLIPEmbedDiffusersConfig, CLIPEmbedDiffusersConfig.get_tag()],
Annotated[CLIPLEmbedDiffusersConfig, CLIPLEmbedDiffusersConfig.get_tag()],
Annotated[CLIPGEmbedDiffusersConfig, CLIPGEmbedDiffusersConfig.get_tag()],
],
Discriminator(get_model_discriminator_value),
]
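The new CLIPLEmbedDiffusersConfig/CLIPGEmbedDiffusersConfig entries rely on pydantic v2's tagged-union machinery: each config class advertises a Tag, and a discriminator function maps a raw record to the matching tag. A simplified, hedged sketch of that pattern (class and field names below are illustrative, not the real config hierarchy):

from typing import Annotated, Literal, Union
from pydantic import BaseModel, Discriminator, Tag, TypeAdapter

class ClipLConfig(BaseModel):
    type: Literal["clip_embed"] = "clip_embed"
    variant: Literal["large"] = "large"

class ClipGConfig(BaseModel):
    type: Literal["clip_embed"] = "clip_embed"
    variant: Literal["gigantic"] = "gigantic"

def discriminator(value) -> str:
    data = value if isinstance(value, dict) else value.__dict__
    return f"{data['type']}.{data['variant']}"

AnyClipConfig = Annotated[
    Union[
        Annotated[ClipLConfig, Tag("clip_embed.large")],
        Annotated[ClipGConfig, Tag("clip_embed.gigantic")],
    ],
    Discriminator(discriminator),
]

config = TypeAdapter(AnyClipConfig).validate_python({"type": "clip_embed", "variant": "gigantic"})
assert isinstance(config, ClipGConfig)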

View File

@@ -35,6 +35,7 @@ class ModelLoader(ModelLoaderBase):
self._logger = logger
self._ram_cache = ram_cache
self._torch_dtype = TorchDevice.choose_torch_dtype()
self._torch_device = TorchDevice.choose_torch_device()
def load_model(self, model_config: AnyModelConfig, submodel_type: Optional[SubModelType] = None) -> LoadedModel:
"""

View File

@@ -84,7 +84,15 @@ class FluxVAELoader(ModelLoader):
model = AutoEncoder(ae_params[config.config_path])
sd = load_file(model_path)
model.load_state_dict(sd, assign=True)
model.to(dtype=self._torch_dtype)
# VAE is broken in float16, which mps defaults to
if self._torch_dtype == torch.float16:
try:
vae_dtype = torch.tensor([1.0], dtype=torch.bfloat16, device=self._torch_device).dtype
except TypeError:
vae_dtype = torch.float32
else:
vae_dtype = self._torch_dtype
model.to(vae_dtype)
return model
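The branch above works around the FLUX VAE producing bad output in float16 (the default dtype on mps): it probes whether the device can allocate a bfloat16 tensor and falls back to float32 when it cannot. The same check in isolation:

import torch

def pick_vae_dtype(requested: torch.dtype, device: torch.device) -> torch.dtype:
    if requested != torch.float16:
        return requested
    try:
        # Older mps builds raise TypeError for bfloat16; CUDA/CPU generally accept it.
        return torch.tensor([1.0], dtype=torch.bfloat16, device=device).dtype
    except TypeError:
        return torch.float32

print(pick_vae_dtype(torch.float16, torch.device("cpu")))  # torch.bfloat16 on most builds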
@@ -128,9 +136,9 @@ class BnbQuantizedLlmInt8bCheckpointModel(ModelLoader):
"The bnb modules are not available. Please install bitsandbytes if available on your platform."
)
match submodel_type:
case SubModelType.Tokenizer2:
case SubModelType.Tokenizer2 | SubModelType.Tokenizer3:
return T5Tokenizer.from_pretrained(Path(config.path) / "tokenizer_2", max_length=512)
case SubModelType.TextEncoder2:
case SubModelType.TextEncoder2 | SubModelType.TextEncoder3:
te2_model_path = Path(config.path) / "text_encoder_2"
model_config = AutoConfig.from_pretrained(te2_model_path)
with accelerate.init_empty_weights():
@@ -172,9 +180,9 @@ class T5EncoderCheckpointModel(ModelLoader):
raise ValueError("Only T5EncoderConfig models are currently supported here.")
match submodel_type:
case SubModelType.Tokenizer2:
case SubModelType.Tokenizer2 | SubModelType.Tokenizer3:
return T5Tokenizer.from_pretrained(Path(config.path) / "tokenizer_2", max_length=512)
case SubModelType.TextEncoder2:
case SubModelType.TextEncoder2 | SubModelType.TextEncoder3:
return T5EncoderModel.from_pretrained(Path(config.path) / "text_encoder_2", torch_dtype="auto")
raise ValueError(

View File

@@ -1,55 +0,0 @@
from pathlib import Path
from typing import Optional
from invokeai.backend.model_manager.config import (
AnyModel,
AnyModelConfig,
BaseModelType,
CheckpointConfigBase,
MainCheckpointConfig,
ModelFormat,
ModelType,
SubModelType,
)
from invokeai.backend.model_manager.load.load_default import ModelLoader
from invokeai.backend.model_manager.load.model_loader_registry import ModelLoaderRegistry
@ModelLoaderRegistry.register(base=BaseModelType.StableDiffusion35, type=ModelType.Main, format=ModelFormat.Checkpoint)
class FluxCheckpointModel(ModelLoader):
"""Class to load main models."""
def _load_model(
self,
config: AnyModelConfig,
submodel_type: Optional[SubModelType] = None,
) -> AnyModel:
if not isinstance(config, CheckpointConfigBase):
raise ValueError("Only CheckpointConfigBase models are currently supported here.")
match submodel_type:
case SubModelType.Transformer:
return self._load_from_singlefile(config)
raise ValueError(
f"Only Transformer submodels are currently supported. Received: {submodel_type.value if submodel_type else 'None'}"
)
def _load_from_singlefile(
self,
config: AnyModelConfig,
) -> AnyModel:
assert isinstance(config, MainCheckpointConfig)
model_path = Path(config.path)
# model = Flux(params[config.config_path])
# sd = load_file(model_path)
# if "model.diffusion_model.double_blocks.0.img_attn.norm.key_norm.scale" in sd:
# sd = convert_bundle_to_flux_transformer_checkpoint(sd)
# new_sd_size = sum([ten.nelement() * torch.bfloat16.itemsize for ten in sd.values()])
# self._ram_cache.make_room(new_sd_size)
# for k in sd.keys():
# # We need to cast to bfloat16 due to it being the only currently supported dtype for inference
# sd[k] = sd[k].to(torch.bfloat16)
# model.load_state_dict(sd, assign=True)
return model

View File

@@ -42,6 +42,7 @@ VARIANT_TO_IN_CHANNEL_MAP = {
@ModelLoaderRegistry.register(
base=BaseModelType.StableDiffusionXLRefiner, type=ModelType.Main, format=ModelFormat.Diffusers
)
@ModelLoaderRegistry.register(base=BaseModelType.StableDiffusion3, type=ModelType.Main, format=ModelFormat.Diffusers)
@ModelLoaderRegistry.register(base=BaseModelType.StableDiffusion1, type=ModelType.Main, format=ModelFormat.Checkpoint)
@ModelLoaderRegistry.register(base=BaseModelType.StableDiffusion2, type=ModelType.Main, format=ModelFormat.Checkpoint)
@ModelLoaderRegistry.register(base=BaseModelType.StableDiffusionXL, type=ModelType.Main, format=ModelFormat.Checkpoint)
@@ -51,13 +52,6 @@ VARIANT_TO_IN_CHANNEL_MAP = {
class StableDiffusionDiffusersModel(GenericDiffusersLoader):
"""Class to load main models."""
model_base_to_model_type = {
BaseModelType.StableDiffusion1: "FrozenCLIPEmbedder",
BaseModelType.StableDiffusion2: "FrozenOpenCLIPEmbedder",
BaseModelType.StableDiffusionXL: "SDXL",
BaseModelType.StableDiffusionXLRefiner: "SDXL-Refiner",
}
def _load_model(
self,
config: AnyModelConfig,
@@ -117,8 +111,6 @@ class StableDiffusionDiffusersModel(GenericDiffusersLoader):
load_class = load_classes[config.base][config.variant]
except KeyError as e:
raise Exception(f"No diffusers pipeline known for base={config.base}, variant={config.variant}") from e
prediction_type = config.prediction_type.value
upcast_attention = config.upcast_attention
# Without SilenceWarnings we get log messages like this:
# site-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
@@ -129,13 +121,7 @@ class StableDiffusionDiffusersModel(GenericDiffusersLoader):
# ['text_model.embeddings.position_ids']
with SilenceWarnings():
pipeline = load_class.from_single_file(
config.path,
torch_dtype=self._torch_dtype,
prediction_type=prediction_type,
upcast_attention=upcast_attention,
load_safety_checker=False,
)
pipeline = load_class.from_single_file(config.path, torch_dtype=self._torch_dtype)
if not submodel_type:
return pipeline

View File

@@ -20,7 +20,7 @@ from typing import Optional
import requests
from huggingface_hub import HfApi, configure_http_backend, hf_hub_url
from huggingface_hub.utils._errors import RepositoryNotFoundError, RevisionNotFoundError
from huggingface_hub.errors import RepositoryNotFoundError, RevisionNotFoundError
from pydantic.networks import AnyHttpUrl
from requests.sessions import Session
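The import change above tracks huggingface_hub's reorganisation: these exceptions used to live in huggingface_hub.utils._errors and are now exported from huggingface_hub.errors. Code that must run against both release lines could hedge like this:

try:
    from huggingface_hub.errors import RepositoryNotFoundError, RevisionNotFoundError
except ImportError:  # older huggingface_hub releases
    from huggingface_hub.utils._errors import RepositoryNotFoundError, RevisionNotFoundError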

View File

@@ -1,7 +1,7 @@
import json
import re
from pathlib import Path
from typing import Any, Dict, Literal, Optional, Union
from typing import Any, Callable, Dict, Literal, Optional, Union
import safetensors.torch
import spandrel
@@ -22,6 +22,7 @@ from invokeai.backend.lora.conversions.flux_kohya_lora_conversion_utils import i
from invokeai.backend.model_hash.model_hash import HASHING_ALGORITHMS, ModelHash
from invokeai.backend.model_manager.config import (
AnyModelConfig,
AnyVariant,
BaseModelType,
ControlAdapterDefaultSettings,
InvalidModelConfigException,
@@ -33,11 +34,17 @@ from invokeai.backend.model_manager.config import (
ModelType,
ModelVariantType,
SchedulerPredictionType,
SubmodelDefinition,
SubModelType,
)
from invokeai.backend.model_manager.load.model_loaders.generic_diffusers import ConfigLoader
from invokeai.backend.model_manager.util.model_util import (
get_clip_variant_type,
lora_token_vector_length,
read_checkpoint_meta,
)
from invokeai.backend.model_manager.util.model_util import lora_token_vector_length, read_checkpoint_meta
from invokeai.backend.quantization.gguf.ggml_tensor import GGMLTensor
from invokeai.backend.quantization.gguf.loaders import gguf_sd_loader
from invokeai.backend.sd3.sd3_state_dict_utils import is_sd3_checkpoint
from invokeai.backend.spandrel_image_to_image_model import SpandrelImageToImageModel
from invokeai.backend.util.silence_warnings import SilenceWarnings
@@ -113,6 +120,7 @@ class ModelProbe(object):
"StableDiffusionXLPipeline": ModelType.Main,
"StableDiffusionXLImg2ImgPipeline": ModelType.Main,
"StableDiffusionXLInpaintPipeline": ModelType.Main,
"StableDiffusion3Pipeline": ModelType.Main,
"LatentConsistencyModelPipeline": ModelType.Main,
"AutoencoderKL": ModelType.VAE,
"AutoencoderTiny": ModelType.VAE,
@@ -121,11 +129,14 @@ class ModelProbe(object):
"T2IAdapter": ModelType.T2IAdapter,
"CLIPModel": ModelType.CLIPEmbed,
"CLIPTextModel": ModelType.CLIPEmbed,
"CLIPTextModelWithProjection": ModelType.CLIPEmbed,
"T5EncoderModel": ModelType.T5Encoder,
"FluxControlNetModel": ModelType.ControlNet,
"SD3Transformer2DModel": ModelType.Main,
"CLIPTextModelWithProjection": ModelType.CLIPEmbed,
}
TYPE2VARIANT: Dict[ModelType, Callable[[str], Optional[AnyVariant]]] = {ModelType.CLIPEmbed: get_clip_variant_type}
@classmethod
def register_probe(
cls, format: Literal["diffusers", "checkpoint", "onnx"], model_type: ModelType, probe_class: type[ProbeBase]
@@ -172,7 +183,10 @@ class ModelProbe(object):
fields["path"] = model_path.as_posix()
fields["type"] = fields.get("type") or model_type
fields["base"] = fields.get("base") or probe.get_base_type()
fields["variant"] = fields.get("variant") or probe.get_variant_type()
variant_func = cls.TYPE2VARIANT.get(fields["type"], None)
fields["variant"] = (
fields.get("variant") or (variant_func and variant_func(model_path.as_posix())) or probe.get_variant_type()
)
fields["prediction_type"] = fields.get("prediction_type") or probe.get_scheduler_prediction_type()
fields["image_encoder_model_id"] = fields.get("image_encoder_model_id") or probe.get_image_encoder_model_id()
fields["name"] = fields.get("name") or cls.get_model_name(model_path)
@@ -219,6 +233,10 @@ class ModelProbe(object):
and fields["prediction_type"] == SchedulerPredictionType.VPrediction
)
get_submodels = getattr(probe, "get_submodels", None)
if fields["base"] == BaseModelType.StableDiffusion3 and callable(get_submodels):
fields["submodels"] = get_submodels()
model_info = ModelConfigFactory.make_config(fields) # , key=fields.get("key", None))
return model_info
@@ -243,11 +261,6 @@ class ModelProbe(object):
for key in [str(k) for k in ckpt.keys()]:
if key.startswith(
(
# The following prefixes appear when multiple models have been bundled together in a single file (I
# believe the format originated in ComfyUI).
# first_stage_model = VAE
# cond_stage_model = Text Encoder
# model.diffusion_model = UNet / Transformer
"cond_stage_model.",
"first_stage_model.",
"model.diffusion_model.",
@@ -404,9 +417,6 @@ class ModelProbe(object):
# is used rather than attempting to support flux with separate model types and format
# If changed in the future, please fix me
config_file = "flux-schnell"
elif base_type == BaseModelType.StableDiffusion35:
# TODO(ryand): Think about what to do here.
config_file = "sd3.5-large"
else:
config_file = LEGACY_CONFIGS[base_type][variant_type]
if isinstance(config_file, dict): # need another tier for sd-2.x models
@@ -472,8 +482,9 @@ MODEL_NAME_TO_PREPROCESSOR = {
"normal": "normalbae_image_processor",
"sketch": "pidi_image_processor",
"scribble": "lineart_image_processor",
"lineart": "lineart_image_processor",
"lineart anime": "lineart_anime_image_processor",
"lineart_anime": "lineart_anime_image_processor",
"lineart": "lineart_image_processor",
"softedge": "hed_image_processor",
"hed": "hed_image_processor",
"shuffle": "content_shuffle_image_processor",
@@ -526,7 +537,7 @@ class CheckpointProbeBase(ProbeBase):
def get_variant_type(self) -> ModelVariantType:
model_type = ModelProbe.get_model_type_from_checkpoint(self.model_path, self.checkpoint)
base_type = self.get_base_type()
if model_type != ModelType.Main or base_type in (BaseModelType.Flux, BaseModelType.StableDiffusion35):
if model_type != ModelType.Main or base_type == BaseModelType.Flux:
return ModelVariantType.Normal
state_dict = self.checkpoint.get("state_dict") or self.checkpoint
in_channels = state_dict["model.diffusion_model.input_blocks.0.0.weight"].shape[1]
@@ -551,10 +562,6 @@ class PipelineCheckpointProbe(CheckpointProbeBase):
or "model.diffusion_model.double_blocks.0.img_attn.norm.key_norm.scale" in state_dict
):
return BaseModelType.Flux
if is_sd3_checkpoint(state_dict):
return BaseModelType.StableDiffusion35
key_name = "model.diffusion_model.input_blocks.2.1.transformer_blocks.0.attn2.to_k.weight"
if key_name in state_dict and state_dict[key_name].shape[-1] == 768:
return BaseModelType.StableDiffusion1
@@ -760,18 +767,33 @@ class FolderProbeBase(ProbeBase):
class PipelineFolderProbe(FolderProbeBase):
def get_base_type(self) -> BaseModelType:
with open(self.model_path / "unet" / "config.json", "r") as file:
unet_conf = json.load(file)
if unet_conf["cross_attention_dim"] == 768:
return BaseModelType.StableDiffusion1
elif unet_conf["cross_attention_dim"] == 1024:
return BaseModelType.StableDiffusion2
elif unet_conf["cross_attention_dim"] == 1280:
return BaseModelType.StableDiffusionXLRefiner
elif unet_conf["cross_attention_dim"] == 2048:
return BaseModelType.StableDiffusionXL
else:
raise InvalidModelConfigException(f"Unknown base model for {self.model_path}")
# Handle pipelines with a UNet (i.e SD 1.x, SD2, SDXL).
config_path = self.model_path / "unet" / "config.json"
if config_path.exists():
with open(config_path) as file:
unet_conf = json.load(file)
if unet_conf["cross_attention_dim"] == 768:
return BaseModelType.StableDiffusion1
elif unet_conf["cross_attention_dim"] == 1024:
return BaseModelType.StableDiffusion2
elif unet_conf["cross_attention_dim"] == 1280:
return BaseModelType.StableDiffusionXLRefiner
elif unet_conf["cross_attention_dim"] == 2048:
return BaseModelType.StableDiffusionXL
else:
raise InvalidModelConfigException(f"Unknown base model for {self.model_path}")
# Handle pipelines with a transformer (i.e. SD3).
config_path = self.model_path / "transformer" / "config.json"
if config_path.exists():
with open(config_path) as file:
transformer_conf = json.load(file)
if transformer_conf["_class_name"] == "SD3Transformer2DModel":
return BaseModelType.StableDiffusion3
else:
raise InvalidModelConfigException(f"Unknown base model for {self.model_path}")
raise InvalidModelConfigException(f"Unknown base model for {self.model_path}")
def get_scheduler_prediction_type(self) -> SchedulerPredictionType:
with open(self.model_path / "scheduler" / "scheduler_config.json", "r") as file:
@@ -783,6 +805,23 @@ class PipelineFolderProbe(FolderProbeBase):
else:
raise InvalidModelConfigException("Unknown scheduler prediction type: {scheduler_conf['prediction_type']}")
def get_submodels(self) -> Dict[SubModelType, SubmodelDefinition]:
config = ConfigLoader.load_config(self.model_path, config_name="model_index.json")
submodels: Dict[SubModelType, SubmodelDefinition] = {}
for key, value in config.items():
if key.startswith("_") or not (isinstance(value, list) and len(value) == 2):
continue
model_loader = str(value[1])
if model_type := ModelProbe.CLASS2TYPE.get(model_loader):
variant_func = ModelProbe.TYPE2VARIANT.get(model_type, None)
submodels[SubModelType(key)] = SubmodelDefinition(
path_or_prefix=(self.model_path / key).resolve().as_posix(),
model_type=model_type,
variant=variant_func and variant_func((self.model_path / key).as_posix()),
)
return submodels
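The new get_submodels above walks model_index.json: every key that does not start with an underscore and maps to a two-element [library, class name] pair names a submodel folder, and the class name is translated to a ModelType via CLASS2TYPE. A hedged, standalone sketch with illustrative data:

model_index = {
    "_class_name": "StableDiffusion3Pipeline",
    "transformer": ["diffusers", "SD3Transformer2DModel"],
    "text_encoder_3": ["transformers", "T5EncoderModel"],
    "scheduler": ["diffusers", "FlowMatchEulerDiscreteScheduler"],
}
class_to_type = {"SD3Transformer2DModel": "main", "T5EncoderModel": "t5_encoder"}  # simplified CLASS2TYPE

for key, value in model_index.items():
    if key.startswith("_") or not (isinstance(value, list) and len(value) == 2):
        continue
    if model_type := class_to_type.get(str(value[1])):
        print(f"{key}: {model_type}")  # e.g. "transformer: main"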
def get_variant_type(self) -> ModelVariantType:
# This only works for pipelines! Any kind of
# exception results in our returning the

View File

@@ -13,6 +13,9 @@ class StarterModelWithoutDependencies(BaseModel):
type: ModelType
format: Optional[ModelFormat] = None
is_installed: bool = False
# allows us to track what models a user has installed across name changes within starter models
# if you update a starter model name, please add the old one to this list for that starter model
previous_names: list[str] = []
class StarterModel(StarterModelWithoutDependencies):
@@ -137,6 +140,22 @@ flux_dev = StarterModel(
type=ModelType.Main,
dependencies=[t5_base_encoder, flux_vae, clip_l_encoder],
)
sd35_medium = StarterModel(
name="SD3.5 Medium",
base=BaseModelType.StableDiffusion3,
source="stabilityai/stable-diffusion-3.5-medium",
description="Medium SD3.5 Model: ~15GB",
type=ModelType.Main,
dependencies=[],
)
sd35_large = StarterModel(
name="SD3.5 Large",
base=BaseModelType.StableDiffusion3,
source="stabilityai/stable-diffusion-3.5-large",
description="Large SD3.5 Model: ~19G",
type=ModelType.Main,
dependencies=[],
)
cyberrealistic_sd1 = StarterModel(
name="CyberRealistic v4.1",
base=BaseModelType.StableDiffusion1,
@@ -243,44 +262,49 @@ easy_neg_sd1 = StarterModel(
# endregion
# region IP Adapter
ip_adapter_sd1 = StarterModel(
name="IP Adapter",
name="Standard Reference (IP Adapter)",
base=BaseModelType.StableDiffusion1,
source="https://huggingface.co/InvokeAI/ip_adapter_sd15/resolve/main/ip-adapter_sd15.safetensors",
description="IP-Adapter for SD 1.5 models",
description="References images with a more generalized/looser degree of precision.",
type=ModelType.IPAdapter,
dependencies=[ip_adapter_sd_image_encoder],
previous_names=["IP Adapter"],
)
ip_adapter_plus_sd1 = StarterModel(
name="IP Adapter Plus",
name="Precise Reference (IP Adapter Plus)",
base=BaseModelType.StableDiffusion1,
source="https://huggingface.co/InvokeAI/ip_adapter_plus_sd15/resolve/main/ip-adapter-plus_sd15.safetensors",
description="Refined IP-Adapter for SD 1.5 models",
description="References images with a higher degree of precision.",
type=ModelType.IPAdapter,
dependencies=[ip_adapter_sd_image_encoder],
previous_names=["IP Adapter Plus"],
)
ip_adapter_plus_face_sd1 = StarterModel(
name="IP Adapter Plus Face",
name="Face Reference (IP Adapter Plus Face)",
base=BaseModelType.StableDiffusion1,
source="https://huggingface.co/InvokeAI/ip_adapter_plus_face_sd15/resolve/main/ip-adapter-plus-face_sd15.safetensors",
description="Refined IP-Adapter for SD 1.5 models, adapted for faces",
description="References images with a higher degree of precision, adapted for faces",
type=ModelType.IPAdapter,
dependencies=[ip_adapter_sd_image_encoder],
previous_names=["IP Adapter Plus Face"],
)
ip_adapter_sdxl = StarterModel(
name="IP Adapter SDXL",
name="Standard Reference (IP Adapter ViT-H)",
base=BaseModelType.StableDiffusionXL,
source="https://huggingface.co/InvokeAI/ip_adapter_sdxl_vit_h/resolve/main/ip-adapter_sdxl_vit-h.safetensors",
description="IP-Adapter for SDXL models",
description="References images with a higher degree of precision.",
type=ModelType.IPAdapter,
dependencies=[ip_adapter_sdxl_image_encoder],
previous_names=["IP Adapter SDXL"],
)
ip_adapter_flux = StarterModel(
name="XLabs FLUX IP-Adapter",
name="Standard Reference (XLabs FLUX IP-Adapter)",
base=BaseModelType.Flux,
source="https://huggingface.co/XLabs-AI/flux-ip-adapter/resolve/main/flux-ip-adapter.safetensors",
description="FLUX IP-Adapter",
source="https://huggingface.co/XLabs-AI/flux-ip-adapter/resolve/main/ip_adapter.safetensors",
description="References images with a more generalized/looser degree of precision.",
type=ModelType.IPAdapter,
dependencies=[clip_vit_l_image_encoder],
previous_names=["XLabs FLUX IP-Adapter"],
)
# endregion
# region ControlNet
@@ -299,157 +323,162 @@ qr_code_cnet_sdxl = StarterModel(
type=ModelType.ControlNet,
)
canny_sd1 = StarterModel(
name="canny",
name="Hard Edge Detection (canny)",
base=BaseModelType.StableDiffusion1,
source="lllyasviel/control_v11p_sd15_canny",
description="ControlNet weights trained on sd-1.5 with canny conditioning.",
description="Uses detected edges in the image to control composition.",
type=ModelType.ControlNet,
previous_names=["canny"],
)
inpaint_cnet_sd1 = StarterModel(
name="inpaint",
name="Inpainting",
base=BaseModelType.StableDiffusion1,
source="lllyasviel/control_v11p_sd15_inpaint",
description="ControlNet weights trained on sd-1.5 with canny conditioning, inpaint version",
type=ModelType.ControlNet,
previous_names=["inpaint"],
)
mlsd_sd1 = StarterModel(
name="mlsd",
name="Line Drawing (mlsd)",
base=BaseModelType.StableDiffusion1,
source="lllyasviel/control_v11p_sd15_mlsd",
description="ControlNet weights trained on sd-1.5 with canny conditioning, MLSD version",
description="Uses straight line detection for controlling the generation.",
type=ModelType.ControlNet,
previous_names=["mlsd"],
)
depth_sd1 = StarterModel(
name="depth",
name="Depth Map",
base=BaseModelType.StableDiffusion1,
source="lllyasviel/control_v11f1p_sd15_depth",
description="ControlNet weights trained on sd-1.5 with depth conditioning",
description="Uses depth information in the image to control the depth in the generation.",
type=ModelType.ControlNet,
previous_names=["depth"],
)
normal_bae_sd1 = StarterModel(
name="normal_bae",
name="Lighting Detection (Normals)",
base=BaseModelType.StableDiffusion1,
source="lllyasviel/control_v11p_sd15_normalbae",
description="ControlNet weights trained on sd-1.5 with normalbae image conditioning",
description="Uses detected lighting information to guide the lighting of the composition.",
type=ModelType.ControlNet,
previous_names=["normal_bae"],
)
seg_sd1 = StarterModel(
name="seg",
name="Segmentation Map",
base=BaseModelType.StableDiffusion1,
source="lllyasviel/control_v11p_sd15_seg",
description="ControlNet weights trained on sd-1.5 with seg image conditioning",
description="Uses segmentation maps to guide the structure of the composition.",
type=ModelType.ControlNet,
previous_names=["seg"],
)
lineart_sd1 = StarterModel(
name="lineart",
name="Lineart",
base=BaseModelType.StableDiffusion1,
source="lllyasviel/control_v11p_sd15_lineart",
description="ControlNet weights trained on sd-1.5 with lineart image conditioning",
description="Uses lineart detection to guide the lighting of the composition.",
type=ModelType.ControlNet,
previous_names=["lineart"],
)
lineart_anime_sd1 = StarterModel(
name="lineart_anime",
name="Lineart Anime",
base=BaseModelType.StableDiffusion1,
source="lllyasviel/control_v11p_sd15s2_lineart_anime",
description="ControlNet weights trained on sd-1.5 with anime image conditioning",
description="Uses anime lineart detection to guide the lighting of the composition.",
type=ModelType.ControlNet,
previous_names=["lineart_anime"],
)
openpose_sd1 = StarterModel(
name="openpose",
name="Pose Detection (openpose)",
base=BaseModelType.StableDiffusion1,
source="lllyasviel/control_v11p_sd15_openpose",
description="ControlNet weights trained on sd-1.5 with openpose image conditioning",
description="Uses pose information to control the pose of human characters in the generation.",
type=ModelType.ControlNet,
previous_names=["openpose"],
)
scribble_sd1 = StarterModel(
name="scribble",
name="Contour Detection (scribble)",
base=BaseModelType.StableDiffusion1,
source="lllyasviel/control_v11p_sd15_scribble",
description="ControlNet weights trained on sd-1.5 with scribble image conditioning",
description="Uses edges, contours, or line art in the image to control composition.",
type=ModelType.ControlNet,
previous_names=["scribble"],
)
softedge_sd1 = StarterModel(
name="softedge",
name="Soft Edge Detection (softedge)",
base=BaseModelType.StableDiffusion1,
source="lllyasviel/control_v11p_sd15_softedge",
description="ControlNet weights trained on sd-1.5 with soft edge conditioning",
description="Uses a soft edge detection map to control composition.",
type=ModelType.ControlNet,
previous_names=["softedge"],
)
shuffle_sd1 = StarterModel(
name="shuffle",
name="Remix (shuffle)",
base=BaseModelType.StableDiffusion1,
source="lllyasviel/control_v11e_sd15_shuffle",
description="ControlNet weights trained on sd-1.5 with shuffle image conditioning",
type=ModelType.ControlNet,
previous_names=["shuffle"],
)
tile_sd1 = StarterModel(
name="tile",
name="Tile",
base=BaseModelType.StableDiffusion1,
source="lllyasviel/control_v11f1e_sd15_tile",
description="ControlNet weights trained on sd-1.5 with tiled image conditioning",
type=ModelType.ControlNet,
)
ip2p_sd1 = StarterModel(
name="ip2p",
base=BaseModelType.StableDiffusion1,
source="lllyasviel/control_v11e_sd15_ip2p",
description="ControlNet weights trained on sd-1.5 with ip2p conditioning.",
description="Uses image data to replicate exact colors/structure in the resulting generation.",
type=ModelType.ControlNet,
previous_names=["tile"],
)
canny_sdxl = StarterModel(
name="canny-sdxl",
name="Hard Edge Detection (canny)",
base=BaseModelType.StableDiffusionXL,
source="xinsir/controlNet-canny-sdxl-1.0",
description="ControlNet weights trained on sdxl-1.0 with canny conditioning, by Xinsir.",
description="Uses detected edges in the image to control composition.",
type=ModelType.ControlNet,
previous_names=["canny-sdxl"],
)
depth_sdxl = StarterModel(
name="depth-sdxl",
name="Depth Map",
base=BaseModelType.StableDiffusionXL,
source="diffusers/controlNet-depth-sdxl-1.0",
description="ControlNet weights trained on sdxl-1.0 with depth conditioning.",
description="Uses depth information in the image to control the depth in the generation.",
type=ModelType.ControlNet,
previous_names=["depth-sdxl"],
)
softedge_sdxl = StarterModel(
name="softedge-dexined-sdxl",
name="Soft Edge Detection (softedge)",
base=BaseModelType.StableDiffusionXL,
source="SargeZT/controlNet-sd-xl-1.0-softedge-dexined",
description="ControlNet weights trained on sdxl-1.0 with dexined soft edge preprocessing.",
type=ModelType.ControlNet,
)
depth_zoe_16_sdxl = StarterModel(
name="depth-16bit-zoe-sdxl",
base=BaseModelType.StableDiffusionXL,
source="SargeZT/controlNet-sd-xl-1.0-depth-16bit-zoe",
description="ControlNet weights trained on sdxl-1.0 with Zoe's preprocessor (16 bits).",
type=ModelType.ControlNet,
)
depth_zoe_32_sdxl = StarterModel(
name="depth-zoe-sdxl",
base=BaseModelType.StableDiffusionXL,
source="diffusers/controlNet-zoe-depth-sdxl-1.0",
description="ControlNet weights trained on sdxl-1.0 with Zoe's preprocessor (32 bits).",
description="Uses a soft edge detection map to control composition.",
type=ModelType.ControlNet,
previous_names=["softedge-dexined-sdxl"],
)
openpose_sdxl = StarterModel(
name="openpose-sdxl",
name="Pose Detection (openpose)",
base=BaseModelType.StableDiffusionXL,
source="xinsir/controlNet-openpose-sdxl-1.0",
description="ControlNet weights trained on sdxl-1.0 compatible with the DWPose processor by Xinsir.",
description="Uses pose information to control the pose of human characters in the generation.",
type=ModelType.ControlNet,
previous_names=["openpose-sdxl", "controlnet-openpose-sdxl"],
)
scribble_sdxl = StarterModel(
name="scribble-sdxl",
name="Contour Detection (scribble)",
base=BaseModelType.StableDiffusionXL,
source="xinsir/controlNet-scribble-sdxl-1.0",
description="ControlNet weights trained on sdxl-1.0 compatible with various lineart processors and black/white sketches by Xinsir.",
description="Uses edges, contours, or line art in the image to control composition.",
type=ModelType.ControlNet,
previous_names=["scribble-sdxl", "controlnet-scribble-sdxl"],
)
tile_sdxl = StarterModel(
name="tile-sdxl",
name="Tile",
base=BaseModelType.StableDiffusionXL,
source="xinsir/controlNet-tile-sdxl-1.0",
description="ControlNet weights trained on sdxl-1.0 with tiled image conditioning",
description="Uses image data to replicate exact colors/structure in the resulting generation.",
type=ModelType.ControlNet,
previous_names=["tile-sdxl"],
)
union_cnet_sdxl = StarterModel(
name="Multi-Guidance Detection (Union Pro)",
base=BaseModelType.StableDiffusionXL,
source="InvokeAI/Xinsir-SDXL_Controlnet_Union",
description="A unified ControlNet for SDXL model that supports 10+ control types",
type=ModelType.ControlNet,
)
union_cnet_flux = StarterModel(
@@ -462,60 +491,52 @@ union_cnet_flux = StarterModel(
# endregion
# region T2I Adapter
t2i_canny_sd1 = StarterModel(
name="canny-sd15",
name="Hard Edge Detection (canny)",
base=BaseModelType.StableDiffusion1,
source="TencentARC/t2iadapter_canny_sd15v2",
description="T2I Adapter weights trained on sd-1.5 with canny conditioning.",
description="Uses detected edges in the image to control composition",
type=ModelType.T2IAdapter,
previous_names=["canny-sd15"],
)
t2i_sketch_sd1 = StarterModel(
name="sketch-sd15",
name="Sketch",
base=BaseModelType.StableDiffusion1,
source="TencentARC/t2iadapter_sketch_sd15v2",
description="T2I Adapter weights trained on sd-1.5 with sketch conditioning.",
description="Uses a sketch to control composition",
type=ModelType.T2IAdapter,
previous_names=["sketch-sd15"],
)
t2i_depth_sd1 = StarterModel(
name="depth-sd15",
name="Depth Map",
base=BaseModelType.StableDiffusion1,
source="TencentARC/t2iadapter_depth_sd15v2",
description="T2I Adapter weights trained on sd-1.5 with depth conditioning.",
type=ModelType.T2IAdapter,
)
t2i_zoe_depth_sd1 = StarterModel(
name="zoedepth-sd15",
base=BaseModelType.StableDiffusion1,
source="TencentARC/t2iadapter_zoedepth_sd15v1",
description="T2I Adapter weights trained on sd-1.5 with zoe depth conditioning.",
description="Uses depth information in the image to control the depth in the generation.",
type=ModelType.T2IAdapter,
previous_names=["depth-sd15"],
)
t2i_canny_sdxl = StarterModel(
name="canny-sdxl",
name="Hard Edge Detection (canny)",
base=BaseModelType.StableDiffusionXL,
source="TencentARC/t2i-adapter-canny-sdxl-1.0",
description="T2I Adapter weights trained on sdxl-1.0 with canny conditioning.",
type=ModelType.T2IAdapter,
)
t2i_zoe_depth_sdxl = StarterModel(
name="zoedepth-sdxl",
base=BaseModelType.StableDiffusionXL,
source="TencentARC/t2i-adapter-depth-zoe-sdxl-1.0",
description="T2I Adapter weights trained on sdxl-1.0 with zoe depth conditioning.",
description="Uses detected edges in the image to control composition",
type=ModelType.T2IAdapter,
previous_names=["canny-sdxl"],
)
t2i_lineart_sdxl = StarterModel(
name="lineart-sdxl",
name="Lineart",
base=BaseModelType.StableDiffusionXL,
source="TencentARC/t2i-adapter-lineart-sdxl-1.0",
description="T2I Adapter weights trained on sdxl-1.0 with lineart conditioning.",
description="Uses lineart detection to guide the lighting of the composition.",
type=ModelType.T2IAdapter,
previous_names=["lineart-sdxl"],
)
t2i_sketch_sdxl = StarterModel(
name="sketch-sdxl",
name="Sketch",
base=BaseModelType.StableDiffusionXL,
source="TencentARC/t2i-adapter-sketch-sdxl-1.0",
description="T2I Adapter weights trained on sdxl-1.0 with sketch conditioning.",
description="Uses a sketch to control composition",
type=ModelType.T2IAdapter,
previous_names=["sketch-sdxl"],
)
# endregion
# region SpandrelImageToImage
@@ -565,6 +586,8 @@ STARTER_MODELS: list[StarterModel] = [
flux_dev_quantized,
flux_schnell,
flux_dev,
sd35_medium,
sd35_large,
cyberrealistic_sd1,
rev_animated_sd1,
dreamshaper_8_sd1,
@@ -600,22 +623,18 @@ STARTER_MODELS: list[StarterModel] = [
softedge_sd1,
shuffle_sd1,
tile_sd1,
ip2p_sd1,
canny_sdxl,
depth_sdxl,
softedge_sdxl,
depth_zoe_16_sdxl,
depth_zoe_32_sdxl,
openpose_sdxl,
scribble_sdxl,
tile_sdxl,
union_cnet_sdxl,
union_cnet_flux,
t2i_canny_sd1,
t2i_sketch_sd1,
t2i_depth_sd1,
t2i_zoe_depth_sd1,
t2i_canny_sdxl,
t2i_zoe_depth_sdxl,
t2i_lineart_sdxl,
t2i_sketch_sdxl,
realesrgan_x4,
@@ -646,7 +665,6 @@ sd1_bundle: list[StarterModel] = [
softedge_sd1,
shuffle_sd1,
tile_sd1,
ip2p_sd1,
swinir,
]
@@ -657,8 +675,6 @@ sdxl_bundle: list[StarterModel] = [
canny_sdxl,
depth_sdxl,
softedge_sdxl,
depth_zoe_16_sdxl,
depth_zoe_32_sdxl,
openpose_sdxl,
scribble_sdxl,
tile_sdxl,

View File

@@ -8,6 +8,7 @@ import safetensors
import torch
from picklescan.scanner import scan_file_path
from invokeai.backend.model_manager.config import ClipVariantType
from invokeai.backend.quantization.gguf.loaders import gguf_sd_loader
@@ -165,3 +166,25 @@ def convert_bundle_to_flux_transformer_checkpoint(
del transformer_state_dict[k]
return original_state_dict
def get_clip_variant_type(location: str) -> Optional[ClipVariantType]:
try:
path = Path(location)
config_path = path / "config.json"
if not config_path.exists():
config_path = path / "text_encoder" / "config.json"
if not config_path.exists():
return ClipVariantType.L
with open(config_path) as file:
clip_conf = json.load(file)
hidden_size = clip_conf.get("hidden_size", -1)
match hidden_size:
case 1280:
return ClipVariantType.G
case 768:
return ClipVariantType.L
case _:
return ClipVariantType.L
except Exception:
return ClipVariantType.L
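# --- Illustrative sketch, not part of this diff: exercising get_clip_variant_type() above.
# --- The temporary folder and hidden_size values are hypothetical; only the 1280 -> G /
# --- 768 -> L mapping (with L as the fallback) comes from the function itself.
import json
import tempfile
from pathlib import Path
with tempfile.TemporaryDirectory() as tmp:
    (Path(tmp) / "config.json").write_text(json.dumps({"hidden_size": 1280}))
    assert get_clip_variant_type(tmp) == ClipVariantType.G  # OpenCLIP bigG-style encoder
    (Path(tmp) / "config.json").write_text(json.dumps({"hidden_size": 768}))
    assert get_clip_variant_type(tmp) == ClipVariantType.L  # CLIP-L-style encoder
# A folder with no readable config.json also falls back to ClipVariantType.L.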

View File

@@ -85,6 +85,7 @@ def _filter_by_variant(files: List[Path], variant: ModelRepoVariant) -> Set[Path
"""Select the proper variant files from a list of HuggingFace repo_id paths."""
result: set[Path] = set()
subfolder_weights: dict[Path, list[SubfolderCandidate]] = {}
safetensors_detected = False
for path in files:
if path.suffix in [".onnx", ".pb", ".onnx_data"]:
if variant == ModelRepoVariant.ONNX:
@@ -119,19 +120,27 @@ def _filter_by_variant(files: List[Path], variant: ModelRepoVariant) -> Set[Path
# We prefer safetensors over other file formats and an exact variant match. We'll score each file based on
# variant and format and select the best one.
if safetensors_detected and path.suffix == ".bin":
continue
parent = path.parent
score = 0
if path.suffix == ".safetensors":
safetensors_detected = True
if parent in subfolder_weights:
subfolder_weights[parent] = [sfc for sfc in subfolder_weights[parent] if sfc.path.suffix != ".bin"]
score += 1
candidate_variant_label = path.suffixes[0] if len(path.suffixes) == 2 else None
# Some special handling is needed here if there is not an exact match and if we cannot infer the variant
# from the file name. In this case, we only give this file a point if the requested variant is FP32 or DEFAULT.
if candidate_variant_label == f".{variant}" or (
not candidate_variant_label and variant in [ModelRepoVariant.FP32, ModelRepoVariant.Default]
):
if (
variant is not ModelRepoVariant.Default
and candidate_variant_label
and candidate_variant_label.startswith(f".{variant.value}")
) or (not candidate_variant_label and variant in [ModelRepoVariant.FP32, ModelRepoVariant.Default]):
score += 1
if parent not in subfolder_weights:
@@ -146,7 +155,7 @@ def _filter_by_variant(files: List[Path], variant: ModelRepoVariant) -> Set[Path
# Check if at least one of the files has the explicit fp16 variant.
at_least_one_fp16 = False
for candidate in candidate_list:
if len(candidate.path.suffixes) == 2 and candidate.path.suffixes[0] == ".fp16":
if len(candidate.path.suffixes) == 2 and candidate.path.suffixes[0].startswith(".fp16"):
at_least_one_fp16 = True
break
@@ -162,7 +171,16 @@ def _filter_by_variant(files: List[Path], variant: ModelRepoVariant) -> Set[Path
# candidate.
highest_score_candidate = max(candidate_list, key=lambda candidate: candidate.score)
if highest_score_candidate:
result.add(highest_score_candidate.path)
pattern = r"^(.*?)-\d+-of-\d+(\.\w+)$"
match = re.match(pattern, highest_score_candidate.path.as_posix())
if match:
for candidate in candidate_list:
if candidate.path.as_posix().startswith(match.group(1)) and candidate.path.as_posix().endswith(
match.group(2)
):
result.add(candidate.path)
else:
result.add(highest_score_candidate.path)
# If one of the architecture-related variants was specified and no files matched other than
# config and text files then we return an empty list
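# --- Illustrative sketch, not part of this diff: what the sharded-checkpoint handling above does.
# --- The file names are hypothetical examples of the "-NNNNN-of-NNNNN" shard naming convention.
import re
pattern = r"^(.*?)-\d+-of-\d+(\.\w+)$"
best = "text_encoder_3/model-00001-of-00002.safetensors"
m = re.match(pattern, best)
assert m is not None and m.group(1) == "text_encoder_3/model" and m.group(2) == ".safetensors"
# Every sibling shard shares that prefix and suffix, so the loop above pulls in the whole set
# rather than just the single highest-scoring file:
shards = [
    "text_encoder_3/model-00001-of-00002.safetensors",
    "text_encoder_3/model-00002-of-00002.safetensors",
]
assert all(s.startswith(m.group(1)) and s.endswith(m.group(2)) for s in shards)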

View File

@@ -0,0 +1,58 @@
import torch
class InpaintExtension:
"""A class for managing inpainting with SD3."""
def __init__(self, init_latents: torch.Tensor, inpaint_mask: torch.Tensor, noise: torch.Tensor):
"""Initialize InpaintExtension.
Args:
init_latents (torch.Tensor): The initial latents (i.e. un-noised at timestep 0).
inpaint_mask (torch.Tensor): A mask specifying which elements to inpaint. Range [0, 1]. Values of 1 will be
re-generated. Values of 0 will remain unchanged. Values between 0 and 1 can be used to blend the
inpainted region with the background.
noise (torch.Tensor): The noise tensor used to noise the init_latents.
"""
assert init_latents.dim() == inpaint_mask.dim() == noise.dim() == 4
assert init_latents.shape[-2:] == inpaint_mask.shape[-2:] == noise.shape[-2:]
self._init_latents = init_latents
self._inpaint_mask = inpaint_mask
self._noise = noise
def _apply_mask_gradient_adjustment(self, t_prev: float) -> torch.Tensor:
"""Applies inpaint mask gradient adjustment and returns the inpaint mask to be used at the current timestep."""
# As we progress through the denoising process, we promote gradient regions of the mask to have a full weight of
# 1.0. This helps to produce more coherent seams around the inpainted region. We experimented with a (small)
# number of promotion strategies (e.g. gradual promotion based on timestep), but found that a simple cutoff
# threshold worked well.
# We use a small epsilon to avoid any potential issues with floating point precision.
eps = 1e-4
mask_gradient_t_cutoff = 0.5
if t_prev > mask_gradient_t_cutoff:
# Early in the denoising process, use the inpaint mask as-is.
return self._inpaint_mask
else:
# After the cut-off, promote all non-zero mask values to 1.0.
mask = self._inpaint_mask.where(self._inpaint_mask <= (0.0 + eps), 1.0)
return mask
def merge_intermediate_latents_with_init_latents(
self, intermediate_latents: torch.Tensor, t_prev: float
) -> torch.Tensor:
"""Merge the intermediate latents with the initial latents for the current timestep using the inpaint mask. I.e.
update the intermediate latents to keep the regions that are not being inpainted on the correct noise
trajectory.
This function should be called after each denoising step.
"""
mask = self._apply_mask_gradient_adjustment(t_prev)
# Noise the init latents for the current timestep.
noised_init_latents = self._noise * t_prev + (1.0 - t_prev) * self._init_latents
# Merge the intermediate latents with the noised_init_latents using the inpaint_mask.
return intermediate_latents * mask + noised_init_latents * (1.0 - mask)
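# --- Illustrative sketch, not part of this diff: the merge rule above on toy 1x1x1x1 "latents".
# --- All values are hypothetical; the arithmetic mirrors merge_intermediate_latents_with_init_latents.
import torch
init = torch.full((1, 1, 1, 1), 2.0)   # un-noised latents at t=0
noise = torch.full((1, 1, 1, 1), -1.0)
mask = torch.full((1, 1, 1, 1), 0.25)  # a soft (gradient) mask value
ext = InpaintExtension(init_latents=init, inpaint_mask=mask, noise=noise)
intermediate = torch.zeros((1, 1, 1, 1))
# Early step (t_prev > 0.5): the soft mask is used as-is.
# noised_init = -1.0 * 0.8 + 0.2 * 2.0 = -0.4; merged = 0.0 * 0.25 + (-0.4) * 0.75 = -0.3
early = ext.merge_intermediate_latents_with_init_latents(intermediate, t_prev=0.8)
assert torch.allclose(early, torch.tensor(-0.3))
# Late step (t_prev <= 0.5): non-zero mask values are promoted to 1.0, so the intermediate
# latents fully win inside the masked region.
late = ext.merge_intermediate_latents_with_init_latents(intermediate, t_prev=0.2)
assert torch.allclose(late, torch.tensor(0.0))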

View File

@@ -1,891 +0,0 @@
# This file was originally copied from:
# https://github.com/Stability-AI/sd3.5/blob/19bf11c4e1e37324c5aa5a61f010d4127848a09c/mmditx.py
### This file contains impls for MM-DiT, the core model component of SD3
import math
from typing import Dict, List, Optional
import numpy as np
import torch
from einops import rearrange, repeat
from invokeai.backend.sd3.other_impls import Mlp, attention
class PatchEmbed(torch.nn.Module):
"""2D Image to Patch Embedding"""
def __init__(
self,
img_size: Optional[int] = 224,
patch_size: int = 16,
in_chans: int = 3,
embed_dim: int = 768,
flatten: bool = True,
bias: bool = True,
strict_img_size: bool = True,
dynamic_img_pad: bool = False,
dtype: torch.dtype | None = None,
device: torch.device | None = None,
):
super().__init__()
self.patch_size = (patch_size, patch_size)
if img_size is not None:
self.img_size = (img_size, img_size)
self.grid_size = tuple([s // p for s, p in zip(self.img_size, self.patch_size, strict=False)])
self.num_patches = self.grid_size[0] * self.grid_size[1]
else:
self.img_size = None
self.grid_size = None
self.num_patches = None
# flatten spatial dim and transpose to channels last, kept for bwd compat
self.flatten = flatten
self.strict_img_size = strict_img_size
self.dynamic_img_pad = dynamic_img_pad
self.proj = torch.nn.Conv2d(
in_chans,
embed_dim,
kernel_size=patch_size,
stride=patch_size,
bias=bias,
dtype=dtype,
device=device,
)
def forward(self, x: torch.Tensor) -> torch.Tensor:
x = self.proj(x)
if self.flatten:
x = x.flatten(2).transpose(1, 2) # NCHW -> NLC
return x
def modulate(x: torch.Tensor, shift: torch.Tensor | None, scale: torch.Tensor) -> torch.Tensor:
if shift is None:
shift = torch.zeros_like(scale)
return x * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1)
#################################################################################
# Sine/Cosine Positional Embedding Functions #
#################################################################################
def get_2d_sincos_pos_embed(
embed_dim: int,
grid_size: int,
cls_token: bool = False,
extra_tokens: int = 0,
scaling_factor: Optional[float] = None,
offset: Optional[float] = None,
):
"""
grid_size: int of the grid height and width
return:
pos_embed: [grid_size*grid_size, embed_dim] or [1+grid_size*grid_size, embed_dim] (w/ or w/o cls_token)
"""
grid_h = np.arange(grid_size, dtype=np.float32)
grid_w = np.arange(grid_size, dtype=np.float32)
grid = np.meshgrid(grid_w, grid_h) # here w goes first
grid = np.stack(grid, axis=0)
if scaling_factor is not None:
grid = grid / scaling_factor
if offset is not None:
grid = grid - offset
grid = grid.reshape([2, 1, grid_size, grid_size])
pos_embed = get_2d_sincos_pos_embed_from_grid(embed_dim, grid)
if cls_token and extra_tokens > 0:
pos_embed = np.concatenate([np.zeros([extra_tokens, embed_dim]), pos_embed], axis=0)
return pos_embed
def get_2d_sincos_pos_embed_from_grid(embed_dim: int, grid):
assert embed_dim % 2 == 0
# use half of dimensions to encode grid_h
emb_h = get_1d_sincos_pos_embed_from_grid(embed_dim // 2, grid[0]) # (H*W, D/2)
emb_w = get_1d_sincos_pos_embed_from_grid(embed_dim // 2, grid[1]) # (H*W, D/2)
emb = np.concatenate([emb_h, emb_w], axis=1) # (H*W, D)
return emb
def get_1d_sincos_pos_embed_from_grid(embed_dim: int, pos):
"""
embed_dim: output dimension for each position
pos: a list of positions to be encoded: size (M,)
out: (M, D)
"""
assert embed_dim % 2 == 0
omega = np.arange(embed_dim // 2, dtype=np.float64)
omega /= embed_dim / 2.0
omega = 1.0 / 10000**omega # (D/2,)
pos = pos.reshape(-1) # (M,)
out = np.einsum("m,d->md", pos, omega) # (M, D/2), outer product
emb_sin = np.sin(out) # (M, D/2)
emb_cos = np.cos(out) # (M, D/2)
return np.concatenate([emb_sin, emb_cos], axis=1) # (M, D)
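# --- Illustrative sketch, not part of this diff: output shape of the helpers above for a
# --- hypothetical 4x4 patch grid with a 16-dim embedding.
pos_embed = get_2d_sincos_pos_embed(embed_dim=16, grid_size=4)
assert pos_embed.shape == (16, 16)  # (grid_size * grid_size, embed_dim)
# Half of the channels encode one grid axis and half the other; within each half the first
# channels are sines and the remaining ones cosines (see get_1d_sincos_pos_embed_from_grid).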
#################################################################################
# Embedding Layers for Timesteps and Class Labels #
#################################################################################
class TimestepEmbedder(torch.nn.Module):
"""Embeds scalar timesteps into vector representations."""
def __init__(self, hidden_size, frequency_embedding_size=256, dtype=None, device=None):
super().__init__()
self.mlp = torch.nn.Sequential(
torch.nn.Linear(
frequency_embedding_size,
hidden_size,
bias=True,
dtype=dtype,
device=device,
),
torch.nn.SiLU(),
torch.nn.Linear(hidden_size, hidden_size, bias=True, dtype=dtype, device=device),
)
self.frequency_embedding_size = frequency_embedding_size
@staticmethod
def timestep_embedding(t, dim, max_period=10000):
"""
Create sinusoidal timestep embeddings.
:param t: a 1-D Tensor of N indices, one per batch element.
These may be fractional.
:param dim: the dimension of the output.
:param max_period: controls the minimum frequency of the embeddings.
:return: an (N, D) Tensor of positional embeddings.
"""
half = dim // 2
freqs = torch.exp(-math.log(max_period) * torch.arange(start=0, end=half, dtype=torch.float32) / half).to(
device=t.device
)
args = t[:, None].float() * freqs[None]
embedding = torch.cat([torch.cos(args), torch.sin(args)], dim=-1)
if dim % 2:
embedding = torch.cat([embedding, torch.zeros_like(embedding[:, :1])], dim=-1)
if torch.is_floating_point(t):
embedding = embedding.to(dtype=t.dtype)
return embedding
def forward(self, t, dtype, **kwargs):
t_freq = self.timestep_embedding(t, self.frequency_embedding_size).to(dtype)
t_emb = self.mlp(t_freq)
return t_emb
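# --- Illustrative sketch, not part of this diff: shape of the sinusoidal embedding built above
# --- for a hypothetical batch of two timesteps.
import torch
emb = TimestepEmbedder.timestep_embedding(torch.tensor([0.0, 500.0]), dim=256)
assert emb.shape == (2, 256)  # first half cosines, second half sines for each timestep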
class VectorEmbedder(torch.nn.Module):
"""Embeds a flat vector of dimension input_dim"""
def __init__(self, input_dim: int, hidden_size: int, dtype=None, device=None):
super().__init__()
self.mlp = torch.nn.Sequential(
torch.nn.Linear(input_dim, hidden_size, bias=True, dtype=dtype, device=device),
torch.nn.SiLU(),
torch.nn.Linear(hidden_size, hidden_size, bias=True, dtype=dtype, device=device),
)
def forward(self, x: torch.Tensor) -> torch.Tensor:
return self.mlp(x)
#################################################################################
# Core DiT Model #
#################################################################################
def split_qkv(qkv, head_dim):
qkv = qkv.reshape(qkv.shape[0], qkv.shape[1], 3, -1, head_dim).movedim(2, 0)
return qkv[0], qkv[1], qkv[2]
def optimized_attention(qkv, num_heads):
return attention(qkv[0], qkv[1], qkv[2], num_heads)
class SelfAttention(torch.nn.Module):
ATTENTION_MODES = ("xformers", "torch", "torch-hb", "math", "debug")
def __init__(
self,
dim: int,
num_heads: int = 8,
qkv_bias: bool = False,
qk_scale: Optional[float] = None,
attn_mode: str = "xformers",
pre_only: bool = False,
qk_norm: Optional[str] = None,
rmsnorm: bool = False,
dtype=None,
device=None,
):
super().__init__()
self.num_heads = num_heads
self.head_dim = dim // num_heads
self.qkv = torch.nn.Linear(dim, dim * 3, bias=qkv_bias, dtype=dtype, device=device)
if not pre_only:
self.proj = torch.nn.Linear(dim, dim, dtype=dtype, device=device)
assert attn_mode in self.ATTENTION_MODES
self.attn_mode = attn_mode
self.pre_only = pre_only
if qk_norm == "rms":
self.ln_q = RMSNorm(
self.head_dim,
elementwise_affine=True,
eps=1.0e-6,
dtype=dtype,
device=device,
)
self.ln_k = RMSNorm(
self.head_dim,
elementwise_affine=True,
eps=1.0e-6,
dtype=dtype,
device=device,
)
elif qk_norm == "ln":
self.ln_q = torch.nn.LayerNorm(
self.head_dim,
elementwise_affine=True,
eps=1.0e-6,
dtype=dtype,
device=device,
)
self.ln_k = torch.nn.LayerNorm(
self.head_dim,
elementwise_affine=True,
eps=1.0e-6,
dtype=dtype,
device=device,
)
elif qk_norm is None:
self.ln_q = torch.nn.Identity()
self.ln_k = torch.nn.Identity()
else:
raise ValueError(qk_norm)
def pre_attention(self, x: torch.Tensor):
B, L, C = x.shape
qkv = self.qkv(x)
q, k, v = split_qkv(qkv, self.head_dim)
q = self.ln_q(q).reshape(q.shape[0], q.shape[1], -1)
k = self.ln_k(k).reshape(q.shape[0], q.shape[1], -1)
return (q, k, v)
def post_attention(self, x: torch.Tensor) -> torch.Tensor:
assert not self.pre_only
x = self.proj(x)
return x
def forward(self, x: torch.Tensor) -> torch.Tensor:
(q, k, v) = self.pre_attention(x)
x = attention(q, k, v, self.num_heads)
x = self.post_attention(x)
return x
class RMSNorm(torch.nn.Module):
def __init__(
self,
dim: int,
elementwise_affine: bool = False,
eps: float = 1e-6,
device=None,
dtype=None,
):
"""
Initialize the RMSNorm normalization layer.
Args:
dim (int): The dimension of the input tensor.
eps (float, optional): A small value added to the denominator for numerical stability. Default is 1e-6.
Attributes:
eps (float): A small value added to the denominator for numerical stability.
weight (torch.nn.Parameter): Learnable scaling parameter.
"""
super().__init__()
self.eps = eps
self.learnable_scale = elementwise_affine
if self.learnable_scale:
self.weight = torch.nn.Parameter(torch.empty(dim, device=device, dtype=dtype))
else:
self.register_parameter("weight", None)
def _norm(self, x):
"""
Apply the RMSNorm normalization to the input tensor.
Args:
x (torch.Tensor): The input tensor.
Returns:
torch.Tensor: The normalized tensor.
"""
return x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + self.eps)
def forward(self, x):
"""
Forward pass through the RMSNorm layer.
Args:
x (torch.Tensor): The input tensor.
Returns:
torch.Tensor: The output tensor after applying RMSNorm.
"""
x = self._norm(x)
if self.learnable_scale:
return x * self.weight.to(device=x.device, dtype=x.dtype)
else:
return x
class SwiGLUFeedForward(torch.nn.Module):
def __init__(
self,
dim: int,
hidden_dim: int,
multiple_of: int,
ffn_dim_multiplier: Optional[float] = None,
):
"""
Initialize the FeedForward module.
Args:
dim (int): Input dimension.
hidden_dim (int): Hidden dimension of the feedforward layer.
multiple_of (int): Value to ensure hidden dimension is a multiple of this value.
ffn_dim_multiplier (float, optional): Custom multiplier for hidden dimension. Defaults to None.
Attributes:
w1 (torch.nn.Linear): Linear transformation for the first layer.
w2 (torch.nn.Linear): Linear transformation for the second layer.
w3 (torch.nn.Linear): Linear transformation for the third layer.
"""
super().__init__()
hidden_dim = int(2 * hidden_dim / 3)
# custom dim factor multiplier
if ffn_dim_multiplier is not None:
hidden_dim = int(ffn_dim_multiplier * hidden_dim)
hidden_dim = multiple_of * ((hidden_dim + multiple_of - 1) // multiple_of)
self.w1 = torch.nn.Linear(dim, hidden_dim, bias=False)
self.w2 = torch.nn.Linear(hidden_dim, dim, bias=False)
self.w3 = torch.nn.Linear(dim, hidden_dim, bias=False)
def forward(self, x):
return self.w2(torch.nn.functional.silu(self.w1(x)) * self.w3(x))
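# --- Illustrative note, not part of this diff: how the hidden dimension above works out for a
# --- hypothetical 1536-wide block with mlp_ratio=4 (hidden_dim=6144 passed in, multiple_of=256):
# ---   hidden_dim = int(2 * 6144 / 3)               -> 4096
# ---   hidden_dim = 256 * ((4096 + 256 - 1) // 256) -> 4096 (already a multiple of 256)
# --- so w1/w3 project 1536 -> 4096 and w2 maps 4096 -> 1536.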
class DismantledBlock(torch.nn.Module):
"""A DiT block with gated adaptive layer norm (adaLN) conditioning."""
ATTENTION_MODES = ("xformers", "torch", "torch-hb", "math", "debug")
def __init__(
self,
hidden_size: int,
num_heads: int,
mlp_ratio: float = 4.0,
attn_mode: str = "xformers",
qkv_bias: bool = False,
pre_only: bool = False,
rmsnorm: bool = False,
scale_mod_only: bool = False,
swiglu: bool = False,
qk_norm: Optional[str] = None,
x_block_self_attn: bool = False,
dtype=None,
device=None,
**block_kwargs,
):
super().__init__()
assert attn_mode in self.ATTENTION_MODES
if not rmsnorm:
self.norm1 = torch.nn.LayerNorm(
hidden_size,
elementwise_affine=False,
eps=1e-6,
dtype=dtype,
device=device,
)
else:
self.norm1 = RMSNorm(hidden_size, elementwise_affine=False, eps=1e-6)
self.attn = SelfAttention(
dim=hidden_size,
num_heads=num_heads,
qkv_bias=qkv_bias,
attn_mode=attn_mode,
pre_only=pre_only,
qk_norm=qk_norm,
rmsnorm=rmsnorm,
dtype=dtype,
device=device,
)
if x_block_self_attn:
assert not pre_only
assert not scale_mod_only
self.x_block_self_attn = True
self.attn2 = SelfAttention(
dim=hidden_size,
num_heads=num_heads,
qkv_bias=qkv_bias,
attn_mode=attn_mode,
pre_only=False,
qk_norm=qk_norm,
rmsnorm=rmsnorm,
dtype=dtype,
device=device,
)
else:
self.x_block_self_attn = False
if not pre_only:
if not rmsnorm:
self.norm2 = torch.nn.LayerNorm(
hidden_size,
elementwise_affine=False,
eps=1e-6,
dtype=dtype,
device=device,
)
else:
self.norm2 = RMSNorm(hidden_size, elementwise_affine=False, eps=1e-6)
mlp_hidden_dim = int(hidden_size * mlp_ratio)
if not pre_only:
if not swiglu:
self.mlp = Mlp(
in_features=hidden_size,
hidden_features=mlp_hidden_dim,
act_layer=torch.nn.GELU(approximate="tanh"),
dtype=dtype,
device=device,
)
else:
self.mlp = SwiGLUFeedForward(dim=hidden_size, hidden_dim=mlp_hidden_dim, multiple_of=256)
self.scale_mod_only = scale_mod_only
if x_block_self_attn:
assert not pre_only
assert not scale_mod_only
n_mods = 9
elif not scale_mod_only:
n_mods = 6 if not pre_only else 2
else:
n_mods = 4 if not pre_only else 1
self.adaLN_modulation = torch.nn.Sequential(
torch.nn.SiLU(),
torch.nn.Linear(hidden_size, n_mods * hidden_size, bias=True, dtype=dtype, device=device),
)
self.pre_only = pre_only
def pre_attention(self, x: torch.Tensor, c: torch.Tensor):
assert x is not None, "pre_attention called with None input"
if not self.pre_only:
if not self.scale_mod_only:
shift_msa, scale_msa, gate_msa, shift_mlp, scale_mlp, gate_mlp = self.adaLN_modulation(c).chunk(
6, dim=1
)
else:
shift_msa = None
shift_mlp = None
scale_msa, gate_msa, scale_mlp, gate_mlp = self.adaLN_modulation(c).chunk(4, dim=1)
qkv = self.attn.pre_attention(modulate(self.norm1(x), shift_msa, scale_msa))
return qkv, (x, gate_msa, shift_mlp, scale_mlp, gate_mlp)
else:
if not self.scale_mod_only:
shift_msa, scale_msa = self.adaLN_modulation(c).chunk(2, dim=1)
else:
shift_msa = None
scale_msa = self.adaLN_modulation(c)
qkv = self.attn.pre_attention(modulate(self.norm1(x), shift_msa, scale_msa))
return qkv, None
def post_attention(
self,
attn: torch.Tensor,
x: torch.Tensor,
gate_msa: torch.Tensor,
shift_mlp: torch.Tensor,
scale_mlp: torch.Tensor,
gate_mlp: torch.Tensor,
) -> torch.Tensor:
assert not self.pre_only
x = x + gate_msa.unsqueeze(1) * self.attn.post_attention(attn)
x = x + gate_mlp.unsqueeze(1) * self.mlp(modulate(self.norm2(x), shift_mlp, scale_mlp))
return x
def pre_attention_x(
self, x: torch.Tensor, c: torch.Tensor
) -> tuple[
tuple[torch.Tensor, torch.Tensor, torch.Tensor],
tuple[torch.Tensor, torch.Tensor, torch.Tensor],
tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor],
]:
assert self.x_block_self_attn
(
shift_msa,
scale_msa,
gate_msa,
shift_mlp,
scale_mlp,
gate_mlp,
shift_msa2,
scale_msa2,
gate_msa2,
) = self.adaLN_modulation(c).chunk(9, dim=1)
x_norm = self.norm1(x)
qkv = self.attn.pre_attention(modulate(x_norm, shift_msa, scale_msa))
qkv2 = self.attn2.pre_attention(modulate(x_norm, shift_msa2, scale_msa2))
return (
qkv,
qkv2,
(
x,
gate_msa,
shift_mlp,
scale_mlp,
gate_mlp,
gate_msa2,
),
)
def post_attention_x(
self,
attn: torch.Tensor,
attn2: torch.Tensor,
x: torch.Tensor,
gate_msa: torch.Tensor,
shift_mlp: torch.Tensor,
scale_mlp: torch.Tensor,
gate_mlp: torch.Tensor,
gate_msa2: torch.Tensor,
attn1_dropout: float = 0.0,
):
assert not self.pre_only
if attn1_dropout > 0.0:
# Use torch.bernoulli to implement dropout, only dropout the batch dimension
attn1_dropout = torch.bernoulli(torch.full((attn.size(0), 1, 1), 1 - attn1_dropout, device=attn.device))
attn_ = gate_msa.unsqueeze(1) * self.attn.post_attention(attn) * attn1_dropout
else:
attn_ = gate_msa.unsqueeze(1) * self.attn.post_attention(attn)
x = x + attn_
attn2_ = gate_msa2.unsqueeze(1) * self.attn2.post_attention(attn2)
x = x + attn2_
mlp_ = gate_mlp.unsqueeze(1) * self.mlp(modulate(self.norm2(x), shift_mlp, scale_mlp))
x = x + mlp_
return x, (gate_msa, gate_msa2, gate_mlp, attn_, attn2_)
def forward(self, x: torch.Tensor, c: torch.Tensor):
assert not self.pre_only
if self.x_block_self_attn:
(q, k, v), (q2, k2, v2), intermediates = self.pre_attention_x(x, c)
attn = attention(q, k, v, self.attn.num_heads)
attn2 = attention(q2, k2, v2, self.attn2.num_heads)
return self.post_attention_x(attn, attn2, *intermediates)
else:
(q, k, v), intermediates = self.pre_attention(x, c)
attn = attention(q, k, v, self.attn.num_heads)
return self.post_attention(attn, *intermediates)
def block_mixing(
context: torch.Tensor, x: torch.Tensor, context_block: DismantledBlock, x_block: DismantledBlock, c: torch.Tensor
):
assert context is not None, "block_mixing called with None context"
context_qkv, context_intermediates = context_block.pre_attention(context, c)
if x_block.x_block_self_attn:
x_qkv, x_qkv2, x_intermediates = x_block.pre_attention_x(x, c)
else:
x_qkv, x_intermediates = x_block.pre_attention(x, c)
o: list[torch.Tensor] = []
for t in range(3):
o.append(torch.cat((context_qkv[t], x_qkv[t]), dim=1))
q, k, v = tuple(o)
attn = attention(q, k, v, x_block.attn.num_heads)
context_attn, x_attn = (
attn[:, : context_qkv[0].shape[1]],
attn[:, context_qkv[0].shape[1] :],
)
if not context_block.pre_only:
context = context_block.post_attention(context_attn, *context_intermediates)
else:
context = None
if x_block.x_block_self_attn:
x_q2, x_k2, x_v2 = x_qkv2
attn2 = attention(x_q2, x_k2, x_v2, x_block.attn2.num_heads)
else:
x = x_block.post_attention(x_attn, *x_intermediates)
return context, x
class JointBlock(torch.nn.Module):
"""just a small wrapper to serve as a fsdp unit"""
def __init__(self, *args, **kwargs):
super().__init__()
pre_only = kwargs.pop("pre_only")
qk_norm = kwargs.pop("qk_norm", None)
x_block_self_attn = kwargs.pop("x_block_self_attn", False)
self.context_block = DismantledBlock(*args, pre_only=pre_only, qk_norm=qk_norm, **kwargs)
self.x_block = DismantledBlock(
*args,
pre_only=False,
qk_norm=qk_norm,
x_block_self_attn=x_block_self_attn,
**kwargs,
)
def forward(self, *args, **kwargs):
return block_mixing(*args, context_block=self.context_block, x_block=self.x_block, **kwargs)
class FinalLayer(torch.nn.Module):
"""
The final layer of DiT.
"""
def __init__(
self,
hidden_size: int,
patch_size: int,
out_channels: int,
total_out_channels: Optional[int] = None,
dtype: Optional[torch.dtype] = None,
device: Optional[torch.device] = None,
):
super().__init__()
self.norm_final = torch.nn.LayerNorm(
hidden_size, elementwise_affine=False, eps=1e-6, dtype=dtype, device=device
)
self.linear = (
torch.nn.Linear(
hidden_size,
patch_size * patch_size * out_channels,
bias=True,
dtype=dtype,
device=device,
)
if (total_out_channels is None)
else torch.nn.Linear(hidden_size, total_out_channels, bias=True, dtype=dtype, device=device)
)
self.adaLN_modulation = torch.nn.Sequential(
torch.nn.SiLU(),
torch.nn.Linear(hidden_size, 2 * hidden_size, bias=True, dtype=dtype, device=device),
)
def forward(self, x: torch.Tensor, c: torch.Tensor) -> torch.Tensor:
shift, scale = self.adaLN_modulation(c).chunk(2, dim=1)
x = modulate(self.norm_final(x), shift, scale)
x = self.linear(x)
return x
class MMDiTX(torch.nn.Module):
"""Diffusion model with a Transformer backbone."""
def __init__(
self,
input_size: int | None = 32,
patch_size: int = 2,
in_channels: int = 4,
depth: int = 28,
mlp_ratio: float = 4.0,
learn_sigma: bool = False,
adm_in_channels: Optional[int] = None,
context_embedder_config: Optional[Dict] = None,
register_length: int = 0,
attn_mode: str = "torch",
rmsnorm: bool = False,
scale_mod_only: bool = False,
swiglu: bool = False,
out_channels: Optional[int] = None,
pos_embed_scaling_factor: Optional[float] = None,
pos_embed_offset: Optional[float] = None,
pos_embed_max_size: Optional[int] = None,
num_patches: Optional[int] = None,
qk_norm: Optional[str] = None,
x_block_self_attn_layers: Optional[List[int]] = None,
qkv_bias: bool = True,
dtype: Optional[torch.dtype] = None,
device: Optional[torch.device] = None,
verbose: bool = False,
):
super().__init__()
if verbose:
print(
f"mmdit initializing with: {input_size=}, {patch_size=}, {in_channels=}, {depth=}, {mlp_ratio=}, {learn_sigma=}, {adm_in_channels=}, {context_embedder_config=}, {register_length=}, {attn_mode=}, {rmsnorm=}, {scale_mod_only=}, {swiglu=}, {out_channels=}, {pos_embed_scaling_factor=}, {pos_embed_offset=}, {pos_embed_max_size=}, {num_patches=}, {qk_norm=}, {qkv_bias=}, {dtype=}, {device=}"
)
self.dtype = dtype
self.learn_sigma = learn_sigma
self.in_channels = in_channels
default_out_channels = in_channels * 2 if learn_sigma else in_channels
self.out_channels = out_channels if out_channels is not None else default_out_channels
self.patch_size = patch_size
self.pos_embed_scaling_factor = pos_embed_scaling_factor
self.pos_embed_offset = pos_embed_offset
self.pos_embed_max_size = pos_embed_max_size
self.x_block_self_attn_layers = x_block_self_attn_layers or []
# apply magic --> this defines a head_size of 64
hidden_size = 64 * depth
num_heads = depth
self.num_heads = num_heads
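# Illustrative note, not part of this diff: with this convention a hypothetical depth of 24
# gives hidden_size = 64 * 24 = 1536 and num_heads = 24, i.e. a per-head width of 64, which is
# exactly what SelfAttention recovers via head_dim = dim // num_heads.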
self.x_embedder = PatchEmbed(
input_size,
patch_size,
in_channels,
hidden_size,
bias=True,
strict_img_size=self.pos_embed_max_size is None,
dtype=dtype,
device=device,
)
self.t_embedder = TimestepEmbedder(hidden_size, dtype=dtype, device=device)
if adm_in_channels is not None:
assert isinstance(adm_in_channels, int)
self.y_embedder = VectorEmbedder(adm_in_channels, hidden_size, dtype=dtype, device=device)
self.context_embedder = torch.nn.Identity()
if context_embedder_config is not None:
if context_embedder_config["target"] == "torch.nn.Linear":
self.context_embedder = torch.nn.Linear(**context_embedder_config["params"], dtype=dtype, device=device)
self.register_length = register_length
if self.register_length > 0:
self.register = torch.nn.Parameter(torch.randn(1, register_length, hidden_size, dtype=dtype, device=device))
# num_patches = self.x_embedder.num_patches
# Will use fixed sin-cos embedding:
# just use a buffer already
if num_patches is not None:
self.register_buffer(
"pos_embed",
torch.zeros(1, num_patches, hidden_size, dtype=dtype, device=device),
)
else:
self.pos_embed = None
self.joint_blocks = torch.nn.ModuleList(
[
JointBlock(
hidden_size,
num_heads,
mlp_ratio=mlp_ratio,
qkv_bias=qkv_bias,
attn_mode=attn_mode,
pre_only=i == depth - 1,
rmsnorm=rmsnorm,
scale_mod_only=scale_mod_only,
swiglu=swiglu,
qk_norm=qk_norm,
x_block_self_attn=(i in self.x_block_self_attn_layers),
dtype=dtype,
device=device,
)
for i in range(depth)
]
)
self.final_layer = FinalLayer(hidden_size, patch_size, self.out_channels, dtype=dtype, device=device)
def cropped_pos_embed(self, hw: torch.Size) -> torch.Tensor:
assert self.pos_embed_max_size is not None
p = self.x_embedder.patch_size[0]
h, w = hw
# patched size
h = h // p
w = w // p
assert h <= self.pos_embed_max_size, (h, self.pos_embed_max_size)
assert w <= self.pos_embed_max_size, (w, self.pos_embed_max_size)
top = (self.pos_embed_max_size - h) // 2
left = (self.pos_embed_max_size - w) // 2
spatial_pos_embed: torch.Tensor = rearrange(
self.pos_embed,
"1 (h w) c -> 1 h w c",
h=self.pos_embed_max_size,
w=self.pos_embed_max_size,
) # type: ignore Type checking does not correctly infer the type of the self.pos_embed buffer.
spatial_pos_embed = spatial_pos_embed[:, top : top + h, left : left + w, :]
spatial_pos_embed = rearrange(spatial_pos_embed, "1 h w c -> 1 (h w) c")
return spatial_pos_embed
def unpatchify(self, x: torch.Tensor, hw: Optional[torch.Size] = None) -> torch.Tensor:
"""
x: (N, T, patch_size**2 * C)
imgs: (N, H, W, C)
"""
c = self.out_channels
p = self.x_embedder.patch_size[0]
if hw is None:
h = w = int(x.shape[1] ** 0.5)
else:
h, w = hw
h = h // p
w = w // p
assert h * w == x.shape[1]
x = x.reshape(shape=(x.shape[0], h, w, p, p, c))
x = torch.einsum("nhwpqc->nchpwq", x)
imgs = x.reshape(shape=(x.shape[0], c, h * p, w * p))
return imgs
def forward_core_with_concat(
self,
x: torch.Tensor,
c_mod: torch.Tensor,
context: Optional[torch.Tensor] = None,
) -> torch.Tensor:
if self.register_length > 0:
context = torch.cat(
(
repeat(self.register, "1 ... -> b ...", b=x.shape[0]),
context if context is not None else torch.Tensor([]).type_as(x),
),
1,
)
# context is B, L', D
# x is B, L, D
for block in self.joint_blocks:
context, x = block(context, x, c=c_mod)
x = self.final_layer(x, c_mod) # (N, T, patch_size ** 2 * out_channels)
return x
def forward(
self,
x: torch.Tensor,
t: torch.Tensor,
y: Optional[torch.Tensor] = None,
context: Optional[torch.Tensor] = None,
) -> torch.Tensor:
"""
Forward pass of DiT.
x: (N, C, H, W) tensor of spatial inputs (images or latent representations of images)
t: (N,) tensor of diffusion timesteps
y: (N,) tensor of class labels
"""
hw = x.shape[-2:]
x = self.x_embedder(x) + self.cropped_pos_embed(hw)
c = self.t_embedder(t, dtype=x.dtype) # (N, D)
if y is not None:
y = self.y_embedder(y) # (N, D)
c = c + y # (N, D)
context = self.context_embedder(context)
x = self.forward_core_with_concat(x, c, context)
x = self.unpatchify(x, hw=hw) # (N, out_channels, H, W)
return x

View File

@@ -1,795 +0,0 @@
# This file was originally copied from:
# https://github.com/Stability-AI/sd3.5/blob/19bf11c4e1e37324c5aa5a61f010d4127848a09c/other_impls.py
### This file contains impls for underlying related models (CLIP, T5, etc)
import math
from typing import Callable, Optional
import torch
from transformers import CLIPTokenizer, T5TokenizerFast
#################################################################################################
### Core/Utility
#################################################################################################
def attention(
q: torch.Tensor, k: torch.Tensor, v: torch.Tensor, heads: int, mask: Optional[torch.Tensor] = None
) -> torch.Tensor:
"""Convenience wrapper around a basic attention operation"""
b, _, dim_head = q.shape
dim_head //= heads
q, k, v = map(lambda t: t.view(b, -1, heads, dim_head).transpose(1, 2), (q, k, v))
out = torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=mask, dropout_p=0.0, is_causal=False)
return out.transpose(1, 2).reshape(b, -1, heads * dim_head)
class Mlp(torch.nn.Module):
"""MLP as used in Vision Transformer, MLP-Mixer and related networks"""
def __init__(
self,
in_features: int,
hidden_features: Optional[int] = None,
out_features: Optional[int] = None,
act_layer: Callable[[torch.Tensor], torch.Tensor] | None = None,
bias: bool = True,
dtype: Optional[torch.dtype] = None,
device: Optional[torch.device] = None,
):
super().__init__()
out_features = out_features or in_features
hidden_features = hidden_features or in_features
if act_layer is None:
act_layer = torch.nn.functional.gelu
self.fc1 = torch.nn.Linear(in_features, hidden_features, bias=bias, dtype=dtype, device=device)
self.act = act_layer
self.fc2 = torch.nn.Linear(hidden_features, out_features, bias=bias, dtype=dtype, device=device)
def forward(self, x: torch.Tensor) -> torch.Tensor:
x = self.fc1(x)
x = self.act(x)
x = self.fc2(x)
return x
#################################################################################################
### CLIP
#################################################################################################
class CLIPAttention(torch.nn.Module):
def __init__(self, embed_dim, heads, dtype, device):
super().__init__()
self.heads = heads
self.q_proj = torch.nn.Linear(embed_dim, embed_dim, bias=True, dtype=dtype, device=device)
self.k_proj = torch.nn.Linear(embed_dim, embed_dim, bias=True, dtype=dtype, device=device)
self.v_proj = torch.nn.Linear(embed_dim, embed_dim, bias=True, dtype=dtype, device=device)
self.out_proj = torch.nn.Linear(embed_dim, embed_dim, bias=True, dtype=dtype, device=device)
def forward(self, x, mask=None):
q = self.q_proj(x)
k = self.k_proj(x)
v = self.v_proj(x)
out = attention(q, k, v, self.heads, mask)
return self.out_proj(out)
ACTIVATIONS = {
"quick_gelu": lambda a: a * torch.sigmoid(1.702 * a),
"gelu": torch.nn.functional.gelu,
}
class CLIPLayer(torch.nn.Module):
def __init__(
self,
embed_dim,
heads,
intermediate_size,
intermediate_activation,
dtype,
device,
):
super().__init__()
self.layer_norm1 = torch.nn.LayerNorm(embed_dim, dtype=dtype, device=device)
self.self_attn = CLIPAttention(embed_dim, heads, dtype, device)
self.layer_norm2 = torch.nn.LayerNorm(embed_dim, dtype=dtype, device=device)
# self.mlp = CLIPMLP(embed_dim, intermediate_size, intermediate_activation, dtype, device)
self.mlp = Mlp(
embed_dim,
intermediate_size,
embed_dim,
act_layer=ACTIVATIONS[intermediate_activation],
dtype=dtype,
device=device,
)
def forward(self, x, mask=None):
x += self.self_attn(self.layer_norm1(x), mask)
x += self.mlp(self.layer_norm2(x))
return x
class CLIPEncoder(torch.nn.Module):
def __init__(
self,
num_layers,
embed_dim,
heads,
intermediate_size,
intermediate_activation,
dtype,
device,
):
super().__init__()
self.layers = torch.nn.ModuleList(
[
CLIPLayer(
embed_dim,
heads,
intermediate_size,
intermediate_activation,
dtype,
device,
)
for i in range(num_layers)
]
)
def forward(self, x, mask=None, intermediate_output=None):
if intermediate_output is not None:
if intermediate_output < 0:
intermediate_output = len(self.layers) + intermediate_output
intermediate = None
for i, l in enumerate(self.layers):
x = l(x, mask)
if i == intermediate_output:
intermediate = x.clone()
return x, intermediate
class CLIPEmbeddings(torch.nn.Module):
def __init__(self, embed_dim, vocab_size=49408, num_positions=77, dtype=None, device=None):
super().__init__()
self.token_embedding = torch.nn.Embedding(vocab_size, embed_dim, dtype=dtype, device=device)
self.position_embedding = torch.nn.Embedding(num_positions, embed_dim, dtype=dtype, device=device)
def forward(self, input_tokens):
return self.token_embedding(input_tokens) + self.position_embedding.weight
class CLIPTextModel_(torch.nn.Module):
def __init__(self, config_dict, dtype, device):
num_layers = config_dict["num_hidden_layers"]
embed_dim = config_dict["hidden_size"]
heads = config_dict["num_attention_heads"]
intermediate_size = config_dict["intermediate_size"]
intermediate_activation = config_dict["hidden_act"]
super().__init__()
self.embeddings = CLIPEmbeddings(embed_dim, dtype=torch.float32, device=device)
self.encoder = CLIPEncoder(
num_layers,
embed_dim,
heads,
intermediate_size,
intermediate_activation,
dtype,
device,
)
self.final_layer_norm = torch.nn.LayerNorm(embed_dim, dtype=dtype, device=device)
def forward(self, input_tokens, intermediate_output=None, final_layer_norm_intermediate=True):
x = self.embeddings(input_tokens)
causal_mask = torch.empty(x.shape[1], x.shape[1], dtype=x.dtype, device=x.device).fill_(float("-inf")).triu_(1)
x, i = self.encoder(x, mask=causal_mask, intermediate_output=intermediate_output)
x = self.final_layer_norm(x)
if i is not None and final_layer_norm_intermediate:
i = self.final_layer_norm(i)
pooled_output = x[
torch.arange(x.shape[0], device=x.device),
input_tokens.to(dtype=torch.int, device=x.device).argmax(dim=-1),
]
return x, i, pooled_output
class CLIPTextModel(torch.nn.Module):
def __init__(self, config_dict, dtype, device):
super().__init__()
self.num_layers = config_dict["num_hidden_layers"]
self.text_model = CLIPTextModel_(config_dict, dtype, device)
embed_dim = config_dict["hidden_size"]
self.text_projection = torch.nn.Linear(embed_dim, embed_dim, bias=False, dtype=dtype, device=device)
self.text_projection.weight.copy_(torch.eye(embed_dim))
self.dtype = dtype
def get_input_embeddings(self):
return self.text_model.embeddings.token_embedding
def set_input_embeddings(self, embeddings):
self.text_model.embeddings.token_embedding = embeddings
def forward(self, *args, **kwargs):
x = self.text_model(*args, **kwargs)
out = self.text_projection(x[2])
return (x[0], x[1], out, x[2])
def parse_parentheses(string):
result = []
current_item = ""
nesting_level = 0
for char in string:
if char == "(":
if nesting_level == 0:
if current_item:
result.append(current_item)
current_item = "("
else:
current_item = "("
else:
current_item += char
nesting_level += 1
elif char == ")":
nesting_level -= 1
if nesting_level == 0:
result.append(current_item + ")")
current_item = ""
else:
current_item += char
else:
current_item += char
if current_item:
result.append(current_item)
return result
def token_weights(string, current_weight):
a = parse_parentheses(string)
out = []
for x in a:
weight = current_weight
if len(x) >= 2 and x[-1] == ")" and x[0] == "(":
x = x[1:-1]
xx = x.rfind(":")
weight *= 1.1
if xx > 0:
try:
weight = float(x[xx + 1 :])
x = x[:xx]
except:
pass
out += token_weights(x, weight)
else:
out += [(x, current_weight)]
return out
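# --- Illustrative sketch, not part of this diff: what the weight-parsing helpers above produce.
# --- The prompt strings are hypothetical examples.
assert token_weights("a (red:1.4) ball", 1.0) == [("a ", 1.0), ("red", 1.4), (" ball", 1.0)]
# Parentheses without an explicit ":weight" multiply the current weight by 1.1:
assert token_weights("a (red) ball", 1.0) == [("a ", 1.0), ("red", 1.1), (" ball", 1.0)]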
def escape_important(text):
text = text.replace("\\)", "\0\1")
text = text.replace("\\(", "\0\2")
return text
def unescape_important(text):
text = text.replace("\0\1", ")")
text = text.replace("\0\2", "(")
return text
class SDTokenizer:
def __init__(
self,
max_length=77,
pad_with_end=True,
tokenizer=None,
has_start_token=True,
pad_to_max_length=True,
min_length=None,
extra_padding_token=None,
):
self.tokenizer = tokenizer
self.max_length = max_length
self.min_length = min_length
empty = self.tokenizer("")["input_ids"]
if has_start_token:
self.tokens_start = 1
self.start_token = empty[0]
self.end_token = empty[1]
else:
self.tokens_start = 0
self.start_token = None
self.end_token = empty[0]
self.pad_with_end = pad_with_end
self.pad_to_max_length = pad_to_max_length
self.extra_padding_token = extra_padding_token
vocab = self.tokenizer.get_vocab()
self.inv_vocab = {v: k for k, v in vocab.items()}
self.max_word_length = 8
def tokenize_with_weights(self, text: str, return_word_ids=False):
"""
Tokenize the text, with weight values - presume 1.0 for all and ignore other features here.
The details aren't relevant for a reference impl, and the weights themselves have a weak effect on SD3.
"""
if self.pad_with_end:
pad_token = self.end_token
else:
pad_token = 0
text = escape_important(text)
parsed_weights = token_weights(text, 1.0)
# tokenize words
tokens = []
for weighted_segment, weight in parsed_weights:
to_tokenize = unescape_important(weighted_segment).replace("\n", " ").split(" ")
to_tokenize = [x for x in to_tokenize if x != ""]
for word in to_tokenize:
# parse word
tokens.append([(t, weight) for t in self.tokenizer(word)["input_ids"][self.tokens_start : -1]])
# reshape token array to CLIP input size
batched_tokens = []
batch = []
if self.start_token is not None:
batch.append((self.start_token, 1.0, 0))
batched_tokens.append(batch)
for i, t_group in enumerate(tokens):
# determine if we're going to try and keep the tokens in a single batch
is_large = len(t_group) >= self.max_word_length
while len(t_group) > 0:
if len(t_group) + len(batch) > self.max_length - 1:
remaining_length = self.max_length - len(batch) - 1
# break word in two and add end token
if is_large:
batch.extend([(t, w, i + 1) for t, w in t_group[:remaining_length]])
batch.append((self.end_token, 1.0, 0))
t_group = t_group[remaining_length:]
# add end token and pad
else:
batch.append((self.end_token, 1.0, 0))
if self.pad_to_max_length:
batch.extend([(pad_token, 1.0, 0)] * (remaining_length))
# start new batch
batch = []
if self.start_token is not None:
batch.append((self.start_token, 1.0, 0))
batched_tokens.append(batch)
else:
batch.extend([(t, w, i + 1) for t, w in t_group])
t_group = []
# pad extra padding token first before getting to the end token
if self.extra_padding_token is not None:
batch.extend([(self.extra_padding_token, 1.0, 0)] * (self.min_length - len(batch) - 1))
# fill last batch
batch.append((self.end_token, 1.0, 0))
if self.pad_to_max_length:
batch.extend([(pad_token, 1.0, 0)] * (self.max_length - len(batch)))
if self.min_length is not None and len(batch) < self.min_length:
batch.extend([(pad_token, 1.0, 0)] * (self.min_length - len(batch)))
if not return_word_ids:
batched_tokens = [[(t, w) for t, w, _ in x] for x in batched_tokens]
return batched_tokens
def untokenize(self, token_weight_pair):
return list(map(lambda a: (a, self.inv_vocab[a[0]]), token_weight_pair))
class SDXLClipGTokenizer(SDTokenizer):
def __init__(self, tokenizer):
super().__init__(pad_with_end=False, tokenizer=tokenizer)
class SD3Tokenizer:
def __init__(self):
clip_tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
self.clip_l = SDTokenizer(tokenizer=clip_tokenizer)
self.clip_g = SDXLClipGTokenizer(clip_tokenizer)
self.t5xxl = T5XXLTokenizer()
def tokenize_with_weights(self, text: str):
out = {}
out["l"] = self.clip_l.tokenize_with_weights(text)
out["g"] = self.clip_g.tokenize_with_weights(text)
out["t5xxl"] = self.t5xxl.tokenize_with_weights(text[:226])
return out
class ClipTokenWeightEncoder:
def encode_token_weights(self, token_weight_pairs):
tokens = list(map(lambda a: a[0], token_weight_pairs[0]))
out, pooled = self([tokens])
if pooled is not None:
first_pooled = pooled[0:1].cpu()
else:
first_pooled = pooled
output = [out[0:1]]
return torch.cat(output, dim=-2).cpu(), first_pooled
class SDClipModel(torch.nn.Module, ClipTokenWeightEncoder):
"""Uses the CLIP transformer encoder for text (from huggingface)"""
LAYERS = ["last", "pooled", "hidden"]
def __init__(
self,
device="cpu",
max_length=77,
layer="last",
layer_idx=None,
textmodel_json_config=None,
dtype=None,
model_class=CLIPTextModel,
special_tokens={"start": 49406, "end": 49407, "pad": 49407},
layer_norm_hidden_state=True,
return_projected_pooled=True,
):
super().__init__()
assert layer in self.LAYERS
self.transformer = model_class(textmodel_json_config, dtype, device)
self.num_layers = self.transformer.num_layers
self.max_length = max_length
self.transformer = self.transformer.eval()
for param in self.parameters():
param.requires_grad = False
self.layer = layer
self.layer_idx = None
self.special_tokens = special_tokens
self.logit_scale = torch.nn.Parameter(torch.tensor(4.6055))
self.layer_norm_hidden_state = layer_norm_hidden_state
self.return_projected_pooled = return_projected_pooled
if layer == "hidden":
assert layer_idx is not None
assert abs(layer_idx) < self.num_layers
self.set_clip_options({"layer": layer_idx})
self.options_default = (
self.layer,
self.layer_idx,
self.return_projected_pooled,
)
def set_clip_options(self, options):
layer_idx = options.get("layer", self.layer_idx)
self.return_projected_pooled = options.get("projected_pooled", self.return_projected_pooled)
if layer_idx is None or abs(layer_idx) > self.num_layers:
self.layer = "last"
else:
self.layer = "hidden"
self.layer_idx = layer_idx
def forward(self, tokens):
backup_embeds = self.transformer.get_input_embeddings()
device = backup_embeds.weight.device
tokens = torch.LongTensor(tokens).to(device)
outputs = self.transformer(
tokens,
intermediate_output=self.layer_idx,
final_layer_norm_intermediate=self.layer_norm_hidden_state,
)
self.transformer.set_input_embeddings(backup_embeds)
if self.layer == "last":
z = outputs[0]
else:
z = outputs[1]
pooled_output = None
if len(outputs) >= 3:
if not self.return_projected_pooled and len(outputs) >= 4 and outputs[3] is not None:
pooled_output = outputs[3].float()
elif outputs[2] is not None:
pooled_output = outputs[2].float()
return z.float(), pooled_output
class SDXLClipG(SDClipModel):
"""Wraps the CLIP-G model into the SD-CLIP-Model interface"""
def __init__(self, config, device="cpu", layer="penultimate", layer_idx=None, dtype=None):
if layer == "penultimate":
layer = "hidden"
layer_idx = -2
super().__init__(
device=device,
layer=layer,
layer_idx=layer_idx,
textmodel_json_config=config,
dtype=dtype,
special_tokens={"start": 49406, "end": 49407, "pad": 0},
layer_norm_hidden_state=False,
)
class T5XXLModel(SDClipModel):
"""Wraps the T5-XXL model into the SD-CLIP-Model interface for convenience"""
def __init__(self, config, device="cpu", layer="last", layer_idx=None, dtype=None):
super().__init__(
device=device,
layer=layer,
layer_idx=layer_idx,
textmodel_json_config=config,
dtype=dtype,
special_tokens={"end": 1, "pad": 0},
model_class=T5,
)
#################################################################################################
### T5 implementation, for the T5-XXL text encoder portion, largely pulled from upstream impl
#################################################################################################
class T5XXLTokenizer(SDTokenizer):
"""Wraps the T5 Tokenizer from HF into the SDTokenizer interface"""
def __init__(self):
super().__init__(
pad_with_end=False,
tokenizer=T5TokenizerFast.from_pretrained("google/t5-v1_1-xxl"),
has_start_token=False,
pad_to_max_length=False,
max_length=99999999,
min_length=77,
)
class T5LayerNorm(torch.nn.Module):
def __init__(self, hidden_size, eps=1e-6, dtype=None, device=None):
super().__init__()
self.weight = torch.nn.Parameter(torch.ones(hidden_size, dtype=dtype, device=device))
self.variance_epsilon = eps
def forward(self, x):
variance = x.pow(2).mean(-1, keepdim=True)
x = x * torch.rsqrt(variance + self.variance_epsilon)
return self.weight.to(device=x.device, dtype=x.dtype) * x
class T5DenseGatedActDense(torch.nn.Module):
def __init__(self, model_dim, ff_dim, dtype, device):
super().__init__()
self.wi_0 = torch.nn.Linear(model_dim, ff_dim, bias=False, dtype=dtype, device=device)
self.wi_1 = torch.nn.Linear(model_dim, ff_dim, bias=False, dtype=dtype, device=device)
self.wo = torch.nn.Linear(ff_dim, model_dim, bias=False, dtype=dtype, device=device)
def forward(self, x):
hidden_gelu = torch.nn.functional.gelu(self.wi_0(x), approximate="tanh")
hidden_linear = self.wi_1(x)
x = hidden_gelu * hidden_linear
x = self.wo(x)
return x
class T5LayerFF(torch.nn.Module):
def __init__(self, model_dim, ff_dim, dtype, device):
super().__init__()
self.DenseReluDense = T5DenseGatedActDense(model_dim, ff_dim, dtype, device)
self.layer_norm = T5LayerNorm(model_dim, dtype=dtype, device=device)
def forward(self, x):
forwarded_states = self.layer_norm(x)
forwarded_states = self.DenseReluDense(forwarded_states)
x += forwarded_states
return x
class T5Attention(torch.nn.Module):
def __init__(self, model_dim, inner_dim, num_heads, relative_attention_bias, dtype, device):
super().__init__()
# Mesh TensorFlow initialization to avoid scaling before softmax
self.q = torch.nn.Linear(model_dim, inner_dim, bias=False, dtype=dtype, device=device)
self.k = torch.nn.Linear(model_dim, inner_dim, bias=False, dtype=dtype, device=device)
self.v = torch.nn.Linear(model_dim, inner_dim, bias=False, dtype=dtype, device=device)
self.o = torch.nn.Linear(inner_dim, model_dim, bias=False, dtype=dtype, device=device)
self.num_heads = num_heads
self.relative_attention_bias = None
if relative_attention_bias:
self.relative_attention_num_buckets = 32
self.relative_attention_max_distance = 128
self.relative_attention_bias = torch.nn.Embedding(
self.relative_attention_num_buckets, self.num_heads, device=device
)
@staticmethod
def _relative_position_bucket(relative_position, bidirectional=True, num_buckets=32, max_distance=128):
"""
Adapted from Mesh Tensorflow:
https://github.com/tensorflow/mesh/blob/0cb87fe07da627bf0b7e60475d59f95ed6b5be3d/mesh_tensorflow/transformer/transformer_layers.py#L593
Translate relative position to a bucket number for relative attention. The relative position is defined as
memory_position - query_position, i.e. the distance in tokens from the attending position to the attended-to
position. If bidirectional=False, then positive relative positions are invalid. We use smaller buckets for
small absolute relative_position and larger buckets for larger absolute relative_positions. All relative
positions >=max_distance map to the same bucket. All relative positions <=-max_distance map to the same bucket.
This should allow for more graceful generalization to longer sequences than the model has been trained on
Args:
relative_position: an int32 Tensor
bidirectional: a boolean - whether the attention is bidirectional
num_buckets: an integer
max_distance: an integer
Returns:
a Tensor with the same shape as relative_position, containing int32 values in the range [0, num_buckets)
"""
relative_buckets = 0
if bidirectional:
num_buckets //= 2
relative_buckets += (relative_position > 0).to(torch.long) * num_buckets
relative_position = torch.abs(relative_position)
else:
relative_position = -torch.min(relative_position, torch.zeros_like(relative_position))
# now relative_position is in the range [0, inf)
# half of the buckets are for exact increments in positions
max_exact = num_buckets // 2
is_small = relative_position < max_exact
# The other half of the buckets are for logarithmically bigger bins in positions up to max_distance
relative_position_if_large = max_exact + (
torch.log(relative_position.float() / max_exact)
/ math.log(max_distance / max_exact)
* (num_buckets - max_exact)
).to(torch.long)
relative_position_if_large = torch.min(
relative_position_if_large,
torch.full_like(relative_position_if_large, num_buckets - 1),
)
relative_buckets += torch.where(is_small, relative_position, relative_position_if_large)
return relative_buckets
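# --- Illustrative note, not part of this diff: with the defaults above (bidirectional=True,
# --- num_buckets=32 -> 16 per direction, max_distance=128), a relative position of +3 lands in
# --- an "exact" bucket (16 + 3 = 19), distances of 8 or more fall into logarithmically spaced
# --- buckets, and everything at or beyond 128 shares the last bucket for its direction.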
def compute_bias(self, query_length, key_length, device):
"""Compute binned relative position bias"""
context_position = torch.arange(query_length, dtype=torch.long, device=device)[:, None]
memory_position = torch.arange(key_length, dtype=torch.long, device=device)[None, :]
relative_position = memory_position - context_position # shape (query_length, key_length)
relative_position_bucket = self._relative_position_bucket(
relative_position, # shape (query_length, key_length)
bidirectional=True,
num_buckets=self.relative_attention_num_buckets,
max_distance=self.relative_attention_max_distance,
)
values = self.relative_attention_bias(relative_position_bucket) # shape (query_length, key_length, num_heads)
values = values.permute([2, 0, 1]).unsqueeze(0) # shape (1, num_heads, query_length, key_length)
return values
def forward(self, x, past_bias=None):
q = self.q(x)
k = self.k(x)
v = self.v(x)
if self.relative_attention_bias is not None:
past_bias = self.compute_bias(x.shape[1], x.shape[1], x.device)
if past_bias is not None:
mask = past_bias
out = attention(q, k * ((k.shape[-1] / self.num_heads) ** 0.5), v, self.num_heads, mask)
return self.o(out), past_bias
class T5LayerSelfAttention(torch.nn.Module):
def __init__(
self,
model_dim,
inner_dim,
ff_dim,
num_heads,
relative_attention_bias,
dtype,
device,
):
super().__init__()
self.SelfAttention = T5Attention(model_dim, inner_dim, num_heads, relative_attention_bias, dtype, device)
self.layer_norm = T5LayerNorm(model_dim, dtype=dtype, device=device)
def forward(self, x, past_bias=None):
output, past_bias = self.SelfAttention(self.layer_norm(x), past_bias=past_bias)
x += output
return x, past_bias
class T5Block(torch.nn.Module):
def __init__(
self,
model_dim,
inner_dim,
ff_dim,
num_heads,
relative_attention_bias,
dtype,
device,
):
super().__init__()
self.layer = torch.nn.ModuleList()
self.layer.append(
T5LayerSelfAttention(
model_dim,
inner_dim,
ff_dim,
num_heads,
relative_attention_bias,
dtype,
device,
)
)
self.layer.append(T5LayerFF(model_dim, ff_dim, dtype, device))
def forward(self, x, past_bias=None):
x, past_bias = self.layer[0](x, past_bias)
x = self.layer[-1](x)
return x, past_bias
class T5Stack(torch.nn.Module):
def __init__(
self,
num_layers,
model_dim,
inner_dim,
ff_dim,
num_heads,
vocab_size,
dtype,
device,
):
super().__init__()
self.embed_tokens = torch.nn.Embedding(vocab_size, model_dim, device=device)
self.block = torch.nn.ModuleList(
[
T5Block(
model_dim,
inner_dim,
ff_dim,
num_heads,
relative_attention_bias=(i == 0),
dtype=dtype,
device=device,
)
for i in range(num_layers)
]
)
self.final_layer_norm = T5LayerNorm(model_dim, dtype=dtype, device=device)
def forward(self, input_ids, intermediate_output=None, final_layer_norm_intermediate=True):
intermediate = None
x = self.embed_tokens(input_ids)
past_bias = None
for i, l in enumerate(self.block):
x, past_bias = l(x, past_bias)
if i == intermediate_output:
intermediate = x.clone()
x = self.final_layer_norm(x)
if intermediate is not None and final_layer_norm_intermediate:
intermediate = self.final_layer_norm(intermediate)
return x, intermediate
class T5(torch.nn.Module):
def __init__(self, config_dict, dtype, device):
super().__init__()
self.num_layers = config_dict["num_layers"]
self.encoder = T5Stack(
self.num_layers,
config_dict["d_model"],
config_dict["d_model"],
config_dict["d_ff"],
config_dict["num_heads"],
config_dict["vocab_size"],
dtype,
device,
)
self.dtype = dtype
def get_input_embeddings(self):
return self.encoder.embed_tokens
def set_input_embeddings(self, embeddings):
self.encoder.embed_tokens = embeddings
def forward(self, *args, **kwargs):
return self.encoder(*args, **kwargs)

View File

@@ -1,609 +0,0 @@
# This file was originally copied from:
# https://github.com/Stability-AI/sd3.5/blob/19bf11c4e1e37324c5aa5a61f010d4127848a09c/sd3_impls.py
### Impls of the SD3 core diffusion model and VAE
import math
import re
import einops
import torch
from PIL import Image
from tqdm import tqdm
from invokeai.backend.sd3.mmditx import MMDiTX
#################################################################################################
### MMDiT Model Wrapping
#################################################################################################
class ModelSamplingDiscreteFlow(torch.nn.Module):
"""Helper for sampler scheduling (ie timestep/sigma calculations) for Discrete Flow models"""
def __init__(self, shift: float = 1.0):
super().__init__()
self.shift = shift
timesteps = 1000
ts = self.sigma(torch.arange(1, timesteps + 1, 1))
self.register_buffer("sigmas", ts)
@property
def sigma_min(self):
return self.sigmas[0]
@property
def sigma_max(self):
return self.sigmas[-1]
def timestep(self, sigma: torch.Tensor) -> torch.Tensor:
return sigma * 1000
def sigma(self, timestep: torch.Tensor):
timestep = timestep / 1000.0
if self.shift == 1.0:
return timestep
return self.shift * timestep / (1 + (self.shift - 1) * timestep)
def calculate_denoised(
self, sigma: torch.Tensor, model_output: torch.Tensor, model_input: torch.Tensor
) -> torch.Tensor:
sigma = sigma.view(sigma.shape[:1] + (1,) * (model_output.ndim - 1))
return model_input - model_output * sigma
def noise_scaling(self, sigma, noise, latent_image, max_denoise=False):
return sigma * noise + (1.0 - sigma) * latent_image
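# Illustrative note (not in the original file): with the shift=3.0 used by the released SD3.5
# checkpoints, sigma(t) = shift * t / (1 + (shift - 1) * t) for t in [0, 1]. For example,
# timestep 500 gives t = 0.5 and sigma = 3 * 0.5 / (1 + 2 * 0.5) = 0.75, so the shifted
# schedule spends more of its steps at higher noise levels than a linear schedule would.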
class BaseModel(torch.nn.Module):
"""Wrapper around the core MM-DiT model"""
def __init__(
self,
shift=1.0,
device=None,
dtype=torch.float32,
file=None,
prefix="",
verbose=False,
):
super().__init__()
# Important configuration values can be quickly determined by checking shapes in the source file
# Some of these will vary between models (eg 2B vs 8B primarily differ in their depth, but also other details change)
patch_size = file.get_tensor(f"{prefix}x_embedder.proj.weight").shape[2]
depth = file.get_tensor(f"{prefix}x_embedder.proj.weight").shape[0] // 64
num_patches = file.get_tensor(f"{prefix}pos_embed").shape[1]
pos_embed_max_size = round(math.sqrt(num_patches))
adm_in_channels = file.get_tensor(f"{prefix}y_embedder.mlp.0.weight").shape[1]
context_shape = file.get_tensor(f"{prefix}context_embedder.weight").shape
qk_norm = "rms" if f"{prefix}joint_blocks.0.context_block.attn.ln_k.weight" in file.keys() else None
x_block_self_attn_layers = sorted(
[
int(key.split(".x_block.attn2.ln_k.weight")[0].split(".")[-1])
for key in list(filter(re.compile(".*.x_block.attn2.ln_k.weight").match, file.keys()))
]
)
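# Illustrative example (values quoted from memory, verify against your checkpoint): for
# sd3_medium, x_embedder.proj.weight has shape [1536, 16, 2, 2], giving patch_size = 2 and
# depth = 1536 // 64 = 24; the larger 8B-class models mostly differ by a greater depth.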
context_embedder_config = {
"target": "torch.nn.Linear",
"params": {
"in_features": context_shape[1],
"out_features": context_shape[0],
},
}
self.diffusion_model = MMDiTX(
input_size=None,
pos_embed_scaling_factor=None,
pos_embed_offset=None,
pos_embed_max_size=pos_embed_max_size,
patch_size=patch_size,
in_channels=16,
depth=depth,
num_patches=num_patches,
adm_in_channels=adm_in_channels,
context_embedder_config=context_embedder_config,
qk_norm=qk_norm,
x_block_self_attn_layers=x_block_self_attn_layers,
device=device,
dtype=dtype,
verbose=verbose,
)
self.model_sampling = ModelSamplingDiscreteFlow(shift=shift)
def apply_model(
self, x: torch.Tensor, sigma: float, c_crossattn: torch.Tensor | None = None, y: torch.Tensor | None = None
):
dtype = self.get_dtype()
timestep = self.model_sampling.timestep(sigma).float()
model_output = self.diffusion_model(x.to(dtype), timestep, context=c_crossattn.to(dtype), y=y.to(dtype)).float()
return self.model_sampling.calculate_denoised(sigma, model_output, x)
def forward(self, *args, **kwargs):
return self.apply_model(*args, **kwargs)
def get_dtype(self):
return self.diffusion_model.dtype
class CFGDenoiser(torch.nn.Module):
"""Helper for applying CFG Scaling to diffusion outputs"""
def __init__(self, model):
super().__init__()
self.model = model
def forward(self, x, timestep, cond, uncond, cond_scale):
# Run cond and uncond in a batch together
batched = self.model.apply_model(
torch.cat([x, x]),
torch.cat([timestep, timestep]),
c_crossattn=torch.cat([cond["c_crossattn"], uncond["c_crossattn"]]),
y=torch.cat([cond["y"], uncond["y"]]),
)
# Then split and apply CFG Scaling
pos_out, neg_out = batched.chunk(2)
scaled = neg_out + (pos_out - neg_out) * cond_scale
return scaled
class SD3LatentFormat:
"""Latents are slightly shifted from center - this class must be called after VAE Decode to correct for the shift"""
def __init__(self):
self.scale_factor = 1.5305
self.shift_factor = 0.0609
def process_in(self, latent):
return (latent - self.shift_factor) * self.scale_factor
def process_out(self, latent):
return (latent / self.scale_factor) + self.shift_factor
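# Illustrative note (not in the original file): process_in and process_out are exact inverses,
# i.e. process_out(process_in(z)) == z, so they can be applied after VAE encode and before
# VAE decode respectively without altering a plain encode/decode round trip.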
def decode_latent_to_preview(self, x0):
"""Quick RGB approximate preview of sd3 latents"""
factors = torch.tensor(
[
[-0.0645, 0.0177, 0.1052],
[0.0028, 0.0312, 0.0650],
[0.1848, 0.0762, 0.0360],
[0.0944, 0.0360, 0.0889],
[0.0897, 0.0506, -0.0364],
[-0.0020, 0.1203, 0.0284],
[0.0855, 0.0118, 0.0283],
[-0.0539, 0.0658, 0.1047],
[-0.0057, 0.0116, 0.0700],
[-0.0412, 0.0281, -0.0039],
[0.1106, 0.1171, 0.1220],
[-0.0248, 0.0682, -0.0481],
[0.0815, 0.0846, 0.1207],
[-0.0120, -0.0055, -0.0867],
[-0.0749, -0.0634, -0.0456],
[-0.1418, -0.1457, -0.1259],
],
device="cpu",
)
latent_image = x0[0].permute(1, 2, 0).cpu() @ factors
latents_ubyte = (
((latent_image + 1) / 2)
.clamp(0, 1) # change scale from -1..1 to 0..1
.mul(0xFF) # to 0..255
.byte()
).cpu()
return Image.fromarray(latents_ubyte.numpy())
#################################################################################################
### Samplers
#################################################################################################
def append_dims(x, target_dims):
"""Appends dimensions to the end of a tensor until it has target_dims dimensions."""
dims_to_append = target_dims - x.ndim
return x[(...,) + (None,) * dims_to_append]
def to_d(x, sigma, denoised):
"""Converts a denoiser output to a Karras ODE derivative."""
return (x - denoised) / append_dims(sigma, x.ndim)
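# Illustrative note (not in the original file): append_dims(sigma, 4) reshapes a per-batch
# sigma of shape [B] to [B, 1, 1, 1] so it broadcasts against [B, C, H, W] latents, and to_d
# then returns the Karras ODE derivative dx/dsigma = (x - denoised) / sigma.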
@torch.no_grad()
@torch.autocast("cuda", dtype=torch.float16)
def sample_euler(model, x, sigmas, extra_args=None):
"""Implements Algorithm 2 (Euler steps) from Karras et al. (2022)."""
extra_args = {} if extra_args is None else extra_args
s_in = x.new_ones([x.shape[0]])
for i in tqdm(range(len(sigmas) - 1)):
sigma_hat = sigmas[i]
denoised = model(x, sigma_hat * s_in, **extra_args)
d = to_d(x, sigma_hat, denoised)
dt = sigmas[i + 1] - sigma_hat
# Euler method
x = x + d * dt
return x
@torch.no_grad()
@torch.autocast("cuda", dtype=torch.float16)
def sample_dpmpp_2m(model, x, sigmas, extra_args=None):
"""DPM-Solver++(2M)."""
extra_args = {} if extra_args is None else extra_args
s_in = x.new_ones([x.shape[0]])
sigma_fn = lambda t: t.neg().exp()
t_fn = lambda sigma: sigma.log().neg()
old_denoised = None
for i in tqdm(range(len(sigmas) - 1)):
denoised = model(x, sigmas[i] * s_in, **extra_args)
t, t_next = t_fn(sigmas[i]), t_fn(sigmas[i + 1])
h = t_next - t
if old_denoised is None or sigmas[i + 1] == 0:
x = (sigma_fn(t_next) / sigma_fn(t)) * x - (-h).expm1() * denoised
else:
h_last = t - t_fn(sigmas[i - 1])
r = h_last / h
denoised_d = (1 + 1 / (2 * r)) * denoised - (1 / (2 * r)) * old_denoised
x = (sigma_fn(t_next) / sigma_fn(t)) * x - (-h).expm1() * denoised_d
old_denoised = denoised
return x
#################################################################################################
### VAE
#################################################################################################
def Normalize(in_channels, num_groups=32, dtype=torch.float32, device=None):
return torch.nn.GroupNorm(
num_groups=num_groups,
num_channels=in_channels,
eps=1e-6,
affine=True,
dtype=dtype,
device=device,
)
class ResnetBlock(torch.nn.Module):
def __init__(self, *, in_channels, out_channels=None, dtype=torch.float32, device=None):
super().__init__()
self.in_channels = in_channels
out_channels = in_channels if out_channels is None else out_channels
self.out_channels = out_channels
self.norm1 = Normalize(in_channels, dtype=dtype, device=device)
self.conv1 = torch.nn.Conv2d(
in_channels,
out_channels,
kernel_size=3,
stride=1,
padding=1,
dtype=dtype,
device=device,
)
self.norm2 = Normalize(out_channels, dtype=dtype, device=device)
self.conv2 = torch.nn.Conv2d(
out_channels,
out_channels,
kernel_size=3,
stride=1,
padding=1,
dtype=dtype,
device=device,
)
if self.in_channels != self.out_channels:
self.nin_shortcut = torch.nn.Conv2d(
in_channels,
out_channels,
kernel_size=1,
stride=1,
padding=0,
dtype=dtype,
device=device,
)
else:
self.nin_shortcut = None
self.swish = torch.nn.SiLU(inplace=True)
def forward(self, x):
hidden = x
hidden = self.norm1(hidden)
hidden = self.swish(hidden)
hidden = self.conv1(hidden)
hidden = self.norm2(hidden)
hidden = self.swish(hidden)
hidden = self.conv2(hidden)
if self.in_channels != self.out_channels:
x = self.nin_shortcut(x)
return x + hidden
class AttnBlock(torch.nn.Module):
def __init__(self, in_channels, dtype=torch.float32, device=None):
super().__init__()
self.norm = Normalize(in_channels, dtype=dtype, device=device)
self.q = torch.nn.Conv2d(
in_channels,
in_channels,
kernel_size=1,
stride=1,
padding=0,
dtype=dtype,
device=device,
)
self.k = torch.nn.Conv2d(
in_channels,
in_channels,
kernel_size=1,
stride=1,
padding=0,
dtype=dtype,
device=device,
)
self.v = torch.nn.Conv2d(
in_channels,
in_channels,
kernel_size=1,
stride=1,
padding=0,
dtype=dtype,
device=device,
)
self.proj_out = torch.nn.Conv2d(
in_channels,
in_channels,
kernel_size=1,
stride=1,
padding=0,
dtype=dtype,
device=device,
)
def forward(self, x):
hidden = self.norm(x)
q = self.q(hidden)
k = self.k(hidden)
v = self.v(hidden)
b, c, h, w = q.shape
q, k, v = map(
lambda x: einops.rearrange(x, "b c h w -> b 1 (h w) c").contiguous(),
(q, k, v),
)
hidden = torch.nn.functional.scaled_dot_product_attention(q, k, v) # scale is dim ** -0.5 by default
hidden = einops.rearrange(hidden, "b 1 (h w) c -> b c h w", h=h, w=w, c=c, b=b)
hidden = self.proj_out(hidden)
return x + hidden
class Downsample(torch.nn.Module):
def __init__(self, in_channels, dtype=torch.float32, device=None):
super().__init__()
self.conv = torch.nn.Conv2d(
in_channels,
in_channels,
kernel_size=3,
stride=2,
padding=0,
dtype=dtype,
device=device,
)
def forward(self, x):
pad = (0, 1, 0, 1)
x = torch.nn.functional.pad(x, pad, mode="constant", value=0)
x = self.conv(x)
return x
class Upsample(torch.nn.Module):
def __init__(self, in_channels, dtype=torch.float32, device=None):
super().__init__()
self.conv = torch.nn.Conv2d(
in_channels,
in_channels,
kernel_size=3,
stride=1,
padding=1,
dtype=dtype,
device=device,
)
def forward(self, x):
x = torch.nn.functional.interpolate(x, scale_factor=2.0, mode="nearest")
x = self.conv(x)
return x
class VAEEncoder(torch.nn.Module):
def __init__(
self,
ch=128,
ch_mult=(1, 2, 4, 4),
num_res_blocks=2,
in_channels=3,
z_channels=16,
dtype=torch.float32,
device=None,
):
super().__init__()
self.num_resolutions = len(ch_mult)
self.num_res_blocks = num_res_blocks
# downsampling
self.conv_in = torch.nn.Conv2d(
in_channels,
ch,
kernel_size=3,
stride=1,
padding=1,
dtype=dtype,
device=device,
)
in_ch_mult = (1,) + tuple(ch_mult)
self.in_ch_mult = in_ch_mult
self.down = torch.nn.ModuleList()
for i_level in range(self.num_resolutions):
block = torch.nn.ModuleList()
attn = torch.nn.ModuleList()
block_in = ch * in_ch_mult[i_level]
block_out = ch * ch_mult[i_level]
for i_block in range(num_res_blocks):
block.append(
ResnetBlock(
in_channels=block_in,
out_channels=block_out,
dtype=dtype,
device=device,
)
)
block_in = block_out
down = torch.nn.Module()
down.block = block
down.attn = attn
if i_level != self.num_resolutions - 1:
down.downsample = Downsample(block_in, dtype=dtype, device=device)
self.down.append(down)
# middle
self.mid = torch.nn.Module()
self.mid.block_1 = ResnetBlock(in_channels=block_in, out_channels=block_in, dtype=dtype, device=device)
self.mid.attn_1 = AttnBlock(block_in, dtype=dtype, device=device)
self.mid.block_2 = ResnetBlock(in_channels=block_in, out_channels=block_in, dtype=dtype, device=device)
# end
self.norm_out = Normalize(block_in, dtype=dtype, device=device)
self.conv_out = torch.nn.Conv2d(
block_in,
2 * z_channels,
kernel_size=3,
stride=1,
padding=1,
dtype=dtype,
device=device,
)
self.swish = torch.nn.SiLU(inplace=True)
def forward(self, x):
# downsampling
hs = [self.conv_in(x)]
for i_level in range(self.num_resolutions):
for i_block in range(self.num_res_blocks):
h = self.down[i_level].block[i_block](hs[-1])
hs.append(h)
if i_level != self.num_resolutions - 1:
hs.append(self.down[i_level].downsample(hs[-1]))
# middle
h = hs[-1]
h = self.mid.block_1(h)
h = self.mid.attn_1(h)
h = self.mid.block_2(h)
# end
h = self.norm_out(h)
h = self.swish(h)
h = self.conv_out(h)
return h
class VAEDecoder(torch.nn.Module):
def __init__(
self,
ch=128,
out_ch=3,
ch_mult=(1, 2, 4, 4),
num_res_blocks=2,
resolution=256,
z_channels=16,
dtype=torch.float32,
device=None,
):
super().__init__()
self.num_resolutions = len(ch_mult)
self.num_res_blocks = num_res_blocks
block_in = ch * ch_mult[self.num_resolutions - 1]
curr_res = resolution // 2 ** (self.num_resolutions - 1)
# z to block_in
self.conv_in = torch.nn.Conv2d(
z_channels,
block_in,
kernel_size=3,
stride=1,
padding=1,
dtype=dtype,
device=device,
)
# middle
self.mid = torch.nn.Module()
self.mid.block_1 = ResnetBlock(in_channels=block_in, out_channels=block_in, dtype=dtype, device=device)
self.mid.attn_1 = AttnBlock(block_in, dtype=dtype, device=device)
self.mid.block_2 = ResnetBlock(in_channels=block_in, out_channels=block_in, dtype=dtype, device=device)
# upsampling
self.up = torch.nn.ModuleList()
for i_level in reversed(range(self.num_resolutions)):
block = torch.nn.ModuleList()
block_out = ch * ch_mult[i_level]
for i_block in range(self.num_res_blocks + 1):
block.append(
ResnetBlock(
in_channels=block_in,
out_channels=block_out,
dtype=dtype,
device=device,
)
)
block_in = block_out
up = torch.nn.Module()
up.block = block
if i_level != 0:
up.upsample = Upsample(block_in, dtype=dtype, device=device)
curr_res = curr_res * 2
self.up.insert(0, up) # prepend to get consistent order
# end
self.norm_out = Normalize(block_in, dtype=dtype, device=device)
self.conv_out = torch.nn.Conv2d(
block_in,
out_ch,
kernel_size=3,
stride=1,
padding=1,
dtype=dtype,
device=device,
)
self.swish = torch.nn.SiLU(inplace=True)
def forward(self, z):
# z to block_in
hidden = self.conv_in(z)
# middle
hidden = self.mid.block_1(hidden)
hidden = self.mid.attn_1(hidden)
hidden = self.mid.block_2(hidden)
# upsampling
for i_level in reversed(range(self.num_resolutions)):
for i_block in range(self.num_res_blocks + 1):
hidden = self.up[i_level].block[i_block](hidden)
if i_level != 0:
hidden = self.up[i_level].upsample(hidden)
# end
hidden = self.norm_out(hidden)
hidden = self.swish(hidden)
hidden = self.conv_out(hidden)
return hidden
class SDVAE(torch.nn.Module):
def __init__(self, dtype=torch.float32, device=None):
super().__init__()
self.encoder = VAEEncoder(dtype=dtype, device=device)
self.decoder = VAEDecoder(dtype=dtype, device=device)
@torch.autocast("cuda", dtype=torch.float16)
def decode(self, latent):
return self.decoder(latent)
@torch.autocast("cuda", dtype=torch.float16)
def encode(self, image):
hidden = self.encoder(image)
mean, logvar = torch.chunk(hidden, 2, dim=1)
logvar = torch.clamp(logvar, -30.0, 20.0)
std = torch.exp(0.5 * logvar)
return mean + std * torch.randn_like(mean)

View File

@@ -1,426 +0,0 @@
# This file was originally copied from:
# https://github.com/Stability-AI/sd3.5/blob/19bf11c4e1e37324c5aa5a61f010d4127848a09c/sd3_infer.py
# NOTE: Must have folder `models` with the following files:
# - `clip_g.safetensors` (openclip bigG, same as SDXL)
# - `clip_l.safetensors` (OpenAI CLIP-L, same as SDXL)
# - `t5xxl.safetensors` (google T5-v1.1-XXL)
# - `sd3_medium.safetensors` (or whichever main MMDiT model file)
# Optionally, the folder may also contain:
# - `sd3_vae.safetensors` (holds the VAE separately if needed)
import datetime
import math
import os
import fire
import numpy as np
import sd3_impls
import torch
from other_impls import SD3Tokenizer, SDClipModel, SDXLClipG, T5XXLModel
from PIL import Image
from safetensors import safe_open
from sd3_impls import SDVAE, BaseModel, CFGDenoiser, SD3LatentFormat
from tqdm import tqdm
#################################################################################################
### Wrappers for model parts
#################################################################################################
def load_into(f, model, prefix, device, dtype=None):
"""Just a debugging-friendly hack to apply the weights in a safetensors file to the pytorch module."""
for key in f.keys():
if key.startswith(prefix) and not key.startswith("loss."):
path = key[len(prefix) :].split(".")
obj = model
for p in path:
if isinstance(obj, list):
obj = obj[int(p)]
else:
obj = getattr(obj, p, None)
if obj is None:
print(f"Skipping key '{key}' in safetensors file as '{p}' does not exist in python model")
break
if obj is None:
continue
try:
tensor = f.get_tensor(key).to(device=device)
if dtype is not None:
tensor = tensor.to(dtype=dtype)
obj.requires_grad_(False)
obj.set_(tensor)
except Exception as e:
print(f"Failed to load key '{key}' in safetensors file: {e}")
raise e
CLIPG_CONFIG = {
"hidden_act": "gelu",
"hidden_size": 1280,
"intermediate_size": 5120,
"num_attention_heads": 20,
"num_hidden_layers": 32,
}
class ClipG:
def __init__(self):
with safe_open("models/clip_g.safetensors", framework="pt", device="cpu") as f:
self.model = SDXLClipG(CLIPG_CONFIG, device="cpu", dtype=torch.float32)
load_into(f, self.model.transformer, "", "cpu", torch.float32)
CLIPL_CONFIG = {
"hidden_act": "quick_gelu",
"hidden_size": 768,
"intermediate_size": 3072,
"num_attention_heads": 12,
"num_hidden_layers": 12,
}
class ClipL:
def __init__(self):
with safe_open("models/clip_l.safetensors", framework="pt", device="cpu") as f:
self.model = SDClipModel(
layer="hidden",
layer_idx=-2,
device="cpu",
dtype=torch.float32,
layer_norm_hidden_state=False,
return_projected_pooled=False,
textmodel_json_config=CLIPL_CONFIG,
)
load_into(f, self.model.transformer, "", "cpu", torch.float32)
T5_CONFIG = {
"d_ff": 10240,
"d_model": 4096,
"num_heads": 64,
"num_layers": 24,
"vocab_size": 32128,
}
class T5XXL:
def __init__(self):
with safe_open("models/t5xxl.safetensors", framework="pt", device="cpu") as f:
self.model = T5XXLModel(T5_CONFIG, device="cpu", dtype=torch.float32)
load_into(f, self.model.transformer, "", "cpu", torch.float32)
class SD3:
def __init__(self, model, shift, verbose=False):
with safe_open(model, framework="pt", device="cpu") as f:
self.model = BaseModel(
shift=shift,
file=f,
prefix="model.diffusion_model.",
device="cpu",
dtype=torch.float16,
verbose=verbose,
).eval()
load_into(f, self.model, "model.", "cpu", torch.float16)
class VAE:
def __init__(self, model):
with safe_open(model, framework="pt", device="cpu") as f:
self.model = SDVAE(device="cpu", dtype=torch.float16).eval().cpu()
prefix = ""
if any(k.startswith("first_stage_model.") for k in f.keys()):
prefix = "first_stage_model."
load_into(f, self.model, prefix, "cpu", torch.float16)
#################################################################################################
### Main inference logic
#################################################################################################
# Note: Sigma shift value, publicly released models use 3.0
SHIFT = 3.0
# Naturally, adjust to the width/height of the model you have
WIDTH = 1024
HEIGHT = 1024
# Pick your prompt
PROMPT = "a photo of a cat"
# Most models prefer the range of 4-5, but still work well around 7
CFG_SCALE = 4.5
# Different models want different step counts, but most are good at 50, though that's slow to run
# sd3_medium is quite decent at 28 steps
STEPS = 40
# Seed
SEED = 23
# SEEDTYPE = "fixed"
SEEDTYPE = "rand"
# SEEDTYPE = "roll"
# Actual model file path
# MODEL = "models/sd3_medium.safetensors"
# MODEL = "models/sd3.5_large_turbo.safetensors"
MODEL = "models/sd3.5_large.safetensors"
# VAE model file path, or set None to use the same model file
VAEFile = None # "models/sd3_vae.safetensors"
# Optional init image file path
INIT_IMAGE = None
# If init_image is given, this is the fraction of denoising steps to run (1.0 = full denoise, 0.0 = no denoise at all)
DENOISE = 0.6
# Output file path
OUTDIR = "outputs"
# SAMPLER
# SAMPLER = "euler"
SAMPLER = "dpmpp_2m"
class SD3Inferencer:
def print(self, txt):
if self.verbose:
print(txt)
def load(self, model=MODEL, vae=VAEFile, shift=SHIFT, verbose=False):
self.verbose = verbose
print("Loading tokenizers...")
# NOTE: if you need a reference impl for a high performance CLIP tokenizer instead of just using the HF transformers one,
# check https://github.com/Stability-AI/StableSwarmUI/blob/master/src/Utils/CliplikeTokenizer.cs
# (T5 tokenizer is different though)
self.tokenizer = SD3Tokenizer()
print("Loading OpenAI CLIP L...")
self.clip_l = ClipL()
print("Loading OpenCLIP bigG...")
self.clip_g = ClipG()
print("Loading Google T5-v1-XXL...")
self.t5xxl = T5XXL()
print(f"Loading SD3 model {os.path.basename(model)}...")
self.sd3 = SD3(model, shift, verbose)
print("Loading VAE model...")
self.vae = VAE(vae or model)
print("Models loaded.")
def get_empty_latent(self, width, height):
self.print("Prep an empty latent...")
return torch.ones(1, 16, height // 8, width // 8, device="cpu") * 0.0609
def get_sigmas(self, sampling, steps):
start = sampling.timestep(sampling.sigma_max)
end = sampling.timestep(sampling.sigma_min)
timesteps = torch.linspace(start, end, steps)
sigs = []
for x in range(len(timesteps)):
ts = timesteps[x]
sigs.append(sampling.sigma(ts))
sigs += [0.0]
return torch.FloatTensor(sigs)
def get_noise(self, seed, latent):
generator = torch.manual_seed(seed)
self.print(f"dtype = {latent.dtype}, layout = {latent.layout}, device = {latent.device}")
return torch.randn(
latent.size(),
dtype=torch.float32,
layout=latent.layout,
generator=generator,
device="cpu",
).to(latent.dtype)
def get_cond(self, prompt):
self.print("Encode prompt...")
tokens = self.tokenizer.tokenize_with_weights(prompt)
l_out, l_pooled = self.clip_l.model.encode_token_weights(tokens["l"])
g_out, g_pooled = self.clip_g.model.encode_token_weights(tokens["g"])
t5_out, t5_pooled = self.t5xxl.model.encode_token_weights(tokens["t5xxl"])
lg_out = torch.cat([l_out, g_out], dim=-1)
lg_out = torch.nn.functional.pad(lg_out, (0, 4096 - lg_out.shape[-1]))
return torch.cat([lg_out, t5_out], dim=-2), torch.cat((l_pooled, g_pooled), dim=-1)
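# Illustrative shapes (assuming the usual 77-token CLIP and T5 sequences; not in the original
# file): l_out is [1, 77, 768] and g_out is [1, 77, 1280], so lg_out is [1, 77, 2048] padded to
# [1, 77, 4096]; concatenating with t5_out [1, 77, 4096] along the sequence dim yields a
# [1, 154, 4096] crossattn context, and the pooled vector is 768 + 1280 = 2048 wide.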
def max_denoise(self, sigmas):
max_sigma = float(self.sd3.model.model_sampling.sigma_max)
sigma = float(sigmas[0])
return math.isclose(max_sigma, sigma, rel_tol=1e-05) or sigma > max_sigma
def fix_cond(self, cond):
cond, pooled = (cond[0].half().cuda(), cond[1].half().cuda())
return {"c_crossattn": cond, "y": pooled}
def do_sampling(
self,
latent,
seed,
conditioning,
neg_cond,
steps,
cfg_scale,
sampler="dpmpp_2m",
denoise=1.0,
) -> torch.Tensor:
self.print("Sampling...")
latent = latent.half().cuda()
self.sd3.model = self.sd3.model.cuda()
noise = self.get_noise(seed, latent).cuda()
sigmas = self.get_sigmas(self.sd3.model.model_sampling, steps).cuda()
sigmas = sigmas[int(steps * (1 - denoise)) :]
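# Illustrative note (not in the original file): with steps=40 and denoise=0.6 this skips the
# first int(40 * 0.4) = 16 sigmas and runs the remaining 24 steps, so image-to-image starts
# partway down the noise schedule instead of from pure noise.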
conditioning = self.fix_cond(conditioning)
neg_cond = self.fix_cond(neg_cond)
extra_args = {"cond": conditioning, "uncond": neg_cond, "cond_scale": cfg_scale}
noise_scaled = self.sd3.model.model_sampling.noise_scaling(sigmas[0], noise, latent, self.max_denoise(sigmas))
sample_fn = getattr(sd3_impls, f"sample_{sampler}")
latent = sample_fn(CFGDenoiser(self.sd3.model), noise_scaled, sigmas, extra_args=extra_args)
latent = SD3LatentFormat().process_out(latent)
self.sd3.model = self.sd3.model.cpu()
self.print("Sampling done")
return latent
def vae_encode(self, image) -> torch.Tensor:
self.print("Encoding image to latent...")
image = image.convert("RGB")
image_np = np.array(image).astype(np.float32) / 255.0
image_np = np.moveaxis(image_np, 2, 0)
batch_images = np.expand_dims(image_np, axis=0).repeat(1, axis=0)
image_torch = torch.from_numpy(batch_images)
image_torch = 2.0 * image_torch - 1.0
image_torch = image_torch.cuda()
self.vae.model = self.vae.model.cuda()
latent = self.vae.model.encode(image_torch).cpu()
self.vae.model = self.vae.model.cpu()
self.print("Encoded")
return latent
def vae_decode(self, latent) -> Image.Image:
self.print("Decoding latent to image...")
latent = latent.cuda()
self.vae.model = self.vae.model.cuda()
image = self.vae.model.decode(latent)
image = image.float()
self.vae.model = self.vae.model.cpu()
image = torch.clamp((image + 1.0) / 2.0, min=0.0, max=1.0)[0]
decoded_np = 255.0 * np.moveaxis(image.cpu().numpy(), 0, 2)
decoded_np = decoded_np.astype(np.uint8)
out_image = Image.fromarray(decoded_np)
self.print("Decoded")
return out_image
def gen_image(
self,
prompts=[PROMPT],
width=WIDTH,
height=HEIGHT,
steps=STEPS,
cfg_scale=CFG_SCALE,
sampler=SAMPLER,
seed=SEED,
seed_type=SEEDTYPE,
out_dir=OUTDIR,
init_image=INIT_IMAGE,
denoise=DENOISE,
):
latent = self.get_empty_latent(width, height)
if init_image:
image_data = Image.open(init_image)
image_data = image_data.resize((width, height), Image.LANCZOS)
latent = self.vae_encode(image_data)
latent = SD3LatentFormat().process_in(latent)
neg_cond = self.get_cond("")
seed_num = None
pbar = tqdm(enumerate(prompts), total=len(prompts), position=0, leave=True)
for i, prompt in pbar:
if seed_type == "roll":
seed_num = seed if seed_num is None else seed_num + 1
elif seed_type == "rand":
seed_num = torch.randint(0, 100000, (1,)).item()
else: # fixed
seed_num = seed
conditioning = self.get_cond(prompt)
sampled_latent = self.do_sampling(
latent,
seed_num,
conditioning,
neg_cond,
steps,
cfg_scale,
sampler,
denoise if init_image else 1.0,
)
image = self.vae_decode(sampled_latent)
save_path = os.path.join(out_dir, f"{i:06d}.png")
self.print(f"Will save to {save_path}")
image.save(save_path)
self.print("Done")
CONFIGS = {
"sd3_medium": {
"shift": 1.0,
"cfg": 5.0,
"steps": 50,
"sampler": "dpmpp_2m",
},
"sd3.5_large": {
"shift": 3.0,
"cfg": 4.5,
"steps": 40,
"sampler": "dpmpp_2m",
},
"sd3.5_large_turbo": {"shift": 3.0, "cfg": 1.0, "steps": 4, "sampler": "euler"},
}
@torch.no_grad()
def main(
prompt=PROMPT,
model=MODEL,
out_dir=OUTDIR,
postfix=None,
seed=SEED,
seed_type=SEEDTYPE,
sampler=None,
steps=None,
cfg=None,
shift=None,
width=WIDTH,
height=HEIGHT,
vae=VAEFile,
init_image=INIT_IMAGE,
denoise=DENOISE,
verbose=False,
):
steps = steps or CONFIGS[os.path.splitext(os.path.basename(model))[0]]["steps"]
cfg = cfg or CONFIGS[os.path.splitext(os.path.basename(model))[0]]["cfg"]
shift = shift or CONFIGS[os.path.splitext(os.path.basename(model))[0]]["shift"]
sampler = sampler or CONFIGS[os.path.splitext(os.path.basename(model))[0]]["sampler"]
inferencer = SD3Inferencer()
inferencer.load(model, vae, shift, verbose)
if isinstance(prompt, str):
if os.path.splitext(prompt)[-1] == ".txt":
with open(prompt, "r") as f:
prompts = [l.strip() for l in f.readlines()]
else:
prompts = [prompt]
out_dir = os.path.join(
out_dir,
os.path.splitext(os.path.basename(model))[0],
os.path.splitext(os.path.basename(prompt))[0][:50]
+ (postfix or datetime.datetime.now().strftime("_%Y-%m-%dT%H-%M-%S")),
)
print(f"Saving images to {out_dir}")
os.makedirs(out_dir, exist_ok=False)
inferencer.gen_image(
prompts,
width,
height,
steps,
cfg,
sampler,
seed,
seed_type,
out_dir,
init_image,
denoise,
)
fire.Fire(main)
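# Illustrative usage (hypothetical invocation, not part of the original file): fire.Fire(main)
# exposes the keyword arguments of main() as CLI flags, e.g.:
#   python sd3_infer.py --prompt "a photo of a cat" --model models/sd3.5_large.safetensors --steps 40 --cfg 4.5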

View File

@@ -1,72 +0,0 @@
from dataclasses import dataclass
from typing import Literal, TypedDict
import torch
from invokeai.backend.sd3.mmditx import MMDiTX
from invokeai.backend.sd3.sd3_impls import ModelSamplingDiscreteFlow
class ContextEmbedderConfig(TypedDict):
target: Literal["torch.nn.Linear"]
params: dict[str, int]
@dataclass
class Sd3MMDiTXParams:
patch_size: int
depth: int
num_patches: int
pos_embed_max_size: int
adm_in_channels: int
context_shape: tuple[int, int]
qk_norm: Literal["rms", None]
x_block_self_attn_layers: list[int]
context_embedder_config: ContextEmbedderConfig
class Sd3MMDiTX(torch.nn.Module):
"""This class is based closely on
https://github.com/Stability-AI/sd3.5/blob/19bf11c4e1e37324c5aa5a61f010d4127848a09c/sd3_impls.py#L53
but has more standard model loading semantics.
"""
def __init__(
self,
params: Sd3MMDiTXParams,
shift: float = 1.0,
device: torch.device | None = None,
dtype: torch.dtype | None = None,
verbose: bool = False,
):
super().__init__()
self.diffusion_model = MMDiTX(
input_size=None,
pos_embed_scaling_factor=None,
pos_embed_offset=None,
pos_embed_max_size=params.pos_embed_max_size,
patch_size=params.patch_size,
in_channels=16,
depth=params.depth,
num_patches=params.num_patches,
adm_in_channels=params.adm_in_channels,
context_embedder_config=params.context_embedder_config,
qk_norm=params.qk_norm,
x_block_self_attn_layers=params.x_block_self_attn_layers,
device=device,
dtype=dtype,
verbose=verbose,
)
self.model_sampling = ModelSamplingDiscreteFlow(shift=shift)
def apply_model(self, x: torch.Tensor, sigma: torch.Tensor, c_crossattn: torch.Tensor, y: torch.Tensor):
dtype = self.get_dtype()
timestep = self.model_sampling.timestep(sigma).float()
model_output = self.diffusion_model(x.to(dtype), timestep, context=c_crossattn.to(dtype), y=y.to(dtype)).float()
return self.model_sampling.calculate_denoised(sigma, model_output, x)
def forward(self, x: torch.Tensor, sigma: float, c_crossattn: torch.Tensor, y: torch.Tensor):
return self.apply_model(x=x, sigma=sigma, c_crossattn=c_crossattn, y=y)
def get_dtype(self):
return self.diffusion_model.dtype

View File

@@ -1,70 +0,0 @@
import math
import re
from typing import Any, Dict
from invokeai.backend.sd3.sd3_mmditx import ContextEmbedderConfig, Sd3MMDiTXParams
def is_sd3_checkpoint(sd: Dict[str, Any]) -> bool:
"""Is the state dict for an SD3 checkpoint like this one?:
https://huggingface.co/stabilityai/stable-diffusion-3.5-large/blob/main/sd3.5_large.safetensors
Note that this checkpoint format contains both the VAE and the MMDiTX model.
This is intended to be a reasonably high-precision detector, but it is not guaranteed to have perfect precision.
"""
# If all of the expected keys are present, then this is very likely an SD3 checkpoint.
expected_keys = {
# VAE decoder and encoder keys.
"first_stage_model.decoder.conv_in.bias",
"first_stage_model.decoder.conv_in.weight",
"first_stage_model.encoder.conv_in.bias",
"first_stage_model.encoder.conv_in.weight",
# MMDiTX keys.
"model.diffusion_model.final_layer.linear.bias",
"model.diffusion_model.final_layer.linear.weight",
"model.diffusion_model.joint_blocks.0.context_block.attn.ln_k.weight",
"model.diffusion_model.joint_blocks.0.context_block.attn.ln_q.weight",
}
return expected_keys.issubset(sd.keys())
def infer_sd3_mmditx_params(sd: Dict[str, Any], prefix: str = "model.diffusion_model.") -> Sd3MMDiTXParams:
"""Infer the MMDiTX model parameters from the state dict.
This logic is based on:
https://github.com/Stability-AI/sd3.5/blob/19bf11c4e1e37324c5aa5a61f010d4127848a09c/sd3_impls.py#L68-L88
"""
patch_size = sd[f"{prefix}x_embedder.proj.weight"].shape[2]
depth = sd[f"{prefix}x_embedder.proj.weight"].shape[0] // 64
num_patches = sd[f"{prefix}pos_embed"].shape[1]
pos_embed_max_size = round(math.sqrt(num_patches))
adm_in_channels = sd[f"{prefix}y_embedder.mlp.0.weight"].shape[1]
context_shape = sd[f"{prefix}context_embedder.weight"].shape
qk_norm = "rms" if f"{prefix}joint_blocks.0.context_block.attn.ln_k.weight" in sd else None
x_block_self_attn_layers = sorted(
[
int(key.split(".x_block.attn2.ln_k.weight")[0].split(".")[-1])
for key in list(filter(re.compile(".*.x_block.attn2.ln_k.weight").match, sd.keys()))
]
)
context_embedder_config: ContextEmbedderConfig = {
"target": "torch.nn.Linear",
"params": {
"in_features": context_shape[1],
"out_features": context_shape[0],
},
}
return Sd3MMDiTXParams(
patch_size=patch_size,
depth=depth,
num_patches=num_patches,
pos_embed_max_size=pos_embed_max_size,
adm_in_channels=adm_in_channels,
context_shape=context_shape,
qk_norm=qk_norm,
x_block_self_attn_layers=x_block_self_attn_layers,
context_embedder_config=context_embedder_config,
)
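# Hypothetical usage sketch (not part of the original file; the checkpoint path is an example):
#
#   from safetensors.torch import load_file
#
#   sd = load_file("models/sd3.5_large.safetensors")
#   if is_sd3_checkpoint(sd):
#       params = infer_sd3_mmditx_params(sd)  # prefix defaults to "model.diffusion_model."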

View File

@@ -499,6 +499,22 @@ class StableDiffusionGeneratorPipeline(StableDiffusionPipeline):
for idx, value in enumerate(single_t2i_adapter_data.adapter_state):
accum_adapter_state[idx] += value * t2i_adapter_weight
# Hack: force compatibility with irregular resolutions by padding the feature map with zeros
for idx, tensor in enumerate(accum_adapter_state):
# The tensor size is supposed to be the latent size divided by some integer downscale factor.
# Internally, the unet pads the latents before downscaling between levels when they are no longer divisible by the downscale factor.
# If the latent size does not scale down evenly, we need to pad the tensor so that it matches the downscaled, padded latents later on.
scale_factor = latents.size()[-1] // tensor.size()[-1]
required_padding_width = math.ceil(latents.size()[-1] / scale_factor) - tensor.size()[-1]
required_padding_height = math.ceil(latents.size()[-2] / scale_factor) - tensor.size()[-2]
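# Illustrative example (not part of the original diff): for a latent width of 127 and an
# adapter feature width of 15, scale_factor = 127 // 15 = 8 and required_padding_width =
# ceil(127 / 8) - 15 = 16 - 15 = 1, matching the internal padding the UNet will apply.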
tensor = torch.nn.functional.pad(
tensor,
(0, required_padding_width, 0, required_padding_height, 0, 0, 0, 0),
mode="constant",
value=0,
)
accum_adapter_state[idx] = tensor
down_intrablock_additional_residuals = accum_adapter_state
# Handle inpainting models.

View File

@@ -49,9 +49,32 @@ class FLUXConditioningInfo:
return self
@dataclass
class SD3ConditioningInfo:
clip_l_pooled_embeds: torch.Tensor
clip_l_embeds: torch.Tensor
clip_g_pooled_embeds: torch.Tensor
clip_g_embeds: torch.Tensor
t5_embeds: torch.Tensor | None
def to(self, device: torch.device | None = None, dtype: torch.dtype | None = None):
self.clip_l_pooled_embeds = self.clip_l_pooled_embeds.to(device=device, dtype=dtype)
self.clip_l_embeds = self.clip_l_embeds.to(device=device, dtype=dtype)
self.clip_g_pooled_embeds = self.clip_g_pooled_embeds.to(device=device, dtype=dtype)
self.clip_g_embeds = self.clip_g_embeds.to(device=device, dtype=dtype)
if self.t5_embeds is not None:
self.t5_embeds = self.t5_embeds.to(device=device, dtype=dtype)
return self
@dataclass
class ConditioningFieldData:
conditionings: List[BasicConditioningInfo] | List[SDXLConditioningInfo] | List[FLUXConditioningInfo]
conditionings: (
List[BasicConditioningInfo]
| List[SDXLConditioningInfo]
| List[FLUXConditioningInfo]
| List[SD3ConditioningInfo]
)
@dataclass

View File

@@ -33,7 +33,7 @@ class PreviewExt(ExtensionBase):
def initial_preview(self, ctx: DenoiseContext):
self.callback(
PipelineIntermediateState(
step=-1,
step=0,
order=ctx.scheduler.order,
total_steps=len(ctx.inputs.timesteps),
timestep=int(ctx.scheduler.config.num_train_timesteps), # TODO: is there any code which uses it?

View File

@@ -3,7 +3,7 @@ from typing import Any, Dict, List, Optional, Tuple, Union
import diffusers
import torch
from diffusers.configuration_utils import ConfigMixin, register_to_config
from diffusers.loaders import FromOriginalControlNetMixin
from diffusers.loaders.single_file_model import FromOriginalModelMixin
from diffusers.models.attention_processor import AttentionProcessor, AttnProcessor
from diffusers.models.controlnet import ControlNetConditioningEmbedding, ControlNetOutput, zero_module
from diffusers.models.embeddings import (
@@ -32,7 +32,9 @@ from invokeai.backend.util.logging import InvokeAILogger
logger = InvokeAILogger.get_logger(__name__)
class ControlNetModel(ModelMixin, ConfigMixin, FromOriginalControlNetMixin):
# NOTE(ryand): I'm not the original author of this code, but for future reference, it appears that this class was copied
# from diffusers in order to add support for the encoder_attention_mask argument.
class ControlNetModel(ModelMixin, ConfigMixin, FromOriginalModelMixin):
"""
A ControlNet model.

View File

@@ -9,6 +9,7 @@ const config: KnipConfig = {
'src/services/api/schema.ts',
'src/features/nodes/types/v1/**',
'src/features/nodes/types/v2/**',
'src/features/parameters/types/parameterSchemas.ts',
// TODO(psyche): maybe we can clean up these utils after canvas v2 release
'src/features/controlLayers/konva/util.ts',
// TODO(psyche): restore HRF functionality?

View File

@@ -52,13 +52,13 @@
}
},
"dependencies": {
"@atlaskit/pragmatic-drag-and-drop": "^1.4.0",
"@atlaskit/pragmatic-drag-and-drop-auto-scroll": "^1.4.0",
"@atlaskit/pragmatic-drag-and-drop-hitbox": "^1.0.3",
"@dagrejs/dagre": "^1.1.4",
"@dagrejs/graphlib": "^2.2.4",
"@dnd-kit/core": "^6.1.0",
"@dnd-kit/sortable": "^8.0.0",
"@dnd-kit/utilities": "^3.2.2",
"@fontsource-variable/inter": "^5.1.0",
"@invoke-ai/ui-library": "^0.0.42",
"@invoke-ai/ui-library": "^0.0.43",
"@nanostores/react": "^0.7.3",
"@reduxjs/toolkit": "2.2.3",
"@roarr/browser-log-writer": "^1.3.0",

View File

@@ -5,27 +5,27 @@ settings:
excludeLinksFromLockfile: false
dependencies:
'@atlaskit/pragmatic-drag-and-drop':
specifier: ^1.4.0
version: 1.4.0
'@atlaskit/pragmatic-drag-and-drop-auto-scroll':
specifier: ^1.4.0
version: 1.4.0
'@atlaskit/pragmatic-drag-and-drop-hitbox':
specifier: ^1.0.3
version: 1.0.3
'@dagrejs/dagre':
specifier: ^1.1.4
version: 1.1.4
'@dagrejs/graphlib':
specifier: ^2.2.4
version: 2.2.4
'@dnd-kit/core':
specifier: ^6.1.0
version: 6.1.0(react-dom@18.3.1)(react@18.3.1)
'@dnd-kit/sortable':
specifier: ^8.0.0
version: 8.0.0(@dnd-kit/core@6.1.0)(react@18.3.1)
'@dnd-kit/utilities':
specifier: ^3.2.2
version: 3.2.2(react@18.3.1)
'@fontsource-variable/inter':
specifier: ^5.1.0
version: 5.1.0
'@invoke-ai/ui-library':
specifier: ^0.0.42
version: 0.0.42(@chakra-ui/form-control@2.2.0)(@chakra-ui/icon@3.2.0)(@chakra-ui/media-query@3.3.0)(@chakra-ui/menu@2.2.1)(@chakra-ui/spinner@2.1.0)(@chakra-ui/system@2.6.2)(@fontsource-variable/inter@5.1.0)(@types/react@18.3.11)(i18next@23.15.1)(react-dom@18.3.1)(react@18.3.1)
specifier: ^0.0.43
version: 0.0.43(@chakra-ui/form-control@2.2.0)(@chakra-ui/icon@3.2.0)(@chakra-ui/media-query@3.3.0)(@chakra-ui/menu@2.2.1)(@chakra-ui/spinner@2.1.0)(@chakra-ui/system@2.6.2)(@fontsource-variable/inter@5.1.0)(@types/react@18.3.11)(i18next@23.15.1)(react-dom@18.3.1)(react@18.3.1)
'@nanostores/react':
specifier: ^0.7.3
version: 0.7.3(nanostores@0.11.3)(react@18.3.1)
@@ -319,6 +319,28 @@ packages:
'@jridgewell/trace-mapping': 0.3.25
dev: true
/@atlaskit/pragmatic-drag-and-drop-auto-scroll@1.4.0:
resolution: {integrity: sha512-5GoikoTSW13UX76F9TDeWB8x3jbbGlp/Y+3aRkHe1MOBMkrWkwNpJ42MIVhhX/6NSeaZiPumP0KbGJVs2tOWSQ==}
dependencies:
'@atlaskit/pragmatic-drag-and-drop': 1.4.0
'@babel/runtime': 7.25.7
dev: false
/@atlaskit/pragmatic-drag-and-drop-hitbox@1.0.3:
resolution: {integrity: sha512-/Sbu/HqN2VGLYBhnsG7SbRNg98XKkbF6L7XDdBi+izRybfaK1FeMfodPpm/xnBHPJzwYMdkE0qtLyv6afhgMUA==}
dependencies:
'@atlaskit/pragmatic-drag-and-drop': 1.4.0
'@babel/runtime': 7.25.7
dev: false
/@atlaskit/pragmatic-drag-and-drop@1.4.0:
resolution: {integrity: sha512-qRY3PTJIcxfl/QB8Gwswz+BRvlmgAC5pB+J2hL6dkIxgqAgVwOhAamMUKsrOcFU/axG2Q7RbNs1xfoLKDuhoPg==}
dependencies:
'@babel/runtime': 7.25.7
bind-event-listener: 3.0.0
raf-schd: 4.0.3
dev: false
/@babel/code-frame@7.25.7:
resolution: {integrity: sha512-0xZJFNE5XMpENsgfHYTw8FbX4kv53mFLn2i3XPoq69LyhYSCBJtitaHx9QnsVTrsogI4Z3+HtEfZ2/GFPOtf5g==}
engines: {node: '>=6.9.0'}
@@ -980,49 +1002,6 @@ packages:
engines: {node: '>17.0.0'}
dev: false
/@dnd-kit/accessibility@3.1.0(react@18.3.1):
resolution: {integrity: sha512-ea7IkhKvlJUv9iSHJOnxinBcoOI3ppGnnL+VDJ75O45Nss6HtZd8IdN8touXPDtASfeI2T2LImb8VOZcL47wjQ==}
peerDependencies:
react: '>=16.8.0'
dependencies:
react: 18.3.1
tslib: 2.7.0
dev: false
/@dnd-kit/core@6.1.0(react-dom@18.3.1)(react@18.3.1):
resolution: {integrity: sha512-J3cQBClB4TVxwGo3KEjssGEXNJqGVWx17aRTZ1ob0FliR5IjYgTxl5YJbKTzA6IzrtelotH19v6y7uoIRUZPSg==}
peerDependencies:
react: '>=16.8.0'
react-dom: '>=16.8.0'
dependencies:
'@dnd-kit/accessibility': 3.1.0(react@18.3.1)
'@dnd-kit/utilities': 3.2.2(react@18.3.1)
react: 18.3.1
react-dom: 18.3.1(react@18.3.1)
tslib: 2.7.0
dev: false
/@dnd-kit/sortable@8.0.0(@dnd-kit/core@6.1.0)(react@18.3.1):
resolution: {integrity: sha512-U3jk5ebVXe1Lr7c2wU7SBZjcWdQP+j7peHJfCspnA81enlu88Mgd7CC8Q+pub9ubP7eKVETzJW+IBAhsqbSu/g==}
peerDependencies:
'@dnd-kit/core': ^6.1.0
react: '>=16.8.0'
dependencies:
'@dnd-kit/core': 6.1.0(react-dom@18.3.1)(react@18.3.1)
'@dnd-kit/utilities': 3.2.2(react@18.3.1)
react: 18.3.1
tslib: 2.7.0
dev: false
/@dnd-kit/utilities@3.2.2(react@18.3.1):
resolution: {integrity: sha512-+MKAJEOfaBe5SmV6t34p80MMKhjvUz0vRrvVJbPT0WElzaOJ/1xs+D+KDv+tD/NE5ujfrChEcshd4fLn0wpiqg==}
peerDependencies:
react: '>=16.8.0'
dependencies:
react: 18.3.1
tslib: 2.7.0
dev: false
/@emotion/babel-plugin@11.12.0:
resolution: {integrity: sha512-y2WQb+oP8Jqvvclh8Q55gLUyb7UFvgv7eJfsj7td5TToBrIUtPay2kMrZi4xjq9qw2vD0ZR5fSho0yqoFgX7Rw==}
dependencies:
@@ -1696,20 +1675,20 @@ packages:
prettier: 3.3.3
dev: true
/@invoke-ai/ui-library@0.0.42(@chakra-ui/form-control@2.2.0)(@chakra-ui/icon@3.2.0)(@chakra-ui/media-query@3.3.0)(@chakra-ui/menu@2.2.1)(@chakra-ui/spinner@2.1.0)(@chakra-ui/system@2.6.2)(@fontsource-variable/inter@5.1.0)(@types/react@18.3.11)(i18next@23.15.1)(react-dom@18.3.1)(react@18.3.1):
resolution: {integrity: sha512-OuDXRipBO5mu+Nv4qN8cd8MiwiGBdq6h4PirVgPI9/ltbdcIzePgUJ0dJns26lflHSTRWW38I16wl4YTw3mNWA==}
/@invoke-ai/ui-library@0.0.43(@chakra-ui/form-control@2.2.0)(@chakra-ui/icon@3.2.0)(@chakra-ui/media-query@3.3.0)(@chakra-ui/menu@2.2.1)(@chakra-ui/spinner@2.1.0)(@chakra-ui/system@2.6.2)(@fontsource-variable/inter@5.1.0)(@types/react@18.3.11)(i18next@23.15.1)(react-dom@18.3.1)(react@18.3.1):
resolution: {integrity: sha512-t3fPYyks07ue3dEBPJuTHbeDLnDckDCOrtvc07mMDbLOnlPEZ0StaeiNGH+oO8qLzAuMAlSTdswgHfzTc2MmPw==}
peerDependencies:
'@fontsource-variable/inter': ^5.0.16
react: ^18.2.0
react-dom: ^18.2.0
dependencies:
'@chakra-ui/anatomy': 2.2.2
'@chakra-ui/anatomy': 2.3.4
'@chakra-ui/icons': 2.2.4(@chakra-ui/react@2.10.2)(react@18.3.1)
'@chakra-ui/layout': 2.3.1(@chakra-ui/system@2.6.2)(react@18.3.1)
'@chakra-ui/portal': 2.1.0(react-dom@18.3.1)(react@18.3.1)
'@chakra-ui/react': 2.10.2(@emotion/react@11.13.3)(@emotion/styled@11.13.0)(@types/react@18.3.11)(framer-motion@11.10.0)(react-dom@18.3.1)(react@18.3.1)
'@chakra-ui/styled-system': 2.9.2
'@chakra-ui/theme-tools': 2.1.2(@chakra-ui/styled-system@2.9.2)
'@chakra-ui/styled-system': 2.11.2(react@18.3.1)
'@chakra-ui/theme-tools': 2.2.6(@chakra-ui/styled-system@2.11.2)(react@18.3.1)
'@emotion/react': 11.13.3(@types/react@18.3.11)(react@18.3.1)
'@emotion/styled': 11.13.0(@emotion/react@11.13.3)(@types/react@18.3.11)(react@18.3.1)
'@fontsource-variable/inter': 5.1.0
@@ -4313,6 +4292,10 @@ packages:
open: 8.4.2
dev: true
/bind-event-listener@3.0.0:
resolution: {integrity: sha512-PJvH288AWQhKs2v9zyfYdPzlPqf5bXbGMmhmUIY9x4dAUGIWgomO771oBQNwJnMQSnUIXhKu6sgzpBRXTlvb8Q==}
dev: false
/bl@4.1.0:
resolution: {integrity: sha512-1W07cM9gS6DcLperZfFSj+bWLtaPGSOHWhPiGzXmvVJbRLdG82sH/Kn8EtW1VqWVA54AKf2h5k5BbnIbwF3h6w==}
dependencies:
@@ -7557,6 +7540,10 @@ packages:
resolution: {integrity: sha512-NuaNSa6flKT5JaSYQzJok04JzTL1CA6aGhv5rfLW3PgqA+M2ChpZQnAC8h8i4ZFkBS8X5RqkDBHA7r4hej3K9A==}
dev: true
/raf-schd@4.0.3:
resolution: {integrity: sha512-tQkJl2GRWh83ui2DiPTJz9wEiMN20syf+5oKfB03yYP7ioZcJwsIK8FjrtLwH1m7C7e+Tt2yYBlrOpdT+dyeIQ==}
dev: false
/raf-throttle@2.0.6:
resolution: {integrity: sha512-C7W6hy78A+vMmk5a/B6C5szjBHrUzWJkVyakjKCK59Uy2CcA7KhO1JUvvH32IXYFIcyJ3FMKP3ZzCc2/71I6Vg==}
dev: false

Binary file not shown (image, 895 KiB after change)

View File

@@ -95,7 +95,8 @@
"none": "Keine",
"new": "Neu",
"ok": "OK",
"close": "Schließen"
"close": "Schließen",
"clipboard": "Zwischenablage"
},
"gallery": {
"galleryImageSize": "Bildgröße",
@@ -535,14 +536,12 @@
"addModels": "Model hinzufügen",
"deleteModelImage": "Lösche Model Bild",
"huggingFaceRepoID": "HuggingFace Repo ID",
"hfToken": "HuggingFace Schlüssel",
"huggingFacePlaceholder": "besitzer/model-name",
"modelSettings": "Modelleinstellungen",
"typePhraseHere": "Phrase hier eingeben",
"spandrelImageToImage": "Bild zu Bild (Spandrel)",
"starterModels": "Einstiegsmodelle",
"t5Encoder": "T5-Kodierer",
"useDefaultSettings": "Standardeinstellungen verwenden",
"uploadImage": "Bild hochladen",
"urlOrLocalPath": "URL oder lokaler Pfad",
"install": "Installieren",
@@ -678,10 +677,41 @@
"toast": {
"uploadFailed": "Hochladen fehlgeschlagen",
"imageCopied": "Bild kopiert",
"parametersNotSet": "Parameter nicht festgelegt",
"parametersNotSet": "Parameter nicht zurückgerufen",
"addedToBoard": "Dem Board hinzugefügt",
"loadedWithWarnings": "Workflow mit Warnungen geladen",
"imageSaved": "Bild gespeichert"
"imageSaved": "Bild gespeichert",
"linkCopied": "Link kopiert",
"problemCopyingLayer": "Ebene kann nicht kopiert werden",
"problemSavingLayer": "Ebene kann nicht gespeichert werden",
"parameterSetDesc": "{{parameter}} zurückgerufen",
"imageUploaded": "Bild hochgeladen",
"problemCopyingImage": "Bild kann nicht kopiert werden",
"parameterNotSetDesc": "{{parameter}} kann nicht zurückgerufen werden",
"prunedQueue": "Warteschlange bereinigt",
"modelAddedSimple": "Modell zur Warteschlange hinzugefügt",
"parametersSet": "Parameter zurückgerufen",
"imageNotLoadedDesc": "Bild konnte nicht gefunden werden",
"setControlImage": "Als Kontrollbild festlegen",
"sentToUpscale": "An Vergrößerung gesendet",
"parameterNotSetDescWithMessage": "{{parameter}} kann nicht zurückgerufen werden: {{message}}",
"unableToLoadImageMetadata": "Bildmetadaten können nicht geladen werden",
"unableToLoadImage": "Bild kann nicht geladen werden",
"serverError": "Serverfehler",
"parameterNotSet": "Parameter nicht zurückgerufen",
"sessionRef": "Sitzung: {{sessionId}}",
"problemDownloadingImage": "Bild kann nicht heruntergeladen werden",
"parameters": "Parameter",
"parameterSet": "Parameter zurückgerufen",
"importFailed": "Import fehlgeschlagen",
"importSuccessful": "Import erfolgreich",
"setNodeField": "Als Knotenfeld festlegen",
"somethingWentWrong": "Etwas ist schief gelaufen",
"workflowLoaded": "Arbeitsablauf geladen",
"workflowDeleted": "Arbeitsablauf gelöscht",
"errorCopied": "Fehler kopiert",
"layerCopiedToClipboard": "Ebene in die Zwischenablage kopiert",
"sentToCanvas": "An Leinwand gesendet"
},
"accessibility": {
"uploadImage": "Bild hochladen",
@@ -825,7 +855,6 @@
"width": "Breite",
"createdBy": "Erstellt von",
"steps": "Schritte",
"seamless": "Nahtlos",
"positivePrompt": "Positiver Prompt",
"generationMode": "Generierungsmodus",
"Threshold": "Rauschen-Schwelle",
@@ -1170,7 +1199,19 @@
"workflowVersion": "Version",
"saveToGallery": "In Galerie speichern",
"noWorkflows": "Keine Arbeitsabläufe",
"noMatchingWorkflows": "Keine passenden Arbeitsabläufe"
"noMatchingWorkflows": "Keine passenden Arbeitsabläufe",
"unknownErrorValidatingWorkflow": "Unbekannter Fehler beim Validieren des Arbeitsablaufes",
"inputFieldTypeParseError": "Typ des Eingabefelds {{node}}.{{field}} kann nicht analysiert werden ({{message}})",
"workflowSettings": "Arbeitsablauf Editor Einstellungen",
"unableToLoadWorkflow": "Arbeitsablauf kann nicht geladen werden",
"viewMode": "In linearen Ansicht verwenden",
"unableToValidateWorkflow": "Arbeitsablauf kann nicht validiert werden",
"outputFieldTypeParseError": "Typ des Ausgabefelds {{node}}.{{field}} kann nicht analysiert werden ({{message}})",
"unableToGetWorkflowVersion": "Version des Arbeitsablaufschemas kann nicht bestimmt werden",
"unknownFieldType": "$t(nodes.unknownField) Typ: {{type}}",
"unknownField": "Unbekanntes Feld",
"unableToUpdateNodes_one": "{{count}} Knoten kann nicht aktualisiert werden",
"unableToUpdateNodes_other": "{{count}} Knoten können nicht aktualisiert werden"
},
"hrf": {
"enableHrf": "Korrektur für hohe Auflösungen",
@@ -1300,15 +1341,7 @@
"enableLogging": "Protokollierung aktivieren"
},
"whatsNew": {
"whatsNewInInvoke": "Was gibt's Neues",
"canvasV2Announcement": {
"fluxSupport": "Unterstützung für Flux-Modelle",
"newCanvas": "Eine leistungsstarke neue Kontrollfläche",
"newLayerTypes": "Neue Ebenentypen für noch mehr Kontrolle",
"readReleaseNotes": "Anmerkungen zu dieser Version lesen",
"watchReleaseVideo": "Video über diese Version anzeigen",
"watchUiUpdatesOverview": "Interface-Updates Übersicht"
}
"whatsNewInInvoke": "Was gibt's Neues"
},
"stylePresets": {
"name": "Name",

View File

@@ -94,6 +94,7 @@
"close": "Close",
"copy": "Copy",
"copyError": "$t(gallery.copy) Error",
"clipboard": "Clipboard",
"on": "On",
"off": "Off",
"or": "or",
@@ -173,7 +174,8 @@
"placeholderSelectAModel": "Select a model",
"reset": "Reset",
"none": "None",
"new": "New"
"new": "New",
"generating": "Generating"
},
"hrf": {
"hrf": "High Resolution Fix",
@@ -681,7 +683,8 @@
"recallParameters": "Recall Parameters",
"recallParameter": "Recall {{label}}",
"scheduler": "Scheduler",
"seamless": "Seamless",
"seamlessXAxis": "Seamless X Axis",
"seamlessYAxis": "Seamless Y Axis",
"seed": "Seed",
"steps": "Steps",
"strength": "Image to image strength",
@@ -702,6 +705,8 @@
"baseModel": "Base Model",
"cancel": "Cancel",
"clipEmbed": "CLIP Embed",
"clipLEmbed": "CLIP-L Embed",
"clipGEmbed": "CLIP-G Embed",
"config": "Config",
"convert": "Convert",
"convertingModelBegin": "Converting Model. Please wait.",
@@ -712,8 +717,12 @@
"convertToDiffusersHelpText4": "This is a one time process only. It might take around 30s-60s depending on the specifications of your computer.",
"convertToDiffusersHelpText5": "Please make sure you have enough disk space. Models generally vary between 2GB-7GB in size.",
"convertToDiffusersHelpText6": "Do you wish to convert this model?",
"noDefaultSettings": "No default settings configured for this model. Visit the Model Manager to add default settings.",
"defaultSettings": "Default Settings",
"defaultSettingsSaved": "Default Settings Saved",
"defaultSettingsOutOfSync": "Some settings do not match the model's defaults:",
"restoreDefaultSettings": "Click to use the model's default settings.",
"usingDefaultSettings": "Using model's default settings",
"delete": "Delete",
"deleteConfig": "Delete Config",
"deleteModel": "Delete Model",
@@ -727,7 +736,17 @@
"huggingFacePlaceholder": "owner/model-name",
"huggingFaceRepoID": "HuggingFace Repo ID",
"huggingFaceHelper": "If multiple models are found in this repo, you will be prompted to select one to install.",
"hfToken": "HuggingFace Token",
"hfTokenLabel": "HuggingFace Token (Required for some models)",
"hfTokenHelperText": "A HF token is required to use some models. Click here to create or get your token.",
"hfTokenInvalid": "Invalid or Missing HF Token",
"hfForbidden": "You do not have access to this HF model",
"hfForbiddenErrorMessage": "We recommend visiting the repo page on HuggingFace.com. The owner may require acceptance of terms in order to download.",
"hfTokenInvalidErrorMessage": "Invalid or missing HuggingFace token.",
"hfTokenRequired": "You are trying to download a model that requires a valid HuggingFace Token.",
"hfTokenInvalidErrorMessage2": "Update it in the ",
"hfTokenUnableToVerify": "Unable to Verify HF Token",
"hfTokenUnableToVerifyErrorMessage": "Unable to verify HuggingFace token. This is likely due to a network error. Please try again later.",
"hfTokenSaved": "HF Token Saved",
"imageEncoderModelId": "Image Encoder Model ID",
"includesNModels": "Includes {{n}} models and their dependencies",
"installQueue": "Install Queue",
@@ -798,7 +817,6 @@
"uploadImage": "Upload Image",
"urlOrLocalPath": "URL or Local Path",
"urlOrLocalPathHelper": "URLs should point to a single file. Local paths can point to a single file or folder for a single diffusers model.",
"useDefaultSettings": "Use Default Settings",
"vae": "VAE",
"vaePrecision": "VAE Precision",
"variant": "Variant",
@@ -982,6 +1000,7 @@
"controlNetControlMode": "Control Mode",
"copyImage": "Copy Image",
"denoisingStrength": "Denoising Strength",
"disabledNoRasterContent": "Disabled (No Raster Content)",
"downloadImage": "Download Image",
"general": "General",
"guidance": "Guidance",
@@ -1032,6 +1051,7 @@
"patchmatchDownScaleSize": "Downscale",
"perlinNoise": "Perlin Noise",
"positivePromptPlaceholder": "Positive Prompt",
"recallMetadata": "Recall Metadata",
"iterations": "Iterations",
"scale": "Scale",
"scaleBeforeProcessing": "Scale Before Processing",
@@ -1108,6 +1128,9 @@
"enableInformationalPopovers": "Enable Informational Popovers",
"informationalPopoversDisabled": "Informational Popovers Disabled",
"informationalPopoversDisabledDesc": "Informational popovers have been disabled. Enable them in Settings.",
"enableModelDescriptions": "Enable Model Descriptions in Dropdowns",
"modelDescriptionsDisabled": "Model Descriptions in Dropdowns Disabled",
"modelDescriptionsDisabledDesc": "Model descriptions in dropdowns have been disabled. Enable them in Settings.",
"enableInvisibleWatermark": "Enable Invisible Watermark",
"enableNSFWChecker": "Enable NSFW Checker",
"general": "General",
@@ -1117,6 +1140,7 @@
"resetWebUI": "Reset Web UI",
"resetWebUIDesc1": "Resetting the web UI only resets the browser's local cache of your images and remembered settings. It does not delete any images from disk.",
"resetWebUIDesc2": "If images aren't showing up in the gallery or something else isn't working, please try resetting before submitting an issue on GitHub.",
"showDetailedInvocationProgress": "Show Progress Details",
"showProgressInViewer": "Show Progress Images in Viewer",
"ui": "User Interface",
"clearIntermediatesDisabled": "Queue must be empty to clear intermediates",
@@ -1251,6 +1275,33 @@
"heading": "Mask Adjustments",
"paragraphs": ["Adjust the mask."]
},
"inpainting": {
"heading": "Inpainting",
"paragraphs": ["Controls which area is modified, guided by Denoising Strength."]
},
"rasterLayer": {
"heading": "Raster Layer",
"paragraphs": ["Pixel-based content of your canvas, used during image generation."]
},
"regionalGuidance": {
"heading": "Regional Guidance",
"paragraphs": ["Brush to guide where elements from global prompts should appear."]
},
"regionalGuidanceAndReferenceImage": {
"heading": "Regional Guidance and Regional Reference Image",
"paragraphs": [
"For Regional Guidance, brush to guide where elements from global prompts should appear.",
"For Regional Reference Image, brush to apply a reference image to specific areas."
]
},
"globalReferenceImage": {
"heading": "Global Reference Image",
"paragraphs": ["Applies a reference image to influence the entire generation."]
},
"regionalReferenceImage": {
"heading": "Regional Reference Image",
"paragraphs": ["Brush to apply a reference image to specific areas."]
},
"controlNet": {
"heading": "ControlNet",
"paragraphs": [
@@ -1366,8 +1417,9 @@
"paramDenoisingStrength": {
"heading": "Denoising Strength",
"paragraphs": [
"How much noise is added to the input image.",
"0 will result in an identical image, while 1 will result in a completely new image."
"Controls how much the generated image varies from the raster layer(s).",
"Lower strength stays closer to the combined visible raster layers. Higher strength relies more on the global prompt.",
"When there are no raster layers with visible content, this setting is ignored."
]
},
"paramHeight": {
@@ -1606,21 +1658,24 @@
"newControlLayerError": "Problem Creating Control Layer",
"newRasterLayerOk": "Created Raster Layer",
"newRasterLayerError": "Problem Creating Raster Layer",
"newFromImage": "New from Image",
"pullBboxIntoLayerOk": "Bbox Pulled Into Layer",
"pullBboxIntoLayerError": "Problem Pulling BBox Into Layer",
"pullBboxIntoReferenceImageOk": "Bbox Pulled Into ReferenceImage",
"pullBboxIntoReferenceImageError": "Problem Pulling BBox Into ReferenceImage",
"regionIsEmpty": "Selected region is empty",
"mergeVisible": "Merge Visible",
"mergeVisibleOk": "Merged visible layers",
"mergeVisibleError": "Error merging visible layers",
"mergeDown": "Merge Down",
"mergeVisibleOk": "Merged layers",
"mergeVisibleError": "Error merging layers",
"mergingLayers": "Merging layers",
"clearHistory": "Clear History",
"bboxOverlay": "Show Bbox Overlay",
"resetCanvas": "Reset Canvas",
"clearCaches": "Clear Caches",
"recalculateRects": "Recalculate Rects",
"clipToBbox": "Clip Strokes to Bbox",
"outputOnlyMaskedRegions": "Output Only Masked Regions",
"outputOnlyMaskedRegions": "Output Only Generated Regions",
"addLayer": "Add Layer",
"duplicate": "Duplicate",
"moveToFront": "Move to Front",
@@ -1648,6 +1703,8 @@
"controlLayer": "Control Layer",
"inpaintMask": "Inpaint Mask",
"regionalGuidance": "Regional Guidance",
"canvasAsRasterLayer": "$t(controlLayers.canvas) as $t(controlLayers.rasterLayer)",
"canvasAsControlLayer": "$t(controlLayers.canvas) as $t(controlLayers.controlLayer)",
"referenceImage": "Reference Image",
"regionalReferenceImage": "Regional Reference Image",
"globalReferenceImage": "Global Reference Image",
@@ -1688,8 +1745,18 @@
"layer_other": "Layers",
"layer_withCount_one": "Layer ({{count}})",
"layer_withCount_other": "Layers ({{count}})",
"convertToControlLayer": "Convert to Control Layer",
"convertToRasterLayer": "Convert to Raster Layer",
"convertRasterLayerTo": "Convert $t(controlLayers.rasterLayer) To",
"convertControlLayerTo": "Convert $t(controlLayers.controlLayer) To",
"convertInpaintMaskTo": "Convert $t(controlLayers.inpaintMask) To",
"convertRegionalGuidanceTo": "Convert $t(controlLayers.regionalGuidance) To",
"copyRasterLayerTo": "Copy $t(controlLayers.rasterLayer) To",
"copyControlLayerTo": "Copy $t(controlLayers.controlLayer) To",
"copyInpaintMaskTo": "Copy $t(controlLayers.inpaintMask) To",
"copyRegionalGuidanceTo": "Copy $t(controlLayers.regionalGuidance) To",
"newRasterLayer": "New $t(controlLayers.rasterLayer)",
"newControlLayer": "New $t(controlLayers.controlLayer)",
"newInpaintMask": "New $t(controlLayers.inpaintMask)",
"newRegionalGuidance": "New $t(controlLayers.regionalGuidance)",
"transparency": "Transparency",
"enableTransparencyEffect": "Enable Transparency Effect",
"disableTransparencyEffect": "Disable Transparency Effect",
@@ -1713,16 +1780,18 @@
"newGallerySessionDesc": "This will clear the canvas and all settings except for your model selection. Generations will be sent to the gallery.",
"newCanvasSession": "New Canvas Session",
"newCanvasSessionDesc": "This will clear the canvas and all settings except for your model selection. Generations will be staged on the canvas.",
"replaceCurrent": "Replace Current",
"controlLayerEmptyState": "<UploadButton>Upload an image</UploadButton>, drag an image from the <GalleryButton>gallery</GalleryButton> onto this layer, or draw on the canvas to get started.",
"controlMode": {
"controlMode": "Control Mode",
"balanced": "Balanced",
"balanced": "Balanced (recommended)",
"prompt": "Prompt",
"control": "Control",
"megaControl": "Mega Control"
},
"ipAdapterMethod": {
"ipAdapterMethod": "IP Adapter Method",
"full": "Full",
"full": "Style and Composition",
"style": "Style Only",
"composition": "Composition Only"
},
@@ -1754,6 +1823,9 @@
"process": "Process",
"apply": "Apply",
"cancel": "Cancel",
"advanced": "Advanced",
"processingLayerWith": "Processing layer with the {{type}} filter.",
"forMoreControl": "For more control, click Advanced below.",
"spandrel_filter": {
"label": "Image-to-Image Model",
"description": "Run an image-to-image model on the selected layer.",
@@ -1842,16 +1914,24 @@
"apply": "Apply",
"cancel": "Cancel"
},
"segment": {
"autoMask": "Auto Mask",
"selectObject": {
"selectObject": "Select Object",
"pointType": "Point Type",
"foreground": "Foreground",
"background": "Background",
"invertSelection": "Invert Selection",
"include": "Include",
"exclude": "Exclude",
"neutral": "Neutral",
"reset": "Reset",
"apply": "Apply",
"reset": "Reset",
"saveAs": "Save As",
"cancel": "Cancel",
"process": "Process"
"process": "Process",
"help1": "Select a single target object. Add <Bold>Include</Bold> and <Bold>Exclude</Bold> points to indicate which parts of the layer are part of the target object.",
"help2": "Start with one <Bold>Include</Bold> point within the target object. Add more points to refine the selection. Fewer points typically produce better results.",
"help3": "Invert the selection to select everything except the target object.",
"clickToAdd": "Click on the layer to add a point",
"dragToMove": "Drag a point to move it",
"clickToRemove": "Click on a point to remove it"
},
"settings": {
"snapToGrid": {
@@ -1892,6 +1972,8 @@
"newRegionalReferenceImage": "New Regional Reference Image",
"newControlLayer": "New Control Layer",
"newRasterLayer": "New Raster Layer",
"newInpaintMask": "New Inpaint Mask",
"newRegionalGuidance": "New Regional Guidance",
"cropCanvasToBbox": "Crop Canvas to Bbox"
},
"stagingArea": {
@@ -1921,7 +2003,9 @@
"upscaleModelDesc": "Upscale (image to image) model",
"missingUpscaleInitialImage": "Missing initial image for upscaling",
"missingUpscaleModel": "Missing upscale model",
"missingTileControlNetModel": "No valid tile ControlNet models installed"
"missingTileControlNetModel": "No valid tile ControlNet models installed",
"incompatibleBaseModel": "Unsupported main model architecture for upscaling",
"incompatibleBaseModelDesc": "Upscaling is supported for SD1.5 and SDXL architecture models only. Change the main model to enable upscaling."
},
"stylePresets": {
"active": "Active",
@@ -2024,13 +2108,12 @@
},
"whatsNew": {
"whatsNewInInvoke": "What's New in Invoke",
"canvasV2Announcement": {
"newCanvas": "A powerful new control canvas",
"newLayerTypes": "New layer types for even more control",
"fluxSupport": "Support for the Flux family of models",
"readReleaseNotes": "Read Release Notes",
"watchReleaseVideo": "Watch Release Video",
"watchUiUpdatesOverview": "Watch UI Updates Overview"
}
"items": [
"<StrongComponent>SD 3.5</StrongComponent>: Support for Text-to-Image in Workflows with SD 3.5 Medium and Large.",
"<StrongComponent>Canvas</StrongComponent>: Streamlined Control Layer processing and improved default Control settings."
],
"readReleaseNotes": "Read Release Notes",
"watchRecentReleaseVideos": "Watch Recent Release Videos",
"watchUiUpdatesOverview": "Watch UI Updates Overview"
}
}
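
Note: the locale entries above lean on two i18next features — `$t(key)` nesting and `_one`/`_other` plural suffixes selected from a `count` option. The snippet below is a minimal sketch of how such entries resolve at runtime; the init options and logged strings are illustrative only and are not taken from the app's actual i18n setup.

import i18next from 'i18next';

// Minimal sketch: resolve nested keys and plural forms the way the locale
// entries above are written. The resources here are a tiny excerpt.
await i18next.init({
  lng: 'en',
  resources: {
    en: {
      translation: {
        controlLayers: {
          rasterLayer: 'Raster Layer',
          // "$t(...)" is i18next nesting: the referenced key is resolved in place.
          newRasterLayer: 'New $t(controlLayers.rasterLayer)',
          // "_one"/"_other" suffixes are plural forms, picked based on `count`.
          layer_withCount_one: 'Layer ({{count}})',
          layer_withCount_other: 'Layers ({{count}})',
        },
      },
    },
  },
});

i18next.t('controlLayers.newRasterLayer');                 // -> "New Raster Layer"
i18next.t('controlLayers.layer_withCount', { count: 3 });  // -> "Layers (3)"
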

View File

@@ -5,7 +5,7 @@
"reportBugLabel": "Signaler un bug",
"settingsLabel": "Paramètres",
"img2img": "Image vers Image",
"nodes": "Processus",
"nodes": "Workflows",
"upload": "Importer",
"load": "Charger",
"back": "Retour",
@@ -95,7 +95,8 @@
"positivePrompt": "Prompt Positif",
"negativePrompt": "Prompt Négatif",
"ok": "Ok",
"close": "Fermer"
"close": "Fermer",
"clipboard": "Presse-papier"
},
"gallery": {
"galleryImageSize": "Taille de l'image",
@@ -161,7 +162,7 @@
"unstarImage": "Retirer le marquage de l'Image",
"viewerImage": "Visualisation de l'Image",
"imagesSettings": "Paramètres des images de la galerie",
"assetsTab": "Fichiers que vous avez importé pour vos projets.",
"assetsTab": "Fichiers que vous avez importés pour vos projets.",
"imagesTab": "Images que vous avez créées et enregistrées dans Invoke.",
"boardsSettings": "Paramètres des planches"
},
@@ -219,7 +220,6 @@
"typePhraseHere": "Écrire une phrase ici",
"cancel": "Annuler",
"defaultSettingsSaved": "Paramètres par défaut enregistrés",
"hfToken": "Token HuggingFace",
"imageEncoderModelId": "ID du modèle d'encodeur d'image",
"path": "Chemin sur le disque",
"repoVariant": "Variante de dépôt",
@@ -254,7 +254,6 @@
"loraModels": "LoRAs",
"main": "Principal",
"urlOrLocalPathHelper": "Les URL doivent pointer vers un seul fichier. Les chemins locaux peuvent pointer vers un seul fichier ou un dossier pour un seul modèle de diffuseurs.",
"useDefaultSettings": "Utiliser les paramètres par défaut",
"modelImageUpdateFailed": "Mise à jour de l'image du modèle échouée",
"loraTriggerPhrases": "Phrases de déclenchement LoRA",
"mainModelTriggerPhrases": "Phrases de déclenchement du modèle principal",
@@ -284,24 +283,28 @@
"skippingXDuplicates_many": ", en ignorant {{count}} doublons",
"skippingXDuplicates_other": ", en ignorant {{count}} doublons",
"installingModel": "Modèle en cours d'installation",
"installingBundle": "Pack en cours d'installation"
"installingBundle": "Pack en cours d'installation",
"noDefaultSettings": "Aucun paramètre par défaut configuré pour ce modèle. Visitez le Gestionnaire de Modèles pour ajouter des paramètres par défaut.",
"usingDefaultSettings": "Utilisation des paramètres par défaut du modèle",
"defaultSettingsOutOfSync": "Certain paramètres ne correspondent pas aux valeurs par défaut du modèle :",
"restoreDefaultSettings": "Cliquez pour utiliser les paramètres par défaut du modèle."
},
"parameters": {
"images": "Images",
"steps": "Etapes",
"cfgScale": "CFG Echelle",
"steps": "Étapes",
"cfgScale": "Échelle CFG",
"width": "Largeur",
"height": "Hauteur",
"seed": "Graine",
"shuffle": "Mélanger la graine",
"shuffle": "Nouvelle graine",
"noiseThreshold": "Seuil de Bruit",
"perlinNoise": "Bruit de Perlin",
"type": "Type",
"strength": "Force",
"upscaling": "Agrandissement",
"scale": "Echelle",
"scale": "Échelle",
"imageFit": "Ajuster Image Initiale à la Taille de Sortie",
"scaleBeforeProcessing": "Echelle Avant Traitement",
"scaleBeforeProcessing": "Échelle Avant Traitement",
"scaledWidth": "Larg. Échelle",
"scaledHeight": "Haut. Échelle",
"infillMethod": "Méthode de Remplissage",
@@ -422,7 +425,10 @@
"clearIntermediatesWithCount_other": "Effacé {{count}} Intermédiaires",
"informationalPopoversDisabled": "Pop-ups d'information désactivés",
"informationalPopoversDisabledDesc": "Les pop-ups d'information ont été désactivés. Activez-les dans les paramètres.",
"confirmOnNewSession": "Confirmer lors d'une nouvelle session"
"confirmOnNewSession": "Confirmer lors d'une nouvelle session",
"modelDescriptionsDisabledDesc": "Les descriptions des modèles dans les menus déroulants ont été désactivées. Activez-les dans les paramètres.",
"enableModelDescriptions": "Activer les descriptions de modèle dans les menus déroulants",
"modelDescriptionsDisabled": "Descriptions de modèle dans les menus déroulants désactivés"
},
"toast": {
"uploadFailed": "Importation échouée",
@@ -435,22 +441,22 @@
"parameterNotSet": "Paramètre non Rappelé",
"canceled": "Traitement annulé",
"addedToBoard": "Ajouté aux ressources de la planche {{name}}",
"workflowLoaded": "Processus chargé",
"workflowLoaded": "Workflow chargé",
"connected": "Connecté au serveur",
"setNodeField": "Définir comme champ de nœud",
"imageUploadFailed": "Échec de l'importation de l'image",
"loadedWithWarnings": "Processus chargé avec des avertissements",
"loadedWithWarnings": "Workflow chargé avec des avertissements",
"imageUploaded": "Image importée",
"modelAddedSimple": "Modèle ajouté à la file d'attente",
"setControlImage": "Définir comme image de contrôle",
"workflowDeleted": "Processus supprimé",
"workflowDeleted": "Workflow supprimé",
"baseModelChangedCleared_one": "Effacé ou désactivé {{count}} sous-modèle incompatible",
"baseModelChangedCleared_many": "Effacé ou désactivé {{count}} sous-modèles incompatibles",
"baseModelChangedCleared_other": "Effacé ou désactivé {{count}} sous-modèles incompatibles",
"invalidUpload": "Importation invalide",
"problemDownloadingImage": "Impossible de télécharger l'image",
"problemRetrievingWorkflow": "Problème de récupération du processus",
"problemDeletingWorkflow": "Problème de suppression du processus",
"problemRetrievingWorkflow": "Problème de récupération du Workflow",
"problemDeletingWorkflow": "Problème de suppression du Workflow",
"prunedQueue": "File d'attente vidée",
"parameters": "Paramètres",
"modelImportCanceled": "Importation du modèle annulée",
@@ -550,7 +556,7 @@
"accordions": {
"advanced": {
"title": "Avancé",
"options": "$t(accordions.advanced.title) Options"
"options": "Options $t(accordions.advanced.title)"
},
"image": {
"title": "Image"
@@ -631,7 +637,7 @@
"graphQueued": "Graph ajouté à la file d'attente",
"other": "Autre",
"generation": "Génération",
"workflows": "Processus",
"workflows": "Workflows",
"batchFailedToQueue": "Impossible d'ajouter le Lot dans à la file d'attente",
"graphFailedToQueue": "Impossible d'ajouter le graph à la file d'attente",
"item": "Élément",
@@ -704,8 +710,8 @@
"desc": "Rappelle toutes les métadonnées pour l'image actuelle."
},
"loadWorkflow": {
"title": "Charger le processus",
"desc": "Charge le processus enregistré de l'image actuelle (s'il en a un)."
"title": "Ouvrir un Workflow",
"desc": "Charge le workflow enregistré lié à l'image actuelle (s'il en a un)."
},
"recallSeed": {
"desc": "Rappelle la graine pour l'image actuelle.",
@@ -756,8 +762,8 @@
"desc": "Séléctionne l'onglet Agrandissement."
},
"selectWorkflowsTab": {
"desc": "Sélectionne l'onglet Processus.",
"title": "Sélectionner l'onglet Processus"
"desc": "Sélectionne l'onglet Workflows.",
"title": "Sélectionner l'onglet Workflows"
},
"togglePanels": {
"desc": "Affiche ou masque les panneaux gauche et droit en même temps.",
@@ -963,11 +969,11 @@
},
"undo": {
"title": "Annuler",
"desc": "Annule la dernière action de processus."
"desc": "Annule la dernière action de workflow."
},
"redo": {
"title": "Rétablir",
"desc": "Rétablit la dernière action de processus."
"desc": "Rétablit la dernière action de workflow."
},
"addNode": {
"desc": "Ouvre le menu d'ajout de nœud.",
@@ -985,7 +991,7 @@
"desc": "Colle les nœuds et les connections copiés.",
"title": "Coller"
},
"title": "Processus"
"title": "Workflows"
}
},
"popovers": {
@@ -1372,6 +1378,43 @@
"Des valeurs de guidage élevées peuvent entraîner une saturation excessive, et un guidage élevé ou faible peut entraîner des résultats de génération déformés. Le guidage ne s'applique qu'aux modèles FLUX DEV."
],
"heading": "Guidage"
},
"globalReferenceImage": {
"heading": "Image de Référence Globale",
"paragraphs": [
"Applique une image de référence pour influencer l'ensemble de la génération."
]
},
"regionalReferenceImage": {
"heading": "Image de Référence Régionale",
"paragraphs": [
"Pinceau pour appliquer une image de référence à des zones spécifiques."
]
},
"inpainting": {
"heading": "Inpainting",
"paragraphs": [
"Contrôle la zone qui est modifiée, guidé par la force de débruitage."
]
},
"regionalGuidance": {
"heading": "Guide Régional",
"paragraphs": [
"Pinceau pour guider l'emplacement des éléments provenant des prompts globaux."
]
},
"regionalGuidanceAndReferenceImage": {
"heading": "Guide régional et image de référence régionale",
"paragraphs": [
"Pour le Guide Régional, utilisez le pinceau pour indiquer où les éléments des prompts globaux doivent apparaître.",
"Pour l'image de référence régionale, pinceau pour appliquer une image de référence à des zones spécifiques."
]
},
"rasterLayer": {
"heading": "Couche Rastérisation",
"paragraphs": [
"Contenu basé sur les pixels de votre toile, utilisé lors de la génération d'images."
]
}
},
"dynamicPrompts": {
@@ -1392,12 +1435,11 @@
"positivePrompt": "Prompt Positif",
"allPrompts": "Tous les Prompts",
"negativePrompt": "Prompt Négatif",
"seamless": "Sans jointure",
"metadata": "Métadonné",
"scheduler": "Planificateur",
"imageDetails": "Détails de l'Image",
"seed": "Graine",
"workflow": "Processus",
"workflow": "Workflow",
"width": "Largeur",
"Threshold": "Seuil de bruit",
"noMetaData": "Aucune métadonnée trouvée",
@@ -1446,8 +1488,8 @@
"hideMinimapnodes": "Masquer MiniCarte",
"zoomOutNodes": "Dézoomer",
"zoomInNodes": "Zoomer",
"downloadWorkflow": "Télécharger processus en JSON",
"loadWorkflow": "Charger le processus",
"downloadWorkflow": "Exporter le Workflow au format JSON",
"loadWorkflow": "Charger un Workflow",
"reloadNodeTemplates": "Recharger les modèles de nœuds",
"animatedEdges": "Connexions animées",
"cannotConnectToSelf": "Impossible de se connecter à soi-même",
@@ -1470,16 +1512,16 @@
"float": "Flottant",
"mismatchedVersion": "Nœud invalide : le nœud {{node}} de type {{type}} a une version incompatible (essayez de mettre à jour?)",
"missingTemplate": "Nœud invalide : le nœud {{node}} de type {{type}} modèle manquant (non installé?)",
"noWorkflow": "Pas de processus",
"noWorkflow": "Pas de Workflow",
"validateConnectionsHelp": "Prévenir la création de connexions invalides et l'invocation de graphes invalides",
"workflowSettings": "Paramètres de l'Éditeur de Processus",
"workflowValidation": "Erreur de validation du processus",
"workflowSettings": "Paramètres de l'Éditeur de Workflow",
"workflowValidation": "Erreur de validation du Workflow",
"executionStateInProgress": "En cours",
"node": "Noeud",
"scheduler": "Planificateur",
"notes": "Notes",
"notesDescription": "Ajouter des notes sur votre processus",
"unableToLoadWorkflow": "Impossible de charger le processus",
"notesDescription": "Ajouter des notes sur votre workflow",
"unableToLoadWorkflow": "Impossible de charger le Workflow",
"addNode": "Ajouter un nœud",
"problemSettingTitle": "Problème lors de définition du Titre",
"connectionWouldCreateCycle": "La connexion créerait un cycle",
@@ -1502,7 +1544,7 @@
"noOutputRecorded": "Aucun résultat enregistré",
"removeLinearView": "Retirer de la vue linéaire",
"snapToGrid": "Aligner sur la grille",
"workflow": "Processus",
"workflow": "Workflow",
"updateApp": "Mettre à jour l'application",
"updateNode": "Mettre à jour le nœud",
"nodeOutputs": "Sorties de nœud",
@@ -1515,7 +1557,7 @@
"string": "Chaîne de caractères",
"workflowName": "Nom",
"snapToGridHelp": "Aligner les nœuds sur la grille lors du déplacement",
"unableToValidateWorkflow": "Impossible de valider le processus",
"unableToValidateWorkflow": "Impossible de valider le Workflow",
"validateConnections": "Valider les connexions et le graphique",
"unableToUpdateNodes_one": "Impossible de mettre à jour {{count}} nœud",
"unableToUpdateNodes_many": "Impossible de mettre à jour {{count}} nœuds",
@@ -1528,15 +1570,15 @@
"nodePack": "Paquet de nœuds",
"sourceNodeDoesNotExist": "Connexion invalide : le nœud source/de sortie {{node}} n'existe pas",
"sourceNodeFieldDoesNotExist": "Connexion invalide : {{node}}.{{field}} n'existe pas",
"unableToGetWorkflowVersion": "Impossible d'obtenir la version du schéma de processus",
"newWorkflowDesc2": "Votre processus actuel comporte des modifications non enregistrées.",
"unableToGetWorkflowVersion": "Impossible d'obtenir la version du schéma du Workflow",
"newWorkflowDesc2": "Votre workflow actuel comporte des modifications non enregistrées.",
"deletedInvalidEdge": "Connexion invalide supprimé {{source}} -> {{target}}",
"targetNodeDoesNotExist": "Connexion invalide : le nœud cible/entrée {{node}} n'existe pas",
"targetNodeFieldDoesNotExist": "Connexion invalide : le champ {{node}}.{{field}} n'existe pas",
"nodeVersion": "Version du noeud",
"clearWorkflowDesc2": "Votre processus actuel comporte des modifications non enregistrées.",
"clearWorkflow": "Effacer le Processus",
"clearWorkflowDesc": "Effacer ce processus et en commencer un nouveau?",
"clearWorkflowDesc2": "Votre workflow actuel comporte des modifications non enregistrées.",
"clearWorkflow": "Effacer le Workflow",
"clearWorkflowDesc": "Effacer ce workflow et en commencer un nouveau?",
"unsupportedArrayItemType": "type d'élément de tableau non pris en charge \"{{type}}\"",
"addLinearView": "Ajouter à la vue linéaire",
"collectionOrScalarFieldType": "{{name}} (Unique ou Collection)",
@@ -1545,7 +1587,7 @@
"ipAdapter": "IP-Adapter",
"viewMode": "Utiliser en vue linéaire",
"collectionFieldType": "{{name}} (Collection)",
"newWorkflow": "Nouveau processus",
"newWorkflow": "Nouveau Workflow",
"reorderLinearView": "Réorganiser la vue linéaire",
"unknownOutput": "Sortie inconnue : {{name}}",
"outputFieldTypeParseError": "Impossible d'analyser le type du champ de sortie {{node}}.{{field}} ({{message}})",
@@ -1555,13 +1597,13 @@
"unknownFieldType": "$t(nodes.unknownField) type : {{type}}",
"inputFieldTypeParseError": "Impossible d'analyser le type du champ d'entrée {{node}}.{{field}} ({{message}})",
"unableToExtractSchemaNameFromRef": "impossible d'extraire le nom du schéma à partir de la référence",
"editMode": "Modifier dans l'éditeur de processus",
"unknownErrorValidatingWorkflow": "Erreur inconnue lors de la validation du processus",
"editMode": "Modifier dans l'éditeur de Workflow",
"unknownErrorValidatingWorkflow": "Erreur inconnue lors de la validation du Workflow",
"updateAllNodes": "Mettre à jour les nœuds",
"allNodesUpdated": "Tous les nœuds mis à jour",
"newWorkflowDesc": "Créer un nouveau processus?",
"newWorkflowDesc": "Créer un nouveau workflow?",
"edit": "Modifier",
"noFieldsViewMode": "Ce processus n'a aucun champ sélectionné à afficher. Consultez le processus complet pour configurer les valeurs.",
"noFieldsViewMode": "Ce workflow n'a aucun champ sélectionné à afficher. Consultez le workflow complet pour configurer les valeurs.",
"graph": "Graph",
"modelAccessError": "Impossible de trouver le modèle {{key}}, réinitialisation aux paramètres par défaut",
"showEdgeLabelsHelp": "Afficher le nom sur les connections, indiquant les nœuds connectés",
@@ -1575,9 +1617,9 @@
"missingInvocationTemplate": "Modèle d'invocation manquant",
"imageAccessError": "Impossible de trouver l'image {{image_name}}, réinitialisation à la valeur par défaut",
"boardAccessError": "Impossible de trouver la planche {{board_id}}, réinitialisation à la valeur par défaut",
"workflowHelpText": "Besoin d'aide? Consultez notre guide sur <LinkComponent>Comment commencer avec les Processus</LinkComponent>.",
"noWorkflows": "Aucun Processus",
"noMatchingWorkflows": "Aucun processus correspondant"
"workflowHelpText": "Besoin d'aide? Consultez notre guide sur <LinkComponent>Comment commencer avec les Workflows</LinkComponent>.",
"noWorkflows": "Aucun Workflows",
"noMatchingWorkflows": "Aucun Workflows correspondant"
},
"models": {
"noMatchingModels": "Aucun modèle correspondant",
@@ -1594,59 +1636,51 @@
},
"workflows": {
"workflowLibrary": "Bibliothèque",
"loading": "Chargement des processus",
"searchWorkflows": "Rechercher des processus",
"workflowCleared": "Processus effacé",
"loading": "Chargement des Workflows",
"searchWorkflows": "Chercher des Workflows",
"workflowCleared": "Workflow effacé",
"noDescription": "Aucune description",
"deleteWorkflow": "Supprimer le processus",
"openWorkflow": "Ouvrir le processus",
"deleteWorkflow": "Supprimer le Workflow",
"openWorkflow": "Ouvrir le Workflow",
"uploadWorkflow": "Charger à partir d'un fichier",
"workflowName": "Nom du processus",
"unnamedWorkflow": "Processus sans nom",
"saveWorkflowAs": "Enregistrer le processus sous",
"workflows": "Processus",
"savingWorkflow": "Enregistrement du processus...",
"saveWorkflowToProject": "Enregistrer le processus dans le projet",
"workflowName": "Nom du Workflow",
"unnamedWorkflow": "Workflow sans nom",
"saveWorkflowAs": "Enregistrer le Workflow sous",
"workflows": "Workflows",
"savingWorkflow": "Enregistrement du Workflow...",
"saveWorkflowToProject": "Enregistrer le Workflow dans le projet",
"downloadWorkflow": "Enregistrer dans le fichier",
"saveWorkflow": "Enregistrer le processus",
"problemSavingWorkflow": "Problème de sauvegarde du processus",
"workflowEditorMenu": "Menu de l'Éditeur de Processus",
"newWorkflowCreated": "Nouveau processus créé",
"clearWorkflowSearchFilter": "Réinitialiser le filtre de recherche de processus",
"problemLoading": "Problème de chargement des processus",
"workflowSaved": "Processus enregistré",
"noWorkflows": "Pas de processus",
"saveWorkflow": "Enregistrer le Workflow",
"problemSavingWorkflow": "Problème de sauvegarde du Workflow",
"workflowEditorMenu": "Menu de l'Éditeur de Workflow",
"newWorkflowCreated": "Nouveau Workflow créé",
"clearWorkflowSearchFilter": "Réinitialiser le filtre de recherche de Workflow",
"problemLoading": "Problème de chargement des Workflows",
"workflowSaved": "Workflow enregistré",
"noWorkflows": "Pas de Workflows",
"ascending": "Ascendant",
"loadFromGraph": "Charger le processus à partir du graphique",
"loadFromGraph": "Charger le Workflow à partir du graphique",
"descending": "Descendant",
"created": "Créé",
"updated": "Mis à jour",
"loadWorkflow": "$t(common.load) Processus",
"loadWorkflow": "$t(common.load) Workflow",
"convertGraph": "Convertir le graphique",
"opened": "Ouvert",
"name": "Nom",
"autoLayout": "Mise en page automatique",
"defaultWorkflows": "Processus par défaut",
"userWorkflows": "Processus utilisateur",
"projectWorkflows": "Processus du projet",
"defaultWorkflows": "Workflows par défaut",
"userWorkflows": "Workflows de l'utilisateur",
"projectWorkflows": "Workflows du projet",
"copyShareLink": "Copier le lien de partage",
"chooseWorkflowFromLibrary": "Choisir le Processus dans la Bibliothèque",
"chooseWorkflowFromLibrary": "Choisir le Workflow dans la Bibliothèque",
"uploadAndSaveWorkflow": "Importer dans la bibliothèque",
"edit": "Modifer",
"deleteWorkflow2": "Êtes-vous sûr de vouloir supprimer ce processus? Ceci ne peut pas être annulé.",
"deleteWorkflow2": "Êtes-vous sûr de vouloir supprimer ce Workflow? Cette action ne peut pas être annulé.",
"download": "Télécharger",
"copyShareLinkForWorkflow": "Copier le lien de partage pour le processus",
"copyShareLinkForWorkflow": "Copier le lien de partage pour le Workflow",
"delete": "Supprimer"
},
"whatsNew": {
"canvasV2Announcement": {
"watchReleaseVideo": "Regarder la vidéo de lancement",
"newLayerTypes": "Nouveaux types de couches pour un contrôle encore plus précis",
"fluxSupport": "Support pour la famille de modèles Flux",
"readReleaseNotes": "Lire les notes de version",
"newCanvas": "Une nouvelle Toile de contrôle puissant",
"watchUiUpdatesOverview": "Regarder l'aperçu des mises à jour de l'UI"
},
"whatsNewInInvoke": "Quoi de neuf dans Invoke"
},
"ui": {
@@ -1657,7 +1691,7 @@
"gallery": "Galerie",
"upscalingTab": "$t(ui.tabs.upscaling) $t(common.tab)",
"generation": "Génération",
"workflows": "Processus",
"workflows": "Workflows",
"workflowsTab": "$t(ui.tabs.workflows) $t(common.tab)",
"models": "Modèles",
"modelsTab": "$t(ui.tabs.models) $t(common.tab)"
@@ -1767,7 +1801,9 @@
"bboxGroup": "Créer à partir de la bounding box",
"newRegionalReferenceImage": "Nouvelle image de référence régionale",
"newGlobalReferenceImage": "Nouvelle image de référence globale",
"newControlLayer": "Nouveau couche de contrôle"
"newControlLayer": "Nouveau couche de contrôle",
"newInpaintMask": "Nouveau Masque Inpaint",
"newRegionalGuidance": "Nouveau Guide Régional"
},
"bookmark": "Marque-page pour Changement Rapide",
"saveLayerToAssets": "Enregistrer la couche dans les ressources",
@@ -1780,8 +1816,6 @@
"on": "Activé",
"label": "Aligner sur la grille"
},
"isolatedFilteringPreview": "Aperçu de filtrage isolé",
"isolatedTransformingPreview": "Aperçu de transformation isolée",
"invertBrushSizeScrollDirection": "Inverser le défilement pour la taille du pinceau",
"pressureSensitivity": "Sensibilité à la pression",
"preserveMask": {
@@ -1789,9 +1823,10 @@
"alert": "Préserver la zone masquée"
},
"isolatedPreview": "Aperçu Isolé",
"isolatedStagingPreview": "Aperçu de l'attente isolé"
"isolatedStagingPreview": "Aperçu de l'attente isolé",
"isolatedLayerPreview": "Aperçu de la couche isolée",
"isolatedLayerPreviewDesc": "Pour afficher uniquement cette couche lors de l'exécution d'opérations telles que le filtrage ou la transformation."
},
"convertToRasterLayer": "Convertir en Couche de Rastérisation",
"transparency": "Transparence",
"moveBackward": "Reculer",
"rectangle": "Rectangle",
@@ -1914,7 +1949,6 @@
"globalReferenceImage_withCount_one": "$t(controlLayers.globalReferenceImage)",
"globalReferenceImage_withCount_many": "Images de référence globales",
"globalReferenceImage_withCount_other": "Images de référence globales",
"convertToControlLayer": "Convertir en Couche de Contrôle",
"layer_withCount_one": "Couche {{count}}",
"layer_withCount_many": "Couches {{count}}",
"layer_withCount_other": "Couches {{count}}",
@@ -1977,7 +2011,41 @@
"pullBboxIntoReferenceImageOk": "Bounding Box insérée dans l'Image de référence",
"controlLayer_withCount_one": "$t(controlLayers.controlLayer)",
"controlLayer_withCount_many": "Controler les couches",
"controlLayer_withCount_other": "Controler les couches"
"controlLayer_withCount_other": "Controler les couches",
"copyInpaintMaskTo": "Copier $t(controlLayers.inpaintMask) vers",
"copyRegionalGuidanceTo": "Copier $t(controlLayers.regionalGuidance) vers",
"convertRasterLayerTo": "Convertir $t(controlLayers.rasterLayer) vers",
"selectObject": {
"selectObject": "Sélectionner l'objet",
"clickToAdd": "Cliquez sur la couche pour ajouter un point",
"apply": "Appliquer",
"cancel": "Annuler",
"dragToMove": "Faites glisser un point pour le déplacer",
"clickToRemove": "Cliquez sur un point pour le supprimer",
"include": "Inclure",
"invertSelection": "Sélection Inversée",
"saveAs": "Enregistrer sous",
"neutral": "Neutre",
"pointType": "Type de point",
"exclude": "Exclure",
"process": "Traiter",
"reset": "Réinitialiser",
"help1": "Sélectionnez un seul objet cible. Ajoutez des points <Bold>Inclure</Bold> et <Bold>Exclure</Bold> pour indiquer quelles parties de la couche font partie de l'objet cible.",
"help2": "Commencez par un point <Bold>Inclure</Bold> au sein de l'objet cible. Ajoutez d'autres points pour affiner la sélection. Moins de points produisent généralement de meilleurs résultats.",
"help3": "Inversez la sélection pour sélectionner tout sauf l'objet cible."
},
"canvasAsControlLayer": "$t(controlLayers.canvas) en tant que $t(controlLayers.controlLayer)",
"convertRegionalGuidanceTo": "Convertir $t(controlLayers.regionalGuidance) vers",
"copyRasterLayerTo": "Copier $t(controlLayers.rasterLayer) vers",
"newControlLayer": "Nouveau $t(controlLayers.controlLayer)",
"newRegionalGuidance": "Nouveau $t(controlLayers.regionalGuidance)",
"replaceCurrent": "Remplacer Actuel",
"convertControlLayerTo": "Convertir $t(controlLayers.controlLayer) vers",
"convertInpaintMaskTo": "Convertir $t(controlLayers.inpaintMask) vers",
"copyControlLayerTo": "Copier $t(controlLayers.controlLayer) vers",
"newInpaintMask": "Nouveau $t(controlLayers.inpaintMask)",
"newRasterLayer": "Nouveau $t(controlLayers.rasterLayer)",
"canvasAsRasterLayer": "$t(controlLayers.canvas) en tant que $t(controlLayers.rasterLayer)"
},
"upscaling": {
"exceedsMaxSizeDetails": "La limite maximale d'agrandissement est de {{maxUpscaleDimension}}x{{maxUpscaleDimension}} pixels. Veuillez essayer une image plus petite ou réduire votre sélection d'échelle.",
@@ -2048,7 +2116,7 @@
"config": "Configuration",
"canvas": "Toile",
"generation": "Génération",
"workflows": "Processus",
"workflows": "Workflows",
"system": "Système",
"models": "Modèles",
"logNamespaces": "Journalisation des espaces de noms",
@@ -2071,9 +2139,9 @@
"newUserExperience": {
"toGetStarted": "Pour commencer, saisissez un prompt dans la boîte et cliquez sur <StrongComponent>Invoke</StrongComponent> pour générer votre première image. Sélectionnez un template de prompt pour améliorer les résultats. Vous pouvez choisir de sauvegarder vos images directement dans la <StrongComponent>Galerie</StrongComponent> ou de les modifier sur la <StrongComponent>Toile</StrongComponent>.",
"gettingStartedSeries": "Vous souhaitez plus de conseils? Consultez notre <LinkComponent>Série de démarrage</LinkComponent> pour des astuces sur l'exploitation du plein potentiel de l'Invoke Studio.",
"noModelsInstalled": "Il semblerait qu'aucun modèle ne soit installé",
"noModelsInstalled": "Il semble qu'aucun modèle ne soit installé",
"downloadStarterModels": "Télécharger les modèles de démarrage",
"importModels": "Importer Modèles",
"importModels": "Importer des Modèles",
"toGetStartedLocal": "Pour commencer, assurez-vous de télécharger ou d'importer des modèles nécessaires pour exécuter Invoke. Ensuite, saisissez le prompt dans la boîte et cliquez sur <StrongComponent>Invoke</StrongComponent> pour générer votre première image. Sélectionnez un template de prompt pour améliorer les résultats. Vous pouvez choisir de sauvegarder vos images directement sur <StrongComponent>Galerie</StrongComponent> ou les modifier sur la <StrongComponent>Toile</StrongComponent>."
},
"upsell": {

View File

@@ -92,7 +92,9 @@
"none": "Niente",
"new": "Nuovo",
"view": "Vista",
"close": "Chiudi"
"close": "Chiudi",
"clipboard": "Appunti",
"ok": "Ok"
},
"gallery": {
"galleryImageSize": "Dimensione dell'immagine",
@@ -542,7 +544,6 @@
"defaultSettingsSaved": "Impostazioni predefinite salvate",
"defaultSettings": "Impostazioni predefinite",
"metadata": "Metadati",
"useDefaultSettings": "Usa le impostazioni predefinite",
"triggerPhrases": "Frasi Trigger",
"deleteModelImage": "Elimina l'immagine del modello",
"localOnly": "solo locale",
@@ -588,7 +589,15 @@
"installingXModels_many": "Installazione di {{count}} modelli",
"installingXModels_other": "Installazione di {{count}} modelli",
"includesNModels": "Include {{n}} modelli e le loro dipendenze",
"starterBundleHelpText": "Installa facilmente tutti i modelli necessari per iniziare con un modello base, tra cui un modello principale, controlnet, adattatori IP e altro. Selezionando un pacchetto salterai tutti i modelli che hai già installato."
"starterBundleHelpText": "Installa facilmente tutti i modelli necessari per iniziare con un modello base, tra cui un modello principale, controlnet, adattatori IP e altro. Selezionando un pacchetto salterai tutti i modelli che hai già installato.",
"noDefaultSettings": "Nessuna impostazione predefinita configurata per questo modello. Visita Gestione Modelli per aggiungere impostazioni predefinite.",
"defaultSettingsOutOfSync": "Alcune impostazioni non corrispondono a quelle predefinite del modello:",
"restoreDefaultSettings": "Fare clic per utilizzare le impostazioni predefinite del modello.",
"usingDefaultSettings": "Utilizzo delle impostazioni predefinite del modello",
"huggingFace": "HuggingFace",
"huggingFaceRepoID": "HuggingFace Repository ID",
"clipEmbed": "CLIP Embed",
"t5Encoder": "T5 Encoder"
},
"parameters": {
"images": "Immagini",
@@ -689,7 +698,8 @@
"boxBlur": "Sfocatura Box",
"staged": "Maschera espansa",
"optimizedImageToImage": "Immagine-a-immagine ottimizzata",
"sendToCanvas": "Invia alla Tela"
"sendToCanvas": "Invia alla Tela",
"coherenceMinDenoise": "Riduzione minima del rumore"
},
"settings": {
"models": "Modelli",
@@ -724,7 +734,10 @@
"reloadingIn": "Ricaricando in",
"informationalPopoversDisabled": "Testo informativo a comparsa disabilitato",
"informationalPopoversDisabledDesc": "I testi informativi a comparsa sono disabilitati. Attivali nelle impostazioni.",
"confirmOnNewSession": "Conferma su nuova sessione"
"confirmOnNewSession": "Conferma su nuova sessione",
"enableModelDescriptions": "Abilita le descrizioni dei modelli nei menu a discesa",
"modelDescriptionsDisabled": "Descrizioni dei modelli nei menu a discesa disabilitate",
"modelDescriptionsDisabledDesc": "Le descrizioni dei modelli nei menu a discesa sono state disabilitate. Abilitale nelle Impostazioni."
},
"toast": {
"uploadFailed": "Caricamento fallito",
@@ -1076,7 +1089,8 @@
"noLoRAsInstalled": "Nessun LoRA installato",
"addLora": "Aggiungi LoRA",
"defaultVAE": "VAE predefinito",
"concepts": "Concetti"
"concepts": "Concetti",
"lora": "LoRA"
},
"invocationCache": {
"disable": "Disabilita",
@@ -1133,7 +1147,8 @@
"paragraphs": [
"Scegli quanti livelli del modello CLIP saltare.",
"Alcuni modelli funzionano meglio con determinate impostazioni di CLIP Skip."
]
],
"heading": "CLIP Skip"
},
"compositingCoherencePass": {
"heading": "Passaggio di Coerenza",
@@ -1492,6 +1507,42 @@
"Controlla quanto il prompt influenza il processo di generazione.",
"Valori di guida elevati possono causare sovrasaturazione e una guida elevata o bassa può causare risultati di generazione distorti. La guida si applica solo ai modelli FLUX DEV."
]
},
"regionalReferenceImage": {
"paragraphs": [
"Pennello per applicare un'immagine di riferimento ad aree specifiche."
],
"heading": "Immagine di riferimento Regionale"
},
"rasterLayer": {
"paragraphs": [
"Contenuto basato sui pixel della tua tela, utilizzato durante la generazione dell'immagine."
],
"heading": "Livello Raster"
},
"regionalGuidance": {
"heading": "Guida Regionale",
"paragraphs": [
"Pennello per guidare la posizione in cui devono apparire gli elementi dei prompt globali."
]
},
"regionalGuidanceAndReferenceImage": {
"heading": "Guida regionale e immagine di riferimento regionale",
"paragraphs": [
"Per la Guida Regionale, utilizzare il pennello per indicare dove devono apparire gli elementi dei prompt globali.",
"Per l'immagine di riferimento regionale, utilizzare il pennello per applicare un'immagine di riferimento ad aree specifiche."
]
},
"globalReferenceImage": {
"heading": "Immagine di riferimento Globale",
"paragraphs": [
"Applica un'immagine di riferimento per influenzare l'intera generazione."
]
},
"inpainting": {
"paragraphs": [
"Controlla quale area viene modificata, in base all'intensità di riduzione del rumore."
]
}
},
"sdxl": {
@@ -1513,7 +1564,6 @@
"refinerSteps": "Passi Affinamento"
},
"metadata": {
"seamless": "Senza giunture",
"positivePrompt": "Prompt positivo",
"negativePrompt": "Prompt negativo",
"generationMode": "Modalità generazione",
@@ -1541,7 +1591,10 @@
"parsingFailed": "Analisi non riuscita",
"recallParameter": "Richiama {{label}}",
"canvasV2Metadata": "Tela",
"guidance": "Guida"
"guidance": "Guida",
"seamlessXAxis": "Asse X senza giunte",
"seamlessYAxis": "Asse Y senza giunte",
"vae": "VAE"
},
"hrf": {
"enableHrf": "Abilita Correzione Alta Risoluzione",
@@ -1638,11 +1691,11 @@
"regionalGuidance": "Guida regionale",
"opacity": "Opacità",
"mergeVisible": "Fondi il visibile",
"mergeVisibleOk": "Livelli visibili uniti",
"mergeVisibleOk": "Livelli uniti",
"deleteReferenceImage": "Elimina l'immagine di riferimento",
"referenceImage": "Immagine di riferimento",
"fitBboxToLayers": "Adatta il riquadro di delimitazione ai livelli",
"mergeVisibleError": "Errore durante l'unione dei livelli visibili",
"mergeVisibleError": "Errore durante l'unione dei livelli",
"regionalReferenceImage": "Immagine di riferimento Regionale",
"newLayerFromImage": "Nuovo livello da immagine",
"newCanvasFromImage": "Nuova tela da immagine",
@@ -1734,7 +1787,7 @@
"composition": "Solo Composizione",
"ipAdapterMethod": "Metodo Adattatore IP"
},
"showingType": "Mostrare {{type}}",
"showingType": "Mostra {{type}}",
"dynamicGrid": "Griglia dinamica",
"tool": {
"view": "Muovi",
@@ -1862,8 +1915,6 @@
"layer_withCount_one": "Livello ({{count}})",
"layer_withCount_many": "Livelli ({{count}})",
"layer_withCount_other": "Livelli ({{count}})",
"convertToControlLayer": "Converti in livello di controllo",
"convertToRasterLayer": "Converti in livello raster",
"unlocked": "Sbloccato",
"enableTransparencyEffect": "Abilita l'effetto trasparenza",
"replaceLayer": "Sostituisci livello",
@@ -1876,9 +1927,7 @@
"newCanvasSession": "Nuova sessione Tela",
"deleteSelected": "Elimina selezione",
"settings": {
"isolatedFilteringPreview": "Anteprima del filtraggio isolata",
"isolatedStagingPreview": "Anteprima di generazione isolata",
"isolatedTransformingPreview": "Anteprima di trasformazione isolata",
"isolatedPreview": "Anteprima isolata",
"invertBrushSizeScrollDirection": "Inverti scorrimento per dimensione pennello",
"snapToGrid": {
@@ -1890,7 +1939,9 @@
"preserveMask": {
"alert": "Preservare la regione mascherata",
"label": "Preserva la regione mascherata"
}
},
"isolatedLayerPreview": "Anteprima livello isolato",
"isolatedLayerPreviewDesc": "Se visualizzare solo questo livello quando si eseguono operazioni come il filtraggio o la trasformazione."
},
"transform": {
"reset": "Reimposta",
@@ -1935,9 +1986,46 @@
"canvasGroup": "Tela",
"newRasterLayer": "Nuovo Livello Raster",
"saveCanvasToGallery": "Salva la Tela nella Galleria",
"saveToGalleryGroup": "Salva nella Galleria"
"saveToGalleryGroup": "Salva nella Galleria",
"newInpaintMask": "Nuova maschera Inpaint",
"newRegionalGuidance": "Nuova Guida Regionale"
},
"newImg2ImgCanvasFromImage": "Nuova Immagine da immagine"
"newImg2ImgCanvasFromImage": "Nuova Immagine da immagine",
"copyRasterLayerTo": "Copia $t(controlLayers.rasterLayer) in",
"copyControlLayerTo": "Copia $t(controlLayers.controlLayer) in",
"copyInpaintMaskTo": "Copia $t(controlLayers.inpaintMask) in",
"selectObject": {
"dragToMove": "Trascina un punto per spostarlo",
"clickToAdd": "Fare clic sul livello per aggiungere un punto",
"clickToRemove": "Clicca su un punto per rimuoverlo",
"help3": "Inverte la selezione per selezionare tutto tranne l'oggetto di destinazione.",
"pointType": "Tipo punto",
"apply": "Applica",
"reset": "Reimposta",
"cancel": "Annulla",
"selectObject": "Seleziona oggetto",
"invertSelection": "Inverti selezione",
"exclude": "Escludi",
"include": "Includi",
"neutral": "Neutro",
"saveAs": "Salva come",
"process": "Elabora",
"help1": "Seleziona un singolo oggetto di destinazione. Aggiungi i punti <Bold>Includi</Bold> e <Bold>Escludi</Bold> per indicare quali parti del livello fanno parte dell'oggetto di destinazione.",
"help2": "Inizia con un punto <Bold>Include</Bold> all'interno dell'oggetto di destinazione. Aggiungi altri punti per perfezionare la selezione. Meno punti in genere producono risultati migliori."
},
"convertControlLayerTo": "Converti $t(controlLayers.controlLayer) in",
"newRasterLayer": "Nuovo $t(controlLayers.rasterLayer)",
"newRegionalGuidance": "Nuova $t(controlLayers.regionalGuidance)",
"canvasAsRasterLayer": "$t(controlLayers.canvas) come $t(controlLayers.rasterLayer)",
"canvasAsControlLayer": "$t(controlLayers.canvas) come $t(controlLayers.controlLayer)",
"convertInpaintMaskTo": "Converti $t(controlLayers.inpaintMask) in",
"copyRegionalGuidanceTo": "Copia $t(controlLayers.regionalGuidance) in",
"convertRasterLayerTo": "Converti $t(controlLayers.rasterLayer) in",
"convertRegionalGuidanceTo": "Converti $t(controlLayers.regionalGuidance) in",
"newControlLayer": "Nuovo $t(controlLayers.controlLayer)",
"newInpaintMask": "Nuova $t(controlLayers.inpaintMask)",
"replaceCurrent": "Sostituisci corrente",
"mergeDown": "Unire in basso"
},
"ui": {
"tabs": {
@@ -2030,15 +2118,13 @@
"toGetStartedLocal": "Per iniziare, assicurati di scaricare o importare i modelli necessari per eseguire Invoke. Quindi, inserisci un prompt nella casella e fai clic su <StrongComponent>Invoke</StrongComponent> per generare la tua prima immagine. Seleziona un modello di prompt per migliorare i risultati. Puoi scegliere di salvare le tue immagini direttamente nella <StrongComponent>Galleria</StrongComponent> o modificarle nella <StrongComponent>Tela</StrongComponent>."
},
"whatsNew": {
"canvasV2Announcement": {
"readReleaseNotes": "Leggi le Note di Rilascio",
"fluxSupport": "Supporto per la famiglia di modelli Flux",
"newCanvas": "Una nuova potente tela di controllo",
"watchReleaseVideo": "Guarda il video di rilascio",
"watchUiUpdatesOverview": "Guarda le novità dell'interfaccia",
"newLayerTypes": "Nuovi tipi di livello per un miglior controllo"
},
"whatsNewInInvoke": "Novità in Invoke"
"whatsNewInInvoke": "Novità in Invoke",
"line2": "Supporto Flux esteso, ora con immagini di riferimento globali",
"line3": "Tooltip e menu contestuali migliorati",
"readReleaseNotes": "Leggi le note di rilascio",
"watchRecentReleaseVideos": "Guarda i video su questa versione",
"line1": "Strumento <ItalicComponent>Seleziona oggetto</ItalicComponent> per la selezione e la modifica precise degli oggetti",
"watchUiUpdatesOverview": "Guarda le novità dell'interfaccia"
},
"system": {
"logLevel": {

View File

@@ -229,7 +229,6 @@
"submitSupportTicket": "サポート依頼を送信する"
},
"metadata": {
"seamless": "シームレス",
"Threshold": "ノイズ閾値",
"seed": "シード",
"width": "幅",

View File

@@ -155,7 +155,6 @@
"path": "Pad",
"triggerPhrases": "Triggerzinnen",
"typePhraseHere": "Typ zin hier in",
"useDefaultSettings": "Gebruik standaardinstellingen",
"modelImageDeleteFailed": "Fout bij verwijderen modelafbeelding",
"modelImageUpdated": "Modelafbeelding bijgewerkt",
"modelImageUpdateFailed": "Fout bij bijwerken modelafbeelding",
@@ -666,7 +665,6 @@
}
},
"metadata": {
"seamless": "Naadloos",
"positivePrompt": "Positieve prompt",
"negativePrompt": "Negatieve prompt",
"generationMode": "Genereermodus",

View File

@@ -544,7 +544,6 @@
"scanResults": "Результаты сканирования",
"source": "Источник",
"triggerPhrases": "Триггерные фразы",
"useDefaultSettings": "Использовать стандартные настройки",
"modelName": "Название модели",
"modelSettings": "Настройки модели",
"upcastAttention": "Внимание",
@@ -573,7 +572,6 @@
"simpleModelPlaceholder": "URL или путь к локальному файлу или папке diffusers",
"urlOrLocalPath": "URL или локальный путь",
"urlOrLocalPathHelper": "URL-адреса должны указывать на один файл. Локальные пути могут указывать на один файл или папку для одной модели диффузоров.",
"hfToken": "Токен HuggingFace",
"starterModels": "Стартовые модели",
"textualInversions": "Текстовые инверсии",
"loraModels": "LoRAs",
@@ -1402,7 +1400,6 @@
}
},
"metadata": {
"seamless": "Бесшовность",
"positivePrompt": "Запрос",
"negativePrompt": "Негативный запрос",
"generationMode": "Режим генерации",
@@ -1836,14 +1833,12 @@
},
"settings": {
"isolatedPreview": "Изолированный предпросмотр",
"isolatedTransformingPreview": "Изолированный предпросмотр преобразования",
"invertBrushSizeScrollDirection": "Инвертировать прокрутку для размера кисти",
"snapToGrid": {
"label": "Привязка к сетке",
"on": "Вкл",
"off": "Выкл"
},
"isolatedFilteringPreview": "Изолированный предпросмотр фильтрации",
"pressureSensitivity": "Чувствительность к давлению",
"isolatedStagingPreview": "Изолированный предпросмотр на промежуточной стадии",
"preserveMask": {
@@ -1865,7 +1860,6 @@
"enableAutoNegative": "Включить авто негатив",
"maskFill": "Заполнение маски",
"viewProgressInViewer": "Просматривайте прогресс и результаты в <Btn>Просмотрщике изображений</Btn>.",
"convertToRasterLayer": "Конвертировать в растровый слой",
"tool": {
"move": "Двигать",
"bbox": "Ограничительная рамка",
@@ -1933,7 +1927,6 @@
"newGallerySession": "Новая сессия галереи",
"sendToCanvasDesc": "Нажатие кнопки Invoke отображает вашу текущую работу на холсте.",
"globalReferenceImages_withCount_hidden": "Глобальные эталонные изображения ({{count}} скрыто)",
"convertToControlLayer": "Конвертировать в контрольный слой",
"layer_withCount_one": "Слой ({{count}})",
"layer_withCount_few": "Слои ({{count}})",
"layer_withCount_many": "Слои ({{count}})",
@@ -2063,14 +2056,6 @@
}
},
"whatsNew": {
"canvasV2Announcement": {
"newLayerTypes": "Новые типы слоев для еще большего контроля",
"readReleaseNotes": "Прочитать информацию о выпуске",
"watchReleaseVideo": "Смотреть видео о выпуске",
"fluxSupport": "Поддержка семейства моделей Flux",
"newCanvas": "Новый мощный холст управления",
"watchUiUpdatesOverview": "Обзор обновлений пользовательского интерфейса"
},
"whatsNewInInvoke": "Что нового в Invoke"
},
"newUserExperience": {

View File

@@ -82,7 +82,21 @@
"dontShowMeThese": "请勿显示这些内容",
"beta": "测试版",
"toResolve": "解决",
"tab": "标签页"
"tab": "标签页",
"apply": "应用",
"edit": "编辑",
"off": "关",
"loadingImage": "正在加载图片",
"ok": "确定",
"placeholderSelectAModel": "选择一个模型",
"close": "关闭",
"reset": "重设",
"none": "无",
"new": "新建",
"view": "视图",
"alpha": "透明度通道",
"openInViewer": "在查看器中打开",
"clipboard": "剪贴板"
},
"gallery": {
"galleryImageSize": "预览大小",
@@ -124,7 +138,7 @@
"selectAllOnPage": "选择本页全部",
"swapImages": "交换图像",
"exitBoardSearch": "退出面板搜索",
"exitSearch": "退出搜索",
"exitSearch": "退出图像搜索",
"oldestFirst": "最旧在前",
"sortDirection": "排序方向",
"showStarredImagesFirst": "优先显示收藏的图片",
@@ -135,17 +149,333 @@
"searchImages": "按元数据搜索",
"jump": "跳过",
"compareHelp2": "按 <Kbd>M</Kbd> 键切换不同的比较模式。",
"displayBoardSearch": "显示面板搜索",
"displaySearch": "显示搜索",
"displayBoardSearch": "板搜索",
"displaySearch": "图像搜索",
"stretchToFit": "拉伸以适应",
"exitCompare": "退出对比",
"compareHelp1": "在点击图库中的图片或使用箭头键切换比较图片时,请按住<Kbd>Alt</Kbd> 键。",
"go": "运行"
"go": "运行",
"boardsSettings": "画板设置",
"imagesSettings": "画廊图片设置",
"gallery": "画廊",
"move": "移动",
"imagesTab": "您在Invoke中创建和保存的图片。",
"openViewer": "打开查看器",
"closeViewer": "关闭查看器",
"assetsTab": "您已上传用于项目的文件。"
},
"hotkeys": {
"searchHotkeys": "检索快捷键",
"noHotkeysFound": "未找到快捷键",
"clearSearch": "清除检索项"
"clearSearch": "清除检索项",
"app": {
"cancelQueueItem": {
"title": "取消",
"desc": "取消当前正在处理的队列项目。"
},
"selectQueueTab": {
"title": "选择队列标签",
"desc": "选择队列标签。"
},
"toggleLeftPanel": {
"desc": "显示或隐藏左侧面板。",
"title": "开关左侧面板"
},
"resetPanelLayout": {
"title": "重设面板布局",
"desc": "将左侧和右侧面板重置为默认大小和布局。"
},
"togglePanels": {
"title": "开关面板",
"desc": "同时显示或隐藏左右两侧的面板。"
},
"selectWorkflowsTab": {
"title": "选择工作流标签",
"desc": "选择工作流标签。"
},
"selectModelsTab": {
"title": "选择模型标签",
"desc": "选择模型标签。"
},
"toggleRightPanel": {
"title": "开关右侧面板",
"desc": "显示或隐藏右侧面板。"
},
"clearQueue": {
"title": "清除队列",
"desc": "取消并清除所有队列条目。"
},
"selectCanvasTab": {
"title": "选择画布标签",
"desc": "选择画布标签。"
},
"invokeFront": {
"desc": "将生成请求排队,添加到队列的前面。",
"title": "调用(前台)"
},
"selectUpscalingTab": {
"title": "选择放大选项卡",
"desc": "选择高清放大选项卡。"
},
"focusPrompt": {
"title": "聚焦提示",
"desc": "将光标焦点移动到正向提示。"
},
"title": "应用程序",
"invoke": {
"title": "调用",
"desc": "将生成请求排队,添加到队列的末尾。"
}
},
"canvas": {
"selectBrushTool": {
"title": "画笔工具",
"desc": "选择画笔工具。"
},
"selectEraserTool": {
"title": "橡皮擦工具",
"desc": "选择橡皮擦工具。"
},
"title": "画布",
"selectColorPickerTool": {
"title": "拾色器工具",
"desc": "选择拾色器工具。"
},
"fitBboxToCanvas": {
"title": "使边界框适应画布",
"desc": "缩放并调整视图以适应边界框。"
},
"setZoomTo400Percent": {
"title": "缩放到400%",
"desc": "将画布的缩放设置为400%。"
},
"setZoomTo800Percent": {
"desc": "将画布的缩放设置为800%。",
"title": "缩放到800%"
},
"redo": {
"desc": "重做上一次画布操作。",
"title": "重做"
},
"nextEntity": {
"title": "下一层",
"desc": "在列表中选择下一层。"
},
"selectRectTool": {
"title": "矩形工具",
"desc": "选择矩形工具。"
},
"selectViewTool": {
"title": "视图工具",
"desc": "选择视图工具。"
},
"prevEntity": {
"desc": "在列表中选择上一层。",
"title": "上一层"
},
"transformSelected": {
"desc": "变换所选图层。",
"title": "变换"
},
"selectBboxTool": {
"title": "边界框工具",
"desc": "选择边界框工具。"
},
"setZoomTo200Percent": {
"title": "缩放到200%",
"desc": "将画布的缩放设置为200%。"
},
"applyFilter": {
"title": "应用过滤器",
"desc": "将待处理的过滤器应用于所选图层。"
},
"filterSelected": {
"title": "过滤器",
"desc": "对所选图层进行过滤。仅适用于栅格层和控制层。"
},
"cancelFilter": {
"title": "取消过滤器",
"desc": "取消待处理的过滤器。"
},
"incrementToolWidth": {
"title": "增加工具宽度",
"desc": "增加所选的画笔或橡皮擦工具的宽度。"
},
"decrementToolWidth": {
"desc": "减少所选的画笔或橡皮擦工具的宽度。",
"title": "减少工具宽度"
},
"selectMoveTool": {
"title": "移动工具",
"desc": "选择移动工具。"
},
"setFillToWhite": {
"title": "将颜色设置为白色",
"desc": "将当前工具的颜色设置为白色。"
},
"cancelTransform": {
"desc": "取消待处理的变换。",
"title": "取消变换"
},
"applyTransform": {
"title": "应用变换",
"desc": "将待处理的变换应用于所选图层。"
},
"setZoomTo100Percent": {
"title": "缩放到100%",
"desc": "将画布的缩放设置为100%。"
},
"resetSelected": {
"title": "重置图层",
"desc": "重置选定的图层。仅适用于修复蒙版和区域指导。"
},
"undo": {
"title": "撤消",
"desc": "撤消上一次画布操作。"
},
"quickSwitch": {
"title": "图层快速切换",
"desc": "在最后两个选定的图层之间切换。如果某个图层被书签标记,则始终在该图层和最后一个未标记的图层之间切换。"
},
"fitLayersToCanvas": {
"title": "使图层适应画布",
"desc": "缩放并调整视图以适应所有可见图层。"
},
"deleteSelected": {
"title": "删除图层",
"desc": "删除选定的图层。"
}
},
"hotkeys": "快捷键",
"workflows": {
"pasteSelection": {
"title": "粘贴",
"desc": "粘贴复制的节点和边。"
},
"title": "工作流",
"addNode": {
"title": "添加节点",
"desc": "打开添加节点菜单。"
},
"copySelection": {
"desc": "复制选定的节点和边。",
"title": "复制"
},
"pasteSelectionWithEdges": {
"title": "带边缘的粘贴",
"desc": "粘贴复制的节点、边,以及与复制的节点连接的所有边。"
},
"selectAll": {
"title": "全选",
"desc": "选择所有节点和边。"
},
"deleteSelection": {
"title": "删除",
"desc": "删除选定的节点和边。"
},
"undo": {
"title": "撤销",
"desc": "撤销上一个工作流操作。"
},
"redo": {
"desc": "重做上一个工作流操作。",
"title": "重做"
}
},
"gallery": {
"title": "画廊",
"galleryNavUp": {
"title": "向上导航",
"desc": "在图库网格中向上导航,选择该图像。如果在页面顶部,则转到上一页。"
},
"galleryNavUpAlt": {
"title": "向上导航(比较图像)",
"desc": "与向上导航相同,但选择比较图像,如果比较模式尚未打开,则将其打开。"
},
"selectAllOnPage": {
"desc": "选择当前页面上的所有图像。",
"title": "选页面上的所有内容"
},
"galleryNavDownAlt": {
"title": "向下导航(比较图像)",
"desc": "与向下导航相同,但选择比较图像,如果比较模式尚未打开,则将其打开。"
},
"galleryNavLeftAlt": {
"title": "向左导航(比较图像)",
"desc": "与向左导航相同,但选择比较图像,如果比较模式尚未打开,则将其打开。"
},
"clearSelection": {
"title": "清除选择",
"desc": "清除当前的选择(如果有的话)。"
},
"deleteSelection": {
"title": "删除",
"desc": "删除所有选定的图像。默认情况下,系统会提示您确认删除。如果这些图像当前在应用中使用,系统将发出警告。"
},
"galleryNavLeft": {
"title": "向左导航",
"desc": "在图库网格中向左导航,选择该图像。如果处于行的第一张图像,转到上一行。如果处于页面的第一张图像,转到上一页。"
},
"galleryNavRight": {
"title": "向右导航",
"desc": "在图库网格中向右导航,选择该图像。如果在行的最后一张图像,转到下一行。如果在页面的最后一张图像,转到下一页。"
},
"galleryNavDown": {
"desc": "在图库网格中向下导航,选择该图像。如果在页面底部,则转到下一页。",
"title": "向下导航"
},
"galleryNavRightAlt": {
"title": "向右导航(比较图像)",
"desc": "与向右导航相同,但选择比较图像,如果比较模式尚未打开,则将其打开。"
}
},
"viewer": {
"toggleMetadata": {
"desc": "显示或隐藏当前图像的元数据覆盖。",
"title": "显示/隐藏元数据"
},
"recallPrompts": {
"desc": "召回当前图像的正面和负面提示。",
"title": "召回提示"
},
"toggleViewer": {
"title": "显示/隐藏图像查看器",
"desc": "显示或隐藏图像查看器。仅在画布选项卡上可用。"
},
"recallAll": {
"desc": "召回当前图像的所有元数据。",
"title": "召回所有元数据"
},
"recallSeed": {
"title": "召回种子",
"desc": "召回当前图像的种子。"
},
"swapImages": {
"title": "交换比较图像",
"desc": "交换正在比较的图像。"
},
"nextComparisonMode": {
"title": "下一个比较模式",
"desc": "环浏览比较模式。"
},
"loadWorkflow": {
"title": "加载工作流",
"desc": "加载当前图像的保存工作流程(如果有的话)。"
},
"title": "图像查看器",
"remix": {
"title": "混合",
"desc": "召回当前图像的所有元数据,除了种子。"
},
"useSize": {
"title": "使用尺寸",
"desc": "使用当前图像的尺寸作为边界框尺寸。"
},
"runPostprocessing": {
"title": "行后处理",
"desc": "对当前图像运行所选的后处理。"
}
}
},
"modelManager": {
"modelManager": "模型管理器",
@@ -210,7 +540,6 @@
"noModelsInstalled": "无已安装的模型",
"urlOrLocalPathHelper": "链接应该指向单个文件.本地路径可以指向单个文件,或者对于单个扩散模型(diffusers model),可以指向一个文件夹.",
"modelSettings": "模型设置",
"useDefaultSettings": "使用默认设置",
"scanPlaceholder": "本地文件夹路径",
"installRepo": "安装仓库",
"modelImageDeleted": "模型图像已删除",
@@ -249,7 +578,16 @@
"loraTriggerPhrases": "LoRA 触发词",
"ipAdapters": "IP适配器",
"spandrelImageToImage": "图生图(Spandrel)",
"starterModelsInModelManager": "您可以在模型管理器中找到初始模型"
"starterModelsInModelManager": "您可以在模型管理器中找到初始模型",
"noDefaultSettings": "此模型没有配置默认设置。请访问模型管理器添加默认设置。",
"clipEmbed": "CLIP 嵌入",
"defaultSettingsOutOfSync": "某些设置与模型的默认值不匹配:",
"restoreDefaultSettings": "点击以使用模型的默认设置。",
"usingDefaultSettings": "使用模型的默认设置",
"huggingFace": "HuggingFace",
"hfTokenInvalid": "HF 令牌无效或缺失",
"hfTokenLabel": "HuggingFace 令牌(某些模型所需)",
"hfTokenHelperText": "使用某些模型需要 HF 令牌。点击这里创建或获取你的令牌。"
},
"parameters": {
"images": "图像",
@@ -367,7 +705,7 @@
"uploadFailed": "上传失败",
"imageCopied": "图像已复制",
"parametersNotSet": "参数未恢复",
"uploadFailedInvalidUploadDesc": "必须是单张的 PNG 或 JPEG 图",
"uploadFailedInvalidUploadDesc": "必须是单 PNG 或 JPEG 图像。",
"connected": "服务器连接",
"parameterSet": "参数已恢复",
"parameterNotSet": "参数未恢复",
@@ -379,7 +717,7 @@
"setControlImage": "设为控制图像",
"setNodeField": "设为节点字段",
"imageUploaded": "图像已上传",
"addedToBoard": "添加到面板",
"addedToBoard": "添加到{{name}}的资产中",
"workflowLoaded": "工作流已加载",
"imageUploadFailed": "图像上传失败",
"baseModelChangedCleared_other": "已清除或禁用{{count}}个不兼容的子模型",
@@ -416,7 +754,9 @@
"createIssue": "创建问题",
"about": "关于",
"submitSupportTicket": "提交支持工单",
"toggleRightPanel": "切换右侧面板(G)"
"toggleRightPanel": "切换右侧面板(G)",
"uploadImages": "上传图片",
"toggleLeftPanel": "开关左侧面板(T)"
},
"nodes": {
"zoomInNodes": "放大",
@@ -569,7 +909,7 @@
"cancelSucceeded": "项目已取消",
"queue": "队列",
"batch": "批处理",
"clearQueueAlertDialog": "清队列时会立即取消所有处理的项目并且会完全清队列。",
"clearQueueAlertDialog": "清队列立即取消所有正在处理的项目并完全清队列。待处理的过滤器将被取消。",
"pending": "待定",
"completedIn": "完成于",
"resumeFailed": "恢复处理器时出现问题",
@@ -610,7 +950,15 @@
"openQueue": "打开队列",
"prompts_other": "提示词",
"iterations_other": "迭代",
"generations_other": "生成"
"generations_other": "生成",
"canvas": "画布",
"workflows": "工作流",
"generation": "生成",
"other": "其他",
"gallery": "画廊",
"destination": "目标存储",
"upscaling": "高清放大",
"origin": "来源"
},
"sdxl": {
"refinerStart": "Refiner 开始作用时机",
@@ -649,7 +997,6 @@
"workflow": "工作流",
"steps": "步数",
"scheduler": "调度器",
"seamless": "无缝",
"recallParameters": "召回参数",
"noRecallParameters": "未找到要召回的参数",
"vae": "VAE",
@@ -658,7 +1005,11 @@
"parsingFailed": "解析失败",
"recallParameter": "调用{{label}}",
"imageDimensions": "图像尺寸",
"parameterSet": "已设置参数{{parameter}}"
"parameterSet": "已设置参数{{parameter}}",
"guidance": "指导",
"seamlessXAxis": "无缝 X 轴",
"seamlessYAxis": "无缝 Y 轴",
"canvasV2Metadata": "画布"
},
"models": {
"noMatchingModels": "无相匹配的模型",
@@ -709,7 +1060,8 @@
"shared": "共享面板",
"archiveBoard": "归档面板",
"archived": "已归档",
"assetsWithCount_other": "{{count}}项资源"
"assetsWithCount_other": "{{count}}项资源",
"updateBoardError": "更新画板出错"
},
"dynamicPrompts": {
"seedBehaviour": {
@@ -1175,7 +1527,8 @@
},
"prompt": {
"addPromptTrigger": "添加提示词触发器",
"noMatchingTriggers": "没有匹配的触发器"
"noMatchingTriggers": "没有匹配的触发器",
"compatibleEmbeddings": "兼容的嵌入"
},
"controlLayers": {
"autoNegative": "自动反向",
@@ -1186,8 +1539,8 @@
"moveToFront": "移动到前面",
"addLayer": "添加层",
"deletePrompt": "删除提示词",
"addPositivePrompt": "添加 $t(common.positivePrompt)",
"addNegativePrompt": "添加 $t(common.negativePrompt)",
"addPositivePrompt": "添加 $t(controlLayers.prompt)",
"addNegativePrompt": "添加 $t(controlLayers.negativePrompt)",
"rectangle": "矩形",
"opacity": "透明度"
},

View File

@@ -58,7 +58,6 @@
"model": "模型",
"seed": "種子",
"vae": "VAE",
"seamless": "無縫",
"metadata": "元數據",
"width": "寬度",
"height": "高度"

View File

@@ -8,10 +8,8 @@ import { useSyncLoggingConfig } from 'app/logging/useSyncLoggingConfig';
import { appStarted } from 'app/store/middleware/listenerMiddleware/listeners/appStarted';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import type { PartialAppConfig } from 'app/types/invokeai';
import ImageUploadOverlay from 'common/components/ImageUploadOverlay';
import { useFocusRegionWatcher } from 'common/hooks/focus';
import { useClearStorage } from 'common/hooks/useClearStorage';
import { useFullscreenDropzone } from 'common/hooks/useFullscreenDropzone';
import { useGlobalHotkeys } from 'common/hooks/useGlobalHotkeys';
import ChangeBoardModal from 'features/changeBoardModal/components/ChangeBoardModal';
import {
@@ -19,6 +17,7 @@ import {
NewGallerySessionDialog,
} from 'features/controlLayers/components/NewSessionConfirmationAlertDialog';
import DeleteImageModal from 'features/deleteImageModal/components/DeleteImageModal';
import { FullscreenDropzone } from 'features/dnd/FullscreenDropzone';
import { DynamicPromptsModal } from 'features/dynamicPrompts/components/DynamicPromptsPreviewModal';
import DeleteBoardModal from 'features/gallery/components/Boards/DeleteBoardModal';
import { ImageContextMenu } from 'features/gallery/components/ImageContextMenu/ImageContextMenu';
@@ -62,8 +61,6 @@ const App = ({ config = DEFAULT_CONFIG, studioInitAction }: Props) => {
useGetOpenAPISchemaQuery();
useSyncLoggingConfig();
const { dropzone, isHandlingUpload, setIsHandlingUpload } = useFullscreenDropzone();
const handleReset = useCallback(() => {
clearStorage();
location.reload();
@@ -92,19 +89,8 @@ const App = ({ config = DEFAULT_CONFIG, studioInitAction }: Props) => {
return (
<ErrorBoundary onReset={handleReset} FallbackComponent={AppErrorBoundaryFallback}>
<Box
id="invoke-app-wrapper"
w="100dvw"
h="100dvh"
position="relative"
overflow="hidden"
{...dropzone.getRootProps()}
>
<input {...dropzone.getInputProps()} />
<Box id="invoke-app-wrapper" w="100dvw" h="100dvh" position="relative" overflow="hidden">
<AppContent />
{dropzone.isDragActive && isHandlingUpload && (
<ImageUploadOverlay dropzone={dropzone} setIsHandlingUpload={setIsHandlingUpload} />
)}
</Box>
<DeleteImageModal />
<ChangeBoardModal />
@@ -121,6 +107,7 @@ const App = ({ config = DEFAULT_CONFIG, studioInitAction }: Props) => {
<NewGallerySessionDialog />
<NewCanvasSessionDialog />
<ImageContextMenu />
<FullscreenDropzone />
</ErrorBoundary>
);
};


@@ -1,4 +1,3 @@
import { skipToken } from '@reduxjs/toolkit/query';
import { useAppSelector } from 'app/store/storeHooks';
import { useIsRegionFocused } from 'common/hooks/focus';
import { useAssertSingleton } from 'common/hooks/useAssertSingleton';
@@ -8,13 +7,11 @@ import { selectLastSelectedImage } from 'features/gallery/store/gallerySelectors
import { useRegisteredHotkeys } from 'features/system/components/HotkeysModal/useHotkeyData';
import { useFeatureStatus } from 'features/system/hooks/useFeatureStatus';
import { memo } from 'react';
import { useGetImageDTOQuery } from 'services/api/endpoints/images';
import type { ImageDTO } from 'services/api/types';
export const GlobalImageHotkeys = memo(() => {
useAssertSingleton('GlobalImageHotkeys');
const lastSelectedImage = useAppSelector(selectLastSelectedImage);
const { currentData: imageDTO } = useGetImageDTOQuery(lastSelectedImage?.image_name ?? skipToken);
const imageDTO = useAppSelector(selectLastSelectedImage);
if (!imageDTO) {
return null;


@@ -19,7 +19,6 @@ import { $workflowCategories } from 'app/store/nanostores/workflowCategories';
import { createStore } from 'app/store/store';
import type { PartialAppConfig } from 'app/types/invokeai';
import Loading from 'common/components/Loading/Loading';
import AppDndContext from 'features/dnd/components/AppDndContext';
import type { WorkflowCategory } from 'features/nodes/types/workflow';
import type { PropsWithChildren, ReactNode } from 'react';
import React, { lazy, memo, useEffect, useLayoutEffect, useMemo } from 'react';
@@ -237,9 +236,7 @@ const InvokeAIUI = ({
<Provider store={store}>
<React.Suspense fallback={<Loading />}>
<ThemeLocaleProvider>
<AppDndContext>
<App config={config} studioInitAction={studioInitAction} />
</AppDndContext>
<App config={config} studioInitAction={studioInitAction} />
</ThemeLocaleProvider>
</React.Suspense>
</Provider>


@@ -17,6 +17,7 @@ const $logger = atom<Logger>(Roarr.child(BASE_CONTEXT));
export const zLogNamespace = z.enum([
'canvas',
'config',
'dnd',
'events',
'gallery',
'generation',


@@ -1,4 +1,3 @@
export const STORAGE_PREFIX = '@@invokeai-';
export const EMPTY_ARRAY = [];
/** @knipignore */
export const EMPTY_OBJECT = {};


@@ -16,7 +16,6 @@ import { addGalleryOffsetChangedListener } from 'app/store/middleware/listenerMi
import { addGetOpenAPISchemaListener } from 'app/store/middleware/listenerMiddleware/listeners/getOpenAPISchema';
import { addImageAddedToBoardFulfilledListener } from 'app/store/middleware/listenerMiddleware/listeners/imageAddedToBoard';
import { addImageDeletionListeners } from 'app/store/middleware/listenerMiddleware/listeners/imageDeletionListeners';
import { addImageDroppedListener } from 'app/store/middleware/listenerMiddleware/listeners/imageDropped';
import { addImageRemovedFromBoardFulfilledListener } from 'app/store/middleware/listenerMiddleware/listeners/imageRemovedFromBoard';
import { addImagesStarredListener } from 'app/store/middleware/listenerMiddleware/listeners/imagesStarred';
import { addImagesUnstarredListener } from 'app/store/middleware/listenerMiddleware/listeners/imagesUnstarred';
@@ -93,9 +92,6 @@ addGetOpenAPISchemaListener(startAppListening);
addWorkflowLoadRequestedListener(startAppListening);
addUpdateAllNodesRequestedListener(startAppListening);
// DND
addImageDroppedListener(startAppListening);
// Models
addModelSelectedListener(startAppListening);


@@ -1,12 +1,12 @@
import { createAction } from '@reduxjs/toolkit';
import { logger } from 'app/logging/logger';
import type { AppStartListening } from 'app/store/middleware/listenerMiddleware';
import type { SerializableObject } from 'common/types';
import { buildAdHocPostProcessingGraph } from 'features/nodes/util/graph/buildAdHocPostProcessingGraph';
import { toast } from 'features/toast/toast';
import { t } from 'i18next';
import { queueApi } from 'services/api/endpoints/queue';
import type { BatchConfig, ImageDTO } from 'services/api/types';
import type { JsonObject } from 'type-fest';
const log = logger('queue');
@@ -39,9 +39,9 @@ export const addAdHocPostProcessingRequestedListener = (startAppListening: AppSt
const enqueueResult = await req.unwrap();
req.reset();
log.debug({ enqueueResult } as SerializableObject, t('queue.graphQueued'));
log.debug({ enqueueResult } as JsonObject, t('queue.graphQueued'));
} catch (error) {
log.error({ enqueueBatchArg } as SerializableObject, t('queue.graphFailedToQueue'));
log.error({ enqueueBatchArg } as JsonObject, t('queue.graphFailedToQueue'));
if (error instanceof Object && 'status' in error && error.status === 403) {
return;


@@ -1,12 +1,12 @@
import { logger } from 'app/logging/logger';
import type { AppStartListening } from 'app/store/middleware/listenerMiddleware';
import type { SerializableObject } from 'common/types';
import { zPydanticValidationError } from 'features/system/store/zodSchemas';
import { toast } from 'features/toast/toast';
import { t } from 'i18next';
import { truncate, upperFirst } from 'lodash-es';
import { serializeError } from 'serialize-error';
import { queueApi } from 'services/api/endpoints/queue';
import type { JsonObject } from 'type-fest';
const log = logger('queue');
@@ -17,7 +17,7 @@ export const addBatchEnqueuedListener = (startAppListening: AppStartListening) =
effect: (action) => {
const enqueueResult = action.payload;
const arg = action.meta.arg.originalArgs;
log.debug({ enqueueResult } as SerializableObject, 'Batch enqueued');
log.debug({ enqueueResult } as JsonObject, 'Batch enqueued');
toast({
id: 'QUEUE_BATCH_SUCCEEDED',
@@ -45,7 +45,7 @@ export const addBatchEnqueuedListener = (startAppListening: AppStartListening) =
status: 'error',
description: t('common.unknownError'),
});
log.error({ batchConfig } as SerializableObject, t('queue.batchFailedToQueue'));
log.error({ batchConfig } as JsonObject, t('queue.batchFailedToQueue'));
return;
}
@@ -71,7 +71,7 @@ export const addBatchEnqueuedListener = (startAppListening: AppStartListening) =
description: t('common.unknownError'),
});
}
log.error({ batchConfig, error: serializeError(response) } as SerializableObject, t('queue.batchFailedToQueue'));
log.error({ batchConfig, error: serializeError(response) } as JsonObject, t('queue.batchFailedToQueue'));
},
});
};


@@ -1,19 +1,22 @@
import { logger } from 'app/logging/logger';
import { enqueueRequested } from 'app/store/actions';
import type { AppStartListening } from 'app/store/middleware/listenerMiddleware';
import type { SerializableObject } from 'common/types';
import { extractMessageFromAssertionError } from 'common/util/extractMessageFromAssertionError';
import type { Result } from 'common/util/result';
import { withResult, withResultAsync } from 'common/util/result';
import { $canvasManager } from 'features/controlLayers/store/ephemeral';
import { prepareLinearUIBatch } from 'features/nodes/util/graph/buildLinearBatchConfig';
import { buildFLUXGraph } from 'features/nodes/util/graph/generation/buildFLUXGraph';
import { buildSD1Graph } from 'features/nodes/util/graph/generation/buildSD1Graph';
import { buildSD3Graph } from 'features/nodes/util/graph/generation/buildSD3Graph';
import { buildSDXLGraph } from 'features/nodes/util/graph/generation/buildSDXLGraph';
import type { Graph } from 'features/nodes/util/graph/generation/Graph';
import { toast } from 'features/toast/toast';
import { serializeError } from 'serialize-error';
import { queueApi } from 'services/api/endpoints/queue';
import type { Invocation } from 'services/api/types';
import { assert } from 'tsafe';
import { assert, AssertionError } from 'tsafe';
import type { JsonObject } from 'type-fest';
const log = logger('generation');
@@ -32,8 +35,8 @@ export const addEnqueueRequestedLinear = (startAppListening: AppStartListening)
let buildGraphResult: Result<
{
g: Graph;
noise: Invocation<'noise' | 'flux_denoise'>;
posCond: Invocation<'compel' | 'sdxl_compel_prompt' | 'flux_text_encoder'>;
noise: Invocation<'noise' | 'flux_denoise' | 'sd3_denoise'>;
posCond: Invocation<'compel' | 'sdxl_compel_prompt' | 'flux_text_encoder' | 'sd3_text_encoder'>;
},
Error
>;
@@ -49,6 +52,9 @@ export const addEnqueueRequestedLinear = (startAppListening: AppStartListening)
case `sd-2`:
buildGraphResult = await withResultAsync(() => buildSD1Graph(state, manager));
break;
case `sd-3`:
buildGraphResult = await withResultAsync(() => buildSD3Graph(state, manager));
break;
case `flux`:
buildGraphResult = await withResultAsync(() => buildFLUXGraph(state, manager));
break;
@@ -57,7 +63,17 @@ export const addEnqueueRequestedLinear = (startAppListening: AppStartListening)
}
if (buildGraphResult.isErr()) {
log.error({ error: serializeError(buildGraphResult.error) }, 'Failed to build graph');
let description: string | null = null;
if (buildGraphResult.error instanceof AssertionError) {
description = extractMessageFromAssertionError(buildGraphResult.error);
}
const error = serializeError(buildGraphResult.error);
log.error({ error }, 'Failed to build graph');
toast({
status: 'error',
title: 'Failed to build graph',
description,
});
return;
}
@@ -88,7 +104,7 @@ export const addEnqueueRequestedLinear = (startAppListening: AppStartListening)
return;
}
log.debug({ batchConfig: prepareBatchResult.value } as SerializableObject, 'Enqueued batch');
log.debug({ batchConfig: prepareBatchResult.value } as JsonObject, 'Enqueued batch');
},
});
};


@@ -1,12 +1,12 @@
import { logger } from 'app/logging/logger';
import type { AppStartListening } from 'app/store/middleware/listenerMiddleware';
import type { SerializableObject } from 'common/types';
import { parseify } from 'common/util/serialize';
import { $templates } from 'features/nodes/store/nodesSlice';
import { parseSchema } from 'features/nodes/util/schema/parseSchema';
import { size } from 'lodash-es';
import { serializeError } from 'serialize-error';
import { appInfoApi } from 'services/api/endpoints/appInfo';
import type { JsonObject } from 'type-fest';
const log = logger('system');
@@ -16,12 +16,12 @@ export const addGetOpenAPISchemaListener = (startAppListening: AppStartListening
effect: (action, { getState }) => {
const schemaJSON = action.payload;
log.debug({ schemaJSON: parseify(schemaJSON) } as SerializableObject, 'Received OpenAPI schema');
log.debug({ schemaJSON: parseify(schemaJSON) } as JsonObject, 'Received OpenAPI schema');
const { nodesAllowlist, nodesDenylist } = getState().config;
const nodeTemplates = parseSchema(schemaJSON, nodesAllowlist, nodesDenylist);
log.debug({ nodeTemplates } as SerializableObject, `Built ${size(nodeTemplates)} node templates`);
log.debug({ nodeTemplates } as JsonObject, `Built ${size(nodeTemplates)} node templates`);
$templates.set(nodeTemplates);
},


@@ -1,292 +0,0 @@
import { createAction } from '@reduxjs/toolkit';
import { logger } from 'app/logging/logger';
import type { AppStartListening } from 'app/store/middleware/listenerMiddleware';
import { deepClone } from 'common/util/deepClone';
import { selectDefaultControlAdapter, selectDefaultIPAdapter } from 'features/controlLayers/hooks/addLayerHooks';
import { getPrefixedId } from 'features/controlLayers/konva/util';
import {
controlLayerAdded,
entityRasterized,
entitySelected,
rasterLayerAdded,
referenceImageAdded,
referenceImageIPAdapterImageChanged,
rgAdded,
rgIPAdapterImageChanged,
} from 'features/controlLayers/store/canvasSlice';
import { selectCanvasSlice } from 'features/controlLayers/store/selectors';
import type {
CanvasControlLayerState,
CanvasRasterLayerState,
CanvasReferenceImageState,
CanvasRegionalGuidanceState,
} from 'features/controlLayers/store/types';
import { imageDTOToImageObject, imageDTOToImageWithDims } from 'features/controlLayers/store/util';
import type { TypesafeDraggableData, TypesafeDroppableData } from 'features/dnd/types';
import { isValidDrop } from 'features/dnd/util/isValidDrop';
import { imageToCompareChanged, selectionChanged } from 'features/gallery/store/gallerySlice';
import { fieldImageValueChanged } from 'features/nodes/store/nodesSlice';
import { upscaleInitialImageChanged } from 'features/parameters/store/upscaleSlice';
import { imagesApi } from 'services/api/endpoints/images';
export const dndDropped = createAction<{
overData: TypesafeDroppableData;
activeData: TypesafeDraggableData;
}>('dnd/dndDropped');
const log = logger('system');
export const addImageDroppedListener = (startAppListening: AppStartListening) => {
startAppListening({
actionCreator: dndDropped,
effect: (action, { dispatch, getState }) => {
const { activeData, overData } = action.payload;
if (!isValidDrop(overData, activeData)) {
return;
}
if (activeData.payloadType === 'IMAGE_DTO') {
log.debug({ activeData, overData }, 'Image dropped');
} else if (activeData.payloadType === 'GALLERY_SELECTION') {
log.debug({ activeData, overData }, `Images (${getState().gallery.selection.length}) dropped`);
} else if (activeData.payloadType === 'NODE_FIELD') {
log.debug({ activeData, overData }, 'Node field dropped');
} else {
log.debug({ activeData, overData }, `Unknown payload dropped`);
}
/**
* Image dropped on IP Adapter Layer
*/
if (
overData.actionType === 'SET_IPA_IMAGE' &&
activeData.payloadType === 'IMAGE_DTO' &&
activeData.payload.imageDTO
) {
const { id } = overData.context;
dispatch(
referenceImageIPAdapterImageChanged({
entityIdentifier: { id, type: 'reference_image' },
imageDTO: activeData.payload.imageDTO,
})
);
return;
}
/**
* Image dropped on RG Layer IP Adapter
*/
if (
overData.actionType === 'SET_RG_IP_ADAPTER_IMAGE' &&
activeData.payloadType === 'IMAGE_DTO' &&
activeData.payload.imageDTO
) {
const { id, referenceImageId } = overData.context;
dispatch(
rgIPAdapterImageChanged({
entityIdentifier: { id, type: 'regional_guidance' },
referenceImageId,
imageDTO: activeData.payload.imageDTO,
})
);
return;
}
/**
* Image dropped on Raster layer
*/
if (
overData.actionType === 'ADD_RASTER_LAYER_FROM_IMAGE' &&
activeData.payloadType === 'IMAGE_DTO' &&
activeData.payload.imageDTO
) {
const imageObject = imageDTOToImageObject(activeData.payload.imageDTO);
const { x, y } = selectCanvasSlice(getState()).bbox.rect;
const overrides: Partial<CanvasRasterLayerState> = {
objects: [imageObject],
position: { x, y },
};
dispatch(rasterLayerAdded({ overrides, isSelected: true }));
return;
}
/**
* Image dropped on Raster layer
*/
if (
overData.actionType === 'ADD_CONTROL_LAYER_FROM_IMAGE' &&
activeData.payloadType === 'IMAGE_DTO' &&
activeData.payload.imageDTO
) {
const state = getState();
const imageObject = imageDTOToImageObject(activeData.payload.imageDTO);
const { x, y } = selectCanvasSlice(state).bbox.rect;
const defaultControlAdapter = selectDefaultControlAdapter(state);
const overrides: Partial<CanvasControlLayerState> = {
objects: [imageObject],
position: { x, y },
controlAdapter: defaultControlAdapter,
};
dispatch(controlLayerAdded({ overrides, isSelected: true }));
return;
}
if (
overData.actionType === 'ADD_REGIONAL_REFERENCE_IMAGE_FROM_IMAGE' &&
activeData.payloadType === 'IMAGE_DTO' &&
activeData.payload.imageDTO
) {
const state = getState();
const ipAdapter = deepClone(selectDefaultIPAdapter(state));
ipAdapter.image = imageDTOToImageWithDims(activeData.payload.imageDTO);
const overrides: Partial<CanvasRegionalGuidanceState> = {
referenceImages: [{ id: getPrefixedId('regional_guidance_reference_image'), ipAdapter }],
};
dispatch(rgAdded({ overrides, isSelected: true }));
return;
}
if (
overData.actionType === 'ADD_GLOBAL_REFERENCE_IMAGE_FROM_IMAGE' &&
activeData.payloadType === 'IMAGE_DTO' &&
activeData.payload.imageDTO
) {
const state = getState();
const ipAdapter = deepClone(selectDefaultIPAdapter(state));
ipAdapter.image = imageDTOToImageWithDims(activeData.payload.imageDTO);
const overrides: Partial<CanvasReferenceImageState> = {
ipAdapter,
};
dispatch(referenceImageAdded({ overrides, isSelected: true }));
return;
}
/**
* Image dropped on Raster layer
*/
if (overData.actionType === 'REPLACE_LAYER_WITH_IMAGE' && activeData.payloadType === 'IMAGE_DTO') {
const state = getState();
const { entityIdentifier } = overData.context;
const imageObject = imageDTOToImageObject(activeData.payload.imageDTO);
const { x, y } = selectCanvasSlice(state).bbox.rect;
dispatch(entityRasterized({ entityIdentifier, imageObject, position: { x, y }, replaceObjects: true }));
dispatch(entitySelected({ entityIdentifier }));
return;
}
/**
* Image dropped on node image field
*/
if (
overData.actionType === 'SET_NODES_IMAGE' &&
activeData.payloadType === 'IMAGE_DTO' &&
activeData.payload.imageDTO
) {
const { fieldName, nodeId } = overData.context;
dispatch(
fieldImageValueChanged({
nodeId,
fieldName,
value: activeData.payload.imageDTO,
})
);
return;
}
/**
* Image selected for compare
*/
if (
overData.actionType === 'SELECT_FOR_COMPARE' &&
activeData.payloadType === 'IMAGE_DTO' &&
activeData.payload.imageDTO
) {
const { imageDTO } = activeData.payload;
dispatch(imageToCompareChanged(imageDTO));
return;
}
/**
* Image dropped on user board
*/
if (
overData.actionType === 'ADD_TO_BOARD' &&
activeData.payloadType === 'IMAGE_DTO' &&
activeData.payload.imageDTO
) {
const { imageDTO } = activeData.payload;
const { boardId } = overData.context;
dispatch(
imagesApi.endpoints.addImageToBoard.initiate({
imageDTO,
board_id: boardId,
})
);
dispatch(selectionChanged([]));
return;
}
/**
* Image dropped on 'none' board
*/
if (
overData.actionType === 'REMOVE_FROM_BOARD' &&
activeData.payloadType === 'IMAGE_DTO' &&
activeData.payload.imageDTO
) {
const { imageDTO } = activeData.payload;
dispatch(
imagesApi.endpoints.removeImageFromBoard.initiate({
imageDTO,
})
);
dispatch(selectionChanged([]));
return;
}
/**
* Image dropped on upscale initial image
*/
if (
overData.actionType === 'SET_UPSCALE_INITIAL_IMAGE' &&
activeData.payloadType === 'IMAGE_DTO' &&
activeData.payload.imageDTO
) {
const { imageDTO } = activeData.payload;
dispatch(upscaleInitialImageChanged(imageDTO));
return;
}
/**
* Multiple images dropped on user board
*/
if (overData.actionType === 'ADD_TO_BOARD' && activeData.payloadType === 'GALLERY_SELECTION') {
const imageDTOs = getState().gallery.selection;
const { boardId } = overData.context;
dispatch(
imagesApi.endpoints.addImagesToBoard.initiate({
imageDTOs,
board_id: boardId,
})
);
dispatch(selectionChanged([]));
return;
}
/**
* Multiple images dropped on 'none' board
*/
if (overData.actionType === 'REMOVE_FROM_BOARD' && activeData.payloadType === 'GALLERY_SELECTION') {
const imageDTOs = getState().gallery.selection;
dispatch(
imagesApi.endpoints.removeImagesFromBoard.initiate({
imageDTOs,
})
);
dispatch(selectionChanged([]));
return;
}
},
});
};


@@ -1,18 +1,8 @@
import { logger } from 'app/logging/logger';
import type { AppStartListening } from 'app/store/middleware/listenerMiddleware';
import type { RootState } from 'app/store/store';
import {
entityRasterized,
entitySelected,
referenceImageIPAdapterImageChanged,
rgIPAdapterImageChanged,
} from 'features/controlLayers/store/canvasSlice';
import { selectCanvasSlice } from 'features/controlLayers/store/selectors';
import { imageDTOToImageObject } from 'features/controlLayers/store/util';
import { selectListBoardsQueryArgs } from 'features/gallery/store/gallerySelectors';
import { boardIdSelected, galleryViewChanged } from 'features/gallery/store/gallerySlice';
import { fieldImageValueChanged } from 'features/nodes/store/nodesSlice';
import { upscaleInitialImageChanged } from 'features/parameters/store/upscaleSlice';
import { toast } from 'features/toast/toast';
import { t } from 'i18next';
import { omit } from 'lodash-es';
@@ -51,12 +41,14 @@ export const addImageUploadedFulfilledListener = (startAppListening: AppStartLis
log.debug({ imageDTO }, 'Image uploaded');
const { postUploadAction } = action.meta.arg.originalArgs;
if (!postUploadAction) {
if (action.meta.arg.originalArgs.silent || imageDTO.is_intermediate) {
// When a "silent" upload is requested, or the image is intermediate, we can skip all post-upload actions,
// like toasts and switching the gallery view
return;
}
const boardId = imageDTO.board_id ?? 'none';
const DEFAULT_UPLOADED_TOAST = {
id: 'IMAGE_UPLOADED',
title: t('toast.imageUploaded'),
@@ -64,80 +56,34 @@ export const addImageUploadedFulfilledListener = (startAppListening: AppStartLis
} as const;
// default action - just upload and alert user
if (postUploadAction.type === 'TOAST') {
const boardId = imageDTO.board_id ?? 'none';
if (lastUploadedToastTimeout !== null) {
window.clearTimeout(lastUploadedToastTimeout);
}
const toastApi = toast({
...DEFAULT_UPLOADED_TOAST,
title: postUploadAction.title || DEFAULT_UPLOADED_TOAST.title,
description: getUploadedToastDescription(boardId, state),
duration: null, // we will close the toast manually
});
lastUploadedToastTimeout = window.setTimeout(() => {
toastApi.close();
}, 3000);
/**
* We only want to change the board and view if this is the first upload of a batch, else we end up hijacking
* the user's gallery board and view selection:
* - User uploads multiple images
* - A couple uploads finish, but others are pending still
* - User changes the board selection
* - Pending uploads finish and change the board back to the original board
* - User is confused as to why the board changed
*
* Default to true to not require _all_ image upload handlers to set this value
*/
const isFirstUploadOfBatch = action.meta.arg.originalArgs.isFirstUploadOfBatch ?? true;
if (isFirstUploadOfBatch) {
dispatch(boardIdSelected({ boardId }));
dispatch(galleryViewChanged('assets'));
}
return;
if (lastUploadedToastTimeout !== null) {
window.clearTimeout(lastUploadedToastTimeout);
}
const toastApi = toast({
...DEFAULT_UPLOADED_TOAST,
title: DEFAULT_UPLOADED_TOAST.title,
description: getUploadedToastDescription(boardId, state),
duration: null, // we will close the toast manually
});
lastUploadedToastTimeout = window.setTimeout(() => {
toastApi.close();
}, 3000);
if (postUploadAction.type === 'SET_UPSCALE_INITIAL_IMAGE') {
dispatch(upscaleInitialImageChanged(imageDTO));
toast({
...DEFAULT_UPLOADED_TOAST,
description: 'set as upscale initial image',
});
return;
}
if (postUploadAction.type === 'SET_IPA_IMAGE') {
const { id } = postUploadAction;
dispatch(referenceImageIPAdapterImageChanged({ entityIdentifier: { id, type: 'reference_image' }, imageDTO }));
toast({ ...DEFAULT_UPLOADED_TOAST, description: t('toast.setControlImage') });
return;
}
if (postUploadAction.type === 'SET_RG_IP_ADAPTER_IMAGE') {
const { id, referenceImageId } = postUploadAction;
dispatch(
rgIPAdapterImageChanged({ entityIdentifier: { id, type: 'regional_guidance' }, referenceImageId, imageDTO })
);
toast({ ...DEFAULT_UPLOADED_TOAST, description: t('toast.setControlImage') });
return;
}
if (postUploadAction.type === 'SET_NODES_IMAGE') {
const { nodeId, fieldName } = postUploadAction;
dispatch(fieldImageValueChanged({ nodeId, fieldName, value: imageDTO }));
toast({ ...DEFAULT_UPLOADED_TOAST, description: `${t('toast.setNodeField')} ${fieldName}` });
return;
}
if (postUploadAction.type === 'REPLACE_LAYER_WITH_IMAGE') {
const { entityIdentifier } = postUploadAction;
const state = getState();
const imageObject = imageDTOToImageObject(imageDTO);
const { x, y } = selectCanvasSlice(state).bbox.rect;
dispatch(entityRasterized({ entityIdentifier, imageObject, position: { x, y }, replaceObjects: true }));
dispatch(entitySelected({ entityIdentifier }));
return;
/**
* We only want to change the board and view if this is the first upload of a batch, else we end up hijacking
* the user's gallery board and view selection:
* - User uploads multiple images
* - A couple uploads finish, but others are pending still
* - User changes the board selection
* - Pending uploads finish and change the board back to the original board
* - User is confused as to why the board changed
*
* Default to true to not require _all_ image upload handlers to set this value
*/
const isFirstUploadOfBatch = action.meta.arg.originalArgs.isFirstUploadOfBatch ?? true;
if (isFirstUploadOfBatch) {
dispatch(boardIdSelected({ boardId }));
dispatch(galleryViewChanged('assets'));
}
},
});
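
The listener above treats `isFirstUploadOfBatch` as true when it is omitted, so only helpers that upload several images at once need to set it. A minimal sketch of that calling pattern follows; only `board_id`, `silent`, and `isFirstUploadOfBatch` come from the diff above, while the helper name and the remaining mutation arguments are assumptions for illustration:

// Illustrative only: upload several files, letting just the first request switch
// the gallery board/view. The exact uploadImage argument shape is an assumption.
import type { AppDispatch } from 'app/store/store';
import { imagesApi } from 'services/api/endpoints/images';

const uploadFiles = async (dispatch: AppDispatch, files: File[], board_id: string) => {
  for (const [i, file] of files.entries()) {
    await dispatch(
      imagesApi.endpoints.uploadImage.initiate({
        file,
        board_id,
        // Only the first upload of the batch may change the selected board/view;
        // the listener defaults this to true when the field is omitted.
        isFirstUploadOfBatch: i === 0,
      })
    ).unwrap();
  }
};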


@@ -1,7 +1,6 @@
import { logger } from 'app/logging/logger';
import type { AppStartListening } from 'app/store/middleware/listenerMiddleware';
import type { AppDispatch, RootState } from 'app/store/store';
import type { SerializableObject } from 'common/types';
import {
controlLayerModelChanged,
referenceImageIPAdapterModelChanged,
@@ -41,6 +40,7 @@ import {
isSpandrelImageToImageModelConfig,
isT5EncoderModelConfig,
} from 'services/api/types';
import type { JsonObject } from 'type-fest';
const log = logger('models');
@@ -85,7 +85,7 @@ type ModelHandler = (
models: AnyModelConfig[],
state: RootState,
dispatch: AppDispatch,
log: Logger<SerializableObject>
log: Logger<JsonObject>
) => undefined;
const handleMainModels: ModelHandler = (models, state, dispatch, log) => {
@@ -164,7 +164,7 @@ const handleVAEModels: ModelHandler = (models, state, dispatch, log) => {
// We have a VAE selected, need to check if it is available
// Grab just the VAE models
const vaeModels = models.filter(isNonFluxVAEModelConfig);
const vaeModels = models.filter((m) => isNonFluxVAEModelConfig(m));
// If the current VAE model is available, we don't need to do anything
if (vaeModels.some((m) => m.key === selectedVAEModel.key)) {
@@ -297,7 +297,7 @@ const handleUpscaleModel: ModelHandler = (models, state, dispatch, log) => {
const handleT5EncoderModels: ModelHandler = (models, state, dispatch, log) => {
const selectedT5EncoderModel = state.params.t5EncoderModel;
const t5EncoderModels = models.filter(isT5EncoderModelConfig);
const t5EncoderModels = models.filter((m) => isT5EncoderModelConfig(m));
// If the currently selected model is available, we don't need to do anything
if (selectedT5EncoderModel && t5EncoderModels.some((m) => m.key === selectedT5EncoderModel.key)) {
@@ -325,7 +325,7 @@ const handleT5EncoderModels: ModelHandler = (models, state, dispatch, log) => {
const handleCLIPEmbedModels: ModelHandler = (models, state, dispatch, log) => {
const selectedCLIPEmbedModel = state.params.clipEmbedModel;
const CLIPEmbedModels = models.filter(isCLIPEmbedModelConfig);
const CLIPEmbedModels = models.filter((m) => isCLIPEmbedModelConfig(m));
// If the currently selected model is available, we don't need to do anything
if (selectedCLIPEmbedModel && CLIPEmbedModels.some((m) => m.key === selectedCLIPEmbedModel.key)) {
@@ -353,7 +353,7 @@ const handleCLIPEmbedModels: ModelHandler = (models, state, dispatch, log) => {
const handleFLUXVAEModels: ModelHandler = (models, state, dispatch, log) => {
const selectedFLUXVAEModel = state.params.fluxVAE;
const fluxVAEModels = models.filter(isFluxVAEModelConfig);
const fluxVAEModels = models.filter((m) => isFluxVAEModelConfig(m));
// If the currently selected model is available, we don't need to do anything
if (selectedFLUXVAEModel && fluxVAEModels.some((m) => m.key === selectedFLUXVAEModel.key)) {


@@ -4,6 +4,8 @@ import { atom } from 'nanostores';
/**
* A fallback non-writable atom that always returns `false`, used when a nanostores atom is only conditionally available
* in a hook or component.
*
* @knipignore
*/
export const $false: ReadableAtom<boolean> = atom(false);
/**


@@ -3,7 +3,6 @@ import { autoBatchEnhancer, combineReducers, configureStore } from '@reduxjs/too
import { logger } from 'app/logging/logger';
import { idbKeyValDriver } from 'app/store/enhancers/reduxRemember/driver';
import { errorHandler } from 'app/store/enhancers/reduxRemember/errors';
import type { SerializableObject } from 'common/types';
import { deepClone } from 'common/util/deepClone';
import { changeBoardModalSlice } from 'features/changeBoardModal/store/slice';
import { canvasSettingsPersistConfig, canvasSettingsSlice } from 'features/controlLayers/store/canvasSettingsSlice';
@@ -37,6 +36,7 @@ import undoable from 'redux-undo';
import { serializeError } from 'serialize-error';
import { api } from 'services/api';
import { authToastMiddleware } from 'services/api/authToastMiddleware';
import type { JsonObject } from 'type-fest';
import { STORAGE_PREFIX } from './constants';
import { actionSanitizer } from './middleware/devtools/actionSanitizer';
@@ -139,7 +139,7 @@ const unserialize: UnserializeFunction = (data, key) => {
{
persistedData: parsed,
rehydratedData: transformed,
diff: diff(parsed, transformed) as SerializableObject, // this is always serializable
diff: diff(parsed, transformed) as JsonObject, // this is always serializable
},
`Rehydrated slice "${key}"`
);


@@ -25,7 +25,8 @@ export type AppFeature =
| 'invocationCache'
| 'bulkDownload'
| 'starterModels'
| 'hfToken';
| 'hfToken'
| 'invocationProgressAlert';
/**
* A disable-able Stable Diffusion feature


@@ -1,251 +0,0 @@
import type { ChakraProps, FlexProps, SystemStyleObject } from '@invoke-ai/ui-library';
import { Flex, Icon, Image } from '@invoke-ai/ui-library';
import { IAILoadingImageFallback, IAINoContentFallback } from 'common/components/IAIImageFallback';
import ImageMetadataOverlay from 'common/components/ImageMetadataOverlay';
import { useImageUploadButton } from 'common/hooks/useImageUploadButton';
import type { TypesafeDraggableData, TypesafeDroppableData } from 'features/dnd/types';
import { useImageContextMenu } from 'features/gallery/components/ImageContextMenu/ImageContextMenu';
import type { MouseEvent, ReactElement, ReactNode, SyntheticEvent } from 'react';
import { memo, useCallback, useMemo, useRef } from 'react';
import { PiImageBold, PiUploadSimpleBold } from 'react-icons/pi';
import type { ImageDTO, PostUploadAction } from 'services/api/types';
import IAIDraggable from './IAIDraggable';
import IAIDroppable from './IAIDroppable';
const defaultUploadElement = <Icon as={PiUploadSimpleBold} boxSize={16} />;
const defaultNoContentFallback = <IAINoContentFallback icon={PiImageBold} />;
const baseStyles: SystemStyleObject = {
touchAction: 'none',
userSelect: 'none',
webkitUserSelect: 'none',
};
const sx: SystemStyleObject = {
...baseStyles,
'.gallery-image-container::before': {
content: '""',
display: 'inline-block',
position: 'absolute',
top: 0,
left: 0,
right: 0,
bottom: 0,
pointerEvents: 'none',
borderRadius: 'base',
},
'&[data-selected="selected"]>.gallery-image-container::before': {
boxShadow:
'inset 0px 0px 0px 3px var(--invoke-colors-invokeBlue-500), inset 0px 0px 0px 4px var(--invoke-colors-invokeBlue-800)',
},
'&[data-selected="selectedForCompare"]>.gallery-image-container::before': {
boxShadow:
'inset 0px 0px 0px 3px var(--invoke-colors-invokeGreen-300), inset 0px 0px 0px 4px var(--invoke-colors-invokeGreen-800)',
},
'&:hover>.gallery-image-container::before': {
boxShadow:
'inset 0px 0px 0px 2px var(--invoke-colors-invokeBlue-300), inset 0px 0px 0px 3px var(--invoke-colors-invokeBlue-800)',
},
'&:hover[data-selected="selected"]>.gallery-image-container::before': {
boxShadow:
'inset 0px 0px 0px 3px var(--invoke-colors-invokeBlue-400), inset 0px 0px 0px 4px var(--invoke-colors-invokeBlue-800)',
},
'&:hover[data-selected="selectedForCompare"]>.gallery-image-container::before': {
boxShadow:
'inset 0px 0px 0px 3px var(--invoke-colors-invokeGreen-200), inset 0px 0px 0px 4px var(--invoke-colors-invokeGreen-800)',
},
};
type IAIDndImageProps = FlexProps & {
imageDTO: ImageDTO | undefined;
onError?: (event: SyntheticEvent<HTMLImageElement>) => void;
onLoad?: (event: SyntheticEvent<HTMLImageElement>) => void;
onClick?: (event: MouseEvent<HTMLDivElement>) => void;
withMetadataOverlay?: boolean;
isDragDisabled?: boolean;
isDropDisabled?: boolean;
isUploadDisabled?: boolean;
minSize?: number;
postUploadAction?: PostUploadAction;
imageSx?: ChakraProps['sx'];
fitContainer?: boolean;
droppableData?: TypesafeDroppableData;
draggableData?: TypesafeDraggableData;
dropLabel?: string;
isSelected?: boolean;
isSelectedForCompare?: boolean;
thumbnail?: boolean;
noContentFallback?: ReactElement;
useThumbailFallback?: boolean;
withHoverOverlay?: boolean;
children?: JSX.Element;
uploadElement?: ReactNode;
dataTestId?: string;
};
const IAIDndImage = (props: IAIDndImageProps) => {
const {
imageDTO,
onError,
onClick,
withMetadataOverlay = false,
isDropDisabled = false,
isDragDisabled = false,
isUploadDisabled = false,
minSize = 24,
postUploadAction,
imageSx,
fitContainer = false,
droppableData,
draggableData,
dropLabel,
isSelected = false,
isSelectedForCompare = false,
thumbnail = false,
noContentFallback = defaultNoContentFallback,
uploadElement = defaultUploadElement,
useThumbailFallback,
withHoverOverlay = false,
children,
dataTestId,
...rest
} = props;
const openInNewTab = useCallback(
(e: MouseEvent) => {
if (!imageDTO) {
return;
}
if (e.button !== 1) {
return;
}
window.open(imageDTO.image_url, '_blank');
},
[imageDTO]
);
const ref = useRef<HTMLDivElement>(null);
useImageContextMenu(imageDTO, ref);
return (
<Flex
ref={ref}
width="full"
height="full"
alignItems="center"
justifyContent="center"
position="relative"
minW={minSize ? minSize : undefined}
minH={minSize ? minSize : undefined}
userSelect="none"
cursor={isDragDisabled || !imageDTO ? 'default' : 'pointer'}
sx={withHoverOverlay ? sx : baseStyles}
data-selected={isSelectedForCompare ? 'selectedForCompare' : isSelected ? 'selected' : undefined}
{...rest}
>
{imageDTO && (
<Flex
className="gallery-image-container"
w="full"
h="full"
position={fitContainer ? 'absolute' : 'relative'}
alignItems="center"
justifyContent="center"
>
<Image
src={thumbnail ? imageDTO.thumbnail_url : imageDTO.image_url}
fallbackStrategy="beforeLoadOrError"
fallbackSrc={useThumbailFallback ? imageDTO.thumbnail_url : undefined}
fallback={useThumbailFallback ? undefined : <IAILoadingImageFallback image={imageDTO} />}
onError={onError}
draggable={false}
w={imageDTO.width}
objectFit="contain"
maxW="full"
maxH="full"
borderRadius="base"
sx={imageSx}
data-testid={dataTestId}
/>
{withMetadataOverlay && <ImageMetadataOverlay imageDTO={imageDTO} />}
</Flex>
)}
{!imageDTO && !isUploadDisabled && (
<UploadButton
isUploadDisabled={isUploadDisabled}
postUploadAction={postUploadAction}
uploadElement={uploadElement}
minSize={minSize}
/>
)}
{!imageDTO && isUploadDisabled && noContentFallback}
{imageDTO && !isDragDisabled && (
<IAIDraggable
data={draggableData}
disabled={isDragDisabled || !imageDTO}
onClick={onClick}
onAuxClick={openInNewTab}
/>
)}
{children}
{!isDropDisabled && <IAIDroppable data={droppableData} disabled={isDropDisabled} dropLabel={dropLabel} />}
</Flex>
);
};
export default memo(IAIDndImage);
const UploadButton = memo(
({
isUploadDisabled,
postUploadAction,
uploadElement,
minSize,
}: {
isUploadDisabled: boolean;
postUploadAction?: PostUploadAction;
uploadElement: ReactNode;
minSize: number;
}) => {
const { getUploadButtonProps, getUploadInputProps } = useImageUploadButton({
postUploadAction,
isDisabled: isUploadDisabled,
});
const uploadButtonStyles = useMemo<SystemStyleObject>(() => {
const styles: SystemStyleObject = {
minH: minSize,
w: 'full',
h: 'full',
alignItems: 'center',
justifyContent: 'center',
borderRadius: 'base',
transitionProperty: 'common',
transitionDuration: '0.1s',
color: 'base.500',
};
if (!isUploadDisabled) {
Object.assign(styles, {
cursor: 'pointer',
bg: 'base.700',
_hover: {
bg: 'base.650',
color: 'base.300',
},
});
}
return styles;
}, [isUploadDisabled, minSize]);
return (
<Flex sx={uploadButtonStyles} {...getUploadButtonProps()}>
<input {...getUploadInputProps()} />
{uploadElement}
</Flex>
);
}
);
UploadButton.displayName = 'UploadButton';


@@ -1,38 +0,0 @@
import type { BoxProps } from '@invoke-ai/ui-library';
import { Box } from '@invoke-ai/ui-library';
import { useDraggableTypesafe } from 'features/dnd/hooks/typesafeHooks';
import type { TypesafeDraggableData } from 'features/dnd/types';
import { memo, useRef } from 'react';
import { v4 as uuidv4 } from 'uuid';
type IAIDraggableProps = BoxProps & {
disabled?: boolean;
data?: TypesafeDraggableData;
};
const IAIDraggable = (props: IAIDraggableProps) => {
const { data, disabled, ...rest } = props;
const dndId = useRef(uuidv4());
const { attributes, listeners, setNodeRef } = useDraggableTypesafe({
id: dndId.current,
disabled,
data,
});
return (
<Box
ref={setNodeRef}
position="absolute"
w="full"
h="full"
top={0}
insetInlineStart={0}
{...attributes}
{...listeners}
{...rest}
/>
);
};
export default memo(IAIDraggable);


@@ -1,64 +0,0 @@
import { Flex, Text } from '@invoke-ai/ui-library';
import { memo } from 'react';
type Props = {
isOver: boolean;
label?: string;
withBackdrop?: boolean;
};
const IAIDropOverlay = (props: Props) => {
const { isOver, label, withBackdrop = true } = props;
return (
<Flex position="absolute" top={0} right={0} bottom={0} left={0}>
<Flex
position="absolute"
top={0}
right={0}
bottom={0}
left={0}
w="full"
h="full"
bg={withBackdrop ? 'base.900' : 'transparent'}
opacity={0.7}
borderRadius="base"
alignItems="center"
justifyContent="center"
transitionProperty="common"
transitionDuration="0.1s"
/>
<Flex
position="absolute"
top={0.5}
right={0.5}
bottom={0.5}
left={0.5}
opacity={1}
borderWidth={1.5}
borderColor={isOver ? 'invokeYellow.300' : 'base.500'}
borderRadius="base"
borderStyle="dashed"
transitionProperty="common"
transitionDuration="0.1s"
alignItems="center"
justifyContent="center"
>
{label && (
<Text
fontSize="lg"
fontWeight="semibold"
color={isOver ? 'invokeYellow.300' : 'base.500'}
transitionProperty="common"
transitionDuration="0.1s"
textAlign="center"
>
{label}
</Text>
)}
</Flex>
</Flex>
);
};
export default memo(IAIDropOverlay);


@@ -1,46 +0,0 @@
import { Box } from '@invoke-ai/ui-library';
import { useDroppableTypesafe } from 'features/dnd/hooks/typesafeHooks';
import type { TypesafeDroppableData } from 'features/dnd/types';
import { isValidDrop } from 'features/dnd/util/isValidDrop';
import { AnimatePresence } from 'framer-motion';
import { memo, useRef } from 'react';
import { v4 as uuidv4 } from 'uuid';
import IAIDropOverlay from './IAIDropOverlay';
type IAIDroppableProps = {
dropLabel?: string;
disabled?: boolean;
data?: TypesafeDroppableData;
};
const IAIDroppable = (props: IAIDroppableProps) => {
const { dropLabel, data, disabled } = props;
const dndId = useRef(uuidv4());
const { isOver, setNodeRef, active } = useDroppableTypesafe({
id: dndId.current,
disabled,
data,
});
return (
<Box
ref={setNodeRef}
position="absolute"
top={0}
right={0}
bottom={0}
left={0}
w="full"
h="full"
pointerEvents={active ? 'auto' : 'none'}
>
<AnimatePresence>
{isValidDrop(data, active?.data.current) && <IAIDropOverlay isOver={isOver} label={dropLabel} />}
</AnimatePresence>
</Box>
);
};
export default memo(IAIDroppable);


@@ -1,24 +0,0 @@
import type { SystemStyleObject } from '@invoke-ai/ui-library';
import { Box, Skeleton } from '@invoke-ai/ui-library';
import { memo } from 'react';
const skeletonStyles: SystemStyleObject = {
position: 'relative',
height: 'full',
width: 'full',
'::before': {
content: "''",
display: 'block',
pt: '100%',
},
};
const IAIFillSkeleton = () => {
return (
<Skeleton sx={skeletonStyles}>
<Box position="absolute" top={0} insetInlineStart={0} height="full" width="full" />
</Skeleton>
);
};
export default memo(IAIFillSkeleton);


@@ -6,7 +6,7 @@ import type { ImageDTO } from 'services/api/types';
type Props = { image: ImageDTO | undefined };
export const IAILoadingImageFallback = memo((props: Props) => {
const IAILoadingImageFallback = memo((props: Props) => {
if (props.image) {
return (
<Skeleton


@@ -26,5 +26,9 @@ export const IconMenuItem = ({ tooltip, icon, ...props }: Props) => {
};
export const IconMenuItemGroup = ({ children }: { children: ReactNode }) => {
return <Flex gap={2}>{children}</Flex>;
return (
<Flex gap={2} justifyContent="space-between">
{children}
</Flex>
);
};


@@ -1,28 +0,0 @@
import { Badge, Flex } from '@invoke-ai/ui-library';
import { memo } from 'react';
import type { ImageDTO } from 'services/api/types';
type ImageMetadataOverlayProps = {
imageDTO: ImageDTO;
};
const ImageMetadataOverlay = ({ imageDTO }: ImageMetadataOverlayProps) => {
return (
<Flex
pointerEvents="none"
flexDirection="column"
position="absolute"
top={0}
insetInlineStart={0}
p={2}
alignItems="flex-start"
gap={2}
>
<Badge variant="solid" colorScheme="base">
{imageDTO.width} × {imageDTO.height}
</Badge>
</Flex>
);
};
export default memo(ImageMetadataOverlay);


@@ -1,89 +0,0 @@
import { Box, Flex, Heading } from '@invoke-ai/ui-library';
import { useAppSelector } from 'app/store/storeHooks';
import { selectSelectedBoardId } from 'features/gallery/store/gallerySelectors';
import { selectMaxImageUploadCount } from 'features/system/store/configSlice';
import { memo } from 'react';
import type { DropzoneState } from 'react-dropzone';
import { useHotkeys } from 'react-hotkeys-hook';
import { useTranslation } from 'react-i18next';
import { useBoardName } from 'services/api/hooks/useBoardName';
type ImageUploadOverlayProps = {
dropzone: DropzoneState;
setIsHandlingUpload: (isHandlingUpload: boolean) => void;
};
const ImageUploadOverlay = (props: ImageUploadOverlayProps) => {
const { dropzone, setIsHandlingUpload } = props;
useHotkeys(
'esc',
() => {
setIsHandlingUpload(false);
},
[setIsHandlingUpload]
);
return (
<Box position="absolute" top={0} right={0} bottom={0} left={0} zIndex={999} backdropFilter="blur(20px)">
<Flex position="absolute" top={0} right={0} bottom={0} left={0} bg="base.900" opacity={0.7} />
<Flex
position="absolute"
flexDir="column"
gap={4}
top={2}
right={2}
bottom={2}
left={2}
opacity={1}
borderWidth={2}
borderColor={dropzone.isDragAccept ? 'invokeYellow.300' : 'error.500'}
borderRadius="base"
borderStyle="dashed"
transitionProperty="common"
transitionDuration="0.1s"
alignItems="center"
justifyContent="center"
color={dropzone.isDragReject ? 'error.300' : undefined}
>
{dropzone.isDragAccept && <DragAcceptMessage />}
{!dropzone.isDragAccept && <DragRejectMessage />}
</Flex>
</Box>
);
};
export default memo(ImageUploadOverlay);
const DragAcceptMessage = () => {
const { t } = useTranslation();
const selectedBoardId = useAppSelector(selectSelectedBoardId);
const boardName = useBoardName(selectedBoardId);
return (
<>
<Heading size="lg">{t('gallery.dropToUpload')}</Heading>
<Heading size="md">{t('toast.imagesWillBeAddedTo', { boardName })}</Heading>
</>
);
};
const DragRejectMessage = () => {
const { t } = useTranslation();
const maxImageUploadCount = useAppSelector(selectMaxImageUploadCount);
if (maxImageUploadCount === undefined) {
return (
<>
<Heading size="lg">{t('toast.invalidUpload')}</Heading>
<Heading size="md">{t('toast.uploadFailedInvalidUploadDesc')}</Heading>
</>
);
}
return (
<>
<Heading size="lg">{t('toast.invalidUpload')}</Heading>
<Heading size="md">{t('toast.uploadFailedInvalidUploadDesc_withCount', { count: maxImageUploadCount })}</Heading>
</>
);
};


@@ -1,5 +1,6 @@
import type { PopoverProps } from '@invoke-ai/ui-library';
import commercialLicenseBg from 'public/assets/images/commercial-license-bg.png';
import denoisingStrength from 'public/assets/images/denoising-strength.png';
export type Feature =
| 'clipSkip'
@@ -23,8 +24,10 @@ export type Feature =
| 'dynamicPrompts'
| 'dynamicPromptsMaxPrompts'
| 'dynamicPromptsSeedBehaviour'
| 'globalReferenceImage'
| 'imageFit'
| 'infillMethod'
| 'inpainting'
| 'ipAdapterMethod'
| 'lora'
| 'loraWeight'
@@ -46,6 +49,7 @@ export type Feature =
| 'paramVAEPrecision'
| 'paramWidth'
| 'patchmatchDownScaleSize'
| 'rasterLayer'
| 'refinerModel'
| 'refinerNegativeAestheticScore'
| 'refinerPositiveAestheticScore'
@@ -53,6 +57,9 @@ export type Feature =
| 'refinerStart'
| 'refinerSteps'
| 'refinerCfgScale'
| 'regionalGuidance'
| 'regionalGuidanceAndReferenceImage'
| 'regionalReferenceImage'
| 'scaleBeforeProcessing'
| 'seamlessTilingXAxis'
| 'seamlessTilingYAxis'
@@ -76,6 +83,24 @@ export const POPOVER_DATA: { [key in Feature]?: PopoverData } = {
clipSkip: {
href: 'https://support.invoke.ai/support/solutions/articles/151000178161-advanced-settings',
},
inpainting: {
href: 'https://support.invoke.ai/support/solutions/articles/151000096702-inpainting-outpainting-and-bounding-box',
},
rasterLayer: {
href: 'https://support.invoke.ai/support/solutions/articles/151000094998-raster-layers-and-initial-images',
},
regionalGuidance: {
href: 'https://support.invoke.ai/support/solutions/articles/151000165024-regional-guidance-layers',
},
regionalGuidanceAndReferenceImage: {
href: 'https://support.invoke.ai/support/solutions/articles/151000165024-regional-guidance-layers',
},
globalReferenceImage: {
href: 'https://support.invoke.ai/support/solutions/articles/151000159340-global-and-regional-reference-images-ip-adapters-',
},
regionalReferenceImage: {
href: 'https://support.invoke.ai/support/solutions/articles/151000159340-global-and-regional-reference-images-ip-adapters-',
},
controlNet: {
href: 'https://support.invoke.ai/support/solutions/articles/151000105880',
},
@@ -101,7 +126,7 @@ export const POPOVER_DATA: { [key in Feature]?: PopoverData } = {
href: 'https://support.invoke.ai/support/solutions/articles/151000158838-compositing-settings',
},
infillMethod: {
href: 'https://support.invoke.ai/support/solutions/articles/151000158841-infill-and-scaling',
href: 'https://support.invoke.ai/support/solutions/articles/151000158838-compositing-settings',
},
scaleBeforeProcessing: {
href: 'https://support.invoke.ai/support/solutions/articles/151000158841',
@@ -114,6 +139,7 @@ export const POPOVER_DATA: { [key in Feature]?: PopoverData } = {
},
paramDenoisingStrength: {
href: 'https://support.invoke.ai/support/solutions/articles/151000094998-image-to-image',
image: denoisingStrength,
},
paramHrf: {
href: 'https://support.invoke.ai/support/solutions/articles/151000096700-how-can-i-get-larger-images-what-does-upscaling-do-',


@@ -1,9 +1,13 @@
import { combine } from '@atlaskit/pragmatic-drag-and-drop/combine';
import { autoScrollForElements } from '@atlaskit/pragmatic-drag-and-drop-auto-scroll/element';
import { autoScrollForExternal } from '@atlaskit/pragmatic-drag-and-drop-auto-scroll/external';
import type { ChakraProps } from '@invoke-ai/ui-library';
import { Box, Flex } from '@invoke-ai/ui-library';
import { getOverlayScrollbarsParams } from 'common/components/OverlayScrollbars/constants';
import type { OverlayScrollbarsComponentRef } from 'overlayscrollbars-react';
import { OverlayScrollbarsComponent } from 'overlayscrollbars-react';
import type { CSSProperties, PropsWithChildren } from 'react';
import { memo, useMemo } from 'react';
import { memo, useEffect, useMemo, useState } from 'react';
type Props = PropsWithChildren & {
maxHeight?: ChakraProps['maxHeight'];
@@ -11,17 +15,38 @@ type Props = PropsWithChildren & {
overflowY?: 'hidden' | 'scroll';
};
const styles: CSSProperties = { height: '100%', width: '100%' };
const styles: CSSProperties = { position: 'absolute', top: 0, left: 0, right: 0, bottom: 0 };
const ScrollableContent = ({ children, maxHeight, overflowX = 'hidden', overflowY = 'scroll' }: Props) => {
const overlayscrollbarsOptions = useMemo(
() => getOverlayScrollbarsParams(overflowX, overflowY).options,
[overflowX, overflowY]
);
const [os, osRef] = useState<OverlayScrollbarsComponentRef | null>(null);
useEffect(() => {
const osInstance = os?.osInstance();
if (!osInstance) {
return;
}
const element = osInstance.elements().viewport;
// `pragmatic-drag-and-drop-auto-scroll` requires the element to have `overflow-y: scroll` or `overflow-y: auto`
// else it logs an ugly warning. In our case, using a custom scrollbar library, it will be 'hidden' by default.
// To prevent the erroneous warning, we temporarily set the overflow-y to 'scroll' and then revert it back.
const overflowY = element.style.overflowY; // starts 'hidden'
element.style.setProperty('overflow-y', 'scroll', 'important');
const cleanup = combine(autoScrollForElements({ element }), autoScrollForExternal({ element }));
element.style.setProperty('overflow-y', overflowY);
return cleanup;
}, [os]);
return (
<Flex w="full" h="full" maxHeight={maxHeight} position="relative">
<Box position="absolute" top={0} left={0} right={0} bottom={0}>
<OverlayScrollbarsComponent defer style={styles} options={overlayscrollbarsOptions}>
<OverlayScrollbarsComponent ref={osRef} style={styles} options={overlayscrollbarsOptions}>
{children}
</OverlayScrollbarsComponent>
</Box>


@@ -0,0 +1,57 @@
type Props = {
/**
* The amplitude of the wave. 0 is a straight line, higher values create more pronounced waves.
*/
amplitude: number;
/**
* The number of segments in the line. More segments create a smoother wave.
*/
segments?: number;
/**
* The color of the wave.
*/
stroke: string;
/**
* The width of the wave.
*/
strokeWidth: number;
/**
* The width of the SVG.
*/
width: number;
/**
* The height of the SVG.
*/
height: number;
};
const WavyLine = ({ amplitude, stroke, strokeWidth, width, height, segments = 5 }: Props) => {
// Calculate the path dynamically based on waviness
const generatePath = () => {
if (amplitude === 0) {
// If waviness is 0, return a straight line
return `M0,${height / 2} L${width},${height / 2}`;
}
const clampedAmplitude = Math.min(height / 2, amplitude); // Cap amplitude to half the height
const segmentWidth = width / segments;
let path = `M0,${height / 2}`; // Start in the middle of the left edge
// Loop through each segment and alternate the y position to create waves
for (let i = 1; i <= segments; i++) {
const x = i * segmentWidth;
const y = height / 2 + (i % 2 === 0 ? clampedAmplitude : -clampedAmplitude);
path += ` Q${x - segmentWidth / 2},${y} ${x},${height / 2}`;
}
return path;
};
return (
<svg width={width} height={height} viewBox={`0 0 ${width} ${height}`} xmlns="http://www.w3.org/2000/svg">
<path d={generatePath()} fill="none" stroke={stroke} strokeWidth={strokeWidth} />
</svg>
);
};
export default WavyLine;
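
For reference, a usage sketch of the component above; the prop values and the import path are illustrative, not taken from the diff:

// Illustrative usage of WavyLine. amplitude={0} renders a straight line;
// larger amplitudes render a wavier path, capped at half the height.
import WavyLine from 'common/components/WavyLine'; // path assumed

export const WavyDivider = () => (
  <WavyLine amplitude={5} segments={5} stroke="currentColor" strokeWidth={1} width={100} height={16} />
);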


@@ -127,8 +127,6 @@ export const buildUseDisclosure = (defaultIsOpen: boolean): [() => UseDisclosure
*
* Hook to manage a boolean state. Use this for a local boolean state.
* @param defaultIsOpen Initial state of the disclosure
*
* @knipignore
*/
export const useDisclosure = (defaultIsOpen: boolean): UseDisclosure => {
const [isOpen, set] = useState(defaultIsOpen);

Some files were not shown because too many files have changed in this diff.