Compare commits

...

65 Commits

Author SHA1 Message Date
skunkworxdark
3cc54915e3 Bug fix to Metadata To Model & Metadata To SDXL Model for when no metadata is found. 2024-02-26 17:48:06 +00:00
skunkworxdark
228497a375 Added new nodes Metadata To ControlNets, Metadata To IP-Adapters, Metadata To T2I-Adapters 2024-02-22 16:36:57 +00:00
skunkworxdark
d43cd17f0a meta to loras
- Added `Metadata to Loras`, `Metadata To SDXL LoRAs`.
- Added unet, clip and vae outputs to `Metadata To Model` & `Metadata To SDXL Model`
- Added default vae as an input to `Metadata to VAE`
2024-02-19 19:21:09 +00:00
skunkworxdark
e4f6a1078e add meta to vae
added metadata to vae
added model exists checks
2024-02-17 14:06:30 +00:00
skunkworxdark
e9da116642 Update metadata_linked.py
bug fix for metadata to model nodes
2024-02-16 21:55:00 +00:00
skunkworxdark
9024c2f11c updates 2024-02-15 22:31:45 +00:00
skunkworxdark
a1cf091f2e Revert "Revert "Merge branch 'main' into Metadata""
This reverts commit 47335ce4fd5e54e12faba192e2f95b0d9f524398.
2024-02-15 17:57:58 +00:00
skunkworxdark
a7ff82247c Revert "Merge branch 'main' into Metadata"
This reverts commit c117c392b878330f82ba4a5b489578021ad5d8e0, reversing
changes made to 095525125841ce302b7e1be368ecc958749dff52.
2024-02-15 17:57:58 +00:00
skunkworxdark
97e29cf595 add model,vae,seamless and metaToBool 2024-02-15 17:57:58 +00:00
skunkworxdark
e444e1272c checkin rename some files 2024-02-15 17:57:58 +00:00
skunkworxdark
f177798894 updates 2024-02-15 17:57:58 +00:00
skunkworxdark
cf6b2904b1 Added scheduler and updated some descriptions 2024-02-15 17:57:58 +00:00
skunkworxdark
2643f9aa30 Added custom validation 2024-02-15 17:57:57 +00:00
skunkworxdark
13449b96ef separate new metadata nodes in own py file 2024-02-15 17:57:57 +00:00
psychedelicious
f36b5990ed fix(ui): do not provide auth headers for openapi.json 2024-02-15 10:38:26 -05:00
Millun Atluri
5706237ec7 {release} 3.7.0 (#5727)
## What type of PR is this? (check all applicable)

Release - Invoke 3.7.0

## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description
Invoke 3.7.0 Release

## QA Instructions, Screenshots, Recordings
Test Installer: 

[InvokeAI-installer-v3.7.0.zip](https://github.com/invoke-ai/InvokeAI/files/14298200/InvokeAI-installer-v3.7.0.zip)


## Merge Plan
Merge once approved

## Added/updated tests?

- [ ] Yes
- [X] No

## [optional] Are there any post deployment tasks we need to perform?
1. Release on PyPi
2. Release on GitHub
3. Announce on Discord
2024-02-15 07:59:20 -07:00
Millun Atluri
163b22a7b3 {release} 3.7.0 2024-02-15 07:34:31 -07:00
Copper Phosphate
c5aeb36230 fix: repair Dockerfile for ROCm
With these changes, the Docker image can be built and executed
successfully on hosts with AMD devices with ROCm acceleration.
Previously, a ROCm-enabled version of torch would be installed, but
later removed during installation of InvokeAI itself. This was caused by
InvokeAI needing a newer torch version than was previously installed.

The fix consists of multiple components:
* Update the hardcoded versions of torch and torchvision to the versions
  currently used in pyproject.toml, so that a new version need not be
  installed during installation of InvokeAI.
* Specify --extra-index-url on installation of InvokeAI so that even if
  a version mismatch occurs, the correct torch version should still be
  installed. This also necessitates changing --index-url to
  --extra-index-url for the Torch repo. Otherwise non-torch dependencies
  would not be found (see the sketch after this entry).
* In run.sh, build the image for the selected service.
2024-02-14 22:25:40 -05:00
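
A minimal sketch of the `--index-url` vs `--extra-index-url` distinction this fix relies on. The versions and repo URL come from the diff below; the exact command shape is an assumption, not the project's installer:

```python
# Hypothetical install step: --extra-index-url adds the ROCm wheel repo
# alongside PyPI, so both the pinned torch build and ordinary non-torch
# dependencies can resolve. --index-url would replace PyPI entirely and
# break resolution of non-torch packages.
import subprocess

subprocess.run(
    [
        "pip", "install",
        "--extra-index-url", "https://download.pytorch.org/whl/rocm5.6",
        "torch==2.1.2",
        "torchvision==0.16.2",
    ],
    check=True,
)
```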
chainchompa
5e77f0d93b Reorder exposed fields in workflow tab (#5711)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No


## Description


## Related Tickets & Documents


- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings


## Merge Plan


## Added/updated tests?

- [ ] Yes
- [ ] No

## [optional] Are there any post deployment tasks we need to perform?
2024-02-14 18:32:19 -05:00
chainchompa
d3acb81743 Merge branch 'main' into chainchompa/reorder-exposed-fields 2024-02-14 18:26:35 -05:00
Jennifer Player
e0f2404c00 added reset to default back in, removed unneeded activation constraints 2024-02-14 18:07:15 -05:00
Jennifer Player
5ed7972e5f merge conflict 2024-02-14 17:28:59 -05:00
Jennifer Player
792131be01 added drag icon, added vertical strategy for smoother scrolling 2024-02-14 17:27:21 -05:00
psychedelicious
fc278c5cb1 fix(images_default): correct get_metadata error message
The error was misleading, indicating an issue with getting the image DTO, when it was actually an issue with getting metadata.
2024-02-14 16:21:39 -05:00
blessedcoolant
d7f6af1f07 possible fix: seamless not being seamless with baked 2024-02-14 16:13:11 -05:00
blessedcoolant
ff9bd040cc possible fix: Seamless not working with Custom VAEs 2024-02-14 16:13:11 -05:00
Kent Keirsey
17d5f7bebd Critical Space Removal 2024-02-14 16:13:11 -05:00
Kent Keirsey
30dae0f5aa adding back skipped layer 2024-02-14 16:13:11 -05:00
chainchompa
161000cde6 Merge branch 'main' into chainchompa/reorder-exposed-fields 2024-02-14 15:00:54 -05:00
Jennifer Player
de832f6862 formatting 2024-02-14 15:00:18 -05:00
Jennifer Player
21ba3c63de cleanup 2024-02-14 14:52:48 -05:00
Jennifer Player
a948bd1310 refactored dndsortable to be its own component 2024-02-14 14:47:28 -05:00
Jennifer Player
2071972a8c refactored to just use a new dnd context, got reordering working and fixed flicker 2024-02-14 14:20:08 -05:00
Wubbbi
5ed2f6e6c1 bump 2024-02-14 10:15:50 -05:00
Wubbbi
b77f6bd0ad Update accelerate 0.26.1 -> 0.27.0 2024-02-14 10:15:50 -05:00
Mary Hipp Rogers
34cc26a4ed revert to using fetch, add token if needed (#5720)
Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2024-02-14 10:04:12 -05:00
Mary Hipp Rogers
9d6e4ff1fb workflow tab (#5680)
* new workflow tab UI - still using shared state with workflow editor tab

* polish workflow details

* remove workflow tab, add edit/view mode to workflow slice and get that working to switch between within editor tab

* UI updates for view/edit mode

* cleanup

* add warning to view mode

* lint

* start with isTouched false

* working on styling mode toggle

* more UX iteration

* lint

* cleanup

* save original field values to state, add indicator if they have been changed and give user choice to reset

* lint

* fix import and commit translation

* don't switch to view mode when loading a workflow

* warns before clearing editor

* use folder icon

* fix(ui): track do not erase value when resetting field value

- When adding an exposed field, we need to add it to originalExposedFieldValues
- When removing an exposed field, we need to remove it from originalExposedFieldValues
- add `useFieldValue` and `useOriginalFieldValue` hooks to encapsulate related logic

* feat(ui): use IconButton for workflow view/edit button

* feat(ui): change icon for new workflow

It was the same as the workflow tab icon, confusing because it suggests it will somehow take you to the tab.

* feat(ui): use render props for NewWorkflowConfirmationAlertDialog

There was a lot of potentially sensitive logic shared between the new workflow button and menu items. Also, two instances of ConfirmationAlertDialog.

Using a render prop deduplicates the logic & components

* fix(ui): do not mark workflow touched when loading workflow

This was occurring because the `nodesChanged` action is called by reactflow when loading a workflow. Specifically, it calculates and sets the node dimensions as it loads.

The existing logic set `isTouched` whenever this action was called.

The changes reactflow emits have types, and we can use the change types and data to determine if a change should result in the workflow being marked as touched.

* chore(ui): lint

* chore(ui): lint

* delete empty file

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2024-02-14 09:02:07 -05:00
Mary Hipp
85bbf65967 only refetch intermediates on modal open if it is enabled 2024-02-14 09:47:15 +11:00
psychedelicious
3726293258 feat(nodes): improve types in graph.py
Methods `get_node` and `complete` were typed as returning a dynamically created unions `InvocationsUnion` and `InvocationOutputsUnion`, respectively.

Static type analysers cannot work with dynamic objects, so these methods end up as effectively un-annotated, returning `Unknown`.

They now return `BaseInvocation` and `BaseInvocationOutput`, respectively, which are the superclasses of all members of each union. This gives us the best type annotation that is possible.

Note: the return types of these methods are never introspected, so it doesn't really matter what they are at runtime.
2024-02-14 07:56:10 +11:00
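
A self-contained sketch of the typing problem described above, using made-up invocation classes: a union assembled at runtime is opaque to static analysers, while the superclass annotation is a real, checkable type.

```python
from typing import Union

class BaseInvocation:
    pass

class AddInvocation(BaseInvocation):
    pass

class PromptInvocation(BaseInvocation):
    pass

# Assembled at import time from whichever subclasses are registered.
# A static analyser cannot enumerate the members, so annotations built
# from this degrade to "Unknown".
InvocationsUnion = Union[tuple(BaseInvocation.__subclasses__())]

def get_node(node_id: str) -> BaseInvocation:
    # The common superclass is the most precise static type available;
    # at runtime a concrete subclass instance is returned as before.
    return AddInvocation()
```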
Millun Atluri
8bd65be8c8 Quick Seamless Fixes (#5685)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [x] No, because: It's small

      
## Have you updated all relevant documentation?
- [ ] Yes
- [x] No


## Description
This pulls out some of the updates from the WIP Seamless branch that has
yet to be completed, and hardcodes values that are exposed in that
branch. Given that seamless currently does not generate seamless
textures, and this fix results in seamless outputs, it's an improvement
even if it doesn't resolve this in a "perfect" way that exposes all
variables to the end user.

Better over perfect. (A sketch of the circular-padding technique this
relies on follows this entry.)


![f07b7e49-80c2-4659-bb36-d50ec80b1f8b](https://github.com/invoke-ai/InvokeAI/assets/31807370/36a40bd9-8fc4-41d5-bd1e-209fc828987e)
2024-02-13 11:08:07 -07:00
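
A standalone sketch of the circular-padding idea behind the fix (not InvokeAI's exact code, which is in the `set_seamless` diff further down): convolution input is padded with wrap-around values along the tiled axis, so the output texture joins seamlessly at that border.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)

def conv_forward_seamless_x(x: torch.Tensor) -> torch.Tensor:
    # Wrap the left/right edges (seamless along x), zero-pad top/bottom,
    # then convolve with no additional padding.
    x = F.pad(x, (1, 1, 0, 0), mode="circular")
    x = F.pad(x, (0, 0, 1, 1), mode="constant", value=0.0)
    return F.conv2d(x, conv.weight, conv.bias, padding=0)

out = conv_forward_seamless_x(torch.randn(1, 3, 16, 16))
assert out.shape == (1, 8, 16, 16)
```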
Millun Atluri
783442c40d Merge branch 'main' into SeamlessFixes 2024-02-13 10:38:55 -07:00
Jennifer Player
8a147bd6e6 added sortable to linear view, not saving yet 2024-02-13 11:53:49 -05:00
psychedelicious
273994b742 chore: bump diffusers 0.26.2 -> 0.26.3
https://github.com/huggingface/diffusers/releases/tag/v0.26.3

This fixes an issue with `DPMSolverSinglestepScheduler` with even numbers of steps.
2024-02-13 08:40:42 -05:00
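
A hedged check (assumes the commonly installed `packaging` helper is importable) to confirm a local environment actually picked up the fixed release:

```python
from importlib.metadata import version
from packaging.version import Version

# DPMSolverSinglestepScheduler with an even number of steps is fixed
# in diffusers >= 0.26.3.
assert Version(version("diffusers")) >= Version("0.26.3")
```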
psychedelicious
3339ad4df8 feat(nodes): seamless.py minor cleanup 2024-02-13 13:34:48 +11:00
Kent Keirsey
c3b2a8cb27 Quick Seamless Fixes 2024-02-13 13:34:48 +11:00
Hosted Weblate
daa780940b translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2024-02-13 13:20:30 +11:00
Riccardo Giovanetti
2289680ae1 translationBot(ui): update translation (Italian)
Currently translated at 97.2% (1377 of 1416 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2024-02-13 13:20:30 +11:00
B N
cda85a0637 translationBot(ui): update translation (German)
Currently translated at 79.4% (1128 of 1419 strings)

translationBot(ui): update translation (German)

Currently translated at 78.1% (1107 of 1416 strings)

Co-authored-by: B N <berndnieschalk@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
2024-02-13 13:20:30 +11:00
psychedelicious
1d9801e7be fix(ui): add input el for workflow upload button
Need this to select the file
2024-02-13 13:18:31 +11:00
Mary Hipp
3ecb1e580f update bc button is only ever used in modal context 2024-02-13 13:18:31 +11:00
Mary Hipp
6301e58a2e move upload button into workflow library modal 2024-02-13 13:18:31 +11:00
SoheilRezaei
5dd552effa Update 020_INSTALL_MANUAL.md (#5700)
updated the commands for running InvokeAI locally and as a web server

Co-authored-by: Millun Atluri <Millu@users.noreply.github.com>
2024-02-13 00:36:00 +00:00
Mary Hipp Rogers
25ce505628 exposed field loading state (#5704)
* remove thunk for receivedOpenApiSchema and use RTK query instead. add loading state for exposed fields

* clean up

* ignore any

* fix(ui): do not log on canceled openapi.json queries

- Rely on RTK Query for the `loadSchema` query by providing a custom `jsonReplacer` in our `dynamicBaseQuery`, so we don't need to manage error state.
- Detect when the query was canceled and do not log the error message in those situations.

* feat(ui): `utilitiesApi.endpoints.loadSchema` -> `appInfoApi.endpoints.getOpenAPISchema`

- Utilities is for server actions, move this to `appInfo` because it fits better there.
- Rename to match convention for HTTP GET queries.
- Fix inverted logic in the `matchRejected` listener (typo'd this)

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2024-02-12 18:48:32 -05:00
Millun Atluri
1dd07fb1eb Updated docs on OpenPose 2024-02-12 11:12:45 -05:00
blessedcoolant
e82c21b5ba chore: rename DWPose to DW Openpose 2024-02-12 11:12:45 -05:00
blessedcoolant
50b93992cf cleanup: Remove Openpose Image Processor 2024-02-12 11:12:45 -05:00
blessedcoolant
f8e566d62a cleanup: unused util functions 2024-02-12 11:12:45 -05:00
blessedcoolant
f588b95c7f cleanup: remove unused code from the DWPose implementation 2024-02-12 11:12:45 -05:00
blessedcoolant
67daf1751c fix: lint errors 2024-02-12 11:12:45 -05:00
blessedcoolant
7d80261d47 chore: Add code attribution for the DWPoseDetector 2024-02-12 11:12:45 -05:00
blessedcoolant
67cbfeb33d feat: Add output image resizing for DWPose 2024-02-12 11:12:45 -05:00
blessedcoolant
f7998b4be0 feat: Add DWPose to Linear UI 2024-02-12 11:12:45 -05:00
blessedcoolant
675c73c94f fix: ruff lint errors 2024-02-12 11:12:45 -05:00
blessedcoolant
0a27b0379f feat: Initial implementation of DWPoseDetector 2024-02-12 11:12:45 -05:00
psychedelicious
0ef18b6477 fix(ui): enable lora when recalling
Closes #5698
2024-02-12 16:47:46 +11:00
84 changed files with 4902 additions and 631 deletions

View File

@@ -18,8 +18,8 @@ ENV INVOKEAI_SRC=/opt/invokeai
ENV VIRTUAL_ENV=/opt/venv/invokeai
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
-ARG TORCH_VERSION=2.1.0
-ARG TORCHVISION_VERSION=0.16
+ARG TORCH_VERSION=2.1.2
+ARG TORCHVISION_VERSION=0.16.2
ARG GPU_DRIVER=cuda
ARG TARGETPLATFORM="linux/amd64"
# unused but available
@@ -35,7 +35,7 @@ RUN --mount=type=cache,target=/root/.cache/pip \
if [ "$TARGETPLATFORM" = "linux/arm64" ] || [ "$GPU_DRIVER" = "cpu" ]; then \
extra_index_url_arg="--extra-index-url https://download.pytorch.org/whl/cpu"; \
elif [ "$GPU_DRIVER" = "rocm" ]; then \
extra_index_url_arg="--index-url https://download.pytorch.org/whl/rocm5.6"; \
extra_index_url_arg="--extra-index-url https://download.pytorch.org/whl/rocm5.6"; \
else \
extra_index_url_arg="--extra-index-url https://download.pytorch.org/whl/cu121"; \
fi &&\
@@ -54,7 +54,7 @@ RUN --mount=type=cache,target=/root/.cache/pip \
if [ "$GPU_DRIVER" = "cuda" ] && [ "$TARGETPLATFORM" = "linux/amd64" ]; then \
pip install -e ".[xformers]"; \
else \
pip install -e "."; \
pip install $extra_index_url_arg -e "."; \
fi
# #### Build the Web UI ------------------------------------

View File

@@ -21,7 +21,7 @@ run() {
printf "%s\n" "$build_args"
fi
-docker compose build $build_args
+docker compose build $build_args $service_name
unset build_args
printf "%s\n" "starting service $service_name"

View File

@@ -94,6 +94,8 @@ A model that helps generate creative QR codes that still scan. Can also be used
**Openpose**:
The OpenPose control model allows for the identification of the general pose of a character by pre-processing an existing image with a clear human structure. With advanced options, Openpose can also detect the face or hands in the image.
+*Note:* The DWPose Processor has replaced the OpenPose processor in Invoke. Workflows and generations that relied on the OpenPose Processor will need to be updated to use the DWPose Processor instead.
**Mediapipe Face**:
The MediaPipe Face identification processor is able to clearly identify facial features in order to capture vivid expressions of human faces.

View File

@@ -230,13 +230,13 @@ manager, please follow these steps:
=== "local Webserver"
```bash
-invokeai --web
+invokeai-web
```
=== "Public Webserver"
```bash
-invokeai --web --host 0.0.0.0
+invokeai-web --host 0.0.0.0
```
=== "CLI"
@@ -402,4 +402,4 @@ environment variable INVOKEAI_ROOT to point to the installation directory.
Note that if you run into problems with the Conda installation, the InvokeAI
staff will **not** be able to help you out. Caveat Emptor!
-[dev-chat]: https://discord.com/channels/1020123559063990373/1049495067846524939
+[dev-chat]: https://discord.com/channels/1020123559063990373/1049495067846524939

View File

@@ -81,7 +81,7 @@ their descriptions.
| ONNX Text to Latents | Generates latents from conditionings. |
| ONNX Model Loader | Loads a main model, outputting its submodels. |
| OpenCV Inpaint | Simple inpaint using opencv. |
-| Openpose Processor | Applies Openpose processing to image |
+| DW Openpose Processor | Applies Openpose processing to image |
| PIDI Processor | Applies PIDI processing to image |
| Prompts from File | Loads prompts from a text file |
| Random Integer | Outputs a single random integer. |

View File

@@ -0,0 +1,131 @@
from typing import Optional, Union
from invokeai.app.invocations.baseinvocation import (
BaseInvocationOutput,
Input,
InputField,
InvocationContext,
OutputField,
invocation,
invocation_output,
)
from invokeai.app.invocations.controlnet_image_processors import ControlField, ControlNetInvocation
from invokeai.app.invocations.ip_adapter import IPAdapterField, IPAdapterInvocation
from invokeai.app.invocations.t2i_adapter import T2IAdapterField, T2IAdapterInvocation
from invokeai.app.shared.fields import FieldDescriptions
def append_list(new_item, items, item_cls):
"""Add an item to an exiting item or list of items then output as a list of items."""
result = []
if items is None or (isinstance(items, list) and len(items) == 0):
pass
elif isinstance(items, item_cls):
result.append(items)
elif isinstance(items, list) and all(isinstance(i, item_cls) for i in items):
result.extend(items)
else:
raise ValueError(f"Invalid adapter list format: {items}")
result.append(new_item)
return result
@invocation_output("control_list_output")
class ControlListOutput(BaseInvocationOutput):
# Outputs
control_list: list[ControlField] = OutputField(description=FieldDescriptions.control)
@invocation(
"controlnet-linked",
title="ControlNet-Linked",
tags=["controlnet"],
category="controlnet",
version="1.1.0",
)
class ControlNetLinkedInvocation(ControlNetInvocation):
"""Collects ControlNet info to pass to other nodes."""
control_list: Optional[Union[ControlField, list[ControlField]]] = InputField(
default=None,
title="ControlNet-List",
input=Input.Connection,
ui_order=0,
)
def invoke(self, context: InvocationContext) -> ControlListOutput:
# Call parent
output = super().invoke(context).control
# Append the control output to the input list
control_list = append_list(output, self.control_list, ControlField)
return ControlListOutput(control_list=control_list)
@invocation_output("ip_adapter_list_output")
class IPAdapterListOutput(BaseInvocationOutput):
# Outputs
ip_adapter_list: list[IPAdapterField] = OutputField(
description=FieldDescriptions.ip_adapter, title="IP-Adapter-List"
)
@invocation(
"ip_adapter_linked",
title="IP-Adapter-Linked",
tags=["ip_adapter", "control"],
category="ip_adapter",
version="1.1.0",
)
class IPAdapterLinkedInvocation(IPAdapterInvocation):
"""Collects IP-Adapter info to pass to other nodes."""
ip_adapter_list: Optional[Union[IPAdapterField, list[IPAdapterField]]] = InputField(
description=FieldDescriptions.ip_adapter,
title="IP-Adapter-List",
default=None,
input=Input.Connection,
ui_order=0,
)
def invoke(self, context: InvocationContext) -> IPAdapterListOutput:
# Call parent
output = super().invoke(context).ip_adapter
        # Append the IP-Adapter output to the input list
result = append_list(output, self.ip_adapter_list, IPAdapterField)
return IPAdapterListOutput(ip_adapter_list=result)
@invocation_output("ip_adapters_output")
class T2IAdapterListOutput(BaseInvocationOutput):
# Outputs
t2i_adapter_list: list[T2IAdapterField] = OutputField(
description=FieldDescriptions.t2i_adapter, title="T2I Adapter-List"
)
@invocation(
"t2i_adapter_linked",
title="T2I-Adapter-Linked",
tags=["t2i_adapter", "control"],
category="t2i_adapter",
version="1.0.0",
)
class T2IAdapterLinkedInvocation(T2IAdapterInvocation):
"""Collects T2I-Adapter info to pass to other nodes."""
t2i_adapter_list: Optional[Union[T2IAdapterField, list[T2IAdapterField]]] = InputField(
description=FieldDescriptions.ip_adapter,
title="T2I-Adapter",
default=None,
input=Input.Connection,
ui_order=0,
)
def invoke(self, context: InvocationContext) -> T2IAdapterListOutput:
# Call parent
output = super().invoke(context).t2i_adapter
        # Append the T2I-Adapter output to the input list
t2i_adapter_list = append_list(output, self.t2i_adapter_list, T2IAdapterField)
return T2IAdapterListOutput(t2i_adapter_list=t2i_adapter_list)
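
A behavioural sketch of `append_list` from the hunk above (assumes the function is in scope; plain ints stand in for the ControlField/IPAdapterField/T2IAdapterField classes):

```python
# No existing items -> a list holding only the new item.
assert append_list(3, None, int) == [3]
# A single existing item is promoted to a list before appending.
assert append_list(3, 1, int) == [1, 3]
# An existing list is copied and extended, leaving the input untouched.
assert append_list(3, [1, 2], int) == [1, 2, 3]
```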

View File

@@ -17,7 +17,6 @@ from controlnet_aux import (
MidasDetector,
MLSDdetector,
NormalBaeDetector,
-    OpenposeDetector,
PidiNetDetector,
SamDetector,
ZoeDetector,
@@ -31,6 +30,7 @@ from invokeai.app.invocations.util import validate_begin_end_step, validate_weig
from invokeai.app.services.image_records.image_records_common import ImageCategory, ResourceOrigin
from invokeai.app.shared.fields import FieldDescriptions
from invokeai.backend.image_util.depth_anything import DepthAnythingDetector
+from invokeai.backend.image_util.dw_openpose import DWOpenposeDetector
from ...backend.model_management import BaseModelType
from .baseinvocation import (
@@ -276,31 +276,6 @@ class LineartAnimeImageProcessorInvocation(ImageProcessorInvocation):
return processed_image
-@invocation(
-    "openpose_image_processor",
-    title="Openpose Processor",
-    tags=["controlnet", "openpose", "pose"],
-    category="controlnet",
-    version="1.2.0",
-)
-class OpenposeImageProcessorInvocation(ImageProcessorInvocation):
-    """Applies Openpose processing to image"""
-
-    hand_and_face: bool = InputField(default=False, description="Whether to use hands and face mode")
-    detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
-    image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
-
-    def run_processor(self, image):
-        openpose_processor = OpenposeDetector.from_pretrained("lllyasviel/Annotators")
-        processed_image = openpose_processor(
-            image,
-            detect_resolution=self.detect_resolution,
-            image_resolution=self.image_resolution,
-            hand_and_face=self.hand_and_face,
-        )
-        return processed_image
@invocation(
"midas_depth_image_processor",
title="Midas Depth Processor",
@@ -624,7 +599,7 @@ class DepthAnythingImageProcessorInvocation(ImageProcessorInvocation):
resolution: int = InputField(default=512, ge=64, multiple_of=64, description=FieldDescriptions.image_res)
offload: bool = InputField(default=False)
-    def run_processor(self, image):
+    def run_processor(self, image: Image.Image):
depth_anything_detector = DepthAnythingDetector()
depth_anything_detector.load_model(model_size=self.model_size)
@@ -633,3 +608,30 @@ class DepthAnythingImageProcessorInvocation(ImageProcessorInvocation):
processed_image = depth_anything_detector(image=image, resolution=self.resolution, offload=self.offload)
return processed_image
+@invocation(
+    "dw_openpose_image_processor",
+    title="DW Openpose Image Processor",
+    tags=["controlnet", "dwpose", "openpose"],
+    category="controlnet",
+    version="1.0.0",
+)
+class DWOpenposeImageProcessorInvocation(ImageProcessorInvocation):
+    """Generates an openpose pose from an image using DWPose"""
+
+    draw_body: bool = InputField(default=True)
+    draw_face: bool = InputField(default=False)
+    draw_hands: bool = InputField(default=False)
+    image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
+
+    def run_processor(self, image):
+        dw_openpose = DWOpenposeDetector()
+        processed_image = dw_openpose(
+            image,
+            draw_face=self.draw_face,
+            draw_hands=self.draw_hands,
+            draw_body=self.draw_body,
+            resolution=self.image_resolution,
+        )
+        return processed_image

File diff suppressed because it is too large

View File

@@ -154,7 +154,7 @@ class ImageService(ImageServiceABC):
self.__invoker.services.logger.error("Image record not found")
raise
except Exception as e:
self.__invoker.services.logger.error("Problem getting image DTO")
self.__invoker.services.logger.error("Problem getting image metadata")
raise e
def get_workflow(self, image_name: str) -> Optional[WorkflowWithoutID]:

View File

@@ -540,7 +540,7 @@ class Graph(BaseModel):
except NodeNotFoundError:
return False
-    def get_node(self, node_path: str) -> InvocationsUnion:
+    def get_node(self, node_path: str) -> BaseInvocation:
"""Gets a node from the graph using a node path."""
# Materialized graphs may have nodes at the top level
graph, node_id = self._get_graph_and_node(node_path)
@@ -891,7 +891,7 @@ class GraphExecutionState(BaseModel):
# If next is still none, there's no next node, return None
return next_node
-    def complete(self, node_id: str, output: InvocationOutputsUnion):
+    def complete(self, node_id: str, output: BaseInvocationOutput) -> None:
"""Marks a node as complete"""
if node_id not in self.execution_graph.nodes:

View File

@@ -0,0 +1,81 @@
import numpy as np
import torch
from controlnet_aux.util import resize_image
from PIL import Image
from invokeai.backend.image_util.dw_openpose.utils import draw_bodypose, draw_facepose, draw_handpose
from invokeai.backend.image_util.dw_openpose.wholebody import Wholebody
def draw_pose(pose, H, W, draw_face=True, draw_body=True, draw_hands=True, resolution=512):
bodies = pose["bodies"]
faces = pose["faces"]
hands = pose["hands"]
candidate = bodies["candidate"]
subset = bodies["subset"]
canvas = np.zeros(shape=(H, W, 3), dtype=np.uint8)
if draw_body:
canvas = draw_bodypose(canvas, candidate, subset)
if draw_hands:
canvas = draw_handpose(canvas, hands)
if draw_face:
canvas = draw_facepose(canvas, faces)
dwpose_image = resize_image(
canvas,
resolution,
)
dwpose_image = Image.fromarray(dwpose_image)
return dwpose_image
class DWOpenposeDetector:
"""
Code from the original implementation of the DW Openpose Detector.
Credits: https://github.com/IDEA-Research/DWPose
"""
def __init__(self) -> None:
self.pose_estimation = Wholebody()
def __call__(
self, image: Image.Image, draw_face=False, draw_body=True, draw_hands=False, resolution=512
) -> Image.Image:
np_image = np.array(image)
H, W, C = np_image.shape
with torch.no_grad():
candidate, subset = self.pose_estimation(np_image)
nums, keys, locs = candidate.shape
candidate[..., 0] /= float(W)
candidate[..., 1] /= float(H)
body = candidate[:, :18].copy()
body = body.reshape(nums * 18, locs)
score = subset[:, :18]
for i in range(len(score)):
for j in range(len(score[i])):
if score[i][j] > 0.3:
score[i][j] = int(18 * i + j)
else:
score[i][j] = -1
un_visible = subset < 0.3
candidate[un_visible] = -1
# foot = candidate[:, 18:24]
faces = candidate[:, 24:92]
hands = candidate[:, 92:113]
hands = np.vstack([hands, candidate[:, 113:]])
bodies = {"candidate": body, "subset": score}
pose = {"bodies": bodies, "hands": hands, "faces": faces}
return draw_pose(
pose, H, W, draw_face=draw_face, draw_hands=draw_hands, draw_body=draw_body, resolution=resolution
)
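
A hypothetical usage sketch for the detector above (the import path is taken from the processor diff earlier; `person.png` is a placeholder, and the two ONNX models download on first use):

```python
from PIL import Image

from invokeai.backend.image_util.dw_openpose import DWOpenposeDetector

detector = DWOpenposeDetector()
pose_image = detector(
    Image.open("person.png").convert("RGB"),
    draw_body=True,
    draw_face=False,
    draw_hands=True,
    resolution=512,
)
pose_image.save("pose.png")
```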

View File

@@ -0,0 +1,128 @@
# Code from the original DWPose Implementation: https://github.com/IDEA-Research/DWPose
import cv2
import numpy as np
def nms(boxes, scores, nms_thr):
"""Single class NMS implemented in Numpy."""
x1 = boxes[:, 0]
y1 = boxes[:, 1]
x2 = boxes[:, 2]
y2 = boxes[:, 3]
areas = (x2 - x1 + 1) * (y2 - y1 + 1)
order = scores.argsort()[::-1]
keep = []
while order.size > 0:
i = order[0]
keep.append(i)
xx1 = np.maximum(x1[i], x1[order[1:]])
yy1 = np.maximum(y1[i], y1[order[1:]])
xx2 = np.minimum(x2[i], x2[order[1:]])
yy2 = np.minimum(y2[i], y2[order[1:]])
w = np.maximum(0.0, xx2 - xx1 + 1)
h = np.maximum(0.0, yy2 - yy1 + 1)
inter = w * h
ovr = inter / (areas[i] + areas[order[1:]] - inter)
inds = np.where(ovr <= nms_thr)[0]
order = order[inds + 1]
return keep
def multiclass_nms(boxes, scores, nms_thr, score_thr):
"""Multiclass NMS implemented in Numpy. Class-aware version."""
final_dets = []
num_classes = scores.shape[1]
for cls_ind in range(num_classes):
cls_scores = scores[:, cls_ind]
valid_score_mask = cls_scores > score_thr
if valid_score_mask.sum() == 0:
continue
else:
valid_scores = cls_scores[valid_score_mask]
valid_boxes = boxes[valid_score_mask]
keep = nms(valid_boxes, valid_scores, nms_thr)
if len(keep) > 0:
cls_inds = np.ones((len(keep), 1)) * cls_ind
dets = np.concatenate([valid_boxes[keep], valid_scores[keep, None], cls_inds], 1)
final_dets.append(dets)
if len(final_dets) == 0:
return None
return np.concatenate(final_dets, 0)
def demo_postprocess(outputs, img_size, p6=False):
grids = []
expanded_strides = []
strides = [8, 16, 32] if not p6 else [8, 16, 32, 64]
hsizes = [img_size[0] // stride for stride in strides]
wsizes = [img_size[1] // stride for stride in strides]
for hsize, wsize, stride in zip(hsizes, wsizes, strides, strict=False):
xv, yv = np.meshgrid(np.arange(wsize), np.arange(hsize))
grid = np.stack((xv, yv), 2).reshape(1, -1, 2)
grids.append(grid)
shape = grid.shape[:2]
expanded_strides.append(np.full((*shape, 1), stride))
grids = np.concatenate(grids, 1)
expanded_strides = np.concatenate(expanded_strides, 1)
outputs[..., :2] = (outputs[..., :2] + grids) * expanded_strides
outputs[..., 2:4] = np.exp(outputs[..., 2:4]) * expanded_strides
return outputs
def preprocess(img, input_size, swap=(2, 0, 1)):
if len(img.shape) == 3:
padded_img = np.ones((input_size[0], input_size[1], 3), dtype=np.uint8) * 114
else:
padded_img = np.ones(input_size, dtype=np.uint8) * 114
r = min(input_size[0] / img.shape[0], input_size[1] / img.shape[1])
resized_img = cv2.resize(
img,
(int(img.shape[1] * r), int(img.shape[0] * r)),
interpolation=cv2.INTER_LINEAR,
).astype(np.uint8)
padded_img[: int(img.shape[0] * r), : int(img.shape[1] * r)] = resized_img
padded_img = padded_img.transpose(swap)
padded_img = np.ascontiguousarray(padded_img, dtype=np.float32)
return padded_img, r
def inference_detector(session, oriImg):
input_shape = (640, 640)
img, ratio = preprocess(oriImg, input_shape)
ort_inputs = {session.get_inputs()[0].name: img[None, :, :, :]}
output = session.run(None, ort_inputs)
predictions = demo_postprocess(output[0], input_shape)[0]
boxes = predictions[:, :4]
scores = predictions[:, 4:5] * predictions[:, 5:]
boxes_xyxy = np.ones_like(boxes)
boxes_xyxy[:, 0] = boxes[:, 0] - boxes[:, 2] / 2.0
boxes_xyxy[:, 1] = boxes[:, 1] - boxes[:, 3] / 2.0
boxes_xyxy[:, 2] = boxes[:, 0] + boxes[:, 2] / 2.0
boxes_xyxy[:, 3] = boxes[:, 1] + boxes[:, 3] / 2.0
boxes_xyxy /= ratio
dets = multiclass_nms(boxes_xyxy, scores, nms_thr=0.45, score_thr=0.1)
if dets is not None:
final_boxes, final_scores, final_cls_inds = dets[:, :4], dets[:, 4], dets[:, 5]
isscore = final_scores > 0.3
iscat = final_cls_inds == 0
isbbox = [i and j for (i, j) in zip(isscore, iscat, strict=False)]
final_boxes = final_boxes[isbbox]
else:
final_boxes = np.array([])
return final_boxes

View File

@@ -0,0 +1,361 @@
# Code from the original DWPose Implementation: https://github.com/IDEA-Research/DWPose
from typing import List, Tuple
import cv2
import numpy as np
import onnxruntime as ort
def preprocess(
img: np.ndarray, out_bbox, input_size: Tuple[int, int] = (192, 256)
) -> Tuple[np.ndarray, np.ndarray, np.ndarray]:
"""Do preprocessing for RTMPose model inference.
Args:
img (np.ndarray): Input image in shape.
input_size (tuple): Input image size in shape (w, h).
Returns:
tuple:
- resized_img (np.ndarray): Preprocessed image.
- center (np.ndarray): Center of image.
- scale (np.ndarray): Scale of image.
"""
# get shape of image
img_shape = img.shape[:2]
out_img, out_center, out_scale = [], [], []
if len(out_bbox) == 0:
out_bbox = [[0, 0, img_shape[1], img_shape[0]]]
for i in range(len(out_bbox)):
x0 = out_bbox[i][0]
y0 = out_bbox[i][1]
x1 = out_bbox[i][2]
y1 = out_bbox[i][3]
bbox = np.array([x0, y0, x1, y1])
# get center and scale
center, scale = bbox_xyxy2cs(bbox, padding=1.25)
# do affine transformation
resized_img, scale = top_down_affine(input_size, scale, center, img)
# normalize image
mean = np.array([123.675, 116.28, 103.53])
std = np.array([58.395, 57.12, 57.375])
resized_img = (resized_img - mean) / std
out_img.append(resized_img)
out_center.append(center)
out_scale.append(scale)
return out_img, out_center, out_scale
def inference(sess: ort.InferenceSession, img: np.ndarray) -> np.ndarray:
"""Inference RTMPose model.
Args:
sess (ort.InferenceSession): ONNXRuntime session.
img (np.ndarray): Input image in shape.
Returns:
outputs (np.ndarray): Output of RTMPose model.
"""
all_out = []
# build input
for i in range(len(img)):
input = [img[i].transpose(2, 0, 1)]
# build output
sess_input = {sess.get_inputs()[0].name: input}
sess_output = []
for out in sess.get_outputs():
sess_output.append(out.name)
# run model
outputs = sess.run(sess_output, sess_input)
all_out.append(outputs)
return all_out
def postprocess(
outputs: List[np.ndarray],
model_input_size: Tuple[int, int],
center: Tuple[int, int],
scale: Tuple[int, int],
simcc_split_ratio: float = 2.0,
) -> Tuple[np.ndarray, np.ndarray]:
"""Postprocess for RTMPose model output.
Args:
outputs (np.ndarray): Output of RTMPose model.
model_input_size (tuple): RTMPose model Input image size.
center (tuple): Center of bbox in shape (x, y).
scale (tuple): Scale of bbox in shape (w, h).
simcc_split_ratio (float): Split ratio of simcc.
Returns:
tuple:
- keypoints (np.ndarray): Rescaled keypoints.
- scores (np.ndarray): Model predict scores.
"""
all_key = []
all_score = []
for i in range(len(outputs)):
# use simcc to decode
simcc_x, simcc_y = outputs[i]
keypoints, scores = decode(simcc_x, simcc_y, simcc_split_ratio)
# rescale keypoints
keypoints = keypoints / model_input_size * scale[i] + center[i] - scale[i] / 2
all_key.append(keypoints[0])
all_score.append(scores[0])
return np.array(all_key), np.array(all_score)
def bbox_xyxy2cs(bbox: np.ndarray, padding: float = 1.0) -> Tuple[np.ndarray, np.ndarray]:
"""Transform the bbox format from (x,y,w,h) into (center, scale)
Args:
bbox (ndarray): Bounding box(es) in shape (4,) or (n, 4), formatted
as (left, top, right, bottom)
        padding (float): BBox padding factor that will be multiplied to scale.
Default: 1.0
Returns:
tuple: A tuple containing center and scale.
- np.ndarray[float32]: Center (x, y) of the bbox in shape (2,) or
(n, 2)
- np.ndarray[float32]: Scale (w, h) of the bbox in shape (2,) or
(n, 2)
"""
# convert single bbox from (4, ) to (1, 4)
dim = bbox.ndim
if dim == 1:
bbox = bbox[None, :]
# get bbox center and scale
x1, y1, x2, y2 = np.hsplit(bbox, [1, 2, 3])
center = np.hstack([x1 + x2, y1 + y2]) * 0.5
scale = np.hstack([x2 - x1, y2 - y1]) * padding
if dim == 1:
center = center[0]
scale = scale[0]
return center, scale
def _fix_aspect_ratio(bbox_scale: np.ndarray, aspect_ratio: float) -> np.ndarray:
"""Extend the scale to match the given aspect ratio.
Args:
scale (np.ndarray): The image scale (w, h) in shape (2, )
aspect_ratio (float): The ratio of ``w/h``
Returns:
np.ndarray: The reshaped image scale in (2, )
"""
w, h = np.hsplit(bbox_scale, [1])
bbox_scale = np.where(w > h * aspect_ratio, np.hstack([w, w / aspect_ratio]), np.hstack([h * aspect_ratio, h]))
return bbox_scale
def _rotate_point(pt: np.ndarray, angle_rad: float) -> np.ndarray:
"""Rotate a point by an angle.
Args:
pt (np.ndarray): 2D point coordinates (x, y) in shape (2, )
angle_rad (float): rotation angle in radian
Returns:
np.ndarray: Rotated point in shape (2, )
"""
sn, cs = np.sin(angle_rad), np.cos(angle_rad)
rot_mat = np.array([[cs, -sn], [sn, cs]])
return rot_mat @ pt
def _get_3rd_point(a: np.ndarray, b: np.ndarray) -> np.ndarray:
"""To calculate the affine matrix, three pairs of points are required. This
function is used to get the 3rd point, given 2D points a & b.
The 3rd point is defined by rotating vector `a - b` by 90 degrees
anticlockwise, using b as the rotation center.
Args:
a (np.ndarray): The 1st point (x,y) in shape (2, )
b (np.ndarray): The 2nd point (x,y) in shape (2, )
Returns:
np.ndarray: The 3rd point.
"""
direction = a - b
c = b + np.r_[-direction[1], direction[0]]
return c
def get_warp_matrix(
center: np.ndarray,
scale: np.ndarray,
rot: float,
output_size: Tuple[int, int],
shift: Tuple[float, float] = (0.0, 0.0),
inv: bool = False,
) -> np.ndarray:
"""Calculate the affine transformation matrix that can warp the bbox area
in the input image to the output size.
Args:
center (np.ndarray[2, ]): Center of the bounding box (x, y).
scale (np.ndarray[2, ]): Scale of the bounding box
wrt [width, height].
rot (float): Rotation angle (degree).
output_size (np.ndarray[2, ] | list(2,)): Size of the
destination heatmaps.
shift (0-100%): Shift translation ratio wrt the width/height.
Default (0., 0.).
inv (bool): Option to inverse the affine transform direction.
(inv=False: src->dst or inv=True: dst->src)
Returns:
np.ndarray: A 2x3 transformation matrix
"""
shift = np.array(shift)
src_w = scale[0]
dst_w = output_size[0]
dst_h = output_size[1]
# compute transformation matrix
rot_rad = np.deg2rad(rot)
src_dir = _rotate_point(np.array([0.0, src_w * -0.5]), rot_rad)
dst_dir = np.array([0.0, dst_w * -0.5])
# get four corners of the src rectangle in the original image
src = np.zeros((3, 2), dtype=np.float32)
src[0, :] = center + scale * shift
src[1, :] = center + src_dir + scale * shift
src[2, :] = _get_3rd_point(src[0, :], src[1, :])
# get four corners of the dst rectangle in the input image
dst = np.zeros((3, 2), dtype=np.float32)
dst[0, :] = [dst_w * 0.5, dst_h * 0.5]
dst[1, :] = np.array([dst_w * 0.5, dst_h * 0.5]) + dst_dir
dst[2, :] = _get_3rd_point(dst[0, :], dst[1, :])
if inv:
warp_mat = cv2.getAffineTransform(np.float32(dst), np.float32(src))
else:
warp_mat = cv2.getAffineTransform(np.float32(src), np.float32(dst))
return warp_mat
def top_down_affine(
input_size: dict, bbox_scale: dict, bbox_center: dict, img: np.ndarray
) -> Tuple[np.ndarray, np.ndarray]:
"""Get the bbox image as the model input by affine transform.
Args:
input_size (dict): The input size of the model.
bbox_scale (dict): The bbox scale of the img.
bbox_center (dict): The bbox center of the img.
img (np.ndarray): The original image.
Returns:
tuple: A tuple containing center and scale.
- np.ndarray[float32]: img after affine transform.
- np.ndarray[float32]: bbox scale after affine transform.
"""
w, h = input_size
warp_size = (int(w), int(h))
# reshape bbox to fixed aspect ratio
bbox_scale = _fix_aspect_ratio(bbox_scale, aspect_ratio=w / h)
# get the affine matrix
center = bbox_center
scale = bbox_scale
rot = 0
warp_mat = get_warp_matrix(center, scale, rot, output_size=(w, h))
# do affine transform
img = cv2.warpAffine(img, warp_mat, warp_size, flags=cv2.INTER_LINEAR)
return img, bbox_scale
def get_simcc_maximum(simcc_x: np.ndarray, simcc_y: np.ndarray) -> Tuple[np.ndarray, np.ndarray]:
"""Get maximum response location and value from simcc representations.
Note:
instance number: N
num_keypoints: K
heatmap height: H
heatmap width: W
Args:
simcc_x (np.ndarray): x-axis SimCC in shape (K, Wx) or (N, K, Wx)
simcc_y (np.ndarray): y-axis SimCC in shape (K, Wy) or (N, K, Wy)
Returns:
tuple:
- locs (np.ndarray): locations of maximum heatmap responses in shape
(K, 2) or (N, K, 2)
- vals (np.ndarray): values of maximum heatmap responses in shape
(K,) or (N, K)
"""
N, K, Wx = simcc_x.shape
simcc_x = simcc_x.reshape(N * K, -1)
simcc_y = simcc_y.reshape(N * K, -1)
# get maximum value locations
x_locs = np.argmax(simcc_x, axis=1)
y_locs = np.argmax(simcc_y, axis=1)
locs = np.stack((x_locs, y_locs), axis=-1).astype(np.float32)
max_val_x = np.amax(simcc_x, axis=1)
max_val_y = np.amax(simcc_y, axis=1)
# get maximum value across x and y axis
mask = max_val_x > max_val_y
max_val_x[mask] = max_val_y[mask]
vals = max_val_x
locs[vals <= 0.0] = -1
# reshape
locs = locs.reshape(N, K, 2)
vals = vals.reshape(N, K)
return locs, vals
def decode(simcc_x: np.ndarray, simcc_y: np.ndarray, simcc_split_ratio) -> Tuple[np.ndarray, np.ndarray]:
"""Modulate simcc distribution with Gaussian.
Args:
simcc_x (np.ndarray[K, Wx]): model predicted simcc in x.
simcc_y (np.ndarray[K, Wy]): model predicted simcc in y.
simcc_split_ratio (int): The split ratio of simcc.
Returns:
tuple: A tuple containing center and scale.
- np.ndarray[float32]: keypoints in shape (K, 2) or (n, K, 2)
- np.ndarray[float32]: scores in shape (K,) or (n, K)
"""
keypoints, scores = get_simcc_maximum(simcc_x, simcc_y)
keypoints /= simcc_split_ratio
return keypoints, scores
def inference_pose(session, out_bbox, oriImg):
h, w = session.get_inputs()[0].shape[2:]
model_input_size = (w, h)
resized_img, center, scale = preprocess(oriImg, out_bbox, model_input_size)
outputs = inference(session, resized_img)
keypoints, scores = postprocess(outputs, model_input_size, center, scale)
return keypoints, scores

View File

@@ -0,0 +1,155 @@
# Code from the original DWPose Implementation: https://github.com/IDEA-Research/DWPose
import math
import cv2
import matplotlib
import numpy as np
eps = 0.01
def draw_bodypose(canvas, candidate, subset):
H, W, C = canvas.shape
candidate = np.array(candidate)
subset = np.array(subset)
stickwidth = 4
limbSeq = [
[2, 3],
[2, 6],
[3, 4],
[4, 5],
[6, 7],
[7, 8],
[2, 9],
[9, 10],
[10, 11],
[2, 12],
[12, 13],
[13, 14],
[2, 1],
[1, 15],
[15, 17],
[1, 16],
[16, 18],
[3, 17],
[6, 18],
]
colors = [
[255, 0, 0],
[255, 85, 0],
[255, 170, 0],
[255, 255, 0],
[170, 255, 0],
[85, 255, 0],
[0, 255, 0],
[0, 255, 85],
[0, 255, 170],
[0, 255, 255],
[0, 170, 255],
[0, 85, 255],
[0, 0, 255],
[85, 0, 255],
[170, 0, 255],
[255, 0, 255],
[255, 0, 170],
[255, 0, 85],
]
for i in range(17):
for n in range(len(subset)):
index = subset[n][np.array(limbSeq[i]) - 1]
if -1 in index:
continue
Y = candidate[index.astype(int), 0] * float(W)
X = candidate[index.astype(int), 1] * float(H)
mX = np.mean(X)
mY = np.mean(Y)
length = ((X[0] - X[1]) ** 2 + (Y[0] - Y[1]) ** 2) ** 0.5
angle = math.degrees(math.atan2(X[0] - X[1], Y[0] - Y[1]))
polygon = cv2.ellipse2Poly((int(mY), int(mX)), (int(length / 2), stickwidth), int(angle), 0, 360, 1)
cv2.fillConvexPoly(canvas, polygon, colors[i])
canvas = (canvas * 0.6).astype(np.uint8)
for i in range(18):
for n in range(len(subset)):
index = int(subset[n][i])
if index == -1:
continue
x, y = candidate[index][0:2]
x = int(x * W)
y = int(y * H)
cv2.circle(canvas, (int(x), int(y)), 4, colors[i], thickness=-1)
return canvas
def draw_handpose(canvas, all_hand_peaks):
H, W, C = canvas.shape
edges = [
[0, 1],
[1, 2],
[2, 3],
[3, 4],
[0, 5],
[5, 6],
[6, 7],
[7, 8],
[0, 9],
[9, 10],
[10, 11],
[11, 12],
[0, 13],
[13, 14],
[14, 15],
[15, 16],
[0, 17],
[17, 18],
[18, 19],
[19, 20],
]
for peaks in all_hand_peaks:
peaks = np.array(peaks)
for ie, e in enumerate(edges):
x1, y1 = peaks[e[0]]
x2, y2 = peaks[e[1]]
x1 = int(x1 * W)
y1 = int(y1 * H)
x2 = int(x2 * W)
y2 = int(y2 * H)
if x1 > eps and y1 > eps and x2 > eps and y2 > eps:
cv2.line(
canvas,
(x1, y1),
(x2, y2),
matplotlib.colors.hsv_to_rgb([ie / float(len(edges)), 1.0, 1.0]) * 255,
thickness=2,
)
        for _, keypoint in enumerate(peaks):
            x, y = keypoint
x = int(x * W)
y = int(y * H)
if x > eps and y > eps:
cv2.circle(canvas, (x, y), 4, (0, 0, 255), thickness=-1)
return canvas
def draw_facepose(canvas, all_lmks):
H, W, C = canvas.shape
for lmks in all_lmks:
lmks = np.array(lmks)
for lmk in lmks:
x, y = lmk
x = int(x * W)
y = int(y * H)
if x > eps and y > eps:
cv2.circle(canvas, (x, y), 3, (255, 255, 255), thickness=-1)
return canvas

View File

@@ -0,0 +1,67 @@
# Code from the original DWPose Implementation: https://github.com/IDEA-Research/DWPose
# Modified pathing to suit Invoke
import pathlib
import numpy as np
import onnxruntime as ort
from invokeai.app.services.config.config_default import InvokeAIAppConfig
from invokeai.backend.util.devices import choose_torch_device
from invokeai.backend.util.util import download_with_progress_bar
from .onnxdet import inference_detector
from .onnxpose import inference_pose
DWPOSE_MODELS = {
"yolox_l.onnx": {
"local": "any/annotators/dwpose/yolox_l.onnx",
"url": "https://huggingface.co/yzd-v/DWPose/resolve/main/yolox_l.onnx?download=true",
},
"dw-ll_ucoco_384.onnx": {
"local": "any/annotators/dwpose/dw-ll_ucoco_384.onnx",
"url": "https://huggingface.co/yzd-v/DWPose/resolve/main/dw-ll_ucoco_384.onnx?download=true",
},
}
config = InvokeAIAppConfig.get_config()
class Wholebody:
def __init__(self):
device = choose_torch_device()
providers = ["CUDAExecutionProvider"] if device == "cuda" else ["CPUExecutionProvider"]
DET_MODEL_PATH = pathlib.Path(config.models_path / DWPOSE_MODELS["yolox_l.onnx"]["local"])
if not DET_MODEL_PATH.exists():
download_with_progress_bar(DWPOSE_MODELS["yolox_l.onnx"]["url"], DET_MODEL_PATH)
POSE_MODEL_PATH = pathlib.Path(config.models_path / DWPOSE_MODELS["dw-ll_ucoco_384.onnx"]["local"])
if not POSE_MODEL_PATH.exists():
download_with_progress_bar(DWPOSE_MODELS["dw-ll_ucoco_384.onnx"]["url"], POSE_MODEL_PATH)
onnx_det = DET_MODEL_PATH
onnx_pose = POSE_MODEL_PATH
self.session_det = ort.InferenceSession(path_or_bytes=onnx_det, providers=providers)
self.session_pose = ort.InferenceSession(path_or_bytes=onnx_pose, providers=providers)
def __call__(self, oriImg):
det_result = inference_detector(self.session_det, oriImg)
keypoints, scores = inference_pose(self.session_pose, det_result, oriImg)
keypoints_info = np.concatenate((keypoints, scores[..., None]), axis=-1)
# compute neck joint
neck = np.mean(keypoints_info[:, [5, 6]], axis=1)
# neck score when visualizing pred
neck[:, 2:4] = np.logical_and(keypoints_info[:, 5, 2:4] > 0.3, keypoints_info[:, 6, 2:4] > 0.3).astype(int)
new_keypoints_info = np.insert(keypoints_info, 17, neck, axis=1)
mmpose_idx = [17, 6, 8, 10, 7, 9, 12, 14, 16, 13, 15, 2, 1, 4, 3]
openpose_idx = [1, 2, 3, 4, 6, 7, 8, 9, 10, 12, 13, 14, 15, 16, 17]
new_keypoints_info[:, openpose_idx] = new_keypoints_info[:, mmpose_idx]
keypoints_info = new_keypoints_info
keypoints, scores = keypoints_info[..., :2], keypoints_info[..., 2]
return keypoints, scores

View File

@@ -1,10 +1,11 @@
from __future__ import annotations
from contextlib import contextmanager
-from typing import List, Union
+from typing import Callable, List, Union
import torch.nn as nn
-from diffusers.models import AutoencoderKL, UNet2DConditionModel
+from diffusers.models.autoencoders.autoencoder_kl import AutoencoderKL
+from diffusers.models.unets.unet_2d_condition import UNet2DConditionModel
def _conv_forward_asymmetric(self, input, weight, bias):
@@ -26,70 +27,51 @@ def _conv_forward_asymmetric(self, input, weight, bias):
@contextmanager
def set_seamless(model: Union[UNet2DConditionModel, AutoencoderKL], seamless_axes: List[str]):
# Callable: (input: Tensor, weight: Tensor, bias: Optional[Tensor]) -> Tensor
to_restore: list[tuple[nn.Conv2d | nn.ConvTranspose2d, Callable]] = []
try:
to_restore = []
# Hard coded to skip down block layers, allowing for seamless tiling at the expense of prompt adherence
skipped_layers = 1
for m_name, m in model.named_modules():
if isinstance(model, UNet2DConditionModel):
if ".attentions." in m_name:
if not isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)):
continue
if isinstance(model, UNet2DConditionModel) and m_name.startswith("down_blocks.") and ".resnets." in m_name:
# down_blocks.1.resnets.1.conv1
_, block_num, _, resnet_num, submodule_name = m_name.split(".")
block_num = int(block_num)
resnet_num = int(resnet_num)
if block_num >= len(model.down_blocks) - skipped_layers:
continue
if ".resnets." in m_name:
if ".conv2" in m_name:
continue
if ".conv_shortcut" in m_name:
continue
"""
if isinstance(model, UNet2DConditionModel):
if False and ".upsamplers." in m_name:
# Skip the second resnet (could be configurable)
if resnet_num > 0:
continue
if False and ".downsamplers." in m_name:
# Skip Conv2d layers (could be configurable)
if submodule_name == "conv2":
continue
if True and ".resnets." in m_name:
if True and ".conv1" in m_name:
if False and "down_blocks" in m_name:
continue
if False and "mid_block" in m_name:
continue
if False and "up_blocks" in m_name:
continue
m.asymmetric_padding_mode = {}
m.asymmetric_padding = {}
m.asymmetric_padding_mode["x"] = "circular" if ("x" in seamless_axes) else "constant"
m.asymmetric_padding["x"] = (
m._reversed_padding_repeated_twice[0],
m._reversed_padding_repeated_twice[1],
0,
0,
)
m.asymmetric_padding_mode["y"] = "circular" if ("y" in seamless_axes) else "constant"
m.asymmetric_padding["y"] = (
0,
0,
m._reversed_padding_repeated_twice[2],
m._reversed_padding_repeated_twice[3],
)
if True and ".conv2" in m_name:
continue
if True and ".conv_shortcut" in m_name:
continue
if True and ".attentions." in m_name:
continue
if False and m_name in ["conv_in", "conv_out"]:
continue
"""
if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)):
m.asymmetric_padding_mode = {}
m.asymmetric_padding = {}
m.asymmetric_padding_mode["x"] = "circular" if ("x" in seamless_axes) else "constant"
m.asymmetric_padding["x"] = (
m._reversed_padding_repeated_twice[0],
m._reversed_padding_repeated_twice[1],
0,
0,
)
m.asymmetric_padding_mode["y"] = "circular" if ("y" in seamless_axes) else "constant"
m.asymmetric_padding["y"] = (
0,
0,
m._reversed_padding_repeated_twice[2],
m._reversed_padding_repeated_twice[3],
)
to_restore.append((m, m._conv_forward))
m._conv_forward = _conv_forward_asymmetric.__get__(m, nn.Conv2d)
to_restore.append((m, m._conv_forward))
m._conv_forward = _conv_forward_asymmetric.__get__(m, nn.Conv2d)
yield
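
The hunk above replaces each conv layer's `_conv_forward` and remembers the original in `to_restore`. A standalone sketch of that instance-level method binding (the patched body here is a trivial pass-through rather than the asymmetric-padding version):

```python
import torch.nn as nn

def patched_conv_forward(self, input, weight, bias):
    # Trivial stand-in; set_seamless installs an asymmetric-padding
    # forward here instead.
    return nn.Conv2d._conv_forward(self, input, weight, bias)

m = nn.Conv2d(3, 3, kernel_size=3, padding=1)
original = m._conv_forward
# __get__ binds the plain function to this instance, shadowing the
# class-level method for m only.
m._conv_forward = patched_conv_forward.__get__(m, nn.Conv2d)
# ... run the model while patched ...
m._conv_forward = original  # restore on exit, as the contextmanager does
```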

View File

@@ -52,6 +52,7 @@
"@chakra-ui/react-use-size": "^2.1.0",
"@dagrejs/graphlib": "^2.1.13",
"@dnd-kit/core": "^6.1.0",
"@dnd-kit/sortable": "^8.0.0",
"@dnd-kit/utilities": "^3.2.2",
"@fontsource-variable/inter": "^5.0.16",
"@invoke-ai/ui-library": "^0.0.18",

View File

@@ -22,6 +22,9 @@ dependencies:
'@dnd-kit/core':
specifier: ^6.1.0
version: 6.1.0(react-dom@18.2.0)(react@18.2.0)
+'@dnd-kit/sortable':
+specifier: ^8.0.0
+version: 8.0.0(@dnd-kit/core@6.1.0)(react@18.2.0)
'@dnd-kit/utilities':
specifier: ^3.2.2
version: 3.2.2(react@18.2.0)
@@ -2884,6 +2887,18 @@ packages:
tslib: 2.6.2
dev: false
+/@dnd-kit/sortable@8.0.0(@dnd-kit/core@6.1.0)(react@18.2.0):
+resolution: {integrity: sha512-U3jk5ebVXe1Lr7c2wU7SBZjcWdQP+j7peHJfCspnA81enlu88Mgd7CC8Q+pub9ubP7eKVETzJW+IBAhsqbSu/g==}
+peerDependencies:
+'@dnd-kit/core': ^6.1.0
+react: '>=16.8.0'
+dependencies:
+'@dnd-kit/core': 6.1.0(react-dom@18.2.0)(react@18.2.0)
+'@dnd-kit/utilities': 3.2.2(react@18.2.0)
+react: 18.2.0
+tslib: 2.6.2
+dev: false
/@dnd-kit/utilities@3.2.2(react@18.2.0):
resolution: {integrity: sha512-+MKAJEOfaBe5SmV6t34p80MMKhjvUz0vRrvVJbPT0WElzaOJ/1xs+D+KDv+tD/NE5ujfrChEcshd4fLn0wpiqg==}
peerDependencies:

View File

@@ -69,7 +69,7 @@
"random": "Zufall",
"batch": "Stapel-Manager",
"advanced": "Erweitert",
"unifiedCanvas": "Einheitliche Leinwand",
"unifiedCanvas": "Leinwand",
"openInNewTab": "In einem neuem Tab öffnen",
"statusProcessing": "wird bearbeitet",
"linear": "Linear",
@@ -127,7 +127,7 @@
"galleryImageResetSize": "Größe zurücksetzen",
"gallerySettings": "Galerie-Einstellungen",
"maintainAspectRatio": "Seitenverhältnis beibehalten",
"autoSwitchNewImages": "Automatisch zu neuen Bildern wechseln",
"autoSwitchNewImages": "Auto-Wechsel zu neuen Bildern",
"singleColumnLayout": "Einspaltiges Layout",
"allImagesLoaded": "Alle Bilder geladen",
"loadMore": "Mehr laden",
@@ -226,7 +226,7 @@
},
"sendToImageToImage": {
"title": "An Bild zu Bild senden",
"desc": "Aktuelles Bild an Bild zu Bild senden"
"desc": "Aktuelles Bild an Bild-zu-Bild senden"
},
"deleteImage": {
"title": "Bild löschen",
@@ -258,7 +258,7 @@
},
"selectEraser": {
"title": "Radiergummi auswählen",
"desc": "Wählt den Radiergummi für die Leinwand aus"
"desc": "Wählt den Radiergummi aus"
},
"decreaseBrushSize": {
"title": "Pinselgröße verkleinern",
@@ -330,7 +330,7 @@
},
"downloadImage": {
"title": "Bild herunterladen",
"desc": "Aktuelle Leinwand herunterladen"
"desc": "Aktuelles Bild herunterladen"
},
"undoStroke": {
"title": "Pinselstrich rückgängig machen",
@@ -564,8 +564,8 @@
"img2imgStrength": "Bild-zu-Bild-Stärke",
"toggleLoopback": "Loopback umschalten",
"sendTo": "Senden an",
"sendToImg2Img": "Senden an Bild zu Bild",
"sendToUnifiedCanvas": "Senden an Unified Canvas",
"sendToImg2Img": "Senden an Bild-zu-Bild",
"sendToUnifiedCanvas": "Senden an Leinwand",
"copyImageToLink": "Bild-Link kopieren",
"downloadImage": "Bild herunterladen",
"openInViewer": "Im Viewer öffnen",
@@ -604,7 +604,9 @@
"resetComplete": "Die Web-Oberfläche wurde zurückgesetzt.",
"models": "Modelle",
"useSlidersForAll": "Schieberegler für alle Optionen verwenden",
"showAdvancedOptions": "Erweiterte Optionen anzeigen"
"showAdvancedOptions": "Erweiterte Optionen anzeigen",
"alternateCanvasLayout": "Alternatives Leinwand-Layout",
"clearIntermediatesDesc1": "Das Löschen der Zwischenprodukte setzt Leinwand und ControlNet zurück."
},
"toast": {
"tempFoldersEmptied": "Temp-Ordner geleert",
@@ -618,7 +620,7 @@
"imageSavedToGallery": "Bild in die Galerie gespeichert",
"canvasMerged": "Leinwand zusammengeführt",
"sentToImageToImage": "Gesendet an Bild zu Bild",
"sentToUnifiedCanvas": "Gesendet an Unified Canvas",
"sentToUnifiedCanvas": "Gesendet an Leinwand",
"parametersSet": "Parameter festlegen",
"parametersNotSet": "Parameter nicht festgelegt",
"parametersNotSetDesc": "Keine Metadaten für dieses Bild gefunden.",
@@ -635,7 +637,21 @@
"metadataLoadFailed": "Metadaten konnten nicht geladen werden",
"initialImageSet": "Ausgangsbild festgelegt",
"initialImageNotSet": "Ausgangsbild nicht festgelegt",
"initialImageNotSetDesc": "Ausgangsbild konnte nicht geladen werden"
"initialImageNotSetDesc": "Ausgangsbild konnte nicht geladen werden",
"setCanvasInitialImage": "Ausgangsbild setzen",
"problemMergingCanvas": "Problem bei Verschmelzung der Leinwand",
"canvasCopiedClipboard": "Leinwand in Zwischenablage kopiert",
"canvasSentControlnetAssets": "Leinwand an ControlNet & Sammlung geschickt",
"problemDownloadingCanvasDesc": "Kann Basis-Layer nicht exportieren",
"canvasDownloaded": "Leinwand heruntergeladen",
"problemSavingCanvasDesc": "Kann Basis-Layer nicht exportieren",
"canvasSavedGallery": "Leinwand in Galerie gespeichert",
"problemMergingCanvasDesc": "Kann Basis-Layer nicht exportieren",
"problemSavingCanvas": "Problem beim Speichern der Leinwand",
"problemCopyingCanvas": "Problem beim Kopieren der Leinwand",
"problemCopyingCanvasDesc": "Kann Basis-Layer nicht exportieren",
"problemDownloadingCanvas": "Problem beim Herunterladen der Leinwand",
"setAsCanvasInitialImage": "Als Ausgangsbild gesetzt"
},
"tooltip": {
"feature": {
@@ -648,7 +664,7 @@
"faceCorrection": "Gesichtskorrektur mit GFPGAN oder Codeformer: Der Algorithmus erkennt Gesichter im Bild und korrigiert alle Fehler. Ein hoher Wert verändert das Bild stärker, was zu attraktiveren Gesichtern führt. Codeformer mit einer höheren Genauigkeit bewahrt das Originalbild auf Kosten einer stärkeren Gesichtskorrektur.",
"imageToImage": "Bild zu Bild lädt ein beliebiges Bild als Ausgangsbild, aus dem dann zusammen mit dem Prompt ein neues Bild erzeugt wird. Je höher der Wert ist, desto stärker wird das Ergebnisbild verändert. Werte von 0,0 bis 1,0 sind möglich, der empfohlene Bereich ist .25-.75",
"boundingBox": "Der Begrenzungsrahmen ist derselbe wie die Einstellungen für Breite und Höhe bei Text-zu-Bild oder Bild-zu-Bild. Es wird nur der Bereich innerhalb des Rahmens verarbeitet.",
"seamCorrection": "Steuert die Behandlung von sichtbaren Übergängen, die zwischen den erzeugten Bildern auf der Leinwand auftreten.",
"seamCorrection": "Behandlung von sichtbaren Übergängen, die zwischen den erzeugten Bildern auftreten.",
"infillAndScaling": "Verwalten Sie Infill-Methoden (für maskierte oder gelöschte Bereiche der Leinwand) und Skalierung (nützlich für kleine Begrenzungsrahmengrößen)."
}
},
@@ -659,17 +675,17 @@
"maskingOptions": "Maskierungsoptionen",
"enableMask": "Maske aktivieren",
"preserveMaskedArea": "Maskierten Bereich bewahren",
"clearMask": "Maske löschen",
"clearMask": "Maske löschen (Shift+C)",
"brush": "Pinsel",
"eraser": "Radierer",
"fillBoundingBox": "Begrenzungsrahmen füllen",
"eraseBoundingBox": "Begrenzungsrahmen löschen",
"colorPicker": "Farbpipette",
"colorPicker": "Pipette",
"brushOptions": "Pinseloptionen",
"brushSize": "Größe",
"move": "Bewegen",
"resetView": "Ansicht zurücksetzen",
"mergeVisible": "Sichtbare Zusammenführen",
"mergeVisible": "Sichtbare zusammenführen",
"saveToGallery": "In Galerie speichern",
"copyToClipboard": "In Zwischenablage kopieren",
"downloadAsImage": "Als Bild herunterladen",
@@ -683,15 +699,15 @@
"darkenOutsideSelection": "Außerhalb der Auswahl verdunkeln",
"autoSaveToGallery": "Automatisch in Galerie speichern",
"saveBoxRegionOnly": "Nur Auswahlbox speichern",
"limitStrokesToBox": "Striche auf Box beschränken",
"showCanvasDebugInfo": "Zusätzliche Informationen zur Leinwand anzeigen",
"limitStrokesToBox": "Striche auf Auswahl beschränken",
"showCanvasDebugInfo": "Zusätzliche Informationen anzeigen",
"clearCanvasHistory": "Leinwand-Verlauf löschen",
"clearHistory": "Verlauf löschen",
"clearCanvasHistoryMessage": "Wenn Sie den Verlauf der Leinwand löschen, bleibt die aktuelle Leinwand intakt, aber der Verlauf der Rückgängig- und Wiederherstellung wird unwiderruflich gelöscht.",
"clearCanvasHistoryConfirm": "Sind Sie sicher, dass Sie den Verlauf der Leinwand löschen möchten?",
"clearCanvasHistoryMessage": "Wenn Sie den Verlauf löschen, bleibt die aktuelle Leinwand intakt, aber der Verlauf der Rückgängig- und Wiederherstellung wird unwiderruflich gelöscht.",
"clearCanvasHistoryConfirm": "Sind Sie sicher, dass Sie den Verlauf löschen möchten?",
"emptyTempImageFolder": "Temp-Image Ordner leeren",
"emptyFolder": "Leerer Ordner",
"emptyTempImagesFolderMessage": "Wenn Sie den Ordner für temporäre Bilder leeren, wird auch der Unified Canvas vollständig zurückgesetzt. Dies umfasst den gesamten Verlauf der Rückgängig-/Wiederherstellungsvorgänge, die Bilder im Bereitstellungsbereich und die Leinwand-Basisebene.",
"emptyTempImagesFolderMessage": "Wenn Sie den Ordner für temporäre Bilder leeren, wird die Leinwand zurückgesetzt. Dies umfasst den gesamten Verlauf der Rückgängig-/Wiederherstellungsvorgänge, die Bilder im Bereitstellungsbereich und die Leinwand-Basisebene.",
"emptyTempImagesFolderConfirm": "Sind Sie sicher, dass Sie den temporären Ordner leeren wollen?",
"activeLayer": "Aktive Ebene",
"canvasScale": "Leinwand Maßstab",
@@ -708,7 +724,7 @@
"discardAll": "Alles verwerfen",
"betaClear": "Löschen",
"betaDarkenOutside": "Außen abdunkeln",
"betaLimitToBox": "Begrenzung auf das Feld",
"betaLimitToBox": "Auf Auswahl begrenzen",
"betaPreserveMasked": "Maskiertes bewahren",
"antialiasing": "Kantenglättung",
"showResultsOn": "Zeige Ergebnisse (An)",
@@ -746,7 +762,7 @@
"autoAddBoard": "Automatisches Hinzufügen zum Ordner",
"topMessage": "Dieser Ordner enthält Bilder die in den folgenden Funktionen verwendet werden:",
"move": "Bewegen",
"menuItemAutoAdd": "Automatisches Hinzufügen zu diesem Ordner",
"menuItemAutoAdd": "Auto-Hinzufügen zu diesem Ordner",
"myBoard": "Meine Ordner",
"searchBoard": "Ordner durchsuchen...",
"noMatching": "Keine passenden Ordner",
@@ -826,7 +842,6 @@
"pidi": "PIDI",
"normalBae": "Normales BAE",
"mlsdDescription": "Minimalistischer Liniensegmentdetektor",
"openPoseDescription": "Schätzung der menschlichen Pose mit Openpose",
"control": "Kontrolle",
"coarse": "Grob",
"crop": "Zuschneiden",
@@ -839,10 +854,9 @@
"lineartAnimeDescription": "Lineart-Verarbeitung im Anime-Stil",
"minConfidence": "Minimales Vertrauen",
"megaControl": "Mega-Kontrolle",
"autoConfigure": "Prozessor automatisch konfigurieren",
"autoConfigure": "Prozessor Auto-konfig",
"normalBaeDescription": "Normale BAE-Verarbeitung",
"noneDescription": "Es wurde keine Verarbeitung angewendet",
"openPose": "Openpose / \"Pose nutzen\"",
"lineartAnime": "Lineart Anime / \"Strichzeichnung Anime\"",
"mediapipeFaceDescription": "Gesichtserkennung mit Mediapipe",
"canny": "\"Canny\"",
@@ -944,7 +958,7 @@
"initImage": "Erstes Bild",
"variations": "Seed-Gewichtungs-Paare",
"vae": "VAE",
"workflow": "Arbeitsablauf",
"workflow": "Workflow",
"scheduler": "Planer",
"noRecallParameters": "Es wurden keine Parameter zum Abrufen gefunden",
"recallParameters": "Parameter wiederherstellen"
@@ -1056,6 +1070,20 @@
"\"Per Bild\" wird einen einzigartigen Seed-Wert für jedes Bild verwenden. Dies bietet mehr Variationen."
],
"heading": "Seed-Verhalten"
},
"dynamicPrompts": {
"paragraphs": [
"\"Dynamische Prompts\" übersetzt einen Prompt in mehrere.",
"Die Ausgangs-Syntax ist \"ein {roter|grüner|blauer} ball\". Das generiert 3 Prompts: \"ein roter ball\", \"ein grüner ball\" und \"ein blauer ball\".",
"Sie können die Syntax so oft verwenden, wie Sie in einem einzigen Prompt möchten, aber stellen Sie sicher, dass die Anzahl der Prompts zur Einstellung von \"Max Prompts\" passt."
],
"heading": "Dynamische Prompts"
},
"controlNetWeight": {
"paragraphs": [
"Wie stark wird das ControlNet das generierte Bild beeinflussen wird."
],
"heading": "Einfluss"
}
},
"ui": {
@@ -1160,10 +1188,10 @@
"outputFieldInInput": "Ausgabefeld im Eingang",
"problemReadingWorkflow": "Problem beim Lesen des Arbeitsablaufs vom Bild",
"reloadNodeTemplates": "Knoten-Vorlagen neu laden",
"newWorkflow": "Neuer Arbeitsablauf",
"newWorkflow": "Neuer Arbeitsablauf / Workflow",
"newWorkflowDesc": "Einen neuen Arbeitsablauf erstellen?",
"noFieldsLinearview": "Keine Felder zur linearen Ansicht hinzugefügt",
"clearWorkflow": "Arbeitsablauf löschen",
"clearWorkflow": "Workflow löschen",
"clearWorkflowDesc": "Diesen Arbeitsablauf löschen und neu starten?",
"noConnectionInProgress": "Es besteht keine Verbindung",
"notes": "Anmerkungen",
@@ -1220,8 +1248,8 @@
"stringDescription": "Zeichenfolgen (Strings) sind Text.",
"fieldTypesMustMatch": "Feldtypen müssen übereinstimmen",
"fitViewportNodes": "An Ansichtsgröße anpassen",
"missingCanvaInitMaskImages": "Fehlende Startbilder und Masken auf der Arbeitsfläche",
"missingCanvaInitImage": "Fehlendes Startbild auf der Arbeitsfläche",
"missingCanvaInitMaskImages": "Fehlende Startbilder und Masken auf der Leinwand",
"missingCanvaInitImage": "Fehlendes Startbild auf der Leinwand",
"ipAdapterModelDescription": "IP-Adapter-Modellfeld",
"latentsPolymorphicDescription": "Zwischen Nodes können Latents weitergegeben werden.",
"loadingNodes": "Lade Nodes...",
@@ -1321,7 +1349,7 @@
"workflows": "Arbeitsabläufe",
"noSystemWorkflows": "Keine System-Arbeitsabläufe",
"workflowName": "Arbeitsablauf-Name",
"workflowIsOpen": "Arbeitsablauf ist offen",
"workflowIsOpen": "Arbeitsablauf ist geöffnet",
"saveWorkflowAs": "Arbeitsablauf speichern als",
"searchWorkflows": "Suche Arbeitsabläufe",
"newWorkflowCreated": "Neuer Arbeitsablauf erstellt",

View File

@@ -175,6 +175,7 @@
"statusUpscaling": "Upscaling",
"statusUpscalingESRGAN": "Upscaling (ESRGAN)",
"template": "Template",
"toResolve": "To resolve",
"training": "Training",
"trainingDesc1": "A dedicated workflow for training your own embeddings and checkpoints using Textual Inversion and Dreambooth from the web interface.",
"trainingDesc2": "InvokeAI already supports training custom embeddourings using Textual Inversion using the main script.",
@@ -235,6 +236,9 @@
"fill": "Fill",
"h": "H",
"handAndFace": "Hand and Face",
"face": "Face",
"body": "Body",
"hands": "Hands",
"hed": "HED",
"hedDescription": "Holistically-Nested Edge Detection",
"hideAdvanced": "Hide Advanced",
@@ -261,8 +265,8 @@
"noneDescription": "No processing applied",
"normalBae": "Normal BAE",
"normalBaeDescription": "Normal BAE processing",
"openPose": "Openpose",
"openPoseDescription": "Human pose estimation using Openpose",
"dwOpenpose": "DW Openpose",
"dwOpenposeDescription": "Human pose estimation using DW Openpose",
"pidi": "PIDI",
"pidiDescription": "PIDI image processing",
"processor": "Processor",
@@ -897,6 +901,7 @@
"doesNotExist": "does not exist",
"downloadWorkflow": "Download Workflow JSON",
"edge": "Edge",
"editMode": "Edit in Workflow Editor",
"enum": "Enum",
"enumDescription": "Enums are values that may be one of a number of options.",
"executionStateCompleted": "Completed",
@@ -992,8 +997,10 @@
"problemReadingMetadata": "Problem reading metadata from image",
"problemReadingWorkflow": "Problem reading workflow from image",
"problemSettingTitle": "Problem Setting Title",
"resetToDefaultValue": "Reset to default value",
"reloadNodeTemplates": "Reload Node Templates",
"removeLinearView": "Remove from Linear View",
"reorderLinearView": "Reorder Linear View",
"newWorkflow": "New Workflow",
"newWorkflowDesc": "Create a new workflow?",
"newWorkflowDesc2": "Your current workflow has unsaved changes.",
@@ -1064,6 +1071,7 @@
"vaeModelFieldDescription": "TODO",
"validateConnections": "Validate Connections and Graph",
"validateConnectionsHelp": "Prevent invalid connections from being made, and invalid graphs from being invoked",
"viewMode": "Use in Linear View",
"unableToGetWorkflowVersion": "Unable to get workflow schema version",
"unrecognizedWorkflowVersion": "Unrecognized workflow schema version {{version}}",
"version": "Version",

View File

@@ -795,7 +795,8 @@
"workflowDeleted": "Flusso di lavoro eliminato",
"problemRetrievingWorkflow": "Problema nel recupero del flusso di lavoro",
"resetInitialImage": "Reimposta l'immagine iniziale",
"uploadInitialImage": "Carica l'immagine iniziale"
"uploadInitialImage": "Carica l'immagine iniziale",
"problemDownloadingImage": "Impossibile scaricare l'immagine"
},
"tooltip": {
"feature": {
@@ -1134,7 +1135,10 @@
"newWorkflow": "Nuovo flusso di lavoro",
"newWorkflowDesc": "Creare un nuovo flusso di lavoro?",
"newWorkflowDesc2": "Il flusso di lavoro attuale presenta modifiche non salvate.",
"unsupportedAnyOfLength": "unione di troppi elementi ({{count}})"
"unsupportedAnyOfLength": "unione di troppi elementi ({{count}})",
"clearWorkflowDesc": "Cancellare questo flusso di lavoro e avviarne uno nuovo?",
"clearWorkflow": "Cancella il flusso di lavoro",
"clearWorkflowDesc2": "Il tuo flusso di lavoro attuale presenta modifiche non salvate."
},
"boards": {
"autoAddBoard": "Aggiungi automaticamente bacheca",
@@ -1191,7 +1195,6 @@
"f": "F",
"h": "A",
"prompt": "Prompt",
"openPoseDescription": "Stima della posa umana utilizzando Openpose",
"resizeMode": "Ridimensionamento",
"weight": "Peso",
"selectModel": "Seleziona un modello",
@@ -1672,7 +1675,9 @@
"downloadWorkflow": "Salva su file",
"uploadWorkflow": "Carica da file",
"projectWorkflows": "Flussi di lavoro del progetto",
"noWorkflows": "Nessun flusso di lavoro"
"noWorkflows": "Nessun flusso di lavoro",
"workflowCleared": "Flusso di lavoro cancellato",
"saveWorkflowToProject": "Salva flusso di lavoro nel progetto"
},
"app": {
"storeNotInitialized": "Il negozio non è inizializzato"

View File

@@ -555,7 +555,6 @@
"balanced": "バランス",
"prompt": "プロンプト",
"depthMidasDescription": "Midasを使用して深度マップを生成",
"openPoseDescription": "Openposeを使用してポーズを推定",
"control": "コントロール",
"resizeMode": "リサイズモード",
"weight": "重み",

View File

@@ -333,7 +333,6 @@
"h": "H",
"prompt": "프롬프트",
"depthMidasDescription": "Midas를 사용하여 Depth map 생성하기",
"openPoseDescription": "Openpose를 이용한 사람 포즈 추정",
"control": "Control",
"resizeMode": "크기 조정 모드",
"t2iEnabledControlNetDisabled": "$t(common.t2iAdapter) 사용 가능,$t(common.controlNet) 사용 불가능",
@@ -370,7 +369,6 @@
"normalBaeDescription": "Normal BAE 처리",
"noneDescription": "처리되지 않음",
"saveControlImage": "Control Image 저장",
"openPose": "Openpose",
"toggleControlNet": "해당 ControlNet으로 전환",
"delete": "삭제",
"controlAdapter_other": "Control Adapter(s)",

View File

@@ -1033,7 +1033,6 @@
"prompt": "Prompt",
"depthMidasDescription": "Genereer diepteblad via Midas",
"controlnet": "$t(controlnet.controlAdapter_one) #{{number}} ($t(common.controlNet))",
"openPoseDescription": "Menselijke pose-benadering via Openpose",
"control": "Controle",
"resizeMode": "Modus schaling",
"t2iEnabledControlNetDisabled": "$t(common.t2iAdapter) ingeschakeld, $t(common.controlNet)s uitgeschakeld",
@@ -1072,7 +1071,6 @@
"normalBaeDescription": "Normale BAE-verwerking",
"noneDescription": "Geen verwerking toegepast",
"saveControlImage": "Bewaar controle-afbeelding",
"openPose": "Openpose",
"toggleControlNet": "Zet deze ControlNet aan/uit",
"delete": "Verwijder",
"controlAdapter_one": "Control-adapter",

View File

@@ -1155,7 +1155,6 @@
"resetControlImage": "Сбросить контрольное изображение",
"prompt": "Запрос",
"controlnet": "$t(controlnet.controlAdapter_one) №{{number}} $t(common.controlNet)",
"openPoseDescription": "Оценка позы человека с помощью Openpose",
"resizeMode": "Режим изменения размера",
"t2iEnabledControlNetDisabled": "$t(common.t2iAdapter) включен, $t(common.controlNet)s отключен",
"weight": "Вес",

View File

@@ -259,7 +259,6 @@
"mediapipeFace": "Mediapipe Yüz",
"megaControl": "Aşırı Yönetim",
"mlsd": "M-LSD",
"openPoseDescription": "Openpose kullanarak poz belirleme",
"setControlImageDimensions": "Yönetim Görseli Boyutlarını En/Boydan Al",
"pidi": "PIDI",
"scribble": "çiziktirme",
@@ -273,7 +272,6 @@
"mlsdDescription": "Minimalist Line Segment Detector (Kolay Çizgi Parçası Algılama)",
"normalBae": "Normal BAE",
"normalBaeDescription": "Normal BAE işleme",
"openPose": "Openpose",
"resetControlImage": "Yönetim Görselini Kaldır",
"enableIPAdapter": "IP Aracını Etkinleştir",
"lineart": "Çizim",

View File

@@ -1143,7 +1143,6 @@
"balanced": "平衡",
"prompt": "Prompt (提示词控制)",
"depthMidasDescription": "使用 Midas 生成深度图",
"openPoseDescription": "使用 Openpose 进行人体姿态估计",
"resizeMode": "缩放模式",
"weight": "权重",
"selectModel": "选择一个模型",
@@ -1207,7 +1206,6 @@
"megaControl": "Mega Control (超级控制)",
"depthZoe": "Depth (Zoe)",
"colorMap": "Color",
"openPose": "Openpose",
"controlAdapter_other": "Control Adapters",
"lineartAnime": "Lineart Anime",
"canny": "Canny",

View File

@@ -2,7 +2,7 @@ import type { UnknownAction } from '@reduxjs/toolkit';
import { isAnyGraphBuilt } from 'features/nodes/store/actions';
import { nodeTemplatesBuilt } from 'features/nodes/store/nodeTemplatesSlice';
import { cloneDeep } from 'lodash-es';
import { receivedOpenAPISchema } from 'services/api/thunks/schema';
import { appInfoApi } from 'services/api/endpoints/appInfo';
import type { Graph } from 'services/api/types';
import { socketGeneratorProgress } from 'services/events/actions';
@@ -18,7 +18,7 @@ export const actionSanitizer = <A extends UnknownAction>(action: A): A => {
}
}
if (receivedOpenAPISchema.fulfilled.match(action)) {
if (appInfoApi.endpoints.getOpenAPISchema.matchFulfilled(action)) {
return {
...action,
payload: '<OpenAPI schema omitted>',

View File

@@ -23,6 +23,7 @@ import { addControlNetImageProcessedListener } from './listeners/controlNetImage
import { addEnqueueRequestedCanvasListener } from './listeners/enqueueRequestedCanvas';
import { addEnqueueRequestedLinear } from './listeners/enqueueRequestedLinear';
import { addEnqueueRequestedNodes } from './listeners/enqueueRequestedNodes';
import { addGetOpenAPISchemaListener } from './listeners/getOpenAPISchema';
import {
addImageAddedToBoardFulfilledListener,
addImageAddedToBoardRejectedListener,
@@ -47,7 +48,6 @@ import { addInitialImageSelectedListener } from './listeners/initialImageSelecte
import { addModelSelectedListener } from './listeners/modelSelected';
import { addModelsLoadedListener } from './listeners/modelsLoaded';
import { addDynamicPromptsListener } from './listeners/promptChanged';
import { addReceivedOpenAPISchemaListener } from './listeners/receivedOpenAPISchema';
import { addSocketConnectedEventListener as addSocketConnectedListener } from './listeners/socketio/socketConnected';
import { addSocketDisconnectedEventListener as addSocketDisconnectedListener } from './listeners/socketio/socketDisconnected';
import { addGeneratorProgressEventListener as addGeneratorProgressListener } from './listeners/socketio/socketGeneratorProgress';
@@ -150,7 +150,7 @@ addImageRemovedFromBoardRejectedListener();
addBoardIdSelectedListener();
// Node schemas
addReceivedOpenAPISchemaListener();
addGetOpenAPISchemaListener();
// Workflows
addWorkflowLoadRequestedListener();

View File

@@ -3,18 +3,18 @@ import { parseify } from 'common/util/serialize';
import { nodeTemplatesBuilt } from 'features/nodes/store/nodeTemplatesSlice';
import { parseSchema } from 'features/nodes/util/schema/parseSchema';
import { size } from 'lodash-es';
import { receivedOpenAPISchema } from 'services/api/thunks/schema';
import { appInfoApi } from 'services/api/endpoints/appInfo';
import { startAppListening } from '..';
export const addReceivedOpenAPISchemaListener = () => {
export const addGetOpenAPISchemaListener = () => {
startAppListening({
actionCreator: receivedOpenAPISchema.fulfilled,
matcher: appInfoApi.endpoints.getOpenAPISchema.matchFulfilled,
effect: (action, { dispatch, getState }) => {
const log = logger('system');
const schemaJSON = action.payload;
log.debug({ schemaJSON }, 'Received OpenAPI schema');
log.debug({ schemaJSON: parseify(schemaJSON) }, 'Received OpenAPI schema');
const { nodesAllowlist, nodesDenylist } = getState().config;
const nodeTemplates = parseSchema(schemaJSON, nodesAllowlist, nodesDenylist);
@@ -26,10 +26,14 @@ export const addReceivedOpenAPISchemaListener = () => {
});
startAppListening({
actionCreator: receivedOpenAPISchema.rejected,
matcher: appInfoApi.endpoints.getOpenAPISchema.matchRejected,
effect: (action) => {
const log = logger('system');
log.error({ error: parseify(action.error) }, 'Problem retrieving OpenAPI Schema');
// If action.meta.condition === true, the request was canceled/skipped because another request was in flight or
// the value was already in the cache. We don't want to log these errors.
if (!action.meta.condition) {
const log = logger('system');
log.error({ error: parseify(action.error) }, 'Problem retrieving OpenAPI Schema');
}
},
});
};
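
The two hunks above move schema fetching from the receivedOpenAPISchema thunk to an RTK Query endpoint, switching the listeners from actionCreator form to matcher form. A condensed sketch of that pattern (the endpoint wiring and base URL here are invented stand-ins for appInfoApi, not the app's actual definitions):

import { configureStore, createListenerMiddleware } from '@reduxjs/toolkit';
import { createApi, fetchBaseQuery } from '@reduxjs/toolkit/query/react';

// Hypothetical stand-in for appInfoApi and its getOpenAPISchema endpoint.
const api = createApi({
  baseQuery: fetchBaseQuery({ baseUrl: '/api/v1/app/' }),
  endpoints: (build) => ({
    getOpenAPISchema: build.query<unknown, void>({ query: () => 'openapi.json' }),
  }),
});

const listenerMiddleware = createListenerMiddleware();
listenerMiddleware.startListening({
  matcher: api.endpoints.getOpenAPISchema.matchRejected,
  effect: (action) => {
    // meta.condition === true means RTK Query skipped the request (another
    // request was already in flight, or the value was cached), so it is not
    // a real failure and should not be logged as an error.
    if (!action.meta.condition) {
      console.error('Problem retrieving OpenAPI Schema', action.error);
    }
  },
});

const store = configureStore({
  reducer: { [api.reducerPath]: api.reducer },
  middleware: (getDefault) => getDefault().prepend(listenerMiddleware.middleware).concat(api.middleware),
});

// Kick off the query imperatively, as the lazy hook in ReloadNodeTemplatesButton does.
store.dispatch(api.endpoints.getOpenAPISchema.initiate());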

View File

@@ -1,10 +1,9 @@
import { logger } from 'app/logging/logger';
import { $baseUrl } from 'app/store/nanostores/baseUrl';
import { isEqual, size } from 'lodash-es';
import { isEqual } from 'lodash-es';
import { atom } from 'nanostores';
import { api } from 'services/api';
import { queueApi, selectQueueStatus } from 'services/api/endpoints/queue';
import { receivedOpenAPISchema } from 'services/api/thunks/schema';
import { socketConnected } from 'services/events/actions';
import { startAppListening } from '../..';
@@ -77,17 +76,4 @@ export const addSocketConnectedEventListener = () => {
}
},
});
startAppListening({
actionCreator: socketConnected,
effect: async (action, { dispatch, getState }) => {
const { nodeTemplates, config } = getState();
// We only want to re-fetch the schema if we don't have any node templates
if (!size(nodeTemplates.templates) && !config.disabledTabs.includes('nodes')) {
// This request is a createAsyncThunk - resetting API state as in the above listener
// will not trigger this request, so we need to manually do it.
dispatch(receivedOpenAPISchema());
}
},
});
};

View File

@@ -6,7 +6,6 @@ import { WorkflowMigrationError, WorkflowVersionError } from 'features/nodes/typ
import { validateWorkflow } from 'features/nodes/util/workflow/validateWorkflow';
import { addToast } from 'features/system/store/systemSlice';
import { makeToast } from 'features/system/util/makeToast';
import { setActiveTab } from 'features/ui/store/uiSlice';
import { t } from 'i18next';
import { z } from 'zod';
import { fromZodError } from 'zod-validation-error';
@@ -53,7 +52,6 @@ export const addWorkflowLoadRequestedListener = () => {
});
}
dispatch(setActiveTab('nodes'));
requestAnimationFrame(() => {
$flow.get()?.fitView();
});

View File

@@ -1,22 +1,28 @@
import { useStore } from '@nanostores/react';
import { useAppToaster } from 'app/components/Toaster';
import { $authToken } from 'app/store/nanostores/authToken';
import { useAppDispatch } from 'app/store/storeHooks';
import { imageDownloaded } from 'features/gallery/store/actions';
import { useCallback } from 'react';
import { useTranslation } from 'react-i18next';
import { useImageUrlToBlob } from './useImageUrlToBlob';
export const useDownloadImage = () => {
const toaster = useAppToaster();
const { t } = useTranslation();
const imageUrlToBlob = useImageUrlToBlob();
const dispatch = useAppDispatch();
const authToken = useStore($authToken);
const downloadImage = useCallback(
async (image_url: string, image_name: string) => {
try {
const blob = await imageUrlToBlob(image_url);
const requestOpts = authToken
? {
headers: {
Authorization: `Bearer ${authToken}`,
},
}
: {};
const blob = await fetch(image_url, requestOpts).then((resp) => resp.blob());
if (!blob) {
throw new Error('Unable to create Blob');
}
@@ -40,7 +46,7 @@ export const useDownloadImage = () => {
});
}
},
[t, toaster, imageUrlToBlob, dispatch]
[t, toaster, dispatch, authToken]
);
return { downloadImage };

View File

@@ -6,6 +6,7 @@ import CannyProcessor from './processors/CannyProcessor';
import ColorMapProcessor from './processors/ColorMapProcessor';
import ContentShuffleProcessor from './processors/ContentShuffleProcessor';
import DepthAnyThingProcessor from './processors/DepthAnyThingProcessor';
import DWOpenposeProcessor from './processors/DWOpenposeProcessor';
import HedProcessor from './processors/HedProcessor';
import LineartAnimeProcessor from './processors/LineartAnimeProcessor';
import LineartProcessor from './processors/LineartProcessor';
@@ -13,7 +14,6 @@ import MediapipeFaceProcessor from './processors/MediapipeFaceProcessor';
import MidasDepthProcessor from './processors/MidasDepthProcessor';
import MlsdImageProcessor from './processors/MlsdImageProcessor';
import NormalBaeProcessor from './processors/NormalBaeProcessor';
import OpenposeProcessor from './processors/OpenposeProcessor';
import PidiProcessor from './processors/PidiProcessor';
import ZoeDepthProcessor from './processors/ZoeDepthProcessor';
@@ -73,8 +73,8 @@ const ControlAdapterProcessorComponent = ({ id }: Props) => {
return <NormalBaeProcessor controlNetId={id} processorNode={processorNode} isEnabled={isEnabled} />;
}
if (processorNode.type === 'openpose_image_processor') {
return <OpenposeProcessor controlNetId={id} processorNode={processorNode} isEnabled={isEnabled} />;
if (processorNode.type === 'dw_openpose_image_processor') {
return <DWOpenposeProcessor controlNetId={id} processorNode={processorNode} isEnabled={isEnabled} />;
}
if (processorNode.type === 'pidi_image_processor') {

View File

@@ -0,0 +1,92 @@
import { CompositeNumberInput, CompositeSlider, Flex, FormControl, FormLabel, Switch } from '@invoke-ai/ui-library';
import { useProcessorNodeChanged } from 'features/controlAdapters/components/hooks/useProcessorNodeChanged';
import { CONTROLNET_PROCESSORS } from 'features/controlAdapters/store/constants';
import type { RequiredDWOpenposeImageProcessorInvocation } from 'features/controlAdapters/store/types';
import type { ChangeEvent } from 'react';
import { memo, useCallback } from 'react';
import { useTranslation } from 'react-i18next';
import ProcessorWrapper from './common/ProcessorWrapper';
const DEFAULTS = CONTROLNET_PROCESSORS.dw_openpose_image_processor
.default as RequiredDWOpenposeImageProcessorInvocation;
type Props = {
controlNetId: string;
processorNode: RequiredDWOpenposeImageProcessorInvocation;
isEnabled: boolean;
};
const DWOpenposeProcessor = (props: Props) => {
const { controlNetId, processorNode, isEnabled } = props;
const { image_resolution, draw_body, draw_face, draw_hands } = processorNode;
const processorChanged = useProcessorNodeChanged();
const { t } = useTranslation();
const handleDrawBodyChanged = useCallback(
(e: ChangeEvent<HTMLInputElement>) => {
processorChanged(controlNetId, { draw_body: e.target.checked });
},
[controlNetId, processorChanged]
);
const handleDrawFaceChanged = useCallback(
(e: ChangeEvent<HTMLInputElement>) => {
processorChanged(controlNetId, { draw_face: e.target.checked });
},
[controlNetId, processorChanged]
);
const handleDrawHandsChanged = useCallback(
(e: ChangeEvent<HTMLInputElement>) => {
processorChanged(controlNetId, { draw_hands: e.target.checked });
},
[controlNetId, processorChanged]
);
const handleImageResolutionChanged = useCallback(
(v: number) => {
processorChanged(controlNetId, { image_resolution: v });
},
[controlNetId, processorChanged]
);
return (
<ProcessorWrapper>
<Flex sx={{ flexDir: 'row', gap: 6 }}>
<FormControl isDisabled={!isEnabled} w="max-content">
<FormLabel>{t('controlnet.body')}</FormLabel>
<Switch defaultChecked={DEFAULTS.draw_body} isChecked={draw_body} onChange={handleDrawBodyChanged} />
</FormControl>
<FormControl isDisabled={!isEnabled} w="max-content">
<FormLabel>{t('controlnet.face')}</FormLabel>
<Switch defaultChecked={DEFAULTS.draw_face} isChecked={draw_face} onChange={handleDrawFaceChanged} />
</FormControl>
<FormControl isDisabled={!isEnabled} w="max-content">
<FormLabel>{t('controlnet.hands')}</FormLabel>
<Switch defaultChecked={DEFAULTS.draw_hands} isChecked={draw_hands} onChange={handleDrawHandsChanged} />
</FormControl>
</Flex>
<FormControl isDisabled={!isEnabled}>
<FormLabel>{t('controlnet.imageResolution')}</FormLabel>
<CompositeSlider
value={image_resolution}
onChange={handleImageResolutionChanged}
defaultValue={DEFAULTS.image_resolution}
min={0}
max={4096}
marks
/>
<CompositeNumberInput
value={image_resolution}
onChange={handleImageResolutionChanged}
defaultValue={DEFAULTS.image_resolution}
min={0}
max={4096}
/>
</FormControl>
</ProcessorWrapper>
);
};
export default memo(DWOpenposeProcessor);

View File

@@ -1,92 +0,0 @@
import { CompositeNumberInput, CompositeSlider, FormControl, FormLabel, Switch } from '@invoke-ai/ui-library';
import { useProcessorNodeChanged } from 'features/controlAdapters/components/hooks/useProcessorNodeChanged';
import { CONTROLNET_PROCESSORS } from 'features/controlAdapters/store/constants';
import type { RequiredOpenposeImageProcessorInvocation } from 'features/controlAdapters/store/types';
import type { ChangeEvent } from 'react';
import { memo, useCallback } from 'react';
import { useTranslation } from 'react-i18next';
import ProcessorWrapper from './common/ProcessorWrapper';
const DEFAULTS = CONTROLNET_PROCESSORS.openpose_image_processor.default as RequiredOpenposeImageProcessorInvocation;
type Props = {
controlNetId: string;
processorNode: RequiredOpenposeImageProcessorInvocation;
isEnabled: boolean;
};
const OpenposeProcessor = (props: Props) => {
const { controlNetId, processorNode, isEnabled } = props;
const { image_resolution, detect_resolution, hand_and_face } = processorNode;
const processorChanged = useProcessorNodeChanged();
const { t } = useTranslation();
const handleDetectResolutionChanged = useCallback(
(v: number) => {
processorChanged(controlNetId, { detect_resolution: v });
},
[controlNetId, processorChanged]
);
const handleImageResolutionChanged = useCallback(
(v: number) => {
processorChanged(controlNetId, { image_resolution: v });
},
[controlNetId, processorChanged]
);
const handleHandAndFaceChanged = useCallback(
(e: ChangeEvent<HTMLInputElement>) => {
processorChanged(controlNetId, { hand_and_face: e.target.checked });
},
[controlNetId, processorChanged]
);
return (
<ProcessorWrapper>
<FormControl isDisabled={!isEnabled}>
<FormLabel>{t('controlnet.detectResolution')}</FormLabel>
<CompositeSlider
value={detect_resolution}
onChange={handleDetectResolutionChanged}
defaultValue={DEFAULTS.detect_resolution}
min={0}
max={4096}
marks
/>
<CompositeNumberInput
value={detect_resolution}
onChange={handleDetectResolutionChanged}
defaultValue={DEFAULTS.detect_resolution}
min={0}
max={4096}
/>
</FormControl>
<FormControl isDisabled={!isEnabled}>
<FormLabel>{t('controlnet.imageResolution')}</FormLabel>
<CompositeSlider
value={image_resolution}
onChange={handleImageResolutionChanged}
defaultValue={DEFAULTS.image_resolution}
min={0}
max={4096}
marks
/>
<CompositeNumberInput
value={image_resolution}
onChange={handleImageResolutionChanged}
defaultValue={DEFAULTS.image_resolution}
min={0}
max={4096}
/>
</FormControl>
<FormControl isDisabled={!isEnabled}>
<FormLabel>{t('controlnet.handAndFace')}</FormLabel>
<Switch isChecked={hand_and_face} onChange={handleHandAndFaceChanged} />
</FormControl>
</ProcessorWrapper>
);
};
export default memo(OpenposeProcessor);

View File

@@ -205,20 +205,21 @@ export const CONTROLNET_PROCESSORS: ControlNetProcessorsDict = {
image_resolution: 512,
},
},
openpose_image_processor: {
type: 'openpose_image_processor',
dw_openpose_image_processor: {
type: 'dw_openpose_image_processor',
get label() {
return i18n.t('controlnet.openPose');
return i18n.t('controlnet.dwOpenpose');
},
get description() {
return i18n.t('controlnet.openPoseDescription');
return i18n.t('controlnet.dwOpenposeDescription');
},
default: {
id: 'openpose_image_processor',
type: 'openpose_image_processor',
detect_resolution: 512,
id: 'dw_openpose_image_processor',
type: 'dw_openpose_image_processor',
image_resolution: 512,
hand_and_face: false,
draw_body: true,
draw_face: false,
draw_hands: false,
},
},
pidi_image_processor: {
@@ -266,7 +267,7 @@ export const CONTROLNET_MODEL_DEFAULT_PROCESSORS: {
lineart_anime: 'lineart_anime_image_processor',
softedge: 'hed_image_processor',
shuffle: 'content_shuffle_image_processor',
openpose: 'openpose_image_processor',
openpose: 'dw_openpose_image_processor',
mediapipe: 'mediapipe_face_processor',
pidi: 'pidi_image_processor',
zoe: 'zoe_depth_image_processor',

View File

@@ -11,6 +11,7 @@ import type {
ColorMapImageProcessorInvocation,
ContentShuffleImageProcessorInvocation,
DepthAnythingImageProcessorInvocation,
DWOpenposeImageProcessorInvocation,
HedImageProcessorInvocation,
LineartAnimeImageProcessorInvocation,
LineartImageProcessorInvocation,
@@ -18,7 +19,6 @@ import type {
MidasDepthImageProcessorInvocation,
MlsdImageProcessorInvocation,
NormalbaeImageProcessorInvocation,
OpenposeImageProcessorInvocation,
PidiImageProcessorInvocation,
ZoeDepthImageProcessorInvocation,
} from 'services/api/types';
@@ -40,7 +40,7 @@ export type ControlAdapterProcessorNode =
| MidasDepthImageProcessorInvocation
| MlsdImageProcessorInvocation
| NormalbaeImageProcessorInvocation
| OpenposeImageProcessorInvocation
| DWOpenposeImageProcessorInvocation
| PidiImageProcessorInvocation
| ZoeDepthImageProcessorInvocation;
@@ -143,11 +143,11 @@ export type RequiredNormalbaeImageProcessorInvocation = O.Required<
>;
/**
* The Openpose processor node, with parameters flagged as required
* The DW Openpose processor node, with parameters flagged as required
*/
export type RequiredOpenposeImageProcessorInvocation = O.Required<
OpenposeImageProcessorInvocation,
'type' | 'detect_resolution' | 'image_resolution' | 'hand_and_face'
export type RequiredDWOpenposeImageProcessorInvocation = O.Required<
DWOpenposeImageProcessorInvocation,
'type' | 'image_resolution' | 'draw_body' | 'draw_face' | 'draw_hands'
>;
/**
@@ -179,7 +179,7 @@ export type RequiredControlAdapterProcessorNode =
| RequiredMidasDepthImageProcessorInvocation
| RequiredMlsdImageProcessorInvocation
| RequiredNormalbaeImageProcessorInvocation
| RequiredOpenposeImageProcessorInvocation
| RequiredDWOpenposeImageProcessorInvocation
| RequiredPidiImageProcessorInvocation
| RequiredZoeDepthImageProcessorInvocation,
'id'
@@ -299,10 +299,10 @@ export const isNormalbaeImageProcessorInvocation = (obj: unknown): obj is Normal
};
/**
* Type guard for OpenposeImageProcessorInvocation
* Type guard for DWOpenposeImageProcessorInvocation
*/
export const isOpenposeImageProcessorInvocation = (obj: unknown): obj is OpenposeImageProcessorInvocation => {
if (isObject(obj) && 'type' in obj && obj.type === 'openpose_image_processor') {
export const isDWOpenposeImageProcessorInvocation = (obj: unknown): obj is DWOpenposeImageProcessorInvocation => {
if (isObject(obj) && 'type' in obj && obj.type === 'dw_openpose_image_processor') {
return true;
}
return false;

View File

@@ -0,0 +1,23 @@
import type { DragEndEvent } from '@dnd-kit/core';
import { SortableContext, verticalListSortingStrategy } from '@dnd-kit/sortable';
import type { PropsWithChildren } from 'react';
import { memo } from 'react';
import { DndContextTypesafe } from './DndContextTypesafe';
type Props = PropsWithChildren & {
items: string[];
onDragEnd(event: DragEndEvent): void;
};
const DndSortable = (props: Props) => {
return (
<DndContextTypesafe onDragEnd={props.onDragEnd}>
<SortableContext items={props.items} strategy={verticalListSortingStrategy}>
{props.children}
</SortableContext>
</DndContextTypesafe>
);
};
export default memo(DndSortable);
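
DndSortable is only the context half of the drag-and-drop pattern: each row rendered inside it must register itself with @dnd-kit's useSortable hook under a matching id, as the LinearViewField changes below do. A stripped-down sortable row for illustration (the component name and id are invented; it must be rendered inside DndSortable to work):

import { useSortable } from '@dnd-kit/sortable';
import { CSS } from '@dnd-kit/utilities';

const SortableRow = ({ id }: { id: string }) => {
  // Registers this element with the surrounding SortableContext and supplies
  // the transform to apply while it is being dragged.
  const { attributes, listeners, setNodeRef, transform, transition } = useSortable({ id });
  return (
    <div ref={setNodeRef} style={{ transform: CSS.Translate.toString(transform), transition }} {...attributes} {...listeners}>
      {id}
    </div>
  );
};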

View File

@@ -35,7 +35,7 @@ export const loraSlice = createSlice({
},
loraRecalled: (state, action: PayloadAction<LoRAModelConfigEntity & { weight: number }>) => {
const { model_name, id, base_model, weight } = action.payload;
state.loras[id] = { id, model_name, base_model, weight };
state.loras[id] = { id, model_name, base_model, weight, isEnabled: true };
},
loraRemoved: (state, action: PayloadAction<string>) => {
const id = action.payload;

View File

@@ -1,7 +1,6 @@
import 'reactflow/dist/style.css';
import { Flex } from '@invoke-ai/ui-library';
import { useAppSelector } from 'app/store/storeHooks';
import { IAINoContentFallback } from 'common/components/IAIImageFallback';
import TopPanel from 'features/nodes/components/flow/panels/TopPanel/TopPanel';
import { SaveWorkflowAsDialog } from 'features/workflowLibrary/components/SaveWorkflowAsDialog/SaveWorkflowAsDialog';
@@ -11,6 +10,7 @@ import type { CSSProperties } from 'react';
import { memo } from 'react';
import { useTranslation } from 'react-i18next';
import { MdDeviceHub } from 'react-icons/md';
import { useGetOpenAPISchemaQuery } from 'services/api/endpoints/appInfo';
import AddNodePopover from './flow/AddNodePopover/AddNodePopover';
import { Flow } from './flow/Flow';
@@ -40,7 +40,7 @@ const exit: AnimationProps['exit'] = {
};
const NodeEditor = () => {
const isReady = useAppSelector((s) => s.nodes.isReady);
const { data, isLoading } = useGetOpenAPISchemaQuery();
const { t } = useTranslation();
return (
<Flex
@@ -53,7 +53,7 @@ const NodeEditor = () => {
justifyContent="center"
>
<AnimatePresence>
{isReady && (
{data && (
<motion.div initial={initial} animate={animate} exit={exit} style={isReadyMotionStyles}>
<Flow />
<AddNodePopover />
@@ -65,7 +65,7 @@ const NodeEditor = () => {
)}
</AnimatePresence>
<AnimatePresence>
{!isReady && (
{isLoading && (
<motion.div initial={initial} animate={animate} exit={exit} style={notIsReadyMotionStyles}>
<Flex
layerStyle="first"

View File

@@ -1,6 +1,7 @@
import { IconButton } from '@invoke-ai/ui-library';
import { createSelector } from '@reduxjs/toolkit';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { useFieldValue } from 'features/nodes/hooks/useFieldValue';
import {
selectWorkflowSlice,
workflowExposedFieldAdded,
@@ -18,7 +19,7 @@ type Props = {
const FieldLinearViewToggle = ({ nodeId, fieldName }: Props) => {
const dispatch = useAppDispatch();
const { t } = useTranslation();
const value = useFieldValue(nodeId, fieldName);
const selectIsExposed = useMemo(
() =>
createSelector(selectWorkflowSlice, (workflow) => {
@@ -30,8 +31,8 @@ const FieldLinearViewToggle = ({ nodeId, fieldName }: Props) => {
const isExposed = useAppSelector(selectIsExposed);
const handleExposeField = useCallback(() => {
dispatch(workflowExposedFieldAdded({ nodeId, fieldName }));
}, [dispatch, fieldName, nodeId]);
dispatch(workflowExposedFieldAdded({ nodeId, fieldName, value }));
}, [dispatch, fieldName, nodeId, value]);
const handleUnexposeField = useCallback(() => {
dispatch(workflowExposedFieldRemoved({ nodeId, fieldName }));

View File

@@ -1,12 +1,15 @@
import { useSortable } from '@dnd-kit/sortable';
import { CSS } from '@dnd-kit/utilities';
import { Flex, Icon, IconButton, Spacer, Tooltip } from '@invoke-ai/ui-library';
import { useAppDispatch } from 'app/store/storeHooks';
import NodeSelectionOverlay from 'common/components/NodeSelectionOverlay';
import { useFieldOriginalValue } from 'features/nodes/hooks/useFieldOriginalValue';
import { useMouseOverNode } from 'features/nodes/hooks/useMouseOverNode';
import { workflowExposedFieldRemoved } from 'features/nodes/store/workflowSlice';
import { HANDLE_TOOLTIP_OPEN_DELAY } from 'features/nodes/types/constants';
import { memo, useCallback } from 'react';
import { useTranslation } from 'react-i18next';
import { PiInfoBold, PiTrashSimpleBold } from 'react-icons/pi';
import { PiArrowCounterClockwiseBold, PiDotsSixVerticalBold, PiInfoBold, PiTrashSimpleBold } from 'react-icons/pi';
import EditableFieldTitle from './EditableFieldTitle';
import FieldTooltipContent from './FieldTooltipContent';
@@ -19,46 +22,79 @@ type Props = {
const LinearViewField = ({ nodeId, fieldName }: Props) => {
const dispatch = useAppDispatch();
const { isValueChanged, onReset } = useFieldOriginalValue(nodeId, fieldName);
const { isMouseOverNode, handleMouseOut, handleMouseOver } = useMouseOverNode(nodeId);
const { t } = useTranslation();
const handleRemoveField = useCallback(() => {
dispatch(workflowExposedFieldRemoved({ nodeId, fieldName }));
}, [dispatch, fieldName, nodeId]);
const { attributes, listeners, setNodeRef, transform, transition } = useSortable({ id: `${nodeId}.${fieldName}` });
const style = {
transform: CSS.Translate.toString(transform),
transition,
};
return (
<Flex
onMouseEnter={handleMouseOver}
onMouseLeave={handleMouseOut}
layerStyle="second"
alignItems="center"
position="relative"
borderRadius="base"
w="full"
p={4}
flexDir="column"
paddingLeft={0}
ref={setNodeRef}
style={style}
>
<Flex>
<EditableFieldTitle nodeId={nodeId} fieldName={fieldName} kind="input" />
<Spacer />
<Tooltip
label={<FieldTooltipContent nodeId={nodeId} fieldName={fieldName} kind="input" />}
openDelay={HANDLE_TOOLTIP_OPEN_DELAY}
placement="top"
>
<Flex h="full" alignItems="center">
<Icon fontSize="sm" color="base.300" as={PiInfoBold} />
</Flex>
</Tooltip>
<IconButton
aria-label={t('nodes.removeLinearView')}
tooltip={t('nodes.removeLinearView')}
variant="ghost"
size="sm"
onClick={handleRemoveField}
icon={<PiTrashSimpleBold />}
/>
<IconButton
aria-label={t('nodes.reorderLinearView')}
variant="ghost"
icon={<PiDotsSixVerticalBold />}
{...listeners}
{...attributes}
mx={2}
height="full"
/>
<Flex flexDir="column" w="full">
<Flex alignItems="center">
<EditableFieldTitle nodeId={nodeId} fieldName={fieldName} kind="input" />
<Spacer />
{isValueChanged && (
<IconButton
aria-label={t('nodes.resetToDefaultValue')}
tooltip={t('nodes.resetToDefaultValue')}
variant="ghost"
size="sm"
onClick={onReset}
icon={<PiArrowCounterClockwiseBold />}
/>
)}
<Tooltip
label={<FieldTooltipContent nodeId={nodeId} fieldName={fieldName} kind="input" />}
openDelay={HANDLE_TOOLTIP_OPEN_DELAY}
placement="top"
>
<Flex h="full" alignItems="center">
<Icon fontSize="sm" color="base.300" as={PiInfoBold} />
</Flex>
</Tooltip>
<IconButton
aria-label={t('nodes.removeLinearView')}
tooltip={t('nodes.removeLinearView')}
variant="ghost"
size="sm"
onClick={handleRemoveField}
icon={<PiTrashSimpleBold />}
/>
</Flex>
<InputFieldRenderer nodeId={nodeId} fieldName={fieldName} />
<NodeSelectionOverlay isSelected={false} isHovered={isMouseOverNode} />
</Flex>
<InputFieldRenderer nodeId={nodeId} fieldName={fieldName} />
<NodeSelectionOverlay isSelected={false} isHovered={isMouseOverNode} />
</Flex>
);
};

View File

@@ -1,25 +1,23 @@
import { Flex, Spacer } from '@invoke-ai/ui-library';
import { useAppSelector } from 'app/store/storeHooks';
import AddNodeButton from 'features/nodes/components/flow/panels/TopPanel/AddNodeButton';
import ClearFlowButton from 'features/nodes/components/flow/panels/TopPanel/ClearFlowButton';
import SaveWorkflowButton from 'features/nodes/components/flow/panels/TopPanel/SaveWorkflowButton';
import UpdateNodesButton from 'features/nodes/components/flow/panels/TopPanel/UpdateNodesButton';
import WorkflowName from 'features/nodes/components/flow/panels/TopPanel/WorkflowName';
import WorkflowLibraryButton from 'features/workflowLibrary/components/WorkflowLibraryButton';
import { WorkflowName } from 'features/nodes/components/sidePanel/WorkflowName';
import WorkflowLibraryMenu from 'features/workflowLibrary/components/WorkflowLibraryMenu/WorkflowLibraryMenu';
import { memo } from 'react';
const TopCenterPanel = () => {
const name = useAppSelector((s) => s.workflow.name);
return (
<Flex gap={2} top={2} left={2} right={2} position="absolute" alignItems="flex-start" pointerEvents="none">
<Flex flexDir="column" gap="2">
<Flex gap="2">
<AddNodeButton />
<WorkflowLibraryButton />
</Flex>
<Flex gap="2">
<AddNodeButton />
<UpdateNodesButton />
</Flex>
<Spacer />
<WorkflowName />
{!!name.length && <WorkflowName />}
<Spacer />
<ClearFlowButton />
<SaveWorkflowButton />

View File

@@ -25,6 +25,7 @@ const UpdateNodesButton = () => {
icon={<PiWarningBold />}
onClick={handleClickUpdateNodes}
pointerEvents="auto"
colorScheme="warning"
/>
);
};

View File

@@ -1,15 +0,0 @@
import { Text } from '@invoke-ai/ui-library';
import { useAppSelector } from 'app/store/storeHooks';
import { memo } from 'react';
const TopCenterPanel = () => {
const name = useAppSelector((s) => s.workflow.name);
return (
<Text m={2} fontSize="lg" userSelect="none" noOfLines={1} wordBreak="break-all" fontWeight="semibold" opacity={0.8}>
{name}
</Text>
);
};
export default memo(TopCenterPanel);

View File

@@ -1,17 +1,16 @@
import { Button } from '@invoke-ai/ui-library';
import { useAppDispatch } from 'app/store/storeHooks';
import { memo, useCallback } from 'react';
import { useTranslation } from 'react-i18next';
import { PiArrowsClockwiseBold } from 'react-icons/pi';
import { receivedOpenAPISchema } from 'services/api/thunks/schema';
import { useLazyGetOpenAPISchemaQuery } from 'services/api/endpoints/appInfo';
const ReloadNodeTemplatesButton = () => {
const { t } = useTranslation();
const dispatch = useAppDispatch();
const [_getOpenAPISchema] = useLazyGetOpenAPISchemaQuery();
const handleReloadSchema = useCallback(() => {
dispatch(receivedOpenAPISchema());
}, [dispatch]);
_getOpenAPISchema();
}, [_getOpenAPISchema]);
return (
<Button

View File

@@ -1,15 +0,0 @@
import { Flex } from '@invoke-ai/ui-library';
import WorkflowLibraryButton from 'features/workflowLibrary/components/WorkflowLibraryButton';
import WorkflowLibraryMenu from 'features/workflowLibrary/components/WorkflowLibraryMenu/WorkflowLibraryMenu';
import { memo } from 'react';
const TopRightPanel = () => {
return (
<Flex gap={2} position="absolute" top={2} insetInlineEnd={2}>
<WorkflowLibraryButton />
<WorkflowLibraryMenu />
</Flex>
);
};
export default memo(TopRightPanel);

View File

@@ -0,0 +1,43 @@
import { Flex, IconButton } from '@invoke-ai/ui-library';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { workflowModeChanged } from 'features/nodes/store/workflowSlice';
import { useCallback } from 'react';
import { useTranslation } from 'react-i18next';
import { PiEyeBold, PiPencilBold } from 'react-icons/pi';
export const ModeToggle = () => {
const dispatch = useAppDispatch();
const mode = useAppSelector((s) => s.workflow.mode);
const { t } = useTranslation();
const onClickEdit = useCallback(() => {
dispatch(workflowModeChanged('edit'));
}, [dispatch]);
const onClickView = useCallback(() => {
dispatch(workflowModeChanged('view'));
}, [dispatch]);
return (
<Flex justifyContent="flex-end">
{mode === 'view' && (
<IconButton
aria-label={t('nodes.editMode')}
tooltip={t('nodes.editMode')}
onClick={onClickEdit}
icon={<PiPencilBold />}
colorScheme="invokeBlue"
/>
)}
{mode === 'edit' && (
<IconButton
aria-label={t('nodes.viewMode')}
tooltip={t('nodes.viewMode')}
onClick={onClickView}
icon={<PiEyeBold />}
colorScheme="invokeBlue"
/>
)}
</Flex>
);
};

View File

@@ -1,22 +1,37 @@
import 'reactflow/dist/style.css';
import { Flex } from '@invoke-ai/ui-library';
import { createMemoizedSelector } from 'app/store/createMemoizedSelector';
import { useAppSelector } from 'app/store/storeHooks';
import { selectWorkflowSlice } from 'features/nodes/store/workflowSlice';
import QueueControls from 'features/queue/components/QueueControls';
import ResizeHandle from 'features/ui/components/tabs/ResizeHandle';
import { usePanelStorage } from 'features/ui/hooks/usePanelStorage';
import WorkflowLibraryButton from 'features/workflowLibrary/components/WorkflowLibraryButton';
import type { CSSProperties } from 'react';
import { memo, useCallback, useRef } from 'react';
import type { ImperativePanelGroupHandle } from 'react-resizable-panels';
import { Panel, PanelGroup } from 'react-resizable-panels';
import InspectorPanel from './inspector/InspectorPanel';
import { WorkflowViewMode } from './viewMode/WorkflowViewMode';
import WorkflowPanel from './workflow/WorkflowPanel';
import { WorkflowMenu } from './WorkflowMenu';
import { WorkflowName } from './WorkflowName';
const panelGroupStyles: CSSProperties = { height: '100%', width: '100%' };
const selector = createMemoizedSelector(selectWorkflowSlice, (workflow) => {
return {
mode: workflow.mode,
};
});
const NodeEditorPanelGroup = () => {
const { mode } = useAppSelector(selector);
const panelGroupRef = useRef<ImperativePanelGroupHandle>(null);
const panelStorage = usePanelStorage();
const handleDoubleClickHandle = useCallback(() => {
if (!panelGroupRef.current) {
return;
@@ -27,22 +42,33 @@ const NodeEditorPanelGroup = () => {
return (
<Flex w="full" h="full" gap={2} flexDir="column">
<QueueControls />
<PanelGroup
ref={panelGroupRef}
id="workflow-panel-group"
autoSaveId="workflow-panel-group"
direction="vertical"
style={panelGroupStyles}
storage={panelStorage}
>
<Panel id="workflow" collapsible minSize={25}>
<WorkflowPanel />
</Panel>
<ResizeHandle orientation="horizontal" onDoubleClick={handleDoubleClickHandle} />
<Panel id="inspector" collapsible minSize={25}>
<InspectorPanel />
</Panel>
</PanelGroup>
<Flex w="full" justifyContent="space-between" alignItems="center" gap="4" padding={1}>
<Flex justifyContent="space-between" alignItems="center" gap="4">
<WorkflowLibraryButton />
<WorkflowName />
</Flex>
<WorkflowMenu />
</Flex>
{mode === 'view' && <WorkflowViewMode />}
{mode === 'edit' && (
<PanelGroup
ref={panelGroupRef}
id="workflow-panel-group"
autoSaveId="workflow-panel-group"
direction="vertical"
style={panelGroupStyles}
storage={panelStorage}
>
<Panel id="workflow" collapsible minSize={25}>
<WorkflowPanel />
</Panel>
<ResizeHandle orientation="horizontal" onDoubleClick={handleDoubleClickHandle} />
<Panel id="inspector" collapsible minSize={25}>
<InspectorPanel />
</Panel>
</PanelGroup>
)}
</Flex>
);
};

View File

@@ -0,0 +1,26 @@
import { Flex } from '@invoke-ai/ui-library';
import { createMemoizedSelector } from 'app/store/createMemoizedSelector';
import { useAppSelector } from 'app/store/storeHooks';
import SaveWorkflowButton from 'features/nodes/components/flow/panels/TopPanel/SaveWorkflowButton';
import { selectWorkflowSlice } from 'features/nodes/store/workflowSlice';
import { NewWorkflowButton } from 'features/workflowLibrary/components/NewWorkflowButton';
import { ModeToggle } from './ModeToggle';
const selector = createMemoizedSelector(selectWorkflowSlice, (workflow) => {
return {
mode: workflow.mode,
};
});
export const WorkflowMenu = () => {
const { mode } = useAppSelector(selector);
return (
<Flex gap="2" alignItems="center">
{mode === 'edit' && <SaveWorkflowButton />}
<NewWorkflowButton />
<ModeToggle />
</Flex>
);
};

View File

@@ -0,0 +1,37 @@
import { Flex, Icon, Text, Tooltip } from '@invoke-ai/ui-library';
import { useAppSelector } from 'app/store/storeHooks';
import { useTranslation } from 'react-i18next';
import { PiDotOutlineFill } from 'react-icons/pi';
import WorkflowInfoTooltipContent from './viewMode/WorkflowInfoTooltipContent';
import { WorkflowWarning } from './viewMode/WorkflowWarning';
export const WorkflowName = () => {
const { name, isTouched, mode } = useAppSelector((s) => s.workflow);
const { t } = useTranslation();
return (
<Flex gap="1" alignItems="center">
{name.length ? (
<Tooltip label={<WorkflowInfoTooltipContent />} placement="top">
<Text fontSize="lg" userSelect="none" noOfLines={1} wordBreak="break-all" fontWeight="semibold">
{name}
</Text>
</Tooltip>
) : (
<Text fontSize="lg" fontStyle="italic" fontWeight="semibold">
{t('workflows.unnamedWorkflow')}
</Text>
)}
{isTouched && mode === 'edit' && (
<Tooltip label="Workflow has unsaved changes">
<Flex>
<Icon as={PiDotOutlineFill} boxSize="20px" sx={{ color: 'invokeYellow.500' }} />
</Flex>
</Tooltip>
)}
<WorkflowWarning />
</Flex>
);
};

View File

@@ -0,0 +1,53 @@
import { Flex, FormLabel, Icon, IconButton, Spacer, Tooltip } from '@invoke-ai/ui-library';
import FieldTooltipContent from 'features/nodes/components/flow/nodes/Invocation/fields/FieldTooltipContent';
import InputFieldRenderer from 'features/nodes/components/flow/nodes/Invocation/fields/InputFieldRenderer';
import { useFieldLabel } from 'features/nodes/hooks/useFieldLabel';
import { useFieldOriginalValue } from 'features/nodes/hooks/useFieldOriginalValue';
import { useFieldTemplateTitle } from 'features/nodes/hooks/useFieldTemplateTitle';
import { HANDLE_TOOLTIP_OPEN_DELAY } from 'features/nodes/types/constants';
import { t } from 'i18next';
import { memo } from 'react';
import { PiArrowCounterClockwiseBold, PiInfoBold } from 'react-icons/pi';
type Props = {
nodeId: string;
fieldName: string;
};
const WorkflowField = ({ nodeId, fieldName }: Props) => {
const label = useFieldLabel(nodeId, fieldName);
const fieldTemplateTitle = useFieldTemplateTitle(nodeId, fieldName, 'input');
const { isValueChanged, onReset } = useFieldOriginalValue(nodeId, fieldName);
return (
<Flex layerStyle="second" position="relative" borderRadius="base" w="full" p={4} gap="2" flexDir="column">
<Flex alignItems="center">
<FormLabel fontSize="sm">{label || fieldTemplateTitle}</FormLabel>
<Spacer />
{isValueChanged && (
<IconButton
aria-label={t('nodes.resetToDefaultValue')}
tooltip={t('nodes.resetToDefaultValue')}
variant="ghost"
size="sm"
onClick={onReset}
icon={<PiArrowCounterClockwiseBold />}
/>
)}
<Tooltip
label={<FieldTooltipContent nodeId={nodeId} fieldName={fieldName} kind="input" />}
openDelay={HANDLE_TOOLTIP_OPEN_DELAY}
placement="top"
>
<Flex h="24px" alignItems="center">
<Icon fontSize="md" color="base.300" as={PiInfoBold} />
</Flex>
</Tooltip>
</Flex>
<InputFieldRenderer nodeId={nodeId} fieldName={fieldName} />
</Flex>
);
};
export default memo(WorkflowField);

View File

@@ -0,0 +1,68 @@
import { Box, Flex, Text } from '@invoke-ai/ui-library';
import { createMemoizedSelector } from 'app/store/createMemoizedSelector';
import { useAppSelector } from 'app/store/storeHooks';
import { selectWorkflowSlice } from 'features/nodes/store/workflowSlice';
import { memo } from 'react';
import { useTranslation } from 'react-i18next';
const selector = createMemoizedSelector(selectWorkflowSlice, (workflow) => {
return {
name: workflow.name,
description: workflow.description,
notes: workflow.notes,
author: workflow.author,
tags: workflow.tags,
};
});
const WorkflowInfoTooltipContent = () => {
const { name, description, notes, author, tags } = useAppSelector(selector);
const { t } = useTranslation();
return (
<Flex flexDir="column" gap="2">
{!!name.length && (
<Box>
<Text fontWeight="semibold">{t('nodes.workflowName')}</Text>
<Text opacity={0.7} fontStyle="oblique 5deg">
{name}
</Text>
</Box>
)}
{!!author.length && (
<Box>
<Text fontWeight="semibold">{t('nodes.workflowAuthor')}</Text>
<Text opacity={0.7} fontStyle="oblique 5deg">
{author}
</Text>
</Box>
)}
{!!tags.length && (
<Box>
<Text fontWeight="semibold">{t('nodes.workflowTags')}</Text>
<Text opacity={0.7} fontStyle="oblique 5deg">
{tags}
</Text>
</Box>
)}
{!!description.length && (
<Box>
<Text fontWeight="semibold">{t('nodes.workflowDescription')}</Text>
<Text opacity={0.7} fontStyle="oblique 5deg">
{description}
</Text>
</Box>
)}
{!!notes.length && (
<Box>
<Text fontWeight="semibold">{t('nodes.workflowNotes')}</Text>
<Text opacity={0.7} fontStyle="oblique 5deg">
{notes}
</Text>
</Box>
)}
</Flex>
);
};
export default memo(WorkflowInfoTooltipContent);

View File

@@ -0,0 +1,39 @@
import { Box, Flex } from '@invoke-ai/ui-library';
import { createMemoizedSelector } from 'app/store/createMemoizedSelector';
import { useAppSelector } from 'app/store/storeHooks';
import { IAINoContentFallback } from 'common/components/IAIImageFallback';
import ScrollableContent from 'common/components/OverlayScrollbars/ScrollableContent';
import { selectWorkflowSlice } from 'features/nodes/store/workflowSlice';
import { t } from 'i18next';
import { useGetOpenAPISchemaQuery } from 'services/api/endpoints/appInfo';
import WorkflowField from './WorkflowField';
const selector = createMemoizedSelector(selectWorkflowSlice, (workflow) => {
return {
fields: workflow.exposedFields,
name: workflow.name,
};
});
export const WorkflowViewMode = () => {
const { isLoading } = useGetOpenAPISchemaQuery();
const { fields } = useAppSelector(selector);
return (
<Box position="relative" w="full" h="full">
<ScrollableContent>
<Flex position="relative" flexDir="column" alignItems="flex-start" p={1} gap={2} h="full" w="full">
{isLoading ? (
<IAINoContentFallback label={t('nodes.loadingNodes')} icon={null} />
) : fields.length ? (
fields.map(({ nodeId, fieldName }) => (
<WorkflowField key={`${nodeId}.${fieldName}`} nodeId={nodeId} fieldName={fieldName} />
))
) : (
<IAINoContentFallback label={t('nodes.noFieldsLinearview')} icon={null} />
)}
</Flex>
</ScrollableContent>
</Box>
);
};

View File

@@ -0,0 +1,21 @@
import { Flex, Icon, Tooltip } from '@invoke-ai/ui-library';
import { useGetNodesNeedUpdate } from 'features/nodes/hooks/useGetNodesNeedUpdate';
import { PiWarningBold } from 'react-icons/pi';
import { WorkflowWarningTooltip } from './WorkflowWarningTooltip';
export const WorkflowWarning = () => {
const nodesNeedUpdate = useGetNodesNeedUpdate();
if (!nodesNeedUpdate) {
return <></>;
}
return (
<Tooltip label={<WorkflowWarningTooltip />}>
<Flex h="full" alignItems="center" gap="2">
<Icon color="warning.400" as={PiWarningBold} />
</Flex>
</Tooltip>
);
};

View File

@@ -0,0 +1,20 @@
import { Flex, Text } from '@invoke-ai/ui-library';
import { useTranslation } from 'react-i18next';
export const WorkflowWarningTooltip = () => {
const { t } = useTranslation();
return (
<Flex flexDir="column" gap="2">
<Flex flexDir="column" gap="2">
<Text fontWeight="semibold">{t('toast.loadedWithWarnings')}</Text>
<Flex flexDir="column">
<Text>{t('common.toResolve')}:</Text>
<Text>
{t('nodes.editMode')} &gt;&gt; {t('nodes.updateAllNodes')} &gt;&gt; {t('common.save')}
</Text>
</Flex>
</Flex>
</Flex>
);
};

View File

@@ -1,31 +1,61 @@
import { arrayMove } from '@dnd-kit/sortable';
import { Box, Flex } from '@invoke-ai/ui-library';
import { createMemoizedSelector } from 'app/store/createMemoizedSelector';
import { useAppSelector } from 'app/store/storeHooks';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { IAINoContentFallback } from 'common/components/IAIImageFallback';
import ScrollableContent from 'common/components/OverlayScrollbars/ScrollableContent';
import DndSortable from 'features/dnd/components/DndSortable';
import type { DragEndEvent } from 'features/dnd/types';
import LinearViewField from 'features/nodes/components/flow/nodes/Invocation/fields/LinearViewField';
import { selectWorkflowSlice } from 'features/nodes/store/workflowSlice';
import { memo } from 'react';
import { selectWorkflowSlice, workflowExposedFieldsReordered } from 'features/nodes/store/workflowSlice';
import type { FieldIdentifier } from 'features/nodes/types/field';
import { memo, useCallback } from 'react';
import { useTranslation } from 'react-i18next';
import { useGetOpenAPISchemaQuery } from 'services/api/endpoints/appInfo';
const selector = createMemoizedSelector(selectWorkflowSlice, (workflow) => workflow.exposedFields);
const WorkflowLinearTab = () => {
const fields = useAppSelector(selector);
const { isLoading } = useGetOpenAPISchemaQuery();
const { t } = useTranslation();
const dispatch = useAppDispatch();
const handleDragEnd = useCallback(
(event: DragEndEvent) => {
const { active, over } = event;
const fieldsStrings = fields.map((field) => `${field.nodeId}.${field.fieldName}`);
if (over && active.id !== over.id) {
const oldIndex = fieldsStrings.indexOf(active.id as string);
const newIndex = fieldsStrings.indexOf(over.id as string);
const newFields = arrayMove(fieldsStrings, oldIndex, newIndex)
.map((field) => fields.find((obj) => `${obj.nodeId}.${obj.fieldName}` === field))
.filter((field) => field) as FieldIdentifier[];
dispatch(workflowExposedFieldsReordered(newFields));
}
},
[dispatch, fields]
);
return (
<Box position="relative" w="full" h="full">
<ScrollableContent>
-<Flex position="relative" flexDir="column" alignItems="flex-start" p={1} gap={2} h="full" w="full">
-{fields.length ? (
-fields.map(({ nodeId, fieldName }) => (
-<LinearViewField key={`${nodeId}.${fieldName}`} nodeId={nodeId} fieldName={fieldName} />
-))
-) : (
-<IAINoContentFallback label={t('nodes.noFieldsLinearview')} icon={null} />
-)}
-</Flex>
+<DndSortable onDragEnd={handleDragEnd} items={fields.map((field) => `${field.nodeId}.${field.fieldName}`)}>
+<Flex position="relative" flexDir="column" alignItems="flex-start" p={1} gap={2} h="full" w="full">
+{isLoading ? (
+<IAINoContentFallback label={t('nodes.loadingNodes')} icon={null} />
+) : fields.length ? (
+fields.map(({ nodeId, fieldName }) => (
+<LinearViewField key={`${nodeId}.${fieldName}`} nodeId={nodeId} fieldName={fieldName} />
+))
+) : (
+<IAINoContentFallback label={t('nodes.noFieldsLinearview')} icon={null} />
+)}
+</Flex>
+</DndSortable>
</ScrollableContent>
</Box>
);
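
The drag handler above converts each field to a "nodeId.fieldName" string because dnd-kit identifies sortable items by id, then maps the reordered ids back to field objects. A standalone sketch of the same reordering; arrayMove(array, from, to) comes from @dnd-kit/sortable, and operating on the objects directly avoids the find/filter round trip:

import { arrayMove } from '@dnd-kit/sortable';

type FieldIdentifier = { nodeId: string; fieldName: string };

// Move the field identified by activeId to the position held by overId.
// Returns the input unchanged when either id is missing or nothing moves.
function reorderFields(fields: FieldIdentifier[], activeId: string, overId: string): FieldIdentifier[] {
  const ids = fields.map((f) => `${f.nodeId}.${f.fieldName}`);
  const oldIndex = ids.indexOf(activeId);
  const newIndex = ids.indexOf(overId);
  if (oldIndex === -1 || newIndex === -1 || oldIndex === newIndex) {
    return fields;
  }
  return arrayMove(fields, oldIndex, newIndex);
}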


@@ -12,17 +12,17 @@ const WorkflowPanel = () => {
<Flex layerStyle="first" flexDir="column" w="full" h="full" borderRadius="base" p={2} gap={2}>
<Tabs variant="line" display="flex" w="full" h="full" flexDir="column">
<TabList>
-<Tab>{t('common.linear')}</Tab>
<Tab>{t('common.details')}</Tab>
+<Tab>{t('common.linear')}</Tab>
<Tab>JSON</Tab>
</TabList>
<TabPanels>
<TabPanel>
-<WorkflowLinearTab />
+<WorkflowGeneralTab />
</TabPanel>
<TabPanel>
-<WorkflowGeneralTab />
+<WorkflowLinearTab />
</TabPanel>
<TabPanel>
<WorkflowJSONTab />


@@ -0,0 +1,28 @@
import { createSelector } from '@reduxjs/toolkit';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { useFieldValue } from 'features/nodes/hooks/useFieldValue';
import { fieldValueReset } from 'features/nodes/store/nodesSlice';
import { selectWorkflowSlice } from 'features/nodes/store/workflowSlice';
import { isEqual } from 'lodash-es';
import { useCallback, useMemo } from 'react';
export const useFieldOriginalValue = (nodeId: string, fieldName: string) => {
const dispatch = useAppDispatch();
const selectOriginalExposedFieldValues = useMemo(
() =>
createSelector(
selectWorkflowSlice,
(workflow) =>
workflow.originalExposedFieldValues.find((v) => v.nodeId === nodeId && v.fieldName === fieldName)?.value
),
[nodeId, fieldName]
);
const originalValue = useAppSelector(selectOriginalExposedFieldValues);
const value = useFieldValue(nodeId, fieldName);
const isValueChanged = useMemo(() => !isEqual(value, originalValue), [value, originalValue]);
const onReset = useCallback(() => {
dispatch(fieldValueReset({ nodeId, fieldName, value: originalValue }));
}, [dispatch, fieldName, nodeId, originalValue]);
return { originalValue, isValueChanged, onReset };
};
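
A hypothetical consumer of useFieldOriginalValue — a reset control beside an exposed field. The component shell and icon choice are illustrative assumptions, not code from this changeset:

import { IconButton } from '@invoke-ai/ui-library';
import { useFieldOriginalValue } from 'features/nodes/hooks/useFieldOriginalValue';
import { PiArrowCounterClockwiseBold } from 'react-icons/pi';

// Renders nothing while the field still holds its original value.
const ResetFieldButton = ({ nodeId, fieldName }: { nodeId: string; fieldName: string }) => {
  const { isValueChanged, onReset } = useFieldOriginalValue(nodeId, fieldName);
  if (!isValueChanged) {
    return null;
  }
  return <IconButton aria-label="Reset to original value" icon={<PiArrowCounterClockwiseBold />} onClick={onReset} />;
};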


@@ -0,0 +1,23 @@
import { createMemoizedSelector } from 'app/store/createMemoizedSelector';
import { useAppSelector } from 'app/store/storeHooks';
import { selectNodesSlice } from 'features/nodes/store/nodesSlice';
import { isInvocationNode } from 'features/nodes/types/invocation';
import { useMemo } from 'react';
export const useFieldValue = (nodeId: string, fieldName: string) => {
const selector = useMemo(
() =>
createMemoizedSelector(selectNodesSlice, (nodes) => {
const node = nodes.nodes.find((node) => node.id === nodeId);
if (!isInvocationNode(node)) {
return;
}
return node?.data.inputs[fieldName]?.value;
}),
[fieldName, nodeId]
);
const value = useAppSelector(selector);
return value;
};


@@ -2,7 +2,6 @@ import type { PayloadAction } from '@reduxjs/toolkit';
import { createSlice, isAnyOf } from '@reduxjs/toolkit';
import type { PersistConfig, RootState } from 'app/store/store';
import { workflowLoaded } from 'features/nodes/store/actions';
-import { nodeTemplatesBuilt } from 'features/nodes/store/nodeTemplatesSlice';
import { SHARED_NODE_PROPERTIES } from 'features/nodes/types/constants';
import type {
BoardFieldValue,
@@ -19,6 +18,7 @@ import type {
MainModelFieldValue,
SchedulerFieldValue,
SDXLRefinerModelFieldValue,
StatefulFieldValue,
StringFieldValue,
T2IAdapterModelFieldValue,
VAEModelFieldValue,
@@ -37,6 +37,7 @@ import {
zMainModelFieldValue,
zSchedulerFieldValue,
zSDXLRefinerModelFieldValue,
zStatefulFieldValue,
zStringFieldValue,
zT2IAdapterModelFieldValue,
zVAEModelFieldValue,
@@ -65,7 +66,6 @@ import {
SelectionMode,
updateEdge,
} from 'reactflow';
-import { receivedOpenAPISchema } from 'services/api/thunks/schema';
import {
socketGeneratorProgress,
socketInvocationComplete,
@@ -92,7 +92,6 @@ export const initialNodesState: NodesState = {
_version: 1,
nodes: [],
edges: [],
-isReady: false,
connectionStartParams: null,
connectionStartFieldType: null,
connectionMade: false,
@@ -481,6 +480,9 @@ export const nodesSlice = createSlice({
selectedEdgesChanged: (state, action: PayloadAction<string[]>) => {
state.selectedEdges = action.payload;
},
fieldValueReset: (state, action: FieldValueAction<StatefulFieldValue>) => {
fieldValueReducer(state, action, zStatefulFieldValue);
},
fieldStringValueChanged: (state, action: FieldValueAction<StringFieldValue>) => {
fieldValueReducer(state, action, zStringFieldValue);
},
@@ -677,10 +679,6 @@ export const nodesSlice = createSlice({
},
},
extraReducers: (builder) => {
-builder.addCase(receivedOpenAPISchema.pending, (state) => {
-state.isReady = false;
-});
builder.addCase(workflowLoaded, (state, action) => {
const { nodes, edges } = action.payload;
state.nodes = applyNodeChanges(
@@ -752,9 +750,6 @@ export const nodesSlice = createSlice({
});
}
});
-builder.addCase(nodeTemplatesBuilt, (state) => {
-state.isReady = true;
-});
},
});
@@ -770,6 +765,7 @@ export const {
edgesChanged,
edgesDeleted,
edgeUpdated,
fieldValueReset,
fieldBoardValueChanged,
fieldBooleanValueChanged,
fieldColorValueChanged,
@@ -844,7 +840,6 @@ export const isAnyNodeOrEdgeMutation = isAnyOf(
nodeIsOpenChanged,
nodeLabelChanged,
nodeNotesChanged,
-nodesChanged,
nodesDeleted,
nodeUseCacheChanged,
notesNodeValueChanged,
@@ -871,7 +866,6 @@ export const nodesPersistConfig: PersistConfig<NodesState> = {
'connectionStartFieldType',
'selectedNodes',
'selectedEdges',
-'isReady',
'nodesToCopy',
'edgesToCopy',
'connectionMade',


@@ -1,4 +1,4 @@
-import type { FieldType } from 'features/nodes/types/field';
+import type { FieldIdentifier, FieldType, StatefulFieldValue } from 'features/nodes/types/field';
import type {
AnyNode,
InvocationNodeEdge,
@@ -26,7 +26,6 @@ export type NodesState = {
selectedEdges: string[];
nodeExecutionStates: Record<string, NodeExecutionState>;
viewport: Viewport;
-isReady: boolean;
nodesToCopy: AnyNode[];
edgesToCopy: InvocationNodeEdge[];
isAddNodePopoverOpen: boolean;
@@ -34,9 +33,16 @@ export type NodesState = {
selectionMode: SelectionMode;
};
export type WorkflowMode = 'edit' | 'view';
export type FieldIdentifierWithValue = FieldIdentifier & {
value: StatefulFieldValue;
};
export type WorkflowsState = Omit<WorkflowV2, 'nodes' | 'edges'> & {
_version: 1;
isTouched: boolean;
mode: WorkflowMode;
originalExposedFieldValues: FieldIdentifierWithValue[];
};
export type NodeTemplatesState = {


@@ -2,11 +2,16 @@ import type { PayloadAction } from '@reduxjs/toolkit';
import { createSlice } from '@reduxjs/toolkit';
import type { PersistConfig, RootState } from 'app/store/store';
import { workflowLoaded } from 'features/nodes/store/actions';
-import { isAnyNodeOrEdgeMutation, nodeEditorReset, nodesDeleted } from 'features/nodes/store/nodesSlice';
-import type { WorkflowsState as WorkflowState } from 'features/nodes/store/types';
+import { isAnyNodeOrEdgeMutation, nodeEditorReset, nodesChanged, nodesDeleted } from 'features/nodes/store/nodesSlice';
+import type {
+FieldIdentifierWithValue,
+WorkflowMode,
+WorkflowsState as WorkflowState,
+} from 'features/nodes/store/types';
import type { FieldIdentifier } from 'features/nodes/types/field';
+import { isInvocationNode } from 'features/nodes/types/invocation';
import type { WorkflowCategory, WorkflowV2 } from 'features/nodes/types/workflow';
-import { cloneDeep, isEqual, uniqBy } from 'lodash-es';
+import { cloneDeep, isEqual, omit, uniqBy } from 'lodash-es';
export const blankWorkflow: Omit<WorkflowV2, 'nodes' | 'edges'> = {
name: '',
@@ -23,7 +28,9 @@ export const blankWorkflow: Omit<WorkflowV2, 'nodes' | 'edges'> = {
export const initialWorkflowState: WorkflowState = {
_version: 1,
-isTouched: true,
+isTouched: false,
+mode: 'view',
+originalExposedFieldValues: [],
...blankWorkflow,
};
@@ -31,15 +38,29 @@ export const workflowSlice = createSlice({
name: 'workflow',
initialState: initialWorkflowState,
reducers: {
-workflowExposedFieldAdded: (state, action: PayloadAction<FieldIdentifier>) => {
+workflowModeChanged: (state, action: PayloadAction<WorkflowMode>) => {
+state.mode = action.payload;
+},
+workflowExposedFieldAdded: (state, action: PayloadAction<FieldIdentifierWithValue>) => {
state.exposedFields = uniqBy(
-state.exposedFields.concat(action.payload),
+state.exposedFields.concat(omit(action.payload, 'value')),
(field) => `${field.nodeId}-${field.fieldName}`
);
+state.originalExposedFieldValues = uniqBy(
+state.originalExposedFieldValues.concat(action.payload),
+(field) => `${field.nodeId}-${field.fieldName}`
+);
state.isTouched = true;
},
workflowExposedFieldRemoved: (state, action: PayloadAction<FieldIdentifier>) => {
state.exposedFields = state.exposedFields.filter((field) => !isEqual(field, action.payload));
state.originalExposedFieldValues = state.originalExposedFieldValues.filter(
(field) => !isEqual(omit(field, 'value'), action.payload)
);
state.isTouched = true;
},
workflowExposedFieldsReordered: (state, action: PayloadAction<FieldIdentifier[]>) => {
state.exposedFields = action.payload;
state.isTouched = true;
},
workflowNameChanged: (state, action: PayloadAction<string>) => {
@@ -78,15 +99,43 @@ export const workflowSlice = createSlice({
workflowIDChanged: (state, action: PayloadAction<string>) => {
state.id = action.payload;
},
-workflowReset: () => cloneDeep(initialWorkflowState),
workflowSaved: (state) => {
state.isTouched = false;
},
},
extraReducers: (builder) => {
builder.addCase(workflowLoaded, (state, action) => {
-const { nodes: _nodes, edges: _edges, ...workflowExtra } = action.payload;
-return { ...initialWorkflowState, ...cloneDeep(workflowExtra) };
+const { nodes, edges: _edges, ...workflowExtra } = action.payload;
+const originalExposedFieldValues: FieldIdentifierWithValue[] = [];
+workflowExtra.exposedFields.forEach((field) => {
+const node = nodes.find((n) => n.id === field.nodeId);
+if (!isInvocationNode(node)) {
+return;
+}
+const input = node.data.inputs[field.fieldName];
+if (!input) {
+return;
+}
+const originalExposedFieldValue = {
+nodeId: field.nodeId,
+fieldName: field.fieldName,
+value: input.value,
+};
+originalExposedFieldValues.push(originalExposedFieldValue);
+});
+return {
+...cloneDeep(initialWorkflowState),
+...cloneDeep(workflowExtra),
+originalExposedFieldValues,
+mode: state.mode,
+};
});
builder.addCase(nodesDeleted, (state, action) => {
@@ -97,6 +146,29 @@ export const workflowSlice = createSlice({
builder.addCase(nodeEditorReset, () => cloneDeep(initialWorkflowState));
builder.addCase(nodesChanged, (state, action) => {
// Not all changes to nodes should result in the workflow being marked touched
const filteredChanges = action.payload.filter((change) => {
// We always want to mark the workflow as touched if a node is added, removed, or reset
if (['add', 'remove', 'reset'].includes(change.type)) {
return true;
}
// Position changes can change the position and the dragging status of the node - ignore if the change doesn't
// affect the position
if (change.type === 'position' && (change.position || change.positionAbsolute)) {
return true;
}
// This change isn't relevant
return false;
});
if (filteredChanges.length > 0) {
state.isTouched = true;
}
});
builder.addMatcher(isAnyNodeOrEdgeMutation, (state) => {
state.isTouched = true;
});
@@ -104,8 +176,10 @@ export const workflowSlice = createSlice({
});
export const {
workflowModeChanged,
workflowExposedFieldAdded,
workflowExposedFieldRemoved,
workflowExposedFieldsReordered,
workflowNameChanged,
workflowCategoryChanged,
workflowDescriptionChanged,
@@ -115,7 +189,6 @@ export const {
workflowVersionChanged,
workflowContactChanged,
workflowIDChanged,
-workflowReset,
workflowSaved,
} = workflowSlice.actions;
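
The new nodesChanged case filters reactflow's NodeChange events so that selection and dimension churn no longer mark the workflow as touched. The same predicate, extracted into a standalone function for clarity (NodeChange is reactflow's discriminated change union; position changes that carry no position are drag start/stop events):

import type { NodeChange } from 'reactflow';

// True when a change should dirty the workflow.
function isTouchingChange(change: NodeChange): boolean {
  // Structural changes always count.
  if (change.type === 'add' || change.type === 'remove' || change.type === 'reset') {
    return true;
  }
  // Position changes count only when a position is actually present.
  if (change.type === 'position') {
    return Boolean(change.position || change.positionAbsolute);
  }
  // Selection, dimension, and other changes are not relevant.
  return false;
}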


@@ -23,6 +23,7 @@ import {
NOISE,
NOISE_HRF,
RESIZE_HRF,
SEAMLESS,
VAE_LOADER,
} from './constants';
import { setMetadataReceivingNode, upsertMetadata } from './metadata';
@@ -30,7 +31,6 @@ import { setMetadataReceivingNode, upsertMetadata } from './metadata';
// Copy certain connections from previous DENOISE_LATENTS to new DENOISE_LATENTS_HRF.
function copyConnectionsToDenoiseLatentsHrf(graph: NonNullableGraph): void {
const destinationFields = [
-'vae',
'control',
'ip_adapter',
'metadata',
@@ -107,9 +107,10 @@ export const addHrfToGraph = (state: RootState, graph: NonNullableGraph): void =
}
const log = logger('txt2img');
-const { vae } = state.generation;
+const { vae, seamlessXAxis, seamlessYAxis } = state.generation;
const { hrfStrength, hrfEnabled, hrfMethod } = state.hrf;
const isAutoVae = !vae;
+const isSeamlessEnabled = seamlessXAxis || seamlessYAxis;
const width = state.generation.width;
const height = state.generation.height;
const optimalDimension = selectOptimalDimension(state);
@@ -158,7 +159,7 @@ export const addHrfToGraph = (state: RootState, graph: NonNullableGraph): void =
},
{
source: {
-node_id: isAutoVae ? MAIN_MODEL_LOADER : VAE_LOADER,
+node_id: isSeamlessEnabled ? SEAMLESS : isAutoVae ? MAIN_MODEL_LOADER : VAE_LOADER,
field: 'vae',
},
destination: {
@@ -259,7 +260,7 @@ export const addHrfToGraph = (state: RootState, graph: NonNullableGraph): void =
graph.edges.push(
{
source: {
-node_id: isAutoVae ? MAIN_MODEL_LOADER : VAE_LOADER,
+node_id: isSeamlessEnabled ? SEAMLESS : isAutoVae ? MAIN_MODEL_LOADER : VAE_LOADER,
field: 'vae',
},
destination: {
@@ -322,7 +323,7 @@ export const addHrfToGraph = (state: RootState, graph: NonNullableGraph): void =
graph.edges.push(
{
source: {
-node_id: isAutoVae ? MAIN_MODEL_LOADER : VAE_LOADER,
+node_id: isSeamlessEnabled ? SEAMLESS : isAutoVae ? MAIN_MODEL_LOADER : VAE_LOADER,
field: 'vae',
},
destination: {
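
The same three-way source selection recurs in the seamless and VAE graph builders below. A refactoring sketch, not code from these commits, that states the precedence once — when seamless is enabled the SEAMLESS node re-exports the VAE, otherwise an explicit VAE wins over the model loader's built-in one:

import { SEAMLESS, VAE_LOADER } from './constants';

// Hypothetical helper expressing the shared precedence rule.
function getVaeSourceNodeId(isSeamlessEnabled: boolean, isAutoVae: boolean, modelLoaderNodeId: string): string {
  if (isSeamlessEnabled) {
    return SEAMLESS;
  }
  return isAutoVae ? modelLoaderNodeId : VAE_LOADER;
}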


@@ -14,6 +14,7 @@ import {
SDXL_IMAGE_TO_IMAGE_GRAPH,
SDXL_TEXT_TO_IMAGE_GRAPH,
SEAMLESS,
VAE_LOADER,
} from './constants';
import { upsertMetadata } from './metadata';
@@ -23,7 +24,8 @@ export const addSeamlessToLinearGraph = (
modelLoaderNodeId: string
): void => {
// Remove Existing UNet Connections
-const { seamlessXAxis, seamlessYAxis } = state.generation;
+const { seamlessXAxis, seamlessYAxis, vae } = state.generation;
+const isAutoVae = !vae;
graph.nodes[SEAMLESS] = {
id: SEAMLESS,
@@ -32,6 +34,15 @@ export const addSeamlessToLinearGraph = (
seamless_y: seamlessYAxis,
} as SeamlessModeInvocation;
if (!isAutoVae) {
graph.nodes[VAE_LOADER] = {
type: 'vae_loader',
id: VAE_LOADER,
is_intermediate: true,
vae_model: vae,
};
}
if (seamlessXAxis) {
upsertMetadata(graph, {
seamless_x: seamlessXAxis,
@@ -75,7 +86,7 @@ export const addSeamlessToLinearGraph = (
},
{
source: {
-node_id: modelLoaderNodeId,
+node_id: isAutoVae ? modelLoaderNodeId : VAE_LOADER,
field: 'vae',
},
destination: {


@@ -21,6 +21,7 @@ import {
SDXL_IMAGE_TO_IMAGE_GRAPH,
SDXL_REFINER_INPAINT_CREATE_MASK,
SDXL_TEXT_TO_IMAGE_GRAPH,
SEAMLESS,
TEXT_TO_IMAGE_GRAPH,
VAE_LOADER,
} from './constants';
@@ -31,15 +32,16 @@ export const addVAEToGraph = (
graph: NonNullableGraph,
modelLoaderNodeId: string = MAIN_MODEL_LOADER
): void => {
-const { vae, canvasCoherenceMode } = state.generation;
+const { vae, canvasCoherenceMode, seamlessXAxis, seamlessYAxis } = state.generation;
const { boundingBoxScaleMethod } = state.canvas;
const { refinerModel } = state.sdxl;
const isUsingScaledDimensions = ['auto', 'manual'].includes(boundingBoxScaleMethod);
const isAutoVae = !vae;
+const isSeamlessEnabled = seamlessXAxis || seamlessYAxis;
-if (!isAutoVae) {
+if (!isAutoVae && !isSeamlessEnabled) {
graph.nodes[VAE_LOADER] = {
type: 'vae_loader',
id: VAE_LOADER,
@@ -56,7 +58,7 @@ export const addVAEToGraph = (
) {
graph.edges.push({
source: {
-node_id: isAutoVae ? modelLoaderNodeId : VAE_LOADER,
+node_id: isSeamlessEnabled ? SEAMLESS : isAutoVae ? modelLoaderNodeId : VAE_LOADER,
field: 'vae',
},
destination: {
@@ -74,7 +76,7 @@ export const addVAEToGraph = (
) {
graph.edges.push({
source: {
-node_id: isAutoVae ? modelLoaderNodeId : VAE_LOADER,
+node_id: isSeamlessEnabled ? SEAMLESS : isAutoVae ? modelLoaderNodeId : VAE_LOADER,
field: 'vae',
},
destination: {
@@ -92,7 +94,7 @@ export const addVAEToGraph = (
) {
graph.edges.push({
source: {
-node_id: isAutoVae ? modelLoaderNodeId : VAE_LOADER,
+node_id: isSeamlessEnabled ? SEAMLESS : isAutoVae ? modelLoaderNodeId : VAE_LOADER,
field: 'vae',
},
destination: {
@@ -111,7 +113,7 @@ export const addVAEToGraph = (
graph.edges.push(
{
source: {
-node_id: isAutoVae ? modelLoaderNodeId : VAE_LOADER,
+node_id: isSeamlessEnabled ? SEAMLESS : isAutoVae ? modelLoaderNodeId : VAE_LOADER,
field: 'vae',
},
destination: {
@@ -121,7 +123,7 @@ export const addVAEToGraph = (
},
{
source: {
-node_id: isAutoVae ? modelLoaderNodeId : VAE_LOADER,
+node_id: isSeamlessEnabled ? SEAMLESS : isAutoVae ? modelLoaderNodeId : VAE_LOADER,
field: 'vae',
},
destination: {
@@ -131,7 +133,7 @@ export const addVAEToGraph = (
},
{
source: {
-node_id: isAutoVae ? modelLoaderNodeId : VAE_LOADER,
+node_id: isSeamlessEnabled ? SEAMLESS : isAutoVae ? modelLoaderNodeId : VAE_LOADER,
field: 'vae',
},
destination: {
@@ -145,7 +147,7 @@ export const addVAEToGraph = (
if (canvasCoherenceMode !== 'unmasked') {
graph.edges.push({
source: {
-node_id: isAutoVae ? modelLoaderNodeId : VAE_LOADER,
+node_id: isSeamlessEnabled ? SEAMLESS : isAutoVae ? modelLoaderNodeId : VAE_LOADER,
field: 'vae',
},
destination: {
@@ -160,7 +162,7 @@ export const addVAEToGraph = (
if (graph.id === SDXL_CANVAS_INPAINT_GRAPH || graph.id === SDXL_CANVAS_OUTPAINT_GRAPH) {
graph.edges.push({
source: {
-node_id: isAutoVae ? modelLoaderNodeId : VAE_LOADER,
+node_id: isSeamlessEnabled ? SEAMLESS : isAutoVae ? modelLoaderNodeId : VAE_LOADER,
field: 'vae',
},
destination: {


@@ -101,9 +101,11 @@ const SettingsModal = ({ children, config }: SettingsModalProps) => {
const clearStorage = useClearStorage();
const handleOpenSettingsModel = useCallback(() => {
-refetchIntermediatesCount();
+if (shouldShowClearIntermediates) {
+refetchIntermediatesCount();
+}
_onSettingsModalOpen();
-}, [_onSettingsModalOpen, refetchIntermediatesCount]);
+}, [_onSettingsModalOpen, refetchIntermediatesCount, shouldShowClearIntermediates]);
const handleClickResetWebUI = useCallback(() => {
clearStorage();


@@ -1,13 +1,28 @@
+import { Box, Flex } from '@invoke-ai/ui-library';
+import { useAppSelector } from 'app/store/storeHooks';
+import CurrentImageDisplay from 'features/gallery/components/CurrentImage/CurrentImageDisplay';
import NodeEditor from 'features/nodes/components/NodeEditor';
import { memo } from 'react';
import { ReactFlowProvider } from 'reactflow';
const NodesTab = () => {
-return (
-<ReactFlowProvider>
-<NodeEditor />
-</ReactFlowProvider>
-);
+const mode = useAppSelector((s) => s.workflow.mode);
+if (mode === 'edit') {
+return (
+<ReactFlowProvider>
+<NodeEditor />
+</ReactFlowProvider>
+);
+} else {
+return (
+<Box layerStyle="first" position="relative" w="full" h="full" p={2} borderRadius="base">
+<Flex w="full" h="full">
+<CurrentImageDisplay />
+</Flex>
+</Box>
+);
+}
};
export default memo(NodesTab);
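
The tab now renders either the full editor or a plain image view depending on workflow.mode; switching is a single dispatch of the workflowModeChanged action added in the workflowSlice diff above. A hypothetical trigger:

import { Button } from '@invoke-ai/ui-library';
import { useAppDispatch } from 'app/store/storeHooks';
import { workflowModeChanged } from 'features/nodes/store/workflowSlice';
import { useCallback } from 'react';

// Illustrative only: flips the Nodes tab into edit mode.
const EditWorkflowButton = () => {
  const dispatch = useAppDispatch();
  const onClick = useCallback(() => dispatch(workflowModeChanged('edit')), [dispatch]);
  return <Button onClick={onClick}>Edit</Button>;
};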


@@ -0,0 +1,26 @@
import { IconButton } from '@invoke-ai/ui-library';
import { NewWorkflowConfirmationAlertDialog } from 'features/workflowLibrary/components/NewWorkflowConfirmationAlertDialog';
import { memo, useCallback } from 'react';
import { useTranslation } from 'react-i18next';
import { PiFilePlusBold } from 'react-icons/pi';
export const NewWorkflowButton = memo(() => {
const { t } = useTranslation();
const renderButton = useCallback(
(onClick: () => void) => (
<IconButton
aria-label={t('nodes.newWorkflow')}
tooltip={t('nodes.newWorkflow')}
icon={<PiFilePlusBold />}
onClick={onClick}
pointerEvents="auto"
/>
),
[t]
);
return <NewWorkflowConfirmationAlertDialog renderButton={renderButton} />;
});
NewWorkflowButton.displayName = 'NewWorkflowButton';
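
NewWorkflowButton holds no confirmation logic of its own: the dialog owns the touched-state check and exposes the trigger via a render prop, so any trigger component can be slotted in. A hypothetical text-button variant to illustrate the contract:

import { Button } from '@invoke-ai/ui-library';
import { NewWorkflowConfirmationAlertDialog } from 'features/workflowLibrary/components/NewWorkflowConfirmationAlertDialog';

// Same dialog and confirmation behaviour, different trigger.
export const NewWorkflowTextButton = () => (
  <NewWorkflowConfirmationAlertDialog renderButton={(onClick) => <Button onClick={onClick}>New Workflow</Button>} />
);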


@@ -0,0 +1,63 @@
import { ConfirmationAlertDialog, Flex, Text, useDisclosure } from '@invoke-ai/ui-library';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { nodeEditorReset } from 'features/nodes/store/nodesSlice';
import { workflowModeChanged } from 'features/nodes/store/workflowSlice';
import { addToast } from 'features/system/store/systemSlice';
import { makeToast } from 'features/system/util/makeToast';
import { memo, useCallback } from 'react';
import { useTranslation } from 'react-i18next';
type Props = {
renderButton: (onClick: () => void) => JSX.Element;
};
export const NewWorkflowConfirmationAlertDialog = memo((props: Props) => {
const { t } = useTranslation();
const dispatch = useAppDispatch();
const { isOpen, onOpen, onClose } = useDisclosure();
const isTouched = useAppSelector((s) => s.workflow.isTouched);
const handleNewWorkflow = useCallback(() => {
dispatch(nodeEditorReset());
dispatch(workflowModeChanged('edit'));
dispatch(
addToast(
makeToast({
title: t('workflows.newWorkflowCreated'),
status: 'success',
})
)
);
onClose();
}, [dispatch, onClose, t]);
const onClick = useCallback(() => {
if (!isTouched) {
handleNewWorkflow();
return;
}
onOpen();
}, [handleNewWorkflow, isTouched, onOpen]);
return (
<>
{props.renderButton(onClick)}
<ConfirmationAlertDialog
isOpen={isOpen}
onClose={onClose}
title={t('nodes.newWorkflow')}
acceptCallback={handleNewWorkflow}
>
<Flex flexDir="column" gap={2}>
<Text>{t('nodes.newWorkflowDesc')}</Text>
<Text variant="subtext">{t('nodes.newWorkflowDesc2')}</Text>
</Flex>
</ConfirmationAlertDialog>
</>
);
});
NewWorkflowConfirmationAlertDialog.displayName = 'NewWorkflowConfirmationAlertDialog';


@@ -0,0 +1,47 @@
import { Button } from '@invoke-ai/ui-library';
import { useWorkflowLibraryModalContext } from 'features/workflowLibrary/context/useWorkflowLibraryModalContext';
import { useLoadWorkflowFromFile } from 'features/workflowLibrary/hooks/useLoadWorkflowFromFile';
import { memo, useCallback, useRef } from 'react';
import { useDropzone } from 'react-dropzone';
import { useTranslation } from 'react-i18next';
import { PiUploadSimpleBold } from 'react-icons/pi';
const UploadWorkflowButton = () => {
const { t } = useTranslation();
const resetRef = useRef<() => void>(null);
const { onClose } = useWorkflowLibraryModalContext();
const loadWorkflowFromFile = useLoadWorkflowFromFile({ resetRef, onSuccess: onClose });
const onDropAccepted = useCallback(
(files: File[]) => {
if (!files[0]) {
return;
}
loadWorkflowFromFile(files[0]);
},
[loadWorkflowFromFile]
);
const { getInputProps, getRootProps } = useDropzone({
accept: { 'application/json': ['.json'] },
onDropAccepted,
noDrag: true,
multiple: false,
});
return (
<>
<Button
aria-label={t('workflows.uploadWorkflow')}
tooltip={t('workflows.uploadWorkflow')}
leftIcon={<PiUploadSimpleBold />}
{...getRootProps()}
pointerEvents="auto"
>
{t('workflows.uploadWorkflow')}
</Button>
<input {...getInputProps()} />
</>
);
};
export default memo(UploadWorkflowButton);


@@ -2,7 +2,7 @@ import { IconButton, useDisclosure } from '@invoke-ai/ui-library';
import { WorkflowLibraryModalContext } from 'features/workflowLibrary/context/WorkflowLibraryModalContext';
import { memo } from 'react';
import { useTranslation } from 'react-i18next';
-import { PiBooksBold } from 'react-icons/pi';
+import { PiFolderOpenBold } from 'react-icons/pi';
import WorkflowLibraryModal from './WorkflowLibraryModal';
@@ -15,7 +15,7 @@ const WorkflowLibraryButton = () => {
<IconButton
aria-label={t('workflows.workflowLibrary')}
tooltip={t('workflows.workflowLibrary')}
-icon={<PiBooksBold />}
+icon={<PiFolderOpenBold />}
onClick={disclosure.onOpen}
pointerEvents="auto"
/>


@@ -1,5 +1,6 @@
import type { ComboboxOnChange, ComboboxOption } from '@invoke-ai/ui-library';
import {
Box,
Button,
ButtonGroup,
Combobox,
@@ -29,6 +30,8 @@ import type { SQLiteDirection, WorkflowRecordOrderBy } from 'services/api/types'
import { useDebounce } from 'use-debounce';
import { z } from 'zod';
import UploadWorkflowButton from './UploadWorkflowButton';
const PER_PAGE = 10;
const zOrderBy = z.enum(['opened_at', 'created_at', 'updated_at', 'name']);
@@ -221,11 +224,16 @@ const WorkflowLibraryList = () => {
<IAINoContentFallback label={t('workflows.noWorkflows')} />
)}
<Divider />
-{data && (
-<Flex w="full" justifyContent="space-around">
-<WorkflowLibraryPagination data={data} page={page} setPage={setPage} />
-</Flex>
-)}
+<Flex w="full">
+<Box flex="1">
+<UploadWorkflowButton />
+</Box>
+<Box flex="1" textAlign="center">
+{data && <WorkflowLibraryPagination data={data} page={page} setPage={setPage} />}
+</Box>
+<Box flex="1"></Box>
+</Flex>
</>
);
};


@@ -1,60 +1,22 @@
-import { ConfirmationAlertDialog, Flex, MenuItem, Text, useDisclosure } from '@invoke-ai/ui-library';
-import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
-import { nodeEditorReset } from 'features/nodes/store/nodesSlice';
-import { addToast } from 'features/system/store/systemSlice';
-import { makeToast } from 'features/system/util/makeToast';
+import { MenuItem } from '@invoke-ai/ui-library';
+import { NewWorkflowConfirmationAlertDialog } from 'features/workflowLibrary/components/NewWorkflowConfirmationAlertDialog';
import { memo, useCallback } from 'react';
import { useTranslation } from 'react-i18next';
-import { PiFlowArrowBold } from 'react-icons/pi';
+import { PiFilePlusBold } from 'react-icons/pi';
-const NewWorkflowMenuItem = () => {
+export const NewWorkflowMenuItem = memo(() => {
const { t } = useTranslation();
-const dispatch = useAppDispatch();
-const { isOpen, onOpen, onClose } = useDisclosure();
-const isTouched = useAppSelector((s) => s.workflow.isTouched);
-const handleNewWorkflow = useCallback(() => {
-dispatch(nodeEditorReset());
-dispatch(
-addToast(
-makeToast({
-title: t('workflows.newWorkflowCreated'),
-status: 'success',
-})
-)
-);
-onClose();
-}, [dispatch, onClose, t]);
-const onClick = useCallback(() => {
-if (!isTouched) {
-handleNewWorkflow();
-return;
-}
-onOpen();
-}, [handleNewWorkflow, isTouched, onOpen]);
-return (
-<>
-<MenuItem as="button" icon={<PiFlowArrowBold />} onClick={onClick}>
+const renderButton = useCallback(
+(onClick: () => void) => (
+<MenuItem as="button" icon={<PiFilePlusBold />} onClick={onClick}>
{t('nodes.newWorkflow')}
</MenuItem>
-<ConfirmationAlertDialog
-isOpen={isOpen}
-onClose={onClose}
-title={t('nodes.newWorkflow')}
-acceptCallback={handleNewWorkflow}
->
-<Flex flexDir="column" gap={2}>
-<Text>{t('nodes.newWorkflowDesc')}</Text>
-<Text variant="subtext">{t('nodes.newWorkflowDesc2')}</Text>
-</Flex>
-</ConfirmationAlertDialog>
-</>
+),
+[t]
);
-};
-export default memo(NewWorkflowMenuItem);
+return <NewWorkflowConfirmationAlertDialog renderButton={renderButton} />;
+});
+NewWorkflowMenuItem.displayName = 'NewWorkflowMenuItem';


@@ -8,7 +8,7 @@ import {
useGlobalMenuClose,
} from '@invoke-ai/ui-library';
import DownloadWorkflowMenuItem from 'features/workflowLibrary/components/WorkflowLibraryMenu/DownloadWorkflowMenuItem';
-import NewWorkflowMenuItem from 'features/workflowLibrary/components/WorkflowLibraryMenu/NewWorkflowMenuItem';
+import { NewWorkflowMenuItem } from 'features/workflowLibrary/components/WorkflowLibraryMenu/NewWorkflowMenuItem';
import SaveWorkflowAsMenuItem from 'features/workflowLibrary/components/WorkflowLibraryMenu/SaveWorkflowAsMenuItem';
import SaveWorkflowMenuItem from 'features/workflowLibrary/components/WorkflowLibraryMenu/SaveWorkflowMenuItem';
import SettingsMenuItem from 'features/workflowLibrary/components/WorkflowLibraryMenu/SettingsMenuItem';


@@ -10,11 +10,12 @@ import { useTranslation } from 'react-i18next';
type useLoadWorkflowFromFileOptions = {
resetRef: RefObject<() => void>;
onSuccess?: () => void;
};
type UseLoadWorkflowFromFile = (options: useLoadWorkflowFromFileOptions) => (file: File | null) => void;
-export const useLoadWorkflowFromFile: UseLoadWorkflowFromFile = ({ resetRef }) => {
+export const useLoadWorkflowFromFile: UseLoadWorkflowFromFile = ({ resetRef, onSuccess }) => {
const dispatch = useAppDispatch();
const logger = useLogger('nodes');
const { t } = useTranslation();
@@ -31,6 +32,7 @@ export const useLoadWorkflowFromFile: UseLoadWorkflowFromFile = ({ resetRef }) =
const parsedJSON = JSON.parse(String(rawJSON));
dispatch(workflowLoadRequested({ workflow: parsedJSON, asCopy: true }));
dispatch(workflowLoadedFromFile());
onSuccess && onSuccess();
} catch (e) {
// There was a problem reading the file
logger.error(t('nodes.unableToLoadWorkflow'));
@@ -51,7 +53,7 @@ export const useLoadWorkflowFromFile: UseLoadWorkflowFromFile = ({ resetRef }) =
// Reset the file picker internal state so that the same file can be loaded again
resetRef.current?.();
},
-[dispatch, logger, resetRef, t]
+[dispatch, logger, resetRef, t, onSuccess]
);
return loadWorkflowFromFile;
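
A hypothetical caller wiring the hook to a plain file input; in the app, UploadWorkflowButton above feeds it files from react-dropzone instead:

import { useLoadWorkflowFromFile } from 'features/workflowLibrary/hooks/useLoadWorkflowFromFile';
import { useRef } from 'react';

const LoadWorkflowFileInput = () => {
  // The hook calls resetRef.current?.() after each attempt so the same file
  // can be selected twice in a row.
  const resetRef = useRef<() => void>(null);
  const loadWorkflowFromFile = useLoadWorkflowFromFile({ resetRef, onSuccess: () => undefined });
  return <input type="file" accept=".json" onChange={(e) => loadWorkflowFromFile(e.target.files?.[0] ?? null)} />;
};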


@@ -1,3 +1,5 @@
import { $openAPISchemaUrl } from 'app/store/nanostores/openAPISchemaUrl';
import type { OpenAPIV3_1 } from 'openapi-types';
import type { paths } from 'services/api/schema';
import type { AppConfig, AppDependencyVersions, AppVersion } from 'services/api/types';
@@ -57,6 +59,14 @@ export const appInfoApi = api.injectEndpoints({
}),
invalidatesTags: ['InvocationCacheStatus'],
}),
getOpenAPISchema: build.query<OpenAPIV3_1.Document, void>({
query: () => {
const openAPISchemaUrl = $openAPISchemaUrl.get();
const url = openAPISchemaUrl ? openAPISchemaUrl : `${window.location.href.replace(/\/$/, '')}/openapi.json`;
return url;
},
providesTags: ['Schema'],
}),
}),
});
@@ -68,4 +78,6 @@ export const {
useDisableInvocationCacheMutation,
useEnableInvocationCacheMutation,
useGetInvocationCacheStatusQuery,
useGetOpenAPISchemaQuery,
useLazyGetOpenAPISchemaQuery,
} = appInfoApi;
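
Because the query builds its URL at request time from the $openAPISchemaUrl nanostore, an embedding application can redirect the schema fetch before the first query fires; when the atom is unset, the URL falls back to the current origin's /openapi.json. A sketch, assuming the standard nanostores atom API (the URL itself is a placeholder):

import { $openAPISchemaUrl } from 'app/store/nanostores/openAPISchemaUrl';

// Host-app override, set during startup before any component mounts.
$openAPISchemaUrl.set('https://example.com/api/openapi.json');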


@@ -1,3 +1,4 @@
import type { FetchBaseQueryArgs } from '@reduxjs/toolkit/dist/query/fetchBaseQuery';
import type { BaseQueryFn, FetchArgs, FetchBaseQueryError, TagDescription } from '@reduxjs/toolkit/query/react';
import { createApi, fetchBaseQuery } from '@reduxjs/toolkit/query/react';
import { $authToken } from 'app/store/nanostores/authToken';
@@ -35,6 +36,7 @@ export const tagTypes = [
'SDXLRefinerModel',
'Workflow',
'WorkflowsRecent',
'Schema',
// This is invalidated on reconnect. It should be used for queries that have changing data,
// especially related to the queue and generation.
'FetchOnReconnect',
@@ -50,10 +52,22 @@ const dynamicBaseQuery: BaseQueryFn<string | FetchArgs, unknown, FetchBaseQueryE
const baseUrl = $baseUrl.get();
const authToken = $authToken.get();
const projectId = $projectId.get();
+const isOpenAPIRequest =
+(args instanceof Object && args.url.includes('openapi.json')) ||
+(typeof args === 'string' && args.includes('openapi.json'));
-const rawBaseQuery = fetchBaseQuery({
+const fetchBaseQueryArgs: FetchBaseQueryArgs = {
baseUrl: baseUrl ? `${baseUrl}/api/v1` : `${window.location.href.replace(/\/$/, '')}/api/v1`,
-prepareHeaders: (headers) => {
+};
+// When fetching the openapi.json, we need to remove circular references from the JSON.
+if (isOpenAPIRequest) {
+fetchBaseQueryArgs.jsonReplacer = getCircularReplacer();
+}
+// openapi.json isn't protected by authorization, but all other requests need to include the auth token and project id.
+if (!isOpenAPIRequest) {
+fetchBaseQueryArgs.prepareHeaders = (headers) => {
if (authToken) {
headers.set('Authorization', `Bearer ${authToken}`);
}
@@ -62,8 +76,10 @@ const dynamicBaseQuery: BaseQueryFn<string | FetchArgs, unknown, FetchBaseQueryE
}
return headers;
},
-});
+};
+}
+const rawBaseQuery = fetchBaseQuery(fetchBaseQueryArgs);
return rawBaseQuery(args, api, extraOptions);
};
@@ -74,3 +90,25 @@ export const api = createApi({
tagTypes,
endpoints: () => ({}),
});
function getCircularReplacer() {
// eslint-disable-next-line @typescript-eslint/no-explicit-any
const ancestors: Record<string, any>[] = [];
// eslint-disable-next-line @typescript-eslint/no-explicit-any
return function (key: string, value: any) {
if (typeof value !== 'object' || value === null) {
return value;
}
// `this` is the object that value is contained in, i.e., its direct parent.
// eslint-disable-next-line @typescript-eslint/ban-ts-comment
// @ts-ignore don't think it's possible to not have TS complain about this...
while (ancestors.length > 0 && ancestors.at(-1) !== this) {
ancestors.pop();
}
if (ancestors.includes(value)) {
return '[Circular]';
}
ancestors.push(value);
return value;
};
}
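
getCircularReplacer is a plain JSON.stringify replacer: it tracks the ancestor chain through the replacer's `this` binding and substitutes '[Circular]' for any value that is its own ancestor. A quick demonstration of why the OpenAPI document needs it (run inside this module, since the helper is not exported):

// Without the replacer, JSON.stringify throws
// "TypeError: Converting circular structure to JSON".
const selfReferencing: { name: string; parent?: unknown } = { name: 'child' };
selfReferencing.parent = selfReferencing;

JSON.stringify(selfReferencing, getCircularReplacer());
// => '{"name":"child","parent":"[Circular]"}'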

File diff suppressed because one or more lines are too long


@@ -1,40 +0,0 @@
import { createAsyncThunk } from '@reduxjs/toolkit';
import { $openAPISchemaUrl } from 'app/store/nanostores/openAPISchemaUrl';
function getCircularReplacer() {
// eslint-disable-next-line @typescript-eslint/no-explicit-any
const ancestors: Record<string, any>[] = [];
// eslint-disable-next-line @typescript-eslint/no-explicit-any
return function (key: string, value: any) {
if (typeof value !== 'object' || value === null) {
return value;
}
// `this` is the object that value is contained in, i.e., its direct parent.
// eslint-disable-next-line @typescript-eslint/ban-ts-comment
// @ts-ignore don't think it's possible to not have TS complain about this...
while (ancestors.length > 0 && ancestors.at(-1) !== this) {
ancestors.pop();
}
if (ancestors.includes(value)) {
return '[Circular]';
}
ancestors.push(value);
return value;
};
}
export const receivedOpenAPISchema = createAsyncThunk('nodes/receivedOpenAPISchema', async (_, { rejectWithValue }) => {
try {
const openAPISchemaUrl = $openAPISchemaUrl.get();
const url = openAPISchemaUrl ? openAPISchemaUrl : `${window.location.href.replace(/\/$/, '')}/openapi.json`;
const response = await fetch(url);
const openAPISchema = await response.json();
const schemaJSON = JSON.parse(JSON.stringify(openAPISchema, getCircularReplacer()));
return schemaJSON;
} catch (error) {
return rejectWithValue({ error });
}
});


@@ -156,7 +156,7 @@ export type MediapipeFaceProcessorInvocation = s['MediapipeFaceProcessorInvocati
export type MidasDepthImageProcessorInvocation = s['MidasDepthImageProcessorInvocation'];
export type MlsdImageProcessorInvocation = s['MlsdImageProcessorInvocation'];
export type NormalbaeImageProcessorInvocation = s['NormalbaeImageProcessorInvocation'];
-export type OpenposeImageProcessorInvocation = s['OpenposeImageProcessorInvocation'];
+export type DWOpenposeImageProcessorInvocation = s['DWOpenposeImageProcessorInvocation'];
export type PidiImageProcessorInvocation = s['PidiImageProcessorInvocation'];
export type ZoeDepthImageProcessorInvocation = s['ZoeDepthImageProcessorInvocation'];


@@ -1 +1 @@
-__version__ = "3.6.3"
+__version__ = "3.7.0"


@@ -33,11 +33,11 @@ classifiers = [
]
dependencies = [
# Core generation dependencies, pinned for reproducible builds.
"accelerate==0.26.1",
"accelerate==0.27.2",
"clip_anytorch==2.5.2", # replacing "clip @ https://github.com/openai/CLIP/archive/eaa22acb90a5876642d0507623e859909230a52d.zip",
"compel==2.0.2",
"controlnet-aux==0.0.7",
"diffusers[torch]==0.26.2",
"diffusers[torch]==0.26.3",
"invisible-watermark==0.2.0", # needed to install SDXL base and refiner using their repo_ids
"mediapipe==0.10.7", # needed for "mediapipeface" controlnet model
"numpy==1.26.4", # >1.24.0 is needed to use the 'strict' argument to np.testing.assert_array_equal()