Compare commits

..

42 Commits

Author SHA1 Message Date
Millun Atluri
b5e018972f Release/v3.4.0post2 (#5139)
## What type of PR is this? (check all applicable)

3.4.0post3

## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
N/A

## Description
3.4.0post2 release - mainly fixes duplicate LoRA patching
2023-11-21 10:01:15 +11:00
Millun Atluri
2af844385f Updated version to 3.4.0post2 2023-11-20 18:53:04 +11:00
Millun Atluri
540047e26e Updated JS files 2023-11-20 18:48:17 +11:00
Rohinish
4d8b8a2db8 fix(ui): add missing translations (#5096)
* first string only to test

* more strings changed

* almost half strings added in json file

* more strings added

* more changes

* few strings and t function changed

* resolved

* errors resolved

* chore(ui): fmt en.json

---------

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2023-11-20 06:24:03 +00:00
Millun Atluri
d581a3289b Fix links to example workflows 2023-11-19 19:16:30 -08:00
Ryan Dick
d756c9b10a Fix double LoRA patching of the UNet. This was presumably added by accident due to a previous merge conflict. 2023-11-17 12:05:04 -08:00
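Why a duplicated patch matters: LoRA patching is additive, so applying the same patch twice doubles its contribution to the weights. A toy sketch (plain NumPy, not InvokeAI's `ModelPatcher` API) under the standard assumption that a LoRA adds `scale * (up @ down)` to a weight matrix:

```python
import numpy as np

# Toy weight matrix and a rank-1 LoRA pair; values are purely illustrative.
rng = np.random.default_rng(0)
weight = rng.standard_normal((4, 4))
up = rng.standard_normal((4, 1))
down = rng.standard_normal((1, 4))
scale = 0.75

def apply_lora(w: np.ndarray) -> np.ndarray:
    """Standard additive LoRA update: W <- W + scale * (up @ down)."""
    return w + scale * (up @ down)

patched_once = apply_lora(weight)
patched_twice = apply_lora(patched_once)  # the accidental double patch

# The second application doubles the LoRA's effect relative to the base weights.
print(np.allclose(patched_twice - weight, 2 * (patched_once - weight)))  # True
```

The actual fix, visible in the latents diff later on this page, removes the duplicated `ModelPatcher.apply_lora_unet(...)` entry so the LoRA is applied to the UNet only once.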
Alexander Eichhorn
63d3212bec translationBot(ui): update translation (German)
Currently translated at 64.4% (793 of 1231 strings)

Co-authored-by: Alexander Eichhorn <pfannkuchensack@einfach-doof.de>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
2023-11-18 05:31:37 +11:00
Millun Atluri
136ff011b2 3.4.0post1 (#5115)
## What type of PR is this? (check all applicable)

3.4.0post1


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:
2023-11-17 14:51:10 +11:00
Millun Atluri
3bc15a96d5 Update version to 3.4.0post1 2023-11-17 13:39:00 +11:00
Millun Atluri
43d5bb2038 Updated JS files 2023-11-17 13:36:50 +11:00
psychedelicious
8d39eab3a9 fix(ui): metadata error on img2img 2023-11-17 12:31:34 +11:00
Millun Atluri
62da69b3e8 Release/3.4 (#5112)
## What type of PR is this? (check all applicable)

3.4 Release Updates

## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description


## Related Tickets & Documents


## [optional] Are there any post deployment tasks we need to perform?
2023-11-17 08:34:20 +11:00
Millun Atluri
d2852c767b Bump version to 3.4.0 2023-11-17 08:22:41 +11:00
Millun Atluri
47f33f1ed1 Update JS files for 3.4 release 2023-11-17 08:21:47 +11:00
Millun Atluri
1896c6fb44 Merge remote-tracking branch 'origin/main' into release/3.4 2023-11-17 08:09:13 +11:00
Millun Atluri
47f3515745 fix(nodes,ui): fix missed/canvas temp images in gallery (#5111)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission

## Description

Resolves two bugs introduced in #5106:

1. Linear UI images sometimes didn't make it to the gallery.

This was a race condition. The VAE decode nodes were handled by the
socketInvocationComplete listener. At that moment, the image was marked
as intermediate. Immediately after this node was handled, a
LinearUIOutputInvocation, introduced in #5106, was handled by
socketInvocationComplete. This node internally changed the image to not
intermediate.

During the handling of that socketInvocationComplete, RTK Query would
sometimes use its cache instead of retrieving the image DTO again. The
result is that the UI never got the message that the image was not
intermediate, so it wasn't added to the gallery.

This is resolved by refactoring the socketInvocationComplete listener.
We now skip the gallery processing for linear UI events, except for the
LinearUIOutputInvocation. Images now always make it to the gallery, and
network requests to get image DTOs are substantially reduced.

2. Canvas temp images always went into the gallery

The LinearUIOutputInvocation was always setting its image's
is_intermediate to false. This included all canvas images and resulted
in all canvas temp images going to the gallery.

This is resolved by making LinearUIOutputInvocation set is_intermediate
based on `self.is_intermediate`. The behaviour now more or less
mirrors the behaviour of is_intermediate on other image-outputting
nodes, except it doesn't save the image again - it only changes it.

One extra minor change - LinearUIOutputInvocation only changes
is_intermediate if it differs from the image's current setting. Very
minor optimisation.
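For reference, a minimal, self-contained sketch of the behaviour described above; the helper name and stand-in types are illustrative, while the conditional-update logic mirrors the `LinearUIOutputInvocation` diff further down this page:

```python
from dataclasses import dataclass
from typing import Callable, Optional

# Stand-ins so the sketch runs on its own; the real node works against
# InvokeAI's image and board services (see the image.py diff below).
@dataclass
class ImageDTO:
    image_name: str
    is_intermediate: bool

def linear_ui_output(
    dto: ImageDTO,
    node_is_intermediate: bool,
    board_id: Optional[str],
    add_to_board: Callable[[str, str], None],
    update_image: Callable[..., None],
) -> ImageDTO:
    """Add the image to a board if one was provided, and only change
    is_intermediate when it actually differs from the node's setting."""
    if board_id is not None:
        add_to_board(board_id, dto.image_name)
    if dto.is_intermediate != node_is_intermediate:
        update_image(dto.image_name, is_intermediate=node_is_intermediate)
        dto.is_intermediate = node_is_intermediate
    return dto

# A canvas temp image (node marked intermediate) is left untouched,
# so it never shows up in the gallery unless auto-save is enabled.
dto = ImageDTO("canvas_tmp.png", is_intermediate=True)
linear_ui_output(dto, True, None, lambda b, n: None, lambda n, **kw: None)
print(dto.is_intermediate)  # True - unchanged
```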

## Related Tickets & Documents


- Related Issue
https://discord.com/channels/1020123559063990373/1149513625321603162/1174721072826945638

## QA Instructions, Screenshots, Recordings

Try to reproduce the issues described in the Discord thread:
- Images should always go to the gallery from txt2img and img2img
- Canvas temp images should not go to the gallery unless auto-save is
enabled
2023-11-17 08:05:43 +11:00
Millun Atluri
950021a61e Merge branch 'main' into fix/missed-images-canvas-temp 2023-11-17 08:00:16 +11:00
Millun Atluri
5ee55cf46f Added unsharp mask node to communityNodes.md (#5110)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [X] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [X] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description


## Related Tickets & Documents


- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings


## Added/updated tests?

- [ ] Yes
- [X] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-11-17 07:51:09 +11:00
psychedelicious
91ef24e15c fix(nodes,ui): fix missed/canvas temp images in gallery
Resolves two bugs introduced in #5106:

1. Linear UI images sometimes didn't make it to the gallery.

This was a race condition. The VAE decode nodes were handled by the socketInvocationComplete listener. At that moment, the image was marked as intermediate. Immediately after this node was handled, a LinearUIOutputInvocation, introduced in #5106, was handled by socketInvocationComplete. This node internally changed the image to not intermediate.

During the handling of that socketInvocationComplete, RTK Query would sometimes use its cache instead of retrieving the image DTO again. The result is that the UI never got the message that the image was not intermediate, so it wasn't added to the gallery.

This is resolved by refactoring the socketInvocationComplete listener. We now skip the gallery processing for linear UI events, except for the LinearUIOutputInvocation. Images now always make it to the gallery, and network requests to get image DTOs are substantially reduced.

2. Canvas temp images always went into the gallery

The LinearUIOutputInvocation was always setting its image's is_intermediate to false. This included all canvas images and resulted in all canvas temp images going to the gallery.

This is resolved by making LinearUIOutputInvocation set is_intermediate based on `self.is_intermediate`. The behaviour now more or less mirrors the behaviour of is_intermediate on other image-outputting nodes, except it doesn't save the image again - it only changes it.

One extra minor change - LinearUIOutputInvocation only changes is_intermediate if it differs from the image's current setting. Very minor optimisation.
2023-11-17 07:32:04 +11:00
Jonathan
230dfdb9ad Added unsharp mask node to communityNodes.md 2023-11-16 14:25:06 -06:00
blessedcoolant
6f719b2c7a feat: add private node for linear UI image outputting (#5106)
## What type of PR is this? (check all applicable)

- [x] Refactor
- [ ] Feature
- [ ] Bug Fix
- [x] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

## Description

[feat: add private node for linear UI image
outputting](4599517c6c)

Add a LinearUIOutputInvocation node to be the new terminal node for
Linear UI graphs. This node is private and hidden from the Workflow
Editor, as it is an implementation detail.

The Linear UI was using the Save Image node for this purpose. It allowed
every linear graph to end in a single node type, which handled saving
metadata and assigning the board. This substantially reduced the
complexity of the linear graphs.

This caused two related issues:
- Images were saved to disk twice
- Noticeable delay between when an image was decoded and when it showed
up in the UI

To resolve this, the new LinearUIOutputInvocation node will handle
adding an image to a board if one is provided.

Metadata is no longer provided in this unified node. Instead, the
metadata graph helpers now need to know the node to add metadata to and
provide it to the last node that actually outputs an image. This is an
`l2i` node for txt2img & img2img graphs, and a different
image-outputting node for canvas graphs.

HRF poses another complication, in that it changes the terminal node. To
handle this, a new metadata util is added called
`setMetadataReceivingNode()`. HRF calls this to change the node that
should receive the graph's metadata.

This resolves the duplicate images issue and improves perf without
otherwise changing the user experience.

---

Also fixed an issue with HRF metadata.
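To make the new wiring concrete, here is a minimal graph-building sketch of the idea behind `setMetadataReceivingNode()`. The real helper lives in the web UI's TypeScript graph builders; the dict shape, field names, and the `hrf_l2i` node id below are assumptions for illustration, not InvokeAI's actual API:

```python
# Hypothetical JSON-style graph: a metadata node plus a terminal image node.
graph = {
    "nodes": {
        "core_metadata": {"type": "core_metadata"},
        "denoise": {"type": "denoise_latents"},
        "l2i": {"type": "l2i"},  # terminal image-outputting node for txt2img/img2img
    },
    "edges": [],
}

def set_metadata_receiving_node(g: dict, node_id: str) -> None:
    """Point the metadata output at whichever node actually emits the final image."""
    # Drop any previous metadata edge, then attach metadata to the new receiver.
    g["edges"] = [e for e in g["edges"] if e["source"]["node_id"] != "core_metadata"]
    g["edges"].append({
        "source": {"node_id": "core_metadata", "field": "metadata"},
        "destination": {"node_id": node_id, "field": "metadata"},
    })

set_metadata_receiving_node(graph, "l2i")
# If HRF swaps in a different terminal node, it simply re-points the metadata:
set_metadata_receiving_node(graph, "hrf_l2i")  # "hrf_l2i" is a made-up id
```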

## Related Tickets & Documents


- Closes #4688
- Closes #4645

## QA Instructions, Screenshots, Recordings

Generate some images with and without a board selected. Images should
end up in the right board per usual, but a bit quicker. Metadata should
still work.

2023-11-16 20:08:55 +05:30
psychedelicious
02ce3bd303 Merge branch 'main' into feat/linear-ui-output-node 2023-11-16 19:05:13 +11:00
psychedelicious
4599517c6c feat: add private node for linear UI image outputting
Add a LinearUIOutputInvocation node to be the new terminal node for Linear UI graphs. This node is private and hidden from the Workflow Editor, as it is an implementation detail.

The Linear UI was using the Save Image node for this purpose. It allowed every linear graph to end in a single node type, which handled saving metadata and assigning the board. This substantially reduced the complexity of the linear graphs.

This caused two related issues:
- Images were saved to disk twice
- Noticeable delay between when an image was decoded and when it showed up in the UI

To resolve this, the new LinearUIOutputInvocation node will handle adding an image to a board if one is provided.

Metadata is no longer provided in this unified node. Instead, the metadata graph helpers now need to know the node to add metadata to and provide it to the last node that actually outputs an image. This is an `l2i` node for txt2img & img2img graphs, and a different image-outputting node for canvas graphs.

HRF poses another complication, in that it changes the terminal node. To handle this, a new metadata util is added called `setMetadataReceivingNode()`. HRF calls this to change the node that should receive the graph's metadata.

This resolves the duplicate images issue and improves perf without otherwise changing the user experience.
2023-11-16 18:56:59 +11:00
psychedelicious
cc747c066c fix(nodes): fix hrf_enabled metadata item
It was a float but should be a bool
2023-11-16 18:47:31 +11:00
Surisen
3ba547a41a translationBot(ui): update translation (Chinese (Simplified))
Currently translated at 100.0% (1229 of 1229 strings)

Co-authored-by: Surisen <zhonghx0804@outlook.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/zh_Hans/
Translation: InvokeAI/Web UI
2023-11-16 18:23:41 +11:00
Millun Atluri
1a37827bdf (fix) docs formatting 2023-11-16 18:22:21 +11:00
Millun Atluri
16e990b6e6 Docs/3.4 updates (#5104)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [X] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No


## Description


## Related Tickets & Documents


- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings


## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-11-16 17:52:06 +11:00
Millun Atluri
be4f3fa5c6 Added LCM-LoRA 2023-11-16 16:32:55 +11:00
Millun Atluri
d0375ec234 Added FAQ 2023-11-16 16:10:43 +11:00
Millun Atluri
1bf8625b10 Updates to invocations 2023-11-16 15:35:24 +11:00
Millun Atluri
5d6040b636 Updated invocations docs 2023-11-16 15:02:06 +11:00
Millun Atluri
ead1b14ee7 feat: updateable workflow nodes (#5102)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [x] No


## Description

[fix(nodes): bump version of nodes post-pydantic
v2](5cb3fdb64c)

This was not done, despite new metadata fields being added to many
nodes.

[feat(ui): add update node
functionality](3f6e8e9d6b)

A workflow's nodes may update themselves if their major version matches
the template's major version.

If the major versions do not match, the user will need to delete and
re-add the node (current behaviour).

The update functionality is not automatic (for now). The logic to update
the node is pretty simple, but I want to ensure it works well first
before doing it automatically when a workflow is loaded.

- New `Details` tab on Workflow Inspector, displays node title, type,
version, and notes
- Button to update the node is displayed on the `Details` tab
- Add hook to determine if a node needs an update, may be updated (i.e.
major versions match), and the callback to update the node in state
- Remove the notes modal from the little info icon
- Modularize the node building logic
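A small sketch of the major-version gate described above. The real check lives in a React hook in the web UI; the function names and the use of plain semver strings here are assumptions for illustration, not the hook's actual API:

```python
def parse(version: str) -> tuple:
    """Split a 'major.minor.patch' string into integer parts."""
    return tuple(int(p) for p in version.split("."))

def needs_update(node_version: str, template_version: str) -> bool:
    return parse(node_version) != parse(template_version)

def may_update(node_version: str, template_version: str) -> bool:
    # Updating in place is only offered when the major versions match;
    # otherwise the node must be deleted and re-added (current behaviour).
    return parse(node_version)[0] == parse(template_version)[0]

print(needs_update("1.0.0", "1.1.0"), may_update("1.0.0", "1.1.0"))  # True True
print(needs_update("1.0.0", "2.0.0"), may_update("1.0.0", "2.0.0"))  # True False
```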

## Related Tickets & Documents


They probably exist, but I'm not sure where.

## QA Instructions, Screenshots, Recordings

Load an old workflow with nodes that need to be updated. Click on each
node that needs updating and click the update button. Workflow should
work.

2023-11-16 12:57:01 +11:00
psychedelicious
92a9355ddb chore(ui): lint 2023-11-16 12:46:56 +11:00
psychedelicious
7fcf475aec feat(ui): add Update All Nodes button 2023-11-16 12:42:25 +11:00
psychedelicious
3f6e8e9d6b feat(ui): add update node functionality
A workflow's nodes may update themselves if their major version matches the template's major version.

If the major versions do not match, the user will need to delete and re-add the node (current behaviour).

The update functionality is not automatic (for now). The logic to update the node is pretty simple, but I want to ensure it works well first before doing it automatically when a workflow is loaded.

- New `Details` tab on Workflow Inspector, displays node title, type, version, and notes
- Button to update the node is displayed on the `Details` tab
- Add hook to determine if a node needs an update, may be updated (i.e. major versions match), and the callback to update the node in state
- Remove the notes modal from the little info icon
- Modularize the node building logic
2023-11-16 11:36:20 +11:00
psychedelicious
c9655236cc chore(ui): regen types 2023-11-16 11:21:39 +11:00
psychedelicious
5cb3fdb64c fix(nodes): bump version of nodes post-pydantic v2 2023-11-16 11:14:26 +11:00
Millun Atluri
ae749ada6e pin torch==2.1.0, torchvision=0.16.0 (#5101)
## Description

pin torch==2.1.0, torchvision=0.16.0

Prevents accidental upgrade to unreleased torch 2.1.1, which breaks
stuff

## Related Tickets & Documents


- Related Issue #5065
2023-11-16 09:38:04 +11:00
psychedelicious
36b8549f3a pin torch==2.1.0, torchvision=0.16.0 2023-11-16 09:28:29 +11:00
Millun Atluri
b6f356f067 Change stylecheck name from "black" to "ruff" (#5090)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [X] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [X] No, because: it is trivial

      
## Have you updated all relevant documentation?
- [ ] Yes
- [X] No


## Description

After the switch to the "ruff" linter, I noticed that the stylecheck
workflow is still described as "black" in the action logs. This small PR
should fix the issue.
2023-11-15 08:29:41 +11:00
Lincoln Stein
a4f1db7c02 change stylecheck name from "black" to "ruff" 2023-11-14 11:06:10 -05:00
psychedelicious
21206bafcf chore: bump pydantic and fastapi
No breaking changes for us.

Pydantic is working on its own faster JSON parser, `jiter`, and 2.5.0 starts bringing this in. See https://github.com/pydantic/jiter

There are a number of other bugfixes and minor changes in this version of pydantic.

The FastAPI update is mostly internal but let's stay up to date.
2023-11-14 14:34:14 +11:00
114 changed files with 3118 additions and 1095 deletions

View File

@@ -6,7 +6,7 @@ on:
branches: main
jobs:
black:
ruff:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3

View File

@@ -1,6 +1,6 @@
# Invocations
# Nodes
Features in InvokeAI are added in the form of modular node-like systems called
Features in InvokeAI are added in the form of modular nodes systems called
**Invocations**.
An Invocation is simply a single operation that takes in some inputs and gives
@@ -9,13 +9,34 @@ complex functionality.
## Invocations Directory
InvokeAI Invocations can be found in the `invokeai/app/invocations` directory.
InvokeAI Nodes can be found in the `invokeai/app/invocations` directory. These can be used as examples to create your own nodes.
You can add your new functionality to one of the existing Invocations in this
directory or create a new file in this directory as per your needs.
New nodes should be added to a subfolder in `nodes` direction found at the root level of the InvokeAI installation location. Nodes added to this folder will be able to be used upon application startup.
Example `nodes` subfolder structure:
```py
__init__.py # Invoke-managed custom node loader
cool_node
__init__.py # see example below
cool_node.py
my_node_pack
__init__.py # see example below
tasty_node.py
bodacious_node.py
utils.py
extra_nodes
fancy_node.py
```
Each node folder must have an `__init__.py` file that imports its nodes. Only nodes imported in the `__init__.py` file are loaded.
See the README in the nodes folder for more examples:
```py
from .cool_node import CoolInvocation
```
**Note:** _All Invocations must be inside this directory for InvokeAI to
recognize them as valid Invocations._
## Creating A New Invocation

53
docs/features/LORAS.md Normal file
View File

@@ -0,0 +1,53 @@
---
title: LoRAs & LCM-LoRAs
---
# :material-library-shelves: LoRAs & LCM-LoRAs
With the advances in research, many new capabilities are available to customize the knowledge and understanding of novel concepts not originally contained in the base model.
## LoRAs
Low-Rank Adaptation (LoRA) files are models that customize the output of Stable Diffusion
image generation. Larger than embeddings, but much smaller than full
models, they augment SD with improved understanding of subjects and
artistic styles.
Unlike TI files, LoRAs do not introduce novel vocabulary into the
model's known tokens. Instead, LoRAs augment the model's weights that
are applied to generate imagery. LoRAs may be supplied with a
"trigger" word that they have been explicitly trained on, or may
simply apply their effect without being triggered.
LoRAs are typically stored in .safetensors files, which are the most
secure way to store and transmit these types of weights. You may
install any number of `.safetensors` LoRA files simply by copying them
into the `autoimport/lora` directory of the corresponding InvokeAI models
directory (usually `invokeai` in your home directory).
To use these when generating, open the LoRA menu item in the options
panel, select the LoRAs you want to apply and ensure that they have
the appropriate weight recommended by the model provider. Typically,
most LoRAs perform best at a weight of .75-1.
## LCM-LoRAs
Latent Consistency Models (LCMs) allowed a reduced number of steps to be used to generate images with Stable Diffusion. These are created by distilling base models, creating models that only require a small number of steps to generate images. However, LCMs require that any fine-tune of a base model be distilled to be used as an LCM.
LCM-LoRAs are models that provide the benefit of LCMs but are able to be used as LoRAs and applied to any fine tune of a base model. LCM-LoRAs are created by training a small number of adapters, rather than distilling the entire fine-tuned base model. The resulting LoRA can be used the same way as a standard LoRA, but with a greatly reduced step count. This enables SDXL images to be generated up to 10x faster than without the use of LCM-LoRAs.
**Using LCM-LoRAs**
LCM-LoRAs are natively supported in InvokeAI throughout the application. To get started, install any diffusers format LCM-LoRAs using the model manager and select it in the LoRA field.
There are a number parameter differences when using LCM-LoRAs and standard generation:
- When using LCM-LoRAs, the LoRA strength should be lower than if using a standard LoRA, with 0.35 recommended as a starting point.
- The LCM scheduler should be used for generation
- CFG-Scale should be reduced to ~1
- Steps should be reduced in the range of 4-8
Standard LoRAs can also be used alongside LCM-LoRAs, but will also require a lower strength, with 0.45 being recommended as a starting point.
More information can be found here: https://huggingface.co/blog/lcm_lora#fast-inference-with-sdxl-lcm-loras
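For readers who want the same recipe outside InvokeAI, the blog post linked above shows the equivalent diffusers usage. A condensed sketch - the model and LoRA repo IDs come from that post, and the CUDA device is an assumption of this example:

```python
import torch
from diffusers import StableDiffusionXLPipeline, LCMScheduler

# SDXL base plus the LCM-LoRA adapter, as in the Hugging Face LCM-LoRA post.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)  # LCM scheduler
pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl")

# Mirrors the parameter guidance above: few steps and CFG reduced to ~1.
image = pipe(
    "a photo of a lighthouse at sunset",
    num_inference_steps=4,
    guidance_scale=1.0,
).images[0]
image.save("lcm_lora_example.png")
```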

View File

@@ -1,12 +1,3 @@
---
title: Textual Inversion Embeddings and LoRAs
---
# :material-library-shelves: Textual Inversions and LoRAs
With the advances in research, many new capabilities are available to customize the knowledge and understanding of novel concepts not originally contained in the base model.
## Using Textual Inversion Files
Textual inversion (TI) files are small models that customize the output of
@@ -61,29 +52,4 @@ files it finds there for compatible models. At startup you will see a message si
>> Current embedding manager terms: <HOI4-Leader>, <princess-knight>
```
To use these when generating, simply type the `<` key in your prompt to open the Textual Inversion WebUI and
select the embedding you'd like to use. This UI has type-ahead support, so you can easily find supported embeddings.
## Using LoRAs
LoRA files are models that customize the output of Stable Diffusion
image generation. Larger than embeddings, but much smaller than full
models, they augment SD with improved understanding of subjects and
artistic styles.
Unlike TI files, LoRAs do not introduce novel vocabulary into the
model's known tokens. Instead, LoRAs augment the model's weights that
are applied to generate imagery. LoRAs may be supplied with a
"trigger" word that they have been explicitly trained on, or may
simply apply their effect without being triggered.
LoRAs are typically stored in .safetensors files, which are the most
secure way to store and transmit these types of weights. You may
install any number of `.safetensors` LoRA files simply by copying them
into the `autoimport/lora` directory of the corresponding InvokeAI models
directory (usually `invokeai` in your home directory).
To use these when generating, open the LoRA menu item in the options
panel, select the LoRAs you want to apply and ensure that they have
the appropriate weight recommended by the model provider. Typically,
most LoRAs perform best at a weight of .75-1.
select the embedding you'd like to use. This UI has type-ahead support, so you can easily find supported embeddings.

View File

@@ -20,7 +20,7 @@ a single convenient digital artist-optimized user interface.
### * [Prompt Engineering](PROMPTS.md)
Get the images you want with the InvokeAI prompt engineering language.
### * The [LoRA, LyCORIS and Textual Inversion Models](CONCEPTS.md)
### * The [LoRA, LyCORIS, LCM-LoRA Models](CONCEPTS.md)
Add custom subjects and styles using a variety of fine-tuned models.
### * [ControlNet](CONTROLNET.md)
@@ -40,7 +40,7 @@ guide also covers optimizing models to load quickly.
Teach an old model new tricks. Merge 2-3 models together to create a
new model that combines characteristics of the originals.
### * [Textual Inversion](TRAINING.md)
### * [Textual Inversion](TEXTUAL_INVERSIONS.md)
Personalize models by adding your own style or subjects.
## Other Features

43
docs/help/FAQ.md Normal file
View File

@@ -0,0 +1,43 @@
# FAQs
**Where do I get started? How can I install Invoke?**
- You can download the latest installers [here](https://github.com/invoke-ai/InvokeAI/releases) - Note that any releases marked as *pre-release* are in a beta state. You may experience some issues, but we appreciate your help testing those! For stable/reliable installations, please install the **[Latest Release](https://github.com/invoke-ai/InvokeAI/releases/latest)**
**How can I download models? Can I use models I already have downloaded?**
- Models can be downloaded through the model manager, or through option [4] in the invoke.bat/invoke.sh launcher script. To download a model through the Model Manager, use the HuggingFace Repo ID by pressing the “Copy” button next to the repository name. Alternatively, to download a model from CivitAi, use the download link in the Model Manager.
- Models that are already downloaded can be used by creating a symlink to the model location in the `autoimport` folder or by using the Model Mangers “Scan for Models” function.
**My images are taking a long time to generate. How can I speed up generation?**
- A common solution is to reduce the size of your RAM & VRAM cache to 0.25. This ensures your system has enough memory to generate images.
- Additionally, check the [hardware requirements](https://invoke-ai.github.io/InvokeAI/#hardware-requirements) to ensure that your system is capable of generating images.
- Lastly, double check your generations are happening on your GPU (if you have one). InvokeAI will log what is being used for generation upon startup.
**Ive installed Python on Windows but the installer says it cant find it?**
- Then ensure that you checked **'Add python.exe to PATH'** when installing Python. This can be found at the bottom of the Python Installer window. If you already have Python installed, this can be done with the modify / repair feature of the installer.
**Ive installed everything successfully but I still get an error about Triton when starting Invoke?**
- This can be safely ignored. InvokeAI doesn't use Triton, but if you are on Linux and wish to dismiss the error, you can install Triton.
**I updated to 3.4.0 and now xFormers cant load C++/CUDA?**
- An issue occurred with your PyTorch update. Follow these steps to fix :
1. Launch your invoke.bat / invoke.sh and select the option to open the developer console
2. Run:`pip install ".[xformers]" --upgrade --force-reinstall --extra-index-url https://download.pytorch.org/whl/cu121`
- If you run into an error with `typing_extensions`, re-open the developer console and run: `pip install -U typing-extensions`
**It says my pip is out of date - is that why my install isn't working?**
- An out of date won't cause an installation to fail. The cause of the error can likely be found above the message that says pip is out of date.
- If you saw that warning but the install went well, don't worry about it (but you can update pip afterwards if you'd like).
**How can I generate the exact same that I found on the internet?**
Most example images with prompts that you'll find on the internet have been generated using different software, so you can't expect to get identical results. In order to reproduce an image, you need to replicate the exact settings and processing steps, including (but not limited to) the model, the positive and negative prompts, the seed, the sampler, the exact image size, any upscaling steps, etc.
**Where can I get more help?**
- Create an issue on [GitHub](https://github.com/invoke-ai/InvokeAI/issues) or post in the [#help channel](https://discord.com/channels/1020123559063990373/1149510134058471514) of the InvokeAI Discord

View File

@@ -101,16 +101,13 @@ Mac and Linux machines, and runs on GPU cards with as little as 4 GB of RAM.
<div align="center"><img src="assets/invoke-web-server-1.png" width=640></div>
!!! Note
This project is rapidly evolving. Please use the [Issues tab](https://github.com/invoke-ai/InvokeAI/issues) to report bugs and make feature requests. Be sure to use the provided templates as it will help aid response time.
## :octicons-link-24: Quick Links
<div class="button-container">
<a href="installation/INSTALLATION"> <button class="button">Installation</button> </a>
<a href="features/"> <button class="button">Features</button> </a>
<a href="help/gettingStartedWithAI/"> <button class="button">Getting Started</button> </a>
<a href="help/FAQ/"> <button class="button">FAQ</button> </a>
<a href="contributing/CONTRIBUTING/"> <button class="button">Contributing</button> </a>
<a href="https://github.com/invoke-ai/InvokeAI/"> <button class="button">Code and Downloads</button> </a>
<a href="https://github.com/invoke-ai/InvokeAI/issues"> <button class="button">Bug Reports </button> </a>

View File

@@ -32,6 +32,7 @@ To use a community workflow, download the the `.json` node graph file and load i
+ [Size Stepper Nodes](#size-stepper-nodes)
+ [Text font to Image](#text-font-to-image)
+ [Thresholding](#thresholding)
+ [Unsharp Mask](#unsharp-mask)
+ [XY Image to Grid and Images to Grids nodes](#xy-image-to-grid-and-images-to-grids-nodes)
- [Example Node Template](#example-node-template)
- [Disclaimer](#disclaimer)
@@ -316,6 +317,13 @@ Highlights/Midtones/Shadows (with LUT blur enabled):
<img src="https://github.com/invoke-ai/InvokeAI/assets/34005131/0a440e43-697f-4d17-82ee-f287467df0a5" width="300" />
<img src="https://github.com/invoke-ai/InvokeAI/assets/34005131/0701fd0f-2ca7-4fe2-8613-2b52547bafce" width="300" />
--------------------------------
### Unsharp Mask
**Description:** Applies an unsharp mask filter to an image, preserving its alpha channel in the process.
**Node Link:** https://github.com/JPPhoto/unsharp-mask-node
--------------------------------
### XY Image to Grid and Images to Grids nodes

View File

@@ -7,12 +7,12 @@ To use them, right click on your desired workflow, follow the link to GitHub and
If you're interested in finding more workflows, checkout the [#share-your-workflows](https://discord.com/channels/1020123559063990373/1130291608097661000) channel in the InvokeAI Discord.
* [SD1.5 / SD2 Text to Image](https://github.com/invoke-ai/InvokeAI/blob/main/docs/workflows/Text_to_Image.json)
* [SDXL Text to Image](https://github.com/invoke-ai/InvokeAI/blob/docs/main/docs/workflows/SDXL_Text_to_Image.json)
* [SDXL Text to Image with Refiner](https://github.com/invoke-ai/InvokeAI/blob/docs/main/docs/workflows/SDXL_w_Refiner_Text_to_Image.json)
* [Multi ControlNet (Canny & Depth)](https://github.com/invoke-ai/InvokeAI/blob/docs/main/docs/workflows/Multi_ControlNet_Canny_and_Depth.json)
* [SDXL Text to Image](https://github.com/invoke-ai/InvokeAI/blob/main/docs/workflows/SDXL_Text_to_Image.json)
* [SDXL Text to Image with Refiner](https://github.com/invoke-ai/InvokeAI/blob/main/docs/workflows/SDXL_w_Refiner_Text_to_Image.json)
* [Multi ControlNet (Canny & Depth)](https://github.com/invoke-ai/InvokeAI/blob/main/docs/workflows/Multi_ControlNet_Canny_and_Depth.json)
* [Tiled Upscaling with ControlNet](https://github.com/invoke-ai/InvokeAI/blob/main/docs/workflows/ESRGAN_img2img_upscale_w_Canny_ControlNet.json)
* [Prompt From File](https://github.com/invoke-ai/InvokeAI/blob/docs/main/docs/workflows/Prompt_from_File.json)
* [Face Detailer with IP-Adapter & ControlNet](https://github.com/invoke-ai/InvokeAI/blob/docs/main/docs/workflows/Face_Detailer_with_IP-Adapter_and_Canny.json.json)
* [Prompt From File](https://github.com/invoke-ai/InvokeAI/blob/main/docs/workflows/Prompt_from_File.json)
* [Face Detailer with IP-Adapter & ControlNet](https://github.com/invoke-ai/InvokeAI/blob/main/docs/workflows/Face_Detailer_with_IP-Adapter_and_Canny.json)
* [FaceMask](https://github.com/invoke-ai/InvokeAI/blob/main/docs/workflows/FaceMask.json)
* [FaceOff with 2x Face Scaling](https://github.com/invoke-ai/InvokeAI/blob/main/docs/workflows/FaceOff_FaceScale2x.json)
* [QR Code Monster](https://github.com/invoke-ai/InvokeAI/blob/docs/main/docs/workflows/QR_Code_Monster.json)
* [QR Code Monster](https://github.com/invoke-ai/InvokeAI/blob/main/docs/workflows/QR_Code_Monster.json)

View File

@@ -244,7 +244,7 @@ class InvokeAiInstance:
"numpy~=1.24.0", # choose versions that won't be uninstalled during phase 2
"urllib3~=1.26.0",
"requests~=2.28.0",
"torch~=2.1.0",
"torch==2.1.0",
"torchmetrics==0.11.4",
"torchvision>=0.14.1",
"--force-reinstall",

View File

@@ -96,7 +96,7 @@ class ControlOutput(BaseInvocationOutput):
control: ControlField = OutputField(description=FieldDescriptions.control)
@invocation("controlnet", title="ControlNet", tags=["controlnet"], category="controlnet", version="1.0.0")
@invocation("controlnet", title="ControlNet", tags=["controlnet"], category="controlnet", version="1.1.0")
class ControlNetInvocation(BaseInvocation):
"""Collects ControlNet info to pass to other nodes"""
@@ -173,7 +173,7 @@ class ImageProcessorInvocation(BaseInvocation, WithMetadata, WithWorkflow):
title="Canny Processor",
tags=["controlnet", "canny"],
category="controlnet",
version="1.0.0",
version="1.1.0",
)
class CannyImageProcessorInvocation(ImageProcessorInvocation):
"""Canny edge detection for ControlNet"""
@@ -196,7 +196,7 @@ class CannyImageProcessorInvocation(ImageProcessorInvocation):
title="HED (softedge) Processor",
tags=["controlnet", "hed", "softedge"],
category="controlnet",
version="1.0.0",
version="1.1.0",
)
class HedImageProcessorInvocation(ImageProcessorInvocation):
"""Applies HED edge detection to image"""
@@ -225,7 +225,7 @@ class HedImageProcessorInvocation(ImageProcessorInvocation):
title="Lineart Processor",
tags=["controlnet", "lineart"],
category="controlnet",
version="1.0.0",
version="1.1.0",
)
class LineartImageProcessorInvocation(ImageProcessorInvocation):
"""Applies line art processing to image"""
@@ -247,7 +247,7 @@ class LineartImageProcessorInvocation(ImageProcessorInvocation):
title="Lineart Anime Processor",
tags=["controlnet", "lineart", "anime"],
category="controlnet",
version="1.0.0",
version="1.1.0",
)
class LineartAnimeImageProcessorInvocation(ImageProcessorInvocation):
"""Applies line art anime processing to image"""
@@ -270,7 +270,7 @@ class LineartAnimeImageProcessorInvocation(ImageProcessorInvocation):
title="Openpose Processor",
tags=["controlnet", "openpose", "pose"],
category="controlnet",
version="1.0.0",
version="1.1.0",
)
class OpenposeImageProcessorInvocation(ImageProcessorInvocation):
"""Applies Openpose processing to image"""
@@ -295,7 +295,7 @@ class OpenposeImageProcessorInvocation(ImageProcessorInvocation):
title="Midas Depth Processor",
tags=["controlnet", "midas"],
category="controlnet",
version="1.0.0",
version="1.1.0",
)
class MidasDepthImageProcessorInvocation(ImageProcessorInvocation):
"""Applies Midas depth processing to image"""
@@ -322,7 +322,7 @@ class MidasDepthImageProcessorInvocation(ImageProcessorInvocation):
title="Normal BAE Processor",
tags=["controlnet"],
category="controlnet",
version="1.0.0",
version="1.1.0",
)
class NormalbaeImageProcessorInvocation(ImageProcessorInvocation):
"""Applies NormalBae processing to image"""
@@ -339,7 +339,7 @@ class NormalbaeImageProcessorInvocation(ImageProcessorInvocation):
@invocation(
"mlsd_image_processor", title="MLSD Processor", tags=["controlnet", "mlsd"], category="controlnet", version="1.0.0"
"mlsd_image_processor", title="MLSD Processor", tags=["controlnet", "mlsd"], category="controlnet", version="1.1.0"
)
class MlsdImageProcessorInvocation(ImageProcessorInvocation):
"""Applies MLSD processing to image"""
@@ -362,7 +362,7 @@ class MlsdImageProcessorInvocation(ImageProcessorInvocation):
@invocation(
"pidi_image_processor", title="PIDI Processor", tags=["controlnet", "pidi"], category="controlnet", version="1.0.0"
"pidi_image_processor", title="PIDI Processor", tags=["controlnet", "pidi"], category="controlnet", version="1.1.0"
)
class PidiImageProcessorInvocation(ImageProcessorInvocation):
"""Applies PIDI processing to image"""
@@ -389,7 +389,7 @@ class PidiImageProcessorInvocation(ImageProcessorInvocation):
title="Content Shuffle Processor",
tags=["controlnet", "contentshuffle"],
category="controlnet",
version="1.0.0",
version="1.1.0",
)
class ContentShuffleImageProcessorInvocation(ImageProcessorInvocation):
"""Applies content shuffle processing to image"""
@@ -419,7 +419,7 @@ class ContentShuffleImageProcessorInvocation(ImageProcessorInvocation):
title="Zoe (Depth) Processor",
tags=["controlnet", "zoe", "depth"],
category="controlnet",
version="1.0.0",
version="1.1.0",
)
class ZoeDepthImageProcessorInvocation(ImageProcessorInvocation):
"""Applies Zoe depth processing to image"""
@@ -435,7 +435,7 @@ class ZoeDepthImageProcessorInvocation(ImageProcessorInvocation):
title="Mediapipe Face Processor",
tags=["controlnet", "mediapipe", "face"],
category="controlnet",
version="1.0.0",
version="1.1.0",
)
class MediapipeFaceProcessorInvocation(ImageProcessorInvocation):
"""Applies mediapipe face processing to image"""
@@ -458,7 +458,7 @@ class MediapipeFaceProcessorInvocation(ImageProcessorInvocation):
title="Leres (Depth) Processor",
tags=["controlnet", "leres", "depth"],
category="controlnet",
version="1.0.0",
version="1.1.0",
)
class LeresImageProcessorInvocation(ImageProcessorInvocation):
"""Applies leres processing to image"""
@@ -487,7 +487,7 @@ class LeresImageProcessorInvocation(ImageProcessorInvocation):
title="Tile Resample Processor",
tags=["controlnet", "tile"],
category="controlnet",
version="1.0.0",
version="1.1.0",
)
class TileResamplerProcessorInvocation(ImageProcessorInvocation):
"""Tile resampler processor"""
@@ -527,7 +527,7 @@ class TileResamplerProcessorInvocation(ImageProcessorInvocation):
title="Segment Anything Processor",
tags=["controlnet", "segmentanything"],
category="controlnet",
version="1.0.0",
version="1.1.0",
)
class SegmentAnythingProcessorInvocation(ImageProcessorInvocation):
"""Applies segment anything processing to image"""
@@ -569,7 +569,7 @@ class SamDetectorReproducibleColors(SamDetector):
title="Color Map Processor",
tags=["controlnet"],
category="controlnet",
version="1.0.0",
version="1.1.0",
)
class ColorMapImageProcessorInvocation(ImageProcessorInvocation):
"""Generates a color map from the provided image"""

View File

@@ -11,7 +11,7 @@ from invokeai.app.services.image_records.image_records_common import ImageCatego
from .baseinvocation import BaseInvocation, InputField, InvocationContext, WithMetadata, WithWorkflow, invocation
@invocation("cv_inpaint", title="OpenCV Inpaint", tags=["opencv", "inpaint"], category="inpaint", version="1.0.0")
@invocation("cv_inpaint", title="OpenCV Inpaint", tags=["opencv", "inpaint"], category="inpaint", version="1.1.0")
class CvInpaintInvocation(BaseInvocation, WithMetadata, WithWorkflow):
"""Simple inpaint using opencv."""

View File

@@ -438,7 +438,7 @@ def get_faces_list(
return all_faces
@invocation("face_off", title="FaceOff", tags=["image", "faceoff", "face", "mask"], category="image", version="1.0.2")
@invocation("face_off", title="FaceOff", tags=["image", "faceoff", "face", "mask"], category="image", version="1.1.0")
class FaceOffInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Bound, extract, and mask a face from an image using MediaPipe detection"""
@@ -532,7 +532,7 @@ class FaceOffInvocation(BaseInvocation, WithWorkflow, WithMetadata):
return output
@invocation("face_mask_detection", title="FaceMask", tags=["image", "face", "mask"], category="image", version="1.0.2")
@invocation("face_mask_detection", title="FaceMask", tags=["image", "face", "mask"], category="image", version="1.1.0")
class FaceMaskInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Face mask creation using mediapipe face detection"""
@@ -650,7 +650,7 @@ class FaceMaskInvocation(BaseInvocation, WithWorkflow, WithMetadata):
@invocation(
"face_identifier", title="FaceIdentifier", tags=["image", "face", "identifier"], category="image", version="1.0.2"
"face_identifier", title="FaceIdentifier", tags=["image", "face", "identifier"], category="image", version="1.1.0"
)
class FaceIdentifierInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Outputs an image with detected face IDs printed on each face. For use with other FaceTools."""

View File

@@ -8,7 +8,7 @@ import numpy
from PIL import Image, ImageChops, ImageFilter, ImageOps
from invokeai.app.invocations.primitives import BoardField, ColorField, ImageField, ImageOutput
from invokeai.app.services.image_records.image_records_common import ImageCategory, ResourceOrigin
from invokeai.app.services.image_records.image_records_common import ImageCategory, ImageRecordChanges, ResourceOrigin
from invokeai.app.shared.fields import FieldDescriptions
from invokeai.backend.image_util.invisible_watermark import InvisibleWatermark
from invokeai.backend.image_util.safety_checker import SafetyChecker
@@ -36,7 +36,7 @@ class ShowImageInvocation(BaseInvocation):
)
@invocation("blank_image", title="Blank Image", tags=["image"], category="image", version="1.0.0")
@invocation("blank_image", title="Blank Image", tags=["image"], category="image", version="1.1.0")
class BlankImageInvocation(BaseInvocation, WithMetadata, WithWorkflow):
"""Creates a blank image and forwards it to the pipeline"""
@@ -66,7 +66,7 @@ class BlankImageInvocation(BaseInvocation, WithMetadata, WithWorkflow):
)
@invocation("img_crop", title="Crop Image", tags=["image", "crop"], category="image", version="1.0.0")
@invocation("img_crop", title="Crop Image", tags=["image", "crop"], category="image", version="1.1.0")
class ImageCropInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Crops an image to a specified box. The box can be outside of the image."""
@@ -100,7 +100,7 @@ class ImageCropInvocation(BaseInvocation, WithWorkflow, WithMetadata):
)
@invocation("img_paste", title="Paste Image", tags=["image", "paste"], category="image", version="1.0.1")
@invocation("img_paste", title="Paste Image", tags=["image", "paste"], category="image", version="1.1.0")
class ImagePasteInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Pastes an image into another image."""
@@ -154,7 +154,7 @@ class ImagePasteInvocation(BaseInvocation, WithWorkflow, WithMetadata):
)
@invocation("tomask", title="Mask from Alpha", tags=["image", "mask"], category="image", version="1.0.0")
@invocation("tomask", title="Mask from Alpha", tags=["image", "mask"], category="image", version="1.1.0")
class MaskFromAlphaInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Extracts the alpha channel of an image as a mask."""
@@ -186,7 +186,7 @@ class MaskFromAlphaInvocation(BaseInvocation, WithWorkflow, WithMetadata):
)
@invocation("img_mul", title="Multiply Images", tags=["image", "multiply"], category="image", version="1.0.0")
@invocation("img_mul", title="Multiply Images", tags=["image", "multiply"], category="image", version="1.1.0")
class ImageMultiplyInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Multiplies two images together using `PIL.ImageChops.multiply()`."""
@@ -220,7 +220,7 @@ class ImageMultiplyInvocation(BaseInvocation, WithWorkflow, WithMetadata):
IMAGE_CHANNELS = Literal["A", "R", "G", "B"]
@invocation("img_chan", title="Extract Image Channel", tags=["image", "channel"], category="image", version="1.0.0")
@invocation("img_chan", title="Extract Image Channel", tags=["image", "channel"], category="image", version="1.1.0")
class ImageChannelInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Gets a channel from an image."""
@@ -253,7 +253,7 @@ class ImageChannelInvocation(BaseInvocation, WithWorkflow, WithMetadata):
IMAGE_MODES = Literal["L", "RGB", "RGBA", "CMYK", "YCbCr", "LAB", "HSV", "I", "F"]
@invocation("img_conv", title="Convert Image Mode", tags=["image", "convert"], category="image", version="1.0.0")
@invocation("img_conv", title="Convert Image Mode", tags=["image", "convert"], category="image", version="1.1.0")
class ImageConvertInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Converts an image to a different mode."""
@@ -283,7 +283,7 @@ class ImageConvertInvocation(BaseInvocation, WithWorkflow, WithMetadata):
)
@invocation("img_blur", title="Blur Image", tags=["image", "blur"], category="image", version="1.0.0")
@invocation("img_blur", title="Blur Image", tags=["image", "blur"], category="image", version="1.1.0")
class ImageBlurInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Blurs an image"""
@@ -338,7 +338,7 @@ PIL_RESAMPLING_MAP = {
}
@invocation("img_resize", title="Resize Image", tags=["image", "resize"], category="image", version="1.0.0")
@invocation("img_resize", title="Resize Image", tags=["image", "resize"], category="image", version="1.1.0")
class ImageResizeInvocation(BaseInvocation, WithMetadata, WithWorkflow):
"""Resizes an image to specific dimensions"""
@@ -375,7 +375,7 @@ class ImageResizeInvocation(BaseInvocation, WithMetadata, WithWorkflow):
)
@invocation("img_scale", title="Scale Image", tags=["image", "scale"], category="image", version="1.0.0")
@invocation("img_scale", title="Scale Image", tags=["image", "scale"], category="image", version="1.1.0")
class ImageScaleInvocation(BaseInvocation, WithMetadata, WithWorkflow):
"""Scales an image by a factor"""
@@ -417,7 +417,7 @@ class ImageScaleInvocation(BaseInvocation, WithMetadata, WithWorkflow):
)
@invocation("img_lerp", title="Lerp Image", tags=["image", "lerp"], category="image", version="1.0.0")
@invocation("img_lerp", title="Lerp Image", tags=["image", "lerp"], category="image", version="1.1.0")
class ImageLerpInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Linear interpolation of all pixels of an image"""
@@ -451,7 +451,7 @@ class ImageLerpInvocation(BaseInvocation, WithWorkflow, WithMetadata):
)
@invocation("img_ilerp", title="Inverse Lerp Image", tags=["image", "ilerp"], category="image", version="1.0.0")
@invocation("img_ilerp", title="Inverse Lerp Image", tags=["image", "ilerp"], category="image", version="1.1.0")
class ImageInverseLerpInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Inverse linear interpolation of all pixels of an image"""
@@ -485,7 +485,7 @@ class ImageInverseLerpInvocation(BaseInvocation, WithWorkflow, WithMetadata):
)
@invocation("img_nsfw", title="Blur NSFW Image", tags=["image", "nsfw"], category="image", version="1.0.0")
@invocation("img_nsfw", title="Blur NSFW Image", tags=["image", "nsfw"], category="image", version="1.1.0")
class ImageNSFWBlurInvocation(BaseInvocation, WithMetadata, WithWorkflow):
"""Add blur to NSFW-flagged images"""
@@ -532,7 +532,7 @@ class ImageNSFWBlurInvocation(BaseInvocation, WithMetadata, WithWorkflow):
title="Add Invisible Watermark",
tags=["image", "watermark"],
category="image",
version="1.0.0",
version="1.1.0",
)
class ImageWatermarkInvocation(BaseInvocation, WithMetadata, WithWorkflow):
"""Add an invisible watermark to an image"""
@@ -561,7 +561,7 @@ class ImageWatermarkInvocation(BaseInvocation, WithMetadata, WithWorkflow):
)
@invocation("mask_edge", title="Mask Edge", tags=["image", "mask", "inpaint"], category="image", version="1.0.0")
@invocation("mask_edge", title="Mask Edge", tags=["image", "mask", "inpaint"], category="image", version="1.1.0")
class MaskEdgeInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Applies an edge mask to an image"""
@@ -612,7 +612,7 @@ class MaskEdgeInvocation(BaseInvocation, WithWorkflow, WithMetadata):
title="Combine Masks",
tags=["image", "mask", "multiply"],
category="image",
version="1.0.0",
version="1.1.0",
)
class MaskCombineInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Combine two masks together by multiplying them using `PIL.ImageChops.multiply()`."""
@@ -644,7 +644,7 @@ class MaskCombineInvocation(BaseInvocation, WithWorkflow, WithMetadata):
)
@invocation("color_correct", title="Color Correct", tags=["image", "color"], category="image", version="1.0.0")
@invocation("color_correct", title="Color Correct", tags=["image", "color"], category="image", version="1.1.0")
class ColorCorrectInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""
Shifts the colors of a target image to match the reference image, optionally
@@ -755,7 +755,7 @@ class ColorCorrectInvocation(BaseInvocation, WithWorkflow, WithMetadata):
)
@invocation("img_hue_adjust", title="Adjust Image Hue", tags=["image", "hue"], category="image", version="1.0.0")
@invocation("img_hue_adjust", title="Adjust Image Hue", tags=["image", "hue"], category="image", version="1.1.0")
class ImageHueAdjustmentInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Adjusts the Hue of an image."""
@@ -858,7 +858,7 @@ CHANNEL_FORMATS = {
"value",
],
category="image",
version="1.0.0",
version="1.1.0",
)
class ImageChannelOffsetInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Add or subtract a value from a specific color channel of an image."""
@@ -929,7 +929,7 @@ class ImageChannelOffsetInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"value",
],
category="image",
version="1.0.0",
version="1.1.0",
)
class ImageChannelMultiplyInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Scale a specific color channel of an image."""
@@ -988,7 +988,7 @@ class ImageChannelMultiplyInvocation(BaseInvocation, WithWorkflow, WithMetadata)
title="Save Image",
tags=["primitives", "image"],
category="primitives",
version="1.0.1",
version="1.1.0",
use_cache=False,
)
class SaveImageInvocation(BaseInvocation, WithWorkflow, WithMetadata):
@@ -1017,3 +1017,35 @@ class SaveImageInvocation(BaseInvocation, WithWorkflow, WithMetadata):
width=image_dto.width,
height=image_dto.height,
)
@invocation(
"linear_ui_output",
title="Linear UI Image Output",
tags=["primitives", "image"],
category="primitives",
version="1.0.1",
use_cache=False,
)
class LinearUIOutputInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Handles Linear UI Image Outputting tasks."""
image: ImageField = InputField(description=FieldDescriptions.image)
board: Optional[BoardField] = InputField(default=None, description=FieldDescriptions.board, input=Input.Direct)
def invoke(self, context: InvocationContext) -> ImageOutput:
image_dto = context.services.images.get_dto(self.image.image_name)
if self.board:
context.services.board_images.add_image_to_board(self.board.board_id, self.image.image_name)
if image_dto.is_intermediate != self.is_intermediate:
context.services.images.update(
self.image.image_name, changes=ImageRecordChanges(is_intermediate=self.is_intermediate)
)
return ImageOutput(
image=ImageField(image_name=self.image.image_name),
width=image_dto.width,
height=image_dto.height,
)

View File

@@ -118,7 +118,7 @@ def tile_fill_missing(im: Image.Image, tile_size: int = 16, seed: Optional[int]
return si
@invocation("infill_rgba", title="Solid Color Infill", tags=["image", "inpaint"], category="inpaint", version="1.0.0")
@invocation("infill_rgba", title="Solid Color Infill", tags=["image", "inpaint"], category="inpaint", version="1.1.0")
class InfillColorInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Infills transparent areas of an image with a solid color"""
@@ -154,7 +154,7 @@ class InfillColorInvocation(BaseInvocation, WithWorkflow, WithMetadata):
)
@invocation("infill_tile", title="Tile Infill", tags=["image", "inpaint"], category="inpaint", version="1.0.0")
@invocation("infill_tile", title="Tile Infill", tags=["image", "inpaint"], category="inpaint", version="1.1.0")
class InfillTileInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Infills transparent areas of an image with tiles of the image"""
@@ -192,7 +192,7 @@ class InfillTileInvocation(BaseInvocation, WithWorkflow, WithMetadata):
@invocation(
"infill_patchmatch", title="PatchMatch Infill", tags=["image", "inpaint"], category="inpaint", version="1.0.0"
"infill_patchmatch", title="PatchMatch Infill", tags=["image", "inpaint"], category="inpaint", version="1.1.0"
)
class InfillPatchMatchInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Infills transparent areas of an image using the PatchMatch algorithm"""
@@ -245,7 +245,7 @@ class InfillPatchMatchInvocation(BaseInvocation, WithWorkflow, WithMetadata):
)
@invocation("infill_lama", title="LaMa Infill", tags=["image", "inpaint"], category="inpaint", version="1.0.0")
@invocation("infill_lama", title="LaMa Infill", tags=["image", "inpaint"], category="inpaint", version="1.1.0")
class LaMaInfillInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Infills transparent areas of an image using the LaMa model"""
@@ -274,7 +274,7 @@ class LaMaInfillInvocation(BaseInvocation, WithWorkflow, WithMetadata):
)
@invocation("infill_cv2", title="CV2 Infill", tags=["image", "inpaint"], category="inpaint")
@invocation("infill_cv2", title="CV2 Infill", tags=["image", "inpaint"], category="inpaint", version="1.1.0")
class CV2InfillInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Infills transparent areas of an image using OpenCV Inpainting"""

View File

@@ -706,7 +706,6 @@ class DenoiseLatentsInvocation(BaseInvocation):
)
with (
ExitStack() as exit_stack,
ModelPatcher.apply_lora_unet(unet_info.context.model, _lora_loader()),
ModelPatcher.apply_freeu(unet_info.context.model, self.unet.freeu_config),
set_seamless(unet_info.context.model, self.unet.seamless_axes),
unet_info as unet,
@@ -790,7 +789,7 @@ class DenoiseLatentsInvocation(BaseInvocation):
title="Latents to Image",
tags=["latents", "image", "vae", "l2i"],
category="latents",
version="1.0.0",
version="1.1.0",
)
class LatentsToImageInvocation(BaseInvocation, WithMetadata, WithWorkflow):
"""Generates an image from latents."""

View File

@@ -112,7 +112,7 @@ GENERATION_MODES = Literal[
]
@invocation("core_metadata", title="Core Metadata", tags=["metadata"], category="metadata", version="1.0.0")
@invocation("core_metadata", title="Core Metadata", tags=["metadata"], category="metadata", version="1.0.1")
class CoreMetadataInvocation(BaseInvocation):
"""Collects core generation metadata into a MetadataField"""
@@ -160,7 +160,7 @@ class CoreMetadataInvocation(BaseInvocation):
)
# High resolution fix metadata.
-    hrf_enabled: Optional[float] = InputField(
+    hrf_enabled: Optional[bool] = InputField(
default=None,
description="Whether or not high resolution fix was enabled.",
)

View File

@@ -326,7 +326,7 @@ class ONNXTextToLatentsInvocation(BaseInvocation):
title="ONNX Latents to Image",
tags=["latents", "image", "vae", "onnx"],
category="image",
version="1.0.0",
version="1.1.0",
)
class ONNXLatentsToImageInvocation(BaseInvocation, WithMetadata, WithWorkflow):
"""Generates an image from latents."""

View File

@@ -29,7 +29,7 @@ if choose_torch_device() == torch.device("mps"):
from torch import mps
@invocation("esrgan", title="Upscale (RealESRGAN)", tags=["esrgan", "upscale"], category="esrgan", version="1.1.0")
@invocation("esrgan", title="Upscale (RealESRGAN)", tags=["esrgan", "upscale"], category="esrgan", version="1.2.0")
class ESRGANInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Upscales an image using RealESRGAN."""

View File

@@ -90,14 +90,6 @@ def get_extras():
pass
return extras
-def get_extra_index() -> str:
-    # parsed_version.local for torch is the platform + version, eg 'cu121' or 'rocm5.6'
-    local = pkg_resources.get_distribution("torch").parsed_version.local
-    if local and 'cu' in local:
-        return "--extra-index-url https://download.pytorch.org/whl/cu121"
-    if local and 'rocm' in local:
-        return "--extra-index-url https://download.pytorch.org/whl/rocm5.6"
-    return ""
def main():
versions = get_versions()
@@ -130,15 +122,14 @@ def main():
branch = Prompt.ask("Enter an InvokeAI branch name")
extras = get_extras()
-extra_index_url = get_extra_index()
print(f":crossed_fingers: Upgrading to [yellow]{tag or release or branch}[/yellow]")
if release:
-cmd = f'pip install "invokeai{extras} @ {INVOKE_AI_SRC}/{release}.zip" --use-pep517 --upgrade {extra_index_url}'
+cmd = f'pip install "invokeai{extras} @ {INVOKE_AI_SRC}/{release}.zip" --use-pep517 --upgrade'
elif tag:
-cmd = f'pip install "invokeai{extras} @ {INVOKE_AI_TAG}/{tag}.zip" --use-pep517 --upgrade {extra_index_url}'
+cmd = f'pip install "invokeai{extras} @ {INVOKE_AI_TAG}/{tag}.zip" --use-pep517 --upgrade'
else:
-cmd = f'pip install "invokeai{extras} @ {INVOKE_AI_BRANCH}/{branch}.zip" --use-pep517 --upgrade {extra_index_url}'
+cmd = f'pip install "invokeai{extras} @ {INVOKE_AI_BRANCH}/{branch}.zip" --use-pep517 --upgrade'
print("")
print("")
if os.system(cmd) == 0:
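Net effect: the updater no longer asks pip to use the PyTorch extra index. A rough sketch of the command it now builds, using placeholder values (the real INVOKE_AI_SRC constant and release tag are not shown in this hunk):

# Sketch only: placeholder values, not the updater's real constants.
INVOKE_AI_SRC = "https://example.invalid/InvokeAI/archive"  # assumed shape of the constant
extras = ""          # e.g. "[xformers]" when optional extras were detected
release = "v3.4.0"   # hypothetical tag entered by the user

cmd = f'pip install "invokeai{extras} @ {INVOKE_AI_SRC}/{release}.zip" --use-pep517 --upgrade'
print(cmd)
# pip install "invokeai @ https://example.invalid/InvokeAI/archive/v3.4.0.zip" --use-pep517 --upgrade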

File diff suppressed because one or more lines are too long (5 such files)

View File

@@ -1,4 +1,4 @@
import{w as s,ic as T,v as l,a1 as I,id as R,ad as V,ie as z,ig as j,ih as D,ii as F,ij as G,ik as W,il as K,aG as H,im as U,io as Y}from"./index-ae455ad2.js";import{M as Z}from"./MantineProvider-8ba2c088.js";var P=String.raw,E=P`
import{I as s,ie as T,v as l,$ as A,ig as R,aa as V,ih as z,ii as j,ij as D,ik as F,il as G,im as W,io as K,az as H,ip as U,iq as Y}from"./index-f820e2e3.js";import{M as Z}from"./MantineProvider-a6a1d85c.js";var P=String.raw,E=P`
:root,
:host {
--chakra-vh: 100vh;
@@ -277,4 +277,4 @@ import{w as s,ic as T,v as l,a1 as I,id as R,ad as V,ie as z,ig as j,ih as D,ii
}
${E}
`}),g={light:"chakra-ui-light",dark:"chakra-ui-dark"};function Q(e={}){const{preventTransition:o=!0}=e,n={setDataset:r=>{const t=o?n.preventTransition():void 0;document.documentElement.dataset.theme=r,document.documentElement.style.colorScheme=r,t==null||t()},setClassName(r){document.body.classList.add(r?g.dark:g.light),document.body.classList.remove(r?g.light:g.dark)},query(){return window.matchMedia("(prefers-color-scheme: dark)")},getSystemTheme(r){var t;return((t=n.query().matches)!=null?t:r==="dark")?"dark":"light"},addListener(r){const t=n.query(),i=a=>{r(a.matches?"dark":"light")};return typeof t.addListener=="function"?t.addListener(i):t.addEventListener("change",i),()=>{typeof t.removeListener=="function"?t.removeListener(i):t.removeEventListener("change",i)}},preventTransition(){const r=document.createElement("style");return r.appendChild(document.createTextNode("*{-webkit-transition:none!important;-moz-transition:none!important;-o-transition:none!important;-ms-transition:none!important;transition:none!important}")),document.head.appendChild(r),()=>{window.getComputedStyle(document.body),requestAnimationFrame(()=>{requestAnimationFrame(()=>{document.head.removeChild(r)})})}}};return n}var X="chakra-ui-color-mode";function L(e){return{ssr:!1,type:"localStorage",get(o){if(!(globalThis!=null&&globalThis.document))return o;let n;try{n=localStorage.getItem(e)||o}catch{}return n||o},set(o){try{localStorage.setItem(e,o)}catch{}}}}var ee=L(X),M=()=>{};function S(e,o){return e.type==="cookie"&&e.ssr?e.get(o):o}function O(e){const{value:o,children:n,options:{useSystemColorMode:r,initialColorMode:t,disableTransitionOnChange:i}={},colorModeManager:a=ee}=e,d=t==="dark"?"dark":"light",[u,p]=l.useState(()=>S(a,d)),[y,b]=l.useState(()=>S(a)),{getSystemTheme:w,setClassName:k,setDataset:x,addListener:$}=l.useMemo(()=>Q({preventTransition:i}),[i]),v=t==="system"&&!u?y:u,c=l.useCallback(m=>{const f=m==="system"?w():m;p(f),k(f==="dark"),x(f),a.set(f)},[a,w,k,x]);I(()=>{t==="system"&&b(w())},[]),l.useEffect(()=>{const m=a.get();if(m){c(m);return}if(t==="system"){c("system");return}c(d)},[a,d,t,c]);const C=l.useCallback(()=>{c(v==="dark"?"light":"dark")},[v,c]);l.useEffect(()=>{if(r)return $(c)},[r,$,c]);const A=l.useMemo(()=>({colorMode:o??v,toggleColorMode:o?M:C,setColorMode:o?M:c,forced:o!==void 0}),[v,C,c,o]);return s.jsx(R.Provider,{value:A,children:n})}O.displayName="ColorModeProvider";var te=["borders","breakpoints","colors","components","config","direction","fonts","fontSizes","fontWeights","letterSpacings","lineHeights","radii","shadows","sizes","space","styles","transition","zIndices"];function re(e){return V(e)?te.every(o=>Object.prototype.hasOwnProperty.call(e,o)):!1}function h(e){return typeof e=="function"}function oe(...e){return o=>e.reduce((n,r)=>r(n),o)}var ne=e=>function(...n){let r=[...n],t=n[n.length-1];return re(t)&&r.length>1?r=r.slice(0,r.length-1):t=e,oe(...r.map(i=>a=>h(i)?i(a):ae(a,i)))(t)},ie=ne(j);function ae(...e){return z({},...e,_)}function _(e,o,n,r){if((h(e)||h(o))&&Object.prototype.hasOwnProperty.call(r,n))return(...t)=>{const i=h(e)?e(...t):e,a=h(o)?o(...t):o;return z({},i,a,_)}}var q=l.createContext({getDocument(){return document},getWindow(){return window}});q.displayName="EnvironmentContext";function N(e){const{children:o,environment:n,disabled:r}=e,t=l.useRef(null),i=l.useMemo(()=>n||{getDocument:()=>{var d,u;return(u=(d=t.current)==null?void 0:d.ownerDocument)!=null?u:document},getWindow:()=>{var d,u;return(u=(d=t.current)==null?void 
0:d.ownerDocument.defaultView)!=null?u:window}},[n]),a=!r||!n;return s.jsxs(q.Provider,{value:i,children:[o,a&&s.jsx("span",{id:"__chakra_env",hidden:!0,ref:t})]})}N.displayName="EnvironmentProvider";var se=e=>{const{children:o,colorModeManager:n,portalZIndex:r,resetScope:t,resetCSS:i=!0,theme:a={},environment:d,cssVarsRoot:u,disableEnvironment:p,disableGlobalStyle:y}=e,b=s.jsx(N,{environment:d,disabled:p,children:o});return s.jsx(D,{theme:a,cssVarsRoot:u,children:s.jsxs(O,{colorModeManager:n,options:a.config,children:[i?s.jsx(J,{scope:t}):s.jsx(B,{}),!y&&s.jsx(F,{}),r?s.jsx(G,{zIndex:r,children:b}):b]})})},le=e=>function({children:n,theme:r=e,toastOptions:t,...i}){return s.jsxs(se,{theme:r,...i,children:[s.jsx(W,{value:t==null?void 0:t.defaultOptions,children:n}),s.jsx(K,{...t})]})},de=le(j);const ue=()=>l.useMemo(()=>({colorScheme:"dark",fontFamily:"'Inter Variable', sans-serif",components:{ScrollArea:{defaultProps:{scrollbarSize:10},styles:{scrollbar:{"&:hover":{backgroundColor:"var(--invokeai-colors-baseAlpha-300)"}},thumb:{backgroundColor:"var(--invokeai-colors-baseAlpha-300)"}}}}}),[]),ce=L("@@invokeai-color-mode");function me({children:e}){const{i18n:o}=H(),n=o.dir(),r=l.useMemo(()=>ie({...U,direction:n}),[n]);l.useEffect(()=>{document.body.dir=n},[n]);const t=ue();return s.jsx(Z,{theme:t,children:s.jsx(de,{theme:r,colorModeManager:ce,toastOptions:Y,children:e})})}const ve=l.memo(me);export{ve as default};
`}),g={light:"chakra-ui-light",dark:"chakra-ui-dark"};function Q(e={}){const{preventTransition:o=!0}=e,n={setDataset:r=>{const t=o?n.preventTransition():void 0;document.documentElement.dataset.theme=r,document.documentElement.style.colorScheme=r,t==null||t()},setClassName(r){document.body.classList.add(r?g.dark:g.light),document.body.classList.remove(r?g.light:g.dark)},query(){return window.matchMedia("(prefers-color-scheme: dark)")},getSystemTheme(r){var t;return((t=n.query().matches)!=null?t:r==="dark")?"dark":"light"},addListener(r){const t=n.query(),i=a=>{r(a.matches?"dark":"light")};return typeof t.addListener=="function"?t.addListener(i):t.addEventListener("change",i),()=>{typeof t.removeListener=="function"?t.removeListener(i):t.removeEventListener("change",i)}},preventTransition(){const r=document.createElement("style");return r.appendChild(document.createTextNode("*{-webkit-transition:none!important;-moz-transition:none!important;-o-transition:none!important;-ms-transition:none!important;transition:none!important}")),document.head.appendChild(r),()=>{window.getComputedStyle(document.body),requestAnimationFrame(()=>{requestAnimationFrame(()=>{document.head.removeChild(r)})})}}};return n}var X="chakra-ui-color-mode";function L(e){return{ssr:!1,type:"localStorage",get(o){if(!(globalThis!=null&&globalThis.document))return o;let n;try{n=localStorage.getItem(e)||o}catch{}return n||o},set(o){try{localStorage.setItem(e,o)}catch{}}}}var ee=L(X),M=()=>{};function S(e,o){return e.type==="cookie"&&e.ssr?e.get(o):o}function O(e){const{value:o,children:n,options:{useSystemColorMode:r,initialColorMode:t,disableTransitionOnChange:i}={},colorModeManager:a=ee}=e,d=t==="dark"?"dark":"light",[u,p]=l.useState(()=>S(a,d)),[y,b]=l.useState(()=>S(a)),{getSystemTheme:w,setClassName:k,setDataset:x,addListener:$}=l.useMemo(()=>Q({preventTransition:i}),[i]),v=t==="system"&&!u?y:u,c=l.useCallback(m=>{const f=m==="system"?w():m;p(f),k(f==="dark"),x(f),a.set(f)},[a,w,k,x]);A(()=>{t==="system"&&b(w())},[]),l.useEffect(()=>{const m=a.get();if(m){c(m);return}if(t==="system"){c("system");return}c(d)},[a,d,t,c]);const C=l.useCallback(()=>{c(v==="dark"?"light":"dark")},[v,c]);l.useEffect(()=>{if(r)return $(c)},[r,$,c]);const N=l.useMemo(()=>({colorMode:o??v,toggleColorMode:o?M:C,setColorMode:o?M:c,forced:o!==void 0}),[v,C,c,o]);return s.jsx(R.Provider,{value:N,children:n})}O.displayName="ColorModeProvider";var te=["borders","breakpoints","colors","components","config","direction","fonts","fontSizes","fontWeights","letterSpacings","lineHeights","radii","shadows","sizes","space","styles","transition","zIndices"];function re(e){return V(e)?te.every(o=>Object.prototype.hasOwnProperty.call(e,o)):!1}function h(e){return typeof e=="function"}function oe(...e){return o=>e.reduce((n,r)=>r(n),o)}var ne=e=>function(...n){let r=[...n],t=n[n.length-1];return re(t)&&r.length>1?r=r.slice(0,r.length-1):t=e,oe(...r.map(i=>a=>h(i)?i(a):ae(a,i)))(t)},ie=ne(j);function ae(...e){return z({},...e,_)}function _(e,o,n,r){if((h(e)||h(o))&&Object.prototype.hasOwnProperty.call(r,n))return(...t)=>{const i=h(e)?e(...t):e,a=h(o)?o(...t):o;return z({},i,a,_)}}var q=l.createContext({getDocument(){return document},getWindow(){return window}});q.displayName="EnvironmentContext";function I(e){const{children:o,environment:n,disabled:r}=e,t=l.useRef(null),i=l.useMemo(()=>n||{getDocument:()=>{var d,u;return(u=(d=t.current)==null?void 0:d.ownerDocument)!=null?u:document},getWindow:()=>{var d,u;return(u=(d=t.current)==null?void 
0:d.ownerDocument.defaultView)!=null?u:window}},[n]),a=!r||!n;return s.jsxs(q.Provider,{value:i,children:[o,a&&s.jsx("span",{id:"__chakra_env",hidden:!0,ref:t})]})}I.displayName="EnvironmentProvider";var se=e=>{const{children:o,colorModeManager:n,portalZIndex:r,resetScope:t,resetCSS:i=!0,theme:a={},environment:d,cssVarsRoot:u,disableEnvironment:p,disableGlobalStyle:y}=e,b=s.jsx(I,{environment:d,disabled:p,children:o});return s.jsx(D,{theme:a,cssVarsRoot:u,children:s.jsxs(O,{colorModeManager:n,options:a.config,children:[i?s.jsx(J,{scope:t}):s.jsx(B,{}),!y&&s.jsx(F,{}),r?s.jsx(G,{zIndex:r,children:b}):b]})})},le=e=>function({children:n,theme:r=e,toastOptions:t,...i}){return s.jsxs(se,{theme:r,...i,children:[s.jsx(W,{value:t==null?void 0:t.defaultOptions,children:n}),s.jsx(K,{...t})]})},de=le(j);const ue=()=>l.useMemo(()=>({colorScheme:"dark",fontFamily:"'Inter Variable', sans-serif",components:{ScrollArea:{defaultProps:{scrollbarSize:10},styles:{scrollbar:{"&:hover":{backgroundColor:"var(--invokeai-colors-baseAlpha-300)"}},thumb:{backgroundColor:"var(--invokeai-colors-baseAlpha-300)"}}}}}),[]),ce=L("@@invokeai-color-mode");function me({children:e}){const{i18n:o}=H(),n=o.dir(),r=l.useMemo(()=>ie({...U,direction:n}),[n]);l.useEffect(()=>{document.body.dir=n},[n]);const t=ue();return s.jsx(Z,{theme:t,children:s.jsx(de,{theme:r,colorModeManager:ce,toastOptions:Y,children:e})})}const ve=l.memo(me);export{ve as default};

File diff suppressed because one or more lines are too long (3 such files)

View File

@@ -15,7 +15,7 @@
margin: 0;
}
</style>
<script type="module" crossorigin src="./assets/index-ae455ad2.js"></script>
<script type="module" crossorigin src="./assets/index-f820e2e3.js"></script>
</head>
<body dir="ltr">

View File

@@ -113,7 +113,14 @@
"images": "Bilder",
"copy": "Kopieren",
"download": "Runterladen",
"setCurrentImage": "Setze aktuelle Bild"
"setCurrentImage": "Setze aktuelle Bild",
"featuresWillReset": "Wenn Sie dieses Bild löschen, werden diese Funktionen sofort zurückgesetzt.",
"deleteImageBin": "Gelöschte Bilder werden an den Papierkorb Ihres Betriebssystems gesendet.",
"unableToLoad": "Galerie kann nicht geladen werden",
"downloadSelection": "Auswahl herunterladen",
"currentlyInUse": "Dieses Bild wird derzeit in den folgenden Funktionen verwendet:",
"deleteImagePermanent": "Gelöschte Bilder können nicht wiederhergestellt werden.",
"autoAssignBoardOnClick": "Board per Klick automatisch zuweisen"
},
"hotkeys": {
"keyboardShortcuts": "Tastenkürzel",
@@ -323,7 +330,8 @@
},
"nodesHotkeys": "Knoten Tastenkürzel",
"addNodes": {
"title": "Knotenpunkt hinzufügen"
"title": "Knotenpunkt hinzufügen",
"desc": "Öffnet das Menü zum Hinzufügen von Knoten"
}
},
"modelManager": {
@@ -429,7 +437,43 @@
"customConfigFileLocation": "Benutzerdefinierte Konfiguration Datei Speicherort",
"baseModel": "Basis Modell",
"convertToDiffusers": "Konvertiere zu Diffusers",
"diffusersModels": "Diffusers"
"diffusersModels": "Diffusers",
"noCustomLocationProvided": "Kein benutzerdefinierter Standort angegeben",
"onnxModels": "Onnx",
"vaeRepoID": "VAE-Repo-ID",
"weightedSum": "Gewichtete Summe",
"syncModelsDesc": "Wenn Ihre Modelle nicht mit dem Backend synchronisiert sind, können Sie sie mit dieser Option aktualisieren. Dies ist im Allgemeinen praktisch, wenn Sie Ihre models.yaml-Datei manuell aktualisieren oder Modelle zum InvokeAI-Stammordner hinzufügen, nachdem die Anwendung gestartet wurde.",
"vae": "VAE",
"noModels": "Keine Modelle gefunden",
"statusConverting": "Konvertieren",
"sigmoid": "Sigmoid",
"predictionType": "Vorhersagetyp (für Stable Diffusion 2.x-Modelle und gelegentliche Stable Diffusion 1.x-Modelle)",
"selectModel": "Wählen Sie Modell aus",
"repo_id": "Repo-ID",
"modelSyncFailed": "Modellsynchronisierung fehlgeschlagen",
"quickAdd": "Schnell hinzufügen",
"simpleModelDesc": "Geben Sie einen Pfad zu einem lokalen Diffusers-Modell, einem lokalen Checkpoint-/Safetensors-Modell, einer HuggingFace-Repo-ID oder einer Checkpoint-/Diffusers-Modell-URL an.",
"modelDeleted": "Modell gelöscht",
"inpainting": "v1 Ausmalen",
"modelUpdateFailed": "Modellaktualisierung fehlgeschlagen",
"useCustomConfig": "Benutzerdefinierte Konfiguration verwenden",
"settings": "Einstellungen",
"modelConversionFailed": "Modellkonvertierung fehlgeschlagen",
"syncModels": "Modelle synchronisieren",
"mergedModelSaveLocation": "Speicherort",
"modelType": "Modelltyp",
"modelsMerged": "Modelle zusammengeführt",
"modelsMergeFailed": "Modellzusammenführung fehlgeschlagen",
"convertToDiffusersHelpText1": "Dieses Modell wird in das 🧨 Diffusers-Format konvertiert.",
"modelsSynced": "Modelle synchronisiert",
"vaePrecision": "VAE-Präzision",
"mergeModels": "Modelle zusammenführen",
"interpolationType": "Interpolationstyp",
"oliveModels": "Olives",
"variant": "Variante",
"loraModels": "LoRAs",
"modelDeleteFailed": "Modell konnte nicht gelöscht werden",
"mergedModelName": "Zusammengeführter Modellname"
},
"parameters": {
"images": "Bilder",
@@ -716,7 +760,33 @@
"saveControlImage": "Speichere Referenz Bild",
"safe": "Speichern",
"ipAdapterImageFallback": "Kein IP Adapter Bild ausgewählt",
"resetIPAdapterImage": "Zurücksetzen vom IP Adapter Bild"
"resetIPAdapterImage": "Zurücksetzen vom IP Adapter Bild",
"pidi": "PIDI",
"normalBae": "Normales BAE",
"mlsdDescription": "Minimalistischer Liniensegmentdetektor",
"openPoseDescription": "Schätzung der menschlichen Pose mit Openpose",
"control": "Kontrolle",
"coarse": "Coarse",
"crop": "Zuschneiden",
"pidiDescription": "PIDI-Bildverarbeitung",
"mediapipeFace": "Mediapipe Gesichter",
"mlsd": "M-LSD",
"controlMode": "Steuermodus",
"cannyDescription": "Canny Ecken Erkennung",
"lineart": "Lineart",
"lineartAnimeDescription": "Lineart-Verarbeitung im Anime-Stil",
"minConfidence": "Minimales Vertrauen",
"megaControl": "Mega-Kontrolle",
"autoConfigure": "Prozessor automatisch konfigurieren",
"normalBaeDescription": "Normale BAE-Verarbeitung",
"noneDescription": "Es wurde keine Verarbeitung angewendet",
"openPose": "Openpose",
"lineartAnime": "Lineart Anime",
"mediapipeFaceDescription": "Gesichtserkennung mit Mediapipe",
"canny": "Canny",
"hedDescription": "Ganzheitlich verschachtelte Kantenerkennung",
"scribble": "Scribble",
"maxFaces": "Maximal Anzahl Gesichter"
},
"queue": {
"status": "Status",
@@ -758,7 +828,19 @@
"enqueueing": "Stapel in der Warteschlange",
"queueMaxExceeded": "Maximum von {{max_queue_size}} Elementen erreicht, würde {{skip}} Elemente überspringen",
"cancelBatchFailed": "Problem beim Abbruch vom Stapel",
"clearQueueAlertDialog2": "bist du sicher die Warteschlange zu leeren?"
"clearQueueAlertDialog2": "bist du sicher die Warteschlange zu leeren?",
"pruneSucceeded": "{{item_count}} abgeschlossene Elemente aus der Warteschlange entfernt",
"pauseSucceeded": "Prozessor angehalten",
"cancelFailed": "Problem beim Stornieren des Auftrags",
"pauseFailed": "Problem beim Anhalten des Prozessors",
"front": "Vorne",
"pruneTooltip": "Bereinigen Sie {{item_count}} abgeschlossene Aufträge",
"resumeFailed": "Problem beim wieder aufnehmen von Prozessor",
"pruneFailed": "Problem beim leeren der Warteschlange",
"pauseTooltip": "Pause von Prozessor",
"back": "Hinten",
"resumeSucceeded": "Prozessor wieder aufgenommen",
"resumeTooltip": "Prozessor wieder aufnehmen"
},
"metadata": {
"negativePrompt": "Negativ Beschreibung",
@@ -773,7 +855,20 @@
"noMetaData": "Keine Meta-Data gefunden",
"width": "Breite",
"createdBy": "Erstellt von",
"steps": "Schritte"
"steps": "Schritte",
"seamless": "Nahtlos",
"positivePrompt": "Positiver Prompt",
"generationMode": "Generierungsmodus",
"Threshold": "Noise Schwelle",
"seed": "Samen",
"perlin": "Perlin Noise",
"hiresFix": "Optimierung für hohe Auflösungen",
"initImage": "Erstes Bild",
"variations": "Samengewichtspaare",
"vae": "VAE",
"workflow": "Arbeitsablauf",
"scheduler": "Scheduler",
"noRecallParameters": "Es wurden keine Parameter zum Abrufen gefunden"
},
"popovers": {
"noiseUseCPU": {
@@ -811,11 +906,68 @@
"misses": "Cache Nötig",
"hits": "Cache Treffer",
"enable": "Aktivieren",
"clear": "Leeren"
"clear": "Leeren",
"maxCacheSize": "Maximale Cache Größe",
"cacheSize": "Cache Größe"
},
"embedding": {
"noMatchingEmbedding": "Keine passenden Embeddings",
"addEmbedding": "Embedding hinzufügen",
"incompatibleModel": "Inkompatibles Basismodell:"
},
"nodes": {
"booleanPolymorphicDescription": "Eine Sammlung boolescher Werte.",
"colorFieldDescription": "Eine RGBA-Farbe.",
"conditioningCollection": "Konditionierungssammlung",
"addNode": "Knoten hinzufügen",
"conditioningCollectionDescription": "Konditionierung kann zwischen Knoten weitergegeben werden.",
"colorPolymorphic": "Farbpolymorph",
"colorCodeEdgesHelp": "Farbkodieren Sie Kanten entsprechend ihren verbundenen Feldern",
"animatedEdges": "Animierte Kanten",
"booleanCollectionDescription": "Eine Sammlung boolescher Werte.",
"colorField": "Farbe",
"collectionItem": "Objekt in Sammlung",
"animatedEdgesHelp": "Animieren Sie ausgewählte Kanten und Kanten, die mit ausgewählten Knoten verbunden sind",
"cannotDuplicateConnection": "Es können keine doppelten Verbindungen erstellt werden",
"booleanPolymorphic": "Boolesche Polymorphie",
"colorPolymorphicDescription": "Eine Sammlung von Farben.",
"clipFieldDescription": "Tokenizer- und text_encoder-Untermodelle.",
"clipField": "Clip",
"colorCollection": "Eine Sammlung von Farben.",
"boolean": "Boolesche Werte",
"currentImage": "Aktuelles Bild",
"booleanDescription": "Boolesche Werte sind wahr oder falsch.",
"collection": "Sammlung",
"cannotConnectInputToInput": "Eingang kann nicht mit Eingang verbunden werden",
"conditioningField": "Konditionierung",
"cannotConnectOutputToOutput": "Ausgang kann nicht mit Ausgang verbunden werden",
"booleanCollection": "Boolesche Werte Sammlung",
"cannotConnectToSelf": "Es kann keine Verbindung zu sich selbst hergestellt werden",
"colorCodeEdges": "Farbkodierte Kanten",
"addNodeToolTip": "Knoten hinzufügen (Umschalt+A, Leertaste)"
},
"hrf": {
"enableHrf": "Aktivieren Sie die Korrektur für hohe Auflösungen",
"upscaleMethod": "Vergrößerungsmethoden",
"enableHrfTooltip": "Generieren Sie mit einer niedrigeren Anfangsauflösung, skalieren Sie auf die Basisauflösung hoch und führen Sie dann Image-to-Image aus.",
"metadata": {
"strength": "Hochauflösender Fix Stärke",
"enabled": "Hochauflösender Fix aktiviert",
"method": "Hochauflösender Fix Methode"
},
"hrf": "Hochauflösender Fix",
"hrfStrength": "Hochauflösende Fix Stärke",
"strengthTooltip": "Niedrigere Werte führen zu weniger Details, wodurch potenzielle Artefakte reduziert werden können."
},
"models": {
"noMatchingModels": "Keine passenden Modelle",
"loading": "lade",
"noMatchingLoRAs": "Keine passenden LoRAs",
"noLoRAsAvailable": "Keine LoRAs verfügbar",
"noModelsAvailable": "Keine Modelle verfügbar",
"selectModel": "Wählen ein Modell aus",
"noRefinerModelsInstalled": "Keine SDXL Refiner-Modelle installiert",
"noLoRAsInstalled": "Keine LoRAs installiert",
"selectLoRA": "Wählen ein LoRA aus"
}
}

View File

@@ -6,6 +6,7 @@
"flipVertically": "Flip Vertically",
"invokeProgressBar": "Invoke progress bar",
"menu": "Menu",
"mode": "Mode",
"modelSelect": "Model Select",
"modifyConfig": "Modify Config",
"nextImage": "Next Image",
@@ -30,6 +31,10 @@
"cancel": "Cancel",
"changeBoard": "Change Board",
"clearSearch": "Clear Search",
"deleteBoard": "Delete Board",
"deleteBoardAndImages": "Delete Board and Images",
"deleteBoardOnly": "Delete Board Only",
"deletedBoardsCannotbeRestored": "Deleted boards cannot be restored",
"loading": "Loading...",
"menuItemAutoAdd": "Auto-add to this Board",
"move": "Move",
@@ -51,9 +56,12 @@
"cancel": "Cancel",
"close": "Close",
"on": "On",
"checkpoint": "Checkpoint",
"communityLabel": "Community",
"controlNet": "ControlNet",
"controlAdapter": "Control Adapter",
"data": "Data",
"details": "Details",
"ipAdapter": "IP Adapter",
"t2iAdapter": "T2I Adapter",
"darkMode": "Dark Mode",
@@ -65,6 +73,7 @@
"imagePrompt": "Image Prompt",
"imageFailedToLoad": "Unable to Load Image",
"img2img": "Image To Image",
"inpaint": "inpaint",
"langArabic": "العربية",
"langBrPortuguese": "Português do Brasil",
"langDutch": "Nederlands",
@@ -93,6 +102,8 @@
"nodes": "Workflow Editor",
"nodesDesc": "A node based system for the generation of images is under development currently. Stay tuned for updates about this amazing feature.",
"openInNewTab": "Open in New Tab",
"outpaint": "outpaint",
"outputs": "Outputs",
"postProcessDesc1": "Invoke AI offers a wide variety of post processing features. Image Upscaling and Face Restoration are already available in the WebUI. You can access them from the Advanced Options menu of the Text To Image and Image To Image tabs. You can also process images directly, using the image action buttons above the current image display or in the viewer.",
"postProcessDesc2": "A dedicated UI will be released soon to facilitate more advanced post processing workflows.",
"postProcessDesc3": "The Invoke AI Command Line Interface offers various other features including Embiggen.",
@@ -100,7 +111,9 @@
"postProcessing": "Post Processing",
"random": "Random",
"reportBugLabel": "Report Bug",
"safetensors": "Safetensors",
"settingsLabel": "Settings",
"simple": "Simple",
"statusConnected": "Connected",
"statusConvertingModel": "Converting Model",
"statusDisconnected": "Disconnected",
@@ -127,6 +140,7 @@
"statusSavingImage": "Saving Image",
"statusUpscaling": "Upscaling",
"statusUpscalingESRGAN": "Upscaling (ESRGAN)",
"template": "Template",
"training": "Training",
"trainingDesc1": "A dedicated workflow for training your own embeddings and checkpoints using Textual Inversion and Dreambooth from the web interface.",
"trainingDesc2": "InvokeAI already supports training custom embeddourings using Textual Inversion using the main script.",
@@ -214,6 +228,7 @@
"setControlImageDimensions": "Set Control Image Dimensions To W/H",
"showAdvanced": "Show Advanced",
"toggleControlNet": "Toggle this ControlNet",
"unstarImage": "Unstar Image",
"w": "W",
"weight": "Weight",
"enableIPAdapter": "Enable IP Adapter",
@@ -279,6 +294,7 @@
"next": "Next",
"status": "Status",
"total": "Total",
"time": "Time",
"pending": "Pending",
"in_progress": "In Progress",
"completed": "Completed",
@@ -286,6 +302,7 @@
"canceled": "Canceled",
"completedIn": "Completed in",
"batch": "Batch",
"batchFieldValues": "Batch Field Values",
"item": "Item",
"session": "Session",
"batchValues": "Batch Values",
@@ -335,6 +352,7 @@
"loading": "Loading",
"loadMore": "Load More",
"maintainAspectRatio": "Maintain Aspect Ratio",
"noImageSelected": "No Image Selected",
"noImagesInGallery": "No Images to Display",
"setCurrentImage": "Set as Current Image",
"showGenerations": "Show Generations",
@@ -583,7 +601,7 @@
"strength": "Image to image strength",
"Threshold": "Noise Threshold",
"variations": "Seed-weight pairs",
"vae": "VAE",
"vae": "VAE",
"width": "Width",
"workflow": "Workflow"
},
@@ -606,6 +624,7 @@
"cannotUseSpaces": "Cannot Use Spaces",
"checkpointFolder": "Checkpoint Folder",
"checkpointModels": "Checkpoints",
"checkpointOrSafetensors": "$t(common.checkpoint) / $t(common.safetensors)",
"clearCheckpointFolder": "Clear Checkpoint Folder",
"closeAdvanced": "Close Advanced",
"config": "Config",
@@ -685,6 +704,7 @@
"nameValidationMsg": "Enter a name for your model",
"noCustomLocationProvided": "No Custom Location Provided",
"noModels": "No Models Found",
"noModelSelected": "No Model Selected",
"noModelsFound": "No Models Found",
"none": "none",
"notLoaded": "not loaded",
@@ -730,6 +750,8 @@
"widthValidationMsg": "Default width of your model."
},
"models": {
"addLora": "Add LoRA",
"esrganModel": "ESRGAN Model",
"loading": "loading",
"noLoRAsAvailable": "No LoRAs available",
"noMatchingLoRAs": "No matching LoRAs",
@@ -920,7 +942,10 @@
"unknownTemplate": "Unknown Template",
"unkownInvocation": "Unknown Invocation type",
"updateNode": "Update Node",
"updateAllNodes": "Update All Nodes",
"updateApp": "Update App",
"unableToUpdateNodes_one": "Unable to update {{count}} node",
"unableToUpdateNodes_other": "Unable to update {{count}} nodes",
"vaeField": "Vae",
"vaeFieldDescription": "Vae submodel.",
"vaeModelField": "VAE",
@@ -1007,6 +1032,7 @@
"maskAdjustmentsHeader": "Mask Adjustments",
"maskBlur": "Blur",
"maskBlurMethod": "Blur Method",
"maskEdge": "Mask Edge",
"negativePromptPlaceholder": "Negative Prompt",
"noiseSettings": "Noise",
"noiseThreshold": "Noise Threshold",
@@ -1054,6 +1080,7 @@
"upscale": "Upscale (Shift + U)",
"upscaleImage": "Upscale Image",
"upscaling": "Upscaling",
"unmasked": "Unmasked",
"useAll": "Use All",
"useCpuNoise": "Use CPU Noise",
"cpuNoise": "CPU Noise",
@@ -1075,6 +1102,7 @@
"dynamicPrompts": "Dynamic Prompts",
"enableDynamicPrompts": "Enable Dynamic Prompts",
"maxPrompts": "Max Prompts",
"promptsPreview": "Prompts Preview",
"promptsWithCount_one": "{{count}} Prompt",
"promptsWithCount_other": "{{count}} Prompts",
"seedBehaviour": {
@@ -1114,7 +1142,10 @@
"displayHelpIcons": "Display Help Icons",
"displayInProgress": "Display Progress Images",
"enableImageDebugging": "Enable Image Debugging",
"enableInformationalPopovers": "Enable Informational Popovers",
"enableInvisibleWatermark": "Enable Invisible Watermark",
"enableNodesEditor": "Enable Nodes Editor",
"enableNSFWChecker": "Enable NSFW Checker",
"experimental": "Experimental",
"favoriteSchedulers": "Favorite Schedulers",
"favoriteSchedulersPlaceholder": "No schedulers favorited",
@@ -1214,7 +1245,8 @@
"sentToImageToImage": "Sent To Image To Image",
"sentToUnifiedCanvas": "Sent to Unified Canvas",
"serverError": "Server Error",
"setCanvasInitialImage": "Set as canvas initial image",
"setAsCanvasInitialImage": "Set as canvas initial image",
"setCanvasInitialImage": "Set canvas initial image",
"setControlImage": "Set as control image",
"setIPAdapterImage": "Set as IP Adapter Image",
"setInitialImage": "Set as initial image",
@@ -1272,11 +1304,15 @@
},
"compositingBlur": {
"heading": "Blur",
"paragraphs": ["The blur radius of the mask."]
"paragraphs": [
"The blur radius of the mask."
]
},
"compositingBlurMethod": {
"heading": "Blur Method",
"paragraphs": ["The method of blur applied to the masked area."]
"paragraphs": [
"The method of blur applied to the masked area."
]
},
"compositingCoherencePass": {
"heading": "Coherence Pass",
@@ -1286,7 +1322,9 @@
},
"compositingCoherenceMode": {
"heading": "Mode",
"paragraphs": ["The mode of the Coherence Pass."]
"paragraphs": [
"The mode of the Coherence Pass."
]
},
"compositingCoherenceSteps": {
"heading": "Steps",
@@ -1304,7 +1342,9 @@
},
"compositingMaskAdjustments": {
"heading": "Mask Adjustments",
"paragraphs": ["Adjust the mask."]
"paragraphs": [
"Adjust the mask."
]
},
"controlNetBeginEnd": {
"heading": "Begin / End Step Percentage",
@@ -1362,7 +1402,9 @@
},
"infillMethod": {
"heading": "Infill Method",
"paragraphs": ["Method to infill the selected area."]
"paragraphs": [
"Method to infill the selected area."
]
},
"lora": {
"heading": "LoRA Weight",

View File

@@ -1222,7 +1222,8 @@
"seamless": "无缝",
"fit": "图生图匹配",
"recallParameters": "召回参数",
"noRecallParameters": "未找到要召回的参数"
"noRecallParameters": "未找到要召回的参数",
"vae": "VAE"
},
"models": {
"noMatchingModels": "无相匹配的模型",
@@ -1501,5 +1502,18 @@
"clear": "清除",
"maxCacheSize": "最大缓存大小",
"cacheSize": "缓存大小"
},
"hrf": {
"enableHrf": "启用高分辨率修复",
"upscaleMethod": "放大方法",
"enableHrfTooltip": "使用较低的分辨率进行初始生成,放大到基础分辨率后进行图生图。",
"metadata": {
"strength": "高分辨率修复强度",
"enabled": "高分辨率修复已启用",
"method": "高分辨率修复方法"
},
"hrf": "高分辨率修复",
"hrfStrength": "高分辨率修复强度",
"strengthTooltip": "值越低细节越少,但可以减少部分潜在的伪影。"
}
}

View File

@@ -72,6 +72,7 @@ import { addStagingAreaImageSavedListener } from './listeners/stagingAreaImageSa
import { addTabChangedListener } from './listeners/tabChanged';
import { addUpscaleRequestedListener } from './listeners/upscaleRequested';
import { addWorkflowLoadedListener } from './listeners/workflowLoaded';
import { addUpdateAllNodesRequestedListener } from './listeners/updateAllNodesRequested';
export const listenerMiddleware = createListenerMiddleware();
@@ -178,6 +179,7 @@ addReceivedOpenAPISchemaListener();
// Workflows
addWorkflowLoadedListener();
addUpdateAllNodesRequestedListener();
// DND
addImageDroppedListener();

View File

@@ -8,7 +8,6 @@ import {
selectControlAdapterById,
} from 'features/controlAdapters/store/controlAdaptersSlice';
import { isControlNetOrT2IAdapter } from 'features/controlAdapters/store/types';
-import { SAVE_IMAGE } from 'features/nodes/util/graphBuilders/constants';
import { addToast } from 'features/system/store/systemSlice';
import { t } from 'i18next';
import { imagesApi } from 'services/api/endpoints/images';
@@ -38,6 +37,7 @@ export const addControlNetImageProcessedListener = () => {
// ControlNet one-off procressing graph is just the processor node, no edges.
// Also we need to grab the image.
const nodeId = ca.processorNode.id;
const enqueueBatchArg: BatchConfig = {
prepend: true,
batch: {
@@ -46,27 +46,10 @@ export const addControlNetImageProcessedListener = () => {
[ca.processorNode.id]: {
...ca.processorNode,
is_intermediate: true,
use_cache: false,
image: { image_name: ca.controlImage },
},
-[SAVE_IMAGE]: {
-  id: SAVE_IMAGE,
-  type: 'save_image',
-  is_intermediate: true,
-  use_cache: false,
-},
},
-edges: [
-  {
-    source: {
-      node_id: ca.processorNode.id,
-      field: 'image',
-    },
-    destination: {
-      node_id: SAVE_IMAGE,
-      field: 'image',
-    },
-  },
-],
},
runs: 1,
},
@@ -90,7 +73,7 @@ export const addControlNetImageProcessedListener = () => {
socketInvocationComplete.match(action) &&
action.payload.data.queue_batch_id ===
enqueueResult.batch.batch_id &&
-action.payload.data.source_node_id === SAVE_IMAGE
+action.payload.data.source_node_id === nodeId
);
// We still have to check the output type

View File

@@ -79,7 +79,7 @@ export const addImageUploadedFulfilledListener = () => {
dispatch(
addToast({
...DEFAULT_UPLOADED_TOAST,
-description: t('toast.setCanvasInitialImage'),
+description: t('toast.setAsCanvasInitialImage'),
})
);
return;

View File

@@ -7,7 +7,10 @@ import {
imageSelected,
} from 'features/gallery/store/gallerySlice';
import { IMAGE_CATEGORIES } from 'features/gallery/store/types';
-import { CANVAS_OUTPUT } from 'features/nodes/util/graphBuilders/constants';
+import {
+  LINEAR_UI_OUTPUT,
+  nodeIDDenyList,
+} from 'features/nodes/util/graphBuilders/constants';
import { boardsApi } from 'services/api/endpoints/boards';
import { imagesApi } from 'services/api/endpoints/images';
import { isImageOutput } from 'services/api/guards';
@@ -19,7 +22,7 @@ import {
import { startAppListening } from '../..';
// These nodes output an image, but do not actually *save* an image, so we don't want to handle the gallery logic on them
-const nodeDenylist = ['load_image', 'image'];
+const nodeTypeDenylist = ['load_image', 'image'];
export const addInvocationCompleteEventListener = () => {
startAppListening({
@@ -32,22 +35,31 @@ export const addInvocationCompleteEventListener = () => {
`Invocation complete (${action.payload.data.node.type})`
);
-const { result, node, queue_batch_id } = data;
+const { result, node, queue_batch_id, source_node_id } = data;
// This complete event has an associated image output
-if (isImageOutput(result) && !nodeDenylist.includes(node.type)) {
+if (
+  isImageOutput(result) &&
+  !nodeTypeDenylist.includes(node.type) &&
+  !nodeIDDenyList.includes(source_node_id)
+) {
const { image_name } = result.image;
const { canvas, gallery } = getState();
// This populates the `getImageDTO` cache
-const imageDTO = await dispatch(
-  imagesApi.endpoints.getImageDTO.initiate(image_name)
-).unwrap();
+const imageDTORequest = dispatch(
+  imagesApi.endpoints.getImageDTO.initiate(image_name, {
+    forceRefetch: true,
+  })
+);
+const imageDTO = await imageDTORequest.unwrap();
+imageDTORequest.unsubscribe();
// Add canvas images to the staging area
if (
canvas.batchIds.includes(queue_batch_id) &&
-[CANVAS_OUTPUT].includes(data.source_node_id)
+[LINEAR_UI_OUTPUT].includes(data.source_node_id)
) {
dispatch(addImageToStagingArea(imageDTO));
}

View File

@@ -0,0 +1,52 @@
import {
  getNeedsUpdate,
  updateNode,
} from 'features/nodes/hooks/useNodeVersion';
import { updateAllNodesRequested } from 'features/nodes/store/actions';
import { nodeReplaced } from 'features/nodes/store/nodesSlice';
import { startAppListening } from '..';
import { logger } from 'app/logging/logger';
import { addToast } from 'features/system/store/systemSlice';
import { makeToast } from 'features/system/util/makeToast';
import { t } from 'i18next';

export const addUpdateAllNodesRequestedListener = () => {
  startAppListening({
    actionCreator: updateAllNodesRequested,
    effect: (action, { dispatch, getState }) => {
      const log = logger('nodes');
      const nodes = getState().nodes.nodes;
      const templates = getState().nodes.nodeTemplates;

      let unableToUpdateCount = 0;

      nodes.forEach((node) => {
        const template = templates[node.data.type];
        const needsUpdate = getNeedsUpdate(node, template);
        const updatedNode = updateNode(node, template);
        if (!updatedNode) {
          if (needsUpdate) {
            unableToUpdateCount++;
          }
          return;
        }
        dispatch(nodeReplaced({ nodeId: updatedNode.id, node: updatedNode }));
      });

      if (unableToUpdateCount) {
        log.warn(
          `Unable to update ${unableToUpdateCount} nodes. Please report this issue.`
        );
        dispatch(
          addToast(
            makeToast({
              title: t('nodes.unableToUpdateNodes', {
                count: unableToUpdateCount,
              }),
            })
          )
        );
      }
    },
  });
};

View File

@@ -17,6 +17,7 @@ import IAIInformationalPopover from 'common/components/IAIInformationalPopover/I
import ScrollableContent from 'features/nodes/components/sidePanel/ScrollableContent';
import { memo } from 'react';
import { FaCircleExclamation } from 'react-icons/fa6';
import { useTranslation } from 'react-i18next';
const selector = createSelector(
stateSelector,
@@ -38,6 +39,7 @@ const listItemStyles: ChakraProps['sx'] = {
};
const ParamDynamicPromptsPreview = () => {
const { t } = useTranslation();
const { prompts, parsingError, isLoading, isError } =
useAppSelector(selector);
@@ -69,7 +71,7 @@ const ParamDynamicPromptsPreview = () => {
overflow="hidden"
textOverflow="ellipsis"
>
-Prompts Preview ({prompts.length})
+{t('dynamicPrompts.promptsPreview')} ({prompts.length})
{parsingError && ` - ${parsingError}`}
</FormLabel>
<Flex

View File

@@ -115,7 +115,7 @@ const DeleteBoardModal = (props: Props) => {
<AlertDialogOverlay>
<AlertDialogContent>
<AlertDialogHeader fontSize="lg" fontWeight="bold">
-Delete {boardToDelete.board_name}
+{t('controlnet.delete')} {boardToDelete.board_name}
</AlertDialogHeader>
<AlertDialogBody>
@@ -136,7 +136,7 @@ const DeleteBoardModal = (props: Props) => {
bottomMessage={t('boards.bottomMessage')}
/>
)}
-<Text>Deleted boards cannot be restored.</Text>
+<Text>{t('boards.deletedBoardsCannotbeRestored')}</Text>
<Text>
{canRestoreDeletedImagesFromBin
? t('gallery.deleteImageBin')
@@ -149,21 +149,21 @@ const DeleteBoardModal = (props: Props) => {
sx={{ justifyContent: 'space-between', width: 'full', gap: 2 }}
>
<IAIButton ref={cancelRef} onClick={handleClose}>
Cancel
{t('boards.cancel')}
</IAIButton>
<IAIButton
colorScheme="warning"
isLoading={isLoading}
onClick={handleDeleteBoardOnly}
>
Delete Board Only
{t('boards.deleteBoardOnly')}
</IAIButton>
<IAIButton
colorScheme="error"
isLoading={isLoading}
onClick={handleDeleteBoardAndImages}
>
Delete Board and Images
{t('boards.deleteBoardAndImages')}
</IAIButton>
</Flex>
</AlertDialogFooter>

View File

@@ -2,13 +2,14 @@ import { MenuItem } from '@chakra-ui/react';
import { memo, useCallback } from 'react';
import { FaTrash } from 'react-icons/fa';
import { BoardDTO } from 'services/api/types';
import { useTranslation } from 'react-i18next';
type Props = {
board: BoardDTO;
setBoardToDelete?: (board?: BoardDTO) => void;
};
const GalleryBoardContextMenuItems = ({ board, setBoardToDelete }: Props) => {
const { t } = useTranslation();
const handleDelete = useCallback(() => {
if (!setBoardToDelete) {
return;
@@ -34,7 +35,7 @@ const GalleryBoardContextMenuItems = ({ board, setBoardToDelete }: Props) => {
icon={<FaTrash />}
onClick={handleDelete}
>
Delete Board
{t('boards.deleteBoard')}
</MenuItem>
</>
);

View File

@@ -170,7 +170,10 @@ const CurrentImagePreview = () => {
useThumbailFallback
dropLabel={t('gallery.setCurrentImage')}
noContentFallback={
<IAINoContentFallback icon={FaImage} label="No image selected" />
<IAINoContentFallback
icon={FaImage}
label={t('gallery.noImageSelected')}
/>
}
dataTestId="image-preview"
/>

View File

@@ -104,7 +104,7 @@ const MultipleSelectionMenuItems = () => {
</MenuItem>
)}
<MenuItem icon={<FaFolder />} onClickCapture={handleChangeBoard}>
Change Board
{t('boards.changeBoard')}
</MenuItem>
<MenuItem
sx={{ color: 'error.600', _dark: { color: 'error.300' } }}

View File

@@ -224,14 +224,14 @@ const SingleSelectionMenuItems = (props: SingleSelectionMenuItemsProps) => {
</MenuItem>
)}
<MenuItem icon={<FaFolder />} onClickCapture={handleChangeBoard}>
Change Board
{t('boards.changeBoard')}
</MenuItem>
{imageDTO.starred ? (
<MenuItem
icon={customStarUi ? customStarUi.off.icon : <MdStar />}
onClickCapture={handleUnstarImage}
>
{customStarUi ? customStarUi.off.text : `Unstar Image`}
{customStarUi ? customStarUi.off.text : t('controlnet.unstarImage')}
</MenuItem>
) : (
<MenuItem

View File

@@ -157,6 +157,8 @@ const ImageMetadataActions = (props: Props) => {
return null;
}
console.log(metadata);
return (
<>
{metadata.created_by && (

View File

@@ -95,7 +95,7 @@ const ParamLoRASelect = () => {
return (
<IAIMantineSearchableSelect
placeholder={data.length === 0 ? 'All LoRAs added' : 'Add LoRA'}
placeholder={data.length === 0 ? 'All LoRAs added' : t('models.addLora')}
value={null}
data={data}
nothingFound="No matching LoRAs"

View File

@@ -3,8 +3,10 @@ import IAIButton from 'common/components/IAIButton';
import { useCallback, useState } from 'react';
import AdvancedAddModels from './AdvancedAddModels';
import SimpleAddModels from './SimpleAddModels';
import { useTranslation } from 'react-i18next';
export default function AddModels() {
const { t } = useTranslation();
const [addModelMode, setAddModelMode] = useState<'simple' | 'advanced'>(
'simple'
);
@@ -27,14 +29,14 @@ export default function AddModels() {
isChecked={addModelMode == 'simple'}
onClick={handleAddModelSimple}
>
Simple
{t('common.simple')}
</IAIButton>
<IAIButton
size="sm"
isChecked={addModelMode == 'advanced'}
onClick={handleAddModelAdvanced}
>
Advanced
{t('common.advanced')}
</IAIButton>
</ButtonGroup>
<Flex

View File

@@ -1,16 +1,11 @@
import { Flex } from '@chakra-ui/react';
import { SelectItem } from '@mantine/core';
import IAIMantineSelect from 'common/components/IAIMantineSelect';
import { useCallback, useState } from 'react';
import { useCallback, useMemo, useState } from 'react';
import AdvancedAddCheckpoint from './AdvancedAddCheckpoint';
import AdvancedAddDiffusers from './AdvancedAddDiffusers';
import { useTranslation } from 'react-i18next';
export const advancedAddModeData: SelectItem[] = [
{ label: 'Diffusers', value: 'diffusers' },
{ label: 'Checkpoint / Safetensors', value: 'checkpoint' },
];
export type ManualAddMode = 'diffusers' | 'checkpoint';
export default function AdvancedAddModels() {
@@ -25,6 +20,14 @@ export default function AdvancedAddModels() {
setAdvancedAddMode(v as ManualAddMode);
}, []);
const advancedAddModeData: SelectItem[] = useMemo(
() => [
{ label: t('modelManager.diffusersModels'), value: 'diffusers' },
{ label: t('modelManager.checkpointOrSafetensors'), value: 'checkpoint' },
],
[t]
);
return (
<Flex flexDirection="column" gap={4} width="100%">
<IAIMantineSelect

View File

@@ -4,13 +4,14 @@ import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import IAIIconButton from 'common/components/IAIIconButton';
import IAIMantineSelect from 'common/components/IAIMantineSelect';
import { motion } from 'framer-motion';
import { useCallback, useEffect, useState } from 'react';
import { useCallback, useEffect, useState, useMemo } from 'react';
import { FaTimes } from 'react-icons/fa';
import { setAdvancedAddScanModel } from '../../store/modelManagerSlice';
import AdvancedAddCheckpoint from './AdvancedAddCheckpoint';
import AdvancedAddDiffusers from './AdvancedAddDiffusers';
import { ManualAddMode, advancedAddModeData } from './AdvancedAddModels';
import { ManualAddMode } from './AdvancedAddModels';
import { useTranslation } from 'react-i18next';
import { SelectItem } from '@mantine/core';
export default function ScanAdvancedAddModels() {
const advancedAddScanModel = useAppSelector(
@@ -19,6 +20,14 @@ export default function ScanAdvancedAddModels() {
const { t } = useTranslation();
const advancedAddModeData: SelectItem[] = useMemo(
() => [
{ label: t('modelManager.diffusersModels'), value: 'diffusers' },
{ label: t('modelManager.checkpointOrSafetensors'), value: 'checkpoint' },
],
[t]
);
const [advancedAddMode, setAdvancedAddMode] =
useState<ManualAddMode>('diffusers');

View File

@@ -227,7 +227,7 @@ export default function MergeModelsPanel() {
<Flex columnGap={4}>
<IAIMantineSelect
label="Model Type"
label={t('modelManager.modelType')}
w="100%"
data={baseModelTypeSelectData}
value={baseModel}

View File

@@ -13,6 +13,7 @@ import DiffusersModelEdit from './ModelManagerPanel/DiffusersModelEdit';
import LoRAModelEdit from './ModelManagerPanel/LoRAModelEdit';
import ModelList from './ModelManagerPanel/ModelList';
import { ALL_BASE_MODELS } from 'services/api/constants';
import { useTranslation } from 'react-i18next';
export default function ModelManagerPanel() {
const [selectedModelId, setSelectedModelId] = useState<string>();
@@ -45,6 +46,7 @@ type ModelEditProps = {
};
const ModelEdit = (props: ModelEditProps) => {
const { t } = useTranslation();
const { model } = props;
if (model?.model_format === 'checkpoint') {
@@ -75,7 +77,7 @@ const ModelEdit = (props: ModelEditProps) => {
userSelect: 'none',
}}
>
<Text variant="subtext">No Model Selected</Text>
<Text variant="subtext">{t('modelManager.noModelSelected')}</Text>
</Flex>
);
};

View File

@@ -54,7 +54,7 @@ export default function SyncModelsButton(props: SyncModelsButtonProps) {
minW="max-content"
{...rest}
>
Sync Models
{t('modelManager.syncModels')}
</IAIButton>
) : (
<IAIIconButton

View File

@@ -3,7 +3,7 @@ import { memo } from 'react';
import NodeCollapseButton from '../common/NodeCollapseButton';
import NodeTitle from '../common/NodeTitle';
import InvocationNodeCollapsedHandles from './InvocationNodeCollapsedHandles';
import InvocationNodeNotes from './InvocationNodeNotes';
import InvocationNodeInfoIcon from './InvocationNodeInfoIcon';
import InvocationNodeStatusIndicator from './InvocationNodeStatusIndicator';
type Props = {
@@ -34,7 +34,7 @@ const InvocationNodeHeader = ({ nodeId, isOpen }: Props) => {
<NodeTitle nodeId={nodeId} />
<Flex alignItems="center">
<InvocationNodeStatusIndicator nodeId={nodeId} />
<InvocationNodeNotes nodeId={nodeId} />
<InvocationNodeInfoIcon nodeId={nodeId} />
</Flex>
{!isOpen && <InvocationNodeCollapsedHandles nodeId={nodeId} />}
</Flex>

View File

@@ -1,85 +1,39 @@
import {
Flex,
Icon,
Modal,
ModalBody,
ModalCloseButton,
ModalContent,
ModalFooter,
ModalHeader,
ModalOverlay,
Text,
Tooltip,
useDisclosure,
} from '@chakra-ui/react';
import { Flex, Icon, Text, Tooltip } from '@chakra-ui/react';
import { compare } from 'compare-versions';
import { useNodeData } from 'features/nodes/hooks/useNodeData';
import { useNodeLabel } from 'features/nodes/hooks/useNodeLabel';
import { useNodeTemplate } from 'features/nodes/hooks/useNodeTemplate';
import { useNodeTemplateTitle } from 'features/nodes/hooks/useNodeTemplateTitle';
import { useNodeVersion } from 'features/nodes/hooks/useNodeVersion';
import { isInvocationNodeData } from 'features/nodes/types/types';
import { memo, useMemo } from 'react';
import { FaInfoCircle } from 'react-icons/fa';
import NotesTextarea from './NotesTextarea';
import { useDoNodeVersionsMatch } from 'features/nodes/hooks/useDoNodeVersionsMatch';
import { useTranslation } from 'react-i18next';
import { FaInfoCircle } from 'react-icons/fa';
interface Props {
nodeId: string;
}
const InvocationNodeNotes = ({ nodeId }: Props) => {
const { isOpen, onOpen, onClose } = useDisclosure();
const label = useNodeLabel(nodeId);
const title = useNodeTemplateTitle(nodeId);
const doVersionsMatch = useDoNodeVersionsMatch(nodeId);
const { t } = useTranslation();
const InvocationNodeInfoIcon = ({ nodeId }: Props) => {
const { needsUpdate } = useNodeVersion(nodeId);
return (
<>
<Tooltip
label={<TooltipContent nodeId={nodeId} />}
placement="top"
shouldWrapChildren
>
<Flex
className="nodrag"
onClick={onOpen}
sx={{
alignItems: 'center',
justifyContent: 'center',
w: 8,
h: 8,
cursor: 'pointer',
}}
>
<Icon
as={FaInfoCircle}
sx={{
boxSize: 4,
w: 8,
color: doVersionsMatch ? 'base.400' : 'error.400',
}}
/>
</Flex>
</Tooltip>
<Modal isOpen={isOpen} onClose={onClose} isCentered>
<ModalOverlay />
<ModalContent>
<ModalHeader>{label || title || t('nodes.unknownNode')}</ModalHeader>
<ModalCloseButton />
<ModalBody>
<NotesTextarea nodeId={nodeId} />
</ModalBody>
<ModalFooter />
</ModalContent>
</Modal>
</>
<Tooltip
label={<TooltipContent nodeId={nodeId} />}
placement="top"
shouldWrapChildren
>
<Icon
as={FaInfoCircle}
sx={{
boxSize: 4,
w: 8,
color: needsUpdate ? 'error.400' : 'base.400',
}}
/>
</Tooltip>
);
};
export default memo(InvocationNodeNotes);
export default memo(InvocationNodeInfoIcon);
const TooltipContent = memo(({ nodeId }: { nodeId: string }) => {
const data = useNodeData(nodeId);

View File

@@ -3,15 +3,22 @@ import { useAppDispatch } from 'app/store/storeHooks';
import IAIIconButton from 'common/components/IAIIconButton';
import { addNodePopoverOpened } from 'features/nodes/store/nodesSlice';
import { memo, useCallback } from 'react';
import { FaPlus } from 'react-icons/fa';
import { FaPlus, FaSync } from 'react-icons/fa';
import { useTranslation } from 'react-i18next';
import IAIButton from 'common/components/IAIButton';
import { useGetNodesNeedUpdate } from 'features/nodes/hooks/useGetNodesNeedUpdate';
import { updateAllNodesRequested } from 'features/nodes/store/actions';
const TopLeftPanel = () => {
const dispatch = useAppDispatch();
const { t } = useTranslation();
const nodesNeedUpdate = useGetNodesNeedUpdate();
const handleOpenAddNodePopover = useCallback(() => {
dispatch(addNodePopoverOpened());
}, [dispatch]);
const handleClickUpdateNodes = useCallback(() => {
dispatch(updateAllNodesRequested());
}, [dispatch]);
return (
<Flex sx={{ gap: 2, position: 'absolute', top: 2, insetInlineStart: 2 }}>
@@ -21,6 +28,11 @@ const TopLeftPanel = () => {
icon={<FaPlus />}
onClick={handleOpenAddNodePopover}
/>
{nodesNeedUpdate && (
<IAIButton leftIcon={<FaSync />} onClick={handleClickUpdateNodes}>
{t('nodes.updateAllNodes')}
</IAIButton>
)}
</Flex>
);
};

View File

@@ -127,7 +127,7 @@ const WorkflowEditorSettings = forwardRef((_, ref) => {
py: 4,
}}
>
<Heading size="sm">General</Heading>
<Heading size="sm">{t('parameters.general')}</Heading>
<IAISwitch
formLabelProps={formLabelProps}
onChange={handleChangeShouldAnimate}
@@ -159,7 +159,7 @@ const WorkflowEditorSettings = forwardRef((_, ref) => {
helperText={t('nodes.fullyContainNodesHelp')}
/>
<Heading size="sm" pt={4}>
Advanced
{t('common.advanced')}
</Heading>
<IAISwitch
formLabelProps={formLabelProps}

View File

@@ -0,0 +1,125 @@
import {
Box,
Flex,
FormControl,
FormLabel,
HStack,
Text,
} from '@chakra-ui/react';
import { createSelector } from '@reduxjs/toolkit';
import { stateSelector } from 'app/store/store';
import { useAppSelector } from 'app/store/storeHooks';
import { defaultSelectorOptions } from 'app/store/util/defaultMemoizeOptions';
import IAIIconButton from 'common/components/IAIIconButton';
import { IAINoContentFallback } from 'common/components/IAIImageFallback';
import { useNodeVersion } from 'features/nodes/hooks/useNodeVersion';
import {
InvocationNodeData,
InvocationTemplate,
isInvocationNode,
} from 'features/nodes/types/types';
import { memo } from 'react';
import { useTranslation } from 'react-i18next';
import { FaSync } from 'react-icons/fa';
import { Node } from 'reactflow';
import NotesTextarea from '../../flow/nodes/Invocation/NotesTextarea';
import ScrollableContent from '../ScrollableContent';
import EditableNodeTitle from './details/EditableNodeTitle';
const selector = createSelector(
stateSelector,
({ nodes }) => {
const lastSelectedNodeId =
nodes.selectedNodes[nodes.selectedNodes.length - 1];
const lastSelectedNode = nodes.nodes.find(
(node) => node.id === lastSelectedNodeId
);
const lastSelectedNodeTemplate = lastSelectedNode
? nodes.nodeTemplates[lastSelectedNode.data.type]
: undefined;
return {
node: lastSelectedNode,
template: lastSelectedNodeTemplate,
};
},
defaultSelectorOptions
);
const InspectorDetailsTab = () => {
const { node, template } = useAppSelector(selector);
const { t } = useTranslation();
if (!template || !isInvocationNode(node)) {
return (
<IAINoContentFallback label={t('nodes.noNodeSelected')} icon={null} />
);
}
return <Content node={node} template={template} />;
};
export default memo(InspectorDetailsTab);
const Content = (props: {
node: Node<InvocationNodeData>;
template: InvocationTemplate;
}) => {
const { t } = useTranslation();
const { needsUpdate, updateNode } = useNodeVersion(props.node.id);
return (
<Box
sx={{
position: 'relative',
w: 'full',
h: 'full',
}}
>
<ScrollableContent>
<Flex
sx={{
flexDir: 'column',
position: 'relative',
p: 1,
gap: 2,
w: 'full',
}}
>
<EditableNodeTitle nodeId={props.node.data.id} />
<HStack>
<FormControl>
<FormLabel>Node Type</FormLabel>
<Text fontSize="sm" fontWeight={600}>
{props.template.title}
</Text>
</FormControl>
<Flex
flexDir="row"
alignItems="center"
justifyContent="space-between"
w="full"
>
<FormControl isInvalid={needsUpdate}>
<FormLabel>Node Version</FormLabel>
<Text fontSize="sm" fontWeight={600}>
{props.node.data.version}
</Text>
</FormControl>
{needsUpdate && (
<IAIIconButton
aria-label={t('nodes.updateNode')}
tooltip={t('nodes.updateNode')}
icon={<FaSync />}
onClick={updateNode}
/>
)}
</Flex>
</HStack>
<NotesTextarea nodeId={props.node.data.id} />
</Flex>
</ScrollableContent>
</Box>
);
};

View File

@@ -10,9 +10,11 @@ import { memo } from 'react';
import InspectorDataTab from './InspectorDataTab';
import InspectorOutputsTab from './InspectorOutputsTab';
import InspectorTemplateTab from './InspectorTemplateTab';
// import InspectorDetailsTab from './InspectorDetailsTab';
import { useTranslation } from 'react-i18next';
import InspectorDetailsTab from './InspectorDetailsTab';
const InspectorPanel = () => {
const { t } = useTranslation();
return (
<Flex
layerStyle="first"
@@ -30,16 +32,16 @@ const InspectorPanel = () => {
sx={{ display: 'flex', flexDir: 'column', w: 'full', h: 'full' }}
>
<TabList>
{/* <Tab>Details</Tab> */}
<Tab>Outputs</Tab>
<Tab>Data</Tab>
<Tab>Template</Tab>
<Tab>{t('common.details')}</Tab>
<Tab>{t('common.outputs')}</Tab>
<Tab>{t('common.data')}</Tab>
<Tab>{t('common.template')}</Tab>
</TabList>
<TabPanels>
{/* <TabPanel>
<TabPanel>
<InspectorDetailsTab />
</TabPanel> */}
</TabPanel>
<TabPanel>
<InspectorOutputsTab />
</TabPanel>

View File

@@ -0,0 +1,74 @@
import {
Editable,
EditableInput,
EditablePreview,
Flex,
} from '@chakra-ui/react';
import { useAppDispatch } from 'app/store/storeHooks';
import { useNodeLabel } from 'features/nodes/hooks/useNodeLabel';
import { useNodeTemplateTitle } from 'features/nodes/hooks/useNodeTemplateTitle';
import { nodeLabelChanged } from 'features/nodes/store/nodesSlice';
import { memo, useCallback, useEffect, useState } from 'react';
import { useTranslation } from 'react-i18next';
type Props = {
nodeId: string;
title?: string;
};
const EditableNodeTitle = ({ nodeId, title }: Props) => {
const dispatch = useAppDispatch();
const label = useNodeLabel(nodeId);
const templateTitle = useNodeTemplateTitle(nodeId);
const { t } = useTranslation();
const [localTitle, setLocalTitle] = useState('');
const handleSubmit = useCallback(
async (newTitle: string) => {
dispatch(nodeLabelChanged({ nodeId, label: newTitle }));
setLocalTitle(
label || title || templateTitle || t('nodes.problemSettingTitle')
);
},
[dispatch, nodeId, title, templateTitle, label, t]
);
const handleChange = useCallback((newTitle: string) => {
setLocalTitle(newTitle);
}, []);
useEffect(() => {
// Another component may change the title; sync local title with global state
setLocalTitle(
label || title || templateTitle || t('nodes.problemSettingTitle')
);
}, [label, templateTitle, title, t]);
return (
<Flex
sx={{
w: 'full',
h: 'full',
alignItems: 'center',
justifyContent: 'center',
}}
>
<Editable
as={Flex}
value={localTitle}
onChange={handleChange}
onSubmit={handleSubmit}
w="full"
fontWeight={600}
>
<EditablePreview noOfLines={1} />
<EditableInput
className="nodrag"
_focusVisible={{ boxShadow: 'none' }}
/>
</Editable>
</Flex>
);
};
export default memo(EditableNodeTitle);

View File

@@ -10,8 +10,10 @@ import { memo } from 'react';
import WorkflowGeneralTab from './WorkflowGeneralTab';
import WorkflowJSONTab from './WorkflowJSONTab';
import WorkflowLinearTab from './WorkflowLinearTab';
import { useTranslation } from 'react-i18next';
const WorkflowPanel = () => {
const { t } = useTranslation();
return (
<Flex
layerStyle="first"
@@ -29,8 +31,8 @@ const WorkflowPanel = () => {
sx={{ display: 'flex', flexDir: 'column', w: 'full', h: 'full' }}
>
<TabList>
<Tab>Linear</Tab>
<Tab>Details</Tab>
<Tab>{t('common.linear')}</Tab>
<Tab>{t('common.details')}</Tab>
<Tab>JSON</Tab>
</TabList>

View File

@@ -1,19 +1,10 @@
import { createSelector } from '@reduxjs/toolkit';
import { RootState } from 'app/store/store';
import { useAppSelector } from 'app/store/storeHooks';
import { reduce } from 'lodash-es';
import { useCallback } from 'react';
import { Node, useReactFlow } from 'reactflow';
import { AnyInvocationType } from 'services/events/types';
import { v4 as uuidv4 } from 'uuid';
import {
CurrentImageNodeData,
InputFieldValue,
InvocationNodeData,
NotesNodeData,
OutputFieldValue,
} from '../types/types';
import { buildInputFieldValue } from '../util/fieldValueBuilders';
import { buildNodeData } from '../store/util/buildNodeData';
import { DRAG_HANDLE_CLASSNAME, NODE_WIDTH } from '../types/constants';
const templatesSelector = createSelector(
@@ -26,14 +17,12 @@ export const SHARED_NODE_PROPERTIES: Partial<Node> = {
};
export const useBuildNodeData = () => {
const invocationTemplates = useAppSelector(templatesSelector);
const nodeTemplates = useAppSelector(templatesSelector);
const flow = useReactFlow();
return useCallback(
(type: AnyInvocationType | 'current_image' | 'notes') => {
const nodeId = uuidv4();
let _x = window.innerWidth / 2;
let _y = window.innerHeight / 2;
@@ -47,111 +36,15 @@ export const useBuildNodeData = () => {
_y = rect.height / 2 - NODE_WIDTH / 2;
}
const { x, y } = flow.project({
const position = flow.project({
x: _x,
y: _y,
});
if (type === 'current_image') {
const node: Node<CurrentImageNodeData> = {
...SHARED_NODE_PROPERTIES,
id: nodeId,
type: 'current_image',
position: { x: x, y: y },
data: {
id: nodeId,
type: 'current_image',
isOpen: true,
label: 'Current Image',
},
};
const template = nodeTemplates[type];
return node;
}
if (type === 'notes') {
const node: Node<NotesNodeData> = {
...SHARED_NODE_PROPERTIES,
id: nodeId,
type: 'notes',
position: { x: x, y: y },
data: {
id: nodeId,
isOpen: true,
label: 'Notes',
notes: '',
type: 'notes',
},
};
return node;
}
const template = invocationTemplates[type];
if (template === undefined) {
console.error(`Unable to find template ${type}.`);
return;
}
const inputs = reduce(
template.inputs,
(inputsAccumulator, inputTemplate, inputName) => {
const fieldId = uuidv4();
const inputFieldValue: InputFieldValue = buildInputFieldValue(
fieldId,
inputTemplate
);
inputsAccumulator[inputName] = inputFieldValue;
return inputsAccumulator;
},
{} as Record<string, InputFieldValue>
);
const outputs = reduce(
template.outputs,
(outputsAccumulator, outputTemplate, outputName) => {
const fieldId = uuidv4();
const outputFieldValue: OutputFieldValue = {
id: fieldId,
name: outputName,
type: outputTemplate.type,
fieldKind: 'output',
};
outputsAccumulator[outputName] = outputFieldValue;
return outputsAccumulator;
},
{} as Record<string, OutputFieldValue>
);
const invocation: Node<InvocationNodeData> = {
...SHARED_NODE_PROPERTIES,
id: nodeId,
type: 'invocation',
position: { x: x, y: y },
data: {
id: nodeId,
type,
version: template.version,
label: '',
notes: '',
isOpen: true,
embedWorkflow: false,
isIntermediate: type === 'save_image' ? false : true,
inputs,
outputs,
useCache: template.useCache,
},
};
return invocation;
return buildNodeData(type, position, template);
},
[invocationTemplates, flow]
[nodeTemplates, flow]
);
};

View File

@@ -0,0 +1,25 @@
import { createSelector } from '@reduxjs/toolkit';
import { stateSelector } from 'app/store/store';
import { useAppSelector } from 'app/store/storeHooks';
import { defaultSelectorOptions } from 'app/store/util/defaultMemoizeOptions';
import { getNeedsUpdate } from './useNodeVersion';
const selector = createSelector(
stateSelector,
(state) => {
const nodes = state.nodes.nodes;
const templates = state.nodes.nodeTemplates;
const needsUpdate = nodes.some((node) => {
const template = templates[node.data.type];
return getNeedsUpdate(node, template);
});
return needsUpdate;
},
defaultSelectorOptions
);
export const useGetNodesNeedUpdate = () => {
const getNeedsUpdate = useAppSelector(selector);
return getNeedsUpdate;
};

View File

@@ -0,0 +1,27 @@
import { createSelector } from '@reduxjs/toolkit';
import { stateSelector } from 'app/store/store';
import { useAppSelector } from 'app/store/storeHooks';
import { defaultSelectorOptions } from 'app/store/util/defaultMemoizeOptions';
import { useMemo } from 'react';
import { AnyInvocationType } from 'services/events/types';
export const useNodeTemplateByType = (
type: AnyInvocationType | 'current_image' | 'notes'
) => {
const selector = useMemo(
() =>
createSelector(
stateSelector,
({ nodes }) => {
const nodeTemplate = nodes.nodeTemplates[type];
return nodeTemplate;
},
defaultSelectorOptions
),
[type]
);
const nodeTemplate = useAppSelector(selector);
return nodeTemplate;
};

View File

@@ -0,0 +1,119 @@
import { createSelector } from '@reduxjs/toolkit';
import { stateSelector } from 'app/store/store';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { defaultSelectorOptions } from 'app/store/util/defaultMemoizeOptions';
import { satisfies } from 'compare-versions';
import { cloneDeep, defaultsDeep } from 'lodash-es';
import { useCallback, useMemo } from 'react';
import { Node } from 'reactflow';
import { AnyInvocationType } from 'services/events/types';
import { nodeReplaced } from '../store/nodesSlice';
import { buildNodeData } from '../store/util/buildNodeData';
import {
InvocationNodeData,
InvocationTemplate,
NodeData,
isInvocationNode,
zParsedSemver,
} from '../types/types';
import { useAppToaster } from 'app/components/Toaster';
import { useTranslation } from 'react-i18next';
export const getNeedsUpdate = (
node?: Node<NodeData>,
template?: InvocationTemplate
) => {
if (!isInvocationNode(node) || !template) {
return false;
}
return node.data.version !== template.version;
};
export const getMayUpdateNode = (
node?: Node<NodeData>,
template?: InvocationTemplate
) => {
const needsUpdate = getNeedsUpdate(node, template);
if (
!needsUpdate ||
!isInvocationNode(node) ||
!template ||
!node.data.version
) {
return false;
}
const templateMajor = zParsedSemver.parse(template.version).major;
return satisfies(node.data.version, `^${templateMajor}`);
};
export const updateNode = (
node?: Node<NodeData>,
template?: InvocationTemplate
) => {
const mayUpdate = getMayUpdateNode(node, template);
if (
!mayUpdate ||
!isInvocationNode(node) ||
!template ||
!node.data.version
) {
return;
}
const defaults = buildNodeData(
node.data.type as AnyInvocationType,
node.position,
template
) as Node<InvocationNodeData>;
const clone = cloneDeep(node);
clone.data.version = template.version;
defaultsDeep(clone, defaults);
return clone;
};
export const useNodeVersion = (nodeId: string) => {
const dispatch = useAppDispatch();
const toast = useAppToaster();
const { t } = useTranslation();
const selector = useMemo(
() =>
createSelector(
stateSelector,
({ nodes }) => {
const node = nodes.nodes.find((node) => node.id === nodeId);
const nodeTemplate = nodes.nodeTemplates[node?.data.type ?? ''];
return { node, nodeTemplate };
},
defaultSelectorOptions
),
[nodeId]
);
const { node, nodeTemplate } = useAppSelector(selector);
const needsUpdate = useMemo(
() => getNeedsUpdate(node, nodeTemplate),
[node, nodeTemplate]
);
const mayUpdate = useMemo(
() => getMayUpdateNode(node, nodeTemplate),
[node, nodeTemplate]
);
const _updateNode = useCallback(() => {
const needsUpdate = getNeedsUpdate(node, nodeTemplate);
const updatedNode = updateNode(node, nodeTemplate);
if (!updatedNode) {
if (needsUpdate) {
toast({ title: t('nodes.unableToUpdateNodes', { count: 1 }) });
}
return;
}
dispatch(nodeReplaced({ nodeId: updatedNode.id, node: updatedNode }));
}, [dispatch, node, nodeTemplate, t, toast]);
return { needsUpdate, mayUpdate, updateNode: _updateNode };
};
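A side note on the version gate in getMayUpdateNode above: satisfies(node.data.version, `^${templateMajor}`) only allows an automatic update when the node and the template share a major version; otherwise the node is flagged as needing an update but is left for manual migration. A small illustration with hypothetical versions, using the same satisfies helper from compare-versions:

import { satisfies } from 'compare-versions';

// Node on 1.0.0, template on 1.2.0: same major, so updateNode() may merge in the new template defaults.
console.log(satisfies('1.0.0', '^1')); // true
// Node on 1.0.0, template on 2.0.0: needsUpdate is true, but the automatic update is refused.
console.log(satisfies('1.0.0', '^2')); // false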

View File

@@ -21,3 +21,7 @@ export const isAnyGraphBuilt = isAnyOf(
export const workflowLoadRequested = createAction<Workflow>(
'nodes/workflowLoadRequested'
);
export const updateAllNodesRequested = createAction(
'nodes/updateAllNodesRequested'
);

View File

@@ -149,6 +149,18 @@ const nodesSlice = createSlice({
nodesChanged: (state, action: PayloadAction<NodeChange[]>) => {
state.nodes = applyNodeChanges(action.payload, state.nodes);
},
nodeReplaced: (
state,
action: PayloadAction<{ nodeId: string; node: Node }>
) => {
const nodeIndex = state.nodes.findIndex(
(n) => n.id === action.payload.nodeId
);
if (nodeIndex < 0) {
return;
}
state.nodes[nodeIndex] = action.payload.node;
},
nodeAdded: (
state,
action: PayloadAction<
@@ -1029,6 +1041,7 @@ export const {
mouseOverFieldChanged,
mouseOverNodeChanged,
nodeAdded,
nodeReplaced,
nodeEditorReset,
nodeEmbedWorkflowChanged,
nodeExclusivelySelected,

View File

@@ -0,0 +1,127 @@
import { DRAG_HANDLE_CLASSNAME } from 'features/nodes/types/constants';
import {
CurrentImageNodeData,
InputFieldValue,
InvocationNodeData,
InvocationTemplate,
NotesNodeData,
OutputFieldValue,
} from 'features/nodes/types/types';
import { buildInputFieldValue } from 'features/nodes/util/fieldValueBuilders';
import { reduce } from 'lodash-es';
import { Node, XYPosition } from 'reactflow';
import { AnyInvocationType } from 'services/events/types';
import { v4 as uuidv4 } from 'uuid';
export const SHARED_NODE_PROPERTIES: Partial<Node> = {
dragHandle: `.${DRAG_HANDLE_CLASSNAME}`,
};
export const buildNodeData = (
type: AnyInvocationType | 'current_image' | 'notes',
position: XYPosition,
template?: InvocationTemplate
):
| Node<CurrentImageNodeData>
| Node<NotesNodeData>
| Node<InvocationNodeData>
| undefined => {
const nodeId = uuidv4();
if (type === 'current_image') {
const node: Node<CurrentImageNodeData> = {
...SHARED_NODE_PROPERTIES,
id: nodeId,
type: 'current_image',
position,
data: {
id: nodeId,
type: 'current_image',
isOpen: true,
label: 'Current Image',
},
};
return node;
}
if (type === 'notes') {
const node: Node<NotesNodeData> = {
...SHARED_NODE_PROPERTIES,
id: nodeId,
type: 'notes',
position,
data: {
id: nodeId,
isOpen: true,
label: 'Notes',
notes: '',
type: 'notes',
},
};
return node;
}
if (template === undefined) {
console.error(`Unable to find template ${type}.`);
return;
}
const inputs = reduce(
template.inputs,
(inputsAccumulator, inputTemplate, inputName) => {
const fieldId = uuidv4();
const inputFieldValue: InputFieldValue = buildInputFieldValue(
fieldId,
inputTemplate
);
inputsAccumulator[inputName] = inputFieldValue;
return inputsAccumulator;
},
{} as Record<string, InputFieldValue>
);
const outputs = reduce(
template.outputs,
(outputsAccumulator, outputTemplate, outputName) => {
const fieldId = uuidv4();
const outputFieldValue: OutputFieldValue = {
id: fieldId,
name: outputName,
type: outputTemplate.type,
fieldKind: 'output',
};
outputsAccumulator[outputName] = outputFieldValue;
return outputsAccumulator;
},
{} as Record<string, OutputFieldValue>
);
const invocation: Node<InvocationNodeData> = {
...SHARED_NODE_PROPERTIES,
id: nodeId,
type: 'invocation',
position,
data: {
id: nodeId,
type,
version: template.version,
label: '',
notes: '',
isOpen: true,
embedWorkflow: false,
isIntermediate: type === 'save_image' ? false : true,
inputs,
outputs,
useCache: template.useCache,
},
};
return invocation;
};

View File

@@ -23,7 +23,7 @@ import {
RESIZE_HRF,
VAE_LOADER,
} from './constants';
import { upsertMetadata } from './metadata';
import { setMetadataReceivingNode, upsertMetadata } from './metadata';
// Copy certain connections from previous DENOISE_LATENTS to new DENOISE_LATENTS_HRF.
function copyConnectionsToDenoiseLatentsHrf(graph: NonNullableGraph): void {
@@ -369,4 +369,5 @@ export const addHrfToGraph = (
hrf_enabled: hrfEnabled,
hrf_method: hrfMethod,
});
setMetadataReceivingNode(graph, LATENTS_TO_IMAGE_HRF_HR);
};

View File

@@ -1,20 +1,20 @@
import { RootState } from 'app/store/store';
import { NonNullableGraph } from 'features/nodes/types/types';
import { activeTabNameSelector } from 'features/ui/store/uiSelectors';
import { SaveImageInvocation } from 'services/api/types';
import { LinearUIOutputInvocation } from 'services/api/types';
import {
CANVAS_OUTPUT,
LATENTS_TO_IMAGE,
LATENTS_TO_IMAGE_HRF_HR,
LINEAR_UI_OUTPUT,
NSFW_CHECKER,
SAVE_IMAGE,
WATERMARKER,
} from './constants';
/**
* Set the `use_cache` field on the linear/canvas graph's final image output node to False.
*/
export const addSaveImageNode = (
export const addLinearUIOutputNode = (
state: RootState,
graph: NonNullableGraph
): void => {
@@ -23,18 +23,18 @@ export const addSaveImageNode = (
activeTabName === 'unifiedCanvas' ? !state.canvas.shouldAutoSave : false;
const { autoAddBoardId } = state.gallery;
const saveImageNode: SaveImageInvocation = {
id: SAVE_IMAGE,
type: 'save_image',
const linearUIOutputNode: LinearUIOutputInvocation = {
id: LINEAR_UI_OUTPUT,
type: 'linear_ui_output',
is_intermediate,
use_cache: false,
board: autoAddBoardId === 'none' ? undefined : { board_id: autoAddBoardId },
};
graph.nodes[SAVE_IMAGE] = saveImageNode;
graph.nodes[LINEAR_UI_OUTPUT] = linearUIOutputNode;
const destination = {
node_id: SAVE_IMAGE,
node_id: LINEAR_UI_OUTPUT,
field: 'image',
};

View File

@@ -4,9 +4,9 @@ import { ESRGANModelName } from 'features/parameters/store/postprocessingSlice';
import {
ESRGANInvocation,
Graph,
SaveImageInvocation,
LinearUIOutputInvocation,
} from 'services/api/types';
import { REALESRGAN as ESRGAN, SAVE_IMAGE } from './constants';
import { ESRGAN, LINEAR_UI_OUTPUT } from './constants';
import { addCoreMetadataNode, upsertMetadata } from './metadata';
type Arg = {
@@ -28,9 +28,9 @@ export const buildAdHocUpscaleGraph = ({
is_intermediate: true,
};
const saveImageNode: SaveImageInvocation = {
id: SAVE_IMAGE,
type: 'save_image',
const linearUIOutputNode: LinearUIOutputInvocation = {
id: LINEAR_UI_OUTPUT,
type: 'linear_ui_output',
use_cache: false,
is_intermediate: false,
board: autoAddBoardId === 'none' ? undefined : { board_id: autoAddBoardId },
@@ -40,7 +40,7 @@ export const buildAdHocUpscaleGraph = ({
id: `adhoc-esrgan-graph`,
nodes: {
[ESRGAN]: realesrganNode,
[SAVE_IMAGE]: saveImageNode,
[LINEAR_UI_OUTPUT]: linearUIOutputNode,
},
edges: [
{
@@ -49,14 +49,14 @@ export const buildAdHocUpscaleGraph = ({
field: 'image',
},
destination: {
node_id: SAVE_IMAGE,
node_id: LINEAR_UI_OUTPUT,
field: 'image',
},
},
],
};
addCoreMetadataNode(graph, {});
addCoreMetadataNode(graph, {}, ESRGAN);
upsertMetadata(graph, {
esrgan_model: esrganModelName,
});

View File

@@ -6,7 +6,7 @@ import { addControlNetToLinearGraph } from './addControlNetToLinearGraph';
import { addIPAdapterToLinearGraph } from './addIPAdapterToLinearGraph';
import { addLoRAsToGraph } from './addLoRAsToGraph';
import { addNSFWCheckerToGraph } from './addNSFWCheckerToGraph';
import { addSaveImageNode } from './addSaveImageNode';
import { addLinearUIOutputNode } from './addLinearUIOutputNode';
import { addSeamlessToLinearGraph } from './addSeamlessToLinearGraph';
import { addT2IAdaptersToLinearGraph } from './addT2IAdapterToLinearGraph';
import { addVAEToGraph } from './addVAEToGraph';
@@ -308,24 +308,30 @@ export const buildCanvasImageToImageGraph = (
});
}
addCoreMetadataNode(graph, {
generation_mode: 'img2img',
cfg_scale,
width: !isUsingScaledDimensions ? width : scaledBoundingBoxDimensions.width,
height: !isUsingScaledDimensions
? height
: scaledBoundingBoxDimensions.height,
positive_prompt: positivePrompt,
negative_prompt: negativePrompt,
model,
seed,
steps,
rand_device: use_cpu ? 'cpu' : 'cuda',
scheduler,
clip_skip: clipSkip,
strength,
init_image: initialImage.image_name,
});
addCoreMetadataNode(
graph,
{
generation_mode: 'img2img',
cfg_scale,
width: !isUsingScaledDimensions
? width
: scaledBoundingBoxDimensions.width,
height: !isUsingScaledDimensions
? height
: scaledBoundingBoxDimensions.height,
positive_prompt: positivePrompt,
negative_prompt: negativePrompt,
model,
seed,
steps,
rand_device: use_cpu ? 'cpu' : 'cuda',
scheduler,
clip_skip: clipSkip,
strength,
init_image: initialImage.image_name,
},
CANVAS_OUTPUT
);
// Add Seamless To Graph
if (seamlessXAxis || seamlessYAxis) {
@@ -357,7 +363,7 @@ export const buildCanvasImageToImageGraph = (
addWatermarkerToGraph(state, graph, CANVAS_OUTPUT);
}
addSaveImageNode(state, graph);
addLinearUIOutputNode(state, graph);
return graph;
};

View File

@@ -13,7 +13,7 @@ import { addControlNetToLinearGraph } from './addControlNetToLinearGraph';
import { addIPAdapterToLinearGraph } from './addIPAdapterToLinearGraph';
import { addLoRAsToGraph } from './addLoRAsToGraph';
import { addNSFWCheckerToGraph } from './addNSFWCheckerToGraph';
import { addSaveImageNode } from './addSaveImageNode';
import { addLinearUIOutputNode } from './addLinearUIOutputNode';
import { addSeamlessToLinearGraph } from './addSeamlessToLinearGraph';
import { addT2IAdaptersToLinearGraph } from './addT2IAdapterToLinearGraph';
import { addVAEToGraph } from './addVAEToGraph';
@@ -666,7 +666,7 @@ export const buildCanvasInpaintGraph = (
addWatermarkerToGraph(state, graph, CANVAS_OUTPUT);
}
addSaveImageNode(state, graph);
addLinearUIOutputNode(state, graph);
return graph;
};

View File

@@ -12,7 +12,7 @@ import { addControlNetToLinearGraph } from './addControlNetToLinearGraph';
import { addIPAdapterToLinearGraph } from './addIPAdapterToLinearGraph';
import { addLoRAsToGraph } from './addLoRAsToGraph';
import { addNSFWCheckerToGraph } from './addNSFWCheckerToGraph';
import { addSaveImageNode } from './addSaveImageNode';
import { addLinearUIOutputNode } from './addLinearUIOutputNode';
import { addSeamlessToLinearGraph } from './addSeamlessToLinearGraph';
import { addT2IAdaptersToLinearGraph } from './addT2IAdapterToLinearGraph';
import { addVAEToGraph } from './addVAEToGraph';
@@ -770,7 +770,7 @@ export const buildCanvasOutpaintGraph = (
addWatermarkerToGraph(state, graph, CANVAS_OUTPUT);
}
addSaveImageNode(state, graph);
addLinearUIOutputNode(state, graph);
return graph;
};

View File

@@ -7,7 +7,7 @@ import { addIPAdapterToLinearGraph } from './addIPAdapterToLinearGraph';
import { addNSFWCheckerToGraph } from './addNSFWCheckerToGraph';
import { addSDXLLoRAsToGraph } from './addSDXLLoRAstoGraph';
import { addSDXLRefinerToGraph } from './addSDXLRefinerToGraph';
import { addSaveImageNode } from './addSaveImageNode';
import { addLinearUIOutputNode } from './addLinearUIOutputNode';
import { addSeamlessToLinearGraph } from './addSeamlessToLinearGraph';
import { addVAEToGraph } from './addVAEToGraph';
import { addWatermarkerToGraph } from './addWatermarkerToGraph';
@@ -319,23 +319,29 @@ export const buildCanvasSDXLImageToImageGraph = (
});
}
addCoreMetadataNode(graph, {
generation_mode: 'img2img',
cfg_scale,
width: !isUsingScaledDimensions ? width : scaledBoundingBoxDimensions.width,
height: !isUsingScaledDimensions
? height
: scaledBoundingBoxDimensions.height,
positive_prompt: positivePrompt,
negative_prompt: negativePrompt,
model,
seed,
steps,
rand_device: use_cpu ? 'cpu' : 'cuda',
scheduler,
strength,
init_image: initialImage.image_name,
});
addCoreMetadataNode(
graph,
{
generation_mode: 'img2img',
cfg_scale,
width: !isUsingScaledDimensions
? width
: scaledBoundingBoxDimensions.width,
height: !isUsingScaledDimensions
? height
: scaledBoundingBoxDimensions.height,
positive_prompt: positivePrompt,
negative_prompt: negativePrompt,
model,
seed,
steps,
rand_device: use_cpu ? 'cpu' : 'cuda',
scheduler,
strength,
init_image: initialImage.image_name,
},
CANVAS_OUTPUT
);
// Add Seamless To Graph
if (seamlessXAxis || seamlessYAxis) {
@@ -380,7 +386,7 @@ export const buildCanvasSDXLImageToImageGraph = (
addWatermarkerToGraph(state, graph, CANVAS_OUTPUT);
}
addSaveImageNode(state, graph);
addLinearUIOutputNode(state, graph);
return graph;
};

View File

@@ -14,7 +14,7 @@ import { addIPAdapterToLinearGraph } from './addIPAdapterToLinearGraph';
import { addNSFWCheckerToGraph } from './addNSFWCheckerToGraph';
import { addSDXLLoRAsToGraph } from './addSDXLLoRAstoGraph';
import { addSDXLRefinerToGraph } from './addSDXLRefinerToGraph';
import { addSaveImageNode } from './addSaveImageNode';
import { addLinearUIOutputNode } from './addLinearUIOutputNode';
import { addSeamlessToLinearGraph } from './addSeamlessToLinearGraph';
import { addT2IAdaptersToLinearGraph } from './addT2IAdapterToLinearGraph';
import { addVAEToGraph } from './addVAEToGraph';
@@ -696,7 +696,7 @@ export const buildCanvasSDXLInpaintGraph = (
addWatermarkerToGraph(state, graph, CANVAS_OUTPUT);
}
addSaveImageNode(state, graph);
addLinearUIOutputNode(state, graph);
return graph;
};

View File

@@ -13,7 +13,7 @@ import { addIPAdapterToLinearGraph } from './addIPAdapterToLinearGraph';
import { addNSFWCheckerToGraph } from './addNSFWCheckerToGraph';
import { addSDXLLoRAsToGraph } from './addSDXLLoRAstoGraph';
import { addSDXLRefinerToGraph } from './addSDXLRefinerToGraph';
import { addSaveImageNode } from './addSaveImageNode';
import { addLinearUIOutputNode } from './addLinearUIOutputNode';
import { addSeamlessToLinearGraph } from './addSeamlessToLinearGraph';
import { addT2IAdaptersToLinearGraph } from './addT2IAdapterToLinearGraph';
import { addVAEToGraph } from './addVAEToGraph';
@@ -799,7 +799,7 @@ export const buildCanvasSDXLOutpaintGraph = (
addWatermarkerToGraph(state, graph, CANVAS_OUTPUT);
}
addSaveImageNode(state, graph);
addLinearUIOutputNode(state, graph);
return graph;
};

View File

@@ -10,7 +10,7 @@ import { addIPAdapterToLinearGraph } from './addIPAdapterToLinearGraph';
import { addNSFWCheckerToGraph } from './addNSFWCheckerToGraph';
import { addSDXLLoRAsToGraph } from './addSDXLLoRAstoGraph';
import { addSDXLRefinerToGraph } from './addSDXLRefinerToGraph';
import { addSaveImageNode } from './addSaveImageNode';
import { addLinearUIOutputNode } from './addLinearUIOutputNode';
import { addSeamlessToLinearGraph } from './addSeamlessToLinearGraph';
import { addT2IAdaptersToLinearGraph } from './addT2IAdapterToLinearGraph';
import { addVAEToGraph } from './addVAEToGraph';
@@ -301,21 +301,27 @@ export const buildCanvasSDXLTextToImageGraph = (
});
}
addCoreMetadataNode(graph, {
generation_mode: 'txt2img',
cfg_scale,
width: !isUsingScaledDimensions ? width : scaledBoundingBoxDimensions.width,
height: !isUsingScaledDimensions
? height
: scaledBoundingBoxDimensions.height,
positive_prompt: positivePrompt,
negative_prompt: negativePrompt,
model,
seed,
steps,
rand_device: use_cpu ? 'cpu' : 'cuda',
scheduler,
});
addCoreMetadataNode(
graph,
{
generation_mode: 'txt2img',
cfg_scale,
width: !isUsingScaledDimensions
? width
: scaledBoundingBoxDimensions.width,
height: !isUsingScaledDimensions
? height
: scaledBoundingBoxDimensions.height,
positive_prompt: positivePrompt,
negative_prompt: negativePrompt,
model,
seed,
steps,
rand_device: use_cpu ? 'cpu' : 'cuda',
scheduler,
},
CANVAS_OUTPUT
);
// Add Seamless To Graph
if (seamlessXAxis || seamlessYAxis) {
@@ -360,7 +366,7 @@ export const buildCanvasSDXLTextToImageGraph = (
addWatermarkerToGraph(state, graph, CANVAS_OUTPUT);
}
addSaveImageNode(state, graph);
addLinearUIOutputNode(state, graph);
return graph;
};

View File

@@ -9,7 +9,7 @@ import { addControlNetToLinearGraph } from './addControlNetToLinearGraph';
import { addIPAdapterToLinearGraph } from './addIPAdapterToLinearGraph';
import { addLoRAsToGraph } from './addLoRAsToGraph';
import { addNSFWCheckerToGraph } from './addNSFWCheckerToGraph';
import { addSaveImageNode } from './addSaveImageNode';
import { addLinearUIOutputNode } from './addLinearUIOutputNode';
import { addSeamlessToLinearGraph } from './addSeamlessToLinearGraph';
import { addT2IAdaptersToLinearGraph } from './addT2IAdapterToLinearGraph';
import { addVAEToGraph } from './addVAEToGraph';
@@ -289,22 +289,28 @@ export const buildCanvasTextToImageGraph = (
});
}
addCoreMetadataNode(graph, {
generation_mode: 'txt2img',
cfg_scale,
width: !isUsingScaledDimensions ? width : scaledBoundingBoxDimensions.width,
height: !isUsingScaledDimensions
? height
: scaledBoundingBoxDimensions.height,
positive_prompt: positivePrompt,
negative_prompt: negativePrompt,
model,
seed,
steps,
rand_device: use_cpu ? 'cpu' : 'cuda',
scheduler,
clip_skip: clipSkip,
});
addCoreMetadataNode(
graph,
{
generation_mode: 'txt2img',
cfg_scale,
width: !isUsingScaledDimensions
? width
: scaledBoundingBoxDimensions.width,
height: !isUsingScaledDimensions
? height
: scaledBoundingBoxDimensions.height,
positive_prompt: positivePrompt,
negative_prompt: negativePrompt,
model,
seed,
steps,
rand_device: use_cpu ? 'cpu' : 'cuda',
scheduler,
clip_skip: clipSkip,
},
CANVAS_OUTPUT
);
// Add Seamless To Graph
if (seamlessXAxis || seamlessYAxis) {
@@ -336,7 +342,7 @@ export const buildCanvasTextToImageGraph = (
addWatermarkerToGraph(state, graph, CANVAS_OUTPUT);
}
addSaveImageNode(state, graph);
addLinearUIOutputNode(state, graph);
return graph;
};

View File

@@ -9,7 +9,7 @@ import { addControlNetToLinearGraph } from './addControlNetToLinearGraph';
import { addIPAdapterToLinearGraph } from './addIPAdapterToLinearGraph';
import { addLoRAsToGraph } from './addLoRAsToGraph';
import { addNSFWCheckerToGraph } from './addNSFWCheckerToGraph';
import { addSaveImageNode } from './addSaveImageNode';
import { addLinearUIOutputNode } from './addLinearUIOutputNode';
import { addSeamlessToLinearGraph } from './addSeamlessToLinearGraph';
import { addT2IAdaptersToLinearGraph } from './addT2IAdapterToLinearGraph';
import { addVAEToGraph } from './addVAEToGraph';
@@ -311,22 +311,26 @@ export const buildLinearImageToImageGraph = (
});
}
addCoreMetadataNode(graph, {
generation_mode: 'img2img',
cfg_scale,
height,
width,
positive_prompt: positivePrompt,
negative_prompt: negativePrompt,
model,
seed,
steps,
rand_device: use_cpu ? 'cpu' : 'cuda',
scheduler,
clip_skip: clipSkip,
strength,
init_image: initialImage.imageName,
});
addCoreMetadataNode(
graph,
{
generation_mode: 'img2img',
cfg_scale,
height,
width,
positive_prompt: positivePrompt,
negative_prompt: negativePrompt,
model,
seed,
steps,
rand_device: use_cpu ? 'cpu' : 'cuda',
scheduler,
clip_skip: clipSkip,
strength,
init_image: initialImage.imageName,
},
LATENTS_TO_IMAGE
);
// Add Seamless To Graph
if (seamlessXAxis || seamlessYAxis) {
@@ -358,7 +362,7 @@ export const buildLinearImageToImageGraph = (
addWatermarkerToGraph(state, graph);
}
addSaveImageNode(state, graph);
addLinearUIOutputNode(state, graph);
return graph;
};

View File

@@ -10,7 +10,7 @@ import { addIPAdapterToLinearGraph } from './addIPAdapterToLinearGraph';
import { addNSFWCheckerToGraph } from './addNSFWCheckerToGraph';
import { addSDXLLoRAsToGraph } from './addSDXLLoRAstoGraph';
import { addSDXLRefinerToGraph } from './addSDXLRefinerToGraph';
import { addSaveImageNode } from './addSaveImageNode';
import { addLinearUIOutputNode } from './addLinearUIOutputNode';
import { addSeamlessToLinearGraph } from './addSeamlessToLinearGraph';
import { addT2IAdaptersToLinearGraph } from './addT2IAdapterToLinearGraph';
import { addVAEToGraph } from './addVAEToGraph';
@@ -331,23 +331,27 @@ export const buildLinearSDXLImageToImageGraph = (
});
}
addCoreMetadataNode(graph, {
generation_mode: 'sdxl_img2img',
cfg_scale,
height,
width,
positive_prompt: positivePrompt,
negative_prompt: negativePrompt,
model,
seed,
steps,
rand_device: use_cpu ? 'cpu' : 'cuda',
scheduler,
strength,
init_image: initialImage.imageName,
positive_style_prompt: positiveStylePrompt,
negative_style_prompt: negativeStylePrompt,
});
addCoreMetadataNode(
graph,
{
generation_mode: 'sdxl_img2img',
cfg_scale,
height,
width,
positive_prompt: positivePrompt,
negative_prompt: negativePrompt,
model,
seed,
steps,
rand_device: use_cpu ? 'cpu' : 'cuda',
scheduler,
strength,
init_image: initialImage.imageName,
positive_style_prompt: positiveStylePrompt,
negative_style_prompt: negativeStylePrompt,
},
LATENTS_TO_IMAGE
);
// Add Seamless To Graph
if (seamlessXAxis || seamlessYAxis) {
@@ -388,7 +392,7 @@ export const buildLinearSDXLImageToImageGraph = (
addWatermarkerToGraph(state, graph);
}
addSaveImageNode(state, graph);
addLinearUIOutputNode(state, graph);
return graph;
};

View File

@@ -6,7 +6,7 @@ import { addIPAdapterToLinearGraph } from './addIPAdapterToLinearGraph';
import { addNSFWCheckerToGraph } from './addNSFWCheckerToGraph';
import { addSDXLLoRAsToGraph } from './addSDXLLoRAstoGraph';
import { addSDXLRefinerToGraph } from './addSDXLRefinerToGraph';
import { addSaveImageNode } from './addSaveImageNode';
import { addLinearUIOutputNode } from './addLinearUIOutputNode';
import { addSeamlessToLinearGraph } from './addSeamlessToLinearGraph';
import { addT2IAdaptersToLinearGraph } from './addT2IAdapterToLinearGraph';
import { addVAEToGraph } from './addVAEToGraph';
@@ -225,21 +225,25 @@ export const buildLinearSDXLTextToImageGraph = (
],
};
addCoreMetadataNode(graph, {
generation_mode: 'sdxl_txt2img',
cfg_scale,
height,
width,
positive_prompt: positivePrompt,
negative_prompt: negativePrompt,
model,
seed,
steps,
rand_device: use_cpu ? 'cpu' : 'cuda',
scheduler,
positive_style_prompt: positiveStylePrompt,
negative_style_prompt: negativeStylePrompt,
});
addCoreMetadataNode(
graph,
{
generation_mode: 'sdxl_txt2img',
cfg_scale,
height,
width,
positive_prompt: positivePrompt,
negative_prompt: negativePrompt,
model,
seed,
steps,
rand_device: use_cpu ? 'cpu' : 'cuda',
scheduler,
positive_style_prompt: positiveStylePrompt,
negative_style_prompt: negativeStylePrompt,
},
LATENTS_TO_IMAGE
);
// Add Seamless To Graph
if (seamlessXAxis || seamlessYAxis) {
@@ -280,7 +284,7 @@ export const buildLinearSDXLTextToImageGraph = (
addWatermarkerToGraph(state, graph);
}
addSaveImageNode(state, graph);
addLinearUIOutputNode(state, graph);
return graph;
};

View File

@@ -10,7 +10,7 @@ import { addHrfToGraph } from './addHrfToGraph';
import { addIPAdapterToLinearGraph } from './addIPAdapterToLinearGraph';
import { addLoRAsToGraph } from './addLoRAsToGraph';
import { addNSFWCheckerToGraph } from './addNSFWCheckerToGraph';
import { addSaveImageNode } from './addSaveImageNode';
import { addLinearUIOutputNode } from './addLinearUIOutputNode';
import { addSeamlessToLinearGraph } from './addSeamlessToLinearGraph';
import { addT2IAdaptersToLinearGraph } from './addT2IAdapterToLinearGraph';
import { addVAEToGraph } from './addVAEToGraph';
@@ -234,20 +234,24 @@ export const buildLinearTextToImageGraph = (
],
};
addCoreMetadataNode(graph, {
generation_mode: 'txt2img',
cfg_scale,
height,
width,
positive_prompt: positivePrompt,
negative_prompt: negativePrompt,
model,
seed,
steps,
rand_device: use_cpu ? 'cpu' : 'cuda',
scheduler,
clip_skip: clipSkip,
});
addCoreMetadataNode(
graph,
{
generation_mode: 'txt2img',
cfg_scale,
height,
width,
positive_prompt: positivePrompt,
negative_prompt: negativePrompt,
model,
seed,
steps,
rand_device: use_cpu ? 'cpu' : 'cuda',
scheduler,
clip_skip: clipSkip,
},
LATENTS_TO_IMAGE
);
// Add Seamless To Graph
if (seamlessXAxis || seamlessYAxis) {
@@ -286,7 +290,7 @@ export const buildLinearTextToImageGraph = (
addWatermarkerToGraph(state, graph);
}
addSaveImageNode(state, graph);
addLinearUIOutputNode(state, graph);
return graph;
};

View File

@@ -9,7 +9,7 @@ export const LATENTS_TO_IMAGE_HRF_LR = 'latents_to_image_hrf_lr';
export const IMAGE_TO_LATENTS_HRF = 'image_to_latents_hrf';
export const RESIZE_HRF = 'resize_hrf';
export const ESRGAN_HRF = 'esrgan_hrf';
export const SAVE_IMAGE = 'save_image';
export const LINEAR_UI_OUTPUT = 'linear_ui_output';
export const NSFW_CHECKER = 'nsfw_checker';
export const WATERMARKER = 'invisible_watermark';
export const NOISE = 'noise';
@@ -67,7 +67,7 @@ export const BATCH_PROMPT = 'batch_prompt';
export const BATCH_STYLE_PROMPT = 'batch_style_prompt';
export const METADATA_COLLECT = 'metadata_collect';
export const MERGE_METADATA = 'merge_metadata';
export const REALESRGAN = 'esrgan';
export const ESRGAN = 'esrgan';
export const DIVIDE = 'divide';
export const SCALE = 'scale_image';
export const SDXL_MODEL_LOADER = 'sdxl_model_loader';
@@ -82,6 +82,32 @@ export const SDXL_REFINER_INPAINT_CREATE_MASK = 'refiner_inpaint_create_mask';
export const SEAMLESS = 'seamless';
export const SDXL_REFINER_SEAMLESS = 'refiner_seamless';
// these image-outputting nodes are from the linear UI and we should not handle the gallery logic on them
// instead, we wait for the LINEAR_UI_OUTPUT node and handle it like any other image-outputting node
export const nodeIDDenyList = [
CANVAS_OUTPUT,
LATENTS_TO_IMAGE,
LATENTS_TO_IMAGE_HRF_HR,
NSFW_CHECKER,
WATERMARKER,
ESRGAN,
ESRGAN_HRF,
RESIZE_HRF,
LATENTS_TO_IMAGE_HRF_LR,
IMG2IMG_RESIZE,
INPAINT_IMAGE,
SCALED_INPAINT_IMAGE,
INPAINT_IMAGE_RESIZE_UP,
INPAINT_IMAGE_RESIZE_DOWN,
INPAINT_INFILL,
INPAINT_INFILL_RESIZE_DOWN,
INPAINT_FINAL_IMAGE,
INPAINT_CREATE_MASK,
INPAINT_MASK,
PASTE_IMAGE,
SCALE,
];
// friendly graph ids
export const TEXT_TO_IMAGE_GRAPH = 'text_to_image_graph';
export const IMAGE_TO_IMAGE_GRAPH = 'image_to_image_graph';

Some files were not shown because too many files have changed in this diff.