Compare commits

..

4 Commits

Author           SHA1        Message                     Date
psychedelicious  cd528eda32  test: fixt lint check       2023-11-13 11:03:56 +11:00
psychedelicious  4a27daa149  test: violate lint check    2023-11-13 11:03:09 +11:00
psychedelicious  9eafec720d  test: fix format            2023-11-13 11:02:55 +11:00
psychedelicious  3d3775c962  test: violate style check   2023-11-13 11:01:32 +11:00
294 changed files with 4454 additions and 8492 deletions

View File

@@ -6,7 +6,7 @@ on:
branches: main
jobs:
ruff:
black:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3

View File

@@ -161,7 +161,7 @@ the command `npm install -g yarn` if needed)
_For Windows/Linux with an NVIDIA GPU:_
```terminal
pip install "InvokeAI[xformers]" --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu121
pip install "InvokeAI[xformers]" --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu118
```
_For Linux with an AMD GPU:_
@@ -175,7 +175,7 @@ the command `npm install -g yarn` if needed)
pip install InvokeAI --use-pep517 --extra-index-url https://download.pytorch.org/whl/cpu
```
_For Macintoshes, either Intel or M1/M2/M3:_
_For Macintoshes, either Intel or M1/M2:_
```sh
pip install InvokeAI --use-pep517

View File

@@ -1,6 +1,6 @@
# Nodes
# Invocations
Features in InvokeAI are added in the form of modular nodes systems called
Features in InvokeAI are added in the form of modular node-like systems called
**Invocations**.
An Invocation is simply a single operation that takes in some inputs and gives
@@ -9,34 +9,13 @@ complex functionality.
## Invocations Directory
InvokeAI Nodes can be found in the `invokeai/app/invocations` directory. These can be used as examples to create your own nodes.
InvokeAI Invocations can be found in the `invokeai/app/invocations` directory.
New nodes should be added to a subfolder of the `nodes` directory found at the root level of the InvokeAI installation location. Nodes added to this folder will be available for use upon application startup.
Example `nodes` subfolder structure:
```py
__init__.py # Invoke-managed custom node loader
cool_node
__init__.py # see example below
cool_node.py
my_node_pack
__init__.py # see example below
tasty_node.py
bodacious_node.py
utils.py
extra_nodes
fancy_node.py
```
Each node folder must have an `__init__.py` file that imports its nodes. Only nodes imported in the `__init__.py` file are loaded.
See the README in the nodes folder for more examples:
```py
from .cool_node import CoolInvocation
```
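For a multi-node pack like the hypothetical `my_node_pack` above, the same rule applies: import every node class you want loaded. A minimal sketch (the class names here are illustrative assumptions, not names from the repository):
```py
# my_node_pack/__init__.py
# Only nodes imported here are discovered by the Invoke-managed custom node loader.
from .tasty_node import TastyInvocation  # hypothetical class name
from .bodacious_node import BodaciousInvocation  # hypothetical class name
```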
You can add your new functionality to one of the existing Invocations in this
directory or create a new file in this directory as per your needs.
**Note:** _All Invocations must be inside this directory for InvokeAI to
recognize them as valid Invocations._
## Creating A New Invocation
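As a rough sketch of the shape of a new Invocation: it is a class decorated with `@invocation` that declares its inputs as fields and returns an output from `invoke()`. The decorator usage below mirrors what appears elsewhere in this repository; `IntegerOutput` is assumed to be available from `invokeai.app.invocations.primitives`.
```py
from invokeai.app.invocations.baseinvocation import (
    BaseInvocation,
    InputField,
    InvocationContext,
    invocation,
)
from invokeai.app.invocations.primitives import IntegerOutput  # assumed output type


@invocation("double", title="Double", tags=["math"], category="math", version="1.0.0")
class DoubleInvocation(BaseInvocation):
    """Doubles an integer input."""

    value: int = InputField(default=0, description="The value to double")

    def invoke(self, context: InvocationContext) -> IntegerOutput:
        return IntegerOutput(value=self.value * 2)
```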

File diff suppressed because it is too large

View File

@@ -1,3 +1,12 @@
---
title: Textual Inversion Embeddings and LoRAs
---
# :material-library-shelves: Textual Inversions and LoRAs
With the advances in research, many new capabilities are available to customize the knowledge and understanding of novel concepts not originally contained in the base model.
## Using Textual Inversion Files
Textual inversion (TI) files are small models that customize the output of
@@ -52,4 +61,29 @@ files it finds there for compatible models. At startup you will see a message si
>> Current embedding manager terms: <HOI4-Leader>, <princess-knight>
```
To use these when generating, simply type the `<` key in your prompt to open the Textual Inversion WebUI and
select the embedding you'd like to use. This UI has type-ahead support, so you can easily find supported embeddings.
## Using LoRAs
LoRA files are models that customize the output of Stable Diffusion
image generation. Larger than embeddings, but much smaller than full
models, they augment SD with improved understanding of subjects and
artistic styles.
Unlike TI files, LoRAs do not introduce novel vocabulary into the
model's known tokens. Instead, LoRAs augment the model's weights that
are applied to generate imagery. LoRAs may be supplied with a
"trigger" word that they have been explicitly trained on, or may
simply apply their effect without being triggered.
LoRAs are typically stored in .safetensors files, which are the most
secure way to store and transmit these types of weights. You may
install any number of `.safetensors` LoRA files simply by copying them
into the `autoimport/lora` directory of the corresponding InvokeAI models
directory (usually `invokeai` in your home directory).
To use these when generating, open the LoRA menu item in the options
panel, select the LoRAs you want to apply and ensure that they have
the appropriate weight recommended by the model provider. Typically,
most LoRAs perform best at a weight of .75-1.

View File

@@ -1,53 +0,0 @@
---
title: LoRAs & LCM-LoRAs
---
# :material-library-shelves: LoRAs & LCM-LoRAs
With the advances in research, many new capabilities are available to customize the knowledge and understanding of novel concepts not originally contained in the base model.
## LoRAs
Low-Rank Adaptation (LoRA) files are models that customize the output of Stable Diffusion
image generation. Larger than embeddings, but much smaller than full
models, they augment SD with improved understanding of subjects and
artistic styles.
Unlike TI files, LoRAs do not introduce novel vocabulary into the
model's known tokens. Instead, LoRAs augment the model's weights that
are applied to generate imagery. LoRAs may be supplied with a
"trigger" word that they have been explicitly trained on, or may
simply apply their effect without being triggered.
LoRAs are typically stored in .safetensors files, which are the most
secure way to store and transmit these types of weights. You may
install any number of `.safetensors` LoRA files simply by copying them
into the `autoimport/lora` directory of the corresponding InvokeAI models
directory (usually `invokeai` in your home directory).
To use these when generating, open the LoRA menu item in the options
panel, select the LoRAs you want to apply and ensure that they have
the appropriate weight recommended by the model provider. Typically,
most LoRAs perform best at a weight of .75-1.
## LCM-LoRAs
Latent Consistency Models (LCMs) allow a reduced number of steps to be used when generating images with Stable Diffusion. They are created by distilling base models, producing models that require only a small number of steps to generate images. However, LCMs require that any fine-tune of a base model be distilled before it can be used as an LCM.
LCM-LoRAs are models that provide the benefit of LCMs but are able to be used as LoRAs and applied to any fine tune of a base model. LCM-LoRAs are created by training a small number of adapters, rather than distilling the entire fine-tuned base model. The resulting LoRA can be used the same way as a standard LoRA, but with a greatly reduced step count. This enables SDXL images to be generated up to 10x faster than without the use of LCM-LoRAs.
**Using LCM-LoRAs**
LCM-LoRAs are natively supported in InvokeAI throughout the application. To get started, install any diffusers format LCM-LoRAs using the model manager and select it in the LoRA field.
There are a number of parameter differences between generation with LCM-LoRAs and standard generation:
- When using LCM-LoRAs, the LoRA strength should be lower than if using a standard LoRA, with 0.35 recommended as a starting point.
- The LCM scheduler should be used for generation
- CFG-Scale should be reduced to ~1
- Steps should be reduced in the range of 4-8
Standard LoRAs can also be used alongside LCM-LoRAs, but will also require a lower strength, with 0.45 being recommended as a starting point.
More information can be found here: https://huggingface.co/blog/lcm_lora#fast-inference-with-sdxl-lcm-loras
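For readers scripting generation outside the InvokeAI UI, these settings translate roughly into `diffusers` code along the lines of the linked blog post. A sketch under those assumptions (the exact model IDs and the `cross_attention_kwargs` scale mechanism are assumptions to check against the blog):
```py
import torch
from diffusers import LCMScheduler, StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Swap in the LCM scheduler and load the LCM-LoRA adapter.
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl")

# Reduced steps, CFG ~1, and a lowered LoRA strength, per the notes above.
image = pipe(
    "a photo of a lighthouse at dawn",
    num_inference_steps=6,
    guidance_scale=1.0,
    cross_attention_kwargs={"scale": 0.35},
).images[0]
```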

View File

@@ -20,7 +20,7 @@ a single convenient digital artist-optimized user interface.
### * [Prompt Engineering](PROMPTS.md)
Get the images you want with the InvokeAI prompt engineering language.
### * The [LoRA, LyCORIS, LCM-LoRA Models](CONCEPTS.md)
### * The [LoRA, LyCORIS and Textual Inversion Models](CONCEPTS.md)
Add custom subjects and styles using a variety of fine-tuned models.
### * [ControlNet](CONTROLNET.md)
@@ -40,7 +40,7 @@ guide also covers optimizing models to load quickly.
Teach an old model new tricks. Merge 2-3 models together to create a
new model that combines characteristics of the originals.
### * [Textual Inversion](TEXTUAL_INVERSIONS.md)
### * [Textual Inversion](TRAINING.md)
Personalize models by adding your own style or subjects.
## Other Features

View File

@@ -1,43 +0,0 @@
# FAQs
**Where do I get started? How can I install Invoke?**
- You can download the latest installers [here](https://github.com/invoke-ai/InvokeAI/releases) - Note that any releases marked as *pre-release* are in a beta state. You may experience some issues, but we appreciate your help testing those! For stable/reliable installations, please install the **[Latest Release](https://github.com/invoke-ai/InvokeAI/releases/latest)**
**How can I download models? Can I use models I already have downloaded?**
- Models can be downloaded through the model manager, or through option [4] in the invoke.bat/invoke.sh launcher script. To download a model through the Model Manager, use the HuggingFace Repo ID by pressing the “Copy” button next to the repository name. Alternatively, to download a model from Civitai, use the download link in the Model Manager.
- Models that are already downloaded can be used by creating a symlink to the model location in the `autoimport` folder or by using the Model Manager's “Scan for Models” function.
**My images are taking a long time to generate. How can I speed up generation?**
- A common solution is to reduce the size of your RAM & VRAM cache to 0.25. This ensures your system has enough memory to generate images.
- Additionally, check the [hardware requirements](https://invoke-ai.github.io/InvokeAI/#hardware-requirements) to ensure that your system is capable of generating images.
- Lastly, double check your generations are happening on your GPU (if you have one). InvokeAI will log what is being used for generation upon startup.
**I've installed Python on Windows but the installer says it can't find it?**
- Ensure that you checked **'Add python.exe to PATH'** when installing Python. This can be found at the bottom of the Python Installer window. If you already have Python installed, this can be done with the modify / repair feature of the installer.
**I've installed everything successfully but I still get an error about Triton when starting Invoke?**
- This can be safely ignored. InvokeAI doesn't use Triton, but if you are on Linux and wish to dismiss the error, you can install Triton.
**I updated to 3.4.0 and now xFormers can't load C++/CUDA?**
- An issue occurred with your PyTorch update. Follow these steps to fix it:
1. Launch your invoke.bat / invoke.sh and select the option to open the developer console
2. Run: `pip install ".[xformers]" --upgrade --force-reinstall --extra-index-url https://download.pytorch.org/whl/cu121`
- If you run into an error with `typing_extensions`, re-open the developer console and run: `pip install -U typing-extensions`
**It says my pip is out of date - is that why my install isn't working?**
- An out-of-date pip won't cause an installation to fail. The cause of the error can likely be found above the message that says pip is out of date.
- If you saw that warning but the install went well, don't worry about it (but you can update pip afterwards if you'd like).
**How can I generate the exact same image that I found on the internet?**
Most example images with prompts that you'll find on the internet have been generated using different software, so you can't expect to get identical results. In order to reproduce an image, you need to replicate the exact settings and processing steps, including (but not limited to) the model, the positive and negative prompts, the seed, the sampler, the exact image size, any upscaling steps, etc.
**Where can I get more help?**
- Create an issue on [GitHub](https://github.com/invoke-ai/InvokeAI/issues) or post in the [#help channel](https://discord.com/channels/1020123559063990373/1149510134058471514) of the InvokeAI Discord

View File

@@ -101,13 +101,16 @@ Mac and Linux machines, and runs on GPU cards with as little as 4 GB of RAM.
<div align="center"><img src="assets/invoke-web-server-1.png" width=640></div>
!!! Note
This project is rapidly evolving. Please use the [Issues tab](https://github.com/invoke-ai/InvokeAI/issues) to report bugs and make feature requests. Be sure to use the provided templates, as doing so helps improve response time.
## :octicons-link-24: Quick Links
<div class="button-container">
<a href="installation/INSTALLATION"> <button class="button">Installation</button> </a>
<a href="features/"> <button class="button">Features</button> </a>
<a href="help/gettingStartedWithAI/"> <button class="button">Getting Started</button> </a>
<a href="help/FAQ/"> <button class="button">FAQ</button> </a>
<a href="contributing/CONTRIBUTING/"> <button class="button">Contributing</button> </a>
<a href="https://github.com/invoke-ai/InvokeAI/"> <button class="button">Code and Downloads</button> </a>
<a href="https://github.com/invoke-ai/InvokeAI/issues"> <button class="button">Bug Reports </button> </a>

View File

@@ -179,7 +179,7 @@ experimental versions later.
you will have the choice of CUDA (NVidia cards), ROCm (AMD cards),
or CPU (no graphics acceleration). On Windows, you'll have the
choice of CUDA vs CPU, and on Macs you'll be offered CPU only. When
you select CPU on M1/M2/M3 Macintoshes, you will get MPS-based
you select CPU on M1 or M2 Macintoshes, you will get MPS-based
graphics acceleration without installing additional drivers. If you
are unsure what GPU you are using, you can ask the installer to
guess.
@@ -471,7 +471,7 @@ Then type the following commands:
=== "NVIDIA System"
```bash
pip install torch torchvision --force-reinstall --extra-index-url https://download.pytorch.org/whl/cu121
pip install torch torchvision --force-reinstall --extra-index-url https://download.pytorch.org/whl/cu118
pip install xformers
```

View File

@@ -148,7 +148,7 @@ manager, please follow these steps:
=== "CUDA (NVidia)"
```bash
pip install "InvokeAI[xformers]" --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu121
pip install "InvokeAI[xformers]" --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu118
```
=== "ROCm (AMD)"
@@ -327,7 +327,7 @@ installation protocol (important!)
=== "CUDA (NVidia)"
```bash
pip install -e .[xformers] --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu121
pip install -e .[xformers] --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu118
```
=== "ROCm (AMD)"
@@ -375,7 +375,7 @@ you can do so using this unsupported recipe:
mkdir ~/invokeai
conda create -n invokeai python=3.10
conda activate invokeai
pip install InvokeAI[xformers] --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu121
pip install InvokeAI[xformers] --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu118
invokeai-configure --root ~/invokeai
invokeai --root ~/invokeai --web
```

View File

@@ -85,7 +85,7 @@ You can find which version you should download from [this link](https://docs.nvi
When installing torch and torchvision manually with `pip`, remember to provide
the argument `--extra-index-url
https://download.pytorch.org/whl/cu121` as described in the [Manual
https://download.pytorch.org/whl/cu118` as described in the [Manual
Installation Guide](020_INSTALL_MANUAL.md).
## :simple-amd: ROCm

View File

@@ -30,7 +30,7 @@ methodology for details on why running applications in such a stateless fashion
The container is configured for CUDA by default, but can be built to support AMD GPUs
by setting the `GPU_DRIVER=rocm` environment variable at Docker image build time.
Developers on Apple silicon (M1/M2/M3): You
Developers on Apple silicon (M1/M2): You
[can't access your GPU cores from Docker containers](https://github.com/pytorch/pytorch/issues/81224)
and performance is reduced compared with running it directly on macOS but for
development purposes it's fine. Once you're done with development tasks on your

View File

@@ -28,7 +28,7 @@ command line, then just be sure to activate its virtual environment.
Then run the following three commands:
```sh
pip install xformers~=0.0.22
pip install xformers~=0.0.19
pip install triton # WON'T WORK ON WINDOWS
python -m xformers.info
```
@@ -42,7 +42,7 @@ If all goes well, you'll see a report like the
following:
```sh
xFormers 0.0.22
xFormers 0.0.20
memory_efficient_attention.cutlassF: available
memory_efficient_attention.cutlassB: available
memory_efficient_attention.flshattF: available
@@ -59,14 +59,14 @@ swiglu.gemm_fused_operand_sum: available
swiglu.fused.p.cpp: available
is_triton_available: True
is_functorch_available: False
pytorch.version: 2.1.0+cu121
pytorch.version: 2.0.1+cu118
pytorch.cuda: available
gpu.compute_capability: 8.9
gpu.name: NVIDIA GeForce RTX 4070
build.info: available
build.cuda_version: 1108
build.python_version: 3.10.11
build.torch_version: 2.1.0+cu121
build.torch_version: 2.0.1+cu118
build.env.TORCH_CUDA_ARCH_LIST: 5.0+PTX 6.0 6.1 7.0 7.5 8.0 8.6
build.env.XFORMERS_BUILD_TYPE: Release
build.env.XFORMERS_ENABLE_DEBUG_ASSERTIONS: None
@@ -92,22 +92,33 @@ installed from source. These instructions were written for a system
running Ubuntu 22.04, but other Linux distributions should be able to
adapt this recipe.
#### 1. Install CUDA Toolkit 12.1
#### 1. Install CUDA Toolkit 11.8
You will need the CUDA developer's toolkit in order to compile and
install xFormers. **Do not try to install Ubuntu's nvidia-cuda-toolkit
package.** It is out of date and will cause conflicts between the NVIDIA
driver and binaries. Instead install the CUDA Toolkit package provided
by NVIDIA itself. Go to [CUDA Toolkit 12.1
Downloads](https://developer.nvidia.com/cuda-12-1-0-download-archive)
by NVIDIA itself. Go to [CUDA Toolkit 11.8
Downloads](https://developer.nvidia.com/cuda-11-8-0-download-archive)
and use the target selection wizard to choose your platform and Linux
distribution. Select an installer type of "runfile (local)" at the
last step.
This will provide you with a recipe for downloading and running an
install shell script that will install the toolkit and drivers.
install shell script that will install the toolkit and drivers. For
example, the install script recipe for Ubuntu 22.04 running on an
x86_64 system is:
#### 2. Confirm/Install pyTorch 2.1.0 with CUDA 12.1 support
```
wget https://developer.download.nvidia.com/compute/cuda/11.8.0/local_installers/cuda_11.8.0_520.61.05_linux.run
sudo sh cuda_11.8.0_520.61.05_linux.run
```
Rather than cut and paste this example, we recommend that you walk
through the toolkit wizard in order to get the most up-to-date
installer for your system.
#### 2. Confirm/Install pyTorch 2.0.1 with CUDA 11.8 support
If you are using InvokeAI 3.0.2 or higher, these will already be
installed. If not, you can check whether you have the needed libraries
@@ -122,7 +133,7 @@ Then run the command:
python -c 'exec("import torch\nprint(torch.__version__)")'
```
If it prints __2.1.0+cu121__ you're good. If not, you can install the
If it prints __2.0.1+cu118__ you're good. If not, you can install the
most up-to-date libraries with this command:
```sh

View File

@@ -32,7 +32,6 @@ To use a community workflow, download the `.json` node graph file and load i
+ [Size Stepper Nodes](#size-stepper-nodes)
+ [Text font to Image](#text-font-to-image)
+ [Thresholding](#thresholding)
+ [Unsharp Mask](#unsharp-mask)
+ [XY Image to Grid and Images to Grids nodes](#xy-image-to-grid-and-images-to-grids-nodes)
- [Example Node Template](#example-node-template)
- [Disclaimer](#disclaimer)
@@ -317,13 +316,6 @@ Highlights/Midtones/Shadows (with LUT blur enabled):
<img src="https://github.com/invoke-ai/InvokeAI/assets/34005131/0a440e43-697f-4d17-82ee-f287467df0a5" width="300" />
<img src="https://github.com/invoke-ai/InvokeAI/assets/34005131/0701fd0f-2ca7-4fe2-8613-2b52547bafce" width="300" />
--------------------------------
### Unsharp Mask
**Description:** Applies an unsharp mask filter to an image, preserving its alpha channel in the process.
**Node Link:** https://github.com/JPPhoto/unsharp-mask-node
--------------------------------
### XY Image to Grid and Images to Grids nodes

View File

@@ -7,12 +7,12 @@ To use them, right click on your desired workflow, follow the link to GitHub and
If you're interested in finding more workflows, check out the [#share-your-workflows](https://discord.com/channels/1020123559063990373/1130291608097661000) channel in the InvokeAI Discord.
* [SD1.5 / SD2 Text to Image](https://github.com/invoke-ai/InvokeAI/blob/main/docs/workflows/Text_to_Image.json)
* [SDXL Text to Image](https://github.com/invoke-ai/InvokeAI/blob/main/docs/workflows/SDXL_Text_to_Image.json)
* [SDXL Text to Image with Refiner](https://github.com/invoke-ai/InvokeAI/blob/main/docs/workflows/SDXL_w_Refiner_Text_to_Image.json)
* [Multi ControlNet (Canny & Depth)](https://github.com/invoke-ai/InvokeAI/blob/main/docs/workflows/Multi_ControlNet_Canny_and_Depth.json)
* [SDXL Text to Image](https://github.com/invoke-ai/InvokeAI/blob/docs/main/docs/workflows/SDXL_Text_to_Image.json)
* [SDXL Text to Image with Refiner](https://github.com/invoke-ai/InvokeAI/blob/docs/main/docs/workflows/SDXL_w_Refiner_Text_to_Image.json)
* [Multi ControlNet (Canny & Depth)](https://github.com/invoke-ai/InvokeAI/blob/docs/main/docs/workflows/Multi_ControlNet_Canny_and_Depth.json)
* [Tiled Upscaling with ControlNet](https://github.com/invoke-ai/InvokeAI/blob/main/docs/workflows/ESRGAN_img2img_upscale_w_Canny_ControlNet.json)
* [Prompt From File](https://github.com/invoke-ai/InvokeAI/blob/main/docs/workflows/Prompt_from_File.json)
* [Face Detailer with IP-Adapter & ControlNet](https://github.com/invoke-ai/InvokeAI/blob/main/docs/workflows/Face_Detailer_with_IP-Adapter_and_Canny.json)
* [Prompt From File](https://github.com/invoke-ai/InvokeAI/blob/docs/main/docs/workflows/Prompt_from_File.json)
* [Face Detailer with IP-Adapter & ControlNet](https://github.com/invoke-ai/InvokeAI/blob/docs/main/docs/workflows/Face_Detailer_with_IP-Adapter_and_Canny.json.json)
* [FaceMask](https://github.com/invoke-ai/InvokeAI/blob/main/docs/workflows/FaceMask.json)
* [FaceOff with 2x Face Scaling](https://github.com/invoke-ai/InvokeAI/blob/main/docs/workflows/FaceOff_FaceScale2x.json)
* [QR Code Monster](https://github.com/invoke-ai/InvokeAI/blob/main/docs/workflows/QR_Code_Monster.json)
* [QR Code Monster](https://github.com/invoke-ai/InvokeAI/blob/docs/main/docs/workflows/QR_Code_Monster.json)

View File

@@ -244,7 +244,7 @@ class InvokeAiInstance:
"numpy~=1.24.0", # choose versions that won't be uninstalled during phase 2
"urllib3~=1.26.0",
"requests~=2.28.0",
"torch==2.1.0",
"torch~=2.0.0",
"torchmetrics==0.11.4",
"torchvision>=0.14.1",
"--force-reinstall",
@@ -460,10 +460,10 @@ def get_torch_source() -> (Union[str, None], str):
url = "https://download.pytorch.org/whl/cpu"
if device == "cuda":
url = "https://download.pytorch.org/whl/cu121"
url = "https://download.pytorch.org/whl/cu118"
optional_modules = "[xformers,onnx-cuda]"
if device == "cuda_and_dml":
url = "https://download.pytorch.org/whl/cu121"
url = "https://download.pytorch.org/whl/cu118"
optional_modules = "[xformers,onnx-directml]"
# in all other cases, Torch wheels should be coming from PyPI as of Torch 1.13
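For orientation, `get_torch_source()` returns an optional wheel index URL plus an extras string that the installer appends to the package spec. A hedged sketch of how a caller might consume the pair (the real call site is outside this hunk, so this is illustrative only):
```py
url, optional_modules = get_torch_source()
cmd = ["pip", "install", f"InvokeAI{optional_modules}", "--use-pep517"]
if url is not None:
    # CUDA/CPU-specific wheels come from the PyTorch index rather than PyPI.
    cmd += ["--extra-index-url", url]
```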

View File

@@ -24,7 +24,6 @@ from ..services.item_storage.item_storage_sqlite import SqliteItemStorage
from ..services.latents_storage.latents_storage_disk import DiskLatentsStorage
from ..services.latents_storage.latents_storage_forward_cache import ForwardCacheLatentsStorage
from ..services.model_manager.model_manager_default import ModelManagerService
from ..services.model_records import ModelRecordServiceSQL
from ..services.names.names_default import SimpleNameService
from ..services.session_processor.session_processor_default import DefaultSessionProcessor
from ..services.session_queue.session_queue_sqlite import SqliteSessionQueue
@@ -86,7 +85,6 @@ class ApiDependencies:
invocation_cache = MemoryInvocationCache(max_cache_size=config.node_cache_size)
latents = ForwardCacheLatentsStorage(DiskLatentsStorage(f"{output_folder}/latents"))
model_manager = ModelManagerService(config, logger)
model_record_service = ModelRecordServiceSQL(db=db)
names = SimpleNameService()
performance_statistics = InvocationStatsService()
processor = DefaultInvocationProcessor()
@@ -113,7 +111,6 @@ class ApiDependencies:
latents=latents,
logger=logger,
model_manager=model_manager,
model_records=model_record_service,
names=names,
performance_statistics=performance_statistics,
processor=processor,

View File

@@ -1,164 +0,0 @@
# Copyright (c) 2023 Lincoln D. Stein
"""FastAPI route for model configuration records."""
from hashlib import sha1
from random import randbytes
from typing import List, Optional
from fastapi import Body, Path, Query, Response
from fastapi.routing import APIRouter
from pydantic import BaseModel, ConfigDict
from starlette.exceptions import HTTPException
from typing_extensions import Annotated
from invokeai.app.services.model_records import (
DuplicateModelException,
InvalidModelException,
UnknownModelException,
)
from invokeai.backend.model_manager.config import (
AnyModelConfig,
BaseModelType,
ModelType,
)
from ..dependencies import ApiDependencies
model_records_router = APIRouter(prefix="/v1/model/record", tags=["models"])
class ModelsList(BaseModel):
"""Return list of configs."""
models: list[AnyModelConfig]
model_config = ConfigDict(use_enum_values=True)
@model_records_router.get(
"/",
operation_id="list_model_records",
)
async def list_model_records(
base_models: Optional[List[BaseModelType]] = Query(default=None, description="Base models to include"),
model_type: Optional[ModelType] = Query(default=None, description="The type of model to get"),
) -> ModelsList:
"""Get a list of models."""
record_store = ApiDependencies.invoker.services.model_records
found_models: list[AnyModelConfig] = []
if base_models:
for base_model in base_models:
found_models.extend(record_store.search_by_attr(base_model=base_model, model_type=model_type))
else:
found_models.extend(record_store.search_by_attr(model_type=model_type))
return ModelsList(models=found_models)
@model_records_router.get(
"/i/{key}",
operation_id="get_model_record",
responses={
200: {"description": "Success"},
400: {"description": "Bad request"},
404: {"description": "The model could not be found"},
},
)
async def get_model_record(
key: str = Path(description="Key of the model record to fetch."),
) -> AnyModelConfig:
"""Get a model record"""
record_store = ApiDependencies.invoker.services.model_records
try:
return record_store.get_model(key)
except UnknownModelException as e:
raise HTTPException(status_code=404, detail=str(e))
@model_records_router.patch(
"/i/{key}",
operation_id="update_model_record",
responses={
200: {"description": "The model was updated successfully"},
400: {"description": "Bad request"},
404: {"description": "The model could not be found"},
409: {"description": "There is already a model corresponding to the new name"},
},
status_code=200,
response_model=AnyModelConfig,
)
async def update_model_record(
key: Annotated[str, Path(description="Unique key of model")],
info: Annotated[AnyModelConfig, Body(description="Model config", discriminator="type")],
) -> AnyModelConfig:
"""Update model contents with a new config. If the model name or base fields are changed, then the model is renamed."""
logger = ApiDependencies.invoker.services.logger
record_store = ApiDependencies.invoker.services.model_records
try:
model_response = record_store.update_model(key, config=info)
logger.info(f"Updated model: {key}")
except UnknownModelException as e:
raise HTTPException(status_code=404, detail=str(e))
except ValueError as e:
logger.error(str(e))
raise HTTPException(status_code=409, detail=str(e))
return model_response
@model_records_router.delete(
"/i/{key}",
operation_id="del_model_record",
responses={
204: {"description": "Model deleted successfully"},
404: {"description": "Model not found"},
},
status_code=204,
)
async def del_model_record(
key: str = Path(description="Unique key of model to remove from model registry."),
) -> Response:
"""Delete Model"""
logger = ApiDependencies.invoker.services.logger
try:
record_store = ApiDependencies.invoker.services.model_records
record_store.del_model(key)
logger.info(f"Deleted model: {key}")
return Response(status_code=204)
except UnknownModelException as e:
logger.error(str(e))
raise HTTPException(status_code=404, detail=str(e))
@model_records_router.post(
"/i/",
operation_id="add_model_record",
responses={
201: {"description": "The model added successfully"},
409: {"description": "There is already a model corresponding to this path or repo_id"},
415: {"description": "Unrecognized file/folder format"},
},
status_code=201,
)
async def add_model_record(
config: Annotated[AnyModelConfig, Body(description="Model config", discriminator="type")]
) -> AnyModelConfig:
"""
Add a model using the configuration information appropriate for its type.
"""
logger = ApiDependencies.invoker.services.logger
record_store = ApiDependencies.invoker.services.model_records
if config.key == "<NOKEY>":
config.key = sha1(randbytes(100)).hexdigest()
logger.info(f"Created model {config.key} for {config.name}")
try:
record_store.add_model(config.key, config)
except DuplicateModelException as e:
logger.error(str(e))
raise HTTPException(status_code=409, detail=str(e))
except InvalidModelException as e:
logger.error(str(e))
raise HTTPException(status_code=415)
# now fetch it out
return record_store.get_model(config.key)
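Since `api_app.py` (below) mounted this router under the `/api` prefix, the records endpoints lived at `/api/v1/model/record/...`. A hypothetical client sketch, assuming a server at the default local address and port:
```py
import requests

BASE = "http://127.0.0.1:9090/api/v1/model/record"  # assumed default host/port

# List all model records of a given type.
models = requests.get(f"{BASE}/", params={"model_type": "main"}).json()["models"]

# Fetch a single record by its unique key; a 404 means the key is unknown.
resp = requests.get(f"{BASE}/i/{models[0]['key']}")
resp.raise_for_status()
print(resp.json())
```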

View File

@@ -1,5 +1,6 @@
# Copyright (c) 2023 Kyle Schouviller (https://github.com/kyle0654), 2023 Kent Keirsey (https://github.com/hipsterusername), 2023 Lincoln D. Stein
import pathlib
from typing import Annotated, List, Literal, Optional, Union

View File

@@ -43,7 +43,6 @@ if True: # hack to make flake8 happy with imports coming after setting up the c
board_images,
boards,
images,
model_records,
models,
session_queue,
sessions,
@@ -107,7 +106,6 @@ app.include_router(sessions.session_router, prefix="/api")
app.include_router(utilities.utilities_router, prefix="/api")
app.include_router(models.models_router, prefix="/api")
app.include_router(model_records.model_records_router, prefix="/api")
app.include_router(images.images_router, prefix="/api")
app.include_router(boards.boards_router, prefix="/api")
app.include_router(board_images.board_images_router, prefix="/api")

View File

@@ -112,11 +112,10 @@ class CompelInvocation(BaseInvocation):
tokenizer,
ti_manager,
),
ModelPatcher.apply_clip_skip(text_encoder_info.context.model, self.clip.skipped_layers),
text_encoder_info as text_encoder,
# Apply the LoRA after text_encoder has been moved to its target device for faster patching.
ModelPatcher.apply_lora_text_encoder(text_encoder, _lora_loader()),
# Apply CLIP Skip after LoRA to prevent LoRA application from failing on skipped layers.
ModelPatcher.apply_clip_skip(text_encoder_info.context.model, self.clip.skipped_layers),
):
compel = Compel(
tokenizer=tokenizer,
@@ -235,11 +234,10 @@ class SDXLPromptInvocationBase:
tokenizer,
ti_manager,
),
ModelPatcher.apply_clip_skip(text_encoder_info.context.model, clip_field.skipped_layers),
text_encoder_info as text_encoder,
# Apply the LoRA after text_encoder has been moved to its target device for faster patching.
ModelPatcher.apply_lora(text_encoder, _lora_loader(), lora_prefix),
# Apply CLIP Skip after LoRA to prevent LoRA application from failing on skipped layers.
ModelPatcher.apply_clip_skip(text_encoder_info.context.model, clip_field.skipped_layers),
):
compel = Compel(
tokenizer=tokenizer,

View File

@@ -96,7 +96,7 @@ class ControlOutput(BaseInvocationOutput):
control: ControlField = OutputField(description=FieldDescriptions.control)
@invocation("controlnet", title="ControlNet", tags=["controlnet"], category="controlnet", version="1.1.0")
@invocation("controlnet", title="ControlNet", tags=["controlnet"], category="controlnet", version="1.0.0")
class ControlNetInvocation(BaseInvocation):
"""Collects ControlNet info to pass to other nodes"""
@@ -173,7 +173,7 @@ class ImageProcessorInvocation(BaseInvocation, WithMetadata, WithWorkflow):
title="Canny Processor",
tags=["controlnet", "canny"],
category="controlnet",
version="1.1.0",
version="1.0.0",
)
class CannyImageProcessorInvocation(ImageProcessorInvocation):
"""Canny edge detection for ControlNet"""
@@ -196,7 +196,7 @@ class CannyImageProcessorInvocation(ImageProcessorInvocation):
title="HED (softedge) Processor",
tags=["controlnet", "hed", "softedge"],
category="controlnet",
version="1.1.0",
version="1.0.0",
)
class HedImageProcessorInvocation(ImageProcessorInvocation):
"""Applies HED edge detection to image"""
@@ -225,7 +225,7 @@ class HedImageProcessorInvocation(ImageProcessorInvocation):
title="Lineart Processor",
tags=["controlnet", "lineart"],
category="controlnet",
version="1.1.0",
version="1.0.0",
)
class LineartImageProcessorInvocation(ImageProcessorInvocation):
"""Applies line art processing to image"""
@@ -247,7 +247,7 @@ class LineartImageProcessorInvocation(ImageProcessorInvocation):
title="Lineart Anime Processor",
tags=["controlnet", "lineart", "anime"],
category="controlnet",
version="1.1.0",
version="1.0.0",
)
class LineartAnimeImageProcessorInvocation(ImageProcessorInvocation):
"""Applies line art anime processing to image"""
@@ -270,7 +270,7 @@ class LineartAnimeImageProcessorInvocation(ImageProcessorInvocation):
title="Openpose Processor",
tags=["controlnet", "openpose", "pose"],
category="controlnet",
version="1.1.0",
version="1.0.0",
)
class OpenposeImageProcessorInvocation(ImageProcessorInvocation):
"""Applies Openpose processing to image"""
@@ -295,7 +295,7 @@ class OpenposeImageProcessorInvocation(ImageProcessorInvocation):
title="Midas Depth Processor",
tags=["controlnet", "midas"],
category="controlnet",
version="1.1.0",
version="1.0.0",
)
class MidasDepthImageProcessorInvocation(ImageProcessorInvocation):
"""Applies Midas depth processing to image"""
@@ -322,7 +322,7 @@ class MidasDepthImageProcessorInvocation(ImageProcessorInvocation):
title="Normal BAE Processor",
tags=["controlnet"],
category="controlnet",
version="1.1.0",
version="1.0.0",
)
class NormalbaeImageProcessorInvocation(ImageProcessorInvocation):
"""Applies NormalBae processing to image"""
@@ -339,7 +339,7 @@ class NormalbaeImageProcessorInvocation(ImageProcessorInvocation):
@invocation(
"mlsd_image_processor", title="MLSD Processor", tags=["controlnet", "mlsd"], category="controlnet", version="1.1.0"
"mlsd_image_processor", title="MLSD Processor", tags=["controlnet", "mlsd"], category="controlnet", version="1.0.0"
)
class MlsdImageProcessorInvocation(ImageProcessorInvocation):
"""Applies MLSD processing to image"""
@@ -362,7 +362,7 @@ class MlsdImageProcessorInvocation(ImageProcessorInvocation):
@invocation(
"pidi_image_processor", title="PIDI Processor", tags=["controlnet", "pidi"], category="controlnet", version="1.1.0"
"pidi_image_processor", title="PIDI Processor", tags=["controlnet", "pidi"], category="controlnet", version="1.0.0"
)
class PidiImageProcessorInvocation(ImageProcessorInvocation):
"""Applies PIDI processing to image"""
@@ -389,7 +389,7 @@ class PidiImageProcessorInvocation(ImageProcessorInvocation):
title="Content Shuffle Processor",
tags=["controlnet", "contentshuffle"],
category="controlnet",
version="1.1.0",
version="1.0.0",
)
class ContentShuffleImageProcessorInvocation(ImageProcessorInvocation):
"""Applies content shuffle processing to image"""
@@ -419,7 +419,7 @@ class ContentShuffleImageProcessorInvocation(ImageProcessorInvocation):
title="Zoe (Depth) Processor",
tags=["controlnet", "zoe", "depth"],
category="controlnet",
version="1.1.0",
version="1.0.0",
)
class ZoeDepthImageProcessorInvocation(ImageProcessorInvocation):
"""Applies Zoe depth processing to image"""
@@ -435,7 +435,7 @@ class ZoeDepthImageProcessorInvocation(ImageProcessorInvocation):
title="Mediapipe Face Processor",
tags=["controlnet", "mediapipe", "face"],
category="controlnet",
version="1.1.0",
version="1.0.0",
)
class MediapipeFaceProcessorInvocation(ImageProcessorInvocation):
"""Applies mediapipe face processing to image"""
@@ -458,7 +458,7 @@ class MediapipeFaceProcessorInvocation(ImageProcessorInvocation):
title="Leres (Depth) Processor",
tags=["controlnet", "leres", "depth"],
category="controlnet",
version="1.1.0",
version="1.0.0",
)
class LeresImageProcessorInvocation(ImageProcessorInvocation):
"""Applies leres processing to image"""
@@ -487,7 +487,7 @@ class LeresImageProcessorInvocation(ImageProcessorInvocation):
title="Tile Resample Processor",
tags=["controlnet", "tile"],
category="controlnet",
version="1.1.0",
version="1.0.0",
)
class TileResamplerProcessorInvocation(ImageProcessorInvocation):
"""Tile resampler processor"""
@@ -527,7 +527,7 @@ class TileResamplerProcessorInvocation(ImageProcessorInvocation):
title="Segment Anything Processor",
tags=["controlnet", "segmentanything"],
category="controlnet",
version="1.1.0",
version="1.0.0",
)
class SegmentAnythingProcessorInvocation(ImageProcessorInvocation):
"""Applies segment anything processing to image"""
@@ -569,7 +569,7 @@ class SamDetectorReproducibleColors(SamDetector):
title="Color Map Processor",
tags=["controlnet"],
category="controlnet",
version="1.1.0",
version="1.0.0",
)
class ColorMapImageProcessorInvocation(ImageProcessorInvocation):
"""Generates a color map from the provided image"""

View File

@@ -11,7 +11,7 @@ from invokeai.app.services.image_records.image_records_common import ImageCatego
from .baseinvocation import BaseInvocation, InputField, InvocationContext, WithMetadata, WithWorkflow, invocation
@invocation("cv_inpaint", title="OpenCV Inpaint", tags=["opencv", "inpaint"], category="inpaint", version="1.1.0")
@invocation("cv_inpaint", title="OpenCV Inpaint", tags=["opencv", "inpaint"], category="inpaint", version="1.0.0")
class CvInpaintInvocation(BaseInvocation, WithMetadata, WithWorkflow):
"""Simple inpaint using opencv."""

View File

@@ -438,7 +438,7 @@ def get_faces_list(
return all_faces
@invocation("face_off", title="FaceOff", tags=["image", "faceoff", "face", "mask"], category="image", version="1.1.0")
@invocation("face_off", title="FaceOff", tags=["image", "faceoff", "face", "mask"], category="image", version="1.0.2")
class FaceOffInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Bound, extract, and mask a face from an image using MediaPipe detection"""
@@ -532,7 +532,7 @@ class FaceOffInvocation(BaseInvocation, WithWorkflow, WithMetadata):
return output
@invocation("face_mask_detection", title="FaceMask", tags=["image", "face", "mask"], category="image", version="1.1.0")
@invocation("face_mask_detection", title="FaceMask", tags=["image", "face", "mask"], category="image", version="1.0.2")
class FaceMaskInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Face mask creation using mediapipe face detection"""
@@ -650,7 +650,7 @@ class FaceMaskInvocation(BaseInvocation, WithWorkflow, WithMetadata):
@invocation(
"face_identifier", title="FaceIdentifier", tags=["image", "face", "identifier"], category="image", version="1.1.0"
"face_identifier", title="FaceIdentifier", tags=["image", "face", "identifier"], category="image", version="1.0.2"
)
class FaceIdentifierInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Outputs an image with detected face IDs printed on each face. For use with other FaceTools."""

View File

@@ -8,7 +8,7 @@ import numpy
from PIL import Image, ImageChops, ImageFilter, ImageOps
from invokeai.app.invocations.primitives import BoardField, ColorField, ImageField, ImageOutput
from invokeai.app.services.image_records.image_records_common import ImageCategory, ImageRecordChanges, ResourceOrigin
from invokeai.app.services.image_records.image_records_common import ImageCategory, ResourceOrigin
from invokeai.app.shared.fields import FieldDescriptions
from invokeai.backend.image_util.invisible_watermark import InvisibleWatermark
from invokeai.backend.image_util.safety_checker import SafetyChecker
@@ -36,7 +36,7 @@ class ShowImageInvocation(BaseInvocation):
)
@invocation("blank_image", title="Blank Image", tags=["image"], category="image", version="1.1.0")
@invocation("blank_image", title="Blank Image", tags=["image"], category="image", version="1.0.0")
class BlankImageInvocation(BaseInvocation, WithMetadata, WithWorkflow):
"""Creates a blank image and forwards it to the pipeline"""
@@ -66,7 +66,7 @@ class BlankImageInvocation(BaseInvocation, WithMetadata, WithWorkflow):
)
@invocation("img_crop", title="Crop Image", tags=["image", "crop"], category="image", version="1.1.0")
@invocation("img_crop", title="Crop Image", tags=["image", "crop"], category="image", version="1.0.0")
class ImageCropInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Crops an image to a specified box. The box can be outside of the image."""
@@ -100,7 +100,7 @@ class ImageCropInvocation(BaseInvocation, WithWorkflow, WithMetadata):
)
@invocation("img_paste", title="Paste Image", tags=["image", "paste"], category="image", version="1.1.0")
@invocation("img_paste", title="Paste Image", tags=["image", "paste"], category="image", version="1.0.1")
class ImagePasteInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Pastes an image into another image."""
@@ -154,7 +154,7 @@ class ImagePasteInvocation(BaseInvocation, WithWorkflow, WithMetadata):
)
@invocation("tomask", title="Mask from Alpha", tags=["image", "mask"], category="image", version="1.1.0")
@invocation("tomask", title="Mask from Alpha", tags=["image", "mask"], category="image", version="1.0.0")
class MaskFromAlphaInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Extracts the alpha channel of an image as a mask."""
@@ -186,7 +186,7 @@ class MaskFromAlphaInvocation(BaseInvocation, WithWorkflow, WithMetadata):
)
@invocation("img_mul", title="Multiply Images", tags=["image", "multiply"], category="image", version="1.1.0")
@invocation("img_mul", title="Multiply Images", tags=["image", "multiply"], category="image", version="1.0.0")
class ImageMultiplyInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Multiplies two images together using `PIL.ImageChops.multiply()`."""
@@ -220,7 +220,7 @@ class ImageMultiplyInvocation(BaseInvocation, WithWorkflow, WithMetadata):
IMAGE_CHANNELS = Literal["A", "R", "G", "B"]
@invocation("img_chan", title="Extract Image Channel", tags=["image", "channel"], category="image", version="1.1.0")
@invocation("img_chan", title="Extract Image Channel", tags=["image", "channel"], category="image", version="1.0.0")
class ImageChannelInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Gets a channel from an image."""
@@ -253,7 +253,7 @@ class ImageChannelInvocation(BaseInvocation, WithWorkflow, WithMetadata):
IMAGE_MODES = Literal["L", "RGB", "RGBA", "CMYK", "YCbCr", "LAB", "HSV", "I", "F"]
@invocation("img_conv", title="Convert Image Mode", tags=["image", "convert"], category="image", version="1.1.0")
@invocation("img_conv", title="Convert Image Mode", tags=["image", "convert"], category="image", version="1.0.0")
class ImageConvertInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Converts an image to a different mode."""
@@ -283,7 +283,7 @@ class ImageConvertInvocation(BaseInvocation, WithWorkflow, WithMetadata):
)
@invocation("img_blur", title="Blur Image", tags=["image", "blur"], category="image", version="1.1.0")
@invocation("img_blur", title="Blur Image", tags=["image", "blur"], category="image", version="1.0.0")
class ImageBlurInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Blurs an image"""
@@ -338,7 +338,7 @@ PIL_RESAMPLING_MAP = {
}
@invocation("img_resize", title="Resize Image", tags=["image", "resize"], category="image", version="1.1.0")
@invocation("img_resize", title="Resize Image", tags=["image", "resize"], category="image", version="1.0.0")
class ImageResizeInvocation(BaseInvocation, WithMetadata, WithWorkflow):
"""Resizes an image to specific dimensions"""
@@ -375,7 +375,7 @@ class ImageResizeInvocation(BaseInvocation, WithMetadata, WithWorkflow):
)
@invocation("img_scale", title="Scale Image", tags=["image", "scale"], category="image", version="1.1.0")
@invocation("img_scale", title="Scale Image", tags=["image", "scale"], category="image", version="1.0.0")
class ImageScaleInvocation(BaseInvocation, WithMetadata, WithWorkflow):
"""Scales an image by a factor"""
@@ -417,7 +417,7 @@ class ImageScaleInvocation(BaseInvocation, WithMetadata, WithWorkflow):
)
@invocation("img_lerp", title="Lerp Image", tags=["image", "lerp"], category="image", version="1.1.0")
@invocation("img_lerp", title="Lerp Image", tags=["image", "lerp"], category="image", version="1.0.0")
class ImageLerpInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Linear interpolation of all pixels of an image"""
@@ -451,7 +451,7 @@ class ImageLerpInvocation(BaseInvocation, WithWorkflow, WithMetadata):
)
@invocation("img_ilerp", title="Inverse Lerp Image", tags=["image", "ilerp"], category="image", version="1.1.0")
@invocation("img_ilerp", title="Inverse Lerp Image", tags=["image", "ilerp"], category="image", version="1.0.0")
class ImageInverseLerpInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Inverse linear interpolation of all pixels of an image"""
@@ -485,7 +485,7 @@ class ImageInverseLerpInvocation(BaseInvocation, WithWorkflow, WithMetadata):
)
@invocation("img_nsfw", title="Blur NSFW Image", tags=["image", "nsfw"], category="image", version="1.1.0")
@invocation("img_nsfw", title="Blur NSFW Image", tags=["image", "nsfw"], category="image", version="1.0.0")
class ImageNSFWBlurInvocation(BaseInvocation, WithMetadata, WithWorkflow):
"""Add blur to NSFW-flagged images"""
@@ -532,7 +532,7 @@ class ImageNSFWBlurInvocation(BaseInvocation, WithMetadata, WithWorkflow):
title="Add Invisible Watermark",
tags=["image", "watermark"],
category="image",
version="1.1.0",
version="1.0.0",
)
class ImageWatermarkInvocation(BaseInvocation, WithMetadata, WithWorkflow):
"""Add an invisible watermark to an image"""
@@ -561,7 +561,7 @@ class ImageWatermarkInvocation(BaseInvocation, WithMetadata, WithWorkflow):
)
@invocation("mask_edge", title="Mask Edge", tags=["image", "mask", "inpaint"], category="image", version="1.1.0")
@invocation("mask_edge", title="Mask Edge", tags=["image", "mask", "inpaint"], category="image", version="1.0.0")
class MaskEdgeInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Applies an edge mask to an image"""
@@ -612,7 +612,7 @@ class MaskEdgeInvocation(BaseInvocation, WithWorkflow, WithMetadata):
title="Combine Masks",
tags=["image", "mask", "multiply"],
category="image",
version="1.1.0",
version="1.0.0",
)
class MaskCombineInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Combine two masks together by multiplying them using `PIL.ImageChops.multiply()`."""
@@ -644,7 +644,7 @@ class MaskCombineInvocation(BaseInvocation, WithWorkflow, WithMetadata):
)
@invocation("color_correct", title="Color Correct", tags=["image", "color"], category="image", version="1.1.0")
@invocation("color_correct", title="Color Correct", tags=["image", "color"], category="image", version="1.0.0")
class ColorCorrectInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""
Shifts the colors of a target image to match the reference image, optionally
@@ -755,7 +755,7 @@ class ColorCorrectInvocation(BaseInvocation, WithWorkflow, WithMetadata):
)
@invocation("img_hue_adjust", title="Adjust Image Hue", tags=["image", "hue"], category="image", version="1.1.0")
@invocation("img_hue_adjust", title="Adjust Image Hue", tags=["image", "hue"], category="image", version="1.0.0")
class ImageHueAdjustmentInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Adjusts the Hue of an image."""
@@ -858,7 +858,7 @@ CHANNEL_FORMATS = {
"value",
],
category="image",
version="1.1.0",
version="1.0.0",
)
class ImageChannelOffsetInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Add or subtract a value from a specific color channel of an image."""
@@ -929,7 +929,7 @@ class ImageChannelOffsetInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"value",
],
category="image",
version="1.1.0",
version="1.0.0",
)
class ImageChannelMultiplyInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Scale a specific color channel of an image."""
@@ -988,7 +988,7 @@ class ImageChannelMultiplyInvocation(BaseInvocation, WithWorkflow, WithMetadata)
title="Save Image",
tags=["primitives", "image"],
category="primitives",
version="1.1.0",
version="1.0.1",
use_cache=False,
)
class SaveImageInvocation(BaseInvocation, WithWorkflow, WithMetadata):
@@ -1017,35 +1017,3 @@ class SaveImageInvocation(BaseInvocation, WithWorkflow, WithMetadata):
width=image_dto.width,
height=image_dto.height,
)
@invocation(
"linear_ui_output",
title="Linear UI Image Output",
tags=["primitives", "image"],
category="primitives",
version="1.0.1",
use_cache=False,
)
class LinearUIOutputInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Handles Linear UI Image Outputting tasks."""
image: ImageField = InputField(description=FieldDescriptions.image)
board: Optional[BoardField] = InputField(default=None, description=FieldDescriptions.board, input=Input.Direct)
def invoke(self, context: InvocationContext) -> ImageOutput:
image_dto = context.services.images.get_dto(self.image.image_name)
if self.board:
context.services.board_images.add_image_to_board(self.board.board_id, self.image.image_name)
if image_dto.is_intermediate != self.is_intermediate:
context.services.images.update(
self.image.image_name, changes=ImageRecordChanges(is_intermediate=self.is_intermediate)
)
return ImageOutput(
image=ImageField(image_name=self.image.image_name),
width=image_dto.width,
height=image_dto.height,
)

View File

@@ -118,7 +118,7 @@ def tile_fill_missing(im: Image.Image, tile_size: int = 16, seed: Optional[int]
return si
@invocation("infill_rgba", title="Solid Color Infill", tags=["image", "inpaint"], category="inpaint", version="1.1.0")
@invocation("infill_rgba", title="Solid Color Infill", tags=["image", "inpaint"], category="inpaint", version="1.0.0")
class InfillColorInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Infills transparent areas of an image with a solid color"""
@@ -154,7 +154,7 @@ class InfillColorInvocation(BaseInvocation, WithWorkflow, WithMetadata):
)
@invocation("infill_tile", title="Tile Infill", tags=["image", "inpaint"], category="inpaint", version="1.1.0")
@invocation("infill_tile", title="Tile Infill", tags=["image", "inpaint"], category="inpaint", version="1.0.0")
class InfillTileInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Infills transparent areas of an image with tiles of the image"""
@@ -192,7 +192,7 @@ class InfillTileInvocation(BaseInvocation, WithWorkflow, WithMetadata):
@invocation(
"infill_patchmatch", title="PatchMatch Infill", tags=["image", "inpaint"], category="inpaint", version="1.1.0"
"infill_patchmatch", title="PatchMatch Infill", tags=["image", "inpaint"], category="inpaint", version="1.0.0"
)
class InfillPatchMatchInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Infills transparent areas of an image using the PatchMatch algorithm"""
@@ -245,7 +245,7 @@ class InfillPatchMatchInvocation(BaseInvocation, WithWorkflow, WithMetadata):
)
@invocation("infill_lama", title="LaMa Infill", tags=["image", "inpaint"], category="inpaint", version="1.1.0")
@invocation("infill_lama", title="LaMa Infill", tags=["image", "inpaint"], category="inpaint", version="1.0.0")
class LaMaInfillInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Infills transparent areas of an image using the LaMa model"""
@@ -274,7 +274,7 @@ class LaMaInfillInvocation(BaseInvocation, WithWorkflow, WithMetadata):
)
@invocation("infill_cv2", title="CV2 Infill", tags=["image", "inpaint"], category="inpaint", version="1.1.0")
@invocation("infill_cv2", title="CV2 Infill", tags=["image", "inpaint"], category="inpaint")
class CV2InfillInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Infills transparent areas of an image using OpenCV Inpainting"""

View File

@@ -706,6 +706,7 @@ class DenoiseLatentsInvocation(BaseInvocation):
)
with (
ExitStack() as exit_stack,
ModelPatcher.apply_lora_unet(unet_info.context.model, _lora_loader()),
ModelPatcher.apply_freeu(unet_info.context.model, self.unet.freeu_config),
set_seamless(unet_info.context.model, self.unet.seamless_axes),
unet_info as unet,
@@ -789,7 +790,7 @@ class DenoiseLatentsInvocation(BaseInvocation):
title="Latents to Image",
tags=["latents", "image", "vae", "l2i"],
category="latents",
version="1.1.0",
version="1.0.0",
)
class LatentsToImageInvocation(BaseInvocation, WithMetadata, WithWorkflow):
"""Generates an image from latents."""

View File

@@ -112,7 +112,7 @@ GENERATION_MODES = Literal[
]
@invocation("core_metadata", title="Core Metadata", tags=["metadata"], category="metadata", version="1.0.1")
@invocation("core_metadata", title="Core Metadata", tags=["metadata"], category="metadata", version="1.0.0")
class CoreMetadataInvocation(BaseInvocation):
"""Collects core generation metadata into a MetadataField"""
@@ -160,7 +160,7 @@ class CoreMetadataInvocation(BaseInvocation):
)
# High resolution fix metadata.
hrf_enabled: Optional[bool] = InputField(
hrf_enabled: Optional[float] = InputField(
default=None,
description="Whether or not high resolution fix was enabled.",
)

View File

@@ -326,7 +326,7 @@ class ONNXTextToLatentsInvocation(BaseInvocation):
title="ONNX Latents to Image",
tags=["latents", "image", "vae", "onnx"],
category="image",
version="1.1.0",
version="1.0.0",
)
class ONNXLatentsToImageInvocation(BaseInvocation, WithMetadata, WithWorkflow):
"""Generates an image from latents."""

View File

@@ -29,7 +29,7 @@ if choose_torch_device() == torch.device("mps"):
from torch import mps
@invocation("esrgan", title="Upscale (RealESRGAN)", tags=["esrgan", "upscale"], category="esrgan", version="1.2.0")
@invocation("esrgan", title="Upscale (RealESRGAN)", tags=["esrgan", "upscale"], category="esrgan", version="1.1.0")
class ESRGANInvocation(BaseInvocation, WithWorkflow, WithMetadata):
"""Upscales an image using RealESRGAN."""

View File

@@ -22,7 +22,6 @@ if TYPE_CHECKING:
from .item_storage.item_storage_base import ItemStorageABC
from .latents_storage.latents_storage_base import LatentsStorageBase
from .model_manager.model_manager_base import ModelManagerServiceBase
from .model_records import ModelRecordServiceBase
from .names.names_base import NameServiceBase
from .session_processor.session_processor_base import SessionProcessorBase
from .session_queue.session_queue_base import SessionQueueBase
@@ -50,7 +49,6 @@ class InvocationServices:
latents: "LatentsStorageBase"
logger: "Logger"
model_manager: "ModelManagerServiceBase"
model_records: "ModelRecordServiceBase"
processor: "InvocationProcessorABC"
performance_statistics: "InvocationStatsServiceBase"
queue: "InvocationQueueABC"
@@ -78,7 +76,6 @@ class InvocationServices:
latents: "LatentsStorageBase",
logger: "Logger",
model_manager: "ModelManagerServiceBase",
model_records: "ModelRecordServiceBase",
processor: "InvocationProcessorABC",
performance_statistics: "InvocationStatsServiceBase",
queue: "InvocationQueueABC",
@@ -104,7 +101,6 @@ class InvocationServices:
self.latents = latents
self.logger = logger
self.model_manager = model_manager
self.model_records = model_records
self.processor = processor
self.performance_statistics = performance_statistics
self.queue = queue

View File

@@ -1,8 +0,0 @@
"""Init file for model record services."""
from .model_records_base import ( # noqa F401
DuplicateModelException,
InvalidModelException,
ModelRecordServiceBase,
UnknownModelException,
)
from .model_records_sql import ModelRecordServiceSQL # noqa F401

View File

@@ -1,169 +0,0 @@
# Copyright (c) 2023 Lincoln D. Stein and the InvokeAI Development Team
"""
Abstract base class for storing and retrieving model configuration records.
"""
from abc import ABC, abstractmethod
from pathlib import Path
from typing import List, Optional, Union
from invokeai.backend.model_manager.config import AnyModelConfig, BaseModelType, ModelType
# should match the InvokeAI version when this is first released.
CONFIG_FILE_VERSION = "3.2.0"
class DuplicateModelException(Exception):
"""Raised on an attempt to add a model with the same key twice."""
class InvalidModelException(Exception):
"""Raised when an invalid model is detected."""
class UnknownModelException(Exception):
"""Raised on an attempt to fetch or delete a model with a nonexistent key."""
class ConfigFileVersionMismatchException(Exception):
"""Raised on an attempt to open a config with an incompatible version."""
class ModelRecordServiceBase(ABC):
"""Abstract base class for storage and retrieval of model configs."""
@property
@abstractmethod
def version(self) -> str:
"""Return the config file/database schema version."""
pass
@abstractmethod
def add_model(self, key: str, config: Union[dict, AnyModelConfig]) -> AnyModelConfig:
"""
Add a model to the database.
:param key: Unique key for the model
:param config: Model configuration record, either a dict with the
required fields or a ModelConfigBase instance.
Can raise DuplicateModelException and InvalidModelConfigException exceptions.
"""
pass
@abstractmethod
def del_model(self, key: str) -> None:
"""
Delete a model.
:param key: Unique key for the model to be deleted
Can raise an UnknownModelException
"""
pass
@abstractmethod
def update_model(self, key: str, config: Union[dict, AnyModelConfig]) -> AnyModelConfig:
"""
Update the model, returning the updated version.
:param key: Unique key for the model to be updated
:param config: Model configuration record. Either a dict with the
required fields, or a ModelConfigBase instance.
"""
pass
@abstractmethod
def get_model(self, key: str) -> AnyModelConfig:
"""
Retrieve the configuration for the indicated model.
:param key: Key of model config to be fetched.
Exceptions: UnknownModelException
"""
pass
@abstractmethod
def exists(self, key: str) -> bool:
"""
Return True if a model with the indicated key exists in the database.
:param key: Unique key of the model to be checked
"""
pass
@abstractmethod
def search_by_path(
self,
path: Union[str, Path],
) -> List[AnyModelConfig]:
"""Return the model(s) having the indicated path."""
pass
@abstractmethod
def search_by_hash(
self,
hash: str,
) -> List[AnyModelConfig]:
"""Return the model(s) having the indicated original hash."""
pass
@abstractmethod
def search_by_attr(
self,
model_name: Optional[str] = None,
base_model: Optional[BaseModelType] = None,
model_type: Optional[ModelType] = None,
) -> List[AnyModelConfig]:
"""
Return models matching name, base and/or type.
:param model_name: Filter by name of model (optional)
:param base_model: Filter by base model (optional)
:param model_type: Filter by type of model (optional)
If none of the optional filters are passed, will return all
models in the database.
"""
pass
def all_models(self) -> List[AnyModelConfig]:
"""Return all the model configs in the database."""
return self.search_by_attr()
def model_info_by_name(self, model_name: str, base_model: BaseModelType, model_type: ModelType) -> AnyModelConfig:
"""
Return information about a single model using its name, base type and model type.
If more than one model matches, raises a DuplicateModelException.
If no model matches, raises an UnknownModelException
"""
model_configs = self.search_by_attr(model_name=model_name, base_model=base_model, model_type=model_type)
if len(model_configs) > 1:
raise DuplicateModelException(
f"More than one model matched the search criteria: base_model='{base_model}', model_type='{model_type}', model_name='{model_name}'."
)
if len(model_configs) == 0:
raise UnknownModelException(
f"More than one model matched the search criteria: base_model='{base_model}', model_type='{model_type}', model_name='{model_name}'."
)
return model_configs[0]
def rename_model(
self,
key: str,
new_name: str,
) -> AnyModelConfig:
"""
Rename the indicated model. Just a special case of update_model().
In some implementations, renaming the model may involve changing where
it is stored on the filesystem. So this is broken out.
:param key: Model key
:param new_name: New name for model
"""
config = self.get_model(key)
config.name = new_name
return self.update_model(key, config)
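
For orientation, here is a minimal usage sketch of this API, driven through the concrete SQL implementation shown below; the `sqlite_db` handle and the model attributes are assumed for illustration:

```py
from invokeai.app.services.model_records import ModelRecordServiceSQL
from invokeai.backend.model_manager.config import BaseModelType, ModelType

store = ModelRecordServiceSQL(sqlite_db)  # sqlite_db: an existing SqliteDatabase (assumed)

# add_model() validates the raw dict into a config object and returns it
store.add_model(
    "key1",
    {
        "path": "/tmp/pokemon.bin",  # illustrative values only
        "name": "old name",
        "base": "sd-1",
        "type": "embedding",
        "format": "embedding_file",
    },
)

# the concrete helpers are built on the abstract primitives above
info = store.model_info_by_name("old name", BaseModelType.StableDiffusion1, ModelType.TextualInversion)
renamed = store.rename_model("key1", "new name")  # calls update_model() under the hood
assert renamed.name == "new name"
```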


@@ -1,397 +0,0 @@
# Copyright (c) 2023 Lincoln D. Stein and the InvokeAI Development Team
"""
SQL Implementation of the ModelRecordServiceBase API
Typical usage:
from invokeai.app.services.model_records import ModelRecordServiceSQL
store = ModelRecordServiceSQL(sqlite_db)
config = dict(
path='/tmp/pokemon.bin',
name='old name',
base_model='sd-1',
type='embedding',
format='embedding_file',
)
# adding - the key becomes the model's "key" field
store.add_model('key1', config)
# updating
config.name='new name'
store.update_model('key1', config)
# checking for existence
if store.exists('key1'):
print("yes")
# fetching config
new_config = store.get_model('key1')
print(new_config.name, new_config.base)
assert new_config.key == 'key1'
# deleting
store.del_model('key1')
# searching
configs = store.search_by_path(path='/tmp/pokemon.bin')
configs = store.search_by_hash('750a499f35e43b7e1b4d15c207aa2f01')
configs = store.search_by_attr(base_model='sd-2', model_type='main')
"""
import json
import sqlite3
from pathlib import Path
from typing import List, Optional, Union
from invokeai.backend.model_manager.config import (
AnyModelConfig,
BaseModelType,
ModelConfigBase,
ModelConfigFactory,
ModelType,
)
from ..shared.sqlite import SqliteDatabase
from .model_records_base import (
CONFIG_FILE_VERSION,
DuplicateModelException,
ModelRecordServiceBase,
UnknownModelException,
)
class ModelRecordServiceSQL(ModelRecordServiceBase):
"""Implementation of the ModelConfigStore ABC using a SQL database."""
_db: SqliteDatabase
_cursor: sqlite3.Cursor
def __init__(self, db: SqliteDatabase):
"""
Initialize a new object from a preexisting SqliteDatabase, which wraps the sqlite3 connection and threading lock.
:param db: SqliteDatabase object
"""
super().__init__()
self._db = db
self._cursor = self._db.conn.cursor()
with self._db.lock:
# Enable foreign keys
self._db.conn.execute("PRAGMA foreign_keys = ON;")
self._create_tables()
self._db.conn.commit()
assert (
str(self.version) == CONFIG_FILE_VERSION
), f"Model config version {self.version} does not match expected version {CONFIG_FILE_VERSION}"
def _create_tables(self) -> None:
"""Create sqlite3 tables."""
# model_config table breaks out the fields that are common to all config objects
# and puts class-specific ones in a serialized json object
self._cursor.execute(
"""--sql
CREATE TABLE IF NOT EXISTS model_config (
id TEXT NOT NULL PRIMARY KEY,
-- The next 3 fields are enums in python, unrestricted string here
base TEXT NOT NULL,
type TEXT NOT NULL,
name TEXT NOT NULL,
path TEXT NOT NULL,
original_hash TEXT, -- could be null
-- Serialized JSON representation of the whole config object,
-- which will contain additional fields from subclasses
config TEXT NOT NULL,
created_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
-- Updated via trigger
updated_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
-- unique constraint on combo of name, base and type
UNIQUE(name, base, type)
);
"""
)
# metadata table
self._cursor.execute(
"""--sql
CREATE TABLE IF NOT EXISTS model_manager_metadata (
metadata_key TEXT NOT NULL PRIMARY KEY,
metadata_value TEXT NOT NULL
);
"""
)
# Add trigger for `updated_at`.
self._cursor.execute(
"""--sql
CREATE TRIGGER IF NOT EXISTS model_config_updated_at
AFTER UPDATE
ON model_config FOR EACH ROW
BEGIN
UPDATE model_config SET updated_at = STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')
WHERE id = old.id;
END;
"""
)
# Add indexes for searchable fields
for stmt in [
"CREATE INDEX IF NOT EXISTS base_index ON model_config(base);",
"CREATE INDEX IF NOT EXISTS type_index ON model_config(type);",
"CREATE INDEX IF NOT EXISTS name_index ON model_config(name);",
"CREATE UNIQUE INDEX IF NOT EXISTS path_index ON model_config(path);",
]:
self._cursor.execute(stmt)
# Add our version to the metadata table
self._cursor.execute(
"""--sql
INSERT OR IGNORE into model_manager_metadata (
metadata_key,
metadata_value
)
VALUES (?,?);
""",
("version", CONFIG_FILE_VERSION),
)
def add_model(self, key: str, config: Union[dict, ModelConfigBase]) -> AnyModelConfig:
"""
Add a model to the database.
:param key: Unique key for the model
:param config: Model configuration record, either a dict with the
required fields or a ModelConfigBase instance.
Can raise DuplicateModelException and InvalidModelConfigException exceptions.
"""
record = ModelConfigFactory.make_config(config, key=key) # ensure it is a valid config object.
json_serialized = record.model_dump_json() # and turn it into a json string.
with self._db.lock:
try:
self._cursor.execute(
"""--sql
INSERT INTO model_config (
id,
base,
type,
name,
path,
original_hash,
config
)
VALUES (?,?,?,?,?,?,?);
""",
(
key,
record.base,
record.type,
record.name,
record.path,
record.original_hash,
json_serialized,
),
)
self._db.conn.commit()
except sqlite3.IntegrityError as e:
self._db.conn.rollback()
if "UNIQUE constraint failed" in str(e):
if "model_config.path" in str(e):
msg = f"A model with path '{record.path}' is already installed"
elif "model_config.name" in str(e):
msg = f"A model with name='{record.name}', type='{record.type}', base='{record.base}' is already installed"
else:
msg = f"A model with key '{key}' is already installed"
raise DuplicateModelException(msg) from e
else:
raise e
except sqlite3.Error as e:
self._db.conn.rollback()
raise e
return self.get_model(key)
@property
def version(self) -> str:
"""Return the version of the database schema."""
with self._db.lock:
self._cursor.execute(
"""--sql
SELECT metadata_value FROM model_manager_metadata
WHERE metadata_key=?;
""",
("version",),
)
rows = self._cursor.fetchone()
if not rows:
raise KeyError("Models database does not have metadata key 'version'")
return rows[0]
def del_model(self, key: str) -> None:
"""
Delete a model.
:param key: Unique key for the model to be deleted
Can raise an UnknownModelException
"""
with self._db.lock:
try:
self._cursor.execute(
"""--sql
DELETE FROM model_config
WHERE id=?;
""",
(key,),
)
if self._cursor.rowcount == 0:
raise UnknownModelException("model not found")
self._db.conn.commit()
except sqlite3.Error as e:
self._db.conn.rollback()
raise e
def update_model(self, key: str, config: ModelConfigBase) -> AnyModelConfig:
"""
Update the model, returning the updated version.
:param key: Unique key for the model to be updated
:param config: Model configuration record. Either a dict with the
required fields, or a ModelConfigBase instance.
"""
record = ModelConfigFactory.make_config(config, key=key) # ensure it is a valid config object
json_serialized = record.model_dump_json() # and turn it into a json string.
with self._db.lock:
try:
self._cursor.execute(
"""--sql
UPDATE model_config
SET base=?,
type=?,
name=?,
path=?,
config=?
WHERE id=?;
""",
(record.base, record.type, record.name, record.path, json_serialized, key),
)
if self._cursor.rowcount == 0:
raise UnknownModelException("model not found")
self._db.conn.commit()
except sqlite3.Error as e:
self._db.conn.rollback()
raise e
return self.get_model(key)
def get_model(self, key: str) -> AnyModelConfig:
"""
Retrieve the ModelConfigBase instance for the indicated model.
:param key: Key of model config to be fetched.
Exceptions: UnknownModelException
"""
with self._db.lock:
self._cursor.execute(
"""--sql
SELECT config FROM model_config
WHERE id=?;
""",
(key,),
)
rows = self._cursor.fetchone()
if not rows:
raise UnknownModelException("model not found")
model = ModelConfigFactory.make_config(json.loads(rows[0]))
return model
def exists(self, key: str) -> bool:
"""
Return True if a model with the indicated key exists in the database.
:param key: Unique key of the model to be checked
"""
count = 0
with self._db.lock:
self._cursor.execute(
"""--sql
SELECT COUNT(*) FROM model_config
WHERE id=?;
""",
(key,),
)
count = self._cursor.fetchone()[0]
return count > 0
def search_by_attr(
self,
model_name: Optional[str] = None,
base_model: Optional[BaseModelType] = None,
model_type: Optional[ModelType] = None,
) -> List[AnyModelConfig]:
"""
Return models matching name, base and/or type.
:param model_name: Filter by name of model (optional)
:param base_model: Filter by base model (optional)
:param model_type: Filter by type of model (optional)
If none of the optional filters are passed, will return all
models in the database.
"""
results = []
where_clause = []
bindings = []
if model_name:
where_clause.append("name=?")
bindings.append(model_name)
if base_model:
where_clause.append("base=?")
bindings.append(base_model)
if model_type:
where_clause.append("type=?")
bindings.append(model_type)
where = f"WHERE {' AND '.join(where_clause)}" if where_clause else ""
with self._db.lock:
self._cursor.execute(
f"""--sql
SELECT config FROM model_config
{where};
""",
tuple(bindings),
)
results = [ModelConfigFactory.make_config(json.loads(x[0])) for x in self._cursor.fetchall()]
return results
def search_by_path(self, path: Union[str, Path]) -> List[ModelConfigBase]:
"""Return models with the indicated path."""
results = []
with self._db.lock:
self._cursor.execute(
"""--sql
SELECT config FROM model_config
WHERE path=?;
""",
(str(path),),
)
results = [ModelConfigFactory.make_config(json.loads(x[0])) for x in self._cursor.fetchall()]
return results
def search_by_hash(self, hash: str) -> List[ModelConfigBase]:
"""Return models with the indicated original_hash."""
results = []
with self._db.lock:
self._cursor.execute(
"""--sql
SELECT config FROM model_config
WHERE original_hash=?;
""",
(hash,),
)
results = [ModelConfigFactory.make_config(json.loads(x[0])) for x in self._cursor.fetchall()]
return results


@@ -1,323 +0,0 @@
# Copyright (c) 2023 Lincoln D. Stein and the InvokeAI Development Team
"""
Configuration definitions for image generation models.
Typical usage:
from invokeai.backend.model_manager import ModelConfigFactory
raw = dict(path='models/sd-1/main/foo.ckpt',
name='foo',
base='sd-1',
type='main',
config='configs/stable-diffusion/v1-inference.yaml',
variant='normal',
format='checkpoint'
)
config = ModelConfigFactory.make_config(raw)
print(config.name)
Validation errors will raise an InvalidModelConfigException error.
"""
from enum import Enum
from typing import Literal, Optional, Type, Union
from pydantic import BaseModel, ConfigDict, Field, TypeAdapter
from typing_extensions import Annotated
class InvalidModelConfigException(Exception):
"""Exception for when config parser doesn't recognized this combination of model type and format."""
class BaseModelType(str, Enum):
"""Base model type."""
Any = "any"
StableDiffusion1 = "sd-1"
StableDiffusion2 = "sd-2"
StableDiffusionXL = "sdxl"
StableDiffusionXLRefiner = "sdxl-refiner"
# Kandinsky2_1 = "kandinsky-2.1"
class ModelType(str, Enum):
"""Model type."""
ONNX = "onnx"
Main = "main"
Vae = "vae"
Lora = "lora"
ControlNet = "controlnet" # used by model_probe
TextualInversion = "embedding"
IPAdapter = "ip_adapter"
CLIPVision = "clip_vision"
T2IAdapter = "t2i_adapter"
class SubModelType(str, Enum):
"""Submodel type."""
UNet = "unet"
TextEncoder = "text_encoder"
TextEncoder2 = "text_encoder_2"
Tokenizer = "tokenizer"
Tokenizer2 = "tokenizer_2"
Vae = "vae"
VaeDecoder = "vae_decoder"
VaeEncoder = "vae_encoder"
Scheduler = "scheduler"
SafetyChecker = "safety_checker"
class ModelVariantType(str, Enum):
"""Variant type."""
Normal = "normal"
Inpaint = "inpaint"
Depth = "depth"
class ModelFormat(str, Enum):
"""Storage format of model."""
Diffusers = "diffusers"
Checkpoint = "checkpoint"
Lycoris = "lycoris"
Onnx = "onnx"
Olive = "olive"
EmbeddingFile = "embedding_file"
EmbeddingFolder = "embedding_folder"
InvokeAI = "invokeai"
class SchedulerPredictionType(str, Enum):
"""Scheduler prediction type."""
Epsilon = "epsilon"
VPrediction = "v_prediction"
Sample = "sample"
class ModelConfigBase(BaseModel):
"""Base class for model configuration information."""
path: str
name: str
base: BaseModelType
type: ModelType
format: ModelFormat
key: str = Field(description="unique key for model", default="<NOKEY>")
original_hash: Optional[str] = Field(
description="original fasthash of model contents", default=None
) # this is assigned at install time and will not change
current_hash: Optional[str] = Field(
description="current fasthash of model contents", default=None
) # if model is converted or otherwise modified, this will hold updated hash
description: Optional[str] = Field(default=None)
source: Optional[str] = Field(description="Model download source (URL or repo_id)", default=None)
model_config = ConfigDict(
use_enum_values=False,
validate_assignment=True,
)
def update(self, attributes: dict):
"""Update the object with fields in dict."""
for key, value in attributes.items():
setattr(self, key, value) # may raise a validation error
class _CheckpointConfig(ModelConfigBase):
"""Model config for checkpoint-style models."""
format: Literal[ModelFormat.Checkpoint] = ModelFormat.Checkpoint
config: str = Field(description="path to the checkpoint model config file")
class _DiffusersConfig(ModelConfigBase):
"""Model config for diffusers-style models."""
format: Literal[ModelFormat.Diffusers] = ModelFormat.Diffusers
class LoRAConfig(ModelConfigBase):
"""Model config for LoRA/Lycoris models."""
type: Literal[ModelType.Lora] = ModelType.Lora
format: Literal[ModelFormat.Lycoris, ModelFormat.Diffusers]
class VaeCheckpointConfig(ModelConfigBase):
"""Model config for standalone VAE models."""
type: Literal[ModelType.Vae] = ModelType.Vae
format: Literal[ModelFormat.Checkpoint] = ModelFormat.Checkpoint
class VaeDiffusersConfig(ModelConfigBase):
"""Model config for standalone VAE models (diffusers version)."""
type: Literal[ModelType.Vae] = ModelType.Vae
format: Literal[ModelFormat.Diffusers] = ModelFormat.Diffusers
class ControlNetDiffusersConfig(_DiffusersConfig):
"""Model config for ControlNet models (diffusers version)."""
type: Literal[ModelType.ControlNet] = ModelType.ControlNet
format: Literal[ModelFormat.Diffusers] = ModelFormat.Diffusers
class ControlNetCheckpointConfig(_CheckpointConfig):
"""Model config for ControlNet models (diffusers version)."""
type: Literal[ModelType.ControlNet] = ModelType.ControlNet
format: Literal[ModelFormat.Checkpoint] = ModelFormat.Checkpoint
class TextualInversionConfig(ModelConfigBase):
"""Model config for textual inversion embeddings."""
type: Literal[ModelType.TextualInversion] = ModelType.TextualInversion
format: Literal[ModelFormat.EmbeddingFile, ModelFormat.EmbeddingFolder]
class _MainConfig(ModelConfigBase):
"""Model config for main models."""
vae: Optional[str] = Field(default=None)
variant: ModelVariantType = ModelVariantType.Normal
ztsnr_training: bool = False
class MainCheckpointConfig(_CheckpointConfig, _MainConfig):
"""Model config for main checkpoint models."""
type: Literal[ModelType.Main] = ModelType.Main
# Note that we do not need prediction_type or upcast_attention here
# because they are provided in the checkpoint's own config file.
class MainDiffusersConfig(_DiffusersConfig, _MainConfig):
"""Model config for main diffusers models."""
type: Literal[ModelType.Main] = ModelType.Main
prediction_type: SchedulerPredictionType = SchedulerPredictionType.Epsilon
upcast_attention: bool = False
class ONNXSD1Config(_MainConfig):
"""Model config for ONNX format models based on sd-1."""
type: Literal[ModelType.ONNX] = ModelType.ONNX
format: Literal[ModelFormat.Onnx, ModelFormat.Olive]
base: Literal[BaseModelType.StableDiffusion1] = BaseModelType.StableDiffusion1
prediction_type: SchedulerPredictionType = SchedulerPredictionType.Epsilon
upcast_attention: bool = False
class ONNXSD2Config(_MainConfig):
"""Model config for ONNX format models based on sd-2."""
type: Literal[ModelType.ONNX] = ModelType.ONNX
format: Literal[ModelFormat.Onnx, ModelFormat.Olive]
# No yaml config file for ONNX, so these are part of config
base: Literal[BaseModelType.StableDiffusion2] = BaseModelType.StableDiffusion2
prediction_type: SchedulerPredictionType = SchedulerPredictionType.VPrediction
upcast_attention: bool = True
class IPAdapterConfig(ModelConfigBase):
"""Model config for IP Adaptor format models."""
type: Literal[ModelType.IPAdapter] = ModelType.IPAdapter
format: Literal[ModelFormat.InvokeAI]
class CLIPVisionDiffusersConfig(ModelConfigBase):
"""Model config for ClipVision."""
type: Literal[ModelType.CLIPVision] = ModelType.CLIPVision
format: Literal[ModelFormat.Diffusers]
class T2IConfig(ModelConfigBase):
"""Model config for T2I."""
type: Literal[ModelType.T2IAdapter] = ModelType.T2IAdapter
format: Literal[ModelFormat.Diffusers]
_ONNXConfig = Annotated[Union[ONNXSD1Config, ONNXSD2Config], Field(discriminator="base")]
_ControlNetConfig = Annotated[
Union[ControlNetDiffusersConfig, ControlNetCheckpointConfig],
Field(discriminator="format"),
]
_VaeConfig = Annotated[Union[VaeDiffusersConfig, VaeCheckpointConfig], Field(discriminator="format")]
_MainModelConfig = Annotated[Union[MainDiffusersConfig, MainCheckpointConfig], Field(discriminator="format")]
AnyModelConfig = Union[
_MainModelConfig,
_ONNXConfig,
_VaeConfig,
_ControlNetConfig,
LoRAConfig,
TextualInversionConfig,
IPAdapterConfig,
CLIPVisionDiffusersConfig,
T2IConfig,
]
AnyModelConfigValidator = TypeAdapter(AnyModelConfig)
# IMPLEMENTATION NOTE:
# The preferred alternative to the above is a discriminated Union as shown
# below. However, it breaks FastAPI when used as the input Body parameter in a route.
# This is a known issue. Please see:
# https://github.com/tiangolo/fastapi/discussions/9761 and
# https://github.com/tiangolo/fastapi/discussions/9287
# AnyModelConfig = Annotated[
# Union[
# _MainModelConfig,
# _ONNXConfig,
# _VaeConfig,
# _ControlNetConfig,
# LoRAConfig,
# TextualInversionConfig,
# IPAdapterConfig,
# CLIPVisionDiffusersConfig,
# T2IConfig,
# ],
# Field(discriminator="type"),
# ]
class ModelConfigFactory(object):
"""Class for parsing config dicts into StableDiffusion Config obects."""
@classmethod
def make_config(
cls,
model_data: Union[dict, AnyModelConfig],
key: Optional[str] = None,
dest_class: Optional[Type] = None,
) -> AnyModelConfig:
"""
Return the appropriate config object from raw dict values.
:param model_data: A raw dict corresponding to the object fields to be
parsed into a ModelConfigBase object (or descendant), or a ModelConfigBase
object, which will be passed through unchanged.
:param dest_class: The config class to be returned. If not provided, will
be selected automatically.
"""
if isinstance(model_data, ModelConfigBase):
model = model_data
elif dest_class:
model = dest_class.model_validate(model_data)  # pydantic model classes validate via model_validate(), not TypeAdapter.validate_python()
else:
model = AnyModelConfigValidator.validate_python(model_data)
if key:
model.key = key
return model
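
As a quick, hypothetical illustration of the factory above: a raw dict whose `type`/`format` literals match `LoRAConfig` is validated into that class (field values here are invented):

```py
from invokeai.backend.model_manager.config import LoRAConfig, ModelConfigFactory

raw = {
    "path": "models/sd-1/lora/detail_tweaker.safetensors",  # invented path
    "name": "detail_tweaker",
    "base": "sd-1",
    "type": "lora",       # together with format, selects LoRAConfig from the union
    "format": "lycoris",
}

config = ModelConfigFactory.make_config(raw, key="key2")
assert isinstance(config, LoRAConfig)
assert config.key == "key2"  # the key argument overrides the "<NOKEY>" default
```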


@@ -1,66 +0,0 @@
# Copyright (c) 2023 Lincoln D. Stein and the InvokeAI Development Team
"""
Fast hashing of diffusers and checkpoint-style models.
Usage:
from invokeai.backend.model_manager.hash import FastModelHash
>>> FastModelHash.hash('/home/models/stable-diffusion-v1.5')
'a8e693a126ea5b831c96064dc569956f'
"""
import hashlib
import os
from pathlib import Path
from typing import Dict, Union
from imohash import hashfile
class FastModelHash(object):
"""FastModelHash obect provides one public class method, hash()."""
@classmethod
def hash(cls, model_location: Union[str, Path]) -> str:
"""
Return hexdigest string for model located at model_location.
:param model_location: Path to the model
"""
model_location = Path(model_location)
if model_location.is_file():
return cls._hash_file(model_location)
elif model_location.is_dir():
return cls._hash_dir(model_location)
else:
raise OSError(f"Not a valid file or directory: {model_location}")
@classmethod
def _hash_file(cls, model_location: Union[str, Path]) -> str:
"""
Fasthash a single file and return its hexdigest.
:param model_location: Path to the model file
"""
# we return md5 hash of the filehash to make it shorter
# cryptographic security not needed here
return hashlib.md5(hashfile(model_location)).hexdigest()
@classmethod
def _hash_dir(cls, model_location: Union[str, Path]) -> str:
components: Dict[str, str] = {}
for root, _dirs, files in os.walk(model_location):
for file in files:
# only tally tensor files because diffusers config files change slightly
# depending on how the model was downloaded/converted.
if not file.endswith((".ckpt", ".safetensors", ".bin", ".pt", ".pth")):
continue
path = (Path(root) / file).as_posix()
fast_hash = cls._hash_file(path)
components.update({path: fast_hash})
# hash all the model hashes together, using alphabetic file order
md5 = hashlib.md5()
for _path, fast_hash in sorted(components.items()):
md5.update(fast_hash.encode("utf-8"))
return md5.hexdigest()
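
The directory hash is therefore deterministic and order-independent: each tensor file contributes one fast hash, and those are combined in sorted path order. A small sketch with made-up component hashes:

```py
import hashlib

# per-file fast hashes as produced by _hash_file(), keyed by path (values invented)
components = {
    "model/vae/diffusion_pytorch_model.safetensors": "750a499f35e43b7e1b4d15c207aa2f01",
    "model/unet/diffusion_pytorch_model.safetensors": "a8e693a126ea5b831c96064dc569956f",
}

md5 = hashlib.md5()
for _path, fast_hash in sorted(components.items()):  # alphabetic path order
    md5.update(fast_hash.encode("utf-8"))
print(md5.hexdigest())  # the directory-level model hash
```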


@@ -1,93 +0,0 @@
# Copyright (c) 2023 Lincoln D. Stein
"""Migrate from the InvokeAI v2 models.yaml format to the v3 sqlite format."""
from hashlib import sha1
from omegaconf import DictConfig, OmegaConf
from pydantic import TypeAdapter
from invokeai.app.services.config import InvokeAIAppConfig
from invokeai.app.services.model_records import (
DuplicateModelException,
ModelRecordServiceSQL,
)
from invokeai.app.services.shared.sqlite import SqliteDatabase
from invokeai.backend.model_manager.config import (
AnyModelConfig,
BaseModelType,
ModelType,
)
from invokeai.backend.model_manager.hash import FastModelHash
from invokeai.backend.util.logging import InvokeAILogger
ModelsValidator = TypeAdapter(AnyModelConfig)
class MigrateModelYamlToDb:
"""
Migrate the InvokeAI models.yaml format (VERSION 3.0.0) to the SQLite3 database format (VERSION 3.2.0)
The class has one externally useful method, migrate(), which scans the
current models.yaml file and imports all its entries into invokeai.db.
Use this way:
from invokeai.backend.model_manager.migrate_to_db import MigrateModelYamlToDb
MigrateModelYamlToDb().migrate()
"""
config: InvokeAIAppConfig
logger: InvokeAILogger
def __init__(self):
self.config = InvokeAIAppConfig.get_config()
self.config.parse_args()
self.logger = InvokeAILogger.get_logger()
def get_db(self) -> ModelRecordServiceSQL:
"""Fetch the sqlite3 database for this installation."""
db = SqliteDatabase(self.config, self.logger)
return ModelRecordServiceSQL(db)
def get_yaml(self) -> DictConfig:
"""Fetch the models.yaml DictConfig for this installation."""
yaml_path = self.config.model_conf_path
return OmegaConf.load(yaml_path)
def migrate(self):
"""Do the migration from models.yaml to invokeai.db."""
db = self.get_db()
yaml = self.get_yaml()
for model_key, stanza in yaml.items():
if model_key == "__metadata__":
assert (
stanza["version"] == "3.0.0"
), f"This script works on version 3.0.0 yaml files, but your configuration points to a {stanza['version']} version"
continue
base_type, model_type, model_name = str(model_key).split("/")
hash = FastModelHash.hash(self.config.models_path / stanza.path)
new_key = sha1(model_key.encode("utf-8")).hexdigest()
stanza["base"] = BaseModelType(base_type)
stanza["type"] = ModelType(model_type)
stanza["name"] = model_name
stanza["original_hash"] = hash
stanza["current_hash"] = hash
new_config = ModelsValidator.validate_python(stanza)
self.logger.info(f"Adding model {model_name} with key {model_key}")
try:
db.add_model(new_key, new_config)
except DuplicateModelException:
self.logger.warning(f"Model {model_name} is already in the database")
def main():
MigrateModelYamlToDb().migrate()
if __name__ == "__main__":
main()


@@ -748,7 +748,7 @@ class ControlNetModel(ModelMixin, ConfigMixin, FromOriginalControlnetMixin):
scales = scales * conditioning_scale
down_block_res_samples = [
sample * scale for sample, scale in zip(down_block_res_samples, scales, strict=False)
sample * scale for sample, scale in zip(down_block_res_samples, scales, strict=True)
]
mid_block_res_sample = mid_block_res_sample * scales[-1] # last one
else:
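
Context for the `strict` flag changed above: from Python 3.10, `zip(..., strict=True)` raises `ValueError` when its iterables differ in length, while `strict=False` silently truncates to the shortest. A minimal sketch (the sample names are invented stand-ins for the residual tensors):

```py
samples = ["down_0", "down_1", "mid"]  # invented stand-ins
scales = [0.5, 0.5]                    # one scale short

list(zip(samples, scales, strict=False))  # [('down_0', 0.5), ('down_1', 0.5)] (truncates silently)
list(zip(samples, scales, strict=True))   # ValueError: zip() argument 2 is shorter than argument 1
```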


@@ -5,7 +5,6 @@ import math
import multiprocessing as mp
import os
import re
import warnings
from collections import abc
from inspect import isfunction
from pathlib import Path
@@ -15,10 +14,8 @@ from threading import Thread
import numpy as np
import requests
import torch
from diffusers import logging as diffusers_logging
from PIL import Image, ImageDraw, ImageFont
from tqdm import tqdm
from transformers import logging as transformers_logging
import invokeai.backend.util.logging as logger
@@ -382,21 +379,3 @@ class Chdir(object):
def __exit__(self, *args):
os.chdir(self.original)
class SilenceWarnings(object):
"""Context manager to temporarily lower verbosity of diffusers & transformers warning messages."""
def __enter__(self):
"""Set verbosity to error."""
self.transformers_verbosity = transformers_logging.get_verbosity()
self.diffusers_verbosity = diffusers_logging.get_verbosity()
transformers_logging.set_verbosity_error()
diffusers_logging.set_verbosity_error()
warnings.simplefilter("ignore")
def __exit__(self, type, value, traceback):
"""Restore logger verbosity to state before context was entered."""
transformers_logging.set_verbosity(self.transformers_verbosity)
diffusers_logging.set_verbosity(self.diffusers_verbosity)
warnings.simplefilter("default")
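
For reference, the removed `SilenceWarnings` follows the standard context-manager pattern; a hypothetical call site would look like this (the loader is invented):

```py
with SilenceWarnings():
    # inside the block, diffusers/transformers loggers sit at ERROR and
    # Python warnings are ignored; both are restored on exit
    model = load_checkpoint("my-model.safetensors")  # hypothetical loader
```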


@@ -24,7 +24,6 @@ module.exports = {
root: true,
rules: {
curly: 'error',
'react/jsx-no-bind': ['error', { allowBind: true }],
'react/jsx-curly-brace-presence': [
'error',
{ props: 'never', children: 'never' },

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long


@@ -1,4 +1,4 @@
import{I as s,ie as T,v as l,$ as A,ig as R,aa as V,ih as z,ii as j,ij as D,ik as F,il as G,im as W,io as K,az as H,ip as U,iq as Y}from"./index-f820e2e3.js";import{M as Z}from"./MantineProvider-a6a1d85c.js";var P=String.raw,E=P`
import{w as s,i2 as T,v as l,a2 as I,i3 as R,ae as V,i4 as z,i5 as j,i6 as D,i7 as F,i8 as G,i9 as W,ia as K,aG as H,ib as U,ic as Y}from"./index-27e8922c.js";import{M as Z}from"./MantineProvider-70b4f32d.js";var P=String.raw,E=P`
:root,
:host {
--chakra-vh: 100vh;
@@ -277,4 +277,4 @@ import{I as s,ie as T,v as l,$ as A,ig as R,aa as V,ih as z,ii as j,ij as D,ik a
}
${E}
`}),g={light:"chakra-ui-light",dark:"chakra-ui-dark"};function Q(e={}){const{preventTransition:o=!0}=e,n={setDataset:r=>{const t=o?n.preventTransition():void 0;document.documentElement.dataset.theme=r,document.documentElement.style.colorScheme=r,t==null||t()},setClassName(r){document.body.classList.add(r?g.dark:g.light),document.body.classList.remove(r?g.light:g.dark)},query(){return window.matchMedia("(prefers-color-scheme: dark)")},getSystemTheme(r){var t;return((t=n.query().matches)!=null?t:r==="dark")?"dark":"light"},addListener(r){const t=n.query(),i=a=>{r(a.matches?"dark":"light")};return typeof t.addListener=="function"?t.addListener(i):t.addEventListener("change",i),()=>{typeof t.removeListener=="function"?t.removeListener(i):t.removeEventListener("change",i)}},preventTransition(){const r=document.createElement("style");return r.appendChild(document.createTextNode("*{-webkit-transition:none!important;-moz-transition:none!important;-o-transition:none!important;-ms-transition:none!important;transition:none!important}")),document.head.appendChild(r),()=>{window.getComputedStyle(document.body),requestAnimationFrame(()=>{requestAnimationFrame(()=>{document.head.removeChild(r)})})}}};return n}var X="chakra-ui-color-mode";function L(e){return{ssr:!1,type:"localStorage",get(o){if(!(globalThis!=null&&globalThis.document))return o;let n;try{n=localStorage.getItem(e)||o}catch{}return n||o},set(o){try{localStorage.setItem(e,o)}catch{}}}}var ee=L(X),M=()=>{};function S(e,o){return e.type==="cookie"&&e.ssr?e.get(o):o}function O(e){const{value:o,children:n,options:{useSystemColorMode:r,initialColorMode:t,disableTransitionOnChange:i}={},colorModeManager:a=ee}=e,d=t==="dark"?"dark":"light",[u,p]=l.useState(()=>S(a,d)),[y,b]=l.useState(()=>S(a)),{getSystemTheme:w,setClassName:k,setDataset:x,addListener:$}=l.useMemo(()=>Q({preventTransition:i}),[i]),v=t==="system"&&!u?y:u,c=l.useCallback(m=>{const f=m==="system"?w():m;p(f),k(f==="dark"),x(f),a.set(f)},[a,w,k,x]);A(()=>{t==="system"&&b(w())},[]),l.useEffect(()=>{const m=a.get();if(m){c(m);return}if(t==="system"){c("system");return}c(d)},[a,d,t,c]);const C=l.useCallback(()=>{c(v==="dark"?"light":"dark")},[v,c]);l.useEffect(()=>{if(r)return $(c)},[r,$,c]);const N=l.useMemo(()=>({colorMode:o??v,toggleColorMode:o?M:C,setColorMode:o?M:c,forced:o!==void 0}),[v,C,c,o]);return s.jsx(R.Provider,{value:N,children:n})}O.displayName="ColorModeProvider";var te=["borders","breakpoints","colors","components","config","direction","fonts","fontSizes","fontWeights","letterSpacings","lineHeights","radii","shadows","sizes","space","styles","transition","zIndices"];function re(e){return V(e)?te.every(o=>Object.prototype.hasOwnProperty.call(e,o)):!1}function h(e){return typeof e=="function"}function oe(...e){return o=>e.reduce((n,r)=>r(n),o)}var ne=e=>function(...n){let r=[...n],t=n[n.length-1];return re(t)&&r.length>1?r=r.slice(0,r.length-1):t=e,oe(...r.map(i=>a=>h(i)?i(a):ae(a,i)))(t)},ie=ne(j);function ae(...e){return z({},...e,_)}function _(e,o,n,r){if((h(e)||h(o))&&Object.prototype.hasOwnProperty.call(r,n))return(...t)=>{const i=h(e)?e(...t):e,a=h(o)?o(...t):o;return z({},i,a,_)}}var q=l.createContext({getDocument(){return document},getWindow(){return window}});q.displayName="EnvironmentContext";function I(e){const{children:o,environment:n,disabled:r}=e,t=l.useRef(null),i=l.useMemo(()=>n||{getDocument:()=>{var d,u;return(u=(d=t.current)==null?void 0:d.ownerDocument)!=null?u:document},getWindow:()=>{var d,u;return(u=(d=t.current)==null?void 
0:d.ownerDocument.defaultView)!=null?u:window}},[n]),a=!r||!n;return s.jsxs(q.Provider,{value:i,children:[o,a&&s.jsx("span",{id:"__chakra_env",hidden:!0,ref:t})]})}I.displayName="EnvironmentProvider";var se=e=>{const{children:o,colorModeManager:n,portalZIndex:r,resetScope:t,resetCSS:i=!0,theme:a={},environment:d,cssVarsRoot:u,disableEnvironment:p,disableGlobalStyle:y}=e,b=s.jsx(I,{environment:d,disabled:p,children:o});return s.jsx(D,{theme:a,cssVarsRoot:u,children:s.jsxs(O,{colorModeManager:n,options:a.config,children:[i?s.jsx(J,{scope:t}):s.jsx(B,{}),!y&&s.jsx(F,{}),r?s.jsx(G,{zIndex:r,children:b}):b]})})},le=e=>function({children:n,theme:r=e,toastOptions:t,...i}){return s.jsxs(se,{theme:r,...i,children:[s.jsx(W,{value:t==null?void 0:t.defaultOptions,children:n}),s.jsx(K,{...t})]})},de=le(j);const ue=()=>l.useMemo(()=>({colorScheme:"dark",fontFamily:"'Inter Variable', sans-serif",components:{ScrollArea:{defaultProps:{scrollbarSize:10},styles:{scrollbar:{"&:hover":{backgroundColor:"var(--invokeai-colors-baseAlpha-300)"}},thumb:{backgroundColor:"var(--invokeai-colors-baseAlpha-300)"}}}}}),[]),ce=L("@@invokeai-color-mode");function me({children:e}){const{i18n:o}=H(),n=o.dir(),r=l.useMemo(()=>ie({...U,direction:n}),[n]);l.useEffect(()=>{document.body.dir=n},[n]);const t=ue();return s.jsx(Z,{theme:t,children:s.jsx(de,{theme:r,colorModeManager:ce,toastOptions:Y,children:e})})}const ve=l.memo(me);export{ve as default};
`}),g={light:"chakra-ui-light",dark:"chakra-ui-dark"};function Q(e={}){const{preventTransition:o=!0}=e,n={setDataset:r=>{const t=o?n.preventTransition():void 0;document.documentElement.dataset.theme=r,document.documentElement.style.colorScheme=r,t==null||t()},setClassName(r){document.body.classList.add(r?g.dark:g.light),document.body.classList.remove(r?g.light:g.dark)},query(){return window.matchMedia("(prefers-color-scheme: dark)")},getSystemTheme(r){var t;return((t=n.query().matches)!=null?t:r==="dark")?"dark":"light"},addListener(r){const t=n.query(),i=a=>{r(a.matches?"dark":"light")};return typeof t.addListener=="function"?t.addListener(i):t.addEventListener("change",i),()=>{typeof t.removeListener=="function"?t.removeListener(i):t.removeEventListener("change",i)}},preventTransition(){const r=document.createElement("style");return r.appendChild(document.createTextNode("*{-webkit-transition:none!important;-moz-transition:none!important;-o-transition:none!important;-ms-transition:none!important;transition:none!important}")),document.head.appendChild(r),()=>{window.getComputedStyle(document.body),requestAnimationFrame(()=>{requestAnimationFrame(()=>{document.head.removeChild(r)})})}}};return n}var X="chakra-ui-color-mode";function L(e){return{ssr:!1,type:"localStorage",get(o){if(!(globalThis!=null&&globalThis.document))return o;let n;try{n=localStorage.getItem(e)||o}catch{}return n||o},set(o){try{localStorage.setItem(e,o)}catch{}}}}var ee=L(X),M=()=>{};function S(e,o){return e.type==="cookie"&&e.ssr?e.get(o):o}function O(e){const{value:o,children:n,options:{useSystemColorMode:r,initialColorMode:t,disableTransitionOnChange:i}={},colorModeManager:a=ee}=e,d=t==="dark"?"dark":"light",[u,p]=l.useState(()=>S(a,d)),[y,b]=l.useState(()=>S(a)),{getSystemTheme:w,setClassName:k,setDataset:x,addListener:$}=l.useMemo(()=>Q({preventTransition:i}),[i]),v=t==="system"&&!u?y:u,c=l.useCallback(m=>{const f=m==="system"?w():m;p(f),k(f==="dark"),x(f),a.set(f)},[a,w,k,x]);I(()=>{t==="system"&&b(w())},[]),l.useEffect(()=>{const m=a.get();if(m){c(m);return}if(t==="system"){c("system");return}c(d)},[a,d,t,c]);const C=l.useCallback(()=>{c(v==="dark"?"light":"dark")},[v,c]);l.useEffect(()=>{if(r)return $(c)},[r,$,c]);const A=l.useMemo(()=>({colorMode:o??v,toggleColorMode:o?M:C,setColorMode:o?M:c,forced:o!==void 0}),[v,C,c,o]);return s.jsx(R.Provider,{value:A,children:n})}O.displayName="ColorModeProvider";var te=["borders","breakpoints","colors","components","config","direction","fonts","fontSizes","fontWeights","letterSpacings","lineHeights","radii","shadows","sizes","space","styles","transition","zIndices"];function re(e){return V(e)?te.every(o=>Object.prototype.hasOwnProperty.call(e,o)):!1}function h(e){return typeof e=="function"}function oe(...e){return o=>e.reduce((n,r)=>r(n),o)}var ne=e=>function(...n){let r=[...n],t=n[n.length-1];return re(t)&&r.length>1?r=r.slice(0,r.length-1):t=e,oe(...r.map(i=>a=>h(i)?i(a):ae(a,i)))(t)},ie=ne(j);function ae(...e){return z({},...e,_)}function _(e,o,n,r){if((h(e)||h(o))&&Object.prototype.hasOwnProperty.call(r,n))return(...t)=>{const i=h(e)?e(...t):e,a=h(o)?o(...t):o;return z({},i,a,_)}}var q=l.createContext({getDocument(){return document},getWindow(){return window}});q.displayName="EnvironmentContext";function N(e){const{children:o,environment:n,disabled:r}=e,t=l.useRef(null),i=l.useMemo(()=>n||{getDocument:()=>{var d,u;return(u=(d=t.current)==null?void 0:d.ownerDocument)!=null?u:document},getWindow:()=>{var d,u;return(u=(d=t.current)==null?void 
0:d.ownerDocument.defaultView)!=null?u:window}},[n]),a=!r||!n;return s.jsxs(q.Provider,{value:i,children:[o,a&&s.jsx("span",{id:"__chakra_env",hidden:!0,ref:t})]})}N.displayName="EnvironmentProvider";var se=e=>{const{children:o,colorModeManager:n,portalZIndex:r,resetScope:t,resetCSS:i=!0,theme:a={},environment:d,cssVarsRoot:u,disableEnvironment:p,disableGlobalStyle:y}=e,b=s.jsx(N,{environment:d,disabled:p,children:o});return s.jsx(D,{theme:a,cssVarsRoot:u,children:s.jsxs(O,{colorModeManager:n,options:a.config,children:[i?s.jsx(J,{scope:t}):s.jsx(B,{}),!y&&s.jsx(F,{}),r?s.jsx(G,{zIndex:r,children:b}):b]})})},le=e=>function({children:n,theme:r=e,toastOptions:t,...i}){return s.jsxs(se,{theme:r,...i,children:[s.jsx(W,{value:t==null?void 0:t.defaultOptions,children:n}),s.jsx(K,{...t})]})},de=le(j);const ue=()=>l.useMemo(()=>({colorScheme:"dark",fontFamily:"'Inter Variable', sans-serif",components:{ScrollArea:{defaultProps:{scrollbarSize:10},styles:{scrollbar:{"&:hover":{backgroundColor:"var(--invokeai-colors-baseAlpha-300)"}},thumb:{backgroundColor:"var(--invokeai-colors-baseAlpha-300)"}}}}}),[]),ce=L("@@invokeai-color-mode");function me({children:e}){const{i18n:o}=H(),n=o.dir(),r=l.useMemo(()=>ie({...U,direction:n}),[n]);l.useEffect(()=>{document.body.dir=n},[n]);const t=ue();return s.jsx(Z,{theme:t,children:s.jsx(de,{theme:r,colorModeManager:ce,toastOptions:Y,children:e})})}const ve=l.memo(me);export{ve as default};

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long


@@ -15,7 +15,7 @@
margin: 0;
}
</style>
<script type="module" crossorigin src="./assets/index-f820e2e3.js"></script>
<script type="module" crossorigin src="./assets/index-27e8922c.js"></script>
</head>
<body dir="ltr">


@@ -4,14 +4,14 @@
"reportBugLabel": "Fehler melden",
"settingsLabel": "Einstellungen",
"img2img": "Bild zu Bild",
"nodes": "Knoten Editor",
"nodes": "Knoten",
"langGerman": "Deutsch",
"nodesDesc": "Ein knotenbasiertes System, für die Erzeugung von Bildern, ist derzeit in der Entwicklung. Bleiben Sie gespannt auf Updates zu dieser fantastischen Funktion.",
"postProcessing": "Nachbearbeitung",
"postProcessDesc1": "InvokeAI bietet eine breite Palette von Nachbearbeitungsfunktionen. Bildhochskalierung und Gesichtsrekonstruktion sind bereits in der WebUI verfügbar. Sie können sie über das Menü Erweiterte Optionen der Reiter Text in Bild und Bild in Bild aufrufen. Sie können Bilder auch direkt bearbeiten, indem Sie die Schaltflächen für Bildaktionen oberhalb der aktuellen Bildanzeige oder im Viewer verwenden.",
"postProcessDesc2": "Eine spezielle Benutzeroberfläche wird in Kürze veröffentlicht, um erweiterte Nachbearbeitungs-Workflows zu erleichtern.",
"postProcessDesc3": "Die InvokeAI Kommandozeilen-Schnittstelle bietet verschiedene andere Funktionen, darunter Embiggen.",
"training": "trainieren",
"training": "Training",
"trainingDesc1": "Ein spezieller Arbeitsablauf zum Trainieren Ihrer eigenen Embeddings und Checkpoints mit Textual Inversion und Dreambooth über die Weboberfläche.",
"trainingDesc2": "InvokeAI unterstützt bereits das Training von benutzerdefinierten Embeddings mit Textual Inversion unter Verwendung des Hauptskripts.",
"upload": "Hochladen",
@@ -38,14 +38,14 @@
"statusUpscalingESRGAN": "Hochskalierung (ESRGAN)",
"statusLoadingModel": "Laden des Modells",
"statusModelChanged": "Modell Geändert",
"cancel": "Abbrechen",
"cancel": "Abbruch",
"accept": "Annehmen",
"back": "Zurück",
"langEnglish": "Englisch",
"langDutch": "Niederländisch",
"langFrench": "Französisch",
"langItalian": "Italienisch",
"langPortuguese": "Portugiesisch",
"langPortuguese": "Portogisisch",
"langRussian": "Russisch",
"langUkranian": "Ukrainisch",
"hotkeysLabel": "Tastenkombinationen",
@@ -58,44 +58,12 @@
"langArabic": "Arabisch",
"langKorean": "Koreanisch",
"langHebrew": "Hebräisch",
"langSpanish": "Spanisch",
"t2iAdapter": "T2I Adapter",
"communityLabel": "Gemeinschaft",
"dontAskMeAgain": "Frag mich nicht nochmal",
"loadingInvokeAI": "Lade Invoke AI",
"statusMergedModels": "Modelle zusammengeführt",
"areYouSure": "Bist du dir sicher?",
"statusConvertingModel": "Model konvertieren",
"on": "An",
"nodeEditor": "Knoten Editor",
"statusMergingModels": "Modelle zusammenführen",
"langSimplifiedChinese": "Vereinfachtes Chinesisch",
"ipAdapter": "IP Adapter",
"controlAdapter": "Control Adapter",
"auto": "Automatisch",
"controlNet": "ControlNet",
"imageFailedToLoad": "Kann Bild nicht laden",
"statusModelConverted": "Model konvertiert",
"modelManager": "Model Manager",
"lightMode": "Heller Modus",
"generate": "Erstellen",
"learnMore": "Mehr lernen",
"darkMode": "Dunkler Modus",
"loading": "Lade",
"random": "Zufall",
"batch": "Stapel-Manager",
"advanced": "Erweitert",
"langBrPortuguese": "Portugiesisch (Brasilien)",
"unifiedCanvas": "Einheitliche Leinwand",
"openInNewTab": "In einem neuem Tab öffnen",
"statusProcessing": "wird bearbeitet",
"linear": "Linear",
"imagePrompt": "Bild Prompt"
"langSpanish": "Spanisch"
},
"gallery": {
"generations": "Erzeugungen",
"showGenerations": "Zeige Erzeugnisse",
"uploads": "Uploads",
"uploads": "Hochgelades",
"showUploads": "Zeige Uploads",
"galleryImageSize": "Bildgröße",
"galleryImageResetSize": "Größe zurücksetzen",
@@ -105,22 +73,7 @@
"singleColumnLayout": "Einspaltiges Layout",
"allImagesLoaded": "Alle Bilder geladen",
"loadMore": "Mehr laden",
"noImagesInGallery": "Keine Bilder in der Galerie",
"loading": "Lade",
"preparingDownload": "bereite Download vor",
"preparingDownloadFailed": "Problem beim Download vorbereiten",
"deleteImage": "Lösche Bild",
"images": "Bilder",
"copy": "Kopieren",
"download": "Runterladen",
"setCurrentImage": "Setze aktuelle Bild",
"featuresWillReset": "Wenn Sie dieses Bild löschen, werden diese Funktionen sofort zurückgesetzt.",
"deleteImageBin": "Gelöschte Bilder werden an den Papierkorb Ihres Betriebssystems gesendet.",
"unableToLoad": "Galerie kann nicht geladen werden",
"downloadSelection": "Auswahl herunterladen",
"currentlyInUse": "Dieses Bild wird derzeit in den folgenden Funktionen verwendet:",
"deleteImagePermanent": "Gelöschte Bilder können nicht wiederhergestellt werden.",
"autoAssignBoardOnClick": "Board per Klick automatisch zuweisen"
"noImagesInGallery": "Keine Bilder in der Galerie"
},
"hotkeys": {
"keyboardShortcuts": "Tastenkürzel",
@@ -129,8 +82,7 @@
"galleryHotkeys": "Galerie Tastenkürzel",
"unifiedCanvasHotkeys": "Unified Canvas Tastenkürzel",
"invoke": {
"desc": "Ein Bild erzeugen",
"title": "Invoke"
"desc": "Ein Bild erzeugen"
},
"cancel": {
"title": "Abbrechen",
@@ -214,7 +166,7 @@
},
"toggleGalleryPin": {
"title": "Galerie anheften umschalten",
"desc": "Heftet die Galerie an die Benutzeroberfläche bzw. löst die sie"
"desc": "Heftet die Galerie an die Benutzeroberfläche bzw. löst die sie."
},
"increaseGalleryThumbSize": {
"title": "Größe der Galeriebilder erhöhen",
@@ -327,11 +279,6 @@
"acceptStagingImage": {
"title": "Staging-Bild akzeptieren",
"desc": "Akzeptieren Sie das aktuelle Bild des Staging-Bereichs"
},
"nodesHotkeys": "Knoten Tastenkürzel",
"addNodes": {
"title": "Knotenpunkt hinzufügen",
"desc": "Öffnet das Menü zum Hinzufügen von Knoten"
}
},
"modelManager": {
@@ -348,7 +295,7 @@
"config": "Konfiguration",
"configValidationMsg": "Pfad zur Konfigurationsdatei Ihres Models.",
"modelLocation": "Ort des Models",
"modelLocationValidationMsg": "Pfad zum Speicherort Ihres Models",
"modelLocationValidationMsg": "Pfad zum Speicherort Ihres Models.",
"vaeLocation": "VAE Ort",
"vaeLocationValidationMsg": "Pfad zum Speicherort Ihres VAE.",
"width": "Breite",
@@ -381,99 +328,11 @@
"deleteModel": "Model löschen",
"deleteConfig": "Konfiguration löschen",
"deleteMsg1": "Möchten Sie diesen Model-Eintrag wirklich aus InvokeAI löschen?",
"deleteMsg2": "Dadurch WIRD das Modell von der Festplatte gelöscht WENN es im InvokeAI Root Ordner liegt. Wenn es in einem anderem Ordner liegt wird das Modell NICHT von der Festplatte gelöscht.",
"deleteMsg2": "Dadurch wird die Modellprüfpunktdatei nicht von Ihrer Festplatte gelöscht. Sie können sie bei Bedarf erneut hinzufügen.",
"customConfig": "Benutzerdefinierte Konfiguration",
"invokeRoot": "InvokeAI Ordner",
"formMessageDiffusersVAELocationDesc": "Falls nicht angegeben, sucht InvokeAI nach der VAE-Datei innerhalb des oben angegebenen Modell Speicherortes.",
"checkpointModels": "Kontrollpunkte",
"convert": "Umwandeln",
"addCheckpointModel": "Kontrollpunkt / SafeTensors Modell hinzufügen",
"allModels": "Alle Modelle",
"alpha": "Alpha",
"addDifference": "Unterschied hinzufügen",
"convertToDiffusersHelpText2": "Bei diesem Vorgang wird Ihr Eintrag im Modell-Manager durch die Diffusor-Version desselben Modells ersetzt.",
"convertToDiffusersHelpText5": "Bitte stellen Sie sicher, dass Sie über genügend Speicherplatz verfügen. Die Modelle sind in der Regel zwischen 2 GB und 7 GB groß.",
"convertToDiffusersHelpText3": "Ihre Kontrollpunktdatei auf der Festplatte wird NICHT gelöscht oder in irgendeiner Weise verändert. Sie können Ihren Kontrollpunkt dem Modell-Manager wieder hinzufügen, wenn Sie dies wünschen.",
"convertToDiffusersHelpText4": "Dies ist ein einmaliger Vorgang. Er kann je nach den Spezifikationen Ihres Computers etwa 30-60 Sekunden dauern.",
"convertToDiffusersHelpText6": "Möchten Sie dieses Modell konvertieren?",
"custom": "Benutzerdefiniert",
"modelConverted": "Modell umgewandelt",
"inverseSigmoid": "Inverses Sigmoid",
"invokeAIFolder": "Invoke AI Ordner",
"formMessageDiffusersModelLocationDesc": "Bitte geben Sie mindestens einen an.",
"customSaveLocation": "Benutzerdefinierter Speicherort",
"formMessageDiffusersVAELocation": "VAE Speicherort",
"mergedModelCustomSaveLocation": "Benutzerdefinierter Pfad",
"modelMergeHeaderHelp2": "Nur Diffusers sind für die Zusammenführung verfügbar. Wenn Sie ein Kontrollpunktmodell zusammenführen möchten, konvertieren Sie es bitte zuerst in Diffusers.",
"manual": "Manuell",
"modelManager": "Modell Manager",
"modelMergeAlphaHelp": "Alpha steuert die Überblendungsstärke für die Modelle. Niedrigere Alphawerte führen zu einem geringeren Einfluss des zweiten Modells.",
"modelMergeHeaderHelp1": "Sie können bis zu drei verschiedene Modelle miteinander kombinieren, um eine Mischung zu erstellen, die Ihren Bedürfnissen entspricht.",
"ignoreMismatch": "Unstimmigkeiten zwischen ausgewählten Modellen ignorieren",
"model": "Modell",
"convertToDiffusersSaveLocation": "Speicherort",
"pathToCustomConfig": "Pfad zur benutzerdefinierten Konfiguration",
"v1": "v1",
"modelMergeInterpAddDifferenceHelp": "In diesem Modus wird zunächst Modell 3 von Modell 2 subtrahiert. Die resultierende Version wird mit Modell 1 mit dem oben eingestellten Alphasatz gemischt.",
"modelTwo": "Modell 2",
"modelOne": "Modell 1",
"v2_base": "v2 (512px)",
"scanForModels": "Nach Modellen suchen",
"name": "Name",
"safetensorModels": "SafeTensors",
"pickModelType": "Modell Typ auswählen",
"sameFolder": "Gleicher Ordner",
"modelThree": "Modell 3",
"v2_768": "v2 (768px)",
"none": "Nix",
"repoIDValidationMsg": "Online Repo Ihres Modells",
"vaeRepoIDValidationMsg": "Online Repo Ihrer VAE",
"importModels": "Importiere Modelle",
"merge": "Zusammenführen",
"addDiffuserModel": "Diffusers hinzufügen",
"advanced": "Erweitert",
"closeAdvanced": "Schließe Erweitert",
"convertingModelBegin": "Konvertiere Modell. Bitte warten.",
"customConfigFileLocation": "Benutzerdefinierte Konfiguration Datei Speicherort",
"baseModel": "Basis Modell",
"convertToDiffusers": "Konvertiere zu Diffusers",
"diffusersModels": "Diffusers",
"noCustomLocationProvided": "Kein benutzerdefinierter Standort angegeben",
"onnxModels": "Onnx",
"vaeRepoID": "VAE-Repo-ID",
"weightedSum": "Gewichtete Summe",
"syncModelsDesc": "Wenn Ihre Modelle nicht mit dem Backend synchronisiert sind, können Sie sie mit dieser Option aktualisieren. Dies ist im Allgemeinen praktisch, wenn Sie Ihre models.yaml-Datei manuell aktualisieren oder Modelle zum InvokeAI-Stammordner hinzufügen, nachdem die Anwendung gestartet wurde.",
"vae": "VAE",
"noModels": "Keine Modelle gefunden",
"statusConverting": "Konvertieren",
"sigmoid": "Sigmoid",
"predictionType": "Vorhersagetyp (für Stable Diffusion 2.x-Modelle und gelegentliche Stable Diffusion 1.x-Modelle)",
"selectModel": "Wählen Sie Modell aus",
"repo_id": "Repo-ID",
"modelSyncFailed": "Modellsynchronisierung fehlgeschlagen",
"quickAdd": "Schnell hinzufügen",
"simpleModelDesc": "Geben Sie einen Pfad zu einem lokalen Diffusers-Modell, einem lokalen Checkpoint-/Safetensors-Modell, einer HuggingFace-Repo-ID oder einer Checkpoint-/Diffusers-Modell-URL an.",
"modelDeleted": "Modell gelöscht",
"inpainting": "v1 Ausmalen",
"modelUpdateFailed": "Modellaktualisierung fehlgeschlagen",
"useCustomConfig": "Benutzerdefinierte Konfiguration verwenden",
"settings": "Einstellungen",
"modelConversionFailed": "Modellkonvertierung fehlgeschlagen",
"syncModels": "Modelle synchronisieren",
"mergedModelSaveLocation": "Speicherort",
"modelType": "Modelltyp",
"modelsMerged": "Modelle zusammengeführt",
"modelsMergeFailed": "Modellzusammenführung fehlgeschlagen",
"convertToDiffusersHelpText1": "Dieses Modell wird in das 🧨 Diffusers-Format konvertiert.",
"modelsSynced": "Modelle synchronisiert",
"vaePrecision": "VAE-Präzision",
"mergeModels": "Modelle zusammenführen",
"interpolationType": "Interpolationstyp",
"oliveModels": "Olives",
"variant": "Variante",
"loraModels": "LoRAs",
"modelDeleteFailed": "Modell konnte nicht gelöscht werden",
"mergedModelName": "Zusammengeführter Modellname"
"checkpointModels": "Kontrollpunkte"
},
"parameters": {
"images": "Bilder",
@@ -493,7 +352,7 @@
"type": "Art",
"strength": "Stärke",
"upscaling": "Hochskalierung",
"upscale": "Hochskalieren (Shift + U)",
"upscale": "Hochskalieren",
"upscaleImage": "Bild hochskalieren",
"scale": "Maßstab",
"otherOptions": "Andere Optionen",
@@ -510,7 +369,7 @@
"seamCorrectionHeader": "Nahtkorrektur",
"infillScalingHeader": "Infill und Skalierung",
"img2imgStrength": "Bild-zu-Bild-Stärke",
"toggleLoopback": "Loopback umschalten",
"toggleLoopback": "Toggle Loopback",
"sendTo": "Senden an",
"sendToImg2Img": "Senden an Bild zu Bild",
"sendToUnifiedCanvas": "Senden an Unified Canvas",
@@ -525,20 +384,8 @@
"initialImage": "Ursprüngliches Bild",
"showOptionsPanel": "Optionsleiste zeigen",
"cancel": {
"setType": "Abbruchart festlegen",
"immediate": "Sofort abbrechen",
"schedule": "Abbrechen nach der aktuellen Iteration",
"isScheduled": "Abbrechen"
},
"copyImage": "Bild kopieren",
"denoisingStrength": "Stärke der Entrauschung",
"symmetry": "Symmetrie",
"imageToImage": "Bild zu Bild",
"info": "Information",
"general": "Allgemein",
"hiresStrength": "High Res Stärke",
"hidePreview": "Verstecke Vorschau",
"showPreview": "Zeige Vorschau"
"setType": "Abbruchart festlegen"
}
},
"settings": {
"displayInProgress": "Bilder in Bearbeitung anzeigen",
@@ -549,9 +396,7 @@
"resetWebUI": "Web-Oberfläche zurücksetzen",
"resetWebUIDesc1": "Das Zurücksetzen der Web-Oberfläche setzt nur den lokalen Cache des Browsers mit Ihren Bildern und gespeicherten Einstellungen zurück. Es werden keine Bilder von der Festplatte gelöscht.",
"resetWebUIDesc2": "Wenn die Bilder nicht in der Galerie angezeigt werden oder etwas anderes nicht funktioniert, versuchen Sie bitte, die Einstellungen zurückzusetzen, bevor Sie einen Fehler auf GitHub melden.",
"resetComplete": "Die Web-Oberfläche wurde zurückgesetzt.",
"models": "Modelle",
"useSlidersForAll": "Schieberegler für alle Optionen verwenden"
"resetComplete": "Die Web-Oberfläche wurde zurückgesetzt. Aktualisieren Sie die Seite, um sie neu zu laden."
},
"toast": {
"tempFoldersEmptied": "Temp-Ordner geleert",
@@ -561,7 +406,7 @@
"imageCopied": "Bild kopiert",
"imageLinkCopied": "Bildlink kopiert",
"imageNotLoaded": "Kein Bild geladen",
"imageNotLoadedDesc": "Konnte kein Bild finden",
"imageNotLoadedDesc": "Kein Bild gefunden, das an das Bild zu Bild-Modul gesendet werden kann",
"imageSavedToGallery": "Bild in die Galerie gespeichert",
"canvasMerged": "Leinwand zusammengeführt",
"sentToImageToImage": "Gesendet an Bild zu Bild",
@@ -631,7 +476,7 @@
"autoSaveToGallery": "Automatisch in Galerie speichern",
"saveBoxRegionOnly": "Nur Auswahlbox speichern",
"limitStrokesToBox": "Striche auf Box beschränken",
"showCanvasDebugInfo": "Zusätzliche Informationen zur Leinwand anzeigen",
"showCanvasDebugInfo": "Leinwand-Debug-Infos anzeigen",
"clearCanvasHistory": "Leinwand-Verlauf löschen",
"clearHistory": "Verlauf löschen",
"clearCanvasHistoryMessage": "Wenn Sie den Verlauf der Leinwand löschen, bleibt die aktuelle Leinwand intakt, aber der Verlauf der Rückgängig- und Wiederherstellung wird unwiderruflich gelöscht.",
@@ -656,17 +501,14 @@
"betaClear": "Löschen",
"betaDarkenOutside": "Außen abdunkeln",
"betaLimitToBox": "Begrenzung auf das Feld",
"betaPreserveMasked": "Maskiertes bewahren",
"antialiasing": "Kantenglättung",
"showResultsOn": "Zeige Ergebnisse (An)",
"showResultsOff": "Zeige Ergebnisse (Aus)"
"betaPreserveMasked": "Maskiertes bewahren"
},
"accessibility": {
"modelSelect": "Model Auswahl",
"uploadImage": "Bild hochladen",
"previousImage": "Voriges Bild",
"useThisParameter": "Benutze diesen Parameter",
"copyMetadataJson": "Kopiere Metadaten JSON",
"copyMetadataJson": "Kopiere metadata JSON",
"zoomIn": "Vergrößern",
"rotateClockwise": "Im Uhrzeigersinn drehen",
"flipHorizontally": "Horizontal drehen",
@@ -675,299 +517,9 @@
"toggleAutoscroll": "Auroscroll ein/ausschalten",
"toggleLogViewer": "Log Betrachter ein/ausschalten",
"showOptionsPanel": "Zeige Optionen",
"reset": "Zurücksetzten",
"reset": "Zurücksetzen",
"nextImage": "Nächstes Bild",
"zoomOut": "Verkleinern",
"rotateCounterClockwise": "Gegen den Uhrzeigersinn verdrehen",
"showGalleryPanel": "Galeriefenster anzeigen",
"exitViewer": "Betrachten beenden",
"menu": "Menü",
"loadMore": "Mehr laden",
"invokeProgressBar": "Invoke Fortschrittsanzeige"
},
"boards": {
"autoAddBoard": "Automatisches Hinzufügen zum Ordner",
"topMessage": "Dieser Ordner enthält Bilder die in den folgenden Funktionen verwendet werden:",
"move": "Bewegen",
"menuItemAutoAdd": "Automatisches Hinzufügen zu diesem Ordner",
"myBoard": "Meine Ordner",
"searchBoard": "Ordner durchsuchen...",
"noMatching": "Keine passenden Ordner",
"selectBoard": "Ordner aussuchen",
"cancel": "Abbrechen",
"addBoard": "Ordner hinzufügen",
"uncategorized": "Nicht kategorisiert",
"downloadBoard": "Ordner runterladen",
"changeBoard": "Ordner wechseln",
"loading": "Laden...",
"clearSearch": "Suche leeren",
"bottomMessage": "Durch das Löschen dieses Ordners und seiner Bilder werden alle Funktionen zurückgesetzt, die sie derzeit verwenden."
},
"controlnet": {
"showAdvanced": "Zeige Erweitert",
"contentShuffleDescription": "Mischt den Inhalt von einem Bild",
"addT2IAdapter": "$t(common.t2iAdapter) hinzufügen",
"importImageFromCanvas": "Importieren Bild von Zeichenfläche",
"lineartDescription": "Konvertiere Bild zu Lineart",
"importMaskFromCanvas": "Importiere Maske von Zeichenfläche",
"hed": "HED",
"hideAdvanced": "Verstecke Erweitert",
"contentShuffle": "Inhalt mischen",
"controlNetEnabledT2IDisabled": "$t(common.controlNet) ist aktiv, $t(common.t2iAdapter) ist deaktiviert",
"ipAdapterModel": "Adapter Modell",
"beginEndStepPercent": "Start / Ende Step Prozent",
"duplicate": "Kopieren",
"f": "F",
"h": "H",
"depthMidasDescription": "Tiefenmap erstellen mit Midas",
"controlnet": "$t(controlnet.controlAdapter_one) #{{number}} ($t(common.controlNet))",
"t2iEnabledControlNetDisabled": "$t(common.t2iAdapter) ist aktiv, $t(common.controlNet) ist deaktiviert",
"weight": "Breite",
"selectModel": "Wähle ein Modell",
"depthMidas": "Tiefe (Midas)",
"w": "W",
"addControlNet": "$t(common.controlNet) hinzufügen",
"none": "Kein",
"incompatibleBaseModel": "Inkompatibles Basismodell:",
"enableControlnet": "Aktiviere ControlNet",
"detectResolution": "Auflösung erkennen",
"controlNetT2IMutexDesc": "$t(common.controlNet) und $t(common.t2iAdapter) zur gleichen Zeit wird nicht unterstützt.",
"ip_adapter": "$t(controlnet.controlAdapter_one) #{{number}} ($t(common.ipAdapter))",
"fill": "Füllen",
"addIPAdapter": "$t(common.ipAdapter) hinzufügen",
"colorMapDescription": "Erstelle eine Farbkarte von diesem Bild",
"t2i_adapter": "$t(controlnet.controlAdapter_one) #{{number}} ($t(common.t2iAdapter))",
"imageResolution": "Bild Auflösung",
"depthZoe": "Tiefe (Zoe)",
"colorMap": "Farbe",
"lowThreshold": "Niedrige Schwelle",
"highThreshold": "Hohe Schwelle",
"toggleControlNet": "Schalten ControlNet um",
"delete": "Löschen",
"controlAdapter_one": "Control Adapter",
"controlAdapter_other": "Control Adapters",
"colorMapTileSize": "Tile Größe",
"depthZoeDescription": "Tiefenmap erstellen mit Zoe",
"setControlImageDimensions": "Setze Control Bild Auflösung auf Breite/Höhe",
"handAndFace": "Hand und Gesicht",
"enableIPAdapter": "Aktiviere IP Adapter",
"resize": "Größe ändern",
"resetControlImage": "Zurücksetzen vom Referenz Bild",
"balanced": "Ausgewogen",
"prompt": "Prompt",
"resizeMode": "Größenänderungsmodus",
"processor": "Prozessor",
"saveControlImage": "Speichere Referenz Bild",
"safe": "Speichern",
"ipAdapterImageFallback": "Kein IP Adapter Bild ausgewählt",
"resetIPAdapterImage": "Zurücksetzen vom IP Adapter Bild",
"pidi": "PIDI",
"normalBae": "Normales BAE",
"mlsdDescription": "Minimalistischer Liniensegmentdetektor",
"openPoseDescription": "Schätzung der menschlichen Pose mit Openpose",
"control": "Kontrolle",
"coarse": "Coarse",
"crop": "Zuschneiden",
"pidiDescription": "PIDI-Bildverarbeitung",
"mediapipeFace": "Mediapipe Gesichter",
"mlsd": "M-LSD",
"controlMode": "Steuermodus",
"cannyDescription": "Canny Ecken Erkennung",
"lineart": "Lineart",
"lineartAnimeDescription": "Lineart-Verarbeitung im Anime-Stil",
"minConfidence": "Minimales Vertrauen",
"megaControl": "Mega-Kontrolle",
"autoConfigure": "Prozessor automatisch konfigurieren",
"normalBaeDescription": "Normale BAE-Verarbeitung",
"noneDescription": "Es wurde keine Verarbeitung angewendet",
"openPose": "Openpose",
"lineartAnime": "Lineart Anime",
"mediapipeFaceDescription": "Gesichtserkennung mit Mediapipe",
"canny": "Canny",
"hedDescription": "Ganzheitlich verschachtelte Kantenerkennung",
"scribble": "Scribble",
"maxFaces": "Maximal Anzahl Gesichter"
},
"queue": {
"status": "Status",
"cancelTooltip": "Aktuellen Aufgabe abbrechen",
"queueEmpty": "Warteschlange leer",
"in_progress": "In Arbeit",
"queueFront": "An den Anfang der Warteschlange tun",
"completed": "Fertig",
"queueBack": "In die Warteschlange",
"clearFailed": "Probleme beim leeren der Warteschlange",
"clearSucceeded": "Warteschlange geleert",
"pause": "Pause",
"cancelSucceeded": "Auftrag abgebrochen",
"queue": "Warteschlange",
"batch": "Stapel",
"pending": "Ausstehend",
"clear": "Leeren",
"prune": "Leeren",
"total": "Gesamt",
"canceled": "Abgebrochen",
"clearTooltip": "Abbrechen und alle Aufträge leeren",
"current": "Aktuell",
"failed": "Fehler",
"cancelItem": "Abbruch Auftrag",
"next": "Nächste",
"cancel": "Abbruch",
"session": "Sitzung",
"queueTotal": "{{total}} Gesamt",
"resume": "Wieder aufnehmen",
"item": "Auftrag",
"notReady": "Warteschlange noch nicht bereit",
"batchValues": "Stapel Werte",
"queueCountPrediction": "{{predicted}} zur Warteschlange hinzufügen",
"queuedCount": "{{pending}} wartenden Elemente",
"clearQueueAlertDialog": "Die Warteschlange leeren, stoppt den aktuellen Prozess und leert die Warteschlange komplett.",
"completedIn": "Fertig in",
"cancelBatchSucceeded": "Stapel abgebrochen",
"cancelBatch": "Stapel stoppen",
"enqueueing": "Stapel in der Warteschlange",
"queueMaxExceeded": "Maximum von {{max_queue_size}} Elementen erreicht, würde {{skip}} Elemente überspringen",
"cancelBatchFailed": "Problem beim Abbruch vom Stapel",
"clearQueueAlertDialog2": "bist du sicher die Warteschlange zu leeren?",
"pruneSucceeded": "{{item_count}} abgeschlossene Elemente aus der Warteschlange entfernt",
"pauseSucceeded": "Prozessor angehalten",
"cancelFailed": "Problem beim Stornieren des Auftrags",
"pauseFailed": "Problem beim Anhalten des Prozessors",
"front": "Vorne",
"pruneTooltip": "Bereinigen Sie {{item_count}} abgeschlossene Aufträge",
"resumeFailed": "Problem beim wieder aufnehmen von Prozessor",
"pruneFailed": "Problem beim leeren der Warteschlange",
"pauseTooltip": "Pause von Prozessor",
"back": "Hinten",
"resumeSucceeded": "Prozessor wieder aufgenommen",
"resumeTooltip": "Prozessor wieder aufnehmen"
},
"metadata": {
"negativePrompt": "Negativ Beschreibung",
"metadata": "Meta-Data",
"strength": "Bild zu Bild stärke",
"imageDetails": "Bild Details",
"model": "Modell",
"noImageDetails": "Keine Bild Details gefunden",
"cfgScale": "CFG-Skala",
"fit": "Bild zu Bild passen",
"height": "Höhe",
"noMetaData": "Keine Meta-Data gefunden",
"width": "Breite",
"createdBy": "Erstellt von",
"steps": "Schritte",
"seamless": "Nahtlos",
"positivePrompt": "Positiver Prompt",
"generationMode": "Generierungsmodus",
"Threshold": "Noise Schwelle",
"seed": "Samen",
"perlin": "Perlin Noise",
"hiresFix": "Optimierung für hohe Auflösungen",
"initImage": "Erstes Bild",
"variations": "Samengewichtspaare",
"vae": "VAE",
"workflow": "Arbeitsablauf",
"scheduler": "Scheduler",
"noRecallParameters": "Es wurden keine Parameter zum Abrufen gefunden"
},
"popovers": {
"noiseUseCPU": {
"heading": "Nutze Prozessor rauschen"
},
"paramModel": {
"heading": "Modell"
},
"paramIterations": {
"heading": "Iterationen"
},
"paramCFGScale": {
"heading": "CFG-Skala"
},
"paramSteps": {
"heading": "Schritte"
},
"lora": {
"heading": "LoRA Gewichte"
},
"infillMethod": {
"heading": "Füllmethode"
},
"paramVAE": {
"heading": "VAE"
}
},
"ui": {
"lockRatio": "Verhältnis sperren",
"hideProgressImages": "Verstecke Prozess Bild",
"showProgressImages": "Zeige Prozess Bild"
},
"invocationCache": {
"disable": "Deaktivieren",
"misses": "Cache Nötig",
"hits": "Cache Treffer",
"enable": "Aktivieren",
"clear": "Leeren",
"maxCacheSize": "Maximale Cache Größe",
"cacheSize": "Cache Größe"
},
"embedding": {
"noMatchingEmbedding": "Keine passenden Embeddings",
"addEmbedding": "Embedding hinzufügen",
"incompatibleModel": "Inkompatibles Basismodell:"
},
"nodes": {
"booleanPolymorphicDescription": "Eine Sammlung boolescher Werte.",
"colorFieldDescription": "Eine RGBA-Farbe.",
"conditioningCollection": "Konditionierungssammlung",
"addNode": "Knoten hinzufügen",
"conditioningCollectionDescription": "Konditionierung kann zwischen Knoten weitergegeben werden.",
"colorPolymorphic": "Farbpolymorph",
"colorCodeEdgesHelp": "Farbkodieren Sie Kanten entsprechend ihren verbundenen Feldern",
"animatedEdges": "Animierte Kanten",
"booleanCollectionDescription": "Eine Sammlung boolescher Werte.",
"colorField": "Farbe",
"collectionItem": "Objekt in Sammlung",
"animatedEdgesHelp": "Animieren Sie ausgewählte Kanten und Kanten, die mit ausgewählten Knoten verbunden sind",
"cannotDuplicateConnection": "Es können keine doppelten Verbindungen erstellt werden",
"booleanPolymorphic": "Boolesche Polymorphie",
"colorPolymorphicDescription": "Eine Sammlung von Farben.",
"clipFieldDescription": "Tokenizer- und text_encoder-Untermodelle.",
"clipField": "Clip",
"colorCollection": "Eine Sammlung von Farben.",
"boolean": "Boolesche Werte",
"currentImage": "Aktuelles Bild",
"booleanDescription": "Boolesche Werte sind wahr oder falsch.",
"collection": "Sammlung",
"cannotConnectInputToInput": "Eingang kann nicht mit Eingang verbunden werden",
"conditioningField": "Konditionierung",
"cannotConnectOutputToOutput": "Ausgang kann nicht mit Ausgang verbunden werden",
"booleanCollection": "Boolesche Werte Sammlung",
"cannotConnectToSelf": "Es kann keine Verbindung zu sich selbst hergestellt werden",
"colorCodeEdges": "Farbkodierte Kanten",
"addNodeToolTip": "Knoten hinzufügen (Umschalt+A, Leertaste)"
},
"hrf": {
"enableHrf": "Aktivieren Sie die Korrektur für hohe Auflösungen",
"upscaleMethod": "Vergrößerungsmethoden",
"enableHrfTooltip": "Generieren Sie mit einer niedrigeren Anfangsauflösung, skalieren Sie auf die Basisauflösung hoch und führen Sie dann Image-to-Image aus.",
"metadata": {
"strength": "Hochauflösender Fix Stärke",
"enabled": "Hochauflösender Fix aktiviert",
"method": "Hochauflösender Fix Methode"
},
"hrf": "Hochauflösender Fix",
"hrfStrength": "Hochauflösende Fix Stärke",
"strengthTooltip": "Niedrigere Werte führen zu weniger Details, wodurch potenzielle Artefakte reduziert werden können."
},
"models": {
"noMatchingModels": "Keine passenden Modelle",
"loading": "lade",
"noMatchingLoRAs": "Keine passenden LoRAs",
"noLoRAsAvailable": "Keine LoRAs verfügbar",
"noModelsAvailable": "Keine Modelle verfügbar",
"selectModel": "Wählen ein Modell aus",
"noRefinerModelsInstalled": "Keine SDXL Refiner-Modelle installiert",
"noLoRAsInstalled": "Keine LoRAs installiert",
"selectLoRA": "Wählen ein LoRA aus"
"rotateCounterClockwise": "Gegen den Uhrzeigersinn verdrehen"
}
}

View File

@@ -6,7 +6,6 @@
"flipVertically": "Flip Vertically",
"invokeProgressBar": "Invoke progress bar",
"menu": "Menu",
"mode": "Mode",
"modelSelect": "Model Select",
"modifyConfig": "Modify Config",
"nextImage": "Next Image",
@@ -31,10 +30,6 @@
"cancel": "Cancel",
"changeBoard": "Change Board",
"clearSearch": "Clear Search",
"deleteBoard": "Delete Board",
"deleteBoardAndImages": "Delete Board and Images",
"deleteBoardOnly": "Delete Board Only",
"deletedBoardsCannotbeRestored": "Deleted boards cannot be restored",
"loading": "Loading...",
"menuItemAutoAdd": "Auto-add to this Board",
"move": "Move",
@@ -56,12 +51,9 @@
"cancel": "Cancel",
"close": "Close",
"on": "On",
"checkpoint": "Checkpoint",
"communityLabel": "Community",
"controlNet": "ControlNet",
"controlAdapter": "Control Adapter",
"data": "Data",
"details": "Details",
"ipAdapter": "IP Adapter",
"t2iAdapter": "T2I Adapter",
"darkMode": "Dark Mode",
@@ -73,14 +65,13 @@
"imagePrompt": "Image Prompt",
"imageFailedToLoad": "Unable to Load Image",
"img2img": "Image To Image",
"inpaint": "inpaint",
"langArabic": "العربية",
"langBrPortuguese": "Português do Brasil",
"langDutch": "Nederlands",
"langEnglish": "English",
"langFrench": "Français",
"langGerman": "German",
"langHebrew": "Hebrew",
"langGerman": "Deutsch",
"langHebrew": "עברית",
"langItalian": "Italiano",
"langJapanese": "日本語",
"langKorean": "한국어",
@@ -102,8 +93,6 @@
"nodes": "Workflow Editor",
"nodesDesc": "A node based system for the generation of images is under development currently. Stay tuned for updates about this amazing feature.",
"openInNewTab": "Open in New Tab",
"outpaint": "outpaint",
"outputs": "Outputs",
"postProcessDesc1": "Invoke AI offers a wide variety of post processing features. Image Upscaling and Face Restoration are already available in the WebUI. You can access them from the Advanced Options menu of the Text To Image and Image To Image tabs. You can also process images directly, using the image action buttons above the current image display or in the viewer.",
"postProcessDesc2": "A dedicated UI will be released soon to facilitate more advanced post processing workflows.",
"postProcessDesc3": "The Invoke AI Command Line Interface offers various other features including Embiggen.",
@@ -111,9 +100,7 @@
"postProcessing": "Post Processing",
"random": "Random",
"reportBugLabel": "Report Bug",
"safetensors": "Safetensors",
"settingsLabel": "Settings",
"simple": "Simple",
"statusConnected": "Connected",
"statusConvertingModel": "Converting Model",
"statusDisconnected": "Disconnected",
@@ -140,7 +127,6 @@
"statusSavingImage": "Saving Image",
"statusUpscaling": "Upscaling",
"statusUpscalingESRGAN": "Upscaling (ESRGAN)",
"template": "Template",
"training": "Training",
"trainingDesc1": "A dedicated workflow for training your own embeddings and checkpoints using Textual Inversion and Dreambooth from the web interface.",
"trainingDesc2": "InvokeAI already supports training custom embeddourings using Textual Inversion using the main script.",
@@ -151,9 +137,9 @@
"controlnet": {
"controlAdapter_one": "Control Adapter",
"controlAdapter_other": "Control Adapters",
"controlnet": "$t(controlnet.controlAdapter_one) #{{number}} ($t(common.controlNet))",
"ip_adapter": "$t(controlnet.controlAdapter_one) #{{number}} ($t(common.ipAdapter))",
"t2i_adapter": "$t(controlnet.controlAdapter_one) #{{number}} ($t(common.t2iAdapter))",
"controlnet": "$t(controlnet.controlAdapter) #{{number}} ($t(common.controlNet))",
"ip_adapter": "$t(controlnet.controlAdapter) #{{number}} ($t(common.ipAdapter))",
"t2i_adapter": "$t(controlnet.controlAdapter) #{{number}} ($t(common.t2iAdapter))",
"addControlNet": "Add $t(common.controlNet)",
"addIPAdapter": "Add $t(common.ipAdapter)",
"addT2IAdapter": "Add $t(common.t2iAdapter)",
@@ -228,7 +214,6 @@
"setControlImageDimensions": "Set Control Image Dimensions To W/H",
"showAdvanced": "Show Advanced",
"toggleControlNet": "Toggle this ControlNet",
"unstarImage": "Unstar Image",
"w": "W",
"weight": "Weight",
"enableIPAdapter": "Enable IP Adapter",
@@ -236,19 +221,6 @@
"resetIPAdapterImage": "Reset IP Adapter Image",
"ipAdapterImageFallback": "No IP Adapter Image Selected"
},
"hrf": {
"hrf": "High Resolution Fix",
"enableHrf": "Enable High Resolution Fix",
"enableHrfTooltip": "Generate with a lower initial resolution, upscale to the base resolution, then run Image-to-Image.",
"upscaleMethod": "Upscale Method",
"hrfStrength": "High Resolution Fix Strength",
"strengthTooltip": "Lower values result in fewer details, which may reduce potential artifacts.",
"metadata": {
"enabled": "High Resolution Fix Enabled",
"strength": "High Resolution Fix Strength",
"method": "High Resolution Fix Method"
}
},
"embedding": {
"addEmbedding": "Add Embedding",
"incompatibleModel": "Incompatible base model:",
@@ -294,7 +266,6 @@
"next": "Next",
"status": "Status",
"total": "Total",
"time": "Time",
"pending": "Pending",
"in_progress": "In Progress",
"completed": "Completed",
@@ -302,7 +273,6 @@
"canceled": "Canceled",
"completedIn": "Completed in",
"batch": "Batch",
"batchFieldValues": "Batch Field Values",
"item": "Item",
"session": "Session",
"batchValues": "Batch Values",
@@ -352,7 +322,6 @@
"loading": "Loading",
"loadMore": "Load More",
"maintainAspectRatio": "Maintain Aspect Ratio",
"noImageSelected": "No Image Selected",
"noImagesInGallery": "No Images to Display",
"setCurrentImage": "Set as Current Image",
"showGenerations": "Show Generations",
@@ -590,10 +559,8 @@
"negativePrompt": "Negative Prompt",
"noImageDetails": "No image details found",
"noMetaData": "No metadata found",
"noRecallParameters": "No parameters to recall found",
"perlin": "Perlin Noise",
"positivePrompt": "Positive Prompt",
"recallParameters": "Recall Parameters",
"scheduler": "Scheduler",
"seamless": "Seamless",
"seed": "Seed",
@@ -601,7 +568,6 @@
"strength": "Image to image strength",
"Threshold": "Noise Threshold",
"variations": "Seed-weight pairs",
"vae": "VAE",
"width": "Width",
"workflow": "Workflow"
},
@@ -624,7 +590,6 @@
"cannotUseSpaces": "Cannot Use Spaces",
"checkpointFolder": "Checkpoint Folder",
"checkpointModels": "Checkpoints",
"checkpointOrSafetensors": "$t(common.checkpoint) / $t(common.safetensors)",
"clearCheckpointFolder": "Clear Checkpoint Folder",
"closeAdvanced": "Close Advanced",
"config": "Config",
@@ -704,7 +669,6 @@
"nameValidationMsg": "Enter a name for your model",
"noCustomLocationProvided": "No Custom Location Provided",
"noModels": "No Models Found",
"noModelSelected": "No Model Selected",
"noModelsFound": "No Models Found",
"none": "none",
"notLoaded": "not loaded",
@@ -750,17 +714,13 @@
"widthValidationMsg": "Default width of your model."
},
"models": {
"addLora": "Add LoRA",
"esrganModel": "ESRGAN Model",
"loading": "loading",
"noLoRAsAvailable": "No LoRAs available",
"noMatchingLoRAs": "No matching LoRAs",
"noMatchingModels": "No matching Models",
"noModelsAvailable": "No models available",
"selectLoRA": "Select a LoRA",
"selectModel": "Select a Model",
"noLoRAsInstalled": "No LoRAs installed",
"noRefinerModelsInstalled": "No SDXL Refiner models installed"
"selectModel": "Select a Model"
},
"nodes": {
"addNode": "Add Node",
@@ -942,10 +902,7 @@
"unknownTemplate": "Unknown Template",
"unkownInvocation": "Unknown Invocation type",
"updateNode": "Update Node",
"updateAllNodes": "Update All Nodes",
"updateApp": "Update App",
"unableToUpdateNodes_one": "Unable to update {{count}} node",
"unableToUpdateNodes_other": "Unable to update {{count}} nodes",
"vaeField": "Vae",
"vaeFieldDescription": "Vae submodel.",
"vaeModelField": "VAE",
@@ -1032,7 +989,6 @@
"maskAdjustmentsHeader": "Mask Adjustments",
"maskBlur": "Blur",
"maskBlurMethod": "Blur Method",
"maskEdge": "Mask Edge",
"negativePromptPlaceholder": "Negative Prompt",
"noiseSettings": "Noise",
"noiseThreshold": "Noise Threshold",
@@ -1080,7 +1036,6 @@
"upscale": "Upscale (Shift + U)",
"upscaleImage": "Upscale Image",
"upscaling": "Upscaling",
"unmasked": "Unmasked",
"useAll": "Use All",
"useCpuNoise": "Use CPU Noise",
"cpuNoise": "CPU Noise",
@@ -1102,7 +1057,6 @@
"dynamicPrompts": "Dynamic Prompts",
"enableDynamicPrompts": "Enable Dynamic Prompts",
"maxPrompts": "Max Prompts",
"promptsPreview": "Prompts Preview",
"promptsWithCount_one": "{{count}} Prompt",
"promptsWithCount_other": "{{count}} Prompts",
"seedBehaviour": {
@@ -1142,10 +1096,7 @@
"displayHelpIcons": "Display Help Icons",
"displayInProgress": "Display Progress Images",
"enableImageDebugging": "Enable Image Debugging",
"enableInformationalPopovers": "Enable Informational Popovers",
"enableInvisibleWatermark": "Enable Invisible Watermark",
"enableNodesEditor": "Enable Nodes Editor",
"enableNSFWChecker": "Enable NSFW Checker",
"experimental": "Experimental",
"favoriteSchedulers": "Favorite Schedulers",
"favoriteSchedulersPlaceholder": "No schedulers favorited",
@@ -1162,13 +1113,13 @@
"showProgressInViewer": "Show Progress Images in Viewer",
"ui": "User Interface",
"useSlidersForAll": "Use Sliders For All Options",
"clearIntermediatesDisabled": "Queue must be empty to clear intermediates",
"clearIntermediatesDesc1": "Clearing intermediates will reset your Canvas and ControlNet state.",
"clearIntermediatesDesc2": "Intermediate images are byproducts of generation, different from the result images in the gallery. Clearing intermediates will free disk space.",
"clearIntermediatesDesc3": "Your gallery images will not be deleted.",
"clearIntermediates": "Clear Intermediates",
"clearIntermediatesWithCount_one": "Clear {{count}} Intermediate",
"clearIntermediatesWithCount_other": "Clear {{count}} Intermediates",
"clearIntermediatesWithCount_zero": "No Intermediates to Clear",
"intermediatesCleared_one": "Cleared {{count}} Intermediate",
"intermediatesCleared_other": "Cleared {{count}} Intermediates",
"intermediatesClearedFailed": "Problem Clearing Intermediates"
@@ -1245,8 +1196,7 @@
"sentToImageToImage": "Sent To Image To Image",
"sentToUnifiedCanvas": "Sent to Unified Canvas",
"serverError": "Server Error",
"setAsCanvasInitialImage": "Set as canvas initial image",
"setCanvasInitialImage": "Set canvas initial image",
"setCanvasInitialImage": "Set as canvas initial image",
"setControlImage": "Set as control image",
"setIPAdapterImage": "Set as IP Adapter Image",
"setInitialImage": "Set as initial image",
@@ -1304,15 +1254,11 @@
},
"compositingBlur": {
"heading": "Blur",
"paragraphs": [
"The blur radius of the mask."
]
"paragraphs": ["The blur radius of the mask."]
},
"compositingBlurMethod": {
"heading": "Blur Method",
"paragraphs": [
"The method of blur applied to the masked area."
]
"paragraphs": ["The method of blur applied to the masked area."]
},
"compositingCoherencePass": {
"heading": "Coherence Pass",
@@ -1322,9 +1268,7 @@
},
"compositingCoherenceMode": {
"heading": "Mode",
"paragraphs": [
"The mode of the Coherence Pass."
]
"paragraphs": ["The mode of the Coherence Pass."]
},
"compositingCoherenceSteps": {
"heading": "Steps",
@@ -1342,9 +1286,7 @@
},
"compositingMaskAdjustments": {
"heading": "Mask Adjustments",
"paragraphs": [
"Adjust the mask."
]
"paragraphs": ["Adjust the mask."]
},
"controlNetBeginEnd": {
"heading": "Begin / End Step Percentage",
@@ -1402,9 +1344,7 @@
},
"infillMethod": {
"heading": "Infill Method",
"paragraphs": [
"Method to infill the selected area."
]
"paragraphs": ["Method to infill the selected area."]
},
"lora": {
"heading": "LoRA Weight",

View File

@@ -87,9 +87,7 @@
"learnMore": "Per saperne di più",
"ipAdapter": "Adattatore IP",
"t2iAdapter": "Adattatore T2I",
"controlAdapter": "Adattatore di Controllo",
"controlNet": "ControlNet",
"auto": "Automatico"
"controlAdapter": "Adattatore di Controllo"
},
"gallery": {
"generations": "Generazioni",
@@ -117,10 +115,7 @@
"currentlyInUse": "Questa immagine è attualmente utilizzata nelle seguenti funzionalità:",
"copy": "Copia",
"download": "Scarica",
"setCurrentImage": "Imposta come immagine corrente",
"preparingDownload": "Preparazione del download",
"preparingDownloadFailed": "Problema durante la preparazione del download",
"downloadSelection": "Scarica gli elementi selezionati"
"setCurrentImage": "Imposta come immagine corrente"
},
"hotkeys": {
"keyboardShortcuts": "Tasti rapidi",
@@ -473,8 +468,7 @@
"useCustomConfig": "Utilizza configurazione personalizzata",
"closeAdvanced": "Chiudi Avanzate",
"modelType": "Tipo di modello",
"customConfigFileLocation": "Posizione del file di configurazione personalizzato",
"vaePrecision": "Precisione VAE"
"customConfigFileLocation": "Posizione del file di configurazione personalizzato"
},
"parameters": {
"images": "Immagini",
@@ -576,12 +570,9 @@
"systemBusy": "Sistema occupato",
"unableToInvoke": "Impossibile invocare",
"systemDisconnected": "Sistema disconnesso",
"noControlImageForControlAdapter": "L'adattatore di controllo #{{number}} non ha un'immagine di controllo",
"noModelForControlAdapter": "Nessun modello selezionato per l'adattatore di controllo #{{number}}.",
"incompatibleBaseModelForControlAdapter": "Il modello dell'adattatore di controllo #{{number}} non è compatibile con il modello principale.",
"missingNodeTemplate": "Modello di nodo mancante",
"missingInputForField": "{{nodeLabel}} -> {{fieldLabel}} ingresso mancante",
"missingFieldTemplate": "Modello di campo mancante"
"noControlImageForControlAdapter": "L'adattatore di controllo {{number}} non ha un'immagine di controllo",
"noModelForControlAdapter": "Nessun modello selezionato per l'adattatore di controllo {{number}}.",
"incompatibleBaseModelForControlAdapter": "Il modello dell'adattatore di controllo {{number}} non è compatibile con il modello principale."
},
"enableNoiseSettings": "Abilita le impostazioni del rumore",
"cpuNoise": "Rumore CPU",
@@ -592,7 +583,7 @@
"iterations": "Iterazioni",
"iterationsWithCount_one": "{{count}} Iterazione",
"iterationsWithCount_many": "{{count}} Iterazioni",
"iterationsWithCount_other": "{{count}} Iterazioni",
"iterationsWithCount_other": "",
"seamlessX&Y": "Senza cuciture X & Y",
"isAllowedToUpscale": {
"useX2Model": "L'immagine è troppo grande per l'ampliamento con il modello x4, utilizza il modello x2",
@@ -600,8 +591,7 @@
},
"seamlessX": "Senza cuciture X",
"seamlessY": "Senza cuciture Y",
"imageActions": "Azioni Immagine",
"aspectRatioFree": "Libere"
"imageActions": "Azioni Immagine"
},
"settings": {
"models": "Modelli",
@@ -630,19 +620,7 @@
"beta": "Beta",
"enableNodesEditor": "Abilita l'editor dei nodi",
"experimental": "Sperimentale",
"autoChangeDimensions": "Aggiorna L/A alle impostazioni predefinite del modello in caso di modifica",
"clearIntermediates": "Cancella le immagini intermedie",
"clearIntermediatesDesc3": "Le immagini della galleria non verranno eliminate.",
"clearIntermediatesDesc2": "Le immagini intermedie sono sottoprodotti della generazione, diversi dalle immagini risultanti nella galleria. La cancellazione degli intermedi libererà spazio su disco.",
"intermediatesCleared_one": "Cancellata {{count}} immagine intermedia",
"intermediatesCleared_many": "Cancellate {{count}} immagini intermedie",
"intermediatesCleared_other": "Cancellate {{count}} immagini intermedie",
"clearIntermediatesDesc1": "La cancellazione delle immagini intermedie ripristinerà lo stato di Tela Unificata e ControlNet.",
"intermediatesClearedFailed": "Problema con la cancellazione delle immagini intermedie",
"clearIntermediatesWithCount_one": "Cancella {{count}} immagine intermedia",
"clearIntermediatesWithCount_many": "Cancella {{count}} immagini intermedie",
"clearIntermediatesWithCount_other": "Cancella {{count}} immagini intermedie",
"clearIntermediatesDisabled": "La coda deve essere vuota per cancellare le immagini intermedie"
"autoChangeDimensions": "Aggiorna L/A alle impostazioni predefinite del modello in caso di modifica"
},
"toast": {
"tempFoldersEmptied": "Cartella temporanea svuotata",
@@ -692,9 +670,9 @@
"nodesUnrecognizedTypes": "Impossibile caricare. Il grafico ha tipi di dati non riconosciuti",
"nodesNotValidJSON": "JSON non valido",
"nodesBrokenConnections": "Impossibile caricare. Alcune connessioni sono interrotte.",
"baseModelChangedCleared_one": "Il modello base è stato modificato, cancellato o disabilitato {{count}} sotto-modello incompatibile",
"baseModelChangedCleared_many": "Il modello base è stato modificato, cancellato o disabilitato {{count}} sotto-modelli incompatibili",
"baseModelChangedCleared_other": "Il modello base è stato modificato, cancellato o disabilitato {{count}} sotto-modelli incompatibili",
"baseModelChangedCleared_one": "Il modello base è stato modificato, cancellato o disabilitato {{number}} sotto-modello incompatibile",
"baseModelChangedCleared_many": "",
"baseModelChangedCleared_other": "",
"imageSavingFailed": "Salvataggio dell'immagine non riuscito",
"canvasSentControlnetAssets": "Tela inviata a ControlNet & Risorse",
"problemCopyingCanvasDesc": "Impossibile copiare la tela",
@@ -888,145 +866,7 @@
"workflowValidation": "Errore di convalida del flusso di lavoro",
"workflowAuthor": "Autore",
"workflowName": "Nome",
"workflowNotes": "Note",
"unhandledInputProperty": "Proprietà di input non gestita",
"versionUnknown": " Versione sconosciuta",
"unableToValidateWorkflow": "Impossibile convalidare il flusso di lavoro",
"updateApp": "Aggiorna App",
"problemReadingWorkflow": "Problema durante la lettura del flusso di lavoro dall'immagine",
"unableToLoadWorkflow": "Impossibile caricare il flusso di lavoro",
"updateNode": "Aggiorna nodo",
"version": "Versione",
"notes": "Note",
"problemSettingTitle": "Problema nell'impostazione del titolo",
"unkownInvocation": "Tipo di invocazione sconosciuta",
"unknownTemplate": "Modello sconosciuto",
"nodeType": "Tipo di nodo",
"vaeField": "VAE",
"unhandledOutputProperty": "Proprietà di output non gestita",
"notesDescription": "Aggiunge note sul tuo flusso di lavoro",
"unknownField": "Campo sconosciuto",
"unknownNode": "Nodo sconosciuto",
"vaeFieldDescription": "Sotto modello VAE.",
"booleanPolymorphicDescription": "Una raccolta di booleani.",
"missingTemplate": "Modello mancante",
"outputSchemaNotFound": "Schema di output non trovato",
"colorFieldDescription": "Un colore RGBA.",
"maybeIncompatible": "Potrebbe essere incompatibile con quello installato",
"noNodeSelected": "Nessun nodo selezionato",
"colorPolymorphic": "Colore polimorfico",
"booleanCollectionDescription": "Una raccolta di booleani.",
"colorField": "Colore",
"nodeTemplate": "Modello di nodo",
"nodeOpacity": "Opacità del nodo",
"pickOne": "Sceglierne uno",
"outputField": "Campo di output",
"nodeSearch": "Cerca nodi",
"nodeOutputs": "Uscite del nodo",
"collectionItem": "Oggetto della raccolta",
"noConnectionInProgress": "Nessuna connessione in corso",
"noConnectionData": "Nessun dato di connessione",
"outputFields": "Campi di output",
"cannotDuplicateConnection": "Impossibile creare connessioni duplicate",
"booleanPolymorphic": "Polimorfico booleano",
"colorPolymorphicDescription": "Una collezione di colori polimorfici.",
"missingCanvaInitImage": "Immagine iniziale della tela mancante",
"clipFieldDescription": "Sottomodelli di tokenizzatore e codificatore di testo.",
"noImageFoundState": "Nessuna immagine iniziale trovata nello stato",
"clipField": "CLIP",
"noMatchingNodes": "Nessun nodo corrispondente",
"noFieldType": "Nessun tipo di campo",
"colorCollection": "Una collezione di colori.",
"noOutputSchemaName": "Nessun nome dello schema di output trovato nell'oggetto di riferimento",
"boolean": "Booleani",
"missingCanvaInitMaskImages": "Immagini di inizializzazione e maschera della tela mancanti",
"oNNXModelField": "Modello ONNX",
"node": "Nodo",
"booleanDescription": "I booleani sono veri o falsi.",
"collection": "Raccolta",
"cannotConnectInputToInput": "Impossibile collegare Input a Input",
"cannotConnectOutputToOutput": "Impossibile collegare Output ad Output",
"booleanCollection": "Raccolta booleana",
"cannotConnectToSelf": "Impossibile connettersi a se stesso",
"mismatchedVersion": "Ha una versione non corrispondente",
"outputNode": "Nodo di Output",
"loadingNodes": "Caricamento nodi...",
"oNNXModelFieldDescription": "Campo del modello ONNX.",
"denoiseMaskFieldDescription": "La maschera di riduzione del rumore può essere passata tra i nodi",
"floatCollectionDescription": "Una raccolta di numeri virgola mobile.",
"enum": "Enumeratore",
"float": "In virgola mobile",
"doesNotExist": "non esiste",
"currentImageDescription": "Visualizza l'immagine corrente nell'editor dei nodi",
"fieldTypesMustMatch": "I tipi di campo devono corrispondere",
"edge": "Bordo",
"enumDescription": "Gli enumeratori sono valori che possono essere una delle diverse opzioni.",
"denoiseMaskField": "Maschera riduzione rumore",
"currentImage": "Immagine corrente",
"floatCollection": "Raccolta in virgola mobile",
"inputField": "Campo di Input",
"controlFieldDescription": "Informazioni di controllo passate tra i nodi.",
"skippingUnknownOutputType": "Tipo di campo di output sconosciuto saltato",
"latentsFieldDescription": "Le immagini latenti possono essere passate tra i nodi.",
"ipAdapterPolymorphicDescription": "Una raccolta di adattatori IP.",
"latentsPolymorphicDescription": "Le immagini latenti possono essere passate tra i nodi.",
"ipAdapterCollection": "Raccolta Adattatori IP",
"conditioningCollection": "Raccolta condizionamenti",
"ipAdapterPolymorphic": "Adattatore IP Polimorfico",
"integerPolymorphicDescription": "Una raccolta di numeri interi.",
"conditioningCollectionDescription": "Il condizionamento può essere passato tra i nodi.",
"skippingReservedFieldType": "Tipo di campo riservato saltato",
"conditioningPolymorphic": "Condizionamento Polimorfico",
"integer": "Numero Intero",
"latentsCollection": "Raccolta Latenti",
"sourceNode": "Nodo di origine",
"integerDescription": "Gli interi sono numeri senza punto decimale.",
"stringPolymorphic": "Stringa polimorfica",
"conditioningPolymorphicDescription": "Il condizionamento può essere passato tra i nodi.",
"skipped": "Saltato",
"imagePolymorphic": "Immagine Polimorfica",
"imagePolymorphicDescription": "Una raccolta di immagini.",
"floatPolymorphic": "Numeri in virgola mobile Polimorfici",
"ipAdapterCollectionDescription": "Una raccolta di adattatori IP.",
"stringCollectionDescription": "Una raccolta di stringhe.",
"unableToParseNode": "Impossibile analizzare il nodo",
"controlCollection": "Raccolta di Controllo",
"stringCollection": "Raccolta di stringhe",
"inputMayOnlyHaveOneConnection": "L'ingresso può avere solo una connessione",
"ipAdapter": "Adattatore IP",
"integerCollection": "Raccolta di numeri interi",
"controlCollectionDescription": "Informazioni di controllo passate tra i nodi.",
"skippedReservedInput": "Campo di input riservato saltato",
"inputNode": "Nodo di Input",
"imageField": "Immagine",
"skippedReservedOutput": "Campo di output riservato saltato",
"integerCollectionDescription": "Una raccolta di numeri interi.",
"conditioningFieldDescription": "Il condizionamento può essere passato tra i nodi.",
"stringDescription": "Le stringhe sono testo.",
"integerPolymorphic": "Numero intero Polimorfico",
"ipAdapterModel": "Modello Adattatore IP",
"latentsPolymorphic": "Latenti polimorfici",
"skippingInputNoTemplate": "Campo di input senza modello saltato",
"ipAdapterDescription": "Un adattatore di prompt di immagini (Adattatore IP).",
"stringPolymorphicDescription": "Una raccolta di stringhe.",
"skippingUnknownInputType": "Tipo di campo di input sconosciuto saltato",
"controlField": "Controllo",
"ipAdapterModelDescription": "Campo Modello adattatore IP",
"invalidOutputSchema": "Schema di output non valido",
"floatDescription": "I numeri in virgola mobile sono numeri con un punto decimale.",
"floatPolymorphicDescription": "Una raccolta di numeri in virgola mobile.",
"conditioningField": "Condizionamento",
"string": "Stringa",
"latentsField": "Latenti",
"connectionWouldCreateCycle": "La connessione creerebbe un ciclo",
"inputFields": "Campi di Input",
"uNetFieldDescription": "Sub-modello UNet.",
"imageCollectionDescription": "Una raccolta di immagini.",
"imageFieldDescription": "Le immagini possono essere passate tra i nodi.",
"unableToParseEdge": "Impossibile analizzare il bordo",
"latentsCollectionDescription": "Le immagini latenti possono essere passate tra i nodi.",
"imageCollection": "Raccolta Immagini",
"loRAModelField": "LoRA"
"workflowNotes": "Note"
},
"boards": {
"autoAddBoard": "Aggiungi automaticamente bacheca",
@@ -1043,8 +883,7 @@
"searchBoard": "Cerca bacheche ...",
"noMatching": "Nessuna bacheca corrispondente",
"selectBoard": "Seleziona una Bacheca",
"uncategorized": "Non categorizzato",
"downloadBoard": "Scarica la bacheca"
"uncategorized": "Non categorizzato"
},
"controlnet": {
"contentShuffleDescription": "Rimescola il contenuto di un'immagine",
@@ -1112,13 +951,8 @@
"addControlNet": "Aggiungi $t(common.controlNet)",
"controlNetT2IMutexDesc": "$t(common.controlNet) e $t(common.t2iAdapter) contemporaneamente non sono attualmente supportati.",
"addIPAdapter": "Aggiungi $t(common.ipAdapter)",
"controlAdapter_one": "Adattatore di Controllo",
"controlAdapter_many": "Adattatori di Controllo",
"controlAdapter_other": "Adattatori di Controllo",
"megaControl": "Mega ControlNet",
"minConfidence": "Confidenza minima",
"scribble": "Scribble",
"amult": "Angolo di illuminazione"
"controlAdapter": "Adattatore di Controllo",
"megaControl": "Mega ControlNet"
},
"queue": {
"queueFront": "Aggiungi all'inizio della coda",
@@ -1145,9 +979,7 @@
"pause": "Sospendi",
"pruneTooltip": "Rimuovi {{item_count}} elementi completati",
"cancelSucceeded": "Elemento annullato",
"batchQueuedDesc_one": "Aggiunta {{count}} sessione a {{direction}} della coda",
"batchQueuedDesc_many": "Aggiunte {{count}} sessioni a {{direction}} della coda",
"batchQueuedDesc_other": "Aggiunte {{count}} sessioni a {{direction}} della coda",
"batchQueuedDesc": "Aggiunte {{item_count}} sessioni a {{direction}} della coda",
"graphQueued": "Grafico in coda",
"batch": "Lotto",
"clearQueueAlertDialog": "Lo svuotamento della coda annulla immediatamente tutti gli elementi in elaborazione e cancella completamente la coda.",
@@ -1193,9 +1025,7 @@
"noLoRAsAvailable": "Nessun LoRA disponibile",
"noModelsAvailable": "Nessun modello disponibile",
"selectModel": "Seleziona un modello",
"selectLoRA": "Seleziona un LoRA",
"noRefinerModelsInstalled": "Nessun modello SDXL Refiner installato",
"noLoRAsInstalled": "Nessun LoRA installato"
"selectLoRA": "Seleziona un LoRA"
},
"invocationCache": {
"disable": "Disabilita",
@@ -1226,7 +1056,7 @@
"maxPrompts": "Numero massimo di prompt",
"promptsWithCount_one": "{{count}} Prompt",
"promptsWithCount_many": "{{count}} Prompt",
"promptsWithCount_other": "{{count}} Prompt",
"promptsWithCount_other": "",
"dynamicPrompts": "Prompt dinamici"
},
"popovers": {
@@ -1438,8 +1268,7 @@
"controlNet": {
"paragraphs": [
"ControlNet fornisce una guida al processo di generazione, aiutando a creare immagini con composizione, struttura o stile controllati, a seconda del modello selezionato."
],
"heading": "ControlNet"
]
}
},
"sdxl": {
@@ -1484,21 +1313,6 @@
"createdBy": "Creato da",
"workflow": "Flusso di lavoro",
"steps": "Passi",
"scheduler": "Campionatore",
"recallParameters": "Richiama i parametri",
"noRecallParameters": "Nessun parametro da richiamare trovato"
},
"hrf": {
"enableHrf": "Abilita Correzione Alta Risoluzione",
"upscaleMethod": "Metodo di ampliamento",
"enableHrfTooltip": "Genera con una risoluzione iniziale inferiore, esegue l'ampliamento alla risoluzione di base, quindi esegue Immagine a Immagine.",
"metadata": {
"strength": "Forza della Correzione Alta Risoluzione",
"enabled": "Correzione Alta Risoluzione Abilitata",
"method": "Metodo della Correzione Alta Risoluzione"
},
"hrf": "Correzione Alta Risoluzione",
"hrfStrength": "Forza della Correzione Alta Risoluzione",
"strengthTooltip": "Valori più bassi comportano meno dettagli, il che può ridurre potenziali artefatti."
"scheduler": "Campionatore"
}
}

View File

@@ -1,6 +1,6 @@
{
"common": {
"languagePickerLabel": "言語",
"languagePickerLabel": "言語選択",
"reportBugLabel": "バグ報告",
"settingsLabel": "設定",
"langJapanese": "日本語",
@@ -63,34 +63,11 @@
"langFrench": "Français",
"langGerman": "Deutsch",
"langPortuguese": "Português",
"nodes": "ワークフローエディター",
"nodes": "ノード",
"langKorean": "한국어",
"langPolish": "Polski",
"txt2img": "txt2img",
"postprocessing": "Post Processing",
"t2iAdapter": "T2I アダプター",
"communityLabel": "コミュニティ",
"dontAskMeAgain": "次回から確認しない",
"areYouSure": "本当によろしいですか?",
"on": "オン",
"nodeEditor": "ノードエディター",
"ipAdapter": "IPアダプター",
"controlAdapter": "コントロールアダプター",
"auto": "自動",
"openInNewTab": "新しいタブで開く",
"controlNet": "コントロールネット",
"statusProcessing": "処理中",
"linear": "リニア",
"imageFailedToLoad": "画像が読み込めません",
"imagePrompt": "画像プロンプト",
"modelManager": "モデルマネージャー",
"lightMode": "ライトモード",
"generate": "生成",
"learnMore": "もっと学ぶ",
"darkMode": "ダークモード",
"random": "ランダム",
"batch": "バッチマネージャー",
"advanced": "高度な設定"
"postprocessing": "Post Processing"
},
"gallery": {
"uploads": "アップロード",
@@ -297,7 +274,7 @@
"config": "Config",
"configValidationMsg": "モデルの設定ファイルへのパス",
"modelLocation": "モデルの場所",
"modelLocationValidationMsg": "ディフューザーモデルのあるローカルフォルダーのパスを入力してください",
"modelLocationValidationMsg": "モデルが配置されている場所へのパス。",
"repo_id": "Repo ID",
"repoIDValidationMsg": "モデルのリモートリポジトリ",
"vaeLocation": "VAEの場所",
@@ -332,79 +309,12 @@
"delete": "削除",
"deleteModel": "モデルを削除",
"deleteConfig": "設定を削除",
"deleteMsg1": "InvokeAIからこのモデルを削除してよろしいですか?",
"deleteMsg2": "これは、モデルがInvokeAIルートフォルダ内にある場合、ディスクからモデルを削除します。カスタム保存場所を使用している場合、モデルはディスクから削除されません。",
"deleteMsg1": "InvokeAIからこのモデルエントリーを削除してよろしいですか?",
"deleteMsg2": "これは、ドライブからモデルのCheckpointファイルを削除するものではありません。必要であればそれらを読み込むことができます。",
"formMessageDiffusersModelLocation": "Diffusersモデルの場所",
"formMessageDiffusersModelLocationDesc": "最低でも1つは入力してください。",
"formMessageDiffusersVAELocation": "VAEの場所s",
"formMessageDiffusersVAELocationDesc": "指定しない場合、InvokeAIは上記のモデルの場所にあるVAEファイルを探します。",
"importModels": "モデルをインポート",
"custom": "カスタム",
"none": "なし",
"convert": "変換",
"statusConverting": "変換中",
"cannotUseSpaces": "スペースは使えません",
"convertToDiffusersHelpText6": "このモデルを変換しますか?",
"checkpointModels": "チェックポイント",
"settings": "設定",
"convertingModelBegin": "モデルを変換しています...",
"baseModel": "ベースモデル",
"modelDeleteFailed": "モデルの削除ができませんでした",
"convertToDiffusers": "ディフューザーに変換",
"alpha": "アルファ",
"diffusersModels": "ディフューザー",
"pathToCustomConfig": "カスタム設定のパス",
"noCustomLocationProvided": "カスタムロケーションが指定されていません",
"modelConverted": "モデル変換が完了しました",
"weightedSum": "重み付け総和",
"inverseSigmoid": "逆シグモイド",
"invokeAIFolder": "Invoke AI フォルダ",
"syncModelsDesc": "モデルがバックエンドと同期していない場合、このオプションを使用してモデルを更新できます。通常、モデル.yamlファイルを手動で更新したり、アプリケーションの起動後にモデルをInvokeAIルートフォルダに追加した場合に便利です。",
"noModels": "モデルが見つかりません",
"sigmoid": "シグモイド",
"merge": "マージ",
"modelMergeInterpAddDifferenceHelp": "このモードでは、モデル3がまずモデル2から減算されます。その結果得られたバージョンが、上記で設定されたアルファ率でモデル1とブレンドされます。",
"customConfig": "カスタム設定",
"predictionType": "予測タイプ(安定したディフュージョン 2.x モデルおよび一部の安定したディフュージョン 1.x モデル用)",
"selectModel": "モデルを選択",
"modelSyncFailed": "モデルの同期に失敗しました",
"quickAdd": "クイック追加",
"simpleModelDesc": "ローカルのDiffusersモデル、ローカルのチェックポイント/safetensorsモデル、HuggingFaceリポジトリのID、またはチェックポイント/ DiffusersモデルのURLへのパスを指定してください。",
"customSaveLocation": "カスタム保存場所",
"advanced": "高度な設定",
"modelDeleted": "モデルが削除されました",
"convertToDiffusersHelpText2": "このプロセスでは、モデルマネージャーのエントリーを同じモデルのディフューザーバージョンに置き換えます。",
"modelUpdateFailed": "モデル更新が失敗しました",
"useCustomConfig": "カスタム設定を使用する",
"convertToDiffusersHelpText5": "十分なディスク空き容量があることを確認してください。モデルは一般的に2GBから7GBのサイズがあります。",
"modelConversionFailed": "モデル変換が失敗しました",
"modelEntryDeleted": "モデルエントリーが削除されました",
"syncModels": "モデルを同期",
"mergedModelSaveLocation": "保存場所",
"closeAdvanced": "高度な設定を閉じる",
"modelType": "モデルタイプ",
"modelsMerged": "モデルマージ完了",
"modelsMergeFailed": "モデルマージ失敗",
"scanForModels": "モデルをスキャン",
"customConfigFileLocation": "カスタム設定ファイルの場所",
"convertToDiffusersHelpText1": "このモデルは 🧨 Diffusers フォーマットに変換されます。",
"modelsSynced": "モデルが同期されました",
"invokeRoot": "InvokeAIフォルダ",
"mergedModelCustomSaveLocation": "カスタムパス",
"mergeModels": "マージモデル",
"interpolationType": "補間タイプ",
"modelMergeHeaderHelp2": "マージできるのはDiffusersのみです。チェックポイントモデルをマージしたい場合は、まずDiffusersに変換してください。",
"convertToDiffusersSaveLocation": "保存場所",
"pickModelType": "モデルタイプを選択",
"sameFolder": "同じフォルダ",
"convertToDiffusersHelpText3": "チェックポイントファイルは、InvokeAIルートフォルダ内にある場合、ディスクから削除されます。カスタムロケーションにある場合は、削除されません。",
"loraModels": "LoRA",
"modelMergeAlphaHelp": "アルファはモデルのブレンド強度を制御します。アルファ値が低いと、2番目のモデルの影響が低くなります。",
"addDifference": "差分を追加",
"modelMergeHeaderHelp1": "あなたのニーズに適したブレンドを作成するために、異なるモデルを最大3つまでマージすることができます。",
"ignoreMismatch": "選択されたモデル間の不一致を無視する",
"convertToDiffusersHelpText4": "これは一回限りのプロセスです。コンピュータの仕様によっては、約30秒から60秒かかる可能性があります。",
"mergedModelName": "マージされたモデル名"
"formMessageDiffusersVAELocationDesc": "指定しない場合、InvokeAIは上記のモデルの場所にあるVAEファイルを探します。"
},
"parameters": {
"images": "画像",
@@ -530,8 +440,7 @@
"next": "次",
"accept": "同意",
"showHide": "表示/非表示",
"discardAll": "すべて破棄",
"snapToGrid": "グリッドにスナップ"
"discardAll": "すべて破棄"
},
"accessibility": {
"modelSelect": "モデルを選択",
@@ -543,7 +452,7 @@
"useThisParameter": "このパラメータを使用する",
"copyMetadataJson": "メタデータをコピー(JSON)",
"zoomIn": "ズームイン",
"exitViewer": "ビューアーを終了",
"exitViewer": "ExitViewer",
"zoomOut": "ズームアウト",
"rotateCounterClockwise": "反時計回りに回転",
"rotateClockwise": "時計回りに回転",
@@ -552,265 +461,6 @@
"toggleAutoscroll": "自動スクロールの切替",
"modifyConfig": "Modify Config",
"toggleLogViewer": "Log Viewerの切替",
"showOptionsPanel": "サイドパネルを表示",
"showGalleryPanel": "ギャラリーパネルを表示",
"menu": "メニュー",
"loadMore": "さらに読み込む"
},
"controlnet": {
"resize": "リサイズ",
"showAdvanced": "高度な設定を表示",
"addT2IAdapter": "$t(common.t2iAdapter)を追加",
"importImageFromCanvas": "キャンバスから画像をインポート",
"lineartDescription": "画像を線画に変換",
"importMaskFromCanvas": "キャンバスからマスクをインポート",
"hideAdvanced": "高度な設定を非表示",
"ipAdapterModel": "アダプターモデル",
"resetControlImage": "コントロール画像をリセット",
"beginEndStepPercent": "開始 / 終了ステップパーセンテージ",
"duplicate": "複製",
"balanced": "バランス",
"prompt": "プロンプト",
"depthMidasDescription": "Midasを使用して深度マップを生成",
"openPoseDescription": "Openposeを使用してポーズを推定",
"control": "コントロール",
"resizeMode": "リサイズモード",
"weight": "重み",
"selectModel": "モデルを選択",
"crop": "切り抜き",
"w": "幅",
"processor": "プロセッサー",
"addControlNet": "$t(common.controlNet)を追加",
"none": "なし",
"incompatibleBaseModel": "互換性のないベースモデル:",
"enableControlnet": "コントロールネットを有効化",
"detectResolution": "検出解像度",
"controlNetT2IMutexDesc": "$t(common.controlNet)と$t(common.t2iAdapter)の同時使用は現在サポートされていません。",
"pidiDescription": "PIDI画像処理",
"controlMode": "コントロールモード",
"fill": "塗りつぶし",
"cannyDescription": "Canny 境界検出",
"addIPAdapter": "$t(common.ipAdapter)を追加",
"colorMapDescription": "画像からカラーマップを生成",
"lineartAnimeDescription": "アニメスタイルの線画処理",
"imageResolution": "画像解像度",
"megaControl": "メガコントロール",
"lowThreshold": "最低閾値",
"autoConfigure": "プロセッサーを自動設定",
"highThreshold": "最大閾値",
"saveControlImage": "コントロール画像を保存",
"toggleControlNet": "このコントロールネットを切り替え",
"delete": "削除",
"controlAdapter_other": "コントロールアダプター",
"colorMapTileSize": "タイルサイズ",
"ipAdapterImageFallback": "IP Adapterの画像が選択されていません",
"mediapipeFaceDescription": "Mediapipeを使用して顔を検出",
"depthZoeDescription": "Zoeを使用して深度マップを生成",
"setControlImageDimensions": "コントロール画像のサイズを幅と高さにセット",
"resetIPAdapterImage": "IP Adapterの画像をリセット",
"handAndFace": "手と顔",
"enableIPAdapter": "IP Adapterを有効化",
"amult": "a_mult",
"contentShuffleDescription": "画像の内容をシャッフルします",
"bgth": "bg_th",
"controlNetEnabledT2IDisabled": "$t(common.controlNet) が有効化され、$t(common.t2iAdapter)s が無効化されました",
"controlnet": "$t(controlnet.controlAdapter_one) #{{number}} ($t(common.controlNet))",
"t2iEnabledControlNetDisabled": "$t(common.t2iAdapter) が有効化され、$t(common.controlNet)s が無効化されました",
"ip_adapter": "$t(controlnet.controlAdapter_one) #{{number}} ($t(common.ipAdapter))",
"t2i_adapter": "$t(controlnet.controlAdapter_one) #{{number}} ($t(common.t2iAdapter))",
"minConfidence": "最小確信度",
"colorMap": "Color",
"noneDescription": "処理は行われていません",
"canny": "Canny",
"hedDescription": "階層的エッジ検出",
"maxFaces": "顔の最大数"
},
"metadata": {
"seamless": "シームレス",
"Threshold": "ノイズ閾値",
"seed": "シード",
"width": "幅",
"workflow": "ワークフロー",
"steps": "ステップ",
"scheduler": "スケジューラー",
"positivePrompt": "ポジティブプロンプト",
"strength": "Image to Image 強度",
"perlin": "パーリンノイズ",
"recallParameters": "パラメータを呼び出す"
},
"queue": {
"queueEmpty": "キューが空です",
"pauseSucceeded": "処理が一時停止されました",
"queueFront": "キューの先頭へ追加",
"queueBack": "キューに追加",
"queueCountPrediction": "{{predicted}}をキューに追加",
"queuedCount": "保留中 {{pending}}",
"pause": "一時停止",
"queue": "キュー",
"pauseTooltip": "処理を一時停止",
"cancel": "キャンセル",
"queueTotal": "合計 {{total}}",
"resumeSucceeded": "処理が再開されました",
"resumeTooltip": "処理を再開",
"resume": "再会",
"status": "ステータス",
"pruneSucceeded": "キューから完了アイテム{{item_count}}件を削除しました",
"cancelTooltip": "現在のアイテムをキャンセル",
"in_progress": "進行中",
"notReady": "キューに追加できません",
"batchFailedToQueue": "バッチをキューに追加できませんでした",
"completed": "完了",
"batchValues": "バッチの値",
"cancelFailed": "アイテムのキャンセルに問題があります",
"batchQueued": "バッチをキューに追加しました",
"pauseFailed": "処理の一時停止に問題があります",
"clearFailed": "キューのクリアに問題があります",
"front": "先頭",
"clearSucceeded": "キューがクリアされました",
"pruneTooltip": "{{item_count}} の完了アイテムを削除",
"cancelSucceeded": "アイテムがキャンセルされました",
"batchQueuedDesc_other": "{{count}} セッションをキューの{{direction}}に追加しました",
"graphQueued": "グラフをキューに追加しました",
"batch": "バッチ",
"clearQueueAlertDialog": "キューをクリアすると、処理中のアイテムは直ちにキャンセルされ、キューは完全にクリアされます。",
"pending": "保留中",
"resumeFailed": "処理の再開に問題があります",
"clear": "クリア",
"total": "合計",
"canceled": "キャンセル",
"pruneFailed": "キューの削除に問題があります",
"cancelBatchSucceeded": "バッチがキャンセルされました",
"clearTooltip": "全てのアイテムをキャンセルしてクリア",
"current": "現在",
"failed": "失敗",
"cancelItem": "項目をキャンセル",
"next": "次",
"cancelBatch": "バッチをキャンセル",
"session": "セッション",
"enqueueing": "バッチをキューに追加",
"queueMaxExceeded": "{{max_queue_size}} の最大値を超えたため、{{skip}} をスキップします",
"cancelBatchFailed": "バッチのキャンセルに問題があります",
"clearQueueAlertDialog2": "キューをクリアしてもよろしいですか?",
"item": "アイテム",
"graphFailedToQueue": "グラフをキューに追加できませんでした"
},
"models": {
"noMatchingModels": "一致するモデルがありません",
"loading": "読み込み中",
"noMatchingLoRAs": "一致するLoRAがありません",
"noLoRAsAvailable": "使用可能なLoRAがありません",
"noModelsAvailable": "使用可能なモデルがありません",
"selectModel": "モデルを選択してください",
"selectLoRA": "LoRAを選択してください"
},
"nodes": {
"addNode": "ノードを追加",
"boardField": "ボード",
"boolean": "ブーリアン",
"boardFieldDescription": "ギャラリーボード",
"addNodeToolTip": "ノードを追加 (Shift+A, Space)",
"booleanPolymorphicDescription": "ブーリアンのコレクション。",
"inputField": "入力フィールド",
"latentsFieldDescription": "潜在空間はノード間で伝達できます。",
"floatCollectionDescription": "浮動小数点のコレクション。",
"missingTemplate": "テンプレートが見つかりません",
"ipAdapterPolymorphicDescription": "IP-Adaptersのコレクション。",
"latentsPolymorphicDescription": "潜在空間はノード間で伝達できます。",
"colorFieldDescription": "RGBAカラー。",
"ipAdapterCollection": "IP-Adapterコレクション",
"conditioningCollection": "条件付きコレクション",
"hideGraphNodes": "グラフオーバーレイを非表示",
"loadWorkflow": "ワークフローを読み込み",
"integerPolymorphicDescription": "整数のコレクション。",
"hideLegendNodes": "フィールドタイプの凡例を非表示",
"float": "浮動小数点",
"booleanCollectionDescription": "ブーリアンのコレクション。",
"integer": "整数",
"colorField": "カラー",
"nodeTemplate": "ノードテンプレート",
"integerDescription": "整数は小数点を持たない数値です。",
"imagePolymorphicDescription": "画像のコレクション。",
"doesNotExist": "存在しません",
"ipAdapterCollectionDescription": "IP-Adaptersのコレクション。",
"inputMayOnlyHaveOneConnection": "入力は1つの接続しか持つことができません",
"nodeOutputs": "ノード出力",
"currentImageDescription": "ノードエディタ内の現在の画像を表示",
"downloadWorkflow": "ワークフローのJSONをダウンロード",
"integerCollection": "整数コレクション",
"collectionItem": "コレクションアイテム",
"fieldTypesMustMatch": "フィールドタイプが一致している必要があります",
"edge": "輪郭",
"inputNode": "入力ノード",
"imageField": "画像",
"animatedEdgesHelp": "選択したエッジおよび選択したノードに接続されたエッジをアニメーション化します",
"cannotDuplicateConnection": "重複した接続は作れません",
"noWorkflow": "ワークフローがありません",
"integerCollectionDescription": "整数のコレクション。",
"colorPolymorphicDescription": "カラーのコレクション。",
"missingCanvaInitImage": "キャンバスの初期画像が見つかりません",
"clipFieldDescription": "トークナイザーとテキストエンコーダーサブモデル。",
"fullyContainNodesHelp": "ノードは選択ボックス内に完全に存在する必要があります",
"clipField": "クリップ",
"nodeType": "ノードタイプ",
"executionStateInProgress": "処理中",
"executionStateError": "エラー",
"ipAdapterModel": "IP-Adapterモデル",
"ipAdapterDescription": "イメージプロンプトアダプター(IP-Adapter)。",
"missingCanvaInitMaskImages": "キャンバスの初期画像およびマスクが見つかりません",
"hideMinimapnodes": "ミニマップを非表示",
"fitViewportNodes": "全体を表示",
"executionStateCompleted": "完了",
"node": "ノード",
"currentImage": "現在の画像",
"controlField": "コントロール",
"booleanDescription": "ブーリアンはtrueかfalseです。",
"collection": "コレクション",
"ipAdapterModelDescription": "IP-Adapterモデルフィールド",
"cannotConnectInputToInput": "入力から入力には接続できません",
"invalidOutputSchema": "無効な出力スキーマ",
"floatDescription": "浮動小数点は、小数点を持つ数値です。",
"floatPolymorphicDescription": "浮動小数点のコレクション。",
"floatCollection": "浮動小数点コレクション",
"latentsField": "潜在空間",
"cannotConnectOutputToOutput": "出力から出力には接続できません",
"booleanCollection": "ブーリアンコレクション",
"cannotConnectToSelf": "自身のノードには接続できません",
"inputFields": "入力フィールド(複数)",
"colorCodeEdges": "カラー-Code Edges",
"imageCollectionDescription": "画像のコレクション。",
"loadingNodes": "ノードを読み込み中...",
"imageCollection": "画像コレクション"
},
"boards": {
"autoAddBoard": "自動追加するボード",
"move": "移動",
"menuItemAutoAdd": "このボードに自動追加",
"myBoard": "マイボード",
"searchBoard": "ボードを検索...",
"noMatching": "一致するボードがありません",
"selectBoard": "ボードを選択",
"cancel": "キャンセル",
"addBoard": "ボードを追加",
"uncategorized": "未分類",
"downloadBoard": "ボードをダウンロード",
"changeBoard": "ボードを変更",
"loading": "ロード中...",
"topMessage": "このボードには、以下の機能で使用されている画像が含まれています:",
"bottomMessage": "このボードおよび画像を削除すると、現在これらを利用している機能はリセットされます。",
"clearSearch": "検索をクリア"
},
"embedding": {
"noMatchingEmbedding": "一致する埋め込みがありません",
"addEmbedding": "埋め込みを追加",
"incompatibleModel": "互換性のないベースモデル:"
},
"invocationCache": {
"invocationCache": "呼び出しキャッシュ",
"clearSucceeded": "呼び出しキャッシュをクリアしました",
"clearFailed": "呼び出しキャッシュのクリアに問題があります",
"enable": "有効",
"clear": "クリア",
"maxCacheSize": "最大キャッシュサイズ",
"cacheSize": "キャッシュサイズ"
"showOptionsPanel": "オプションパネルを表示"
}
}

View File

@@ -79,18 +79,7 @@
"modelManager": "Modelbeheer",
"darkMode": "Donkere modus",
"lightMode": "Lichte modus",
"communityLabel": "Gemeenschap",
"t2iAdapter": "T2I-adapter",
"on": "Aan",
"nodeEditor": "Knooppunteditor",
"ipAdapter": "IP-adapter",
"controlAdapter": "Control-adapter",
"auto": "Autom.",
"controlNet": "ControlNet",
"statusProcessing": "Bezig met verwerken",
"imageFailedToLoad": "Kan afbeelding niet laden",
"learnMore": "Meer informatie",
"advanced": "Uitgebreid"
"communityLabel": "Gemeenschap"
},
"gallery": {
"generations": "Gegenereerde afbeeldingen",
@@ -106,22 +95,12 @@
"allImagesLoaded": "Alle afbeeldingen geladen",
"loadMore": "Laad meer",
"noImagesInGallery": "Geen afbeeldingen om te tonen",
"deleteImage": "Verwijder afbeelding",
"deleteImageBin": "Verwijderde afbeeldingen worden naar de prullenbak van je besturingssysteem gestuurd.",
"deleteImagePermanent": "Verwijderde afbeeldingen kunnen niet worden hersteld.",
"deleteImage": "Wis afbeelding",
"deleteImageBin": "Gewiste afbeeldingen worden naar de prullenbak van je besturingssysteem gestuurd.",
"deleteImagePermanent": "Gewiste afbeeldingen kunnen niet worden hersteld.",
"assets": "Eigen onderdelen",
"images": "Afbeeldingen",
"autoAssignBoardOnClick": "Ken automatisch bord toe bij klikken",
"featuresWillReset": "Als je deze afbeelding verwijdert, dan worden deze functies onmiddellijk teruggezet.",
"loading": "Bezig met laden",
"unableToLoad": "Kan galerij niet laden",
"preparingDownload": "Bezig met voorbereiden van download",
"preparingDownloadFailed": "Fout bij voorbereiden van download",
"downloadSelection": "Download selectie",
"currentlyInUse": "Deze afbeelding is momenteel in gebruik door de volgende functies:",
"copy": "Kopieer",
"download": "Download",
"setCurrentImage": "Stel in als huidige afbeelding"
"autoAssignBoardOnClick": "Ken automatisch bord toe bij klikken"
},
"hotkeys": {
"keyboardShortcuts": "Sneltoetsen",
@@ -353,7 +332,7 @@
"config": "Configuratie",
"configValidationMsg": "Pad naar het configuratiebestand van je model.",
"modelLocation": "Locatie model",
"modelLocationValidationMsg": "Geef het pad naar een lokale map waar je Diffusers-model wordt bewaard",
"modelLocationValidationMsg": "Pad naar waar je model zich bevindt.",
"vaeLocation": "Locatie VAE",
"vaeLocationValidationMsg": "Pad naar waar je VAE zich bevindt.",
"width": "Breedte",
@@ -386,11 +365,11 @@
"deleteModel": "Verwijder model",
"deleteConfig": "Verwijder configuratie",
"deleteMsg1": "Weet je zeker dat je dit model wilt verwijderen uit InvokeAI?",
"deleteMsg2": "Hiermee ZAL het model van schijf worden verwijderd als het zich bevindt in de beginmap van InvokeAI. Als je het model vanaf een eigen locatie gebruikt, dan ZAL het model NIET van schijf worden verwijderd.",
"deleteMsg2": "Hiermee ZAL het model van schijf worden verwijderd als het zich bevindt in de InvokeAI-beginmap. Als je het model vanaf een eigen locatie gebruikt, dan ZAL het model NIET van schijf worden verwijderd.",
"formMessageDiffusersVAELocationDesc": "Indien niet opgegeven, dan zal InvokeAI kijken naar het VAE-bestand in de hierboven gegeven modellocatie.",
"repoIDValidationMsg": "Online repository van je model",
"formMessageDiffusersModelLocation": "Locatie Diffusers-model",
"convertToDiffusersHelpText3": "Je checkpoint-bestand op de schijf ZAL worden verwijderd als het zich in de beginmap van InvokeAI bevindt. Het ZAL NIET worden verwijderd als het zich in een andere locatie bevindt.",
"convertToDiffusersHelpText3": "Je checkpoint-bestand op schijf ZAL worden verwijderd als het zich in de InvokeAI root map bevindt. Het zal NIET worden verwijderd als het zich in een andere locatie bevindt.",
"convertToDiffusersHelpText6": "Wil je dit model omzetten?",
"allModels": "Alle modellen",
"checkpointModels": "Checkpoints",
@@ -458,24 +437,14 @@
"noCustomLocationProvided": "Geen Aangepaste Locatie Opgegeven",
"syncModels": "Synchroniseer Modellen",
"modelsSynced": "Modellen Gesynchroniseerd",
"modelSyncFailed": "Synchronisatie modellen mislukt",
"modelSyncFailed": "Synchronisatie Modellen Gefaald",
"modelDeleteFailed": "Model kon niet verwijderd worden",
"convertingModelBegin": "Model aan het converteren. Even geduld.",
"importModels": "Importeer Modellen",
"syncModelsDesc": "Als je modellen niet meer synchroon zijn met de backend, kan je ze met deze optie vernieuwen. Dit wordt meestal gebruikt in het geval je het bestand models.yaml met de hand bewerkt of als je modellen aan de beginmap van InvokeAI toevoegt nadat de applicatie gestart is.",
"syncModelsDesc": "Als je modellen niet meer synchroon zijn met de backend, kan je ze met deze optie verversen. Dit wordt typisch gebruikt in het geval je het models.yaml bestand met de hand bewerkt of als je modellen aan de InvokeAI root map toevoegt nadat de applicatie gestart werd.",
"loraModels": "LoRA's",
"onnxModels": "Onnx",
"oliveModels": "Olives",
"noModels": "Geen modellen gevonden",
"predictionType": "Soort voorspelling (voor Stable Diffusion 2.x-modellen en incidentele Stable Diffusion 1.x-modellen)",
"quickAdd": "Voeg snel toe",
"simpleModelDesc": "Geef een pad naar een lokaal Diffusers-model, lokale-checkpoint- / safetensors-model, een HuggingFace-repo-ID of een url naar een checkpoint- / Diffusers-model.",
"advanced": "Uitgebreid",
"useCustomConfig": "Gebruik eigen configuratie",
"closeAdvanced": "Sluit uitgebreid",
"modelType": "Soort model",
"customConfigFileLocation": "Locatie eigen configuratiebestand",
"vaePrecision": "Nauwkeurigheid VAE"
"oliveModels": "Olives"
},
"parameters": {
"images": "Afbeeldingen",
@@ -496,7 +465,7 @@
"type": "Soort",
"strength": "Sterkte",
"upscaling": "Opschalen",
"upscale": "Vergroot (Shift + U)",
"upscale": "Schaal op",
"upscaleImage": "Schaal afbeelding op",
"scale": "Schaal",
"otherOptions": "Andere opties",
@@ -527,7 +496,7 @@
"useInitImg": "Gebruik initiële afbeelding",
"info": "Info",
"initialImage": "Initiële afbeelding",
"showOptionsPanel": "Toon deelscherm Opties (O of T)",
"showOptionsPanel": "Toon deelscherm Opties",
"symmetry": "Symmetrie",
"hSymmetryStep": "Stap horiz. symmetrie",
"vSymmetryStep": "Stap vert. symmetrie",
@@ -535,8 +504,7 @@
"immediate": "Annuleer direct",
"isScheduled": "Annuleren",
"setType": "Stel annuleervorm in",
"schedule": "Annuleer na huidige iteratie",
"cancel": "Annuleer"
"schedule": "Annuleer na huidige iteratie"
},
"general": "Algemeen",
"copyImage": "Kopieer afbeelding",
@@ -552,7 +520,7 @@
"boundingBoxWidth": "Tekenvak breedte",
"boundingBoxHeight": "Tekenvak hoogte",
"clipSkip": "Overslaan CLIP",
"aspectRatio": "Beeldverhouding",
"aspectRatio": "Verhouding",
"negativePromptPlaceholder": "Negatieve prompt",
"controlNetControlMode": "Aansturingsmodus",
"positivePromptPlaceholder": "Positieve prompt",
@@ -564,46 +532,7 @@
"coherenceSteps": "Stappen",
"coherenceStrength": "Sterkte",
"seamHighThreshold": "Hoog",
"seamLowThreshold": "Laag",
"invoke": {
"noNodesInGraph": "Geen knooppunten in graaf",
"noModelSelected": "Geen model ingesteld",
"invoke": "Start",
"noPrompts": "Geen prompts gegenereerd",
"systemBusy": "Systeem is bezig",
"noInitialImageSelected": "Geen initiële afbeelding gekozen",
"missingInputForField": "{{nodeLabel}} -> {{fieldLabel}} invoer ontbreekt",
"noControlImageForControlAdapter": "Controle-adapter #{{number}} heeft geen controle-afbeelding",
"noModelForControlAdapter": "Control-adapter #{{number}} heeft geen model ingesteld staan.",
"unableToInvoke": "Kan niet starten",
"incompatibleBaseModelForControlAdapter": "Model van controle-adapter #{{number}} is ongeldig in combinatie met het hoofdmodel.",
"systemDisconnected": "Systeem is niet verbonden",
"missingNodeTemplate": "Knooppuntsjabloon ontbreekt",
"readyToInvoke": "Klaar om te starten",
"missingFieldTemplate": "Veldsjabloon ontbreekt",
"addingImagesTo": "Bezig met toevoegen van afbeeldingen aan"
},
"seamlessX&Y": "Naadloos X en Y",
"isAllowedToUpscale": {
"useX2Model": "Afbeelding is te groot om te vergroten met het x4-model. Gebruik hiervoor het x2-model",
"tooLarge": "Afbeelding is te groot om te vergoten. Kies een kleinere afbeelding"
},
"aspectRatioFree": "Vrij",
"cpuNoise": "CPU-ruis",
"patchmatchDownScaleSize": "Verklein",
"gpuNoise": "GPU-ruis",
"seamlessX": "Naadloos X",
"useCpuNoise": "Gebruik CPU-ruis",
"clipSkipWithLayerCount": "Overslaan CLIP {{layerCount}}",
"seamlessY": "Naadloos Y",
"manualSeed": "Handmatige seedwaarde",
"imageActions": "Afbeeldingshandeling",
"randomSeed": "Willekeurige seedwaarde",
"iterations": "Iteraties",
"iterationsWithCount_one": "{{count}} iteratie",
"iterationsWithCount_other": "{{count}} iteraties",
"enableNoiseSettings": "Schakel ruisinstellingen in",
"coherenceMode": "Modus"
"seamLowThreshold": "Laag"
},
"settings": {
"models": "Modellen",
@@ -615,14 +544,14 @@
"resetWebUI": "Herstel web-UI",
"resetWebUIDesc1": "Herstel web-UI herstelt alleen de lokale afbeeldingscache en de onthouden instellingen van je browser. Het verwijdert geen afbeeldingen van schijf.",
"resetWebUIDesc2": "Als afbeeldingen niet getoond worden in de galerij of iets anders werkt niet, probeer dan eerst deze herstelfunctie voordat je een fout aanmeldt op GitHub.",
"resetComplete": "Webinterface is hersteld.",
"resetComplete": "Webgebruikersinterface is hersteld.",
"useSlidersForAll": "Gebruik schuifbalken voor alle opties",
"consoleLogLevel": "Niveau logboek",
"consoleLogLevel": "Logboekniveau",
"shouldLogToConsole": "Schrijf logboek naar console",
"developer": "Ontwikkelaar",
"general": "Algemeen",
"showProgressInViewer": "Toon voortgangsafbeeldingen in viewer",
"generation": "Genereren",
"generation": "Generatie",
"ui": "Gebruikersinterface",
"antialiasProgressImages": "Voer anti-aliasing uit op voortgangsafbeeldingen",
"showAdvancedOptions": "Toon uitgebreide opties",
@@ -631,17 +560,8 @@
"beta": "Bèta",
"experimental": "Experimenteel",
"alternateCanvasLayout": "Omwisselen Canvas Layout",
"enableNodesEditor": "Schakel Knooppunteditor in",
"autoChangeDimensions": "Werk B/H bij naar modelstandaard bij wijziging",
"clearIntermediates": "Wis tussentijdse afbeeldingen",
"clearIntermediatesDesc3": "Je galerijafbeeldingen zullen niet worden verwijderd.",
"clearIntermediatesWithCount_one": "Wis {{count}} tussentijdse afbeelding",
"clearIntermediatesWithCount_other": "Wis {{count}} tussentijdse afbeeldingen",
"clearIntermediatesDesc2": "Tussentijdse afbeeldingen zijn nevenproducten bij het genereren. Deze wijken af van de uitvoerafbeeldingen in de galerij. Als je tussentijdse afbeeldingen wist, wordt schijfruimte vrijgemaakt.",
"intermediatesCleared_one": "{{count}} tussentijdse afbeelding gewist",
"intermediatesCleared_other": "{{count}} tussentijdse afbeeldingen gewist",
"clearIntermediatesDesc1": "Als je tussentijdse afbeeldingen wist, dan wordt de staat hersteld van je canvas en van ControlNet.",
"intermediatesClearedFailed": "Fout bij wissen van tussentijdse afbeeldingen"
"enableNodesEditor": "Knopen Editor Inschakelen",
"autoChangeDimensions": "Werk bij wijziging afmetingen bij naar modelstandaard"
},
"toast": {
"tempFoldersEmptied": "Tijdelijke map geleegd",
@@ -690,42 +610,7 @@
"nodesCorruptedGraph": "Kan niet laden. Graph lijkt corrupt.",
"nodesUnrecognizedTypes": "Laden mislukt. Graph heeft onherkenbare types",
"nodesBrokenConnections": "Laden mislukt. Sommige verbindingen zijn verbroken.",
"nodesNotValidGraph": "Geen geldige knooppunten graph",
"baseModelChangedCleared_one": "Basismodel is gewijzigd: {{count}} niet-compatibel submodel weggehaald of uitgeschakeld",
"baseModelChangedCleared_other": "Basismodel is gewijzigd: {{count}} niet-compatibele submodellen weggehaald of uitgeschakeld",
"imageSavingFailed": "Fout bij bewaren afbeelding",
"canvasSentControlnetAssets": "Canvas gestuurd naar ControlNet en Assets",
"problemCopyingCanvasDesc": "Kan basislaag niet exporteren",
"loadedWithWarnings": "Werkstroom geladen met waarschuwingen",
"setInitialImage": "Ingesteld als initiële afbeelding",
"canvasCopiedClipboard": "Canvas gekopieerd naar klembord",
"setControlImage": "Ingesteld als controle-afbeelding",
"setNodeField": "Ingesteld als knooppuntveld",
"problemSavingMask": "Fout bij bewaren masker",
"problemSavingCanvasDesc": "Kan basislaag niet exporteren",
"maskSavedAssets": "Masker bewaard in Assets",
"modelAddFailed": "Fout bij toevoegen model",
"problemDownloadingCanvas": "Fout bij downloaden van canvas",
"problemMergingCanvas": "Fout bij samenvoegen canvas",
"setCanvasInitialImage": "Ingesteld als initiële canvasafbeelding",
"imageUploaded": "Afbeelding geüpload",
"addedToBoard": "Toegevoegd aan bord",
"workflowLoaded": "Werkstroom geladen",
"modelAddedSimple": "Model toegevoegd",
"problemImportingMaskDesc": "Kan masker niet exporteren",
"problemCopyingCanvas": "Fout bij kopiëren canvas",
"problemSavingCanvas": "Fout bij bewaren canvas",
"canvasDownloaded": "Canvas gedownload",
"setIPAdapterImage": "Ingesteld als IP-adapterafbeelding",
"problemMergingCanvasDesc": "Kan basislaag niet exporteren",
"problemDownloadingCanvasDesc": "Kan basislaag niet exporteren",
"problemSavingMaskDesc": "Kan masker niet exporteren",
"imageSaved": "Afbeelding bewaard",
"maskSentControlnetAssets": "Masker gestuurd naar ControlNet en Assets",
"canvasSavedGallery": "Canvas bewaard in galerij",
"imageUploadFailed": "Fout bij uploaden afbeelding",
"modelAdded": "Model toegevoegd: {{modelName}}",
"problemImportingMask": "Fout bij importeren masker"
"nodesNotValidGraph": "Geen geldige knooppunten graph"
},
"tooltip": {
"feature": {
@@ -800,9 +685,7 @@
"betaDarkenOutside": "Verduister buiten tekenvak",
"betaLimitToBox": "Beperk tot tekenvak",
"betaPreserveMasked": "Behoud masker",
"antialiasing": "Anti-aliasing",
"showResultsOn": "Toon resultaten (aan)",
"showResultsOff": "Toon resultaten (uit)"
"antialiasing": "Anti-aliasing"
},
"accessibility": {
"exitViewer": "Stop viewer",
@@ -824,9 +707,7 @@
"toggleAutoscroll": "Autom. scrollen aan/uit",
"toggleLogViewer": "Logboekviewer aan/uit",
"showOptionsPanel": "Toon zijscherm",
"menu": "Menu",
"showGalleryPanel": "Toon deelscherm Galerij",
"loadMore": "Laad meer"
"menu": "Menu"
},
"ui": {
"showProgressImages": "Toon voortgangsafbeeldingen",
@@ -849,661 +730,6 @@
"resetWorkflow": "Herstel werkstroom",
"resetWorkflowDesc": "Weet je zeker dat je deze werkstroom wilt herstellen?",
"resetWorkflowDesc2": "Herstel van een werkstroom haalt alle knooppunten, randen en werkstroomdetails weg.",
"downloadWorkflow": "Download JSON van werkstroom",
"booleanPolymorphicDescription": "Een verzameling Booleanse waarden.",
"scheduler": "Planner",
"inputField": "Invoerveld",
"controlFieldDescription": "Controlegegevens doorgegeven tussen knooppunten.",
"skippingUnknownOutputType": "Overslaan van onbekend soort uitvoerveld",
"latentsFieldDescription": "Latents kunnen worden doorgegeven tussen knooppunten.",
"denoiseMaskFieldDescription": "Ontruisingsmasker kan worden doorgegeven tussen knooppunten",
"floatCollectionDescription": "Een verzameling zwevende-kommagetallen.",
"missingTemplate": "Ontbrekende sjabloon",
"outputSchemaNotFound": "Uitvoerschema niet gevonden",
"ipAdapterPolymorphicDescription": "Een verzameling IP-adapters.",
"workflowDescription": "Korte beschrijving",
"latentsPolymorphicDescription": "Latents kunnen worden doorgegeven tussen knooppunten.",
"colorFieldDescription": "Een RGBA-kleur.",
"mainModelField": "Model",
"unhandledInputProperty": "Onverwerkt invoerkenmerk",
"versionUnknown": " Versie onbekend",
"ipAdapterCollection": "Verzameling IP-adapters",
"conditioningCollection": "Verzameling conditionering",
"maybeIncompatible": "Is mogelijk niet compatibel met geïnstalleerde knooppunten",
"ipAdapterPolymorphic": "Polymorfisme IP-adapter",
"noNodeSelected": "Geen knooppunt gekozen",
"addNode": "Voeg knooppunt toe",
"unableToValidateWorkflow": "Kan werkstroom niet valideren",
"enum": "Enumeratie",
"integerPolymorphicDescription": "Een verzameling gehele getallen.",
"noOutputRecorded": "Geen uitvoer opgenomen",
"updateApp": "Werk app bij",
"conditioningCollectionDescription": "Conditionering kan worden doorgegeven tussen knooppunten.",
"colorPolymorphic": "Polymorfisme kleur",
"colorCodeEdgesHelp": "Kleurgecodeerde randen op basis van hun verbonden velden",
"collectionDescription": "TODO",
"float": "Zwevende-kommagetal",
"workflowContact": "Contactpersoon",
"skippingReservedFieldType": "Overslaan van gereserveerd veldsoort",
"animatedEdges": "Geanimeerde randen",
"booleanCollectionDescription": "Een verzameling van Booleanse waarden.",
"sDXLMainModelFieldDescription": "SDXL-modelveld.",
"conditioningPolymorphic": "Polymorfisme conditionering",
"integer": "Geheel getal",
"colorField": "Kleur",
"boardField": "Bord",
"nodeTemplate": "Sjabloon knooppunt",
"latentsCollection": "Verzameling latents",
"problemReadingWorkflow": "Fout bij lezen van werkstroom uit afbeelding",
"sourceNode": "Bronknooppunt",
"nodeOpacity": "Dekking knooppunt",
"pickOne": "Kies er een",
"collectionItemDescription": "TODO",
"integerDescription": "Gehele getallen zijn getallen zonder een decimaalteken.",
"outputField": "Uitvoerveld",
"unableToLoadWorkflow": "Kan werkstroom niet valideren",
"snapToGrid": "Lijn uit op raster",
"stringPolymorphic": "Polymorfisme tekenreeks",
"conditioningPolymorphicDescription": "Conditionering kan worden doorgegeven tussen knooppunten.",
"noFieldsLinearview": "Geen velden toegevoegd aan lineaire weergave",
"skipped": "Overgeslagen",
"imagePolymorphic": "Polymorfisme afbeelding",
"nodeSearch": "Zoek naar knooppunten",
"updateNode": "Werk knooppunt bij",
"sDXLRefinerModelFieldDescription": "Beschrijving",
"imagePolymorphicDescription": "Een verzameling afbeeldingen.",
"floatPolymorphic": "Polymorfisme zwevende-kommagetal",
"version": "Versie",
"doesNotExist": "bestaat niet",
"ipAdapterCollectionDescription": "Een verzameling van IP-adapters.",
"stringCollectionDescription": "Een verzameling tekenreeksen.",
"unableToParseNode": "Kan knooppunt niet inlezen",
"controlCollection": "Controle-verzameling",
"validateConnections": "Valideer verbindingen en graaf",
"stringCollection": "Verzameling tekenreeksen",
"inputMayOnlyHaveOneConnection": "Invoer mag slechts een enkele verbinding hebben",
"notes": "Opmerkingen",
"uNetField": "UNet",
"nodeOutputs": "Uitvoer knooppunt",
"currentImageDescription": "Toont de huidige afbeelding in de knooppunteditor",
"validateConnectionsHelp": "Voorkom dat er ongeldige verbindingen worden gelegd en dat er ongeldige grafen worden aangeroepen",
"problemSettingTitle": "Fout bij instellen titel",
"ipAdapter": "IP-adapter",
"integerCollection": "Verzameling gehele getallen",
"collectionItem": "Verzamelingsonderdeel",
"noConnectionInProgress": "Geen verbinding bezig te maken",
"vaeModelField": "VAE",
"controlCollectionDescription": "Controlegegevens doorgegeven tussen knooppunten.",
"skippedReservedInput": "Overgeslagen gereserveerd invoerveld",
"workflowVersion": "Versie",
"noConnectionData": "Geen verbindingsgegevens",
"outputFields": "Uitvoervelden",
"fieldTypesMustMatch": "Veldsoorten moeten overeenkomen",
"workflow": "Werkstroom",
"edge": "Rand",
"inputNode": "Invoerknooppunt",
"enumDescription": "Enumeraties zijn waarden die uit een aantal opties moeten worden gekozen.",
"unkownInvocation": "Onbekende aanroepsoort",
"loRAModelFieldDescription": "TODO",
"imageField": "Afbeelding",
"skippedReservedOutput": "Overgeslagen gereserveerd uitvoerveld",
"animatedEdgesHelp": "Animeer gekozen randen en randen verbonden met de gekozen knooppunten",
"cannotDuplicateConnection": "Kan geen dubbele verbindingen maken",
"booleanPolymorphic": "Polymorfisme Booleaanse waarden",
"unknownTemplate": "Onbekend sjabloon",
"noWorkflow": "Geen werkstroom",
"removeLinearView": "Verwijder uit lineaire weergave",
"colorCollectionDescription": "TODO",
"integerCollectionDescription": "Een verzameling gehele getallen.",
"colorPolymorphicDescription": "Een verzameling kleuren.",
"sDXLMainModelField": "SDXL-model",
"workflowTags": "Labels",
"denoiseMaskField": "Ontruisingsmasker",
"schedulerDescription": "Beschrijving",
"missingCanvaInitImage": "Ontbrekende initialisatie-afbeelding voor canvas",
"conditioningFieldDescription": "Conditionering kan worden doorgegeven tussen knooppunten.",
"clipFieldDescription": "Submodellen voor tokenizer en text_encoder.",
"fullyContainNodesHelp": "Knooppunten moeten zich volledig binnen het keuzevak bevinden om te worden gekozen",
"noImageFoundState": "Geen initiële afbeelding gevonden in de staat",
"workflowValidation": "Validatiefout werkstroom",
"clipField": "Clip",
"stringDescription": "Tekenreeksen zijn tekst.",
"nodeType": "Soort knooppunt",
"noMatchingNodes": "Geen overeenkomende knooppunten",
"fullyContainNodes": "Omvat knooppunten volledig om ze te kiezen",
"integerPolymorphic": "Polymorfisme geheel getal",
"executionStateInProgress": "Bezig",
"noFieldType": "Geen soort veld",
"colorCollection": "Een verzameling kleuren.",
"executionStateError": "Fout",
"noOutputSchemaName": "Geen naam voor uitvoerschema gevonden in referentieobject",
"ipAdapterModel": "Model IP-adapter",
"latentsPolymorphic": "Polymorfisme latents",
"vaeModelFieldDescription": "Beschrijving",
"skippingInputNoTemplate": "Overslaan van invoerveld zonder sjabloon",
"ipAdapterDescription": "Een Afbeeldingsprompt-adapter (IP-adapter).",
"boolean": "Booleaanse waarden",
"missingCanvaInitMaskImages": "Ontbrekende initialisatie- en maskerafbeeldingen voor canvas",
"problemReadingMetadata": "Fout bij lezen van metagegevens uit afbeelding",
"stringPolymorphicDescription": "Een verzameling tekenreeksen.",
"oNNXModelField": "ONNX-model",
"executionStateCompleted": "Voltooid",
"node": "Knooppunt",
"skippingUnknownInputType": "Overslaan van onbekend soort invoerveld",
"workflowAuthor": "Auteur",
"currentImage": "Huidige afbeelding",
"controlField": "Controle",
"workflowName": "Naam",
"booleanDescription": "Booleanse waarden zijn waar en onwaar.",
"collection": "Verzameling",
"ipAdapterModelDescription": "Modelveld IP-adapter",
"cannotConnectInputToInput": "Kan invoer niet aan invoer verbinden",
"invalidOutputSchema": "Ongeldig uitvoerschema",
"boardFieldDescription": "Een galerijbord",
"floatDescription": "Zwevende-kommagetallen zijn getallen met een decimaalteken.",
"floatPolymorphicDescription": "Een verzameling zwevende-kommagetallen.",
"vaeField": "Vae",
"conditioningField": "Conditionering",
"unhandledOutputProperty": "Onverwerkt uitvoerkenmerk",
"workflowNotes": "Opmerkingen",
"string": "Tekenreeks",
"floatCollection": "Verzameling zwevende-kommagetallen",
"latentsField": "Latents",
"cannotConnectOutputToOutput": "Kan uitvoer niet aan uitvoer verbinden",
"booleanCollection": "Verzameling Booleaanse waarden",
"connectionWouldCreateCycle": "Verbinding zou cyclisch worden",
"cannotConnectToSelf": "Kan niet aan zichzelf verbinden",
"notesDescription": "Voeg opmerkingen toe aan je werkstroom",
"unknownField": "Onbekend veld",
"inputFields": "Invoervelden",
"colorCodeEdges": "Kleurgecodeerde randen",
"uNetFieldDescription": "UNet-submodel.",
"unknownNode": "Onbekend knooppunt",
"imageCollectionDescription": "Een verzameling afbeeldingen.",
"mismatchedVersion": "Heeft niet-overeenkomende versie",
"vaeFieldDescription": "Vae-submodel.",
"imageFieldDescription": "Afbeeldingen kunnen worden doorgegeven tussen knooppunten.",
"outputNode": "Uitvoerknooppunt",
"addNodeToolTip": "Voeg knooppunt toe (Shift+A, spatie)",
"loadingNodes": "Bezig met laden van knooppunten...",
"snapToGridHelp": "Lijn knooppunten uit op raster bij verplaatsing",
"workflowSettings": "Instellingen werkstroomeditor",
"mainModelFieldDescription": "TODO",
"sDXLRefinerModelField": "Verfijningsmodel",
"loRAModelField": "LoRA",
"unableToParseEdge": "Kan rand niet inlezen",
"latentsCollectionDescription": "Latents kunnen worden doorgegeven tussen knooppunten.",
"oNNXModelFieldDescription": "ONNX-modelveld.",
"imageCollection": "Afbeeldingsverzameling"
},
"controlnet": {
"amult": "a_mult",
"resize": "Schaal",
"showAdvanced": "Toon uitgebreide opties",
"contentShuffleDescription": "Verschuift het materiaal in de afbeelding",
"bgth": "bg_th",
"addT2IAdapter": "Voeg $t(common.t2iAdapter) toe",
"pidi": "PIDI",
"importImageFromCanvas": "Importeer afbeelding uit canvas",
"lineartDescription": "Zet afbeelding om naar line-art",
"normalBae": "Normale BAE",
"importMaskFromCanvas": "Importeer masker uit canvas",
"hed": "HED",
"hideAdvanced": "Verberg uitgebreid",
"contentShuffle": "Verschuif materiaal",
"controlNetEnabledT2IDisabled": "$t(common.controlNet) ingeschakeld, $t(common.t2iAdapter)s uitgeschakeld",
"ipAdapterModel": "Adaptermodel",
"resetControlImage": "Herstel controle-afbeelding",
"beginEndStepPercent": "Percentage begin-/eindstap",
"mlsdDescription": "Minimalistische herkenning lijnsegmenten",
"duplicate": "Maak kopie",
"balanced": "Gebalanceerd",
"f": "F",
"h": "H",
"prompt": "Prompt",
"depthMidasDescription": "Genereer diepteblad via Midas",
"controlnet": "$t(controlnet.controlAdapter_one) #{{number}} ($t(common.controlNet))",
"openPoseDescription": "Menselijke pose-benadering via Openpose",
"control": "Controle",
"resizeMode": "Modus schaling",
"t2iEnabledControlNetDisabled": "$t(common.t2iAdapter) ingeschakeld, $t(common.controlNet)s uitgeschakeld",
"coarse": "Grof",
"weight": "Gewicht",
"selectModel": "Kies een model",
"crop": "Snij bij",
"depthMidas": "Diepte (Midas)",
"w": "B",
"processor": "Verwerker",
"addControlNet": "Voeg $t(common.controlNet) toe",
"none": "Geen",
"incompatibleBaseModel": "Niet-compatibel basismodel:",
"enableControlnet": "Schakel ControlNet in",
"detectResolution": "Herken resolutie",
"controlNetT2IMutexDesc": "Gelijktijdig gebruik van $t(common.controlNet) en $t(common.t2iAdapter) wordt op dit moment niet ondersteund.",
"ip_adapter": "$t(controlnet.controlAdapter_one) #{{number}} ($t(common.ipAdapter))",
"pidiDescription": "PIDI-afbeeldingsverwerking",
"mediapipeFace": "Mediapipe - Gezicht",
"mlsd": "M-LSD",
"controlMode": "Controlemodus",
"fill": "Vul",
"cannyDescription": "Herkenning Canny-rand",
"addIPAdapter": "Voeg $t(common.ipAdapter) toe",
"lineart": "Line-art",
"colorMapDescription": "Genereert een kleurenblad van de afbeelding",
"lineartAnimeDescription": "Lineartverwerking in anime-stijl",
"t2i_adapter": "$t(controlnet.controlAdapter_one) #{{number}} ($t(common.t2iAdapter))",
"minConfidence": "Min. vertrouwensniveau",
"imageResolution": "Resolutie afbeelding",
"megaControl": "Zeer veel controle",
"depthZoe": "Diepte (Zoe)",
"colorMap": "Kleur",
"lowThreshold": "Lage drempelwaarde",
"autoConfigure": "Configureer verwerker automatisch",
"highThreshold": "Hoge drempelwaarde",
"normalBaeDescription": "Normale BAE-verwerking",
"noneDescription": "Geen verwerking toegepast",
"saveControlImage": "Bewaar controle-afbeelding",
"openPose": "Openpose",
"toggleControlNet": "Zet deze ControlNet aan/uit",
"delete": "Verwijder",
"controlAdapter_one": "Control-adapter",
"controlAdapter_other": "Control-adapters",
"safe": "Veilig",
"colorMapTileSize": "Grootte tegel",
"lineartAnime": "Line-art voor anime",
"ipAdapterImageFallback": "Geen IP-adapterafbeelding gekozen",
"mediapipeFaceDescription": "Gezichtsherkenning met Mediapipe",
"canny": "Canny",
"depthZoeDescription": "Genereer diepteblad via Zoe",
"hedDescription": "Herkenning van holistisch-geneste randen",
"setControlImageDimensions": "Stel afmetingen controle-afbeelding in op B/H",
"scribble": "Krabbel",
"resetIPAdapterImage": "Herstel IP-adapterafbeelding",
"handAndFace": "Hand en gezicht",
"enableIPAdapter": "Schakel IP-adapter in",
"maxFaces": "Max. gezichten"
},
"dynamicPrompts": {
"seedBehaviour": {
"perPromptDesc": "Gebruik een verschillende seedwaarde per afbeelding",
"perIterationLabel": "Seedwaarde per iteratie",
"perIterationDesc": "Gebruik een verschillende seedwaarde per iteratie",
"perPromptLabel": "Seedwaarde per afbeelding",
"label": "Gedrag seedwaarde"
},
"enableDynamicPrompts": "Schakel dynamische prompts in",
"combinatorial": "Combinatorisch genereren",
"maxPrompts": "Max. prompts",
"promptsWithCount_one": "{{count}} prompt",
"promptsWithCount_other": "{{count}} prompts",
"dynamicPrompts": "Dynamische prompts"
},
"popovers": {
"noiseUseCPU": {
"paragraphs": [
"Bepaalt of ruis wordt gegenereerd op de CPU of de GPU.",
"Met CPU-ruis ingeschakeld zal een bepaalde seedwaarde dezelfde afbeelding opleveren op welke machine dan ook.",
"Er is geen prestatieverschil bij het inschakelen van CPU-ruis."
],
"heading": "Gebruik CPU-ruis"
},
"paramScheduler": {
"paragraphs": [
"De planner bepaalt hoe ruis per iteratie wordt toegevoegd aan een afbeelding of hoe een monster wordt bijgewerkt op basis van de uitvoer van een model."
],
"heading": "Planner"
},
"scaleBeforeProcessing": {
"paragraphs": [
"Schaalt het gekozen gebied naar de grootte die het meest geschikt is voor het model, vooraf aan het proces van het afbeeldingen genereren."
],
"heading": "Schaal vooraf aan verwerking"
},
"compositingMaskAdjustments": {
"heading": "Aanpassingen masker",
"paragraphs": [
"Pas het masker aan."
]
},
"paramRatio": {
"heading": "Beeldverhouding",
"paragraphs": [
"De beeldverhouding van de afmetingen van de afbeelding die wordt gegenereerd.",
"Een afbeeldingsgrootte (in aantal pixels) equivalent aan 512x512 wordt aanbevolen voor SD1.5-modellen. Een grootte-equivalent van 1024x1024 wordt aanbevolen voor SDXL-modellen."
]
},
"compositingCoherenceSteps": {
"heading": "Stappen",
"paragraphs": [
"Het aantal te gebruiken ontruisingsstappen in de coherentiefase.",
"Gelijk aan de hoofdparameter Stappen."
]
},
"dynamicPrompts": {
"paragraphs": [
"Dynamische prompts vormt een enkele prompt om in vele.",
"De basissyntax is \"a {red|green|blue} ball\". Dit zal de volgende drie prompts geven: \"a red ball\", \"a green ball\" en \"a blue ball\".",
"Gebruik de syntax zo vaak als je wilt in een enkele prompt, maar zorg ervoor dat het aantal gegenereerde prompts in lijn ligt met de instelling Max. prompts."
],
"heading": "Dynamische prompts"
},
"paramVAE": {
"paragraphs": [
"Het model gebruikt voor het vertalen van AI-uitvoer naar de uiteindelijke afbeelding."
],
"heading": "VAE"
},
"compositingBlur": {
"heading": "Vervaging",
"paragraphs": [
"De vervagingsstraal van het masker."
]
},
"paramIterations": {
"paragraphs": [
"Het aantal te genereren afbeeldingen.",
"Als dynamische prompts is ingeschakeld, dan zal elke prompt dit aantal keer gegenereerd worden."
],
"heading": "Iteraties"
},
"paramVAEPrecision": {
"heading": "Nauwkeurigheid VAE",
"paragraphs": [
"De nauwkeurigheid gebruikt tijdens de VAE-codering en -decodering. FP16/halve nauwkeurig is efficiënter, ten koste van kleine afbeeldingsvariaties."
]
},
"compositingCoherenceMode": {
"heading": "Modus",
"paragraphs": [
"De modus van de coherentiefase."
]
},
"paramSeed": {
"paragraphs": [
"Bepaalt de startruis die gebruikt wordt bij het genereren.",
"Schakel \"Willekeurige seedwaarde\" uit om identieke resultaten te krijgen met dezelfde genereer-instellingen."
],
"heading": "Seedwaarde"
},
"controlNetResizeMode": {
"heading": "Schaalmodus",
"paragraphs": [
"Hoe de ControlNet-afbeelding zal worden geschaald aan de uitvoergrootte van de afbeelding."
]
},
"controlNetBeginEnd": {
"paragraphs": [
"Op welke stappen van het ontruisingsproces ControlNet worden toegepast.",
"ControlNets die worden toegepast aan het begin begeleiden het compositieproces. ControlNets die worden toegepast aan het eind zorgen voor details."
],
"heading": "Percentage begin- / eindstap"
},
"dynamicPromptsSeedBehaviour": {
"paragraphs": [
"Bepaalt hoe de seedwaarde wordt gebruikt bij het genereren van prompts.",
"Per iteratie zal een unieke seedwaarde worden gebruikt voor elke iteratie. Gebruik dit om de promptvariaties binnen een enkele seedwaarde te verkennen.",
"Bijvoorbeeld: als je vijf prompts heb, dan zal voor elke afbeelding dezelfde seedwaarde gebruikt worden.",
"De optie Per afbeelding zal een unieke seedwaarde voor elke afbeelding gebruiken. Dit biedt meer variatie."
],
"heading": "Gedrag seedwaarde"
},
"clipSkip": {
"paragraphs": [
"Kies hoeveel CLIP-modellagen je wilt overslaan.",
"Bepaalde modellen werken beter met bepaalde Overslaan CLIP-instellingen.",
"Een hogere waarde geeft meestal een minder gedetailleerde afbeelding."
],
"heading": "Overslaan CLIP"
},
"paramModel": {
"heading": "Model",
"paragraphs": [
"Model gebruikt voor de ontruisingsstappen.",
"Verschillende modellen zijn meestal getraind om zich te specialiseren in het maken van bepaalde esthetische resultaten en materiaal."
]
},
"compositingCoherencePass": {
"heading": "Coherentiefase",
"paragraphs": [
"Een tweede ronde ontruising helpt bij het samenstellen van de erin- of eruitgetekende afbeelding."
]
},
"paramDenoisingStrength": {
"paragraphs": [
"Hoeveel ruis wordt toegevoegd aan de invoerafbeelding.",
"0 levert een identieke afbeelding op, waarbij 1 een volledig nieuwe afbeelding oplevert."
],
"heading": "Ontruisingssterkte"
},
"compositingStrength": {
"heading": "Sterkte",
"paragraphs": [
"Ontruisingssterkte voor de coherentiefase.",
"Gelijk aan de parameter Ontruisingssterkte Afbeelding naar afbeelding."
]
},
"paramNegativeConditioning": {
"paragraphs": [
"Het genereerproces voorkomt de gegeven begrippen in de negatieve prompt. Gebruik dit om bepaalde zaken of voorwerpen uit te sluiten van de uitvoerafbeelding.",
"Ondersteunt Compel-syntax en -embeddingen."
],
"heading": "Negatieve prompt"
},
"compositingBlurMethod": {
"heading": "Vervagingsmethode",
"paragraphs": [
"De methode van de vervaging die wordt toegepast op het gemaskeerd gebied."
]
},
"dynamicPromptsMaxPrompts": {
"heading": "Max. prompts",
"paragraphs": [
"Beperkt het aantal prompts die kunnen worden gegenereerd door dynamische prompts."
]
},
"infillMethod": {
"paragraphs": [
"Methode om een gekozen gebied in te vullen."
],
"heading": "Invulmethode"
},
"controlNetWeight": {
"heading": "Gewicht",
"paragraphs": [
"Hoe sterk ControlNet effect heeft op de gegeneerde afbeelding."
]
},
"controlNet": {
"heading": "ControlNet",
"paragraphs": [
"ControlNets begeleidt het genereerproces, waarbij geholpen wordt bij het maken van afbeeldingen met aangestuurde compositie, structuur of stijl, afhankelijk van het gekozen model."
]
},
"paramCFGScale": {
"heading": "CFG-schaal",
"paragraphs": [
"Bepaalt hoeveel je prompt invloed heeft op het genereerproces."
]
},
"controlNetControlMode": {
"paragraphs": [
"Geeft meer gewicht aan ofwel de prompt danwel ControlNet."
],
"heading": "Controlemodus"
},
"paramSteps": {
"heading": "Stappen",
"paragraphs": [
"Het aantal uit te voeren stappen tijdens elke generatie.",
"Een hoger aantal stappen geven meestal betere afbeeldingen, ten koste van een hogere benodigde tijd om te genereren."
]
},
"paramPositiveConditioning": {
"heading": "Positieve prompt",
"paragraphs": [
"Begeleidt het generartieproces. Gebruik een woord of frase naar keuze.",
"Syntaxes en embeddings voor Compel en dynamische prompts."
]
},
"lora": {
"heading": "Gewicht LoRA",
"paragraphs": [
"Een hogere LoRA-gewicht zal leiden tot een groter effect op de uiteindelijke afbeelding."
]
}
},
"metadata": {
"seamless": "Naadloos",
"positivePrompt": "Positieve prompt",
"negativePrompt": "Negatieve prompt",
"generationMode": "Genereermodus",
"Threshold": "Drempelwaarde ruis",
"metadata": "Metagegevens",
"strength": "Sterkte Afbeelding naar afbeelding",
"seed": "Seedwaarde",
"imageDetails": "Afbeeldingsdetails",
"perlin": "Perlin-ruis",
"model": "Model",
"noImageDetails": "Geen afbeeldingsdetails gevonden",
"hiresFix": "Optimalisatie voor hoge resolutie",
"cfgScale": "CFG-schaal",
"fit": "Schaal aanpassen in Afbeelding naar afbeelding",
"initImage": "Initiële afbeelding",
"recallParameters": "Opnieuw aan te roepen parameters",
"height": "Hoogte",
"variations": "Paren seedwaarde-gewicht",
"noMetaData": "Geen metagegevens gevonden",
"width": "Breedte",
"createdBy": "Gemaakt door",
"workflow": "Werkstroom",
"steps": "Stappen",
"scheduler": "Planner",
"noRecallParameters": "Geen opnieuw uit te voeren parameters gevonden"
},
"queue": {
"status": "Status",
"pruneSucceeded": "{{item_count}} voltooide onderdelen uit wachtrij opgeruimd",
"cancelTooltip": "Annuleer huidig onderdeel",
"queueEmpty": "Wachtrij leeg",
"pauseSucceeded": "Verwerker onderbroken",
"in_progress": "Bezig",
"queueFront": "Voeg vooraan toe in wachtrij",
"notReady": "Fout bij plaatsen in wachtrij",
"batchFailedToQueue": "Fout bij reeks in wachtrij plaatsen",
"completed": "Voltooid",
"queueBack": "Voeg toe aan wachtrij",
"batchValues": "Reekswaarden",
"cancelFailed": "Fout bij annuleren onderdeel",
"queueCountPrediction": "Voeg {{predicted}} toe aan wachtrij",
"batchQueued": "Reeks in wachtrij geplaatst",
"pauseFailed": "Fout bij onderbreken verwerker",
"clearFailed": "Fout bij wissen van wachtrij",
"queuedCount": "{{pending}} wachtend",
"front": "begin",
"clearSucceeded": "Wachtrij gewist",
"pause": "Onderbreek",
"pruneTooltip": "Ruim {{item_count}} voltooide onderdelen op",
"cancelSucceeded": "Onderdeel geannuleerd",
"batchQueuedDesc_one": "Voeg {{count}} sessie toe aan het {{direction}} van de wachtrij",
"batchQueuedDesc_other": "Voeg {{count}} sessies toe aan het {{direction}} van de wachtrij",
"graphQueued": "Graaf in wachtrij geplaatst",
"queue": "Wachtrij",
"batch": "Reeks",
"clearQueueAlertDialog": "Als je de wachtrij onmiddellijk wist, dan worden alle onderdelen die bezig zijn geannuleerd en wordt de wachtrij volledig gewist.",
"pending": "Wachtend",
"completedIn": "Voltooid na",
"resumeFailed": "Fout bij hervatten verwerker",
"clear": "Wis",
"prune": "Ruim op",
"total": "Totaal",
"canceled": "Geannuleerd",
"pruneFailed": "Fout bij opruimen van wachtrij",
"cancelBatchSucceeded": "Reeks geannuleerd",
"clearTooltip": "Annuleer en wis alle onderdelen",
"current": "Huidig",
"pauseTooltip": "Onderbreek verwerker",
"failed": "Mislukt",
"cancelItem": "Annuleer onderdeel",
"next": "Volgende",
"cancelBatch": "Annuleer reeks",
"back": "eind",
"cancel": "Annuleer",
"session": "Sessie",
"queueTotal": "Totaal {{total}}",
"resumeSucceeded": "Verwerker hervat",
"enqueueing": "Bezig met toevoegen van reeks aan wachtrij",
"resumeTooltip": "Hervat verwerker",
"queueMaxExceeded": "Max. aantal van {{max_queue_size}} overschreden, {{skip}} worden overgeslagen",
"resume": "Hervat",
"cancelBatchFailed": "Fout bij annuleren van reeks",
"clearQueueAlertDialog2": "Weet je zeker dat je de wachtrij wilt wissen?",
"item": "Onderdeel",
"graphFailedToQueue": "Fout bij toevoegen graaf aan wachtrij"
},
"sdxl": {
"refinerStart": "Startwaarde verfijning",
"selectAModel": "Kies een model",
"scheduler": "Planner",
"cfgScale": "CFG-schaal",
"negStylePrompt": "Negatieve-stijlprompt",
"noModelsAvailable": "Geen modellen beschikbaar",
"refiner": "Verfijning",
"negAestheticScore": "Negatieve esthetische score",
"useRefiner": "Gebruik verfijning",
"denoisingStrength": "Sterkte ontruising",
"refinermodel": "Verfijningsmodel",
"posAestheticScore": "Positieve esthetische score",
"concatPromptStyle": "Plak prompt- en stijltekst aan elkaar",
"loading": "Bezig met laden...",
"steps": "Stappen",
"posStylePrompt": "Positieve-stijlprompt"
},
"models": {
"noMatchingModels": "Geen overeenkomend modellen",
"loading": "bezig met laden",
"noMatchingLoRAs": "Geen overeenkomende LoRA's",
"noLoRAsAvailable": "Geen LoRA's beschikbaar",
"noModelsAvailable": "Geen modellen beschikbaar",
"selectModel": "Kies een model",
"selectLoRA": "Kies een LoRA"
},
"boards": {
"autoAddBoard": "Voeg automatisch bord toe",
"topMessage": "Dit bord bevat afbeeldingen die in gebruik zijn door de volgende functies:",
"move": "Verplaats",
"menuItemAutoAdd": "Voeg dit automatisch toe aan bord",
"myBoard": "Mijn bord",
"searchBoard": "Zoek borden...",
"noMatching": "Geen overeenkomende borden",
"selectBoard": "Kies een bord",
"cancel": "Annuleer",
"addBoard": "Voeg bord toe",
"bottomMessage": "Als je dit bord en alle afbeeldingen erop verwijdert, dan worden alle functies teruggezet die ervan gebruik maken.",
"uncategorized": "Zonder categorie",
"downloadBoard": "Download bord",
"changeBoard": "Wijzig bord",
"loading": "Bezig met laden...",
"clearSearch": "Maak zoekopdracht leeg"
},
"invocationCache": {
"disable": "Schakel uit",
"misses": "Mislukt cacheverzoek",
"enableFailed": "Fout bij inschakelen aanroepcache",
"invocationCache": "Aanroepcache",
"clearSucceeded": "Aanroepcache gewist",
"enableSucceeded": "Aanroepcache ingeschakeld",
"clearFailed": "Fout bij wissen aanroepcache",
"hits": "Gelukt cacheverzoek",
"disableSucceeded": "Aanroepcache uitgeschakeld",
"disableFailed": "Fout bij uitschakelen aanroepcache",
"enable": "Schakel in",
"clear": "Wis",
"maxCacheSize": "Max. grootte cache",
"cacheSize": "Grootte cache"
},
"embedding": {
"noMatchingEmbedding": "Geen overeenkomende embeddings",
"addEmbedding": "Voeg embedding toe",
"incompatibleModel": "Niet-compatibel basismodel:"
"downloadWorkflow": "Download JSON van werkstroom"
}
}

View File

@@ -88,9 +88,7 @@
"t2iAdapter": "T2I Adapter",
"ipAdapter": "IP Adapter",
"controlAdapter": "Control Adapter",
"controlNet": "ControlNet",
"on": "开",
"auto": "自动"
"controlNet": "ControlNet"
},
"gallery": {
"generations": "生成的图像",
@@ -474,8 +472,7 @@
"vae": "VAE",
"oliveModels": "Olive",
"loraModels": "LoRA",
"alpha": "Alpha",
"vaePrecision": "VAE 精度"
"alpha": "Alpha"
},
"parameters": {
"images": "图像",
@@ -598,11 +595,7 @@
"useX2Model": "图像太大,无法使用 x4 模型,使用 x2 模型作为替代",
"tooLarge": "图像太大无法进行放大,请选择更小的图像"
},
"iterationsWithCount_other": "{{count}} 次迭代生成",
"seamlessX&Y": "无缝 X & Y",
"aspectRatioFree": "自由",
"seamlessX": "无缝 X",
"seamlessY": "无缝 Y"
"iterationsWithCount_other": "{{count}} 次迭代生成"
},
"settings": {
"models": "模型",
@@ -635,11 +628,10 @@
"clearIntermediates": "清除中间产物",
"clearIntermediatesDesc3": "您图库中的图像不会被删除。",
"clearIntermediatesDesc2": "中间产物图像是生成过程中产生的副产品,与图库中的结果图像不同。清除中间产物可释放磁盘空间。",
"intermediatesCleared_other": "已清除 {{count}} 个中间产物",
"intermediatesCleared_other": "已清除 {{number}} 个中间产物",
"clearIntermediatesDesc1": "清除中间产物会重置您的画布和 ControlNet 状态。",
"intermediatesClearedFailed": "清除中间产物时出现问题",
"clearIntermediatesWithCount_other": "清除 {{count}} 个中间产物",
"clearIntermediatesDisabled": "队列为空才能清理中间产物"
"noIntermediates": "没有可清除的中间产物"
},
"toast": {
"tempFoldersEmptied": "临时文件夹已清空",
@@ -722,7 +714,7 @@
"canvasSavedGallery": "画布已保存到图库",
"imageUploadFailed": "图像上传失败",
"problemImportingMask": "导入遮罩时出现问题",
"baseModelChangedCleared_other": "基础模型已更改, 已清除或禁用 {{count}} 个不兼容的子模型"
"baseModelChangedCleared_other": "基础模型已更改, 已清除或禁用 {{number}} 个不兼容的子模型"
},
"unifiedCanvas": {
"layer": "图层",
@@ -866,7 +858,7 @@
"version": "版本",
"validateConnections": "验证连接和节点图",
"inputMayOnlyHaveOneConnection": "输入仅能有一个连接",
"notes": "注释",
"notes": "节点",
"nodeOutputs": "节点输出",
"currentImageDescription": "在节点编辑器中显示当前图像",
"validateConnectionsHelp": "防止建立无效连接和调用无效节点图",
@@ -892,11 +884,11 @@
"currentImage": "当前图像",
"workflowName": "名称",
"cannotConnectInputToInput": "无法将输入连接到输入",
"workflowNotes": "注释",
"workflowNotes": "节点",
"cannotConnectOutputToOutput": "无法将输出连接到输出",
"connectionWouldCreateCycle": "连接将创建一个循环",
"cannotConnectToSelf": "无法连接自己",
"notesDescription": "添加有关您的工作流的注释",
"notesDescription": "添加有关您的工作流的节点",
"unknownField": "未知",
"colorCodeEdges": "边缘颜色编码",
"unknownNode": "未知节点",
@@ -1011,27 +1003,7 @@
"booleanCollection": "布尔值合集",
"imageCollectionDescription": "一个图像合集。",
"loRAModelField": "LoRA",
"imageCollection": "图像合集",
"ipAdapterPolymorphicDescription": "一个 IP-Adapters Collection 合集。",
"ipAdapterCollection": "IP-Adapters 合集",
"conditioningCollection": "条件合集",
"ipAdapterPolymorphic": "IP-Adapters 多态",
"conditioningCollectionDescription": "条件可以在节点间传递。",
"colorPolymorphic": "颜色多态",
"conditioningPolymorphic": "条件多态",
"latentsCollection": "Latents 合集",
"stringPolymorphic": "字符多态",
"conditioningPolymorphicDescription": "条件可以在节点间传递。",
"imagePolymorphic": "图像多态",
"floatPolymorphic": "浮点多态",
"ipAdapterCollectionDescription": "一个 IP-Adapters Collection 合集。",
"ipAdapter": "IP-Adapter",
"booleanPolymorphic": "布尔多态",
"conditioningFieldDescription": "条件可以在节点间传递。",
"integerPolymorphic": "整数多态",
"latentsPolymorphic": "Latents 多态",
"conditioningField": "条件",
"latentsField": "Latents"
"imageCollection": "图像合集"
},
"controlnet": {
"resize": "直接缩放",
@@ -1101,21 +1073,21 @@
"contentShuffle": "Content Shuffle",
"f": "F",
"h": "H",
"controlnet": "$t(controlnet.controlAdapter_one) #{{number}} ($t(common.controlNet))",
"controlnet": "$t(controlnet.controlAdapter) #{{number}} ($t(common.controlNet))",
"control": "Control (普通控制)",
"coarse": "Coarse",
"depthMidas": "Depth (Midas)",
"w": "W",
"ip_adapter": "$t(controlnet.controlAdapter_one) #{{number}} ($t(common.ipAdapter))",
"ip_adapter": "$t(controlnet.controlAdapter) #{{number}} ($t(common.ipAdapter))",
"mediapipeFace": "Mediapipe Face",
"mlsd": "M-LSD",
"lineart": "Lineart",
"t2i_adapter": "$t(controlnet.controlAdapter_one) #{{number}} ($t(common.t2iAdapter))",
"t2i_adapter": "$t(controlnet.controlAdapter) #{{number}} ($t(common.t2iAdapter))",
"megaControl": "Mega Control (超级控制)",
"depthZoe": "Depth (Zoe)",
"colorMap": "Color",
"openPose": "Openpose",
"controlAdapter_other": "Control Adapters",
"controlAdapter": "Control Adapter",
"lineartAnime": "Lineart Anime",
"canny": "Canny"
},
@@ -1169,7 +1141,7 @@
"queuedCount": "{{pending}} 待处理",
"front": "前",
"pruneTooltip": "修剪 {{item_count}} 个已完成的项目",
"batchQueuedDesc_other": "在队列的 {{direction}} 中添加了 {{count}} 个会话",
"batchQueuedDesc": "在队列的 {{direction}} 中添加了 {{item_count}} 个会话",
"graphQueued": "节点图已加入队列",
"back": "后",
"session": "会话",
@@ -1220,10 +1192,7 @@
"steps": "步数",
"scheduler": "调度器",
"seamless": "无缝",
"fit": "图生图匹配",
"recallParameters": "召回参数",
"noRecallParameters": "未找到要召回的参数",
"vae": "VAE"
"fit": "图生图适应"
},
"models": {
"noMatchingModels": "无相匹配的模型",
@@ -1232,9 +1201,7 @@
"noLoRAsAvailable": "无可用 LoRA",
"noModelsAvailable": "无可用模型",
"selectModel": "选择一个模型",
"selectLoRA": "选择一个 LoRA",
"noRefinerModelsInstalled": "无已安装的 SDXL Refiner 模型",
"noLoRAsInstalled": "无已安装的 LoRA"
"selectLoRA": "选择一个 LoRA"
},
"boards": {
"autoAddBoard": "自动添加面板",
@@ -1502,18 +1469,5 @@
"clear": "清除",
"maxCacheSize": "最大缓存大小",
"cacheSize": "缓存大小"
},
"hrf": {
"enableHrf": "启用高分辨率修复",
"upscaleMethod": "放大方法",
"enableHrfTooltip": "使用较低的分辨率进行初始生成,放大到基础分辨率后进行图生图。",
"metadata": {
"strength": "高分辨率修复强度",
"enabled": "高分辨率修复已启用",
"method": "高分辨率修复方法"
},
"hrf": "高分辨率修复",
"hrfStrength": "高分辨率修复强度",
"strengthTooltip": "值越低细节越少,但可以减少部分潜在的伪影。"
}
}

View File

@@ -113,14 +113,7 @@
"images": "Bilder",
"copy": "Kopieren",
"download": "Runterladen",
"setCurrentImage": "Setze aktuelle Bild",
"featuresWillReset": "Wenn Sie dieses Bild löschen, werden diese Funktionen sofort zurückgesetzt.",
"deleteImageBin": "Gelöschte Bilder werden an den Papierkorb Ihres Betriebssystems gesendet.",
"unableToLoad": "Galerie kann nicht geladen werden",
"downloadSelection": "Auswahl herunterladen",
"currentlyInUse": "Dieses Bild wird derzeit in den folgenden Funktionen verwendet:",
"deleteImagePermanent": "Gelöschte Bilder können nicht wiederhergestellt werden.",
"autoAssignBoardOnClick": "Board per Klick automatisch zuweisen"
"setCurrentImage": "Setze aktuelle Bild"
},
"hotkeys": {
"keyboardShortcuts": "Tastenkürzel",
@@ -330,8 +323,7 @@
},
"nodesHotkeys": "Knoten Tastenkürzel",
"addNodes": {
"title": "Knotenpunkt hinzufügen",
"desc": "Öffnet das Menü zum Hinzufügen von Knoten"
"title": "Knotenpunkt hinzufügen"
}
},
"modelManager": {
@@ -437,43 +429,7 @@
"customConfigFileLocation": "Benutzerdefinierte Konfiguration Datei Speicherort",
"baseModel": "Basis Modell",
"convertToDiffusers": "Konvertiere zu Diffusers",
"diffusersModels": "Diffusers",
"noCustomLocationProvided": "Kein benutzerdefinierter Standort angegeben",
"onnxModels": "Onnx",
"vaeRepoID": "VAE-Repo-ID",
"weightedSum": "Gewichtete Summe",
"syncModelsDesc": "Wenn Ihre Modelle nicht mit dem Backend synchronisiert sind, können Sie sie mit dieser Option aktualisieren. Dies ist im Allgemeinen praktisch, wenn Sie Ihre models.yaml-Datei manuell aktualisieren oder Modelle zum InvokeAI-Stammordner hinzufügen, nachdem die Anwendung gestartet wurde.",
"vae": "VAE",
"noModels": "Keine Modelle gefunden",
"statusConverting": "Konvertieren",
"sigmoid": "Sigmoid",
"predictionType": "Vorhersagetyp (für Stable Diffusion 2.x-Modelle und gelegentliche Stable Diffusion 1.x-Modelle)",
"selectModel": "Wählen Sie Modell aus",
"repo_id": "Repo-ID",
"modelSyncFailed": "Modellsynchronisierung fehlgeschlagen",
"quickAdd": "Schnell hinzufügen",
"simpleModelDesc": "Geben Sie einen Pfad zu einem lokalen Diffusers-Modell, einem lokalen Checkpoint-/Safetensors-Modell, einer HuggingFace-Repo-ID oder einer Checkpoint-/Diffusers-Modell-URL an.",
"modelDeleted": "Modell gelöscht",
"inpainting": "v1 Ausmalen",
"modelUpdateFailed": "Modellaktualisierung fehlgeschlagen",
"useCustomConfig": "Benutzerdefinierte Konfiguration verwenden",
"settings": "Einstellungen",
"modelConversionFailed": "Modellkonvertierung fehlgeschlagen",
"syncModels": "Modelle synchronisieren",
"mergedModelSaveLocation": "Speicherort",
"modelType": "Modelltyp",
"modelsMerged": "Modelle zusammengeführt",
"modelsMergeFailed": "Modellzusammenführung fehlgeschlagen",
"convertToDiffusersHelpText1": "Dieses Modell wird in das 🧨 Diffusers-Format konvertiert.",
"modelsSynced": "Modelle synchronisiert",
"vaePrecision": "VAE-Präzision",
"mergeModels": "Modelle zusammenführen",
"interpolationType": "Interpolationstyp",
"oliveModels": "Olives",
"variant": "Variante",
"loraModels": "LoRAs",
"modelDeleteFailed": "Modell konnte nicht gelöscht werden",
"mergedModelName": "Zusammengeführter Modellname"
"diffusersModels": "Diffusers"
},
"parameters": {
"images": "Bilder",
@@ -760,33 +716,7 @@
"saveControlImage": "Speichere Referenz Bild",
"safe": "Speichern",
"ipAdapterImageFallback": "Kein IP Adapter Bild ausgewählt",
"resetIPAdapterImage": "Zurücksetzen vom IP Adapter Bild",
"pidi": "PIDI",
"normalBae": "Normales BAE",
"mlsdDescription": "Minimalistischer Liniensegmentdetektor",
"openPoseDescription": "Schätzung der menschlichen Pose mit Openpose",
"control": "Kontrolle",
"coarse": "Coarse",
"crop": "Zuschneiden",
"pidiDescription": "PIDI-Bildverarbeitung",
"mediapipeFace": "Mediapipe Gesichter",
"mlsd": "M-LSD",
"controlMode": "Steuermodus",
"cannyDescription": "Canny Ecken Erkennung",
"lineart": "Lineart",
"lineartAnimeDescription": "Lineart-Verarbeitung im Anime-Stil",
"minConfidence": "Minimales Vertrauen",
"megaControl": "Mega-Kontrolle",
"autoConfigure": "Prozessor automatisch konfigurieren",
"normalBaeDescription": "Normale BAE-Verarbeitung",
"noneDescription": "Es wurde keine Verarbeitung angewendet",
"openPose": "Openpose",
"lineartAnime": "Lineart Anime",
"mediapipeFaceDescription": "Gesichtserkennung mit Mediapipe",
"canny": "Canny",
"hedDescription": "Ganzheitlich verschachtelte Kantenerkennung",
"scribble": "Scribble",
"maxFaces": "Maximal Anzahl Gesichter"
"resetIPAdapterImage": "Zurücksetzen vom IP Adapter Bild"
},
"queue": {
"status": "Status",
@@ -828,19 +758,7 @@
"enqueueing": "Stapel in der Warteschlange",
"queueMaxExceeded": "Maximum von {{max_queue_size}} Elementen erreicht, würde {{skip}} Elemente überspringen",
"cancelBatchFailed": "Problem beim Abbruch vom Stapel",
"clearQueueAlertDialog2": "bist du sicher die Warteschlange zu leeren?",
"pruneSucceeded": "{{item_count}} abgeschlossene Elemente aus der Warteschlange entfernt",
"pauseSucceeded": "Prozessor angehalten",
"cancelFailed": "Problem beim Stornieren des Auftrags",
"pauseFailed": "Problem beim Anhalten des Prozessors",
"front": "Vorne",
"pruneTooltip": "Bereinigen Sie {{item_count}} abgeschlossene Aufträge",
"resumeFailed": "Problem beim wieder aufnehmen von Prozessor",
"pruneFailed": "Problem beim leeren der Warteschlange",
"pauseTooltip": "Pause von Prozessor",
"back": "Hinten",
"resumeSucceeded": "Prozessor wieder aufgenommen",
"resumeTooltip": "Prozessor wieder aufnehmen"
"clearQueueAlertDialog2": "bist du sicher die Warteschlange zu leeren?"
},
"metadata": {
"negativePrompt": "Negativ Beschreibung",
@@ -855,20 +773,7 @@
"noMetaData": "Keine Meta-Data gefunden",
"width": "Breite",
"createdBy": "Erstellt von",
"steps": "Schritte",
"seamless": "Nahtlos",
"positivePrompt": "Positiver Prompt",
"generationMode": "Generierungsmodus",
"Threshold": "Noise Schwelle",
"seed": "Samen",
"perlin": "Perlin Noise",
"hiresFix": "Optimierung für hohe Auflösungen",
"initImage": "Erstes Bild",
"variations": "Samengewichtspaare",
"vae": "VAE",
"workflow": "Arbeitsablauf",
"scheduler": "Scheduler",
"noRecallParameters": "Es wurden keine Parameter zum Abrufen gefunden"
"steps": "Schritte"
},
"popovers": {
"noiseUseCPU": {
@@ -906,68 +811,11 @@
"misses": "Cache Nötig",
"hits": "Cache Treffer",
"enable": "Aktivieren",
"clear": "Leeren",
"maxCacheSize": "Maximale Cache Größe",
"cacheSize": "Cache Größe"
"clear": "Leeren"
},
"embedding": {
"noMatchingEmbedding": "Keine passenden Embeddings",
"addEmbedding": "Embedding hinzufügen",
"incompatibleModel": "Inkompatibles Basismodell:"
},
"nodes": {
"booleanPolymorphicDescription": "Eine Sammlung boolescher Werte.",
"colorFieldDescription": "Eine RGBA-Farbe.",
"conditioningCollection": "Konditionierungssammlung",
"addNode": "Knoten hinzufügen",
"conditioningCollectionDescription": "Konditionierung kann zwischen Knoten weitergegeben werden.",
"colorPolymorphic": "Farbpolymorph",
"colorCodeEdgesHelp": "Farbkodieren Sie Kanten entsprechend ihren verbundenen Feldern",
"animatedEdges": "Animierte Kanten",
"booleanCollectionDescription": "Eine Sammlung boolescher Werte.",
"colorField": "Farbe",
"collectionItem": "Objekt in Sammlung",
"animatedEdgesHelp": "Animieren Sie ausgewählte Kanten und Kanten, die mit ausgewählten Knoten verbunden sind",
"cannotDuplicateConnection": "Es können keine doppelten Verbindungen erstellt werden",
"booleanPolymorphic": "Boolesche Polymorphie",
"colorPolymorphicDescription": "Eine Sammlung von Farben.",
"clipFieldDescription": "Tokenizer- und text_encoder-Untermodelle.",
"clipField": "Clip",
"colorCollection": "Eine Sammlung von Farben.",
"boolean": "Boolesche Werte",
"currentImage": "Aktuelles Bild",
"booleanDescription": "Boolesche Werte sind wahr oder falsch.",
"collection": "Sammlung",
"cannotConnectInputToInput": "Eingang kann nicht mit Eingang verbunden werden",
"conditioningField": "Konditionierung",
"cannotConnectOutputToOutput": "Ausgang kann nicht mit Ausgang verbunden werden",
"booleanCollection": "Boolesche Werte Sammlung",
"cannotConnectToSelf": "Es kann keine Verbindung zu sich selbst hergestellt werden",
"colorCodeEdges": "Farbkodierte Kanten",
"addNodeToolTip": "Knoten hinzufügen (Umschalt+A, Leertaste)"
},
"hrf": {
"enableHrf": "Aktivieren Sie die Korrektur für hohe Auflösungen",
"upscaleMethod": "Vergrößerungsmethoden",
"enableHrfTooltip": "Generieren Sie mit einer niedrigeren Anfangsauflösung, skalieren Sie auf die Basisauflösung hoch und führen Sie dann Image-to-Image aus.",
"metadata": {
"strength": "Hochauflösender Fix Stärke",
"enabled": "Hochauflösender Fix aktiviert",
"method": "Hochauflösender Fix Methode"
},
"hrf": "Hochauflösender Fix",
"hrfStrength": "Hochauflösende Fix Stärke",
"strengthTooltip": "Niedrigere Werte führen zu weniger Details, wodurch potenzielle Artefakte reduziert werden können."
},
"models": {
"noMatchingModels": "Keine passenden Modelle",
"loading": "lade",
"noMatchingLoRAs": "Keine passenden LoRAs",
"noLoRAsAvailable": "Keine LoRAs verfügbar",
"noModelsAvailable": "Keine Modelle verfügbar",
"selectModel": "Wählen ein Modell aus",
"noRefinerModelsInstalled": "Keine SDXL Refiner-Modelle installiert",
"noLoRAsInstalled": "Keine LoRAs installiert",
"selectLoRA": "Wählen ein LoRA aus"
}
}

View File

@@ -6,7 +6,6 @@
"flipVertically": "Flip Vertically",
"invokeProgressBar": "Invoke progress bar",
"menu": "Menu",
"mode": "Mode",
"modelSelect": "Model Select",
"modifyConfig": "Modify Config",
"nextImage": "Next Image",
@@ -31,10 +30,6 @@
"cancel": "Cancel",
"changeBoard": "Change Board",
"clearSearch": "Clear Search",
"deleteBoard": "Delete Board",
"deleteBoardAndImages": "Delete Board and Images",
"deleteBoardOnly": "Delete Board Only",
"deletedBoardsCannotbeRestored": "Deleted boards cannot be restored",
"loading": "Loading...",
"menuItemAutoAdd": "Auto-add to this Board",
"move": "Move",
@@ -56,12 +51,9 @@
"cancel": "Cancel",
"close": "Close",
"on": "On",
"checkpoint": "Checkpoint",
"communityLabel": "Community",
"controlNet": "ControlNet",
"controlAdapter": "Control Adapter",
"data": "Data",
"details": "Details",
"ipAdapter": "IP Adapter",
"t2iAdapter": "T2I Adapter",
"darkMode": "Dark Mode",
@@ -73,7 +65,6 @@
"imagePrompt": "Image Prompt",
"imageFailedToLoad": "Unable to Load Image",
"img2img": "Image To Image",
"inpaint": "inpaint",
"langArabic": "العربية",
"langBrPortuguese": "Português do Brasil",
"langDutch": "Nederlands",
@@ -102,8 +93,6 @@
"nodes": "Workflow Editor",
"nodesDesc": "A node based system for the generation of images is under development currently. Stay tuned for updates about this amazing feature.",
"openInNewTab": "Open in New Tab",
"outpaint": "outpaint",
"outputs": "Outputs",
"postProcessDesc1": "Invoke AI offers a wide variety of post processing features. Image Upscaling and Face Restoration are already available in the WebUI. You can access them from the Advanced Options menu of the Text To Image and Image To Image tabs. You can also process images directly, using the image action buttons above the current image display or in the viewer.",
"postProcessDesc2": "A dedicated UI will be released soon to facilitate more advanced post processing workflows.",
"postProcessDesc3": "The Invoke AI Command Line Interface offers various other features including Embiggen.",
@@ -111,9 +100,7 @@
"postProcessing": "Post Processing",
"random": "Random",
"reportBugLabel": "Report Bug",
"safetensors": "Safetensors",
"settingsLabel": "Settings",
"simple": "Simple",
"statusConnected": "Connected",
"statusConvertingModel": "Converting Model",
"statusDisconnected": "Disconnected",
@@ -140,7 +127,6 @@
"statusSavingImage": "Saving Image",
"statusUpscaling": "Upscaling",
"statusUpscalingESRGAN": "Upscaling (ESRGAN)",
"template": "Template",
"training": "Training",
"trainingDesc1": "A dedicated workflow for training your own embeddings and checkpoints using Textual Inversion and Dreambooth from the web interface.",
"trainingDesc2": "InvokeAI already supports training custom embeddourings using Textual Inversion using the main script.",
@@ -228,7 +214,6 @@
"setControlImageDimensions": "Set Control Image Dimensions To W/H",
"showAdvanced": "Show Advanced",
"toggleControlNet": "Toggle this ControlNet",
"unstarImage": "Unstar Image",
"w": "W",
"weight": "Weight",
"enableIPAdapter": "Enable IP Adapter",
@@ -294,7 +279,6 @@
"next": "Next",
"status": "Status",
"total": "Total",
"time": "Time",
"pending": "Pending",
"in_progress": "In Progress",
"completed": "Completed",
@@ -302,7 +286,6 @@
"canceled": "Canceled",
"completedIn": "Completed in",
"batch": "Batch",
"batchFieldValues": "Batch Field Values",
"item": "Item",
"session": "Session",
"batchValues": "Batch Values",
@@ -352,7 +335,6 @@
"loading": "Loading",
"loadMore": "Load More",
"maintainAspectRatio": "Maintain Aspect Ratio",
"noImageSelected": "No Image Selected",
"noImagesInGallery": "No Images to Display",
"setCurrentImage": "Set as Current Image",
"showGenerations": "Show Generations",
@@ -601,7 +583,7 @@
"strength": "Image to image strength",
"Threshold": "Noise Threshold",
"variations": "Seed-weight pairs",
"vae": "VAE",
"vae": "VAE",
"width": "Width",
"workflow": "Workflow"
},
@@ -624,7 +606,6 @@
"cannotUseSpaces": "Cannot Use Spaces",
"checkpointFolder": "Checkpoint Folder",
"checkpointModels": "Checkpoints",
"checkpointOrSafetensors": "$t(common.checkpoint) / $t(common.safetensors)",
"clearCheckpointFolder": "Clear Checkpoint Folder",
"closeAdvanced": "Close Advanced",
"config": "Config",
@@ -704,7 +685,6 @@
"nameValidationMsg": "Enter a name for your model",
"noCustomLocationProvided": "No Custom Location Provided",
"noModels": "No Models Found",
"noModelSelected": "No Model Selected",
"noModelsFound": "No Models Found",
"none": "none",
"notLoaded": "not loaded",
@@ -750,8 +730,6 @@
"widthValidationMsg": "Default width of your model."
},
"models": {
"addLora": "Add LoRA",
"esrganModel": "ESRGAN Model",
"loading": "loading",
"noLoRAsAvailable": "No LoRAs available",
"noMatchingLoRAs": "No matching LoRAs",
@@ -942,10 +920,7 @@
"unknownTemplate": "Unknown Template",
"unkownInvocation": "Unknown Invocation type",
"updateNode": "Update Node",
"updateAllNodes": "Update All Nodes",
"updateApp": "Update App",
"unableToUpdateNodes_one": "Unable to update {{count}} node",
"unableToUpdateNodes_other": "Unable to update {{count}} nodes",
"vaeField": "Vae",
"vaeFieldDescription": "Vae submodel.",
"vaeModelField": "VAE",
@@ -1032,7 +1007,6 @@
"maskAdjustmentsHeader": "Mask Adjustments",
"maskBlur": "Blur",
"maskBlurMethod": "Blur Method",
"maskEdge": "Mask Edge",
"negativePromptPlaceholder": "Negative Prompt",
"noiseSettings": "Noise",
"noiseThreshold": "Noise Threshold",
@@ -1080,7 +1054,6 @@
"upscale": "Upscale (Shift + U)",
"upscaleImage": "Upscale Image",
"upscaling": "Upscaling",
"unmasked": "Unmasked",
"useAll": "Use All",
"useCpuNoise": "Use CPU Noise",
"cpuNoise": "CPU Noise",
@@ -1102,7 +1075,6 @@
"dynamicPrompts": "Dynamic Prompts",
"enableDynamicPrompts": "Enable Dynamic Prompts",
"maxPrompts": "Max Prompts",
"promptsPreview": "Prompts Preview",
"promptsWithCount_one": "{{count}} Prompt",
"promptsWithCount_other": "{{count}} Prompts",
"seedBehaviour": {
@@ -1142,10 +1114,7 @@
"displayHelpIcons": "Display Help Icons",
"displayInProgress": "Display Progress Images",
"enableImageDebugging": "Enable Image Debugging",
"enableInformationalPopovers": "Enable Informational Popovers",
"enableInvisibleWatermark": "Enable Invisible Watermark",
"enableNodesEditor": "Enable Nodes Editor",
"enableNSFWChecker": "Enable NSFW Checker",
"experimental": "Experimental",
"favoriteSchedulers": "Favorite Schedulers",
"favoriteSchedulersPlaceholder": "No schedulers favorited",
@@ -1245,8 +1214,7 @@
"sentToImageToImage": "Sent To Image To Image",
"sentToUnifiedCanvas": "Sent to Unified Canvas",
"serverError": "Server Error",
"setAsCanvasInitialImage": "Set as canvas initial image",
"setCanvasInitialImage": "Set canvas initial image",
"setCanvasInitialImage": "Set as canvas initial image",
"setControlImage": "Set as control image",
"setIPAdapterImage": "Set as IP Adapter Image",
"setInitialImage": "Set as initial image",
@@ -1304,15 +1272,11 @@
},
"compositingBlur": {
"heading": "Blur",
"paragraphs": [
"The blur radius of the mask."
]
"paragraphs": ["The blur radius of the mask."]
},
"compositingBlurMethod": {
"heading": "Blur Method",
"paragraphs": [
"The method of blur applied to the masked area."
]
"paragraphs": ["The method of blur applied to the masked area."]
},
"compositingCoherencePass": {
"heading": "Coherence Pass",
@@ -1322,9 +1286,7 @@
},
"compositingCoherenceMode": {
"heading": "Mode",
"paragraphs": [
"The mode of the Coherence Pass."
]
"paragraphs": ["The mode of the Coherence Pass."]
},
"compositingCoherenceSteps": {
"heading": "Steps",
@@ -1342,9 +1304,7 @@
},
"compositingMaskAdjustments": {
"heading": "Mask Adjustments",
"paragraphs": [
"Adjust the mask."
]
"paragraphs": ["Adjust the mask."]
},
"controlNetBeginEnd": {
"heading": "Begin / End Step Percentage",
@@ -1402,9 +1362,7 @@
},
"infillMethod": {
"heading": "Infill Method",
"paragraphs": [
"Method to infill the selected area."
]
"paragraphs": ["Method to infill the selected area."]
},
"lora": {
"heading": "LoRA Weight",

View File

@@ -1222,8 +1222,7 @@
"seamless": "无缝",
"fit": "图生图匹配",
"recallParameters": "召回参数",
"noRecallParameters": "未找到要召回的参数",
"vae": "VAE"
"noRecallParameters": "未找到要召回的参数"
},
"models": {
"noMatchingModels": "无相匹配的模型",
@@ -1502,18 +1501,5 @@
"clear": "清除",
"maxCacheSize": "最大缓存大小",
"cacheSize": "缓存大小"
},
"hrf": {
"enableHrf": "启用高分辨率修复",
"upscaleMethod": "放大方法",
"enableHrfTooltip": "使用较低的分辨率进行初始生成,放大到基础分辨率后进行图生图。",
"metadata": {
"strength": "高分辨率修复强度",
"enabled": "高分辨率修复已启用",
"method": "高分辨率修复方法"
},
"hrf": "高分辨率修复",
"hrfStrength": "高分辨率修复强度",
"strengthTooltip": "值越低细节越少,但可以减少部分潜在的伪影。"
}
}

View File

@@ -72,7 +72,6 @@ import { addStagingAreaImageSavedListener } from './listeners/stagingAreaImageSa
import { addTabChangedListener } from './listeners/tabChanged';
import { addUpscaleRequestedListener } from './listeners/upscaleRequested';
import { addWorkflowLoadedListener } from './listeners/workflowLoaded';
import { addUpdateAllNodesRequestedListener } from './listeners/updateAllNodesRequested';
export const listenerMiddleware = createListenerMiddleware();
@@ -179,7 +178,6 @@ addReceivedOpenAPISchemaListener();
// Workflows
addWorkflowLoadedListener();
addUpdateAllNodesRequestedListener();
// DND
addImageDroppedListener();

View File

@@ -8,6 +8,7 @@ import {
selectControlAdapterById,
} from 'features/controlAdapters/store/controlAdaptersSlice';
import { isControlNetOrT2IAdapter } from 'features/controlAdapters/store/types';
import { SAVE_IMAGE } from 'features/nodes/util/graphBuilders/constants';
import { addToast } from 'features/system/store/systemSlice';
import { t } from 'i18next';
import { imagesApi } from 'services/api/endpoints/images';
@@ -37,7 +38,6 @@ export const addControlNetImageProcessedListener = () => {
// ControlNet one-off processing graph is just the processor node, no edges.
// Also we need to grab the image.
const nodeId = ca.processorNode.id;
const enqueueBatchArg: BatchConfig = {
prepend: true,
batch: {
@@ -46,10 +46,27 @@ export const addControlNetImageProcessedListener = () => {
[ca.processorNode.id]: {
...ca.processorNode,
is_intermediate: true,
use_cache: false,
image: { image_name: ca.controlImage },
},
[SAVE_IMAGE]: {
id: SAVE_IMAGE,
type: 'save_image',
is_intermediate: true,
use_cache: false,
},
},
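// Route the processor's image output into the save_image node below, so the listener can match on its completion event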
edges: [
{
source: {
node_id: ca.processorNode.id,
field: 'image',
},
destination: {
node_id: SAVE_IMAGE,
field: 'image',
},
},
],
},
runs: 1,
},
@@ -73,7 +90,7 @@ export const addControlNetImageProcessedListener = () => {
socketInvocationComplete.match(action) &&
action.payload.data.queue_batch_id ===
enqueueResult.batch.batch_id &&
action.payload.data.source_node_id === nodeId
action.payload.data.source_node_id === SAVE_IMAGE
);
// We still have to check the output type

View File

@@ -79,7 +79,7 @@ export const addImageUploadedFulfilledListener = () => {
dispatch(
addToast({
...DEFAULT_UPLOADED_TOAST,
description: t('toast.setAsCanvasInitialImage'),
description: t('toast.setCanvasInitialImage'),
})
);
return;

View File

@@ -7,10 +7,7 @@ import {
imageSelected,
} from 'features/gallery/store/gallerySlice';
import { IMAGE_CATEGORIES } from 'features/gallery/store/types';
import {
LINEAR_UI_OUTPUT,
nodeIDDenyList,
} from 'features/nodes/util/graphBuilders/constants';
import { CANVAS_OUTPUT } from 'features/nodes/util/graphBuilders/constants';
import { boardsApi } from 'services/api/endpoints/boards';
import { imagesApi } from 'services/api/endpoints/images';
import { isImageOutput } from 'services/api/guards';
@@ -22,7 +19,7 @@ import {
import { startAppListening } from '../..';
// These nodes output an image, but do not actually *save* an image, so we don't want to handle the gallery logic on them
const nodeTypeDenylist = ['load_image', 'image'];
const nodeDenylist = ['load_image', 'image'];
export const addInvocationCompleteEventListener = () => {
startAppListening({
@@ -35,31 +32,22 @@ export const addInvocationCompleteEventListener = () => {
`Invocation complete (${action.payload.data.node.type})`
);
const { result, node, queue_batch_id, source_node_id } = data;
const { result, node, queue_batch_id } = data;
// This complete event has an associated image output
if (
isImageOutput(result) &&
!nodeTypeDenylist.includes(node.type) &&
!nodeIDDenyList.includes(source_node_id)
) {
if (isImageOutput(result) && !nodeDenylist.includes(node.type)) {
const { image_name } = result.image;
const { canvas, gallery } = getState();
// This populates the `getImageDTO` cache
const imageDTORequest = dispatch(
imagesApi.endpoints.getImageDTO.initiate(image_name, {
forceRefetch: true,
})
);
const imageDTO = await imageDTORequest.unwrap();
imageDTORequest.unsubscribe();
const imageDTO = await dispatch(
imagesApi.endpoints.getImageDTO.initiate(image_name)
).unwrap();
// Add canvas images to the staging area
if (
canvas.batchIds.includes(queue_batch_id) &&
[LINEAR_UI_OUTPUT].includes(data.source_node_id)
[CANVAS_OUTPUT].includes(data.source_node_id)
) {
dispatch(addImageToStagingArea(imageDTO));
}

View File

@@ -1,52 +0,0 @@
import {
getNeedsUpdate,
updateNode,
} from 'features/nodes/hooks/useNodeVersion';
import { updateAllNodesRequested } from 'features/nodes/store/actions';
import { nodeReplaced } from 'features/nodes/store/nodesSlice';
import { startAppListening } from '..';
import { logger } from 'app/logging/logger';
import { addToast } from 'features/system/store/systemSlice';
import { makeToast } from 'features/system/util/makeToast';
import { t } from 'i18next';
export const addUpdateAllNodesRequestedListener = () => {
startAppListening({
actionCreator: updateAllNodesRequested,
effect: (action, { dispatch, getState }) => {
const log = logger('nodes');
const nodes = getState().nodes.nodes;
const templates = getState().nodes.nodeTemplates;
let unableToUpdateCount = 0;
nodes.forEach((node) => {
const template = templates[node.data.type];
const needsUpdate = getNeedsUpdate(node, template);
const updatedNode = updateNode(node, template);
if (!updatedNode) {
if (needsUpdate) {
unableToUpdateCount++;
}
return;
}
dispatch(nodeReplaced({ nodeId: updatedNode.id, node: updatedNode }));
});
if (unableToUpdateCount) {
log.warn(
`Unable to update ${unableToUpdateCount} nodes. Please report this issue.`
);
dispatch(
addToast(
makeToast({
title: t('nodes.unableToUpdateNodes', {
count: unableToUpdateCount,
}),
})
)
);
}
},
});
};

View File

@@ -19,7 +19,7 @@ import sdxlReducer from 'features/sdxl/store/sdxlSlice';
import configReducer from 'features/system/store/configSlice';
import systemReducer from 'features/system/store/systemSlice';
import queueReducer from 'features/queue/store/queueSlice';
import modelmanagerReducer from 'features/modelManager/store/modelManagerSlice';
import modelmanagerReducer from 'features/ui/components/tabs/ModelManager/store/modelManagerSlice';
import hotkeysReducer from 'features/ui/store/hotkeysSlice';
import uiReducer from 'features/ui/store/uiSlice';
import dynamicMiddlewares from 'redux-dynamic-middlewares';

View File

@@ -8,14 +8,7 @@ import {
forwardRef,
useDisclosure,
} from '@chakra-ui/react';
import {
cloneElement,
memo,
ReactElement,
ReactNode,
useCallback,
useRef,
} from 'react';
import { cloneElement, memo, ReactElement, ReactNode, useRef } from 'react';
import { useTranslation } from 'react-i18next';
import IAIButton from './IAIButton';
@@ -45,15 +38,15 @@ const IAIAlertDialog = forwardRef((props: Props, ref) => {
const { isOpen, onOpen, onClose } = useDisclosure();
const cancelRef = useRef<HTMLButtonElement | null>(null);
const handleAccept = useCallback(() => {
const handleAccept = () => {
acceptCallback();
onClose();
}, [acceptCallback, onClose]);
};
const handleCancel = useCallback(() => {
const handleCancel = () => {
cancelCallback && cancelCallback();
onClose();
}, [cancelCallback, onClose]);
};
return (
<>

View File

@@ -0,0 +1,43 @@
import { Box, Flex, Icon } from '@chakra-ui/react';
import { memo } from 'react';
import { FaExclamation } from 'react-icons/fa';
const IAIErrorLoadingImageFallback = () => {
return (
<Box
sx={{
position: 'relative',
height: 'full',
width: 'full',
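// ::before with pt: '100%' (percentage padding resolves against width) forces a 1:1 aspect ratio for the fallback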
'::before': {
content: "''",
display: 'block',
pt: '100%',
},
}}
>
<Flex
sx={{
position: 'absolute',
top: 0,
insetInlineStart: 0,
height: 'full',
width: 'full',
alignItems: 'center',
justifyContent: 'center',
borderRadius: 'base',
bg: 'base.100',
color: 'base.500',
_dark: {
color: 'base.700',
bg: 'base.850',
},
}}
>
<Icon as={FaExclamation} boxSize={16} opacity={0.7} />
</Flex>
</Box>
);
};
export default memo(IAIErrorLoadingImageFallback);

View File

@@ -0,0 +1,8 @@
import { chakra } from '@chakra-ui/react';
/**
* Chakra-enabled <form />
*/
const IAIForm = chakra.form;
export default IAIForm;

View File

@@ -0,0 +1,15 @@
import { FormErrorMessage, FormErrorMessageProps } from '@chakra-ui/react';
import { ReactNode } from 'react';
type IAIFormErrorMessageProps = FormErrorMessageProps & {
children: ReactNode | string;
};
export default function IAIFormErrorMessage(props: IAIFormErrorMessageProps) {
const { children, ...rest } = props;
return (
<FormErrorMessage color="error.400" {...rest}>
{children}
</FormErrorMessage>
);
}

View File

@@ -0,0 +1,15 @@
import { FormHelperText, FormHelperTextProps } from '@chakra-ui/react';
import { ReactNode } from 'react';
type IAIFormHelperTextProps = FormHelperTextProps & {
children: ReactNode | string;
};
export default function IAIFormHelperText(props: IAIFormHelperTextProps) {
const { children, ...rest } = props;
return (
<FormHelperText margin={0} color="base.400" {...rest}>
{children}
</FormHelperText>
);
}
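A hedged sketch of how the two form helpers above slot into a Chakra FormControl (field and state names are illustrative, not part of this diff); Chakra only renders FormErrorMessage when the surrounding FormControl is marked isInvalid:

<FormControl isInvalid={Boolean(error)}>
  <FormLabel>Model name</FormLabel>
  <Input value={name} onChange={(e) => setName(e.target.value)} />
  {error ? (
    <IAIFormErrorMessage>{error}</IAIFormErrorMessage>
  ) : (
    <IAIFormHelperText>Enter a name for your model</IAIFormHelperText>
  )}
</FormControl>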

View File

@@ -0,0 +1,25 @@
import { Flex, useColorMode } from '@chakra-ui/react';
import { ReactElement } from 'react';
import { mode } from 'theme/util/mode';
export function IAIFormItemWrapper({
children,
}: {
children: ReactElement | ReactElement[];
}) {
const { colorMode } = useColorMode();
return (
<Flex
sx={{
flexDirection: 'column',
padding: 4,
rowGap: 4,
borderRadius: 'base',
width: 'full',
bg: mode('base.100', 'base.900')(colorMode),
}}
>
{children}
</Flex>
);
}

View File

@@ -0,0 +1,25 @@
import {
Checkbox,
CheckboxProps,
FormControl,
FormControlProps,
FormLabel,
} from '@chakra-ui/react';
import { memo, ReactNode } from 'react';
type IAIFullCheckboxProps = CheckboxProps & {
label: string | ReactNode;
formControlProps?: FormControlProps;
};
const IAIFullCheckbox = (props: IAIFullCheckboxProps) => {
const { label, formControlProps, ...rest } = props;
return (
<FormControl {...formControlProps}>
<FormLabel>{label}</FormLabel>
<Checkbox colorScheme="accent" {...rest} />
</FormControl>
);
};
export default memo(IAIFullCheckbox);
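A minimal usage sketch for the checkbox wrapper above (state and label are illustrative, not part of this diff):

const [isEnabled, setIsEnabled] = useState(false);

<IAIFullCheckbox
  label="Enable feature"
  isChecked={isEnabled}
  onChange={(e) => setIsEnabled(e.target.checked)}
/>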

View File

@@ -1,7 +1,6 @@
import { useColorMode } from '@chakra-ui/react';
import { TextInput, TextInputProps } from '@mantine/core';
import { useChakraThemeTokens } from 'common/hooks/useChakraThemeTokens';
import { useCallback } from 'react';
import { mode } from 'theme/util/mode';
type IAIMantineTextInputProps = TextInputProps;
@@ -21,37 +20,26 @@ export default function IAIMantineTextInput(props: IAIMantineTextInputProps) {
} = useChakraThemeTokens();
const { colorMode } = useColorMode();
const stylesFunc = useCallback(
() => ({
input: {
color: mode(base900, base100)(colorMode),
backgroundColor: mode(base50, base900)(colorMode),
borderColor: mode(base200, base800)(colorMode),
borderWidth: 2,
outline: 'none',
':focus': {
borderColor: mode(accent300, accent500)(colorMode),
return (
<TextInput
styles={() => ({
input: {
color: mode(base900, base100)(colorMode),
backgroundColor: mode(base50, base900)(colorMode),
borderColor: mode(base200, base800)(colorMode),
borderWidth: 2,
outline: 'none',
':focus': {
borderColor: mode(accent300, accent500)(colorMode),
},
},
},
label: {
color: mode(base700, base300)(colorMode),
fontWeight: 'normal' as const,
marginBottom: 4,
},
}),
[
accent300,
accent500,
base100,
base200,
base300,
base50,
base700,
base800,
base900,
colorMode,
]
label: {
color: mode(base700, base300)(colorMode),
fontWeight: 'normal',
marginBottom: 4,
},
})}
{...rest}
/>
);
return <TextInput styles={stylesFunc} {...rest} />;
}

View File

@@ -98,34 +98,28 @@ const IAINumberInput = forwardRef((props: Props, ref) => {
}
}, [value, valueAsString]);
const handleOnChange = useCallback(
(v: string) => {
setValueAsString(v);
// This allows negatives and decimals e.g. '-123', `.5`, `-0.2`, etc.
if (!v.match(numberStringRegex)) {
// Cast the value to number. Floor it if it should be an integer.
onChange(isInteger ? Math.floor(Number(v)) : Number(v));
}
},
[isInteger, onChange]
);
const handleOnChange = (v: string) => {
setValueAsString(v);
// This allows negatives and decimals e.g. '-123', `.5`, `-0.2`, etc.
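// numberStringRegex (defined elsewhere in this file) matches in-progress inputs such as '-' or '.', which are held until complete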
if (!v.match(numberStringRegex)) {
// Cast the value to number. Floor it if it should be an integer.
onChange(isInteger ? Math.floor(Number(v)) : Number(v));
}
};
/**
* Clicking the steppers allows the value to go outside bounds; we need to
* clamp it on blur and floor it if needed.
*/
const handleBlur = useCallback(
(e: FocusEvent<HTMLInputElement>) => {
const clamped = clamp(
isInteger ? Math.floor(Number(e.target.value)) : Number(e.target.value),
min,
max
);
setValueAsString(String(clamped));
onChange(clamped);
},
[isInteger, max, min, onChange]
);
const handleBlur = (e: FocusEvent<HTMLInputElement>) => {
const clamped = clamp(
isInteger ? Math.floor(Number(e.target.value)) : Number(e.target.value),
min,
max
);
setValueAsString(String(clamped));
onChange(clamped);
};
const handleKeyDown = useCallback(
(e: KeyboardEvent<HTMLInputElement>) => {

View File

@@ -6,7 +6,7 @@ import {
Tooltip,
TooltipProps,
} from '@chakra-ui/react';
import { memo, MouseEvent, useCallback } from 'react';
import { memo, MouseEvent } from 'react';
import IAIOption from './IAIOption';
type IAISelectProps = SelectProps & {
@@ -33,16 +33,15 @@ const IAISelect = (props: IAISelectProps) => {
spaceEvenly,
...rest
} = props;
const handleClick = useCallback((e: MouseEvent<HTMLDivElement>) => {
e.stopPropagation();
e.nativeEvent.stopImmediatePropagation();
e.nativeEvent.stopPropagation();
e.nativeEvent.cancelBubble = true;
}, []);
return (
<FormControl
isDisabled={isDisabled}
onClick={handleClick}
onClick={(e: MouseEvent<HTMLDivElement>) => {
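// Swallow the click completely: stop both React and native propagation so ancestor handlers never fire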
e.stopPropagation();
e.nativeEvent.stopImmediatePropagation();
e.nativeEvent.stopPropagation();
e.nativeEvent.cancelBubble = true;
}}
sx={
horizontal
? {

View File

@@ -186,13 +186,6 @@ const IAISlider = forwardRef((props: IAIFullSliderProps, ref) => {
[dispatch]
);
const handleMouseEnter = useCallback(() => setShowTooltip(true), []);
const handleMouseLeave = useCallback(() => setShowTooltip(false), []);
const handleStepperClick = useCallback(
() => onChange(Number(localInputValue)),
[localInputValue, onChange]
);
return (
<FormControl
ref={ref}
@@ -226,8 +219,8 @@ const IAISlider = forwardRef((props: IAIFullSliderProps, ref) => {
max={max}
step={step}
onChange={handleSliderChange}
onMouseEnter={handleMouseEnter}
onMouseLeave={handleMouseLeave}
onMouseEnter={() => setShowTooltip(true)}
onMouseLeave={() => setShowTooltip(false)}
focusThumbOnChange={false}
isDisabled={isDisabled}
{...rest}
@@ -339,8 +332,12 @@ const IAISlider = forwardRef((props: IAIFullSliderProps, ref) => {
{...sliderNumberInputFieldProps}
/>
<NumberInputStepper {...sliderNumberInputStepperProps}>
<NumberIncrementStepper onClick={handleStepperClick} />
<NumberDecrementStepper onClick={handleStepperClick} />
<NumberIncrementStepper
onClick={() => onChange(Number(localInputValue))}
/>
<NumberDecrementStepper
onClick={() => onChange(Number(localInputValue))}
/>
</NumberInputStepper>
</NumberInput>
)}

View File

@@ -146,15 +146,16 @@ const ImageUploader = (props: ImageUploaderProps) => {
};
}, [inputRef]);
const handleKeyDown = useCallback((e: KeyboardEvent) => {
// Bail out if user hits spacebar - do not open the uploader
if (e.key === ' ') {
return;
}
}, []);
return (
<Box {...getRootProps({ style: {} })} onKeyDown={handleKeyDown}>
<Box
{...getRootProps({ style: {} })}
onKeyDown={(e: KeyboardEvent) => {
// Bail out if user hits spacebar - do not open the uploader
if (e.key === ' ') {
return;
}
}}
>
<input {...getInputProps()} />
{children}
<AnimatePresence>

View File

@@ -0,0 +1,23 @@
import { Flex, Icon } from '@chakra-ui/react';
import { memo } from 'react';
import { FaImage } from 'react-icons/fa';
const SelectImagePlaceholder = () => {
return (
<Flex
sx={{
w: 'full',
h: 'full',
// bg: 'base.800',
borderRadius: 'base',
alignItems: 'center',
justifyContent: 'center',
aspectRatio: '1/1',
}}
>
<Icon color="base.400" boxSize={32} as={FaImage}></Icon>
</Flex>
);
};
export default memo(SelectImagePlaceholder);

View File

@@ -0,0 +1,24 @@
import { useBreakpoint } from '@chakra-ui/react';
export default function useResolution():
| 'mobile'
| 'tablet'
| 'desktop'
| 'unknown' {
const breakpointValue = useBreakpoint();
const mobileResolutions = ['base', 'sm'];
const tabletResolutions = ['md', 'lg'];
const desktopResolutions = ['xl', '2xl'];
if (mobileResolutions.includes(breakpointValue)) {
return 'mobile';
}
if (tabletResolutions.includes(breakpointValue)) {
return 'tablet';
}
if (desktopResolutions.includes(breakpointValue)) {
return 'desktop';
}
return 'unknown';
}
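A hedged usage sketch for the hook above; the component and copy are illustrative, not part of this diff:

import useResolution from 'common/hooks/useResolution'; // import path assumed

const UploadHint = () => {
  const resolution = useResolution();
  // Breakpoints 'base'/'sm' map to 'mobile'; anything unrecognized falls back to the desktop copy
  return (
    <span>{resolution === 'mobile' ? 'Tap to upload' : 'Drag and drop to upload'}</span>
  );
};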

View File

@@ -0,0 +1,7 @@
import dateFormat from 'dateformat';
/**
* Get a `now` timestamp with 1s precision, formatted as ISO datetime.
*/
export const getTimestamp = () =>
dateFormat(new Date(), `yyyy-mm-dd'T'HH:MM:ss:lo`);
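For reference, `dateformat`'s `l` token is three-digit milliseconds and `o` is the UTC offset, so despite the "1s precision" doc comment the mask above yields values like this (example output, not from this diff):

getTimestamp(); // e.g. '2023-11-13T11:03:56:123+1100'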

View File

@@ -0,0 +1,71 @@
// TODO: Restore variations
// Support code from v2.3 in here.
// export const stringToSeedWeights = (
// string: string
// ): InvokeAI.SeedWeights | boolean => {
// const stringPairs = string.split(',');
// const arrPairs = stringPairs.map((p) => p.split(':'));
// const pairs = arrPairs.map((p: Array<string>): InvokeAI.SeedWeightPair => {
// return { seed: Number(p[0]), weight: Number(p[1]) };
// });
// if (!validateSeedWeights(pairs)) {
// return false;
// }
// return pairs;
// };
// export const validateSeedWeights = (
// seedWeights: InvokeAI.SeedWeights | string
// ): boolean => {
// return typeof seedWeights === 'string'
// ? Boolean(stringToSeedWeights(seedWeights))
// : Boolean(
// seedWeights.length &&
// !seedWeights.some((pair: InvokeAI.SeedWeightPair) => {
// const { seed, weight } = pair;
// const isSeedValid = !isNaN(parseInt(seed.toString(), 10));
// const isWeightValid =
// !isNaN(parseInt(weight.toString(), 10)) &&
// weight >= 0 &&
// weight <= 1;
// return !(isSeedValid && isWeightValid);
// })
// );
// };
// export const seedWeightsToString = (
// seedWeights: InvokeAI.SeedWeights
// ): string => {
// return seedWeights.reduce((acc, pair, i, arr) => {
// const { seed, weight } = pair;
// acc += `${seed}:${weight}`;
// if (i !== arr.length - 1) {
// acc += ',';
// }
// return acc;
// }, '');
// };
// export const seedWeightsToArray = (
// seedWeights: InvokeAI.SeedWeights
// ): Array<Array<number>> => {
// return seedWeights.map((pair: InvokeAI.SeedWeightPair) => [
// pair.seed,
// pair.weight,
// ]);
// };
// export const stringToSeedWeightsArray = (
// string: string
// ): Array<Array<number>> => {
// const stringPairs = string.split(',');
// const arrPairs = stringPairs.map((p) => p.split(':'));
// return arrPairs.map(
// (p: Array<string>): Array<number> => [parseInt(p[0], 10), parseFloat(p[1])]
// );
// };
export default {};
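Based only on the commented-out helpers above, a minimal sketch of the v2.3 seed-weight round-trip (names and values are illustrative):

type SeedWeightPair = { seed: number; weight: number };

const parseSeedWeights = (s: string): SeedWeightPair[] =>
  s.split(',').map((pair) => {
    const [seed, weight] = pair.split(':');
    return { seed: Number(seed), weight: Number(weight) };
  });

parseSeedWeights('12345:0.7,67890:0.3');
// → [{ seed: 12345, weight: 0.7 }, { seed: 67890, weight: 0.3 }]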

View File

@@ -5,22 +5,17 @@ import { clearCanvasHistory } from 'features/canvas/store/canvasSlice';
import { useTranslation } from 'react-i18next';
import { FaTrash } from 'react-icons/fa';
import { isStagingSelector } from '../store/canvasSelectors';
import { memo, useCallback } from 'react';
import { memo } from 'react';
const ClearCanvasHistoryButtonModal = () => {
const isStaging = useAppSelector(isStagingSelector);
const dispatch = useAppDispatch();
const { t } = useTranslation();
const acceptCallback = useCallback(
() => dispatch(clearCanvasHistory()),
[dispatch]
);
return (
<IAIAlertDialog
title={t('unifiedCanvas.clearCanvasHistory')}
acceptCallback={acceptCallback}
acceptCallback={() => dispatch(clearCanvasHistory())}
acceptButtonText={t('unifiedCanvas.clearHistory')}
triggerComponent={
<IAIButton size="sm" leftIcon={<FaTrash />} isDisabled={isStaging}>

View File

@@ -20,8 +20,7 @@ import {
} from 'features/canvas/store/canvasSlice';
import { rgbaColorToString } from 'features/canvas/util/colorToString';
import { isEqual } from 'lodash-es';
import { ChangeEvent, memo, useCallback } from 'react';
import { RgbaColor } from 'react-colorful';
import { memo } from 'react';
import { useHotkeys } from 'react-hotkeys-hook';
import { useTranslation } from 'react-i18next';
@@ -96,35 +95,18 @@ const IAICanvasMaskOptions = () => {
[isMaskEnabled]
);
const handleToggleMaskLayer = useCallback(() => {
const handleToggleMaskLayer = () => {
dispatch(setLayer(layer === 'mask' ? 'base' : 'mask'));
}, [dispatch, layer]);
};
const handleClearMask = useCallback(() => {
dispatch(clearMask());
}, [dispatch]);
const handleClearMask = () => dispatch(clearMask());
const handleToggleEnableMask = useCallback(() => {
const handleToggleEnableMask = () =>
dispatch(setIsMaskEnabled(!isMaskEnabled));
}, [dispatch, isMaskEnabled]);
const handleSaveMask = useCallback(async () => {
const handleSaveMask = async () => {
dispatch(canvasMaskSavedToGallery());
}, [dispatch]);
const handleChangePreserveMaskedArea = useCallback(
(e: ChangeEvent<HTMLInputElement>) => {
dispatch(setShouldPreserveMaskedArea(e.target.checked));
},
[dispatch]
);
const handleChangeMaskColor = useCallback(
(newColor: RgbaColor) => {
dispatch(setMaskColor(newColor));
},
[dispatch]
);
};
return (
<IAIPopover
@@ -149,10 +131,15 @@ const IAICanvasMaskOptions = () => {
<IAISimpleCheckbox
label={t('unifiedCanvas.preserveMaskedArea')}
isChecked={shouldPreserveMaskedArea}
onChange={handleChangePreserveMaskedArea}
onChange={(e) =>
dispatch(setShouldPreserveMaskedArea(e.target.checked))
}
/>
<Box sx={{ paddingTop: 2, paddingBottom: 2 }}>
<IAIColorPicker color={maskColor} onChange={handleChangeMaskColor} />
<IAIColorPicker
color={maskColor}
onChange={(newColor) => dispatch(setMaskColor(newColor))}
/>
</Box>
<IAIButton size="sm" leftIcon={<FaSave />} onClick={handleSaveMask}>
Save Mask

View File

@@ -10,7 +10,6 @@ import { redo } from 'features/canvas/store/canvasSlice';
import { stateSelector } from 'app/store/store';
import { isEqual } from 'lodash-es';
import { useTranslation } from 'react-i18next';
import { useCallback } from 'react';
const canvasRedoSelector = createSelector(
[stateSelector, activeTabNameSelector],
@@ -35,9 +34,9 @@ export default function IAICanvasRedoButton() {
const { t } = useTranslation();
const handleRedo = useCallback(() => {
const handleRedo = () => {
dispatch(redo());
}, [dispatch]);
};
useHotkeys(
['meta+shift+z', 'ctrl+shift+z', 'control+y', 'meta+y'],

View File

@@ -18,7 +18,7 @@ import {
} from 'features/canvas/store/canvasSlice';
import { isEqual } from 'lodash-es';
import { ChangeEvent, memo, useCallback } from 'react';
import { ChangeEvent, memo } from 'react';
import { useHotkeys } from 'react-hotkeys-hook';
import { useTranslation } from 'react-i18next';
import { FaWrench } from 'react-icons/fa';
@@ -86,52 +86,8 @@ const IAICanvasSettingsButtonPopover = () => {
[shouldSnapToGrid]
);
const handleChangeShouldSnapToGrid = useCallback(
(e: ChangeEvent<HTMLInputElement>) =>
dispatch(setShouldSnapToGrid(e.target.checked)),
[dispatch]
);
const handleChangeShouldShowIntermediates = useCallback(
(e: ChangeEvent<HTMLInputElement>) =>
dispatch(setShouldShowIntermediates(e.target.checked)),
[dispatch]
);
const handleChangeShouldShowGrid = useCallback(
(e: ChangeEvent<HTMLInputElement>) =>
dispatch(setShouldShowGrid(e.target.checked)),
[dispatch]
);
const handleChangeShouldDarkenOutsideBoundingBox = useCallback(
(e: ChangeEvent<HTMLInputElement>) =>
dispatch(setShouldDarkenOutsideBoundingBox(e.target.checked)),
[dispatch]
);
const handleChangeShouldAutoSave = useCallback(
(e: ChangeEvent<HTMLInputElement>) =>
dispatch(setShouldAutoSave(e.target.checked)),
[dispatch]
);
const handleChangeShouldCropToBoundingBoxOnSave = useCallback(
(e: ChangeEvent<HTMLInputElement>) =>
dispatch(setShouldCropToBoundingBoxOnSave(e.target.checked)),
[dispatch]
);
const handleChangeShouldRestrictStrokesToBox = useCallback(
(e: ChangeEvent<HTMLInputElement>) =>
dispatch(setShouldRestrictStrokesToBox(e.target.checked)),
[dispatch]
);
const handleChangeShouldShowCanvasDebugInfo = useCallback(
(e: ChangeEvent<HTMLInputElement>) =>
dispatch(setShouldShowCanvasDebugInfo(e.target.checked)),
[dispatch]
);
const handleChangeShouldAntialias = useCallback(
(e: ChangeEvent<HTMLInputElement>) =>
dispatch(setShouldAntialias(e.target.checked)),
[dispatch]
);
const handleChangeShouldSnapToGrid = (e: ChangeEvent<HTMLInputElement>) =>
dispatch(setShouldSnapToGrid(e.target.checked));
return (
<IAIPopover
@@ -148,12 +104,14 @@ const IAICanvasSettingsButtonPopover = () => {
<IAISimpleCheckbox
label={t('unifiedCanvas.showIntermediates')}
isChecked={shouldShowIntermediates}
onChange={handleChangeShouldShowIntermediates}
onChange={(e) =>
dispatch(setShouldShowIntermediates(e.target.checked))
}
/>
<IAISimpleCheckbox
label={t('unifiedCanvas.showGrid')}
isChecked={shouldShowGrid}
onChange={handleChangeShouldShowGrid}
onChange={(e) => dispatch(setShouldShowGrid(e.target.checked))}
/>
<IAISimpleCheckbox
label={t('unifiedCanvas.snapToGrid')}
@@ -163,33 +121,41 @@ const IAICanvasSettingsButtonPopover = () => {
<IAISimpleCheckbox
label={t('unifiedCanvas.darkenOutsideSelection')}
isChecked={shouldDarkenOutsideBoundingBox}
onChange={handleChangeShouldDarkenOutsideBoundingBox}
onChange={(e) =>
dispatch(setShouldDarkenOutsideBoundingBox(e.target.checked))
}
/>
<IAISimpleCheckbox
label={t('unifiedCanvas.autoSaveToGallery')}
isChecked={shouldAutoSave}
onChange={handleChangeShouldAutoSave}
onChange={(e) => dispatch(setShouldAutoSave(e.target.checked))}
/>
<IAISimpleCheckbox
label={t('unifiedCanvas.saveBoxRegionOnly')}
isChecked={shouldCropToBoundingBoxOnSave}
onChange={handleChangeShouldCropToBoundingBoxOnSave}
onChange={(e) =>
dispatch(setShouldCropToBoundingBoxOnSave(e.target.checked))
}
/>
<IAISimpleCheckbox
label={t('unifiedCanvas.limitStrokesToBox')}
isChecked={shouldRestrictStrokesToBox}
onChange={handleChangeShouldRestrictStrokesToBox}
onChange={(e) =>
dispatch(setShouldRestrictStrokesToBox(e.target.checked))
}
/>
<IAISimpleCheckbox
label={t('unifiedCanvas.showCanvasDebugInfo')}
isChecked={shouldShowCanvasDebugInfo}
onChange={handleChangeShouldShowCanvasDebugInfo}
onChange={(e) =>
dispatch(setShouldShowCanvasDebugInfo(e.target.checked))
}
/>
<IAISimpleCheckbox
label={t('unifiedCanvas.antialiasing')}
isChecked={shouldAntialias}
onChange={handleChangeShouldAntialias}
onChange={(e) => dispatch(setShouldAntialias(e.target.checked))}
/>
<ClearCanvasHistoryButtonModal />
</Flex>

Some files were not shown because too many files have changed in this diff Show More