Compare commits


1 Commit

Author SHA1 Message Date
psychedelicious  527c806f7b  feat(nodes): extract denoise function  2023-10-20 16:31:11 +11:00
76 changed files with 3013 additions and 9928 deletions

.gitattributes

@@ -2,4 +2,3 @@
# Only affects text files and ignores other file types.
# For more info see: https://www.aleksandrhovhannisyan.com/blog/crlf-vs-lf-normalizing-line-endings-in-git/
* text=auto
docker/** text eol=lf


@@ -11,5 +11,5 @@ INVOKEAI_ROOT=
# HUGGING_FACE_HUB_TOKEN=
## optional variables specific to the docker setup.
# GPU_DRIVER=cuda # or rocm
# CONTAINER_UID=1000
# GPU_DRIVER=cuda
# CONTAINER_UID=1000
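
A filled-in version of the sample above might look like the following; the values are illustrative assumptions, not defaults shipped with InvokeAI (note that `~` is not expanded in compose env files, so use an absolute path):

```sh
# Illustrative .env for the docker setup (variable names from the sample above)
INVOKEAI_ROOT=/home/user/invokeai   # absolute host path to your InvokeAI root
GPU_DRIVER=cuda                     # or rocm / cpu, matching your hardware
CONTAINER_UID=1000                  # host UID that should own files the container creates
```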


@@ -18,8 +18,8 @@ ENV INVOKEAI_SRC=/opt/invokeai
ENV VIRTUAL_ENV=/opt/venv/invokeai
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
ARG TORCH_VERSION=2.1.0
ARG TORCHVISION_VERSION=0.16
ARG TORCH_VERSION=2.0.1
ARG TORCHVISION_VERSION=0.15.2
ARG GPU_DRIVER=cuda
ARG TARGETPLATFORM="linux/amd64"
# unused but available
@@ -35,7 +35,7 @@ RUN --mount=type=cache,target=/root/.cache/pip \
if [ "$TARGETPLATFORM" = "linux/arm64" ] || [ "$GPU_DRIVER" = "cpu" ]; then \
extra_index_url_arg="--extra-index-url https://download.pytorch.org/whl/cpu"; \
elif [ "$GPU_DRIVER" = "rocm" ]; then \
extra_index_url_arg="--index-url https://download.pytorch.org/whl/rocm5.6"; \
extra_index_url_arg="--extra-index-url https://download.pytorch.org/whl/rocm5.4.2"; \
else \
extra_index_url_arg="--extra-index-url https://download.pytorch.org/whl/cu121"; \
fi &&\
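
As a rough illustration of how the `GPU_DRIVER` build argument above steers pip toward the matching PyTorch wheel index, the image might be built like this (the tag and build context are assumptions):

```sh
# Each GPU_DRIVER value selects a different index URL in the RUN step above
docker build -f docker/Dockerfile --build-arg GPU_DRIVER=cuda -t invokeai .  # CUDA wheels
docker build -f docker/Dockerfile --build-arg GPU_DRIVER=rocm -t invokeai .  # ROCm wheels
docker build -f docker/Dockerfile --build-arg GPU_DRIVER=cpu  -t invokeai .  # CPU-only wheels
```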


@@ -15,10 +15,6 @@ services:
- driver: nvidia
count: 1
capabilities: [gpu]
# For AMD support, comment out the deploy section above and uncomment the devices section below:
#devices:
# - /dev/kfd:/dev/kfd
# - /dev/dri:/dev/dri
build:
context: ..
dockerfile: docker/Dockerfile


@@ -7,5 +7,5 @@ set -e
SCRIPTDIR=$(dirname "${BASH_SOURCE[0]}")
cd "$SCRIPTDIR" || exit 1
docker compose up -d
docker compose up --build -d
docker compose logs -f


@@ -150,6 +150,7 @@ Start/End - 0 represents the start of the generation, 1 represents the end. The
Additionally, each section can be expanded with the "Show Advanced" button in order to manipulate settings for the image pre-processor that adjusts your uploaded image before using it during the generation process.
**Note:** T2I-Adapter models and ControlNet models cannot currently be used together.
## IP-Adapter


@@ -99,14 +99,3 @@ If using an AMD GPU:
Use the standard `docker compose up` command, and generally the `docker compose` [CLI](https://docs.docker.com/compose/reference/) as usual.
Once the container starts up (and configures the InvokeAI root directory if this is a new installation), you can access InvokeAI at [http://localhost:9090](http://localhost:9090)
## Troubleshooting / FAQ
- Q: I am running on Windows under WSL2, and am seeing a "no such file or directory" error.
- A: Your `docker-entrypoint.sh` file likely has Windows (CRLF) as opposed to Unix (LF) line endings,
and you may have cloned this repository before the issue was fixed. To solve this, please change
the line endings in the `docker-entrypoint.sh` file to `LF`. You can do this in VSCode
(`Ctrl+P` and search for "line endings"), or by using the `dos2unix` utility in WSL.
Finally, you may delete `docker-entrypoint.sh` followed by `git pull; git checkout docker/docker-entrypoint.sh`
to reset the file to its most recent version.
For more information on this issue, please see the [Docker Desktop documentation](https://docs.docker.com/desktop/troubleshoot/topics/#avoid-unexpected-syntax-errors-use-unix-style-line-endings-for-files-in-containers)
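
A minimal sketch of the two fixes described above, run from the repository root inside WSL (and assuming `dos2unix` is installed):

```sh
# Option 1: convert the file's line endings from CRLF to LF in place
dos2unix docker/docker-entrypoint.sh

# Option 2: delete the file and restore the committed version
rm docker/docker-entrypoint.sh
git pull
git checkout docker/docker-entrypoint.sh
```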


@@ -4,16 +4,11 @@ These are nodes that have been developed by the community, for the community. If
If you'd like to submit a node for the community, please refer to the [node creation overview](contributingNodes.md).
To use a node, add the node to the `nodes` folder found in your InvokeAI install location.
The suggested method is to use `git clone` to clone the repository the node is found in. This allows for easy updates of the node in the future.
If you'd prefer, you can also just download the `.py` file from the linked repository and add it to the `nodes` folder.
To download a node, simply download the `.py` node file from the link and add it to the `invokeai/app/invocations` folder in your Invoke AI install location. If you used the automated installation, this can be found inside the `.venv` folder. Along with the node, an example node graph should be provided to help you get started with the node.
To use a community workflow, download the the `.json` node graph file and load it into Invoke AI via the **Load Workflow** button in the Workflow Editor.
- Community Nodes
+ [Average Images](#average-images)
+ [Depth Map from Wavefront OBJ](#depth-map-from-wavefront-obj)
+ [Film Grain](#film-grain)
+ [Generative Grammar-Based Prompt Nodes](#generative-grammar-based-prompt-nodes)
@@ -38,13 +33,6 @@ To use a community workflow, download the the `.json` node graph file and load i
- [Help](#help)
--------------------------------
### Average Images
**Description:** This node takes in a collection of images of the same size and averages them as output. It converts everything to RGB mode first.
**Node Link:** https://github.com/JPPhoto/average-images-node
--------------------------------
### Depth Map from Wavefront OBJ
@@ -189,8 +177,12 @@ This includes 15 Nodes:
**Node Link:** https://github.com/helix4u/load_video_frame
**Example Node Graph:** https://github.com/helix4u/load_video_frame/blob/main/Example_Workflow.json
**Output Example:**
<img src="https://raw.githubusercontent.com/helix4u/load_video_frame/main/_git_assets/testmp4_embed_converted.gif" width="500" />
<img src="https://raw.githubusercontent.com/helix4u/load_video_frame/main/testmp4_embed_converted.gif" width="500" />
[Full mp4 of Example Output test.mp4](https://github.com/helix4u/load_video_frame/blob/main/test.mp4)
--------------------------------
### Make 3D
@@ -333,9 +325,9 @@ See full docs here: https://github.com/skunkworxdark/XYGrid_nodes/edit/main/READ
**Description:** This node allows you to do super cool things with InvokeAI.
**Node Link:** https://github.com/invoke-ai/InvokeAI/blob/main/invokeai/app/invocations/prompt.py
**Node Link:** https://github.com/invoke-ai/InvokeAI/fake_node.py
**Example Workflow:** https://github.com/invoke-ai/InvokeAI/blob/docs/main/docs/workflows/Prompt_from_File.json
**Example Node Graph:** https://github.com/invoke-ai/InvokeAI/fake_node_graph.json
**Output Examples**
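
As a sketch of the suggested `git clone` installation route described at the top of this file, using one of the repositories listed above (the `nodes` path shown is an assumption; substitute your own InvokeAI install location):

```sh
# Clone a community node into the `nodes` folder of your InvokeAI root
cd /path/to/invokeai/nodes
git clone https://github.com/JPPhoto/average-images-node
# Because the node is a git checkout, updating it later is just:
cd average-images-node && git pull
```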


@@ -4,7 +4,7 @@ To learn about the specifics of creating a new node, please visit our [Node crea
Once you've created a node and confirmed that it behaves as expected locally, follow these steps:
- Make sure the node is contained in a new Python (.py) file. Preferably, the node is in a repo with a README detailing the nodes usage & examples to help others more easily use your node. Including the tag "invokeai-node" in your repository's README can also help other users find it more easily.
- Make sure the node is contained in a new Python (.py) file. Preferrably, the node is in a repo with a README detaling the nodes usage & examples to help others more easily use your node.
- Submit a pull request with a link to your node(s) repo in GitHub against the `main` branch to add the node to the [Community Nodes](communityNodes.md) list
- Make sure you are following the template below and have provided all relevant details about the node and what it does. Example output images and workflows are very helpful for other users looking to use your node.
- A maintainer will review the pull request and node. If the node is aligned with the direction of the project, you may be asked for permission to include it in the core project.


@@ -2,17 +2,13 @@
We've curated some example workflows for you to get started with Workflows in InvokeAI
To use them, right click on your desired workflow, follow the link to GitHub and click the "⬇" button to download the raw file. You can then use the "Load Workflow" functionality in InvokeAI to load the workflow and start generating images!
To use them, right click on your desired workflow, press "Download Linked File". You can then use the "Load Workflow" functionality in InvokeAI to load the workflow and start generating images!
If you're interested in finding more workflows, checkout the [#share-your-workflows](https://discord.com/channels/1020123559063990373/1130291608097661000) channel in the InvokeAI Discord.
* [SD1.5 / SD2 Text to Image](https://github.com/invoke-ai/InvokeAI/blob/main/docs/workflows/Text_to_Image.json)
* [SDXL Text to Image](https://github.com/invoke-ai/InvokeAI/blob/docs/main/docs/workflows/SDXL_Text_to_Image.json)
* [SDXL Text to Image with Refiner](https://github.com/invoke-ai/InvokeAI/blob/docs/main/docs/workflows/SDXL_w_Refiner_Text_to_Image.json)
* [Multi ControlNet (Canny & Depth)](https://github.com/invoke-ai/InvokeAI/blob/docs/main/docs/workflows/Multi_ControlNet_Canny_and_Depth.json)
* [Tiled Upscaling with ControlNet](https://github.com/invoke-ai/InvokeAI/blob/main/docs/workflows/ESRGAN_img2img_upscale_w_Canny_ControlNet.json)
* [Prompt From File](https://github.com/invoke-ai/InvokeAI/blob/docs/main/docs/workflows/Prompt_from_File.json)
* [Face Detailer with IP-Adapter & ControlNet](https://github.com/invoke-ai/InvokeAI/blob/docs/main/docs/workflows/Face_Detailer_with_IP-Adapter_and_Canny.json.json)
* [SDXL Text to Image](https://github.com/invoke-ai/InvokeAI/blob/main/docs/workflows/SDXL_Text_to_Image.json)
* [SDXL (with Refiner) Text to Image](https://github.com/invoke-ai/InvokeAI/blob/main/docs/workflows/SDXL_Text_to_Image.json)
* [Tiled Upscaling with ControlNet](https://github.com/invoke-ai/InvokeAI/blob/main/docs/workflows/ESRGAN_img2img_upscale w_Canny_ControlNet.json)
* [FaceMask](https://github.com/invoke-ai/InvokeAI/blob/main/docs/workflows/FaceMask.json)
* [FaceOff with 2x Face Scaling](https://github.com/invoke-ai/InvokeAI/blob/main/docs/workflows/FaceOff_FaceScale2x.json)
* [QR Code Monster](https://github.com/invoke-ai/InvokeAI/blob/docs/main/docs/workflows/QR_Code_Monster.json)
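
For command-line users, a hedged equivalent of the "download the raw file" step is to fetch the workflow with `curl`, rewriting one of the GitHub blob URLs above into its raw form:

```sh
# Fetch the raw workflow JSON, then load it in InvokeAI via "Load Workflow"
curl -LO https://raw.githubusercontent.com/invoke-ai/InvokeAI/main/docs/workflows/Text_to_Image.json
```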

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -1,985 +0,0 @@
{
"name": "Multi ControlNet (Canny & Depth)",
"author": "Millu",
"description": "A sample workflow using canny & depth ControlNets to guide the generation process. ",
"version": "0.1.0",
"contact": "millun@invoke.ai",
"tags": "ControlNet, canny, depth",
"notes": "",
"exposedFields": [
{
"nodeId": "54486974-835b-4d81-8f82-05f9f32ce9e9",
"fieldName": "model"
},
{
"nodeId": "7ce68934-3419-42d4-ac70-82cfc9397306",
"fieldName": "prompt"
},
{
"nodeId": "273e3f96-49ea-4dc5-9d5b-9660390f14e1",
"fieldName": "prompt"
},
{
"nodeId": "c4b23e64-7986-40c4-9cad-46327b12e204",
"fieldName": "image"
},
{
"nodeId": "8e860e51-5045-456e-bf04-9a62a2a5c49e",
"fieldName": "image"
}
],
"meta": {
"version": "1.0.0"
},
"nodes": [
{
"id": "8e860e51-5045-456e-bf04-9a62a2a5c49e",
"type": "invocation",
"data": {
"id": "8e860e51-5045-456e-bf04-9a62a2a5c49e",
"type": "image",
"inputs": {
"image": {
"id": "189c8adf-68cc-4774-a729-49da89f6fdf1",
"name": "image",
"type": "ImageField",
"fieldKind": "input",
"label": "Depth Input Image"
}
},
"outputs": {
"image": {
"id": "1a31cacd-9d19-4f32-b558-c5e4aa39ce73",
"name": "image",
"type": "ImageField",
"fieldKind": "output"
},
"width": {
"id": "12f298fd-1d11-4cca-9426-01240f7ec7cf",
"name": "width",
"type": "integer",
"fieldKind": "output"
},
"height": {
"id": "c47dabcb-44e8-40c9-992d-81dca59f598e",
"name": "height",
"type": "integer",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
},
"width": 320,
"height": 225,
"position": {
"x": 3617.163483500202,
"y": 40.5529847930888
}
},
{
"id": "a33199c2-8340-401e-b8a2-42ffa875fc1c",
"type": "invocation",
"data": {
"id": "a33199c2-8340-401e-b8a2-42ffa875fc1c",
"type": "controlnet",
"inputs": {
"image": {
"id": "4e0a3172-d3c2-4005-a84c-fa12a404f8a0",
"name": "image",
"type": "ImageField",
"fieldKind": "input",
"label": ""
},
"control_model": {
"id": "8cb2d998-4086-430a-8b13-94cbc81e3ca3",
"name": "control_model",
"type": "ControlNetModelField",
"fieldKind": "input",
"label": "",
"value": {
"model_name": "sd-controlnet-depth",
"base_model": "sd-1"
}
},
"control_weight": {
"id": "5e32bd8a-9dc8-42d8-9bcc-c2b0460c0b0f",
"name": "control_weight",
"type": "FloatPolymorphic",
"fieldKind": "input",
"label": "",
"value": 1
},
"begin_step_percent": {
"id": "c258a276-352a-416c-8358-152f11005c0c",
"name": "begin_step_percent",
"type": "float",
"fieldKind": "input",
"label": "",
"value": 0
},
"end_step_percent": {
"id": "43001125-0d70-4f87-8e79-da6603ad6c33",
"name": "end_step_percent",
"type": "float",
"fieldKind": "input",
"label": "",
"value": 1
},
"control_mode": {
"id": "d2f14561-9443-4374-9270-e2f05007944e",
"name": "control_mode",
"type": "enum",
"fieldKind": "input",
"label": "",
"value": "balanced"
},
"resize_mode": {
"id": "727ee7d3-8bf6-4c7d-8b8a-43546b3b59cd",
"name": "resize_mode",
"type": "enum",
"fieldKind": "input",
"label": "",
"value": "just_resize"
}
},
"outputs": {
"control": {
"id": "b034aa0f-4d0d-46e4-b5e3-e25a9588d087",
"name": "control",
"type": "ControlField",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
},
"width": 320,
"height": 508,
"position": {
"x": 4477.604342844504,
"y": -49.39005411272677
}
},
{
"id": "273e3f96-49ea-4dc5-9d5b-9660390f14e1",
"type": "invocation",
"data": {
"id": "273e3f96-49ea-4dc5-9d5b-9660390f14e1",
"type": "compel",
"inputs": {
"prompt": {
"id": "7c2c4771-2161-4d77-aced-ff8c4b3f1c15",
"name": "prompt",
"type": "string",
"fieldKind": "input",
"label": "Negative Prompt",
"value": ""
},
"clip": {
"id": "06d59e91-9cca-411d-bf05-86b099b3e8f7",
"name": "clip",
"type": "ClipField",
"fieldKind": "input",
"label": ""
}
},
"outputs": {
"conditioning": {
"id": "858bc33c-134c-4bf6-8855-f943e1d26f14",
"name": "conditioning",
"type": "ConditioningField",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
},
"width": 320,
"height": 261,
"position": {
"x": 4444.706437017514,
"y": -924.0715320874991
}
},
{
"id": "54486974-835b-4d81-8f82-05f9f32ce9e9",
"type": "invocation",
"data": {
"id": "54486974-835b-4d81-8f82-05f9f32ce9e9",
"type": "main_model_loader",
"inputs": {
"model": {
"id": "f4a915a5-593e-4b6d-9198-c78eb5cefaed",
"name": "model",
"type": "MainModelField",
"fieldKind": "input",
"label": "",
"value": {
"model_name": "stable-diffusion-v1-5",
"base_model": "sd-1",
"model_type": "main"
}
}
},
"outputs": {
"unet": {
"id": "ee24fb16-da38-4c66-9fbc-e8f296ed40d2",
"name": "unet",
"type": "UNetField",
"fieldKind": "output"
},
"clip": {
"id": "f3fb0524-8803-41c1-86db-a61a13ee6a33",
"name": "clip",
"type": "ClipField",
"fieldKind": "output"
},
"vae": {
"id": "5c4878a8-b40f-44ab-b146-1c1f42c860b3",
"name": "vae",
"type": "VaeField",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
},
"width": 320,
"height": 226,
"position": {
"x": 3837.096149678291,
"y": -1050.015351148365
}
},
{
"id": "7ce68934-3419-42d4-ac70-82cfc9397306",
"type": "invocation",
"data": {
"id": "7ce68934-3419-42d4-ac70-82cfc9397306",
"type": "compel",
"inputs": {
"prompt": {
"id": "7c2c4771-2161-4d77-aced-ff8c4b3f1c15",
"name": "prompt",
"type": "string",
"fieldKind": "input",
"label": "Positive Prompt",
"value": ""
},
"clip": {
"id": "06d59e91-9cca-411d-bf05-86b099b3e8f7",
"name": "clip",
"type": "ClipField",
"fieldKind": "input",
"label": ""
}
},
"outputs": {
"conditioning": {
"id": "858bc33c-134c-4bf6-8855-f943e1d26f14",
"name": "conditioning",
"type": "ConditioningField",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
},
"width": 320,
"height": 261,
"position": {
"x": 4449.356038911986,
"y": -1201.659695420063
}
},
{
"id": "d204d184-f209-4fae-a0a1-d152800844e1",
"type": "invocation",
"data": {
"id": "d204d184-f209-4fae-a0a1-d152800844e1",
"type": "controlnet",
"inputs": {
"image": {
"id": "4e0a3172-d3c2-4005-a84c-fa12a404f8a0",
"name": "image",
"type": "ImageField",
"fieldKind": "input",
"label": ""
},
"control_model": {
"id": "8cb2d998-4086-430a-8b13-94cbc81e3ca3",
"name": "control_model",
"type": "ControlNetModelField",
"fieldKind": "input",
"label": "",
"value": {
"model_name": "sd-controlnet-canny",
"base_model": "sd-1"
}
},
"control_weight": {
"id": "5e32bd8a-9dc8-42d8-9bcc-c2b0460c0b0f",
"name": "control_weight",
"type": "FloatPolymorphic",
"fieldKind": "input",
"label": "",
"value": 1
},
"begin_step_percent": {
"id": "c258a276-352a-416c-8358-152f11005c0c",
"name": "begin_step_percent",
"type": "float",
"fieldKind": "input",
"label": "",
"value": 0
},
"end_step_percent": {
"id": "43001125-0d70-4f87-8e79-da6603ad6c33",
"name": "end_step_percent",
"type": "float",
"fieldKind": "input",
"label": "",
"value": 1
},
"control_mode": {
"id": "d2f14561-9443-4374-9270-e2f05007944e",
"name": "control_mode",
"type": "enum",
"fieldKind": "input",
"label": "",
"value": "balanced"
},
"resize_mode": {
"id": "727ee7d3-8bf6-4c7d-8b8a-43546b3b59cd",
"name": "resize_mode",
"type": "enum",
"fieldKind": "input",
"label": "",
"value": "just_resize"
}
},
"outputs": {
"control": {
"id": "b034aa0f-4d0d-46e4-b5e3-e25a9588d087",
"name": "control",
"type": "ControlField",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
},
"width": 320,
"height": 508,
"position": {
"x": 4479.68542130465,
"y": -618.4221638099414
}
},
{
"id": "c4b23e64-7986-40c4-9cad-46327b12e204",
"type": "invocation",
"data": {
"id": "c4b23e64-7986-40c4-9cad-46327b12e204",
"type": "image",
"inputs": {
"image": {
"id": "189c8adf-68cc-4774-a729-49da89f6fdf1",
"name": "image",
"type": "ImageField",
"fieldKind": "input",
"label": "Canny Input Image"
}
},
"outputs": {
"image": {
"id": "1a31cacd-9d19-4f32-b558-c5e4aa39ce73",
"name": "image",
"type": "ImageField",
"fieldKind": "output"
},
"width": {
"id": "12f298fd-1d11-4cca-9426-01240f7ec7cf",
"name": "width",
"type": "integer",
"fieldKind": "output"
},
"height": {
"id": "c47dabcb-44e8-40c9-992d-81dca59f598e",
"name": "height",
"type": "integer",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
},
"width": 320,
"height": 225,
"position": {
"x": 3593.7474460420153,
"y": -538.1200472386865
}
},
{
"id": "ca4d5059-8bfb-447f-b415-da0faba5a143",
"type": "invocation",
"data": {
"id": "ca4d5059-8bfb-447f-b415-da0faba5a143",
"type": "collect",
"inputs": {
"item": {
"id": "b16ae602-8708-4b1b-8d4f-9e0808d429ab",
"name": "item",
"type": "CollectionItem",
"fieldKind": "input",
"label": ""
}
},
"outputs": {
"collection": {
"id": "d8987dd8-dec8-4d94-816a-3e356af29884",
"name": "collection",
"type": "Collection",
"fieldKind": "output"
}
},
"label": "ControlNet Collection",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
},
"width": 320,
"height": 104,
"position": {
"x": 4866.191497139488,
"y": -299.0538619537037
}
},
{
"id": "018b1214-c2af-43a7-9910-fb687c6726d7",
"type": "invocation",
"data": {
"id": "018b1214-c2af-43a7-9910-fb687c6726d7",
"type": "midas_depth_image_processor",
"inputs": {
"metadata": {
"id": "77f91980-c696-4a18-a9ea-6e2fc329a747",
"name": "metadata",
"type": "MetadataField",
"fieldKind": "input",
"label": ""
},
"image": {
"id": "50710a20-2af5-424d-9d17-aa08167829c6",
"name": "image",
"type": "ImageField",
"fieldKind": "input",
"label": ""
},
"a_mult": {
"id": "f3b26f9d-2498-415e-9c01-197a8d06c0a5",
"name": "a_mult",
"type": "float",
"fieldKind": "input",
"label": "",
"value": 2
},
"bg_th": {
"id": "4b1eb3ae-9d4a-47d6-b0ed-da62501e007f",
"name": "bg_th",
"type": "float",
"fieldKind": "input",
"label": "",
"value": 0.1
}
},
"outputs": {
"image": {
"id": "b4ed637c-c4a0-4fdd-a24e-36d6412e4ccf",
"name": "image",
"type": "ImageField",
"fieldKind": "output"
},
"width": {
"id": "6bf9b609-d72c-4239-99bd-390a73cc3a9c",
"name": "width",
"type": "integer",
"fieldKind": "output"
},
"height": {
"id": "3e8aef09-cf44-4e3e-a490-d3c9e7b23119",
"name": "height",
"type": "integer",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
},
"width": 320,
"height": 339,
"position": {
"x": 4054.229311491893,
"y": -31.611411056365725
}
},
{
"id": "c826ba5e-9676-4475-b260-07b85e88753c",
"type": "invocation",
"data": {
"id": "c826ba5e-9676-4475-b260-07b85e88753c",
"type": "canny_image_processor",
"inputs": {
"metadata": {
"id": "08331ea6-99df-4e61-a919-204d9bfa8fb2",
"name": "metadata",
"type": "MetadataField",
"fieldKind": "input",
"label": ""
},
"image": {
"id": "33a37284-06ac-459c-ba93-1655e4f69b2d",
"name": "image",
"type": "ImageField",
"fieldKind": "input",
"label": ""
},
"low_threshold": {
"id": "21ec18a3-50c5-4ba1-9642-f921744d594f",
"name": "low_threshold",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 100
},
"high_threshold": {
"id": "ebeab271-a5ff-4c88-acfd-1d0271ab6ed4",
"name": "high_threshold",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 200
}
},
"outputs": {
"image": {
"id": "c0caadbf-883f-4cb4-a62d-626b9c81fc4e",
"name": "image",
"type": "ImageField",
"fieldKind": "output"
},
"width": {
"id": "df225843-8098-49c0-99d1-3b0b6600559f",
"name": "width",
"type": "integer",
"fieldKind": "output"
},
"height": {
"id": "e4abe0de-aa16-41f3-9cd7-968b49db5da3",
"name": "height",
"type": "integer",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
},
"width": 320,
"height": 339,
"position": {
"x": 4095.757337055795,
"y": -455.63440891935863
}
},
{
"id": "9db25398-c869-4a63-8815-c6559341ef12",
"type": "invocation",
"data": {
"id": "9db25398-c869-4a63-8815-c6559341ef12",
"type": "l2i",
"inputs": {
"metadata": {
"id": "2f269793-72e5-4ff3-b76c-fab4f93e983f",
"name": "metadata",
"type": "MetadataField",
"fieldKind": "input",
"label": ""
},
"latents": {
"id": "4aaedd3b-cc77-420c-806e-c7fa74ec4cdf",
"name": "latents",
"type": "LatentsField",
"fieldKind": "input",
"label": ""
},
"vae": {
"id": "432b066a-2462-4d18-83d9-64620b72df45",
"name": "vae",
"type": "VaeField",
"fieldKind": "input",
"label": ""
},
"tiled": {
"id": "61f86e0f-7c46-40f8-b3f5-fe2f693595ca",
"name": "tiled",
"type": "boolean",
"fieldKind": "input",
"label": "",
"value": false
},
"fp32": {
"id": "39b6c89a-37ef-4a7e-9509-daeca49d5092",
"name": "fp32",
"type": "boolean",
"fieldKind": "input",
"label": "",
"value": false
}
},
"outputs": {
"image": {
"id": "6204e9b0-61dd-4250-b685-2092ba0e28e6",
"name": "image",
"type": "ImageField",
"fieldKind": "output"
},
"width": {
"id": "b4140649-8d5d-4d2d-bfa6-09e389ede5f9",
"name": "width",
"type": "integer",
"fieldKind": "output"
},
"height": {
"id": "f3a0c0c8-fc24-4646-8be1-ed8cdd140828",
"name": "height",
"type": "integer",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": false,
"useCache": true,
"version": "1.0.0"
},
"width": 320,
"height": 267,
"position": {
"x": 5678.726701377887,
"y": -351.6792416734579
}
},
{
"id": "ac481b7f-08bf-4a9d-9e0c-3a82ea5243ce",
"type": "invocation",
"data": {
"id": "ac481b7f-08bf-4a9d-9e0c-3a82ea5243ce",
"type": "denoise_latents",
"inputs": {
"positive_conditioning": {
"id": "869cd309-c238-444b-a1a0-5021f99785ba",
"name": "positive_conditioning",
"type": "ConditioningField",
"fieldKind": "input",
"label": ""
},
"negative_conditioning": {
"id": "343447b4-1e37-4e9e-8ac7-4d04864066af",
"name": "negative_conditioning",
"type": "ConditioningField",
"fieldKind": "input",
"label": ""
},
"noise": {
"id": "b556571e-0cf9-4e03-8cfc-5caad937d957",
"name": "noise",
"type": "LatentsField",
"fieldKind": "input",
"label": ""
},
"steps": {
"id": "a3b3d2de-9308-423e-b00d-c209c3e6e808",
"name": "steps",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 10
},
"cfg_scale": {
"id": "b13c50a4-ec7e-4579-b0ef-2fe5df2605ea",
"name": "cfg_scale",
"type": "FloatPolymorphic",
"fieldKind": "input",
"label": "",
"value": 7.5
},
"denoising_start": {
"id": "57d5d755-f58f-4347-b991-f0bca4a0ab29",
"name": "denoising_start",
"type": "float",
"fieldKind": "input",
"label": "",
"value": 0
},
"denoising_end": {
"id": "323e78a6-880a-4d73-a62c-70faff965aa6",
"name": "denoising_end",
"type": "float",
"fieldKind": "input",
"label": "",
"value": 1
},
"scheduler": {
"id": "c25fdc17-a089-43ac-953e-067c45d5c76b",
"name": "scheduler",
"type": "Scheduler",
"fieldKind": "input",
"label": "",
"value": "euler"
},
"unet": {
"id": "6cde662b-e633-4569-b6b4-ec87c52c9c11",
"name": "unet",
"type": "UNetField",
"fieldKind": "input",
"label": ""
},
"control": {
"id": "276a4df9-bb26-4505-a4d3-a94e18c7b541",
"name": "control",
"type": "ControlPolymorphic",
"fieldKind": "input",
"label": ""
},
"ip_adapter": {
"id": "48d40c51-b5e2-4457-a428-eef0696695e8",
"name": "ip_adapter",
"type": "IPAdapterPolymorphic",
"fieldKind": "input",
"label": ""
},
"t2i_adapter": {
"id": "75dd8af2-e7d7-48b4-a574-edd9f6e686ad",
"name": "t2i_adapter",
"type": "T2IAdapterPolymorphic",
"fieldKind": "input",
"label": ""
},
"latents": {
"id": "9223d67b-1dd7-4b34-a45f-ed0a725d9702",
"name": "latents",
"type": "LatentsField",
"fieldKind": "input",
"label": ""
},
"denoise_mask": {
"id": "4ee99177-6923-4b7f-8fe0-d721dd7cb05b",
"name": "denoise_mask",
"type": "DenoiseMaskField",
"fieldKind": "input",
"label": ""
}
},
"outputs": {
"latents": {
"id": "7fb4e326-a974-43e8-9ee7-2e3ab235819d",
"name": "latents",
"type": "LatentsField",
"fieldKind": "output"
},
"width": {
"id": "6bb8acd0-8973-4195-a095-e376385dc705",
"name": "width",
"type": "integer",
"fieldKind": "output"
},
"height": {
"id": "795dea52-1c7d-4e64-99f7-2f60ec6e3ab9",
"name": "height",
"type": "integer",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.4.0"
},
"width": 320,
"height": 646,
"position": {
"x": 5274.672987098195,
"y": -823.0752416664332
}
}
],
"edges": [
{
"source": "54486974-835b-4d81-8f82-05f9f32ce9e9",
"sourceHandle": "clip",
"target": "7ce68934-3419-42d4-ac70-82cfc9397306",
"targetHandle": "clip",
"id": "reactflow__edge-54486974-835b-4d81-8f82-05f9f32ce9e9clip-7ce68934-3419-42d4-ac70-82cfc9397306clip",
"type": "default"
},
{
"source": "54486974-835b-4d81-8f82-05f9f32ce9e9",
"sourceHandle": "clip",
"target": "273e3f96-49ea-4dc5-9d5b-9660390f14e1",
"targetHandle": "clip",
"id": "reactflow__edge-54486974-835b-4d81-8f82-05f9f32ce9e9clip-273e3f96-49ea-4dc5-9d5b-9660390f14e1clip",
"type": "default"
},
{
"source": "a33199c2-8340-401e-b8a2-42ffa875fc1c",
"sourceHandle": "control",
"target": "ca4d5059-8bfb-447f-b415-da0faba5a143",
"targetHandle": "item",
"id": "reactflow__edge-a33199c2-8340-401e-b8a2-42ffa875fc1ccontrol-ca4d5059-8bfb-447f-b415-da0faba5a143item",
"type": "default"
},
{
"source": "d204d184-f209-4fae-a0a1-d152800844e1",
"sourceHandle": "control",
"target": "ca4d5059-8bfb-447f-b415-da0faba5a143",
"targetHandle": "item",
"id": "reactflow__edge-d204d184-f209-4fae-a0a1-d152800844e1control-ca4d5059-8bfb-447f-b415-da0faba5a143item",
"type": "default"
},
{
"source": "8e860e51-5045-456e-bf04-9a62a2a5c49e",
"sourceHandle": "image",
"target": "018b1214-c2af-43a7-9910-fb687c6726d7",
"targetHandle": "image",
"id": "reactflow__edge-8e860e51-5045-456e-bf04-9a62a2a5c49eimage-018b1214-c2af-43a7-9910-fb687c6726d7image",
"type": "default"
},
{
"source": "018b1214-c2af-43a7-9910-fb687c6726d7",
"sourceHandle": "image",
"target": "a33199c2-8340-401e-b8a2-42ffa875fc1c",
"targetHandle": "image",
"id": "reactflow__edge-018b1214-c2af-43a7-9910-fb687c6726d7image-a33199c2-8340-401e-b8a2-42ffa875fc1cimage",
"type": "default"
},
{
"source": "c4b23e64-7986-40c4-9cad-46327b12e204",
"sourceHandle": "image",
"target": "c826ba5e-9676-4475-b260-07b85e88753c",
"targetHandle": "image",
"id": "reactflow__edge-c4b23e64-7986-40c4-9cad-46327b12e204image-c826ba5e-9676-4475-b260-07b85e88753cimage",
"type": "default"
},
{
"source": "c826ba5e-9676-4475-b260-07b85e88753c",
"sourceHandle": "image",
"target": "d204d184-f209-4fae-a0a1-d152800844e1",
"targetHandle": "image",
"id": "reactflow__edge-c826ba5e-9676-4475-b260-07b85e88753cimage-d204d184-f209-4fae-a0a1-d152800844e1image",
"type": "default"
},
{
"source": "54486974-835b-4d81-8f82-05f9f32ce9e9",
"sourceHandle": "vae",
"target": "9db25398-c869-4a63-8815-c6559341ef12",
"targetHandle": "vae",
"id": "reactflow__edge-54486974-835b-4d81-8f82-05f9f32ce9e9vae-9db25398-c869-4a63-8815-c6559341ef12vae",
"type": "default"
},
{
"source": "ac481b7f-08bf-4a9d-9e0c-3a82ea5243ce",
"sourceHandle": "latents",
"target": "9db25398-c869-4a63-8815-c6559341ef12",
"targetHandle": "latents",
"id": "reactflow__edge-ac481b7f-08bf-4a9d-9e0c-3a82ea5243celatents-9db25398-c869-4a63-8815-c6559341ef12latents",
"type": "default"
},
{
"source": "ca4d5059-8bfb-447f-b415-da0faba5a143",
"sourceHandle": "collection",
"target": "ac481b7f-08bf-4a9d-9e0c-3a82ea5243ce",
"targetHandle": "control",
"id": "reactflow__edge-ca4d5059-8bfb-447f-b415-da0faba5a143collection-ac481b7f-08bf-4a9d-9e0c-3a82ea5243cecontrol",
"type": "default"
},
{
"source": "54486974-835b-4d81-8f82-05f9f32ce9e9",
"sourceHandle": "unet",
"target": "ac481b7f-08bf-4a9d-9e0c-3a82ea5243ce",
"targetHandle": "unet",
"id": "reactflow__edge-54486974-835b-4d81-8f82-05f9f32ce9e9unet-ac481b7f-08bf-4a9d-9e0c-3a82ea5243ceunet",
"type": "default"
},
{
"source": "273e3f96-49ea-4dc5-9d5b-9660390f14e1",
"sourceHandle": "conditioning",
"target": "ac481b7f-08bf-4a9d-9e0c-3a82ea5243ce",
"targetHandle": "negative_conditioning",
"id": "reactflow__edge-273e3f96-49ea-4dc5-9d5b-9660390f14e1conditioning-ac481b7f-08bf-4a9d-9e0c-3a82ea5243cenegative_conditioning",
"type": "default"
},
{
"source": "7ce68934-3419-42d4-ac70-82cfc9397306",
"sourceHandle": "conditioning",
"target": "ac481b7f-08bf-4a9d-9e0c-3a82ea5243ce",
"targetHandle": "positive_conditioning",
"id": "reactflow__edge-7ce68934-3419-42d4-ac70-82cfc9397306conditioning-ac481b7f-08bf-4a9d-9e0c-3a82ea5243cepositive_conditioning",
"type": "default"
}
]
}


@@ -1,719 +0,0 @@
{
"name": "Prompt from File",
"author": "InvokeAI",
"description": "Sample workflow using prompt from file capabilities of InvokeAI ",
"version": "0.1.0",
"contact": "millun@invoke.ai",
"tags": "text2image, prompt from file, default",
"notes": "",
"exposedFields": [
{
"nodeId": "d6353b7f-b447-4e17-8f2e-80a88c91d426",
"fieldName": "model"
},
{
"nodeId": "1b7e0df8-8589-4915-a4ea-c0088f15d642",
"fieldName": "file_path"
}
],
"meta": {
"version": "1.0.0"
},
"nodes": [
{
"id": "c2eaf1ba-5708-4679-9e15-945b8b432692",
"type": "invocation",
"data": {
"id": "c2eaf1ba-5708-4679-9e15-945b8b432692",
"type": "compel",
"inputs": {
"prompt": {
"id": "dcdf3f6d-9b96-4bcd-9b8d-f992fefe4f62",
"name": "prompt",
"type": "string",
"fieldKind": "input",
"label": "",
"value": ""
},
"clip": {
"id": "3f1981c9-d8a9-42eb-a739-4f120eb80745",
"name": "clip",
"type": "ClipField",
"fieldKind": "input",
"label": ""
}
},
"outputs": {
"conditioning": {
"id": "46205e6c-c5e2-44cb-9c82-1cd20b95674a",
"name": "conditioning",
"type": "ConditioningField",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
},
"width": 320,
"height": 261,
"position": {
"x": 1177.3417789657444,
"y": -102.0924766641035
}
},
{
"id": "1b7e0df8-8589-4915-a4ea-c0088f15d642",
"type": "invocation",
"data": {
"id": "1b7e0df8-8589-4915-a4ea-c0088f15d642",
"type": "prompt_from_file",
"inputs": {
"file_path": {
"id": "37e37684-4f30-4ec8-beae-b333e550f904",
"name": "file_path",
"type": "string",
"fieldKind": "input",
"label": "Prompts File Path",
"value": ""
},
"pre_prompt": {
"id": "7de02feb-819a-4992-bad3-72a30920ddea",
"name": "pre_prompt",
"type": "string",
"fieldKind": "input",
"label": "",
"value": ""
},
"post_prompt": {
"id": "95f191d8-a282-428e-bd65-de8cb9b7513a",
"name": "post_prompt",
"type": "string",
"fieldKind": "input",
"label": "",
"value": ""
},
"start_line": {
"id": "efee9a48-05ab-4829-8429-becfa64a0782",
"name": "start_line",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 1
},
"max_prompts": {
"id": "abebb428-3d3d-49fd-a482-4e96a16fff08",
"name": "max_prompts",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 1
}
},
"outputs": {
"collection": {
"id": "77d5d7f1-9877-4ab1-9a8c-33e9ffa9abf3",
"name": "collection",
"type": "StringCollection",
"fieldKind": "output"
}
},
"label": "Prompts from File",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
},
"width": 320,
"height": 589,
"position": {
"x": 394.181884547075,
"y": -423.5345157864633
}
},
{
"id": "1b89067c-3f6b-42c8-991f-e3055789b251",
"type": "invocation",
"data": {
"id": "1b89067c-3f6b-42c8-991f-e3055789b251",
"type": "iterate",
"inputs": {
"collection": {
"id": "4c564bf8-5ed6-441e-ad2c-dda265d5785f",
"name": "collection",
"type": "Collection",
"fieldKind": "input",
"label": "",
"value": []
}
},
"outputs": {
"item": {
"id": "36340f9a-e7a5-4afa-b4b5-313f4e292380",
"name": "item",
"type": "CollectionItem",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
},
"width": 320,
"height": 104,
"position": {
"x": 792.8735298060233,
"y": -432.6964953027252
}
},
{
"id": "d6353b7f-b447-4e17-8f2e-80a88c91d426",
"type": "invocation",
"data": {
"id": "d6353b7f-b447-4e17-8f2e-80a88c91d426",
"type": "main_model_loader",
"inputs": {
"model": {
"id": "3f264259-3418-47d5-b90d-b6600e36ae46",
"name": "model",
"type": "MainModelField",
"fieldKind": "input",
"label": "",
"value": {
"model_name": "stable-diffusion-v1-5",
"base_model": "sd-1",
"model_type": "main"
}
}
},
"outputs": {
"unet": {
"id": "8e182ea2-9d0a-4c02-9407-27819288d4b5",
"name": "unet",
"type": "UNetField",
"fieldKind": "output"
},
"clip": {
"id": "d67d9d30-058c-46d5-bded-3d09d6d1aa39",
"name": "clip",
"type": "ClipField",
"fieldKind": "output"
},
"vae": {
"id": "89641601-0429-4448-98d5-190822d920d8",
"name": "vae",
"type": "VaeField",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
},
"width": 320,
"height": 226,
"position": {
"x": -47.66201354137797,
"y": -299.218193067033
}
},
{
"id": "fc9d0e35-a6de-4a19-84e1-c72497c823f6",
"type": "invocation",
"data": {
"id": "fc9d0e35-a6de-4a19-84e1-c72497c823f6",
"type": "compel",
"inputs": {
"prompt": {
"id": "dcdf3f6d-9b96-4bcd-9b8d-f992fefe4f62",
"name": "prompt",
"type": "string",
"fieldKind": "input",
"label": "",
"value": ""
},
"clip": {
"id": "3f1981c9-d8a9-42eb-a739-4f120eb80745",
"name": "clip",
"type": "ClipField",
"fieldKind": "input",
"label": ""
}
},
"outputs": {
"conditioning": {
"id": "46205e6c-c5e2-44cb-9c82-1cd20b95674a",
"name": "conditioning",
"type": "ConditioningField",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
},
"width": 320,
"height": 261,
"position": {
"x": 1175.0187896425462,
"y": -420.64289413577114
}
},
{
"id": "0eb5f3f5-1b91-49eb-9ef0-41d67c7eae77",
"type": "invocation",
"data": {
"id": "0eb5f3f5-1b91-49eb-9ef0-41d67c7eae77",
"type": "noise",
"inputs": {
"seed": {
"id": "b722d84a-eeee-484f-bef2-0250c027cb67",
"name": "seed",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 0
},
"width": {
"id": "d5f8ce11-0502-4bfc-9a30-5757dddf1f94",
"name": "width",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 512
},
"height": {
"id": "f187d5ff-38a5-4c3f-b780-fc5801ef34af",
"name": "height",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 512
},
"use_cpu": {
"id": "12f112b8-8b76-4816-b79e-662edc9f9aa5",
"name": "use_cpu",
"type": "boolean",
"fieldKind": "input",
"label": "",
"value": true
}
},
"outputs": {
"noise": {
"id": "08576ad1-96d9-42d2-96ef-6f5c1961933f",
"name": "noise",
"type": "LatentsField",
"fieldKind": "output"
},
"width": {
"id": "f3e1f94a-258d-41ff-9789-bd999bd9f40d",
"name": "width",
"type": "integer",
"fieldKind": "output"
},
"height": {
"id": "6cefc357-4339-415e-a951-49b9c2be32f4",
"name": "height",
"type": "integer",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
},
"width": 320,
"height": 389,
"position": {
"x": 809.1964864135837,
"y": 183.2735123359796
}
},
{
"id": "dfc20e07-7aef-4fc0-a3a1-7bf68ec6a4e5",
"type": "invocation",
"data": {
"id": "dfc20e07-7aef-4fc0-a3a1-7bf68ec6a4e5",
"type": "rand_int",
"inputs": {
"low": {
"id": "b9fc6cf1-469c-4037-9bf0-04836965826f",
"name": "low",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 0
},
"high": {
"id": "06eac725-0f60-4ba2-b8cd-7ad9f757488c",
"name": "high",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 2147483647
}
},
"outputs": {
"value": {
"id": "df08c84e-7346-4e92-9042-9e5cb773aaff",
"name": "value",
"type": "integer",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": false,
"version": "1.0.0"
},
"width": 320,
"height": 218,
"position": {
"x": 354.19913145404166,
"y": 301.86324846905165
}
},
{
"id": "491ec988-3c77-4c37-af8a-39a0c4e7a2a1",
"type": "invocation",
"data": {
"id": "491ec988-3c77-4c37-af8a-39a0c4e7a2a1",
"type": "l2i",
"inputs": {
"metadata": {
"id": "022e4b33-562b-438d-b7df-41c3fd931f40",
"name": "metadata",
"type": "MetadataField",
"fieldKind": "input",
"label": ""
},
"latents": {
"id": "67cb6c77-a394-4a66-a6a9-a0a7dcca69ec",
"name": "latents",
"type": "LatentsField",
"fieldKind": "input",
"label": ""
},
"vae": {
"id": "7b3fd9ad-a4ef-4e04-89fa-3832a9902dbd",
"name": "vae",
"type": "VaeField",
"fieldKind": "input",
"label": ""
},
"tiled": {
"id": "5ac5680d-3add-4115-8ec0-9ef5bb87493b",
"name": "tiled",
"type": "boolean",
"fieldKind": "input",
"label": "",
"value": false
},
"fp32": {
"id": "db8297f5-55f8-452f-98cf-6572c2582152",
"name": "fp32",
"type": "boolean",
"fieldKind": "input",
"label": "",
"value": false
}
},
"outputs": {
"image": {
"id": "d8778d0c-592a-4960-9280-4e77e00a7f33",
"name": "image",
"type": "ImageField",
"fieldKind": "output"
},
"width": {
"id": "c8b0a75a-f5de-4ff2-9227-f25bb2b97bec",
"name": "width",
"type": "integer",
"fieldKind": "output"
},
"height": {
"id": "83c05fbf-76b9-49ab-93c4-fa4b10e793e4",
"name": "height",
"type": "integer",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
},
"width": 320,
"height": 267,
"position": {
"x": 2037.861329274915,
"y": -329.8393457509562
}
},
{
"id": "2fb1577f-0a56-4f12-8711-8afcaaaf1d5e",
"type": "invocation",
"data": {
"id": "2fb1577f-0a56-4f12-8711-8afcaaaf1d5e",
"type": "denoise_latents",
"inputs": {
"positive_conditioning": {
"id": "751fb35b-3f23-45ce-af1c-053e74251337",
"name": "positive_conditioning",
"type": "ConditioningField",
"fieldKind": "input",
"label": ""
},
"negative_conditioning": {
"id": "b9dc06b6-7481-4db1-a8c2-39d22a5eacff",
"name": "negative_conditioning",
"type": "ConditioningField",
"fieldKind": "input",
"label": ""
},
"noise": {
"id": "6e15e439-3390-48a4-8031-01e0e19f0e1d",
"name": "noise",
"type": "LatentsField",
"fieldKind": "input",
"label": ""
},
"steps": {
"id": "bfdfb3df-760b-4d51-b17b-0abb38b976c2",
"name": "steps",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 10
},
"cfg_scale": {
"id": "47770858-322e-41af-8494-d8b63ed735f3",
"name": "cfg_scale",
"type": "FloatPolymorphic",
"fieldKind": "input",
"label": "",
"value": 7.5
},
"denoising_start": {
"id": "2ba78720-ee02-4130-a348-7bc3531f790b",
"name": "denoising_start",
"type": "float",
"fieldKind": "input",
"label": "",
"value": 0
},
"denoising_end": {
"id": "a874dffb-d433-4d1a-9f59-af4367bb05e4",
"name": "denoising_end",
"type": "float",
"fieldKind": "input",
"label": "",
"value": 1
},
"scheduler": {
"id": "36e021ad-b762-4fe4-ad4d-17f0291c40b2",
"name": "scheduler",
"type": "Scheduler",
"fieldKind": "input",
"label": "",
"value": "euler"
},
"unet": {
"id": "98d3282d-f9f6-4b5e-b9e8-58658f1cac78",
"name": "unet",
"type": "UNetField",
"fieldKind": "input",
"label": ""
},
"control": {
"id": "f2ea3216-43d5-42b4-887f-36e8f7166d53",
"name": "control",
"type": "ControlPolymorphic",
"fieldKind": "input",
"label": ""
},
"ip_adapter": {
"id": "d0780610-a298-47c8-a54e-70e769e0dfe2",
"name": "ip_adapter",
"type": "IPAdapterPolymorphic",
"fieldKind": "input",
"label": ""
},
"t2i_adapter": {
"id": "fdb40970-185e-4ea8-8bb5-88f06f91f46a",
"name": "t2i_adapter",
"type": "T2IAdapterPolymorphic",
"fieldKind": "input",
"label": ""
},
"latents": {
"id": "e05b538a-1b5a-4aa5-84b1-fd2361289a81",
"name": "latents",
"type": "LatentsField",
"fieldKind": "input",
"label": ""
},
"denoise_mask": {
"id": "463a419e-df30-4382-8ffb-b25b25abe425",
"name": "denoise_mask",
"type": "DenoiseMaskField",
"fieldKind": "input",
"label": ""
}
},
"outputs": {
"latents": {
"id": "559ee688-66cf-4139-8b82-3d3aa69995ce",
"name": "latents",
"type": "LatentsField",
"fieldKind": "output"
},
"width": {
"id": "0b4285c2-e8b9-48e5-98f6-0a49d3f98fd2",
"name": "width",
"type": "integer",
"fieldKind": "output"
},
"height": {
"id": "8b0881b9-45e5-47d5-b526-24b6661de0ee",
"name": "height",
"type": "integer",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.4.0"
},
"width": 320,
"height": 646,
"position": {
"x": 1570.9941088179146,
"y": -407.6505491604564
}
}
],
"edges": [
{
"source": "1b7e0df8-8589-4915-a4ea-c0088f15d642",
"sourceHandle": "collection",
"target": "1b89067c-3f6b-42c8-991f-e3055789b251",
"targetHandle": "collection",
"id": "reactflow__edge-1b7e0df8-8589-4915-a4ea-c0088f15d642collection-1b89067c-3f6b-42c8-991f-e3055789b251collection",
"type": "default"
},
{
"source": "d6353b7f-b447-4e17-8f2e-80a88c91d426",
"sourceHandle": "clip",
"target": "fc9d0e35-a6de-4a19-84e1-c72497c823f6",
"targetHandle": "clip",
"id": "reactflow__edge-d6353b7f-b447-4e17-8f2e-80a88c91d426clip-fc9d0e35-a6de-4a19-84e1-c72497c823f6clip",
"type": "default"
},
{
"source": "1b89067c-3f6b-42c8-991f-e3055789b251",
"sourceHandle": "item",
"target": "fc9d0e35-a6de-4a19-84e1-c72497c823f6",
"targetHandle": "prompt",
"id": "reactflow__edge-1b89067c-3f6b-42c8-991f-e3055789b251item-fc9d0e35-a6de-4a19-84e1-c72497c823f6prompt",
"type": "default"
},
{
"source": "d6353b7f-b447-4e17-8f2e-80a88c91d426",
"sourceHandle": "clip",
"target": "c2eaf1ba-5708-4679-9e15-945b8b432692",
"targetHandle": "clip",
"id": "reactflow__edge-d6353b7f-b447-4e17-8f2e-80a88c91d426clip-c2eaf1ba-5708-4679-9e15-945b8b432692clip",
"type": "default"
},
{
"source": "dfc20e07-7aef-4fc0-a3a1-7bf68ec6a4e5",
"sourceHandle": "value",
"target": "0eb5f3f5-1b91-49eb-9ef0-41d67c7eae77",
"targetHandle": "seed",
"id": "reactflow__edge-dfc20e07-7aef-4fc0-a3a1-7bf68ec6a4e5value-0eb5f3f5-1b91-49eb-9ef0-41d67c7eae77seed",
"type": "default"
},
{
"source": "fc9d0e35-a6de-4a19-84e1-c72497c823f6",
"sourceHandle": "conditioning",
"target": "2fb1577f-0a56-4f12-8711-8afcaaaf1d5e",
"targetHandle": "positive_conditioning",
"id": "reactflow__edge-fc9d0e35-a6de-4a19-84e1-c72497c823f6conditioning-2fb1577f-0a56-4f12-8711-8afcaaaf1d5epositive_conditioning",
"type": "default"
},
{
"source": "c2eaf1ba-5708-4679-9e15-945b8b432692",
"sourceHandle": "conditioning",
"target": "2fb1577f-0a56-4f12-8711-8afcaaaf1d5e",
"targetHandle": "negative_conditioning",
"id": "reactflow__edge-c2eaf1ba-5708-4679-9e15-945b8b432692conditioning-2fb1577f-0a56-4f12-8711-8afcaaaf1d5enegative_conditioning",
"type": "default"
},
{
"source": "0eb5f3f5-1b91-49eb-9ef0-41d67c7eae77",
"sourceHandle": "noise",
"target": "2fb1577f-0a56-4f12-8711-8afcaaaf1d5e",
"targetHandle": "noise",
"id": "reactflow__edge-0eb5f3f5-1b91-49eb-9ef0-41d67c7eae77noise-2fb1577f-0a56-4f12-8711-8afcaaaf1d5enoise",
"type": "default"
},
{
"source": "d6353b7f-b447-4e17-8f2e-80a88c91d426",
"sourceHandle": "unet",
"target": "2fb1577f-0a56-4f12-8711-8afcaaaf1d5e",
"targetHandle": "unet",
"id": "reactflow__edge-d6353b7f-b447-4e17-8f2e-80a88c91d426unet-2fb1577f-0a56-4f12-8711-8afcaaaf1d5eunet",
"type": "default"
},
{
"source": "2fb1577f-0a56-4f12-8711-8afcaaaf1d5e",
"sourceHandle": "latents",
"target": "491ec988-3c77-4c37-af8a-39a0c4e7a2a1",
"targetHandle": "latents",
"id": "reactflow__edge-2fb1577f-0a56-4f12-8711-8afcaaaf1d5elatents-491ec988-3c77-4c37-af8a-39a0c4e7a2a1latents",
"type": "default"
},
{
"source": "d6353b7f-b447-4e17-8f2e-80a88c91d426",
"sourceHandle": "vae",
"target": "491ec988-3c77-4c37-af8a-39a0c4e7a2a1",
"targetHandle": "vae",
"id": "reactflow__edge-d6353b7f-b447-4e17-8f2e-80a88c91d426vae-491ec988-3c77-4c37-af8a-39a0c4e7a2a1vae",
"type": "default"
}
]
}


@@ -1,758 +0,0 @@
{
"name": "QR Code Monster",
"author": "InvokeAI",
"description": "Sample workflow for create images with QR code Monster ControlNet",
"version": "1.0.1",
"contact": "invoke@invoke.ai",
"tags": "qrcode, controlnet, default",
"notes": "",
"exposedFields": [
{
"nodeId": "a6cc0986-f928-4a7e-8d44-ba2d4b36f54a",
"fieldName": "image"
},
{
"nodeId": "aca3b054-bfba-4392-bd20-6476f59504df",
"fieldName": "prompt"
},
{
"nodeId": "3db7cee0-31e2-4a3d-94a1-268cb16177dd",
"fieldName": "prompt"
}
],
"meta": {
"version": "1.0.0"
},
"nodes": [
{
"id": "3db7cee0-31e2-4a3d-94a1-268cb16177dd",
"type": "invocation",
"data": {
"id": "3db7cee0-31e2-4a3d-94a1-268cb16177dd",
"type": "compel",
"inputs": {
"prompt": {
"id": "6a1fe244-5656-4f8c-91d1-1fb474e28807",
"name": "prompt",
"type": "string",
"fieldKind": "input",
"label": "Negative Prompt",
"value": ""
},
"clip": {
"id": "f24688f3-29b8-4a2d-8603-046e5a5c7250",
"name": "clip",
"type": "ClipField",
"fieldKind": "input",
"label": ""
}
},
"outputs": {
"conditioning": {
"id": "700528eb-3f8b-4745-b540-34f919b5b228",
"name": "conditioning",
"type": "ConditioningField",
"fieldKind": "output"
}
},
"label": "Prompt",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
},
"width": 320,
"height": 261,
"position": {
"x": 773.0502679628016,
"y": 1622.4836086770556
}
},
{
"id": "610384f1-6f0c-4847-a9a2-37ce7f456ed1",
"type": "invocation",
"data": {
"id": "610384f1-6f0c-4847-a9a2-37ce7f456ed1",
"type": "main_model_loader",
"inputs": {
"model": {
"id": "cb36b6d3-6c1f-4911-a200-646745b0ff74",
"name": "model",
"type": "MainModelField",
"fieldKind": "input",
"label": "",
"value": {
"model_name": "stable-diffusion-v1-5",
"base_model": "sd-1",
"model_type": "main"
}
}
},
"outputs": {
"unet": {
"id": "7246895b-b252-49bc-b952-8d801b4672f7",
"name": "unet",
"type": "UNetField",
"fieldKind": "output"
},
"clip": {
"id": "3c2aedb8-30d5-4d4b-99df-d06a0d7bedc6",
"name": "clip",
"type": "ClipField",
"fieldKind": "output"
},
"vae": {
"id": "b9743815-5501-4bbb-8bde-8bd6ba298a4e",
"name": "vae",
"type": "VaeField",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
},
"width": 320,
"height": 226,
"position": {
"x": 211.58866462619744,
"y": 1376.0542388105248
}
},
{
"id": "aca3b054-bfba-4392-bd20-6476f59504df",
"type": "invocation",
"data": {
"id": "aca3b054-bfba-4392-bd20-6476f59504df",
"type": "compel",
"inputs": {
"prompt": {
"id": "6a1fe244-5656-4f8c-91d1-1fb474e28807",
"name": "prompt",
"type": "string",
"fieldKind": "input",
"label": "Positive Prompt",
"value": ""
},
"clip": {
"id": "f24688f3-29b8-4a2d-8603-046e5a5c7250",
"name": "clip",
"type": "ClipField",
"fieldKind": "input",
"label": ""
}
},
"outputs": {
"conditioning": {
"id": "700528eb-3f8b-4745-b540-34f919b5b228",
"name": "conditioning",
"type": "ConditioningField",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
},
"width": 320,
"height": 261,
"position": {
"x": 770.6491131680111,
"y": 1316.379247112241
}
},
{
"id": "a6cc0986-f928-4a7e-8d44-ba2d4b36f54a",
"type": "invocation",
"data": {
"id": "a6cc0986-f928-4a7e-8d44-ba2d4b36f54a",
"type": "image",
"inputs": {
"image": {
"id": "89ba5d58-28c9-4e04-a5df-79fb7a6f3531",
"name": "image",
"type": "ImageField",
"fieldKind": "input",
"label": "QR Code / Hidden Image"
}
},
"outputs": {
"image": {
"id": "54335653-0e17-42da-b9e8-83c5fb5af670",
"name": "image",
"type": "ImageField",
"fieldKind": "output"
},
"width": {
"id": "a3c65953-39ea-4d97-8858-d65154ff9d11",
"name": "width",
"type": "integer",
"fieldKind": "output"
},
"height": {
"id": "2c7db511-ebc9-4286-a46b-bc11e0fd779f",
"name": "height",
"type": "integer",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
},
"width": 320,
"height": 225,
"position": {
"x": 700.5034176864369,
"y": 1981.749600549388
}
},
{
"id": "280fd8a7-3b0c-49fe-8be4-6246e08b6c9a",
"type": "invocation",
"data": {
"id": "280fd8a7-3b0c-49fe-8be4-6246e08b6c9a",
"type": "noise",
"inputs": {
"seed": {
"id": "7c6c76dd-127b-4829-b1ec-430790cb7ed7",
"name": "seed",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 0
},
"width": {
"id": "8ec6a525-a421-40d8-a17e-39e7b6836438",
"name": "width",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 512
},
"height": {
"id": "6af1e58a-e2ee-4ec4-9f06-d8d0412922ca",
"name": "height",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 512
},
"use_cpu": {
"id": "26662e99-5720-43a6-a5d8-06c9dab0e261",
"name": "use_cpu",
"type": "boolean",
"fieldKind": "input",
"label": "",
"value": true
}
},
"outputs": {
"noise": {
"id": "cb4c4dfc-a744-49eb-af4f-677448e28407",
"name": "noise",
"type": "LatentsField",
"fieldKind": "output"
},
"width": {
"id": "97e87be6-e81f-40a3-a522-28ebe4aad0ac",
"name": "width",
"type": "integer",
"fieldKind": "output"
},
"height": {
"id": "80784420-f1e1-47b0-bd1d-1d381a15e22d",
"name": "height",
"type": "integer",
"fieldKind": "output"
}
},
"label": "",
"isOpen": false,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
},
"width": 320,
"height": 32,
"position": {
"x": 1182.460291960481,
"y": 1759.592972960265
}
},
{
"id": "2ac03cf6-0326-454a-bed0-d8baef2bf30d",
"type": "invocation",
"data": {
"id": "2ac03cf6-0326-454a-bed0-d8baef2bf30d",
"type": "controlnet",
"inputs": {
"image": {
"id": "1f683889-9f14-40c8-af29-4b991b211a3a",
"name": "image",
"type": "ImageField",
"fieldKind": "input",
"label": ""
},
"control_model": {
"id": "a933b21d-22c1-4e06-818f-15416b971282",
"name": "control_model",
"type": "ControlNetModelField",
"fieldKind": "input",
"label": "",
"value": {
"model_name": "qrcode_monster",
"base_model": "sd-1"
}
},
"control_weight": {
"id": "198a0825-e55e-4496-bc54-c3d7b02f3d75",
"name": "control_weight",
"type": "FloatPolymorphic",
"fieldKind": "input",
"label": "",
"value": 1.4
},
"begin_step_percent": {
"id": "c85ce42f-22af-42a0-8993-676002fb275e",
"name": "begin_step_percent",
"type": "float",
"fieldKind": "input",
"label": "",
"value": 0
},
"end_step_percent": {
"id": "a61a65c4-9e6f-4fe2-96a5-1294d17ec6e4",
"name": "end_step_percent",
"type": "float",
"fieldKind": "input",
"label": "",
"value": 1
},
"control_mode": {
"id": "1aa45cfa-0249-46b7-bf24-3e38e92f5fa0",
"name": "control_mode",
"type": "enum",
"fieldKind": "input",
"label": "",
"value": "balanced"
},
"resize_mode": {
"id": "a89d3cb9-a141-4cea-bb49-977bf267377b",
"name": "resize_mode",
"type": "enum",
"fieldKind": "input",
"label": "",
"value": "just_resize"
}
},
"outputs": {
"control": {
"id": "c9a1fc7e-cb25-45a9-adff-1a97c9ff04d6",
"name": "control",
"type": "ControlField",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
},
"width": 320,
"height": 508,
"position": {
"x": 1165.434407461108,
"y": 1862.916856351665
}
},
{
"id": "28542b66-5a00-4780-a318-0a036d2df914",
"type": "invocation",
"data": {
"id": "28542b66-5a00-4780-a318-0a036d2df914",
"type": "l2i",
"inputs": {
"metadata": {
"id": "a38e8f55-7f2c-4fcc-a71f-d51e2eb0374a",
"name": "metadata",
"type": "MetadataField",
"fieldKind": "input",
"label": ""
},
"latents": {
"id": "80e97bc8-e716-4175-9115-5b58495aa30c",
"name": "latents",
"type": "LatentsField",
"fieldKind": "input",
"label": ""
},
"vae": {
"id": "5641bce6-ac2b-47eb-bb32-2f290026b7e1",
"name": "vae",
"type": "VaeField",
"fieldKind": "input",
"label": ""
},
"tiled": {
"id": "9e75eb16-ae48-47ed-b180-e0409d377436",
"name": "tiled",
"type": "boolean",
"fieldKind": "input",
"label": "",
"value": false
},
"fp32": {
"id": "0518b0ce-ee37-437b-8437-cc2976a3279f",
"name": "fp32",
"type": "boolean",
"fieldKind": "input",
"label": "",
"value": false
}
},
"outputs": {
"image": {
"id": "ec2ff985-a7eb-401f-92c4-1217cddad6a2",
"name": "image",
"type": "ImageField",
"fieldKind": "output"
},
"width": {
"id": "ba1d1720-6d67-4eca-9e9d-b97d08636774",
"name": "width",
"type": "integer",
"fieldKind": "output"
},
"height": {
"id": "10bcf8f4-6394-422f-b0c0-51680f3bfb25",
"name": "height",
"type": "integer",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
},
"width": 320,
"height": 267,
"position": {
"x": 2110.8415693683014,
"y": 1487.253341116115
}
},
{
"id": "9755ae4c-ef30-4db3-80f6-a31f98979a11",
"type": "invocation",
"data": {
"id": "9755ae4c-ef30-4db3-80f6-a31f98979a11",
"type": "denoise_latents",
"inputs": {
"positive_conditioning": {
"id": "8e6aceaa-a986-4ab2-9c04-5b1027b3daf6",
"name": "positive_conditioning",
"type": "ConditioningField",
"fieldKind": "input",
"label": ""
},
"negative_conditioning": {
"id": "fbbaa712-ca1a-420b-9016-763f2a29d68c",
"name": "negative_conditioning",
"type": "ConditioningField",
"fieldKind": "input",
"label": ""
},
"noise": {
"id": "a3b3d5d2-c0f9-4b89-a9b3-8de9418f7bb5",
"name": "noise",
"type": "LatentsField",
"fieldKind": "input",
"label": ""
},
"steps": {
"id": "e491e664-2f8c-4f49-b3e4-57b051fbb9c5",
"name": "steps",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 10
},
"cfg_scale": {
"id": "f0318abd-ed65-4cad-86a7-48d1c19a6d14",
"name": "cfg_scale",
"type": "FloatPolymorphic",
"fieldKind": "input",
"label": "",
"value": 7.5
},
"denoising_start": {
"id": "f7c24c51-496f-44c4-836a-c734e529fec0",
"name": "denoising_start",
"type": "float",
"fieldKind": "input",
"label": "",
"value": 0
},
"denoising_end": {
"id": "54f7656a-fb0d-4d9e-a459-f700f7dccd2e",
"name": "denoising_end",
"type": "float",
"fieldKind": "input",
"label": "",
"value": 1
},
"scheduler": {
"id": "363ee440-040d-499b-bf84-bf5391b08681",
"name": "scheduler",
"type": "Scheduler",
"fieldKind": "input",
"label": "",
"value": "euler"
},
"unet": {
"id": "5c93d4e5-1064-4700-ab1d-d12e1e9b5ba7",
"name": "unet",
"type": "UNetField",
"fieldKind": "input",
"label": ""
},
"control": {
"id": "e1948eb3-7407-43b0-93e3-139470f186b7",
"name": "control",
"type": "ControlPolymorphic",
"fieldKind": "input",
"label": ""
},
"ip_adapter": {
"id": "5675b2c3-adfb-49ee-b33c-26bdbfab1fed",
"name": "ip_adapter",
"type": "IPAdapterPolymorphic",
"fieldKind": "input",
"label": ""
},
"t2i_adapter": {
"id": "89cd4ab3-3bfc-4063-9de5-91d42305c651",
"name": "t2i_adapter",
"type": "T2IAdapterPolymorphic",
"fieldKind": "input",
"label": ""
},
"latents": {
"id": "ec01df90-5042-418d-b6d6-86b251c13770",
"name": "latents",
"type": "LatentsField",
"fieldKind": "input",
"label": ""
},
"denoise_mask": {
"id": "561cde00-cb20-42ae-9bd3-4f477f73fbe1",
"name": "denoise_mask",
"type": "DenoiseMaskField",
"fieldKind": "input",
"label": ""
}
},
"outputs": {
"latents": {
"id": "f9addefe-efcc-4e01-8945-6ebbc934b002",
"name": "latents",
"type": "LatentsField",
"fieldKind": "output"
},
"width": {
"id": "6d48f78b-d681-422a-8677-0111bd0625f1",
"name": "width",
"type": "integer",
"fieldKind": "output"
},
"height": {
"id": "f25997b8-6316-44ce-b696-b82e4ed51ae5",
"name": "height",
"type": "integer",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.4.0"
},
"width": 320,
"height": 646,
"position": {
"x": 1597.9598293300219,
"y": 1420.4637727891632
}
},
{
"id": "59349822-af20-4e0e-a53f-3ba135d00c3f",
"type": "invocation",
"data": {
"id": "59349822-af20-4e0e-a53f-3ba135d00c3f",
"type": "rand_int",
"inputs": {
"low": {
"id": "051f22f9-2d4f-414f-bc51-84af2d626efa",
"name": "low",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 0
},
"high": {
"id": "77206186-f264-4224-9589-f925cf903dc9",
"name": "high",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 2147483647
}
},
"outputs": {
"value": {
"id": "a7ed9387-3a24-4d34-b7c5-f713bd544ab1",
"name": "value",
"type": "integer",
"fieldKind": "output"
}
},
"label": "",
"isOpen": false,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": false,
"version": "1.0.0"
},
"width": 320,
"height": 32,
"position": {
"x": 1178.16746986153,
"y": 1663.9433412808876
}
}
],
"edges": [
{
"source": "59349822-af20-4e0e-a53f-3ba135d00c3f",
"target": "280fd8a7-3b0c-49fe-8be4-6246e08b6c9a",
"id": "59349822-af20-4e0e-a53f-3ba135d00c3f-280fd8a7-3b0c-49fe-8be4-6246e08b6c9a-collapsed",
"type": "collapsed"
},
{
"source": "610384f1-6f0c-4847-a9a2-37ce7f456ed1",
"sourceHandle": "clip",
"target": "aca3b054-bfba-4392-bd20-6476f59504df",
"targetHandle": "clip",
"id": "reactflow__edge-610384f1-6f0c-4847-a9a2-37ce7f456ed1clip-aca3b054-bfba-4392-bd20-6476f59504dfclip",
"type": "default"
},
{
"source": "610384f1-6f0c-4847-a9a2-37ce7f456ed1",
"sourceHandle": "clip",
"target": "3db7cee0-31e2-4a3d-94a1-268cb16177dd",
"targetHandle": "clip",
"id": "reactflow__edge-610384f1-6f0c-4847-a9a2-37ce7f456ed1clip-3db7cee0-31e2-4a3d-94a1-268cb16177ddclip",
"type": "default"
},
{
"source": "a6cc0986-f928-4a7e-8d44-ba2d4b36f54a",
"sourceHandle": "image",
"target": "2ac03cf6-0326-454a-bed0-d8baef2bf30d",
"targetHandle": "image",
"id": "reactflow__edge-a6cc0986-f928-4a7e-8d44-ba2d4b36f54aimage-2ac03cf6-0326-454a-bed0-d8baef2bf30dimage",
"type": "default"
},
{
"source": "610384f1-6f0c-4847-a9a2-37ce7f456ed1",
"sourceHandle": "vae",
"target": "28542b66-5a00-4780-a318-0a036d2df914",
"targetHandle": "vae",
"id": "reactflow__edge-610384f1-6f0c-4847-a9a2-37ce7f456ed1vae-28542b66-5a00-4780-a318-0a036d2df914vae",
"type": "default"
},
{
"source": "280fd8a7-3b0c-49fe-8be4-6246e08b6c9a",
"sourceHandle": "noise",
"target": "9755ae4c-ef30-4db3-80f6-a31f98979a11",
"targetHandle": "noise",
"id": "reactflow__edge-280fd8a7-3b0c-49fe-8be4-6246e08b6c9anoise-9755ae4c-ef30-4db3-80f6-a31f98979a11noise",
"type": "default"
},
{
"source": "3db7cee0-31e2-4a3d-94a1-268cb16177dd",
"sourceHandle": "conditioning",
"target": "9755ae4c-ef30-4db3-80f6-a31f98979a11",
"targetHandle": "negative_conditioning",
"id": "reactflow__edge-3db7cee0-31e2-4a3d-94a1-268cb16177ddconditioning-9755ae4c-ef30-4db3-80f6-a31f98979a11negative_conditioning",
"type": "default"
},
{
"source": "aca3b054-bfba-4392-bd20-6476f59504df",
"sourceHandle": "conditioning",
"target": "9755ae4c-ef30-4db3-80f6-a31f98979a11",
"targetHandle": "positive_conditioning",
"id": "reactflow__edge-aca3b054-bfba-4392-bd20-6476f59504dfconditioning-9755ae4c-ef30-4db3-80f6-a31f98979a11positive_conditioning",
"type": "default"
},
{
"source": "610384f1-6f0c-4847-a9a2-37ce7f456ed1",
"sourceHandle": "unet",
"target": "9755ae4c-ef30-4db3-80f6-a31f98979a11",
"targetHandle": "unet",
"id": "reactflow__edge-610384f1-6f0c-4847-a9a2-37ce7f456ed1unet-9755ae4c-ef30-4db3-80f6-a31f98979a11unet",
"type": "default"
},
{
"source": "2ac03cf6-0326-454a-bed0-d8baef2bf30d",
"sourceHandle": "control",
"target": "9755ae4c-ef30-4db3-80f6-a31f98979a11",
"targetHandle": "control",
"id": "reactflow__edge-2ac03cf6-0326-454a-bed0-d8baef2bf30dcontrol-9755ae4c-ef30-4db3-80f6-a31f98979a11control",
"type": "default"
},
{
"source": "9755ae4c-ef30-4db3-80f6-a31f98979a11",
"sourceHandle": "latents",
"target": "28542b66-5a00-4780-a318-0a036d2df914",
"targetHandle": "latents",
"id": "reactflow__edge-9755ae4c-ef30-4db3-80f6-a31f98979a11latents-28542b66-5a00-4780-a318-0a036d2df914latents",
"type": "default"
},
{
"source": "59349822-af20-4e0e-a53f-3ba135d00c3f",
"sourceHandle": "value",
"target": "280fd8a7-3b0c-49fe-8be4-6246e08b6c9a",
"targetHandle": "seed",
"id": "reactflow__edge-59349822-af20-4e0e-a53f-3ba135d00c3fvalue-280fd8a7-3b0c-49fe-8be4-6246e08b6c9aseed",
"type": "default"
}
]
}
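
The workflow JSON above pairs a "nodes" array with an "edges" array. Judging from the entries, each edge of type "default" derives its "id" from a fixed convention: the reactflow__edge- prefix, then the source node id concatenated with its source handle, a hyphen, and the target node id concatenated with its target handle (collapsed edges instead use a "-collapsed" suffix form). A minimal sketch; the helper name is hypothetical, not part of the repo:

def make_edge_id(source: str, source_handle: str, target: str, target_handle: str) -> str:
    # Observed convention: "reactflow__edge-<source><sourceHandle>-<target><targetHandle>"
    return f"reactflow__edge-{source}{source_handle}-{target}{target_handle}"

# Reproduces the seed edge above:
make_edge_id(
    "59349822-af20-4e0e-a53f-3ba135d00c3f", "value",
    "280fd8a7-3b0c-49fe-8be4-6246e08b6c9a", "seed",
)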

View File

@@ -26,6 +26,10 @@
{
"nodeId": "3193ad09-a7c2-4bf4-a3a9-1c61cc33a204",
"fieldName": "style"
},
{
"nodeId": "87ee6243-fb0d-4f77-ad5f-56591659339e",
"fieldName": "steps"
}
],
"meta": {
@@ -36,6 +40,7 @@
"id": "3193ad09-a7c2-4bf4-a3a9-1c61cc33a204",
"type": "invocation",
"data": {
"version": "1.0.0",
"id": "3193ad09-a7c2-4bf4-a3a9-1c61cc33a204",
"type": "sdxl_compel_prompt",
"inputs": {
@@ -130,12 +135,10 @@
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
"isIntermediate": true
},
"width": 320,
"height": 793,
"height": 764,
"position": {
"x": 1275,
"y": -350
@@ -145,6 +148,7 @@
"id": "55705012-79b9-4aac-9f26-c0b10309785b",
"type": "invocation",
"data": {
"version": "1.0.0",
"id": "55705012-79b9-4aac-9f26-c0b10309785b",
"type": "noise",
"inputs": {
@@ -205,9 +209,7 @@
"isOpen": false,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
"isIntermediate": true
},
"width": 320,
"height": 32,
@@ -216,10 +218,83 @@
"y": -300
}
},
{
"id": "dbcd2f98-d809-48c8-bf64-2635f88a2fe9",
"type": "invocation",
"data": {
"version": "1.0.0",
"id": "dbcd2f98-d809-48c8-bf64-2635f88a2fe9",
"type": "l2i",
"inputs": {
"tiled": {
"id": "24f5bc7b-f6a1-425d-8ab1-f50b4db5d0df",
"name": "tiled",
"type": "boolean",
"fieldKind": "input",
"label": "",
"value": false
},
"fp32": {
"id": "b146d873-ffb9-4767-986a-5360504841a2",
"name": "fp32",
"type": "boolean",
"fieldKind": "input",
"label": "",
"value": true
},
"latents": {
"id": "65441abd-7713-4b00-9d8d-3771404002e8",
"name": "latents",
"type": "LatentsField",
"fieldKind": "input",
"label": ""
},
"vae": {
"id": "a478b833-6e13-4611-9a10-842c89603c74",
"name": "vae",
"type": "VaeField",
"fieldKind": "input",
"label": ""
}
},
"outputs": {
"image": {
"id": "c87ae925-f858-417a-8940-8708ba9b4b53",
"name": "image",
"type": "ImageField",
"fieldKind": "output"
},
"width": {
"id": "4bcb8512-b5a1-45f1-9e52-6e92849f9d6c",
"name": "width",
"type": "integer",
"fieldKind": "output"
},
"height": {
"id": "23e41c00-a354-48e8-8f59-5875679c27ab",
"name": "height",
"type": "integer",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": true,
"isIntermediate": false
},
"width": 320,
"height": 224,
"position": {
"x": 2025,
"y": -250
}
},
{
"id": "ea94bc37-d995-4a83-aa99-4af42479f2f2",
"type": "invocation",
"data": {
"version": "1.0.0",
"id": "ea94bc37-d995-4a83-aa99-4af42479f2f2",
"type": "rand_int",
"inputs": {
@@ -252,9 +327,7 @@
"isOpen": false,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": false,
"version": "1.0.0"
"isIntermediate": true
},
"width": 320,
"height": 32,
@@ -267,6 +340,7 @@
"id": "30d3289c-773c-4152-a9d2-bd8a99c8fd22",
"type": "invocation",
"data": {
"version": "1.0.0",
"id": "30d3289c-773c-4152-a9d2-bd8a99c8fd22",
"type": "sdxl_model_loader",
"inputs": {
@@ -277,7 +351,7 @@
"fieldKind": "input",
"label": "",
"value": {
"model_name": "stable-diffusion-xl-base-1-0",
"model_name": "stable-diffusion-xl-base-1.0",
"base_model": "sdxl",
"model_type": "main"
}
@@ -313,12 +387,10 @@
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
"isIntermediate": true
},
"width": 320,
"height": 258,
"height": 234,
"position": {
"x": 475,
"y": 25
@@ -328,6 +400,7 @@
"id": "faf965a4-7530-427b-b1f3-4ba6505c2a08",
"type": "invocation",
"data": {
"version": "1.0.0",
"id": "faf965a4-7530-427b-b1f3-4ba6505c2a08",
"type": "sdxl_compel_prompt",
"inputs": {
@@ -422,143 +495,48 @@
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
"isIntermediate": true
},
"width": 320,
"height": 793,
"height": 764,
"position": {
"x": 900,
"y": -350
}
},
{
"id": "63e91020-83b2-4f35-b174-ad9692aabb48",
"id": "87ee6243-fb0d-4f77-ad5f-56591659339e",
"type": "invocation",
"data": {
"id": "63e91020-83b2-4f35-b174-ad9692aabb48",
"type": "l2i",
"inputs": {
"metadata": {
"id": "88971324-3fdb-442d-b8b7-7612478a8622",
"name": "metadata",
"type": "MetadataField",
"fieldKind": "input",
"label": ""
},
"latents": {
"id": "da0e40cb-c49f-4fa5-9856-338b91a65f6b",
"name": "latents",
"type": "LatentsField",
"fieldKind": "input",
"label": ""
},
"vae": {
"id": "ae5164ce-1710-4ec5-a83a-6113a0d1b5c0",
"name": "vae",
"type": "VaeField",
"fieldKind": "input",
"label": ""
},
"tiled": {
"id": "2ccfd535-1a7b-4ecf-84db-9430a64fb3d7",
"name": "tiled",
"type": "boolean",
"fieldKind": "input",
"label": "",
"value": false
},
"fp32": {
"id": "64f07d5a-54a2-429c-8c5b-0c2a3a8e5cd5",
"name": "fp32",
"type": "boolean",
"fieldKind": "input",
"label": "",
"value": false
}
},
"outputs": {
"image": {
"id": "9b281eaa-6504-407d-a5ca-1e5e8020a4bf",
"name": "image",
"type": "ImageField",
"fieldKind": "output"
},
"width": {
"id": "98e545f3-b53b-490d-b94d-bed9418ccc75",
"name": "width",
"type": "integer",
"fieldKind": "output"
},
"height": {
"id": "4a74bd43-d7f7-4c7f-bb3b-d09bb2992c46",
"name": "height",
"type": "integer",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": false,
"useCache": true,
"version": "1.0.0"
},
"width": 320,
"height": 267,
"position": {
"x": 2112.5626808057173,
"y": -174.24042139280238
}
},
{
"id": "50a36525-3c0a-4cc5-977c-e4bfc3fd6dfb",
"type": "invocation",
"data": {
"id": "50a36525-3c0a-4cc5-977c-e4bfc3fd6dfb",
"version": "1.0.0",
"id": "87ee6243-fb0d-4f77-ad5f-56591659339e",
"type": "denoise_latents",
"inputs": {
"positive_conditioning": {
"id": "29b73dfa-a06e-4b4a-a844-515b9eb93a81",
"name": "positive_conditioning",
"type": "ConditioningField",
"fieldKind": "input",
"label": ""
},
"negative_conditioning": {
"id": "a81e6f5b-f4de-4919-b483-b6e2f067465a",
"name": "negative_conditioning",
"type": "ConditioningField",
"fieldKind": "input",
"label": ""
},
"noise": {
"id": "4ba06bb7-eb45-4fb9-9984-31001b545587",
"id": "4884a4b7-cc19-4fea-83c7-1f940e6edd24",
"name": "noise",
"type": "LatentsField",
"fieldKind": "input",
"label": ""
},
"steps": {
"id": "36ee8a45-ca69-44bc-9bc3-aa881e6045c0",
"id": "4c61675c-b6b9-41ac-b187-b5c13b587039",
"name": "steps",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 10
"value": 36
},
"cfg_scale": {
"id": "2a2024e0-a736-46ec-933c-c1c1ebe96943",
"id": "f8213f35-4637-4a1a-83f4-1f8cfb9ccd2c",
"name": "cfg_scale",
"type": "FloatPolymorphic",
"type": "float",
"fieldKind": "input",
"label": "",
"value": 7.5
},
"denoising_start": {
"id": "be219d5e-41b7-430a-8fb5-bc21a31ad219",
"id": "01e2f30d-0acd-4e21-98b9-a9b8e24c6db2",
"name": "denoising_start",
"type": "float",
"fieldKind": "input",
@@ -566,7 +544,7 @@
"value": 0
},
"denoising_end": {
"id": "3adfb7ae-c9f7-4a40-b6e0-4c2050bd1a99",
"id": "3db95479-a73b-4c75-9b44-08daec16b224",
"name": "denoising_end",
"type": "float",
"fieldKind": "input",
@@ -574,71 +552,71 @@
"value": 1
},
"scheduler": {
"id": "14423e0d-7215-4ee0-b065-f9e95eaa8d7d",
"id": "db8430a9-64c3-4c54-ae38-9f597cf7b6d5",
"name": "scheduler",
"type": "Scheduler",
"fieldKind": "input",
"label": "",
"value": "euler"
},
"unet": {
"id": "e73bbf98-6489-492b-b83c-faed215febac",
"name": "unet",
"type": "UNetField",
"fieldKind": "input",
"label": ""
},
"control": {
"id": "dab351b3-0c86-4ea5-9782-4e8edbfb0607",
"id": "599b49e8-6435-4576-be41-a5155f3a17e3",
"name": "control",
"type": "ControlPolymorphic",
"fieldKind": "input",
"label": ""
},
"ip_adapter": {
"id": "192daea0-a90a-43cc-a2ee-0114a8e90318",
"name": "ip_adapter",
"type": "IPAdapterPolymorphic",
"fieldKind": "input",
"label": ""
},
"t2i_adapter": {
"id": "ee386a55-d4c7-48c1-ac57-7bc4e3aada7a",
"name": "t2i_adapter",
"type": "T2IAdapterPolymorphic",
"type": "ControlField",
"fieldKind": "input",
"label": ""
},
"latents": {
"id": "3a922c6a-3d8c-4c9e-b3ec-2f4d81cda077",
"id": "226f9e91-454e-4159-9fa6-019c0cf29277",
"name": "latents",
"type": "LatentsField",
"fieldKind": "input",
"label": ""
},
"denoise_mask": {
"id": "cd7ce032-835f-495f-8b45-d57272f33132",
"id": "de019cb6-7fb5-45bf-a266-22e20889893f",
"name": "denoise_mask",
"type": "DenoiseMaskField",
"fieldKind": "input",
"label": ""
},
"positive_conditioning": {
"id": "02fc400a-110d-470e-8411-f404f966a949",
"name": "positive_conditioning",
"type": "ConditioningField",
"fieldKind": "input",
"label": ""
},
"negative_conditioning": {
"id": "4bd3bdfa-fcf4-42be-8e47-1e314255798f",
"name": "negative_conditioning",
"type": "ConditioningField",
"fieldKind": "input",
"label": ""
},
"unet": {
"id": "7c2d58a8-b5f1-4e63-8ffd-8ada52c35832",
"name": "unet",
"type": "UNetField",
"fieldKind": "input",
"label": ""
}
},
"outputs": {
"latents": {
"id": "6260b84f-8361-470a-98d8-5b22a45c2d8c",
"id": "6a6fa492-de26-4e95-b1d9-a322fe37eb13",
"name": "latents",
"type": "LatentsField",
"fieldKind": "output"
},
"width": {
"id": "aede0ecf-25b6-46be-aa30-b77f79715deb",
"id": "a9790729-7d6c-4418-903d-4da961fccf56",
"name": "width",
"type": "integer",
"fieldKind": "output"
},
"height": {
"id": "519abf62-d475-48ef-ab8f-66136bc0e499",
"id": "fa74efe5-7330-4a3c-b256-c82a544585b4",
"name": "height",
"type": "integer",
"fieldKind": "output"
@@ -648,15 +626,13 @@
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.4.0"
"isIntermediate": true
},
"width": 320,
"height": 646,
"height": 558,
"position": {
"x": 1642.955772577545,
"y": -230.2485847594651
"x": 1650,
"y": -250
}
}
],
@@ -710,42 +686,50 @@
{
"source": "30d3289c-773c-4152-a9d2-bd8a99c8fd22",
"sourceHandle": "vae",
"target": "63e91020-83b2-4f35-b174-ad9692aabb48",
"target": "dbcd2f98-d809-48c8-bf64-2635f88a2fe9",
"targetHandle": "vae",
"id": "reactflow__edge-30d3289c-773c-4152-a9d2-bd8a99c8fd22vae-63e91020-83b2-4f35-b174-ad9692aabb48vae",
"id": "reactflow__edge-30d3289c-773c-4152-a9d2-bd8a99c8fd22vae-dbcd2f98-d809-48c8-bf64-2635f88a2fe9vae",
"type": "default"
},
{
"source": "30d3289c-773c-4152-a9d2-bd8a99c8fd22",
"sourceHandle": "unet",
"target": "50a36525-3c0a-4cc5-977c-e4bfc3fd6dfb",
"targetHandle": "unet",
"id": "reactflow__edge-30d3289c-773c-4152-a9d2-bd8a99c8fd22unet-50a36525-3c0a-4cc5-977c-e4bfc3fd6dfbunet",
"source": "87ee6243-fb0d-4f77-ad5f-56591659339e",
"sourceHandle": "latents",
"target": "dbcd2f98-d809-48c8-bf64-2635f88a2fe9",
"targetHandle": "latents",
"id": "reactflow__edge-87ee6243-fb0d-4f77-ad5f-56591659339elatents-dbcd2f98-d809-48c8-bf64-2635f88a2fe9latents",
"type": "default"
},
{
"source": "faf965a4-7530-427b-b1f3-4ba6505c2a08",
"sourceHandle": "conditioning",
"target": "50a36525-3c0a-4cc5-977c-e4bfc3fd6dfb",
"target": "87ee6243-fb0d-4f77-ad5f-56591659339e",
"targetHandle": "positive_conditioning",
"id": "reactflow__edge-faf965a4-7530-427b-b1f3-4ba6505c2a08conditioning-50a36525-3c0a-4cc5-977c-e4bfc3fd6dfbpositive_conditioning",
"id": "reactflow__edge-faf965a4-7530-427b-b1f3-4ba6505c2a08conditioning-87ee6243-fb0d-4f77-ad5f-56591659339epositive_conditioning",
"type": "default"
},
{
"source": "3193ad09-a7c2-4bf4-a3a9-1c61cc33a204",
"sourceHandle": "conditioning",
"target": "50a36525-3c0a-4cc5-977c-e4bfc3fd6dfb",
"target": "87ee6243-fb0d-4f77-ad5f-56591659339e",
"targetHandle": "negative_conditioning",
"id": "reactflow__edge-3193ad09-a7c2-4bf4-a3a9-1c61cc33a204conditioning-50a36525-3c0a-4cc5-977c-e4bfc3fd6dfbnegative_conditioning",
"id": "reactflow__edge-3193ad09-a7c2-4bf4-a3a9-1c61cc33a204conditioning-87ee6243-fb0d-4f77-ad5f-56591659339enegative_conditioning",
"type": "default"
},
{
"source": "30d3289c-773c-4152-a9d2-bd8a99c8fd22",
"sourceHandle": "unet",
"target": "87ee6243-fb0d-4f77-ad5f-56591659339e",
"targetHandle": "unet",
"id": "reactflow__edge-30d3289c-773c-4152-a9d2-bd8a99c8fd22unet-87ee6243-fb0d-4f77-ad5f-56591659339eunet",
"type": "default"
},
{
"source": "55705012-79b9-4aac-9f26-c0b10309785b",
"sourceHandle": "noise",
"target": "50a36525-3c0a-4cc5-977c-e4bfc3fd6dfb",
"target": "87ee6243-fb0d-4f77-ad5f-56591659339e",
"targetHandle": "noise",
"id": "reactflow__edge-55705012-79b9-4aac-9f26-c0b10309785bnoise-50a36525-3c0a-4cc5-977c-e4bfc3fd6dfbnoise",
"id": "reactflow__edge-55705012-79b9-4aac-9f26-c0b10309785bnoise-87ee6243-fb0d-4f77-ad5f-56591659339enoise",
"type": "default"
}
]
}
}

File diff suppressed because it is too large.

View File

@@ -18,6 +18,10 @@
{
"nodeId": "93dc02a4-d05b-48ed-b99c-c9b616af3402",
"fieldName": "prompt"
},
{
"nodeId": "75899702-fa44-46d2-b2d5-3e17f234c3e7",
"fieldName": "steps"
}
],
"meta": {
@@ -28,6 +32,7 @@
"id": "93dc02a4-d05b-48ed-b99c-c9b616af3402",
"type": "invocation",
"data": {
"version": "1.0.0",
"id": "93dc02a4-d05b-48ed-b99c-c9b616af3402",
"type": "compel",
"inputs": {
@@ -59,21 +64,20 @@
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
"isIntermediate": true
},
"width": 320,
"height": 261,
"height": 235,
"position": {
"x": 995.7263915923627,
"y": 239.67783573351227
"x": 1400,
"y": -75
}
},
{
"id": "55705012-79b9-4aac-9f26-c0b10309785b",
"type": "invocation",
"data": {
"version": "1.0.0",
"id": "55705012-79b9-4aac-9f26-c0b10309785b",
"type": "noise",
"inputs": {
@@ -134,21 +138,92 @@
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
"isIntermediate": true
},
"width": 320,
"height": 389,
"height": 364,
"position": {
"x": 993.4442117555518,
"y": 605.6757415334787
"x": 1000,
"y": 350
}
},
{
"id": "dbcd2f98-d809-48c8-bf64-2635f88a2fe9",
"type": "invocation",
"data": {
"version": "1.0.0",
"id": "dbcd2f98-d809-48c8-bf64-2635f88a2fe9",
"type": "l2i",
"inputs": {
"tiled": {
"id": "24f5bc7b-f6a1-425d-8ab1-f50b4db5d0df",
"name": "tiled",
"type": "boolean",
"fieldKind": "input",
"label": "",
"value": false
},
"fp32": {
"id": "b146d873-ffb9-4767-986a-5360504841a2",
"name": "fp32",
"type": "boolean",
"fieldKind": "input",
"label": "",
"value": false
},
"latents": {
"id": "65441abd-7713-4b00-9d8d-3771404002e8",
"name": "latents",
"type": "LatentsField",
"fieldKind": "input",
"label": ""
},
"vae": {
"id": "a478b833-6e13-4611-9a10-842c89603c74",
"name": "vae",
"type": "VaeField",
"fieldKind": "input",
"label": ""
}
},
"outputs": {
"image": {
"id": "c87ae925-f858-417a-8940-8708ba9b4b53",
"name": "image",
"type": "ImageField",
"fieldKind": "output"
},
"width": {
"id": "4bcb8512-b5a1-45f1-9e52-6e92849f9d6c",
"name": "width",
"type": "integer",
"fieldKind": "output"
},
"height": {
"id": "23e41c00-a354-48e8-8f59-5875679c27ab",
"name": "height",
"type": "integer",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": true,
"isIntermediate": false
},
"width": 320,
"height": 266,
"position": {
"x": 1800,
"y": 200
}
},
{
"id": "c8d55139-f380-4695-b7f2-8b3d1e1e3db8",
"type": "invocation",
"data": {
"version": "1.0.0",
"id": "c8d55139-f380-4695-b7f2-8b3d1e1e3db8",
"type": "main_model_loader",
"inputs": {
@@ -186,24 +261,23 @@
}
},
"label": "",
"isOpen": true,
"isOpen": false,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
"isIntermediate": true
},
"width": 320,
"height": 226,
"height": 32,
"position": {
"x": 163.04436745878343,
"y": 254.63156870373479
"x": 1000,
"y": 200
}
},
{
"id": "7d8bf987-284f-413a-b2fd-d825445a5d6c",
"type": "invocation",
"data": {
"version": "1.0.0",
"id": "7d8bf987-284f-413a-b2fd-d825445a5d6c",
"type": "compel",
"inputs": {
@@ -235,21 +309,20 @@
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
"isIntermediate": true
},
"width": 320,
"height": 261,
"height": 235,
"position": {
"x": 595.7263915923627,
"y": 239.67783573351227
"x": 1000,
"y": -75
}
},
{
"id": "ea94bc37-d995-4a83-aa99-4af42479f2f2",
"type": "invocation",
"data": {
"version": "1.0.0",
"id": "ea94bc37-d995-4a83-aa99-4af42479f2f2",
"type": "rand_int",
"inputs": {
@@ -279,66 +352,51 @@
}
},
"label": "Random Seed",
"isOpen": true,
"isOpen": false,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": false,
"version": "1.0.0"
"isIntermediate": true
},
"width": 320,
"height": 218,
"height": 32,
"position": {
"x": 541.094822888628,
"y": 694.5704476446829
"x": 1000,
"y": 275
}
},
{
"id": "eea2702a-19fb-45b5-9d75-56b4211ec03c",
"id": "75899702-fa44-46d2-b2d5-3e17f234c3e7",
"type": "invocation",
"data": {
"id": "eea2702a-19fb-45b5-9d75-56b4211ec03c",
"version": "1.0.0",
"id": "75899702-fa44-46d2-b2d5-3e17f234c3e7",
"type": "denoise_latents",
"inputs": {
"positive_conditioning": {
"id": "90b7f4f8-ada7-4028-8100-d2e54f192052",
"name": "positive_conditioning",
"type": "ConditioningField",
"fieldKind": "input",
"label": ""
},
"negative_conditioning": {
"id": "9393779e-796c-4f64-b740-902a1177bf53",
"name": "negative_conditioning",
"type": "ConditioningField",
"fieldKind": "input",
"label": ""
},
"noise": {
"id": "8e17f1e5-4f98-40b1-b7f4-86aeeb4554c1",
"id": "8b18f3eb-40d2-45c1-9a9d-28d6af0dce2b",
"name": "noise",
"type": "LatentsField",
"fieldKind": "input",
"label": ""
},
"steps": {
"id": "9b63302d-6bd2-42c9-ac13-9b1afb51af88",
"id": "0be4373c-46f3-441c-80a7-a4bb6ceb498c",
"name": "steps",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 10
"value": 36
},
"cfg_scale": {
"id": "87dd04d3-870e-49e1-98bf-af003a810109",
"id": "107267ce-4666-4cd7-94b3-7476b7973ae9",
"name": "cfg_scale",
"type": "FloatPolymorphic",
"type": "float",
"fieldKind": "input",
"label": "",
"value": 7.5
},
"denoising_start": {
"id": "f369d80f-4931-4740-9bcd-9f0620719fab",
"id": "d2ce9f0f-5fc2-48b2-b917-53442941e9a1",
"name": "denoising_start",
"type": "float",
"fieldKind": "input",
@@ -346,7 +404,7 @@
"value": 0
},
"denoising_end": {
"id": "747d10e5-6f02-445c-994c-0604d814de8c",
"id": "8ad51505-b8d0-422a-beb8-96fc6fc6b65f",
"name": "denoising_end",
"type": "float",
"fieldKind": "input",
@@ -354,71 +412,71 @@
"value": 1
},
"scheduler": {
"id": "1de84a4e-3a24-4ec8-862b-16ce49633b9b",
"id": "53092874-a43b-4623-91a2-76e62fdb1f2e",
"name": "scheduler",
"type": "Scheduler",
"fieldKind": "input",
"label": "",
"value": "euler"
},
"unet": {
"id": "ffa6fef4-3ce2-4bdb-9296-9a834849489b",
"name": "unet",
"type": "UNetField",
"fieldKind": "input",
"label": ""
},
"control": {
"id": "077b64cb-34be-4fcc-83f2-e399807a02bd",
"id": "7abe57cc-469d-437e-ad72-a18efa28215f",
"name": "control",
"type": "ControlPolymorphic",
"fieldKind": "input",
"label": ""
},
"ip_adapter": {
"id": "1d6948f7-3a65-4a65-a20c-768b287251aa",
"name": "ip_adapter",
"type": "IPAdapterPolymorphic",
"fieldKind": "input",
"label": ""
},
"t2i_adapter": {
"id": "75e67b09-952f-4083-aaf4-6b804d690412",
"name": "t2i_adapter",
"type": "T2IAdapterPolymorphic",
"type": "ControlField",
"fieldKind": "input",
"label": ""
},
"latents": {
"id": "334d4ba3-5a99-4195-82c5-86fb3f4f7d43",
"id": "add8bbe5-14d0-42d4-a867-9c65ab8dd129",
"name": "latents",
"type": "LatentsField",
"fieldKind": "input",
"label": ""
},
"denoise_mask": {
"id": "0d3dbdbf-b014-4e95-8b18-ff2ff9cb0bfa",
"id": "f373a190-0fc8-45b7-ae62-c4aa8e9687e1",
"name": "denoise_mask",
"type": "DenoiseMaskField",
"fieldKind": "input",
"label": ""
},
"positive_conditioning": {
"id": "c7160303-8a23-4f15-9197-855d48802a7f",
"name": "positive_conditioning",
"type": "ConditioningField",
"fieldKind": "input",
"label": ""
},
"negative_conditioning": {
"id": "fd750efa-1dfc-4d0b-accb-828e905ba320",
"name": "negative_conditioning",
"type": "ConditioningField",
"fieldKind": "input",
"label": ""
},
"unet": {
"id": "af1f41ba-ce2a-4314-8d7f-494bb5800381",
"name": "unet",
"type": "UNetField",
"fieldKind": "input",
"label": ""
}
},
"outputs": {
"latents": {
"id": "70fa5bbc-0c38-41bb-861a-74d6d78d2f38",
"id": "8508d04d-f999-4a44-94d0-388ab1401d27",
"name": "latents",
"type": "LatentsField",
"fieldKind": "output"
},
"width": {
"id": "98ee0e6c-82aa-4e8f-8be5-dc5f00ee47f0",
"id": "93dc8287-0a2a-4320-83a4-5e994b7ba23e",
"name": "width",
"type": "integer",
"fieldKind": "output"
},
"height": {
"id": "e8cb184a-5e1a-47c8-9695-4b8979564f5d",
"id": "d9862f5c-0ab5-46fa-8c29-5059bb581d96",
"name": "height",
"type": "integer",
"fieldKind": "output"
@@ -428,95 +486,13 @@
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.4.0"
"isIntermediate": true
},
"width": 320,
"height": 646,
"height": 558,
"position": {
"x": 1476.5794704734735,
"y": 256.80174342731783
}
},
{
"id": "58c957f5-0d01-41fc-a803-b2bbf0413d4f",
"type": "invocation",
"data": {
"id": "58c957f5-0d01-41fc-a803-b2bbf0413d4f",
"type": "l2i",
"inputs": {
"metadata": {
"id": "ab375f12-0042-4410-9182-29e30db82c85",
"name": "metadata",
"type": "MetadataField",
"fieldKind": "input",
"label": ""
},
"latents": {
"id": "3a7e7efd-bff5-47d7-9d48-615127afee78",
"name": "latents",
"type": "LatentsField",
"fieldKind": "input",
"label": ""
},
"vae": {
"id": "a1f5f7a1-0795-4d58-b036-7820c0b0ef2b",
"name": "vae",
"type": "VaeField",
"fieldKind": "input",
"label": ""
},
"tiled": {
"id": "da52059a-0cee-4668-942f-519aa794d739",
"name": "tiled",
"type": "boolean",
"fieldKind": "input",
"label": "",
"value": false
},
"fp32": {
"id": "c4841df3-b24e-4140-be3b-ccd454c2522c",
"name": "fp32",
"type": "boolean",
"fieldKind": "input",
"label": "",
"value": false
}
},
"outputs": {
"image": {
"id": "72d667d0-cf85-459d-abf2-28bd8b823fe7",
"name": "image",
"type": "ImageField",
"fieldKind": "output"
},
"width": {
"id": "c8c907d8-1066-49d1-b9a6-83bdcd53addc",
"name": "width",
"type": "integer",
"fieldKind": "output"
},
"height": {
"id": "230f359c-b4ea-436c-b372-332d7dcdca85",
"name": "height",
"type": "integer",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": false,
"useCache": true,
"version": "1.0.0"
},
"width": 320,
"height": 267,
"position": {
"x": 2037.9648469717395,
"y": 426.10844427600136
"x": 1400,
"y": 200
}
}
],
@@ -546,52 +522,52 @@
"type": "default"
},
{
"source": "55705012-79b9-4aac-9f26-c0b10309785b",
"sourceHandle": "noise",
"target": "eea2702a-19fb-45b5-9d75-56b4211ec03c",
"targetHandle": "noise",
"id": "reactflow__edge-55705012-79b9-4aac-9f26-c0b10309785bnoise-eea2702a-19fb-45b5-9d75-56b4211ec03cnoise",
"source": "c8d55139-f380-4695-b7f2-8b3d1e1e3db8",
"sourceHandle": "vae",
"target": "dbcd2f98-d809-48c8-bf64-2635f88a2fe9",
"targetHandle": "vae",
"id": "reactflow__edge-c8d55139-f380-4695-b7f2-8b3d1e1e3db8vae-dbcd2f98-d809-48c8-bf64-2635f88a2fe9vae",
"type": "default"
},
{
"source": "75899702-fa44-46d2-b2d5-3e17f234c3e7",
"sourceHandle": "latents",
"target": "dbcd2f98-d809-48c8-bf64-2635f88a2fe9",
"targetHandle": "latents",
"id": "reactflow__edge-75899702-fa44-46d2-b2d5-3e17f234c3e7latents-dbcd2f98-d809-48c8-bf64-2635f88a2fe9latents",
"type": "default"
},
{
"source": "7d8bf987-284f-413a-b2fd-d825445a5d6c",
"sourceHandle": "conditioning",
"target": "eea2702a-19fb-45b5-9d75-56b4211ec03c",
"target": "75899702-fa44-46d2-b2d5-3e17f234c3e7",
"targetHandle": "positive_conditioning",
"id": "reactflow__edge-7d8bf987-284f-413a-b2fd-d825445a5d6cconditioning-eea2702a-19fb-45b5-9d75-56b4211ec03cpositive_conditioning",
"id": "reactflow__edge-7d8bf987-284f-413a-b2fd-d825445a5d6cconditioning-75899702-fa44-46d2-b2d5-3e17f234c3e7positive_conditioning",
"type": "default"
},
{
"source": "93dc02a4-d05b-48ed-b99c-c9b616af3402",
"sourceHandle": "conditioning",
"target": "eea2702a-19fb-45b5-9d75-56b4211ec03c",
"target": "75899702-fa44-46d2-b2d5-3e17f234c3e7",
"targetHandle": "negative_conditioning",
"id": "reactflow__edge-93dc02a4-d05b-48ed-b99c-c9b616af3402conditioning-eea2702a-19fb-45b5-9d75-56b4211ec03cnegative_conditioning",
"id": "reactflow__edge-93dc02a4-d05b-48ed-b99c-c9b616af3402conditioning-75899702-fa44-46d2-b2d5-3e17f234c3e7negative_conditioning",
"type": "default"
},
{
"source": "c8d55139-f380-4695-b7f2-8b3d1e1e3db8",
"sourceHandle": "unet",
"target": "eea2702a-19fb-45b5-9d75-56b4211ec03c",
"target": "75899702-fa44-46d2-b2d5-3e17f234c3e7",
"targetHandle": "unet",
"id": "reactflow__edge-c8d55139-f380-4695-b7f2-8b3d1e1e3db8unet-eea2702a-19fb-45b5-9d75-56b4211ec03cunet",
"id": "reactflow__edge-c8d55139-f380-4695-b7f2-8b3d1e1e3db8unet-75899702-fa44-46d2-b2d5-3e17f234c3e7unet",
"type": "default"
},
{
"source": "eea2702a-19fb-45b5-9d75-56b4211ec03c",
"sourceHandle": "latents",
"target": "58c957f5-0d01-41fc-a803-b2bbf0413d4f",
"targetHandle": "latents",
"id": "reactflow__edge-eea2702a-19fb-45b5-9d75-56b4211ec03clatents-58c957f5-0d01-41fc-a803-b2bbf0413d4flatents",
"type": "default"
},
{
"source": "c8d55139-f380-4695-b7f2-8b3d1e1e3db8",
"sourceHandle": "vae",
"target": "58c957f5-0d01-41fc-a803-b2bbf0413d4f",
"targetHandle": "vae",
"id": "reactflow__edge-c8d55139-f380-4695-b7f2-8b3d1e1e3db8vae-58c957f5-0d01-41fc-a803-b2bbf0413d4fvae",
"source": "55705012-79b9-4aac-9f26-c0b10309785b",
"sourceHandle": "noise",
"target": "75899702-fa44-46d2-b2d5-3e17f234c3e7",
"targetHandle": "noise",
"id": "reactflow__edge-55705012-79b9-4aac-9f26-c0b10309785bnoise-75899702-fa44-46d2-b2d5-3e17f234c3e7noise",
"type": "default"
}
]
}
}

View File

@@ -460,10 +460,10 @@ def get_torch_source() -> (Union[str, None], str):
url = "https://download.pytorch.org/whl/cpu"
if device == "cuda":
url = "https://download.pytorch.org/whl/cu121"
url = "https://download.pytorch.org/whl/cu118"
optional_modules = "[xformers,onnx-cuda]"
if device == "cuda_and_dml":
url = "https://download.pytorch.org/whl/cu121"
url = "https://download.pytorch.org/whl/cu118"
optional_modules = "[xformers,onnx-directml]"
# in all other cases, Torch wheels should be coming from PyPI as of Torch 1.13

View File

@@ -92,10 +92,6 @@ class FieldDescriptions:
inclusive_low = "The inclusive low value"
exclusive_high = "The exclusive high value"
decimal_places = "The number of decimal places to round to"
freeu_s1 = 'Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to mitigate the "oversmoothing effect" in the enhanced denoising process.'
freeu_s2 = 'Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to mitigate the "oversmoothing effect" in the enhanced denoising process.'
freeu_b1 = "Scaling factor for stage 1 to amplify the contributions of backbone features."
freeu_b2 = "Scaling factor for stage 2 to amplify the contributions of backbone features."
class Input(str, Enum):

View File

@@ -108,14 +108,13 @@ class CompelInvocation(BaseInvocation):
print(f'Warn: trigger: "{trigger}" not found')
with (
ModelPatcher.apply_lora_text_encoder(text_encoder_info.context.model, _lora_loader()),
ModelPatcher.apply_ti(tokenizer_info.context.model, text_encoder_info.context.model, ti_list) as (
tokenizer,
ti_manager,
),
ModelPatcher.apply_clip_skip(text_encoder_info.context.model, self.clip.skipped_layers),
text_encoder_info as text_encoder,
# Apply the LoRA after text_encoder has been moved to its target device for faster patching.
ModelPatcher.apply_lora_text_encoder(text_encoder, _lora_loader()),
):
compel = Compel(
tokenizer=tokenizer,
@@ -230,14 +229,13 @@ class SDXLPromptInvocationBase:
print(f'Warn: trigger: "{trigger}" not found')
with (
ModelPatcher.apply_lora(text_encoder_info.context.model, _lora_loader(), lora_prefix),
ModelPatcher.apply_ti(tokenizer_info.context.model, text_encoder_info.context.model, ti_list) as (
tokenizer,
ti_manager,
),
ModelPatcher.apply_clip_skip(text_encoder_info.context.model, clip_field.skipped_layers),
text_encoder_info as text_encoder,
# Apply the LoRA after text_encoder has been moved to its target device for faster patching.
ModelPatcher.apply_lora(text_encoder, _lora_loader(), lora_prefix),
):
compel = Compel(
tokenizer=tokenizer,
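
The reordering above reflects a general pattern: enter the context manager that moves the text encoder to its target device first, then enter the LoRA patcher, so weight patching runs on the already-moved model (the "faster patching" noted in the comment). A generic sketch under that assumption, reusing names from the hunk:

from contextlib import ExitStack

with ExitStack() as stack:
    # Move the model to its target device first...
    text_encoder = stack.enter_context(text_encoder_info)
    # ...then patch LoRA weights on-device, which the comment above reports
    # to be faster than patching before the move.
    stack.enter_context(ModelPatcher.apply_lora_text_encoder(text_encoder, _lora_loader()))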

View File

@@ -67,7 +67,7 @@ class IPAdapterInvocation(BaseInvocation):
# weight: float = InputField(default=1.0, description="The weight of the IP-Adapter.", ui_type=UIType.Float)
weight: Union[float, List[float]] = InputField(
default=1, ge=-1, description="The weight given to the IP-Adapter", ui_type=UIType.Float, title="Weight"
default=1, ge=0, description="The weight given to the IP-Adapter", ui_type=UIType.Float, title="Weight"
)
begin_step_percent: float = InputField(

View File

@@ -2,7 +2,7 @@
from contextlib import ExitStack
from functools import singledispatchmethod
from typing import List, Literal, Optional, Union
from typing import Callable, List, Literal, Optional, Union
import einops
import numpy as np
@@ -651,8 +651,20 @@ class DenoiseLatentsInvocation(BaseInvocation):
return 1 - mask, masked_latents
@torch.no_grad()
def invoke(self, context: InvocationContext) -> LatentsOutput:
# Get the source node id (we are invoking the prepared node)
graph_execution_state = context.services.graph_execution_manager.get(context.graph_execution_state_id)
source_node_id = graph_execution_state.prepared_source_mapping[self.id]
def step_callback(state: PipelineIntermediateState):
self.dispatch_progress(context, source_node_id, state, self.unet.unet.base_model)
return self.denoise(context, step_callback)
@torch.no_grad()
def denoise(
self, context: InvocationContext, step_callback: Callable[[PipelineIntermediateState], None]
) -> LatentsOutput:
with SilenceWarnings(): # this quenches NSFW nag from diffusers
seed = None
noise = None
@@ -687,13 +699,6 @@ class DenoiseLatentsInvocation(BaseInvocation):
do_classifier_free_guidance=True,
)
# Get the source node id (we are invoking the prepared node)
graph_execution_state = context.services.graph_execution_manager.get(context.graph_execution_state_id)
source_node_id = graph_execution_state.prepared_source_mapping[self.id]
def step_callback(state: PipelineIntermediateState):
self.dispatch_progress(context, source_node_id, state, self.unet.unet.base_model)
def _lora_loader():
for lora in self.unet.loras:
lora_info = context.services.model_manager.get_model(
@@ -711,11 +716,8 @@ class DenoiseLatentsInvocation(BaseInvocation):
with (
ExitStack() as exit_stack,
ModelPatcher.apply_lora_unet(unet_info.context.model, _lora_loader()),
ModelPatcher.apply_freeu(unet_info.context.model, self.unet.freeu_config),
set_seamless(unet_info.context.model, self.unet.seamless_axes),
unet_info as unet,
# Apply the LoRA after unet has been moved to its target device for faster patching.
ModelPatcher.apply_lora_unet(unet, _lora_loader()),
):
latents = latents.to(device=unet.device, dtype=unet.dtype)
if noise is not None:
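
The hunks above are the heart of this commit ("extract denoise function"): the body of DenoiseLatentsInvocation.invoke moves into a dedicated denoise method, leaving invoke to resolve the source node id, build a step_callback closure, and delegate. A condensed sketch of the resulting shape (signatures simplified; the real methods take an InvocationContext and return a LatentsOutput):

import torch

class DenoiseLatentsInvocation:
    @torch.no_grad()
    def invoke(self, context):
        # Resolve the prepared node's source id up front so progress events
        # can be attributed to the right node in the graph.
        graph_state = context.services.graph_execution_manager.get(context.graph_execution_state_id)
        source_node_id = graph_state.prepared_source_mapping[self.id]

        def step_callback(state):
            self.dispatch_progress(context, source_node_id, state, self.unet.unet.base_model)

        # Everything else now lives in denoise(), which only needs the
        # context and a progress callback.
        return self.denoise(context, step_callback)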

View File

@@ -182,8 +182,8 @@ class IntegerMathInvocation(BaseInvocation):
operation: INTEGER_OPERATIONS = InputField(
default="ADD", description="The operation to perform", ui_choice_labels=INTEGER_OPERATIONS_LABELS
)
a: int = InputField(default=1, description=FieldDescriptions.num_1)
b: int = InputField(default=1, description=FieldDescriptions.num_2)
a: int = InputField(default=0, description=FieldDescriptions.num_1)
b: int = InputField(default=0, description=FieldDescriptions.num_2)
@field_validator("b")
def no_unrepresentable_results(cls, v: int, info: ValidationInfo):
@@ -256,8 +256,8 @@ class FloatMathInvocation(BaseInvocation):
operation: FLOAT_OPERATIONS = InputField(
default="ADD", description="The operation to perform", ui_choice_labels=FLOAT_OPERATIONS_LABELS
)
a: float = InputField(default=1, description=FieldDescriptions.num_1)
b: float = InputField(default=1, description=FieldDescriptions.num_2)
a: float = InputField(default=0, description=FieldDescriptions.num_1)
b: float = InputField(default=0, description=FieldDescriptions.num_2)
@field_validator("b")
def no_unrepresentable_results(cls, v: float, info: ValidationInfo):

View File

@@ -107,16 +107,11 @@ class MergeMetadataInvocation(BaseInvocation):
return MetadataOutput(metadata=MetadataField.model_validate(data))
GENERATION_MODES = Literal[
"txt2img", "img2img", "inpaint", "outpaint", "sdxl_txt2img", "sdxl_img2img", "sdxl_inpaint", "sdxl_outpaint"
]
@invocation("core_metadata", title="Core Metadata", tags=["metadata"], category="metadata", version="1.0.0")
class CoreMetadataInvocation(BaseInvocation):
"""Collects core generation metadata into a MetadataField"""
generation_mode: Optional[GENERATION_MODES] = InputField(
generation_mode: Literal["txt2img", "img2img", "inpaint", "outpaint"] = InputField(
default=None,
description="The generation mode that output this image",
)

View File

@@ -17,22 +17,6 @@ from .baseinvocation import (
invocation_output,
)
# TODO: Permanent fix for this
# from invokeai.app.invocations.shared import FreeUConfig
class FreeUConfig(BaseModel):
"""
Configuration for the FreeU hyperparameters.
- https://huggingface.co/docs/diffusers/main/en/using-diffusers/freeu
- https://github.com/ChenyangSi/FreeU
"""
s1: float = Field(ge=-1, le=3, description=FieldDescriptions.freeu_s1)
s2: float = Field(ge=-1, le=3, description=FieldDescriptions.freeu_s2)
b1: float = Field(ge=-1, le=3, description=FieldDescriptions.freeu_b1)
b2: float = Field(ge=-1, le=3, description=FieldDescriptions.freeu_b2)
class ModelInfo(BaseModel):
model_name: str = Field(description="Info to load submodel")
@@ -52,7 +36,6 @@ class UNetField(BaseModel):
scheduler: ModelInfo = Field(description="Info to load scheduler submodel")
loras: List[LoraInfo] = Field(description="Loras to apply on model loading")
seamless_axes: List[str] = Field(default_factory=list, description='Axes("x" and "y") to which apply seamless')
freeu_config: Optional[FreeUConfig] = Field(default=None, description="FreeU configuration")
class ClipField(BaseModel):
@@ -68,32 +51,13 @@ class VaeField(BaseModel):
seamless_axes: List[str] = Field(default_factory=list, description='Axes("x" and "y") to which apply seamless')
@invocation_output("unet_output")
class UNetOutput(BaseInvocationOutput):
"""Base class for invocations that output a UNet field"""
unet: UNetField = OutputField(description=FieldDescriptions.unet, title="UNet")
@invocation_output("vae_output")
class VAEOutput(BaseInvocationOutput):
"""Base class for invocations that output a VAE field"""
vae: VaeField = OutputField(description=FieldDescriptions.vae, title="VAE")
@invocation_output("clip_output")
class CLIPOutput(BaseInvocationOutput):
"""Base class for invocations that output a CLIP field"""
clip: ClipField = OutputField(description=FieldDescriptions.clip, title="CLIP")
@invocation_output("model_loader_output")
class ModelLoaderOutput(UNetOutput, CLIPOutput, VAEOutput):
class ModelLoaderOutput(BaseInvocationOutput):
"""Model loader output"""
pass
unet: UNetField = OutputField(description=FieldDescriptions.unet, title="UNet")
clip: ClipField = OutputField(description=FieldDescriptions.clip, title="CLIP")
vae: VaeField = OutputField(description=FieldDescriptions.vae, title="VAE")
class MainModelField(BaseModel):
@@ -402,6 +366,13 @@ class VAEModelField(BaseModel):
model_config = ConfigDict(protected_namespaces=())
@invocation_output("vae_loader_output")
class VaeLoaderOutput(BaseInvocationOutput):
"""VAE output"""
vae: VaeField = OutputField(description=FieldDescriptions.vae, title="VAE")
@invocation("vae_loader", title="VAE", tags=["vae", "model"], category="model", version="1.0.0")
class VaeLoaderInvocation(BaseInvocation):
"""Loads a VAE model, outputting a VaeLoaderOutput"""
@@ -413,7 +384,7 @@ class VaeLoaderInvocation(BaseInvocation):
title="VAE",
)
def invoke(self, context: InvocationContext) -> VAEOutput:
def invoke(self, context: InvocationContext) -> VaeLoaderOutput:
base_model = self.vae_model.base_model
model_name = self.vae_model.model_name
model_type = ModelType.Vae
@@ -424,7 +395,7 @@ class VaeLoaderInvocation(BaseInvocation):
model_type=model_type,
):
raise Exception(f"Unkown vae name: {model_name}!")
return VAEOutput(
return VaeLoaderOutput(
vae=VaeField(
vae=ModelInfo(
model_name=model_name,
@@ -486,24 +457,3 @@ class SeamlessModeInvocation(BaseInvocation):
vae.seamless_axes = seamless_axes_list
return SeamlessModeOutput(unet=unet, vae=vae)
@invocation("freeu", title="FreeU", tags=["freeu"], category="unet", version="1.0.0")
class FreeUInvocation(BaseInvocation):
"""
Applies FreeU to the UNet. Suggested values (b1/b2/s1/s2):
SD1.5: 1.2/1.4/0.9/0.2,
SD2: 1.1/1.2/0.9/0.2,
SDXL: 1.1/1.2/0.6/0.4,
"""
unet: UNetField = InputField(description=FieldDescriptions.unet, input=Input.Connection, title="UNet")
b1: float = InputField(default=1.2, ge=-1, le=3, description=FieldDescriptions.freeu_b1)
b2: float = InputField(default=1.4, ge=-1, le=3, description=FieldDescriptions.freeu_b2)
s1: float = InputField(default=0.9, ge=-1, le=3, description=FieldDescriptions.freeu_s1)
s2: float = InputField(default=0.2, ge=-1, le=3, description=FieldDescriptions.freeu_s2)
def invoke(self, context: InvocationContext) -> UNetOutput:
self.unet.freeu_config = FreeUConfig(s1=self.s1, s2=self.s2, b1=self.b1, b2=self.b2)
return UNetOutput(unet=self.unet)

View File

@@ -293,7 +293,7 @@ class DenoiseMaskField(BaseModel):
"""An inpaint mask field"""
mask_name: str = Field(description="The name of the mask image")
masked_latents_name: Optional[str] = Field(default=None, description="The name of the masked image latents")
masked_latents_name: Optional[str] = Field(description="The name of the masked image latents")
@invocation_output("denoise_mask_output")
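
The one-line change above adds an explicit default=None to masked_latents_name. Under pydantic v2, an Optional[...] annotation alone no longer implies a default, so without it the field is required. A minimal sketch of the difference:

from typing import Optional
from pydantic import BaseModel, Field

class WithDefault(BaseModel):
    masked_latents_name: Optional[str] = Field(default=None, description="...")

class WithoutDefault(BaseModel):
    masked_latents_name: Optional[str] = Field(description="...")

WithDefault()      # ok: the field defaults to None
# WithoutDefault() # pydantic v2 raises a validation error: field required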

View File

@@ -1,16 +0,0 @@
from pydantic import BaseModel, Field
from invokeai.app.invocations.baseinvocation import FieldDescriptions
class FreeUConfig(BaseModel):
"""
Configuration for the FreeU hyperparameters.
- https://huggingface.co/docs/diffusers/main/en/using-diffusers/freeu
- https://github.com/ChenyangSi/FreeU
"""
s1: float = Field(ge=-1, le=3, description=FieldDescriptions.freeu_s1)
s2: float = Field(ge=-1, le=3, description=FieldDescriptions.freeu_s2)
b1: float = Field(ge=-1, le=3, description=FieldDescriptions.freeu_b1)
b2: float = Field(ge=-1, le=3, description=FieldDescriptions.freeu_b2)
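
For context on the FreeUConfig model removed above, a minimal usage sketch with the suggested SD1.5 values from the FreeUInvocation docstring earlier in this diff (b1/b2/s1/s2 = 1.2/1.4/0.9/0.2):

# Suggested SD1.5 hyperparameters, per the FreeUInvocation docstring:
config = FreeUConfig(s1=0.9, s2=0.2, b1=1.2, b2=1.4)
# ModelPatcher.apply_freeu (see the model_patcher diff below) forwards these
# to diffusers via unet.enable_freeu(...) and calls unet.disable_freeu() on exit.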

View File

@@ -45,7 +45,6 @@ InvokeAI:
ram: 13.5
vram: 0.25
lazy_offload: true
log_memory_usage: false
Device:
device: auto
precision: auto
@@ -262,7 +261,6 @@ class InvokeAIAppConfig(InvokeAISettings):
ram : float = Field(default=7.5, gt=0, description="Maximum memory amount used by model cache for rapid switching (floating point number, GB)", json_schema_extra=Categories.ModelCache, )
vram : float = Field(default=0.25, ge=0, description="Amount of VRAM reserved for model storage (floating point number, GB)", json_schema_extra=Categories.ModelCache, )
lazy_offload : bool = Field(default=True, description="Keep models in VRAM until their space is needed", json_schema_extra=Categories.ModelCache, )
log_memory_usage : bool = Field(default=False, description="If True, a memory snapshot will be captured before and after every model cache operation, and the result will be logged (at debug level). There is a time cost to capturing the memory snapshots, so it is recommended to only enable this feature if you are actively inspecting the model cache's behaviour.", json_schema_extra=Categories.ModelCache)
# DEVICE
device : Literal["auto", "cpu", "cuda", "cuda:1", "mps"] = Field(default="auto", description="Generation device", json_schema_extra=Categories.Device)

View File

@@ -1 +0,0 @@
from .model_manager_default import ModelManagerService # noqa F401

View File

@@ -57,7 +57,7 @@ class SqliteWorkflowRecordsStorage(WorkflowRecordsStorageBase):
INSERT INTO workflows(workflow)
VALUES (?);
""",
(workflow.model_dump_json(),),
(workflow.json(),),
)
self._conn.commit()
except Exception:

View File

@@ -460,12 +460,6 @@ class ModelInstall(object):
possible_conf = path.with_suffix(".yaml")
if possible_conf.exists():
legacy_conf = str(self.relative_to_root(possible_conf))
else:
legacy_conf = Path(
self.config.root_path,
"configs/controlnet",
("cldm_v15.yaml" if info.base_type == BaseModelType("sd-1") else "cldm_v21.yaml"),
)
if legacy_conf:
attributes.update(dict(config=str(legacy_conf)))

View File

@@ -1,6 +1,6 @@
from __future__ import annotations
import pickle
import copy
from contextlib import contextmanager
from pathlib import Path
from typing import Any, Dict, List, Optional, Tuple, Union
@@ -12,8 +12,6 @@ from diffusers.models import UNet2DConditionModel
from safetensors.torch import load_file
from transformers import CLIPTextModel, CLIPTokenizer
from invokeai.app.invocations.shared import FreeUConfig
from .models.lora import LoRAModel
"""
@@ -56,6 +54,24 @@ class ModelPatcher:
return (module_key, module)
@staticmethod
def _lora_forward_hook(
applied_loras: List[Tuple[LoRAModel, float]],
layer_name: str,
):
def lora_forward(module, input_h, output):
if len(applied_loras) == 0:
return output
for lora, weight in applied_loras:
layer = lora.layers.get(layer_name, None)
if layer is None:
continue
output += layer.forward(module, input_h, weight)
return output
return lora_forward
@classmethod
@contextmanager
def apply_lora_unet(
@@ -113,40 +129,21 @@ class ModelPatcher:
if not layer_key.startswith(prefix):
continue
# TODO(ryand): A non-negligible amount of time is currently spent resolving LoRA keys. This
# should be improved in the following ways:
# 1. The key mapping could be more-efficiently pre-computed. This would save time every time a
# LoRA model is applied.
# 2. From an API perspective, there's no reason that the `ModelPatcher` should be aware of the
# intricacies of Stable Diffusion key resolution. It should just expect the input LoRA
# weights to have valid keys.
module_key, module = cls._resolve_lora_key(model, layer_key, prefix)
# All of the LoRA weight calculations will be done on the same device as the module weight.
# (Performance will be best if this is a CUDA device.)
device = module.weight.device
dtype = module.weight.dtype
if module_key not in original_weights:
original_weights[module_key] = module.weight.detach().to(device="cpu", copy=True)
layer_scale = layer.alpha / layer.rank if (layer.alpha and layer.rank) else 1.0
# We intentionally move to the target device first, then cast. Experimentally, this was found to
# be significantly faster for 16-bit CPU tensors being moved to a CUDA device than doing the
# same thing in a single call to '.to(...)'.
layer.to(device=device)
# enable autocast to calc fp16 loras on cpu
# with torch.autocast(device_type="cpu"):
layer.to(dtype=torch.float32)
# TODO(ryand): Using torch.autocast(...) over explicit casting may offer a speed benefit on CUDA
# devices here. Experimentally, it was found to be very slow on CPU. More investigation needed.
layer_weight = layer.get_weight(module.weight) * (lora_weight * layer_scale)
layer.to(device="cpu")
layer_scale = layer.alpha / layer.rank if (layer.alpha and layer.rank) else 1.0
layer_weight = layer.get_weight(original_weights[module_key]) * lora_weight * layer_scale
if module.weight.shape != layer_weight.shape:
# TODO: debug on lycoris
layer_weight = layer_weight.reshape(module.weight.shape)
module.weight += layer_weight.to(dtype=dtype)
module.weight += layer_weight.to(device=module.weight.device, dtype=module.weight.dtype)
yield # wait for context manager exit
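
The arithmetic in the hunk above: the patch added to each module weight is layer.get_weight(...) scaled by lora_weight * layer_scale, where layer_scale = alpha / rank when both are set and 1.0 otherwise. A toy illustration with hypothetical shapes and values:

import torch

weight = torch.zeros(4, 4)   # stands in for module.weight
delta = torch.ones(4, 4)     # stands in for layer.get_weight(...)
alpha, rank, lora_weight = 8.0, 16.0, 0.75

layer_scale = alpha / rank if (alpha and rank) else 1.0   # 0.5
weight += delta * (lora_weight * layer_scale)             # every entry is now 0.375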
@@ -167,13 +164,7 @@ class ModelPatcher:
new_tokens_added = None
try:
# HACK: The CLIPTokenizer API does not include a way to remove tokens after calling add_tokens(...). As a
# workaround, we create a full copy of `tokenizer` so that its original behavior can be restored after
# exiting this `apply_ti(...)` context manager.
#
# In a previous implementation, the deep copy was obtained with `ti_tokenizer = copy.deepcopy(tokenizer)`,
# but a pickle roundtrip was found to be much faster (1 sec vs. 0.05 secs).
ti_tokenizer = pickle.loads(pickle.dumps(tokenizer))
ti_tokenizer = copy.deepcopy(tokenizer)
ti_manager = TextualInversionManager(ti_tokenizer)
init_tokens_count = text_encoder.resize_token_embeddings(None).num_embeddings
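
Both tokenizer-copy strategies from the hunk above, side by side. Per the removed comment, a pickle roundtrip was measured at roughly 0.05 s versus about 1 s for copy.deepcopy on a CLIPTokenizer:

import copy
import pickle

def copy_via_pickle(obj):
    # Pickle roundtrip: ~0.05 s for a CLIPTokenizer, per the comment above.
    return pickle.loads(pickle.dumps(obj))

def copy_via_deepcopy(obj):
    # copy.deepcopy: ~1 s for the same tokenizer, per the comment above.
    return copy.deepcopy(obj)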
@@ -205,9 +196,7 @@ class ModelPatcher:
if model_embeddings.weight.data[token_id].shape != embedding.shape:
raise ValueError(
f"Cannot load embedding for {trigger}. It was trained on a model with token dimension"
f" {embedding.shape[0]}, but the current model has token dimension"
f" {model_embeddings.weight.data[token_id].shape[0]}."
f"Cannot load embedding for {trigger}. It was trained on a model with token dimension {embedding.shape[0]}, but the current model has token dimension {model_embeddings.weight.data[token_id].shape[0]}."
)
model_embeddings.weight.data[token_id] = embedding.to(
@@ -242,25 +231,6 @@ class ModelPatcher:
while len(skipped_layers) > 0:
text_encoder.text_model.encoder.layers.append(skipped_layers.pop())
@classmethod
@contextmanager
def apply_freeu(
cls,
unet: UNet2DConditionModel,
freeu_config: Optional[FreeUConfig] = None,
):
did_apply_freeu = False
try:
if freeu_config is not None:
unet.enable_freeu(b1=freeu_config.b1, b2=freeu_config.b2, s1=freeu_config.s1, s2=freeu_config.s2)
did_apply_freeu = True
yield
finally:
if did_apply_freeu:
unet.disable_freeu()
class TextualInversionModel:
embedding: torch.Tensor # [n, 768]|[n, 1280]
@@ -287,8 +257,7 @@ class TextualInversionModel:
if "string_to_param" in state_dict:
if len(state_dict["string_to_param"]) > 1:
print(
f'Warn: Embedding "{file_path.name}" contains multiple tokens, which is not supported. The first'
" token will be used."
f'Warn: Embedding "{file_path.name}" contains multiple tokens, which is not supported. The first token will be used.'
)
result.embedding = next(iter(state_dict["string_to_param"].values()))
@@ -466,13 +435,7 @@ class ONNXModelPatcher:
orig_embeddings = None
try:
# HACK: The CLIPTokenizer API does not include a way to remove tokens after calling add_tokens(...). As a
# workaround, we create a full copy of `tokenizer` so that its original behavior can be restored after
# exiting this `apply_ti(...)` context manager.
#
# In a previous implementation, the deep copy was obtained with `ti_tokenizer = copy.deepcopy(tokenizer)`,
# but a pickle roundtrip was found to be much faster (1 sec vs. 0.05 secs).
ti_tokenizer = pickle.loads(pickle.dumps(tokenizer))
ti_tokenizer = copy.deepcopy(tokenizer)
ti_manager = TextualInversionManager(ti_tokenizer)
def _get_trigger(ti_name, index):
@@ -507,9 +470,7 @@ class ONNXModelPatcher:
if embeddings[token_id].shape != embedding.shape:
raise ValueError(
f"Cannot load embedding for {trigger}. It was trained on a model with token dimension"
f" {embedding.shape[0]}, but the current model has token dimension"
f" {embeddings[token_id].shape[0]}."
f"Cannot load embedding for {trigger}. It was trained on a model with token dimension {embedding.shape[0]}, but the current model has token dimension {embeddings[token_id].shape[0]}."
)
embeddings[token_id] = embedding

View File

@@ -64,7 +64,7 @@ class MemorySnapshot:
return cls(process_ram, vram, malloc_info)
def get_pretty_snapshot_diff(snapshot_1: Optional[MemorySnapshot], snapshot_2: Optional[MemorySnapshot]) -> str:
def get_pretty_snapshot_diff(snapshot_1: MemorySnapshot, snapshot_2: MemorySnapshot) -> str:
"""Get a pretty string describing the difference between two `MemorySnapshot`s."""
def get_msg_line(prefix: str, val1: int, val2: int):
@@ -73,9 +73,6 @@ def get_pretty_snapshot_diff(snapshot_1: Optional[MemorySnapshot], snapshot_2: O
msg = ""
if snapshot_1 is None or snapshot_2 is None:
return msg
msg += get_msg_line("Process RAM", snapshot_1.process_ram, snapshot_2.process_ram)
if snapshot_1.malloc_info is not None and snapshot_2.malloc_info is not None:

View File

@@ -117,7 +117,6 @@ class ModelCache(object):
lazy_offloading: bool = True,
sha_chunksize: int = 16777216,
logger: types.ModuleType = logger,
log_memory_usage: bool = False,
):
"""
:param max_cache_size: Maximum size of the RAM cache [6.0 GB]
@@ -127,10 +126,6 @@ class ModelCache(object):
:param lazy_offloading: Keep model in VRAM until another model needs to be loaded
:param sequential_offload: Conserve VRAM by loading and unloading each stage of the pipeline sequentially
:param sha_chunksize: Chunksize to use when calculating sha256 model hash
:param log_memory_usage: If True, a memory snapshot will be captured before and after every model cache
operation, and the result will be logged (at debug level). There is a time cost to capturing the memory
snapshots, so it is recommended to disable this feature unless you are actively inspecting the model cache's
behaviour.
"""
self.model_infos: Dict[str, ModelBase] = dict()
# allow lazy offloading only when vram cache enabled
@@ -142,7 +137,6 @@ class ModelCache(object):
self.storage_device: torch.device = storage_device
self.sha_chunksize = sha_chunksize
self.logger = logger
self._log_memory_usage = log_memory_usage
# used for stats collection
self.stats = None
@@ -150,11 +144,6 @@ class ModelCache(object):
self._cached_models = dict()
self._cache_stack = list()
def _capture_memory_snapshot(self) -> Optional[MemorySnapshot]:
if self._log_memory_usage:
return MemorySnapshot.capture()
return None
def get_key(
self,
model_path: str,
@@ -234,10 +223,10 @@ class ModelCache(object):
# Load the model from disk and capture a memory snapshot before/after.
start_load_time = time.time()
snapshot_before = self._capture_memory_snapshot()
snapshot_before = MemorySnapshot.capture()
with skip_torch_weight_init():
model = model_info.get_model(child_type=submodel, torch_dtype=self.precision)
snapshot_after = self._capture_memory_snapshot()
snapshot_after = MemorySnapshot.capture()
end_load_time = time.time()
self_reported_model_size_after_load = model_info.get_size(submodel)
@@ -286,9 +275,9 @@ class ModelCache(object):
return
start_model_to_time = time.time()
snapshot_before = self._capture_memory_snapshot()
snapshot_before = MemorySnapshot.capture()
cache_entry.model.to(target_device)
snapshot_after = self._capture_memory_snapshot()
snapshot_after = MemorySnapshot.capture()
end_model_to_time = time.time()
self.logger.debug(
f"Moved model '{key}' from {source_device} to"
@@ -297,12 +286,7 @@ class ModelCache(object):
f"{get_pretty_snapshot_diff(snapshot_before, snapshot_after)}"
)
if (
snapshot_before is not None
and snapshot_after is not None
and snapshot_before.vram is not None
and snapshot_after.vram is not None
):
if snapshot_before.vram is not None and snapshot_after.vram is not None:
vram_change = abs(snapshot_before.vram - snapshot_after.vram)
# If the estimated model size does not match the change in VRAM, log a warning.
@@ -438,17 +422,12 @@ class ModelCache(object):
self.logger.debug(f"Before unloading: cached_models={len(self._cached_models)}")
pos = 0
models_cleared = 0
while current_size + bytes_needed > maximum_size and pos < len(self._cache_stack):
model_key = self._cache_stack[pos]
cache_entry = self._cached_models[model_key]
refs = sys.getrefcount(cache_entry.model)
# HACK: This is a workaround for a memory-management issue that we haven't tracked down yet. We are directly
# going against the advice in the Python docs by using `gc.get_referrers(...)` in this way:
# https://docs.python.org/3/library/gc.html#gc.get_referrers
# manually clear local variable references of just-finished function calls
# for some reason python doesn't want to collect it even by gc.collect() immediately
if refs > 2:
@@ -474,16 +453,15 @@ class ModelCache(object):
f" refs: {refs}"
)
# Expected refs:
# 2 refs:
# 1 from cache_entry
# 1 from getrefcount function
# 1 from onnx runtime object
if not cache_entry.locked and refs <= (3 if "onnx" in model_key else 2):
if not cache_entry.locked and refs <= 3 if "onnx" in model_key else 2:
self.logger.debug(
f"Unloading model {model_key} to free {(model_size/GIG):.2f} GB (-{(cache_entry.size/GIG):.2f} GB)"
)
current_size -= cache_entry.size
models_cleared += 1
if self.stats:
self.stats.cleared += 1
del self._cache_stack[pos]
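
The reference accounting above leans on a sys.getrefcount quirk: the call itself holds one extra reference to its argument, so an object referenced only by cache_entry reports 2 (and an ONNX model, which keeps one extra runtime reference, reports 3). A quick demonstration:

import sys

x = object()
# getrefcount's own argument adds one reference, so an object bound to a
# single name typically reports 2 -- matching the "1 from cache_entry,
# 1 from getrefcount" accounting in the comments above.
print(sys.getrefcount(x))  # typically 2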
@@ -493,20 +471,7 @@ class ModelCache(object):
else:
pos += 1
if models_cleared > 0:
# There would likely be some 'garbage' to be collected regardless of whether a model was cleared or not, but
# there is a significant time cost to calling `gc.collect()`, so we want to use it sparingly. (The time cost
# is high even if no garbage gets collected.)
#
# Calling gc.collect(...) when a model is cleared seems like a good middle-ground:
# - If models had to be cleared, it's a signal that we are close to our memory limit.
# - If models were cleared, there's a good chance that there's a significant amount of garbage to be
# collected.
#
# Keep in mind that gc is only responsible for handling reference cycles. Most objects should be cleaned up
# immediately when their reference count hits 0.
gc.collect()
gc.collect()
torch.cuda.empty_cache()
if choose_torch_device() == torch.device("mps"):
mps.empty_cache()
@@ -526,6 +491,7 @@ class ModelCache(object):
vram_in_use = torch.cuda.memory_allocated()
self.logger.debug(f"{(vram_in_use/GIG):.2f}GB VRAM used for models; max allowed={(reserved/GIG):.2f}GB")
gc.collect()
torch.cuda.empty_cache()
if choose_torch_device() == torch.device("mps"):
mps.empty_cache()

View File

@@ -17,7 +17,7 @@ def skip_torch_weight_init():
completely unnecessary if the intent is to load checkpoint weights from disk for the layer. This context manager
monkey-patches common torch layers to skip the weight initialization step.
"""
torch_modules = [torch.nn.Linear, torch.nn.modules.conv._ConvNd, torch.nn.Embedding]
torch_modules = [torch.nn.Linear, torch.nn.modules.conv._ConvNd]
saved_functions = [m.reset_parameters for m in torch_modules]
try:

View File

@@ -351,7 +351,6 @@ class ModelManager(object):
precision=precision,
sequential_offload=sequential_offload,
logger=logger,
log_memory_usage=self.app_config.log_memory_usage,
)
self._read_models(config)

View File

@@ -453,9 +453,9 @@ class PipelineFolderProbe(FolderProbeBase):
else:
with open(self.folder_path / "scheduler" / "scheduler_config.json", "r") as file:
scheduler_conf = json.load(file)
if scheduler_conf.get("prediction_type", "epsilon") == "v_prediction":
if scheduler_conf["prediction_type"] == "v_prediction":
return SchedulerPredictionType.VPrediction
elif scheduler_conf.get("prediction_type", "epsilon") == "epsilon":
elif scheduler_conf["prediction_type"] == "epsilon":
return SchedulerPredictionType.Epsilon
else:
return None

View File

@@ -67,7 +67,6 @@ class SubModelType(str, Enum):
VaeEncoder = "vae_encoder"
Scheduler = "scheduler"
SafetyChecker = "safety_checker"
FeatureExtractor = "feature_extractor"
# MoVQ = "movq"

View File

@@ -132,14 +132,13 @@ def _convert_controlnet_ckpt_and_cache(
model_path: str,
output_path: str,
base_model: BaseModelType,
model_config: str,
model_config: ControlNetModel.CheckpointConfig,
) -> str:
"""
Convert the controlnet from checkpoint format to diffusers format,
cache it to disk, and return the Path to the converted
file. If it is already on disk, just return the Path.
"""
print(f"DEBUG: controlnet config = {model_config}")
app_config = InvokeAIAppConfig.get_config()
weights = app_config.root_path / model_path
output_path = Path(output_path)

View File

@@ -440,19 +440,33 @@ class IA3Layer(LoRALayerBase):
class LoRAModelRaw: # (torch.nn.Module):
_name: str
layers: Dict[str, LoRALayer]
_device: torch.device
_dtype: torch.dtype
def __init__(
self,
name: str,
layers: Dict[str, LoRALayer],
device: torch.device,
dtype: torch.dtype,
):
self._name = name
self._device = device or torch.cpu
self._dtype = dtype or torch.float32
self.layers = layers
@property
def name(self):
return self._name
@property
def device(self):
return self._device
@property
def dtype(self):
return self._dtype
def to(
self,
device: Optional[torch.device] = None,
@@ -461,6 +475,8 @@ class LoRAModelRaw: # (torch.nn.Module):
# TODO: try revert if exception?
for key, layer in self.layers.items():
layer.to(device=device, dtype=dtype)
self._device = device
self._dtype = dtype
def calc_size(self) -> int:
model_size = 0
@@ -541,6 +557,8 @@ class LoRAModelRaw: # (torch.nn.Module):
file_path = Path(file_path)
model = cls(
device=device,
dtype=dtype,
name=file_path.stem, # TODO:
layers=dict(),
)

View File

@@ -1,446 +0,0 @@
# Normalized Model Manager
This is proof-of-principle code that refactors model storage to be
more space efficient and less dependent on the particulars of Stable
Diffusion models. The driving observation is that there is a
significant amount of redundancy in Stable Diffusion models. For
example, the VAE, tokenizer and safety checker are frequently the same
across multiple models derived from the same base models.
The way the normalized model manager works is that when a main
(pipeline) model is ingested, each of its submodels ("vae", "unet" and
so forth) is scanned and hashed using a fast sampling and hashing
algorithm. If the submodel has a hash that hasn't been seen before, it
is copied into a folder within INVOKEAI_ROOT, and we create a new
database entry with the submodel's path and a reference count of "1".
If the submodel has a hash that has previously been seen, then we
update the database to bump up the submodel's reference count.
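In code, the decision boils down to a hash lookup followed by either a
refcount bump or a copy-and-insert. Here is a condensed method sketch
of that logic; the helper names (`_lookup_part_by_hash`,
`_install_part`) come from the `NormalizedModelManager` implementation
shown later in this diff, and the refcount bookkeeping is actually
handled by database triggers:
```
from pathlib import Path
from invokeai.backend.normalized_mm.hash import FastModelHash

def ingest_submodel(self, part_path: Path) -> int:
    """Sketch: return the part_id for a submodel, deduplicating by hash."""
    part_hash = FastModelHash.hash(part_path)  # fast sampling hash
    part_id = self._lookup_part_by_hash(part_hash)
    if part_id is None:
        # never seen before: copy into the blobs folder and insert a row
        part_id = self._install_part(part_hash, part_path)
    # linking the part to a model (INSERT INTO model_parts) fires a
    # trigger that increments simple_model.refcount
    return part_id
```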
Checkpoint files (.bin, .ckpt and .safetensors) are converted into
diffusers format prior to ingestion. The system imports simple models,
such as LoRAs and standalone VAEs, directly, and deduplicates them if
they have been seen before. This pays off when a user tries to ingest
the same VAE twice under different names: only one copy is stored.
Additional database tables map the relationship between main models
and their submodels, and record which base model(s) a submodel is
compatible with.
## Installation and Testing
To test, check out the PR and run `pip install -e .`. This will create
a command called `invokeai-nmm` (for "normalized model
manager"). To ingest a single model:
```
invokeai-nmm ingest my_model.safetensors
```
To ingest a whole directory of models:
```
invokeai-nmm ingest my_models/*
```
These commands will create a sqlite3 database of model data in
`INVOKEAI_ROOT/databases/normalized_models.db`, copy the model data
into a blobs directory under `INVOKEAI_ROOT/model_blobs`, and create
appropriate entries in the database. You can then use the API to
retrieve information on pipelines and submodels.
The `invokeai-nmm` tool has a number of other features, including
listing models and examining pipeline subparts. In addition, it has an
`export` command which will reconstitute a diffusers pipeline by
creating a directory containing symbolic links into the blobs
directory.
Use `invokeai-nmm --help` to get a summary of commands and their
flags.
## Benchmarking
To test the performance of the normalized model system, I ingested an
InvokeAI models directory of 117 different models (35 main models, 52
LoRAs, 9 controlnets, 8 embeddings and miscellaneous others). The
ingestion, which included the conversion of multiple checkpoints to
diffusers format, took about 2 minutes. Prior to ingestion, the
directory took up 189.5 GB. After ingestion, it was reduced to 160 GB,
an overall 16% reduction in size and a savings of roughly 29 GB.
I was surprised at the relatively modest space savings and checked that
submodels were indeed being shared. They were:
```
sqlite> select part_id,type,refcount from simple_model order by refcount desc,type;
┌─────────┬───────────────────┬──────────┐
│ part_id │ type │ refcount │
├─────────┼───────────────────┼──────────┤
│ 28 │ tokenizer │ 9 │
│ 67 │ feature_extractor │ 7 │
│ 33 │ feature_extractor │ 5 │
│ 38 │ tokenizer │ 5 │
│ 26 │ safety_checker │ 4 │
│ 32 │ safety_checker │ 4 │
│ 37 │ scheduler │ 4 │
│ 29 │ vae │ 3 │
│ 30 │ feature_extractor │ 2 │
│ 72 │ safety_checker │ 2 │
│ 54 │ scheduler │ 2 │
│ 100 │ scheduler │ 2 │
│ 71 │ text_encoder │ 2 │
│ 90 │ text_encoder │ 2 │
│ 99 │ text_encoder_2 │ 2 │
│ 98 │ tokenizer_2 │ 2 │
│ 44 │ vae │ 2 │
│ 73 │ vae │ 2 │
│ 91 │ vae │ 2 │
│ 97 │ vae │ 2 │
│ 1 │ clip_vision │ 1 │
│ 2 │ clip_vision │ 1 │
...
```
As expected, submodels that don't change from model to model, such as
the tokenizer and safety checker, are frequently shared across main
models. So are the VAEs, but less often than I expected. On
further inspection, the spread of VAEs was explained by the following
formatting differences:
1. Whether the VAE weights are .bin or .safetensors
2. Whether it is an fp16 or fp32 VAE
3. Actual differences in the VAE's training
Ironically, checkpoint models downloaded from Civitai are more likely
to share submodels than diffusers pipelines downloaded directly from
HuggingFace. This is because the checkpoints pass through a uniform
conversion process, while the HuggingFace pipelines are more likely to
carry format-related differences.
## Database tables
This illustrates the database schema.
### SIMPLE_MODEL
This provides the type and path of each fundamental model. The type
can be any of the `ModelType` enum values, including clip_vision, etc.
```
┌─────────┬───────────────────┬──────────────────────────────────┬──────────┬──────────────────────────────────────────────────────────────────────────────────────────┐
│ part_id │ type │ hash │ refcount │ path │
├─────────┼───────────────────┼──────────────────────────────────┼──────────┼──────────────────────────────────────────────────────────────────────────────────────────┤
│ 26 │ safety_checker │ 76b420d8f641411021ec1dadca767cf7 │ 4 │ /opt/model_blobs/safety_checker-7214b322-1069-4753-a4d5-fe9e18915ca7 │
│ 28 │ tokenizer │ 44e42c7bf25b5e32e8d7de0b822cf012 │ 9 │ /opt/model_blobs/tokenizer-caeb7f7f-e3db-4d67-8f60-1a4831e1aef2 │
│ 29 │ vae │ c9aa45f52c5d4e15a22677f34436d373 │ 3 │ /opt/model_blobs/vae-7e7d96ee-074f-45dc-8c43-c9902b0d0671 │
│ 30 │ feature_extractor │ 3240f79383fdf6ea7f24bbd5569cb106 │ 2 │ /opt/model_blobs/feature_extractor-a5bb8ceb-2c15-4b7f-bd43-964396440f6c │
│ 32 │ safety_checker │ 2e2f7732cff3349350bc99f3e7ab3998 │ 4 │ /opt/model_blobs/safety_checker-ef70c446-e3a1-445c-b216-d7c4acfdbcda │
└─────────┴───────────────────┴──────────────────────────────────┴──────────┴──────────────────────────────────────────────────────────────────────────────────────────┘
```
The refcount indicates how many pipelines the fundamental model is
shared with. The path is where the submodel is stored; it uses a
randomly-assigned file/directory name to avoid collisions.
The `type` field holds values from the `ModelType` enum. SQLite has no
native ENUM type, so the constraint is enforced with a CHECK clause.
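The following sketch shows how the Python enums become that CHECK
clause, mirroring the table-creation code that appears later in this
diff (the import is the absolute form of the package-relative one used
there):
```
from enum import Enum
from invokeai.backend.model_management import ModelType, SubModelType

# union of ModelType and SubModelType values (ExtendedModelType in the code)
model_types = {x.name: x.value for x in ModelType}
model_types.update({x.name: x.value for x in SubModelType})
ExtendedModelType = Enum("ExtendedModelType", model_types, type=str)

# SQLite has no ENUM, so the allowed values are baked into a CHECK clause
MODEL_TYPES = {x.value for x in ExtendedModelType}
MODEL_SQL_ENUM = ",".join([f'"{x}"' for x in MODEL_TYPES])
create_sql = f"""
CREATE TABLE IF NOT EXISTS simple_model (
    part_id  INTEGER PRIMARY KEY,
    type     TEXT CHECK( type IN ({MODEL_SQL_ENUM}) ) NOT NULL,
    hash     TEXT UNIQUE,
    refcount INTEGER NOT NULL DEFAULT '0',
    path     TEXT NOT NULL
);
"""
```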
### MODEL_NAME
The MODEL_NAME table stores the name and other metadata of a top-level
model. The same table is used for both simple models (one part only)
and pipeline models (multiple parts).
Note that in the current implementation, the model name is forced to
be unique and is currently used as the identifier for retrieving
models from the database. This is a simplifying implementation detail;
in a real system the name would be supplemented with some sort of
anonymous key.
Only top-level models are entered into the MODEL_NAME table. The
models contained in subfolders of a pipeline become unnamed anonymous
parts stored in SIMPLE_MODEL and associated with the named model(s)
that use them in the MODEL_PARTS table described next.
An interesting piece of behavior is that the same simple model can be
both anonymous and named. Consider a VAE that is first imported from
the 'vae' folder of a main model. Because it is part of a larger
pipeline, there will be an entry for the VAE in SIMPLE_MODEL with a
refcount of 1, but not in the MODEL_NAME table. However, let's say
that, at a later date, the user ingests the same model as a named
standalone VAE. The system will detect that this is the same model,
and will create a named entry for the VAE in MODEL_NAME that identifies
the VAE as its sole part. In SIMPLE_MODEL, the VAE's refcount will be
bumped up to 2. Thus, the same simple model can be retrieved in two
ways: by requesting the "vae" submodel of the named pipeline, or by
requesting it via its standalone name.
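Using the API described below, the same underlying blob can then be
reached by either route. A short illustrative sketch (the model names
here are hypothetical):
```
# via the pipeline that first brought the VAE in
vae_as_part = nmm.get_model(name="my-sd15-pipeline", part="vae")
# via the standalone name assigned at the later ingestion
vae_standalone = nmm.get_model(name="my-standalone-vae")
# both resolve to the same blob on disk
assert vae_as_part.path == vae_standalone.path
```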
The MODEL_NAME table has fields for the model's name, its source, and
description. The `is_pipeline` field is True if the named model is a
pipeline that contains subparts. In the case of a pipeline, then the
`table_of_contents` field will hold a copy of the contents of
`model_index.json`. This is used for the sole purpose of regenerating
a de-normalized diffusers folder from the database.
```
┌──────────┬────────────────────────────────┬────────────────────────────────────────────────────────────┬───────────────────────────────────────────────┬─────────────┬───────────────────┐
│ model_id │ name │ source │ description │ is_pipeline │ table_of_contents │
├──────────┼────────────────────────────────┼────────────────────────────────────────────────────────────┼───────────────────────────────────────────────┼─────────────┼───────────────────┤
│ 1 │ ip_adapter_sd_image_encoder │ /opt/models/any/clip_vision/ip_adapter_sd_image_encoder │ Imported model ip_adapter_sd_image_encoder │ 0 │ │
│ 2 │ ip_adapter_sd_image_encoder_01 │ /opt/models/any/clip_vision/ip_adapter_sd_image_encoder_01 │ Imported model ip_adapter_sd_image_encoder_01 │ 0 │ │
│ 3 │ ip_adapter_sdxl_image_encoder │ /opt/models/any/clip_vision/ip_adapter_sdxl_image_encoder │ Imported model ip_adapter_sdxl_image_encoder │ 0 │ │
│ 4 │ control_v11e_sd15_ip2p │ /opt/models/sd-1/controlnet/control_v11e_sd15_ip2p │ Imported model control_v11e_sd15_ip2p │ 0 │ │
│ 5 │ control_v11e_sd15_shuffle │ /opt/models/sd-1/controlnet/control_v11e_sd15_shuffle │ Imported model control_v11e_sd15_shuffle │ 0 │ │
│ 6 │ control_v11f1e_sd15_tile │ /opt/models/sd-1/controlnet/control_v11f1e_sd15_tile │ Imported model control_v11f1e_sd15_tile │ 0 │ │
│ 7 │ control_v11f1p_sd15_depth │ /opt/models/sd-1/controlnet/control_v11f1p_sd15_depth │ Imported model control_v11f1p_sd15_depth │ 0 │ │
│ 8 │ control_v11p_sd15_canny │ /opt/models/sd-1/controlnet/control_v11p_sd15_canny │ Imported model control_v11p_sd15_canny │ 0 │ │
│ 9 │ control_v11p_sd15_inpaint │ /opt/models/sd-1/controlnet/control_v11p_sd15_inpaint │ Imported model control_v11p_sd15_inpaint │ 0 │ │
│ 10 │ control_v11p_sd15_lineart │ /opt/models/sd-1/controlnet/control_v11p_sd15_lineart │ Imported model control_v11p_sd15_lineart │ 0 │ │
└──────────┴────────────────────────────────┴────────────────────────────────────────────────────────────┴───────────────────────────────────────────────┴─────────────┴───────────────────┘
```
### MODEL_PARTS
The MODEL_PARTS table maps the `model_id` field from MODEL_NAME to the
`part_id` field of SIMPLE_MODEL, as shown below. The `part_name` field
contains the subfolder name that the part was located in at model
ingestion time.
There is not exactly a one-to-one correspondence between the
MODEL_PARTS `part_name` and the SIMPLE_MODEL `type` fields. For
example, SDXL models have part_names of `text_encoder` and
`text_encoder_2`, both of which point to a simple model of type
`text_encoder`.
For one-part models such as LoRAs, the `part_name` is `root`.
```
┌──────────┬─────────┬───────────────────┐
│ model_id │ part_id │ part_name │
├──────────┼─────────┼───────────────────┤
│ 6 │ 6 │ root │
│ 25 │ 25 │ unet │
│ 25 │ 26 │ safety_checker │
│ 25 │ 27 │ text_encoder │
│ 25 │ 28 │ tokenizer │
│ 25 │ 29 │ vae │
│ 25 │ 30 │ feature_extractor │
│ 25 │ 31 │ scheduler │
│ 26 │ 32 │ safety_checker │
│ 26 │ 33 │ feature_extractor │
│ 26 │ 34 │ unet │
└──────────┴─────────┴───────────────────┘
```
### MODEL_BASE
The MODEL_BASE table maps simple models to the base models that they
are compatible with. A simple model may be compatible with one base
only (e.g. an SDXL-based `unet`); it may be compatible with multiple
bases (e.g. a VAE that works with either `sd-1` or `sd-2`); or it may
be compatible with all models (e.g. a `clip_vision` model).
This table has two fields: the `part_id` and the `base` it is
compatible with. The base is a TEXT column constrained, via a CHECK
clause, to the values of the `BaseModelType` enum.
```
sqlite> select * from model_base limit 8;
┌─────────┬──────────────┐
│ part_id │ base │
├─────────┼──────────────┤
│ 1 │ sd-1 │
│ 1 │ sd-2 │
│ 1 │ sdxl │
│ 1 │ sdxl-refiner │
│ 2 │ sd-1 │
│ 2 │ sd-2 │
│ 2 │ sdxl │
│ 2 │ sdxl-refiner │
└─────────┴──────────────┘
```
At ingestion time, the MODEL_BASE table is populated using the
following algorithm (a condensed sketch follows the list):
1. If the ingested model is a multi-part pipeline, then each of its
parts is assigned the base determined by probing the pipeline as a
whole.
2. If the ingested model is a single-part simple model, then its part
is assigned to the base returned by probing the simple model.
3. Any models that return `BaseModelType.Any` at probe time will be
assigned to all four of the base model types as shown in the
example above.
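Condensed from the `_install_part()` logic later in this diff, and
omitting its sd-1/sd-2 VAE special case, the assignment looks roughly
like this:
```
from invokeai.backend.model_management import BaseModelType

BASE_TYPES = {x.value for x in BaseModelType}

def bases_for_part(probed_base, pipeline_base=None):
    """Sketch: decide which base rows to record for a new part."""
    if probed_base is None:
        # the part could not be probed on its own:
        # inherit the base of the pipeline it arrived in (rule 1)
        return {pipeline_base} if pipeline_base else set()
    if probed_base == BaseModelType("any"):
        # e.g. clip_vision: compatible with every base (rule 3)
        return {BaseModelType(x) for x in BASE_TYPES}
    return {probed_base}  # rule 2
```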
Interestingly, the table will "learn" when the same simple model is
compatible with multiple bases. Consider a sequence of events in which
the user ingests an sd-1 model containing a VAE. The VAE will
initially get a single row in the MODEL_BASE table with base
"sd-1". Next the user ingests an sd-2 model that contains the same
VAE. The system will recognize that the same VAE is being used for a
model with a different base, and will add a new row to the table
indicating that this VAE is compatible with either sd-1 or sd-2.
When retrieving information about a multipart pipeline using the API,
the system will intersect the base compatibility of all the components
of the pipeline until it finds the set of base(s) that all the
subparts are compatible with.
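This mirrors the intersection loop in `get_pipeline()` later in the
diff; in essence:
```
# bases_per_part maps part_name -> the set of bases that part supports,
# built from the MODEL_BASE rows returned by the pipeline query
pipeline_bases = set.intersection(*bases_per_part.values())
```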
## The API
Initialization will look something like this:
```
from pathlib import Path
from invokeai.app.services.config import InvokeAIAppConfig
from invokeai.backend.normalized_mm.normalized_model_manager import NormalizedModelManager
config = InvokeAIAppConfig.get_config()
config.parse_args()
nmm = NormalizedModelManager(config)
```
At the current time, the InvokeAIAppConfig object is used only to
locate the root directory path and the location of the `databases`
subdirectory.
## Ingesting a model
Apply the `ingest()` method to a checkpoint or diffusers folder Path
and an optional model name. If the model name isn't provided, then it
will be derived from the stem of the ingested filename/folder.
```
model_config = nmm.ingest(
Path('/tmp/models/slick_anime.safetensors'),
name="Slick Anime",
)
```
Depending on what is being ingested, the call will return either a
`SimpleModelConfig` or a `PipelineConfig` object, which are slightly
different from each other:
```
@dataclass
class SimpleModelConfig:
"""Submodel name, description, type and path."""
name: str
description: str
base_models: Set[BaseModelType]
type: ExtendedModelType
path: Path
@dataclass
class PipelineConfig:
"""Pipeline model name, description, type and parts."""
name: str
description: str
base_models: Set[BaseModelType]
parts: Dict[str, ModelPart] # part_name -> ModelPart
@dataclass
class ModelPart:
"""Type and path of a pipeline submodel."""
type: ExtendedModelType
path: Path
refcount: int
```
For more control, you can directly call the `ingest_pipeline_model()`
or `ingest_simple_model()` methods, which operate on multi-part
pipelines and single-part models respectively.
Note that the `ExtendedModelType` class is an enum created from the
union of the current model manager's `ModelType` and
`SubModelType`. This was necessary to support the SIMPLE_MODEL table's
`type` field.
## Fetching a model
To fetch a simple model, call `get_model()` with the name of the model
and optionally its part_name. This returns a `SimpleModelConfig` object.
```
model_info = nmm.get_model(name='stable-diffusion-v1-5', part='unet')
print(model_info.path)
print(model_info.description)
print(model_info.base_models)
```
If the model only has one part, leave out the `part` argument, or use
`part=root`:
```
model_info = nmm.get_model(name='detail_slider_v1')
```
To fetch information about a pipeline model, call `get_pipeline()`:
```
model_info = nmm.get_pipeline('stable-diffusion-v1-5')
for part_name, part in model_info.parts.items():
print(f'{part_name} is located at {part.path}')
```
This returns a `PipelineConfig` object, which you can then interrogate
to get the model's name, description, list of base models it is
compatible with, and its parts. The latter is a dict mapping the
part_name (the original subfolder name) to a `ModelPart` object that
contains the part's type, refcount and path.
## Exporting a model
To export a model back into its native format (diffusers for main,
safetensors for other types), use `export_pipeline`:
```
nmm.export_pipeline(name='stable-diffusion-v1-5', destination='/path/to/export/folder')
```
The model will be exported as a folder at
`/path/to/export/folder/stable-diffusion-v1-5`. It will contain a copy
of the original `model_index.json` file and, for each subfolder, a
symbolic link pointing into the model blobs directory.
Despite its name, `export_pipeline()` works as expected with simple
models as well.
## Listing models in the database
There is currently a `list_models()` method that retrieves a list of
all the **named** models in the database. It doesn't yet provide any
way of filtering by name, type or base compatibility, but these would
be easy to add in the future.
`list_models()` returns a list of `ModelListing` objects:
```
class ModelListing:
"""Slightly simplified object for generating listings."""
name: str
description: str
source: str
type: ModelType
base_models: Set[BaseModelType]
```
An alternative implementation might return a list of
Union[SimpleModelConfig, PipelineConfig], but it seemed cleanest to
return a uniform list.
## Deleting models
Model deletion is not yet fully implemented. When implemented,
deletion of a named model will decrement the refcount of each of its
subparts and then delete parts whose refcount has reached zero. The
appropriate triggers for incrementing and decrementing the refcount
are already in place in the database schema.
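A sketch of what that eventual deletion flow could look like, leaning
on the `delete_model_refcount` trigger defined in the schema; the
`delete_model` method here is hypothetical and does not exist in this
code:
```
def delete_model(self, name: str) -> None:
    """Hypothetical deletion flow for a named model."""
    self._cursor.execute("SELECT model_id FROM model_name WHERE name=?", (name,))
    row = self._cursor.fetchone()
    if row is None:
        raise ModelNotFoundException
    model_id = row[0]
    # deleting the part links fires the AFTER DELETE trigger,
    # which decrements simple_model.refcount for each part
    self._cursor.execute("DELETE FROM model_parts WHERE model_id=?", (model_id,))
    self._cursor.execute("DELETE FROM model_name WHERE model_id=?", (model_id,))
    # reap any parts whose refcount has dropped to zero
    self._cursor.execute("SELECT part_id, path FROM simple_model WHERE refcount<=0")
    for part_id, path in self._cursor.fetchall():
        self._cursor.execute("DELETE FROM model_base WHERE part_id=?", (part_id,))
        self._cursor.execute("DELETE FROM simple_model WHERE part_id=?", (part_id,))
        # the blob at `path` would also be removed from disk here
    self._conn.commit()
```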

View File

@@ -1,108 +0,0 @@
#!/usr/bin/env python
import argparse
import sys
from pathlib import Path
from typing import Optional
from invokeai.app.services.config import InvokeAIAppConfig
from invokeai.backend.normalized_mm.normalized_model_manager import (
DuplicateModelException,
InvalidModelException,
ModelNotFoundException,
NormalizedModelManager,
)
config: InvokeAIAppConfig = InvokeAIAppConfig.get_config()
model_manager: Optional[NormalizedModelManager] = None
def list_parts(args):
try:
model = model_manager.get_pipeline(args.model_name)
print(f"Components of model {args.model_name}:")
print(f" {'ROLE':20s} {'TYPE':20s} {'REFCOUNT':8} PATH")
for role, part in model.parts.items():
print(f" {role:20s} {part.type:20s} {part.refcount:4d} {part.path}")
except ModelNotFoundException:
print(f"{args.model_name}: model not found")
def list_models(args):
model_list = model_manager.list_models()
print(f"{'NAME':30s} {'TYPE':10s} {'BASE(S)':10s} {'DESCRIPTION':40s} ORIGINAL SOURCE")
for model in model_list:
print(
f"{model.name:30s} {model.type.value:10s} {', '.join([x.value for x in model.base_models]):10s} {model.description:40s} {model.source}"
)
def ingest_models(args):
for path in args.model_paths:
try:
print(f"ingesting {path}...", end="")
model_manager.ingest(path)
print("success.")
except (OSError, InvalidModelException, DuplicateModelException) as e:
print(f"FAILED: {e}")
def export_model(args):
print(f"exporting {args.model_name} to {args.destination}...", end="")
try:
model_manager.export_pipeline(args.model_name, args.destination)
print("success.")
except (OSError, ModelNotFoundException, InvalidModelException) as e:
print(f"FAILED: {e}")
def main():
global model_manager
global config
parser = argparse.ArgumentParser(description="Normalized model manager util")
parser.add_argument("--root_dir", dest="root", type=str, default=None, help="path to INVOKEAI_ROOT")
subparsers = parser.add_subparsers(help="commands")
parser_ingest = subparsers.add_parser("ingest", help="ingest checkpoint or diffusers models")
parser_ingest.add_argument("model_paths", type=Path, nargs="+", help="paths to one or more models to be ingested")
parser_ingest.set_defaults(func=ingest_models)
parser_export = subparsers.add_parser("export", help="export a pipeline to indicated directory")
parser_export.add_argument(
"model_name",
type=str,
help="name of model to export",
)
parser_export.add_argument(
"destination",
type=Path,
help="path to destination to export pipeline to",
)
parser_export.set_defaults(func=export_model)
parser_list = subparsers.add_parser("list", help="list models")
parser_list.set_defaults(func=list_models)
parser_listparts = subparsers.add_parser("list-parts", help="list the parts of a pipeline model")
parser_listparts.add_argument(
"model_name",
type=str,
help="name of pipeline model to list parts of",
)
parser_listparts.set_defaults(func=list_parts)
if len(sys.argv) <= 1:
sys.argv.append("--help")
args = parser.parse_args()
if args.root:
config.parse_args(["--root", args.root])
else:
config.parse_args([])
model_manager = NormalizedModelManager(config)
args.func(args)
if __name__ == "__main__":
main()

View File

@@ -1,66 +0,0 @@
# Copyright (c) 2023 Lincoln D. Stein and the InvokeAI Development Team
"""
Fast hashing of diffusers and checkpoint-style models.
Usage:
from invokeai.backend.normalized_mm.hash import FastModelHash
>>> FastModelHash.hash('/home/models/stable-diffusion-v1.5')
'a8e693a126ea5b831c96064dc569956f'
"""
import hashlib
import os
from pathlib import Path
from typing import Dict, Union
from imohash import hashfile
from invokeai.backend.model_management.models import InvalidModelException
class FastModelHash(object):
"""FastModelHash obect provides one public class method, hash()."""
@classmethod
def hash(cls, model_location: Union[str, Path]) -> str:
"""
Return hexdigest string for model located at model_location.
:param model_location: Path to the model
"""
model_location = Path(model_location)
if model_location.is_file():
return cls._hash_file(model_location)
elif model_location.is_dir():
return cls._hash_dir(model_location)
else:
raise InvalidModelException(f"Not a valid file or directory: {model_location}")
@classmethod
def _hash_file(cls, model_location: Union[str, Path]) -> str:
"""
Fasthash a single file and return its hexdigest.
:param model_location: Path to the model file
"""
# we return the md5 hash of the file hash to make it shorter
# (cryptographic security is not needed here)
return hashlib.md5(hashfile(model_location)).hexdigest()
@classmethod
def _hash_dir(cls, model_location: Union[str, Path]) -> str:
components: Dict[Path, str] = {}
for root, dirs, files in os.walk(model_location):
for file in files:
path = Path(root) / file
if path.name == "config.json": # skip: contents vary across diffusers versions
continue
fast_hash = cls._hash_file(path.as_posix())
components.update({path: fast_hash})
# hash all the model hashes together, using alphabetic file order
md5 = hashlib.md5()
for path, fast_hash in sorted(components.items()):
md5.update(fast_hash.encode("utf-8"))
return md5.hexdigest()

View File

@@ -1,601 +0,0 @@
import sqlite3
from dataclasses import dataclass
from enum import Enum
from pathlib import Path
from shutil import copy, copytree
from tempfile import TemporaryDirectory
from typing import Dict, List, Optional, Set, Tuple, Union
from uuid import uuid4
from diffusers import StableDiffusionInpaintPipeline, StableDiffusionPipeline
from invokeai.app.services.config import InvokeAIAppConfig
from ..model_management import BaseModelType, DuplicateModelException, ModelNotFoundException, ModelType, SubModelType
from ..model_management.convert_ckpt_to_diffusers import convert_ckpt_to_diffusers
from ..model_management.model_probe import InvalidModelException, ModelProbe, ModelVariantType
from ..util.devices import choose_torch_device, torch_dtype
from .hash import FastModelHash
# We create a new enumeration for model types
model_types = {x.name: x.value for x in ModelType}
model_types.update({x.name: x.value for x in SubModelType})
ExtendedModelType = Enum("ExtendedModelType", model_types, type=str)
# Turn into a SQL enum
MODEL_TYPES = {x.value for x in ExtendedModelType}
MODEL_SQL_ENUM = ",".join([f'"{x}"' for x in MODEL_TYPES])
# Again
BASE_TYPES = {x.value for x in BaseModelType}
BASE_SQL_ENUM = ",".join([f'"{x}"' for x in BASE_TYPES])
@dataclass
class ModelPart:
"""Type and path of a pipeline submodel."""
type: ExtendedModelType
path: Path
refcount: int
@dataclass
class SimpleModelConfig:
"""Submodel name, description, type and path."""
name: str
description: str
base_models: Set[BaseModelType]
type: ExtendedModelType
path: Path
@dataclass
class PipelineConfig:
"""Pipeline model name, description, type and parts."""
name: str
description: str
base_models: Set[BaseModelType]
parts: Dict[str, ModelPart]
@dataclass
class ModelListing:
"""Slightly simplified object for generating listings."""
name: str
description: str
source: str
type: ModelType
base_models: Set[BaseModelType]
class NormalizedModelManager:
_conn: sqlite3.Connection
_cursor: sqlite3.Cursor
_blob_directory: Path
def __init__(self, config: InvokeAIAppConfig):
database_file = config.db_path.parent / "normalized_models.db"
Path(database_file).parent.mkdir(parents=True, exist_ok=True)
self._conn = sqlite3.connect(database_file, check_same_thread=True)
self._conn.row_factory = sqlite3.Row
self._conn.isolation_level = "DEFERRED"
self._cursor = self._conn.cursor()
self._blob_directory = config.root_path / "model_blobs"
self._blob_directory.mkdir(parents=True, exist_ok=True)
self._conn.execute("PRAGMA foreign_keys = ON;")
self._create_tables()
self._conn.commit()
def ingest(self, model_path: Path, name: Optional[str] = None) -> Union[SimpleModelConfig, PipelineConfig]:
"""Ingest a simple or pipeline model into the normalized models database."""
model_path = model_path.absolute()
info = ModelProbe.probe(model_path)
if info.model_type == ModelType.Main:
return self.ingest_pipeline_model(model_path, name)
else:
return self.ingest_simple_model(model_path, name)
def ingest_simple_model(self, model_path: Path, name: Optional[str] = None) -> SimpleModelConfig:
"""Insert a simple one-part model, returning its config."""
model_name = name or model_path.stem
model_hash = FastModelHash.hash(model_path)
try:
# retrieve or create the single part that goes into this model
part_id = self._lookup_part_by_hash(model_hash) or self._install_part(model_hash, model_path)
# create the model name/source entry
self._cursor.execute(
"""--sql
INSERT INTO model_name (
name, source, description, is_pipeline
)
VALUES (?, ?, ?, 0);
""",
(model_name, model_path.as_posix(), f"Imported model {model_name}"),
)
# associate the part with the model
model_id = self._cursor.lastrowid
self._cursor.execute(
"""--sql
INSERT INTO model_parts (
model_id, part_id
)
VALUES (?, ?);
""",
(
model_id,
part_id,
),
)
self._conn.commit()
except sqlite3.Error as e:
self._conn.rollback()
if isinstance(e, sqlite3.IntegrityError):
raise DuplicateModelException(f"a model named {model_name} is already in the database") from e
else:
raise e
return self.get_model(model_name)
def ingest_pipeline_model(self, model_path: Path, name: Optional[str] = None) -> PipelineConfig:
"""Insert the components of a diffusers pipeline."""
if model_path.is_file(): # convert to diffusers before ingesting
name = name or model_path.stem
with TemporaryDirectory() as tmp_dir:
_convert_ckpt(model_path, Path(tmp_dir))
result = self._ingest_pipeline_model(Path(tmp_dir), name, source=model_path)
return result
else:
return self._ingest_pipeline_model(model_path, name)
def _ingest_pipeline_model(
self, model_path: Path, name: Optional[str] = None, source: Optional[Path] = None
) -> PipelineConfig:
"""Insert the components of a diffusers pipeline."""
model_name = name or model_path.stem
model_index = model_path / "model_index.json"
assert (
model_index.exists()
), f"{model_path} does not look like a diffusers model: model_index.json is missing" # check that it is a diffuers
with open(model_index, "r") as file:
toc = file.read()
base_type = ModelProbe.probe(model_path).base_type
source = source or model_path
try:
# create a name entry for the pipeline and insert its table of contents
self._cursor.execute(
"""--sql
INSERT INTO model_name (
name, source, description, is_pipeline, table_of_contents
)
VALUES(?, ?, ?, "1", ?);
""",
(model_name, source.as_posix(), f"Normalized pipeline {model_name}", toc),
)
pipeline_id = self._cursor.lastrowid
# now we create or retrieve each of the parts
subdirectories = [x for x in model_path.iterdir() if x.is_dir()]
parts_to_insert = []
bases_to_insert = []
for submodel in subdirectories:
part_name = submodel.stem
part_path = submodel
part_hash = FastModelHash.hash(part_path)
part_id = self._lookup_part_by_hash(part_hash) or self._install_part(part_hash, part_path, {base_type})
parts_to_insert.append((pipeline_id, part_id, part_name))
bases_to_insert.append((part_id, base_type.value))
# insert the parts into the part list
self._cursor.executemany(
"""--sql
INSERT INTO model_parts (
model_id, part_id, part_name
)
VALUES(?, ?, ?);
""",
parts_to_insert,
)
# update the base types - over time each simple model will get tagged
# with all the base types of any pipelines that use it, which is a feature... I think?
self._cursor.executemany(
"""--sql
INSERT OR IGNORE INTO model_base (
part_id, base
)
VALUES(?, ?);
""",
bases_to_insert,
)
self._conn.commit()
except sqlite3.Error as e:
self._conn.rollback()
if isinstance(e, sqlite3.IntegrityError):
raise DuplicateModelException(f"a model named {model_name} is already in the database") from e
else:
raise e
return self.get_pipeline(model_name)
# in this p-o-p implementation, we assume that the model name is unique
def get_model(self, name: str, part: Optional[str] = "root") -> SimpleModelConfig:
"""Fetch a simple model. Use optional `part` to specify the diffusers subfolder."""
self._cursor.execute(
"""--sql
SELECT a.source, a.description, c.type, b.part_name, c.path, d.base
FROM model_name as a,
model_parts as b,
simple_model as c,
model_base as d
WHERE a.name=?
AND a.model_id=b.model_id
AND b.part_id=c.part_id
AND b.part_id=d.part_id
AND b.part_name=?
""",
(name, part),
)
rows = self._cursor.fetchall()
if len(rows) == 0:
raise ModelNotFoundException
bases: Set[BaseModelType] = {BaseModelType(x["base"]) for x in rows}
return SimpleModelConfig(
name=name,
description=rows[0]["description"],
base_models=bases,
type=ExtendedModelType(rows[0]["type"]),
path=Path(rows[0]["path"]),
)
# in this p-o-p implementation, we assume that the model name is unique
def get_pipeline(self, name: str) -> PipelineConfig:
"""Fetch a pipeline model."""
self._cursor.execute(
"""--sql
SELECT a.source, a.description, c.type, b.part_name, c.path, d.base, c.refcount
FROM model_name as a,
model_parts as b,
simple_model as c,
model_base as d
WHERE a.name=?
AND a.model_id=b.model_id
AND b.part_id=c.part_id
AND b.part_id=d.part_id
""",
(name,),
)
rows = self._cursor.fetchall()
if len(rows) == 0:
raise ModelNotFoundException
# Find the intersection of base models supported by each part.
# Need a more pythonic way of doing this!
bases: Dict[str, Set] = dict()
base_union: Set[BaseModelType] = set()
parts = dict()
for row in rows:
part_name = row["part_name"]
base = row["base"]
if not bases.get(part_name):
bases[part_name] = set()
bases[part_name].add(base)
base_union.add(base)
parts[part_name] = ModelPart(row["type"], row["path"], row["refcount"])
for base_set in bases.values():
base_union = base_union.intersection(base_set)
return PipelineConfig(
name=name,
description=rows[0]["description"],
base_models={BaseModelType(x) for x in base_union},
parts=parts,
)
def list_models(self) -> List[ModelListing]:
"""Get a listing of models. No filtering implemented yet."""
# get simple models first
self._cursor.execute(
"""--sql
SELECT name, source, is_pipeline
FROM model_name;
""",
(),
)
results: List[ModelListing] = []
for row in self._cursor.fetchall():
if row["is_pipeline"]:
pipeline = self.get_pipeline(row["name"])
results.append(
ModelListing(
name=pipeline.name,
description=pipeline.description,
source=row["source"],
type=ModelType.Main,
base_models=pipeline.base_models,
)
)
else:
model = self.get_model(row["name"])
results.append(
ModelListing(
name=model.name,
description=model.description,
source=row["source"],
type=model.type,
base_models=model.base_models,
)
)
return results
def export_pipeline(self, name: str, destination: Path) -> Path:
"""Reconstruction the pipeline as a set of symbolic links in folder indicated by destination."""
# get the model_index.json file (the "toc")
self._cursor.execute(
"""--sql
SELECT table_of_contents, is_pipeline
FROM model_name
WHERE name=?
""",
(name,),
)
row = self._cursor.fetchone()
if row is None:
raise ModelNotFoundException
# if the destination exists and is a directory, then we create
# a new subdirectory using the model name
if destination.exists() and destination.is_dir():
destination = destination / name
# now check that the (possibly new) destination doesn't already exist
if destination.exists():
raise OSError(f"{destination}: path or directory exists; won't overwrite")
if row["is_pipeline"]:
# write the toc
toc = row[0]
destination.mkdir(parents=True)
with open(destination / "model_index.json", "w") as model_index:
model_index.write(toc)
# symlink the subfolders
model = self.get_pipeline(name)
for part_name, part_config in model.parts.items():
source_path = destination / part_name
target_path = part_config.path
source_path.symlink_to(target_path)
else:
model = self.get_model(name)
destination = Path(destination.as_posix() + model.path.suffix)
destination.symlink_to(model.path)
return destination
def _lookup_part_by_hash(self, hash: str) -> Optional[int]:
self._cursor.execute(
"""--sql
SELECT part_id from simple_model
WHERE hash=?;
""",
(hash,),
)
rows = self._cursor.fetchone()
if not rows:
return None
return rows[0]
# may raise an exception
def _install_part(self, model_hash: str, model_path: Path, base_types: Set[BaseModelType] = set()) -> int:
(model_type, model_base) = self._probe_model(model_path)
if model_base is None:
model_bases = base_types
else:
# hack logic to test multiple base type compatibility
model_bases = set()
if model_type == ExtendedModelType("vae") and model_base == BaseModelType("sd-1"):
model_bases = {BaseModelType("sd-1"), BaseModelType("sd-2")}
elif model_base == BaseModelType("any"):
model_bases = {BaseModelType(x) for x in BASE_TYPES}
else:
model_bases = {BaseModelType(model_base)}
# make the storage name slightly easier to interpret
blob_name = model_type.value + "-" + str(uuid4())
if model_path.is_file() and model_path.suffix:
blob_name += model_path.suffix
destination = self._blob_directory / blob_name
assert not destination.exists(), f"a path named {destination} already exists"
if model_path.is_dir():
copytree(model_path, destination)
else:
copy(model_path, destination)
# create entry in the model_path table
self._cursor.execute(
"""--sql
INSERT INTO simple_model (
type, hash, path
)
VALUES (?, ?, ?);
""",
(model_type.value, model_hash, destination.as_posix()),
)
# id of the inserted row
part_id = self._cursor.lastrowid
# create base compatibility info
for base in model_bases:
self._cursor.execute(
"""--sql
INSERT INTO model_base (part_id, base)
VALUES (?, ?);
""",
(part_id, BaseModelType(base).value),
)
return part_id
def _create_tables(self):
self._cursor.execute(
f"""--sql
CREATE TABLE IF NOT EXISTS simple_model (
part_id INTEGER PRIMARY KEY,
type TEXT CHECK( type IN ({MODEL_SQL_ENUM}) ) NOT NULL,
hash TEXT UNIQUE,
refcount INTEGER NOT NULL DEFAULT '0',
path TEXT NOT NULL
);
"""
)
self._cursor.execute(
"""--sql
CREATE TABLE IF NOT EXISTS model_name (
model_id INTEGER PRIMARY KEY,
name TEXT NOT NULL,
source TEXT,
description TEXT,
is_pipeline BOOLEAN NOT NULL DEFAULT '0',
table_of_contents TEXT, -- this is the contents of model_index.json
UNIQUE(name)
);
"""
)
self._cursor.execute(
f"""--sql
CREATE TABLE IF NOT EXISTS model_base (
part_id TEXT NOT NULL,
base TEXT CHECK( base in ({BASE_SQL_ENUM}) ) NOT NULL,
FOREIGN KEY(part_id) REFERENCES simple_model(part_id),
UNIQUE(part_id,base)
);
"""
)
self._cursor.execute(
"""--sql
CREATE TABLE IF NOT EXISTS model_parts (
model_id INTEGER NOT NULL,
part_id INTEGER NOT NULL,
part_name TEXT DEFAULT 'root', -- to do: use enum
FOREIGN KEY(model_id) REFERENCES model_name(model_id),
FOREIGN KEY(part_id) REFERENCES simple_model(part_id),
UNIQUE(model_id, part_id, part_name)
);
"""
)
self._cursor.execute(
"""--sql
CREATE TRIGGER IF NOT EXISTS insert_model_refcount
AFTER INSERT
ON model_parts FOR EACH ROW
BEGIN
UPDATE simple_model SET refcount=refcount+1 WHERE simple_model.part_id=new.part_id;
END;
"""
)
self._cursor.execute(
"""--sql
CREATE TRIGGER IF NOT EXISTS delete_model_refcount
AFTER DELETE
ON model_parts FOR EACH ROW
BEGIN
UPDATE simple_model SET refcount=refcount-1 WHERE simple_model.part_id=old.part_id;
END;
"""
)
self._cursor.execute(
"""--sql
CREATE TRIGGER IF NOT EXISTS update_model_refcount
AFTER UPDATE
ON model_parts FOR EACH ROW
BEGIN
UPDATE simple_model SET refcount=refcount-1 WHERE simple_model.part_id=old.part_id;
UPDATE simple_model SET refcount=refcount+1 WHERE simple_model.part_id=new.part_id;
END;
"""
)
def _probe_model(self, model_path: Path) -> Tuple[ExtendedModelType, Optional[BaseModelType]]:
try:
model_info = ModelProbe.probe(model_path)
return (model_info.model_type, model_info.base_type)
except InvalidModelException:
return (ExtendedModelType(model_path.stem), None)
# Adapted from invokeai/backend/model_management/models/stable_diffusion.py
# This code should be moved into its own module
def _convert_ckpt(checkpoint_path: Path, output_path: Path) -> Path:
"""
Convert checkpoint model to diffusers format.
The converted model will be stored at output_path.
"""
app_config = InvokeAIAppConfig.get_config()
weights = checkpoint_path
model_info = ModelProbe.probe(checkpoint_path)
base_type = model_info.base_type
variant = model_info.variant_type
pipeline_class = StableDiffusionInpaintPipeline if variant == "inpaint" else StableDiffusionPipeline
config_file = app_config.legacy_conf_path / __select_ckpt_config(base_type, variant)
precision = torch_dtype(choose_torch_device())
model_base_to_model_type = {
BaseModelType.StableDiffusion1: "FrozenCLIPEmbedder",
BaseModelType.StableDiffusion2: "FrozenOpenCLIPEmbedder",
BaseModelType.StableDiffusionXL: "SDXL",
BaseModelType.StableDiffusionXLRefiner: "SDXL-Refiner",
}
convert_ckpt_to_diffusers(
weights.as_posix(),
output_path.as_posix(),
model_type=model_base_to_model_type[base_type],
model_version=base_type,
model_variant=variant,
original_config_file=config_file,
extract_ema=True,
scan_needed=True,
pipeline_class=pipeline_class,
from_safetensors=weights.suffix == ".safetensors",
precision=precision,
)
return output_path
def __select_ckpt_config(version: BaseModelType, variant: ModelVariantType):
ckpt_configs: Dict[BaseModelType, Dict[ModelVariantType, Optional[str]]] = {
BaseModelType.StableDiffusion1: {
ModelVariantType.Normal: "v1-inference.yaml",
ModelVariantType.Inpaint: "v1-inpainting-inference.yaml",
},
BaseModelType.StableDiffusion2: {
ModelVariantType.Normal: "v2-inference-v.yaml", # best guess, as we can't differentiate with base(512)
ModelVariantType.Inpaint: "v2-inpainting-inference.yaml",
ModelVariantType.Depth: "v2-midas-inference.yaml",
},
BaseModelType.StableDiffusionXL: {
ModelVariantType.Normal: "sd_xl_base.yaml",
ModelVariantType.Inpaint: None,
ModelVariantType.Depth: None,
},
BaseModelType.StableDiffusionXLRefiner: {
ModelVariantType.Normal: "sd_xl_refiner.yaml",
ModelVariantType.Inpaint: None,
ModelVariantType.Depth: None,
},
}
return ckpt_configs[version][variant]

View File

@@ -546,13 +546,11 @@ class StableDiffusionGeneratorPipeline(StableDiffusionPipeline):
# Handle ControlNet(s) and T2I-Adapter(s)
down_block_additional_residuals = None
mid_block_additional_residual = None
down_intrablock_additional_residuals = None
# if control_data is not None and t2i_adapter_data is not None:
# TODO(ryand): This is a limitation of the UNet2DConditionModel API, not a fundamental incompatibility
# between ControlNets and T2I-Adapters. We will try to fix this upstream in diffusers.
# raise Exception("ControlNet(s) and T2I-Adapter(s) cannot be used simultaneously (yet).")
# elif control_data is not None:
if control_data is not None:
if control_data is not None and t2i_adapter_data is not None:
# TODO(ryand): This is a limitation of the UNet2DConditionModel API, not a fundamental incompatibility
# between ControlNets and T2I-Adapters. We will try to fix this upstream in diffusers.
raise Exception("ControlNet(s) and T2I-Adapter(s) cannot be used simultaneously (yet).")
elif control_data is not None:
down_block_additional_residuals, mid_block_additional_residual = self.invokeai_diffuser.do_controlnet_step(
control_data=control_data,
sample=latent_model_input,
@@ -561,8 +559,7 @@ class StableDiffusionGeneratorPipeline(StableDiffusionPipeline):
total_step_count=total_step_count,
conditioning_data=conditioning_data,
)
# elif t2i_adapter_data is not None:
if t2i_adapter_data is not None:
elif t2i_adapter_data is not None:
accum_adapter_state = None
for single_t2i_adapter_data in t2i_adapter_data:
# Determine the T2I-Adapter weights for the current denoising step.
@@ -587,8 +584,7 @@ class StableDiffusionGeneratorPipeline(StableDiffusionPipeline):
for idx, value in enumerate(single_t2i_adapter_data.adapter_state):
accum_adapter_state[idx] += value * t2i_adapter_weight
# down_block_additional_residuals = accum_adapter_state
down_intrablock_additional_residuals = accum_adapter_state
down_block_additional_residuals = accum_adapter_state
uc_noise_pred, c_noise_pred = self.invokeai_diffuser.do_unet_step(
sample=latent_model_input,
@@ -597,9 +593,8 @@ class StableDiffusionGeneratorPipeline(StableDiffusionPipeline):
total_step_count=total_step_count,
conditioning_data=conditioning_data,
# extra:
down_block_additional_residuals=down_block_additional_residuals, # for ControlNet
mid_block_additional_residual=mid_block_additional_residual, # for ControlNet
down_intrablock_additional_residuals=down_intrablock_additional_residuals, # for T2I-Adapter
down_block_additional_residuals=down_block_additional_residuals,
mid_block_additional_residual=mid_block_additional_residual,
)
guidance_scale = conditioning_data.guidance_scale

View File

@@ -260,6 +260,7 @@ class InvokeAIDiffuserComponent:
conditioning_data,
**kwargs,
)
else:
(
unconditioned_next_x,
@@ -409,15 +410,6 @@ class InvokeAIDiffuserComponent:
uncond_down_block.append(_uncond_down)
cond_down_block.append(_cond_down)
uncond_down_intrablock, cond_down_intrablock = None, None
down_intrablock_additional_residuals = kwargs.pop("down_intrablock_additional_residuals", None)
if down_intrablock_additional_residuals is not None:
uncond_down_intrablock, cond_down_intrablock = [], []
for down_intrablock in down_intrablock_additional_residuals:
_uncond_down, _cond_down = down_intrablock.chunk(2)
uncond_down_intrablock.append(_uncond_down)
cond_down_intrablock.append(_cond_down)
uncond_mid_block, cond_mid_block = None, None
mid_block_additional_residual = kwargs.pop("mid_block_additional_residual", None)
if mid_block_additional_residual is not None:
@@ -449,7 +441,6 @@ class InvokeAIDiffuserComponent:
cross_attention_kwargs=cross_attention_kwargs,
down_block_additional_residuals=uncond_down_block,
mid_block_additional_residual=uncond_mid_block,
down_intrablock_additional_residuals=uncond_down_intrablock,
added_cond_kwargs=added_cond_kwargs,
**kwargs,
)
@@ -479,7 +470,6 @@ class InvokeAIDiffuserComponent:
cross_attention_kwargs=cross_attention_kwargs,
down_block_additional_residuals=cond_down_block,
mid_block_additional_residual=cond_mid_block,
down_intrablock_additional_residuals=cond_down_intrablock,
added_cond_kwargs=added_cond_kwargs,
**kwargs,
)
@@ -504,15 +494,6 @@ class InvokeAIDiffuserComponent:
uncond_down_block.append(_uncond_down)
cond_down_block.append(_cond_down)
uncond_down_intrablock, cond_down_intrablock = None, None
down_intrablock_additional_residuals = kwargs.pop("down_intrablock_additional_residuals", None)
if down_intrablock_additional_residuals is not None:
uncond_down_intrablock, cond_down_intrablock = [], []
for down_intrablock in down_intrablock_additional_residuals:
_uncond_down, _cond_down = down_intrablock.chunk(2)
uncond_down_intrablock.append(_uncond_down)
cond_down_intrablock.append(_cond_down)
uncond_mid_block, cond_mid_block = None, None
mid_block_additional_residual = kwargs.pop("mid_block_additional_residual", None)
if mid_block_additional_residual is not None:
@@ -541,7 +522,6 @@ class InvokeAIDiffuserComponent:
{"swap_cross_attn_context": cross_attn_processor_context},
down_block_additional_residuals=uncond_down_block,
mid_block_additional_residual=uncond_mid_block,
down_intrablock_additional_residuals=uncond_down_intrablock,
added_cond_kwargs=added_cond_kwargs,
**kwargs,
)
@@ -561,7 +541,6 @@ class InvokeAIDiffuserComponent:
{"swap_cross_attn_context": cross_attn_processor_context},
down_block_additional_residuals=cond_down_block,
mid_block_additional_residual=cond_mid_block,
down_intrablock_additional_residuals=cond_down_intrablock,
added_cond_kwargs=added_cond_kwargs,
**kwargs,
)

View File

@@ -41,7 +41,7 @@ from transformers import CLIPTextModel, CLIPTokenizer
# invokeai stuff
from invokeai.app.services.config import InvokeAIAppConfig, PagingArgumentParser
from invokeai.app.services.model_manager import ModelManagerService
from invokeai.app.services.model_manager_service import ModelManagerService
from invokeai.backend.model_management.models import SubModelType
if version.parse(version.parse(PIL.__version__).base_version) >= version.parse("9.1.0"):

View File

@@ -117,6 +117,9 @@ sd-1/embedding/EasyNegative:
recommended: True
sd-1/embedding/ahx-beta-453407d:
repo_id: sd-concepts-library/ahx-beta-453407d
sd-1/lora/LowRA:
path: https://civitai.com/api/download/models/63006
recommended: True
sd-1/lora/Ink scenery:
path: https://civitai.com/api/download/models/83390
sd-1/ip_adapter/ip_adapter_sd15:

View File

@@ -1,79 +0,0 @@
model:
target: cldm.cldm.ControlLDM
params:
linear_start: 0.00085
linear_end: 0.0120
num_timesteps_cond: 1
log_every_t: 200
timesteps: 1000
first_stage_key: "jpg"
cond_stage_key: "txt"
control_key: "hint"
image_size: 64
channels: 4
cond_stage_trainable: false
conditioning_key: crossattn
monitor: val/loss_simple_ema
scale_factor: 0.18215
use_ema: False
only_mid_control: False
control_stage_config:
target: cldm.cldm.ControlNet
params:
image_size: 32 # unused
in_channels: 4
hint_channels: 3
model_channels: 320
attention_resolutions: [ 4, 2, 1 ]
num_res_blocks: 2
channel_mult: [ 1, 2, 4, 4 ]
num_heads: 8
use_spatial_transformer: True
transformer_depth: 1
context_dim: 768
use_checkpoint: True
legacy: False
unet_config:
target: cldm.cldm.ControlledUnetModel
params:
image_size: 32 # unused
in_channels: 4
out_channels: 4
model_channels: 320
attention_resolutions: [ 4, 2, 1 ]
num_res_blocks: 2
channel_mult: [ 1, 2, 4, 4 ]
num_heads: 8
use_spatial_transformer: True
transformer_depth: 1
context_dim: 768
use_checkpoint: True
legacy: False
first_stage_config:
target: ldm.models.autoencoder.AutoencoderKL
params:
embed_dim: 4
monitor: val/rec_loss
ddconfig:
double_z: true
z_channels: 4
resolution: 256
in_channels: 3
out_ch: 3
ch: 128
ch_mult:
- 1
- 2
- 4
- 4
num_res_blocks: 2
attn_resolutions: []
dropout: 0.0
lossconfig:
target: torch.nn.Identity
cond_stage_config:
target: ldm.modules.encoders.modules.FrozenCLIPEmbedder

View File

@@ -1,85 +0,0 @@
model:
target: cldm.cldm.ControlLDM
params:
linear_start: 0.00085
linear_end: 0.0120
num_timesteps_cond: 1
log_every_t: 200
timesteps: 1000
first_stage_key: "jpg"
cond_stage_key: "txt"
control_key: "hint"
image_size: 64
channels: 4
cond_stage_trainable: false
conditioning_key: crossattn
monitor: val/loss_simple_ema
scale_factor: 0.18215
use_ema: False
only_mid_control: False
control_stage_config:
target: cldm.cldm.ControlNet
params:
use_checkpoint: True
image_size: 32 # unused
in_channels: 4
hint_channels: 3
model_channels: 320
attention_resolutions: [ 4, 2, 1 ]
num_res_blocks: 2
channel_mult: [ 1, 2, 4, 4 ]
num_head_channels: 64 # need to fix for flash-attn
use_spatial_transformer: True
use_linear_in_transformer: True
transformer_depth: 1
context_dim: 1024
legacy: False
unet_config:
target: cldm.cldm.ControlledUnetModel
params:
use_checkpoint: True
image_size: 32 # unused
in_channels: 4
out_channels: 4
model_channels: 320
attention_resolutions: [ 4, 2, 1 ]
num_res_blocks: 2
channel_mult: [ 1, 2, 4, 4 ]
num_head_channels: 64 # need to fix for flash-attn
use_spatial_transformer: True
use_linear_in_transformer: True
transformer_depth: 1
context_dim: 1024
legacy: False
first_stage_config:
target: ldm.models.autoencoder.AutoencoderKL
params:
embed_dim: 4
monitor: val/rec_loss
ddconfig:
#attn_type: "vanilla-xformers"
double_z: true
z_channels: 4
resolution: 256
in_channels: 3
out_ch: 3
ch: 128
ch_mult:
- 1
- 2
- 4
- 4
num_res_blocks: 2
attn_resolutions: []
dropout: 0.0
lossconfig:
target: torch.nn.Identity
cond_stage_config:
target: ldm.modules.encoders.modules.FrozenOpenCLIPEmbedder
params:
freeze: True
layer: "penultimate"

View File

@@ -50,7 +50,7 @@ def invokeai_is_running() -> bool:
return False
def welcome(latest_release: str, latest_prerelease: str):
def welcome(versions: dict):
@group()
def text():
yield f"InvokeAI Version: [bold yellow]{__version__}"
@@ -61,10 +61,9 @@ def welcome(latest_release: str, latest_prerelease: str):
yield "making the web frontend unusable. Please downgrade to the latest release if this happens."
yield ""
yield "[bold yellow]Options:"
yield f"""[1] Update to the latest [bold]official release[/bold] ([italic]{latest_release}[/italic])
[2] Update to the latest [bold]pre-release[/bold] (may be buggy; caveat emptor!) ([italic]{latest_prerelease}[/italic])
[3] Manually enter the [bold]tag name[/bold] for the version you wish to update to
[4] Manually enter the [bold]branch name[/bold] for the version you wish to update to"""
yield f"""[1] Update to the latest official release ([italic]{versions[0]['tag_name']}[/italic])
[2] Manually enter the [bold]tag name[/bold] for the version you wish to update to
[3] Manually enter the [bold]branch name[/bold] for the version you wish to update to"""
console.rule()
print(
@@ -93,17 +92,12 @@ def get_extras():
def main():
versions = get_versions()
released_versions = [x for x in versions if not (x["draft"] or x["prerelease"])]
prerelease_versions = [x for x in versions if not x["draft"] and x["prerelease"]]
latest_release = released_versions[0]["tag_name"] if len(released_versions) else None
latest_prerelease = prerelease_versions[0]["tag_name"] if len(prerelease_versions) else None
if invokeai_is_running():
print(":exclamation: [bold red]Please terminate all running instances of InvokeAI before updating.[/red bold]")
input("Press any key to continue...")
return
welcome(latest_release, latest_prerelease)
welcome(versions)
tag = None
branch = None
@@ -111,13 +105,11 @@ def main():
choice = Prompt.ask("Choice:", choices=["1", "2", "3", "4"], default="1")
if choice == "1":
release = latest_release
release = versions[0]["tag_name"]
elif choice == "2":
release = latest_prerelease
elif choice == "3":
while not tag:
tag = Prompt.ask("Enter an InvokeAI tag name")
elif choice == "4":
elif choice == "3":
while not branch:
branch = Prompt.ask("Enter an InvokeAI branch name")

View File

@@ -9,7 +9,7 @@ import curses
import sys
from argparse import Namespace
from pathlib import Path
from typing import List
from typing import List, Optional
import npyscreen
from npyscreen import widget
@@ -131,7 +131,6 @@ class mergeModelsForm(npyscreen.FormMultiPageAction):
values=[
"Models Built on SD-1.x",
"Models Built on SD-2.x",
"Models Built on SDXL",
],
value=[self.current_base],
columns=4,
@@ -274,10 +273,9 @@ class mergeModelsForm(npyscreen.FormMultiPageAction):
else:
interp = self.interpolations[self.merge_method.value[0]]
bases = ["sd-1", "sd-2", "sdxl"]
args = dict(
model_names=models,
base_model=BaseModelType(bases[self.base_select.value[0]]),
base_model=tuple(BaseModelType)[self.base_select.value[0]],
alpha=self.alpha.value,
interp=interp,
force=self.force.value,
@@ -311,7 +309,7 @@ class mergeModelsForm(npyscreen.FormMultiPageAction):
else:
return True
def get_model_names(self, base_model: BaseModelType = BaseModelType.StableDiffusion1) -> List[str]:
def get_model_names(self, base_model: Optional[BaseModelType] = None) -> List[str]:
model_names = [
info["model_name"]
for info in self.model_manager.list_models(model_type=ModelType.Main, base_model=base_model)
@@ -320,8 +318,7 @@ class mergeModelsForm(npyscreen.FormMultiPageAction):
return sorted(model_names)
def _populate_models(self, value=None):
bases = ["sd-1", "sd-2", "sdxl"]
base_model = BaseModelType(bases[value[0]])
base_model = tuple(BaseModelType)[value[0]]
self.model_names = self.get_model_names(base_model)
models_plus_none = self.model_names.copy()

View File

@@ -4,14 +4,14 @@
"reportBugLabel": "Fehler melden",
"settingsLabel": "Einstellungen",
"img2img": "Bild zu Bild",
"nodes": "Knoten Editor",
"nodes": "Knoten",
"langGerman": "Deutsch",
"nodesDesc": "Ein knotenbasiertes System, für die Erzeugung von Bildern, ist derzeit in der Entwicklung. Bleiben Sie gespannt auf Updates zu dieser fantastischen Funktion.",
"postProcessing": "Nachbearbeitung",
"postProcessDesc1": "InvokeAI bietet eine breite Palette von Nachbearbeitungsfunktionen. Bildhochskalierung und Gesichtsrekonstruktion sind bereits in der WebUI verfügbar. Sie können sie über das Menü Erweiterte Optionen der Reiter Text in Bild und Bild in Bild aufrufen. Sie können Bilder auch direkt bearbeiten, indem Sie die Schaltflächen für Bildaktionen oberhalb der aktuellen Bildanzeige oder im Viewer verwenden.",
"postProcessDesc2": "Eine spezielle Benutzeroberfläche wird in Kürze veröffentlicht, um erweiterte Nachbearbeitungs-Workflows zu erleichtern.",
"postProcessDesc3": "Die InvokeAI Kommandozeilen-Schnittstelle bietet verschiedene andere Funktionen, darunter Embiggen.",
"training": "trainieren",
"training": "Training",
"trainingDesc1": "Ein spezieller Arbeitsablauf zum Trainieren Ihrer eigenen Embeddings und Checkpoints mit Textual Inversion und Dreambooth über die Weboberfläche.",
"trainingDesc2": "InvokeAI unterstützt bereits das Training von benutzerdefinierten Embeddings mit Textual Inversion unter Verwendung des Hauptskripts.",
"upload": "Hochladen",
@@ -38,14 +38,14 @@
"statusUpscalingESRGAN": "Hochskalierung (ESRGAN)",
"statusLoadingModel": "Laden des Modells",
"statusModelChanged": "Modell Geändert",
"cancel": "Abbrechen",
"cancel": "Abbruch",
"accept": "Annehmen",
"back": "Zurück",
"langEnglish": "Englisch",
"langDutch": "Niederländisch",
"langFrench": "Französisch",
"langItalian": "Italienisch",
"langPortuguese": "Portugiesisch",
"langPortuguese": "Portogisisch",
"langRussian": "Russisch",
"langUkranian": "Ukrainisch",
"hotkeysLabel": "Tastenkombinationen",
@@ -58,44 +58,12 @@
"langArabic": "Arabisch",
"langKorean": "Koreanisch",
"langHebrew": "Hebräisch",
"langSpanish": "Spanisch",
"t2iAdapter": "T2I Adapter",
"communityLabel": "Gemeinschaft",
"dontAskMeAgain": "Frag mich nicht nochmal",
"loadingInvokeAI": "Lade Invoke AI",
"statusMergedModels": "Modelle zusammengeführt",
"areYouSure": "Bist du dir sicher?",
"statusConvertingModel": "Model konvertieren",
"on": "An",
"nodeEditor": "Knoten Editor",
"statusMergingModels": "Modelle zusammenführen",
"langSimplifiedChinese": "Vereinfachtes Chinesisch",
"ipAdapter": "IP Adapter",
"controlAdapter": "Control Adapter",
"auto": "Automatisch",
"controlNet": "ControlNet",
"imageFailedToLoad": "Kann Bild nicht laden",
"statusModelConverted": "Model konvertiert",
"modelManager": "Model Manager",
"lightMode": "Heller Modus",
"generate": "Erstellen",
"learnMore": "Mehr lernen",
"darkMode": "Dunkler Modus",
"loading": "Lade",
"random": "Zufall",
"batch": "Stapel-Manager",
"advanced": "Erweitert",
"langBrPortuguese": "Portugiesisch (Brasilien)",
"unifiedCanvas": "Einheitliche Leinwand",
"openInNewTab": "In einem neuem Tab öffnen",
"statusProcessing": "wird bearbeitet",
"linear": "Linear",
"imagePrompt": "Bild Prompt"
"langSpanish": "Spanisch"
},
"gallery": {
"generations": "Erzeugungen",
"showGenerations": "Zeige Erzeugnisse",
"uploads": "Uploads",
"uploads": "Hochgelades",
"showUploads": "Zeige Uploads",
"galleryImageSize": "Bildgröße",
"galleryImageResetSize": "Größe zurücksetzen",
@@ -105,15 +73,7 @@
"singleColumnLayout": "Einspaltiges Layout",
"allImagesLoaded": "Alle Bilder geladen",
"loadMore": "Mehr laden",
"noImagesInGallery": "Keine Bilder in der Galerie",
"loading": "Lade",
"preparingDownload": "bereite Download vor",
"preparingDownloadFailed": "Problem beim Download vorbereiten",
"deleteImage": "Lösche Bild",
"images": "Bilder",
"copy": "Kopieren",
"download": "Runterladen",
"setCurrentImage": "Setze aktuelle Bild"
"noImagesInGallery": "Keine Bilder in der Galerie"
},
"hotkeys": {
"keyboardShortcuts": "Tastenkürzel",
@@ -122,8 +82,7 @@
"galleryHotkeys": "Galerie Tastenkürzel",
"unifiedCanvasHotkeys": "Unified Canvas Tastenkürzel",
"invoke": {
"desc": "Ein Bild erzeugen",
"title": "Invoke"
"desc": "Ein Bild erzeugen"
},
"cancel": {
"title": "Abbrechen",
@@ -207,7 +166,7 @@
},
"toggleGalleryPin": {
"title": "Galerie anheften umschalten",
"desc": "Heftet die Galerie an die Benutzeroberfläche bzw. löst die sie"
"desc": "Heftet die Galerie an die Benutzeroberfläche bzw. löst die sie."
},
"increaseGalleryThumbSize": {
"title": "Größe der Galeriebilder erhöhen",
@@ -320,10 +279,6 @@
"acceptStagingImage": {
"title": "Staging-Bild akzeptieren",
"desc": "Akzeptieren Sie das aktuelle Bild des Staging-Bereichs"
},
"nodesHotkeys": "Knoten Tastenkürzel",
"addNodes": {
"title": "Knotenpunkt hinzufügen"
}
},
"modelManager": {
@@ -340,7 +295,7 @@
"config": "Konfiguration",
"configValidationMsg": "Pfad zur Konfigurationsdatei Ihres Models.",
"modelLocation": "Ort des Models",
"modelLocationValidationMsg": "Pfad zum Speicherort Ihres Models",
"modelLocationValidationMsg": "Pfad zum Speicherort Ihres Models.",
"vaeLocation": "VAE Ort",
"vaeLocationValidationMsg": "Pfad zum Speicherort Ihres VAE.",
"width": "Breite",
@@ -373,63 +328,11 @@
"deleteModel": "Model löschen",
"deleteConfig": "Konfiguration löschen",
"deleteMsg1": "Möchten Sie diesen Model-Eintrag wirklich aus InvokeAI löschen?",
"deleteMsg2": "Dadurch WIRD das Modell von der Festplatte gelöscht WENN es im InvokeAI Root Ordner liegt. Wenn es in einem anderem Ordner liegt wird das Modell NICHT von der Festplatte gelöscht.",
"deleteMsg2": "Dadurch wird die Modellprüfpunktdatei nicht von Ihrer Festplatte gelöscht. Sie können sie bei Bedarf erneut hinzufügen.",
"customConfig": "Benutzerdefinierte Konfiguration",
"invokeRoot": "InvokeAI Ordner",
"formMessageDiffusersVAELocationDesc": "Falls nicht angegeben, sucht InvokeAI nach der VAE-Datei innerhalb des oben angegebenen Modell Speicherortes.",
"checkpointModels": "Kontrollpunkte",
"convert": "Umwandeln",
"addCheckpointModel": "Kontrollpunkt / SafeTensors Modell hinzufügen",
"allModels": "Alle Modelle",
"alpha": "Alpha",
"addDifference": "Unterschied hinzufügen",
"convertToDiffusersHelpText2": "Bei diesem Vorgang wird Ihr Eintrag im Modell-Manager durch die Diffusor-Version desselben Modells ersetzt.",
"convertToDiffusersHelpText5": "Bitte stellen Sie sicher, dass Sie über genügend Speicherplatz verfügen. Die Modelle sind in der Regel zwischen 2 GB und 7 GB groß.",
"convertToDiffusersHelpText3": "Ihre Kontrollpunktdatei auf der Festplatte wird NICHT gelöscht oder in irgendeiner Weise verändert. Sie können Ihren Kontrollpunkt dem Modell-Manager wieder hinzufügen, wenn Sie dies wünschen.",
"convertToDiffusersHelpText4": "Dies ist ein einmaliger Vorgang. Er kann je nach den Spezifikationen Ihres Computers etwa 30-60 Sekunden dauern.",
"convertToDiffusersHelpText6": "Möchten Sie dieses Modell konvertieren?",
"custom": "Benutzerdefiniert",
"modelConverted": "Modell umgewandelt",
"inverseSigmoid": "Inverses Sigmoid",
"invokeAIFolder": "Invoke AI Ordner",
"formMessageDiffusersModelLocationDesc": "Bitte geben Sie mindestens einen an.",
"customSaveLocation": "Benutzerdefinierter Speicherort",
"formMessageDiffusersVAELocation": "VAE Speicherort",
"mergedModelCustomSaveLocation": "Benutzerdefinierter Pfad",
"modelMergeHeaderHelp2": "Nur Diffusers sind für die Zusammenführung verfügbar. Wenn Sie ein Kontrollpunktmodell zusammenführen möchten, konvertieren Sie es bitte zuerst in Diffusers.",
"manual": "Manuell",
"modelManager": "Modell Manager",
"modelMergeAlphaHelp": "Alpha steuert die Überblendungsstärke für die Modelle. Niedrigere Alphawerte führen zu einem geringeren Einfluss des zweiten Modells.",
"modelMergeHeaderHelp1": "Sie können bis zu drei verschiedene Modelle miteinander kombinieren, um eine Mischung zu erstellen, die Ihren Bedürfnissen entspricht.",
"ignoreMismatch": "Unstimmigkeiten zwischen ausgewählten Modellen ignorieren",
"model": "Modell",
"convertToDiffusersSaveLocation": "Speicherort",
"pathToCustomConfig": "Pfad zur benutzerdefinierten Konfiguration",
"v1": "v1",
"modelMergeInterpAddDifferenceHelp": "In diesem Modus wird zunächst Modell 3 von Modell 2 subtrahiert. Die resultierende Version wird mit Modell 1 mit dem oben eingestellten Alphasatz gemischt.",
"modelTwo": "Modell 2",
"modelOne": "Modell 1",
"v2_base": "v2 (512px)",
"scanForModels": "Nach Modellen suchen",
"name": "Name",
"safetensorModels": "SafeTensors",
"pickModelType": "Modell Typ auswählen",
"sameFolder": "Gleicher Ordner",
"modelThree": "Modell 3",
"v2_768": "v2 (768px)",
"none": "Nix",
"repoIDValidationMsg": "Online Repo Ihres Modells",
"vaeRepoIDValidationMsg": "Online Repo Ihrer VAE",
"importModels": "Importiere Modelle",
"merge": "Zusammenführen",
"addDiffuserModel": "Diffusers hinzufügen",
"advanced": "Erweitert",
"closeAdvanced": "Schließe Erweitert",
"convertingModelBegin": "Konvertiere Modell. Bitte warten.",
"customConfigFileLocation": "Benutzerdefinierte Konfiguration Datei Speicherort",
"baseModel": "Basis Modell",
"convertToDiffusers": "Konvertiere zu Diffusers",
"diffusersModels": "Diffusers"
"checkpointModels": "Kontrollpunkte"
},
"parameters": {
"images": "Bilder",
@@ -449,7 +352,7 @@
"type": "Art",
"strength": "Stärke",
"upscaling": "Hochskalierung",
"upscale": "Hochskalieren (Shift + U)",
"upscale": "Hochskalieren",
"upscaleImage": "Bild hochskalieren",
"scale": "Maßstab",
"otherOptions": "Andere Optionen",
@@ -466,7 +369,7 @@
"seamCorrectionHeader": "Nahtkorrektur",
"infillScalingHeader": "Infill und Skalierung",
"img2imgStrength": "Bild-zu-Bild-Stärke",
"toggleLoopback": "Loopback umschalten",
"toggleLoopback": "Toggle Loopback",
"sendTo": "Senden an",
"sendToImg2Img": "Senden an Bild zu Bild",
"sendToUnifiedCanvas": "Senden an Unified Canvas",
@@ -481,20 +384,8 @@
"initialImage": "Ursprüngliches Bild",
"showOptionsPanel": "Optionsleiste zeigen",
"cancel": {
"setType": "Abbruchart festlegen",
"immediate": "Sofort abbrechen",
"schedule": "Abbrechen nach der aktuellen Iteration",
"isScheduled": "Abbrechen"
},
"copyImage": "Bild kopieren",
"denoisingStrength": "Stärke der Entrauschung",
"symmetry": "Symmetrie",
"imageToImage": "Bild zu Bild",
"info": "Information",
"general": "Allgemein",
"hiresStrength": "High Res Stärke",
"hidePreview": "Verstecke Vorschau",
"showPreview": "Zeige Vorschau"
"setType": "Abbruchart festlegen"
}
},
"settings": {
"displayInProgress": "Bilder in Bearbeitung anzeigen",
@@ -505,9 +396,7 @@
"resetWebUI": "Web-Oberfläche zurücksetzen",
"resetWebUIDesc1": "Das Zurücksetzen der Web-Oberfläche setzt nur den lokalen Cache des Browsers mit Ihren Bildern und gespeicherten Einstellungen zurück. Es werden keine Bilder von der Festplatte gelöscht.",
"resetWebUIDesc2": "Wenn die Bilder nicht in der Galerie angezeigt werden oder etwas anderes nicht funktioniert, versuchen Sie bitte, die Einstellungen zurückzusetzen, bevor Sie einen Fehler auf GitHub melden.",
"resetComplete": "Die Web-Oberfläche wurde zurückgesetzt.",
"models": "Modelle",
"useSlidersForAll": "Schieberegler für alle Optionen verwenden"
"resetComplete": "Die Web-Oberfläche wurde zurückgesetzt. Aktualisieren Sie die Seite, um sie neu zu laden."
},
"toast": {
"tempFoldersEmptied": "Temp-Ordner geleert",
@@ -517,7 +406,7 @@
"imageCopied": "Bild kopiert",
"imageLinkCopied": "Bildlink kopiert",
"imageNotLoaded": "Kein Bild geladen",
"imageNotLoadedDesc": "Konnte kein Bild finden",
"imageNotLoadedDesc": "Kein Bild gefunden, das an das Bild zu Bild-Modul gesendet werden kann",
"imageSavedToGallery": "Bild in die Galerie gespeichert",
"canvasMerged": "Leinwand zusammengeführt",
"sentToImageToImage": "Gesendet an Bild zu Bild",
@@ -587,7 +476,7 @@
"autoSaveToGallery": "Automatisch in Galerie speichern",
"saveBoxRegionOnly": "Nur Auswahlbox speichern",
"limitStrokesToBox": "Striche auf Box beschränken",
"showCanvasDebugInfo": "Zusätzliche Informationen zur Leinwand anzeigen",
"showCanvasDebugInfo": "Leinwand-Debug-Infos anzeigen",
"clearCanvasHistory": "Leinwand-Verlauf löschen",
"clearHistory": "Verlauf löschen",
"clearCanvasHistoryMessage": "Wenn Sie den Verlauf der Leinwand löschen, bleibt die aktuelle Leinwand intakt, aber der Verlauf der Rückgängig- und Wiederherstellung wird unwiderruflich gelöscht.",
@@ -612,17 +501,14 @@
"betaClear": "Löschen",
"betaDarkenOutside": "Außen abdunkeln",
"betaLimitToBox": "Begrenzung auf das Feld",
"betaPreserveMasked": "Maskiertes bewahren",
"antialiasing": "Kantenglättung",
"showResultsOn": "Zeige Ergebnisse (An)",
"showResultsOff": "Zeige Ergebnisse (Aus)"
"betaPreserveMasked": "Maskiertes bewahren"
},
"accessibility": {
"modelSelect": "Model Auswahl",
"uploadImage": "Bild hochladen",
"previousImage": "Voriges Bild",
"useThisParameter": "Benutze diesen Parameter",
"copyMetadataJson": "Kopiere Metadaten JSON",
"copyMetadataJson": "Kopiere metadata JSON",
"zoomIn": "Vergrößern",
"rotateClockwise": "Im Uhrzeigersinn drehen",
"flipHorizontally": "Horizontal drehen",
@@ -631,191 +517,9 @@
"toggleAutoscroll": "Auroscroll ein/ausschalten",
"toggleLogViewer": "Log Betrachter ein/ausschalten",
"showOptionsPanel": "Zeige Optionen",
"reset": "Zurücksetzten",
"reset": "Zurücksetzen",
"nextImage": "Nächstes Bild",
"zoomOut": "Verkleinern",
"rotateCounterClockwise": "Gegen den Uhrzeigersinn verdrehen",
"showGalleryPanel": "Galeriefenster anzeigen",
"exitViewer": "Betrachten beenden",
"menu": "Menü",
"loadMore": "Mehr laden",
"invokeProgressBar": "Invoke Fortschrittsanzeige"
},
"boards": {
"autoAddBoard": "Automatisches Hinzufügen zum Ordner",
"topMessage": "Dieser Ordner enthält Bilder die in den folgenden Funktionen verwendet werden:",
"move": "Bewegen",
"menuItemAutoAdd": "Automatisches Hinzufügen zu diesem Ordner",
"myBoard": "Meine Ordner",
"searchBoard": "Ordner durchsuchen...",
"noMatching": "Keine passenden Ordner",
"selectBoard": "Ordner aussuchen",
"cancel": "Abbrechen",
"addBoard": "Ordner hinzufügen",
"uncategorized": "Nicht kategorisiert",
"downloadBoard": "Ordner runterladen",
"changeBoard": "Ordner wechseln",
"loading": "Laden...",
"clearSearch": "Suche leeren",
"bottomMessage": "Durch das Löschen dieses Ordners und seiner Bilder werden alle Funktionen zurückgesetzt, die sie derzeit verwenden."
},
"controlnet": {
"showAdvanced": "Zeige Erweitert",
"contentShuffleDescription": "Mischt den Inhalt von einem Bild",
"addT2IAdapter": "$t(common.t2iAdapter) hinzufügen",
"importImageFromCanvas": "Importieren Bild von Zeichenfläche",
"lineartDescription": "Konvertiere Bild zu Lineart",
"importMaskFromCanvas": "Importiere Maske von Zeichenfläche",
"hed": "HED",
"hideAdvanced": "Verstecke Erweitert",
"contentShuffle": "Inhalt mischen",
"controlNetEnabledT2IDisabled": "$t(common.controlNet) ist aktiv, $t(common.t2iAdapter) ist deaktiviert",
"ipAdapterModel": "Adapter Modell",
"beginEndStepPercent": "Start / Ende Step Prozent",
"duplicate": "Kopieren",
"f": "F",
"h": "H",
"depthMidasDescription": "Tiefenmap erstellen mit Midas",
"controlnet": "$t(controlnet.controlAdapter_one) #{{number}} ($t(common.controlNet))",
"t2iEnabledControlNetDisabled": "$t(common.t2iAdapter) ist aktiv, $t(common.controlNet) ist deaktiviert",
"weight": "Breite",
"selectModel": "Wähle ein Modell",
"depthMidas": "Tiefe (Midas)",
"w": "W",
"addControlNet": "$t(common.controlNet) hinzufügen",
"none": "Kein",
"incompatibleBaseModel": "Inkompatibles Basismodell:",
"enableControlnet": "Aktiviere ControlNet",
"detectResolution": "Auflösung erkennen",
"controlNetT2IMutexDesc": "$t(common.controlNet) und $t(common.t2iAdapter) zur gleichen Zeit wird nicht unterstützt.",
"ip_adapter": "$t(controlnet.controlAdapter_one) #{{number}} ($t(common.ipAdapter))",
"fill": "Füllen",
"addIPAdapter": "$t(common.ipAdapter) hinzufügen",
"colorMapDescription": "Erstelle eine Farbkarte von diesem Bild",
"t2i_adapter": "$t(controlnet.controlAdapter_one) #{{number}} ($t(common.t2iAdapter))",
"imageResolution": "Bild Auflösung",
"depthZoe": "Tiefe (Zoe)",
"colorMap": "Farbe",
"lowThreshold": "Niedrige Schwelle",
"highThreshold": "Hohe Schwelle",
"toggleControlNet": "Schalten ControlNet um",
"delete": "Löschen",
"controlAdapter_one": "Control Adapter",
"controlAdapter_other": "Control Adapters",
"colorMapTileSize": "Tile Größe",
"depthZoeDescription": "Tiefenmap erstellen mit Zoe",
"setControlImageDimensions": "Setze Control Bild Auflösung auf Breite/Höhe",
"handAndFace": "Hand und Gesicht",
"enableIPAdapter": "Aktiviere IP Adapter",
"resize": "Größe ändern",
"resetControlImage": "Zurücksetzen vom Referenz Bild",
"balanced": "Ausgewogen",
"prompt": "Prompt",
"resizeMode": "Größenänderungsmodus",
"processor": "Prozessor",
"saveControlImage": "Speichere Referenz Bild",
"safe": "Speichern",
"ipAdapterImageFallback": "Kein IP Adapter Bild ausgewählt",
"resetIPAdapterImage": "Zurücksetzen vom IP Adapter Bild"
},
"queue": {
"status": "Status",
"cancelTooltip": "Aktuellen Aufgabe abbrechen",
"queueEmpty": "Warteschlange leer",
"in_progress": "In Arbeit",
"queueFront": "An den Anfang der Warteschlange tun",
"completed": "Fertig",
"queueBack": "In die Warteschlange",
"clearFailed": "Probleme beim leeren der Warteschlange",
"clearSucceeded": "Warteschlange geleert",
"pause": "Pause",
"cancelSucceeded": "Auftrag abgebrochen",
"queue": "Warteschlange",
"batch": "Stapel",
"pending": "Ausstehend",
"clear": "Leeren",
"prune": "Leeren",
"total": "Gesamt",
"canceled": "Abgebrochen",
"clearTooltip": "Abbrechen und alle Aufträge leeren",
"current": "Aktuell",
"failed": "Fehler",
"cancelItem": "Abbruch Auftrag",
"next": "Nächste",
"cancel": "Abbruch",
"session": "Sitzung",
"queueTotal": "{{total}} Gesamt",
"resume": "Wieder aufnehmen",
"item": "Auftrag",
"notReady": "Warteschlange noch nicht bereit",
"batchValues": "Stapel Werte",
"queueCountPrediction": "{{predicted}} zur Warteschlange hinzufügen",
"queuedCount": "{{pending}} wartenden Elemente",
"clearQueueAlertDialog": "Die Warteschlange leeren, stoppt den aktuellen Prozess und leert die Warteschlange komplett.",
"completedIn": "Fertig in",
"cancelBatchSucceeded": "Stapel abgebrochen",
"cancelBatch": "Stapel stoppen",
"enqueueing": "Stapel in der Warteschlange",
"queueMaxExceeded": "Maximum von {{max_queue_size}} Elementen erreicht, würde {{skip}} Elemente überspringen",
"cancelBatchFailed": "Problem beim Abbruch vom Stapel",
"clearQueueAlertDialog2": "bist du sicher die Warteschlange zu leeren?"
},
"metadata": {
"negativePrompt": "Negativ Beschreibung",
"metadata": "Meta-Data",
"strength": "Bild zu Bild stärke",
"imageDetails": "Bild Details",
"model": "Modell",
"noImageDetails": "Keine Bild Details gefunden",
"cfgScale": "CFG-Skala",
"fit": "Bild zu Bild passen",
"height": "Höhe",
"noMetaData": "Keine Meta-Data gefunden",
"width": "Breite",
"createdBy": "Erstellt von",
"steps": "Schritte"
},
"popovers": {
"noiseUseCPU": {
"heading": "Nutze Prozessor rauschen"
},
"paramModel": {
"heading": "Modell"
},
"paramIterations": {
"heading": "Iterationen"
},
"paramCFGScale": {
"heading": "CFG-Skala"
},
"paramSteps": {
"heading": "Schritte"
},
"lora": {
"heading": "LoRA Gewichte"
},
"infillMethod": {
"heading": "Füllmethode"
},
"paramVAE": {
"heading": "VAE"
}
},
"ui": {
"lockRatio": "Verhältnis sperren",
"hideProgressImages": "Verstecke Prozess Bild",
"showProgressImages": "Zeige Prozess Bild"
},
"invocationCache": {
"disable": "Deaktivieren",
"misses": "Cache Nötig",
"hits": "Cache Treffer",
"enable": "Aktivieren",
"clear": "Leeren"
},
"embedding": {
"noMatchingEmbedding": "Keine passenden Embeddings",
"addEmbedding": "Embedding hinzufügen",
"incompatibleModel": "Inkompatibles Basismodell:"
"rotateCounterClockwise": "Gegen den Uhrzeigersinn verdrehen"
}
}

View File

@@ -70,8 +70,8 @@
"langDutch": "Nederlands",
"langEnglish": "English",
"langFrench": "Français",
"langGerman": "German",
"langHebrew": "Hebrew",
"langGerman": "Deutsch",
"langHebrew": "עברית",
"langItalian": "Italiano",
"langJapanese": "日本語",
"langKorean": "한국어",
@@ -722,9 +722,7 @@
"noMatchingModels": "No matching Models",
"noModelsAvailable": "No models available",
"selectLoRA": "Select a LoRA",
"selectModel": "Select a Model",
"noLoRAsInstalled": "No LoRAs installed",
"noRefinerModelsInstalled": "No SDXL Refiner models installed"
"selectModel": "Select a Model"
},
"nodes": {
"addNode": "Add Node",
@@ -1124,6 +1122,7 @@
"clearIntermediates": "Clear Intermediates",
"clearIntermediatesWithCount_one": "Clear {{count}} Intermediate",
"clearIntermediatesWithCount_other": "Clear {{count}} Intermediates",
"clearIntermediatesWithCount_zero": "No Intermediates to Clear",
"intermediatesCleared_one": "Cleared {{count}} Intermediate",
"intermediatesCleared_other": "Cleared {{count}} Intermediates",
"intermediatesClearedFailed": "Problem Clearing Intermediates"
@@ -1258,15 +1257,11 @@
},
"compositingBlur": {
"heading": "Blur",
"paragraphs": [
"The blur radius of the mask."
]
"paragraphs": ["The blur radius of the mask."]
},
"compositingBlurMethod": {
"heading": "Blur Method",
"paragraphs": [
"The method of blur applied to the masked area."
]
"paragraphs": ["The method of blur applied to the masked area."]
},
"compositingCoherencePass": {
"heading": "Coherence Pass",
@@ -1276,9 +1271,7 @@
},
"compositingCoherenceMode": {
"heading": "Mode",
"paragraphs": [
"The mode of the Coherence Pass."
]
"paragraphs": ["The mode of the Coherence Pass."]
},
"compositingCoherenceSteps": {
"heading": "Steps",
@@ -1296,9 +1289,7 @@
},
"compositingMaskAdjustments": {
"heading": "Mask Adjustments",
"paragraphs": [
"Adjust the mask."
]
"paragraphs": ["Adjust the mask."]
},
"controlNetBeginEnd": {
"heading": "Begin / End Step Percentage",
@@ -1356,9 +1347,7 @@
},
"infillMethod": {
"heading": "Infill Method",
"paragraphs": [
"Method to infill the selected area."
]
"paragraphs": ["Method to infill the selected area."]
},
"lora": {
"heading": "LoRA Weight",

View File

@@ -1025,8 +1025,7 @@
"imageFieldDescription": "Le immagini possono essere passate tra i nodi.",
"unableToParseEdge": "Impossibile analizzare il bordo",
"latentsCollectionDescription": "Le immagini latenti possono essere passate tra i nodi.",
"imageCollection": "Raccolta Immagini",
"loRAModelField": "LoRA"
"imageCollection": "Raccolta Immagini"
},
"boards": {
"autoAddBoard": "Aggiungi automaticamente bacheca",
@@ -1117,8 +1116,7 @@
"controlAdapter_other": "Adattatori di Controllo",
"megaControl": "Mega ControlNet",
"minConfidence": "Confidenza minima",
"scribble": "Scribble",
"amult": "Angolo di illuminazione"
"scribble": "Scribble"
},
"queue": {
"queueFront": "Aggiungi all'inizio della coda",
@@ -1193,9 +1191,7 @@
"noLoRAsAvailable": "Nessun LoRA disponibile",
"noModelsAvailable": "Nessun modello disponibile",
"selectModel": "Seleziona un modello",
"selectLoRA": "Seleziona un LoRA",
"noRefinerModelsInstalled": "Nessun modello SDXL Refiner installato",
"noLoRAsInstalled": "Nessun LoRA installato"
"selectLoRA": "Seleziona un LoRA"
},
"invocationCache": {
"disable": "Disabilita",

View File

@@ -1,6 +1,6 @@
{
"common": {
"languagePickerLabel": "言語",
"languagePickerLabel": "言語選択",
"reportBugLabel": "バグ報告",
"settingsLabel": "設定",
"langJapanese": "日本語",
@@ -63,34 +63,11 @@
"langFrench": "Français",
"langGerman": "Deutsch",
"langPortuguese": "Português",
"nodes": "ワークフローエディター",
"nodes": "ノード",
"langKorean": "한국어",
"langPolish": "Polski",
"txt2img": "txt2img",
"postprocessing": "Post Processing",
"t2iAdapter": "T2I アダプター",
"communityLabel": "コミュニティ",
"dontAskMeAgain": "次回から確認しない",
"areYouSure": "本当によろしいですか?",
"on": "オン",
"nodeEditor": "ノードエディター",
"ipAdapter": "IPアダプター",
"controlAdapter": "コントロールアダプター",
"auto": "自動",
"openInNewTab": "新しいタブで開く",
"controlNet": "コントロールネット",
"statusProcessing": "処理中",
"linear": "リニア",
"imageFailedToLoad": "画像が読み込めません",
"imagePrompt": "画像プロンプト",
"modelManager": "モデルマネージャー",
"lightMode": "ライトモード",
"generate": "生成",
"learnMore": "もっと学ぶ",
"darkMode": "ダークモード",
"random": "ランダム",
"batch": "バッチマネージャー",
"advanced": "高度な設定"
"postprocessing": "Post Processing"
},
"gallery": {
"uploads": "アップロード",
@@ -297,7 +274,7 @@
"config": "Config",
"configValidationMsg": "モデルの設定ファイルへのパス",
"modelLocation": "モデルの場所",
"modelLocationValidationMsg": "ディフューザーモデルのあるローカルフォルダーのパスを入力してください",
"modelLocationValidationMsg": "モデルが配置されている場所へのパス。",
"repo_id": "Repo ID",
"repoIDValidationMsg": "モデルのリモートリポジトリ",
"vaeLocation": "VAEの場所",
@@ -332,79 +309,12 @@
"delete": "削除",
"deleteModel": "モデルを削除",
"deleteConfig": "設定を削除",
"deleteMsg1": "InvokeAIからこのモデルを削除してよろしいですか?",
"deleteMsg2": "これは、モデルがInvokeAIルートフォルダ内にある場合、ディスクからモデルを削除します。カスタム保存場所を使用している場合、モデルはディスクから削除されません。",
"deleteMsg1": "InvokeAIからこのモデルエントリーを削除してよろしいですか?",
"deleteMsg2": "これは、ドライブからモデルのCheckpointファイルを削除するものではありません。必要であればそれらを読み込むことができます。",
"formMessageDiffusersModelLocation": "Diffusersモデルの場所",
"formMessageDiffusersModelLocationDesc": "最低でも1つは入力してください。",
"formMessageDiffusersVAELocation": "VAEの場所s",
"formMessageDiffusersVAELocationDesc": "指定しない場合、InvokeAIは上記のモデルの場所にあるVAEファイルを探します。",
"importModels": "モデルをインポート",
"custom": "カスタム",
"none": "なし",
"convert": "変換",
"statusConverting": "変換中",
"cannotUseSpaces": "スペースは使えません",
"convertToDiffusersHelpText6": "このモデルを変換しますか?",
"checkpointModels": "チェックポイント",
"settings": "設定",
"convertingModelBegin": "モデルを変換しています...",
"baseModel": "ベースモデル",
"modelDeleteFailed": "モデルの削除ができませんでした",
"convertToDiffusers": "ディフューザーに変換",
"alpha": "アルファ",
"diffusersModels": "ディフューザー",
"pathToCustomConfig": "カスタム設定のパス",
"noCustomLocationProvided": "カスタムロケーションが指定されていません",
"modelConverted": "モデル変換が完了しました",
"weightedSum": "重み付け総和",
"inverseSigmoid": "逆シグモイド",
"invokeAIFolder": "Invoke AI フォルダ",
"syncModelsDesc": "モデルがバックエンドと同期していない場合、このオプションを使用してモデルを更新できます。通常、モデル.yamlファイルを手動で更新したり、アプリケーションの起動後にモデルをInvokeAIルートフォルダに追加した場合に便利です。",
"noModels": "モデルが見つかりません",
"sigmoid": "シグモイド",
"merge": "マージ",
"modelMergeInterpAddDifferenceHelp": "このモードでは、モデル3がまずモデル2から減算されます。その結果得られたバージョンが、上記で設定されたアルファ率でモデル1とブレンドされます。",
"customConfig": "カスタム設定",
"predictionType": "予測タイプ(安定したディフュージョン 2.x モデルおよび一部の安定したディフュージョン 1.x モデル用)",
"selectModel": "モデルを選択",
"modelSyncFailed": "モデルの同期に失敗しました",
"quickAdd": "クイック追加",
"simpleModelDesc": "ローカルのDiffusersモデル、ローカルのチェックポイント/safetensorsモデル、HuggingFaceリポジトリのID、またはチェックポイント/ DiffusersモデルのURLへのパスを指定してください。",
"customSaveLocation": "カスタム保存場所",
"advanced": "高度な設定",
"modelDeleted": "モデルが削除されました",
"convertToDiffusersHelpText2": "このプロセスでは、モデルマネージャーのエントリーを同じモデルのディフューザーバージョンに置き換えます。",
"modelUpdateFailed": "モデル更新が失敗しました",
"useCustomConfig": "カスタム設定を使用する",
"convertToDiffusersHelpText5": "十分なディスク空き容量があることを確認してください。モデルは一般的に2GBから7GBのサイズがあります。",
"modelConversionFailed": "モデル変換が失敗しました",
"modelEntryDeleted": "モデルエントリーが削除されました",
"syncModels": "モデルを同期",
"mergedModelSaveLocation": "保存場所",
"closeAdvanced": "高度な設定を閉じる",
"modelType": "モデルタイプ",
"modelsMerged": "モデルマージ完了",
"modelsMergeFailed": "モデルマージ失敗",
"scanForModels": "モデルをスキャン",
"customConfigFileLocation": "カスタム設定ファイルの場所",
"convertToDiffusersHelpText1": "このモデルは 🧨 Diffusers フォーマットに変換されます。",
"modelsSynced": "モデルが同期されました",
"invokeRoot": "InvokeAIフォルダ",
"mergedModelCustomSaveLocation": "カスタムパス",
"mergeModels": "マージモデル",
"interpolationType": "補間タイプ",
"modelMergeHeaderHelp2": "マージできるのはDiffusersのみです。チェックポイントモデルをマージしたい場合は、まずDiffusersに変換してください。",
"convertToDiffusersSaveLocation": "保存場所",
"pickModelType": "モデルタイプを選択",
"sameFolder": "同じフォルダ",
"convertToDiffusersHelpText3": "チェックポイントファイルは、InvokeAIルートフォルダ内にある場合、ディスクから削除されます。カスタムロケーションにある場合は、削除されません。",
"loraModels": "LoRA",
"modelMergeAlphaHelp": "アルファはモデルのブレンド強度を制御します。アルファ値が低いと、2番目のモデルの影響が低くなります。",
"addDifference": "差分を追加",
"modelMergeHeaderHelp1": "あなたのニーズに適したブレンドを作成するために、異なるモデルを最大3つまでマージすることができます。",
"ignoreMismatch": "選択されたモデル間の不一致を無視する",
"convertToDiffusersHelpText4": "これは一回限りのプロセスです。コンピュータの仕様によっては、約30秒から60秒かかる可能性があります。",
"mergedModelName": "マージされたモデル名"
"formMessageDiffusersVAELocationDesc": "指定しない場合、InvokeAIは上記のモデルの場所にあるVAEファイルを探します。"
},
"parameters": {
"images": "画像",
@@ -530,8 +440,7 @@
"next": "次",
"accept": "同意",
"showHide": "表示/非表示",
"discardAll": "すべて破棄",
"snapToGrid": "グリッドにスナップ"
"discardAll": "すべて破棄"
},
"accessibility": {
"modelSelect": "モデルを選択",
@@ -543,7 +452,7 @@
"useThisParameter": "このパラメータを使用する",
"copyMetadataJson": "メタデータをコピー(JSON)",
"zoomIn": "ズームイン",
"exitViewer": "ビューアーを終了",
"exitViewer": "ExitViewer",
"zoomOut": "ズームアウト",
"rotateCounterClockwise": "反時計回りに回転",
"rotateClockwise": "時計回りに回転",
@@ -552,265 +461,6 @@
"toggleAutoscroll": "自動スクロールの切替",
"modifyConfig": "Modify Config",
"toggleLogViewer": "Log Viewerの切替",
"showOptionsPanel": "サイドパネルを表示",
"showGalleryPanel": "ギャラリーパネルを表示",
"menu": "メニュー",
"loadMore": "さらに読み込む"
},
"controlnet": {
"resize": "リサイズ",
"showAdvanced": "高度な設定を表示",
"addT2IAdapter": "$t(common.t2iAdapter)を追加",
"importImageFromCanvas": "キャンバスから画像をインポート",
"lineartDescription": "画像を線画に変換",
"importMaskFromCanvas": "キャンバスからマスクをインポート",
"hideAdvanced": "高度な設定を非表示",
"ipAdapterModel": "アダプターモデル",
"resetControlImage": "コントロール画像をリセット",
"beginEndStepPercent": "開始 / 終了ステップパーセンテージ",
"duplicate": "複製",
"balanced": "バランス",
"prompt": "プロンプト",
"depthMidasDescription": "Midasを使用して深度マップを生成",
"openPoseDescription": "Openposeを使用してポーズを推定",
"control": "コントロール",
"resizeMode": "リサイズモード",
"weight": "重み",
"selectModel": "モデルを選択",
"crop": "切り抜き",
"w": "幅",
"processor": "プロセッサー",
"addControlNet": "$t(common.controlNet)を追加",
"none": "なし",
"incompatibleBaseModel": "互換性のないベースモデル:",
"enableControlnet": "コントロールネットを有効化",
"detectResolution": "検出解像度",
"controlNetT2IMutexDesc": "$t(common.controlNet)と$t(common.t2iAdapter)の同時使用は現在サポートされていません。",
"pidiDescription": "PIDI画像処理",
"controlMode": "コントロールモード",
"fill": "塗りつぶし",
"cannyDescription": "Canny 境界検出",
"addIPAdapter": "$t(common.ipAdapter)を追加",
"colorMapDescription": "画像からカラーマップを生成",
"lineartAnimeDescription": "アニメスタイルの線画処理",
"imageResolution": "画像解像度",
"megaControl": "メガコントロール",
"lowThreshold": "最低閾値",
"autoConfigure": "プロセッサーを自動設定",
"highThreshold": "最大閾値",
"saveControlImage": "コントロール画像を保存",
"toggleControlNet": "このコントロールネットを切り替え",
"delete": "削除",
"controlAdapter_other": "コントロールアダプター",
"colorMapTileSize": "タイルサイズ",
"ipAdapterImageFallback": "IP Adapterの画像が選択されていません",
"mediapipeFaceDescription": "Mediapipeを使用して顔を検出",
"depthZoeDescription": "Zoeを使用して深度マップを生成",
"setControlImageDimensions": "コントロール画像のサイズを幅と高さにセット",
"resetIPAdapterImage": "IP Adapterの画像をリセット",
"handAndFace": "手と顔",
"enableIPAdapter": "IP Adapterを有効化",
"amult": "a_mult",
"contentShuffleDescription": "画像の内容をシャッフルします",
"bgth": "bg_th",
"controlNetEnabledT2IDisabled": "$t(common.controlNet) が有効化され、$t(common.t2iAdapter)s が無効化されました",
"controlnet": "$t(controlnet.controlAdapter_one) #{{number}} ($t(common.controlNet))",
"t2iEnabledControlNetDisabled": "$t(common.t2iAdapter) が有効化され、$t(common.controlNet)s が無効化されました",
"ip_adapter": "$t(controlnet.controlAdapter_one) #{{number}} ($t(common.ipAdapter))",
"t2i_adapter": "$t(controlnet.controlAdapter_one) #{{number}} ($t(common.t2iAdapter))",
"minConfidence": "最小確信度",
"colorMap": "Color",
"noneDescription": "処理は行われていません",
"canny": "Canny",
"hedDescription": "階層的エッジ検出",
"maxFaces": "顔の最大数"
},
"metadata": {
"seamless": "シームレス",
"Threshold": "ノイズ閾値",
"seed": "シード",
"width": "幅",
"workflow": "ワークフロー",
"steps": "ステップ",
"scheduler": "スケジューラー",
"positivePrompt": "ポジティブプロンプト",
"strength": "Image to Image 強度",
"perlin": "パーリンノイズ",
"recallParameters": "パラメータを呼び出す"
},
"queue": {
"queueEmpty": "キューが空です",
"pauseSucceeded": "処理が一時停止されました",
"queueFront": "キューの先頭へ追加",
"queueBack": "キューに追加",
"queueCountPrediction": "{{predicted}}をキューに追加",
"queuedCount": "保留中 {{pending}}",
"pause": "一時停止",
"queue": "キュー",
"pauseTooltip": "処理を一時停止",
"cancel": "キャンセル",
"queueTotal": "合計 {{total}}",
"resumeSucceeded": "処理が再開されました",
"resumeTooltip": "処理を再開",
"resume": "再会",
"status": "ステータス",
"pruneSucceeded": "キューから完了アイテム{{item_count}}件を削除しました",
"cancelTooltip": "現在のアイテムをキャンセル",
"in_progress": "進行中",
"notReady": "キューに追加できません",
"batchFailedToQueue": "バッチをキューに追加できませんでした",
"completed": "完了",
"batchValues": "バッチの値",
"cancelFailed": "アイテムのキャンセルに問題があります",
"batchQueued": "バッチをキューに追加しました",
"pauseFailed": "処理の一時停止に問題があります",
"clearFailed": "キューのクリアに問題があります",
"front": "先頭",
"clearSucceeded": "キューがクリアされました",
"pruneTooltip": "{{item_count}} の完了アイテムを削除",
"cancelSucceeded": "アイテムがキャンセルされました",
"batchQueuedDesc_other": "{{count}} セッションをキューの{{direction}}に追加しました",
"graphQueued": "グラフをキューに追加しました",
"batch": "バッチ",
"clearQueueAlertDialog": "キューをクリアすると、処理中のアイテムは直ちにキャンセルされ、キューは完全にクリアされます。",
"pending": "保留中",
"resumeFailed": "処理の再開に問題があります",
"clear": "クリア",
"total": "合計",
"canceled": "キャンセル",
"pruneFailed": "キューの削除に問題があります",
"cancelBatchSucceeded": "バッチがキャンセルされました",
"clearTooltip": "全てのアイテムをキャンセルしてクリア",
"current": "現在",
"failed": "失敗",
"cancelItem": "項目をキャンセル",
"next": "次",
"cancelBatch": "バッチをキャンセル",
"session": "セッション",
"enqueueing": "バッチをキューに追加",
"queueMaxExceeded": "{{max_queue_size}} の最大値を超えたため、{{skip}} をスキップします",
"cancelBatchFailed": "バッチのキャンセルに問題があります",
"clearQueueAlertDialog2": "キューをクリアしてもよろしいですか?",
"item": "アイテム",
"graphFailedToQueue": "グラフをキューに追加できませんでした"
},
"models": {
"noMatchingModels": "一致するモデルがありません",
"loading": "読み込み中",
"noMatchingLoRAs": "一致するLoRAがありません",
"noLoRAsAvailable": "使用可能なLoRAがありません",
"noModelsAvailable": "使用可能なモデルがありません",
"selectModel": "モデルを選択してください",
"selectLoRA": "LoRAを選択してください"
},
"nodes": {
"addNode": "ノードを追加",
"boardField": "ボード",
"boolean": "ブーリアン",
"boardFieldDescription": "ギャラリーボード",
"addNodeToolTip": "ノードを追加 (Shift+A, Space)",
"booleanPolymorphicDescription": "ブーリアンのコレクション。",
"inputField": "入力フィールド",
"latentsFieldDescription": "潜在空間はノード間で伝達できます。",
"floatCollectionDescription": "浮動小数点のコレクション。",
"missingTemplate": "テンプレートが見つかりません",
"ipAdapterPolymorphicDescription": "IP-Adaptersのコレクション。",
"latentsPolymorphicDescription": "潜在空間はノード間で伝達できます。",
"colorFieldDescription": "RGBAカラー。",
"ipAdapterCollection": "IP-Adapterコレクション",
"conditioningCollection": "条件付きコレクション",
"hideGraphNodes": "グラフオーバーレイを非表示",
"loadWorkflow": "ワークフローを読み込み",
"integerPolymorphicDescription": "整数のコレクション。",
"hideLegendNodes": "フィールドタイプの凡例を非表示",
"float": "浮動小数点",
"booleanCollectionDescription": "ブーリアンのコレクション。",
"integer": "整数",
"colorField": "カラー",
"nodeTemplate": "ノードテンプレート",
"integerDescription": "整数は小数点を持たない数値です。",
"imagePolymorphicDescription": "画像のコレクション。",
"doesNotExist": "存在しません",
"ipAdapterCollectionDescription": "IP-Adaptersのコレクション。",
"inputMayOnlyHaveOneConnection": "入力は1つの接続しか持つことができません",
"nodeOutputs": "ノード出力",
"currentImageDescription": "ノードエディタ内の現在の画像を表示",
"downloadWorkflow": "ワークフローのJSONをダウンロード",
"integerCollection": "整数コレクション",
"collectionItem": "コレクションアイテム",
"fieldTypesMustMatch": "フィールドタイプが一致している必要があります",
"edge": "輪郭",
"inputNode": "入力ノード",
"imageField": "画像",
"animatedEdgesHelp": "選択したエッジおよび選択したノードに接続されたエッジをアニメーション化します",
"cannotDuplicateConnection": "重複した接続は作れません",
"noWorkflow": "ワークフローがありません",
"integerCollectionDescription": "整数のコレクション。",
"colorPolymorphicDescription": "カラーのコレクション。",
"missingCanvaInitImage": "キャンバスの初期画像が見つかりません",
"clipFieldDescription": "トークナイザーとテキストエンコーダーサブモデル。",
"fullyContainNodesHelp": "ノードは選択ボックス内に完全に存在する必要があります",
"clipField": "クリップ",
"nodeType": "ノードタイプ",
"executionStateInProgress": "処理中",
"executionStateError": "エラー",
"ipAdapterModel": "IP-Adapterモデル",
"ipAdapterDescription": "イメージプロンプトアダプター(IP-Adapter)。",
"missingCanvaInitMaskImages": "キャンバスの初期画像およびマスクが見つかりません",
"hideMinimapnodes": "ミニマップを非表示",
"fitViewportNodes": "全体を表示",
"executionStateCompleted": "完了",
"node": "ノード",
"currentImage": "現在の画像",
"controlField": "コントロール",
"booleanDescription": "ブーリアンはtrueかfalseです。",
"collection": "コレクション",
"ipAdapterModelDescription": "IP-Adapterモデルフィールド",
"cannotConnectInputToInput": "入力から入力には接続できません",
"invalidOutputSchema": "無効な出力スキーマ",
"floatDescription": "浮動小数点は、小数点を持つ数値です。",
"floatPolymorphicDescription": "浮動小数点のコレクション。",
"floatCollection": "浮動小数点コレクション",
"latentsField": "潜在空間",
"cannotConnectOutputToOutput": "出力から出力には接続できません",
"booleanCollection": "ブーリアンコレクション",
"cannotConnectToSelf": "自身のノードには接続できません",
"inputFields": "入力フィールド(複数)",
"colorCodeEdges": "カラー-Code Edges",
"imageCollectionDescription": "画像のコレクション。",
"loadingNodes": "ノードを読み込み中...",
"imageCollection": "画像コレクション"
},
"boards": {
"autoAddBoard": "自動追加するボード",
"move": "移動",
"menuItemAutoAdd": "このボードに自動追加",
"myBoard": "マイボード",
"searchBoard": "ボードを検索...",
"noMatching": "一致するボードがありません",
"selectBoard": "ボードを選択",
"cancel": "キャンセル",
"addBoard": "ボードを追加",
"uncategorized": "未分類",
"downloadBoard": "ボードをダウンロード",
"changeBoard": "ボードを変更",
"loading": "ロード中...",
"topMessage": "このボードには、以下の機能で使用されている画像が含まれています:",
"bottomMessage": "このボードおよび画像を削除すると、現在これらを利用している機能はリセットされます。",
"clearSearch": "検索をクリア"
},
"embedding": {
"noMatchingEmbedding": "一致する埋め込みがありません",
"addEmbedding": "埋め込みを追加",
"incompatibleModel": "互換性のないベースモデル:"
},
"invocationCache": {
"invocationCache": "呼び出しキャッシュ",
"clearSucceeded": "呼び出しキャッシュをクリアしました",
"clearFailed": "呼び出しキャッシュのクリアに問題があります",
"enable": "有効",
"clear": "クリア",
"maxCacheSize": "最大キャッシュサイズ",
"cacheSize": "キャッシュサイズ"
"showOptionsPanel": "オプションパネルを表示"
}
}

View File

@@ -866,7 +866,7 @@
"version": "版本",
"validateConnections": "验证连接和节点图",
"inputMayOnlyHaveOneConnection": "输入仅能有一个连接",
"notes": "注释",
"notes": "节点",
"nodeOutputs": "节点输出",
"currentImageDescription": "在节点编辑器中显示当前图像",
"validateConnectionsHelp": "防止建立无效连接和调用无效节点图",
@@ -892,11 +892,11 @@
"currentImage": "当前图像",
"workflowName": "名称",
"cannotConnectInputToInput": "无法将输入连接到输入",
"workflowNotes": "注释",
"workflowNotes": "节点",
"cannotConnectOutputToOutput": "无法将输出连接到输出",
"connectionWouldCreateCycle": "连接将创建一个循环",
"cannotConnectToSelf": "无法连接自己",
"notesDescription": "添加有关您的工作流的注释",
"notesDescription": "添加有关您的工作流的节点",
"unknownField": "未知",
"colorCodeEdges": "边缘颜色编码",
"unknownNode": "未知节点",

View File

@@ -12,7 +12,6 @@ import { addFirstListImagesListener } from './listeners/addFirstListImagesListen
import { addAnyEnqueuedListener } from './listeners/anyEnqueued';
import { addAppConfigReceivedListener } from './listeners/appConfigReceived';
import { addAppStartedListener } from './listeners/appStarted';
import { addBatchEnqueuedListener } from './listeners/batchEnqueued';
import { addDeleteBoardAndImagesFulfilledListener } from './listeners/boardAndImagesDeleted';
import { addBoardIdSelectedListener } from './listeners/boardIdSelected';
import { addCanvasCopiedToClipboardListener } from './listeners/canvasCopiedToClipboard';
@@ -72,6 +71,8 @@ import { addStagingAreaImageSavedListener } from './listeners/stagingAreaImageSa
import { addTabChangedListener } from './listeners/tabChanged';
import { addUpscaleRequestedListener } from './listeners/upscaleRequested';
import { addWorkflowLoadedListener } from './listeners/workflowLoaded';
import { addBatchEnqueuedListener } from './listeners/batchEnqueued';
import { addControlAdapterAddedOrEnabledListener } from './listeners/controlAdapterAddedOrEnabled';
export const listenerMiddleware = createListenerMiddleware();
@@ -199,3 +200,7 @@ addTabChangedListener();
// Dynamic prompts
addDynamicPromptsListener();
// Display a toast when a controlnet or t2i adapter is enabled
// TODO: Remove when they can both be enabled at the same time
addControlAdapterAddedOrEnabledListener();

View File

@@ -0,0 +1,87 @@
import { isAnyOf } from '@reduxjs/toolkit';
import {
controlAdapterAdded,
controlAdapterAddedFromImage,
controlAdapterIsEnabledChanged,
controlAdapterRecalled,
selectControlAdapterAll,
selectControlAdapterById,
} from 'features/controlAdapters/store/controlAdaptersSlice';
import { ControlAdapterType } from 'features/controlAdapters/store/types';
import { addToast } from 'features/system/store/systemSlice';
import i18n from 'i18n';
import { startAppListening } from '..';
const isAnyControlAdapterAddedOrEnabled = isAnyOf(
controlAdapterAdded,
controlAdapterAddedFromImage,
controlAdapterRecalled,
controlAdapterIsEnabledChanged
);
/**
* Until both a ControlNet and a T2I-Adapter can be enabled at once, they are mutually exclusive.
* This listener displays a toast when one type is enabled while the other is already enabled,
* or when one type is added while the other is enabled.
*/
export const addControlAdapterAddedOrEnabledListener = () => {
startAppListening({
matcher: isAnyControlAdapterAddedOrEnabled,
effect: async (action, { dispatch, getOriginalState }) => {
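// Check the pre-action state, so the adapter just added or enabled by this
// action does not count itself as an already-enabled conflict.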
const controlAdapters = getOriginalState().controlAdapters;
const hasEnabledControlNets = selectControlAdapterAll(
controlAdapters
).some((ca) => ca.isEnabled && ca.type === 'controlnet');
const hasEnabledT2IAdapters = selectControlAdapterAll(
controlAdapters
).some((ca) => ca.isEnabled && ca.type === 't2i_adapter');
let caType: ControlAdapterType | null = null;
if (controlAdapterAdded.match(action)) {
caType = action.payload.type;
}
if (controlAdapterAddedFromImage.match(action)) {
caType = action.payload.type;
}
if (controlAdapterRecalled.match(action)) {
caType = action.payload.type;
}
if (controlAdapterIsEnabledChanged.match(action)) {
const _caType = selectControlAdapterById(
controlAdapters,
action.payload.id
)?.type;
if (!_caType) {
return;
}
caType = _caType;
}
if (
(caType === 'controlnet' && hasEnabledT2IAdapters) ||
(caType === 't2i_adapter' && hasEnabledControlNets)
) {
const title =
caType === 'controlnet'
? i18n.t('controlnet.controlNetEnabledT2IDisabled')
: i18n.t('controlnet.t2iEnabledControlNetDisabled');
const description = i18n.t('controlnet.controlNetT2IMutexDesc');
dispatch(
addToast({
title,
description,
status: 'warning',
})
);
}
},
});
};

View File

@@ -88,6 +88,61 @@ export const selectValidT2IAdapters = (controlAdapters: ControlAdaptersState) =>
(ca.processorType === 'none' && Boolean(ca.controlImage)))
);
// TODO: I think we can safely remove this?
// const disableAllIPAdapters = (
// state: ControlAdaptersState,
// exclude?: string
// ) => {
// const updates: Update<ControlAdapterConfig>[] = selectAllIPAdapters(state)
// .filter((ca) => ca.id !== exclude)
// .map((ca) => ({
// id: ca.id,
// changes: { isEnabled: false },
// }));
// caAdapter.updateMany(state, updates);
// };
const disableAllControlNets = (
state: ControlAdaptersState,
exclude?: string
) => {
const updates: Update<ControlAdapterConfig>[] = selectAllControlNets(state)
.filter((ca) => ca.id !== exclude)
.map((ca) => ({
id: ca.id,
changes: { isEnabled: false },
}));
caAdapter.updateMany(state, updates);
};
const disableAllT2IAdapters = (
state: ControlAdaptersState,
exclude?: string
) => {
const updates: Update<ControlAdapterConfig>[] = selectAllT2IAdapters(state)
.filter((ca) => ca.id !== exclude)
.map((ca) => ({
id: ca.id,
changes: { isEnabled: false },
}));
caAdapter.updateMany(state, updates);
};
const disableIncompatibleControlAdapters = (
state: ControlAdaptersState,
type: ControlAdapterType,
exclude?: string
) => {
if (type === 'controlnet') {
// we cannot combine controlnet + t2i adapter; when enabling a controlnet, disable all t2i adapters
disableAllT2IAdapters(state, exclude);
}
if (type === 't2i_adapter') {
// we cannot combine controlnet + t2i adapter; when enabling a t2i adapter, disable all controlnets
disableAllControlNets(state, exclude);
}
};
export const controlAdaptersSlice = createSlice({
name: 'controlAdapters',
initialState: initialControlAdapterState,
@@ -103,6 +158,7 @@ export const controlAdaptersSlice = createSlice({
) => {
const { id, type, overrides } = action.payload;
caAdapter.addOne(state, buildControlAdapter(id, type, overrides));
disableIncompatibleControlAdapters(state, type, id);
},
prepare: ({
type,
@@ -119,6 +175,8 @@ export const controlAdaptersSlice = createSlice({
action: PayloadAction<ControlAdapterConfig>
) => {
caAdapter.addOne(state, action.payload);
const { type, id } = action.payload;
disableIncompatibleControlAdapters(state, type, id);
},
controlAdapterDuplicated: {
reducer: (
@@ -138,6 +196,8 @@ export const controlAdaptersSlice = createSlice({
isEnabled: true,
});
caAdapter.addOne(state, newControlAdapter);
const { type } = newControlAdapter;
disableIncompatibleControlAdapters(state, type, newId);
},
prepare: (id: string) => {
return { payload: { id, newId: uuidv4() } };
@@ -157,6 +217,7 @@ export const controlAdaptersSlice = createSlice({
state,
buildControlAdapter(id, type, { controlImage })
);
disableIncompatibleControlAdapters(state, type, id);
},
prepare: (payload: {
type: ControlAdapterType;
@@ -174,6 +235,12 @@ export const controlAdaptersSlice = createSlice({
) => {
const { id, isEnabled } = action.payload;
caAdapter.updateOne(state, { id, changes: { isEnabled } });
if (isEnabled) {
// We are enabling a control adapter. Due to limitations in the current system, we may need to disable other adapters.
// TODO: disable when multiple IP adapters are supported
const ca = selectControlAdapterById(state, id);
ca && disableIncompatibleControlAdapters(state, ca.type, id);
}
},
controlAdapterImageChanged: (
state,

View File

@@ -16,13 +16,15 @@ const ParamDynamicPromptsCollapse = () => {
() =>
createSelector(stateSelector, ({ dynamicPrompts }) => {
const count = dynamicPrompts.prompts.length;
if (count > 1) {
if (count === 1) {
return t('dynamicPrompts.promptsWithCount_one', {
count,
});
} else {
return t('dynamicPrompts.promptsWithCount_other', {
count,
});
}
return;
}),
[t]
);

View File

@@ -10,7 +10,6 @@ import { loraAdded } from 'features/lora/store/loraSlice';
import { MODEL_TYPE_MAP } from 'features/parameters/types/constants';
import { forEach } from 'lodash-es';
import { memo, useCallback, useMemo } from 'react';
import { useTranslation } from 'react-i18next';
import { useGetLoRAModelsQuery } from 'services/api/endpoints/models';
const selector = createSelector(
@@ -25,7 +24,7 @@ const ParamLoRASelect = () => {
const dispatch = useAppDispatch();
const { loras } = useAppSelector(selector);
const { data: loraModels } = useGetLoRAModelsQuery();
const { t } = useTranslation();
const currentMainModel = useAppSelector(
(state: RootState) => state.generation.model
);
@@ -80,7 +79,7 @@ const ParamLoRASelect = () => {
return (
<Flex sx={{ justifyContent: 'center', p: 2 }}>
<Text sx={{ fontSize: 'sm', color: 'base.500', _dark: 'base.700' }}>
{t('models.noLoRAsInstalled')}
No LoRAs Loaded
</Text>
</Flex>
);

View File

@@ -7,7 +7,7 @@ import {
SaveImageInvocation,
} from 'services/api/types';
import { REALESRGAN as ESRGAN, SAVE_IMAGE } from './constants';
import { addCoreMetadataNode, upsertMetadata } from './metadata';
import { addCoreMetadataNode } from './metadata';
type Arg = {
image_name: string;
@@ -56,8 +56,7 @@ export const buildAdHocUpscaleGraph = ({
],
};
addCoreMetadataNode(graph, {});
upsertMetadata(graph, {
addCoreMetadataNode(graph, {
esrgan_model: esrganModelName,
});

View File

@@ -5,7 +5,7 @@ import { METADATA, SAVE_IMAGE } from './constants';
export const addCoreMetadataNode = (
graph: NonNullableGraph,
metadata: Partial<CoreMetadataInvocation>
metadata: Partial<CoreMetadataInvocation> | JsonObject
): void => {
graph.nodes[METADATA] = {
id: METADATA,

View File

@@ -28,7 +28,9 @@ export default function ParamAdvancedCollapse() {
const activeLabel = useMemo(() => {
const activeLabel: string[] = [];
if (!shouldUseCpuNoise) {
if (shouldUseCpuNoise) {
activeLabel.push(t('parameters.cpuNoise'));
} else {
activeLabel.push(t('parameters.gpuNoise'));
}

View File

@@ -4,13 +4,12 @@ import { RootState, stateSelector } from 'app/store/store';
import { useAppSelector } from 'app/store/storeHooks';
import { defaultSelectorOptions } from 'app/store/util/defaultMemoizeOptions';
import IAICollapse from 'common/components/IAICollapse';
import { useFeatureStatus } from 'features/system/hooks/useFeatureStatus';
import { useMemo } from 'react';
import { useTranslation } from 'react-i18next';
import ParamHrfHeight from './ParamHrfHeight';
import ParamHrfStrength from './ParamHrfStrength';
import ParamHrfToggle from './ParamHrfToggle';
import ParamHrfWidth from './ParamHrfWidth';
import ParamHrfHeight from './ParamHrfHeight';
import { useFeatureStatus } from 'features/system/hooks/useFeatureStatus';
const selector = createSelector(
stateSelector,
@@ -23,14 +22,15 @@ const selector = createSelector(
);
export default function ParamHrfCollapse() {
const { t } = useTranslation();
const isHRFFeatureEnabled = useFeatureStatus('hrf').isFeatureEnabled;
const { hrfEnabled } = useAppSelector(selector);
const activeLabel = useMemo(() => {
if (hrfEnabled) {
return t('common.on');
return 'On';
} else {
return 'Off';
}
}, [t, hrfEnabled]);
}, [hrfEnabled]);
if (!isHRFFeatureEnabled) {
return null;

View File

@@ -1,4 +1,4 @@
import { Flex, Text } from '@chakra-ui/react';
import { Flex } from '@chakra-ui/react';
import { createSelector } from '@reduxjs/toolkit';
import { stateSelector } from 'app/store/store';
import { useAppSelector } from 'app/store/storeHooks';
@@ -14,7 +14,6 @@ import ParamSDXLRefinerStart from './SDXLRefiner/ParamSDXLRefinerStart';
import ParamSDXLRefinerSteps from './SDXLRefiner/ParamSDXLRefinerSteps';
import ParamUseSDXLRefiner from './SDXLRefiner/ParamUseSDXLRefiner';
import { useTranslation } from 'react-i18next';
import { useIsRefinerAvailable } from 'services/api/hooks/useIsRefinerAvailable';
const selector = createSelector(
stateSelector,
@@ -32,19 +31,6 @@ const selector = createSelector(
const ParamSDXLRefinerCollapse = () => {
const { activeLabel, shouldUseSliders } = useAppSelector(selector);
const { t } = useTranslation();
const isRefinerAvailable = useIsRefinerAvailable();
if (!isRefinerAvailable) {
return (
<IAICollapse label={t('sdxl.refiner')} activeLabel={activeLabel}>
<Flex sx={{ justifyContent: 'center', p: 2 }}>
<Text sx={{ fontSize: 'sm', color: 'base.500', _dark: 'base.700' }}>
{t('models.noRefinerModelsInstalled')}
</Text>
</Flex>
</IAICollapse>
);
}
return (
<IAICollapse label={t('sdxl.refiner')} activeLabel={activeLabel}>

File diff suppressed because one or more lines are too long

View File

@@ -42,7 +42,7 @@ dependencies = [
"datasets",
# When bumping diffusers beyond 0.21, make sure to address this:
# https://github.com/invoke-ai/InvokeAI/blob/fc09ab7e13cb7ca5389100d149b6422ace7b8ed3/invokeai/app/invocations/latent.py#L513
"diffusers[torch]~=0.22.0",
"diffusers[torch]~=0.21.0",
"dnspython~=2.4.0",
"dynamicprompts",
"easing-functions",
@@ -80,8 +80,8 @@ dependencies = [
"semver~=3.0.1",
"send2trash",
"test-tube~=0.7.5",
"torch~=2.1.0",
"torchvision~=0.16",
"torch~=2.0.1",
"torchvision~=0.15.2",
"torchmetrics~=0.11.0",
"torchsde~=0.2.5",
"transformers~=4.31.0",
@@ -109,8 +109,8 @@ dependencies = [
"pytest-datadir",
]
"xformers" = [
"xformers==0.0.22post7; sys_platform!='darwin'",
"triton; sys_platform=='linux'",
"xformers~=0.0.19; sys_platform!='darwin'",
"triton; sys_platform=='linux'",
]
"onnx" = ["onnxruntime"]
"onnx-cuda" = ["onnxruntime-gpu"]
@@ -140,7 +140,6 @@ dependencies = [
"invokeai-node-web" = "invokeai.app.api_app:invoke_api"
"invokeai-import-images" = "invokeai.frontend.install.import_images:main"
"invokeai-db-maintenance" = "invokeai.backend.util.db_maintenance:main"
"invokeai-nmm" = "invokeai.backend.normalized_mm.cli:main"
[project.urls]
"Homepage" = "https://invoke-ai.github.io/InvokeAI/"
@@ -167,7 +166,6 @@ version = { attr = "invokeai.version.__version__" }
]
[tool.setuptools.package-data]
"invokeai.app.assets" = ["**/*.png"]
"invokeai.assets.fonts" = ["**/*.ttf"]
"invokeai.backend" = ["**.png"]
"invokeai.configs" = ["*.example", "**/*.yaml", "*.txt"]
@@ -207,7 +205,6 @@ exclude = [
"build",
"dist",
"invokeai/frontend/web/node_modules/",
".venv*",
]
[tool.black]
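The ~= pins above are "compatible release" specifiers, and the precision of the pin matters: ~=0.16 allows anything in the 0.x line from 0.16 up, while ~=0.15.2 stays within 0.15.x. A quick sketch using the packaging library:

from packaging.specifiers import SpecifierSet

# ~=X.Y.Z pins the X.Y series; ~=X.Y pins the X series.
print("2.1.2" in SpecifierSet("~=2.1.0"))    # True  (>=2.1.0, <2.2)
print("2.2.0" in SpecifierSet("~=2.1.0"))    # False
print("0.17.0" in SpecifierSet("~=0.16"))    # True  (>=0.16, <1.0)
print("0.15.9" in SpecifierSet("~=0.15.2"))  # True  (>=0.15.2, <0.16)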

View File

@@ -1,102 +0,0 @@
# Tests for ModelPatcher.apply_lora(...):
# - LoRA patching works on both CPU and CUDA.
# - If the model's device changes while the LoRA is applied, the weights can still be restored.
import pytest
import torch
from invokeai.backend.model_management.lora import ModelPatcher
from invokeai.backend.model_management.models.lora import LoRALayer, LoRAModelRaw
@pytest.mark.parametrize(
"device",
[
"cpu",
pytest.param("cuda", marks=pytest.mark.skipif(not torch.cuda.is_available(), reason="requires CUDA device")),
],
)
@torch.no_grad()
def test_apply_lora(device):
"""Test the basic behavior of ModelPatcher.apply_lora(...). Check that patching and unpatching produce the correct
result, and that model/LoRA tensors are moved between devices as expected.
"""
linear_in_features = 4
linear_out_features = 8
lora_dim = 2
model = torch.nn.ModuleDict(
{"linear_layer_1": torch.nn.Linear(linear_in_features, linear_out_features, device=device, dtype=torch.float16)}
)
lora_layers = {
"linear_layer_1": LoRALayer(
layer_key="linear_layer_1",
values={
"lora_down.weight": torch.ones((lora_dim, linear_in_features), device="cpu", dtype=torch.float16),
"lora_up.weight": torch.ones((linear_out_features, lora_dim), device="cpu", dtype=torch.float16),
},
)
}
lora = LoRAModelRaw("lora_name", lora_layers)
lora_weight = 0.5
orig_linear_weight = model["linear_layer_1"].weight.data.detach().clone()
expected_patched_linear_weight = orig_linear_weight + (lora_dim * lora_weight)
with ModelPatcher.apply_lora(model, [(lora, lora_weight)], prefix=""):
# After patching, all LoRA layer weights should have been moved back to the cpu.
assert lora_layers["linear_layer_1"].up.device.type == "cpu"
assert lora_layers["linear_layer_1"].down.device.type == "cpu"
# After patching, the patched model should still be on its original device.
assert model["linear_layer_1"].weight.data.device.type == device
torch.testing.assert_close(model["linear_layer_1"].weight.data, expected_patched_linear_weight)
# After unpatching, the original model weights should have been restored on the original device.
assert model["linear_layer_1"].weight.data.device.type == device
torch.testing.assert_close(model["linear_layer_1"].weight.data, orig_linear_weight)
@pytest.mark.skipif(not torch.cuda.is_available(), reason="requires CUDA device")
@torch.no_grad()
def test_apply_lora_change_device():
"""Test that if LoRA patching is applied on the CPU, and then the patched model is moved to the GPU, unpatching
still behaves correctly.
"""
linear_in_features = 4
linear_out_features = 8
lora_dim = 2
# Initialize the model on the CPU.
model = torch.nn.ModuleDict(
{"linear_layer_1": torch.nn.Linear(linear_in_features, linear_out_features, device="cpu", dtype=torch.float16)}
)
lora_layers = {
"linear_layer_1": LoRALayer(
layer_key="linear_layer_1",
values={
"lora_down.weight": torch.ones((lora_dim, linear_in_features), device="cpu", dtype=torch.float16),
"lora_up.weight": torch.ones((linear_out_features, lora_dim), device="cpu", dtype=torch.float16),
},
)
}
lora = LoRAModelRaw("lora_name", lora_layers)
orig_linear_weight = model["linear_layer_1"].weight.data.detach().clone()
with ModelPatcher.apply_lora(model, [(lora, 0.5)], prefix=""):
# After patching, all LoRA layer weights should have been moved back to the cpu.
assert lora_layers["linear_layer_1"].up.device.type == "cpu"
assert lora_layers["linear_layer_1"].down.device.type == "cpu"
# After patching, the patched model should still be on the CPU.
assert model["linear_layer_1"].weight.data.device.type == "cpu"
# Move the model to the GPU.
assert model.to("cuda")
# After unpatching, the original model weights should have been restored on the GPU.
assert model["linear_layer_1"].weight.data.device.type == "cuda"
torch.testing.assert_close(model["linear_layer_1"].weight.data, orig_linear_weight, check_device=False)
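The expected value in the first test falls out of the shapes: with all-ones LoRA factors, every entry of up @ down equals lora_dim, so patching adds lora_dim * lora_weight to every weight. A quick check of that arithmetic:

import torch

lora_dim, in_features, out_features, lora_weight = 2, 4, 8, 0.5
down = torch.ones(lora_dim, in_features)
up = torch.ones(out_features, lora_dim)
delta = lora_weight * (up @ down)  # every entry == lora_dim * lora_weight == 1.0
print(delta.unique())              # tensor([1.])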

View File

@@ -13,11 +13,10 @@ def test_memory_snapshot_capture():
snapshots = [
MemorySnapshot(process_ram=1, vram=2, malloc_info=Struct_mallinfo2()),
MemorySnapshot(process_ram=1, vram=2, malloc_info=None),
MemorySnapshot(process_ram=1, vram=None, malloc_info=Struct_mallinfo2()),
MemorySnapshot(process_ram=1, vram=None, malloc_info=None),
None,
MemorySnapshot(process_ram=1.0, vram=2.0, malloc_info=Struct_mallinfo2()),
MemorySnapshot(process_ram=1.0, vram=2.0, malloc_info=None),
MemorySnapshot(process_ram=1.0, vram=None, malloc_info=Struct_mallinfo2()),
MemorySnapshot(process_ram=1.0, vram=None, malloc_info=None),
]
@@ -27,12 +26,10 @@ def test_get_pretty_snapshot_diff(snapshot_1, snapshot_2):
"""Test that get_pretty_snapshot_diff() works with various combinations of missing MemorySnapshot fields."""
msg = get_pretty_snapshot_diff(snapshot_1, snapshot_2)
expected_lines = 0
if snapshot_1 is not None and snapshot_2 is not None:
expected_lines = 1
if snapshot_1.vram is not None and snapshot_2.vram is not None:
expected_lines += 1
if snapshot_1.vram is not None and snapshot_2.vram is not None:
expected_lines += 1
if snapshot_1.malloc_info is not None and snapshot_2.malloc_info is not None:
expected_lines += 5
if snapshot_1.malloc_info is not None and snapshot_2.malloc_info is not None:
expected_lines += 5
assert len(msg.splitlines()) == expected_lines

View File

@@ -11,7 +11,6 @@ from invokeai.backend.model_management.model_load_optimizations import _no_op, s
(torch.nn.Conv1d, {"in_channels": 10, "out_channels": 20, "kernel_size": 3}),
(torch.nn.Conv2d, {"in_channels": 10, "out_channels": 20, "kernel_size": 3}),
(torch.nn.Conv3d, {"in_channels": 10, "out_channels": 20, "kernel_size": 3}),
(torch.nn.Embedding, {"num_embeddings": 10, "embedding_dim": 10}),
],
)
def test_skip_torch_weight_init_linear(torch_module, layer_args):
@@ -37,14 +36,12 @@ def test_skip_torch_weight_init_linear(torch_module, layer_args):
# Check that reset_parameters is skipped while `skip_torch_weight_init()` is active.
assert reset_params_fn_during == _no_op
assert not torch.allclose(layer_before.weight, layer_during.weight)
if hasattr(layer_before, "bias"):
assert not torch.allclose(layer_before.bias, layer_during.bias)
assert not torch.allclose(layer_before.bias, layer_during.bias)
# Check that the original behavior is restored after `skip_torch_weight_init()` ends.
assert reset_params_fn_before is reset_params_fn_after
assert torch.allclose(layer_before.weight, layer_after.weight)
if hasattr(layer_before, "bias"):
assert torch.allclose(layer_before.bias, layer_after.bias)
assert torch.allclose(layer_before.bias, layer_after.bias)
def test_skip_torch_weight_init_restores_base_class_behavior():
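For context, a hypothetical minimal re-implementation of the pattern these tests exercise (the real skip_torch_weight_init lives in invokeai.backend.model_management.model_load_optimizations): temporarily swap each layer class's reset_parameters for a no-op, so that constructing layers skips weight initialization, then restore the originals on exit.

from contextlib import contextmanager

import torch

def _no_op(self, *args, **kwargs):
    pass

@contextmanager
def skip_torch_weight_init():
    # Save the original initializers, install the no-op, and restore on exit.
    targets = (torch.nn.Linear, torch.nn.Conv1d, torch.nn.Conv2d, torch.nn.Conv3d)
    saved = {t: t.reset_parameters for t in targets}
    try:
        for layer_type in targets:
            layer_type.reset_parameters = _no_op
        yield
    finally:
        for layer_type, fn in saved.items():
            layer_type.reset_parameters = fn

with skip_torch_weight_init():
    layer = torch.nn.Linear(10, 20)  # weights left as allocated, not initialized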