Compare commits


4 Commits

Author SHA1 Message Date
psychedelicious · 95f010b9b8 · chore: ruff E721 · 2024-06-28 08:19:14 +10:00
Looks like in the latest version of ruff, E721 was added or changed and now catches something it didn't before.
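For context, E721 flags equality comparisons between types. A minimal, hypothetical illustration (not from the diff) of the pattern the rule catches and the idiomatic alternatives:

```python
# Ruff E721: do not compare types with `==`; use `is` or isinstance() instead.
x, y = 1, 2

flagged = type(x) == type(y)          # flagged by E721
exact = type(x) is type(y)            # preferred: exact type identity
subclass_ok = isinstance(x, type(y))  # preferred: subclass-aware check
```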
psychedelicious · 539124ab92 · chore: bump version v4.2.5post1 · 2024-06-28 07:56:18 +10:00
Ryan Dick · 18d905579d · ruff format · 2024-06-28 07:55:42 +10:00
psychedelicious · cdc174d5d2 · fix(backend): mps should not use non_blocking · 2024-06-28 07:55:34 +10:00
We can get black outputs when moving tensors from CPU to MPS. It appears MPS to CPU is fine. See:
- https://github.com/pytorch/pytorch/issues/107455
- https://discuss.pytorch.org/t/should-we-set-non-blocking-to-true/38234/28

Changes:
- Add properties for each device on `TorchDevice` as a convenience.
- Add `get_non_blocking` static method on `TorchDevice`. This utility takes a torch device and returns the flag to be used for non_blocking when moving a tensor to the device provided.
- Update model patching and caching APIs to use this new utility.

Fixes: #6545
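A dependency-free sketch of the `get_non_blocking` utility described in the commit message above (the real method takes a `torch.device`; the device type is passed as a plain string here, and the actual implementation may differ):

```python
def get_non_blocking(device_type: str) -> bool:
    """Return the non_blocking flag to use when moving a tensor to a device.

    CPU-to-MPS copies with non_blocking=True can produce black outputs
    (see the linked PyTorch issue), so non-blocking transfers are only
    enabled for non-MPS targets.
    """
    return device_type != "mps"


# Hypothetical usage: tensor.to(device, non_blocking=get_non_blocking(device.type))
print(get_non_blocking("cuda"), get_non_blocking("mps"))  # → True False
```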
234 changed files with 5388 additions and 8347 deletions


@@ -9,9 +9,9 @@ runs:
node-version: '18'
- name: setup pnpm
uses: pnpm/action-setup@v4
uses: pnpm/action-setup@v2
with:
version: 8.15.6
version: 8
run_install: false
- name: get pnpm store directory


@@ -8,7 +8,7 @@
## QA Instructions
<!--WHEN APPLICABLE: Describe how you have tested the changes in this PR. Provide enough detail that a reviewer can reproduce your tests.-->
<!--WHEN APPLICABLE: Describe how we can test the changes in this PR.-->
## Merge Plan


@@ -12,24 +12,12 @@
Invoke is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. Invoke offers an industry leading web-based UI, and serves as the foundation for multiple commercial products.
Invoke is available in two editions:
| **Community Edition** | **Professional Edition** |
|----------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------|
| **For users looking for a locally installed, self-hosted and self-managed service** | **For users or teams looking for a cloud-hosted, fully managed service** |
| - Free to use under a commercially-friendly license | - Monthly subscription fee with three different plan levels |
| - Download and install on compatible hardware | - Offers additional benefits, including multi-user support, improved model training, and more |
| - Includes all core studio features: generate, refine, iterate on images, and build workflows | - Hosted in the cloud for easy, secure model access and scalability |
| Quick Start -> [Installation and Updates][installation docs] | More Information -> [www.invoke.com/pricing](https://www.invoke.com/pricing) |
[Installation and Updates][installation docs] - [Documentation and Tutorials][docs home] - [Bug Reports][github issues] - [Contributing][contributing docs]
<div align="center">
![Highlighted Features - Canvas and Workflows](https://github.com/invoke-ai/InvokeAI/assets/31807370/708f7a82-084f-4860-bfbe-e2588c53548d)
# Documentation
| **Quick Links** |
|----------------------------------------------------------------------------------------------------------------------------|
| [Installation and Updates][installation docs] - [Documentation and Tutorials][docs home] - [Bug Reports][github issues] - [Contributing][contributing docs] |
</div>
## Quick Start
@@ -49,33 +37,6 @@ Invoke is available in two editions:
More detail, including hardware requirements and manual install instructions, are available in the [installation documentation][installation docs].
## Docker Container
We publish official container images in Github Container Registry: https://github.com/invoke-ai/InvokeAI/pkgs/container/invokeai. Both CUDA and ROCm images are available. Check the above link for relevant tags.
> [!IMPORTANT]
> Ensure that Docker is set up to use the GPU. Refer to [NVIDIA][nvidia docker docs] or [AMD][amd docker docs] documentation.
### Generate!
Run the container, modifying the command as necessary:
```bash
docker run --runtime=nvidia --gpus=all --publish 9090:9090 ghcr.io/invoke-ai/invokeai
```
Then open `http://localhost:9090` and install some models using the Model Manager tab to begin generating.
For ROCm, add `--device /dev/kfd --device /dev/dri` to the `docker run` command.
### Persist your data
You will likely want to persist your workspace outside of the container. Use the `--volume /home/myuser/invokeai:/invokeai` flag to mount some local directory (using its **absolute** path) to the `/invokeai` path inside the container. Your generated images and models will reside there. You can use this directory with other InvokeAI installations, or switch between runtime directories as needed.
### DIY
Build your own image and customize the environment to match your needs using our `docker-compose` stack. See [README.md](./docker/README.md) in the [docker](./docker) directory.
## Troubleshooting, FAQ and Support
Please review our [FAQ][faq] for solutions to common installation problems and other issues.
@@ -153,5 +114,3 @@ Original portions of the software are Copyright © 2024 by respective contributo
[latest release link]: https://github.com/invoke-ai/InvokeAI/releases/latest
[translation status badge]: https://hosted.weblate.org/widgets/invokeai/-/svg-badge.svg
[translation status link]: https://hosted.weblate.org/engage/invokeai/
[nvidia docker docs]: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html
[amd docker docs]: https://rocm.docs.amd.com/projects/install-on-linux/en/latest/how-to/docker.html


@@ -19,9 +19,8 @@
## INVOKEAI_PORT is the port on which the InvokeAI web interface will be available
# INVOKEAI_PORT=9090
## GPU_DRIVER can be set to either `cuda` or `rocm` to enable GPU support in the container accordingly.
# GPU_DRIVER=cuda #| rocm
## GPU_DRIVER can be set to either `nvidia` or `rocm` to enable GPU support in the container accordingly.
# GPU_DRIVER=nvidia #| rocm
## CONTAINER_UID can be set to the UID of the user on the host system that should own the files in the container.
## It is usually not necessary to change this. Use `id -u` on the host system to find the UID.
# CONTAINER_UID=1000


@@ -1,75 +1,41 @@
# Invoke in Docker
# InvokeAI Containerized
- Ensure that Docker can use the GPU on your system
- This documentation assumes Linux, but should work similarly under Windows with WSL2
- We don't recommend running Invoke in Docker on macOS at this time. It works, but very slowly.
All commands should be run within the `docker` directory: `cd docker`
## Quickstart :lightning:
## Quickstart :rocket:
No `docker compose`, no persistence, just a simple one-liner using the official images:
On a known working Linux+Docker+CUDA (Nvidia) system, execute `./run.sh` in this directory. It will take a few minutes - depending on your internet speed - to install the core models. Once the application starts up, open `http://localhost:9090` in your browser to Invoke!
**CUDA:**
For more configuration options (using an AMD GPU, custom root directory location, etc): read on.
```bash
docker run --runtime=nvidia --gpus=all --publish 9090:9090 ghcr.io/invoke-ai/invokeai
```
**ROCm:**
```bash
docker run --device /dev/kfd --device /dev/dri --publish 9090:9090 ghcr.io/invoke-ai/invokeai:main-rocm
```
Open `http://localhost:9090` in your browser once the container finishes booting, install some models, and generate away!
> [!TIP]
> To persist your data (including downloaded models) outside of the container, add a `--volume/-v` flag to the above command, e.g.: `docker run --volume /some/local/path:/invokeai <...the rest of the command>`
## Customize the container
We ship the `run.sh` script, which is a convenient wrapper around `docker compose` for cases where custom image build args are needed. Alternatively, the familiar `docker compose` commands work just as well.
```bash
cd docker
cp .env.sample .env
# edit .env to your liking if you need to; it is well commented.
./run.sh
```
It will take a few minutes to build the image the first time. Once the application starts up, open `http://localhost:9090` in your browser to invoke!
## Docker setup in detail
## Detailed setup
#### Linux
1. Ensure builkit is enabled in the Docker daemon settings (`/etc/docker/daemon.json`)
2. Install the `docker compose` plugin using your package manager, or follow a [tutorial](https://docs.docker.com/compose/install/linux/#install-using-the-repository).
- The deprecated `docker-compose` (hyphenated) CLI probably won't work. Update to a recent version.
- The deprecated `docker-compose` (hyphenated) CLI continues to work for now.
3. Ensure docker daemon is able to access the GPU.
- [NVIDIA docs](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html)
- [AMD docs](https://rocm.docs.amd.com/projects/install-on-linux/en/latest/how-to/docker.html)
- You may need to install [nvidia-container-toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html)
#### macOS
> [!TIP]
> You'll be better off installing Invoke directly on your system, because Docker can not use the GPU on macOS.
If you are still reading:
1. Ensure Docker has at least 16GB RAM
2. Enable VirtioFS for file sharing
3. Enable `docker compose` V2 support
This is done via Docker Desktop preferences.
This is done via Docker Desktop preferences
### Configure the Invoke Environment
### Configure Invoke environment
1. Make a copy of `.env.sample` and name it `.env` (`cp .env.sample .env` (Mac/Linux) or `copy example.env .env` (Windows)). Make changes as necessary. Set `INVOKEAI_ROOT` to an absolute path to the desired location of the InvokeAI runtime directory. It may be an existing directory from a previous installation (post 4.0.0).
1. Make a copy of `.env.sample` and name it `.env` (`cp .env.sample .env` (Mac/Linux) or `copy example.env .env` (Windows)). Make changes as necessary. Set `INVOKEAI_ROOT` to an absolute path to:
a. the desired location of the InvokeAI runtime directory, or
b. an existing, v3.0.0 compatible runtime directory.
1. Execute `run.sh`
The image will be built automatically if needed.
The runtime directory (holding models and outputs) will be created in the location specified by `INVOKEAI_ROOT`. The default location is `~/invokeai`. Navigate to the Model Manager tab and install some models before generating.
The runtime directory (holding models and outputs) will be created in the location specified by `INVOKEAI_ROOT`. The default location is `~/invokeai`. The runtime directory will be populated with the base configs and models necessary to start generating.
### Use a GPU
@@ -77,9 +43,9 @@ The runtime directory (holding models and outputs) will be created in the locati
- WSL2 is *required* for Windows.
- only `x86_64` architecture is supported.
The Docker daemon on the system must be already set up to use the GPU. In case of Linux, this involves installing `nvidia-docker-runtime` and configuring the `nvidia` runtime as default. Steps will be different for AMD. Please see Docker/NVIDIA/AMD documentation for the most up-to-date instructions for using your GPU with Docker.
The Docker daemon on the system must be already set up to use the GPU. In case of Linux, this involves installing `nvidia-docker-runtime` and configuring the `nvidia` runtime as default. Steps will be different for AMD. Please see Docker documentation for the most up-to-date instructions for using your GPU with Docker.
To use an AMD GPU, set `GPU_DRIVER=rocm` in your `.env` file before running `./run.sh`.
To use an AMD GPU, set `GPU_DRIVER=rocm` in your `.env` file.
## Customize
@@ -93,10 +59,10 @@ Values are optional, but setting `INVOKEAI_ROOT` is highly recommended. The defa
INVOKEAI_ROOT=/Volumes/WorkDrive/invokeai
HUGGINGFACE_TOKEN=the_actual_token
CONTAINER_UID=1000
GPU_DRIVER=cuda
GPU_DRIVER=nvidia
```
Any environment variables supported by InvokeAI can be set here. See the [Configuration docs](https://invoke-ai.github.io/InvokeAI/features/CONFIGURATION/) for further detail.
Any environment variables supported by InvokeAI can be set here - please see the [Configuration docs](https://invoke-ai.github.io/InvokeAI/features/CONFIGURATION/) for further detail.
## Even More Customizing!


@@ -1,5 +1,7 @@
# Copyright (c) 2023 Eugene Brodsky https://github.com/ebr
version: '3.8'
x-invokeai: &invokeai
image: "local/invokeai:latest"
build:
@@ -30,7 +32,7 @@ x-invokeai: &invokeai
services:
invokeai-cuda:
invokeai-nvidia:
<<: *invokeai
deploy:
resources:


@@ -23,18 +23,18 @@ usermod -u ${USER_ID} ${USER} 1>/dev/null
# but it is useful to have the full SSH server e.g. on Runpod.
# (use SCP to copy files to/from the image, etc)
if [[ -v "PUBLIC_KEY" ]] && [[ ! -d "${HOME}/.ssh" ]]; then
apt-get update
apt-get install -y openssh-server
pushd "$HOME"
mkdir -p .ssh
echo "${PUBLIC_KEY}" >.ssh/authorized_keys
chmod -R 700 .ssh
popd
service ssh start
apt-get update
apt-get install -y openssh-server
pushd "$HOME"
mkdir -p .ssh
echo "${PUBLIC_KEY}" > .ssh/authorized_keys
chmod -R 700 .ssh
popd
service ssh start
fi
mkdir -p "${INVOKEAI_ROOT}"
chown --recursive ${USER} "${INVOKEAI_ROOT}" || true
chown --recursive ${USER} "${INVOKEAI_ROOT}"
cd "${INVOKEAI_ROOT}"
# Run the CMD as the Container User (not root).


@@ -8,15 +8,11 @@ run() {
local build_args=""
local profile=""
# create .env file if it doesn't exist, otherwise docker compose will fail
touch .env
# parse .env file for build args
build_args=$(awk '$1 ~ /=[^$]/ && $0 !~ /^#/ {print "--build-arg " $0 " "}' .env) &&
profile="$(awk -F '=' '/GPU_DRIVER/ {print $2}' .env)"
# default to 'cuda' profile
[[ -z "$profile" ]] && profile="cuda"
[[ -z "$profile" ]] && profile="nvidia"
local service_name="invokeai-$profile"


@@ -408,7 +408,7 @@ config = get_config()
logger = InvokeAILogger.get_logger(config=config)
db = SqliteDatabase(config.db_path, logger)
record_store = ModelRecordServiceSQL(db, logger)
record_store = ModelRecordServiceSQL(db)
queue = DownloadQueueService()
queue.start()


@@ -4,37 +4,50 @@ title: Installing with Docker
# :fontawesome-brands-docker: Docker
!!! warning "macOS users"
!!! warning "macOS and AMD GPU Users"
Docker can not access the GPU on macOS, so your generation speeds will be slow. [Install InvokeAI](INSTALLATION.md) instead.
We highly recommend to Install InvokeAI locally using [these instructions](INSTALLATION.md),
because Docker containers can not access the GPU on macOS.
!!! warning "AMD GPU Users"
Container support for AMD GPUs has been reported to work by the community, but has not received
extensive testing. Please make sure to set the `GPU_DRIVER=rocm` environment variable (see below), and
use the `build.sh` script to build the image for this to take effect at build time.
!!! tip "Linux and Windows Users"
Configure Docker to access your machine's GPU.
For optimal performance, configure your Docker daemon to access your machine's GPU.
Docker Desktop on Windows [includes GPU support](https://www.docker.com/blog/wsl-2-gpu-support-for-docker-desktop-on-nvidia-gpus/).
Linux users should follow the [NVIDIA](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html) or [AMD](https://rocm.docs.amd.com/projects/install-on-linux/en/latest/how-to/docker.html) documentation.
Linux users should install and configure the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html)
## Why containers?
They provide a flexible, reliable way to build and deploy InvokeAI.
See [Processes](https://12factor.net/processes) under the Twelve-Factor App
methodology for details on why running applications in such a stateless fashion is important.
The container is configured for CUDA by default, but can be built to support AMD GPUs
by setting the `GPU_DRIVER=rocm` environment variable at Docker image build time.
Developers on Apple silicon (M1/M2/M3): You
[can't access your GPU cores from Docker containers](https://github.com/pytorch/pytorch/issues/81224)
and performance is reduced compared with running it directly on macOS but for
development purposes it's fine. Once you're done with development tasks on your
laptop you can build for the target platform and architecture and deploy to
another environment with NVIDIA GPUs on-premises or in the cloud.
## TL;DR
Ensure your Docker setup is able to use your GPU. Then:
```bash
docker run --runtime=nvidia --gpus=all --publish 9090:9090 ghcr.io/invoke-ai/invokeai
```
Once the container starts up, open http://localhost:9090 in your browser, install some models, and start generating.
## Build-It-Yourself
All the docker materials are located inside the [docker](https://github.com/invoke-ai/InvokeAI/tree/main/docker) directory in the Git repo.
This assumes properly configured Docker on Linux or Windows/WSL2. Read on for detailed customization options.
```bash
# docker compose commands should be run from the `docker` directory
cd docker
cp .env.sample .env
docker compose up
```
We also ship the `run.sh` convenience script. See the `docker/README.md` file for detailed instructions on how to customize the docker setup to your needs.
## Installation in a Linux container (desktop)
### Prerequisites
@@ -45,9 +58,18 @@ Preferences, Resources, Advanced. Increase the CPUs and Memory to avoid this
[Issue](https://github.com/invoke-ai/InvokeAI/issues/342). You may need to
increase Swap and Disk image size too.
#### Get a Huggingface-Token
Besides the Docker Agent you will need an Account on
[huggingface.co](https://huggingface.co/join).
After you succesfully registered your account, go to
[huggingface.co/settings/tokens](https://huggingface.co/settings/tokens), create
a token and copy it, since you will need in for the next step.
### Setup
Set up your environment variables. In the `docker` directory, make a copy of `.env.sample` and name it `.env`. Make changes as necessary.
Set up your environmnent variables. In the `docker` directory, make a copy of `.env.sample` and name it `.env`. Make changes as necessary.
Any environment variables supported by InvokeAI can be set here - please see the [CONFIGURATION](../features/CONFIGURATION.md) for further detail.
@@ -81,9 +103,10 @@ Once the container starts up (and configures the InvokeAI root directory if this
## Troubleshooting / FAQ
- Q: I am running on Windows under WSL2, and am seeing a "no such file or directory" error.
- A: Your `docker-entrypoint.sh` might have has Windows (CRLF) line endings, depending how you cloned the repository.
To solve this, change the line endings in the `docker-entrypoint.sh` file to `LF`. You can do this in VSCode
- A: Your `docker-entrypoint.sh` file likely has Windows (CRLF) as opposed to Unix (LF) line endings,
and you may have cloned this repository before the issue was fixed. To solve this, please change
the line endings in the `docker-entrypoint.sh` file to `LF`. You can do this in VSCode
(`Ctrl+P` and search for "line endings"), or by using the `dos2unix` utility in WSL.
Finally, you may delete `docker-entrypoint.sh` followed by `git pull; git checkout docker/docker-entrypoint.sh`
to reset the file to its most recent version.
For more information on this issue, see [Docker Desktop documentation](https://docs.docker.com/desktop/troubleshoot/topics/#avoid-unexpected-syntax-errors-use-unix-style-line-endings-for-files-in-containers)
For more information on this issue, please see the [Docker Desktop documentation](https://docs.docker.com/desktop/troubleshoot/topics/#avoid-unexpected-syntax-errors-use-unix-style-line-endings-for-files-in-containers)


@@ -13,7 +13,7 @@ echo 2. Open the developer console
echo 3. Command-line help
echo Q - Quit
echo.
echo To update, download and run the installer from https://github.com/invoke-ai/InvokeAI/releases/latest
echo To update, download and run the installer from https://github.com/invoke-ai/InvokeAI/releases/latest.
echo.
set /P choice="Please enter 1-4, Q: [1] "
if not defined choice set choice=1


@@ -4,39 +4,37 @@ from logging import Logger
import torch
from invokeai.app.services.board_image_records.board_image_records_sqlite import SqliteBoardImageRecordStorage
from invokeai.app.services.board_images.board_images_default import BoardImagesService
from invokeai.app.services.board_records.board_records_sqlite import SqliteBoardRecordStorage
from invokeai.app.services.boards.boards_default import BoardService
from invokeai.app.services.bulk_download.bulk_download_default import BulkDownloadService
from invokeai.app.services.config.config_default import InvokeAIAppConfig
from invokeai.app.services.download.download_default import DownloadQueueService
from invokeai.app.services.events.events_fastapievents import FastAPIEventService
from invokeai.app.services.image_files.image_files_disk import DiskImageFileStorage
from invokeai.app.services.image_records.image_records_sqlite import SqliteImageRecordStorage
from invokeai.app.services.images.images_default import ImageService
from invokeai.app.services.invocation_cache.invocation_cache_memory import MemoryInvocationCache
from invokeai.app.services.invocation_services import InvocationServices
from invokeai.app.services.invocation_stats.invocation_stats_default import InvocationStatsService
from invokeai.app.services.invoker import Invoker
from invokeai.app.services.model_images.model_images_default import ModelImageFileStorageDisk
from invokeai.app.services.model_manager.model_manager_default import ModelManagerService
from invokeai.app.services.model_records.model_records_sql import ModelRecordServiceSQL
from invokeai.app.services.names.names_default import SimpleNameService
from invokeai.app.services.object_serializer.object_serializer_disk import ObjectSerializerDisk
from invokeai.app.services.object_serializer.object_serializer_forward_cache import ObjectSerializerForwardCache
from invokeai.app.services.session_processor.session_processor_default import (
DefaultSessionProcessor,
DefaultSessionRunner,
)
from invokeai.app.services.session_queue.session_queue_sqlite import SqliteSessionQueue
from invokeai.app.services.shared.sqlite.sqlite_util import init_db
from invokeai.app.services.urls.urls_default import LocalUrlService
from invokeai.app.services.workflow_records.workflow_records_sqlite import SqliteWorkflowRecordsStorage
from invokeai.backend.stable_diffusion.diffusion.conditioning_data import ConditioningFieldData
from invokeai.backend.util.logging import InvokeAILogger
from invokeai.version.invokeai_version import __version__
from ..services.board_image_records.board_image_records_sqlite import SqliteBoardImageRecordStorage
from ..services.board_images.board_images_default import BoardImagesService
from ..services.board_records.board_records_sqlite import SqliteBoardRecordStorage
from ..services.boards.boards_default import BoardService
from ..services.bulk_download.bulk_download_default import BulkDownloadService
from ..services.config import InvokeAIAppConfig
from ..services.download import DownloadQueueService
from ..services.events.events_fastapievents import FastAPIEventService
from ..services.image_files.image_files_disk import DiskImageFileStorage
from ..services.image_records.image_records_sqlite import SqliteImageRecordStorage
from ..services.images.images_default import ImageService
from ..services.invocation_cache.invocation_cache_memory import MemoryInvocationCache
from ..services.invocation_services import InvocationServices
from ..services.invocation_stats.invocation_stats_default import InvocationStatsService
from ..services.invoker import Invoker
from ..services.model_images.model_images_default import ModelImageFileStorageDisk
from ..services.model_manager.model_manager_default import ModelManagerService
from ..services.model_records import ModelRecordServiceSQL
from ..services.names.names_default import SimpleNameService
from ..services.session_processor.session_processor_default import DefaultSessionProcessor, DefaultSessionRunner
from ..services.session_queue.session_queue_sqlite import SqliteSessionQueue
from ..services.urls.urls_default import LocalUrlService
from ..services.workflow_records.workflow_records_sqlite import SqliteWorkflowRecordsStorage
# TODO: is there a better way to achieve this?
def check_internet() -> bool:
@@ -99,7 +97,7 @@ class ApiDependencies:
model_images_service = ModelImageFileStorageDisk(model_images_folder / "model_images")
model_manager = ModelManagerService.build_model_manager(
app_config=configuration,
model_record_service=ModelRecordServiceSQL(db=db, logger=logger),
model_record_service=ModelRecordServiceSQL(db=db),
download_queue=download_queue_service,
events=events,
)


@@ -10,13 +10,14 @@ from fastapi import Body
from fastapi.routing import APIRouter
from pydantic import BaseModel, Field
from invokeai.app.api.dependencies import ApiDependencies
from invokeai.app.invocations.upscale import ESRGAN_MODELS
from invokeai.app.services.invocation_cache.invocation_cache_common import InvocationCacheStatus
from invokeai.backend.image_util.infill_methods.patchmatch import PatchMatch
from invokeai.backend.util.logging import logging
from invokeai.version import __version__
from ..dependencies import ApiDependencies
class LogLevel(int, Enum):
NotSet = logging.NOTSET


@@ -2,7 +2,7 @@ from fastapi import Body, HTTPException
from fastapi.routing import APIRouter
from pydantic import BaseModel, Field
from invokeai.app.api.dependencies import ApiDependencies
from ..dependencies import ApiDependencies
board_images_router = APIRouter(prefix="/v1/board_images", tags=["boards"])


@@ -4,11 +4,12 @@ from fastapi import Body, HTTPException, Path, Query
from fastapi.routing import APIRouter
from pydantic import BaseModel, Field
from invokeai.app.api.dependencies import ApiDependencies
from invokeai.app.services.board_records.board_records_common import BoardChanges, UncategorizedImageCounts
from invokeai.app.services.board_records.board_records_common import BoardChanges
from invokeai.app.services.boards.boards_common import BoardDTO
from invokeai.app.services.shared.pagination import OffsetPaginatedResults
from ..dependencies import ApiDependencies
boards_router = APIRouter(prefix="/v1/boards", tags=["boards"])
@@ -31,7 +32,6 @@ class DeleteBoardResult(BaseModel):
)
async def create_board(
board_name: str = Query(description="The name of the board to create"),
is_private: bool = Query(default=False, description="Whether the board is private"),
) -> BoardDTO:
"""Creates a board"""
try:
@@ -118,13 +118,15 @@ async def list_boards(
all: Optional[bool] = Query(default=None, description="Whether to list all boards"),
offset: Optional[int] = Query(default=None, description="The page offset"),
limit: Optional[int] = Query(default=None, description="The number of boards per page"),
include_archived: bool = Query(default=False, description="Whether or not to include archived boards in list"),
) -> Union[OffsetPaginatedResults[BoardDTO], list[BoardDTO]]:
"""Gets a list of boards"""
if all:
return ApiDependencies.invoker.services.boards.get_all(include_archived)
return ApiDependencies.invoker.services.boards.get_all()
elif offset is not None and limit is not None:
return ApiDependencies.invoker.services.boards.get_many(offset, limit, include_archived)
return ApiDependencies.invoker.services.boards.get_many(
offset,
limit,
)
else:
raise HTTPException(
status_code=400,
@@ -146,14 +148,3 @@ async def list_all_board_image_names(
board_id,
)
return image_names
@boards_router.get(
"/uncategorized/counts",
operation_id="get_uncategorized_image_counts",
response_model=UncategorizedImageCounts,
)
async def get_uncategorized_image_counts() -> UncategorizedImageCounts:
"""Gets count of images and assets for uncategorized images (images with no board assocation)"""
return ApiDependencies.invoker.services.board_records.get_uncategorized_image_counts()


@@ -8,12 +8,13 @@ from fastapi.routing import APIRouter
from pydantic.networks import AnyHttpUrl
from starlette.exceptions import HTTPException
from invokeai.app.api.dependencies import ApiDependencies
from invokeai.app.services.download import (
DownloadJob,
UnknownJobIDException,
)
from ..dependencies import ApiDependencies
download_queue_router = APIRouter(prefix="/v1/download_queue", tags=["download_queue"])


@@ -8,16 +8,12 @@ from fastapi.routing import APIRouter
from PIL import Image
from pydantic import BaseModel, Field, JsonValue
from invokeai.app.api.dependencies import ApiDependencies
from invokeai.app.invocations.fields import MetadataField
from invokeai.app.services.image_records.image_records_common import (
ImageCategory,
ImageRecordChanges,
ResourceOrigin,
)
from invokeai.app.services.image_records.image_records_common import ImageCategory, ImageRecordChanges, ResourceOrigin
from invokeai.app.services.images.images_common import ImageDTO, ImageUrlsDTO
from invokeai.app.services.shared.pagination import OffsetPaginatedResults
from invokeai.app.services.shared.sqlite.sqlite_common import SQLiteDirection
from ..dependencies import ApiDependencies
images_router = APIRouter(prefix="/v1/images", tags=["images"])
@@ -233,14 +229,21 @@ async def get_image_workflow(
)
async def get_image_full(
image_name: str = Path(description="The name of full-resolution image file to get"),
) -> Response:
) -> FileResponse:
"""Gets a full-resolution image file"""
try:
path = ApiDependencies.invoker.services.images.get_path(image_name)
with open(path, "rb") as f:
content = f.read()
response = Response(content, media_type="image/png")
if not ApiDependencies.invoker.services.images.validate_path(path):
raise HTTPException(status_code=404)
response = FileResponse(
path,
media_type="image/png",
filename=image_name,
content_disposition_type="inline",
)
response.headers["Cache-Control"] = f"max-age={IMAGE_MAX_AGE}"
return response
except Exception:
@@ -261,14 +264,15 @@ async def get_image_full(
)
async def get_image_thumbnail(
image_name: str = Path(description="The name of thumbnail image file to get"),
) -> Response:
) -> FileResponse:
"""Gets a thumbnail image file"""
try:
path = ApiDependencies.invoker.services.images.get_path(image_name, thumbnail=True)
with open(path, "rb") as f:
content = f.read()
response = Response(content, media_type="image/webp")
if not ApiDependencies.invoker.services.images.validate_path(path):
raise HTTPException(status_code=404)
response = FileResponse(path, media_type="image/webp", content_disposition_type="inline")
response.headers["Cache-Control"] = f"max-age={IMAGE_MAX_AGE}"
return response
except Exception:
@@ -312,14 +316,16 @@ async def list_image_dtos(
),
offset: int = Query(default=0, description="The page offset"),
limit: int = Query(default=10, description="The number of images per page"),
order_dir: SQLiteDirection = Query(default=SQLiteDirection.Descending, description="The order of sort"),
starred_first: bool = Query(default=True, description="Whether to sort by starred images first"),
search_term: Optional[str] = Query(default=None, description="The term to search for"),
) -> OffsetPaginatedResults[ImageDTO]:
"""Gets a list of image DTOs"""
image_dtos = ApiDependencies.invoker.services.images.get_many(
offset, limit, starred_first, order_dir, image_origin, categories, is_intermediate, board_id, search_term
offset,
limit,
image_origin,
categories,
is_intermediate,
board_id,
)
return image_dtos


@@ -3,9 +3,9 @@
import io
import pathlib
import shutil
import traceback
from copy import deepcopy
from tempfile import TemporaryDirectory
from typing import Any, Dict, List, Optional, Type
from fastapi import Body, Path, Query, Response, UploadFile
@@ -16,10 +16,10 @@ from pydantic import AnyHttpUrl, BaseModel, ConfigDict, Field
from starlette.exceptions import HTTPException
from typing_extensions import Annotated
from invokeai.app.api.dependencies import ApiDependencies
from invokeai.app.services.model_images.model_images_common import ModelImageFileNotFoundException
from invokeai.app.services.model_install.model_install_common import ModelInstallJob
from invokeai.app.services.model_records import (
DuplicateModelException,
InvalidModelException,
ModelRecordChanges,
UnknownModelException,
@@ -30,12 +30,15 @@ from invokeai.backend.model_manager.config import (
MainCheckpointConfig,
ModelFormat,
ModelType,
SubModelType,
)
from invokeai.backend.model_manager.metadata.fetch.huggingface import HuggingFaceMetadataFetch
from invokeai.backend.model_manager.metadata.metadata_base import ModelMetadataWithFiles, UnknownMetadataException
from invokeai.backend.model_manager.search import ModelSearch
from invokeai.backend.model_manager.starter_models import STARTER_MODELS, StarterModel, StarterModelWithoutDependencies
from ..dependencies import ApiDependencies
model_manager_router = APIRouter(prefix="/v2/models", tags=["model_manager"])
# images are immutable; set a high max-age
@@ -171,6 +174,18 @@ async def get_model_record(
raise HTTPException(status_code=404, detail=str(e))
# @model_manager_router.get("/summary", operation_id="list_model_summary")
# async def list_model_summary(
# page: int = Query(default=0, description="The page to get"),
# per_page: int = Query(default=10, description="The number of models per page"),
# order_by: ModelRecordOrderBy = Query(default=ModelRecordOrderBy.Default, description="The attribute to order by"),
# ) -> PaginatedResults[ModelSummary]:
# """Gets a page of model summary data."""
# record_store = ApiDependencies.invoker.services.model_manager.store
# results: PaginatedResults[ModelSummary] = record_store.list_models(page=page, per_page=per_page, order_by=order_by)
# return results
class FoundModel(BaseModel):
path: str = Field(description="Path to the model")
is_installed: bool = Field(description="Whether or not the model is already installed")
@@ -731,36 +746,39 @@ async def convert_model(
logger.error(f"The model with key {key} is not a main checkpoint model.")
raise HTTPException(400, f"The model with key {key} is not a main checkpoint model.")
with TemporaryDirectory(dir=ApiDependencies.invoker.services.configuration.models_path) as tmpdir:
convert_path = pathlib.Path(tmpdir) / pathlib.Path(model_config.path).stem
converted_model = loader.load_model(model_config)
# write the converted file to the convert path
raw_model = converted_model.model
assert hasattr(raw_model, "save_pretrained")
raw_model.save_pretrained(convert_path)
assert convert_path.exists()
# loading the model will convert it into a cached diffusers file
try:
cc_size = loader.convert_cache.max_size
if cc_size == 0: # temporary set the convert cache to a positive number so that cached model is written
loader._convert_cache.max_size = 1.0
loader.load_model(model_config, submodel_type=SubModelType.Scheduler)
finally:
loader._convert_cache.max_size = cc_size
# temporarily rename the original safetensors file so that there is no naming conflict
original_name = model_config.name
model_config.name = f"{original_name}.DELETE"
changes = ModelRecordChanges(name=model_config.name)
store.update_model(key, changes=changes)
# Get the path of the converted model from the loader
cache_path = loader.convert_cache.cache_path(key)
assert cache_path.exists()
# install the diffusers
try:
new_key = installer.install_path(
convert_path,
config={
"name": original_name,
"description": model_config.description,
"hash": model_config.hash,
"source": model_config.source,
},
)
except Exception as e:
logger.error(str(e))
store.update_model(key, changes=ModelRecordChanges(name=original_name))
raise HTTPException(status_code=409, detail=str(e))
# temporarily rename the original safetensors file so that there is no naming conflict
original_name = model_config.name
model_config.name = f"{original_name}.DELETE"
changes = ModelRecordChanges(name=model_config.name)
store.update_model(key, changes=changes)
# install the diffusers
try:
new_key = installer.install_path(
cache_path,
config={
"name": original_name,
"description": model_config.description,
"hash": model_config.hash,
"source": model_config.source,
},
)
except DuplicateModelException as e:
logger.error(str(e))
raise HTTPException(status_code=409, detail=str(e))
# Update the model image if the model had one
try:
@@ -773,8 +791,8 @@ async def convert_model(
# delete the original safetensors file
installer.delete(key)
# delete the temporary directory
# shutil.rmtree(cache_path)
# delete the cached version
shutil.rmtree(cache_path)
# return the config record for the new diffusers directory
new_config = store.get_model(new_key)

View File

@@ -4,7 +4,6 @@ from fastapi import Body, Path, Query
from fastapi.routing import APIRouter
from pydantic import BaseModel
from invokeai.app.api.dependencies import ApiDependencies
from invokeai.app.services.session_processor.session_processor_common import SessionProcessorStatus
from invokeai.app.services.session_queue.session_queue_common import (
QUEUE_ITEM_STATUS,
@@ -20,6 +19,8 @@ from invokeai.app.services.session_queue.session_queue_common import (
)
from invokeai.app.services.shared.pagination import CursorPaginatedResults
from ..dependencies import ApiDependencies
session_queue_router = APIRouter(prefix="/v1/queue", tags=["queue"])

View File

@@ -20,9 +20,14 @@ from torch.backends.mps import is_available as is_mps_available
# noinspection PyUnresolvedReferences
import invokeai.backend.util.hotfixes # noqa: F401 (monkeypatching on import)
import invokeai.frontend.web as web_dir
from invokeai.app.api.dependencies import ApiDependencies
from invokeai.app.api.no_cache_staticfiles import NoCacheStaticFiles
from invokeai.app.api.routers import (
from invokeai.app.services.config.config_default import get_config
from invokeai.app.util.custom_openapi import get_openapi_func
from invokeai.backend.util.devices import TorchDevice
from ..backend.util.logging import InvokeAILogger
from .api.dependencies import ApiDependencies
from .api.routers import (
app_info,
board_images,
boards,
@@ -33,11 +38,7 @@ from invokeai.app.api.routers import (
utilities,
workflows,
)
from invokeai.app.api.sockets import SocketIO
from invokeai.app.services.config.config_default import get_config
from invokeai.app.util.custom_openapi import get_openapi_func
from invokeai.backend.util.devices import TorchDevice
from invokeai.backend.util.logging import InvokeAILogger
from .api.sockets import SocketIO
app_config = get_config()
@@ -161,7 +162,6 @@ def invoke_api() -> None:
# Taken from https://waylonwalker.com/python-find-available-port/, thanks Waylon!
# https://github.com/WaylonWalker
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
s.settimeout(1)
if s.connect_ex(("localhost", port)) == 0:
return find_port(port=port + 1)
else:
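The truncated `find_port` hunk above probes a port by attempting a TCP connection and recursing to the next port when something is already listening. A self-contained sketch of that same recursive shape (illustrative only, not the exact upstream code credited to waylonwalker.com):

```python
import socket


def find_port(port: int) -> int:
    """Return the first port >= `port` with no listener on localhost.

    connect_ex() returns 0 only when a connection succeeds, i.e. when
    something is already listening on that port; in that case we try
    the next port recursively, mirroring the hunk above.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1)
        if s.connect_ex(("localhost", port)) == 0:
            return find_port(port=port + 1)
        return port
```

Note the recursion depth is bounded in practice by how many consecutive ports are occupied; a loop would be equivalent.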

View File

@@ -40,7 +40,7 @@ from invokeai.app.util.misc import uuid_string
from invokeai.backend.util.logging import InvokeAILogger
if TYPE_CHECKING:
from invokeai.app.services.invocation_services import InvocationServices
from ..services.invocation_services import InvocationServices
logger = InvokeAILogger.get_logger()

View File

@@ -4,12 +4,13 @@
import numpy as np
from pydantic import ValidationInfo, field_validator
from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
from invokeai.app.invocations.fields import InputField
from invokeai.app.invocations.primitives import IntegerCollectionOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.app.util.misc import SEED_MAX
from .baseinvocation import BaseInvocation, invocation
from .fields import InputField
@invocation(
"range", title="Integer Range", tags=["collection", "integer", "range"], category="collections", version="1.0.0"

View File

@@ -5,7 +5,6 @@ from compel import Compel, ReturnedEmbeddingsType
from compel.prompt_parser import Blend, Conjunction, CrossAttentionControlSubstitute, FlattenedPrompt, Fragment
from transformers import CLIPTextModel, CLIPTextModelWithProjection, CLIPTokenizer
from invokeai.app.invocations.baseinvocation import BaseInvocation, BaseInvocationOutput, invocation, invocation_output
from invokeai.app.invocations.fields import (
ConditioningField,
FieldDescriptions,
@@ -15,7 +14,6 @@ from invokeai.app.invocations.fields import (
TensorField,
UIComponent,
)
from invokeai.app.invocations.model import CLIPField
from invokeai.app.invocations.primitives import ConditioningOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.app.util.ti_utils import generate_ti_list
@@ -28,6 +26,9 @@ from invokeai.backend.stable_diffusion.diffusion.conditioning_data import (
)
from invokeai.backend.util.devices import TorchDevice
from .baseinvocation import BaseInvocation, BaseInvocationOutput, invocation, invocation_output
from .model import CLIPField
# unconditioned: Optional[torch.Tensor]

View File

@@ -1,5 +1,6 @@
from typing import Literal
from invokeai.backend.stable_diffusion.schedulers import SCHEDULER_MAP
from invokeai.backend.util.devices import TorchDevice
LATENT_SCALE_FACTOR = 8
@@ -10,6 +11,9 @@ factor is hard-coded to a literal '8' rather than using this constant.
The ratio of image:latent dimensions is LATENT_SCALE_FACTOR:1, or 8:1.
"""
SCHEDULER_NAME_VALUES = Literal[tuple(SCHEDULER_MAP.keys())]
"""A literal type representing the valid scheduler names."""
IMAGE_MODES = Literal["L", "RGB", "RGBA", "CMYK", "YCbCr", "LAB", "HSV", "I", "F"]
"""A literal type for PIL image modes supported by Invoke"""

View File

@@ -22,13 +22,6 @@ from controlnet_aux.util import HWC3, ade_palette
from PIL import Image
from pydantic import BaseModel, Field, field_validator, model_validator
from invokeai.app.invocations.baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
Classification,
invocation,
invocation_output,
)
from invokeai.app.invocations.fields import (
FieldDescriptions,
ImageField,
@@ -52,6 +45,8 @@ from invokeai.backend.image_util.lineart_anime import LineartAnimeProcessor
from invokeai.backend.image_util.util import np_to_pil, pil_to_np
from invokeai.backend.util.devices import TorchDevice
from .baseinvocation import BaseInvocation, BaseInvocationOutput, Classification, invocation, invocation_output
class ControlField(BaseModel):
image: ImageField = Field(description="The control image")

View File

@@ -5,11 +5,13 @@ import cv2 as cv
import numpy
from PIL import Image, ImageOps
from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
from invokeai.app.invocations.fields import ImageField, InputField, WithBoard, WithMetadata
from invokeai.app.invocations.fields import ImageField
from invokeai.app.invocations.primitives import ImageOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from .baseinvocation import BaseInvocation, invocation
from .fields import InputField, WithBoard, WithMetadata
@invocation("cv_inpaint", title="OpenCV Inpaint", tags=["opencv", "inpaint"], category="inpaint", version="1.3.1")
class CvInpaintInvocation(BaseInvocation, WithMetadata, WithBoard):

View File

@@ -17,7 +17,7 @@ from torchvision.transforms.functional import resize as tv_resize
from transformers import CLIPVisionModelWithProjection
from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
from invokeai.app.invocations.constants import LATENT_SCALE_FACTOR
from invokeai.app.invocations.constants import LATENT_SCALE_FACTOR, SCHEDULER_NAME_VALUES
from invokeai.app.invocations.controlnet_image_processors import ControlField
from invokeai.app.invocations.fields import (
ConditioningField,
@@ -54,7 +54,6 @@ from invokeai.backend.stable_diffusion.diffusion.conditioning_data import (
TextConditioningRegions,
)
from invokeai.backend.stable_diffusion.schedulers import SCHEDULER_MAP
from invokeai.backend.stable_diffusion.schedulers.schedulers import SCHEDULER_NAME_VALUES
from invokeai.backend.util.devices import TorchDevice
from invokeai.backend.util.hotfixes import ControlNetModel
from invokeai.backend.util.mask import to_standard_float_mask

View File

@@ -160,7 +160,6 @@ class FieldDescriptions:
fp32 = "Whether or not to use full float32 precision"
precision = "Precision to use"
tiled = "Processing using overlapping tiles (reduce memory consumption)"
vae_tile_size = "The tile size for VAE tiling in pixels (image space). If set to 0, the default tile size for the model will be used. Larger tile sizes generally produce better results at the cost of higher memory usage."
detect_res = "Pixel resolution for detection"
image_res = "Pixel resolution for output image"
safe_mode = "Whether or not to use safe mode"

View File

@@ -6,7 +6,6 @@ import cv2
import numpy
from PIL import Image, ImageChops, ImageFilter, ImageOps
from invokeai.app.invocations.baseinvocation import BaseInvocation, Classification, invocation
from invokeai.app.invocations.constants import IMAGE_MODES
from invokeai.app.invocations.fields import (
ColorField,
@@ -22,6 +21,8 @@ from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.image_util.invisible_watermark import InvisibleWatermark
from invokeai.backend.image_util.safety_checker import SafetyChecker
from .baseinvocation import BaseInvocation, Classification, invocation
@invocation("show_image", title="Show Image", tags=["image"], category="image", version="1.0.1")
class ShowImageInvocation(BaseInvocation):

View File

@@ -1,4 +1,3 @@
from contextlib import nullcontext
from functools import singledispatchmethod
import einops
@@ -13,7 +12,7 @@ from diffusers.models.autoencoders.autoencoder_kl import AutoencoderKL
from diffusers.models.autoencoders.autoencoder_tiny import AutoencoderTiny
from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
from invokeai.app.invocations.constants import DEFAULT_PRECISION, LATENT_SCALE_FACTOR
from invokeai.app.invocations.constants import DEFAULT_PRECISION
from invokeai.app.invocations.fields import (
FieldDescriptions,
ImageField,
@@ -25,7 +24,6 @@ from invokeai.app.invocations.primitives import LatentsOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.model_manager import LoadedModel
from invokeai.backend.stable_diffusion.diffusers_pipeline import image_resized_to_grid_as_tensor
from invokeai.backend.stable_diffusion.vae_tiling import patch_vae_tiling_params
@invocation(
@@ -33,7 +31,7 @@ from invokeai.backend.stable_diffusion.vae_tiling import patch_vae_tiling_params
title="Image to Latents",
tags=["latents", "image", "vae", "i2l"],
category="latents",
version="1.1.0",
version="1.0.2",
)
class ImageToLatentsInvocation(BaseInvocation):
"""Encodes an image into latents."""
@@ -46,17 +44,12 @@ class ImageToLatentsInvocation(BaseInvocation):
input=Input.Connection,
)
tiled: bool = InputField(default=False, description=FieldDescriptions.tiled)
# NOTE: tile_size = 0 is a special value. We use this rather than `int | None`, because the workflow UI does not
# offer a way to directly set None values.
tile_size: int = InputField(default=0, multiple_of=8, description=FieldDescriptions.vae_tile_size)
fp32: bool = InputField(default=DEFAULT_PRECISION == torch.float32, description=FieldDescriptions.fp32)
@staticmethod
def vae_encode(
vae_info: LoadedModel, upcast: bool, tiled: bool, image_tensor: torch.Tensor, tile_size: int = 0
) -> torch.Tensor:
def vae_encode(vae_info: LoadedModel, upcast: bool, tiled: bool, image_tensor: torch.Tensor) -> torch.Tensor:
with vae_info as vae:
assert isinstance(vae, (AutoencoderKL, AutoencoderTiny))
assert isinstance(vae, torch.nn.Module)
orig_dtype = vae.dtype
if upcast:
vae.to(dtype=torch.float32)
@@ -88,18 +81,9 @@ class ImageToLatentsInvocation(BaseInvocation):
else:
vae.disable_tiling()
tiling_context = nullcontext()
if tile_size > 0:
tiling_context = patch_vae_tiling_params(
vae,
tile_sample_min_size=tile_size,
tile_latent_min_size=tile_size // LATENT_SCALE_FACTOR,
tile_overlap_factor=0.25,
)
# non_noised_latents_from_image
image_tensor = image_tensor.to(device=vae.device, dtype=vae.dtype)
with torch.inference_mode(), tiling_context:
with torch.inference_mode():
latents = ImageToLatentsInvocation._encode_to_tensor(vae, image_tensor)
latents = vae.config.scaling_factor * latents
@@ -117,9 +101,7 @@ class ImageToLatentsInvocation(BaseInvocation):
if image_tensor.dim() == 3:
image_tensor = einops.rearrange(image_tensor, "c h w -> 1 c h w")
latents = self.vae_encode(
vae_info=vae_info, upcast=self.fp32, tiled=self.tiled, image_tensor=image_tensor, tile_size=self.tile_size
)
latents = self.vae_encode(vae_info, self.fp32, self.tiled, image_tensor)
latents = latents.to("cpu")
name = context.tensors.save(tensor=latents)

View File

@@ -3,9 +3,7 @@ from typing import Literal, get_args
from PIL import Image
from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
from invokeai.app.invocations.fields import ColorField, ImageField, InputField, WithBoard, WithMetadata
from invokeai.app.invocations.image import PIL_RESAMPLING_MAP, PIL_RESAMPLING_MODES
from invokeai.app.invocations.fields import ColorField, ImageField
from invokeai.app.invocations.primitives import ImageOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.app.util.misc import SEED_MAX
@@ -16,6 +14,10 @@ from invokeai.backend.image_util.infill_methods.patchmatch import PatchMatch, in
from invokeai.backend.image_util.infill_methods.tile import infill_tile
from invokeai.backend.util.logging import InvokeAILogger
from .baseinvocation import BaseInvocation, invocation
from .fields import InputField, WithBoard, WithMetadata
from .image import PIL_RESAMPLING_MAP, PIL_RESAMPLING_MODES
logger = InvokeAILogger.get_logger()

View File

@@ -1,5 +1,3 @@
from contextlib import nullcontext
import torch
from diffusers.image_processor import VaeImageProcessor
from diffusers.models.attention_processor import (
@@ -10,9 +8,10 @@ from diffusers.models.attention_processor import (
)
from diffusers.models.autoencoders.autoencoder_kl import AutoencoderKL
from diffusers.models.autoencoders.autoencoder_tiny import AutoencoderTiny
from diffusers.models.unets.unet_2d_condition import UNet2DConditionModel
from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
from invokeai.app.invocations.constants import DEFAULT_PRECISION, LATENT_SCALE_FACTOR
from invokeai.app.invocations.constants import DEFAULT_PRECISION
from invokeai.app.invocations.fields import (
FieldDescriptions,
Input,
@@ -25,7 +24,6 @@ from invokeai.app.invocations.model import VAEField
from invokeai.app.invocations.primitives import ImageOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.stable_diffusion import set_seamless
from invokeai.backend.stable_diffusion.vae_tiling import patch_vae_tiling_params
from invokeai.backend.util.devices import TorchDevice
@@ -34,7 +32,7 @@ from invokeai.backend.util.devices import TorchDevice
title="Latents to Image",
tags=["latents", "image", "vae", "l2i"],
category="latents",
version="1.3.0",
version="1.2.2",
)
class LatentsToImageInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Generates an image from latents."""
@@ -48,9 +46,6 @@ class LatentsToImageInvocation(BaseInvocation, WithMetadata, WithBoard):
input=Input.Connection,
)
tiled: bool = InputField(default=False, description=FieldDescriptions.tiled)
# NOTE: tile_size = 0 is a special value. We use this rather than `int | None`, because the workflow UI does not
# offer a way to directly set None values.
tile_size: int = InputField(default=0, multiple_of=8, description=FieldDescriptions.vae_tile_size)
fp32: bool = InputField(default=DEFAULT_PRECISION == torch.float32, description=FieldDescriptions.fp32)
@torch.no_grad()
@@ -58,9 +53,9 @@ class LatentsToImageInvocation(BaseInvocation, WithMetadata, WithBoard):
latents = context.tensors.load(self.latents.latents_name)
vae_info = context.models.load(self.vae.vae)
assert isinstance(vae_info.model, (AutoencoderKL, AutoencoderTiny))
assert isinstance(vae_info.model, (UNet2DConditionModel, AutoencoderKL, AutoencoderTiny))
with set_seamless(vae_info.model, self.vae.seamless_axes), vae_info as vae:
assert isinstance(vae, (AutoencoderKL, AutoencoderTiny))
assert isinstance(vae, torch.nn.Module)
latents = latents.to(vae.device)
if self.fp32:
vae.to(dtype=torch.float32)
@@ -92,19 +87,10 @@ class LatentsToImageInvocation(BaseInvocation, WithMetadata, WithBoard):
else:
vae.disable_tiling()
tiling_context = nullcontext()
if self.tile_size > 0:
tiling_context = patch_vae_tiling_params(
vae,
tile_sample_min_size=self.tile_size,
tile_latent_min_size=self.tile_size // LATENT_SCALE_FACTOR,
tile_overlap_factor=0.25,
)
# clear memory as vae decode can request a lot
TorchDevice.empty_cache()
with torch.inference_mode(), tiling_context:
with torch.inference_mode():
# copied from diffusers pipeline
latents = latents / vae.config.scaling_factor
image = vae.decode(latents, return_dict=False)[0]

View File

@@ -5,11 +5,12 @@ from typing import Literal
import numpy as np
from pydantic import ValidationInfo, field_validator
from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
from invokeai.app.invocations.fields import FieldDescriptions, InputField
from invokeai.app.invocations.primitives import FloatOutput, IntegerOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from .baseinvocation import BaseInvocation, invocation
@invocation("add", title="Add Integers", tags=["math", "add"], category="math", version="1.0.1")
class AddInvocation(BaseInvocation):

View File

@@ -14,7 +14,8 @@ from invokeai.app.invocations.fields import (
from invokeai.app.invocations.model import ModelIdentifierField
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.app.util.controlnet_utils import CONTROLNET_MODE_VALUES, CONTROLNET_RESIZE_VALUES
from invokeai.version.invokeai_version import __version__
from ...version import __version__
class MetadataItemField(BaseModel):

View File

@@ -3,17 +3,18 @@ from typing import List, Optional
from pydantic import BaseModel, Field
from invokeai.app.invocations.baseinvocation import (
from invokeai.app.invocations.fields import FieldDescriptions, Input, InputField, OutputField, UIType
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.app.shared.models import FreeUConfig
from invokeai.backend.model_manager.config import AnyModelConfig, BaseModelType, ModelType, SubModelType
from .baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
Classification,
invocation,
invocation_output,
)
from invokeai.app.invocations.fields import FieldDescriptions, Input, InputField, OutputField, UIType
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.app.shared.models import FreeUConfig
from invokeai.backend.model_manager.config import AnyModelConfig, BaseModelType, ModelType, SubModelType
class ModelIdentifierField(BaseModel):

View File

@@ -4,12 +4,18 @@
import torch
from pydantic import field_validator
from invokeai.app.invocations.baseinvocation import BaseInvocation, BaseInvocationOutput, invocation, invocation_output
from invokeai.app.invocations.constants import LATENT_SCALE_FACTOR
from invokeai.app.invocations.fields import FieldDescriptions, InputField, LatentsField, OutputField
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.app.util.misc import SEED_MAX
from invokeai.backend.util.devices import TorchDevice
from ...backend.util.devices import TorchDevice
from .baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
invocation,
invocation_output,
)
"""
Utilities

View File

@@ -39,11 +39,12 @@ from easing_functions import (
)
from matplotlib.ticker import MaxNLocator
from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
from invokeai.app.invocations.fields import InputField
from invokeai.app.invocations.primitives import FloatCollectionOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from .baseinvocation import BaseInvocation, invocation
from .fields import InputField
@invocation(
"float_range",

View File

@@ -4,7 +4,6 @@ from typing import Optional
import torch
from invokeai.app.invocations.baseinvocation import BaseInvocation, BaseInvocationOutput, invocation, invocation_output
from invokeai.app.invocations.constants import LATENT_SCALE_FACTOR
from invokeai.app.invocations.fields import (
ColorField,
@@ -22,6 +21,13 @@ from invokeai.app.invocations.fields import (
from invokeai.app.services.images.images_common import ImageDTO
from invokeai.app.services.shared.invocation_context import InvocationContext
from .baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
invocation,
invocation_output,
)
"""
Primitives: Boolean, Integer, Float, String, Image, Latents, Conditioning, Color
- primitive nodes

View File

@@ -5,11 +5,12 @@ import numpy as np
from dynamicprompts.generators import CombinatorialPromptGenerator, RandomPromptGenerator
from pydantic import field_validator
from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
from invokeai.app.invocations.fields import InputField, UIComponent
from invokeai.app.invocations.primitives import StringCollectionOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from .baseinvocation import BaseInvocation, invocation
from .fields import InputField, UIComponent
@invocation(
"dynamic_prompt",

View File

@@ -1,4 +1,5 @@
from invokeai.app.invocations.baseinvocation import BaseInvocation, BaseInvocationOutput, invocation, invocation_output
from invokeai.app.invocations.constants import SCHEDULER_NAME_VALUES
from invokeai.app.invocations.fields import (
FieldDescriptions,
InputField,
@@ -6,7 +7,6 @@ from invokeai.app.invocations.fields import (
UIType,
)
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.stable_diffusion.schedulers.schedulers import SCHEDULER_NAME_VALUES
@invocation_output("scheduler_output")

View File

@@ -1,9 +1,15 @@
from invokeai.app.invocations.baseinvocation import BaseInvocation, BaseInvocationOutput, invocation, invocation_output
from invokeai.app.invocations.fields import FieldDescriptions, InputField, OutputField, UIType
from invokeai.app.invocations.model import CLIPField, ModelIdentifierField, UNetField, VAEField
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.model_manager import SubModelType
from .baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
invocation,
invocation_output,
)
from .model import CLIPField, ModelIdentifierField, UNetField, VAEField
@invocation_output("sdxl_model_loader_output")
class SDXLModelLoaderOutput(BaseInvocationOutput):

View File

@@ -2,11 +2,17 @@
import re
from invokeai.app.invocations.baseinvocation import BaseInvocation, BaseInvocationOutput, invocation, invocation_output
from invokeai.app.invocations.fields import InputField, OutputField, UIComponent
from invokeai.app.invocations.primitives import StringOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from .baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
invocation,
invocation_output,
)
from .fields import InputField, OutputField, UIComponent
from .primitives import StringOutput
@invocation_output("string_pos_neg_output")
class StringPosNegOutput(BaseInvocationOutput):

View File

@@ -8,7 +8,7 @@ from diffusers.schedulers.scheduling_utils import SchedulerMixin
from pydantic import field_validator
from invokeai.app.invocations.baseinvocation import BaseInvocation, Classification, invocation
from invokeai.app.invocations.constants import LATENT_SCALE_FACTOR
from invokeai.app.invocations.constants import LATENT_SCALE_FACTOR, SCHEDULER_NAME_VALUES
from invokeai.app.invocations.controlnet_image_processors import ControlField
from invokeai.app.invocations.denoise_latents import DenoiseLatentsInvocation, get_scheduler
from invokeai.app.invocations.fields import (
@@ -29,7 +29,6 @@ from invokeai.backend.stable_diffusion.multi_diffusion_pipeline import (
MultiDiffusionPipeline,
MultiDiffusionRegionConditioning,
)
from invokeai.backend.stable_diffusion.schedulers.schedulers import SCHEDULER_NAME_VALUES
from invokeai.backend.tiles.tiles import (
calc_tiles_min_overlap,
)

View File

@@ -6,13 +6,15 @@ import numpy as np
from PIL import Image
from pydantic import ConfigDict
from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
from invokeai.app.invocations.fields import ImageField, InputField, WithBoard, WithMetadata
from invokeai.app.invocations.fields import ImageField
from invokeai.app.invocations.primitives import ImageOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.image_util.basicsr.rrdbnet_arch import RRDBNet
from invokeai.backend.image_util.realesrgan.realesrgan import RealESRGAN
from .baseinvocation import BaseInvocation, invocation
from .fields import InputField, WithBoard, WithMetadata
# TODO: Populate this from disk?
# TODO: Use model manager to load?
ESRGAN_MODELS = Literal[

View File

@@ -2,11 +2,12 @@ import sqlite3
import threading
from typing import Optional, cast
from invokeai.app.services.board_image_records.board_image_records_base import BoardImageRecordStorageBase
from invokeai.app.services.image_records.image_records_common import ImageRecord, deserialize_image_record
from invokeai.app.services.shared.pagination import OffsetPaginatedResults
from invokeai.app.services.shared.sqlite.sqlite_database import SqliteDatabase
from .board_image_records_base import BoardImageRecordStorageBase
class SqliteBoardImageRecordStorage(BoardImageRecordStorageBase):
_conn: sqlite3.Connection

View File

@@ -1,8 +1,9 @@
from typing import Optional
from invokeai.app.services.board_images.board_images_base import BoardImagesServiceABC
from invokeai.app.services.invoker import Invoker
from .board_images_base import BoardImagesServiceABC
class BoardImagesService(BoardImagesServiceABC):
__invoker: Invoker

View File

@@ -1,8 +1,9 @@
from abc import ABC, abstractmethod
from invokeai.app.services.board_records.board_records_common import BoardChanges, BoardRecord, UncategorizedImageCounts
from invokeai.app.services.shared.pagination import OffsetPaginatedResults
from .board_records_common import BoardChanges, BoardRecord
class BoardRecordStorageBase(ABC):
"""Low-level service responsible for interfacing with the board record store."""
@@ -39,17 +40,16 @@ class BoardRecordStorageBase(ABC):
@abstractmethod
def get_many(
self, offset: int = 0, limit: int = 10, include_archived: bool = False
self,
offset: int = 0,
limit: int = 10,
) -> OffsetPaginatedResults[BoardRecord]:
"""Gets many board records."""
pass
@abstractmethod
def get_all(self, include_archived: bool = False) -> list[BoardRecord]:
def get_all(
self,
) -> list[BoardRecord]:
"""Gets all board records."""
pass
@abstractmethod
def get_uncategorized_image_counts(self) -> UncategorizedImageCounts:
"""Gets count of images and assets for uncategorized images (images with no board assocation)."""
pass

View File

@@ -1,5 +1,5 @@
from datetime import datetime
from typing import Any, Optional, Union
from typing import Optional, Union
from pydantic import BaseModel, Field
@@ -22,29 +22,19 @@ class BoardRecord(BaseModelExcludeNull):
"""The updated timestamp of the image."""
cover_image_name: Optional[str] = Field(default=None, description="The name of the cover image of the board.")
"""The name of the cover image of the board."""
archived: bool = Field(description="Whether or not the board is archived.")
"""Whether or not the board is archived."""
is_private: Optional[bool] = Field(default=None, description="Whether the board is private.")
"""Whether the board is private."""
image_count: int = Field(description="The number of images in the board.")
asset_count: int = Field(description="The number of assets in the board.")
def deserialize_board_record(board_dict: dict[str, Any]) -> BoardRecord:
def deserialize_board_record(board_dict: dict) -> BoardRecord:
"""Deserializes a board record."""
# Retrieve all the values, setting "reasonable" defaults if they are not present.
board_id = board_dict.get("board_id", "unknown")
board_name = board_dict.get("board_name", "unknown")
cover_image_name = board_dict.get("cover_image_name", None)
cover_image_name = board_dict.get("cover_image_name", "unknown")
created_at = board_dict.get("created_at", get_iso_timestamp())
updated_at = board_dict.get("updated_at", get_iso_timestamp())
deleted_at = board_dict.get("deleted_at", get_iso_timestamp())
archived = board_dict.get("archived", False)
is_private = board_dict.get("is_private", False)
image_count = board_dict.get("image_count", 0)
asset_count = board_dict.get("asset_count", 0)
return BoardRecord(
board_id=board_id,
@@ -53,40 +43,30 @@ def deserialize_board_record(board_dict: dict[str, Any]) -> BoardRecord:
created_at=created_at,
updated_at=updated_at,
deleted_at=deleted_at,
archived=archived,
is_private=is_private,
image_count=image_count,
asset_count=asset_count,
)
class BoardChanges(BaseModel, extra="forbid"):
board_name: Optional[str] = Field(default=None, description="The board's new name.")
cover_image_name: Optional[str] = Field(default=None, description="The name of the board's new cover image.")
archived: Optional[bool] = Field(default=None, description="Whether or not the board is archived")
class BoardRecordNotFoundException(Exception):
"""Raised when an board record is not found."""
def __init__(self, message: str = "Board record not found"):
def __init__(self, message="Board record not found"):
super().__init__(message)
class BoardRecordSaveException(Exception):
"""Raised when an board record cannot be saved."""
def __init__(self, message: str = "Board record not saved"):
def __init__(self, message="Board record not saved"):
super().__init__(message)
class BoardRecordDeleteException(Exception):
"""Raised when an board record cannot be deleted."""
def __init__(self, message: str = "Board record not deleted"):
def __init__(self, message="Board record not deleted"):
super().__init__(message)
class UncategorizedImageCounts(BaseModel):
image_count: int = Field(description="The number of uncategorized images.")
asset_count: int = Field(description="The number of uncategorized assets.")
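The defaulting pattern in `deserialize_board_record` above can be sketched as a standalone function; `get_iso_timestamp` and the plain-dict return type below are simplified stand-ins for the real implementation, which builds a `BoardRecord`:

```python
from datetime import datetime, timezone

def get_iso_timestamp() -> str:
    # Stand-in for invokeai.app.util.misc.get_iso_timestamp
    return datetime.now(timezone.utc).isoformat()

def deserialize_board_record_sketch(board_dict: dict) -> dict:
    # Mirrors the defaulting above: each field falls back to a
    # "reasonable" default when the key is absent from the row.
    return {
        "board_id": board_dict.get("board_id", "unknown"),
        "board_name": board_dict.get("board_name", "unknown"),
        "cover_image_name": board_dict.get("cover_image_name", None),
        "created_at": board_dict.get("created_at", get_iso_timestamp()),
        "archived": board_dict.get("archived", False),
        "is_private": board_dict.get("is_private", False),
        "image_count": board_dict.get("image_count", 0),
        "asset_count": board_dict.get("asset_count", 0),
    }

record = deserialize_board_record_sketch({"board_id": "abc123"})
```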


@@ -1,116 +1,20 @@
import sqlite3
import threading
from dataclasses import dataclass
from typing import Union, cast
from invokeai.app.services.board_records.board_records_base import BoardRecordStorageBase
from invokeai.app.services.board_records.board_records_common import (
from invokeai.app.services.shared.pagination import OffsetPaginatedResults
from invokeai.app.services.shared.sqlite.sqlite_database import SqliteDatabase
from invokeai.app.util.misc import uuid_string
from .board_records_base import BoardRecordStorageBase
from .board_records_common import (
BoardChanges,
BoardRecord,
BoardRecordDeleteException,
BoardRecordNotFoundException,
BoardRecordSaveException,
UncategorizedImageCounts,
deserialize_board_record,
)
from invokeai.app.services.shared.pagination import OffsetPaginatedResults
from invokeai.app.services.shared.sqlite.sqlite_database import SqliteDatabase
from invokeai.app.util.misc import uuid_string
BASE_BOARD_RECORD_QUERY = """
-- This query retrieves board records, joining with the board_images and images tables to get image counts and cover image names.
-- It is not a complete query, as it is missing a GROUP BY or WHERE clause (and is unterminated).
SELECT b.board_id,
b.board_name,
b.created_at,
b.updated_at,
b.archived,
-- Count the number of images in the board, alias image_count
COUNT(
CASE
WHEN i.image_category in ('general') -- "Images" are images in the 'general' category
AND i.is_intermediate = 0 THEN 1 -- Intermediates are not counted
END
) AS image_count,
-- Count the number of assets in the board, alias asset_count
COUNT(
CASE
WHEN i.image_category in ('control', 'mask', 'user', 'other') -- "Assets" are images in any of the other categories ('control', 'mask', 'user', 'other')
AND i.is_intermediate = 0 THEN 1 -- Intermediates are not counted
END
) AS asset_count,
-- Get the name of the most recent image in the board, alias cover_image_name
(
SELECT bi.image_name
FROM board_images bi
JOIN images i ON bi.image_name = i.image_name
WHERE bi.board_id = b.board_id
AND i.is_intermediate = 0 -- Intermediates cannot be cover images
ORDER BY i.created_at DESC -- Sort by created_at to get the most recent image
LIMIT 1
) AS cover_image_name
FROM boards b
LEFT JOIN board_images bi ON b.board_id = bi.board_id
LEFT JOIN images i ON bi.image_name = i.image_name
"""
@dataclass
class PaginatedBoardRecordsQueries:
main_query: str
total_count_query: str
def get_paginated_list_board_records_queries(include_archived: bool) -> PaginatedBoardRecordsQueries:
"""Gets a query to retrieve a paginated list of board records."""
archived_condition = "WHERE b.archived = 0" if not include_archived else ""
# The GROUP BY must be added _after_ the WHERE clause!
main_query = f"""
{BASE_BOARD_RECORD_QUERY}
{archived_condition}
GROUP BY b.board_id,
b.board_name,
b.created_at,
b.updated_at
ORDER BY b.created_at DESC
LIMIT ? OFFSET ?;
"""
total_count_query = f"""
SELECT COUNT(*)
FROM boards b
{archived_condition};
"""
return PaginatedBoardRecordsQueries(main_query=main_query, total_count_query=total_count_query)
def get_list_all_board_records_query(include_archived: bool) -> str:
"""Gets a query to retrieve all board records."""
archived_condition = "WHERE b.archived = 0" if not include_archived else ""
# The GROUP BY must be added _after_ the WHERE clause!
return f"""
{BASE_BOARD_RECORD_QUERY}
{archived_condition}
GROUP BY b.board_id,
b.board_name,
b.created_at,
b.updated_at
ORDER BY b.created_at DESC;
"""
def get_board_record_query() -> str:
"""Gets a query to retrieve a board record."""
return f"""
{BASE_BOARD_RECORD_QUERY}
WHERE b.board_id = ?;
"""
class SqliteBoardRecordStorage(BoardRecordStorageBase):
@@ -173,7 +77,11 @@ class SqliteBoardRecordStorage(BoardRecordStorageBase):
try:
self._lock.acquire()
self._cursor.execute(
get_board_record_query(),
"""--sql
SELECT *
FROM boards
WHERE board_id = ?;
""",
(board_id,),
)
@@ -185,7 +93,7 @@ class SqliteBoardRecordStorage(BoardRecordStorageBase):
self._lock.release()
if result is None:
raise BoardRecordNotFoundException
return deserialize_board_record(dict(result))
return BoardRecord(**dict(result))
def update(
self,
@@ -217,17 +125,6 @@ class SqliteBoardRecordStorage(BoardRecordStorageBase):
(changes.cover_image_name, board_id),
)
# Change the archived status of a board
if changes.archived is not None:
self._cursor.execute(
"""--sql
UPDATE boards
SET archived = ?
WHERE board_id = ?;
""",
(changes.archived, board_id),
)
self._conn.commit()
except sqlite3.Error as e:
self._conn.rollback()
@@ -237,22 +134,36 @@ class SqliteBoardRecordStorage(BoardRecordStorageBase):
return self.get(board_id)
def get_many(
self, offset: int = 0, limit: int = 10, include_archived: bool = False
self,
offset: int = 0,
limit: int = 10,
) -> OffsetPaginatedResults[BoardRecord]:
try:
self._lock.acquire()
queries = get_paginated_list_board_records_queries(include_archived=include_archived)
# Get all the boards
self._cursor.execute(
queries.main_query,
"""--sql
SELECT *
FROM boards
ORDER BY created_at DESC
LIMIT ? OFFSET ?;
""",
(limit, offset),
)
result = cast(list[sqlite3.Row], self._cursor.fetchall())
boards = [deserialize_board_record(dict(r)) for r in result]
self._cursor.execute(queries.total_count_query)
# Get the total number of boards
self._cursor.execute(
"""--sql
SELECT COUNT(*)
FROM boards
WHERE 1=1;
"""
)
count = cast(int, self._cursor.fetchone()[0])
return OffsetPaginatedResults[BoardRecord](items=boards, offset=offset, limit=limit, total=count)
@@ -263,12 +174,24 @@ class SqliteBoardRecordStorage(BoardRecordStorageBase):
finally:
self._lock.release()
def get_all(self, include_archived: bool = False) -> list[BoardRecord]:
def get_all(
self,
) -> list[BoardRecord]:
try:
self._lock.acquire()
self._cursor.execute(get_list_all_board_records_query(include_archived=include_archived))
# Get all the boards
self._cursor.execute(
"""--sql
SELECT *
FROM boards
ORDER BY created_at DESC
"""
)
result = cast(list[sqlite3.Row], self._cursor.fetchall())
boards = [deserialize_board_record(dict(r)) for r in result]
return boards
except sqlite3.Error as e:
@@ -276,28 +199,3 @@ class SqliteBoardRecordStorage(BoardRecordStorageBase):
raise e
finally:
self._lock.release()
def get_uncategorized_image_counts(self) -> UncategorizedImageCounts:
try:
self._lock.acquire()
query = """
-- Get the count of uncategorized images and assets.
SELECT
CASE
WHEN i.image_category = 'general' THEN 'image_count' -- "Images" are images in the 'general' category
ELSE 'asset_count' -- "Assets" are images in any of the other categories ('control', 'mask', 'user', 'other')
END AS category_type,
COUNT(*) AS unassigned_count
FROM images i
LEFT JOIN board_images bi ON i.image_name = bi.image_name
WHERE bi.board_id IS NULL -- Uncategorized images have no board association
AND i.is_intermediate = 0 -- Omit intermediates from the counts
GROUP BY category_type; -- Group by category_type alias, as derived from the image_category column earlier
"""
self._cursor.execute(query)
results = self._cursor.fetchall()
image_count = dict(results)["image_count"]
asset_count = dict(results)["asset_count"]
return UncategorizedImageCounts(image_count=image_count, asset_count=asset_count)
finally:
self._lock.release()
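The `CASE`-based counting used throughout these queries can be exercised against an in-memory SQLite database; the schema below is a reduced stand-in for the real `images` table:

```python
import sqlite3

# A single pass over `images` yields image vs asset counts:
# COUNT(CASE ... THEN 1 END) only counts rows where the CASE is non-NULL.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE images (
        image_name TEXT PRIMARY KEY,
        image_category TEXT,
        is_intermediate INTEGER
    );
    INSERT INTO images VALUES
        ('a.png', 'general', 0),
        ('b.png', 'mask', 0),
        ('c.png', 'general', 1);  -- intermediate, not counted
""")
row = conn.execute("""
    SELECT
        COUNT(CASE WHEN image_category = 'general' AND is_intermediate = 0 THEN 1 END),
        COUNT(CASE WHEN image_category IN ('control', 'mask', 'user', 'other') AND is_intermediate = 0 THEN 1 END)
    FROM images;
""").fetchone()
```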


@@ -1,9 +1,10 @@
from abc import ABC, abstractmethod
from invokeai.app.services.board_records.board_records_common import BoardChanges
from invokeai.app.services.boards.boards_common import BoardDTO
from invokeai.app.services.shared.pagination import OffsetPaginatedResults
from .boards_common import BoardDTO
class BoardServiceABC(ABC):
"""High-level service for board management."""
@@ -43,12 +44,16 @@ class BoardServiceABC(ABC):
@abstractmethod
def get_many(
self, offset: int = 0, limit: int = 10, include_archived: bool = False
self,
offset: int = 0,
limit: int = 10,
) -> OffsetPaginatedResults[BoardDTO]:
"""Gets many boards."""
pass
@abstractmethod
def get_all(self, include_archived: bool = False) -> list[BoardDTO]:
def get_all(
self,
) -> list[BoardDTO]:
"""Gets all boards."""
pass


@@ -1,8 +1,23 @@
from invokeai.app.services.board_records.board_records_common import BoardRecord
from typing import Optional
from pydantic import Field
from ..board_records.board_records_common import BoardRecord
# TODO(psyche): BoardDTO is now identical to BoardRecord. We should consider removing it.
class BoardDTO(BoardRecord):
"""Deserialized board record."""
"""Deserialized board record with cover image URL and image count."""
pass
cover_image_name: Optional[str] = Field(description="The name of the board's cover image.")
"""The URL of the thumbnail of the most recent image in the board."""
image_count: int = Field(description="The number of images in the board.")
"""The number of images in the board."""
def board_record_to_dto(board_record: BoardRecord, cover_image_name: Optional[str], image_count: int) -> BoardDTO:
"""Converts a board record to a board DTO."""
return BoardDTO(
**board_record.model_dump(exclude={"cover_image_name"}),
cover_image_name=cover_image_name,
image_count=image_count,
)


@@ -1,9 +1,11 @@
from invokeai.app.services.board_records.board_records_common import BoardChanges
from invokeai.app.services.boards.boards_base import BoardServiceABC
from invokeai.app.services.boards.boards_common import BoardDTO
from invokeai.app.services.invoker import Invoker
from invokeai.app.services.shared.pagination import OffsetPaginatedResults
from .boards_base import BoardServiceABC
from .boards_common import board_record_to_dto
class BoardService(BoardServiceABC):
__invoker: Invoker
@@ -16,11 +18,17 @@ class BoardService(BoardServiceABC):
board_name: str,
) -> BoardDTO:
board_record = self.__invoker.services.board_records.save(board_name)
return BoardDTO.model_validate(board_record.model_dump())
return board_record_to_dto(board_record, None, 0)
def get_dto(self, board_id: str) -> BoardDTO:
board_record = self.__invoker.services.board_records.get(board_id)
return BoardDTO.model_validate(board_record.model_dump())
cover_image = self.__invoker.services.image_records.get_most_recent_image_for_board(board_record.board_id)
if cover_image:
cover_image_name = cover_image.image_name
else:
cover_image_name = None
image_count = self.__invoker.services.board_image_records.get_image_count_for_board(board_id)
return board_record_to_dto(board_record, cover_image_name, image_count)
def update(
self,
@@ -28,19 +36,44 @@ class BoardService(BoardServiceABC):
changes: BoardChanges,
) -> BoardDTO:
board_record = self.__invoker.services.board_records.update(board_id, changes)
return BoardDTO.model_validate(board_record.model_dump())
cover_image = self.__invoker.services.image_records.get_most_recent_image_for_board(board_record.board_id)
if cover_image:
cover_image_name = cover_image.image_name
else:
cover_image_name = None
image_count = self.__invoker.services.board_image_records.get_image_count_for_board(board_id)
return board_record_to_dto(board_record, cover_image_name, image_count)
def delete(self, board_id: str) -> None:
self.__invoker.services.board_records.delete(board_id)
def get_many(
self, offset: int = 0, limit: int = 10, include_archived: bool = False
) -> OffsetPaginatedResults[BoardDTO]:
board_records = self.__invoker.services.board_records.get_many(offset, limit, include_archived)
board_dtos = [BoardDTO.model_validate(r.model_dump()) for r in board_records.items]
def get_many(self, offset: int = 0, limit: int = 10) -> OffsetPaginatedResults[BoardDTO]:
board_records = self.__invoker.services.board_records.get_many(offset, limit)
board_dtos = []
for r in board_records.items:
cover_image = self.__invoker.services.image_records.get_most_recent_image_for_board(r.board_id)
if cover_image:
cover_image_name = cover_image.image_name
else:
cover_image_name = None
image_count = self.__invoker.services.board_image_records.get_image_count_for_board(r.board_id)
board_dtos.append(board_record_to_dto(r, cover_image_name, image_count))
return OffsetPaginatedResults[BoardDTO](items=board_dtos, offset=offset, limit=limit, total=len(board_dtos))
def get_all(self, include_archived: bool = False) -> list[BoardDTO]:
board_records = self.__invoker.services.board_records.get_all(include_archived)
board_dtos = [BoardDTO.model_validate(r.model_dump()) for r in board_records]
def get_all(self) -> list[BoardDTO]:
board_records = self.__invoker.services.board_records.get_all()
board_dtos = []
for r in board_records:
cover_image = self.__invoker.services.image_records.get_most_recent_image_for_board(r.board_id)
if cover_image:
cover_image_name = cover_image.image_name
else:
cover_image_name = None
image_count = self.__invoker.services.board_image_records.get_image_count_for_board(r.board_id)
board_dtos.append(board_record_to_dto(r, cover_image_name, image_count))
return board_dtos
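The per-record assembly loop in `get_many`/`get_all` above follows a fetch-per-board pattern (cover image, then image count). A hedged sketch with stand-in callables in place of the real services:

```python
# Hypothetical sketch: records are plain dicts here, and the two callables
# stand in for image_records.get_most_recent_image_for_board and
# board_image_records.get_image_count_for_board.
def assemble_dtos(records, get_most_recent_image, get_image_count):
    dtos = []
    for r in records:
        cover = get_most_recent_image(r["board_id"])
        cover_name = cover["image_name"] if cover else None
        dtos.append({**r,
                     "cover_image_name": cover_name,
                     "image_count": get_image_count(r["board_id"])})
    return dtos

dtos = assemble_dtos(
    [{"board_id": "b1"}],
    lambda _id: {"image_name": "img.png"},
    lambda _id: 3,
)
```

Note that this issues two extra lookups per board, which is the cost the old code paid on every page of results.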


@@ -4,7 +4,6 @@ from typing import Optional, Union
from zipfile import ZipFile
from invokeai.app.services.board_records.board_records_common import BoardRecordNotFoundException
from invokeai.app.services.bulk_download.bulk_download_base import BulkDownloadBase
from invokeai.app.services.bulk_download.bulk_download_common import (
DEFAULT_BULK_DOWNLOAD_ID,
BulkDownloadException,
@@ -16,6 +15,8 @@ from invokeai.app.services.images.images_common import ImageDTO
from invokeai.app.services.invoker import Invoker
from invokeai.app.util.misc import uuid_string
from .bulk_download_base import BulkDownloadBase
class BulkDownloadService(BulkDownloadBase):
def start(self, invoker: Invoker) -> None:


@@ -1,6 +1,7 @@
"""Init file for InvokeAI configure package."""
from invokeai.app.services.config.config_common import PagingArgumentParser
from invokeai.app.services.config.config_default import InvokeAIAppConfig, get_config
from .config_default import InvokeAIAppConfig, get_config
__all__ = ["InvokeAIAppConfig", "get_config", "PagingArgumentParser"]


@@ -3,7 +3,6 @@
from __future__ import annotations
import copy
import locale
import os
import re
@@ -26,13 +25,14 @@ DB_FILE = Path("invokeai.db")
LEGACY_INIT_FILE = Path("invokeai.init")
DEFAULT_RAM_CACHE = 10.0
DEFAULT_VRAM_CACHE = 0.25
DEFAULT_CONVERT_CACHE = 20.0
DEVICE = Literal["auto", "cpu", "cuda", "cuda:1", "mps"]
PRECISION = Literal["auto", "float16", "bfloat16", "float32"]
ATTENTION_TYPE = Literal["auto", "normal", "xformers", "sliced", "torch-sdp"]
ATTENTION_SLICE_SIZE = Literal["auto", "balanced", "max", 1, 2, 3, 4, 5, 6, 7, 8]
LOG_FORMAT = Literal["plain", "color", "syslog", "legacy"]
LOG_LEVEL = Literal["debug", "info", "warning", "error", "critical"]
CONFIG_SCHEMA_VERSION = "4.0.2"
CONFIG_SCHEMA_VERSION = "4.0.1"
def get_default_ram_cache_size() -> float:
@@ -85,7 +85,7 @@ class InvokeAIAppConfig(BaseSettings):
log_tokenization: Enable logging of parsed prompt tokens.
patchmatch: Enable patchmatch inpaint code.
models_dir: Path to the models directory.
convert_cache_dir: Path to the converted models cache directory (DEPRECATED, but do not delete because it is needed for migration from previous versions).
convert_cache_dir: Path to the converted models cache directory. When loading a non-diffusers model, it will be converted and stored on disk at this location.
download_cache_dir: Path to the directory that contains dynamically downloaded models.
legacy_conf_dir: Path to directory of legacy checkpoint config files.
db_dir: Path to InvokeAI databases directory.
@@ -102,6 +102,7 @@ class InvokeAIAppConfig(BaseSettings):
profiles_dir: Path to profiles output directory.
ram: Maximum memory amount used by memory model cache for rapid switching (GB).
vram: Amount of VRAM reserved for model storage (GB).
convert_cache: Maximum size of on-disk converted models cache (GB).
lazy_offload: Keep models in VRAM until their space is needed.
log_memory_usage: If True, a memory snapshot will be captured before and after every model cache operation, and the result will be logged (at debug level). There is a time cost to capturing the memory snapshots, so it is recommended to only enable this feature if you are actively inspecting the model cache's behaviour.
device: Preferred execution device. `auto` will choose the device depending on the hardware platform and the installed torch capabilities.<br>Valid values: `auto`, `cpu`, `cuda`, `cuda:1`, `mps`
@@ -147,7 +148,7 @@ class InvokeAIAppConfig(BaseSettings):
# PATHS
models_dir: Path = Field(default=Path("models"), description="Path to the models directory.")
convert_cache_dir: Path = Field(default=Path("models/.convert_cache"), description="Path to the converted models cache directory (DEPRECATED, but do not delete because it is needed for migration from previous versions).")
convert_cache_dir: Path = Field(default=Path("models/.convert_cache"), description="Path to the converted models cache directory. When loading a non-diffusers model, it will be converted and stored on disk at this location.")
download_cache_dir: Path = Field(default=Path("models/.download_cache"), description="Path to the directory that contains dynamically downloaded models.")
legacy_conf_dir: Path = Field(default=Path("configs"), description="Path to directory of legacy checkpoint config files.")
db_dir: Path = Field(default=Path("databases"), description="Path to InvokeAI databases directory.")
@@ -169,8 +170,9 @@ class InvokeAIAppConfig(BaseSettings):
profiles_dir: Path = Field(default=Path("profiles"), description="Path to profiles output directory.")
# CACHE
ram: float = Field(default_factory=get_default_ram_cache_size, gt=0, description="Maximum memory amount used by memory model cache for rapid switching (GB).")
vram: float = Field(default=DEFAULT_VRAM_CACHE, ge=0, description="Amount of VRAM reserved for model storage (GB).")
ram: float = Field(default_factory=get_default_ram_cache_size, gt=0, description="Maximum memory amount used by memory model cache for rapid switching (GB).")
vram: float = Field(default=DEFAULT_VRAM_CACHE, ge=0, description="Amount of VRAM reserved for model storage (GB).")
convert_cache: float = Field(default=DEFAULT_CONVERT_CACHE, ge=0, description="Maximum size of on-disk converted models cache (GB).")
lazy_offload: bool = Field(default=True, description="Keep models in VRAM until their space is needed.")
log_memory_usage: bool = Field(default=False, description="If True, a memory snapshot will be captured before and after every model cache operation, and the result will be logged (at debug level). There is a time cost to capturing the memory snapshots, so it is recommended to only enable this feature if you are actively inspecting the model cache's behaviour.")
@@ -355,14 +357,14 @@ class DefaultInvokeAIAppConfig(InvokeAIAppConfig):
return (init_settings,)
def migrate_v3_config_dict(config_dict: dict[str, Any]) -> dict[str, Any]:
"""Migrate a v3 config dictionary to a v4.0.0.
def migrate_v3_config_dict(config_dict: dict[str, Any]) -> InvokeAIAppConfig:
"""Migrate a v3 config dictionary to a current config object.
Args:
config_dict: A dictionary of settings from a v3 config file.
Returns:
An `InvokeAIAppConfig` config dict.
An instance of `InvokeAIAppConfig` with the migrated settings.
"""
parsed_config_dict: dict[str, Any] = {}
@@ -396,41 +398,32 @@ def migrate_v3_config_dict(config_dict: dict[str, Any]) -> dict[str, Any]:
elif k in InvokeAIAppConfig.model_fields:
# skip unknown fields
parsed_config_dict[k] = v
parsed_config_dict["schema_version"] = "4.0.0"
return parsed_config_dict
# When migrating the config file, we should not include currently-set environment variables.
config = DefaultInvokeAIAppConfig.model_validate(parsed_config_dict)
return config
def migrate_v4_0_0_to_4_0_1_config_dict(config_dict: dict[str, Any]) -> dict[str, Any]:
"""Migrate v4.0.0 config dictionary to a v4.0.1 config dictionary
def migrate_v4_0_0_config_dict(config_dict: dict[str, Any]) -> InvokeAIAppConfig:
"""Migrate v4.0.0 config dictionary to a current config object.
Args:
config_dict: A dictionary of settings from a v4.0.0 config file.
Returns:
A config dict with the settings migrated to v4.0.1.
An instance of `InvokeAIAppConfig` with the migrated settings.
"""
parsed_config_dict: dict[str, Any] = copy.deepcopy(config_dict)
# precision "autocast" was replaced by "auto" in v4.0.1
if parsed_config_dict.get("precision") == "autocast":
parsed_config_dict["precision"] = "auto"
parsed_config_dict["schema_version"] = "4.0.1"
return parsed_config_dict
def migrate_v4_0_1_to_4_0_2_config_dict(config_dict: dict[str, Any]) -> dict[str, Any]:
"""Migrate v4.0.1 config dictionary to a v4.0.2 config dictionary.
Args:
config_dict: A dictionary of settings from a v4.0.1 config file.
Returns:
A config dict with the settings migrated to v4.0.2.
"""
parsed_config_dict: dict[str, Any] = copy.deepcopy(config_dict)
# convert_cache was removed in 4.0.2
parsed_config_dict.pop("convert_cache", None)
parsed_config_dict["schema_version"] = "4.0.2"
return parsed_config_dict
parsed_config_dict: dict[str, Any] = {}
for k, v in config_dict.items():
# autocast was removed from precision in v4.0.1
if k == "precision" and v == "autocast":
parsed_config_dict["precision"] = "auto"
else:
parsed_config_dict[k] = v
if k == "schema_version":
parsed_config_dict[k] = CONFIG_SCHEMA_VERSION
config = DefaultInvokeAIAppConfig.model_validate(parsed_config_dict)
return config
def load_and_migrate_config(config_path: Path) -> InvokeAIAppConfig:
@@ -444,31 +437,27 @@ def load_and_migrate_config(config_path: Path) -> InvokeAIAppConfig:
"""
assert config_path.suffix == ".yaml"
with open(config_path, "rt", encoding=locale.getpreferredencoding()) as file:
loaded_config_dict: dict[str, Any] = yaml.safe_load(file)
loaded_config_dict = yaml.safe_load(file)
assert isinstance(loaded_config_dict, dict)
migrated = False
if "InvokeAI" in loaded_config_dict:
migrated = True
loaded_config_dict = migrate_v3_config_dict(loaded_config_dict) # pyright: ignore [reportUnknownArgumentType]
if loaded_config_dict["schema_version"] == "4.0.0":
migrated = True
loaded_config_dict = migrate_v4_0_0_to_4_0_1_config_dict(loaded_config_dict)
if loaded_config_dict["schema_version"] == "4.0.1":
migrated = True
loaded_config_dict = migrate_v4_0_1_to_4_0_2_config_dict(loaded_config_dict)
if migrated:
# This is a v3 config file, attempt to migrate it
shutil.copy(config_path, config_path.with_suffix(".yaml.bak"))
try:
# load and write without environment variables
migrated_config = DefaultInvokeAIAppConfig.model_validate(loaded_config_dict)
migrated_config.write_file(config_path)
# loaded_config_dict could be the wrong shape, but we will catch all exceptions below
migrated_config = migrate_v3_config_dict(loaded_config_dict) # pyright: ignore [reportUnknownArgumentType]
except Exception as e:
shutil.copy(config_path.with_suffix(".yaml.bak"), config_path)
raise RuntimeError(f"Failed to load and migrate v3 config file {config_path}: {e}") from e
migrated_config.write_file(config_path)
return migrated_config
if loaded_config_dict["schema_version"] == "4.0.0":
loaded_config_dict = migrate_v4_0_0_config_dict(loaded_config_dict)
loaded_config_dict.write_file(config_path)
# Attempt to load as a v4 config file
try:
# Meta is not included in the model fields, so we need to validate it separately
config = InvokeAIAppConfig.model_validate(loaded_config_dict)
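The migration flow above chains dict-to-dict steps, each bumping `schema_version` and touching only what changed at that version. A self-contained sketch of the pattern (field names mirror the diff, but this is not the real implementation):

```python
import copy
from typing import Any

def migrate_4_0_0_to_4_0_1(d: dict[str, Any]) -> dict[str, Any]:
    out = copy.deepcopy(d)
    # precision "autocast" was replaced by "auto" in v4.0.1
    if out.get("precision") == "autocast":
        out["precision"] = "auto"
    out["schema_version"] = "4.0.1"
    return out

def migrate_4_0_1_to_4_0_2(d: dict[str, Any]) -> dict[str, Any]:
    out = copy.deepcopy(d)
    # convert_cache was removed in 4.0.2
    out.pop("convert_cache", None)
    out["schema_version"] = "4.0.2"
    return out

cfg = {"schema_version": "4.0.0", "precision": "autocast", "convert_cache": 20.0}
# Each step only fires when the dict is at the matching version, so a
# config from any older schema walks the whole chain.
if cfg["schema_version"] == "4.0.0":
    cfg = migrate_4_0_0_to_4_0_1(cfg)
if cfg["schema_version"] == "4.0.1":
    cfg = migrate_4_0_1_to_4_0_2(cfg)
```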


@@ -1,13 +1,13 @@
"""Init file for download queue."""
from invokeai.app.services.download.download_base import (
from .download_base import (
DownloadJob,
DownloadJobStatus,
DownloadQueueServiceBase,
MultiFileDownloadJob,
UnknownJobIDException,
)
from invokeai.app.services.download.download_default import DownloadQueueService, TqdmProgress
from .download_default import DownloadQueueService, TqdmProgress
__all__ = [
"DownloadJob",


@@ -16,7 +16,12 @@ from requests import HTTPError
from tqdm import tqdm
from invokeai.app.services.config import InvokeAIAppConfig, get_config
from invokeai.app.services.download.download_base import (
from invokeai.app.services.events.events_base import EventServiceBase
from invokeai.app.util.misc import get_iso_timestamp
from invokeai.backend.model_manager.metadata import RemoteModelFile
from invokeai.backend.util.logging import InvokeAILogger
from .download_base import (
DownloadEventHandler,
DownloadExceptionHandler,
DownloadJob,
@@ -28,10 +33,6 @@ from invokeai.app.services.download.download_base import (
ServiceInactiveException,
UnknownJobIDException,
)
from invokeai.app.services.events.events_base import EventServiceBase
from invokeai.app.util.misc import get_iso_timestamp
from invokeai.backend.model_manager.metadata import RemoteModelFile
from invokeai.backend.util.logging import InvokeAILogger
# Maximum number of bytes to download during each call to requests.iter_content()
DOWNLOAD_CHUNK_SIZE = 100000
@@ -184,7 +185,7 @@ class DownloadQueueService(DownloadQueueServiceBase):
job = DownloadJob(
source=url,
dest=path,
access_token=access_token or self._lookup_access_token(url),
access_token=access_token,
)
mfdj.download_parts.add(job)
self._download_part2parent[job.source] = mfdj


@@ -6,11 +6,12 @@ from queue import Empty, Queue
from fastapi_events.dispatcher import dispatch
from invokeai.app.services.events.events_base import EventServiceBase
from invokeai.app.services.events.events_common import (
EventBase,
)
from .events_base import EventServiceBase
class FastAPIEventService(EventServiceBase):
def __init__(self, event_handler_id: int) -> None:


@@ -7,15 +7,12 @@ from PIL import Image, PngImagePlugin
from PIL.Image import Image as PILImageType
from send2trash import send2trash
from invokeai.app.services.image_files.image_files_base import ImageFileStorageBase
from invokeai.app.services.image_files.image_files_common import (
ImageFileDeleteException,
ImageFileNotFoundException,
ImageFileSaveException,
)
from invokeai.app.services.invoker import Invoker
from invokeai.app.util.thumbnails import get_thumbnail_name, make_thumbnail
from .image_files_base import ImageFileStorageBase
from .image_files_common import ImageFileDeleteException, ImageFileNotFoundException, ImageFileSaveException
class DiskImageFileStorage(ImageFileStorageBase):
"""Stores images on disk"""


@@ -3,14 +3,9 @@ from datetime import datetime
from typing import Optional
from invokeai.app.invocations.fields import MetadataField
from invokeai.app.services.image_records.image_records_common import (
ImageCategory,
ImageRecord,
ImageRecordChanges,
ResourceOrigin,
)
from invokeai.app.services.shared.pagination import OffsetPaginatedResults
from invokeai.app.services.shared.sqlite.sqlite_common import SQLiteDirection
from .image_records_common import ImageCategory, ImageRecord, ImageRecordChanges, ResourceOrigin
class ImageRecordStorageBase(ABC):
@@ -42,13 +37,10 @@ class ImageRecordStorageBase(ABC):
self,
offset: int = 0,
limit: int = 10,
starred_first: bool = True,
order_dir: SQLiteDirection = SQLiteDirection.Descending,
image_origin: Optional[ResourceOrigin] = None,
categories: Optional[list[ImageCategory]] = None,
is_intermediate: Optional[bool] = None,
board_id: Optional[str] = None,
search_term: Optional[str] = None,
) -> OffsetPaginatedResults[ImageRecord]:
"""Gets a page of image records."""
pass


@@ -4,8 +4,11 @@ from datetime import datetime
from typing import Optional, Union, cast
from invokeai.app.invocations.fields import MetadataField, MetadataFieldValidator
from invokeai.app.services.image_records.image_records_base import ImageRecordStorageBase
from invokeai.app.services.image_records.image_records_common import (
from invokeai.app.services.shared.pagination import OffsetPaginatedResults
from invokeai.app.services.shared.sqlite.sqlite_database import SqliteDatabase
from .image_records_base import ImageRecordStorageBase
from .image_records_common import (
IMAGE_DTO_COLS,
ImageCategory,
ImageRecord,
@@ -16,9 +19,6 @@ from invokeai.app.services.image_records.image_records_common import (
ResourceOrigin,
deserialize_image_record,
)
from invokeai.app.services.shared.pagination import OffsetPaginatedResults
from invokeai.app.services.shared.sqlite.sqlite_common import SQLiteDirection
from invokeai.app.services.shared.sqlite.sqlite_database import SqliteDatabase
class SqliteImageRecordStorage(ImageRecordStorageBase):
@@ -144,13 +144,10 @@ class SqliteImageRecordStorage(ImageRecordStorageBase):
self,
offset: int = 0,
limit: int = 10,
starred_first: bool = True,
order_dir: SQLiteDirection = SQLiteDirection.Descending,
image_origin: Optional[ResourceOrigin] = None,
categories: Optional[list[ImageCategory]] = None,
is_intermediate: Optional[bool] = None,
board_id: Optional[str] = None,
search_term: Optional[str] = None,
) -> OffsetPaginatedResults[ImageRecord]:
try:
self._lock.acquire()
@@ -211,21 +208,9 @@ class SqliteImageRecordStorage(ImageRecordStorageBase):
"""
query_params.append(board_id)
# Search term condition
if search_term:
query_conditions += """--sql
AND images.metadata LIKE ?
"""
query_params.append(f"%{search_term.lower()}%")
if starred_first:
query_pagination = f"""--sql
ORDER BY images.starred DESC, images.created_at {order_dir.value} LIMIT ? OFFSET ?
"""
else:
query_pagination = f"""--sql
ORDER BY images.created_at {order_dir.value} LIMIT ? OFFSET ?
"""
query_pagination = """--sql
ORDER BY images.starred DESC, images.created_at DESC LIMIT ? OFFSET ?
"""
# Final images query with pagination
images_query += query_conditions + query_pagination + ";"
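The hunk above replaces a fixed `ORDER BY images.starred DESC, images.created_at DESC` clause with one driven by the new `starred_first` and `order_dir` parameters. A minimal sketch of that clause-building logic, assuming a `SQLiteDirection` enum whose values are the SQL keywords (the enum's exact values are an assumption here):

```python
from enum import Enum

class SQLiteDirection(str, Enum):
    # Mirrors the SQLiteDirection enum referenced in the diff; values assumed.
    Ascending = "ASC"
    Descending = "DESC"

def build_pagination_clause(starred_first: bool, order_dir: SQLiteDirection) -> str:
    """Build the ORDER BY / LIMIT clause used by the paginated image query.

    Sketch of the logic in the hunk above: starred images are optionally
    floated to the top, then results are ordered by creation time in the
    requested direction.
    """
    if starred_first:
        return f"ORDER BY images.starred DESC, images.created_at {order_dir.value} LIMIT ? OFFSET ?"
    return f"ORDER BY images.created_at {order_dir.value} LIMIT ? OFFSET ?"
```

The `LIMIT ? OFFSET ?` placeholders are bound later with the caller's `limit` and `offset`, matching the parameterized style of the surrounding queries.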

View File

@@ -12,7 +12,6 @@ from invokeai.app.services.image_records.image_records_common import (
)
from invokeai.app.services.images.images_common import ImageDTO
from invokeai.app.services.shared.pagination import OffsetPaginatedResults
from invokeai.app.services.shared.sqlite.sqlite_common import SQLiteDirection
class ImageServiceABC(ABC):
@@ -117,13 +116,10 @@ class ImageServiceABC(ABC):
self,
offset: int = 0,
limit: int = 10,
starred_first: bool = True,
order_dir: SQLiteDirection = SQLiteDirection.Descending,
image_origin: Optional[ResourceOrigin] = None,
categories: Optional[list[ImageCategory]] = None,
is_intermediate: Optional[bool] = None,
board_id: Optional[str] = None,
search_term: Optional[str] = None,
) -> OffsetPaginatedResults[ImageDTO]:
"""Gets a paginated list of image DTOs."""
pass

View File

@@ -3,12 +3,15 @@ from typing import Optional
from PIL.Image import Image as PILImageType
from invokeai.app.invocations.fields import MetadataField
from invokeai.app.services.image_files.image_files_common import (
from invokeai.app.services.invoker import Invoker
from invokeai.app.services.shared.pagination import OffsetPaginatedResults
from ..image_files.image_files_common import (
ImageFileDeleteException,
ImageFileNotFoundException,
ImageFileSaveException,
)
from invokeai.app.services.image_records.image_records_common import (
from ..image_records.image_records_common import (
ImageCategory,
ImageRecord,
ImageRecordChanges,
@@ -19,11 +22,8 @@ from invokeai.app.services.image_records.image_records_common import (
InvalidOriginException,
ResourceOrigin,
)
from invokeai.app.services.images.images_base import ImageServiceABC
from invokeai.app.services.images.images_common import ImageDTO, image_record_to_dto
from invokeai.app.services.invoker import Invoker
from invokeai.app.services.shared.pagination import OffsetPaginatedResults
from invokeai.app.services.shared.sqlite.sqlite_common import SQLiteDirection
from .images_base import ImageServiceABC
from .images_common import ImageDTO, image_record_to_dto
class ImageService(ImageServiceABC):
@@ -73,12 +73,7 @@ class ImageService(ImageServiceABC):
session_id=session_id,
)
if board_id is not None:
try:
self.__invoker.services.board_image_records.add_image_to_board(
board_id=board_id, image_name=image_name
)
except Exception as e:
self.__invoker.services.logger.warn(f"Failed to add image to board {board_id}: {str(e)}")
self.__invoker.services.board_image_records.add_image_to_board(board_id=board_id, image_name=image_name)
self.__invoker.services.image_files.save(
image_name=image_name, image=image, metadata=metadata, workflow=workflow, graph=graph
)
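The hunk above wraps the board association in a try/except so that a failed `add_image_to_board` call no longer aborts image creation. A minimal sketch of that defensive pattern, with a stand-in logger and service object (the function name here is hypothetical):

```python
import logging

logger = logging.getLogger(__name__)

def add_image_to_board_safely(board_image_records, board_id: str, image_name: str) -> None:
    """Attempt the board association without letting a failure break image saving.

    Sketch of the pattern in the hunk above: any exception is logged as a
    warning and swallowed, so the caller can proceed to save the image file.
    """
    try:
        board_image_records.add_image_to_board(board_id=board_id, image_name=image_name)
    except Exception as e:
        logger.warning(f"Failed to add image to board {board_id}: {str(e)}")
```

The trade-off is that a broken board association becomes a warning in the logs rather than a hard error, which suits a non-essential side effect of image creation.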
@@ -207,25 +202,19 @@ class ImageService(ImageServiceABC):
self,
offset: int = 0,
limit: int = 10,
starred_first: bool = True,
order_dir: SQLiteDirection = SQLiteDirection.Descending,
image_origin: Optional[ResourceOrigin] = None,
categories: Optional[list[ImageCategory]] = None,
is_intermediate: Optional[bool] = None,
board_id: Optional[str] = None,
search_term: Optional[str] = None,
) -> OffsetPaginatedResults[ImageDTO]:
try:
results = self.__invoker.services.image_records.get_many(
offset,
limit,
starred_first,
order_dir,
image_origin,
categories,
is_intermediate,
board_id,
search_term,
)
image_dtos = [

View File

@@ -10,28 +10,29 @@ if TYPE_CHECKING:
import torch
from invokeai.app.services.board_image_records.board_image_records_base import BoardImageRecordStorageBase
from invokeai.app.services.board_images.board_images_base import BoardImagesServiceABC
from invokeai.app.services.board_records.board_records_base import BoardRecordStorageBase
from invokeai.app.services.boards.boards_base import BoardServiceABC
from invokeai.app.services.bulk_download.bulk_download_base import BulkDownloadBase
from invokeai.app.services.config import InvokeAIAppConfig
from invokeai.app.services.download import DownloadQueueServiceBase
from invokeai.app.services.events.events_base import EventServiceBase
from invokeai.app.services.image_files.image_files_base import ImageFileStorageBase
from invokeai.app.services.image_records.image_records_base import ImageRecordStorageBase
from invokeai.app.services.images.images_base import ImageServiceABC
from invokeai.app.services.invocation_cache.invocation_cache_base import InvocationCacheBase
from invokeai.app.services.invocation_stats.invocation_stats_base import InvocationStatsServiceBase
from invokeai.app.services.model_images.model_images_base import ModelImageFileStorageBase
from invokeai.app.services.model_manager.model_manager_base import ModelManagerServiceBase
from invokeai.app.services.names.names_base import NameServiceBase
from invokeai.app.services.session_processor.session_processor_base import SessionProcessorBase
from invokeai.app.services.session_queue.session_queue_base import SessionQueueBase
from invokeai.app.services.urls.urls_base import UrlServiceBase
from invokeai.app.services.workflow_records.workflow_records_base import WorkflowRecordsStorageBase
from invokeai.backend.stable_diffusion.diffusion.conditioning_data import ConditioningFieldData
from .board_image_records.board_image_records_base import BoardImageRecordStorageBase
from .board_images.board_images_base import BoardImagesServiceABC
from .board_records.board_records_base import BoardRecordStorageBase
from .boards.boards_base import BoardServiceABC
from .bulk_download.bulk_download_base import BulkDownloadBase
from .config import InvokeAIAppConfig
from .download import DownloadQueueServiceBase
from .events.events_base import EventServiceBase
from .image_files.image_files_base import ImageFileStorageBase
from .image_records.image_records_base import ImageRecordStorageBase
from .images.images_base import ImageServiceABC
from .invocation_cache.invocation_cache_base import InvocationCacheBase
from .invocation_stats.invocation_stats_base import InvocationStatsServiceBase
from .model_images.model_images_base import ModelImageFileStorageBase
from .model_manager.model_manager_base import ModelManagerServiceBase
from .names.names_base import NameServiceBase
from .session_processor.session_processor_base import SessionProcessorBase
from .session_queue.session_queue_base import SessionQueueBase
from .urls.urls_base import UrlServiceBase
from .workflow_records.workflow_records_base import WorkflowRecordsStorageBase
class InvocationServices:
"""Services that can be used by invocations"""

View File

@@ -9,8 +9,11 @@ import torch
import invokeai.backend.util.logging as logger
from invokeai.app.invocations.baseinvocation import BaseInvocation
from invokeai.app.services.invocation_stats.invocation_stats_base import InvocationStatsServiceBase
from invokeai.app.services.invocation_stats.invocation_stats_common import (
from invokeai.app.services.invoker import Invoker
from invokeai.backend.model_manager.load.model_cache import CacheStats
from .invocation_stats_base import InvocationStatsServiceBase
from .invocation_stats_common import (
GESStatsNotFoundError,
GraphExecutionStats,
GraphExecutionStatsSummary,
@@ -19,8 +22,6 @@ from invokeai.app.services.invocation_stats.invocation_stats_common import (
NodeExecutionStats,
NodeExecutionStatsSummary,
)
from invokeai.app.services.invoker import Invoker
from invokeai.backend.model_manager.load.model_cache import CacheStats
# Size of 1GB in bytes.
GB = 2**30

View File

@@ -1,7 +1,7 @@
# Copyright (c) 2022 Kyle Schouviller (https://github.com/kyle0654)
from invokeai.app.services.invocation_services import InvocationServices
from .invocation_services import InvocationServices
class Invoker:

View File

@@ -5,14 +5,15 @@ from PIL.Image import Image as PILImageType
from send2trash import send2trash
from invokeai.app.services.invoker import Invoker
from invokeai.app.services.model_images.model_images_base import ModelImageFileStorageBase
from invokeai.app.services.model_images.model_images_common import (
from invokeai.app.util.misc import uuid_string
from invokeai.app.util.thumbnails import make_thumbnail
from .model_images_base import ModelImageFileStorageBase
from .model_images_common import (
ModelImageFileDeleteException,
ModelImageFileNotFoundException,
ModelImageFileSaveException,
)
from invokeai.app.util.misc import uuid_string
from invokeai.app.util.thumbnails import make_thumbnail
class ModelImageFileStorageDisk(ModelImageFileStorageBase):

View File

@@ -1,7 +1,9 @@
"""Initialization file for model install service package."""
from invokeai.app.services.model_install.model_install_base import ModelInstallServiceBase
from invokeai.app.services.model_install.model_install_common import (
from .model_install_base import (
ModelInstallServiceBase,
)
from .model_install_common import (
HFModelSource,
InstallStatus,
LocalModelSource,
@@ -10,7 +12,7 @@ from invokeai.app.services.model_install.model_install_common import (
UnknownInstallJobException,
URLModelSource,
)
from invokeai.app.services.model_install.model_install_default import ModelInstallService
from .model_install_default import ModelInstallService
__all__ = [
"ModelInstallServiceBase",

View File

@@ -23,16 +23,6 @@ from invokeai.app.services.download import DownloadQueueServiceBase, MultiFileDo
from invokeai.app.services.events.events_base import EventServiceBase
from invokeai.app.services.invoker import Invoker
from invokeai.app.services.model_install.model_install_base import ModelInstallServiceBase
from invokeai.app.services.model_install.model_install_common import (
MODEL_SOURCE_TO_TYPE_MAP,
HFModelSource,
InstallStatus,
LocalModelSource,
ModelInstallJob,
ModelSource,
StringLikeSource,
URLModelSource,
)
from invokeai.app.services.model_records import DuplicateModelException, ModelRecordServiceBase
from invokeai.app.services.model_records.model_records_base import ModelRecordChanges
from invokeai.backend.model_manager.config import (
@@ -57,6 +47,17 @@ from invokeai.backend.util.catch_sigint import catch_sigint
from invokeai.backend.util.devices import TorchDevice
from invokeai.backend.util.util import slugify
from .model_install_common import (
MODEL_SOURCE_TO_TYPE_MAP,
HFModelSource,
InstallStatus,
LocalModelSource,
ModelInstallJob,
ModelSource,
StringLikeSource,
URLModelSource,
)
TMPDIR_PREFIX = "tmpinstall_"
@@ -847,7 +848,7 @@ class ModelInstallService(ModelInstallServiceBase):
with self._lock:
if install_job := self._download_cache.pop(download_job.id, None):
assert excp is not None
self._set_error(install_job, excp)
install_job.set_error(excp)
self._download_queue.cancel_job(download_job)
# Let other threads know that the number of downloads has changed

View File

@@ -1,6 +1,6 @@
"""Initialization file for model load service module."""
from invokeai.app.services.model_load.model_load_base import ModelLoadServiceBase
from invokeai.app.services.model_load.model_load_default import ModelLoadService
from .model_load_base import ModelLoadServiceBase
from .model_load_default import ModelLoadService
__all__ = ["ModelLoadServiceBase", "ModelLoadService"]

View File

@@ -7,6 +7,7 @@ from typing import Callable, Optional
from invokeai.backend.model_manager import AnyModel, AnyModelConfig, SubModelType
from invokeai.backend.model_manager.load import LoadedModel, LoadedModelWithoutConfig
from invokeai.backend.model_manager.load.convert_cache import ModelConvertCacheBase
from invokeai.backend.model_manager.load.model_cache.model_cache_base import ModelCacheBase
@@ -27,6 +28,11 @@ class ModelLoadServiceBase(ABC):
def ram_cache(self) -> ModelCacheBase[AnyModel]:
"""Return the RAM cache used by this loader."""
@property
@abstractmethod
def convert_cache(self) -> ModelConvertCacheBase:
"""Return the checkpoint convert cache used by this loader."""
@abstractmethod
def load_model_from_path(
self, model_path: Path, loader: Optional[Callable[[Path], AnyModel]] = None

View File

@@ -10,7 +10,6 @@ from torch import load as torch_load
from invokeai.app.services.config import InvokeAIAppConfig
from invokeai.app.services.invoker import Invoker
from invokeai.app.services.model_load.model_load_base import ModelLoadServiceBase
from invokeai.backend.model_manager import AnyModel, AnyModelConfig, SubModelType
from invokeai.backend.model_manager.load import (
LoadedModel,
@@ -18,11 +17,14 @@ from invokeai.backend.model_manager.load import (
ModelLoaderRegistry,
ModelLoaderRegistryBase,
)
from invokeai.backend.model_manager.load.convert_cache import ModelConvertCacheBase
from invokeai.backend.model_manager.load.model_cache.model_cache_base import ModelCacheBase
from invokeai.backend.model_manager.load.model_loaders.generic_diffusers import GenericDiffusersLoader
from invokeai.backend.util.devices import TorchDevice
from invokeai.backend.util.logging import InvokeAILogger
from .model_load_base import ModelLoadServiceBase
class ModelLoadService(ModelLoadServiceBase):
"""Wrapper around ModelLoaderRegistry."""
@@ -31,6 +33,7 @@ class ModelLoadService(ModelLoadServiceBase):
self,
app_config: InvokeAIAppConfig,
ram_cache: ModelCacheBase[AnyModel],
convert_cache: ModelConvertCacheBase,
registry: Optional[Type[ModelLoaderRegistryBase]] = ModelLoaderRegistry,
):
"""Initialize the model load service."""
@@ -39,6 +42,7 @@ class ModelLoadService(ModelLoadServiceBase):
self._logger = logger
self._app_config = app_config
self._ram_cache = ram_cache
self._convert_cache = convert_cache
self._registry = registry
def start(self, invoker: Invoker) -> None:
@@ -49,6 +53,11 @@ class ModelLoadService(ModelLoadServiceBase):
"""Return the RAM cache used by this loader."""
return self._ram_cache
@property
def convert_cache(self) -> ModelConvertCacheBase:
"""Return the checkpoint convert cache used by this loader."""
return self._convert_cache
def load_model(self, model_config: AnyModelConfig, submodel_type: Optional[SubModelType] = None) -> LoadedModel:
"""
Given a model's configuration, load it and return the LoadedModel object.
@@ -67,6 +76,7 @@ class ModelLoadService(ModelLoadServiceBase):
app_config=self._app_config,
logger=self._logger,
ram_cache=self._ram_cache,
convert_cache=self._convert_cache,
).load_model(model_config, submodel_type)
if hasattr(self, "_invoker"):

View File

@@ -1,9 +1,10 @@
"""Initialization file for model manager service."""
from invokeai.app.services.model_manager.model_manager_default import ModelManagerService, ModelManagerServiceBase
from invokeai.backend.model_manager import AnyModel, AnyModelConfig, BaseModelType, ModelType, SubModelType
from invokeai.backend.model_manager.load import LoadedModel
from .model_manager_default import ModelManagerService, ModelManagerServiceBase
__all__ = [
"ModelManagerServiceBase",
"ModelManagerService",

View File

@@ -5,13 +5,14 @@ from abc import ABC, abstractmethod
import torch
from typing_extensions import Self
from invokeai.app.services.config.config_default import InvokeAIAppConfig
from invokeai.app.services.download.download_base import DownloadQueueServiceBase
from invokeai.app.services.events.events_base import EventServiceBase
from invokeai.app.services.invoker import Invoker
from invokeai.app.services.model_install.model_install_base import ModelInstallServiceBase
from invokeai.app.services.model_load.model_load_base import ModelLoadServiceBase
from invokeai.app.services.model_records.model_records_base import ModelRecordServiceBase
from ..config import InvokeAIAppConfig
from ..download import DownloadQueueServiceBase
from ..events.events_base import EventServiceBase
from ..model_install import ModelInstallServiceBase
from ..model_load import ModelLoadServiceBase
from ..model_records import ModelRecordServiceBase
class ModelManagerServiceBase(ABC):

View File

@@ -6,20 +6,19 @@ from typing import Optional
import torch
from typing_extensions import Self
from invokeai.app.services.config.config_default import InvokeAIAppConfig
from invokeai.app.services.download.download_base import DownloadQueueServiceBase
from invokeai.app.services.events.events_base import EventServiceBase
from invokeai.app.services.invoker import Invoker
from invokeai.app.services.model_install.model_install_base import ModelInstallServiceBase
from invokeai.app.services.model_install.model_install_default import ModelInstallService
from invokeai.app.services.model_load.model_load_base import ModelLoadServiceBase
from invokeai.app.services.model_load.model_load_default import ModelLoadService
from invokeai.app.services.model_manager.model_manager_base import ModelManagerServiceBase
from invokeai.app.services.model_records.model_records_base import ModelRecordServiceBase
from invokeai.backend.model_manager.load import ModelCache, ModelLoaderRegistry
from invokeai.backend.model_manager.load import ModelCache, ModelConvertCache, ModelLoaderRegistry
from invokeai.backend.util.devices import TorchDevice
from invokeai.backend.util.logging import InvokeAILogger
from ..config import InvokeAIAppConfig
from ..download import DownloadQueueServiceBase
from ..events.events_base import EventServiceBase
from ..model_install import ModelInstallService, ModelInstallServiceBase
from ..model_load import ModelLoadService, ModelLoadServiceBase
from ..model_records import ModelRecordServiceBase
from .model_manager_base import ModelManagerServiceBase
class ModelManagerService(ModelManagerServiceBase):
"""
@@ -87,9 +86,11 @@ class ModelManagerService(ModelManagerServiceBase):
logger=logger,
execution_device=execution_device or TorchDevice.choose_torch_device(),
)
convert_cache = ModelConvertCache(cache_path=app_config.convert_cache_path, max_size=app_config.convert_cache)
loader = ModelLoadService(
app_config=app_config,
ram_cache=ram_cache,
convert_cache=convert_cache,
registry=ModelLoaderRegistry,
)
installer = ModelInstallService(

View File

@@ -40,24 +40,12 @@ Typical usage:
"""
import json
import logging
import sqlite3
from math import ceil
from pathlib import Path
from typing import List, Optional, Union
import pydantic
from invokeai.app.services.model_records.model_records_base import (
DuplicateModelException,
ModelRecordChanges,
ModelRecordOrderBy,
ModelRecordServiceBase,
ModelSummary,
UnknownModelException,
)
from invokeai.app.services.shared.pagination import PaginatedResults
from invokeai.app.services.shared.sqlite.sqlite_database import SqliteDatabase
from invokeai.backend.model_manager.config import (
AnyModelConfig,
BaseModelType,
@@ -66,11 +54,21 @@ from invokeai.backend.model_manager.config import (
ModelType,
)
from ..shared.sqlite.sqlite_database import SqliteDatabase
from .model_records_base import (
DuplicateModelException,
ModelRecordChanges,
ModelRecordOrderBy,
ModelRecordServiceBase,
ModelSummary,
UnknownModelException,
)
class ModelRecordServiceSQL(ModelRecordServiceBase):
"""Implementation of the ModelConfigStore ABC using a SQL database."""
def __init__(self, db: SqliteDatabase, logger: logging.Logger):
def __init__(self, db: SqliteDatabase):
"""
Initialize a new object from preexisting sqlite3 connection and threading lock objects.
@@ -79,7 +77,6 @@ class ModelRecordServiceSQL(ModelRecordServiceBase):
super().__init__()
self._db = db
self._cursor = db.conn.cursor()
self._logger = logger
@property
def db(self) -> SqliteDatabase:
@@ -295,20 +292,7 @@ class ModelRecordServiceSQL(ModelRecordServiceBase):
tuple(bindings),
)
result = self._cursor.fetchall()
# Parse the model configs.
results: list[AnyModelConfig] = []
for row in result:
try:
model_config = ModelConfigFactory.make_config(json.loads(row[0]), timestamp=row[1])
except pydantic.ValidationError:
# We catch this error so that the app can still run if there are invalid model configs in the database.
# One reason that an invalid model config might be in the database is if someone had to rollback from a
# newer version of the app that added a new model type.
self._logger.warning(f"Found an invalid model config in the database. Ignoring this model. ({row[0]})")
else:
results.append(model_config)
results = [ModelConfigFactory.make_config(json.loads(x[0]), timestamp=x[1]) for x in result]
return results
def search_by_path(self, path: Union[str, Path]) -> List[AnyModelConfig]:
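The hunk above replaces a plain list comprehension with a per-row try/except so that a single invalid model config in the database (for example, one left behind after rolling back from a newer app version) is logged and skipped instead of crashing the whole query. A minimal sketch of that defensive loop; `parse_model_config` is a hypothetical stand-in for `ModelConfigFactory.make_config`, and plain `ValueError` stands in for `pydantic.ValidationError`:

```python
import json
import logging

logger = logging.getLogger(__name__)

def parse_model_config(raw: dict) -> dict:
    """Hypothetical stand-in for ModelConfigFactory.make_config; raises ValueError on bad rows."""
    if "name" not in raw or "type" not in raw:
        raise ValueError("missing required model config fields")
    return raw

def parse_rows(rows):
    """Parse DB rows into configs, skipping any row that fails validation.

    Sketch of the defensive loop in the hunk above: invalid rows are logged
    and ignored rather than raising, so the app still runs with a partially
    invalid database.
    """
    results = []
    for row in rows:
        try:
            results.append(parse_model_config(json.loads(row[0])))
        except ValueError:  # json.JSONDecodeError is a ValueError subclass
            logger.warning(f"Ignoring invalid model config row: {row[0]!r}")
    return results
```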

View File

@@ -1,6 +1,7 @@
from invokeai.app.services.names.names_base import NameServiceBase
from invokeai.app.util.misc import uuid_string
from .names_base import NameServiceBase
class SimpleNameService(NameServiceBase):
"""Creates image names from UUIDs."""

View File

@@ -13,24 +13,24 @@ from invokeai.app.services.events.events_common import (
register_events,
)
from invokeai.app.services.invocation_stats.invocation_stats_common import GESStatsNotFoundError
from invokeai.app.services.invoker import Invoker
from invokeai.app.services.session_processor.session_processor_base import (
InvocationServices,
OnAfterRunNode,
OnAfterRunSession,
OnBeforeRunNode,
OnBeforeRunSession,
OnNodeError,
OnNonFatalProcessorError,
SessionProcessorBase,
SessionRunnerBase,
)
from invokeai.app.services.session_processor.session_processor_common import CanceledException, SessionProcessorStatus
from invokeai.app.services.session_processor.session_processor_common import CanceledException
from invokeai.app.services.session_queue.session_queue_common import SessionQueueItem, SessionQueueItemNotFoundError
from invokeai.app.services.shared.graph import NodeInputError
from invokeai.app.services.shared.invocation_context import InvocationContextData, build_invocation_context
from invokeai.app.util.profiler import Profiler
from ..invoker import Invoker
from .session_processor_base import InvocationServices, SessionProcessorBase, SessionRunnerBase
from .session_processor_common import SessionProcessorStatus
class DefaultSessionRunner(SessionRunnerBase):
"""Processes a single session's invocations."""

View File

@@ -14,8 +14,6 @@ from invokeai.app.services.shared.sqlite_migrator.migrations.migration_8 import
from invokeai.app.services.shared.sqlite_migrator.migrations.migration_9 import build_migration_9
from invokeai.app.services.shared.sqlite_migrator.migrations.migration_10 import build_migration_10
from invokeai.app.services.shared.sqlite_migrator.migrations.migration_11 import build_migration_11
from invokeai.app.services.shared.sqlite_migrator.migrations.migration_12 import build_migration_12
from invokeai.app.services.shared.sqlite_migrator.migrations.migration_13 import build_migration_13
from invokeai.app.services.shared.sqlite_migrator.sqlite_migrator_impl import SqliteMigrator
@@ -47,8 +45,6 @@ def init_db(config: InvokeAIAppConfig, logger: Logger, image_files: ImageFileSto
migrator.register_migration(build_migration_9())
migrator.register_migration(build_migration_10())
migrator.register_migration(build_migration_11(app_config=config, logger=logger))
migrator.register_migration(build_migration_12(app_config=config))
migrator.register_migration(build_migration_13())
migrator.run_migrations()
return db

View File

@@ -1,35 +0,0 @@
import shutil
import sqlite3
from invokeai.app.services.config import InvokeAIAppConfig
from invokeai.app.services.shared.sqlite_migrator.sqlite_migrator_common import Migration
class Migration12Callback:
def __init__(self, app_config: InvokeAIAppConfig) -> None:
self._app_config = app_config
def __call__(self, cursor: sqlite3.Cursor) -> None:
self._remove_model_convert_cache_dir()
def _remove_model_convert_cache_dir(self) -> None:
"""
Removes unused model convert cache directory
"""
convert_cache = self._app_config.convert_cache_path
shutil.rmtree(convert_cache, ignore_errors=True)
def build_migration_12(app_config: InvokeAIAppConfig) -> Migration:
"""
Build the migration from database version 11 to 12.
This migration removes the now-unused model convert cache directory.
"""
migration_12 = Migration(
from_version=11,
to_version=12,
callback=Migration12Callback(app_config),
)
return migration_12

View File

@@ -1,31 +0,0 @@
import sqlite3
from invokeai.app.services.shared.sqlite_migrator.sqlite_migrator_common import Migration
class Migration13Callback:
def __call__(self, cursor: sqlite3.Cursor) -> None:
self._add_archived_col(cursor)
def _add_archived_col(self, cursor: sqlite3.Cursor) -> None:
"""
- Adds an `archived` column to the board table.
"""
cursor.execute("ALTER TABLE boards ADD COLUMN archived BOOLEAN DEFAULT FALSE;")
def build_migration_13() -> Migration:
"""
Build the migration from database version 12 to 13.
This migration does the following:
- Adds an `archived` column to the board table.
"""
migration_13 = Migration(
from_version=12,
to_version=13,
callback=Migration13Callback(),
)
return migration_13
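The migration callback above adds the column with a bare `ALTER TABLE`, which SQLite rejects with an `OperationalError` if the column already exists. A sketch of the same step made idempotent by checking `PRAGMA table_info` first (the guard is an addition here, not part of the original migration):

```python
import sqlite3

def add_archived_column(cursor: sqlite3.Cursor) -> None:
    """Add the `archived` column to `boards`, skipping if it already exists.

    Sketch of the Migration13Callback step above: PRAGMA table_info returns
    one row per column (cid, name, type, ...), so we look up the name column
    before issuing the ALTER TABLE.
    """
    cursor.execute("PRAGMA table_info(boards);")
    columns = [row[1] for row in cursor.fetchall()]
    if "archived" not in columns:
        cursor.execute("ALTER TABLE boards ADD COLUMN archived BOOLEAN DEFAULT FALSE;")
```

In the actual migrator this guard is unnecessary because the framework tracks the schema version and runs each migration at most once; it only matters if a callback can be re-run against an already-migrated database.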

View File

@@ -1,6 +1,6 @@
import os
from invokeai.app.services.urls.urls_base import UrlServiceBase
from .urls_base import UrlServiceBase
class LocalUrlService(UrlServiceBase):

View File

@@ -2,7 +2,7 @@
"name": "ESRGAN Upscaling with Canny ControlNet",
"author": "InvokeAI",
"description": "Sample workflow for using Upscaling with ControlNet with SD1.5",
"version": "2.1.0",
"version": "2.0.0",
"contact": "invoke@invoke.ai",
"tags": "upscale, controlnet, default",
"notes": "",
@@ -36,13 +36,14 @@
"version": "3.0.0",
"category": "default"
},
"id": "0e71a27e-a22b-4a9b-b20a-6d789abff2bc",
"nodes": [
{
"id": "63b6ab7e-5b05-4d1b-a3b1-42d8e53ce16b",
"id": "e8bf67fe-67de-4227-87eb-79e86afdfc74",
"type": "invocation",
"data": {
"id": "63b6ab7e-5b05-4d1b-a3b1-42d8e53ce16b",
"version": "1.2.0",
"id": "e8bf67fe-67de-4227-87eb-79e86afdfc74",
"version": "1.1.1",
"nodePack": "invokeai",
"label": "",
"notes": "",
@@ -56,10 +57,6 @@
"clip": {
"name": "clip",
"label": ""
},
"mask": {
"name": "mask",
"label": ""
}
},
"isOpen": true,
@@ -68,63 +65,79 @@
},
"position": {
"x": 1250,
"y": 1200
"y": 1500
}
},
{
"id": "5ca498a4-c8c8-4580-a396-0c984317205d",
"id": "d8ace142-c05f-4f1d-8982-88dc7473958d",
"type": "invocation",
"data": {
"id": "5ca498a4-c8c8-4580-a396-0c984317205d",
"version": "1.1.0",
"id": "d8ace142-c05f-4f1d-8982-88dc7473958d",
"version": "1.0.2",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "i2l",
"type": "main_model_loader",
"inputs": {
"image": {
"name": "image",
"label": ""
},
"vae": {
"name": "vae",
"label": ""
},
"tiled": {
"name": "tiled",
"model": {
"name": "model",
"label": "",
"value": false
},
"tile_size": {
"name": "tile_size",
"label": "",
"value": 0
},
"fp32": {
"name": "fp32",
"label": "",
"value": false
"value": {
"key": "5cd43ca0-dd0a-418d-9f7e-35b2b9d5e106",
"hash": "blake3:6987f323017f597213cc3264250edf57056d21a40a0a85d83a1a33a7d44dc41a",
"name": "Deliberate_v5",
"base": "sd-1",
"type": "main"
}
}
},
"isOpen": false,
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 1650,
"y": 1675
"x": 700,
"y": 1375
}
},
{
"id": "3ed9b2ef-f4ec-40a7-94db-92e63b583ec0",
"id": "771bdf6a-0813-4099-a5d8-921a138754d4",
"type": "invocation",
"data": {
"id": "3ed9b2ef-f4ec-40a7-94db-92e63b583ec0",
"version": "1.3.0",
"id": "771bdf6a-0813-4099-a5d8-921a138754d4",
"version": "1.0.2",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "l2i",
"type": "image",
"inputs": {
"image": {
"name": "image",
"label": "Image To Upscale",
"value": {
"image_name": "d2e42ba6-d420-496b-82db-91c9b75956c1.png"
}
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 344.5593065887157,
"y": 1698.161491368619
}
},
{
"id": "f7564dd2-9539-47f2-ac13-190804461f4e",
"type": "invocation",
"data": {
"id": "f7564dd2-9539-47f2-ac13-190804461f4e",
"version": "1.3.2",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "esrgan",
"inputs": {
"board": {
"name": "board",
@@ -134,37 +147,81 @@
"name": "metadata",
"label": ""
},
"latents": {
"name": "latents",
"image": {
"name": "image",
"label": ""
},
"vae": {
"name": "vae",
"label": ""
},
"tiled": {
"name": "tiled",
"label": "",
"value": false
"model_name": {
"name": "model_name",
"label": "Upscaler Model",
"value": "RealESRGAN_x2plus.pth"
},
"tile_size": {
"name": "tile_size",
"label": "",
"value": 0
},
"fp32": {
"name": "fp32",
"label": "",
"value": false
"value": 400
}
},
"isOpen": true,
"isIntermediate": false,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 2559.4751127537957,
"y": 1246.6000376741406
"x": 717.3863693661265,
"y": 1721.9215053134815
}
},
{
"id": "1d887701-df21-4966-ae6e-a7d82307d7bd",
"type": "invocation",
"data": {
"id": "1d887701-df21-4966-ae6e-a7d82307d7bd",
"version": "1.3.2",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "canny_image_processor",
"inputs": {
"board": {
"name": "board",
"label": ""
},
"metadata": {
"name": "metadata",
"label": ""
},
"image": {
"name": "image",
"label": ""
},
"detect_resolution": {
"name": "detect_resolution",
"label": "",
"value": 512
},
"image_resolution": {
"name": "image_resolution",
"label": "",
"value": 512
},
"low_threshold": {
"name": "low_threshold",
"label": "",
"value": 100
},
"high_threshold": {
"name": "high_threshold",
"label": "",
"value": 200
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 1200,
"y": 1900
}
},
{
@@ -172,7 +229,7 @@
"type": "invocation",
"data": {
"id": "ca1d020c-89a8-4958-880a-016d28775cfa",
"version": "1.1.2",
"version": "1.1.1",
"nodePack": "invokeai",
"label": "",
"notes": "",
@@ -228,193 +285,6 @@
"y": 1902.9649340196056
}
},
{
"id": "1d887701-df21-4966-ae6e-a7d82307d7bd",
"type": "invocation",
"data": {
"id": "1d887701-df21-4966-ae6e-a7d82307d7bd",
"version": "1.3.3",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "canny_image_processor",
"inputs": {
"board": {
"name": "board",
"label": ""
},
"metadata": {
"name": "metadata",
"label": ""
},
"image": {
"name": "image",
"label": ""
},
"detect_resolution": {
"name": "detect_resolution",
"label": "",
"value": 512
},
"image_resolution": {
"name": "image_resolution",
"label": "",
"value": 512
},
"low_threshold": {
"name": "low_threshold",
"label": "",
"value": 100
},
"high_threshold": {
"name": "high_threshold",
"label": "",
"value": 200
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 1200,
"y": 1900
}
},
{
"id": "d8ace142-c05f-4f1d-8982-88dc7473958d",
"type": "invocation",
"data": {
"id": "d8ace142-c05f-4f1d-8982-88dc7473958d",
"version": "1.0.3",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "main_model_loader",
"inputs": {
"model": {
"name": "model",
"label": "",
"value": {
"key": "5cd43ca0-dd0a-418d-9f7e-35b2b9d5e106",
"hash": "blake3:6987f323017f597213cc3264250edf57056d21a40a0a85d83a1a33a7d44dc41a",
"name": "Deliberate_v5",
"base": "sd-1",
"type": "main"
}
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 700,
"y": 1375
}
},
{
"id": "e8bf67fe-67de-4227-87eb-79e86afdfc74",
"type": "invocation",
"data": {
"id": "e8bf67fe-67de-4227-87eb-79e86afdfc74",
"version": "1.2.0",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "compel",
"inputs": {
"prompt": {
"name": "prompt",
"label": "",
"value": ""
},
"clip": {
"name": "clip",
"label": ""
},
"mask": {
"name": "mask",
"label": ""
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 1250,
"y": 1500
}
},
{
"id": "771bdf6a-0813-4099-a5d8-921a138754d4",
"type": "invocation",
"data": {
"id": "771bdf6a-0813-4099-a5d8-921a138754d4",
"version": "1.0.2",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "image",
"inputs": {
"image": {
"name": "image",
"label": "Image To Upscale"
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 344.5593065887157,
"y": 1698.161491368619
}
},
{
"id": "f7564dd2-9539-47f2-ac13-190804461f4e",
"type": "invocation",
"data": {
"id": "f7564dd2-9539-47f2-ac13-190804461f4e",
"version": "1.3.2",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "esrgan",
"inputs": {
"board": {
"name": "board",
"label": ""
},
"metadata": {
"name": "metadata",
"label": ""
},
"image": {
"name": "image",
"label": ""
},
"model_name": {
"name": "model_name",
"label": "Upscaler Model",
"value": "RealESRGAN_x2plus.pth"
},
"tile_size": {
"name": "tile_size",
"label": "",
"value": 400
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 717.3863693661265,
"y": 1721.9215053134815
}
},
{
"id": "f50624ce-82bf-41d0-bdf7-8aab11a80d48",
"type": "invocation",
@@ -543,6 +413,122 @@
"y": 1232.6219060454753
}
},
{
"id": "3ed9b2ef-f4ec-40a7-94db-92e63b583ec0",
"type": "invocation",
"data": {
"id": "3ed9b2ef-f4ec-40a7-94db-92e63b583ec0",
"version": "1.2.2",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "l2i",
"inputs": {
"board": {
"name": "board",
"label": ""
},
"metadata": {
"name": "metadata",
"label": ""
},
"latents": {
"name": "latents",
"label": ""
},
"vae": {
"name": "vae",
"label": ""
},
"tiled": {
"name": "tiled",
"label": "",
"value": false
},
"fp32": {
"name": "fp32",
"label": "",
"value": false
}
},
"isOpen": true,
"isIntermediate": false,
"useCache": true
},
"position": {
"x": 2559.4751127537957,
"y": 1246.6000376741406
}
},
{
"id": "5ca498a4-c8c8-4580-a396-0c984317205d",
"type": "invocation",
"data": {
"id": "5ca498a4-c8c8-4580-a396-0c984317205d",
"version": "1.0.2",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "i2l",
"inputs": {
"image": {
"name": "image",
"label": ""
},
"vae": {
"name": "vae",
"label": ""
},
"tiled": {
"name": "tiled",
"label": "",
"value": false
},
"fp32": {
"name": "fp32",
"label": "",
"value": false
}
},
"isOpen": false,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 1650,
"y": 1675
}
},
{
"id": "63b6ab7e-5b05-4d1b-a3b1-42d8e53ce16b",
"type": "invocation",
"data": {
"id": "63b6ab7e-5b05-4d1b-a3b1-42d8e53ce16b",
"version": "1.1.1",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "compel",
"inputs": {
"prompt": {
"name": "prompt",
"label": "",
"value": ""
},
"clip": {
"name": "clip",
"label": ""
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 1250,
"y": 1200
}
},
{
"id": "eb8f6f8a-c7b1-4914-806e-045ee2717a35",
"type": "invocation",


@@ -2,7 +2,7 @@
"name": "Face Detailer with IP-Adapter & Canny (See Note in Details)",
"author": "kosmoskatten",
"description": "A workflow to add detail to and improve faces. This workflow is most effective when used with a model that creates realistic outputs. ",
"version": "2.1.0",
"version": "2.0.0",
"contact": "invoke@invoke.ai",
"tags": "face detailer, IP-Adapter, Canny",
"notes": "Set this image as the blur mask: https://i.imgur.com/Gxi61zP.png",
@@ -37,349 +37,16 @@
}
],
"meta": {
"version": "3.0.0",
"category": "default"
"category": "default",
"version": "3.0.0"
},
"nodes": [
{
"id": "c6359181-6479-40ec-bf3a-b7e8451683b8",
"type": "invocation",
"data": {
"id": "c6359181-6479-40ec-bf3a-b7e8451683b8",
"version": "1.0.3",
"label": "",
"notes": "",
"type": "main_model_loader",
"inputs": {
"model": {
"name": "model",
"label": ""
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 2031.5518710051792,
"y": -492.1742944307074
}
},
{
"id": "8fe598c6-d447-44fa-a165-4975af77d080",
"type": "invocation",
"data": {
"id": "8fe598c6-d447-44fa-a165-4975af77d080",
"version": "1.3.3",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "canny_image_processor",
"inputs": {
"board": {
"name": "board",
"label": ""
},
"metadata": {
"name": "metadata",
"label": ""
},
"image": {
"name": "image",
"label": ""
},
"detect_resolution": {
"name": "detect_resolution",
"label": "",
"value": 512
},
"image_resolution": {
"name": "image_resolution",
"label": "",
"value": 512
},
"low_threshold": {
"name": "low_threshold",
"label": "",
"value": 100
},
"high_threshold": {
"name": "high_threshold",
"label": "",
"value": 200
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 3519.4131037388597,
"y": 576.7946795840575
}
},
{
"id": "f60b6161-8f26-42f6-89ff-545e6011e501",
"type": "invocation",
"data": {
"id": "f60b6161-8f26-42f6-89ff-545e6011e501",
"version": "1.1.2",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "controlnet",
"inputs": {
"image": {
"name": "image",
"label": ""
},
"control_model": {
"name": "control_model",
"label": "Control Model (select canny)",
"value": {
"key": "5bdaacf7-a7a3-4fb8-b394-cc0ffbb8941d",
"hash": "blake3:260c7f8e10aefea9868cfc68d89970e91033bd37132b14b903e70ee05ebf530e",
"name": "sd-controlnet-canny",
"base": "sd-1",
"type": "controlnet"
}
},
"control_weight": {
"name": "control_weight",
"label": "",
"value": 0.5
},
"begin_step_percent": {
"name": "begin_step_percent",
"label": "",
"value": 0
},
"end_step_percent": {
"name": "end_step_percent",
"label": "",
"value": 0.5
},
"control_mode": {
"name": "control_mode",
"label": "",
"value": "balanced"
},
"resize_mode": {
"name": "resize_mode",
"label": "",
"value": "just_resize"
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 3950,
"y": 150
}
},
{
"id": "22b750db-b85e-486b-b278-ac983e329813",
"type": "invocation",
"data": {
"id": "22b750db-b85e-486b-b278-ac983e329813",
"version": "1.4.1",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "ip_adapter",
"inputs": {
"image": {
"name": "image",
"label": ""
},
"ip_adapter_model": {
"name": "ip_adapter_model",
"label": "IP-Adapter Model (select IP Adapter Face)",
"value": {
"key": "1cc210bb-4d0a-4312-b36c-b5d46c43768e",
"hash": "blake3:3d669dffa7471b357b4df088b99ffb6bf4d4383d5e0ef1de5ec1c89728a3d5a5",
"name": "ip_adapter_sd15",
"base": "sd-1",
"type": "ip_adapter"
}
},
"clip_vision_model": {
"name": "clip_vision_model",
"label": "",
"value": "ViT-H"
},
"weight": {
"name": "weight",
"label": "",
"value": 0.5
},
"method": {
"name": "method",
"label": "",
"value": "full"
},
"begin_step_percent": {
"name": "begin_step_percent",
"label": "",
"value": 0
},
"end_step_percent": {
"name": "end_step_percent",
"label": "",
"value": 0.8
},
"mask": {
"name": "mask",
"label": ""
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 3575,
"y": -200
}
},
{
"id": "f4d15b64-c4a6-42a5-90fc-e4ed07a0ca65",
"type": "invocation",
"data": {
"id": "f4d15b64-c4a6-42a5-90fc-e4ed07a0ca65",
"version": "1.2.0",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "compel",
"inputs": {
"prompt": {
"name": "prompt",
"label": "",
"value": ""
},
"clip": {
"name": "clip",
"label": ""
},
"mask": {
"name": "mask",
"label": ""
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 2550,
"y": -525
}
},
{
"id": "2224ed72-2453-4252-bd89-3085240e0b6f",
"type": "invocation",
"data": {
"id": "2224ed72-2453-4252-bd89-3085240e0b6f",
"version": "1.3.0",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "l2i",
"inputs": {
"board": {
"name": "board",
"label": ""
},
"metadata": {
"name": "metadata",
"label": ""
},
"latents": {
"name": "latents",
"label": ""
},
"vae": {
"name": "vae",
"label": ""
},
"tiled": {
"name": "tiled",
"label": "",
"value": false
},
"tile_size": {
"name": "tile_size",
"label": "",
"value": 0
},
"fp32": {
"name": "fp32",
"label": "",
"value": true
}
},
"isOpen": true,
"isIntermediate": false,
"useCache": true
},
"position": {
"x": 4980.1395106966565,
"y": -255.9158921745602
}
},
{
"id": "de8b1a48-a2e4-42ca-90bb-66058bffd534",
"type": "invocation",
"data": {
"id": "de8b1a48-a2e4-42ca-90bb-66058bffd534",
"version": "1.1.0",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "i2l",
"inputs": {
"image": {
"name": "image",
"label": ""
},
"vae": {
"name": "vae",
"label": ""
},
"tiled": {
"name": "tiled",
"label": "",
"value": false
},
"tile_size": {
"name": "tile_size",
"label": "",
"value": 0
},
"fp32": {
"name": "fp32",
"label": "",
"value": true
}
},
"isOpen": false,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 3100,
"y": -275
}
},
{
"id": "44f2c190-eb03-460d-8d11-a94d13b33f19",
"type": "invocation",
"data": {
"id": "44f2c190-eb03-460d-8d11-a94d13b33f19",
"version": "1.2.0",
"version": "1.1.1",
"nodePack": "invokeai",
"label": "",
"notes": "",
@@ -393,10 +60,6 @@
"clip": {
"name": "clip",
"label": ""
},
"mask": {
"name": "mask",
"label": ""
}
},
"isOpen": true,
@@ -588,6 +251,45 @@
"y": 0
}
},
{
"id": "de8b1a48-a2e4-42ca-90bb-66058bffd534",
"type": "invocation",
"data": {
"id": "de8b1a48-a2e4-42ca-90bb-66058bffd534",
"version": "1.0.2",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "i2l",
"inputs": {
"image": {
"name": "image",
"label": ""
},
"vae": {
"name": "vae",
"label": ""
},
"tiled": {
"name": "tiled",
"label": "",
"value": false
},
"fp32": {
"name": "fp32",
"label": "",
"value": true
}
},
"isOpen": false,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 3100,
"y": -275
}
},
{
"id": "bd06261d-a74a-4d1f-8374-745ed6194bc2",
"type": "invocation",
@@ -716,6 +418,53 @@
"y": -175
}
},
{
"id": "2224ed72-2453-4252-bd89-3085240e0b6f",
"type": "invocation",
"data": {
"id": "2224ed72-2453-4252-bd89-3085240e0b6f",
"version": "1.2.2",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "l2i",
"inputs": {
"board": {
"name": "board",
"label": ""
},
"metadata": {
"name": "metadata",
"label": ""
},
"latents": {
"name": "latents",
"label": ""
},
"vae": {
"name": "vae",
"label": ""
},
"tiled": {
"name": "tiled",
"label": "",
"value": false
},
"fp32": {
"name": "fp32",
"label": "",
"value": true
}
},
"isOpen": true,
"isIntermediate": false,
"useCache": true
},
"position": {
"x": 4980.1395106966565,
"y": -255.9158921745602
}
},
{
"id": "2974e5b3-3d41-4b6f-9953-cd21e8f3a323",
"type": "invocation",
@@ -943,6 +692,201 @@
"y": -275
}
},
{
"id": "f4d15b64-c4a6-42a5-90fc-e4ed07a0ca65",
"type": "invocation",
"data": {
"id": "f4d15b64-c4a6-42a5-90fc-e4ed07a0ca65",
"version": "1.1.1",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "compel",
"inputs": {
"prompt": {
"name": "prompt",
"label": "",
"value": ""
},
"clip": {
"name": "clip",
"label": ""
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 2550,
"y": -525
}
},
{
"id": "22b750db-b85e-486b-b278-ac983e329813",
"type": "invocation",
"data": {
"id": "22b750db-b85e-486b-b278-ac983e329813",
"version": "1.2.2",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "ip_adapter",
"inputs": {
"image": {
"name": "image",
"label": ""
},
"ip_adapter_model": {
"name": "ip_adapter_model",
"label": "IP-Adapter Model (select IP Adapter Face)",
"value": {
"key": "1cc210bb-4d0a-4312-b36c-b5d46c43768e",
"hash": "blake3:3d669dffa7471b357b4df088b99ffb6bf4d4383d5e0ef1de5ec1c89728a3d5a5",
"name": "ip_adapter_sd15",
"base": "sd-1",
"type": "ip_adapter"
}
},
"weight": {
"name": "weight",
"label": "",
"value": 0.5
},
"begin_step_percent": {
"name": "begin_step_percent",
"label": "",
"value": 0
},
"end_step_percent": {
"name": "end_step_percent",
"label": "",
"value": 0.8
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 3575,
"y": -200
}
},
{
"id": "f60b6161-8f26-42f6-89ff-545e6011e501",
"type": "invocation",
"data": {
"id": "f60b6161-8f26-42f6-89ff-545e6011e501",
"version": "1.1.1",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "controlnet",
"inputs": {
"image": {
"name": "image",
"label": ""
},
"control_model": {
"name": "control_model",
"label": "Control Model (select canny)",
"value": {
"key": "5bdaacf7-a7a3-4fb8-b394-cc0ffbb8941d",
"hash": "blake3:260c7f8e10aefea9868cfc68d89970e91033bd37132b14b903e70ee05ebf530e",
"name": "sd-controlnet-canny",
"base": "sd-1",
"type": "controlnet"
}
},
"control_weight": {
"name": "control_weight",
"label": "",
"value": 0.5
},
"begin_step_percent": {
"name": "begin_step_percent",
"label": "",
"value": 0
},
"end_step_percent": {
"name": "end_step_percent",
"label": "",
"value": 0.5
},
"control_mode": {
"name": "control_mode",
"label": "",
"value": "balanced"
},
"resize_mode": {
"name": "resize_mode",
"label": "",
"value": "just_resize"
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 3950,
"y": 150
}
},
{
"id": "8fe598c6-d447-44fa-a165-4975af77d080",
"type": "invocation",
"data": {
"id": "8fe598c6-d447-44fa-a165-4975af77d080",
"version": "1.3.2",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "canny_image_processor",
"inputs": {
"board": {
"name": "board",
"label": ""
},
"metadata": {
"name": "metadata",
"label": ""
},
"image": {
"name": "image",
"label": ""
},
"detect_resolution": {
"name": "detect_resolution",
"label": "",
"value": 512
},
"image_resolution": {
"name": "image_resolution",
"label": "",
"value": 512
},
"low_threshold": {
"name": "low_threshold",
"label": "",
"value": 100
},
"high_threshold": {
"name": "high_threshold",
"label": "",
"value": 200
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 3519.4131037388597,
"y": 576.7946795840575
}
},
{
"id": "4bd4ae80-567f-4366-b8c6-3bb06f4fb46a",
"type": "invocation",
@@ -1091,6 +1035,30 @@
"x": 2578.2364832140506,
"y": 78.7948456497351
}
},
{
"id": "c6359181-6479-40ec-bf3a-b7e8451683b8",
"type": "invocation",
"data": {
"id": "c6359181-6479-40ec-bf3a-b7e8451683b8",
"version": "1.0.2",
"label": "",
"notes": "",
"type": "main_model_loader",
"inputs": {
"model": {
"name": "model",
"label": ""
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 2031.5518710051792,
"y": -492.1742944307074
}
}
],
"edges": [


@@ -2,7 +2,7 @@
"name": "Multi ControlNet (Canny & Depth)",
"author": "InvokeAI",
"description": "A sample workflow using canny & depth ControlNets to guide the generation process. ",
"version": "2.1.0",
"version": "2.0.0",
"contact": "invoke@invoke.ai",
"tags": "ControlNet, canny, depth",
"notes": "",
@@ -37,218 +37,24 @@
}
],
"meta": {
"version": "3.0.0",
"category": "default"
"category": "default",
"version": "3.0.0"
},
"nodes": [
{
"id": "9db25398-c869-4a63-8815-c6559341ef12",
"id": "8e860e51-5045-456e-bf04-9a62a2a5c49e",
"type": "invocation",
"data": {
"id": "9db25398-c869-4a63-8815-c6559341ef12",
"version": "1.3.0",
"id": "8e860e51-5045-456e-bf04-9a62a2a5c49e",
"version": "1.0.2",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "l2i",
"inputs": {
"board": {
"name": "board",
"label": ""
},
"metadata": {
"name": "metadata",
"label": ""
},
"latents": {
"name": "latents",
"label": ""
},
"vae": {
"name": "vae",
"label": ""
},
"tiled": {
"name": "tiled",
"label": "",
"value": false
},
"tile_size": {
"name": "tile_size",
"label": "",
"value": 0
},
"fp32": {
"name": "fp32",
"label": "",
"value": false
}
},
"isOpen": true,
"isIntermediate": false,
"useCache": true
},
"position": {
"x": 5675,
"y": -825
}
},
{
"id": "c826ba5e-9676-4475-b260-07b85e88753c",
"type": "invocation",
"data": {
"id": "c826ba5e-9676-4475-b260-07b85e88753c",
"version": "1.3.3",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "canny_image_processor",
"inputs": {
"board": {
"name": "board",
"label": ""
},
"metadata": {
"name": "metadata",
"label": ""
},
"image": {
"name": "image",
"label": ""
},
"detect_resolution": {
"name": "detect_resolution",
"label": "",
"value": 512
},
"image_resolution": {
"name": "image_resolution",
"label": "",
"value": 512
},
"low_threshold": {
"name": "low_threshold",
"label": "",
"value": 100
},
"high_threshold": {
"name": "high_threshold",
"label": "",
"value": 200
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 4095.757337055795,
"y": -455.63440891935863
}
},
{
"id": "018b1214-c2af-43a7-9910-fb687c6726d7",
"type": "invocation",
"data": {
"id": "018b1214-c2af-43a7-9910-fb687c6726d7",
"version": "1.2.4",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "midas_depth_image_processor",
"inputs": {
"board": {
"name": "board",
"label": ""
},
"metadata": {
"name": "metadata",
"label": ""
},
"image": {
"name": "image",
"label": ""
},
"a_mult": {
"name": "a_mult",
"label": "",
"value": 2
},
"bg_th": {
"name": "bg_th",
"label": "",
"value": 0.1
},
"detect_resolution": {
"name": "detect_resolution",
"label": "",
"value": 512
},
"image_resolution": {
"name": "image_resolution",
"label": "",
"value": 512
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 4082.783145980783,
"y": 0.01629251229994111
}
},
{
"id": "d204d184-f209-4fae-a0a1-d152800844e1",
"type": "invocation",
"data": {
"id": "d204d184-f209-4fae-a0a1-d152800844e1",
"version": "1.1.2",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "controlnet",
"type": "image",
"inputs": {
"image": {
"name": "image",
"label": ""
},
"control_model": {
"name": "control_model",
"label": "Control Model (select canny)",
"value": {
"key": "5bdaacf7-a7a3-4fb8-b394-cc0ffbb8941d",
"hash": "blake3:260c7f8e10aefea9868cfc68d89970e91033bd37132b14b903e70ee05ebf530e",
"name": "sd-controlnet-canny",
"base": "sd-1",
"type": "controlnet"
}
},
"control_weight": {
"name": "control_weight",
"label": "",
"value": 1
},
"begin_step_percent": {
"name": "begin_step_percent",
"label": "",
"value": 0
},
"end_step_percent": {
"name": "end_step_percent",
"label": "",
"value": 1
},
"control_mode": {
"name": "control_mode",
"label": "",
"value": "balanced"
},
"resize_mode": {
"name": "resize_mode",
"label": "",
"value": "just_resize"
"label": "Depth Input Image"
}
},
"isOpen": true,
@@ -256,101 +62,8 @@
"useCache": true
},
"position": {
"x": 4479.68542130465,
"y": -618.4221638099414
}
},
{
"id": "7ce68934-3419-42d4-ac70-82cfc9397306",
"type": "invocation",
"data": {
"id": "7ce68934-3419-42d4-ac70-82cfc9397306",
"version": "1.2.0",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "compel",
"inputs": {
"prompt": {
"name": "prompt",
"label": "Positive Prompt",
"value": ""
},
"clip": {
"name": "clip",
"label": ""
},
"mask": {
"name": "mask",
"label": ""
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 4075,
"y": -1125
}
},
{
"id": "54486974-835b-4d81-8f82-05f9f32ce9e9",
"type": "invocation",
"data": {
"id": "54486974-835b-4d81-8f82-05f9f32ce9e9",
"version": "1.0.3",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "main_model_loader",
"inputs": {
"model": {
"name": "model",
"label": ""
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 3600,
"y": -1000
}
},
{
"id": "273e3f96-49ea-4dc5-9d5b-9660390f14e1",
"type": "invocation",
"data": {
"id": "273e3f96-49ea-4dc5-9d5b-9660390f14e1",
"version": "1.2.0",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "compel",
"inputs": {
"prompt": {
"name": "prompt",
"label": "Negative Prompt",
"value": ""
},
"clip": {
"name": "clip",
"label": ""
},
"mask": {
"name": "mask",
"label": ""
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 4075,
"y": -825
"x": 3666.135718057363,
"y": 186.66887319822808
}
},
{
@@ -358,7 +71,7 @@
"type": "invocation",
"data": {
"id": "a33199c2-8340-401e-b8a2-42ffa875fc1c",
"version": "1.1.2",
"version": "1.1.1",
"nodePack": "invokeai",
"label": "",
"notes": "",
@@ -415,19 +128,24 @@
}
},
{
"id": "8e860e51-5045-456e-bf04-9a62a2a5c49e",
"id": "273e3f96-49ea-4dc5-9d5b-9660390f14e1",
"type": "invocation",
"data": {
"id": "8e860e51-5045-456e-bf04-9a62a2a5c49e",
"version": "1.0.2",
"id": "273e3f96-49ea-4dc5-9d5b-9660390f14e1",
"version": "1.1.1",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "image",
"type": "compel",
"inputs": {
"image": {
"name": "image",
"label": "Depth Input Image"
"prompt": {
"name": "prompt",
"label": "Negative Prompt",
"value": ""
},
"clip": {
"name": "clip",
"label": ""
}
},
"isOpen": true,
@@ -435,8 +153,124 @@
"useCache": true
},
"position": {
"x": 3666.135718057363,
"y": 186.66887319822808
"x": 4075,
"y": -825
}
},
{
"id": "54486974-835b-4d81-8f82-05f9f32ce9e9",
"type": "invocation",
"data": {
"id": "54486974-835b-4d81-8f82-05f9f32ce9e9",
"version": "1.0.2",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "main_model_loader",
"inputs": {
"model": {
"name": "model",
"label": ""
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 3600,
"y": -1000
}
},
{
"id": "7ce68934-3419-42d4-ac70-82cfc9397306",
"type": "invocation",
"data": {
"id": "7ce68934-3419-42d4-ac70-82cfc9397306",
"version": "1.1.1",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "compel",
"inputs": {
"prompt": {
"name": "prompt",
"label": "Positive Prompt",
"value": ""
},
"clip": {
"name": "clip",
"label": ""
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 4075,
"y": -1125
}
},
{
"id": "d204d184-f209-4fae-a0a1-d152800844e1",
"type": "invocation",
"data": {
"id": "d204d184-f209-4fae-a0a1-d152800844e1",
"version": "1.1.1",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "controlnet",
"inputs": {
"image": {
"name": "image",
"label": ""
},
"control_model": {
"name": "control_model",
"label": "Control Model (select canny)",
"value": {
"key": "5bdaacf7-a7a3-4fb8-b394-cc0ffbb8941d",
"hash": "blake3:260c7f8e10aefea9868cfc68d89970e91033bd37132b14b903e70ee05ebf530e",
"name": "sd-controlnet-canny",
"base": "sd-1",
"type": "controlnet"
}
},
"control_weight": {
"name": "control_weight",
"label": "",
"value": 1
},
"begin_step_percent": {
"name": "begin_step_percent",
"label": "",
"value": 0
},
"end_step_percent": {
"name": "end_step_percent",
"label": "",
"value": 1
},
"control_mode": {
"name": "control_mode",
"label": "",
"value": "balanced"
},
"resize_mode": {
"name": "resize_mode",
"label": "",
"value": "just_resize"
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 4479.68542130465,
"y": -618.4221638099414
}
},
{
@@ -488,6 +322,159 @@
"y": -575
}
},
{
"id": "018b1214-c2af-43a7-9910-fb687c6726d7",
"type": "invocation",
"data": {
"id": "018b1214-c2af-43a7-9910-fb687c6726d7",
"version": "1.2.3",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "midas_depth_image_processor",
"inputs": {
"board": {
"name": "board",
"label": ""
},
"metadata": {
"name": "metadata",
"label": ""
},
"image": {
"name": "image",
"label": ""
},
"a_mult": {
"name": "a_mult",
"label": "",
"value": 2
},
"bg_th": {
"name": "bg_th",
"label": "",
"value": 0.1
},
"detect_resolution": {
"name": "detect_resolution",
"label": "",
"value": 512
},
"image_resolution": {
"name": "image_resolution",
"label": "",
"value": 512
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 4082.783145980783,
"y": 0.01629251229994111
}
},
{
"id": "c826ba5e-9676-4475-b260-07b85e88753c",
"type": "invocation",
"data": {
"id": "c826ba5e-9676-4475-b260-07b85e88753c",
"version": "1.3.2",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "canny_image_processor",
"inputs": {
"board": {
"name": "board",
"label": ""
},
"metadata": {
"name": "metadata",
"label": ""
},
"image": {
"name": "image",
"label": ""
},
"detect_resolution": {
"name": "detect_resolution",
"label": "",
"value": 512
},
"image_resolution": {
"name": "image_resolution",
"label": "",
"value": 512
},
"low_threshold": {
"name": "low_threshold",
"label": "",
"value": 100
},
"high_threshold": {
"name": "high_threshold",
"label": "",
"value": 200
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 4095.757337055795,
"y": -455.63440891935863
}
},
{
"id": "9db25398-c869-4a63-8815-c6559341ef12",
"type": "invocation",
"data": {
"id": "9db25398-c869-4a63-8815-c6559341ef12",
"version": "1.2.2",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "l2i",
"inputs": {
"board": {
"name": "board",
"label": ""
},
"metadata": {
"name": "metadata",
"label": ""
},
"latents": {
"name": "latents",
"label": ""
},
"vae": {
"name": "vae",
"label": ""
},
"tiled": {
"name": "tiled",
"label": "",
"value": false
},
"fp32": {
"name": "fp32",
"label": "",
"value": false
}
},
"isOpen": true,
"isIntermediate": false,
"useCache": true
},
"position": {
"x": 5675,
"y": -825
}
},
{
"id": "ac481b7f-08bf-4a9d-9e0c-3a82ea5243ce",
"type": "invocation",


@@ -2,7 +2,7 @@
"name": "Prompt from File",
"author": "InvokeAI",
"description": "Sample workflow using Prompt from File node",
"version": "2.1.0",
"version": "2.0.0",
"contact": "invoke@invoke.ai",
"tags": "text2image, prompt from file, default",
"notes": "",
@@ -37,127 +37,16 @@
}
],
"meta": {
"version": "3.0.0",
"category": "default"
"category": "default",
"version": "3.0.0"
},
"nodes": [
{
"id": "491ec988-3c77-4c37-af8a-39a0c4e7a2a1",
"type": "invocation",
"data": {
"id": "491ec988-3c77-4c37-af8a-39a0c4e7a2a1",
"version": "1.3.0",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "l2i",
"inputs": {
"board": {
"name": "board",
"label": ""
},
"metadata": {
"name": "metadata",
"label": ""
},
"latents": {
"name": "latents",
"label": ""
},
"vae": {
"name": "vae",
"label": ""
},
"tiled": {
"name": "tiled",
"label": "",
"value": false
},
"tile_size": {
"name": "tile_size",
"label": "",
"value": 0
},
"fp32": {
"name": "fp32",
"label": "",
"value": false
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 2037.861329274915,
"y": -329.8393457509562
}
},
{
"id": "fc9d0e35-a6de-4a19-84e1-c72497c823f6",
"type": "invocation",
"data": {
"id": "fc9d0e35-a6de-4a19-84e1-c72497c823f6",
"version": "1.2.0",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "compel",
"inputs": {
"prompt": {
"name": "prompt",
"label": "",
"value": ""
},
"clip": {
"name": "clip",
"label": ""
},
"mask": {
"name": "mask",
"label": ""
}
},
"isOpen": false,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 925,
"y": -275
}
},
{
"id": "d6353b7f-b447-4e17-8f2e-80a88c91d426",
"type": "invocation",
"data": {
"id": "d6353b7f-b447-4e17-8f2e-80a88c91d426",
"version": "1.0.3",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "main_model_loader",
"inputs": {
"model": {
"name": "model",
"label": ""
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 0,
"y": -375
}
},
{
"id": "c2eaf1ba-5708-4679-9e15-945b8b432692",
"type": "invocation",
"data": {
"id": "c2eaf1ba-5708-4679-9e15-945b8b432692",
"version": "1.2.0",
"version": "1.1.1",
"nodePack": "invokeai",
"label": "",
"notes": "",
@@ -171,10 +60,6 @@
"clip": {
"name": "clip",
"label": ""
},
"mask": {
"name": "mask",
"label": ""
}
},
"isOpen": false,
@@ -256,6 +141,61 @@
"y": -400
}
},
{
"id": "d6353b7f-b447-4e17-8f2e-80a88c91d426",
"type": "invocation",
"data": {
"id": "d6353b7f-b447-4e17-8f2e-80a88c91d426",
"version": "1.0.2",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "main_model_loader",
"inputs": {
"model": {
"name": "model",
"label": ""
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 0,
"y": -375
}
},
{
"id": "fc9d0e35-a6de-4a19-84e1-c72497c823f6",
"type": "invocation",
"data": {
"id": "fc9d0e35-a6de-4a19-84e1-c72497c823f6",
"version": "1.1.1",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "compel",
"inputs": {
"prompt": {
"name": "prompt",
"label": "",
"value": ""
},
"clip": {
"name": "clip",
"label": ""
}
},
"isOpen": false,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 925,
"y": -275
}
},
{
"id": "0eb5f3f5-1b91-49eb-9ef0-41d67c7eae77",
"type": "invocation",
@@ -328,6 +268,53 @@
"y": -50
}
},
{
"id": "491ec988-3c77-4c37-af8a-39a0c4e7a2a1",
"type": "invocation",
"data": {
"id": "491ec988-3c77-4c37-af8a-39a0c4e7a2a1",
"version": "1.2.2",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "l2i",
"inputs": {
"board": {
"name": "board",
"label": ""
},
"metadata": {
"name": "metadata",
"label": ""
},
"latents": {
"name": "latents",
"label": ""
},
"vae": {
"name": "vae",
"label": ""
},
"tiled": {
"name": "tiled",
"label": "",
"value": false
},
"fp32": {
"name": "fp32",
"label": "",
"value": false
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 2037.861329274915,
"y": -329.8393457509562
}
},
{
"id": "2fb1577f-0a56-4f12-8711-8afcaaaf1d5e",
"type": "invocation",


@@ -2,7 +2,7 @@
"name": "Text to Image - SD1.5",
"author": "InvokeAI",
"description": "Sample text to image workflow for Stable Diffusion 1.5/2",
"version": "2.1.0",
"version": "2.0.0",
"contact": "invoke@invoke.ai",
"tags": "text2image, SD1.5, SD2, default",
"notes": "",
@@ -33,127 +33,16 @@
}
],
"meta": {
"version": "3.0.0",
"category": "default"
"category": "default",
"version": "3.0.0"
},
"nodes": [
{
"id": "58c957f5-0d01-41fc-a803-b2bbf0413d4f",
"type": "invocation",
"data": {
"id": "58c957f5-0d01-41fc-a803-b2bbf0413d4f",
"version": "1.3.0",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "l2i",
"inputs": {
"board": {
"name": "board",
"label": ""
},
"metadata": {
"name": "metadata",
"label": ""
},
"latents": {
"name": "latents",
"label": ""
},
"vae": {
"name": "vae",
"label": ""
},
"tiled": {
"name": "tiled",
"label": "",
"value": false
},
"tile_size": {
"name": "tile_size",
"label": "",
"value": 0
},
"fp32": {
"name": "fp32",
"label": "",
"value": true
}
},
"isOpen": true,
"isIntermediate": false,
"useCache": true
},
"position": {
"x": 1800,
"y": 25
}
},
{
"id": "7d8bf987-284f-413a-b2fd-d825445a5d6c",
"type": "invocation",
"data": {
"id": "7d8bf987-284f-413a-b2fd-d825445a5d6c",
"version": "1.2.0",
"nodePack": "invokeai",
"label": "Positive Compel Prompt",
"notes": "",
"type": "compel",
"inputs": {
"prompt": {
"name": "prompt",
"label": "Positive Prompt",
"value": "Super cute tiger cub, national geographic award-winning photograph"
},
"clip": {
"name": "clip",
"label": ""
},
"mask": {
"name": "mask",
"label": ""
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 1000,
"y": 25
}
},
{
"id": "c8d55139-f380-4695-b7f2-8b3d1e1e3db8",
"type": "invocation",
"data": {
"id": "c8d55139-f380-4695-b7f2-8b3d1e1e3db8",
"version": "1.0.3",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "main_model_loader",
"inputs": {
"model": {
"name": "model",
"label": ""
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 600,
"y": 25
}
},
{
"id": "93dc02a4-d05b-48ed-b99c-c9b616af3402",
"type": "invocation",
"data": {
"id": "93dc02a4-d05b-48ed-b99c-c9b616af3402",
"version": "1.2.0",
"version": "1.1.1",
"nodePack": "invokeai",
"label": "Negative Compel Prompt",
"notes": "",
@@ -167,10 +56,6 @@
"clip": {
"name": "clip",
"label": ""
},
"mask": {
"name": "mask",
"label": ""
}
},
"isOpen": true,
@@ -223,6 +108,61 @@
"y": 325
}
},
{
"id": "c8d55139-f380-4695-b7f2-8b3d1e1e3db8",
"type": "invocation",
"data": {
"id": "c8d55139-f380-4695-b7f2-8b3d1e1e3db8",
"version": "1.0.2",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "main_model_loader",
"inputs": {
"model": {
"name": "model",
"label": ""
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 600,
"y": 25
}
},
{
"id": "7d8bf987-284f-413a-b2fd-d825445a5d6c",
"type": "invocation",
"data": {
"id": "7d8bf987-284f-413a-b2fd-d825445a5d6c",
"version": "1.1.1",
"nodePack": "invokeai",
"label": "Positive Compel Prompt",
"notes": "",
"type": "compel",
"inputs": {
"prompt": {
"name": "prompt",
"label": "Positive Prompt",
"value": "Super cute tiger cub, national geographic award-winning photograph"
},
"clip": {
"name": "clip",
"label": ""
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 1000,
"y": 25
}
},
{
"id": "ea94bc37-d995-4a83-aa99-4af42479f2f2",
"type": "invocation",
@@ -340,6 +280,53 @@
"x": 1400,
"y": 25
}
},
{
"id": "58c957f5-0d01-41fc-a803-b2bbf0413d4f",
"type": "invocation",
"data": {
"id": "58c957f5-0d01-41fc-a803-b2bbf0413d4f",
"version": "1.2.2",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "l2i",
"inputs": {
"board": {
"name": "board",
"label": ""
},
"metadata": {
"name": "metadata",
"label": ""
},
"latents": {
"name": "latents",
"label": ""
},
"vae": {
"name": "vae",
"label": ""
},
"tiled": {
"name": "tiled",
"label": "",
"value": false
},
"fp32": {
"name": "fp32",
"label": "",
"value": true
}
},
"isOpen": true,
"isIntermediate": false,
"useCache": true
},
"position": {
"x": 1800,
"y": 25
}
}
],
"edges": [


@@ -2,7 +2,7 @@
"name": "Text to Image - SDXL",
"author": "InvokeAI",
"description": "Sample text to image workflow for SDXL",
"version": "2.1.0",
"version": "2.0.0",
"contact": "invoke@invoke.ai",
"tags": "text2image, SDXL, default",
"notes": "",
@@ -29,271 +29,10 @@
}
],
"meta": {
"version": "3.0.0",
"category": "default"
"category": "default",
"version": "3.0.0"
},
"nodes": [
{
"id": "0093692f-9cf4-454d-a5b8-62f0e3eb3bb8",
"type": "invocation",
"data": {
"id": "0093692f-9cf4-454d-a5b8-62f0e3eb3bb8",
"version": "1.0.3",
"label": "",
"notes": "",
"type": "vae_loader",
"inputs": {
"vae_model": {
"name": "vae_model",
"label": "VAE (use the FP16 model)",
"value": {
"key": "f20f9e5c-1bce-4c46-a84d-34ebfa7df069",
"hash": "blake3:9705ab1c31fa96b308734214fb7571a958621c7a9247eed82b7d277145f8d9fa",
"name": "sdxl-vae-fp16-fix",
"base": "sdxl",
"type": "vae"
}
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 375,
"y": -225
}
},
{
"id": "63e91020-83b2-4f35-b174-ad9692aabb48",
"type": "invocation",
"data": {
"id": "63e91020-83b2-4f35-b174-ad9692aabb48",
"version": "1.3.0",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "l2i",
"inputs": {
"board": {
"name": "board",
"label": ""
},
"metadata": {
"name": "metadata",
"label": ""
},
"latents": {
"name": "latents",
"label": ""
},
"vae": {
"name": "vae",
"label": ""
},
"tiled": {
"name": "tiled",
"label": "",
"value": false
},
"tile_size": {
"name": "tile_size",
"label": "",
"value": 0
},
"fp32": {
"name": "fp32",
"label": "",
"value": false
}
},
"isOpen": true,
"isIntermediate": false,
"useCache": false
},
"position": {
"x": 1475,
"y": -500
}
},
{
"id": "faf965a4-7530-427b-b1f3-4ba6505c2a08",
"type": "invocation",
"data": {
"id": "faf965a4-7530-427b-b1f3-4ba6505c2a08",
"version": "1.2.0",
"nodePack": "invokeai",
"label": "SDXL Positive Compel Prompt",
"notes": "",
"type": "sdxl_compel_prompt",
"inputs": {
"prompt": {
"name": "prompt",
"label": "Positive Prompt",
"value": ""
},
"style": {
"name": "style",
"label": "Positive Style",
"value": ""
},
"original_width": {
"name": "original_width",
"label": "",
"value": 1024
},
"original_height": {
"name": "original_height",
"label": "",
"value": 1024
},
"crop_top": {
"name": "crop_top",
"label": "",
"value": 0
},
"crop_left": {
"name": "crop_left",
"label": "",
"value": 0
},
"target_width": {
"name": "target_width",
"label": "",
"value": 1024
},
"target_height": {
"name": "target_height",
"label": "",
"value": 1024
},
"clip": {
"name": "clip",
"label": ""
},
"clip2": {
"name": "clip2",
"label": ""
},
"mask": {
"name": "mask",
"label": ""
}
},
"isOpen": false,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 750,
"y": -175
}
},
{
"id": "30d3289c-773c-4152-a9d2-bd8a99c8fd22",
"type": "invocation",
"data": {
"id": "30d3289c-773c-4152-a9d2-bd8a99c8fd22",
"version": "1.0.3",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "sdxl_model_loader",
"inputs": {
"model": {
"name": "model",
"label": "",
"value": {
"key": "4a63b226-e8ff-4da4-854e-0b9f04b562ba",
"hash": "blake3:d279309ea6e5ee6e8fd52504275865cc280dac71cbf528c5b07c98b888bddaba",
"name": "dreamshaper-xl-v2-turbo",
"base": "sdxl",
"type": "main"
}
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 375,
"y": -500
}
},
{
"id": "3193ad09-a7c2-4bf4-a3a9-1c61cc33a204",
"type": "invocation",
"data": {
"id": "3193ad09-a7c2-4bf4-a3a9-1c61cc33a204",
"version": "1.2.0",
"nodePack": "invokeai",
"label": "SDXL Negative Compel Prompt",
"notes": "",
"type": "sdxl_compel_prompt",
"inputs": {
"prompt": {
"name": "prompt",
"label": "Negative Prompt",
"value": ""
},
"style": {
"name": "style",
"label": "Negative Style",
"value": ""
},
"original_width": {
"name": "original_width",
"label": "",
"value": 1024
},
"original_height": {
"name": "original_height",
"label": "",
"value": 1024
},
"crop_top": {
"name": "crop_top",
"label": "",
"value": 0
},
"crop_left": {
"name": "crop_left",
"label": "",
"value": 0
},
"target_width": {
"name": "target_width",
"label": "",
"value": 1024
},
"target_height": {
"name": "target_height",
"label": "",
"value": 1024
},
"clip": {
"name": "clip",
"label": ""
},
"clip2": {
"name": "clip2",
"label": ""
},
"mask": {
"name": "mask",
"label": ""
}
},
"isOpen": false,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 750,
"y": 200
}
},
{
"id": "3774ec24-a69e-4254-864c-097d07a6256f",
"type": "invocation",
@@ -349,6 +88,75 @@
"y": -125
}
},
{
"id": "3193ad09-a7c2-4bf4-a3a9-1c61cc33a204",
"type": "invocation",
"data": {
"id": "3193ad09-a7c2-4bf4-a3a9-1c61cc33a204",
"version": "1.1.1",
"nodePack": "invokeai",
"label": "SDXL Negative Compel Prompt",
"notes": "",
"type": "sdxl_compel_prompt",
"inputs": {
"prompt": {
"name": "prompt",
"label": "Negative Prompt",
"value": ""
},
"style": {
"name": "style",
"label": "Negative Style",
"value": ""
},
"original_width": {
"name": "original_width",
"label": "",
"value": 1024
},
"original_height": {
"name": "original_height",
"label": "",
"value": 1024
},
"crop_top": {
"name": "crop_top",
"label": "",
"value": 0
},
"crop_left": {
"name": "crop_left",
"label": "",
"value": 0
},
"target_width": {
"name": "target_width",
"label": "",
"value": 1024
},
"target_height": {
"name": "target_height",
"label": "",
"value": 1024
},
"clip": {
"name": "clip",
"label": ""
},
"clip2": {
"name": "clip2",
"label": ""
}
},
"isOpen": false,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 750,
"y": 200
}
},
{
"id": "55705012-79b9-4aac-9f26-c0b10309785b",
"type": "invocation",
@@ -421,6 +229,154 @@
"y": -50
}
},
{
"id": "30d3289c-773c-4152-a9d2-bd8a99c8fd22",
"type": "invocation",
"data": {
"id": "30d3289c-773c-4152-a9d2-bd8a99c8fd22",
"version": "1.0.2",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "sdxl_model_loader",
"inputs": {
"model": {
"name": "model",
"label": "",
"value": {
"key": "4a63b226-e8ff-4da4-854e-0b9f04b562ba",
"hash": "blake3:d279309ea6e5ee6e8fd52504275865cc280dac71cbf528c5b07c98b888bddaba",
"name": "dreamshaper-xl-v2-turbo",
"base": "sdxl",
"type": "main"
}
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 375,
"y": -500
}
},
{
"id": "faf965a4-7530-427b-b1f3-4ba6505c2a08",
"type": "invocation",
"data": {
"id": "faf965a4-7530-427b-b1f3-4ba6505c2a08",
"version": "1.1.1",
"nodePack": "invokeai",
"label": "SDXL Positive Compel Prompt",
"notes": "",
"type": "sdxl_compel_prompt",
"inputs": {
"prompt": {
"name": "prompt",
"label": "Positive Prompt",
"value": ""
},
"style": {
"name": "style",
"label": "Positive Style",
"value": ""
},
"original_width": {
"name": "original_width",
"label": "",
"value": 1024
},
"original_height": {
"name": "original_height",
"label": "",
"value": 1024
},
"crop_top": {
"name": "crop_top",
"label": "",
"value": 0
},
"crop_left": {
"name": "crop_left",
"label": "",
"value": 0
},
"target_width": {
"name": "target_width",
"label": "",
"value": 1024
},
"target_height": {
"name": "target_height",
"label": "",
"value": 1024
},
"clip": {
"name": "clip",
"label": ""
},
"clip2": {
"name": "clip2",
"label": ""
}
},
"isOpen": false,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 750,
"y": -175
}
},
{
"id": "63e91020-83b2-4f35-b174-ad9692aabb48",
"type": "invocation",
"data": {
"id": "63e91020-83b2-4f35-b174-ad9692aabb48",
"version": "1.2.2",
"nodePack": "invokeai",
"label": "",
"notes": "",
"type": "l2i",
"inputs": {
"board": {
"name": "board",
"label": ""
},
"metadata": {
"name": "metadata",
"label": ""
},
"latents": {
"name": "latents",
"label": ""
},
"vae": {
"name": "vae",
"label": ""
},
"tiled": {
"name": "tiled",
"label": "",
"value": false
},
"fp32": {
"name": "fp32",
"label": "",
"value": false
}
},
"isOpen": true,
"isIntermediate": false,
"useCache": false
},
"position": {
"x": 1475,
"y": -500
}
},
{
"id": "50a36525-3c0a-4cc5-977c-e4bfc3fd6dfb",
"type": "invocation",
@@ -508,6 +464,37 @@
"y": -500
}
},
{
"id": "0093692f-9cf4-454d-a5b8-62f0e3eb3bb8",
"type": "invocation",
"data": {
"id": "0093692f-9cf4-454d-a5b8-62f0e3eb3bb8",
"version": "1.0.2",
"label": "",
"notes": "",
"type": "vae_loader",
"inputs": {
"vae_model": {
"name": "vae_model",
"label": "VAE (use the FP16 model)",
"value": {
"key": "f20f9e5c-1bce-4c46-a84d-34ebfa7df069",
"hash": "blake3:9705ab1c31fa96b308734214fb7571a958621c7a9247eed82b7d277145f8d9fa",
"name": "sdxl-vae-fp16-fix",
"base": "sdxl",
"type": "vae"
}
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 375,
"y": -225
}
},
{
"id": "ade2c0d3-0384-4157-b39b-29ce429cfa15",
"type": "invocation",

View File

@@ -2,7 +2,7 @@
"name": "Text to Image with LoRA",
"author": "InvokeAI",
"description": "Simple text to image workflow with a LoRA",
"version": "2.1.0",
"version": "2.0.0",
"contact": "invoke@invoke.ai",
"tags": "text to image, lora, default",
"notes": "",
@@ -37,83 +37,28 @@
}
],
"meta": {
"version": "3.0.0",
"category": "default"
"category": "default",
"version": "3.0.0"
},
"nodes": [
{
"id": "a9683c0a-6b1f-4a5e-8187-c57e764b3400",
"id": "85b77bb2-c67a-416a-b3e8-291abe746c44",
"type": "invocation",
"data": {
"id": "a9683c0a-6b1f-4a5e-8187-c57e764b3400",
"version": "1.3.0",
"label": "",
"notes": "",
"type": "l2i",
"inputs": {
"board": {
"name": "board",
"label": ""
},
"metadata": {
"name": "metadata",
"label": ""
},
"latents": {
"name": "latents",
"label": ""
},
"vae": {
"name": "vae",
"label": ""
},
"tiled": {
"name": "tiled",
"label": "",
"value": false
},
"tile_size": {
"name": "tile_size",
"label": "",
"value": 0
},
"fp32": {
"name": "fp32",
"label": "",
"value": false
}
},
"isOpen": true,
"isIntermediate": false,
"useCache": true
},
"position": {
"x": 4450,
"y": -550
}
},
{
"id": "c3fa6872-2599-4a82-a596-b3446a66cf8b",
"type": "invocation",
"data": {
"id": "c3fa6872-2599-4a82-a596-b3446a66cf8b",
"version": "1.2.0",
"id": "85b77bb2-c67a-416a-b3e8-291abe746c44",
"version": "1.1.1",
"label": "",
"notes": "",
"type": "compel",
"inputs": {
"prompt": {
"name": "prompt",
"label": "Positive Prompt",
"value": "super cute tiger cub"
"label": "Negative Prompt",
"value": ""
},
"clip": {
"name": "clip",
"label": ""
},
"mask": {
"name": "mask",
"label": ""
}
},
"isOpen": true,
@@ -122,7 +67,31 @@
},
"position": {
"x": 3425,
"y": -575
"y": -300
}
},
{
"id": "24e9d7ed-4836-4ec4-8f9e-e747721f9818",
"type": "invocation",
"data": {
"id": "24e9d7ed-4836-4ec4-8f9e-e747721f9818",
"version": "1.0.2",
"label": "",
"notes": "",
"type": "main_model_loader",
"inputs": {
"model": {
"name": "model",
"label": ""
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 2500,
"y": -600
}
},
{
@@ -130,7 +99,7 @@
"type": "invocation",
"data": {
"id": "c41e705b-f2e3-4d1a-83c4-e34bb9344966",
"version": "1.0.3",
"version": "1.0.2",
"label": "",
"notes": "",
"type": "lora_loader",
@@ -163,51 +132,23 @@
}
},
{
"id": "24e9d7ed-4836-4ec4-8f9e-e747721f9818",
"id": "c3fa6872-2599-4a82-a596-b3446a66cf8b",
"type": "invocation",
"data": {
"id": "24e9d7ed-4836-4ec4-8f9e-e747721f9818",
"version": "1.0.3",
"label": "",
"notes": "",
"type": "main_model_loader",
"inputs": {
"model": {
"name": "model",
"label": ""
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": 2500,
"y": -600
}
},
{
"id": "85b77bb2-c67a-416a-b3e8-291abe746c44",
"type": "invocation",
"data": {
"id": "85b77bb2-c67a-416a-b3e8-291abe746c44",
"version": "1.2.0",
"id": "c3fa6872-2599-4a82-a596-b3446a66cf8b",
"version": "1.1.1",
"label": "",
"notes": "",
"type": "compel",
"inputs": {
"prompt": {
"name": "prompt",
"label": "Negative Prompt",
"value": ""
"label": "Positive Prompt",
"value": "super cute tiger cub"
},
"clip": {
"name": "clip",
"label": ""
},
"mask": {
"name": "mask",
"label": ""
}
},
"isOpen": true,
@@ -216,7 +157,7 @@
},
"position": {
"x": 3425,
"y": -300
"y": -575
}
},
{
@@ -374,6 +315,52 @@
"x": 3425,
"y": 0
}
},
{
"id": "a9683c0a-6b1f-4a5e-8187-c57e764b3400",
"type": "invocation",
"data": {
"id": "a9683c0a-6b1f-4a5e-8187-c57e764b3400",
"version": "1.2.2",
"label": "",
"notes": "",
"type": "l2i",
"inputs": {
"board": {
"name": "board",
"label": ""
},
"metadata": {
"name": "metadata",
"label": ""
},
"latents": {
"name": "latents",
"label": ""
},
"vae": {
"name": "vae",
"label": ""
},
"tiled": {
"name": "tiled",
"label": "",
"value": false
},
"fp32": {
"name": "fp32",
"label": "",
"value": false
}
},
"isOpen": true,
"isIntermediate": false,
"useCache": true
},
"position": {
"x": 4450,
"y": -550
}
}
],
"edges": [

View File

@@ -2,7 +2,7 @@
"name": "Tiled Upscaling (Beta)",
"author": "Invoke",
"description": "A workflow to upscale an input image with tiled upscaling. ",
"version": "2.1.0",
"version": "2.0.0",
"contact": "invoke@invoke.ai",
"tags": "tiled, upscaling, sd1.5",
"notes": "",
@@ -41,318 +41,10 @@
}
],
"meta": {
"version": "3.0.0",
"category": "default"
"category": "default",
"version": "3.0.0"
},
"nodes": [
{
"id": "2ff466b8-5e2a-4d8f-923a-a3884c7ecbc5",
"type": "invocation",
"data": {
"id": "2ff466b8-5e2a-4d8f-923a-a3884c7ecbc5",
"version": "1.0.3",
"label": "",
"notes": "",
"type": "main_model_loader",
"inputs": {
"model": {
"name": "model",
"label": ""
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": -4514.466823162653,
"y": -1235.7908800002283
}
},
{
"id": "287f134f-da8d-41d1-884e-5940e8f7b816",
"type": "invocation",
"data": {
"id": "287f134f-da8d-41d1-884e-5940e8f7b816",
"version": "1.4.1",
"label": "",
"notes": "",
"type": "ip_adapter",
"inputs": {
"image": {
"name": "image",
"label": ""
},
"ip_adapter_model": {
"name": "ip_adapter_model",
"label": "IP-Adapter Model (select ip_adapter_sd15)",
"value": {
"key": "1cc210bb-4d0a-4312-b36c-b5d46c43768e",
"hash": "blake3:3d669dffa7471b357b4df088b99ffb6bf4d4383d5e0ef1de5ec1c89728a3d5a5",
"name": "ip_adapter_sd15",
"base": "sd-1",
"type": "ip_adapter"
}
},
"clip_vision_model": {
"name": "clip_vision_model",
"label": "",
"value": "ViT-H"
},
"weight": {
"name": "weight",
"label": "",
"value": 0.2
},
"method": {
"name": "method",
"label": "",
"value": "full"
},
"begin_step_percent": {
"name": "begin_step_percent",
"label": "",
"value": 0
},
"end_step_percent": {
"name": "end_step_percent",
"label": "",
"value": 1
},
"mask": {
"name": "mask",
"label": ""
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": -2855.8555540799207,
"y": -183.58854843775742
}
},
{
"id": "b76fe66f-7884-43ad-b72c-fadc81d7a73c",
"type": "invocation",
"data": {
"id": "b76fe66f-7884-43ad-b72c-fadc81d7a73c",
"version": "1.3.0",
"label": "",
"notes": "",
"type": "l2i",
"inputs": {
"board": {
"name": "board",
"label": ""
},
"metadata": {
"name": "metadata",
"label": ""
},
"latents": {
"name": "latents",
"label": ""
},
"vae": {
"name": "vae",
"label": ""
},
"tiled": {
"name": "tiled",
"label": "",
"value": false
},
"tile_size": {
"name": "tile_size",
"label": "",
"value": 0
},
"fp32": {
"name": "fp32",
"label": "",
"value": false
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": -1999.770193862987,
"y": -1075
}
},
{
"id": "d334f2da-016a-4524-9911-bdab85546888",
"type": "invocation",
"data": {
"id": "d334f2da-016a-4524-9911-bdab85546888",
"version": "1.1.2",
"label": "",
"notes": "",
"type": "controlnet",
"inputs": {
"image": {
"name": "image",
"label": ""
},
"control_model": {
"name": "control_model",
"label": "Control Model (select control_v11f1e_sd15_tile)",
"value": {
"key": "773843c8-db1f-4502-8f65-59782efa7960",
"hash": "blake3:f0812e13758f91baf4e54b7dbb707b70642937d3b2098cd2b94cc36d3eba308e",
"name": "control_v11f1e_sd15_tile",
"base": "sd-1",
"type": "controlnet"
}
},
"control_weight": {
"name": "control_weight",
"label": "",
"value": 1
},
"begin_step_percent": {
"name": "begin_step_percent",
"label": "",
"value": 0
},
"end_step_percent": {
"name": "end_step_percent",
"label": "Structural Control",
"value": 1
},
"control_mode": {
"name": "control_mode",
"label": "",
"value": "more_control"
},
"resize_mode": {
"name": "resize_mode",
"label": "",
"value": "just_resize"
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": -2481.9569385477016,
"y": -181.06590482739782
}
},
{
"id": "338b883c-3728-4f18-b3a6-6e7190c2f850",
"type": "invocation",
"data": {
"id": "338b883c-3728-4f18-b3a6-6e7190c2f850",
"version": "1.1.0",
"label": "",
"notes": "",
"type": "i2l",
"inputs": {
"image": {
"name": "image",
"label": ""
},
"vae": {
"name": "vae",
"label": ""
},
"tiled": {
"name": "tiled",
"label": "",
"value": false
},
"tile_size": {
"name": "tile_size",
"label": "",
"value": 0
},
"fp32": {
"name": "fp32",
"label": "",
"value": false
}
},
"isOpen": false,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": -2908.4791167517287,
"y": -408.87504820159086
}
},
{
"id": "947c3f88-0305-4695-8355-df4abac64b1c",
"type": "invocation",
"data": {
"id": "947c3f88-0305-4695-8355-df4abac64b1c",
"version": "1.2.0",
"label": "",
"notes": "",
"type": "compel",
"inputs": {
"prompt": {
"name": "prompt",
"label": "",
"value": ""
},
"clip": {
"name": "clip",
"label": ""
},
"mask": {
"name": "mask",
"label": ""
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": -4014.4136788915944,
"y": -968.5677253775948
}
},
{
"id": "9b2d8c58-ce8f-4162-a5a1-48de854040d6",
"type": "invocation",
"data": {
"id": "9b2d8c58-ce8f-4162-a5a1-48de854040d6",
"version": "1.2.0",
"label": "",
"notes": "",
"type": "compel",
"inputs": {
"prompt": {
"name": "prompt",
"label": "Positive Prompt",
"value": ""
},
"clip": {
"name": "clip",
"label": ""
},
"mask": {
"name": "mask",
"label": ""
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": -4014.4136788915944,
"y": -1243.5677253775948
}
},
{
"id": "b875cae6-d8a3-4fdc-b969-4d53cbd03f9a",
"type": "invocation",
@@ -489,6 +181,64 @@
"y": 3.422855503409039
}
},
{
"id": "9b2d8c58-ce8f-4162-a5a1-48de854040d6",
"type": "invocation",
"data": {
"id": "9b2d8c58-ce8f-4162-a5a1-48de854040d6",
"version": "1.1.1",
"label": "",
"notes": "",
"type": "compel",
"inputs": {
"prompt": {
"name": "prompt",
"label": "Positive Prompt",
"value": ""
},
"clip": {
"name": "clip",
"label": ""
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": -4014.4136788915944,
"y": -1243.5677253775948
}
},
{
"id": "947c3f88-0305-4695-8355-df4abac64b1c",
"type": "invocation",
"data": {
"id": "947c3f88-0305-4695-8355-df4abac64b1c",
"version": "1.1.1",
"label": "",
"notes": "",
"type": "compel",
"inputs": {
"prompt": {
"name": "prompt",
"label": "",
"value": ""
},
"clip": {
"name": "clip",
"label": ""
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": -4014.4136788915944,
"y": -968.5677253775948
}
},
{
"id": "b3513fed-ed42-408d-b382-128fdb0de523",
"type": "invocation",
@@ -629,6 +379,104 @@
"y": -29.08699277598673
}
},
{
"id": "338b883c-3728-4f18-b3a6-6e7190c2f850",
"type": "invocation",
"data": {
"id": "338b883c-3728-4f18-b3a6-6e7190c2f850",
"version": "1.0.2",
"label": "",
"notes": "",
"type": "i2l",
"inputs": {
"image": {
"name": "image",
"label": ""
},
"vae": {
"name": "vae",
"label": ""
},
"tiled": {
"name": "tiled",
"label": "",
"value": false
},
"fp32": {
"name": "fp32",
"label": "",
"value": false
}
},
"isOpen": false,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": -2908.4791167517287,
"y": -408.87504820159086
}
},
{
"id": "d334f2da-016a-4524-9911-bdab85546888",
"type": "invocation",
"data": {
"id": "d334f2da-016a-4524-9911-bdab85546888",
"version": "1.1.1",
"label": "",
"notes": "",
"type": "controlnet",
"inputs": {
"image": {
"name": "image",
"label": ""
},
"control_model": {
"name": "control_model",
"label": "Control Model (select control_v11f1e_sd15_tile)",
"value": {
"key": "773843c8-db1f-4502-8f65-59782efa7960",
"hash": "blake3:f0812e13758f91baf4e54b7dbb707b70642937d3b2098cd2b94cc36d3eba308e",
"name": "control_v11f1e_sd15_tile",
"base": "sd-1",
"type": "controlnet"
}
},
"control_weight": {
"name": "control_weight",
"label": "",
"value": 1
},
"begin_step_percent": {
"name": "begin_step_percent",
"label": "",
"value": 0
},
"end_step_percent": {
"name": "end_step_percent",
"label": "Structural Control",
"value": 1
},
"control_mode": {
"name": "control_mode",
"label": "",
"value": "more_control"
},
"resize_mode": {
"name": "resize_mode",
"label": "",
"value": "just_resize"
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": -2481.9569385477016,
"y": -181.06590482739782
}
},
{
"id": "1011539e-85de-4e02-a003-0b22358491b8",
"type": "invocation",
@@ -715,6 +563,52 @@
"y": -1006.415909408244
}
},
{
"id": "b76fe66f-7884-43ad-b72c-fadc81d7a73c",
"type": "invocation",
"data": {
"id": "b76fe66f-7884-43ad-b72c-fadc81d7a73c",
"version": "1.2.2",
"label": "",
"notes": "",
"type": "l2i",
"inputs": {
"board": {
"name": "board",
"label": ""
},
"metadata": {
"name": "metadata",
"label": ""
},
"latents": {
"name": "latents",
"label": ""
},
"vae": {
"name": "vae",
"label": ""
},
"tiled": {
"name": "tiled",
"label": "",
"value": false
},
"fp32": {
"name": "fp32",
"label": "",
"value": false
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": -1999.770193862987,
"y": -1075
}
},
{
"id": "ab6f5dda-4b60-4ddf-99f2-f61fb5937527",
"type": "invocation",
@@ -885,6 +779,56 @@
"y": -78.2819050861178
}
},
{
"id": "287f134f-da8d-41d1-884e-5940e8f7b816",
"type": "invocation",
"data": {
"id": "287f134f-da8d-41d1-884e-5940e8f7b816",
"version": "1.2.2",
"label": "",
"notes": "",
"type": "ip_adapter",
"inputs": {
"image": {
"name": "image",
"label": ""
},
"ip_adapter_model": {
"name": "ip_adapter_model",
"label": "IP-Adapter Model (select ip_adapter_sd15)",
"value": {
"key": "1cc210bb-4d0a-4312-b36c-b5d46c43768e",
"hash": "blake3:3d669dffa7471b357b4df088b99ffb6bf4d4383d5e0ef1de5ec1c89728a3d5a5",
"name": "ip_adapter_sd15",
"base": "sd-1",
"type": "ip_adapter"
}
},
"weight": {
"name": "weight",
"label": "",
"value": 0.2
},
"begin_step_percent": {
"name": "begin_step_percent",
"label": "",
"value": 0
},
"end_step_percent": {
"name": "end_step_percent",
"label": "",
"value": 1
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": -2855.8555540799207,
"y": -183.58854843775742
}
},
{
"id": "1f86c8bf-06f9-4e28-abee-02f46f445ac4",
"type": "invocation",
@@ -955,6 +899,30 @@
"y": -41.810810454906914
}
},
{
"id": "2ff466b8-5e2a-4d8f-923a-a3884c7ecbc5",
"type": "invocation",
"data": {
"id": "2ff466b8-5e2a-4d8f-923a-a3884c7ecbc5",
"version": "1.0.2",
"label": "",
"notes": "",
"type": "main_model_loader",
"inputs": {
"model": {
"name": "model",
"label": ""
}
},
"isOpen": true,
"isIntermediate": true,
"useCache": true
},
"position": {
"x": -4514.466823162653,
"y": -1235.7908800002283
}
},
{
"id": "f5d9bf3b-2646-4b17-9894-20fd2b4218ea",
"type": "invocation",

View File

@@ -5,8 +5,9 @@ from PIL import Image
from invokeai.app.services.session_processor.session_processor_common import CanceledException, ProgressImage
from invokeai.backend.model_manager.config import BaseModelType
from invokeai.backend.stable_diffusion.diffusers_pipeline import PipelineIntermediateState
from invokeai.backend.util.util import image_to_dataURL
from ...backend.stable_diffusion import PipelineIntermediateState
from ...backend.util.util import image_to_dataURL
if TYPE_CHECKING:
from invokeai.app.services.events.events_base import EventServiceBase

View File

@@ -2,11 +2,6 @@
Initialization file for invokeai.backend.image_util methods.
"""
from invokeai.backend.image_util.infill_methods.patchmatch import PatchMatch # noqa: F401
from invokeai.backend.image_util.pngwriter import ( # noqa: F401
PngWriter,
PromptFormatter,
retrieve_metadata,
write_metadata,
)
from invokeai.backend.image_util.util import InitImageResizer, make_grid # noqa: F401
from .infill_methods.patchmatch import PatchMatch # noqa: F401
from .pngwriter import PngWriter, PromptFormatter, retrieve_metadata, write_metadata # noqa: F401
from .util import InitImageResizer, make_grid # noqa: F401

View File

@@ -2,7 +2,7 @@ import torch
from torch import nn as nn
from torch.nn import functional as F
from invokeai.backend.image_util.basicsr.arch_util import default_init_weights, make_layer, pixel_unshuffle
from .arch_util import default_init_weights, make_layer, pixel_unshuffle
class ResidualDenseBlock(nn.Module):

View File

@@ -4,7 +4,7 @@ import torch
import torch.nn as nn
import torch.nn.functional as F
from invokeai.backend.image_util.depth_anything.model.blocks import FeatureFusionBlock, _make_scratch
from .blocks import FeatureFusionBlock, _make_scratch
torchhub_path = Path(__file__).parent.parent / "torchhub"

View File

@@ -8,10 +8,11 @@ import numpy as np
import onnxruntime as ort
from invokeai.app.services.config.config_default import get_config
from invokeai.backend.image_util.dw_openpose.onnxdet import inference_detector
from invokeai.backend.image_util.dw_openpose.onnxpose import inference_pose
from invokeai.backend.util.devices import TorchDevice
from .onnxdet import inference_detector
from .onnxpose import inference_pose
config = get_config()

View File

@@ -98,7 +98,7 @@ class UnetSkipConnectionBlock(nn.Module):
"""
super(UnetSkipConnectionBlock, self).__init__()
self.outermost = outermost
if isinstance(norm_layer, functools.partial):
if type(norm_layer) == functools.partial:
use_bias = norm_layer.func == nn.InstanceNorm2d
else:
use_bias = norm_layer == nn.InstanceNorm2d
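
The hunk above is the ruff E721 change from the first commit: `type(x) == T` is replaced with `isinstance(x, T)`. A minimal sketch of why the lint rule prefers `isinstance` — the `LoggingPartial` subclass here is hypothetical, used only to illustrate the difference:

```python
import functools

class LoggingPartial(functools.partial):
    """Hypothetical functools.partial subclass, used only to illustrate E721."""

norm_layer = LoggingPartial(print)

# isinstance() covers subclasses, which is why ruff E721 prefers it:
assert isinstance(norm_layer, functools.partial)
# the exact-type comparison that E721 flags misses the subclass:
assert type(norm_layer) is not functools.partial
```

For `functools.partial` itself the two spellings behave the same, which is why the old code worked; the lint fix only tightens the idiom.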

View File

@@ -11,8 +11,9 @@ from PIL import Image
from transformers import CLIPImageProcessor, CLIPVisionModelWithProjection
from invokeai.backend.ip_adapter.ip_attention_weights import IPAttentionWeights
from invokeai.backend.ip_adapter.resampler import Resampler
from invokeai.backend.raw_model import RawModel
from ..raw_model import RawModel
from .resampler import Resampler
class IPAdapterStateDict(TypedDict):
@@ -135,11 +136,11 @@ class IPAdapter(RawModel):
self._image_proj_model.to(device=self.device, dtype=self.dtype, non_blocking=non_blocking)
self.attn_weights.to(device=self.device, dtype=self.dtype, non_blocking=non_blocking)
def calc_size(self) -> int:
# HACK(ryand): Fix this issue with circular imports.
from invokeai.backend.model_manager.load.model_util import calc_module_size
def calc_size(self):
# workaround for circular import
from invokeai.backend.model_manager.load.model_util import calc_model_size_by_data
return calc_module_size(self._image_proj_model) + calc_module_size(self.attn_weights)
return calc_model_size_by_data(self._image_proj_model) + calc_model_size_by_data(self.attn_weights)
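
The hunk above renames the size helper to `calc_module_size`. A minimal, torch-free sketch of what such a helper computes — total bytes across a module's parameters. This is an illustrative assumption, not the actual implementation in `invokeai.backend.model_manager.load.model_util`:

```python
def calc_module_size(param_shapes) -> int:
    """Rough sketch: sum bytes over (numel, element_size_bytes) pairs.

    The real helper walks a torch module's parameters and buffers; here we
    take the pairs directly so the idea is runnable without torch.
    """
    return sum(numel * elem_size for numel, elem_size in param_shapes)

# e.g. a 64x3x3x3 fp16 conv weight (2 bytes/elem) plus a 64-element fp16 bias
assert calc_module_size([(64 * 3 * 3 * 3, 2), (64, 2)]) == 3584
```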
def _init_image_proj_model(
self, state_dict: dict[str, torch.Tensor]
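
The `.to(..., non_blocking=non_blocking)` calls in the hunk above are where the MPS fix lands: CPU-to-MPS copies with `non_blocking=True` can produce black outputs (pytorch/pytorch#107455). A hedged sketch of the gating utility the commit message describes — the name and signature here are assumptions, not the exact `TorchDevice` API:

```python
def get_non_blocking(device_type: str) -> bool:
    """Return the non_blocking flag to use when moving a tensor to a device.

    MPS always gets False, since non-blocking host-to-MPS copies are unsafe;
    other devices (e.g. CUDA) keep the faster non-blocking transfer.
    """
    return device_type != "mps"

assert get_non_blocking("cuda") is True
assert get_non_blocking("mps") is False
```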

View File

@@ -10,9 +10,10 @@ from safetensors.torch import load_file
from typing_extensions import Self
from invokeai.backend.model_manager import BaseModelType
from invokeai.backend.raw_model import RawModel
from invokeai.backend.util.devices import TorchDevice
from .raw_model import RawModel
class LoRALayerBase:
# rank: Optional[int]

View File

@@ -1,6 +1,6 @@
"""Re-export frequently-used symbols from the Model Manager backend."""
from invokeai.backend.model_manager.config import (
from .config import (
AnyModel,
AnyModelConfig,
BaseModelType,
@@ -13,9 +13,9 @@ from invokeai.backend.model_manager.config import (
SchedulerPredictionType,
SubModelType,
)
from invokeai.backend.model_manager.load import LoadedModel
from invokeai.backend.model_manager.probe import ModelProbe
from invokeai.backend.model_manager.search import ModelSearch
from .load import LoadedModel
from .probe import ModelProbe
from .search import ModelSearch
__all__ = [
"AnyModel",

Some files were not shown because too many files have changed in this diff.