-[![github open issues badge]][github open issues link] [![github open prs badge]][github open prs link] [![translation status badge]][translation status link]
+
+
+
+
+## Quick Start
+
+1. Download and unzip the installer from the bottom of the [latest release](https://github.com/invoke-ai/InvokeAI/releases/latest).
+2. Run the installer script.
+
+- **Windows**: Double-click on the `install.bat` script.
+- **macOS**: Open a Terminal window, drag the file `install.sh` from Finder into the Terminal, and press enter.
+- **Linux**: Run `install.sh`.
+
+3. When prompted, enter a location for the install and select your GPU type.
+4. Once the install finishes, find the directory you selected during install. The default location is `C:\Users\Username\invokeai` for Windows or `~/invokeai` for Linux/macOS.
+6. Run the launcher script (`invoke.bat` for Windows, `invoke.sh` for macOS and Linux) - the same way you ran the installer script in step 2.
+7. Select option 1 to start the application. Once it starts up, open your browser and go to http://localhost:9090.
+8. Open the model manager tab to install a starter model and then you'll be ready to generate.
+
+More detail, including hardware requirements and manual install instructions, is available in the [installation documentation](https://invoke-ai.github.io/InvokeAI/installation/INSTALLATION/).
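
To make the steps above concrete, here is a rough sketch of what a first install and launch might look like in a Linux or macOS terminal. The archive and folder names are placeholders for whatever the latest release provides, and `~/invokeai` is the default install location mentioned above.

```sh
# Illustrative session only; file and folder names vary by release.
cd ~/Downloads
unzip InvokeAI-installer-*.zip      # step 1: unzip the downloaded installer
cd InvokeAI-Installer               # the unzipped folder name may differ
./install.sh                        # steps 2-3: run the installer, choose a location and GPU type

cd ~/invokeai                       # step 4: the default install directory on Linux/macOS
./invoke.sh                         # step 5: run the launcher, then select option 1
```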
+
+## Features
+
+Full details on features can be found in [our documentation](https://invoke-ai.github.io/InvokeAI/features/).
+
+### Web Server & UI
+
+Invoke runs a locally hosted web server & React UI with an industry-leading user experience.
+
+### Unified Canvas
+
+The Unified Canvas is a fully integrated canvas implementation with support for all core generation capabilities, in/out-painting, brush tools, and more. This creative tool unlocks the capability for artists to create with AI as a creative collaborator, and can be used to augment AI-generated imagery, sketches, photography, renders, and more.
+
+### Workflows & Nodes
+
+Invoke offers a fully featured workflow management solution, enabling users to combine the power of node-based workflows with the ease of a UI. This allows for customizable generation pipelines to be developed and shared by users looking to create specific workflows to support their production use cases.
+
+### Board & Gallery Management
+
+Invoke features an organized gallery system for easily storing, accessing, and remixing your content in the Invoke workspace. Images can be dragged/dropped onto any image-based UI element in the application, and rich metadata within the image allows for easy recall of key prompts or settings used in your workflow.
+
+### Other features
+
+- Support for both ckpt and diffusers models
+- SD1.5, SD2.0, and SDXL support
+- Upscaling Tools
+- Embedding Manager & Support
+- Model Manager & Support
+- Workflow creation & management
+- Node-Based Architecture
+
+## Troubleshooting, FAQ and Support
+
+Please review our **[FAQ](https://invoke-ai.github.io/InvokeAI/help/FAQ/)** for solutions to common installation problems and other issues.
+
+For more help, please join our [Discord][discord link].
+
+## Contributing
+
+Anyone who wishes to contribute to this project - whether documentation, features, bug fixes, code cleanup, testing, or code reviews - is very much encouraged to do so.
+
+Get started with contributing by reading our [Contribution documentation](https://invoke-ai.github.io/InvokeAI/contributing/CONTRIBUTING/), joining the [#dev-chat](https://discord.com/channels/1020123559063990373/1049495067846524939) or the GitHub discussion board.
+
+We hope you enjoy using Invoke as much as we enjoy creating it, and we hope you will elect to become part of our community.
+
+## Thanks
+
+Invoke is a combined effort of [passionate and talented people from across the world](https://invoke-ai.github.io/InvokeAI/other/CONTRIBUTORS/). We thank them for their time, hard work and effort.
+
+Original portions of the software are Copyright © 2024 by respective contributors.
[CI checks on main badge]: https://flat.badgen.net/github/checks/invoke-ai/InvokeAI/main?label=CI%20status%20on%20main&cache=900&icon=github
[CI checks on main link]:https://github.com/invoke-ai/InvokeAI/actions?query=branch%3Amain
@@ -32,271 +104,4 @@
[latest release badge]: https://flat.badgen.net/github/release/invoke-ai/InvokeAI/development?icon=github
[latest release link]: https://github.com/invoke-ai/InvokeAI/releases
[translation status badge]: https://hosted.weblate.org/widgets/invokeai/-/svg-badge.svg
-[translation status link]: https://hosted.weblate.org/engage/invokeai/
-
-
-
-InvokeAI is a leading creative engine built to empower professionals
-and enthusiasts alike. Generate and create stunning visual media using
-the latest AI-driven technologies. InvokeAI offers an industry leading
-Web Interface, interactive Command Line Interface, and also serves as
-the foundation for multiple commercial products.
-
-**Quick links**: [[How to Install](https://invoke-ai.github.io/InvokeAI/installation/INSTALLATION/)] [Discord Server] [Documentation and Tutorials] [Bug Reports] [Discussion, Ideas & Q&A] [Contributing]
-
-
-
-
-
-
-
-
-
-## Table of Contents
-
-
-**Getting Started**
-1. 🏁 [Quick Start](#quick-start)
-2. 🖥️ [Hardware Requirements](#hardware-requirements)
-
-**More About Invoke**
-1. 🌟 [Features](#features)
-2. 📣 [Latest Changes](#latest-changes)
-3. 🛠️ [Troubleshooting](#troubleshooting)
-
-**Supporting the Project**
-1. 🤝 [Contributing](#contributing)
-2. 👥 [Contributors](#contributors)
-3. 💕 [Support](#support)
-
-## Quick Start
-
-For full installation and upgrade instructions, please see:
-[InvokeAI Installation Overview](https://invoke-ai.github.io/InvokeAI/installation/INSTALLATION/)
-
-If upgrading from version 2.3, please read [Migrating a 2.3 root
-directory to 3.0](#migrating-to-3) first.
-
-### Automatic Installer (suggested for 1st time users)
-
-1. Go to the bottom of the [Latest Release Page](https://github.com/invoke-ai/InvokeAI/releases/latest)
-
-2. Download the .zip file for your OS (Windows/macOS/Linux).
-
-3. Unzip the file.
-
-4. **Windows:** double-click on the `install.bat` script. **macOS:** Open a Terminal window, drag the file `install.sh` from Finder
-into the Terminal, and press return. **Linux:** run `install.sh`.
-
-5. You'll be asked to confirm the location of the folder in which
-to install InvokeAI and its image generation model files. Pick a
-location with at least 15 GB of free disk space. More if you plan on
-installing lots of models.
-
-6. Wait while the installer does its thing. After installing the software,
-the installer will launch a script that lets you configure InvokeAI and
-select a set of starting image generation models.
-
-7. Find the folder that InvokeAI was installed into (it is not the
-same as the unpacked zip file directory!). The default location of this
-folder (if you didn't change it in step 5) is `~/invokeai` on
-Linux/Mac systems, and `C:\Users\YourName\invokeai` on Windows. This directory will contain launcher scripts named `invoke.sh` and `invoke.bat`.
-
-8. On Windows systems, double-click on the `invoke.bat` file. On
-macOS, open a Terminal window, drag `invoke.sh` from the folder into
-the Terminal, and press return. On Linux, run `invoke.sh`
-
-9. Press 1 to open the "browser-based UI", press enter/return, wait a
-minute or two for Stable Diffusion to start up, then open your browser
-and go to http://localhost:9090.
-
-10. Type `banana sushi` in the box on the top left and click `Invoke`
-
-### Command-Line Installation (for developers and users familiar with Terminals)
-
-You must have Python 3.10 or 3.11 installed on your machine. Earlier or
-later versions are not supported.
-Node.js also needs to be installed, along with `pnpm` (which can be installed
-with the command `npm install -g pnpm` if needed).
-
-1. Open a command-line window on your machine. PowerShell is recommended on Windows.
-2. Create a directory to install InvokeAI into. You'll need at least 15 GB of free space:
-
- ```terminal
- mkdir invokeai
- ````
-
-3. Create a virtual environment named `.venv` inside this directory:
-
- ```terminal
- cd invokeai
- python -m venv .venv --prompt InvokeAI
- ```
-
-4. Activate the virtual environment (do it every time you run InvokeAI)
-
- _For Linux/Mac users:_
-
- ```sh
- source .venv/bin/activate
- ```
-
- _For Windows users:_
-
- ```ps
- .venv\Scripts\activate
- ```
-
-5. Install the InvokeAI module and its dependencies. Choose the command suited for your platform & GPU.
-
- _For Windows/Linux with an NVIDIA GPU:_
-
- ```terminal
- pip install "InvokeAI[xformers]" --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu121
- ```
-
- _For Linux with an AMD GPU:_
-
- ```sh
- pip install InvokeAI --use-pep517 --extra-index-url https://download.pytorch.org/whl/rocm5.6
- ```
-
- _For non-GPU systems:_
- ```terminal
- pip install InvokeAI --use-pep517 --extra-index-url https://download.pytorch.org/whl/cpu
- ```
-
- _For Macintoshes, either Intel or M1/M2/M3:_
-
- ```sh
- pip install InvokeAI --use-pep517
- ```
-
-6. Launch the web server (do it every time you run InvokeAI):
-
- ```terminal
- invokeai-web
- ```
-
-7. Point your browser to http://localhost:9090 to bring up the web interface.
-
-8. Type `banana sushi` in the box on the top left and click `Invoke`.
-
-Be sure to activate the virtual environment each time before re-launching InvokeAI,
-using `source .venv/bin/activate` or `.venv\Scripts\activate`.
-
-## Detailed Installation Instructions
-
-This fork is supported across Linux, Windows and Macintosh. Linux
-users can use either an Nvidia-based card (with CUDA support) or an
-AMD card (using the ROCm driver). For full installation and upgrade
-instructions, please see:
-[InvokeAI Installation Overview](https://invoke-ai.github.io/InvokeAI/installation/INSTALL_SOURCE/)
-
-## Hardware Requirements
-
-InvokeAI is supported across Linux, Windows and macOS. Linux
-users can use either an Nvidia-based card (with CUDA support) or an
-AMD card (using the ROCm driver).
-
-### System
-
-You will need one of the following:
-
-- An NVIDIA-based graphics card with 4 GB or more of VRAM. 6-8 GB of VRAM is highly recommended for rendering using the Stable Diffusion XL models.
-- An Apple computer with an M1 chip.
-- An AMD-based graphics card with 4 GB or more of VRAM (Linux only); 6-8 GB for XL rendering.
-
-We do not recommend the GTX 1650 or 1660 series video cards. They are
-unable to run in half-precision mode and do not have sufficient VRAM
-to render 512x512 images.
-
-**Memory** - At least 12 GB of RAM.
-
-**Disk** - At least 12 GB of free disk space for the machine learning model, Python, and all its dependencies.
-
-## Features
-
-Feature documentation can be reviewed by navigating to [the InvokeAI Documentation page](https://invoke-ai.github.io/InvokeAI/features/)
-
-### *Web Server & UI*
-
-InvokeAI offers a locally hosted Web Server & React Frontend, with an industry leading user experience. The Web-based UI allows for simple and intuitive workflows, and is responsive for use on mobile devices and tablets accessing the web server.
-
-### *Unified Canvas*
-
-The Unified Canvas is a fully integrated canvas implementation with support for all core generation capabilities, in/outpainting, brush tools, and more. This creative tool unlocks the capability for artists to create with AI as a creative collaborator, and can be used to augment AI-generated imagery, sketches, photography, renders, and more.
-
-### *Workflows & Nodes*
-
-InvokeAI offers a fully featured workflow management solution, enabling users to combine the power of node-based workflows with the ease of a UI. This allows for customizable generation pipelines to be developed and shared by users looking to create specific workflows to support their production use cases.
-
-### *Board & Gallery Management*
-
-Invoke AI provides an organized gallery system for easily storing, accessing, and remixing your content in the Invoke workspace. Images can be dragged/dropped onto any image-based UI element in the application, and rich metadata within the image allows for easy recall of key prompts or settings used in your workflow.
-
-### Other features
-
-- *Support for both ckpt and diffusers models*
-- *SD1.5, SD2.0, and SDXL support*
-- *Upscaling Tools*
-- *Embedding Manager & Support*
-- *Model Manager & Support*
-- *Workflow creation & management*
-- *Node-Based Architecture*
-
-
-### Latest Changes
-
-For our latest changes, view our [Release
-Notes](https://github.com/invoke-ai/InvokeAI/releases) and the
-[CHANGELOG](docs/CHANGELOG.md).
-
-### Troubleshooting / FAQ
-
-Please check out our **[FAQ](https://invoke-ai.github.io/InvokeAI/help/FAQ/)** to get solutions for common installation
-problems and other issues. For more help, please join our [Discord][discord link].
-
-## Contributing
-
-Anyone who wishes to contribute to this project, whether documentation, features, bug fixes, code
-cleanup, testing, or code reviews, is very much encouraged to do so.
-
-Get started with contributing by reading our [Contribution documentation](https://invoke-ai.github.io/InvokeAI/contributing/CONTRIBUTING/), joining the [#dev-chat](https://discord.com/channels/1020123559063990373/1049495067846524939) or the GitHub discussion board.
-
-If you are unfamiliar with how
-to contribute to GitHub projects, we have a new contributor checklist you can follow to get started contributing:
-[New Contributor Checklist](https://invoke-ai.github.io/InvokeAI/contributing/contribution_guides/newContributorChecklist/).
-
-We hope you enjoy using our software as much as we enjoy creating it,
-and we hope that some of those of you who are reading this will elect
-to become part of our community.
-
-Welcome to InvokeAI!
-
-### Contributors
-
-This fork is a combined effort of various people from across the world.
-[Check out the list of all these amazing people](https://invoke-ai.github.io/InvokeAI/other/CONTRIBUTORS/). We thank them for
-their time, hard work and effort.
-
-### Support
-
-For support, please use this repository's GitHub Issues tracking service, or join the [Discord][discord link].
-
-Original portions of the software are Copyright (c) 2024 by respective contributors.
-
+[translation status link]: https://hosted.weblate.org/engage/invokeai/
\ No newline at end of file
From caa7c0f2bd3748980e251f7c506c3d3592e47d3d Mon Sep 17 00:00:00 2001
From: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date: Thu, 25 Apr 2024 23:41:00 +1000
Subject: [PATCH 160/162] docs: more pruning and tidying readme
---
README.md | 55 ++++++++++++++++++++++++++++++++-----------------------
1 file changed, 32 insertions(+), 23 deletions(-)
diff --git a/README.md b/README.md
index 6ce131493c..f540e7be75 100644
--- a/README.md
+++ b/README.md
@@ -4,15 +4,15 @@
# Invoke - Professional Creative AI Tools for Visual Media
-## To learn more about Invoke, or implement our Business solutions, visit [invoke.com](https://www.invoke.com/about)
-
+#### To learn more about Invoke, or implement our Business solutions, visit [invoke.com]
+
[![discord badge]][discord link] [![latest release badge]][latest release link] [![github stars badge]][github stars link] [![github forks badge]][github forks link] [![CI checks on main badge]][CI checks on main link] [![latest commit to main badge]][latest commit to main link] [![github open issues badge]][github open issues link] [![github open prs badge]][github open prs link] [![translation status badge]][translation status link]
Invoke is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. Invoke offers an industry leading web-based UI, and serves as the foundation for multiple commercial products.
-**Quick links**: [Installation](https://invoke-ai.github.io/InvokeAI/installation/INSTALLATION/) - [Discord](https://discord.gg/ZmtBAhwWhy) - [Documentation and Tutorials](https://invoke-ai.github.io/InvokeAI) - [Bug Reports](https://github.com/invoke-ai/InvokeAI/issues) - [Contributing](https://invoke-ai.github.io/InvokeAI/contributing/CONTRIBUTING/)
+[Installation][installation docs] - [Documentation and Tutorials][docs home] - [Bug Reports][github issues] - [Contributing][contributing docs]
@@ -22,24 +22,30 @@ Invoke is a leading creative engine built to empower professionals and enthusias
## Quick Start
-1. Download and unzip the installer from the bottom of the [latest release](https://github.com/invoke-ai/InvokeAI/releases/latest).
+1. Download and unzip the installer from the bottom of the [latest release][latest release link].
2. Run the installer script.
-- **Windows**: Double-click on the `install.bat` script.
-- **macOS**: Open a Terminal window, drag the file `install.sh` from Finder into the Terminal, and press enter.
-- **Linux**: Run `install.sh`.
+ - **Windows**: Double-click on the `install.bat` script.
+ - **macOS**: Open a Terminal window, drag the file `install.sh` from Finder into the Terminal, and press enter.
+ - **Linux**: Run `install.sh`.
3. When prompted, enter a location for the install and select your GPU type.
4. Once the install finishes, find the directory you selected during install. The default location is `C:\Users\Username\invokeai` for Windows or `~/invokeai` for Linux/macOS.
-6. Run the launcher script (`invoke.bat` for Windows, `invoke.sh` for macOS and Linux) - the same way you ran the installer script in step 2.
-7. Select option 1 to start the application. Once it starts up, open your browser and go to http://localhost:9090.
-8. Open the model manager tab to install a starter model and then you'll be ready to generate.
+5. Run the launcher script (`invoke.bat` for Windows, `invoke.sh` for macOS and Linux) the same way you ran the installer script in step 2.
+6. Select option 1 to start the application. Once it starts up, open your browser and go to http://localhost:9090.
+7. Open the model manager tab to install a starter model and then you'll be ready to generate.
-More detail, including hardware requirements and manual install instructions, is available in the [installation documentation](https://invoke-ai.github.io/InvokeAI/installation/INSTALLATION/).
+More detail, including hardware requirements and manual install instructions, is available in the [installation documentation][installation docs].
+
+## Troubleshooting, FAQ and Support
+
+Please review our [FAQ][faq] for solutions to common installation problems and other issues.
+
+For more help, please join our [Discord][discord link].
## Features
-Full details on features can be found in [our documentation](https://invoke-ai.github.io/InvokeAI/features/).
+Full details on features can be found in [our documentation][features docs].
### Web Server & UI
@@ -67,28 +73,31 @@ Invoke features an organized gallery system for easily storing, accessing, and r
- Workflow creation & management
- Node-Based Architecture
-## Troubleshooting, FAQ and Support
-
-Please review our **[FAQ](https://invoke-ai.github.io/InvokeAI/help/FAQ/)** for solutions to common installation problems and other issues.
-
-For more help, please join our [Discord][discord link].
-
## Contributing
Anyone who wishes to contribute to this project - whether documentation, features, bug fixes, code cleanup, testing, or code reviews - is very much encouraged to do so.
-Get started with contributing by reading our [Contribution documentation](https://invoke-ai.github.io/InvokeAI/contributing/CONTRIBUTING/), joining the [#dev-chat](https://discord.com/channels/1020123559063990373/1049495067846524939) or the GitHub discussion board.
+Get started with contributing by reading our [contribution documentation][contributing docs], joining the [#dev-chat] or the GitHub discussion board.
We hope you enjoy using Invoke as much as we enjoy creating it, and we hope you will elect to become part of our community.
## Thanks
-Invoke is a combined effort of [passionate and talented people from across the world](https://invoke-ai.github.io/InvokeAI/other/CONTRIBUTORS/). We thank them for their time, hard work and effort.
+Invoke is a combined effort of [passionate and talented people from across the world][contributors]. We thank them for their time, hard work and effort.
Original portions of the software are Copyright © 2024 by respective contributors.
+[features docs]: https://invoke-ai.github.io/InvokeAI/features/
+[faq]: https://invoke-ai.github.io/InvokeAI/help/FAQ/
+[contributors]: https://invoke-ai.github.io/InvokeAI/other/CONTRIBUTORS/
+[invoke.com]: https://www.invoke.com/about
+[github issues]: https://github.com/invoke-ai/InvokeAI/issues
+[docs home]: https://invoke-ai.github.io/InvokeAI
+[installation docs]: https://invoke-ai.github.io/InvokeAI/installation/INSTALLATION/
+[#dev-chat]: https://discord.com/channels/1020123559063990373/1049495067846524939
+[contributing docs]: https://invoke-ai.github.io/InvokeAI/contributing/CONTRIBUTING/
[CI checks on main badge]: https://flat.badgen.net/github/checks/invoke-ai/InvokeAI/main?label=CI%20status%20on%20main&cache=900&icon=github
-[CI checks on main link]:https://github.com/invoke-ai/InvokeAI/actions?query=branch%3Amain
+[CI checks on main link]: https://github.com/invoke-ai/InvokeAI/actions?query=branch%3Amain
[discord badge]: https://flat.badgen.net/discord/members/ZmtBAhwWhy?icon=discord
[discord link]: https://discord.gg/ZmtBAhwWhy
[github forks badge]: https://flat.badgen.net/github/forks/invoke-ai/InvokeAI?icon=github
@@ -102,6 +111,6 @@ Original portions of the software are Copyright © 2024 by respective contributo
[latest commit to main badge]: https://flat.badgen.net/github/last-commit/invoke-ai/InvokeAI/main?icon=github&color=yellow&label=last%20dev%20commit&cache=900
[latest commit to main link]: https://github.com/invoke-ai/InvokeAI/commits/main
[latest release badge]: https://flat.badgen.net/github/release/invoke-ai/InvokeAI/development?icon=github
-[latest release link]: https://github.com/invoke-ai/InvokeAI/releases
+[latest release link]: https://github.com/invoke-ai/InvokeAI/releases/latest
[translation status badge]: https://hosted.weblate.org/widgets/invokeai/-/svg-badge.svg
-[translation status link]: https://hosted.weblate.org/engage/invokeai/
\ No newline at end of file
+[translation status link]: https://hosted.weblate.org/engage/invokeai/
From 3595beac1e1453154787d5d56eefab0d9eafe064 Mon Sep 17 00:00:00 2001
From: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date: Fri, 26 Apr 2024 07:30:45 +1000
Subject: [PATCH 161/162] docs: remove references to config script in
CONFIGURATION.md
---
docs/features/CONFIGURATION.md | 6 ++----
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/docs/features/CONFIGURATION.md b/docs/features/CONFIGURATION.md
index 41f7a3ced3..d6bfe44901 100644
--- a/docs/features/CONFIGURATION.md
+++ b/docs/features/CONFIGURATION.md
@@ -51,13 +51,11 @@ The settings in this file will override the defaults. You only need
to change this file if the default for a particular setting doesn't
work for you.
+You'll find an example file next to `invokeai.yaml` that shows the default values.
+
Some settings, like [Model Marketplace API Keys], require the YAML
to be formatted correctly. Here is a [basic guide to YAML files].
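
If an edit breaks the file, one quick (unofficial) way to check that the YAML still parses is to load it with PyYAML; this is only a sketch and assumes the `yaml` package is importable in the active InvokeAI environment.

```sh
# Validate the YAML syntax of a config file (prints OK if it parses).
python -c "import sys, yaml; yaml.safe_load(open(sys.argv[1])); print('OK')" /path/to/invokeai.yaml
```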
-You can fix a broken `invokeai.yaml` by deleting it and running the
-configuration script again -- option [6] in the launcher, "Re-run the
-configure script".
-
#### Custom Config File Location
You can use any config file with the `--config` CLI arg. Pass in the path to the `invokeai.yaml` file you want to use.
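
As a minimal sketch, assuming the `--config` flag is passed to the `invokeai-web` entry point described in the README (the path is illustrative):

```sh
# Start the web server using a config file at a custom location.
invokeai-web --config /path/to/custom/invokeai.yaml
```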
From 241a1fdb57ffb6f7d1b6dc9e0007dd0c523c808a Mon Sep 17 00:00:00 2001
From: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date: Sat, 27 Apr 2024 19:58:46 +1000
Subject: [PATCH 162/162] feat(mm): support sdxl ckpt inpainting models
There are only a couple SDXL inpainting models, and my tests indicate they are not as good as SD1.5 inpainting, but at least we support them now.
- Add the config file. This matches what is used in A1111. The only difference from the non-inpainting SDXL config is the number of in-channels.
- Update the legacy config maps to use this config file.
---
invokeai/backend/model_manager/probe.py | 1 +
.../stable-diffusion/sd_xl_inpaint.yaml | 98 +++++++++++++++++++
2 files changed, 99 insertions(+)
create mode 100644 invokeai/configs/stable-diffusion/sd_xl_inpaint.yaml
diff --git a/invokeai/backend/model_manager/probe.py b/invokeai/backend/model_manager/probe.py
index bf21a7fe7b..8f33e4b49f 100644
--- a/invokeai/backend/model_manager/probe.py
+++ b/invokeai/backend/model_manager/probe.py
@@ -51,6 +51,7 @@ LEGACY_CONFIGS: Dict[BaseModelType, Dict[ModelVariantType, Union[str, Dict[Sched
},
BaseModelType.StableDiffusionXL: {
ModelVariantType.Normal: "sd_xl_base.yaml",
+ ModelVariantType.Inpaint: "sd_xl_inpaint.yaml",
},
BaseModelType.StableDiffusionXLRefiner: {
ModelVariantType.Normal: "sd_xl_refiner.yaml",
diff --git a/invokeai/configs/stable-diffusion/sd_xl_inpaint.yaml b/invokeai/configs/stable-diffusion/sd_xl_inpaint.yaml
new file mode 100644
index 0000000000..eea5c15a49
--- /dev/null
+++ b/invokeai/configs/stable-diffusion/sd_xl_inpaint.yaml
@@ -0,0 +1,98 @@
+model:
+ target: sgm.models.diffusion.DiffusionEngine
+ params:
+ scale_factor: 0.13025
+ disable_first_stage_autocast: True
+
+ denoiser_config:
+ target: sgm.modules.diffusionmodules.denoiser.DiscreteDenoiser
+ params:
+ num_idx: 1000
+
+ weighting_config:
+ target: sgm.modules.diffusionmodules.denoiser_weighting.EpsWeighting
+ scaling_config:
+ target: sgm.modules.diffusionmodules.denoiser_scaling.EpsScaling
+ discretization_config:
+ target: sgm.modules.diffusionmodules.discretizer.LegacyDDPMDiscretization
+
+ network_config:
+ target: sgm.modules.diffusionmodules.openaimodel.UNetModel
+ params:
+ adm_in_channels: 2816
+ num_classes: sequential
+ use_checkpoint: True
+ in_channels: 9
+ out_channels: 4
+ model_channels: 320
+ attention_resolutions: [4, 2]
+ num_res_blocks: 2
+ channel_mult: [1, 2, 4]
+ num_head_channels: 64
+ use_spatial_transformer: True
+ use_linear_in_transformer: True
+ transformer_depth: [1, 2, 10] # note: the first is unused (due to attn_res starting at 2) 32, 16, 8 --> 64, 32, 16
+ context_dim: 2048
+ spatial_transformer_attn_type: softmax-xformers
+ legacy: False
+
+ conditioner_config:
+ target: sgm.modules.GeneralConditioner
+ params:
+ emb_models:
+ # crossattn cond
+ - is_trainable: False
+ input_key: txt
+ target: sgm.modules.encoders.modules.FrozenCLIPEmbedder
+ params:
+ layer: hidden
+ layer_idx: 11
+ # crossattn and vector cond
+ - is_trainable: False
+ input_key: txt
+ target: sgm.modules.encoders.modules.FrozenOpenCLIPEmbedder2
+ params:
+ arch: ViT-bigG-14
+ version: laion2b_s39b_b160k
+ freeze: True
+ layer: penultimate
+ always_return_pooled: True
+ legacy: False
+ # vector cond
+ - is_trainable: False
+ input_key: original_size_as_tuple
+ target: sgm.modules.encoders.modules.ConcatTimestepEmbedderND
+ params:
+ outdim: 256 # multiplied by two
+ # vector cond
+ - is_trainable: False
+ input_key: crop_coords_top_left
+ target: sgm.modules.encoders.modules.ConcatTimestepEmbedderND
+ params:
+ outdim: 256 # multiplied by two
+ # vector cond
+ - is_trainable: False
+ input_key: target_size_as_tuple
+ target: sgm.modules.encoders.modules.ConcatTimestepEmbedderND
+ params:
+ outdim: 256 # multiplied by two
+
+ first_stage_config:
+ target: sgm.models.autoencoder.AutoencoderKLInferenceWrapper
+ params:
+ embed_dim: 4
+ monitor: val/rec_loss
+ ddconfig:
+ attn_type: vanilla-xformers
+ double_z: true
+ z_channels: 4
+ resolution: 256
+ in_channels: 3
+ out_ch: 3
+ ch: 128
+ ch_mult: [1, 2, 4, 4]
+ num_res_blocks: 2
+ attn_resolutions: []
+ dropout: 0.0
+ lossconfig:
+ target: torch.nn.Identity
\ No newline at end of file