Compare commits


18 Commits

Author SHA1 Message Date
Lincoln Stein
5057beddf5 bump rc# 2022-12-30 12:53:25 +00:00
Lincoln Stein
ade9bbe185 rebuild frontend 2022-12-30 12:52:43 +00:00
Lincoln Stein
83df5c211c create_installer now adds version number 2022-12-29 14:37:01 +00:00
Lincoln Stein
75f07dd22e Merge branch 'main' into lstein-release-candidate-2-2-5 2022-12-29 09:01:08 -05:00
Lincoln Stein
060eff5dad bump rc version 2022-12-28 20:37:36 -05:00
Lincoln Stein
5d00831f71 Merge branch 'main' into lstein-release-candidate-2-2-5 2022-12-28 20:33:39 -05:00
Lincoln Stein
d74ed7e974 bring installers up to date with 2.2.5-rc2 2022-12-29 01:21:55 +00:00
Lincoln Stein
451750229d model_cache applies rootdir to config path 2022-12-28 17:59:53 +00:00
Lincoln Stein
080fe48106 Merge branch 'lstein-release-candidate-2-2-5' of github.com:invoke-ai/InvokeAI into lstein-release-candidate-2-2-5 2022-12-28 17:59:15 +00:00
Lincoln Stein
ff0eb56c96 remove extraneous whitespace 2022-12-28 13:44:29 +00:00
Eugene Brodsky
006123aa32 rc2.2.5 (install.sh) relative path fixes (#2155)
* (installer) fix bug in resolution of relative paths in linux install script

point installer at 2.2.5-rc1

selecting ~/Data/myapps/ as the location would create a ./~/Data/myapps
directory instead of expanding the ~/ to the value of ${HOME}
(a sketch of the fix follows the commit list)

also, squash the trailing slash in the path, if one was entered by the user

* (installer) add option to automatically start the app after install

also: when exiting, print the command to get back into the app
2022-12-28 08:00:35 -05:00
Lincoln Stein
540da32bd5 give Linux user option of installing ROCm or CUDA 2022-12-26 02:37:16 +00:00
Lincoln Stein
aa084b205f Merge branch 'main' into lstein-release-candidate-2-2-5 2022-12-25 19:02:01 -05:00
Lincoln Stein
49f97f994a fix permissions on create_installer.sh 2022-12-25 19:00:41 -05:00
Lincoln Stein
211d7be03d bump version number 2022-12-25 18:28:06 -05:00
Lincoln Stein
7d99416cc9 update pulls from "latest" now 2022-12-25 23:11:35 +00:00
Lincoln Stein
f60bf9e1e6 update.bat.in debugged and working 2022-12-25 18:13:06 +00:00
Lincoln Stein
fce7b5466a installer tweaks in preparation for v2.2.5
- pin numpy to 1.23.* to avoid requirements conflict with numba
- update.sh and update.bat now accept a tag or branch string, not a URL
- update scripts download latest requirements-base before updating.
2022-12-25 17:36:59 +00:00
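
A note on the path bug fixed in 006123aa32 above: the shell only performs tilde expansion on unquoted literals, so a `~/...` string read from user input is taken verbatim and ends up creating a relative `./~/...` directory. Below is a minimal Python sketch of the same normalization the installer needs; the helper name is hypothetical, not code from install.sh:

```python
import os

def normalize_install_dir(raw: str) -> str:
    """Hypothetical helper mirroring the fix: expand a leading ~ and
    squash a trailing slash, so "~/Data/myapps/" becomes
    "$HOME/Data/myapps" rather than a literal "./~/Data/myapps"."""
    path = os.path.expanduser(raw)  # "~/Data/myapps/" -> "/home/user/Data/myapps/"
    return path.rstrip("/") or "/"  # drop trailing slash; keep a bare "/" intact

print(normalize_install_dir("~/Data/myapps/"))  # e.g. /home/user/Data/myapps
```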
779 changed files with 7219 additions and 18434 deletions


@@ -1,20 +1,19 @@
# use this file as a whitelist
*
!backend
!invokeai
!environments-and-requirements
!frontend
!ldm
!pyproject.toml
!README.md
!main.py
!scripts
!server
!static
!setup.py
# Guard against pulling in any models that might exist in the directory tree
**/*.pt*
**/*.ckpt
# whitelist frontend, but ignore node_modules
invokeai/frontend/node_modules
# unignore configs, but only ignore the custom models.yaml, in case it exists
!configs
configs/models.yaml
# ignore python cache
**/__pycache__
**/*.py[cod]
**/*.egg-info

.github/CODEOWNERS

@@ -2,6 +2,6 @@ ldm/invoke/pngwriter.py @CapableWeb
ldm/invoke/server_legacy.py @CapableWeb
scripts/legacy_api.py @CapableWeb
tests/legacy_tests.sh @CapableWeb
installer/ @ebr
installer/ @tildebyte
.github/workflows/ @mauwii
docker/ @mauwii
docker_build/ @mauwii

.github/workflows/build-cloud-img.yml (new file)

@@ -0,0 +1,87 @@
name: Build and push cloud image
on:
workflow_dispatch:
# push:
# branches:
# - main
# tags:
# - v*
# # we will NOT push the image on pull requests, only test buildability.
# pull_request:
# branches:
# - main
permissions:
contents: read
packages: write
env:
REGISTRY: ghcr.io
IMAGE_NAME: ${{ github.repository }}
jobs:
docker:
strategy:
fail-fast: false
matrix:
arch:
- x86_64
# requires resolving a patchmatch issue
# - aarch64
runs-on: ubuntu-latest
name: ${{ matrix.arch }}
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Set up QEMU
uses: docker/setup-qemu-action@v2
if: matrix.arch == 'aarch64'
- name: Docker meta
id: meta
uses: docker/metadata-action@v4
with:
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
# see https://github.com/docker/metadata-action
# will push the following tags:
# :edge
# :main (+ any other branches enabled in the workflow)
# :<tag>
# :1.2.3 (for semver tags)
# :1.2 (for semver tags)
# :<sha>
tags: |
type=edge,branch=main
type=ref,event=branch
type=ref,event=tag
type=semver,pattern={{version}}
type=semver,pattern={{major}}.{{minor}}
type=sha
# suffix image tags with architecture
flavor: |
latest=auto
suffix=-${{ matrix.arch }},latest=true
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
# do not login to container registry on PRs
- if: github.event_name != 'pull_request'
name: Docker login
uses: docker/login-action@v2
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push cloud image
uses: docker/build-push-action@v3
with:
context: .
file: docker-build/Dockerfile.cloud
platforms: Linux/${{ matrix.arch }}
# do not push the image on PRs
push: false
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
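
The commented tag list in the metadata-action step above can be made concrete with a small sketch. The following Python is illustrative only (it is not the action's implementation, and the short SHA is a made-up placeholder); it spells out how the `suffix=-${{ matrix.arch }}` flavor decorates every generated tag for a push to main:

```python
# Illustrative only: mimic the tag list docker/metadata-action would emit
# for a push to "main" with matrix.arch == "x86_64" under the config above.
registry = "ghcr.io"
image = f"{registry}/invoke-ai/InvokeAI".lower()   # Docker tags require lowercase
arch = "x86_64"

base_tags = [
    "edge",         # type=edge,branch=main
    "main",         # type=ref,event=branch
    "sha-0123abc",  # type=sha (short commit hash; placeholder value)
]
# flavor "suffix=-<arch>,latest=true" appends the arch to every tag, including latest
print([f"{image}:{t}-{arch}" for t in base_tags])
```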


@@ -3,67 +3,63 @@ on:
push:
branches:
- 'main'
tags:
- 'v*.*.*'
jobs:
docker:
if: github.event.pull_request.draft == false
strategy:
fail-fast: false
matrix:
registry:
- ghcr.io
flavor:
- amd
- cuda
- cpu
# - cloud
include:
- flavor: amd
pip-extra-index-url: 'https://download.pytorch.org/whl/rocm5.2'
dockerfile: docker/Dockerfile
pip-requirements: requirements-lin-amd.txt
dockerfile: docker-build/Dockerfile
platforms: linux/amd64,linux/arm64
- flavor: cuda
pip-extra-index-url: ''
dockerfile: docker/Dockerfile
platforms: linux/amd64,linux/arm64
- flavor: cpu
pip-extra-index-url: 'https://download.pytorch.org/whl/cpu'
dockerfile: docker/Dockerfile
pip-requirements: requirements-lin-cuda.txt
dockerfile: docker-build/Dockerfile
platforms: linux/amd64,linux/arm64
# - flavor: cloud
# pip-requirements: requirements-lin-cuda.txt
# dockerfile: docker-build/Dockerfile.cloud
# platforms: linux/amd64
runs-on: ubuntu-latest
name: ${{ matrix.flavor }}
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Set up QEMU
uses: docker/setup-qemu-action@v2
- name: Docker meta
id: meta
uses: docker/metadata-action@v4
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
images: ghcr.io/${{ github.repository }}
images: ${{ matrix.registry }}/${{ github.repository }}-${{ matrix.flavor }}
tags: |
type=ref,event=branch
type=ref,event=tag
type=semver,pattern={{version}}
type=semver,pattern={{major}}.{{minor}}
type=semver,pattern={{major}}
type=sha
flavor: |
latest=${{ matrix.flavor == 'cuda' && github.ref == 'refs/heads/main' }}
suffix=${{ matrix.flavor }},onlatest=false
- name: Set up QEMU
uses: docker/setup-qemu-action@v2
latest=true
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: Login to GitHub Container Registry
if: github.event_name != 'pull_request'
- if: github.event_name != 'pull_request'
name: Docker login
uses: docker/login-action@v2
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
registry: ${{ matrix.registry }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build container
@@ -75,16 +71,4 @@ jobs:
push: ${{ github.event_name != 'pull_request' }}
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
build-args: PIP_EXTRA_INDEX_URL=${{ matrix.pip-extra-index-url }}
cache-from: type=gha
cache-to: type=gha,mode=max
- name: Output image, digest and metadata to summary
run: |
{
echo imageid: "${{ steps.docker_build.outputs.imageid }}"
echo digest: "${{ steps.docker_build.outputs.digest }}"
echo labels: "${{ steps.meta.outputs.labels }}"
echo tags: "${{ steps.meta.outputs.tags }}"
echo version: "${{ steps.meta.outputs.version }}"
} >> "$GITHUB_STEP_SUMMARY"
build-args: pip_requirements=${{ matrix.pip-requirements }}
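
The removed summary step above uses a standard Actions facility: text appended to the file named by the GITHUB_STEP_SUMMARY environment variable is rendered as Markdown on the workflow run page. A minimal Python equivalent, as a hedged sketch (the values written are placeholders):

```python
import os

# GITHUB_STEP_SUMMARY is set by the Actions runner and names a file whose
# contents are rendered as Markdown on the workflow run page.
summary_path = os.environ["GITHUB_STEP_SUMMARY"]
with open(summary_path, "a") as fh:
    fh.write("### Build results\n")
    fh.write("- imageid: `sha256:...`\n")  # placeholder values
    fh.write("- digest: `sha256:...`\n")
```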


@@ -1,34 +0,0 @@
name: cleanup caches by a branch
on:
pull_request:
types:
- closed
workflow_dispatch:
jobs:
cleanup:
runs-on: ubuntu-latest
steps:
- name: Check out code
uses: actions/checkout@v3
- name: Cleanup
run: |
gh extension install actions/gh-actions-cache
REPO=${{ github.repository }}
BRANCH=${{ github.ref }}
echo "Fetching list of cache key"
cacheKeysForPR=$(gh actions-cache list -R $REPO -B $BRANCH | cut -f 1 )
## Setting this to not fail the workflow while deleting cache keys.
set +e
echo "Deleting caches..."
for cacheKey in $cacheKeysForPR
do
gh actions-cache delete $cacheKey -R $REPO -B $BRANCH --confirm
done
echo "Done"
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}


@@ -3,18 +3,17 @@ name: Lint frontend
on:
pull_request:
paths:
- 'invokeai/frontend/**'
- 'frontend/**'
push:
paths:
- 'invokeai/frontend/**'
- 'frontend/**'
defaults:
run:
working-directory: invokeai/frontend
working-directory: frontend
jobs:
lint-frontend:
if: github.event.pull_request.draft == false
runs-on: ubuntu-22.04
steps:
- name: Setup Node 18


@@ -7,7 +7,6 @@ on:
jobs:
mkdocs-material:
if: github.event.pull_request.draft == false
runs-on: ubuntu-latest
steps:
- name: checkout sources


@@ -9,7 +9,6 @@ on:
jobs:
pyflakes:
name: runner / pyflakes
if: github.event.pull_request.draft == false
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2

.github/workflows/test-invoke-conda.yml (new file)

@@ -0,0 +1,161 @@
name: Test invoke.py
on:
push:
branches:
- 'main'
pull_request:
branches:
- 'main'
types:
- 'ready_for_review'
- 'opened'
- 'synchronize'
- 'converted_to_draft'
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
fail_if_pull_request_is_draft:
if: github.event.pull_request.draft == true
runs-on: ubuntu-22.04
steps:
- name: Fails in order to indicate that pull request needs to be marked as ready to review and unit tests workflow needs to pass.
run: exit 1
matrix:
if: github.event.pull_request.draft == false
strategy:
matrix:
stable-diffusion-model:
- 'stable-diffusion-1.5'
environment-yaml:
- environment-lin-amd.yml
- environment-lin-cuda.yml
- environment-mac.yml
- environment-win-cuda.yml
include:
- environment-yaml: environment-lin-amd.yml
os: ubuntu-22.04
curl-command: curl
github-env: $GITHUB_ENV
default-shell: bash -l {0}
- environment-yaml: environment-lin-cuda.yml
os: ubuntu-22.04
curl-command: curl
github-env: $GITHUB_ENV
default-shell: bash -l {0}
- environment-yaml: environment-mac.yml
os: macos-12
curl-command: curl
github-env: $GITHUB_ENV
default-shell: bash -l {0}
- environment-yaml: environment-win-cuda.yml
os: windows-2022
curl-command: curl.exe
github-env: $env:GITHUB_ENV
default-shell: pwsh
- stable-diffusion-model: stable-diffusion-1.5
stable-diffusion-model-url: https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt
stable-diffusion-model-dl-path: models/ldm/stable-diffusion-v1
stable-diffusion-model-dl-name: v1-5-pruned-emaonly.ckpt
name: ${{ matrix.environment-yaml }} on ${{ matrix.os }}
runs-on: ${{ matrix.os }}
env:
CONDA_ENV_NAME: invokeai
INVOKEAI_ROOT: '${{ github.workspace }}/invokeai'
defaults:
run:
shell: ${{ matrix.default-shell }}
steps:
- name: Checkout sources
id: checkout-sources
uses: actions/checkout@v3
- name: create models.yaml from example
run: |
mkdir -p ${{ env.INVOKEAI_ROOT }}/configs
cp configs/models.yaml.example ${{ env.INVOKEAI_ROOT }}/configs/models.yaml
- name: create environment.yml
run: cp "environments-and-requirements/${{ matrix.environment-yaml }}" environment.yml
- name: Use cached conda packages
id: use-cached-conda-packages
uses: actions/cache@v3
with:
path: ~/conda_pkgs_dir
key: conda-pkgs-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles(matrix.environment-yaml) }}
- name: Activate Conda Env
id: activate-conda-env
uses: conda-incubator/setup-miniconda@v2
with:
activate-environment: ${{ env.CONDA_ENV_NAME }}
environment-file: environment.yml
miniconda-version: latest
- name: set test prompt to main branch validation
if: ${{ github.ref == 'refs/heads/main' }}
run: echo "TEST_PROMPTS=tests/preflight_prompts.txt" >> ${{ matrix.github-env }}
- name: set test prompt to development branch validation
if: ${{ github.ref == 'refs/heads/development' }}
run: echo "TEST_PROMPTS=tests/dev_prompts.txt" >> ${{ matrix.github-env }}
- name: set test prompt to Pull Request validation
if: ${{ github.ref != 'refs/heads/main' && github.ref != 'refs/heads/development' }}
run: echo "TEST_PROMPTS=tests/validate_pr_prompt.txt" >> ${{ matrix.github-env }}
- name: Use Cached Stable Diffusion Model
id: cache-sd-model
uses: actions/cache@v3
env:
cache-name: cache-${{ matrix.stable-diffusion-model }}
with:
path: ${{ env.INVOKEAI_ROOT }}/${{ matrix.stable-diffusion-model-dl-path }}
key: ${{ env.cache-name }}
- name: Download ${{ matrix.stable-diffusion-model }}
id: download-stable-diffusion-model
if: ${{ steps.cache-sd-model.outputs.cache-hit != 'true' }}
run: |
mkdir -p "${{ env.INVOKEAI_ROOT }}/${{ matrix.stable-diffusion-model-dl-path }}"
${{ matrix.curl-command }} -H "Authorization: Bearer ${{ secrets.HUGGINGFACE_TOKEN }}" -o "${{ env.INVOKEAI_ROOT }}/${{ matrix.stable-diffusion-model-dl-path }}/${{ matrix.stable-diffusion-model-dl-name }}" -L ${{ matrix.stable-diffusion-model-url }}
- name: run configure_invokeai.py
id: run-preload-models
run: |
python scripts/configure_invokeai.py --skip-sd-weights --yes
- name: cat invokeai.init
id: cat-invokeai
run: cat ${{ env.INVOKEAI_ROOT }}/invokeai.init
- name: Run the tests
id: run-tests
if: matrix.os != 'windows-2022'
run: |
time python scripts/invoke.py \
--no-patchmatch \
--no-nsfw_checker \
--model ${{ matrix.stable-diffusion-model }} \
--from_file ${{ env.TEST_PROMPTS }} \
--root="${{ env.INVOKEAI_ROOT }}" \
--outdir="${{ env.INVOKEAI_ROOT }}/outputs"
- name: export conda env
id: export-conda-env
if: matrix.os != 'windows-2022'
run: |
mkdir -p outputs/img-samples
conda env export --name ${{ env.CONDA_ENV_NAME }} > ${{ env.INVOKEAI_ROOT }}/outputs/environment-${{ runner.os }}-${{ runner.arch }}.yml
- name: Archive results
if: matrix.os != 'windows-2022'
id: archive-results
uses: actions/upload-artifact@v3
with:
name: results_${{ matrix.requirements-file }}_${{ matrix.python-version }}
path: ${{ env.INVOKEAI_ROOT }}/outputs


@@ -4,143 +4,141 @@ on:
branches:
- 'main'
pull_request:
branches:
- 'main'
types:
- 'ready_for_review'
- 'opened'
- 'synchronize'
- 'converted_to_draft'
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
fail_if_pull_request_is_draft:
if: github.event.pull_request.draft == true
runs-on: ubuntu-18.04
steps:
- name: Fails in order to indicate that pull request needs to be marked as ready to review and unit tests workflow needs to pass.
run: exit 1
matrix:
if: github.event.pull_request.draft == false
strategy:
matrix:
stable-diffusion-model:
- stable-diffusion-1.5
requirements-file:
- requirements-lin-cuda.txt
- requirements-lin-amd.txt
- requirements-mac-mps-cpu.txt
- requirements-win-colab-cuda.txt
python-version:
# - '3.9'
- '3.10'
pytorch:
# - linux-cuda-11_6
- linux-cuda-11_7
- linux-rocm-5_2
- linux-cpu
- macos-default
- windows-cpu
# - windows-cuda-11_6
# - windows-cuda-11_7
include:
# - pytorch: linux-cuda-11_6
# os: ubuntu-22.04
# extra-index-url: 'https://download.pytorch.org/whl/cu116'
# github-env: $GITHUB_ENV
- pytorch: linux-cuda-11_7
- requirements-file: requirements-lin-cuda.txt
os: ubuntu-22.04
curl-command: curl
github-env: $GITHUB_ENV
- pytorch: linux-rocm-5_2
- requirements-file: requirements-lin-amd.txt
os: ubuntu-22.04
extra-index-url: 'https://download.pytorch.org/whl/rocm5.2'
curl-command: curl
github-env: $GITHUB_ENV
- pytorch: linux-cpu
os: ubuntu-22.04
extra-index-url: 'https://download.pytorch.org/whl/cpu'
github-env: $GITHUB_ENV
- pytorch: macos-default
- requirements-file: requirements-mac-mps-cpu.txt
os: macOS-12
curl-command: curl
github-env: $GITHUB_ENV
- pytorch: windows-cpu
- requirements-file: requirements-win-colab-cuda.txt
os: windows-2022
curl-command: curl.exe
github-env: $env:GITHUB_ENV
# - pytorch: windows-cuda-11_6
# os: windows-2022
# extra-index-url: 'https://download.pytorch.org/whl/cu116'
# github-env: $env:GITHUB_ENV
# - pytorch: windows-cuda-11_7
# os: windows-2022
# extra-index-url: 'https://download.pytorch.org/whl/cu117'
# github-env: $env:GITHUB_ENV
name: ${{ matrix.pytorch }} on ${{ matrix.python-version }}
- stable-diffusion-model: stable-diffusion-1.5
stable-diffusion-model-url: https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt
stable-diffusion-model-dl-path: models/ldm/stable-diffusion-v1
stable-diffusion-model-dl-name: v1-5-pruned-emaonly.ckpt
name: ${{ matrix.requirements-file }} on ${{ matrix.python-version }}
runs-on: ${{ matrix.os }}
steps:
- name: Checkout sources
id: checkout-sources
uses: actions/checkout@v3
- name: setup python
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
- name: Set Cache-Directory Windows
if: runner.os == 'Windows'
id: set-cache-dir-windows
- name: set INVOKEAI_ROOT Windows
if: matrix.os == 'windows-2022'
run: |
echo "CACHE_DIR=$HOME\invokeai\models" >> ${{ matrix.github-env }}
echo "PIP_NO_CACHE_DIR=1" >> ${{ matrix.github-env }}
echo "INVOKEAI_ROOT=${{ github.workspace }}\invokeai" >> ${{ matrix.github-env }}
echo "INVOKEAI_OUTDIR=${{ github.workspace }}\invokeai\outputs" >> ${{ matrix.github-env }}
- name: Set Cache-Directory others
if: runner.os != 'Windows'
id: set-cache-dir-others
run: echo "CACHE_DIR=$HOME/invokeai/models" >> ${{ matrix.github-env }}
- name: set INVOKEAI_ROOT others
if: matrix.os != 'windows-2022'
run: |
echo "INVOKEAI_ROOT=${{ github.workspace }}/invokeai" >> ${{ matrix.github-env }}
echo "INVOKEAI_OUTDIR=${{ github.workspace }}/invokeai/outputs" >> ${{ matrix.github-env }}
- name: create models.yaml from example
run: |
mkdir -p ${{ env.INVOKEAI_ROOT }}/configs
cp configs/models.yaml.example ${{ env.INVOKEAI_ROOT }}/configs/models.yaml
- name: set test prompt to main branch validation
if: ${{ github.ref == 'refs/heads/main' }}
run: echo "TEST_PROMPTS=tests/preflight_prompts.txt" >> ${{ matrix.github-env }}
- name: set test prompt to development branch validation
if: ${{ github.ref == 'refs/heads/development' }}
run: echo "TEST_PROMPTS=tests/dev_prompts.txt" >> ${{ matrix.github-env }}
- name: set test prompt to Pull Request validation
if: ${{ github.ref != 'refs/heads/main' }}
if: ${{ github.ref != 'refs/heads/main' && github.ref != 'refs/heads/development' }}
run: echo "TEST_PROMPTS=tests/validate_pr_prompt.txt" >> ${{ matrix.github-env }}
- name: install invokeai
env:
PIP_EXTRA_INDEX_URL: ${{ matrix.extra-index-url }}
run: >
pip3 install
--use-pep517
--editable=".[test]"
- name: create requirements.txt
run: cp 'environments-and-requirements/${{ matrix.requirements-file }}' '${{ matrix.requirements-file }}'
- name: run pytest
run: pytest
- name: setup python
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
# cache: 'pip'
# cache-dependency-path: ${{ matrix.requirements-file }}
- name: Use Cached models
- name: install dependencies
run: pip3 install --upgrade pip setuptools wheel
- name: install requirements
run: pip3 install -r '${{ matrix.requirements-file }}'
- name: Use Cached Stable Diffusion Model
id: cache-sd-model
uses: actions/cache@v3
env:
cache-name: huggingface-models
cache-name: cache-${{ matrix.stable-diffusion-model }}
with:
path: ${{ env.CACHE_DIR }}
path: ${{ env.INVOKEAI_ROOT }}/${{ matrix.stable-diffusion-model-dl-path }}
key: ${{ env.cache-name }}
enableCrossOsArchive: true
- name: run invokeai-configure
- name: Download ${{ matrix.stable-diffusion-model }}
id: download-stable-diffusion-model
if: ${{ steps.cache-sd-model.outputs.cache-hit != 'true' }}
run: |
mkdir -p "${{ env.INVOKEAI_ROOT }}/${{ matrix.stable-diffusion-model-dl-path }}"
${{ matrix.curl-command }} -H "Authorization: Bearer ${{ secrets.HUGGINGFACE_TOKEN }}" -o "${{ env.INVOKEAI_ROOT }}/${{ matrix.stable-diffusion-model-dl-path }}/${{ matrix.stable-diffusion-model-dl-name }}" -L ${{ matrix.stable-diffusion-model-url }}
- name: run configure_invokeai.py
id: run-preload-models
env:
HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGINGFACE_TOKEN }}
run: >
invokeai-configure
--yes
--default_only
--full-precision
# can't use fp16 weights without a GPU
run: python3 scripts/configure_invokeai.py --skip-sd-weights --yes
- name: Run the tests
if: runner.os != 'Windows'
id: run-tests
env:
# Set offline mode to make sure configure preloaded successfully.
HF_HUB_OFFLINE: 1
HF_DATASETS_OFFLINE: 1
TRANSFORMERS_OFFLINE: 1
run: >
invokeai
--no-patchmatch
--no-nsfw_checker
--from_file ${{ env.TEST_PROMPTS }}
if: matrix.os != 'windows-2022'
run: python3 scripts/invoke.py --no-patchmatch --no-nsfw_checker --model ${{ matrix.stable-diffusion-model }} --from_file ${{ env.TEST_PROMPTS }} --root="${{ env.INVOKEAI_ROOT }}" --outdir="${{ env.INVOKEAI_OUTDIR }}"
- name: Archive results
id: archive-results
if: matrix.os != 'windows-2022'
uses: actions/upload-artifact@v3
with:
name: results_${{ matrix.pytorch }}_${{ matrix.python-version }}
name: results_${{ matrix.requirements-file }}_${{ matrix.python-version }}
path: ${{ env.INVOKEAI_ROOT }}/outputs

.gitignore

@@ -1,5 +1,4 @@
# ignore default image save location and model symbolic link
embeddings/
outputs/
models/ldm/stable-diffusion-v1/model.ckpt
**/restoration/codeformer/weights
@@ -72,7 +71,6 @@ coverage.xml
.hypothesis/
.pytest_cache/
cover/
junit/
# Translations
*.mo
@@ -196,7 +194,7 @@ checkpoints
.DS_Store
# Let the frontend manage its own gitignore
!invokeai/frontend/*
!frontend/*
# Scratch folder
.scratch/
@@ -231,5 +229,8 @@ installer/install.sh
installer/update.bat
installer/update.sh
# this may be present if the user created a venv
invokeai
# no longer stored in source directory
models

README.md

@@ -1,6 +1,6 @@
<div align="center">
![project logo](https://github.com/mauwii/InvokeAI/raw/main/docs/assets/invoke_ai_banner.png)
![project logo](docs/assets/invoke_ai_banner.png)
# InvokeAI: A Stable Diffusion Toolkit
@@ -8,10 +8,12 @@
[![latest release badge]][latest release link] [![github stars badge]][github stars link] [![github forks badge]][github forks link]
[![CI checks on main badge]][CI checks on main link] [![latest commit to main badge]][latest commit to main link]
[![CI checks on main badge]][CI checks on main link] [![CI checks on dev badge]][CI checks on dev link] [![latest commit to dev badge]][latest commit to dev link]
[![github open issues badge]][github open issues link] [![github open prs badge]][github open prs link]
[CI checks on dev badge]: https://flat.badgen.net/github/checks/invoke-ai/InvokeAI/development?label=CI%20status%20on%20dev&cache=900&icon=github
[CI checks on dev link]: https://github.com/invoke-ai/InvokeAI/actions?query=branch%3Adevelopment
[CI checks on main badge]: https://flat.badgen.net/github/checks/invoke-ai/InvokeAI/main?label=CI%20status%20on%20main&cache=900&icon=github
[CI checks on main link]: https://github.com/invoke-ai/InvokeAI/actions/workflows/test-invoke-conda.yml
[discord badge]: https://flat.badgen.net/discord/members/ZmtBAhwWhy?icon=discord
@@ -24,14 +26,19 @@
[github open prs link]: https://github.com/invoke-ai/InvokeAI/pulls?q=is%3Apr+is%3Aopen
[github stars badge]: https://flat.badgen.net/github/stars/invoke-ai/InvokeAI?icon=github
[github stars link]: https://github.com/invoke-ai/InvokeAI/stargazers
[latest commit to main badge]: https://flat.badgen.net/github/last-commit/invoke-ai/InvokeAI/main?icon=github&color=yellow&label=last%20dev%20commit&cache=900
[latest commit to main link]: https://github.com/invoke-ai/InvokeAI/commits/main
[latest commit to dev badge]: https://flat.badgen.net/github/last-commit/invoke-ai/InvokeAI/development?icon=github&color=yellow&label=last%20dev%20commit&cache=900
[latest commit to dev link]: https://github.com/invoke-ai/InvokeAI/commits/development
[latest release badge]: https://flat.badgen.net/github/release/invoke-ai/InvokeAI/development?icon=github
[latest release link]: https://github.com/invoke-ai/InvokeAI/releases
</div>
InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry leading Web Interface, interactive Command Line Interface, and also serves as the foundation for multiple commercial products.
This is a fork of
[CompVis/stable-diffusion](https://github.com/CompVis/stable-diffusion),
the open source text-to-image generator. It provides a streamlined
process with various new features and options to aid the image
generation process. It runs on Windows, macOS and Linux machines, with
GPU cards with as little as 4 GB of RAM. It provides both a polished
Web interface (see below), and an easy-to-use command-line interface.
**Quick links**: [[How to Install](#installation)] [<a href="https://discord.gg/ZmtBAhwWhy">Discord Server</a>] [<a href="https://invoke-ai.github.io/InvokeAI/">Documentation and Tutorials</a>] [<a href="https://github.com/invoke-ai/InvokeAI/">Code and Downloads</a>] [<a href="https://github.com/invoke-ai/InvokeAI/issues">Bug Reports</a>] [<a href="https://github.com/invoke-ai/InvokeAI/discussions">Discussion, Ideas & Q&A</a>]
@@ -39,12 +46,6 @@ _Note: InvokeAI is rapidly evolving. Please use the
[Issues](https://github.com/invoke-ai/InvokeAI/issues) tab to report bugs and make feature
requests. Be sure to use the provided templates. They will help us diagnose issues faster._
<div align="center">
![canvas preview](https://github.com/mauwii/InvokeAI/raw/main/docs/assets/canvas_preview.png)
</div>
# Getting Started with InvokeAI
For full installation and upgrade instructions, please see:
@@ -57,7 +58,10 @@ For full installation and upgrade instructions, please see:
5. Wait a while, until it is done.
6. The folder where you ran the installer from will now be filled with lots of files. If you are on Windows, double-click on the `invoke.bat` file. On macOS, open a Terminal window, drag `invoke.sh` from the folder into the Terminal, and press return. On Linux, run `invoke.sh`
7. Press 2 to open the "browser-based UI", press enter/return, wait a minute or two for Stable Diffusion to start up, then open your browser and go to http://localhost:9090.
8. Type `banana sushi` in the box on the top left and click `Invoke`
8. Type `banana sushi` in the box on the top left and click `Invoke`:
<div align="center"><img src="docs/assets/invoke-web-server-1.png" width=640></div>
## Table of Contents
@@ -72,7 +76,7 @@ For full installation and upgrade instructions, please see:
8. [Support](#support)
9. [Further Reading](#further-reading)
## Installation
### Installation
This fork is supported across Linux, Windows and Macintosh. Linux
users can use either an Nvidia-based card (with CUDA support) or an
@@ -85,10 +89,9 @@ instructions, please see:
InvokeAI is supported across Linux, Windows and macOS. Linux
users can use either an Nvidia-based card (with CUDA support) or an
AMD card (using the ROCm driver).
#### System
You will need one of the following:
You wil need one of the following:
- An NVIDIA-based graphics card with 4 GB or more VRAM memory.
- An Apple computer with an M1 chip.
@@ -105,48 +108,52 @@ to render 512x512 images.
- At least 12 GB of free disk space for the machine learning model, Python, and all its dependencies.
## Features
**Note**
Feature documentation can be reviewed by navigating to [the InvokeAI Documentation page](https://invoke-ai.github.io/InvokeAI/features/)
If you have a Nvidia 10xx series card (e.g. the 1080ti), please
run the dream script in full-precision mode as shown below.
### *Web Server & UI*
Similarly, specify full-precision mode on Apple M1 hardware.
InvokeAI offers a locally hosted Web Server & React Frontend, with an industry leading user experience. The Web-based UI allows for simple and intuitive workflows, and is responsive for use on mobile devices and tablets accessing the web server.
Precision is auto-configured based on the device. If, however, you encounter
errors like 'expected type Float but found Half' or 'not implemented for Half',
you can try starting `invoke.py` with the `--precision=float32` flag:
### *Unified Canvas*
```bash
(invokeai) ~/InvokeAI$ python scripts/invoke.py --precision=float32
```
Or by updating your InvokeAI configuration file with this argument.
The Unified Canvas is a fully integrated canvas implementation with support for all core generation capabilities, in/outpainting, brush tools, and more. This creative tool unlocks the capability for artists to create with AI as a creative collaborator, and can be used to augment AI-generated imagery, sketches, photography, renders, and more.
### Features
### *Advanced Prompt Syntax*
#### Major Features
InvokeAI's advanced prompt syntax allows for token weighting, cross-attention control, and prompt blending, allowing for fine-tuned tweaking of your invocations and exploration of the latent space.
- [Web Server](https://invoke-ai.github.io/InvokeAI/features/WEB/)
- [Interactive Command Line Interface](https://invoke-ai.github.io/InvokeAI/features/CLI/)
- [Image To Image](https://invoke-ai.github.io/InvokeAI/features/IMG2IMG/)
- [Inpainting Support](https://invoke-ai.github.io/InvokeAI/features/INPAINTING/)
- [Outpainting Support](https://invoke-ai.github.io/InvokeAI/features/OUTPAINTING/)
- [Upscaling, face-restoration and outpainting](https://invoke-ai.github.io/InvokeAI/features/POSTPROCESS/)
- [Reading Prompts From File](https://invoke-ai.github.io/InvokeAI/features/PROMPTS/#reading-prompts-from-a-file)
- [Prompt Blending](https://invoke-ai.github.io/InvokeAI/features/PROMPTS/#prompt-blending)
- [Thresholding and Perlin Noise Initialization Options](https://invoke-ai.github.io/InvokeAI/features/OTHER/#thresholding-and-perlin-noise-initialization-options)
- [Negative/Unconditioned Prompts](https://invoke-ai.github.io/InvokeAI/features/PROMPTS/#negative-and-unconditioned-prompts)
- [Variations](https://invoke-ai.github.io/InvokeAI/features/VARIATIONS/)
- [Personalizing Text-to-Image Generation](https://invoke-ai.github.io/InvokeAI/features/TEXTUAL_INVERSION/)
- [Simplified API for text to image generation](https://invoke-ai.github.io/InvokeAI/features/OTHER/#simplified-api)
### *Command Line Interface*
#### Other Features
For users utilizing a terminal-based environment, or who want to take advantage of CLI features, InvokeAI offers an extensive and actively supported command-line interface that provides the full suite of generation functionality available in the tool.
### Other features
- *Support for both ckpt and diffusers models*
- *SD 2.0, 2.1 support*
- *Noise Control & Thresholding*
- *Popular Sampler Support*
- *Upscaling & Face Restoration Tools*
- *Embedding Manager & Support*
- *Model Manager & Support*
### Coming Soon
- *Node-Based Architecture & UI*
- And more...
- [Google Colab](https://invoke-ai.github.io/InvokeAI/features/OTHER/#google-colab)
- [Seamless Tiling](https://invoke-ai.github.io/InvokeAI/features/OTHER/#seamless-tiling)
- [Shortcut: Reusing Seeds](https://invoke-ai.github.io/InvokeAI/features/OTHER/#shortcuts-reusing-seeds)
- [Preload Models](https://invoke-ai.github.io/InvokeAI/features/OTHER/#preload-models)
### Latest Changes
For our latest changes, view our [Release
Notes](https://github.com/invoke-ai/InvokeAI/releases) and the
[CHANGELOG](docs/CHANGELOG.md).
For our latest changes, view our [Release Notes](https://github.com/invoke-ai/InvokeAI/releases)
## Troubleshooting
### Troubleshooting
Please check out our **[Q&A](https://invoke-ai.github.io/InvokeAI/help/TROUBLESHOOT/#faq)** to get solutions for common installation
problems and other issues.
@@ -160,7 +167,7 @@ To join, just raise your hand on the InvokeAI Discord server (#dev-chat) or the
If you are unfamiliar with how
to contribute to GitHub projects, here is a
[Getting Started Guide](https://opensource.com/article/19/7/create-pull-request-github). A full set of contribution guidelines, along with templates, are in progress. You can **make your pull request against the "main" branch**.
We hope you enjoy using our software as much as we enjoy creating it,
and we hope that some of those of you who are reading this will elect
@@ -176,7 +183,13 @@ their time, hard work and effort.
### Support
For support, please use this repository's GitHub Issues tracking service, or join the Discord.
For support, please use this repository's GitHub Issues tracking service. Feel free to send me an
email if you use and like the script.
Original portions of the software are Copyright (c) 2023 by respective contributors.
Original portions of the software are Copyright (c) 2022
[Lincoln D. Stein](https://github.com/lstein)
### Further Reading
Please see the original README for more information on this software and underlying algorithm,
located in the file [README-CompViz.md](https://invoke-ai.github.io/InvokeAI/other/README-CompViz/).

@@ -1,36 +1,35 @@
import base64
import eventlet
import glob
import io
import json
import math
import mimetypes
import os
import shutil
import mimetypes
import traceback
from threading import Event
from uuid import uuid4
import math
import io
import base64
import os
import json
import eventlet
from pathlib import Path
from PIL import Image
from PIL.Image import Image as ImageType
from werkzeug.utils import secure_filename
from flask import Flask, redirect, send_from_directory, request, make_response
from flask_socketio import SocketIO
from werkzeug.utils import secure_filename
from PIL import Image, ImageOps
from PIL.Image import Image as ImageType
from uuid import uuid4
from threading import Event
from invokeai.backend.modules.get_canvas_generation_mode import (
get_canvas_generation_mode,
)
from invokeai.backend.modules.parameters import parameters_to_command
import invokeai.frontend.dist as frontend
from ldm.generate import Generate
from ldm.invoke.args import Args, APP_ID, APP_VERSION, calculate_init_img_hash
from ldm.invoke.conditioning import get_tokens_for_prompt, get_prompt_structure
from ldm.invoke.generator.diffusers_pipeline import PipelineIntermediateState
from ldm.invoke.generator.inpaint import infill_methods
from ldm.invoke.globals import Globals
from ldm.invoke.pngwriter import PngWriter, retrieve_metadata
from ldm.invoke.prompt_parser import split_weighted_subprompts, Blend
from ldm.invoke.generator.inpaint import infill_methods
from backend.modules.parameters import parameters_to_command
from backend.modules.get_canvas_generation_mode import (
get_canvas_generation_mode,
)
# Loading Arguments
opt = Args()
@@ -86,17 +85,11 @@ class InvokeAIWebServer:
}
if opt.cors:
_cors = opt.cors
# convert list back into comma-separated string,
# be defensive here, not sure in what form this arrives
if isinstance(_cors, list):
_cors = ",".join(_cors)
if "," in _cors:
_cors = _cors.split(",")
socketio_args["cors_allowed_origins"] = _cors
socketio_args["cors_allowed_origins"] = opt.cors
frontend_path = self.find_frontend()
self.app = Flask(
__name__, static_url_path="", static_folder=frontend.__path__[0]
__name__, static_url_path="", static_folder=frontend_path
)
self.socketio = SocketIO(self.app, **socketio_args)
@@ -254,6 +247,18 @@ class InvokeAIWebServer:
keyfile=args.keyfile,
)
def find_frontend(self):
my_dir = os.path.dirname(__file__)
# LS: setup.py seems to put the frontend in different places on different systems, so
# this is fragile and needs to be replaced with a better way of finding the front end.
for candidate in (os.path.join(my_dir,'..','frontend','dist'), # pip install -e .
os.path.join(my_dir,'../../../../frontend','dist'), # pip install . (Linux, Mac)
os.path.join(my_dir,'../../../frontend','dist'), # pip install . (Windows)
):
if os.path.exists(candidate):
return candidate
raise FileNotFoundError("Frontend files cannot be found. Cannot continue")
def setup_app(self):
self.result_url = "outputs/"
self.init_image_url = "outputs/init-images/"
@@ -292,7 +297,7 @@ class InvokeAIWebServer:
def handle_request_capabilities():
print(f">> System config requested")
config = self.get_system_config()
config["model_list"] = self.generate.model_manager.list_models()
config["model_list"] = self.generate.model_cache.list_models()
config["infill_methods"] = infill_methods()
socketio.emit("systemConfig", config)
@@ -305,11 +310,11 @@ class InvokeAIWebServer:
{'search_folder': None, 'found_models': None},
)
else:
search_folder, found_models = self.generate.model_manager.search_models(search_folder)
search_folder, found_models = self.generate.model_cache.search_models(search_folder)
socketio.emit(
"foundModels",
{'search_folder': search_folder, 'found_models': found_models},
)
except Exception as e:
self.socketio.emit("error", {"message": (str(e))})
print("\n")
@@ -323,20 +328,18 @@ class InvokeAIWebServer:
model_name = new_model_config['name']
del new_model_config['name']
model_attributes = new_model_config
if len(model_attributes['vae']) == 0:
del model_attributes['vae']
update = False
current_model_list = self.generate.model_manager.list_models()
current_model_list = self.generate.model_cache.list_models()
if model_name in current_model_list:
update = True
print(f">> Adding New Model: {model_name}")
self.generate.model_manager.add_model(
self.generate.model_cache.add_model(
model_name=model_name, model_attributes=model_attributes, clobber=True)
self.generate.model_manager.commit(opt.conf)
self.generate.model_cache.commit(opt.conf)
new_model_list = self.generate.model_manager.list_models()
new_model_list = self.generate.model_cache.list_models()
socketio.emit(
"newModelAdded",
{"new_model_name": model_name,
@@ -354,9 +357,9 @@ class InvokeAIWebServer:
def handle_delete_model(model_name: str):
try:
print(f">> Deleting Model: {model_name}")
self.generate.model_manager.del_model(model_name)
self.generate.model_manager.commit(opt.conf)
updated_model_list = self.generate.model_manager.list_models()
self.generate.model_cache.del_model(model_name)
self.generate.model_cache.commit(opt.conf)
updated_model_list = self.generate.model_cache.list_models()
socketio.emit(
"modelDeleted",
{"deleted_model_name": model_name,
@@ -375,7 +378,7 @@ class InvokeAIWebServer:
try:
print(f">> Model change requested: {model_name}")
model = self.generate.set_model(model_name)
model_list = self.generate.model_manager.list_models()
model_list = self.generate.model_cache.list_models()
if model is None:
socketio.emit(
"modelChangeFailed",
@@ -787,7 +790,7 @@ class InvokeAIWebServer:
# App Functions
def get_system_config(self):
model_list: dict = self.generate.model_manager.list_models()
model_list: dict = self.generate.model_cache.list_models()
active_model_name = None
for model_name, model_dict in model_list.items():
@@ -1195,16 +1198,9 @@ class InvokeAIWebServer:
print(generation_parameters)
def diffusers_step_callback_adapter(*cb_args, **kwargs):
if isinstance(cb_args[0], PipelineIntermediateState):
progress_state: PipelineIntermediateState = cb_args[0]
return image_progress(progress_state.latents, progress_state.step)
else:
return image_progress(*cb_args, **kwargs)
self.generate.prompt2image(
**generation_parameters,
step_callback=diffusers_step_callback_adapter,
step_callback=image_progress,
image_callback=image_done
)


@@ -1,4 +1,4 @@
from invokeai.backend.modules.parse_seed_weights import parse_seed_weights
from backend.modules.parse_seed_weights import parse_seed_weights
import argparse
SAMPLER_CHOICES = [
@@ -12,8 +12,6 @@ SAMPLER_CHOICES = [
"k_heun",
"k_lms",
"plms",
# diffusers:
"pndm",
]
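
SAMPLER_CHOICES is the list of sampler names the CLI will accept. A hedged sketch of how such a list typically feeds argparse; the flag spelling and default below are illustrative, not necessarily what args.py defines:

```python
import argparse

# Abbreviated copy of the tail of the list visible in the hunk above;
# the real file defines more entries before "k_heun".
SAMPLER_CHOICES = ["k_heun", "k_lms", "plms"]

parser = argparse.ArgumentParser()
# choices= makes argparse reject any sampler name not in the list
parser.add_argument("--sampler", choices=SAMPLER_CHOICES, default="k_lms")
print(parser.parse_args(["--sampler", "plms"]).sampler)  # plms
```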


@@ -2,10 +2,9 @@
--extra-index-url https://download.pytorch.org/whl/torch_stable.html
--extra-index-url https://download.pytorch.org/whl/cu116
--trusted-host https://download.pytorch.org
accelerate~=0.15
accelerate~=0.14
albumentations
diffusers[torch]~=0.11
einops
diffusers
eventlet
flask_cors
flask_socketio


@@ -0,0 +1,80 @@
stable-diffusion-1.5:
description: The newest Stable Diffusion version 1.5 weight file (4.27 GB)
repo_id: runwayml/stable-diffusion-v1-5
config: v1-inference.yaml
file: v1-5-pruned-emaonly.ckpt
recommended: true
width: 512
height: 512
inpainting-1.5:
description: RunwayML SD 1.5 model optimized for inpainting (4.27 GB)
repo_id: runwayml/stable-diffusion-inpainting
config: v1-inpainting-inference.yaml
file: sd-v1-5-inpainting.ckpt
recommended: True
width: 512
height: 512
ft-mse-improved-autoencoder-840000:
description: StabilityAI improved autoencoder fine-tuned for human faces (recommended; 335 MB)
repo_id: stabilityai/sd-vae-ft-mse-original
config: VAE/default
file: vae-ft-mse-840000-ema-pruned.ckpt
recommended: True
width: 512
height: 512
stable-diffusion-1.4:
description: The original Stable Diffusion version 1.4 weight file (4.27 GB)
repo_id: CompVis/stable-diffusion-v-1-4-original
config: v1-inference.yaml
file: sd-v1-4.ckpt
recommended: False
width: 512
height: 512
waifu-diffusion-1.3:
description: Stable Diffusion 1.4 fine tuned on anime-styled images (4.27 GB)
repo_id: hakurei/waifu-diffusion-v1-3
config: v1-inference.yaml
file: model-epoch09-float32.ckpt
recommended: False
width: 512
height: 512
trinart-2.0:
description: An SD model finetuned with ~40,000 assorted high resolution manga/anime-style pictures (2.13 GB)
repo_id: naclbit/trinart_stable_diffusion_v2
config: v1-inference.yaml
file: trinart2_step95000.ckpt
recommended: False
width: 512
height: 512
trinart_characters-1.0:
description: An SD model finetuned with 19.2M anime/manga style images (2.13 GB)
repo_id: naclbit/trinart_characters_19.2m_stable_diffusion_v1
config: v1-inference.yaml
file: trinart_characters_it4_v1.ckpt
recommended: False
width: 512
height: 512
trinart_vae:
description: Custom autoencoder for trinart_characters
repo_id: naclbit/trinart_characters_19.2m_stable_diffusion_v1
config: VAE/trinart
file: autoencoder_fix_kl-f8-trinart_characters.ckpt
recommended: False
width: 512
height: 512
papercut-1.0:
description: SD 1.5 fine-tuned for papercut art (use "PaperCut" in your prompts) (2.13 GB)
repo_id: Fictiverse/Stable_Diffusion_PaperCut_Model
config: v1-inference.yaml
file: PaperCut_v1.ckpt
recommended: False
width: 512
height: 512
voxel_art-1.0:
description: Stable Diffusion trained on voxel art (use "VoxelArt" in your prompts) (4.27 GB)
repo_id: Fictiverse/Stable_Diffusion_VoxelArt_Model
config: v1-inference.yaml
file: VoxelArt_v1.ckpt
recommended: False
width: 512
height: 512


@@ -5,25 +5,6 @@
# model requires a model config file, a weights file,
# and the width and height of the images it
# was trained on.
diffusers-1.4:
description: 🤗🧨 Stable Diffusion v1.4
format: diffusers
repo_id: CompVis/stable-diffusion-v1-4
diffusers-1.5:
description: 🤗🧨 Stable Diffusion v1.5
format: diffusers
repo_id: runwayml/stable-diffusion-v1-5
default: true
diffusers-1.5+mse:
description: 🤗🧨 Stable Diffusion v1.5 + MSE-finetuned VAE
format: diffusers
repo_id: runwayml/stable-diffusion-v1-5
vae:
repo_id: stabilityai/sd-vae-ft-mse
diffusers-inpainting-1.5:
description: 🤗🧨 inpainting for Stable Diffusion v1.5
format: diffusers
repo_id: runwayml/stable-diffusion-inpainting
stable-diffusion-1.5:
description: The newest Stable Diffusion version 1.5 weight file (4.27 GB)
weights: models/ldm/stable-diffusion-v1/v1-5-pruned-emaonly.ckpt
@@ -31,6 +12,7 @@ stable-diffusion-1.5:
width: 512
height: 512
vae: ./models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
default: true
stable-diffusion-1.4:
description: Stable Diffusion inference model version 1.4
config: configs/stable-diffusion/v1-inference.yaml
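
Per the header comment, each stanza needs a config file, a weights file, and the trained width and height, with at most one entry marked `default: true`. A hedged Python sketch of reading such a file and selecting the default model (this loader is illustrative, not InvokeAI's own code):

```python
import yaml  # PyYAML

with open("configs/models.yaml") as fh:
    models = yaml.safe_load(fh)

# Pick the stanza flagged "default: true"; fall back to the first entry.
default_name = next(
    (name for name, attrs in models.items() if attrs.get("default")),
    next(iter(models)),
)
attrs = models[default_name]
print(default_name, attrs["weights"], attrs["width"], attrs["height"])
```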

configs/sd-concepts.txt (new file)

@@ -0,0 +1,803 @@
sd-concepts-library/001glitch-core
sd-concepts-library/2814-roth
sd-concepts-library/3d-female-cyborgs
sd-concepts-library/4tnght
sd-concepts-library/80s-anime-ai
sd-concepts-library/80s-anime-ai-being
sd-concepts-library/852style-girl
sd-concepts-library/8bit
sd-concepts-library/8sconception
sd-concepts-library/Aflac-duck
sd-concepts-library/Akitsuki
sd-concepts-library/Atako
sd-concepts-library/Exodus-Styling
sd-concepts-library/RINGAO
sd-concepts-library/a-female-hero-from-the-legend-of-mir
sd-concepts-library/a-hat-kid
sd-concepts-library/a-tale-of-two-empires
sd-concepts-library/aadhav-face
sd-concepts-library/aavegotchi
sd-concepts-library/abby-face
sd-concepts-library/abstract-concepts
sd-concepts-library/accurate-angel
sd-concepts-library/agm-style-nao
sd-concepts-library/aj-fosik
sd-concepts-library/alberto-mielgo
sd-concepts-library/alex-portugal
sd-concepts-library/alex-thumbnail-object-2000-steps
sd-concepts-library/aleyna-tilki
sd-concepts-library/alf
sd-concepts-library/alicebeta
sd-concepts-library/alien-avatar
sd-concepts-library/alisa
sd-concepts-library/all-rings-albuns
sd-concepts-library/altvent
sd-concepts-library/altyn-helmet
sd-concepts-library/amine
sd-concepts-library/amogus
sd-concepts-library/anders-zorn
sd-concepts-library/angus-mcbride-style
sd-concepts-library/animalve3-1500seq
sd-concepts-library/anime-background-style
sd-concepts-library/anime-background-style-v2
sd-concepts-library/anime-boy
sd-concepts-library/anime-girl
sd-concepts-library/anyXtronXredshift
sd-concepts-library/anya-forger
sd-concepts-library/apex-wingman
sd-concepts-library/apulian-rooster-v0-1
sd-concepts-library/arcane-face
sd-concepts-library/arcane-style-jv
sd-concepts-library/arcimboldo-style
sd-concepts-library/armando-reveron-style
sd-concepts-library/armor-concept
sd-concepts-library/arq-render
sd-concepts-library/art-brut
sd-concepts-library/arthur1
sd-concepts-library/artist-yukiko-kanagai
sd-concepts-library/arwijn
sd-concepts-library/ashiok
sd-concepts-library/at-wolf-boy-object
sd-concepts-library/atm-ant
sd-concepts-library/atm-ant-2
sd-concepts-library/axe-tattoo
sd-concepts-library/ayush-spider-spr
sd-concepts-library/azura-from-vibrant-venture
sd-concepts-library/ba-shiroko
sd-concepts-library/babau
sd-concepts-library/babs-bunny
sd-concepts-library/babushork
sd-concepts-library/backrooms
sd-concepts-library/bad_Hub_Hugh
sd-concepts-library/bada-club
sd-concepts-library/baldi
sd-concepts-library/baluchitherian
sd-concepts-library/bamse
sd-concepts-library/bamse-og-kylling
sd-concepts-library/bee
sd-concepts-library/beholder
sd-concepts-library/beldam
sd-concepts-library/belen
sd-concepts-library/bella-goth
sd-concepts-library/belle-delphine
sd-concepts-library/bert-muppet
sd-concepts-library/better-collage3
sd-concepts-library/between2-mt-fade
sd-concepts-library/birb-style
sd-concepts-library/black-and-white-design
sd-concepts-library/black-waifu
sd-concepts-library/bloo
sd-concepts-library/blue-haired-boy
sd-concepts-library/blue-zombie
sd-concepts-library/blue-zombiee
sd-concepts-library/bluebey
sd-concepts-library/bluebey-2
sd-concepts-library/bobs-burgers
sd-concepts-library/boissonnard
sd-concepts-library/bonzi-monkey
sd-concepts-library/borderlands
sd-concepts-library/bored-ape-textual-inversion
sd-concepts-library/boris-anderson
sd-concepts-library/bozo-22
sd-concepts-library/breakcore
sd-concepts-library/brittney-williams-art
sd-concepts-library/bruma
sd-concepts-library/brunnya
sd-concepts-library/buddha-statue
sd-concepts-library/bullvbear
sd-concepts-library/button-eyes
sd-concepts-library/canadian-goose
sd-concepts-library/canary-cap
sd-concepts-library/cancer_style
sd-concepts-library/captain-haddock
sd-concepts-library/captainkirb
sd-concepts-library/car-toy-rk
sd-concepts-library/carasibana
sd-concepts-library/carlitos-el-mago
sd-concepts-library/carrascharacter
sd-concepts-library/cartoona-animals
sd-concepts-library/cat-toy
sd-concepts-library/centaur
sd-concepts-library/cgdonny1
sd-concepts-library/cham
sd-concepts-library/chandra-nalaar
sd-concepts-library/char-con
sd-concepts-library/character-pingu
sd-concepts-library/cheburashka
sd-concepts-library/chen-1
sd-concepts-library/child-zombie
sd-concepts-library/chillpill
sd-concepts-library/chonkfrog
sd-concepts-library/chop
sd-concepts-library/christo-person
sd-concepts-library/chuck-walton
sd-concepts-library/chucky
sd-concepts-library/chungus-poodl-pet
sd-concepts-library/cindlop
sd-concepts-library/collage-cutouts
sd-concepts-library/collage14
sd-concepts-library/collage3
sd-concepts-library/collage3-hubcity
sd-concepts-library/cologne
sd-concepts-library/color-page
sd-concepts-library/colossus
sd-concepts-library/command-and-conquer-remastered-cameos
sd-concepts-library/concept-art
sd-concepts-library/conner-fawcett-style
sd-concepts-library/conway-pirate
sd-concepts-library/coop-himmelblau
sd-concepts-library/coraline
sd-concepts-library/cornell-box
sd-concepts-library/cortana
sd-concepts-library/covid-19-rapid-test
sd-concepts-library/cow-uwu
sd-concepts-library/cowboy
sd-concepts-library/crazy-1
sd-concepts-library/crazy-2
sd-concepts-library/crb-portraits
sd-concepts-library/crb-surrealz
sd-concepts-library/crbart
sd-concepts-library/crested-gecko
sd-concepts-library/crinos-form-garou
sd-concepts-library/cry-baby-style
sd-concepts-library/crybaby-style-2-0
sd-concepts-library/csgo-awp-object
sd-concepts-library/csgo-awp-texture-map
sd-concepts-library/cubex
sd-concepts-library/cumbia-peruana
sd-concepts-library/cute-bear
sd-concepts-library/cute-cat
sd-concepts-library/cute-game-style
sd-concepts-library/cyberpunk-lucy
sd-concepts-library/dabotap
sd-concepts-library/dan-mumford
sd-concepts-library/dan-seagrave-art-style
sd-concepts-library/dark-penguin-pinguinanimations
sd-concepts-library/darkpenguinanimatronic
sd-concepts-library/darkplane
sd-concepts-library/david-firth-artstyle
sd-concepts-library/david-martinez-cyberpunk
sd-concepts-library/david-martinez-edgerunners
sd-concepts-library/david-moreno-architecture
sd-concepts-library/daycare-attendant-sun-fnaf
sd-concepts-library/ddattender
sd-concepts-library/degods
sd-concepts-library/degodsheavy
sd-concepts-library/depthmap
sd-concepts-library/depthmap-style
sd-concepts-library/design
sd-concepts-library/detectivedinosaur1
sd-concepts-library/diaosu-toy
sd-concepts-library/dicoo
sd-concepts-library/dicoo2
sd-concepts-library/dishonored-portrait-styles
sd-concepts-library/disquieting-muses
sd-concepts-library/ditko
sd-concepts-library/dlooak
sd-concepts-library/doc
sd-concepts-library/doener-red-line-art
sd-concepts-library/dog
sd-concepts-library/dog-django
sd-concepts-library/doge-pound
sd-concepts-library/dong-ho
sd-concepts-library/dong-ho2
sd-concepts-library/doose-s-realistic-art-style
sd-concepts-library/dq10-anrushia
sd-concepts-library/dr-livesey
sd-concepts-library/dr-strange
sd-concepts-library/dragonborn
sd-concepts-library/dreamcore
sd-concepts-library/dreamy-painting
sd-concepts-library/drive-scorpion-jacket
sd-concepts-library/dsmuses
sd-concepts-library/dtv-pkmn
sd-concepts-library/dullboy-caricature
sd-concepts-library/duranduran
sd-concepts-library/durer-style
sd-concepts-library/dyoudim-style
sd-concepts-library/early-mishima-kurone
sd-concepts-library/eastward
sd-concepts-library/eddie
sd-concepts-library/edgerunners-style
sd-concepts-library/edgerunners-style-v2
sd-concepts-library/el-salvador-style-style
sd-concepts-library/elegant-flower
sd-concepts-library/elspeth-tirel
sd-concepts-library/eru-chitanda-casual
sd-concepts-library/erwin-olaf-style
sd-concepts-library/ettblackteapot
sd-concepts-library/explosions-cat
sd-concepts-library/eye-of-agamotto
sd-concepts-library/f-22
sd-concepts-library/facadeplace
sd-concepts-library/fairy-tale-painting-style
sd-concepts-library/fairytale
sd-concepts-library/fang-yuan-001
sd-concepts-library/faraon-love-shady
sd-concepts-library/fasina
sd-concepts-library/felps
sd-concepts-library/female-kpop-singer
sd-concepts-library/fergal-cat
sd-concepts-library/filename-2
sd-concepts-library/fileteado-porteno
sd-concepts-library/final-fantasy-logo
sd-concepts-library/fireworks-over-water
sd-concepts-library/fish
sd-concepts-library/flag-ussr
sd-concepts-library/flatic
sd-concepts-library/floral
sd-concepts-library/fluid-acrylic-jellyfish-creatures-style-of-carl-ingram-art
sd-concepts-library/fnf-boyfriend
sd-concepts-library/fold-structure
sd-concepts-library/fox-purple
sd-concepts-library/fractal
sd-concepts-library/fractal-flame
sd-concepts-library/fractal-temple-style
sd-concepts-library/frank-frazetta
sd-concepts-library/franz-unterberger
sd-concepts-library/freddy-fazbear
sd-concepts-library/freefonix-style
sd-concepts-library/furrpopasthetic
sd-concepts-library/fursona
sd-concepts-library/fzk
sd-concepts-library/galaxy-explorer
sd-concepts-library/ganyu-genshin-impact
sd-concepts-library/garcon-the-cat
sd-concepts-library/garfield-pizza-plush
sd-concepts-library/garfield-pizza-plush-v2
sd-concepts-library/gba-fe-class-cards
sd-concepts-library/gba-pokemon-sprites
sd-concepts-library/geggin
sd-concepts-library/ggplot2
sd-concepts-library/ghost-style
sd-concepts-library/ghostproject-men
sd-concepts-library/gibasachan-v0
sd-concepts-library/gim
sd-concepts-library/gio
sd-concepts-library/giygas
sd-concepts-library/glass-pipe
sd-concepts-library/glass-prism-cube
sd-concepts-library/glow-forest
sd-concepts-library/goku
sd-concepts-library/gram-tops
sd-concepts-library/green-blue-shanshui
sd-concepts-library/green-tent
sd-concepts-library/grifter
sd-concepts-library/grisstyle
sd-concepts-library/grit-toy
sd-concepts-library/gt-color-paint-2
sd-concepts-library/gta5-artwork
sd-concepts-library/guttestreker
sd-concepts-library/gymnastics-leotard-v2
sd-concepts-library/half-life-2-dog
sd-concepts-library/handstand
sd-concepts-library/hanfu-anime-style
sd-concepts-library/happy-chaos
sd-concepts-library/happy-person12345
sd-concepts-library/happy-person12345-assets
sd-concepts-library/harley-quinn
sd-concepts-library/harmless-ai-1
sd-concepts-library/harmless-ai-house-style-1
sd-concepts-library/hd-emoji
sd-concepts-library/heather
sd-concepts-library/henjo-techno-show
sd-concepts-library/herge-style
sd-concepts-library/hiten-style-nao
sd-concepts-library/hitokomoru-style-nao
sd-concepts-library/hiyuki-chan
sd-concepts-library/hk-bamboo
sd-concepts-library/hk-betweenislands
sd-concepts-library/hk-bicycle
sd-concepts-library/hk-blackandwhite
sd-concepts-library/hk-breakfast
sd-concepts-library/hk-buses
sd-concepts-library/hk-clouds
sd-concepts-library/hk-goldbuddha
sd-concepts-library/hk-goldenlantern
sd-concepts-library/hk-hkisland
sd-concepts-library/hk-leaves
sd-concepts-library/hk-market
sd-concepts-library/hk-oldcamera
sd-concepts-library/hk-opencamera
sd-concepts-library/hk-peach
sd-concepts-library/hk-phonevax
sd-concepts-library/hk-streetpeople
sd-concepts-library/hk-vintage
sd-concepts-library/hoi4
sd-concepts-library/hoi4-leaders
sd-concepts-library/homestuck-sprite
sd-concepts-library/homestuck-troll
sd-concepts-library/hours-sentry-fade
sd-concepts-library/hours-style
sd-concepts-library/hrgiger-drmacabre
sd-concepts-library/huang-guang-jian
sd-concepts-library/huatli
sd-concepts-library/huayecai820-greyscale
sd-concepts-library/hub-city
sd-concepts-library/hubris-oshri
sd-concepts-library/huckleberry
sd-concepts-library/hydrasuit
sd-concepts-library/i-love-chaos
sd-concepts-library/ibere-thenorio
sd-concepts-library/ic0n
sd-concepts-library/ie-gravestone
sd-concepts-library/ikea-fabler
sd-concepts-library/illustration-style
sd-concepts-library/ilo-kunst
sd-concepts-library/ilya-shkipin
sd-concepts-library/im-poppy
sd-concepts-library/ina-art
sd-concepts-library/indian-watercolor-portraits
sd-concepts-library/indiana
sd-concepts-library/ingmar-bergman
sd-concepts-library/insidewhale
sd-concepts-library/interchanges
sd-concepts-library/inuyama-muneto-style-nao
sd-concepts-library/irasutoya
sd-concepts-library/iridescent-illustration-style
sd-concepts-library/iridescent-photo-style
sd-concepts-library/isabell-schulte-pv-pvii-3000steps
sd-concepts-library/isabell-schulte-pviii-1-image-style
sd-concepts-library/isabell-schulte-pviii-1024px-1500-steps-style
sd-concepts-library/isabell-schulte-pviii-12tiles-3000steps-style
sd-concepts-library/isabell-schulte-pviii-4-tiles-1-lr-3000-steps-style
sd-concepts-library/isabell-schulte-pviii-4-tiles-3-lr-5000-steps-style
sd-concepts-library/isabell-schulte-pviii-4tiles-500steps
sd-concepts-library/isabell-schulte-pviii-4tiles-6000steps
sd-concepts-library/isabell-schulte-pviii-style
sd-concepts-library/isometric-tile-test
sd-concepts-library/jacqueline-the-unicorn
sd-concepts-library/james-web-space-telescope
sd-concepts-library/jamie-hewlett-style
sd-concepts-library/jamiels
sd-concepts-library/jang-sung-rak-style
sd-concepts-library/jetsetdreamcastcovers
sd-concepts-library/jin-kisaragi
sd-concepts-library/jinjoon-lee-they
sd-concepts-library/jm-bergling-monogram
sd-concepts-library/joe-mad
sd-concepts-library/joe-whiteford-art-style
sd-concepts-library/joemad
sd-concepts-library/john-blanche
sd-concepts-library/johnny-silverhand
sd-concepts-library/jojo-bizzare-adventure-manga-lineart
sd-concepts-library/jos-de-kat
sd-concepts-library/junji-ito-artstyle
sd-concepts-library/kaleido
sd-concepts-library/kaneoya-sachiko
sd-concepts-library/kanovt
sd-concepts-library/kanv1
sd-concepts-library/karan-gloomy
sd-concepts-library/karl-s-lzx-1
sd-concepts-library/kasumin
sd-concepts-library/kawaii-colors
sd-concepts-library/kawaii-girl-plus-object
sd-concepts-library/kawaii-girl-plus-style
sd-concepts-library/kawaii-girl-plus-style-v1-1
sd-concepts-library/kay
sd-concepts-library/kaya-ghost-assasin
sd-concepts-library/ki
sd-concepts-library/kinda-sus
sd-concepts-library/kings-quest-agd
sd-concepts-library/kiora
sd-concepts-library/kira-sensei
sd-concepts-library/kirby
sd-concepts-library/klance
sd-concepts-library/kodakvision500t
sd-concepts-library/kogatan-shiny
sd-concepts-library/kogecha
sd-concepts-library/kojima-ayami
sd-concepts-library/koko-dog
sd-concepts-library/kuvshinov
sd-concepts-library/kysa-v-style
sd-concepts-library/laala-character
sd-concepts-library/larrette
sd-concepts-library/lavko
sd-concepts-library/lazytown-stephanie
sd-concepts-library/ldr
sd-concepts-library/ldrs
sd-concepts-library/led-toy
sd-concepts-library/lego-astronaut
sd-concepts-library/leica
sd-concepts-library/leif-jones
sd-concepts-library/lex
sd-concepts-library/liliana
sd-concepts-library/liliana-vess
sd-concepts-library/liminal-spaces-2-0
sd-concepts-library/liminalspaces
sd-concepts-library/line-art
sd-concepts-library/line-style
sd-concepts-library/linnopoke
sd-concepts-library/liquid-light
sd-concepts-library/liqwid-aquafarmer
sd-concepts-library/lizardman
sd-concepts-library/loab-character
sd-concepts-library/loab-style
sd-concepts-library/lofa
sd-concepts-library/logo-with-face-on-shield
sd-concepts-library/lolo
sd-concepts-library/looney-anime
sd-concepts-library/lost-rapper
sd-concepts-library/lphr-style
sd-concepts-library/lucario
sd-concepts-library/lucky-luke
sd-concepts-library/lugal-ki-en
sd-concepts-library/luinv2
sd-concepts-library/lula-13
sd-concepts-library/lumio
sd-concepts-library/lxj-o4
sd-concepts-library/m-geo
sd-concepts-library/m-geoo
sd-concepts-library/madhubani-art
sd-concepts-library/mafalda-character
sd-concepts-library/magic-pengel
sd-concepts-library/malika-favre-art-style
sd-concepts-library/manga-style
sd-concepts-library/marbling-art
sd-concepts-library/margo
sd-concepts-library/marty
sd-concepts-library/marty6
sd-concepts-library/mass
sd-concepts-library/masyanya
sd-concepts-library/masyunya
sd-concepts-library/mate
sd-concepts-library/matthew-stone
sd-concepts-library/mattvidpro
sd-concepts-library/maurice-quentin-de-la-tour-style
sd-concepts-library/maus
sd-concepts-library/max-foley
sd-concepts-library/mayor-richard-irvin
sd-concepts-library/mechasoulall
sd-concepts-library/medazzaland
sd-concepts-library/memnarch-mtg
sd-concepts-library/metagabe
sd-concepts-library/meyoco
sd-concepts-library/meze-audio-elite-headphones
sd-concepts-library/midjourney-style
sd-concepts-library/mikako-method
sd-concepts-library/mikako-methodi2i
sd-concepts-library/miko-3-robot
sd-concepts-library/milady
sd-concepts-library/mildemelwe-style
sd-concepts-library/million-live-akane-15k
sd-concepts-library/million-live-akane-3k
sd-concepts-library/million-live-akane-shifuku-3k
sd-concepts-library/million-live-spade-q-object-3k
sd-concepts-library/million-live-spade-q-style-3k
sd-concepts-library/minecraft-concept-art
sd-concepts-library/mishima-kurone
sd-concepts-library/mizkif
sd-concepts-library/moeb-style
sd-concepts-library/moebius
sd-concepts-library/mokoko
sd-concepts-library/mokoko-seed
sd-concepts-library/monster-girl
sd-concepts-library/monster-toy
sd-concepts-library/monte-novo
sd-concepts-library/moo-moo
sd-concepts-library/morino-hon-style
sd-concepts-library/moxxi
sd-concepts-library/msg
sd-concepts-library/mtg-card
sd-concepts-library/mtl-longsky
sd-concepts-library/mu-sadr
sd-concepts-library/munch-leaks-style
sd-concepts-library/museum-by-coop-himmelblau
sd-concepts-library/muxoyara
sd-concepts-library/my-hero-academia-style
sd-concepts-library/my-mug
sd-concepts-library/mycat
sd-concepts-library/mystical-nature
sd-concepts-library/naf
sd-concepts-library/nahiri
sd-concepts-library/namine-ritsu
sd-concepts-library/naoki-saito
sd-concepts-library/nard-style
sd-concepts-library/naruto
sd-concepts-library/natasha-johnston
sd-concepts-library/nathan-wyatt
sd-concepts-library/naval-portrait
sd-concepts-library/nazuna
sd-concepts-library/nebula
sd-concepts-library/ned-flanders
sd-concepts-library/neon-pastel
sd-concepts-library/new-priests
sd-concepts-library/nic-papercuts
sd-concepts-library/nikodim
sd-concepts-library/nissa-revane
sd-concepts-library/nixeu
sd-concepts-library/noggles
sd-concepts-library/nomad
sd-concepts-library/nouns-glasses
sd-concepts-library/obama-based-on-xi
sd-concepts-library/obama-self-2
sd-concepts-library/og-mox-style
sd-concepts-library/ohisashiburi-style
sd-concepts-library/oleg-kuvaev
sd-concepts-library/olli-olli
sd-concepts-library/on-kawara
sd-concepts-library/one-line-drawing
sd-concepts-library/onepunchman
sd-concepts-library/onzpo
sd-concepts-library/orangejacket
sd-concepts-library/ori
sd-concepts-library/ori-toor
sd-concepts-library/orientalist-art
sd-concepts-library/osaka-jyo
sd-concepts-library/osaka-jyo2
sd-concepts-library/osrsmini2
sd-concepts-library/osrstiny
sd-concepts-library/other-mother
sd-concepts-library/ouroboros
sd-concepts-library/outfit-items
sd-concepts-library/overprettified
sd-concepts-library/owl-house
sd-concepts-library/painted-by-silver-of-999
sd-concepts-library/painted-by-silver-of-999-2
sd-concepts-library/painted-student
sd-concepts-library/painting
sd-concepts-library/pantone-milk
sd-concepts-library/paolo-bonolis
sd-concepts-library/party-girl
sd-concepts-library/pascalsibertin
sd-concepts-library/pastelartstyle
sd-concepts-library/paul-noir
sd-concepts-library/pen-ink-portraits-bennorthen
sd-concepts-library/phan
sd-concepts-library/phan-s-collage
sd-concepts-library/phc
sd-concepts-library/phoenix-01
sd-concepts-library/pineda-david
sd-concepts-library/pink-beast-pastelae-style
sd-concepts-library/pintu
sd-concepts-library/pion-by-august-semionov
sd-concepts-library/piotr-jablonski
sd-concepts-library/pixel-mania
sd-concepts-library/pixel-toy
sd-concepts-library/pjablonski-style
sd-concepts-library/plant-style
sd-concepts-library/plen-ki-mun
sd-concepts-library/pokemon-conquest-sprites
sd-concepts-library/pool-test
sd-concepts-library/poolrooms
sd-concepts-library/poring-ragnarok-online
sd-concepts-library/poutine-dish
sd-concepts-library/princess-knight-art
sd-concepts-library/progress-chip
sd-concepts-library/puerquis-toy
sd-concepts-library/purplefishli
sd-concepts-library/pyramidheadcosplay
sd-concepts-library/qpt-atrium
sd-concepts-library/quiesel
sd-concepts-library/r-crumb-style
sd-concepts-library/rahkshi-bionicle
sd-concepts-library/raichu
sd-concepts-library/rail-scene
sd-concepts-library/rail-scene-style
sd-concepts-library/ralph-mcquarrie
sd-concepts-library/ransom
sd-concepts-library/rayne-weynolds
sd-concepts-library/rcrumb-portraits-style
sd-concepts-library/rd-chaos
sd-concepts-library/rd-paintings
sd-concepts-library/red-glasses
sd-concepts-library/reeducation-camp
sd-concepts-library/reksio-dog
sd-concepts-library/rektguy
sd-concepts-library/remert
sd-concepts-library/renalla
sd-concepts-library/repeat
sd-concepts-library/retro-girl
sd-concepts-library/retro-mecha-rangers
sd-concepts-library/retropixelart-pinguin
sd-concepts-library/rex-deno
sd-concepts-library/rhizomuse-machine-bionic-sculpture
sd-concepts-library/ricar
sd-concepts-library/rickyart
sd-concepts-library/rico-face
sd-concepts-library/riker-doll
sd-concepts-library/rikiart
sd-concepts-library/rikiboy-art
sd-concepts-library/rilakkuma
sd-concepts-library/rishusei-style
sd-concepts-library/rj-palmer
sd-concepts-library/rl-pkmn-test
sd-concepts-library/road-to-ruin
sd-concepts-library/robertnava
sd-concepts-library/roblox-avatar
sd-concepts-library/roy-lichtenstein
sd-concepts-library/ruan-jia
sd-concepts-library/russian
sd-concepts-library/s1m-naoto-ohshima
sd-concepts-library/saheeli-rai
sd-concepts-library/sakimi-style
sd-concepts-library/salmonid
sd-concepts-library/sam-yang
sd-concepts-library/sanguo-guanyu
sd-concepts-library/sas-style
sd-concepts-library/scarlet-witch
sd-concepts-library/schloss-mosigkau
sd-concepts-library/scrap-style
sd-concepts-library/scratch-project
sd-concepts-library/sculptural-style
sd-concepts-library/sd-concepts-library-uma-meme
sd-concepts-library/seamless-ground
sd-concepts-library/selezneva-alisa
sd-concepts-library/sem-mac2n
sd-concepts-library/senneca
sd-concepts-library/seraphimmoonshadow-art
sd-concepts-library/sewerslvt
sd-concepts-library/she-hulk-law-art
sd-concepts-library/she-mask
sd-concepts-library/sherhook-painting
sd-concepts-library/sherhook-painting-v2
sd-concepts-library/shev-linocut
sd-concepts-library/shigure-ui-style
sd-concepts-library/shiny-polyman
sd-concepts-library/shrunken-head
sd-concepts-library/shu-doll
sd-concepts-library/shvoren-style
sd-concepts-library/sims-2-portrait
sd-concepts-library/singsing
sd-concepts-library/singsing-doll
sd-concepts-library/sintez-ico
sd-concepts-library/skyfalls
sd-concepts-library/slm
sd-concepts-library/smarties
sd-concepts-library/smiling-friend-style
sd-concepts-library/smooth-pencils
sd-concepts-library/smurf-style
sd-concepts-library/smw-map
sd-concepts-library/society-finch
sd-concepts-library/sorami-style
sd-concepts-library/spider-gwen
sd-concepts-library/spritual-monsters
sd-concepts-library/stable-diffusion-conceptualizer
sd-concepts-library/star-tours-posters
sd-concepts-library/stardew-valley-pixel-art
sd-concepts-library/starhavenmachinegods
sd-concepts-library/sterling-archer
sd-concepts-library/stretch-re1-robot
sd-concepts-library/stuffed-penguin-toy
sd-concepts-library/style-of-marc-allante
sd-concepts-library/summie-style
sd-concepts-library/sunfish
sd-concepts-library/super-nintendo-cartridge
sd-concepts-library/supitcha-mask
sd-concepts-library/sushi-pixel
sd-concepts-library/swamp-choe-2
sd-concepts-library/t-skrang
sd-concepts-library/takuji-kawano
sd-concepts-library/tamiyo
sd-concepts-library/tangles
sd-concepts-library/tb303
sd-concepts-library/tcirle
sd-concepts-library/teelip-ir-landscape
sd-concepts-library/teferi
sd-concepts-library/tela-lenca
sd-concepts-library/tela-lenca2
sd-concepts-library/terraria-style
sd-concepts-library/tesla-bot
sd-concepts-library/test
sd-concepts-library/test-epson
sd-concepts-library/test2
sd-concepts-library/testing
sd-concepts-library/thalasin
sd-concepts-library/thegeneral
sd-concepts-library/thorneworks
sd-concepts-library/threestooges
sd-concepts-library/thunderdome-cover
sd-concepts-library/thunderdome-covers
sd-concepts-library/ti-junglepunk-v0
sd-concepts-library/tili-concept
sd-concepts-library/titan-robot
sd-concepts-library/tnj
sd-concepts-library/toho-pixel
sd-concepts-library/tomcat
sd-concepts-library/tonal1
sd-concepts-library/tony-diterlizzi-s-planescape-art
sd-concepts-library/towerplace
sd-concepts-library/toy
sd-concepts-library/toy-bonnie-plush
sd-concepts-library/toyota-sera
sd-concepts-library/transmutation-circles
sd-concepts-library/trash-polka-artstyle
sd-concepts-library/travis-bedel
sd-concepts-library/trigger-studio
sd-concepts-library/trust-support
sd-concepts-library/trypophobia
sd-concepts-library/ttte
sd-concepts-library/tubby
sd-concepts-library/tubby-cats
sd-concepts-library/tudisco
sd-concepts-library/turtlepics
sd-concepts-library/type
sd-concepts-library/ugly-sonic
sd-concepts-library/uliana-kudinova
sd-concepts-library/uma
sd-concepts-library/uma-clean-object
sd-concepts-library/uma-meme
sd-concepts-library/uma-meme-style
sd-concepts-library/uma-style-classic
sd-concepts-library/unfinished-building
sd-concepts-library/urivoldemort
sd-concepts-library/uzumaki
sd-concepts-library/valorantstyle
sd-concepts-library/vb-mox
sd-concepts-library/vcr-classique
sd-concepts-library/venice
sd-concepts-library/vespertine
sd-concepts-library/victor-narm
sd-concepts-library/vietstoneking
sd-concepts-library/vivien-reid
sd-concepts-library/vkuoo1
sd-concepts-library/vraska
sd-concepts-library/w3u
sd-concepts-library/walter-wick-photography
sd-concepts-library/warhammer-40k-drawing-style
sd-concepts-library/waterfallshadow
sd-concepts-library/wayne-reynolds-character
sd-concepts-library/wedding
sd-concepts-library/wedding-HandPainted
sd-concepts-library/werebloops
sd-concepts-library/wheatland
sd-concepts-library/wheatland-arknight
sd-concepts-library/wheelchair
sd-concepts-library/wildkat
sd-concepts-library/willy-hd
sd-concepts-library/wire-angels
sd-concepts-library/wish-artist-stile
sd-concepts-library/wlop-style
sd-concepts-library/wojak
sd-concepts-library/wojaks-now
sd-concepts-library/wojaks-now-now-now
sd-concepts-library/xatu
sd-concepts-library/xatu2
sd-concepts-library/xbh
sd-concepts-library/xi
sd-concepts-library/xidiversity
sd-concepts-library/xioboma
sd-concepts-library/xuna
sd-concepts-library/xyz
sd-concepts-library/yb-anime
sd-concepts-library/yerba-mate
sd-concepts-library/yesdelete
sd-concepts-library/yf21
sd-concepts-library/yilanov2
sd-concepts-library/yinit
sd-concepts-library/yoji-shinkawa-style
sd-concepts-library/yolandi-visser
sd-concepts-library/yoshi
sd-concepts-library/youpi2
sd-concepts-library/youtooz-candy
sd-concepts-library/yuji-himukai-style
sd-concepts-library/zaney
sd-concepts-library/zaneypixelz
sd-concepts-library/zdenek-art
sd-concepts-library/zero
sd-concepts-library/zero-bottle
sd-concepts-library/zero-suit-samus
sd-concepts-library/zillertal-can
sd-concepts-library/zizigooloo
sd-concepts-library/zk
sd-concepts-library/zoroark

65
docker-build/Dockerfile Normal file
View File

@@ -0,0 +1,65 @@
FROM python:3.10-slim AS builder
# use bash
SHELL [ "/bin/bash", "-c" ]
# Install necessary packages
RUN apt-get update \
&& apt-get install -y \
--no-install-recommends \
gcc=4:10.2.* \
libgl1-mesa-glx=20.3.* \
libglib2.0-0=2.66.* \
python3-dev=3.9.* \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# set WORKDIR, PATH and copy sources
ARG APPDIR=/usr/src/app
WORKDIR ${APPDIR}
ENV PATH ${APPDIR}/.venv/bin:$PATH
ARG PIP_REQUIREMENTS=requirements-lin-cuda.txt
COPY . ./environments-and-requirements/${PIP_REQUIREMENTS} ./
# install requirements
RUN python3 -m venv .venv \
&& pip install \
--upgrade \
--no-cache-dir \
'wheel>=0.38.4' \
&& pip install \
--no-cache-dir \
-r ${PIP_REQUIREMENTS}
FROM python:3.10-slim AS runtime
# setup environment
ARG APPDIR=/usr/src/app
WORKDIR ${APPDIR}
COPY --from=builder ${APPDIR} .
ENV \
PATH=${APPDIR}/.venv/bin:$PATH \
INVOKEAI_ROOT=/data \
INVOKE_MODEL_RECONFIGURE=--yes
# Install necessary packages
RUN apt-get update \
&& apt-get install -y \
--no-install-recommends \
build-essential=12.9 \
libgl1-mesa-glx=20.3.* \
libglib2.0-0=2.66.* \
libopencv-dev=4.5.* \
&& ln -sf \
/usr/lib/"$(arch)"-linux-gnu/pkgconfig/opencv4.pc \
/usr/lib/"$(arch)"-linux-gnu/pkgconfig/opencv.pc \
&& python3 -c "from patchmatch import patch_match" \
&& apt-get remove -y \
--autoremove \
build-essential \
&& apt-get autoclean \
&& rm -rf /var/lib/apt/lists/*
# set Entrypoint and default CMD
ENTRYPOINT [ "python3", "scripts/invoke.py" ]
CMD [ "--web", "--host=0.0.0.0" ]

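As a quick smoke test, the image above can be built and run directly, without the helper scripts that follow. A minimal sketch, run from the repository root; the `invokeai:test` tag and the host directory are arbitrary placeholders:

```bash
# build, selecting the requirements file via the ARG declared above
docker build \
    --build-arg PIP_REQUIREMENTS=requirements-lin-cuda.txt \
    --file docker-build/Dockerfile \
    --tag invokeai:test \
    .

# run; the image sets INVOKEAI_ROOT=/data, so mount a host directory there
docker run --rm -it \
    --volume "${HOME}/invokeai-data:/data" \
    --publish 9090:9090 \
    invokeai:test
```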
View File

@@ -0,0 +1,86 @@
#######################
#### Builder stage ####
FROM library/ubuntu:22.04 AS builder
ARG DEBIAN_FRONTEND=noninteractive
RUN rm -f /etc/apt/apt.conf.d/docker-clean; echo 'Binary::apt::APT::Keep-Downloaded-Packages "true";' > /etc/apt/apt.conf.d/keep-cache
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
apt update && apt-get install -y \
git \
libglib2.0-0 \
libgl1-mesa-glx \
python3-venv \
python3-pip \
build-essential \
python3-opencv \
libopencv-dev
# This is needed for patchmatch support
RUN cd /usr/lib/x86_64-linux-gnu/pkgconfig/ &&\
ln -sf opencv4.pc opencv.pc
ARG WORKDIR=/invokeai
WORKDIR ${WORKDIR}
ENV VIRTUAL_ENV=${WORKDIR}/.venv
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
RUN --mount=type=cache,target=/root/.cache/pip \
python3 -m venv ${VIRTUAL_ENV} &&\
pip install --extra-index-url https://download.pytorch.org/whl/cu116 \
torch==1.12.0+cu116 \
torchvision==0.13.0+cu116 &&\
pip install -e git+https://github.com/invoke-ai/PyPatchMatch@0.1.3#egg=pypatchmatch
COPY . .
RUN --mount=type=cache,target=/root/.cache/pip \
cp environments-and-requirements/requirements-lin-cuda.txt requirements.txt && \
pip install -r requirements.txt &&\
pip install -e .
#######################
#### Runtime stage ####
FROM library/ubuntu:22.04 as runtime
ARG DEBIAN_FRONTEND=noninteractive
ENV PYTHONUNBUFFERED=1
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
apt update && apt install -y --no-install-recommends \
git \
curl \
ncdu \
iotop \
bzip2 \
libglib2.0-0 \
libgl1-mesa-glx \
python3-venv \
python3-pip \
build-essential \
python3-opencv \
libopencv-dev &&\
apt-get clean && apt-get autoclean
ARG WORKDIR=/invokeai
WORKDIR ${WORKDIR}
ENV INVOKEAI_ROOT=/mnt/invokeai
ENV VIRTUAL_ENV=${WORKDIR}/.venv
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
COPY --from=builder ${WORKDIR} ${WORKDIR}
COPY --from=builder /usr/lib/x86_64-linux-gnu/pkgconfig /usr/lib/x86_64-linux-gnu/pkgconfig
# build patchmatch
RUN python -c "from patchmatch import patch_match"
## workaround for non-existent initfile when runtime directory is mounted; see #1613
RUN touch /root/.invokeai
ENTRYPOINT ["bash"]
CMD ["-c", "python3 scripts/invoke.py --web --host 0.0.0.0"]

44
docker-build/Makefile Normal file
View File

@@ -0,0 +1,44 @@
# Directory in the container where the INVOKEAI_ROOT (runtime dir) will be mounted
INVOKEAI_ROOT=/mnt/invokeai
# Host directory to contain the runtime dir. Will be mounted at INVOKEAI_ROOT path in the container
HOST_MOUNT_PATH=${HOME}/invokeai
IMAGE=local/invokeai:latest
USER=$(shell id -u)
GROUP=$(shell id -g)
# All downloaded models, config, etc will end up in ${HOST_MOUNT_PATH} on the host.
# This is consistent with the expected non-Docker behaviour.
# Contents can be moved to a persistent storage and used to prime the cache on another host.
build:
DOCKER_BUILDKIT=1 docker build -t local/invokeai:latest -f Dockerfile.cloud ..
configure:
docker run --rm -it --runtime=nvidia --gpus=all \
-v ${HOST_MOUNT_PATH}:${INVOKEAI_ROOT} \
-e INVOKEAI_ROOT=${INVOKEAI_ROOT} \
${IMAGE} -c "python scripts/configure_invokeai.py"
# Run the container with the runtime dir mounted and the web server exposed on port 9090
web:
docker run --rm -it --runtime=nvidia --gpus=all \
-v ${HOST_MOUNT_PATH}:${INVOKEAI_ROOT} \
-e INVOKEAI_ROOT=${INVOKEAI_ROOT} \
-p 9090:9090 \
${IMAGE} -c "python scripts/invoke.py --web --host 0.0.0.0"
# Run the cli with the runtime dir mounted
cli:
docker run --rm -it --runtime=nvidia --gpus=all \
-v ${HOST_MOUNT_PATH}:${INVOKEAI_ROOT} \
-e INVOKEAI_ROOT=${INVOKEAI_ROOT} \
${IMAGE} -c "python scripts/invoke.py"
# Run the container with the runtime dir mounted and open a bash shell
shell:
docker run --rm -it --runtime=nvidia --gpus=all \
-v ${HOST_MOUNT_PATH}:${INVOKEAI_ROOT} ${IMAGE} --
.PHONY: build configure web cli shell

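A typical first run with this Makefile, as a sketch: it assumes the NVIDIA container runtime is installed, and that you call `make` from inside `docker-build/`, since the `build` target references the parent directory as its build context.

```bash
cd docker-build
make build      # builds local/invokeai:latest from Dockerfile.cloud
make configure  # one-time model download into ${HOST_MOUNT_PATH} on the host
make web        # serves the web UI on http://localhost:9090
```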
35
docker-build/build.sh Executable file
View File

@@ -0,0 +1,35 @@
#!/usr/bin/env bash
set -e
# How to use: https://invoke-ai.github.io/InvokeAI/installation/INSTALL_DOCKER/#setup
source ./docker-build/env.sh || {
    echo "please execute docker-build/build.sh from repository root"
    exit 1
}
PIP_REQUIREMENTS=${PIP_REQUIREMENTS:-requirements-lin-cuda.txt}
DOCKERFILE=${INVOKE_DOCKERFILE:-docker-build/Dockerfile}
# print the settings
echo -e "You are using these values:\n"
echo -e "Dockerfile:\t ${DOCKERFILE}"
echo -e "Requirements:\t ${PIP_REQUIREMENTS}"
echo -e "Volumename:\t ${VOLUMENAME}"
echo -e "arch:\t\t ${ARCH}"
echo -e "Platform:\t ${PLATFORM}"
echo -e "Invokeai_tag:\t ${INVOKEAI_TAG}\n"
if [[ -n "$(docker volume ls -f name="${VOLUMENAME}" -q)" ]]; then
echo -e "Volume already exists\n"
else
echo -n "createing docker volume "
docker volume create "${VOLUMENAME}"
fi
# Build Container
docker build \
--platform="${PLATFORM}" \
--tag="${INVOKEAI_TAG}" \
--build-arg="PIP_REQUIREMENTS=${PIP_REQUIREMENTS}" \
--file="${DOCKERFILE}" \
.

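Because the script reads its settings from the environment, variant builds need no edits. For example (a sketch; `requirements-lin-amd.txt` is assumed to exist under `environments-and-requirements/` and may be named differently in your checkout):

```bash
# build from the cloud-oriented Dockerfile instead of the default
INVOKE_DOCKERFILE=docker-build/Dockerfile.cloud ./docker-build/build.sh

# bake a different requirements file into the image
PIP_REQUIREMENTS=requirements-lin-amd.txt ./docker-build/build.sh
```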
10
docker-build/env.sh Normal file
View File

@@ -0,0 +1,10 @@
#!/usr/bin/env bash
# Variables shared by build.sh and run.sh
REPOSITORY_NAME=${REPOSITORY_NAME:-$(basename "$(git rev-parse --show-toplevel)")}
VOLUMENAME=${VOLUMENAME:-${REPOSITORY_NAME,,}_data}
ARCH=${ARCH:-$(uname -m)}
PLATFORM=${PLATFORM:-Linux/${ARCH}}
CONTAINER_FLAVOR=${CONTAINER_FLAVOR:-cuda}
INVOKEAI_BRANCH=$(git branch --show-current)
INVOKEAI_TAG=${REPOSITORY_NAME,,}-${CONTAINER_FLAVOR}:${INVOKEAI_TAG:-${INVOKEAI_BRANCH/\//-}}

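Since `build.sh` and `run.sh` both source this file, the derived values can be inspected before committing to a build. A minimal sketch, run from the repository root:

```bash
export CONTAINER_FLAVOR=rocm
source docker-build/env.sh
echo "volume: ${VOLUMENAME}"   # e.g. invokeai_data
echo "tag:    ${INVOKEAI_TAG}" # e.g. invokeai-rocm:main
```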
31
docker-build/run.sh Executable file
View File

@@ -0,0 +1,31 @@
#!/usr/bin/env bash
set -e
# How to use: https://invoke-ai.github.io/InvokeAI/installation/INSTALL_DOCKER/#run-the-container
# IMPORTANT: You need to have a token on huggingface.co to be able to download the checkpoints!!!
source ./docker-build/env.sh || {
    echo "please run from repository root"
    exit 1
}
# check if HUGGINGFACE_TOKEN is available
# You must have accepted the terms of use for required models
HUGGINGFACE_TOKEN=${HUGGINGFACE_TOKEN:?Please set your token for Huggingface as HUGGINGFACE_TOKEN}
echo -e "You are using these values:\n"
echo -e "Volumename:\t ${VOLUMENAME}"
echo -e "Invokeai_tag:\t ${INVOKEAI_TAG}\n"
docker run \
--interactive \
--tty \
--rm \
--platform="$PLATFORM" \
--name="${REPOSITORY_NAME,,}" \
--hostname="${REPOSITORY_NAME,,}" \
--mount="source=$VOLUMENAME,target=/data" \
--env="HUGGINGFACE_TOKEN=${HUGGINGFACE_TOKEN}" \
--publish=9090:9090 \
--cap-add=sys_nice \
${GPU_FLAGS:+--gpus=${GPU_FLAGS}} \
"$INVOKEAI_TAG" ${1:+$@}

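Putting the pieces together, a typical invocation might look like the following sketch; the token is a placeholder, and `GPU_FLAGS=all` expands to `--gpus=all` via the conditional above:

```bash
export HUGGINGFACE_TOKEN="<your-huggingface-token>"

# run with all GPUs and the image's default CMD (--web --host=0.0.0.0)
GPU_FLAGS=all ./docker-build/run.sh

# or override the arguments passed to the container's entrypoint
GPU_FLAGS=all ./docker-build/run.sh --web --host 0.0.0.0
```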
View File

@@ -1,112 +0,0 @@
# syntax=docker/dockerfile:1
# Maintained by Matthias Wild <mauwii@outlook.de>
ARG PYTHON_VERSION=3.9
##################
### base image ###
##################
FROM python:${PYTHON_VERSION}-slim AS python-base
# Install necessary packages
RUN \
--mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
apt-get update \
&& apt-get install -y \
--no-install-recommends \
libgl1-mesa-glx=20.3.* \
libglib2.0-0=2.66.* \
libopencv-dev=4.5.* \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# set working directory and path
ARG APPDIR=/usr/src
ARG APPNAME=InvokeAI
WORKDIR ${APPDIR}
ENV PATH=${APPDIR}/${APPNAME}/bin:$PATH
######################
### build frontend ###
######################
FROM node:lts as frontend-builder
# Copy Sources
ARG APPDIR=/usr/src
WORKDIR ${APPDIR}
COPY --link . .
# install dependencies and build frontend
WORKDIR ${APPDIR}/invokeai/frontend
RUN \
--mount=type=cache,target=/usr/local/share/.cache/yarn/v6 \
yarn install \
--prefer-offline \
--frozen-lockfile \
--non-interactive \
--production=false \
&& yarn build
###################################
### install python dependencies ###
###################################
FROM python-base AS pyproject-builder
# Install dependencies
RUN \
--mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
apt-get update \
&& apt-get install -y \
--no-install-recommends \
gcc=4:10.2.* \
python3-dev=3.9.* \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# create virtual environment
RUN python3 -m venv "${APPNAME}" \
--upgrade-deps
# copy sources
COPY --from=frontend-builder ${APPDIR} .
# install pyproject.toml
ARG PIP_EXTRA_INDEX_URL
ENV PIP_EXTRA_INDEX_URL ${PIP_EXTRA_INDEX_URL}
RUN --mount=type=cache,target=/root/.cache/pip,sharing=locked \
"${APPDIR}/${APPNAME}/bin/pip" install \
--use-pep517 \
.
#####################
### runtime image ###
#####################
FROM python-base AS runtime
# setup environment
COPY --from=pyproject-builder ${APPDIR}/${APPNAME} ${APPDIR}/${APPNAME}
ENV INVOKEAI_ROOT=/data
ENV INVOKE_MODEL_RECONFIGURE="--yes --default_only"
# build patchmatch
RUN \
--mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
apt-get update \
&& apt-get install -y \
--no-install-recommends \
build-essential=12.9 \
&& PYTHONDONTWRITEBYTECODE=1 \
python3 -c "from patchmatch import patch_match" \
&& apt-get remove -y \
--autoremove \
build-essential \
&& apt-get autoclean \
&& rm -rf /var/lib/apt/lists/*
# set Entrypoint and default CMD
ENTRYPOINT [ "invokeai" ]
CMD [ "--web", "--host=0.0.0.0" ]
VOLUME [ "/data" ]

View File

@@ -1,43 +0,0 @@
#!/usr/bin/env bash
set -e
# How to use: https://invoke-ai.github.io/InvokeAI/installation/INSTALL_DOCKER/#setup
# Some possible pip extra-index urls (cuda 11.7 is available without extra url):
# CUDA 11.6: https://download.pytorch.org/whl/cu116
# ROCm 5.2: https://download.pytorch.org/whl/rocm5.2
# CPU: https://download.pytorch.org/whl/cpu
# as found on https://pytorch.org/get-started/locally/
SCRIPTDIR=$(dirname "$0")
cd "$SCRIPTDIR" || exit 1
source ./env.sh
DOCKERFILE=${INVOKE_DOCKERFILE:-Dockerfile}
# print the settings
echo -e "You are using these values:\n"
echo -e "Dockerfile: \t${DOCKERFILE}"
echo -e "index-url: \t${PIP_EXTRA_INDEX_URL:-none}"
echo -e "Volumename: \t${VOLUMENAME}"
echo -e "Platform: \t${PLATFORM}"
echo -e "Registry: \t${CONTAINER_REGISTRY}"
echo -e "Repository: \t${CONTAINER_REPOSITORY}"
echo -e "Container Tag: \t${CONTAINER_TAG}"
echo -e "Container Image: ${CONTAINER_IMAGE}\n"
# Create docker volume
if [[ -n "$(docker volume ls -f name="${VOLUMENAME}" -q)" ]]; then
echo -e "Volume already exists\n"
else
echo -n "createing docker volume "
docker volume create "${VOLUMENAME}"
fi
# Build Container
docker build \
--platform="${PLATFORM}" \
--tag="${CONTAINER_IMAGE}" \
${PIP_EXTRA_INDEX_URL:+--build-arg="PIP_EXTRA_INDEX_URL=${PIP_EXTRA_INDEX_URL}"} \
--file="${DOCKERFILE}" \
..

View File

@@ -1,35 +0,0 @@
#!/usr/bin/env bash
if [[ -z "$PIP_EXTRA_INDEX_URL" ]]; then
# Decide which container flavor to build if not specified
if [[ -z "$CONTAINER_FLAVOR" ]]; then
# Check for CUDA and ROCm
CUDA_AVAILABLE=$(python -c "import torch;print(torch.cuda.is_available())")
ROCM_AVAILABLE=$(python -c "import torch;print(torch.version.hip is not None)")
if [[ "$(uname -s)" != "Darwin" && "${CUDA_AVAILABLE}" == "True" ]]; then
CONTAINER_FLAVOR=cuda
elif [[ "$(uname -s)" != "Darwin" && "${ROCM_AVAILABLE}" == "True" ]]; then
CONTAINER_FLAVOR="rocm"
else
CONTAINER_FLAVOR="cpu"
fi
fi
# Set PIP_EXTRA_INDEX_URL based on container flavor
if [[ "$CONTAINER_FLAVOR" == "rocm" ]]; then
PIP_EXTRA_INDEX_URL="${PIP_EXTRA_INDEX_URL-"https://download.pytorch.org/whl/rocm"}"
elif [[ "$CONTAINER_FLAVOR" == "cpu" ]]; then
PIP_EXTRA_INDEX_URL="${PIP_EXTRA_INDEX_URL-"https://download.pytorch.org/whl/cpu"}"
fi
fi
# Variables shared by build.sh and run.sh
REPOSITORY_NAME="${REPOSITORY_NAME-$(basename "$(git rev-parse --show-toplevel)")}"
VOLUMENAME="${VOLUMENAME-"${REPOSITORY_NAME,,}_data"}"
ARCH="${ARCH-$(uname -m)}"
PLATFORM="${PLATFORM-Linux/${ARCH}}"
INVOKEAI_BRANCH="${INVOKEAI_BRANCH-$(git branch --show)}"
CONTAINER_REGISTRY="${CONTAINER_REGISTRY-"ghcr.io"}"
CONTAINER_REPOSITORY="${CONTAINER_REPOSITORY-"$(whoami)/${REPOSITORY_NAME}"}"
CONTAINER_TAG="${CONTAINER_TAG-"${INVOKEAI_BRANCH##*/}-${CONTAINER_FLAVOR}"}"
CONTAINER_IMAGE="${CONTAINER_REGISTRY}/${CONTAINER_REPOSITORY}:${CONTAINER_TAG}"
CONTAINER_IMAGE="${CONTAINER_IMAGE,,}"

View File

@@ -1,31 +0,0 @@
#!/usr/bin/env bash
set -e
# How to use: https://invoke-ai.github.io/InvokeAI/installation/INSTALL_DOCKER/#run-the-container
# IMPORTANT: You need to have a token on huggingface.co to be able to download the checkpoints!!!
SCRIPTDIR=$(dirname "$0")
cd "$SCRIPTDIR" || exit 1
source ./env.sh
echo -e "You are using these values:\n"
echo -e "Volumename:\t${VOLUMENAME}"
echo -e "Invokeai_tag:\t${CONTAINER_IMAGE}"
echo -e "local Models:\t${MODELSPATH:-unset}\n"
docker run \
--interactive \
--tty \
--rm \
--platform="${PLATFORM}" \
--name="${REPOSITORY_NAME,,}" \
--hostname="${REPOSITORY_NAME,,}" \
--mount=source="${VOLUMENAME}",target=/data \
${MODELSPATH:+-u "$(id -u):$(id -g)"} \
${MODELSPATH:+--mount="type=bind,source=${MODELSPATH},target=/data/models"} \
${HUGGING_FACE_HUB_TOKEN:+--env="HUGGING_FACE_HUB_TOKEN=${HUGGING_FACE_HUB_TOKEN}"} \
--publish=9090:9090 \
--cap-add=sys_nice \
${GPU_FLAGS:+--gpus="${GPU_FLAGS}"} \
"${CONTAINER_IMAGE}" ${1:+$@}

View File

@@ -4,108 +4,6 @@ title: Changelog
# :octicons-log-16: **Changelog**
## v2.3.0 <small>(15 January 2023)</small>
**Transition to diffusers**
Version 2.3 provides support for both the traditional `.ckpt` weight
checkpoint files and the HuggingFace `diffusers` format. This
introduces several changes you should know about.
1. The models.yaml format has been updated. There are now two
different types of configuration stanza. The traditional ckpt
one will look like this, with a `format` of `ckpt` and a
`weights` field that points to the absolute or ROOTDIR-relative
location of the ckpt file.
```
inpainting-1.5:
description: RunwayML SD 1.5 model optimized for inpainting (4.27 GB)
repo_id: runwayml/stable-diffusion-inpainting
format: ckpt
width: 512
height: 512
weights: models/ldm/stable-diffusion-v1/sd-v1-5-inpainting.ckpt
config: configs/stable-diffusion/v1-inpainting-inference.yaml
vae: models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
```
A configuration stanza for a diffusers model hosted at HuggingFace will look like this,
with a `format` of `diffusers` and a `repo_id` that points to the
repository ID of the model on HuggingFace:
```
stable-diffusion-2.1:
description: Stable Diffusion version 2.1 diffusers model (5.21 GB)
repo_id: stabilityai/stable-diffusion-2-1
format: diffusers
```
A configuration stanza for a diffusers model stored locally should
look like this, with a `format` of `diffusers`, but a `path` field
that points at the directory that contains `model_index.json`:
```
waifu-diffusion:
description: Latest waifu diffusion 1.4
format: diffusers
path: models/diffusers/hakurei-waifu-diffusion-1.4
```
2. In order of precedence, InvokeAI will now use HF_HOME, then
XDG_CACHE_HOME, then finally default to `ROOTDIR/models` to
store HuggingFace diffusers models.
Consequently, the format of the models directory has changed to
mimic the HuggingFace cache directory. When HF_HOME and XDG_CACHE_HOME
are not set, diffusers models are now automatically downloaded
and retrieved from the directory `ROOTDIR/models/diffusers`,
while other models are stored in the directory
`ROOTDIR/models/hub`. This organization is the same as that used
by HuggingFace for its cache management.
This allows you to share diffusers and ckpt model files easily with
other machine learning applications that use the HuggingFace
libraries. To do this, set the environment variable HF_HOME
before starting up InvokeAI to tell it what directory to
cache models in. To tell InvokeAI to use the standard HuggingFace
cache directory, you would set HF_HOME like this (Linux/Mac):
`export HF_HOME=~/.cache/huggingface`
Both HuggingFace and InvokeAI will fall back to the XDG_CACHE_HOME
environment variable if HF_HOME is not set; this path
takes precedence over `ROOTDIR/models` to allow for the same sharing
with other machine learning applications that use HuggingFace
libraries.
3. If you upgrade to InvokeAI 2.3.* from an earlier version, there
will be a one-time migration from the old models directory format
to the new one. You will see a message about this the first time
you start `invoke.py`.
4. Both the front and back ends of the model manager have been
rewritten to accommodate diffusers. You can import models using
their local file path, using their URLs, or their HuggingFace
repo_ids. On the command line, all these syntaxes work:
```
!import_model stabilityai/stable-diffusion-2-1-base
!import_model /opt/sd-models/sd-1.4.ckpt
!import_model https://huggingface.co/Fictiverse/Stable_Diffusion_PaperCut_Model/blob/main/PaperCut_v1.ckpt
```
**KNOWN BUGS (15 January 2023)**
1. On CUDA systems, the 768 pixel stable-diffusion-2.0 and
stable-diffusion-2.1 models can only be run as `diffusers` models
when the `xformers` library is installed and configured. Without
`xformers`, InvokeAI returns black images.
2. Inpainting and outpainting have regressed in quality.
Both these issues are being actively worked on.
## v2.2.4 <small>(11 December 2022)</small>
**the `invokeai` directory**
@@ -196,7 +94,7 @@ the desired release's zip file, which you can find by clicking on the green
This point release removes references to the binary installer from the
installation guide. The binary installer is not stable at the current
time. First time users are encouraged to use the "source" installer as
described in [Installing InvokeAI with the Source Installer](installation/deprecated_documentation/INSTALL_SOURCE.md)
described in [Installing InvokeAI with the Source Installer](installation/INSTALL_SOURCE.md)
With InvokeAI 2.2, this project now provides enthusiasts and professionals a
robust workflow solution for creating AI-generated and human facilitated

Binary file not shown.

Before

Width:  |  Height:  |  Size: 142 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 124 KiB

View File

@@ -136,7 +136,7 @@ mixture of both using any of the accepted command switch formats:
# InvokeAI initialization file
# This is the InvokeAI initialization file, which contains command-line default values.
# Feel free to edit. If anything goes wrong, you can re-initialize this file by deleting
# or renaming it and then running invokeai-configure again.
# or renaming it and then running configure_invokeai.py again.
# The --root option below points to the folder in which InvokeAI stores its models, configs and outputs.
--root="/Users/mauwii/invokeai"

View File

@@ -51,7 +51,7 @@ You can also combine styles and concepts:
If you used an installer to install InvokeAI, you may have already set a HuggingFace token.
If you skipped this step, you can:
- run the InvokeAI configuration script again (if you used a manual installer): `invokeai-configure`
- run the InvokeAI configuration script again (if you used a manual installer): `scripts/configure_invokeai.py`
- set one of the `HUGGINGFACE_TOKEN` or `HUGGING_FACE_HUB_TOKEN` environment variables to contain your token
Finally, if you already used any HuggingFace library on your computer, you might already have a token

View File

@@ -1,76 +0,0 @@
---
title: Model Merging
---
# :material-image-off: Model Merging
## How to Merge Models
As of version 2.3, InvokeAI comes with a script that allows you to
merge two or three diffusers-type models into a new merged model. The
resulting model will combine characteristics of the originals, and can
be used to teach an old model new tricks.
You may run the merge script by starting the invoke launcher
(`invoke.sh` or `invoke.bat`) and choosing the option for _merge
models_. This will launch a text-based interactive user interface that
prompts you to select the models to merge, how to merge them, and the
merged model name.
Alternatively you may activate InvokeAI's virtual environment from the
command line, and call the script via `merge_models --gui` to open up
a version that has a nice graphical front end. To get the commandline-
only version, omit `--gui`.
The user interface for the text-based interactive script is
straightforward. It shows you a series of setting fields. Use control-N (^N)
to move to the next field, and control-P (^P) to move to the previous
one. You can also use TAB and shift-TAB to move forward and
backward. Once you are in a multiple choice field, use the up and down
cursor arrows to move to your desired selection, and press <SPACE> or
<ENTER> to select it. Change text fields by typing in them, and adjust
scrollbars using the left and right arrow keys.
Once you are happy with your settings, press the OK button. Note that
there may be two pages of settings, depending on the height of your
screen, and the OK button may be on the second page. Advance past the
last field of the first page to get to the second page, and reverse
this to get back.
If the merge runs successfully, it will create a new diffusers model
under the selected name and register it with InvokeAI.
## The Settings
* Model Selection -- there are three multiple choice fields that
display all the diffusers-style models that InvokeAI knows about.
If you do not see the model you are looking for, then it is probably
a legacy checkpoint model and needs to be converted using the
`invoke` command-line client and its `!optimize` command. You
must select at least two models to merge. The third can be left at
"None" if you desire.
* Alpha -- This is the ratio to use when combining models. It ranges
from 0 to 1. The higher the value, the more weight is given to the
2nd and (optionally) 3rd models. So if you have two models named "A"
and "B", an alpha value of 0.25 will give you a merged model that is
75% A and 25% B (see the formula sketch after this list).
* Interpolation Method -- This is the method used to combine
weights. The options are "weighted_sum" (the default), "sigmoid",
"inv_sigmoid" and "add_difference". Each produces slightly different
results. When three models are in use, only "add_difference" is
available. (TODO: cite a reference that describes what these
interpolation methods actually do and how to decide among them).
* Force -- Not all models are compatible with each other. The merge
script will check for compatibility and refuse to merge ones that
are incompatible. Set this checkbox to try merging anyway.
* Name for merged model - This is the name for the new model. Please
use InvokeAI conventions - only alphanumeric letters and the
characters ".+-".
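For intuition, a sketch of what the default `weighted_sum` method computes, assuming it is a plain linear interpolation of the corresponding model weights (the other methods apply different blending curves to the same inputs):

$$W_{\text{merged}} = (1 - \alpha)\,W_A + \alpha\,W_B$$

With alpha = 0.25 this gives the 75% A / 25% B blend described above.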
## Caveats
This is a new script and may contain bugs.

View File

@@ -120,7 +120,7 @@ A number of caveats:
(`--iterations`) argument.
3. Your results will be _much_ better if you use the `inpaint-1.5` model
released by runwayML and installed by default by `invokeai-configure`.
released by runwayML and installed by default by `scripts/configure_invokeai.py`.
This model was trained specifically to harmoniously fill in image gaps. The
standard model will work as well, but you may notice color discontinuities at
the border.

View File

@@ -28,11 +28,11 @@ should "just work" without further intervention. Simply pass the `--upscale`
the popup in the Web GUI.
**GFPGAN** requires a series of downloadable model files to work. These are
loaded when you run `invokeai-configure`. If GFPGAN is failing with an
loaded when you run `scripts/configure_invokeai.py`. If GFPGAN is failing with an
error, please run the following from the InvokeAI directory:
```bash
invokeai-configure
python scripts/configure_invokeai.py
```
If you do not run this script in advance, the GFPGAN module will attempt to
@@ -106,7 +106,7 @@ This repo also allows you to perform face restoration using
[CodeFormer](https://github.com/sczhou/CodeFormer).
In order to setup CodeFormer to work, you need to download the models like with
GFPGAN. You can do this either by running `invokeai-configure` or by manually
GFPGAN. You can do this either by running `configure_invokeai.py` or by manually
downloading the
[model file](https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/codeformer.pth)
and saving it to `ldm/invoke/restoration/codeformer/weights` folder.

View File

@@ -239,24 +239,28 @@ Generate an image with a given prompt, record the seed of the image, and then
use the `prompt2prompt` syntax to substitute words in the original prompt for
words in a new prompt. This works for `img2img` as well.
For example, consider the prompt `a cat.swap(dog) playing with a ball in the forest`. Normally, because of the way words interact with each other during stable diffusion image generation, these two prompts would generate different compositions:
- `a cat playing with a ball in the forest`
- `a dog playing with a ball in the forest`
| `a cat playing with a ball in the forest` | `a dog playing with a ball in the forest` |
| --- | --- |
| img | img |
- For multiple word swaps, use parentheses: `a (fluffy cat).swap(barking dog) playing with a ball in the forest`.
- To swap a comma, use quotes: `a ("fluffy, grey cat").swap("big, barking dog") playing with a ball in the forest`.
- Supports options `t_start` and `t_end` (each 0-1) loosely corresponding to bloc97's `prompt_edit_tokens_start/_end` but with the math swapped to make it easier to
intuitively understand. `t_start` and `t_end` are used to control on which steps cross-attention control should run. With the default values `t_start=0` and `t_end=1`, cross-attention control is active on every step of image generation. Other values can be used to turn cross-attention control off for part of the image generation process.
- For example, if doing a diffusion with 10 steps for the prompt is `a cat.swap(dog, t_start=0.3, t_end=1.0) playing with a ball in the forest`, the first 3 steps will be run as `a cat playing with a ball in the forest`, while the last 7 steps will run as `a dog playing with a ball in the forest`, but the pixels that represent `dog` will be locked to the pixels that would have represented `cat` if the `cat` prompt had been used instead.
- Conversely, for `a cat.swap(dog, t_start=0, t_end=0.7) playing with a ball in the forest`, the first 7 steps will run as `a dog playing with a ball in the forest` with the pixels that represent `dog` locked to the same pixels that would have represented `cat` if the `cat` prompt was being used instead. The final 3 steps will just run `a cat playing with a ball in the forest`.
> For img2img, the step sequence does not start at 0 but instead at `(1.0-strength)` - so if the img2img `strength` is `0.7`, `t_start` and `t_end` must both be greater than `0.3` (`1.0-0.7`) to have any effect.
Prompt2prompt `.swap()` is not compatible with xformers, which will be temporarily disabled when doing a `.swap()` - so you should expect to use more VRAM and run slower than with xformers enabled.
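As a concrete illustration, a hypothetical CLI session applying the swap described above (the seed and step count are arbitrary):

```bash
invoke> "a cat.swap(dog, t_start=0.3, t_end=1.0) playing with a ball in the forest" -s 50 -S 1234567
```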
- `a ("fluffy cat").swap("smiling dog") eating a hotdog`.
- quotes optional: `a (fluffy cat).swap(smiling dog) eating a hotdog`.
- for single word substitutions parentheses are also optional:
`a cat.swap(dog) eating a hotdog`.
- Supports options `s_start`, `s_end`, `t_start`, `t_end` (each 0-1) loosely
corresponding to bloc97's `prompt_edit_spatial_start/_end` and
`prompt_edit_tokens_start/_end` but with the math swapped to make it easier to
intuitively understand.
- Example usage:`a (cat).swap(dog, s_end=0.3) eating a hotdog` - the `s_end`
argument means that the "spatial" (self-attention) edit will stop having any
effect after 30% (=0.3) of the steps have been done, leaving Stable
Diffusion with 70% of the steps where it is free to decide for itself how to
reshape the cat-form into a dog form.
- The numbers represent a percentage through the step sequence where the edits
should happen. 0 means the start (noisy starting image), 1 is the end (final
image).
- For img2img, the step sequence does not start at 0 but instead at
(1-strength) - so if strength is 0.7, s_start and s_end must both be
greater than 0.3 (1-0.7) to have any effect.
- Convenience option `shape_freedom` (0-1) to specify how much "freedom" Stable
Diffusion should have to change the shape of the subject being swapped.
- `a (cat).swap(dog, shape_freedom=0.5) eating a hotdog`.
The `prompt2prompt` code is based off
[bloc97's colab](https://github.com/bloc97/CrossAttentionControl).

View File

@@ -10,261 +10,83 @@ You may personalize the generated images to provide your own styles or objects
by training a new LDM checkpoint and introducing a new vocabulary to the fixed
model as a (.pt) embeddings file. Alternatively, you may use or train
HuggingFace Concepts embeddings files (.bin) from
<https://huggingface.co/sd-concepts-library> and its associated
notebooks.
<https://huggingface.co/sd-concepts-library> and its associated notebooks.
## **Hardware and Software Requirements**
## **Training**
You will need a GPU to perform training in a reasonable length of
time, and at least 12 GB of VRAM. We recommend using the [`xformers`
library](../installation/070_INSTALL_XFORMERS) to accelerate the
training process further. During training, about ~8 GB is temporarily
needed in order to store intermediate models, checkpoints and logs.
To train, prepare a folder that contains images sized at 512x512 and execute the
following:
## **Preparing for Training**
### WINDOWS
To train, prepare a folder that contains 3-5 images that illustrate
the object or concept. It is good to provide a variety of examples or
poses to avoid overtraining the system. Format these images as PNG
(preferred) or JPG. You do not need to resize or crop the images in
advance, but for more control you may wish to do so.
As the default backend is not available on Windows, if you're using that
platform, set the environment variable `PL_TORCH_DISTRIBUTED_BACKEND` to `gloo`.
Place the training images in a directory on the machine InvokeAI runs
on. We recommend placing them in a subdirectory of the
`text-inversion-training-data` folder located in the InvokeAI root
directory, ordinarily `~/invokeai` (Linux/Mac), or
`C:\Users\your_name\invokeai` (Windows). For example, to create an
embedding for the "psychedelic" style, you'd place the training images
into the directory
`~/invokeai/text-inversion-training-data/psychedelic`.
## **Launching Training Using the Console Front End**
InvokeAI 2.3 and higher comes with a text console-based training front
end. From within the `invoke.sh`/`invoke.bat` Invoke launcher script,
start the front end by selecting choice (3):
```sh
Do you want to generate images using the
1. command-line
2. browser-based UI
3. textual inversion training
4. open the developer console
Please enter 1, 2, 3, or 4: [1] 3
```
```bash
python3 ./main.py -t \
--base ./configs/stable-diffusion/v1-finetune.yaml \
--actual_resume ./models/ldm/stable-diffusion-v1/model.ckpt \
-n my_cat \
--gpus 0 \
--data_root D:/textual-inversion/my_cat \
--init_word 'cat'
```
From the command line, with the InvokeAI virtual environment active,
you can launch the front end with the command `textual_inversion
--gui`.
During the training process, files will be created in
`/logs/[project][time][project]/` where you can see the process.
This will launch a text-based front end that will look like this:
Conditioning contains the training prompt inputs; reconstruction contains the
input images for the training epoch; and samples contains scaled samples of the
prompt, including one generated with the init word provided.
<figure markdown>
![ti-frontend](../assets/textual-inversion/ti-frontend.png)
</figure>
On an RTX 3090, the process for SD will take ~1h @1.6 iterations/sec.
The interface is keyboard-based. Move from field to field using
control-N (^N) to move to the next field and control-P (^P) to the
previous one. <Tab> and <shift-TAB> work as well. Once a field is
active, use the cursor keys. In a checkbox group, use the up and down
cursor keys to move from choice to choice, and <space> to select a
choice. In a scrollbar, use the left and right cursor keys to increase
and decrease the value of the scroll. In textfields, type the desired
values.
!!! note
The number of parameters may look intimidating, but in most cases the
predefined defaults work fine. The red circled fields in the above
illustration are the ones you will adjust most frequently.
According to the associated paper, the optimal number of
images is 3-5. Your model may not converge if you use more images than
that.
### Model Name
Training will run indefinitely, but you may wish to stop it (with ctrl-c) before
the heat death of the universe, when you find a low loss epoch or around ~5000
iterations. Note that you can set a fixed limit on the number of training steps
by decreasing the "max_steps" option in
configs/stable-diffusion/v1-finetune.yaml (currently set to 4000000)
This will list all the diffusers models that are currently
installed. Select the one you wish to use as the basis for your
embedding. Be aware that if you use a SD-1.X-based model for your
training, you will only be able to use this embedding with other
SD-1.X-based models. Similarly, if you train on SD-2.X, you will only
be able to use the embeddings with models based on SD-2.X.
## **Run the Model**
### Trigger Term
Once the model is trained, specify the trained .pt or .bin file when starting
invoke using
This is the prompt term you will use to trigger the embedding. Type a
single word or phrase you wish to use as the trigger, example
"psychedelic" (without angle brackets). Within InvokeAI, you will then
be able to activate the trigger using the syntax `<psychedelic>`.
### Initializer
This is a single character that is used internally during the training
process as a placeholder for the trigger term. It defaults to "*" and
can usually be left alone.
### Resume from last saved checkpoint
As training proceeds, textual inversion will write a series of
intermediate files that can be used to resume training from where it
was left off in the case of an interruption. This checkbox will be
automatically selected if you provide a previously used trigger term
and at least one checkpoint file is found on disk.
Note that as of 20 January 2023, resume does not seem to be working
properly due to an issue with the upstream code.
### Data Training Directory
This is the location of the images to be used for training. When you
select a trigger term like "my-trigger", the frontend will prepopulate
this field with `~/invokeai/text-inversion-training-data/my-trigger`,
but you can change the path to wherever you want.
### Output Destination Directory
This is the location of the logs, checkpoint files, and embedding
files created during training. When you select a trigger term like
"my-trigger", the frontend will prepopulate this field with
`~/invokeai/text-inversion-output/my-trigger`, but you can change the
path to wherever you want.
### Image resolution
The images in the training directory will be automatically scaled to
the value you use here. For best results, you will want to use the
same default resolution of the underlying model (512 pixels for
SD-1.5, 768 for the larger version of SD-2.1).
### Center crop images
If this is selected, your images will be center cropped to make them
square before resizing them to the desired resolution. Center cropping
can indiscriminately cut off the top of subjects' heads for portrait
aspect images, so if you have images like this, you may wish to use a
photoeditor to manually crop them to a square aspect ratio.
### Mixed precision
Select the floating point precision for the embedding. "no" will
result in a full 32-bit precision, "fp16" will provide 16-bit
precision, and "bf16" will provide mixed precision (only available
when XFormers is used).
### Max training steps
How many steps the training will take before the model converges. Most
training sets will converge with 2000-3000 steps.
### Batch size
This adjusts how many training images are processed simultaneously in
each step. Higher values will cause the training process to run more
quickly, but use more memory. The default size will run with GPUs with
as little as 12 GB.
### Learning rate
The rate at which the system adjusts its internal weights during
training. Higher values risk overtraining (getting the same image each
time), and lower values will take more steps to train a good
model. The default of 0.0005 is conservative; you may wish to increase
it to 0.005 to speed up training.
### Scale learning rate by number of GPUs, steps and batch size
If this is selected (the default) the system will adjust the provided
learning rate to improve performance.
### Use xformers acceleration
This will activate XFormers memory-efficient attention. You need to
have XFormers installed for this to have an effect.
### Learning rate scheduler
This adjusts how the learning rate changes over the course of
training. The default "constant" means to use a constant learning rate
for the entire training session. The other values scale the learning
rate according to various formulas.
Only "constant" is supported by the XFormers library.
### Gradient accumulation steps
This is a parameter that allows you to use bigger batch sizes than
your GPU's VRAM would ordinarily accommodate, at the cost of some
performance.
### Warmup steps
If "constant_with_warmup" is selected in the learning rate scheduler,
then this provides the number of warmup steps. Warmup steps have a
very low learning rate, and are one way of preventing early
overtraining.
## The training run
Start the training run by advancing to the OK button (bottom right)
and pressing <enter>. A series of progress messages will be displayed
as the training process proceeds. This may take an hour or two,
depending on settings and the speed of your system. Various log and
checkpoint files will be written into the output directory (ordinarily
`~/invokeai/text-inversion-output/my-model/`).
At the end of successful training, the system will copy the file
`learned_embeds.bin` into the InvokeAI root directory's `embeddings`
directory, using a subdirectory named after the trigger token. For
example, if the trigger token was `psychedelic`, then look for the
embeddings file in
`~/invokeai/embeddings/psychedelic/learned_embeds.bin`
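You can confirm the copy succeeded from a shell (the path assumes the
default root directory):

```bash
ls ~/invokeai/embeddings/psychedelic/learned_embeds.bin
```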
You may now launch InvokeAI and try out a prompt that uses the trigger
term. For example `a plate of banana sushi in <psychedelic> style`.
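From the command-line client, that might look like:

```bash
invoke> "a plate of banana sushi in <psychedelic> style"
```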
## **Training with the Command-Line Script**
Training can also be done using a traditional command-line script. It
can be launched from within the "developer's console", or from the
command line after activating InvokeAI's virtual environment.
It accepts a large number of arguments, which can be summarized by
passing the `--help` argument:
```sh
textual_inversion --help
```
Typical usage is shown here:
```sh
textual_inversion \
--model=stable-diffusion-1.5 \
--resolution=512 \
--learnable_property=style \
--initializer_token='*' \
--placeholder_token='<psychedelic>' \
--train_data_dir=/home/lstein/invokeai/training-data/psychedelic \
--output_dir=/home/lstein/invokeai/text-inversion-training/psychedelic \
--scale_lr \
--train_batch_size=8 \
--gradient_accumulation_steps=4 \
--max_train_steps=3000 \
--learning_rate=0.0005 \
--resume_from_checkpoint=latest \
--lr_scheduler=constant \
--mixed_precision=fp16 \
   --only_save_embeds
```

To use the trained embedding with the legacy `invoke.py` script, load
it at launch:

```bash
python3 ./scripts/invoke.py \
   --embedding_path /path/to/embedding.pt
```

Then, to utilize your subject at the invoke prompt:
```bash
invoke> "a photo of *"
```
This also works with image2image:

```bash
invoke> "waterfall and rainbow in the style of *" --init_img=./init-images/crude_drawing.png --strength=0.5 -s100 -n4
```

## Reading

For more information on textual inversion, please see the following
resources:
* The [textual inversion repository](https://github.com/rinongal/textual_inversion) and
associated paper for details and limitations.
* [HuggingFace's textual inversion training
page](https://huggingface.co/docs/diffusers/training/text_inversion)
* [HuggingFace example script
documentation](https://github.com/huggingface/diffusers/tree/main/examples/textual_inversion)
(Note that this script is similar to, but not identical to,
`textual_inversion`, and produces embedding files that are fully
compatible.)
For .pt files it's also possible to train multiple tokens (modify the
placeholder string in `configs/stable-diffusion/v1-finetune.yaml`) and combine
LDM checkpoints using:
```bash
python3 ./scripts/merge_embeddings.py \
--manager_ckpts /path/to/first/embedding.pt \
[</path/to/second/embedding.pt>,[...]] \
--output_path /path/to/output/embedding.pt
```
---

copyright (c) 2023, Lincoln Stein and the InvokeAI Development Team
Credit goes to rinongal and the repository
Please see [the repository](https://github.com/rinongal/textual_inversion) and
associated paper for details and limitations.
@@ -6,70 +6,70 @@ title: WebUI Hotkey List
## App Hotkeys
| Setting | Hotkey |
| --------------- | ------------------ |
| ++ctrl+enter++ | Invoke |
| ++shift+x++ | Cancel |
| ++alt+a++ | Focus Prompt |
| ++o++ | Toggle Options |
| ++shift+o++ | Pin Options |
| ++z++ | Toggle Viewer |
| ++g++ | Toggle Gallery |
| ++f++ | Maximize Workspace |
| ++1++ - ++5++ | Change Tabs |
| ++"`"++ | Toggle Console |
| Setting | Hotkey |
| ----------------- | ------------------ |
| ++"Ctrl\+Enter"++ | Invoke |
| ++"Shift\+X"++ | Cancel |
| ++"Alt\+A"++ | Focus Prompt |
| ++"O"++ | Toggle Options |
| ++"Shift\+O"++ | Pin Options |
| ++"Z"++ | Toggle Viewer |
| ++"G"++ | Toggle Gallery |
| ++"F"++ | Maximize Workspace |
| ++"1-5"++ | Change Tabs |
| ++"`"++ | Toggle Console |
## General Hotkeys
| Setting | Hotkey |
| -------------- | ---------------------- |
| ++p++ | Set Prompt |
| ++s++ | Set Seed |
| ++a++ | Set Parameters |
| ++shift+r++ | Restore Faces |
| ++shift+u++ | Upscale |
| ++i++ | Show Info |
| ++shift+i++ | Send To Image To Image |
| ++del++ | Delete Image |
| ++esc++ | Close Panels |
| Setting | Hotkey |
| --------------- | ---------------------- |
| ++"P"++ | Set Prompt |
| ++"S"++ | Set Seed |
| ++"A"++ | Set Parameters |
| ++"Shift\+R"++ | Restore Faces |
| ++"Shift\+U"++ | Upscale |
| ++"I"++ | Show Info |
| ++"Shift\+I"++ | Send To Image To Image |
| ++"Del"++ | Delete Image |
| ++"Esc"++ | Close Panels |
## Gallery Hotkeys
| Setting | Hotkey |
| ---------------------- | --------------------------- |
| ++arrow-left++ | Previous Image |
| ++arrow-right++ | Next Image |
| ++shift+g++ | Toggle Gallery Pin |
| ++shift+arrow-up++ | Increase Gallery Image Size |
| ++shift+arrow-down++ | Decrease Gallery Image Size |
| Setting | Hotkey |
| ------------------ | --------------------------- |
| ++"Arrow Left"++ | Previous Image |
| ++"Arrow Right"++ | Next Image |
| ++"Shift\+G"++ | Toggle Gallery Pin |
| ++"Shift\+Up"++ | Increase Gallery Image Size |
| ++"Shift\+Down"++ | Decrease Gallery Image Size |
## Unified Canvas Hotkeys
| Setting | Hotkey |
| --------------------------------- | ---------------------- |
| ++b++ | Select Brush |
| ++e++ | Select Eraser |
| ++bracket-left++ | Decrease Brush Size |
| ++bracket-right++ | Increase Brush Size |
| ++shift+bracket-left++ | Decrease Brush Opacity |
| ++shift+bracket-right++ | Increase Brush Opacity |
| ++v++ | Move Tool |
| ++shift+f++ | Fill Bounding Box |
| ++del++ / ++backspace++ | Erase Bounding Box |
| ++c++ | Select Color Picker |
| ++n++ | Toggle Snap |
| ++"Hold Space"++ | Quick Toggle Move |
| ++q++ | Toggle Layer |
| ++shift+c++ | Clear Mask |
| ++h++ | Hide Mask |
| ++shift+h++ | Show/Hide Bounding Box |
| ++shift+m++ | Merge Visible |
| ++shift+s++ | Save To Gallery |
| ++ctrl+c++ | Copy To Clipboard |
| ++shift+d++ | Download Image |
| ++ctrl+z++ | Undo |
| ++ctrl+y++ / ++ctrl+shift+z++ | Redo |
| ++r++ | Reset View |
| ++arrow-left++ | Previous Staging Image |
| ++arrow-right++ | Next Staging Image |
| ++enter++ | Accept Staging Image |
| Setting | Hotkey |
| ------------------------------ | ---------------------- |
| ++"B"++ | Select Brush |
| ++"E"++ | Select Eraser |
| ++"["++ | Decrease Brush Size |
| ++"]"++ | Increase Brush Size |
| ++"Shift\+["++ | Decrease Brush Opacity |
| ++"Shift\+]"++ | Increase Brush Opacity |
| ++"V"++ | Move Tool |
| ++"Shift\+F"++ | Fill Bounding Box |
| ++"Delete/Backspace"++ | Erase Bounding Box |
| ++"C"++ | Select Color Picker |
| ++"N"++ | Toggle Snap |
| ++"Hold Space"++ | Quick Toggle Move |
| ++"Q"++ | Toggle Layer |
| ++"Shift\+C"++ | Clear Mask |
| ++"H"++ | Hide Mask |
| ++"Shift\+H"++ | Show/Hide Bounding Box |
| ++"Shift\+M"++ | Merge Visible |
| ++"Shift\+S"++ | Save To Gallery |
| ++"Ctrl\+C"++ | Copy To Clipboard |
| ++"Shift\+D"++ | Download Image |
| ++"Ctrl\+Z"++ | Undo |
| ++"Ctrl\+Y / Ctrl\+Shift\+Z"++ | Redo |
| ++"R"++ | Reset View |
| ++"Arrow Left"++ | Previous Staging Image |
| ++"Arrow Right"++ | Next Staging Image |
| ++"Enter"++ | Accept Staging Image |
@@ -93,15 +93,9 @@ getting InvokeAI up and running on your system. For alternative installation and
upgrade instructions, please see:
[InvokeAI Installation Overview](installation/)
Users who wish to make use of the **PyPatchMatch** inpainting functions
will need to perform a bit of extra work to enable this
module. Instructions can be found at [Installing
PyPatchMatch](installation/060_INSTALL_PATCHMATCH.md).
If you have an NVIDIA card, you can benefit from the significant
memory savings and performance benefits provided by Facebook Lab's
**xFormers** module. Instructions for Linux and Windows users can be found
at [Installing xFormers](installation/070_INSTALL_XFORMERS.md).
Linux users who wish to make use of the PyPatchMatch inpainting functions will
need to perform a bit of extra work to enable this module. Instructions can be
found at [Installing PyPatchMatch](installation/060_INSTALL_PATCHMATCH.md).
## :fontawesome-solid-computer: Hardware Requirements
@@ -157,8 +151,6 @@ images in full-precision mode:
<!-- separator -->
- [Prompt Engineering](features/PROMPTS.md)
<!-- separator -->
- [Model Merging](features/MODEL_MERGING.md)
<!-- separator -->
- Miscellaneous
- [NSFW Checker](features/NSFW.md)
- [Embiggen upscaling](features/EMBIGGEN.md)

Some files were not shown because too many files have changed in this diff.