Compare commits


1399 Commits

Lincoln Stein
650f4bb58c quote output, embedding and autoscan directories in invokeai.init (#2827)
This should prevent the errors that users are seeing with spaces in the
file paths
2023-02-27 00:17:37 -05:00
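A minimal sketch of the quoting idea behind this fix, assuming an init file whose lines are later split shell-style; the flag names below are illustrative, not the exact invokeai.init keys:

```python
import shlex
from pathlib import Path

def format_init_args(outdir: Path, embedding_dir: Path, autoscan_dir: Path) -> list[str]:
    """Render --arg lines for an init file, quoting paths so embedded spaces survive parsing."""
    return [
        f"--outdir={shlex.quote(str(outdir))}",
        f"--embedding_path={shlex.quote(str(embedding_dir))}",
        f"--autoscan_dir={shlex.quote(str(autoscan_dir))}",
    ]

print("\n".join(format_init_args(
    Path("/home/user/invoke ai/outputs"),
    Path("/home/user/invoke ai/embeddings"),
    Path("/home/user/invoke ai/autoimport"),
)))
```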
Lincoln Stein
7b92b27ceb Merge branch 'v2.3' into bugfix/quote-initfile-paths 2023-02-26 23:54:20 -05:00
Lincoln Stein
8f1b301d01 restore previous naming scheme for sd-2.x models: (#2820)
- stable-diffusion-2.1-base base model from
stabilityai/stable-diffusion-2-1-base

- stable-diffusion-2.1-768 768 pixel model from
stabilityai/stable-diffusion-2-1-768

- sd-inpainting-2.0 512 pixel inpainting model from
runwayml/stable-diffusion-inpainting

This PR also bumps the version number up to v2.3.1.post2
2023-02-26 23:54:06 -05:00
Lincoln Stein
e3a19d4f3e quote output, embedding and autoscan directories in invokeai.init
- this should prevent the errors that users are seeing with
  spaces in the file paths
2023-02-26 23:02:18 -05:00
Lincoln Stein
ecbb385447 bump version number 2023-02-26 16:11:07 -05:00
Lincoln Stein
741464b053 restore previous naming scheme for sd-2.x models:
- stable-diffusion-2.1-base
  base model from stabilityai/stable-diffusion-2-1-base

- stable-diffusion-2.1-768
  768 pixel model from stabilityai/stable-diffusion-2-1-768

- sd-inpainting-2.0
  512 pixel inpainting model from runwayml/stable-diffusion-inpainting
2023-02-26 15:31:43 -05:00
blessedcoolant
33f832e6ab [ui]: 2.3 hotfixes (#2806)
- Updated Spanish translation
- Updated Portuguese (Brazil) translation
- Fix a number of translation issues and add missing strings
- Fix vertical symmetry and symmetry steps issue when generation steps
are adjusted
2023-02-26 12:30:59 +13:00
psychedelicious
281c788489 chore(ui): build frontend 2023-02-25 14:26:50 +11:00
psychedelicious
3858bef185 fix(ui): clamp symmetry steps to generation steps
Also renamed the variables to `horizontalSymmetrySteps` as `TimePercentage` is not accurate.
2023-02-25 14:26:46 +11:00
psychedelicious
f9a1afd09c fix(ui): fix #2802 vertical symmetry not working 2023-02-25 11:28:17 +11:00
psychedelicious
251e9c0294 fix(ui): add missing strings
Fixes #2797
Fixes #2798
2023-02-25 11:27:47 +11:00
psychedelicious
d8bf2e3c10 fix(ui): fix translation typing, fix strings
I had inadvertently un-safe-d our translation types when migrating to Weblate.

This PR fixes that, and a number of translation string bugs that went unnoticed due to the lack of type safety.
2023-02-25 11:26:35 +11:00
Gabriel Mackievicz Telles
218f30b7d0 translationBot(ui): update translation (Portuguese (Brazil))
Currently translated at 91.8% (431 of 469 strings)

Co-authored-by: Gabriel Mackievicz Telles <telles.gabriel@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/pt_BR/
Translation: InvokeAI/Web UI
2023-02-25 11:13:23 +11:00
Jeff Mahoney
da983c7773 translationBot(ui): added translation (Romanian)
Co-authored-by: Jeff Mahoney <jbmahoney@gmail.com>
2023-02-25 11:13:23 +11:00
gallegonovato
7012e16c43 translationBot(ui): update translation (Spanish)
Currently translated at 100.0% (469 of 469 strings)

Co-authored-by: gallegonovato <fran-carro@hotmail.es>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/es/
Translation: InvokeAI/Web UI
2023-02-25 11:13:23 +11:00
Lincoln Stein
b1050abf7f hotfix for broken merge function (#2801)
Bump version up to accommodate a hotfix on v2.3.1 release.
(model merge functionality was broken)
2023-02-24 15:33:54 -05:00
Lincoln Stein
210998081a use right pep-440 standard version number 2023-02-24 15:14:39 -05:00
Lincoln Stein
604acb9d91 use pep-440 standard version number 2023-02-24 15:07:54 -05:00
Lincoln Stein
5beeb1a897 hotfix for broken merge function 2023-02-24 15:00:22 -05:00
Lincoln Stein
de6304b729 fixes crashes on merge in both WebUI and console (#2800)
- an inadvertent change to the model manager broke the merging functions
- corrected here - will be a hotfix
2023-02-24 14:58:06 -05:00
Lincoln Stein
d0be79c33d fixes crashes on merge in both WebUI and console
- an inadvertent change to the model manager broke the merging functions
- corrected here - will be a hotfix
2023-02-24 14:54:23 -05:00
Lincoln Stein
b4ed8bc47a Merge branch 'main' into v2.3 2023-02-24 10:52:03 -05:00
Lincoln Stein
bd85e00530 Last PR needed for v2.3.1 (#2788)
- Add curated set of starter models based on team discussion. The final
list of starter models can be found in
`invokeai/configs/INITIAL_MODELS.yaml`

- To test model installation, I selected and installed all the models on
the list. This led to my discovering that when there are no more starter
models to display, the console front end crashes. So I made a fix to
this in which the entire starter model selection is no longer shown.

- Update model table in 050_INSTALL_MODELS.md

- Add guide to dealing with low-memory situations
- Version is now `v2.3.1`
2023-02-24 10:31:38 -05:00
Lincoln Stein
4e446130d8 Merge branch 'v2.3' into enhance/curated-2.3.1-models 2023-02-24 10:30:42 -05:00
Lincoln Stein
4c93b514bb bump version to final 2.3.1 2023-02-24 10:04:41 -05:00
Lincoln Stein
d078941316 add low memory troubleshooting guide 2023-02-24 10:04:06 -05:00
Lincoln Stein
230d3a496d document starter models
- add new script `scripts/make_models_markdown_table.py` that parses
  INITIAL_MODELS.yaml and creates markdown table for the model installation
  documentation file

- update 050_INSTALLING_MODELS.md with above table, and add a warning
  about additional license terms that apply to some of the models.
2023-02-24 09:33:07 -05:00
Jonathan
ec2890c19b Run garbage collection to allow the CUDA cache to completely empty. (#2791) 2023-02-24 08:48:54 -05:00
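A generic illustration of the cleanup the commit above describes: `torch.cuda.empty_cache()` can only release blocks whose tensors have already been garbage-collected, so a `gc.collect()` beforehand lets the cache empty completely (sketch, not the project's code):

```python
import gc

import torch

def free_cuda_memory() -> None:
    # Drop unreachable Python references first so their cached CUDA blocks
    # become reclaimable, then return those blocks to the driver.
    gc.collect()
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
```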
Lincoln Stein
a540cc537f add curated set of HuggingFace diffusers models for 2.3.1 release
- Final list can be found in invokeai/configs/INITIAL_MODELS.yaml

- After installing all the models, I discovered a bug in the file
  selection form that caused a crash when no remaining uninstalled
  models remained. So had to fix this.
2023-02-24 00:53:48 -05:00
Lincoln Stein
39c57aa358 fix generate backend to generate "accurate" intermediate images (#2787)
The sample_to_image method in `ldm.invoke.generator.base` was still
using ckpt-era code. As a result, when the WebUI was set to show
"accurate" intermediate images, there'd be a crash. This PR corrects the
problem.

- Closes #2784
- Closes #2775
2023-02-24 00:33:29 -05:00
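For context, an "accurate" intermediate preview means decoding the current latents through the diffusers VAE rather than reusing checkpoint-era approximations. A hedged sketch of that decode step (generic diffusers usage, not the patched `sample_to_image` itself):

```python
import torch
from diffusers import AutoencoderKL
from PIL import Image

@torch.no_grad()
def latents_to_image(vae: AutoencoderKL, latents: torch.Tensor) -> Image.Image:
    # Undo the SD latent scaling, decode, then map [-1, 1] to 8-bit RGB.
    scaling = getattr(vae.config, "scaling_factor", 0.18215)
    sample = vae.decode(latents / scaling).sample[0]
    sample = ((sample / 2 + 0.5).clamp(0, 1) * 255).round()
    return Image.fromarray(sample.permute(1, 2, 0).to(torch.uint8).cpu().numpy())
```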
Lincoln Stein
2d990c1f54 Merge branch 'v2.3' into bugfix/webui-accurate-intermediates 2023-02-23 22:07:18 -05:00
Lincoln Stein
7fb2da8741 fix generate backend to generate "accurate" intermediate images
- Closes #2784
- Closes #2775
2023-02-23 22:03:28 -05:00
Lincoln Stein
c69fcb1c10 fix ckpt_convert module to work with dreambooth v2 models (#2776)
- Discord member @marcus.llewellyn reported that some civitai
2.1-derived checkpoints were not converting properly (probably
dreambooth-generated):
https://discord.com/channels/1020123559063990373/1078386197589655582/1078387806122025070

- @blessedcoolant tracked this down to a missing key that was used to
derive the vector length of the CLIP model, by fetching the second
dimension of the tensor at "cond_stage_model.model.text_projection".

- On inspection, I found that the same second dimension can be recovered
from the key 'cond_stage_model.model.ln_final.bias', and used that instead. I
hope this is correct; tested on multiple v1, v2 and inpainting models
and they converted correctly.

- While debugging this, I found and fixed several other issues:

- model download script was not pre-downloading the OpenCLIP
text_encoder or text_tokenizer. This is fixed.
- got rid of legacy code in `ckpt_to_diffuser.py` and replaced with
calls into `model_manager`
  - more consistent status reporting in the CLI.
2023-02-23 21:51:57 -05:00
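A sketch of the key fallback described above, assuming a checkpoint state dict loaded with `torch.load`; the exact handling in `ckpt_to_diffuser.py` may differ:

```python
def clip_embedding_dim(state_dict: dict) -> int:
    """Infer the CLIP hidden size of a v2 checkpoint.

    Dreambooth-style checkpoints may lack 'cond_stage_model.model.text_projection',
    so fall back to the shape of 'cond_stage_model.model.ln_final.bias'.
    """
    projection = state_dict.get("cond_stage_model.model.text_projection")
    if projection is not None:
        return projection.shape[1]  # second dimension of the projection tensor
    return state_dict["cond_stage_model.model.ln_final.bias"].shape[0]
```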
Lincoln Stein
0982548e1f Merge branch 'v2.3' into bugfix/v2-model-conversion 2023-02-23 21:27:49 -05:00
Matthias Wild
11a29fdc4d fix python 3.9 compatibility (#2780)
without this change, the project can be installed on 3.9 but not used.
This also fixes the container images.

Maybe we should re-enable Python 3.9 checks which would have prevented
this.
2023-02-24 00:49:25 +01:00
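The usual culprit for "installs on 3.9 but won't run" is PEP 604 union syntax (`X | Y`) in annotations that get evaluated at import time, which requires Python 3.10. A hedged example of the kind of change involved (not necessarily the exact lines touched):

```python
# Crashes at import time on Python 3.9 without `from __future__ import annotations`:
#
#     def get_device(device: torch.device | str | None = None) -> torch.device: ...
#
# 3.9-compatible spelling:
from typing import Optional, Union

import torch

def get_device(device: Optional[Union[torch.device, str]] = None) -> torch.device:
    return torch.device(device) if device is not None else torch.device("cpu")
```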
Lincoln Stein
24407048a5 Version 2.3.1-rc4 (#2782)
Just a version bump to use a format recognized by PyPi.
2023-02-23 18:09:43 -05:00
Matthias Wild
a7c2333312 Merge branch 'main' into fix/py39-compatibility 2023-02-23 23:53:38 +01:00
Lincoln Stein
b5b541c747 bump version; use correct format for PyPi 2023-02-23 17:47:36 -05:00
Lincoln Stein
ad6ea02c9c Update main with V2.3 fixes (#2774)
Until the nodes merge happens, we can continue to merge bugfixes from
the 2.3 branch into `main`. This will bring main into sync with
`v2.3.1+rc3`
2023-02-23 17:38:16 -05:00
mauwii
1a6ed85d99 fix typing to be compatible with python 3.9
without this, the project can be installed on 3.9 but not used.
This also fixes the container images.
2023-02-23 23:27:16 +01:00
Lincoln Stein
a094bbd839 push to pypi from branch v2.3 (#2778)
This change will cause releases on the v2.3 branch to be pushed to PyPi.
2023-02-23 17:20:24 -05:00
Lincoln Stein
73dda812ea push to pypi from branch v2.3
This change will cause releases on the v2.3 branch to be pushed
to PyPi.
2023-02-23 16:55:25 -05:00
Lincoln Stein
8eaf1c4033 Revert "(updater) style 'pip' progress to use dark background"
This reverts commit 89239d1c54.

- This was making a subprocess call to 'bash', and hence crashing
  on windows systems!
2023-02-23 16:33:57 -05:00
Lincoln Stein
4f44b64052 fix ckpt_convert module to work with dreambooth v2 models
- Discord member @marcus.llewellyn reported that some civitai 2.1-derived checkpoints were
  not converting properly (probably dreambooth-generated):
  https://discord.com/channels/1020123559063990373/1078386197589655582/1078387806122025070

- @blessedcoolant tracked this down to a missing key that was used to
  derive the vector length of the CLIP model, by fetching the second
  dimension of the tensor at "cond_stage_model.model.text_projection".
  His proposed solution was to hardcode a value of 1024.

- On inspection, I found that the same second dimension can be
  recovered from the key 'cond_stage_model.model.ln_final.bias', and used
  that instead. I hope this is correct; tested on multiple v1, v2 and
  inpainting models and they converted correctly.

- While debugging this, I found and fixed several other issues:

  - model download script was not pre-downloading the OpenCLIP
    text_encoder or text_tokenizer. This is fixed.
  - got rid of legacy code in `ckpt_to_diffuser.py` and replaced
    with calls into `model_manager`
  - more consistent status reporting in the CLI.
2023-02-23 15:43:58 -05:00
Lincoln Stein
c559bf3e10 Add a sanity check to root directory finding algorithm (#2772)
Root directory finding algorithm is:

1) use --root argument
2) use INVOKEAI_ROOT environment variable
3) use VIRTUAL_ENV environment variable
4) use ~/invokeai

Since developers are liable to put virtual environments in their
favorite places, not necessarily in the invokeai root directory, this PR
adds a sanity check that looks for the existence of
`VIRTUAL_ENV/invokeai.init`, and moves on to (4) if not found.
2023-02-23 11:37:11 -05:00
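A minimal sketch of that search order with the new sanity check; the real root-directory logic lives in InvokeAI's globals/argument handling and may differ in detail:

```python
import os
from pathlib import Path
from typing import Optional

def find_root(cli_root: Optional[str] = None) -> Path:
    # 1) explicit --root argument
    if cli_root:
        return Path(cli_root).expanduser()
    # 2) INVOKEAI_ROOT environment variable
    if os.environ.get("INVOKEAI_ROOT"):
        return Path(os.environ["INVOKEAI_ROOT"]).expanduser()
    # 3) VIRTUAL_ENV, but only if it actually looks like an InvokeAI root
    venv = os.environ.get("VIRTUAL_ENV")
    if venv and (Path(venv) / "invokeai.init").exists():
        return Path(venv)
    # 4) fall back to ~/invokeai
    return Path("~/invokeai").expanduser()
```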
Lincoln Stein
a485515bc6 Merge branch 'v2.3' into bugfix/sanity-check-rootdir 2023-02-23 11:14:52 -05:00
Lincoln Stein
2c9b29725b Bugfix/windows install (#2770)
# This will constitute v2.3.1+rc2

## Windows installer enhancements
  
1. resize installer window to give more room for configure and download forms
2. replace '\' with '/' in directory names to allow user to drag-and-drop
   folders into the dialogue boxes that accept directories.
3. similar change in CLI for the !import_model and !convert_model commands
4. better error reporting when a model download fails due to network errors
5. put the launcher scripts into a loop so that menu reappears after
   invokeai, merge script, etc exits. User can quit with "Q".
6. do not try to download fp16 of sd-ft-mse-vae, since it doesn't exist.
7. cleaned up status reporting when installing models
8. Detect when install failed for some reason and print helpful error
   message rather than stack trace.
9. Detect window size and resize to minimum acceptable values to provide
   better display of configure and install forms.
10. Fix a bug in the CLI which prevented diffusers imported by their repo_ids
    from being correctly registered in the current session (though they install
    correctly)
11. Capitalize the "i" in Imported in the autogenerated descriptions.
2023-02-23 11:14:30 -05:00
Lincoln Stein
28612c899a add a sanity check to root directory finding algorithm
Root directory finding algorithm is:

1) use --root argument
2) use INVOKEAI_ROOT environment variable
3) use VIRTUAL_ENV environment variable
4) use ~/invokeai

Since developers are liable to put virtual environments in their
favorite places, not necessarily in the invokeai root directory, this
PR adds a sanity check that looks for the existence of
VIRTUAL_ENV/invokeai.init, and moves to (4) if not found.
2023-02-23 10:15:01 -05:00
Lincoln Stein
88acbeaa35 install creator tags but don't commit 2023-02-23 07:08:41 -05:00
Lincoln Stein
46729efe95 upgrade to compel 0.1.7 2023-02-23 07:06:40 -05:00
Lincoln Stein
b3d03e1146 Merge branch 'v2.3.1' into bugfix/windows-install 2023-02-23 01:04:39 -05:00
Lincoln Stein
e29c9a7d9e fix CLI import of diffusers by repo_id
- Fix a bug in the CLI which prevented diffusers imported by their repo_ids
  from being correctly registered in the current session (though they install
  correctly)

- Capitalize the "i" in Imported in the autogenerated descriptions.
2023-02-23 01:00:14 -05:00
Lincoln Stein
9b157b6532 fix several issues with Windows installs
1. resize installer window to give more room for configure and download forms
2. replace '\' with '/' in directory names to allow user to drag-and-drop
   folders into the dialogue boxes that accept directories.
3. similar change in CLI for the !import_model and !convert_model commands
4. better error reporting when a model download fails due to network errors
5. put the launcher scripts into a loop so that menu reappears after
   invokeai, merge script, etc exits. User can quit with "Q".
6. do not try to download fp16 of sd-ft-mse-vae, since it doesn't exist.
7. cleaned up status reporting when installing models
2023-02-23 00:49:59 -05:00
blessedcoolant
10a1e7962b docs: add TRANSLATION.md (#2769) 2023-02-23 15:37:15 +13:00
Lincoln Stein
cb672d7d00 Merge branch 'v2.3.1' into docs/add-translation-md 2023-02-22 21:35:39 -05:00
psychedelicious
e791fb6b0b docs: tweak messaging 2023-02-23 13:00:05 +11:00
psychedelicious
1c9001ad21 docs: add TRANSLATION.md 2023-02-23 12:53:03 +11:00
Lincoln Stein
3083356cf0 installer enhancements
- Detect when install failed for some reason and print helpful error
  message rather than stack trace.

- Detect window size and resize to minimum acceptable values to provide
  better display of configure and install forms.
2023-02-22 19:18:07 -05:00
blessedcoolant
179814e50a [WebUI] 2.3.1 Localization (#2765) 2023-02-23 10:29:14 +13:00
blessedcoolant
9515c07fca Merge branch 'v2.3.1' into localization-231 2023-02-23 10:29:02 +13:00
blessedcoolant
a45e94fde7 build: localization (2.3.1-final) 2023-02-23 09:47:01 +13:00
Lincoln Stein
8b6196e0a2 version 2.3.1 release candidate 1 2023-02-22 15:26:35 -05:00
Sergey Krashevich
ee2c0ab51b translationBot(ui): update translation (Russian)
Currently translated at 81.4% (382 of 469 strings)

translationBot(ui): update translation (Russian)

Currently translated at 81.6% (382 of 468 strings)

Co-authored-by: Sergey Krashevich <svk@svk.su>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/
Translation: InvokeAI/Web UI
2023-02-22 21:25:08 +01:00
Riccardo Giovanetti
ca5f129902 translationBot(ui): update translation (Italian)
Currently translated at 100.0% (469 of 469 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (468 of 468 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2023-02-22 21:25:08 +01:00
Lincoln Stein
cf2eca7c60 Add new console frontend to initial model selection, and other model mgmt improvements (#2644)
## Major Changes
The invokeai-configure script has now been refactored. The work of
selecting and downloading initial models at install time is now done by
a script named `invokeai-model-install` (module name is
`ldm.invoke.config.model_install`)

Screen 1 - adjust startup options:

![screenshot1](https://user-images.githubusercontent.com/111189/219976468-b642df78-a6fe-44a2-bf97-54ccf34e9656.png)

Screen 2 - select SD models:

![screenshot2](https://user-images.githubusercontent.com/111189/219976494-13c7d257-cc8d-4dae-9521-3b352aab010b.png)

The calling arguments for `invokeai-configure` have not changed, so
nothing should break. After initializing the root directory, the script
calls `invokeai-model-install` to let the user select the starting
models to install.

`invokeai-model-install` puts up a console GUI with checkboxes to
indicate which models to install. It respects the `--default_only` and
`--yes` arguments so that CI will continue to work. Here are the various
effects you can achieve:

`invokeai-configure`
       This will use console-based UI to initialize invokeai.init,
       download support models, and choose and download SD models

`invokeai-configure --yes`
       Without activating the GUI, populate invokeai.init with default values,
       download support models and download the "recommended" SD models

`invokeai-configure --default_only`
       Activate the GUI for changing init options, but don't show the SD download
       form, and automatically download the default SD model (currently SD-1.5)

`invokeai-model-install`
       Select and install models. This can be used to download arbitrary
       models from the Internet, install HuggingFace models using their repo_id,
       or watch a directory for models to load at startup time

`invokeai-model-install --yes`
       Import the recommended SD models without a GUI

`invokeai-model-install --default_only`
       As above, but only import the default model

## Flexible Model Imports

The console GUI allows the user to import arbitrary models into InvokeAI
using:

1. A HuggingFace Repo_id
2. A URL (http/https/ftp) that points to a checkpoint or safetensors file
3. A local path on disk pointing to a checkpoint/safetensors file or
   diffusers directory
4. A directory to be scanned for all checkpoint/safetensors files to be
   imported

The UI allows the user to specify multiple models to bulk import. The
user can specify whether to import the ckpt/safetensors as-is, or
convert to `diffusers`. The user can also designate a directory to be
scanned at startup time for checkpoint/safetensors files.

## Backend Changes

To support the model selection GUI, this PR introduces a new method in
`ldm.invoke.model_manager` called `heuristic_import()`. This accepts a
string-like object which can be a repo_id, URL, local path or directory.
It will figure out what the object is and import it. It interrogates the
contents of checkpoint and safetensors files to determine what type of
SD model they are -- v1.x, v2.x or v1.x inpainting.

## Installer

I am attaching a zip file of the installer if you would like to try the
process from end to end.

[InvokeAI-installer-v2.3.0.zip](https://github.com/invoke-ai/InvokeAI/files/10785474/InvokeAI-installer-v2.3.0.zip)
2023-02-22 15:24:59 -05:00
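A rough sketch of the dispatch that `heuristic_import()` is described as performing in the PR above; the real method also interrogates checkpoint contents to classify v1.x/v2.x/inpainting models, which is omitted here:

```python
from pathlib import Path

def classify_model_source(source: str) -> str:
    """Simplified stand-in for heuristic_import(): decide how to treat a source string."""
    if source.startswith(("http://", "https://", "ftp://")):
        return "download checkpoint/safetensors file from URL"
    path = Path(source).expanduser()
    if path.is_dir():
        if (path / "model_index.json").exists():
            return "import as a diffusers directory"
        return "scan directory for .ckpt/.safetensors files"
    if path.suffix in {".ckpt", ".safetensors"}:
        return "import single checkpoint (optionally converting to diffusers)"
    if source.count("/") == 1:
        return "treat as a HuggingFace repo_id"
    raise ValueError(f"Don't know how to import {source!r}")
```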
Lincoln Stein
16aea1e869 Merge branch 'main' into install/refactor-configure-and-model-select 2023-02-22 14:22:52 -05:00
blessedcoolant
75ff6cd3c3 Refactor prompting code paths to use the compel library (#2729)
motivation: I want to be doing future prompting development work in the
`compel` lib (https://github.com/damian0815/compel) - which is currently
pip installable with `pip install compel`.
2023-02-23 08:09:52 +13:00
blessedcoolant
7b7b31637c Merge branch 'main' into refactor_use_compel 2023-02-23 07:43:30 +13:00
blessedcoolant
fca564c18a ui: fix use prompt when prompt has colon (#2760)
- Fixes wonky use prompt when prompt contains colon
2023-02-23 07:41:38 +13:00
Lincoln Stein
eb8d87e185 Merge branch 'main' into refactor_use_compel 2023-02-22 12:34:16 -05:00
Lincoln Stein
dbadb1d7b5 Merge branch 'main' into fix/ui/prompt-metadata 2023-02-22 12:33:54 -05:00
Lincoln Stein
a4afb69615 fix crash in textual inversion with "num_samples=0" error (#2762)
- At some point pathlib was added to the list of imported modules and
this broke the os.path code that assembled the sample data set.

- Now fixed by replacing os.path calls with Path methods
2023-02-22 12:31:28 -05:00
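A generic illustration of the os.path-to-pathlib replacement described above; the actual training script assembles its dataset differently, but the point is that path handling should go through `Path` so the sample list isn't silently empty:

```python
from pathlib import Path

def list_training_samples(data_root: str) -> list[Path]:
    # Path.iterdir() replaces the os.listdir/os.path.join combination whose
    # breakage produced the "num_samples=0" crash.
    samples = [p for p in Path(data_root).iterdir() if p.is_file()]
    if not samples:
        raise ValueError(f"No training images found in {data_root}")
    return samples
```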
Lincoln Stein
8b7925edf3 fix crash in textual inversion with "num_samples=0" error
- At some point pathlib was added to the list of imported modules and this
broke the os.path code that assembled the sample data set.

- Now fixed by replacing os.path calls with Path methods
2023-02-22 11:29:30 -05:00
Lincoln Stein
168a51c5a6 fix textual inversion output directory path
- The configure script was misnaming the directory for text-inversion-output.
- Now fixed.
2023-02-22 10:06:04 -05:00
Damian Stewart
3f5d8c3e44 remove inaccurate docstring 2023-02-22 13:18:39 +01:00
Lincoln Stein
609bb19573 fixes to resizing and init file editing
- Disable responsive resizing below starting dimensions (you can make
  form larger, but not smaller than what it was at startup)

- Fix bug that caused multiple --ckpt_convert entries (and similar) to
  be written to init file.
2023-02-22 07:05:51 -05:00
psychedelicious
d561d6d3dd chore(ui): build frontend 2023-02-22 22:09:11 +11:00
psychedelicious
7ffaa17551 fix(ui): use prompt bug when prompt has colon
This bug is related to the format in which we stored prompts for some time: an array of weighted subprompts.

This caused some strife when recalling a prompt if the prompt had colons in it, due to our recently introduced handling of negative prompts.

Currently there is no need to store a prompt as anything other than a string, so we revert to doing that.

Compatibility with structured prompts is maintained via helper hook.
2023-02-22 20:33:58 +11:00
Damian Stewart
97eac58a50 fix blend tokenization reporting; fix LDM checkpoint support 2023-02-22 10:29:42 +01:00
Damian Stewart
cedbe8fcd7 fix .blend 2023-02-22 09:04:23 +01:00
Jonathan
a461875abd Merge branch 'main' into refactor_use_compel 2023-02-21 21:14:28 -06:00
Lincoln Stein
ab018ccdfe Fallback to using filename to trigger embeddings (#2752)
Lots of earlier embeds use a common trigger token such as * or the
Hebrew letter shan. Previously, the textual inversion manager would
refuse to load the second and subsequent embeddings that used a
previously-claimed trigger. Now, when this case is encountered, the
trigger token is replaced by <filename> and the user is informed of the
fact.
2023-02-21 21:58:11 -05:00
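A simplified sketch of the fallback behaviour described above, using a plain dict in place of the real textual inversion manager:

```python
from pathlib import Path

def register_trigger(triggers: dict[str, Path], trigger: str, embedding_file: Path) -> str:
    """Register an embedding's trigger, falling back to <filename> if the trigger is taken."""
    if trigger in triggers:
        fallback = f"<{embedding_file.stem}>"
        print(f"** trigger '{trigger}' is already claimed; using '{fallback}' instead")
        trigger = fallback
    triggers[trigger] = embedding_file
    return trigger

claimed: dict[str, Path] = {}
register_trigger(claimed, "*", Path("easynegative.pt"))
register_trigger(claimed, "*", Path("bad-hands.pt"))  # collides, becomes '<bad-hands>'
```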
Lincoln Stein
d41dcdfc46 move trigger_str registration into try block 2023-02-21 21:38:42 -05:00
Lincoln Stein
972aecc4c5 fix responsive resizing 2023-02-21 21:33:44 -05:00
Lincoln Stein
6b7be4e5dc remove dangling debug statement 2023-02-21 20:09:34 -05:00
Lincoln Stein
9b1a7b553f add "hit any key to exit" pause at end of install 2023-02-21 20:03:08 -05:00
Lincoln Stein
7f99efc5df require diffusers 0.13 2023-02-21 17:28:07 -05:00
Lincoln Stein
0a6d8b4855 Merge branch 'main' into refactor_use_compel 2023-02-21 17:19:48 -05:00
Lincoln Stein
5e41811fb5 move trigger text munging to upper level per review 2023-02-21 17:04:42 -05:00
Lincoln Stein
5a4967582e reformat with black and isort 2023-02-21 14:12:57 -05:00
Jonathan
1d0ba4a1a7 Merge branch 'main' into bugfix/filename-embedding-fallback 2023-02-21 13:12:34 -06:00
Lincoln Stein
4878c7a2d5 Merge branch 'main' into install/refactor-configure-and-model-select 2023-02-21 14:09:38 -05:00
blessedcoolant
9e5aa645a7 Fix crashing when using 2.1 model (#2757)
We now require more free memory to avoid attention slicing. 17.5% free
was not sufficient headroom in all cases, so now we require 25%.
2023-02-22 08:03:51 +13:00
Lincoln Stein
d01e23973e fix problem that was causing CI failures 2023-02-21 13:44:32 -05:00
Jonathan
71bbd78574 Fix crashing when using 2.1 model
We now require more free memory to avoid attention slicing. 17.5% free was not sufficient headroom, so now we require 25%.
2023-02-21 12:35:03 -06:00
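For reference, the free-memory fraction can be read with `torch.cuda.mem_get_info`; this sketch shows the kind of threshold check the commit describes, with the surrounding pipeline code simplified away:

```python
import torch

def needs_attention_slicing(device: torch.device, required_free_fraction: float = 0.25) -> bool:
    """Enable attention slicing unless at least 25% of VRAM is currently free."""
    if device.type != "cuda":
        return False
    free_bytes, total_bytes = torch.cuda.mem_get_info(device)
    return (free_bytes / total_bytes) < required_free_fraction
```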
Lincoln Stein
fff41a7349 merged with main 2023-02-21 12:20:59 -05:00
blessedcoolant
d5f524a156 Merge branch 'main' into bugfix/filename-embedding-fallback 2023-02-22 06:13:41 +13:00
Jonathan
3ab9d02883 Fixed embiggening crash due to clear_cuda_cache not being passed on and bad cuda stats initialization. (#2756) 2023-02-22 06:12:24 +13:00
Lincoln Stein
27a2e27c3a fix crash when installed models < number columns
1. Fixed display crash when the number of installed models is less than
   the number of desired columns to display them.

2. Added --ckpt_convert option to init file.
2023-02-21 12:09:34 -05:00
Jonathan
da04b11a31 Merge branch 'main' into bugfix/filename-embedding-fallback 2023-02-21 10:52:13 -06:00
Lincoln Stein
3795b40f63 implemented the following fixes:
Enhancements:
1. Directory-based imports will not attempt to import components of diffusers models.
2. Diffuser directory imports now supported
3. Files that end with .ckpt that are not Stable Diffusion models (such as VAEs) are
   skipped during import.

Bugs identified in Psychedelicious's review:
1. The invokeai-configure form now tracks the current contents of `invokeai.init` correctly.
2. The autoencoders are no longer treated like installable models, but instead are
   mandatory support models. They will no longer appear in `models.yaml`

Bugs identified in Damian's review:
1. If invokeai-model-install is started before the root directory is initialized, it will
   call invokeai-configure to fix the matter.
2. Fix bug that was causing empty `models.yaml` under certain conditions.
3. Made import textbox smaller
4. Hide the "convert to diffusers" options if nothing to import.
2023-02-21 11:47:41 -05:00
Lincoln Stein
9436f2e3d1 alphabetize trigger strings 2023-02-21 06:23:34 -05:00
Lincoln Stein
7fadd5e5c4 performance: low-memory option for calculating guidance sequentially (#2732)
In theory, this reduces peak memory consumption by doing the conditioned
and un-conditioned predictions one after the other instead of in a
single mini-batch.

In practice, it doesn't reduce the reported "Max VRAM used for this
generation" for me, even without xformers. (But it does slow things down
by a good 18%.)

That suggests to me that the peak memory usage is during VAE decoding,
not the diffusion unet, but ymmv. It does [improve things for gogurt's
16 GB
M1](https://github.com/invoke-ai/InvokeAI/pull/2732#issuecomment-1436187407),
so it seems worthwhile.

To try it out, use the `--sequential_guidance` option:
2dded68267/ldm/invoke/args.py (L487-L492)
2023-02-20 23:00:54 -05:00
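Conceptually, sequential guidance trades one batched UNet call for two smaller ones. A schematic sketch with a hypothetical `unet` callable (not the actual `InvokeAIDiffuserComponent` code):

```python
import torch

def guided_noise_pred(unet, latents, t, cond, uncond, guidance_scale, sequential=False):
    if sequential:
        # Low-memory path: run the unconditioned and conditioned passes one after the other.
        noise_uncond = unet(latents, t, encoder_hidden_states=uncond).sample
        noise_cond = unet(latents, t, encoder_hidden_states=cond).sample
    else:
        # Default path: a single mini-batch of size 2*B.
        both = unet(torch.cat([latents, latents]), t,
                    encoder_hidden_states=torch.cat([uncond, cond])).sample
        noise_uncond, noise_cond = both.chunk(2)
    return noise_uncond + guidance_scale * (noise_cond - noise_uncond)
```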
Lincoln Stein
4c2a588e1f Merge branch 'main' into perf/lowmem_sequential_guidance 2023-02-20 22:40:31 -05:00
Lincoln Stein
5f9de762ff update installation docs for 2.3.1 installer screens (#2749)
This PR updates the manual page for automatic installation, and contains
screenshots of the new installer screens.
2023-02-20 22:40:02 -05:00
Lincoln Stein
91f7abb398 replace repeated triggers with <filename> 2023-02-20 22:33:13 -05:00
Damian Stewart
6420b81a5d Merge remote-tracking branch 'upstream/main' into refactor_use_compel 2023-02-20 23:34:38 +01:00
Lincoln Stein
b6ed5eafd6 update installation docs for 2.3.1 installer screens 2023-02-20 17:24:52 -05:00
blessedcoolant
694d5aa2e8 Add 'update' action to launcher script (#2636)
- Adds an update action to launcher script
- This action calls new python script `invokeai-update`, which prompts
user to update to latest release version, main development version, or
an arbitrary git tag or branch name.
- It then uses `pip` to update to whatever tag was specified.

The user interface (such as it is) looks like this:

![updater-screenshot](https://user-images.githubusercontent.com/111189/218291539-e5542662-6bfd-46ef-8ea9-655ca77392b7.png)
2023-02-21 11:17:22 +13:00
Lincoln Stein
833079140b Merge branch 'main' into enhance/update-menu 2023-02-20 17:16:20 -05:00
Lincoln Stein
fd27948c36 Merge branch 'main' into perf/lowmem_sequential_guidance 2023-02-20 17:15:33 -05:00
Damian Stewart
1dfaaa2a57 fix web ui issues 2023-02-20 22:58:07 +01:00
Lincoln Stein
bac6b50dd1 During textual inversion training, skip over non-image files (#2747)
- The TI script was looping over all files in the training image
directory, regardless of whether they were image files or not. This PR
adds a check for image file extensions.
- Closes #2715
2023-02-20 16:17:32 -05:00
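The extension check described above can be as small as the following sketch; the accepted suffixes are illustrative:

```python
from pathlib import Path

IMAGE_SUFFIXES = {".png", ".jpg", ".jpeg", ".webp", ".bmp", ".gif"}

def image_files(training_dir: str) -> list[Path]:
    # Ignore captions, text files, .DS_Store and other non-image clutter.
    return sorted(
        p for p in Path(training_dir).iterdir()
        if p.is_file() and p.suffix.lower() in IMAGE_SUFFIXES
    )
```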
blessedcoolant
a30c91f398 Merge branch 'main' into bugfix/textual-inversion-training 2023-02-21 09:58:19 +13:00
Lincoln Stein
17294bfa55 restore ability of textual inversion manager to read .pt files (#2746)
- Fixes longstanding bug in the token vector size code which caused .pt
files to be assigned the wrong token vector length. These were then
tossed out during directory scanning.
2023-02-20 15:34:56 -05:00
Lincoln Stein
3fa1771cc9 Merge branch 'main' into perf/lowmem_sequential_guidance 2023-02-20 15:20:15 -05:00
Lincoln Stein
f3bd386ff0 Merge branch 'main' into bugfix/textual-inversion-training 2023-02-20 15:19:53 -05:00
Lincoln Stein
8486ce31de Merge branch 'main' into bugfix/embedding-vector-length 2023-02-20 15:19:36 -05:00
Lincoln Stein
1d9845557f reduced verbosity of embed loading messages 2023-02-20 15:18:55 -05:00
Lincoln Stein
55dce6cfdd remove more dead code 2023-02-20 15:08:07 -05:00
Lincoln Stein
58be915446 Merge branch 'main' into install/refactor-configure-and-model-select 2023-02-20 14:48:41 -05:00
blessedcoolant
dc9268f772 [WebUI] Symmetry Fix (#2745)
Symmetry now has a toggle on and off. It won't be passed if not enabled.
Symmetry settings have now been moved to their own accordion.
2023-02-21 08:47:23 +13:00
Lincoln Stein
47ddc00c6a in textual inversion training, skip over non-image files
- Closes #2715
2023-02-20 14:44:10 -05:00
Lincoln Stein
0d22fd59ed restore ability of textual inversion manager to read .pt files
- Fixes longstanding bug in the token vector size code which caused
  .pt files to be assigned the wrong token vector length. These
  were then tossed out during directory scanning.
2023-02-20 14:34:14 -05:00
blessedcoolant
d5efd57c28 Merge branch 'symmetry-fix' of https://github.com/blessedcoolant/InvokeAI into symmetry-fix 2023-02-21 07:44:34 +13:00
blessedcoolant
b52a92da7e build: symmetry-fix-2 2023-02-21 07:43:56 +13:00
blessedcoolant
b949162e7e Revert Symmetry Big Size Input 2023-02-21 07:42:20 +13:00
blessedcoolant
5409991256 Merge branch 'main' into symmetry-fix 2023-02-21 07:29:53 +13:00
blessedcoolant
be1bcbc173 build: symmetry-fix 2023-02-21 07:28:25 +13:00
blessedcoolant
d6196e863d Move symmetry settings to their own accordion 2023-02-21 07:25:24 +13:00
blessedcoolant
63e790b79b fix crash in CLI when --save_intermediates called (#2744)
Fixes #2733
2023-02-21 07:16:45 +13:00
Lincoln Stein
cf53bba99e Merge branch 'main' into bugfix/save-intermediates 2023-02-20 12:51:53 -05:00
Lincoln Stein
ed4c8f6a8a fix crash in CLI when --save_intermediates called
Fixes #2733
2023-02-20 12:50:32 -05:00
Lincoln Stein
aab8263c31 Fix crash on calling diffusers' prepare_attention_mask (#2743)
Diffusers' `prepare_attention_mask` was crashing when we didn't pass in
a batch size.
2023-02-20 12:35:33 -05:00
Jonathan
b21bd6f428 Fix crash on calling diffusers' prepare_attention_mask
Diffusers' `prepare_attention_mask` was crashing when we didn't pass in a batch size.
2023-02-20 11:12:47 -06:00
Kevin Turner
cb6903dfd0 Merge branch 'main' into perf/lowmem_sequential_guidance 2023-02-20 08:03:11 -08:00
blessedcoolant
cd87ca8214 Correctly detect when an embedding is incompatible with the current model (#2736)
- Fixed the test for token length; tested on several .pt and .bin files
- Also added a __main__ entrypoint for CLI.py, to make pdb debugging a
bit more convenient.
2023-02-21 04:32:32 +13:00
blessedcoolant
58e5bf5a58 Merge branch 'main' into bugfix/embedding-compatibility-test 2023-02-21 04:09:18 +13:00
blessedcoolant
f17c7ca6f7 [WebUI] Symmetry Settings (#2741)
Add the newly added Symmetry settings to the WebUI.
2023-02-21 04:07:30 +13:00
blessedcoolant
c3dd28cff9 Merge branch 'main' into symmetry-webui 2023-02-21 04:06:54 +13:00
blessedcoolant
db4e1e8b53 add @lstein and @blessedcoolant to all codeowner paths (#2742)
- In an emergency, one or the other of these individuals will be
available to review any part of the code.
2023-02-21 04:06:23 +13:00
Lincoln Stein
3e43c3e698 add @lstein and @blessedcoolant to all paths
- In an emergency, one or the other of these individuals will
  be available to review any part of the code.
2023-02-20 10:02:32 -05:00
blessedcoolant
cc7733af1c Merge branch 'main' into enhance/update-menu 2023-02-21 03:54:40 +13:00
blessedcoolant
2a29734a56 Merge branch 'main' into symmetry-webui 2023-02-21 03:18:47 +13:00
blessedcoolant
f2e533f7c8 build: threshold slider fix 2023-02-21 03:17:41 +13:00
blessedcoolant
078f897b67 Revert Threshold Slider to older values 2023-02-21 02:57:00 +13:00
Matthias Wild
8352ab2076 remove old swagger related files due to security issues (#2730) 2023-02-20 14:55:21 +01:00
Matthias Wild
1a3d47814b Merge branch 'main' into update/docs/remove-swagger-related-files 2023-02-20 14:54:22 +01:00
Lincoln Stein
e852ad0a51 fix bug that prevented converted files from being written into `models.yaml` 2023-02-20 08:48:54 -05:00
blessedcoolant
136cd0e868 Merge branch 'main' into symmetry-webui 2023-02-21 02:43:40 +13:00
blessedcoolant
7afe26320a build: symmetry-settings 2023-02-21 02:41:26 +13:00
Lincoln Stein
702da71515 swap y/n values for broken model reconfiguration prompt 2023-02-20 08:34:46 -05:00
blessedcoolant
b313cf8afd Add Symmetry Settings 2023-02-21 02:27:55 +13:00
Lincoln Stein
852d78d9ad Fix for issue #2707 (#2710)
When selecting the last model of the third model list in the
model-merging TUI, it crashed because the code forgot about the "None"
element.

Additionally, it seems that it accidentally always took the wrong model
as the third model if one was selected.

This simple fix resolves both issues.
2023-02-20 08:02:00 -05:00
Lincoln Stein
5570a88858 Merge branch 'main' into update/docs/remove-swagger-related-files 2023-02-20 07:44:42 -05:00
Lincoln Stein
cfd897874b Merge branch 'main' into perf/lowmem_sequential_guidance 2023-02-20 07:42:35 -05:00
Lincoln Stein
1249147c57 Merge branch 'main' into enhance/update-menu 2023-02-20 07:38:56 -05:00
Lincoln Stein
eec5c3bbb1 Merge branch 'main' into main 2023-02-20 07:38:08 -05:00
Jonathan
ca8d9fb885 Add symmetry to generation (#2675)
Added symmetry to Invoke based on discussions with @damian0815. This can currently only be activated via the CLI with the `--h_symmetry_time_pct` and `--v_symmetry_time_pct` options. Those take values from 0.0-1.0, exclusive, indicating the percentage through generation at which symmetry is applied as a one-time operation. To have symmetry in either axis applied after the first step, use a very low value like 0.001.
2023-02-20 07:33:19 -05:00
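To make the time-percentage mechanics concrete, here is a hedged sketch of applying one-time horizontal symmetry to the latents once the chosen fraction of steps has elapsed (generic code, not the project's generator):

```python
import torch

def maybe_apply_h_symmetry(latents: torch.Tensor, step: int, total_steps: int,
                           h_symmetry_time_pct: float, already_applied: bool):
    """Mirror the left half of the latent tensor onto the right, as a one-time operation."""
    if already_applied or (step / total_steps) < h_symmetry_time_pct:
        return latents, already_applied
    half = latents.shape[-1] // 2  # latent widths are even for standard SD resolutions
    left = latents[..., :half]
    return torch.cat([left, torch.flip(left, dims=[-1])], dim=-1), True
```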
Lincoln Stein
7d77fb9691 fixed --default_only behavior 2023-02-20 01:29:39 -05:00
Lincoln Stein
a4c0dfb33c fix broken --ckpt_convert option
- not sure why, but at some point --ckpt_convert (which converts legacy checkpoints
  into diffusers in memory) stopped working due to float16/float32 issues.

- this commit repairs the problem

- also removed some debugging messages I found in passing
2023-02-20 01:12:02 -05:00
Kevin Turner
2dded68267 add --sequential_guidance option for low-RAM tradeoff 2023-02-19 21:21:14 -08:00
Lincoln Stein
172ce3dc25 correctly detect when an embedding is incompatible with the current model
- Fixed the test for token length; tested on several .pt and .bin files
- Also added a __main__ entrypoint for CLI.py, to make pdb debugging a bit
  more convenient.
2023-02-19 22:30:57 -05:00
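The compatibility test boils down to comparing the embedding's vector length with the text encoder's hidden size (768 for SD 1.x, 1024 for SD 2.x). A generic sketch, not the manager's actual parsing code:

```python
import torch

def embedding_vector_length(embedding_file: str) -> int:
    """Return the token vector length of a .pt or .bin textual inversion file (simplified)."""
    data = torch.load(embedding_file, map_location="cpu")
    if "string_to_param" in data:                 # classic .pt format
        tensor = next(iter(data["string_to_param"].values()))
    else:                                         # .bin format: {token: tensor}
        tensor = next(iter(data.values()))
    return tensor.shape[-1]

def is_compatible(embedding_file: str, text_encoder_hidden_size: int) -> bool:
    return embedding_vector_length(embedding_file) == text_encoder_hidden_size
```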
Kevin Turner
6c8d4b091e dev(InvokeAIDiffuserComponent): mollify type checker's concern about the optional argument 2023-02-19 16:58:54 -08:00
Lincoln Stein
7beebc3659 resolved conflicts; ran black and isort 2023-02-19 19:48:01 -05:00
Lincoln Stein
5461318eda clean up diagnostic messages 2023-02-19 19:38:29 -05:00
Kevin Turner
d0abe13b60 performance(InvokeAIDiffuserComponent): add low-memory path for calculating conditioned and unconditioned predictions sequentially
Proof of concept. Still needs to be wired up to options or heuristics.
2023-02-19 16:04:54 -08:00
Kevin Turner
aca9d74489 refactor(InvokeAIDiffuserComponent): rename internal methods
Prefix with `_` as is tradition.
2023-02-19 15:33:16 -08:00
mauwii
a0c213a158 remove /static 2023-02-19 23:51:00 +01:00
mauwii
740210fc99 remove old unused files due to security issues 2023-02-19 23:47:28 +01:00
Lincoln Stein
ca10d0652f show title of add models screen 2023-02-19 16:55:09 -05:00
Lincoln Stein
e1a85d8184 fix incorrect passing of precision to model installer 2023-02-19 16:24:31 -05:00
Lincoln Stein
9d8236c59d tested and working on Ubuntu
- You can now achieve several effects:

   `invokeai-configure`
   This will use console-based UI to initialize invokeai.init,
   download support models, and choose and download SD models

   `invokeai-configure --yes`
   Without activating the GUI, populate invokeai.init with default values,
   download support models and download the "recommended" SD models

   `invokeai-configure --default_only`
   As above, but only download the default SD model (currently SD-1.5)

   `invokeai-model-install`
   Select and install models. This can be used to download arbitrary
   models from the Internet, install HuggingFace models using their repo_id,
   or watch a directory for models to load at startup time

   `invokeai-model-install --yes`
   Import the recommended SD models without a GUI

   `invokeai-model-install --default_only`
   As above, but only import the default model
2023-02-19 16:08:58 -05:00
blessedcoolant
7eafcd47a6 [WebUI] Bug Fixes (#2728)
A few bugs fixed.

- After the recent update to the Cancel Button, it was no longer
respecting sizing in Floating Mode and the Beta Canvas. Fixed that.
- After the recent dependency update, useHotkeys was bugging out for the
fullscreen hotkey `f`. Realized this was happening because the hotkey
was initialized in two places -- in both the gallery and the parameter
floating button. Removed it from both those places and moved it to the
InvokeTabs component. It makes sense for it to reside here because it is a
global hotkey.
- Also added index `0` to the default Accordion index in state in order
to ensure that the main accordions stay open. Conveniently this works
great on all tabs. We have all the primary options in accordions so they
stay open. And as for advanced settings, the first one is always Seed
which is an important accordion, so it opens up by default.

Think there may be some more bugs. Looking into them.
2023-02-20 09:39:48 +13:00
Damian Stewart
ded3f13a33 move all prompting stuff to use compel 2023-02-19 20:42:29 +01:00
Lincoln Stein
e5646d7241 both forms functional; need integration 2023-02-19 13:12:05 -05:00
blessedcoolant
79ac9698c1 build: webui-bug-fixes 2023-02-20 05:28:52 +13:00
blessedcoolant
d29f57c93d fix: Keep the first accordion open by default on reset
We need to do this now because we are using multiple accordions.
2023-02-20 05:26:48 +13:00
blessedcoolant
9b7cde8918 fix: Fullscreen Hotkey Bug
After upgrading the deps, the full screen hotkey started to bug out. I believe this was happening because it was triggered in two different components causing it to run twice. Removed it from both floating buttons and moved it to the Invoke tab. Makes sense to keep it there as it is a global hotkey.
2023-02-20 05:20:51 +13:00
blessedcoolant
8ae71303a5 fix: Cancel Button not maintaining min height
After the recent changes the Cancel button wasn't maintaining min height in floating mode. Also the new button group was not scaling in width correctly on the Canvas Beta UI. Fixed both.
2023-02-20 05:18:37 +13:00
blessedcoolant
2cd7bd4a8e docs: add translation info to readme (#2725)
- Adds a translation status badge
- Adds a blurb about contributing a translation (we want Weblate to be
the source of truth for translations, and to avoid updating translations
directly here)
2023-02-20 04:46:26 +13:00
blessedcoolant
b813298f2a Merge branch 'main' into docs/readme-translation 2023-02-20 04:43:13 +13:00
blessedcoolant
58f787f7d4 ui: update deps, fix husky script (#2726)
- Upgraded all dependencies
- Removed beta TS 5.0 as it conflicted with some packages
- Added types for `Array.prototype.findLast` and
`Array.prototype.findLastIndex` (these definitions are provided in TS
5.0)
- Fixed type import syntax in a few components
- Re-patched `redux-deep-persist` and tested to ensure the patch still
works

The husky pre-commit command was `npx run lint`, but it should run
`lint-staged`. Also, `npx` wasn't working for me. Changed the command to
`npm run lint-staged` and it all works. Extended the `lint-staged`
triggers to hit `json`, `scss` and `html`.
2023-02-20 04:14:24 +13:00
blessedcoolant
2bba543d20 Merge branch 'main' into chore/ui/update-deps 2023-02-20 04:13:47 +13:00
Jonathan
d3c1b747ee Fix behavior when encountering a bad embedding (#2721)
When encountering a bad embedding, InvokeAI was asking about reconfiguring models. This is because the embedding load error was never handled - it now is.
2023-02-19 14:04:59 +00:00
psychedelicious
b9ecf93ba3 ui: translations update from weblate (#2727)
Translations update from [Hosted Weblate](https://hosted.weblate.org)
for [InvokeAI/Web
UI](https://hosted.weblate.org/projects/invokeai/web-ui/).



Current translation status:

![Weblate translation
status](https://hosted.weblate.org/widgets/invokeai/-/web-ui/horizontal-auto.svg)
2023-02-19 23:12:05 +11:00
psychedelicious
487da8394d translationBot(ui): update translation (French)
Currently translated at 85.4% (398 of 466 strings)

Co-authored-by: psychedelicious <mabianfu@icloud.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/fr/
Translation: InvokeAI/Web UI
2023-02-19 13:00:55 +01:00
Riccardo Giovanetti
4c93bc56f8 translationBot(ui): update translation (Italian)
Currently translated at 100.0% (466 of 466 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2023-02-19 13:00:55 +01:00
Hosted Weblate
727dfeae43 translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2023-02-19 13:00:55 +01:00
psychedelicious
88d561dee7 chore(ui): build frontend 2023-02-19 22:32:05 +11:00
psychedelicious
7a379f1d4f chore(ui): update deps
- Upgraded all dependencies
- Removed beta TS 5.0 as it conflicted with some packages
- Added types for `Array.prototype.findLast` and `Array.prototype.findLastIndex` (these definitions are provided in TS 5.0)
- Fixed type import syntax in a few components
- Re-patched `redux-deep-persist` and tested to ensure the patch still works
2023-02-19 22:32:05 +11:00
psychedelicious
3ad89f99d2 build(ui): fix husky & lint-staged 2023-02-19 22:32:00 +11:00
psychedelicious
d76c5da514 docs: add translation info to readme 2023-02-19 19:13:38 +11:00
blessedcoolant
da5b0673e7 docs(ti): add using & troubleshooting sections (#2717)
Add `Using Embeddings` and `Troubleshooting` sections clarifying issues
I had when using TI for the first time.
2023-02-19 16:52:44 +13:00
blessedcoolant
d7180afe9d Merge branch 'main' into docs/ti/add-using-troubleshooting 2023-02-19 16:51:50 +13:00
psychedelicious
2e9c15711b docs(ti): add using & troubleshooting sections 2023-02-19 14:45:26 +11:00
psychedelicious
e19b08b149 [WebUI] Model Manager Lag Fix (#2720)
Model Manager lags a bit if you have a lot of models.

Basically added a fake delay to rendering the model list so the modal
has time to load first. Hacky but if it works it works.
2023-02-19 14:42:25 +11:00
blessedcoolant
234d76a269 build: webui-model-manager-lag-fix 2023-02-19 15:25:14 +13:00
blessedcoolant
826d941068 fix: Fix Model Manager Modal Lag
By hacking in a fake delay to load the list.
2023-02-19 15:23:25 +13:00
Kevin Turner
34e449213c add ability to retrieve current list of embedding trigger strings (#2650) 2023-02-18 18:05:00 -08:00
Kevin Turner
671c5943e4 Merge remote-tracking branch 'origin/main' into api/add-trigger-string-retrieval
# Conflicts:
#	ldm/generate.py
2023-02-18 17:44:59 -08:00
blessedcoolant
16c24ec367 [WebUI] Implement a "Cancel after current iteration" Button (#2642)
## What was the problem/requirement? (What/Why)
Frequently, I wish to cancel the processing of images, but also want the
current image to finalize before I do. To work around this, I need to
wait until the current one finishes before pressing the cancel.

## What was the solution? (How)
* Implemented a button that allows the user to "Cancel after current iteration,"
which stores a state in the UI that will attempt to cancel the
processing after the current image finishes
* If the button is pressed again, while it is spinning and before the
next iteration happens, this will stop the scheduling of the cancel, and
behave as if the button was never pressed.

### Minor
* Added `.yarn` to `.gitignore` as this was an output folder produced
from following Frontend's README

### Revision 2
#### Major 
* Changed from a standalone button to a context menu next to the
original cancel button. Pressing the context menu will give the
drop-down option to select which type of cancel method the user prefers,
and they can press that button for canceling in the specified type
* Moved states to system state for cross-screen and toggled cancel types
management
* Added in distribution for the target yarn version (allowing any
version of yarn to compile successfully), and updated the README to
ensure `--immutable` is passed for onboarding developers

#### Minor 
* Updated `.gitignore` to ignore specific yarn folders, as specified by
their team -
https://yarnpkg.com/getting-started/qa#which-files-should-be-gitignored

## How were these changes tested?
* `yarn dev` => Server started successfully
* Manual testing on the development server to ensure the button behaved
as expected
* `yarn run  build` => Success

### Artifacts
#### Revision 1
* Video showing the UI changes in action

https://user-images.githubusercontent.com/89283782/218347722-3a15ce61-2d8c-4c38-b681-e7a3e79dd595.mov

* Images showing the basic UI changes

![image](https://user-images.githubusercontent.com/89283782/218347124-4afbb699-2abc-4e71-a794-b04f7179cfe2.png)

![image](https://user-images.githubusercontent.com/89283782/218347826-443db351-7a3a-4111-80af-56d56a81f07b.png)

#### Revision 2
* Video showing the UI changes in action

https://user-images.githubusercontent.com/89283782/219901217-048d2912-9b61-4415-85fd-9e8fedb00c79.mov

* Images showing the basic UI changes
(Default state) 

![image](https://user-images.githubusercontent.com/89283782/219901228-918b263a-dc75-4e5d-8897-5fc62c71a790.png)
(Drop-down context menu active) 

![image](https://user-images.githubusercontent.com/89283782/219901241-021be07a-b768-40a2-988f-eb59be4a962d.png)
(Scheduled cancel selected and running)

![image](https://user-images.githubusercontent.com/89283782/219901243-59a9c61a-71a7-44b3-adab-7aa4c9ee1f8e.png)
(Scheduled cancel started)

![image](https://user-images.githubusercontent.com/89283782/219901266-b4c0adc1-d791-4989-9351-075758e06534.png)


## Notes
* Using `SystemState`'s `currentStatus` variable when the value is
`common:statusIterationComplete` is an alternative to this approach (and
would be more optimal, as it should prevent the next iteration from even
starting), but since the names are within the translations, rather than
an enum or other type, this method of tracking the current iteration was
used instead.
* `isLoading` on `IAIIconButton` caused the Icon Button to also be
disabled, so the current solution works around that with conditionally
rendering the icon of the button instead of passing that value.
* I don't have context on the development expectation for `dist` folder
interactions (and couldn't find any documentation outside of the
`.gitignore` mentioning that the folder should remain). Let me know if
they need to be modified a certain way.
2023-02-19 14:35:34 +13:00
psychedelicious
e8240855e0 chore(ui): build frontend 2023-02-19 12:18:40 +11:00
psychedelicious
a5e065048e feat(ui): persist blacklist cancelAfter 2023-02-19 11:53:52 +11:00
blessedcoolant
a53c3269db build: cancel-after-iteration-webui 2023-02-19 13:30:15 +13:00
blessedcoolant
8bf93d3a32 Isolate Cancel Button Menu Styling 2023-02-19 13:23:04 +13:00
blessedcoolant
d42cc0fd1c Port Cancel Button Options Menu to New Component 2023-02-19 13:18:03 +13:00
blessedcoolant
d2553d783c Add IAISimpleMenu Component 2023-02-19 13:17:45 +13:00
blhook
10b747d22b Run yarn build once more due to merge 2023-02-18 14:45:00 -08:00
blhook
1d567fa593 Merge branch 'main' into scheduled-cancel 2023-02-18 14:43:05 -08:00
psychedelicious
3a3dd39d3a [ui] fix weblate merge conflict (#2716)
My last attempt to fix the Weblate missing keys was done incorrectly and
caused a merge conflict on the Weblate repo.

This PR follows these steps to fix it
https://docs.weblate.org/en/latest/faq.html#how-to-fix-merge-conflicts-in-translations

🤞
2023-02-19 09:14:35 +11:00
psychedelicious
f4b3d7dba2 fix(ui): add useSlidersForAll string 2023-02-19 09:12:14 +11:00
Riccardo Giovanetti
de2c7fd372 translationBot(ui): update translation (Italian)
Currently translated at 100.0% (462 of 462 strings)

Translation: InvokeAI/Web UI
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
2023-02-19 09:05:01 +11:00
Anonymous
b140e1c619 translationBot(ui): update translation (English)
Currently translated at 100.0% (462 of 462 strings)

Co-authored-by: Anonymous <noreply@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/en/
Translation: InvokeAI/Web UI
2023-02-19 09:05:01 +11:00
Riccardo Giovanetti
1308584289 translationBot(ui): update translation (Italian)
Currently translated at 100.0% (459 of 459 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2023-02-19 09:05:01 +11:00
blhook
2ac4778bcf Fix broken translation string location in Scheduled Cancel 2023-02-18 13:51:53 -08:00
blhook
6101d67dba Post-merge cleanup 2023-02-18 13:35:33 -08:00
blhook
3cd50fe3a1 Merge branch 'main' into scheduled-cancel 2023-02-18 13:30:45 -08:00
blhook
e683b574d1 Change scheduled send to be as part of context for Cancel button 2023-02-18 13:23:58 -08:00
blessedcoolant
0decd05913 fix conversion of checkpoints into incompatible diffusers models (#2714)
- The checkpoint conversion script was generating diffusers models with
the safety checker set to null. This resulted in models that could not
be merged with ones that have the safety checker activated.

- This PR fixes the issue by incorporating the safety checker into all
1.x-derived checkpoints, regardless of user's nsfw_checker setting.
2023-02-19 05:42:19 +13:00
Lincoln Stein
d01b7ea2d2 remove debug statement & actually do merge 2023-02-18 11:19:06 -05:00
Lincoln Stein
4fa91724d9 fix conversion of checkpoints into incompatible diffusers models
- The checkpoint conversion script was generating diffusers models
  with the safety checker set to null. This resulted in models
  that could not be merged with ones that have the safety checker
  activated.

- This PR fixes the issue by incorporating the safety checker into
  all 1.x-derived checkpoints, regardless of user's nsfw_checker setting.
2023-02-18 11:07:38 -05:00
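For reference, wiring the safety checker into a converted pipeline uses the standard diffusers/transformers components; a hedged sketch of that piece of a conversion (not the exact ckpt_to_diffuser code):

```python
from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
from transformers import AutoFeatureExtractor

def load_safety_components():
    # Attach these to the converted StableDiffusionPipeline instead of safety_checker=None,
    # so the result stays mergeable with checker-enabled models.
    safety_checker = StableDiffusionSafetyChecker.from_pretrained(
        "CompVis/stable-diffusion-safety-checker"
    )
    feature_extractor = AutoFeatureExtractor.from_pretrained(
        "CompVis/stable-diffusion-safety-checker"
    )
    return safety_checker, feature_extractor
```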
blessedcoolant
e3d1c64b77 fix(diffusers_pipeline): ensure cuda.get_mem_info always gets a specific device index. (#2700)
Also tighten up the typing of `device` attributes in general.

Fixes 
> ValueError: Expected a torch.device with a specified index or an
integer, but got:cuda
2023-02-19 04:33:16 +13:00
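A small sketch of the device-index tightening described above: `torch.cuda.mem_get_info` wants a concrete device, so a bare "cuda" gets normalized to an indexed device first (generic code, not the pipeline's exact attribute handling):

```python
import torch

def normalize_device(device: torch.device) -> torch.device:
    """Give CUDA devices an explicit index so memory queries don't fail."""
    if device.type == "cuda" and device.index is None:
        return torch.device("cuda", torch.cuda.current_device())
    return device

def free_vram_fraction(device: torch.device) -> float:
    free, total = torch.cuda.mem_get_info(normalize_device(device))
    return free / total
```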
blessedcoolant
17f35a7bba Merge branch 'main' into fix/expected-torch-device 2023-02-19 04:16:13 +13:00
blessedcoolant
ab2f0a6fbf fix(ui): fix translation files (#2708)
Weblate's first PR was it attempting to fix some translation issues we
had overlooked!

It wanted to remove some keys which it did not see in our translation
source due to typos.

This PR instead corrects the key names to resolve the issues.
2023-02-19 03:51:32 +13:00
blessedcoolant
41cbf2f7c4 Merge branch 'main' into feat/ui/fix-translations 2023-02-19 03:50:35 +13:00
Damian Stewart
d5d2e1d7a3 Merge branch 'main' into fix/expected-torch-device 2023-02-18 15:23:08 +01:00
Lincoln Stein
587faa3e52 preparation for startup option editor 2023-02-18 08:51:26 -05:00
psychedelicious
80229ab73e Fixed grammar in "other options" feature tooltip (#2711)
It bothered me so i fixed it
2023-02-18 22:05:46 +11:00
ExperimentalCyborg
68b2911d2f Fixed grammar in "other options" feature tooltip 2023-02-18 11:58:33 +01:00
Iman Karim
2bf2f627e4 Fix for issue #2707 2023-02-18 11:40:12 +01:00
psychedelicious
58676b2ce2 fix(ui): fix translation files 2023-02-18 19:08:46 +11:00
psychedelicious
11f79dc1e1 [WebUI] Localization Port Bug Fixes (#2706)
- Fixed missing localization string for "useSlidersForAll"
- Fixed status messages being broken.
2023-02-18 18:59:05 +11:00
blessedcoolant
2a095ddc8e build: localization-bug-fixes 2023-02-18 19:35:39 +13:00
blessedcoolant
dd849d2e91 Fix Localization Porting Bugs 2023-02-18 19:32:55 +13:00
Lincoln Stein
8c63fac958 AttributeError: 'Namespace' object has no attribute 'log_tokenization' (#2698)
Could be fixed here or alternatively declared in file globals.py
2023-02-18 01:08:50 -05:00
blessedcoolant
11a70e9764 Merge branch 'main' into patch-14 2023-02-18 18:45:05 +13:00
blessedcoolant
33ce78e4a2 feat(ui): set up for weblate translation (#2702)
# Weblate Translation 

After doing a full integration test of 3 translation service providers
on my fork of InvokeAI, we have chosen
[Weblate](https://hosted.weblate.org). The other two viable options were
[Crowdin](https://crowdin.com/) and
[Transifex](https://www.transifex.com/).

Weblate was the choice because its hosted service provides a very solid
UX / DX, can scale as much as we may ever need, is FOSS itself, and
generously offers free hosted service to other libre projects like ours.

## How it works

Weblate hosts its own fork of our repo and establishes a kind of
unidirectional relationship between our repo and its fork.

### InvokeAI --> Weblate

The `invoke-ai/InvokeAI` repo has had the Weblate GitHub app added to
it. This app watches for changes to our translation source
(`invokeai/frontend/public/locales/en.json`) and then updates the
Weblate fork. The Weblate UI then knows there are new strings to be
translated, or changes to be made.

### Translation

Our translators can then update the translations on the Weblate UI. The
plan now is to invite individual community members who have expressed
interest in maintaining a language or two and give them access to the
app. We can also open the doors to the general public if desired.

### Weblate --> InvokeAI

When a translation is ready or changed, the system will make a PR to
`main`. We have a substantial degree of control over this and will
likely manually trigger these PRs instead of letting them fire off
automatically.

Once a PR is merged, we will still need to rebuild the web UI. I think
we can set things up so that we only need the rebuild when a totally new
language is added, but for now, we will stick to this relatively simple
setup.

## This PR 

This PR sets up the web UI's translation stuff to work with Weblate:
- merged each locale into a single file
- updated the i18next config and UI to work with this simpler file
structure
- update our eslint and prettier rules to ensure the locale files have
the same format as what Weblate outputs (`tabWidth: 4`)
- added a thank you to Weblate in our README

Once this is merged, I'll link Weblate to `main` and do a couple tests
to ensure it is all working as expected.
2023-02-18 18:42:03 +13:00
psychedelicious
4f78518858 chore(ui): build frontend 2023-02-18 15:26:24 +11:00
psychedelicious
fad99ac4d2 docs: add thanks to weblate for translation 2023-02-18 15:26:24 +11:00
psychedelicious
423b592b25 feat(ui): set up for weblate translation 2023-02-18 15:26:04 +11:00
Kevin Turner
8aa7d1da55 fix(xformers): shush about not having Triton available. (#2701)
It's not readily available on Windows and xformers only uses it on some very specific hardware anyway.
2023-02-17 18:02:32 -08:00
Kevin Turner
6b702c32ca fix(xformers): shush about not having Triton available.
It's not readily available on Windows and xformers only uses it on some very specific hardware anyway.
2023-02-17 17:41:27 -08:00
blessedcoolant
767012aec0 [WebUI] Model Merging (#2699)
This PR brings Model Merging to the WebUI.

Inside the Model Manager, you can now find a new button called Merge
Models. The rest of it is self-explanatory.


![firefox_BYCM4YNHEa](https://user-images.githubusercontent.com/54517381/219795631-dbb5c5c4-fc3a-4cdd-9549-18c2e5302835.png)
2023-02-18 14:34:35 +13:00
blessedcoolant
2267057e2b Merge branch 'main' into webui-model-merging 2023-02-18 14:13:44 +13:00
Kevin Turner
b8212e4dea fix(diffusers_pipeline): ensure cuda.get_mem_info always gets a specific device index.
Also tighten up the typing of `device` attributes in general.
2023-02-17 16:56:15 -08:00
blessedcoolant
5b7e4a5f5d Add Error Handling For Merging 2023-02-18 12:17:22 +13:00
Lincoln Stein
07f9fa63d0 Bugfixes on the merge_model GUI (#2697)
This fixes a few cosmetic bugs in the merge models console GUI:

1) Fix the minimum and maximum range on alpha. It was 0.05 to 0.95 and is
now 0.01 to 0.99.
2) Don't show the 'add_difference' interpolation method when two models
are selected, or the other three methods when three models are selected
2023-02-17 16:57:03 -05:00
Lincoln Stein
1ae8986451 add log_tokenization to globals 2023-02-17 16:47:32 -05:00
Lincoln Stein
b305c240de fix syntax errors introduced by github web-ui edits 2023-02-17 16:44:20 -05:00
blessedcoolant
248dc81ec3 build: [WebUI] model-merge 2023-02-18 10:18:29 +13:00
blessedcoolant
ebe0071ed2 feat: [WebUI] Model Merging 2023-02-18 10:13:56 +13:00
spezialspezial
7a518218e5 AttributeError: 'Namespace' object has no attribute 'log_tokenization'
Could be fixed here or alternatively declared in file globals.py
2023-02-17 22:11:49 +01:00
Lincoln Stein
fc14ac7faa Merge branch 'main' into api/add-trigger-string-retrieval 2023-02-17 15:53:57 -05:00
Lincoln Stein
95e2739c47 Merge branch 'main' into bugfix/merge-gui 2023-02-17 15:42:53 -05:00
Lincoln Stein
f129393a2e document add_difference on-screen 2023-02-17 15:42:06 -05:00
Lincoln Stein
c55bbd1a85 Merge branch 'main' into install/refactor-configure-and-model-select 2023-02-17 15:00:33 -05:00
Lincoln Stein
ccba41cdb2 Bugfix/convert v2 models (#2630)
## Convert v2 models in CLI

- This PR introduces a CLI prompt for the proper configuration file to
use when converting a ckpt file, in order to support both inpainting
      and v2 models files.
    
- When user tries to directly !import a v2 model, it prints out a proper
warning that v2 ckpts are not directly supported and converts it into a
diffusers model automatically.

The user interaction looks like this:
```
(stable-diffusion-1.5) invoke> !import_model /home/lstein/graphic-art.ckpt
Short name for this model [graphic-art]: graphic-art-test
Description for this model [Imported model graphic-art]: Imported model graphic-art
What type of model is this?:
[1] A model based on Stable Diffusion 1.X
[2] A model based on Stable Diffusion 2.X
[3] An inpainting model based on Stable Diffusion 1.X
[4] Something else
Your choice: [1] 2
```

In addition, this PR enhances the bulk checkpoint import function. If a
directory path is passed to `!import_model` then it will be scanned for
`.ckpt` and `.safetensors` files. The user will be prompted to import
all the files found, or select which ones to import.

Addresses
https://discord.com/channels/1020123559063990373/1073730061380894740/1073954728544845855
2023-02-17 14:50:54 -05:00
Lincoln Stein
3d442bbf22 Merge branch 'main' into bugfix/convert-v2-models 2023-02-17 14:50:05 -05:00
Lincoln Stein
4888d0d832 fix slider and interpolations
- fix alpha slider to show values from 0.01 to 0.99
- fix interpolation list to show 'difference' method for 3 models,
  and weighted_sum, sigmoid and inverse_sigmoid methods for 2
2023-02-17 14:46:26 -05:00
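As a rough sketch of the alpha clamp and method selection described in the two merge-GUI commits above (names below are illustrative, not the actual InvokeAI code):

```
TWO_MODEL_METHODS = ["weighted_sum", "sigmoid", "inverse_sigmoid"]
THREE_MODEL_METHODS = ["add_difference"]


def allowed_interpolations(model_names: list[str]) -> list[str]:
    """Return the interpolation methods valid for the chosen model count."""
    if len(model_names) == 2:
        return TWO_MODEL_METHODS
    if len(model_names) == 3:
        return THREE_MODEL_METHODS
    raise ValueError("merging requires two or three models")


def clamp_alpha(alpha: float) -> float:
    """Keep the merge weight inside the slider's 0.01-0.99 range."""
    return min(max(alpha, 0.01), 0.99)
```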
Lincoln Stein
47de3fb007 correct display of 'add_difference' method when three models defined
- due to typo, the add_difference method was being displayed as "['add_difference']"
2023-02-17 14:41:02 -05:00
blessedcoolant
41bc160cb8 [WebUI] They see me slidin .. they hatin... (#2614)
Porting over as many usable options to sliders as possible.

- Ported Face Restoration settings to Sliders.
- Ported Upscale Settings to Sliders.
- Ported Variation Amount to Sliders.
- Ported Noise Threshold to Sliders <-- Optimized slider so the values
actually make sense.
- Ported Perlin Noise to Sliders.
- Added a suboption hook for the High Res Strength Slider.
- Fixed a couple of small issues with the Slider component.
- Ported Main Options to Sliders.
2023-02-17 21:58:35 +13:00
psychedelicious
d0ba155c19 chore(ui): build frontend 2023-02-17 19:54:36 +11:00
blessedcoolant
5f0848bf7d feat(ui): add all-sliders option 2023-02-17 19:53:44 +11:00
Lincoln Stein
6551527fe2 Update 050_INSTALLING_MODELS.md (#2690)
Fix typo; "cute" to "cube"
2023-02-16 23:03:30 -05:00
Lincoln Stein
159ce2ea08 Merge branch 'main' into bugfix/convert-v2-models 2023-02-16 23:00:58 -05:00
Steven Frank
3715570d17 Update 050_INSTALLING_MODELS.md
Fix typo; "cute" to "cube"
2023-02-16 19:53:01 -08:00
Lincoln Stein
65a7432b5a disable xformers if cuda not available 2023-02-16 22:20:30 -05:00
Lincoln Stein
557e28f460 Fix workflow path filters (#2689)
remove leading Slash from paths
2023-02-16 22:15:31 -05:00
Lincoln Stein
62a7f252f5 Merge branch 'main' into fix/ci/workflow-path-filters 2023-02-16 22:14:45 -05:00
Lincoln Stein
2fa14200aa Merge branch 'main' into api/add-trigger-string-retrieval 2023-02-16 22:12:39 -05:00
mauwii
0605cf94f0 remove leading Slash from paths 2023-02-17 04:10:40 +01:00
Lincoln Stein
d69156c616 remove superseded code 2023-02-16 22:05:00 -05:00
Lincoln Stein
0963bbbe78 rebuild frontend after merge conflict 2023-02-16 21:52:20 -05:00
Lincoln Stein
f3351a5e47 Merge branch 'main' into install/refactor-configure-and-model-select 2023-02-16 21:51:15 -05:00
Lincoln Stein
f3f4c68acc fix model download and autodetection bugs
- Corrected error that caused --full-precision argument to be ignored
  when models downloaded using the --yes argument.

- Improved autodetection of v1 inpainting files; no longer relies on the
  file having 'inpaint' in the name.
2023-02-16 21:37:50 -05:00
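One plausible way to autodetect v1 inpainting checkpoints without relying on the file name is to inspect the UNet's input-channel count. This is a hedged sketch assuming the standard SD v1 checkpoint key layout, not necessarily the exact heuristic used here:

```
import torch


def looks_like_inpainting_ckpt(ckpt_path: str) -> bool:
    sd = torch.load(ckpt_path, map_location="cpu")
    sd = sd.get("state_dict", sd)  # some checkpoints nest weights under "state_dict"
    key = "model.diffusion_model.input_blocks.0.0.weight"
    if key not in sd:
        return False
    # Inpainting UNets take 9 input channels (4 latent + 4 masked image + 1 mask);
    # regular txt2img/img2img models take 4.
    return sd[key].shape[1] == 9
```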
Lincoln Stein
5d617ce63d rebuild front end 2023-02-16 20:03:59 -05:00
Kevin Turner
8a0d45ac5a new OffloadingDevice loads one model at a time, on demand (#2596)
* new OffloadingDevice loads one model at a time, on demand

* fixup! new OffloadingDevice loads one model at a time, on demand

* fix(prompt_to_embeddings): call the text encoder directly instead of its forward method

allowing any associated hooks to run with it.

* more attempts to get things on the right device from the offloader

* more attempts to get things on the right device from the offloader

* make offloading methods an explicit part of the pipeline interface

* inlining some calls where device is only used once

* ensure model group is ready after pipeline.to is called

* fixup! Strategize slicing based on free [V]RAM (#2572)

* doc(offloading): docstrings for offloading.ModelGroup

* doc(offloading): docstrings for offloading-related pipeline methods

* refactor(offloading): s/SimpleModelGroup/FullyLoadedModelGroup

* refactor(offloading): s/HotSeatModelGroup/LazilyLoadedModelGroup

to frame it in the same terms as "FullyLoadedModelGroup"

---------

Co-authored-by: Damian Stewart <null@damianstewart.com>
2023-02-16 23:48:27 +00:00
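A minimal sketch of the "one model at a time, on demand" idea behind LazilyLoadedModelGroup; the class and method names below are illustrative, not the actual ldm.invoke offloading API:

```
import torch


class LazyModelGroup:
    """Keep models parked on an offload device and move only the one
    currently needed onto the execution device."""

    def __init__(self, execution_device, offload_device=torch.device("cpu")):
        self.execution_device = execution_device
        self.offload_device = offload_device
        self.models = set()

    def install(self, *models):
        for model in models:
            self.models.add(model.to(self.offload_device))

    def load(self, model):
        # park everything else, then bring the requested model onto the GPU
        for other in self.models:
            if other is not model:
                other.to(self.offload_device)
        return model.to(self.execution_device)
```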
Matthias Wild
2468ba7445 skip huge workflows if not needed (#2688)
- filter paths for `build-container.yml` and `test-invoke-pip.yml`
  - add workflow to pass required checks on PRs with `paths-ignore`
  - this triggers if `test-invoke-pip.yml` does not
- fix "CI checks on main link" in `/README.md`
2023-02-16 22:57:36 +01:00
mauwii
65b7d2db47 skip huge workflows if not needed
- filter paths for `build-container.yml` and `test-invoke-pip.yml`
  - add workflow to pass required checks on PRs with `paths-ignore`
  - this triggers if `test-invoke-pip.yml` does not
- fix "CI checks on main link" in `/README.md`
2023-02-16 22:56:39 +01:00
Ryan Cao
e07f1bb89c build frontend 2023-02-16 21:33:47 +01:00
Ryan Cao
f4f813d108 design: smooth progress bar animations 2023-02-16 21:33:47 +01:00
Lincoln Stein
6217edcb6c tweak wording of python version requirements 2023-02-16 12:55:13 -05:00
Lincoln Stein
c5cc832304 check maximum value of python version as well as minimum 2023-02-16 12:52:07 -05:00
blessedcoolant
a76038bac4 [WebUI] Even off JSX string syntax (#2058)
Assuming that mixing `"literal strings"` and `{'JSX expressions'}`
throughout the code is not for an explicit reason but just a result of IDE
autocompletion, I changed all props to be consistent with the
conventional style of using simple string literals where they are
sufficient.

This is a somewhat trivial change, but it makes the code a little more
readable and uniform.
2023-02-17 01:22:17 +13:00
blessedcoolant
ff4942f9b4 Merge branch 'main' into pr/2058 2023-02-17 01:05:20 +13:00
blessedcoolant
1ccad64871 build: lint/format ignores stats.html (#2681) 2023-02-17 00:42:51 +13:00
psychedelicious
19f0022bbe build: lint/format ignores stats.html 2023-02-16 20:02:52 +11:00
psychedelicious
ecc7b7a700 builds frontend 2023-02-16 19:54:38 +11:00
David Regla
e46102124e [WebUI] Even off JSX string props
Increased consistency and readability by replacing any unnecessary JSX expressions in places where string literals are sufficient
2023-02-16 19:54:25 +11:00
Lincoln Stein
314ed7d8f6 Merge branch 'main' into install/refactor-configure-and-model-select 2023-02-16 03:24:02 -05:00
Lincoln Stein
b1341bc611 fully functional and ready for review
- quashed multiple bugs in model conversion and importing
- found old issue in handling of resume of interrupted downloads
- will require extensive testing
2023-02-16 03:22:25 -05:00
Lincoln Stein
07be605dcb mostly working 2023-02-16 01:30:59 -05:00
Lincoln Stein
fe318775c3 bring in url download bugfix from PR 2630 2023-02-16 00:37:17 -05:00
Lincoln Stein
1bb07795d8 model installer downloads starter models + user-provided paths and repo_ids
- Ability to scan directory not yet implemented
- Can't download from Civitai due to incomplete URL download implementation
2023-02-16 00:34:15 -05:00
Eugene Brodsky
caf07479ec fix spelling mistake 2023-02-16 00:19:08 -05:00
Johnathon Selstad
508780d07f Also fix .bat file to point at correct configurer 2023-02-16 00:19:08 -05:00
Johnathon Selstad
05e67e924c Make configure_invokeai.py call invokeai_configure 2023-02-16 00:19:08 -05:00
blessedcoolant
fb2488314f fix minor typos (#2666)
Very, very minor typos I noticed.
2023-02-16 10:14:30 +13:00
blessedcoolant
062f58209b Merge branch 'main' into fix_typos 2023-02-16 10:01:28 +13:00
Matthias Wild
7cb9d6b1a6 [WebUI] Model Conversion (#2616)
### WebUI Model Conversion

**Model Search Updates**

- Model Search now has a radio group that allows users to pick the type
of model they are importing. If they know their model has a custom
config file, they can assign it right here. Based on their pick, the
model config data is automatically populated. And this same information
is used when converting the model to `diffusers`.


![firefox_q8b4Iog73A](https://user-images.githubusercontent.com/54517381/218283322-6bf31fd5-349a-410f-991a-2aa50ee8b6e1.png)

- Files named `model.safetensors` and
`diffusion_pytorch_model.safetensors` are excluded from the search
because these are naming conventions used by diffusers models; they
would otherwise end up in the list, since our conversion saves
safetensors and not bin files.

**Model Conversion UI**

- The **Convert To Diffusers** button can be found on the Edit page of
any **Checkpoint Model**.


![firefox_VUzv10CZ7m](https://user-images.githubusercontent.com/54517381/218283424-d9864406-ebb3-44a4-9e00-b6adda72d817.png)

- When converting the model, the entire process is handled
automatically. The config file assigned at the time of the ckpt
addition is the one used in the process.
- Users are presented with the choice on where to save the diffusers
converted model - same location as the ckpt, InvokeAI models root folder
or a completely custom location.


![firefox_HJlR97KY0u](https://user-images.githubusercontent.com/54517381/218283443-b9136edd-b432-4569-a8cc-50961544f31f.png)

- When the model is converted, the checkpoint entry is replaced with the
diffusers model entry. A user can re-add the ckpt if they wish to.

--- 

More or less done. Might make some minor UX improvements as I refine
things.
2023-02-15 21:58:29 +01:00
blessedcoolant
fb721234ec final build (webui-model-conversion) 2023-02-16 09:32:54 +13:00
blessedcoolant
92906aeb08 Merge branch 'main' into webui-model-conversion 2023-02-16 09:31:28 +13:00
Jonathan
cab41f0538 Fix perlin noise generator for diffusers tensors (#2678)
Tensors with diffusers no longer have to be multiples of 8. This broke Perlin noise generation. We now generate noise for the next largest multiple of 8 and return a cropped result. Fixes #2674.
2023-02-15 19:37:42 +01:00
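As an illustration of the pad-then-crop fix described above (not the actual generator code; `perlin_noise` is a stand-in):

```
import math
import torch


def perlin_noise(width: int, height: int) -> torch.Tensor:
    # stand-in for the real Perlin noise generator
    return torch.randn(height, width)


def perlin_noise_any_size(width: int, height: int) -> torch.Tensor:
    # generate at the next multiple of 8, then crop back to the requested size
    padded_w = math.ceil(width / 8) * 8
    padded_h = math.ceil(height / 8) * 8
    return perlin_noise(padded_w, padded_h)[:height, :width]
```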
Kent Keirsey
5d0dcaf81e Fix typo and Hi-Res Bug 2023-02-15 13:06:31 +01:00
psychedelicious
9591c8d4e0 builds frontend 2023-02-15 22:30:47 +11:00
psychedelicious
bcb1fbe031 add tooltips & status messages to model conversion 2023-02-15 22:28:36 +11:00
Lincoln Stein
e87a2fe14b model installer frontend done - needs to be hooked to backend 2023-02-15 01:07:39 -05:00
blhook
d00571b5a4 Revert yarn.lock 2023-02-14 18:05:24 -08:00
fattire
b08a514594 missed one. 2023-02-14 17:49:01 -08:00
Eugene Brodsky
265ccaca4a Merge branch 'main' into enhance/update-menu 2023-02-14 20:48:36 -05:00
fattire
7aa6c827f7 fix minor typos 2023-02-14 17:38:21 -08:00
Jonathan
093174942b Add thresholding for all diffusers types (#2479)
`generator` now asks `InvokeAIDiffuserComponent` to do postprocessing work on latents after every step. Thresholding - now implemented as replacing latents outside of the threshold with random noise - is called at this point. This postprocessing step is also where we can hook up symmetry and other image latent manipulations in the future.

Note: code at this layer doesn't need to worry about MPS as relevant torch functions are wrapped and made MPS-safe by `generator.py`.
2023-02-14 18:00:34 -06:00
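A rough sketch of the post-step thresholding described above, under the assumption that out-of-range latent values are simply swapped for fresh noise (not the exact InvokeAIDiffuserComponent code):

```
import torch


def threshold_latents(latents: torch.Tensor, threshold: float) -> torch.Tensor:
    if threshold <= 0:
        return latents
    noise = torch.randn_like(latents)
    out_of_range = latents.abs() > threshold
    return torch.where(out_of_range, noise, latents)
```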
Lincoln Stein
f299f40763 convert existing model display to column format 2023-02-14 16:32:54 -05:00
Lincoln Stein
7545e38655 frontend design done; functionality not hooked up yet 2023-02-14 00:02:19 -05:00
Lincoln Stein
0bc55a0d55 Fix link to the installation documentation
Broken link in the README. Now pointing to correct mkdocs file.
2023-02-14 04:15:23 +01:00
Lincoln Stein
d38e7170fe fix broken !import_model downloads
1. Now works with sites that produce lots of redirects, such as CIVITAI
2. Derive name of destination model file from HTTP Content-Disposition header,
   if present.
3. Swap \\ for / in file paths provided by users, to hopefully fix issues with
   Windows.
2023-02-13 22:14:24 -05:00
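Deriving the destination filename from the Content-Disposition header, with a fallback to the final URL after redirects, could look roughly like the sketch below; it is illustrative, not the exact fix:

```
import re
from pathlib import Path

import requests


def download_name(url: str) -> str:
    resp = requests.get(url, stream=True, allow_redirects=True)
    disposition = resp.headers.get("Content-Disposition", "")
    match = re.search(r'filename="?([^";]+)"?', disposition)
    if match:
        return match.group(1)
    return Path(resp.url).name  # resp.url is the final URL after redirects
```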
mauwii
15a9412255 some small formatting fixes 2023-02-13 23:10:58 +01:00
Lincoln Stein
e29399e032 don't even try to load incompatible embeddings 2023-02-13 17:00:52 -05:00
Lincoln Stein
bc18a94d8c add ability to retrieve current list of embedding trigger strings
This PR adds a new attribute to ldm.generate, `embedding_trigger_strings`:

```
gen = Generate(...)
strings = gen.embedding_trigger_strings
strings = gen.embedding_trigger_strings()
```

The trigger strings will change when the model is updated to show only
those strings which are compatible with the current
model. Dynamically-downloaded triggers from the HF Concepts Library
will only show up after they are used for the first time. However, the
full list of concepts available for download can be retrieved
programmatically like this:

```
from ldm.invoke.concepts_lib import HuggingFaceConceptsLibrary
concepts = HuggingFaceConceptsLibrary()
trigger_strings = concepts.list_concepts()
```
2023-02-13 14:11:36 -05:00
Lincoln Stein
5d2bdd478c Merge branch 'main' into bugfix/convert-v2-models 2023-02-13 13:15:05 -05:00
Lincoln Stein
9cacba916b Merge branch 'main' into install/refactor-configure-and-model-select 2023-02-13 09:31:34 -05:00
Lincoln Stein
628e82fa79 Added arabic locale files (#2561)
I have added the Arabic locale files. Some modifications to the code are
needed in order to detect the language direction and add it to the
current document body properties.

For example we can use this:

import { appWithTranslation, useTranslation } from "next-i18next";
import React, { useEffect } from "react";

const { t, i18n } = useTranslation();
const direction = i18n.dir();
useEffect(() => {
  document.body.dir = direction;
}, [direction]);

This should be added to the app file. It uses next-i18next to
automatically get the current language and sets the body text direction
(ltr or rtl) depending on the selected language.
2023-02-13 07:45:16 -05:00
Lincoln Stein
fbbbba2fac correct crash on edge case 2023-02-13 07:40:15 -05:00
blessedcoolant
9cbf9d52b4 Merge branch 'main' into pr/2561 2023-02-13 23:48:18 +13:00
blessedcoolant
fb35fe1a41 Merge branch 'main' into pr/2561 2023-02-13 23:47:21 +13:00
psychedelicious
b60b5750af builds frontend 2023-02-13 21:23:26 +11:00
psychedelicious
3ff40114fa adds arabic to language picker 2023-02-13 21:22:39 +11:00
psychedelicious
71c6ae8789 fixes mislocated language file 2023-02-13 21:22:18 +11:00
psychedelicious
d9a7536fa8 moves languages to fallback lang (en) 2023-02-13 21:21:46 +11:00
Lincoln Stein
99f4417cd7 Improve error messages from Textual Inversion and Merge scripts (#2641)
## Provide informative error messages when TI and Merge scripts have
insufficient space for console UI

- The invokeai-ti and invokeai-merge scripts will crash if there is not
enough space in the console to fit the user interface (even after
responsive formatting).

- This PR intercepts the errors and prints a useful error message
advising user to make window larger.
2023-02-13 00:12:32 -05:00
Lincoln Stein
47f94bde04 Merge branch 'main' into install/refactor-configure-and-model-select 2023-02-12 23:59:31 -05:00
Lincoln Stein
197e6b95e3 add missing file 2023-02-12 23:59:18 -05:00
Lincoln Stein
8e47ca8d57 Merge branch 'main' into bugfix/prevent-ti-frontend-crash 2023-02-12 23:56:41 -05:00
Lincoln Stein
714fff39ba add new console frontend to initial model selection, and other improvements
1. The invokeai-configure script has now been refactored. The work of
   selecting and downloading initial models at install time is now done
   by a script named invokeai-initial-models (module
   name is ldm.invoke.config.initial_model_select)

   The calling arguments for invokeai-configure have not changed, so
   nothing should break. After initializing the root directory, the
   script calls invokeai-initial-models to let the user select the
   starting models to install.

2. invokeai-initial-models puts up a console GUI with checkboxes to
   indicate which models to install. It respects the --default_only
   and --yes arguments so that CI will continue to work.

3. User can now edit the VAE assigned to diffusers models in the CLI.

4. Fixed a bug that caused a crash during model loading when the VAE
   is set to None, rather than being empty.
2023-02-12 23:52:44 -05:00
Eugene Brodsky
89239d1c54 (updater) style 'pip' progress to use dark background 2023-02-12 19:10:11 -05:00
blhook
c03d98cf46 Implement a cancel after next iteration button 2023-02-12 15:56:03 -08:00
Lincoln Stein
d1ad46d6f1 ask user to make window larger if not enough space for textual inversion/merge gui
- The invokeai-ti and invokeai-merge scripts will crash if there is not enough space
  in the console to fit the user interface (even after responsive formatting).

- This PR intercepts the errors and prints a useful error message advising user to
  make window larger.
2023-02-12 17:38:46 -05:00
Lincoln Stein
6ae7560f66 Merge branch 'main' into webui-model-conversion 2023-02-12 17:22:32 -05:00
Lincoln Stein
e561d19206 a few adjustments
- fix unused variables and f-strings found by pyflakes
- use global_converted_ckpts_dir() to find location of diffusers
- fixed bug in model_manager that was causing the description of converted
  models to read "Optimized version of {model_name}'
2023-02-12 17:20:13 -05:00
Jonathan
9eed1919c2 Strategize slicing based on free [V]RAM (#2572)
Strategize slicing based on free [V]RAM when not using xformers. Free [V]RAM is evaluated at every generation. When there's enough memory, the entire generation occurs without slicing. If there is not enough free memory, we use diffusers' sliced attention.
2023-02-12 18:24:15 +00:00
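The decision described above might be sketched like this; the byte estimate and the pipeline variable are assumptions, while `torch.cuda.mem_get_info` and diffusers' attention-slicing toggles are existing APIs:

```
import torch


def configure_attention(pipeline, device: torch.device, needed_bytes: int) -> None:
    if device.type != "cuda":
        pipeline.enable_attention_slicing()  # conservative default off-GPU
        return
    free_bytes, _total = torch.cuda.mem_get_info(device)
    if free_bytes >= needed_bytes:
        pipeline.disable_attention_slicing()  # enough room: run unsliced
    else:
        pipeline.enable_attention_slicing("max")  # tight on memory: slice maximally
```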
blessedcoolant
b87f7b1129 Update Model Conversion Help Text 2023-02-13 00:30:50 +13:00
blessedcoolant
7410a60208 Merge branch 'main' into webui-model-conversion 2023-02-12 23:35:49 +13:00
Matthias Wild
7c86130a3d add merge_group trigger to test-invoke-pip.yml (#2590) 2023-02-12 05:00:04 +01:00
Lincoln Stein
58a1d9aae0 Merge branch 'main' into update/ci/prepare-test-invoke-pip-for-queue 2023-02-11 22:38:55 -05:00
Lincoln Stein
24e32f6ae2 add 'update' action to launcher script
- Adds an update action to launcher script
- This action calls new python script `invokeai-update`, which prompts
  user to update to latest release version, main development version,
  or an arbitrary git tag or branch name.
- It then uses `pip` to update to whatever tag was specified.
2023-02-11 22:32:48 -05:00
Lincoln Stein
3dd7393984 Huge Docker Update - better caching, don't use root user, include dockerhub and more.... (#2597)
Some of the core features of this PR include:

- optional push image to dockerhub (will be skipped in repos which
didn't set it up)
- stop using the root user at runtime
- trigger builds also for update/docker/* and update/ci/docker/*
- always cache image from current branch and main branch
- separate caches for container flavors
- updated comments with instructions in build.sh and run.sh
2023-02-11 18:25:48 -05:00
Lincoln Stein
f18f743d03 Merge branch 'main' into update/docker/include-dockerhub 2023-02-11 18:03:03 -05:00
Lincoln Stein
c660dcdfcd improve ability to bulk import .ckpt and .safetensors
This commit cleans up the code that did bulk imports of legacy model
files. The code has been refactored, and the user is now offered the
option of importing all the model files found in the directory, or
selecting which ones to import.
2023-02-11 17:59:12 -05:00
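A sketch of the directory scan this implies: collect candidate .ckpt and .safetensors files so the user can import all of them or choose a subset (illustrative, not the actual CLI code):

```
from pathlib import Path


def find_importable_models(directory: str) -> list[Path]:
    root = Path(directory)
    return sorted(
        p for p in root.rglob("*")
        if p.suffix in {".ckpt", ".safetensors"}
    )
```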
blessedcoolant
9e0250c0b4 Merge branch 'main' into webui-model-conversion 2023-02-12 11:13:13 +13:00
blessedcoolant
08c747f1e0 test-build (model-conversion-v1) 2023-02-12 11:12:23 +13:00
blessedcoolant
04ae6fde80 Model Manager localization updates 2023-02-12 11:11:00 +13:00
blessedcoolant
b1a53c8ef0 {Model Manager] Backend update to support custom save locations and configs 2023-02-12 11:10:47 +13:00
blessedcoolant
cd64511f24 [Model Manager] Allows users to pick Diffusers converted model save location
Users can now pick the folder in which to save their converted diffusers model. It can be the same folder as the ckpt, the InvokeAI root models folder, or a totally custom location.
2023-02-12 11:10:17 +13:00
blessedcoolant
1e98e0b159 [Model Manager] Allow users to pick model type
Users can now pick model type when adding a new model and the configuration files are automatically applied.
2023-02-12 11:09:09 +13:00
Lincoln Stein
4f7af55bc3 if importing a v2 ckpt model, convert to diffusers 2023-02-11 16:35:45 -05:00
Lincoln Stein
d0e6a57e48 make inpaint model conversion work
Fixed a couple of bugs:

1. The original config file for the ckpt file is derived from the entry in
   `models.yaml` rather than relying on the user to select. The implication
   of this is that V2 ckpt models need to be assigned `v2-inference-v.yaml`
   when they are first imported. Otherwise they won't convert right. Note
   that currently V2 ckpts are imported with `v1-inference.yaml`, which
   isn't right either.

2. Fixed a backslash in the output diffusers path, which was causing
   load failures on Linux.

Remaining issues:

1. The radio buttons for selecting the model type are
   nonfunctional. It feels to me like these should be moved into the
   dialogue for importing ckpt/safetensors files, because this is
   where the algorithm needs help from the user.

2. The output diffusers model is written into the same directory as
   the input ckpt file. The CLI does it differently and stores the
   diffusers model in `ROOTDIR/models/converted-ckpts`. We should
   settle on one way or the other.
2023-02-11 15:53:41 -05:00
Lincoln Stein
d28a486769 rebuild frontend 2023-02-11 15:07:12 -05:00
Lincoln Stein
84722d92f6 foo 2023-02-11 15:06:34 -05:00
Lincoln Stein
8a3b5ac21d rebuild frontend 2023-02-11 14:58:49 -05:00
Lincoln Stein
717d53a773 Merge branch 'main' into bugfix/convert-v2-models 2023-02-11 14:27:52 -05:00
blessedcoolant
96926d6648 v2 Conversion Support & Radio Picker
Converted the picker options to a Radio Group and also updated the backend to use the appropriate config if it is a v2 model that needs to be converted.
2023-02-12 05:00:29 +13:00
Lincoln Stein
f3639de8b1 add note in manual that directly running v2 models not supported 2023-02-11 09:43:14 -05:00
Lincoln Stein
b71e675e8d support conversion of v2 models
- This PR introduces a CLI prompt for the proper configuration file to
  use when converting a ckpt file, in order to support both inpainting
  and v2 models files.

- When user tries to directly !import a v2 model, it prints out a proper
  warning that v2 ckpts are not directly supported.
2023-02-11 09:39:41 -05:00
tyler
d3c850104b pulling esrgan denoise strength through to the generate API. 2023-02-12 02:47:37 +13:00
tyler
c00155f6a4 pulling esrgan denoise strength through to the generate API. 2023-02-12 02:47:37 +13:00
Lincoln Stein
8753070fc7 Fix Incorrect Windows Environment Activation Location (Manual Installation Documentation) (#2627)
## What was the problem/requirement? (What/Why)
* Windows location for the Python environment activate location is
currently incorrect
  * Due to this, this command will fail for Windows-based users
* The contributing link within the `Developer Install` sections leads to
a [404](https://invoke-ai.github.io/index.md#Contributing)
* `Developer Install`'s numbered list currently lists 1, 1, 2, . . .

## What was the solution? (How)
* Changed the location of Windows script based on actual location -
[reference](https://docs.python.org/3/library/venv.html)
* Moved the link to point to one directory higher -- the main index.md
* Minor format adjustments to allow for the numbered list to appear as
expected

## How were these changes tested?
* `mkdocs serve` => Verified on local server that the changes reflected
as expected

## Notes
Contributing mentions to set the upstream towards the `development`
branch, but that branch has been untouched for several months, so I've
pointed to the `main` branch. Let me know if we need to switch to a
different one.
2023-02-11 08:15:17 -05:00
Lincoln Stein
ed8f9f021d Merge branch 'main' into update-installation-documents 2023-02-11 07:46:29 -05:00
Lincoln Stein
3ccc705396 fix two bugs in conversion of inpaint models from ckpt to diffusers m… (#2620)
…odels

- If CLI asked to convert the currently loaded model, the model would
crash on the first rendering. CLI will now refuse to convert a model
loaded in memory (probably a good idea in any case).

- CLI will offer the `v1-inpainting-inference.yaml` as the configuration
file when importing an inpainting .ckpt or .safetensors file that has
"inpainting" in the name. Otherwise it offers `v1-inference.yaml` as the
default.
2023-02-11 07:45:06 -05:00
blessedcoolant
11e422cf29 Ignore two file names instead of the entire folder
Rather than bypassing any path with "diffusers" in it, I'm specifically bypassing model.safetensors and diffusion_pytorch_model.safetensors, both of which should be diffusers files in most cases.
2023-02-12 00:13:22 +13:00
blessedcoolant
7f695fed39 Ignore safetensor or ckpt files inside diffusers model folders.
Basically skips the path if the path has the word diffusers anywhere inside it.
2023-02-12 00:03:42 +13:00
blessedcoolant
310501cd8a Add support for custom config files 2023-02-11 23:34:24 +13:00
blhook
106b3aea1b Fix incorrect Windows env activation location
Change broken link to Contributing inside of Developer Install
Minor format modification to allow for numbered list to appear properly
2023-02-11 00:30:07 -08:00
blessedcoolant
6e52ca3307 Model Convert Component 2023-02-11 20:41:49 +13:00
blessedcoolant
94c31f672f Add Initial Checks for Inpainting
The conversion itself is broken. But that's another issue.
2023-02-11 20:41:18 +13:00
Saifeddine ALOUI
240bbb9852 Merge branch 'main' into main 2023-02-11 01:17:42 +01:00
Matthias Wild
8cf2ed91a9 Merge branch 'main' into update/docker/include-dockerhub 2023-02-10 22:55:54 +01:00
mauwii
7be5b4ca8b update Dockerfile
- introduce build arg `VOLUME_DIR`
- fix permissions of the Volume
2023-02-10 22:55:19 +01:00
Lincoln Stein
d589ad96aa fix two bugs in conversion of inpaint models from ckpt to diffusers models
- If CLI asked to convert the currently loaded model, the model would crash
  on the first rendering. CLI will now refuse to convert a model loaded
  in memory (probably a good idea in any case).

- CLI will offer the `v1-inpainting-inference.yaml` as the configuration
  file when importing an inpainting .ckpt or .safetensors file that
  has "inpainting" in the name. Otherwise it offers `v1-inference.yaml`
  as the default.
2023-02-10 15:06:37 -05:00
Lincoln Stein
097e41e8d2 2.3.0 Documentation Fixes (#2609)
Found a couple of places where the formatting was messed up. I also
added a "Quick Start Guide" to the README for people who encounter
InvokeAI through PyPi. It features the PyPi install!
2023-02-10 13:00:14 -05:00
mauwii
4cf43b858d update 020_INSTALL_MANUAL.md
- some formatting changes / fixes
- updates venv creation commands
- remove extra index from Mac Installations
2023-02-10 17:29:12 +01:00
mauwii
13a4666a6e update README.md
- fix some formatting issues
- fix command to create venv
- some other small updates
2023-02-10 16:27:21 +01:00
blessedcoolant
9232290950 Initial Implementation - Model Conversion Frontend 2023-02-11 03:53:31 +13:00
blessedcoolant
f3153d45bc Initial Implementation - Model Conversion Backend 2023-02-11 03:53:15 +13:00
Lincoln Stein
d9cb6da951 Merge branch 'main' into update/docker/include-dockerhub 2023-02-10 09:42:07 -05:00
Saifeddine ALOUI
17535d887f Merge branch 'invoke-ai:main' into main 2023-02-10 07:58:28 +01:00
Lincoln Stein
35da7f5b96 Merge branch 'main' into doc/manual-install-fixes 2023-02-09 21:55:21 -05:00
Lincoln Stein
4e95a68582 adding support for ESRGAN denoising strength (#2598)
pulling in denoising support from upstream (it's already there, invoke
just isn't using it). I've enabled this as a command-line argument, since
construction of the ESRGAN handler happens once. Ideally this would be a
UI option that could be adjusted for each upscaling task. Unfortunately
that is beyond my current level of InvokeAI-foo.

Upstream reference is here, starting on line 99 "use dni to control the
denoise strength"

https://github.com/xinntao/Real-ESRGAN/blob/master/inference_realesrgan.py
2023-02-09 21:55:00 -05:00
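The upstream "dni" trick referenced above blends two checkpoints of the same architecture, key by key, before loading. A generic, hedged sketch of that kind of state-dict interpolation follows; the paths and weights are assumptions, and this is not the Real-ESRGAN implementation itself:

```
import torch


def interpolate_checkpoints(path_a: str, path_b: str, alpha: float) -> dict:
    """Blend two compatible state dicts: alpha * A + (1 - alpha) * B."""
    sd_a = torch.load(path_a, map_location="cpu")
    sd_b = torch.load(path_b, map_location="cpu")
    return {k: alpha * sd_a[k] + (1.0 - alpha) * sd_b[k] for k in sd_a}
```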
Lincoln Stein
9dfeb93f80 add quick install instructions to README 2023-02-09 20:28:20 -05:00
blessedcoolant
02247ffc79 resolved build (denoise_str) 2023-02-10 14:12:21 +13:00
blessedcoolant
48da030415 resolving conflicts 2023-02-10 14:03:31 +13:00
Lincoln Stein
817e04bee0 add quickstart instructions for PyPi 2023-02-09 19:37:04 -05:00
Lincoln Stein
e5d0b0c37d Formatting fixes, Manual Installation docs
Found a couple of places where the formatting was messed up. This corrects them.
2023-02-09 17:56:25 -05:00
blessedcoolant
950f450665 Merge branch 'main' into main 2023-02-10 11:52:26 +13:00
Lincoln Stein
f5d1fbd896 Update main to release v2.3.0 (#2608)
# Release 2.3.0

This will bring `main` up to date with release 2.3.0. I will need
approvals from @mauwii (docs) and @blessedcoolant (for _version.py).
2023-02-09 17:28:55 -05:00
Lincoln Stein
424cee63f1 Merge branch 'main' into release/2.3.0-last-tweaks 2023-02-09 16:36:51 -05:00
blessedcoolant
79daf8b039 clean build (esrgan-denoise-str) 2023-02-10 10:20:37 +13:00
blessedcoolant
383cbca896 lint-resolve 2023-02-10 10:16:55 +13:00
psychedelicious
07c55d5e2a adds upscaling denoising to metadata viewer 2023-02-10 07:30:17 +11:00
blessedcoolant
156151df45 build (esrgan-denoise-str) 2023-02-10 09:19:55 +13:00
blessedcoolant
03b1d71af9 Resolving Conflicts 2023-02-10 09:18:02 +13:00
blessedcoolant
da193ecd4a ESLint EOL Fix 2023-02-10 09:11:07 +13:00
psychedelicious
56fd202e21 builds frontend 2023-02-10 08:24:40 +13:00
Jonathan
29454a2974 Update generationSlice.ts 2023-02-10 08:24:40 +13:00
Jonathan
c977d295f5 Update generationSlice.ts 2023-02-10 08:24:40 +13:00
Jonathan
28eaffa188 Update generationSlice.ts
Added perlin noise state restoration.
2023-02-10 08:24:40 +13:00
psychedelicious
3feff09fb3 fixes #2049 use threshold not setting correct value 2023-02-10 08:24:40 +13:00
Lincoln Stein
158d1ef384 bump version number; update contributors 2023-02-09 13:01:08 -05:00
Matthias Wild
f6ad107fdd Merge branch 'main' into update/ci/prepare-test-invoke-pip-for-queue 2023-02-09 08:48:06 +01:00
blessedcoolant
e2c392631a build (esrgan-denoise-str) 2023-02-09 20:21:22 +13:00
blessedcoolant
4a1b4d63ef Change denoise_str default to 0.75 2023-02-09 20:21:09 +13:00
blessedcoolant
83ecda977c Add frontend UI for denoise_str for ESRGAN 2023-02-09 20:19:25 +13:00
blessedcoolant
9601febef8 Add denoise_str to ESRGARN - frontend server 2023-02-09 20:16:47 +13:00
blessedcoolant
0503680efa Change denoise_str to an arg instead of a class variable 2023-02-09 20:16:23 +13:00
mauwii
57ccec1df3 remove metadata to summary step since secret use
- This PR will also close #2593
2023-02-09 07:24:28 +01:00
Matthias Wild
22f3634481 Merge branch 'main' into update/docker/include-dockerhub 2023-02-09 07:18:45 +01:00
blessedcoolant
5590c73af2 Prettified Frontend 2023-02-09 19:16:36 +13:00
coreco
1f76b30e54 adding support for ESRGAN denoising strength, which allows for improved detail retention when upscaling photorealistic faces 2023-02-08 22:36:35 -06:00
Lincoln Stein
4785a1cd05 Up version to 2.3.0-rc7 (#2591)
This brings `main` up to date with 2.3.0 release candidate 7.
2023-02-08 22:06:58 -05:00
mauwii
8bd04654c7 remove Trash folder if existing 2023-02-09 04:00:51 +01:00
Lincoln Stein
2876c4ddec Merge branch 'main' into 2.3.0rc7 2023-02-08 21:40:14 -05:00
mauwii
0dce3188cc make DOCKERHUB_USERNAME a secret 2023-02-09 03:35:16 +01:00
mauwii
106c7aa956 bindmount outputs directory to ./docker/outputs 2023-02-09 02:48:12 +01:00
mauwii
b04f199035 revert caching to main; cache to ref_name 2023-02-09 01:33:08 +01:00
mauwii
a2b992dfd1 always cache-to main 2023-02-09 01:25:32 +01:00
mauwii
745e253a78 fix condition of Docker Hub Description step 2023-02-09 01:05:08 +01:00
mauwii
2ea551d37d create user home, expose port 2023-02-09 00:56:09 +01:00
mauwii
8d1481ca10 cache from main and ref_name 2023-02-09 00:28:04 +01:00
mauwii
307e7e00c2 only push if refs/heads/main or refs/tags/* 2023-02-09 00:15:46 +01:00
Lincoln Stein
4bce81de26 blank out lstein's employer info 2023-02-08 18:08:02 -05:00
mauwii
c3ad1c8a9f remove long sha from container tags 2023-02-09 00:06:09 +01:00
mauwii
05d51d7b5b re-use --link, lock pip cache 2023-02-08 23:45:15 +01:00
mauwii
09f69a4d28 Add output when activating venv 2023-02-08 23:39:28 +01:00
mauwii
a338af17c8 Initialize PIP_CACHE_DIR after setting the env 2023-02-08 23:17:15 +01:00
Matthias Wild
bc82fc0cdd Merge branch 'invoke-ai:main' into main 2023-02-08 22:46:16 +01:00
Saifeddine
418a3d6e41 Merge branch 'main' of https://github.com/ParisNeo/ArtBot 2023-02-08 21:59:58 +01:00
Saifeddine
fbcc52ec3d upgraded Arabic localization 2023-02-08 21:59:53 +01:00
Saifeddine ALOUI
47e89f4ba1 Merge branch 'invoke-ai:main' into main 2023-02-08 21:59:27 +01:00
Lincoln Stein
12d15a1a3f Up version to 2.3.0-rc7 2023-02-08 15:55:35 -05:00
mauwii
888d3ae968 cleanup Dockerfile 2023-02-08 21:54:06 +01:00
mauwii
a28120abdd small improvements to env.sh 2023-02-08 21:53:57 +01:00
Lincoln Stein
2aad4dab90 Initial Slider & Img2Img=1 Updates (#2467)
Adding a slider for Hi Res Fix to control Img2Img

Updated Img2img to accept values of 1 (replacing Inpaint Replace)
2023-02-08 15:50:44 -05:00
mauwii
4493d83aea update docker-scripts instructions 2023-02-08 21:35:19 +01:00
mauwii
eff0fb9a69 add merge_group trigger to test-invoke-pip.yml 2023-02-08 21:23:13 +01:00
Lincoln Stein
c19107e0a8 Merge branch 'main' into Img2Img-Slider-Updates 2023-02-08 15:21:46 -05:00
Lincoln Stein
eaf29e1751 Make menu options in invoke.bat the same as options in invoke.sh (#2588)
- This makes the launcher options menu on Windows look and act the same
as the Linux/Mac launcher, which previously was lacking the command-line
help option and didn't list item (6) as an option.
2023-02-08 15:20:43 -05:00
psychedelicious
d964374a91 builds frontend 2023-02-09 07:03:58 +11:00
Kent Keirsey
9826f80d7f Initial Slider & Img2Img=1 Updates 2023-02-09 07:02:39 +11:00
Lincoln Stein
ec89bd19dc Merge branch 'main' into installer/fix-launcher-menu 2023-02-08 14:54:36 -05:00
Lincoln Stein
23aaf54f56 Documentation for 2.3.0 (#2564)
Work in progress. I am reviewing and updating the documentation for
2.3.0. The following sections need to be done:

- [x] index.md
- [x] installation/010_INSTALL_AUTOMATED.md
- [x] installation/020_INSTALL_MANUAL.md
- [x] installation/030_INSTALL_CUDA_AND_ROCM.md (needs to be written
from scratch)
- [x] installation/040_INSTALL_DOCKER.md
- [x] installation/050_INSTALLING_MODELS.md
- [x] features/CLI.md
- [x] features/WEB.md
2023-02-08 14:54:20 -05:00
Lincoln Stein
6d3cc25bca Merge branch 'main' into 2.3-documentation-fixes 2023-02-08 14:29:35 -05:00
Lincoln Stein
c9d246c4ec Update 050_INSTALLING_MODELS.md (#2576)
Using Windows 10, I found I needed to use double backslashes to import a
new model; when using a single backslash, the output would say
"e:_ProjectsCodemodelsldmstable-diffusion-model-to-import.ckpt is
neither the path to a .ckpt file nor a diffusers repository id. Can't
import." This added tip in the documentation will help Windows users
overcome this.
2023-02-08 14:25:36 -05:00
mauwii
74406456f2 Fix links (ignored deprecated folder) 2023-02-08 20:07:27 +01:00
Lincoln Stein
8e0cd2df18 add 2.3.0 release date 2023-02-08 14:06:53 -05:00
Lincoln Stein
4d4b1777db Merge branch 'main' into patch-1 2023-02-08 13:59:47 -05:00
Lincoln Stein
d6e5da6e37 deprecated out of date FAQ 2023-02-08 13:58:17 -05:00
mauwii
5bb0f9bedc update Dockerfile - more simple user creation 2023-02-08 19:57:41 +01:00
Lincoln Stein
dec7d8b160 fix up the features/overview document 2023-02-08 13:52:02 -05:00
Lincoln Stein
4ecf016ace Merge branch 'main' into 2.3-documentation-fixes 2023-02-08 12:47:27 -05:00
Lincoln Stein
4d74af2363 Update docs/installation/030_INSTALL_CUDA_AND_ROCM.md
Co-authored-by: Eugene Brodsky <ebr@users.noreply.github.com>
2023-02-08 12:46:36 -05:00
Lincoln Stein
c6a2ba12e2 finished CLI, IMG2IMG and WEB updates 2023-02-08 12:45:56 -05:00
Lincoln Stein
350b5205a3 fix crash when --prompt="prompt" is used in CLI (#2579)
- The following were supposed to be equivalent, but the latter crashes:
```
invoke> banana sushi
invoke> --prompt="banana sushi"
```
This PR fixes the problem.

- Fixes #2548
2023-02-08 11:36:20 -05:00
Lincoln Stein
06028e0131 Merge branch 'main' into bugfix/cli-crash-on-prompt-arg 2023-02-08 11:06:48 -05:00
Lincoln Stein
c6d13e679f make menu options in invoke.bat the same as options in invoke.sh
- This makes the launcher options menu on Windows look and act the same
  as the Linux/Mac launcher, which previously was lacking the command-line
  help option and didn't list item (6) as an option.
2023-02-08 11:04:00 -05:00
psychedelicious
72357266a6 fixes #2578 use prompt bug on webkit browsers 2023-02-09 02:25:57 +13:00
Lincoln Stein
9d69843a9d fix screenshot directory name 2023-02-08 07:57:46 -05:00
Lincoln Stein
0547d20b2f crop screenshots 2023-02-08 07:54:27 -05:00
Lincoln Stein
2af6b8fbd8 screenshot revision 2023-02-08 07:46:47 -05:00
psychedelicious
0cee72dba5 fixes #2525 del hotkey doesn't work after canceling
The `useHotkeys` hook for this hotkey didn't have `isConnected` or `isProcessing` in its dependencies array. This prevented `handleDelete()` from dispatching the delete request.
2023-02-09 01:37:55 +13:00
psychedelicious
77c11a42ee fixes #2505 add preserve masked to status text 2023-02-09 01:10:59 +13:00
mauwii
bf812e6493 hotfix - use context github.ref_name for cache 2023-02-08 11:01:55 +01:00
Matthias Wild
a3da12d867 Merge branch 'invoke-ai:main' into main 2023-02-08 10:22:48 +01:00
Lincoln Stein
1d62b4210f First draft of CODEOWNERS (#2558)
This is an early draft of a codeowners file for InvokeAI. It has plenty
of gaps in it. Please use this PR to add yourself and others where
appropriate.
2023-02-08 01:13:45 -05:00
Lincoln Stein
d5a3571c00 Merge branch 'main' into dev/codeowner-assignment 2023-02-08 00:46:31 -05:00
Lincoln Stein
8b2ed9b8fd finished work on INSTALLING MODELS 2023-02-08 00:40:21 -05:00
Lincoln Stein
24792eb5da add CUDA and ROCm installation instructions 2023-02-07 23:02:45 -05:00
Lincoln Stein
614220576f add that forward slashes work too 2023-02-07 23:01:59 -05:00
Lincoln Stein
70bcbc7401 Better AMD clarification (#2536)
To better clarify that AMD is supported when using linux
2023-02-07 22:36:40 -05:00
Lincoln Stein
492605ac3e Merge branch 'main' into patch-1 2023-02-07 22:14:39 -05:00
Lincoln Stein
67f892455f fix crash when --prompt="prompt" is used in CLI
- The following were supposed to be equivalent, but the latter crashes:
```
invoke> banana sushi
invoke> --prompt="banana sushi"
```
This PR fixes the problem.

- Fixes #2548
2023-02-07 22:09:34 -05:00
Lincoln Stein
ae689d1a4a add platform-specific help instructions to installer (#2530)
This adds some platform-specific help messages to the installer welcome
screen:

- For Windows, the message encourages them to install VC++ core
libraries and the registry long name patch
- For MacOSX, the message warns the user to install the XCode tools.
2023-02-07 20:47:58 -05:00
Lincoln Stein
10990799db Merge branch 'main' into dev/codeowner-assignment 2023-02-07 20:29:38 -05:00
Lincoln Stein
c5b4397212 Merge branch 'main' into installer/platform-specific-help 2023-02-07 20:25:02 -05:00
LoganPederson
f62bbef9f7 Update 050_INSTALLING_MODELS.md
I found I needed to use double backslashes to import a new model; when using a single backslash, the output would say "e:_ProjectsCodemodelsldmstable-diffusion-model-to-import.ckpt is neither the path to a .ckpt file nor a diffusers repository id. Can't import." This added tip in the documentation will help Windows users overcome this.
2023-02-07 18:19:59 -06:00
Matthias Wild
6b4a06c3fc Merge branch 'invoke-ai:main' into main 2023-02-08 00:25:49 +01:00
mauwii
9157da8237 Begun to fill the empty CUDA/ROCm doc
🤡
2023-02-08 00:05:24 +01:00
Lincoln Stein
9c2b9af3a8 Bring main up to 2.3.0-rc6 (#2563)
This bumps up the version number, and also applies a hotfix to the
configure script to fix the problem described in PR #2562
2023-02-07 18:02:13 -05:00
mauwii
3833b28132 Create a new user for the container runtime
without root permission
2023-02-07 23:46:43 +01:00
Lincoln Stein
e3419c82e8 Merge branch 'main' into patch-1 2023-02-07 17:45:15 -05:00
Lincoln Stein
65f3d22649 Merge branch 'main' into dev/codeowner-assignment 2023-02-07 17:44:37 -05:00
Lincoln Stein
39b0288595 Merge branch 'main' into 2.3.0rc6 2023-02-07 17:43:38 -05:00
Lincoln Stein
13d12a0ceb Merge branch 'main' into 2.3-documentation-fixes 2023-02-07 17:08:10 -05:00
Lincoln Stein
b92dc8db83 add developer install instructions 2023-02-07 17:04:01 -05:00
Lincoln Stein
b49188a39d doc updates; clean up install directory
- Large rewrite of documentation for automated and manual install.
- Reorganize installer zip file to reduce visual clutter for user.
2023-02-07 16:35:22 -05:00
Lincoln Stein
b9c8270ee6 update manual install doc 2023-02-07 14:19:55 -05:00
Jonathan
f0f3520bca Switch to using max for attention slicing in all cases for the time being. (#2569) 2023-02-07 19:28:57 +01:00
Matthias Wild
e8f9ab82ed Merge branch 'invoke-ai:main' into main 2023-02-07 18:48:47 +01:00
psychedelicious
6ab364b16a build frontend 2023-02-07 17:06:47 +01:00
psychedelicious
a4dc11addc switch to @vitejs/plugin-react-swc 2023-02-07 17:06:47 +01:00
psychedelicious
0372702eb4 remove unneeded polyfill 2023-02-07 17:06:47 +01:00
psychedelicious
aa8eeea478 update app build configuration 2023-02-07 17:06:47 +01:00
blessedcoolant
e54ecc4c37 build (vite-4-code-quality) 2023-02-07 17:06:47 +01:00
blessedcoolant
4a12c76097 Remove build-dev 2023-02-07 17:06:47 +01:00
blessedcoolant
be72faf78e Upgrade to Vite 4 2023-02-07 17:06:46 +01:00
blessedcoolant
28d44d80ed Rebase Fix - ModelSelect 2023-02-07 17:06:46 +01:00
psychedelicious
9008d9996f builds frontend 2023-02-07 17:06:46 +01:00
psychedelicious
be2a9b78bb fixes rebase issues 2023-02-07 17:06:46 +01:00
Ryan Cao
70003ee5b1 feat: add copy image in share menu 2023-02-07 17:06:46 +01:00
psychedelicious
45a5ccba84 Updates code quality tooling and formats codebase
- `eslint` and `prettier` configs
- `husky` to format and lint via pre-commit hook
- `babel-plugin-transform-imports` to treeshake `lodash` and other packages if needed

Lints and formats codebase.
2023-02-07 17:06:46 +01:00
psychedelicious
f80a64a0f4 Reorganises internal state
`options` slice was huge and managed a mix of generation parameters and general app settings. It has been split up:

- Generation parameters are now in `generationSlice`.
- Postprocessing parameters are now in `postprocessingSlice`
- UI related things are now in `uiSlice`

There is probably more to be done; for example, `gallerySlice` should perhaps only manage internal gallery state, and not whether the gallery is displayed.

Full-slice selectors have been made for each slice.

Other organisational tweaks.
2023-02-07 17:06:46 +01:00
Lincoln Stein
511df2963b remove debugging statement 2023-02-07 17:06:46 +01:00
Lincoln Stein
f92f62a91b enhance model_manager support for converting inpainting ckpt files
Previously conversions of .ckpt and .safetensors files to diffusers
models were failing with channel mismatch errors. This is corrected
with this PR.

- The model_manager convert_and_import() method now accepts the path
  to the checkpoint file's configuration file, using the parameter
  `original_config_file`. For inpainting files this should be set to
  the full path to `v1-inpainting-inference.yaml`.

- If no configuration file is provided in the call, then the presence
  of an inpainting file will be inferred at the
  `ldm.ckpt_to_diffuser.convert_ckpt_to_diffUser()` level by looking
  for the string "inpaint" in the path. AUTO1111 does something
  similar to this, but it is brittle and not recommended.

- This PR also changes the model manager model_names() method to return
  the model names in case folded sort order.
2023-02-07 17:06:45 +01:00
psychedelicious
3efe9899c2 build frontend 2023-02-08 01:53:34 +13:00
psychedelicious
bdbe4660fc switch to @vitejs/plugin-react-swc 2023-02-08 01:53:34 +13:00
psychedelicious
8af9432f63 remove unneeded polyfill 2023-02-08 01:53:34 +13:00
psychedelicious
668d9cdb9d update app build configuration 2023-02-08 01:53:34 +13:00
blessedcoolant
90f5811e59 build (vite-4-code-quality) 2023-02-08 01:53:34 +13:00
blessedcoolant
15d21206a3 Remove build-dev 2023-02-08 01:53:34 +13:00
blessedcoolant
b622286f17 Upgrade to Vite 4 2023-02-08 01:53:34 +13:00
blessedcoolant
176add58b2 Rebase Fix - ModelSelect 2023-02-08 01:53:34 +13:00
psychedelicious
33c5f5a9c2 builds frontend 2023-02-08 01:53:34 +13:00
psychedelicious
2b7752b72e fixes rebase issues 2023-02-08 01:53:34 +13:00
Ryan Cao
5478d2a15e feat: add copy image in share menu 2023-02-08 01:53:34 +13:00
psychedelicious
9ad76fe80c Updates code quality tooling and formats codebase
- `eslint` and `prettier` configs
- `husky` to format and lint via pre-commit hook
- `babel-plugin-transform-imports` to treeshake `lodash` and other packages if needed

Lints and formats codebase.
2023-02-08 01:53:34 +13:00
psychedelicious
d74c4009cb Reorganises internal state
`options` slice was huge and managed a mix of generation parameters and general app settings. It has been split up:

- Generation parameters are now in `generationSlice`.
- Postprocessing parameters are now in `postprocessingSlice`
- UI related things are now in `uiSlice`

There is probably more to be done; for example, `gallerySlice` should perhaps only manage internal gallery state, and not whether the gallery is displayed.

Full-slice selectors have been made for each slice.

Other organisational tweaks.
2023-02-08 01:53:34 +13:00
Lincoln Stein
ffe0e81ec9 Support conversion of inpainting ckpt files to diffusers (#2550)
# enhance model_manager support for converting inpainting ckpt files

Previously conversions of .ckpt and .safetensors files to diffusers
models were failing with channel mismatch errors. This is corrected
with this PR.

- The model_manager convert_and_import() method now accepts the path
  to the checkpoint file's configuration file, using the parameter
  `original_config_file`. For inpainting files this should be set to
  the full path to `v1-inpainting-inference.yaml`.

- If no configuration file is provided in the call, then the presence
  of an inpainting file will be inferred at the
  `ldm.ckpt_to_diffuser.convert_ckpt_to_diffUser()` level by looking
  for the string "inpaint" in the path. AUTO1111 does something
  similar to this, but it is brittle and not recommended.

- This PR also changes the model manager model_names() method to return
  the model names in case folded sort order.
2023-02-07 07:25:30 -05:00
Lincoln Stein
bdf683ec41 Merge branch 'main' into enhance/convert-inpaint-models 2023-02-07 06:59:35 -05:00
mauwii
7f41893da4 set scope for caches 2023-02-07 09:27:20 +01:00
mauwii
42da4f57c2 update .dockerignore 2023-02-07 09:27:20 +01:00
mauwii
c2e11dfe83 update build-container.yml
- add long sha tag
- update cache-from
Dockerfile:
- re-use `apt-get update`
env.sh/build.sh:
- rename platform to lowercase
2023-02-07 09:27:20 +01:00
mauwii
17e1930229 remove CONTAINER_FLAVOR build arg
also disable currently unused PIP_PACKAGE build arg
will start using it when problems with XFORMERS are sorted out
2023-02-07 09:27:20 +01:00
mauwii
bde94347d3 don't use --linkin COPY 2023-02-07 09:27:20 +01:00
mauwii
b1612afff4 update .dockerignore 2023-02-07 09:27:20 +01:00
mauwii
1d10d952b2 use cleartext DOCKERHUB_USERNAME 2023-02-07 09:27:20 +01:00
mauwii
9150f9ef3c move LABEL to top 2023-02-07 09:27:20 +01:00
mauwii
7bc0f7cc6c update Docker Hub description 2023-02-07 09:27:20 +01:00
mauwii
c52d11b24c optionally push to DockerHub 2023-02-07 09:27:20 +01:00
mauwii
59486615dd update build-container.yml 2023-02-07 09:27:20 +01:00
mauwii
f0212cd361 update Dockerfile 2023-02-07 09:27:20 +01:00
mauwii
ee4cb5fdc9 add id to Build container 2023-02-07 09:27:20 +01:00
mauwii
75b919237b update cache-from 2023-02-07 09:27:20 +01:00
mauwii
07a9062e1f update .dockerignore and scripts 2023-02-07 09:27:20 +01:00
mauwii
cdb3e18b80 add flavor to pip cache id
to prevent cache invalidation
2023-02-07 09:27:20 +01:00
Lincoln Stein
28a5424242 Update textual inversion doc with the correct CLI name. (#2560) 2023-02-07 01:22:03 -05:00
Lincoln Stein
8d418af20b Merge branch 'main' into ti-doc-update 2023-02-07 00:59:53 -05:00
Lincoln Stein
055badd611 Diffusers Samplers (#2565)
- Diffusers Sampler list is independent from CKPT Sampler list. And the
app will load the correct list based on what model you have loaded.
- Isolated the activeModelSelector because it is used in multiple places.
- Possible fix to the white screen bug that some users face. This was
happening because of a possible null in the active model list
description tag. Which should hopefully now be fixed with the new
activeModelSelector.

I'll keep tabs on the last thing. Good to go.
2023-02-07 00:59:32 -05:00
blessedcoolant
944f9e98a7 build (diffusers-samplers) 2023-02-07 18:29:14 +13:00
blessedcoolant
fcffcf5602 Diffusers Samplers
Display the sampler list based on the active model.
2023-02-07 18:26:06 +13:00
blessedcoolant
f121dfe120 Update model select to use new active model selector
Hopefully this also fixes the white screen error that some users face.
2023-02-07 18:25:45 +13:00
blessedcoolant
a7dd7b4298 Add activeModelSelector
Active Model details are used in multiple places. So makes sense to have a selector for it.
2023-02-07 18:25:12 +13:00
Lincoln Stein
d94780651c Merge branch 'main' into patch-1 2023-02-07 00:07:31 -05:00
Lincoln Stein
d26abd7f01 add empty CUDA/ROCM install guide 2023-02-07 00:04:56 -05:00
Lincoln Stein
7e2b122105 updated manual install instructions 2023-02-06 23:59:48 -05:00
Lincoln Stein
8a21fc1c50 bump version to 2.3.0-rc6 2023-02-06 23:36:49 -05:00
Lincoln Stein
275d5040f4 Merge branch 'bugfix/configure-script' into 2.3.0rc6 2023-02-06 23:35:32 -05:00
Lincoln Stein
1b5930dcad do not merge diffusers and ckpt stanzas 2023-02-06 23:23:07 -05:00
Lincoln Stein
d5810f6270 Bring main up to date with RC5 (#2555)
Updated the version number
2023-02-06 22:23:58 -05:00
Lincoln Stein
ebc51dc535 incomplete work on manual install 2023-02-06 21:47:29 -05:00
Lincoln Stein
ac6e9238f1 Merge branch 'main' into ti-doc-update 2023-02-06 20:06:33 -05:00
Saifeddine
01eb93d664 Added Arabic Localisation 2023-02-07 00:42:09 +01:00
Saifeddine
89f69c2d94 Merge branch 'main' of https://github.com/ParisNeo/ArtBot 2023-02-07 00:29:33 +01:00
Saifeddine
dc6f6fcab7 Added arabic locale files 2023-02-07 00:29:30 +01:00
Dan Sully
6343b245ef Update textual inversion doc with the correct CLI name. 2023-02-06 14:51:22 -08:00
Lincoln Stein
8c80da2844 Merge branch 'main' into 2.3.0rc5 2023-02-06 17:38:25 -05:00
Lincoln Stein
a12189e088 fix build-container.yml (#2557)
This should fix the build-container workflow when triggered by a Tag
(that it is failing was mentioned in #2555 )
2023-02-06 15:09:04 -05:00
cosmii02
472c97e4e8 Merge branch 'main' into patch-1 2023-02-06 22:05:47 +02:00
mauwii
5baf0ae755 add mkdocs.yml and pyproject.toml
also make docs separate header
2023-02-06 20:47:20 +01:00
Lincoln Stein
a56e3014a4 Merge branch 'main' into update/ci/refine-build-container 2023-02-06 14:42:02 -05:00
Lincoln Stein
f3eff38f90 add tildebyte areas 2023-02-06 14:38:42 -05:00
Lincoln Stein
53d2d34b3d Merge branch 'main' into 2.3.0rc5 2023-02-06 14:34:16 -05:00
Lincoln Stein
ede7d1a8f7 first draft of codeowners 2023-02-06 14:33:46 -05:00
blessedcoolant
ac23a321b0 build (hires-strength-slider) 2023-02-07 08:22:39 +13:00
blessedcoolant
f52b233205 Add Hi Res Strength Slider 2023-02-07 08:22:39 +13:00
mauwii
8242fc8bad update metadata 2023-02-06 19:58:48 +01:00
Matthias Wild
09b6f7572b Merge branch 'invoke-ai:main' into main 2023-02-06 19:50:40 +01:00
Lincoln Stein
bde6e96800 Merge branch 'main' into 2.3.0rc5 2023-02-06 12:55:47 -05:00
Lincoln Stein
13474e985b Merge branch 'main' into patch-1 2023-02-06 12:54:07 -05:00
Jonathan
28b40bebbe Refactor CUDA cache clearing to add statistical reporting. (#2553) 2023-02-06 12:53:30 -05:00
Lincoln Stein
1c9fd00f98 this is likely the penultimate rc 2023-02-06 12:03:08 -05:00
Lincoln Stein
8ab66a211c force torch reinstall (#2532)
For the torch and torchvision libraries **only**, the installer will now
pass the pip `--force-reinstall` option. This is intended to fix issues
with the user getting a CPU-only version of torch and then not being
able to replace it.
2023-02-06 11:58:57 -05:00
Lincoln Stein
bc03ff8b30 Merge branch 'main' into install/force-torch-reinstall 2023-02-06 11:31:57 -05:00
blessedcoolant
0247d63511 Build (negative-prompt-box) 2023-02-07 05:21:09 +13:00
blessedcoolant
7604b36577 Add Negative Prompts Box 2023-02-07 05:21:09 +13:00
blessedcoolant
4a026bd46e Organize language picker items alphabetically 2023-02-07 05:21:09 +13:00
blessedcoolant
6241fc19e0 Fix the model manager edit placeholder not being full height 2023-02-07 05:21:09 +13:00
blessedcoolant
25d7d71dd8 Slightly decrease the size of the tab list icons 2023-02-07 05:21:09 +13:00
Jonathan
2432adb38f In exception handlers, clear the torch CUDA cache (if we're using CUDA) to free up memory for other programs using the GPU and to reduce fragmentation. (#2549) 2023-02-06 10:33:24 -05:00
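The pattern described in these cache-clearing commits, reduced to a minimal sketch (the wrapper name is illustrative):

```
import torch


def generate_safely(generate_fn, *args, **kwargs):
    try:
        return generate_fn(*args, **kwargs)
    except Exception:
        if torch.cuda.is_available():
            torch.cuda.empty_cache()  # free cached blocks so other GPU programs can use them
        raise
```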
Lincoln Stein
91acae30bf Merge branch 'main' into patch-1 2023-02-06 10:14:27 -05:00
Lincoln Stein
ca749b7de1 remove debugging statement 2023-02-06 09:45:21 -05:00
Lincoln Stein
7486aa8608 enhance model_manager support for converting inpainting ckpt files
Previously conversions of .ckpt and .safetensors files to diffusers
models were failing with channel mismatch errors. This is corrected
with this PR.

- The model_manager convert_and_import() method now accepts the path
  to the checkpoint file's configuration file, using the parameter
  `original_config_file`. For inpainting files this should be set to
  the full path to `v1-inpainting-inference.yaml`.

- If no configuration file is provided in the call, then the presence
  of an inpainting file will be inferred at the
  `ldm.ckpt_to_diffuser.convert_ckpt_to_diffuser()` level by looking
  for the string "inpaint" in the path. AUTO1111 does something
  similar to this, but it is brittle and not recommended.

- This PR also changes the model manager model_names() method to return
  the model names in case folded sort order.
2023-02-06 09:35:23 -05:00
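The `original_config_file` parameter and the "inpaint"-in-path fallback are both described above; a small, self-contained sketch of how that selection logic might look (the helper and the config path shown are placeholders, not the project's code):

```python
from pathlib import Path
from typing import Optional

def choose_original_config(ckpt_path: Path, explicit_config: Optional[Path] = None) -> Optional[Path]:
    """Pick the config to hand to convert_and_import(original_config_file=...).

    Passing the config explicitly is preferred; the filename check mirrors the
    brittle "inpaint"-in-path fallback described in the commit message.
    """
    if explicit_config is not None:
        return explicit_config
    if "inpaint" in str(ckpt_path).lower():
        # Placeholder location; the real v1-inpainting-inference.yaml ships with the project.
        return Path("configs/stable-diffusion/v1-inpainting-inference.yaml")
    return None

print(choose_original_config(Path("models/sd-v1-5-inpainting.ckpt")))
```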
mauwii
0402766f4d add author label 2023-02-06 14:05:27 +01:00
mauwii
a9ef5d1532 update tags 2023-02-06 14:05:27 +01:00
Matthias Wild
a485d45400 Update test-invoke-pip.yml (#2524)
test-invoke-pip.yml:
- enable caching of pip dependencies in `actions/setup-python@v4`
- add workflow_dispatch trigger
- fix indentation in concurrency
- set env `PIP_USE_PEP517: '1'`
- cache python dependencies
- remove models cache (since we currently use 190.96 GB of 10 GB while I
am writing this)
- add step to set `INVOKEAI_OUTDIR`
- add outdir arg to invokeai
- fix path in archive results

model_manager.py:
- read files in chunks when calculating sha (windows runner is crashing
otherwise)
2023-02-06 12:56:15 +01:00
mauwii
a40bdef29f update model_manager.py
- read files in chunks when calculating sha
  - windows runner is crashing without
2023-02-06 12:30:10 +01:00
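Reading in chunks keeps memory use flat no matter how large the checkpoint is; a minimal, self-contained sketch of the approach (the 1 MiB chunk size is an arbitrary choice):

```python
import hashlib
from pathlib import Path

def sha256_of_file(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute a SHA-256 digest by streaming the file in 1 MiB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()
```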
mauwii
fc2670b4d6 update test-invoke-pip.yml
- add workflow_dispatch trigger
- fix indentation in concurrency
- set env `PIP_USE_PEP517: '1'`
- cache python dependencies
- remove models cache (since currently 183.59 GB of 10 GB are used)
- add step to set `INVOKEAI_OUTDIR`
- add outdir arg to invokeai
- fix path in archive results
2023-02-06 12:30:10 +01:00
Eugene Brodsky
f0cd1aa736 highlight key elements of installer welcome message
- help users to avoid glossing over per-platform prerequisites
- better link colouring
- update link to community instructions to install xcode command line tools
2023-02-06 00:57:29 -05:00
Lincoln Stein
c3807b044d Merge branch 'main' into install/force-torch-reinstall 2023-02-06 00:18:38 -05:00
Jonathan
b7ab025f40 Update base.py (#2543)
Free up CUDA cache right after each image is generated. VRAM usage drops down to pre-generation levels.
2023-02-06 05:14:35 +00:00
Lincoln Stein
633f702b39 fix crash in txt2img and img2img w/ inpainting models and perlin > 0 (#2544)
- get_perlin_noise() was returning 9 channels; fixed code to return
noise for just the 4 image channels and not the mask ones.

- Closes Issue #2541
2023-02-05 23:50:32 -05:00
Lincoln Stein
3969637488 remove misleading completion message from merge_diffusers 2023-02-05 23:39:43 -05:00
Lincoln Stein
658ef829d4 tweak initial model descriptions 2023-02-05 23:23:09 -05:00
Lincoln Stein
0240656361 fix crash in txt2img and img2img w/ inpainting models and perlin > 0
- get_perlin_noise() was returning 9 channels; fixed code to return
  noise for just the 4 image channels and not the mask ones.

- Closes Issue #2541
2023-02-05 22:55:08 -05:00
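For context, inpainting models work with 9-channel latents (4 image channels plus masked-image and mask conditioning), so added noise has to cover only the first four. A rough sketch of the idea; the noise source here is a plain Gaussian placeholder rather than actual perlin noise:

```python
import torch

def image_channel_noise(width: int, height: int, image_channels: int = 4) -> torch.Tensor:
    """Return noise sized for the image latent channels only, never the mask/conditioning ones."""
    # Placeholder noise source; the real code generates perlin noise.
    return torch.randn(image_channels, height // 8, width // 8)

noise = image_channel_noise(512, 512)
print(noise.shape)  # torch.Size([4, 64, 64])
```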
Lincoln Stein
719a5de506 Merge branch 'main' into patch-1 2023-02-05 21:43:13 -05:00
Matthias Wild
05bb9e444b update pypi_helper.py (#2533)
- don't rename requests
- remove dash in version (`2.3.0-rc3` becomes `2.3.0rc3`)
- read package_name instead of hardcoding it
2023-02-06 03:34:52 +01:00
Lincoln Stein
0076757767 Merge branch 'main' into dev/ci/update-pypi-helper 2023-02-05 21:10:49 -05:00
Lincoln Stein
6ab03c4d08 fix crash in both textual_inversion and merge front ends when not enough models defined (#2540)
- The issue is that if insufficient diffusers models are defined in
models.yaml the frontend would crash ungracefully.

- Now it emits appropriate error messages telling the user what the
problem is.
2023-02-05 19:34:07 -05:00
Lincoln Stein
142016827f fix formatting bugs in both textual_inversion and merge front ends
- The issue is that if insufficient diffusers models are defined in
  models.yaml the frontend would crash ungracefully.

- Now it emits appropriate error messages telling the user what the
  problem is.
2023-02-05 18:35:01 -05:00
Lincoln Stein
466a82bcc2 Updates frontend README.md (#2539) 2023-02-05 17:25:25 -05:00
Lincoln Stein
05349f6cdc Merge branch 'main' into dev/ci/update-pypi-helper 2023-02-05 17:13:09 -05:00
psychedelicious
ab585aefae Update README.md 2023-02-06 09:07:44 +11:00
Matthias Wild
083ce9358b hotfix build-container.yml (#2537)
fix broken tag
2023-02-05 22:30:23 +01:00
Lincoln Stein
f56cf2400a Merge branch 'main' into install/force-torch-reinstall 2023-02-05 15:40:35 -05:00
cosmii02
5de5e659d0 Better AMD clarification
To clarify that AMD GPUs are supported when using Linux
2023-02-05 12:29:50 -08:00
mauwii
fc53f6d47c hotfix build-container.yml 2023-02-05 21:25:44 +01:00
Matthias Wild
2f70daef8f Issue/2487/address docker issues (#2517)
Address issues of #2487
2023-02-05 21:20:13 +01:00
mauwii
fc2a136eb0 add requested change 2023-02-05 21:15:39 +01:00
Lincoln Stein
ce3da40434 Merge branch 'main' into install/force-torch-reinstall 2023-02-05 15:01:56 -05:00
mauwii
7933f27a72 update pypi_helper.py
- don't rename requests
- remove dash in version (`2.3.0-rc3` becomes `2.3.0rc3`)
- read package_name instead of hardcoding it
2023-02-05 20:45:31 +01:00
mauwii
1c197c602f update Dockerfile, .dockerignore and workflow
- dont build frontend since complications with QEMU
- set pip cache dir
- add pip cache to all pip related build steps
- dont lock pip cache
- update dockerignore to exclude unneeded files
2023-02-05 20:20:50 +01:00
mauwii
90656aa7bf update Dockerfile
- add build arg `FRONTEND_DIR`
2023-02-05 20:20:50 +01:00
mauwii
394b4a771e update Dockerfile
- remove yarn install args `--prefer-offline` and `--production=false`
2023-02-05 20:20:50 +01:00
mauwii
9c3f548900 update settings output in build.sh 2023-02-05 20:20:50 +01:00
mauwii
5662d2daa8 add invokeai/frontend/dist/** to .dockerignore 2023-02-05 20:20:50 +01:00
mauwii
fc0f966ad2 fix docs 2023-02-05 20:20:50 +01:00
mauwii
eb702a5049 fix env.sh, update Dockerfile, update build.sh
env.sh:
- move check for torch to CONTAINER_FLAVOR detection

Dockerfile
- only mount `/var/cache/apt` for apt related steps
- remove `docker-clean` from `/etc/apt/apt.conf.d` for BuildKit cache
- remove apt-get clean for BuildKit cache
- only copy frontend to frontend-builder
- mount `/usr/local/share/.cache/yarn` in frontend-builder
- separate steps for yarn install and yarn build
- build pytorch in pyproject-builder

build.sh
- prepare for installation with extras
2023-02-05 20:20:50 +01:00
mauwii
1386d73302 fix env.sh
only try to auto-detect CUDA/ROCm if torch is installed
2023-02-05 20:20:50 +01:00
mauwii
6089f33e54 fix HUGGING_FACE_HUB_TOKEN 2023-02-05 20:20:50 +01:00
mauwii
3a260cf54f update directory from docker-build to docker 2023-02-05 20:20:50 +01:00
mauwii
9949a438f4 update docs with newly added variables
also remove outdated information
2023-02-05 20:20:50 +01:00
mauwii
84c1122208 fix build.sh and env.sh 2023-02-05 20:20:50 +01:00
Lincoln Stein
cc3d431928 2.3.0rc4 (#2514)
This will bring main up to date with v2.3.0-rc4
2023-02-05 14:05:15 -05:00
Lincoln Stein
c44b060a2e Merge branch 'main' into 2.3.0rc4 2023-02-05 13:40:56 -05:00
Lincoln Stein
eff7fb89d8 installer will --force-reinstall torch 2023-02-05 13:39:46 -05:00
Lincoln Stein
cd5c112fcd Allow multiple models to be imported by passing a directory. (#2529)
This change allows a directory containing multiple models to be passed
for import.

Ensures that diffusers directories will still work.

Fixed up some minor type issues.
2023-02-05 13:36:00 -05:00
Lincoln Stein
563867fa99 Merge branch 'main' into main 2023-02-05 12:51:03 -05:00
Lincoln Stein
2e230774c2 Merge branch 'main' into 2.3.0rc4 2023-02-05 12:44:44 -05:00
Lincoln Stein
9577410be4 add platform-specific help instructions to installer 2023-02-05 12:43:13 -05:00
Lincoln Stein
4ada4c9f1f Add --log_tokenization to sysargs (#2523)
This allows the --log_tokenization option to be used as a command line
argument (or from invokeai.init), making it possible to view
tokenization information in the terminal when using the web interface.
2023-02-05 11:55:26 -05:00
blessedcoolant
9a6966924c Merge branch 'main' into main 2023-02-06 05:33:48 +13:00
Lincoln Stein
0d62525f3d reword help message slightly 2023-02-05 08:11:02 -08:00
Dan Sully
2ec864e37e Allow multiple models to be imported by passing a directory. 2023-02-05 08:11:02 -08:00
Lincoln Stein
9307ce3dc3 this fixes a crash in the TI frontend (#2527)
- This fixes an edge-case crash when the textual inversion frontend
  tried to display the list of models and no default model was defined
  in models.yaml

Co-authored-by: Jonathan <34005131+JPPhoto@users.noreply.github.com>
2023-02-05 16:05:33 +00:00
Lincoln Stein
15996446e0 Merge branch 'main' into 2.3.0rc4 2023-02-05 10:54:53 -05:00
blessedcoolant
7a06c8fd89 Merge branch 'main' into main 2023-02-06 04:43:49 +13:00
Lincoln Stein
4895fe8395 fix crash when text mask applied to img2img (#2526)
This PR fixes the crash reported at https://discord.com/channels/1020123559063990373/1031668022294884392/1071782238137630800

It also quiets down the "NSFW is disabled" nag during img2img generation.
2023-02-05 15:26:40 +00:00
Lincoln Stein
1e793a2dfe Merge branch 'main' into 2.3.0rc4 2023-02-05 10:24:09 -05:00
blessedcoolant
9c8fcaaf86 Beautify & Cleanup WebUI Logs 2023-02-05 22:55:57 +13:00
blessedcoolant
bf4344be51 Beautify Usage Stats Log 2023-02-05 22:55:40 +13:00
blessedcoolant
f7532cdfd4 Beautify Token Log Outputs 2023-02-05 22:55:29 +13:00
blessedcoolant
f1dd76c20b Remove Deprecation Warning from Diffusers Pipeline 2023-02-05 22:55:10 +13:00
whosawhatsis
3016eeb6fb Merge branch 'invoke-ai:main' into main 2023-02-04 22:56:59 -05:00
whosawhatsis
75b62d6ca8 Add --log_tokenization to sysargs
This allows the --log_tokenization option to be used as a command line argument (or from invokeai.init), making it possible to view tokenization information in the terminal when using the web interface.
2023-02-04 19:56:20 -08:00
Lincoln Stein
82ae2769c8 Configuration script tidying up (#2513)
- Rename configure_invokeai.py to invokeai_configure.py to be consistent
with installed script name
- Remove warning message about half-precision models not being available
during the model download process.
- adjust estimated file size reported by configure
- guesstimate disk space needed for "all" models
- fix up the "latest" tag to be named 'v2.3-latest'
2023-02-04 21:58:56 -05:00
Lincoln Stein
61149abd2f Merge branch 'main' into lstein/normalize-names 2023-02-04 21:41:22 -05:00
Lincoln Stein
eff126af6e Merge branch 'main' into 2.3.0rc4 2023-02-04 21:40:47 -05:00
Matthias Wild
0ca499cf96 Add workflow for PyPI Release (#2516) 2023-02-05 00:31:00 +01:00
mauwii
3abf85e658 fix conditions
workflow will only run in official repo
2023-02-04 23:58:07 +01:00
mauwii
5095285854 fix pypi-release.yml 2023-02-04 23:46:10 +01:00
mauwii
93623a4449 add conditions to check for Repo and Secret 2023-02-04 23:22:23 +01:00
mauwii
0197459b02 change back to current version 2023-02-04 23:07:20 +01:00
mauwii
1578bc68cc change version to test workflow 2023-02-04 23:06:29 +01:00
mauwii
4ace397a99 remove debug steps 2023-02-04 23:05:29 +01:00
mauwii
d85a710211 rename pypi_helper.py 2023-02-04 23:00:39 +01:00
mauwii
536d534ab4 add pypi-release.yml and pypi-helper.py 2023-02-04 22:58:21 +01:00
Lincoln Stein
fc752a4e75 move old .venv directory away during install
- To ensure a clean environment, the installer will now detect whether a
  previous .venv exists in the install location, and move it to .venv-backup
  before creating a fresh .venv.

- Any previous .venv-backup is deleted.

- User is informed of process.
2023-02-04 16:14:29 -05:00
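A minimal sketch of that backup-and-recreate behaviour, assuming the stock `shutil` module; paths and messaging are illustrative, not the project's actual installer code:

```python
import shutil
from pathlib import Path

def prepare_clean_venv(install_dir: Path) -> Path:
    """Move an existing .venv aside to .venv-backup (replacing any older backup) before reinstalling."""
    venv_dir = install_dir / ".venv"
    backup_dir = install_dir / ".venv-backup"
    if venv_dir.exists():
        if backup_dir.exists():
            shutil.rmtree(backup_dir)  # any previous backup is deleted
        shutil.move(str(venv_dir), str(backup_dir))
        print(f"Existing virtual environment moved to {backup_dir}")
    return venv_dir
```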
Lincoln Stein
3c06d114c3 fix name of latest tag 2023-02-04 14:04:24 -05:00
Lincoln Stein
00d79c1fe3 bump version number to rc4 2023-02-04 14:00:58 -05:00
Lincoln Stein
60213893ab configuration script tidying up
- Rename configure_invokeai.py to invokeai_configure.py to be
  consistent with installed script name
- Remove warning message about half-precision models not being
  available during the model download process.

- adjust estimated file size reported by configure

- guesstimate disk space needed for "all" models

- fix up the "latest" tag to be named 'v2.3-latest'
2023-02-04 13:55:36 -05:00
Lincoln Stein
3b58413d9f Fixes PYTORCH_ENABLE_MPS_FALLBACK not set correctly (#2508)
`torch` wasn't seeing the environment variable. I suspect this is
because it was imported before the variable was set, so was running with
a different environment.

Many `torch` ops are supported on MPS so this wasn't noticed
immediately, but some samplers like k_dpm_2 still use unsupported
operations and need this fallback.
2023-02-04 11:32:52 -05:00
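The essence of the fix is ordering: the variable has to be in the environment before torch is first imported. A minimal sketch:

```python
import os

# Must be set before the first `import torch`; otherwise torch starts up without
# it and unsupported MPS ops (e.g. in the k_dpm_2 sampler) fail instead of
# falling back to the CPU.
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"

import torch  # noqa: E402  (imported intentionally after the environment tweak)
```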
Lincoln Stein
1139884493 Merge branch 'main' into fix/mps-fallback 2023-02-04 11:11:59 -05:00
Lincoln Stein
17e8f966d0 Fix registration of text masks (#2501)
- Scale and crop not applied correctly
- Problem found and fixed by @spezialspezial
- Closes #2470
2023-02-04 10:48:27 -05:00
Lincoln Stein
a42b25339f Merge branch 'main' into bugfix/txt2mask 2023-02-04 10:25:30 -05:00
Lincoln Stein
1b0731dd1a use torch-cu117 from download.torch.org rather than pypi (#2492)
This PR forces the installer to install the official torch-cu117 wheel
from download.torch.org, rather than relying on PyPi.org to return the
correct version. It ought to correct the problems that some people have
experienced with cuda support not being installed.
2023-02-04 10:04:22 -05:00
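One way an installer can express that preference is to add the official PyTorch wheel index when invoking pip; a hedged sketch (the package pins are illustrative, not the project's exact pins):

```python
import subprocess
import sys

# Pull the CUDA 11.7 builds from download.pytorch.org instead of relying on
# PyPI to resolve the right wheel.
subprocess.run(
    [
        sys.executable, "-m", "pip", "install",
        "torch", "torchvision",
        "--extra-index-url", "https://download.pytorch.org/whl/cu117",
    ],
    check=True,
)
```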
Lincoln Stein
61c3886843 Merge branch 'main' into bugfix/use-cu117-wheel 2023-02-04 09:43:52 -05:00
Lincoln Stein
f76d57637e Fix bugs in merge and convert process (#2491)
1. The convert module was converting ckpt models into
StableDiffusionGeneratorPipeline objects for use in-memory, but then
when saved to disk created files that could not be merged with
StableDiffusionPipeline models. I have added a flag that selects which
pipeline class to return, so that both in-memory and disk conversions
work properly.

2. This PR also fixes an issue with `invoke.sh` not using the correct
path for the textual inversion and merge scripts.

3. Quench nags during the merge process about the safety checker being
turned off.
2023-02-04 09:40:09 -05:00
Lincoln Stein
6bf73a0cf9 Merge branch 'main' into bugfix/use-cu117-wheel 2023-02-04 09:17:45 -05:00
Lincoln Stein
5145df21d9 Merge branch 'main' into bugfix/merge-fixes 2023-02-04 09:17:01 -05:00
blessedcoolant
e96ac61cb3 Add Ukrainian Localization (#2486)
* Add Ukrainian & Update Italian

* Frontend Build (Ukrainian Localization)

* Update invokeai/frontend/dist/locales/hotkeys/ua.json

Co-authored-by: Eugene Brodsky <ebr@users.noreply.github.com>

* UA Localization Fixes

* Build (ua-fixes)

* Clean Build

* Clear Build

* Clean Build (resolving main conflicts)

* Clear Build

* Frontend Build (ua-localization-rebased)

---------

Co-authored-by: Eugene Brodsky <ebr@users.noreply.github.com>
2023-02-05 00:24:24 +13:00
blessedcoolant
0e35d829c1 Build (french-localization) 2023-02-04 23:14:25 +13:00
blessedcoolant
d08f048621 Add French Localization 2023-02-04 23:14:25 +13:00
Saifeddine ALOUI
cfd453c1c7 Added French localization 2023-02-04 23:14:25 +13:00
Saifeddine ALOUI
6ca177e462 Added French localization 2023-02-04 09:54:30 +01:00
psychedelicious
a1b1a48fb3 Fixes PYTORCH_ENABLE_MPS_FALLBACK not set correctly
`torch` wasn't seeing the environment variable. I suspect this is because it was imported before the variable was set, so was running with a different environment.

Many `torch` ops are supported on MPS so this wasn't noticed immediately, but some samplers like k_dpm_2 still use unsupported operations and need this fallback.
2023-02-04 17:27:33 +11:00
Eugene Brodsky
b5160321bf fix finding the wheel when running from outside the installer directory
in case of calling python script instead of shell
2023-02-03 23:50:57 -05:00
Lincoln Stein
0cc2a8176e bump version 2023-02-03 23:50:57 -05:00
Lincoln Stein
9ac81c1dc4 change latest tag to v2.2.3-latest, won't conflict with 2.2.5 latest tag 2023-02-03 23:50:57 -05:00
Lincoln Stein
50191774fc this fixes an issue when the install script is called outside its directory
- Also reimplements the python-path finding logic of the older install.sh script.
2023-02-03 23:50:57 -05:00
Eugene Brodsky
fcd9b813e3 Merge branch 'main' into bugfix/use-cu117-wheel 2023-02-03 23:13:22 -05:00
Lincoln Stein
813f92a1ae do not install the "update" script
- The update script doesn't work yet, so we shouldn't install it.
- For now, users update by re-running the installer.
2023-02-03 20:26:10 -05:00
Lincoln Stein
0d141c1d84 small f-string syntax fix in generate.py (#2483)
Probably low priority, but helps the error message be more clear by
hopefully displaying model_name.
2023-02-03 18:29:50 -05:00
Lincoln Stein
2e3cd03b27 Merge branch 'main' into bugfix/use-cu117-wheel 2023-02-03 18:15:54 -05:00
Lincoln Stein
4500c8b244 Merge branch 'main' into patch-2 2023-02-03 18:03:29 -05:00
Lincoln Stein
d569c9dec6 remove dead code 2023-02-03 17:35:35 -05:00
Matthias Wild
01a2b8c05b Adapt latest changes to Dockerfile (#2478)
* remove non maintained Dockerfile

* adapt Docker related files to latest changes
- also build the frontend when building the image
- skip user response if INVOKE_MODEL_RECONFIGURE is set
- split INVOKE_MODEL_RECONFIGURE to support more than one argument

* rename `docker-build` dir to `docker`

* update build-container.yml
- rename image to invokeai
- add cpu flavor
- add metadata to build summary
- enable caching
- remove build-cloud-img.yml

* fix yarn cache path, link copyjob
2023-02-03 22:34:47 +00:00
Lincoln Stein
b23664c794 registration of mask images was off due to typo
- Problem found and fixed by @spezialspezial
- Closes #2470
2023-02-03 17:32:35 -05:00
Lincoln Stein
f06fefcacc Merge branch 'main' into patch-2 2023-02-03 17:15:29 -05:00
Lincoln Stein
7fa3a499bb fix crash on Windows 10 when the configure script is given no HF token
Crashes would occur in the invokeai-configure script if no HF token
was found in cache and the user declines to provide one when prompted.
The reason appears to be that on Linux systems getpass_asterisk()
raises an EOFError when no input is provided

On Windows 10, getpass_asterisk() does not raise the EOFError, but
returns an empty string instead. This patch detects this and raises
the exception so that the control logic is preserved.
2023-02-03 16:06:49 -05:00
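A sketch of the guard described above. The import path for `getpass_asterisk` is an assumption here, and the prompt text and helper name are placeholders:

```python
from getpass_asterisk.getpass_asterisk import getpass_asterisk  # assumed import path

def prompt_for_hf_token(prompt: str = "HuggingFace token (optional): ") -> str:
    """Normalize platform behaviour: treat an empty response like the EOFError Linux raises."""
    token = getpass_asterisk(prompt)
    if not token:
        # On Windows 10 getpass_asterisk() returns "" instead of raising, so raise
        # explicitly to keep the caller's control flow identical on all platforms.
        raise EOFError
    return token
```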
Lincoln Stein
c50b64ec1d correct default menu entry in install.bat file 2023-02-03 13:30:21 -05:00
Lincoln Stein
76b0bdb6f9 Fix: upgrade fails if existing venv was created with symlinks (#2489)
if reinstalling over an existing installation where the .venv was
created with symlinks to system python instead of copies of the python
executable, the installer would raise a `SameFileError`, because it
would attempt to copy Python over itself. This fixes the issue.

Copying the executable is still preferred for new environments, because
this guarantees the stable Python version.
2023-02-03 13:29:20 -05:00
Lincoln Stein
b0ad109886 Merge branch 'main' into fix-samefile 2023-02-03 13:02:01 -05:00
Lincoln Stein
66b312c353 enhance console gui for invokeai-merge (#2480)
- Added modest adaptive behavior; if the screen is wide enough the three
checklists of models will be arranged in a horizontal row.
- Added color support
# What it looks like
On a wide window:

![image](https://user-images.githubusercontent.com/111189/216495149-0ceed761-b829-4b21-8e90-0b7faf2c7b72.png)
On a narrow window:

![image](https://user-images.githubusercontent.com/111189/216495239-1d6615cf-0e7e-44fe-83d7-513819635d8a.png)
2023-02-03 13:00:16 -05:00
Lincoln Stein
fc857f9d91 Merge branch 'main' into lstein/enhance-merge-models-gui 2023-02-03 12:36:23 -05:00
Lincoln Stein
d6bd0cbf61 Bugfixes for path finding during manual install (#2490)
- fixes bug in finding the source of the configs dir;
- updates the docs for manual install to clarify the preference for
keeping the `.venv` inside the runtime dir, and the caveat/extra steps
required if done otherwise
2023-02-03 11:02:47 -05:00
Lincoln Stein
a32f6e9ea7 use torch-cu117 from download.torch.org rather than pypi 2023-02-03 10:57:15 -05:00
Lincoln Stein
b41342a779 Merge branch 'main' into bugfix/config-manual-install 2023-02-03 10:28:18 -05:00
Lincoln Stein
7603c8982c feat: add copy image in share menu (#2484)
<img width="233" alt="Screenshot 2023-02-03 at 12 11 46"
src="https://user-images.githubusercontent.com/70191398/216510761-3e5013a3-5346-45d4-92e5-d913d035f1bc.png">
2023-02-03 10:27:54 -05:00
Lincoln Stein
d351e365d6 Merge branch 'main' into lstein/enhance-merge-models-gui 2023-02-03 10:27:32 -05:00
Lincoln Stein
d453afbf6b Merge branch 'main' into fix-samefile 2023-02-03 10:27:03 -05:00
Lincoln Stein
9ae55c91cc quench safety checker warnings from diffusers 2023-02-03 10:14:51 -05:00
Lincoln Stein
9e46badc40 convert no longer creates StableDiffusionGenerator pipelines unless asked to 2023-02-03 10:04:32 -05:00
Lincoln Stein
ca0f3ec0e4 fix launcher shell script to use correct names for ti and merge functions 2023-02-03 09:45:57 -05:00
Eugene Brodsky
4b9be6113d (docs) remove an obsolete symlink to a documentation file 2023-02-03 09:01:54 -05:00
Eugene Brodsky
31964c7c4c (docs) remove an obsolete manual install doc 2023-02-03 09:01:30 -05:00
Eugene Brodsky
64f9fbda2f (docs) update manual install documentation 2023-02-03 08:51:46 -05:00
psychedelicious
3ece2f19f0 Merge branch 'main' into share-copy-image 2023-02-04 00:46:48 +11:00
Eugene Brodsky
c38b0b906d (config) fix invokeai-configure path handling after manual install 2023-02-03 08:06:27 -05:00
Lincoln Stein
c79678a643 prevent crash when no default model defined 2023-02-03 02:27:50 -05:00
Lincoln Stein
2217998010 remove the environments-and-requirements directory 2023-02-03 01:49:30 -05:00
Eugene Brodsky
3b43f3a5a1 (installer) fix failure to create venv over an existing venv
if reinstalling over an existing installation where the .venv
was created with symlinks to system python instead of copies
of the python executable, the installer would raise a
SameFileError, because it would attempt to copy Python over
itself. This fixes the issue.
2023-02-03 00:36:28 -05:00
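One possible shape of such a guard, sketched under the assumption that the stdlib `venv.EnvBuilder` is doing the work; the real installer's logic may differ:

```python
import shutil
import venv
from pathlib import Path

def create_or_upgrade_venv(venv_path: Path) -> None:
    """Prefer copied interpreters for new envs, but tolerate an existing symlink-based .venv."""
    try:
        venv.EnvBuilder(with_pip=True, symlinks=False).create(venv_path)
    except shutil.SameFileError:
        # The existing .venv symlinks to the system python, so copying python onto
        # itself fails; fall back to symlinks instead of aborting the upgrade.
        venv.EnvBuilder(with_pip=True, symlinks=True, upgrade=True).create(venv_path)
```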
Lincoln Stein
3f193d2b97 attempted correction of white screen issue 2023-02-02 23:47:55 -05:00
Ryan Cao
9fe660c515 feat: add copy image in share menu 2023-02-03 12:10:33 +08:00
gogurtenjoyer
16356d5225 small f-string fix in generate.py
Probably low priority, but helps the error message be more clear by hopefully displaying model_name.
2023-02-02 19:33:17 -08:00
Lincoln Stein
e04cb70c7c rebuild front end 2023-02-02 21:55:01 -05:00
Lincoln Stein
ddd5137cc6 Update version 2023-02-02 21:17:53 -05:00
Lincoln Stein
b9aef33ae8 enhance console gui for invokeai-merge
- Added modest adaptive behavior; if the screen is wide enough the three
  checklists of models will be arranged in a horizontal row.
- Added color support
2023-02-02 20:26:45 -05:00
gogurtenjoyer
797e2f780d Add python version warning from the docs
Just a quick update about Python 3.11.
2023-02-02 19:28:49 -05:00
Lincoln Stein
0642728484 remove requirements step from install manual (#2442)
removing the step to link the requirements file from the docs for manual
installation, after commenting about it in #2431
2023-02-02 16:50:29 -05:00
Lincoln Stein
fe9b4f4a3c Merge branch 'main' into update/docs/remove-requirements-step 2023-02-02 16:14:45 -05:00
Lincoln Stein
756e50f641 Installer rewrite in Python (#2448)
## Summary

This PR rewrites the core of the installer in Python for cross-platform
compatibility. Filesystem path manipulation, platform/arch decisions and
various edge cases are handled in a more convenient fashion. The
original `install.bat.in`/`install.sh.in` scripts are kept as
entrypoints for their respective OSs, but only serve as thin wrappers to
the Python module.

In addition, it:

- builds and **packages the .whl with the installer**, so that
downloading a versioned installer will guarantee installation of the
same version of the application.
- updates shell entrypoints: 
- new commands are `invokeai`, `invokeai-configure`, `invokeai-ti`,
`invokeai-merge`.
- these commands will be available in the activated `.venv` or via the
launch scripts
- `invoke.py` and `configure_invokeai.py` scripts are deprecated but
kept around for backwards compatibility and to keep users' surprise to a
minimum.
- introduces a new `ldm/invoke/config` package and moves the
`configure_invokeai` script into it. Similarly, moves the Textual Inversion
script and TUI to `ldm/invoke/training`.
- moves the `configs` directory into the `ldm/invoke/config` package for
easy distribution.
- updates documentation to reflect all of the above changes
- fixes a failing test
- reduces wheel size to 3MB (from 27MB) by excluding unnecessary image
files under `assets`

⚠️ self-updating functionality and ability to install arbitrary
versions are still WIP. For now we can recommend downloading and running
the installer for a specific version as desired.

## Testing the source install

From the cloned source, check out this branch, and:

`$ python3 installer/main.py --root <path_to_destination>`

Also try:

`$ python3 installer/main.py ` - will prompt for paths
`$ python3 installer/main.py --yes` - will not prompt for any input

- try to combine the `--yes` and `--root` options
- try to install in destinations with "quirky" paths, such as paths
containing spaces in the directory name, etc.

## Testing the packaged install ("Automated Installer"):

Download the
[InvokeAI-installer-v2.3.0+a0.zip](https://github.com/invoke-ai/InvokeAI/files/10533913/InvokeAI-installer-v2.3.0%2Ba0.zip)
file, unzip it, and run the install script for your platform (preferably
in a terminal window)

OR make your own: from the cloned source, check out this branch, and:

```
cd installer
./create_installer.sh
# (do NOT tag/push when prompted! just say "no")
```

This will create the installation media:
`InvokeAI-installer-v2.3.0+a0.zip`. The installer is now
*platform-agnostic* - meaning, both Windows and *nix install resources
are packaged together.

Copy it somewhere as if it had been downloaded from the internet. Unzip
the file, enter the created `InvokeAI-Installer` directory, and run
`install.sh` or `install.bat`, as applicable to your platform.

⚠️ NOTE!!! `install.sh` accepts the same arguments as are
applicable to the Python script, i.e. you can `install.sh --yes --root
....`. This is NOT yet supported by the Windows `.bat` script. Only
interactive installation is supported on Windows. (this is still a
TODO).
2023-02-02 16:08:10 -05:00
Lincoln Stein
2202288eb2 Merge branch 'main' into dev/installer 2023-02-02 15:17:40 -05:00
Lincoln Stein
fc3378bb74 Load legacy ckpt files as diffusers models (#2468)
* refactor ckpt_to_diffuser to allow converted pipeline to remain in memory

- This idea was introduced by Damian
- Note that although I attempted to use the updated HuggingFace module
  pipelines/stable_diffusion/convert_from_ckpt.py, it was unable to
  convert safetensors files for reasons I didn't dig into.
- Default is to extract EMA weights.

* add --ckpt_convert option to load legacy ckpt files as diffusers models

- not quite working - I'm getting artifacts and glitches in the
  converted diffuser models
- leave as draft for time being

* do not include safety checker in converted files

* add ability to control which vae is used

API now allows the caller to pass an external VAE model to the
checkpoint conversion process. In this way, if an external VAE is
specified in the checkpoint's config stanza, this VAE will be used
when constructing the diffusers model.

Tested with both regular and inpainting 1.X models.

Not tested with SD 2.X models!

---------

Co-authored-by: Jonathan <34005131+JPPhoto@users.noreply.github.com>
Co-authored-by: Damian Stewart <null@damianstewart.com>
2023-02-02 20:15:44 +00:00
Lincoln Stein
96228507d2 Merge branch 'main' into dev/installer 2023-02-02 14:30:35 -05:00
Lincoln Stein
1fe5ec32f5 Swap codeowners for installer (#2477)
This PR changes the codeowner for the installer directory from
@tildebyte to @ebr due to the former's time commitments.

Further reorganization of the codeowners is pending.
2023-02-02 14:27:31 -05:00
Lincoln Stein
6dee9051a1 swap codeowners for installer 2023-02-02 13:54:53 -05:00
Lincoln Stein
d58574ca46 Merge branch 'main' into dev/installer 2023-02-02 13:53:11 -05:00
Lincoln Stein
d282000c05 swap tildebyte to ebr as code owner 2023-02-02 13:52:45 -05:00
Kevin Turner
80c5322ccc fix(img2img): do not attempt to do a zero-step img2img when strength is low (#2472) 2023-02-02 10:04:09 -08:00
Kevin Turner
da181ce64e Merge branch 'main' into fix/img2img-low-strength 2023-02-02 09:40:16 -08:00
Kevin Turner
5ef66ca237 Fix typo in xformers version, 0.16 -> 0.0.16 (#2475) 2023-02-02 09:39:08 -08:00
Lincoln Stein
e99e720474 resolve conflicts with main and rebuild frontend 2023-02-02 11:00:33 -05:00
Kevin Turner
7aa331af8c Merge branch 'main' into fix/img2img-low-strength 2023-02-02 07:20:34 -08:00
noodlebox
9e943ff7dc Fix typo in xformers version, 0.16 -> 0.0.16 2023-02-02 05:26:15 -05:00
Kent Keirsey
b5040ba8d0 Build 2023-02-02 22:52:03 +13:00
Kent Keirsey
07462d1d99 Remove Inpaint Replace 2023-02-02 22:52:03 +13:00
Eugene Brodsky
d273fba42c (installer) upgrade pip in python3.9 environments 2023-02-02 01:30:47 -05:00
Eugene Brodsky
735545dca1 (installer) remove pip from bootstrap venv requirements as it was breaking bootstrapping 2023-02-02 01:18:02 -05:00
Eugene Brodsky
328f87559b (installer) remove leftover debug logs; fix typo 2023-02-02 01:03:51 -05:00
Eugene Brodsky
6f10b06a0c (installer) clarify user messaging during destination directory selection 2023-02-02 01:03:51 -05:00
Eugene Brodsky
fd60c8297d (package) provide more legacy aliases to entrypoints to minimize user surprise 2023-02-02 01:03:51 -05:00
Lincoln Stein
480064fa06 pip won't install itself without --upgrade 2023-02-02 00:48:53 -05:00
Lincoln Stein
3810d6a4ce numerous tweaks
1. only load triton on Linux machines
2. require pip >= 23.0 so that editable installs can run without setup.py
3. model files default to SD-1.5, not 2.1
4. use the diffusers version of the inpainting model rather than ckpt
5. selected a new set of initial models based on # of likes on HuggingFace
2023-02-02 00:28:38 -05:00
Kevin Turner
44d36a0e0b fix(img2img): do not attempt to do a zero-step img2img when strength is low 2023-02-01 18:42:54 -08:00
Lincoln Stein
3996ee843c fix bugs in launcher script installation
- launcher scripts are installed *before* the configure script runs,
  so that if something goes wrong in the configure script, the user
  can run invoke.{sh,bat} and get the option to re-run configure
- fixed typo in invoke.sh which misspelled the name of invokeai-configure
2023-02-01 19:14:07 -05:00
Lincoln Stein
6d966313b9 add a --find-links argument to import custom wheels 2023-02-01 19:03:15 -05:00
Lincoln Stein
8ce9f07223 Merge branch 'main' into dev/installer 2023-02-01 17:50:22 -05:00
Lincoln Stein
11ac50a6ea install xformers and triton when CUDA torch requested 2023-02-01 17:41:38 -05:00
Matthias Wild
31146eb797 add workflow to clean caches after PR gets closed (#2450)
This helps at least a bit to get rid of all those huge caches
2023-02-01 19:06:07 +01:00
mauwii
99cd598334 add workflow to clean caches after PR gets closed 2023-02-01 18:32:29 +01:00
Lincoln Stein
5441be8169 requirements: add xformers for CUDA platforms (#2465)
[xformers
0.0.16](https://github.com/facebookresearch/xformers/releases/tag/v0.0.16)
was released earlier today, and is now installable from wheels on PyPI!

Fixes #1876.
2023-02-01 10:59:13 -05:00
Lincoln Stein
3e98b50b62 Merge branch 'main' into req-xformers 2023-02-01 10:29:49 -05:00
Lincoln Stein
5f16148dea Prevent actions from running on draft PRs (#2457)
Draft PRs are triggering actions on every commit (except
`test-invoke-pip.yml`).

I've added a conditional to each job to only run when the PR is not a
draft.

(maybe there is a reason we are running all applicable workflows on
draft PRs?)
2023-02-01 00:33:15 -05:00
Lincoln Stein
9628d45a92 Merge branch 'main' into build/no-actions-on-draft 2023-02-01 00:15:30 -05:00
Eugene Brodsky
6cbdd88fe2 (installer) correctly call invokeai entrypoints in .bat launch script 2023-02-01 00:08:18 -05:00
Eugene Brodsky
d423db4f82 (meta) add copyright statements for installer code 2023-01-31 23:47:36 -05:00
Eugene Brodsky
5c8c204a1b (installer) fix regression in directory selection 2023-01-31 23:47:36 -05:00
Eugene Brodsky
a03471c588 (installer) hide system and user site packages from the installer 2023-01-31 23:47:36 -05:00
Kevin Turner
6608343455 [enhancement] Print status message at startup when xformers is available (#2461) 2023-01-31 19:11:17 -08:00
Kevin Turner
abd972f099 Merge branch 'main' into feat/xformers-startup-message 2023-01-31 18:48:09 -08:00
Lincoln Stein
bd57793a65 fix img2img by working around pytorch bug (#2458)
horribly, temporarily send the vae to `.cpu()` so that good latents can
be produced

closes #2418
2023-01-31 21:46:05 -05:00
Kevin Turner
8cdc65effc Merge branch 'main' into fix_2418_simplified 2023-01-31 17:45:54 -08:00
Kevin Turner
85b553c567 requirements: add xformers for CUDA platforms
Now available from pip!
2023-01-31 16:51:43 -08:00
Matthias Wild
af74a2d1f4 fix broken Dockerfile (#2445)
also switch to `python:3.9-slim` since it has far fewer security issues
2023-02-01 01:47:25 +01:00
mauwii
6fdc9ac224 re-enable INVOKE_MODEL_RECONFIGURE 2023-02-01 01:21:07 +01:00
mauwii
8107d354d9 fix broken Dockerfile
also switch to `python:3.9-slim` since it has far fewer security issues
2023-02-01 01:21:07 +01:00
mauwii
7ca8abb206 integrate required changes
- also remove conda related things
- rename `invoke` to `invokeai`
- rename `configure_invokeai` to `invokeai-configure`
- rename venv back to common `.venv` but add `--prompt InvokeAI`
- remove outdated information
2023-02-01 01:17:24 +01:00
Lincoln Stein
28c17613c4 feat(inpaint): add solid infill for use with inpainting model (#2441)
A new infill method, **solid**: solid color, currently using middle
gray.

Fixes #2417

It seems like the runwayml inpainting model specifically expects those
masked areas to be blanked out like this.

I haven't tried the SD 2.0 inpainting model with it yet.
2023-01-31 18:27:48 -05:00
mauwii
eeb7a4c28c (ci) disable py3.9, lin-cuda-11_6 and win cuda 2023-02-01 00:24:56 +01:00
mauwii
0009d82a92 update test_path.py to also verify caution.png 2023-02-01 00:22:28 +01:00
Lincoln Stein
e6d52d7ce6 Merge branch 'main' into fix_2418_simplified 2023-01-31 18:11:56 -05:00
Lincoln Stein
8c726d3e3e Merge branch 'main' into build/no-actions-on-draft 2023-01-31 18:08:52 -05:00
Lincoln Stein
56e2d22b6e Merge branch 'main' into feat/solid-infill 2023-01-31 18:02:17 -05:00
Lincoln Stein
053d11fe30 fix(inpainting model): blank areas to be repainted in the masked image (#2447)
Otherwise the model seems too reluctant to change these areas, even
though the mask channel should allow it to.

This makes the solid infill method proposed by #2441 less necessary,
though I think there's still a place for an infill method that is faster
than patchmatch and more predictable than tiles.

Even with #2441, this PR is still useful because it influences all areas
to be painted, not just the infill area.

Fixes #2417
2023-01-31 18:01:33 -05:00
Lincoln Stein
0066187651 Merge branch 'main' into feat/solid-infill 2023-01-31 17:53:09 -05:00
Lincoln Stein
d3d24fa816 fill color is parameterized 2023-01-31 17:52:33 -05:00
Kevin Turner
4d58fed6b0 Merge branch 'main' into fix/inpainting-blank-slate 2023-01-31 11:04:56 -08:00
Kevin Turner
bde5874707 fix dimension errors when inpainting model is used with hires-fix (#2440) 2023-01-31 11:04:02 -08:00
Kevin Turner
eed802f5d9 Merge branch 'main' into fix/hires_inpaint 2023-01-31 09:34:29 -08:00
Lincoln Stein
c13e11a264 Merge branch 'dev/installer' of github.com:invoke-ai/InvokeAI into dev/installer 2023-01-31 12:26:19 -05:00
Lincoln Stein
1c377b7995 further improvements to ability to find location of data files
- implement the following pattern for finding data files under both
  regular and editable install conditions:

  import invokeai.foo.bar as bar
  path = bar.__path__[0]

- this *seems* to work reliably with Python 3.9. Testing on 3.10 needs
  to be performed.
2023-01-31 12:24:55 -05:00
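As a concrete illustration of the pattern above (the subpackage and file names are hypothetical stand-ins for `invokeai.foo.bar`):

```python
from pathlib import Path

import invokeai.configs as configs  # hypothetical subpackage standing in for invokeai.foo.bar

# __path__ points at the package's on-disk location under both regular and
# editable installs, so data files can be resolved relative to it.
configs_dir = Path(configs.__path__[0])
print(configs_dir / "INITIAL_MODELS.yaml")  # hypothetical data file
```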
mauwii
efe8dcaae9 cleanup test_path.py, enable pytest in pipeline
temporary enable 3.9 tests as well
2023-01-31 18:18:32 +01:00
Lincoln Stein
fc8e3dbcd3 fix crash when editing name of model
- fixes a spurious "unknown model name" error when trying to edit the
  short name of an existing model.
- relaxes naming requirements to include the ':' and '/' characters
  in model names
2023-01-31 09:59:58 -05:00
mauwii
ec1e83e912 add pytest to test path of frontend and configs 2023-01-31 09:06:06 +01:00
mauwii
ab9daf1241 remove frontend from configure_invokeai.py
since it does not get accessed there at all
2023-01-31 08:15:48 +01:00
mauwii
c061c1b1b6 fix frontend path
point to package's path instead of searching for it
2023-01-31 08:15:20 +01:00
Lincoln Stein
b9cc56593e print status message at startup when xformers is available 2023-01-30 22:01:06 -05:00
psychedelicious
6a0e1c8673 Merge branch 'main' into build/no-actions-on-draft 2023-01-31 12:00:38 +11:00
Kevin Turner
371edc993a Implement .swap() against diffusers 0.12 (#2385) 2023-01-30 15:56:24 -08:00
Lincoln Stein
d71734c90d update frontend path in lint test 2023-01-30 18:48:43 -05:00
Lincoln Stein
9ad4c03277 Various fixes
1) Downgrade numpy to avoid dependency conflict with numba
2) Move all non-ldm/invoke files into `invokeai`. This includes assets, backend, frontend, and configs.
3) Fix up the way that the backend finds the frontend and the generator finds the NSFW caution.png icon.
2023-01-30 18:42:17 -05:00
Damian Stewart
5299324321 workaround for pytorch bug, fixes #2418 2023-01-30 18:45:53 +01:00
Damian Stewart
817e36f8bf Merge branch 'diffusers_cross_attention_control_reimplementation' of github.com:damian0815/InvokeAI into diffusers_cross_attention_control_reimplementation 2023-01-30 16:23:52 +01:00
Damian Stewart
d044d4c577 rename override/restore methods to better reflect what they actually do 2023-01-30 16:23:44 +01:00
Damian Stewart
3f1120e6f2 Merge branch 'main' into diffusers_cross_attention_control_reimplementation 2023-01-30 16:17:25 +01:00
Damian Stewart
17d73d09c0 Revert "with diffusers cac, always run the original prompt on the first step"
This reverts commit 27ee939e4b.
2023-01-30 15:38:03 +01:00
Damian Stewart
478c379534 for cac make t_start=0.1 the default 2023-01-30 15:30:01 +01:00
Damian Stewart
c5c160a788 Merge branch 'diffusers_cross_attention_control_reimplementation' of github.com:damian0815/InvokeAI into diffusers_cross_attention_control_reimplementation 2023-01-30 14:51:06 +01:00
Damian Stewart
27ee939e4b with diffusers cac, always run the original prompt on the first step 2023-01-30 14:50:57 +01:00
psychedelicious
c222cf7e64 Prevents actions from running on draft PRs 2023-01-30 22:28:05 +11:00
Eugene Brodsky
b2a3b8bbf6 (installer) fix the create_installer.sh script so it instructs the user to deactivate an active venv 2023-01-30 03:42:27 -05:00
Eugene Brodsky
11cb03f7de (installer) fall back to attempted source install if wheel not found
if running `python3 installer/main.py` from the source distribution,
it would fail because it expected to find a wheel.

this PR tries to perform a source install by going one level up the directory
tree and checking for `pyproject.toml` and `ldm` directory entries to
confirm (to a degree) that this is an InvokeAI distribution
2023-01-30 03:29:09 -05:00
Eugene Brodsky
6b1dc34523 (installer) improve selection of destination directory 2023-01-30 03:15:05 -05:00
Eugene Brodsky
44786b0496 (installer) improve function naming 2023-01-29 23:39:14 -05:00
Lincoln Stein
d9ed0f6005 fix documentation of huggingface cache location (#2430)
* fix documentation of huggingface cache location

---------

Co-authored-by: Jonathan <34005131+JPPhoto@users.noreply.github.com>
2023-01-29 20:30:50 -06:00
Eugene Brodsky
2e7a002308 (installer) remove unnecessary shell options from the install wrapper script 2023-01-29 20:10:51 -05:00
Jonathan
5ce62e00c9 Merge branch 'main' into diffusers_cross_attention_control_reimplementation 2023-01-29 13:52:01 -06:00
Kevin Turner
5a8c28de97 Merge remote-tracking branch 'origin/main' into fix/hires_inpaint 2023-01-29 10:51:59 -08:00
Jonathan
07e03b31b7 Update --hires_fix (#2414)
* Update --hires_fix

Change `--hires_fix` to calculate initial width and height based on the model's resolution (if available) and with a minimum size.
2023-01-29 12:27:01 -06:00
Eugene Brodsky
5ee5c5a012 (training) correctly import TI module; fix type annotation 2023-01-28 19:09:16 -05:00
Eugene Brodsky
3075c99ed2 (ci) fix test that was failing due to CLI entrypoint change 2023-01-28 19:03:48 -05:00
Eugene Brodsky
2c0bee2a6d (config) ensure the correct 'invokeai' command is displayed to the user after configuration 2023-01-28 17:39:33 -05:00
Eugene Brodsky
8f86aa7ded (docs) update install docs to refer to the platform-agnostic installer 2023-01-28 17:39:33 -05:00
Eugene Brodsky
34e0d7aaa8 (config) rename all mentions of scripts/configure_invokeai.py to the new invokeai-configure command 2023-01-28 17:39:33 -05:00
Eugene Brodsky
abe4e1ea91 (scripts) improved script entrypoints 2023-01-28 17:39:33 -05:00
Eugene Brodsky
f1f8ce604a (installer) build .whl and distribute together with the installer; install from bundled .whl by default 2023-01-28 17:39:33 -05:00
Eugene Brodsky
47dbe7bc0d (assets) move 'caution.png' to avoid including entire 'assets' dir in the wheel
reduces wheel size to 3MB from 27MB
2023-01-28 17:39:33 -05:00
Eugene Brodsky
ebe6daac56 (installer) do not install if already in a venv 2023-01-28 17:39:33 -05:00
Eugene Brodsky
d209dab881 (installer) support both pip and source install; no longer support installing from a downloaded release .zip 2023-01-28 17:39:33 -05:00
Eugene Brodsky
2ff47cdecf (scripts) rename/reorganize CLI scripts
- add torch MPS fallback directly to CLI.py
- rename CLI scripts with `invoke-...` prefix
- delete long-deprecated scripts
- add a missing package dependency
- delete setup.py as obsolete
2023-01-28 17:39:33 -05:00
Eugene Brodsky
22c34aabfe (package) move TI scripts into a module; update packaging of 'configs' dir 2023-01-28 17:39:33 -05:00
Eugene Brodsky
b58a80109b (test) tweak pytest coverage options
- remove redundant options (unchanged from defaults)
- don't test 3rd party code
- omit fully covered files from coverage report
- gitignore junit (xml) test output directory
2023-01-28 17:39:33 -05:00
Eugene Brodsky
c5a9e70e7f (parser) fix missing argument default in parse_legacy_blend 2023-01-28 17:39:33 -05:00
Eugene Brodsky
c5914ce236 (installer) new torch index urls + support installation from PyPi 2023-01-28 17:39:33 -05:00
Eugene Brodsky
242abac12d (installer) add a --y[es[to_all]] argument for a fully hands-off install/config 2023-01-28 17:39:33 -05:00
Eugene Brodsky
4b659982b7 (installer) install.bat wrapper for the python script 2023-01-28 17:39:33 -05:00
Eugene Brodsky
71733bcfa1 (installer) copy launch/update scripts to the root dir; improve launch experience on Linux/Mac
- install.sh is now a thin wrapper around the pythonized install script
- install.bat not done yet - to follow
- user messaging is tailored to the current platform (paste shortcuts, file paths, etc)
- emit invoke.sh/invoke.bat scripts to the runtime dir
- improve launch scripts (add help option, etc)
- only emit the platform-specific scripts
2023-01-28 17:39:33 -05:00
Eugene Brodsky
d047e070b8 (config) fix config file creation in edge cases
if the config directory is missing, initialize it using the standard
process of copying it over, instead of failing to create the config file

this can happen if the user is re-running the config script in a directory which
already has the init file, but no configs dir
2023-01-28 17:39:33 -05:00
Eugene Brodsky
02c530e200 (installer) work around Windows install issues 2023-01-28 17:39:33 -05:00
Eugene Brodsky
d36bbb817c (installer) use pep517 for installing dependencies
the 'setup.py install' method is deprecated in favour of a
build-system independent format: https://peps.python.org/pep-0517/

this is needed to install dependencies that don't have a pyproject.toml
file (only setup.py) in a forward-compatible way
2023-01-28 17:39:33 -05:00
Eugene Brodsky
9997fde144 (config) moving the 'configs' dir into the 'config' module
This allows reliable distribution of the initial 'configs' directory
with the Python package, and enables the configuration script to be running
from anywhere, as long as the virtual environment is available on the sys.path
2023-01-28 17:39:33 -05:00
Eugene Brodsky
9e22ed5c12 (installer) ignore temporary venv cleanup errors on Windows
There is a race condition affecting the 'tempfile' module on Windows.
A PermissionError is raised when cleaning up the temp dir.
Python 3.10 introduced a flag to suppress this error.

Windows + Python 3.9 users will receive an unpleasant stack trace for now
2023-01-28 17:39:32 -05:00
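The Python 3.10 flag referred to here is `ignore_cleanup_errors`; a small sketch of feature-detecting it so Python 3.9 keeps working:

```python
import sys
import tempfile

if sys.version_info >= (3, 10):
    # Suppresses the spurious PermissionError raised during cleanup on Windows.
    tmp = tempfile.TemporaryDirectory(ignore_cleanup_errors=True)
else:
    # Python 3.9 on Windows may still show an unpleasant stack trace at cleanup time.
    tmp = tempfile.TemporaryDirectory()

with tmp as tmpdir:
    print("working in", tmpdir)
```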
Eugene Brodsky
169c56e471 (installer) install PyTorch from correct repositories 2023-01-28 17:39:32 -05:00
Eugene Brodsky
b186965e77 (installer) ask the user for their GPU type; improve other messaging 2023-01-28 17:39:32 -05:00
Eugene Brodsky
88526b9294 (config) move configure_invokeai script to the config module for easier importing 2023-01-28 17:39:32 -05:00
Eugene Brodsky
071a438745 (installer) add graphics accelerator selection 2023-01-28 17:39:32 -05:00
Eugene Brodsky
93129fde32 (installer) run configure_invokeai from within the installer 2023-01-28 17:39:32 -05:00
Eugene Brodsky
802b95b9d9 (installer) use prompt-toolkit for directory picking instead of tkinter 2023-01-28 17:39:32 -05:00
Eugene Brodsky
c279314cf5 (installer) use plumbum for better stdout streaming 2023-01-28 17:39:32 -05:00
Eugene Brodsky
f75b194b76 (installer) PoC to install the app (source installer style) into the app venv 2023-01-28 17:39:32 -05:00
Eugene Brodsky
bf1996bbcf (installer) add venv creation for the app 2023-01-28 17:39:32 -05:00
Eugene Brodsky
d3962ab7b5 (installer) Windows fixes 2023-01-28 17:39:32 -05:00
Eugene Brodsky
2296f5449e (installer) initial work on the installer 2023-01-28 17:39:32 -05:00
Kevin Turner
b6d37a70ca fix(inpainting model): threshold mask to avoid gray blurry seam 2023-01-28 13:34:22 -08:00
Kevin Turner
71b6ddf5fb fix(inpainting model): blank areas to be repainted in the masked image
Otherwise the model seems too reluctant to change these areas, even though the mask channel should allow it to.
2023-01-28 11:10:32 -08:00
mauwii
14de7ed925 remove requirements step from install manual 2023-01-28 00:58:32 +01:00
Kevin Turner
6556b200b5 remove experimental "blur" infill
It seems counterproductive for use with the inpainting model, and not especially useful otherwise.
2023-01-27 15:25:50 -08:00
Kevin Turner
d627cd1865 feat(inpaint): add simpler infill methods for use with inpainting model 2023-01-27 14:28:16 -08:00
Kevin Turner
09b6104bfd refactor(txt2img2img): factor out tensor shape 2023-01-27 12:04:12 -08:00
Kevin Turner
1bb5b4ab32 fix dimension errors when inpainting model is used with hires-fix 2023-01-27 11:52:05 -08:00
Lincoln Stein
c18db4e47b removed defunct textual inversion script (#2433)
The original textual inversion script in scripts is now superseded. The
replacement can be found in ldm/invoke/textual_inversion.py and is a
merging of the command line and front end scripts. After running `pip
install -e .` there will be a `textual_inversion` command on your path.
You can activate the front end this way:

`textual_inversion -gui`
2023-01-27 10:44:34 -05:00
Jonathan
f9c92e3576 Merge branch 'main' into bugfix/remove-defunct-scripts 2023-01-27 08:32:15 -06:00
psychedelicious
1ceb7a60db adds double-click to reset view to 100% (#2436)
Adds double-click to reset canvas view to 100%.

- Adds hook to manage single and double clicks
- Single Click `Reset Canvas View` --> scale to fit, no change to
current behaviour
- Double Click `Reset Canvas View` --> set scale to 1
2023-01-28 00:56:24 +11:00
psychedelicious
f509650ec5 adds double-click to reset view to 100% 2023-01-27 08:30:24 -05:00
psychedelicious
0d0f35a1e2 Fix download button styling (#2435)
fixes #2383
2023-01-28 00:29:34 +11:00
psychedelicious
6dbc42fc1a fixes download button styling 2023-01-27 20:23:12 +11:00
Lincoln Stein
f6018fe5aa removed defunct textual inversion script 2023-01-26 23:35:09 -05:00
blessedcoolant
e4cd66216e Frontend Build (diffusers-mm-fixes) 2023-01-27 17:23:25 +13:00
blessedcoolant
995fbc78c8 Diffusers Model Manager Fixes 2023-01-27 17:23:25 +13:00
blessedcoolant
3083f8313d Default Seam Steps to 30
This seems to be a temporary solution for seams looking horrible with some diffusers models.
2023-01-27 17:23:25 +13:00
Lincoln Stein
c0614ac7f3 Improve configuration of initial Waifu models (#2426)
Testing suggests that the diffusers versions of Waifu-1.4 and anything-v4.0
require the `sd-vae-ft-mse` VAE to generate decent images, so the
appropriate arguments have been added to the initial model file.
2023-01-26 18:18:00 -05:00
Lincoln Stein
0186630514 Merge branch 'main' into install/better-initial-models 2023-01-26 17:42:10 -05:00
Lincoln Stein
d53df09203 [enhancement] Improve organization & behavior of model merging and textual inversion scripts (#2427)
- Model merging and textual inversion scripts have been moved into
`ldm/invoke`, which allows them to be installed properly by
pyproject.toml.
- As part of the pyproject install, the .py suffix is removed from the
command. I.e. use `invoke`, `configure_invokeai`, `merge_models` and
`textual_inversion`.
- GUI versions are activated by adding `--gui` to the command. Without
this, you get a classical argv-based command. Example: `merge_models
--gui`
- Fixed up the launcher scripts to accommodate new naming scheme.
- Keyboard behavior of the GUI front ends has been improved. You can now
use up and down arrow to move from field to field, in addition to <tab>
and ctrl-N/ctrl-P
2023-01-26 17:36:45 -05:00
Lincoln Stein
12a29bfbc0 Merge branch 'main' into install/change-script-locations 2023-01-26 17:10:33 -05:00
Lincoln Stein
f36114eb94 Fix Sliders unable to take typed input (#2407)
Until now, the slider component was unable to take typed input due to a
bunch of issues that were a pain to solve. This PR fixes it.

Things to test:

- Moving the slider also updates the value in the input text box.
- Input text box next to slider can be changed in two ways: If you type
a manual value, the slider will be updated when you lose focus from the
input box. If you use the stepper icons to update the values, the slider
should update immediately.
- Make sure the reset buttons next to the slider are updating correctly
and make sure this updates both the slider and the input box values.
- Brush Size slider -> make sure the hotkeys are updating the input box
too.
2023-01-26 17:10:16 -05:00
Lincoln Stein
c255481c11 Merge branch 'main' into slider-fix 2023-01-26 16:20:25 -05:00
Lincoln Stein
7f81105acf dev: update to diffusers 0.12, transformers 4.26 (#2420)
Happy New Year!
2023-01-26 16:18:37 -05:00
Lincoln Stein
c8de679dc3 Merge branch 'main' into update-diffusers 2023-01-26 15:43:41 -05:00
Lincoln Stein
85b18fe9ee Merge branch 'main' into install/better-initial-models 2023-01-26 15:42:13 -05:00
Lincoln Stein
e0d8c19da6 fix indentation problem 2023-01-26 15:39:59 -05:00
Lincoln Stein
5567808237 tweak documentation 2023-01-26 15:28:54 -05:00
Lincoln Stein
2817f8a428 update launcher shell scripts for new script names & paths 2023-01-26 15:26:38 -05:00
Lincoln Stein
8e4c044ca2 clean up tab/cursor behavior in textual inversion txt gui 2023-01-26 15:18:28 -05:00
Lincoln Stein
9dc3832b9b clean up merge_models 2023-01-26 15:10:16 -05:00
Lincoln Stein
046abb634e Remove dependency on original clipseg library for text masking (#2425)
- This replaces the original clipseg library with the transformers
version from HuggingFace.
- This should make it possible to register InvokeAI at PyPi and do a
fully automated pip-based install.
- Minor regression: it is no longer possible to specify which device the
clipseg model will be loaded into, and it will reside on the CPU. However,
performance is more than acceptable.
2023-01-26 12:14:13 -05:00
Lincoln Stein
d3a469d136 fix location of textual_inversion script 2023-01-26 11:56:23 -05:00
Lincoln Stein
e79f89b619 improve initial model configuration 2023-01-26 11:53:06 -05:00
Lincoln Stein
cbd967cbc4 add documentation caveat about location of HF cached models 2023-01-26 11:48:03 -05:00
damian
e090c0dc10 try without setting every time 2023-01-26 17:46:51 +01:00
damian
c381788ab9 don't restore None 2023-01-26 17:44:27 +01:00
damian
fb312f9ed3 use the correct value - whoops 2023-01-26 17:30:29 +01:00
damian
729752620b trying out JPPhoto's patch on vast.ai 2023-01-26 17:27:33 +01:00
damian
8ed8bf52d0 use 'auto' slice size 2023-01-26 17:04:22 +01:00
Lincoln Stein
a49d546125 simplified code a bit 2023-01-26 09:46:34 -05:00
Lincoln Stein
288e31fc60 remove dependency on original clipseg library
- This replaces the original clipseg library with the transformers
  version from HuggingFace.
- This should make it possible to register InvokeAI at PyPi and do
  a fully automated pip-based install.
- Minor regression: it is no longer possible to specify which device
  the clipseg model will be loaded into, and it will reside on the CPU.
  However, performance is more than acceptable.
2023-01-26 09:35:16 -05:00
Lincoln Stein
7b2c0d12a3 add missing VAEs to initial diffuser models 2023-01-26 00:25:39 -05:00
Kevin Turner
2978c3eb8d Merge branch 'main' into update-diffusers 2023-01-25 18:42:00 -08:00
Damian Stewart
5e7ed964d2 wip updating docs 2023-01-25 23:49:38 +01:00
Damian Stewart
93a24445dc Merge remote-tracking branch 'upstream/main' into diffusers_cross_attention_control_reimplementation 2023-01-25 23:05:39 +01:00
Damian Stewart
95d147c5df MPS support: negatory 2023-01-25 23:03:30 +01:00
Damian Stewart
41aed57449 wip tracking down MPS slicing support 2023-01-25 22:27:23 +01:00
Damian Stewart
34a3f4a820 cleanup 2023-01-25 21:47:17 +01:00
Damian Stewart
1f5ad1b05e sliced swap working 2023-01-25 21:38:27 +01:00
blessedcoolant
87c63f1f08 Slider Fix Build 2023-01-26 09:04:52 +13:00
blessedcoolant
5b054dd5b7 Conflict Resolved Build (slider-fix) 2023-01-26 09:04:20 +13:00
blessedcoolant
fc5c8cc800 Merge branch 'main' into slider-fix 2023-01-26 09:03:02 +13:00
blessedcoolant
eb2ca4970b Add Dutch Localization Build 2023-01-26 08:56:38 +13:00
blessedcoolant
c2b10e6461 Add Dutch Localization 2023-01-26 08:56:38 +13:00
Dennis
190d266060 Dutch localization 2023-01-26 08:56:38 +13:00
Kevin Turner
8c8e1a448d dev: update to diffusers 0.12, transformers 4.26
Happy New Year!
2023-01-25 10:51:56 -08:00
Damian Stewart
c52dd7e3f4 Merge branch 'diffusers_cross_attention_control_reimplementation' of github.com:damian0815/InvokeAI into diffusers_cross_attention_control_reimplementation 2023-01-25 14:51:15 +01:00
Damian Stewart
a4aea1540b more wip sliced attention (.swap doesn't work yet) 2023-01-25 14:51:08 +01:00
Kevin Turner
3c53b46a35 Merge branch 'main' into diffusers_cross_attention_control_reimplementation 2023-01-24 19:32:34 -08:00
blessedcoolant
65fd6cd105 Merge branch 'main' into slider-fix 2023-01-25 08:28:37 +13:00
Lincoln Stein
61403fe306 fix second conflict in CLI.py 2023-01-24 14:21:21 -05:00
Lincoln Stein
b2f288d6ec fix conflict in CLI.py 2023-01-24 14:20:40 -05:00
blessedcoolant
d1d12e4f92 Merge branch 'main' into slider-fix 2023-01-25 08:06:30 +13:00
Lincoln Stein
eaf7934d74 [Enhancements] Allow user to specify VAE with !import_model and delete underlying model with !del_model (#2369)
Fix two deficiencies in the CLI's support for model management:

1. `!import_model` did not allow user to specify VAE file. This is now
fixed.
2. `!del_model` did not offer the user the opportunity to delete the
underlying weights file or diffusers directory. This is now fixed.
2023-01-24 13:43:16 -05:00
Lincoln Stein
079ec4cb5c Merge branch 'main' into feat/import-with-vae 2023-01-24 13:16:00 -05:00
blessedcoolant
38d0b1e3df Merge branch 'main' into slider-fix 2023-01-25 07:14:26 +13:00
blessedcoolant
fc6500e819 Fix Inpaint Replace Slider 2023-01-25 07:13:01 +13:00
Lincoln Stein
f521f5feba improve UI of textual inversion frontend (#2333)
- File selection box now accepts directories that don't exist yet.
- Fixed crash when resume is selected and no files available to resume
from.
2023-01-24 12:22:17 -05:00
Lincoln Stein
ce865a8d69 Merge branch 'main' into slider-fix 2023-01-24 12:21:39 -05:00
Lincoln Stein
00839d02ab Merge branch 'main' into lstein-improve-ti-frontend 2023-01-24 11:53:03 -05:00
Lincoln Stein
ce52d0c42b Merge branch 'main' into feat/import-with-vae 2023-01-24 11:52:40 -05:00
Lincoln Stein
f687d90bca [feat] Better status reporting when loading embeds and concepts (#2372)
This PR improves the console reporting of the process of recognizing
trigger tokens and loading their embeds.

1. Do not report "concept is not known to HuggingFace" if the trigger
term is in fact a local embedding trigger.
2. When a trigger term is first recognized during a session, report the
fact.
This should help debug embedding issues in the future.

Note that the local embeddings produced by the new InvokeAI TI training
script default to the format <trigger> with literal angle brackets. This
sets them off from the rest of the text well and will enable
autocomplete at some point in the future. However, this means that they
supersede like-named HuggingFace concepts, and may cause problems for
people uploading them to the HuggingFace repository (although that
problem already exists).
2023-01-24 09:35:53 -05:00
Lincoln Stein
7473d814f5 remove original setup.py 2023-01-24 09:11:05 -05:00
Lincoln Stein
b2c30c2093 Merge branch 'main' into bugfix/embed-loading-messages 2023-01-24 09:08:13 -05:00
Lincoln Stein
a7048eea5f Merge branch 'main' into feat/import-with-vae 2023-01-24 09:07:41 -05:00
Lincoln Stein
87c9398266 [enhancement] import .safetensors ckpt files directly (#2353)
This small fix makes it possible to import and run safetensors ckpt
files directly without doing a conversion step first.
2023-01-24 09:06:49 -05:00
Damian Stewart
63c6019f92 sliced attention processor wip (untested) 2023-01-24 14:46:32 +01:00
blessedcoolant
8eaf0d8bfe Fix Slider Build 2023-01-24 16:44:58 +13:00
blessedcoolant
5344481809 Fix Slider not being able to take typed input 2023-01-24 16:43:29 +13:00
Lincoln Stein
9f32daab2d Merge branch 'main' into lstein-import-safetensors 2023-01-23 21:58:07 -05:00
Lincoln Stein
884768c39d Make sure --free_gpu_mem still works when using CKPT-based diffuser model (#2367)
This PR attempts to fix `--free_gpu_mem` option that was not working in
CKPT-based diffuser model after #1583.

I noticed that the memory usage after #1583 did not decrease after
generating an image when `--free_gpu_mem` option was enabled.
It turns out that the option was not propagated into the `Generator`
instance, so generation always ran without the memory-saving
procedure.

This PR is also related to #2326. Initially, I was trying to make
`--free_gpu_mem` work on the 🤗 diffusers model as well.
In the process, I noticed that InvokeAI raises an exception when
`--free_gpu_mem` is enabled.
I tried to quickly fix it by simply ignoring the exception and producing a
warning message in the user's console.
2023-01-23 21:48:23 -05:00
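The root cause described above is a plumbing problem: the CLI flag existed but never reached the object doing the work. A minimal sketch, with hypothetical class and attribute names (not InvokeAI's actual code), of what propagating `free_gpu_mem` into the generator and honoring it after each image might look like:

```python
import gc

import torch


class Generator:
    def __init__(self, model, free_gpu_mem: bool = False):
        self.model = model
        self.free_gpu_mem = free_gpu_mem  # if this is never set, the flag is silently ignored

    def generate(self, *args, **kwargs):
        try:
            return self.model(*args, **kwargs)
        finally:
            if self.free_gpu_mem and torch.cuda.is_available():
                gc.collect()
                torch.cuda.empty_cache()  # return cached allocations to the driver
```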
Lincoln Stein
bc2194228e stability improvements
- provide full traceback when a model fails to load
- fix VAE record for VoxelArt; otherwise load fails
2023-01-23 21:40:27 -05:00
Lincoln Stein
10c3afef17 Merge branch 'main' into bugfix/free-gpu-mem-diffuser 2023-01-23 21:15:12 -05:00
Lincoln Stein
98e9721101 correct fail-to-resume error
- applied https://github.com/huggingface/diffusers/pull/2072 to fix
  error in epoch calculation that caused the script not to resume from the
  latest checkpoint when asked to.
2023-01-23 21:04:07 -05:00
blessedcoolant
66babb2e81 Japanese Localization Build 2023-01-24 09:07:29 +13:00
blessedcoolant
31a967965b Add Japanese Localization 2023-01-24 09:07:29 +13:00
Katsuyuki-Karasawa
b9c9b947cd update japanese translation 2023-01-24 09:07:29 +13:00
唐澤 克幸
1eee08a070 add Japanese Translation 2023-01-24 09:07:29 +13:00
Lincoln Stein
aca1b61413 [Feature] Add interactive diffusers model merger (#2388)
This PR adds `scripts/merge_fe.py`, which will merge any 2-3 diffusers
models registered in InvokeAI's `models.yaml`, producing a new merged
model that will be registered as well.

Currently this script will only work if all models to be merged are
known by their repo_ids. Local models, including those converted from
ckpt files, will cause a crash due to a bug in the diffusers
`checkpoint_merger.py` code. I have made a PR against
huggingface/diffusers which fixes this:
https://github.com/huggingface/diffusers/pull/2060
2023-01-23 09:27:05 -05:00
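For context, the diffusers `checkpoint_merger` community pipeline that this script builds on is typically driven as sketched below. The repo ids and keyword arguments here are illustrative and may differ between diffusers versions; this is not `merge_fe.py` itself.

```python
from diffusers import DiffusionPipeline

# Attach the community merger to any compatible base pipeline.
pipe = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    custom_pipeline="checkpoint_merger",
)
# Merge two (or three) diffusers models identified by repo_id, per the caveat above.
merged = pipe.merge(
    ["CompVis/stable-diffusion-v1-4", "runwayml/stable-diffusion-v1-5"],
    interp="sigmoid",  # interpolation strategy
    alpha=0.4,         # weight given to the second model
)
merged.save_pretrained("models/my-merged-model")
```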
Lincoln Stein
e18beaff9c Merge branch 'main' into feat/merge-script 2023-01-23 09:05:38 -05:00
Kevin Turner
d7554b01fd fix typo in prompt 2023-01-23 00:24:06 -08:00
Kevin Turner
70f8793700 Merge branch 'main' into feat/import-with-vae 2023-01-23 00:17:46 -08:00
Kevin Turner
0d4e6cbff5 Merge branch 'main' into bugfix/embed-loading-messages 2023-01-23 00:12:33 -08:00
Kevin Turner
ea61bf2c94 [bugfix] ckpt conversion script respects cache in ~/invokeai/models (#2395) 2023-01-23 00:07:23 -08:00
Lincoln Stein
7dead7696c fixed setup.py to install the new scripts 2023-01-23 00:43:15 -05:00
Lincoln Stein
ffcc5ad795 conversion script uses invokeai models cache by default 2023-01-23 00:35:16 -05:00
Lincoln Stein
48deb3e49d add model merging documentation and launcher script menu entries 2023-01-23 00:20:28 -05:00
Lincoln Stein
6c31225d19 create small module for merge importation logic 2023-01-22 18:07:53 -05:00
Damian Stewart
c0610f7cb9 pass missing value 2023-01-22 18:19:06 +01:00
Damian Stewart
313b206ff8 squash float16/float32 mismatch on linux 2023-01-22 18:13:12 +01:00
Lincoln Stein
f0fe483915 Merge branch 'main' into feat/merge-script 2023-01-21 18:42:40 -05:00
Lincoln Stein
4ee8d104f0 working, but needs diffusers PR to be accepted 2023-01-21 18:39:13 -05:00
Kevin Turner
89791d91e8 fix: use pad_token for padding (#2381)
Stable Diffusion 2 does not use eos_token for padding.

Fixes #2378
2023-01-21 13:30:03 -08:00
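The distinction matters because SD 2.x tokenizers define a pad token that is not `<eos>`. A hedged illustration using the transformers tokenizer; the repo id and surrounding code are examples, not the patched InvokeAI code:

```python
from transformers import CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base", subfolder="tokenizer"
)
# padding="max_length" pads with tokenizer.pad_token_id; the old code padded
# with eos_token_id, which gives wrong conditioning for SD 2.x.
batch = tokenizer(
    "a photograph of an astronaut riding a horse",
    padding="max_length",
    max_length=tokenizer.model_max_length,
    truncation=True,
    return_tensors="pt",
)
print(tokenizer.pad_token_id, tokenizer.eos_token_id)  # distinct ids for SD 2.x
```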
Kevin Turner
87f3da92e9 Merge branch 'main' into fix/sd2-padding-token 2023-01-21 13:11:02 -08:00
Lincoln Stein
f169bb0020 fix long prompt weighting bug in ckpt codepath (#2382) 2023-01-21 15:14:14 -05:00
Damian Stewart
155efadec2 Merge branch 'main' into fix/sd2-padding-token 2023-01-21 21:05:40 +01:00
Damian Stewart
bffe199ad7 SwapCrossAttnProcessor working - tested on mac CPU (MPS doesn't work) 2023-01-21 20:54:18 +01:00
Damian Stewart
0c2a511671 wip SwapCrossAttnProcessor 2023-01-21 18:07:36 +01:00
Damian Stewart
e94c8fa285 fix long prompt weighting bug in ckpt codepath 2023-01-21 12:08:21 +01:00
Lincoln Stein
b3363a934d Update index.md (#2377) 2023-01-21 00:17:23 -05:00
Lincoln Stein
599c558c87 Merge branch 'main' into patch-1 2023-01-20 23:54:40 -05:00
Kevin Turner
d35ec3398d fix: use pad_token for padding
Stable Diffusion does not use the eos_token for padding.
2023-01-20 19:25:20 -08:00
Lincoln Stein
96a900d1fe correctly import diffusers models by their local path
- Corrects a bug in which the local path was treated as a repo_id
2023-01-20 20:13:43 -05:00
Lincoln Stein
f00f7095f9 Add instructions for installing xFormers on linux (#2360)
I've written up the install procedure for xFormers on Linux systems.

I need help with the Windows install; I don't know what the build
dependencies (compiler, etc) are. This section of the docs is currently
empty.

Please see `docs/installation/070_INSTALL_XFORMERS.md`
2023-01-20 17:57:12 -05:00
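Once xFormers is installed per that document, a diffusers pipeline opts into memory-efficient attention with a single call. Shown only for orientation; InvokeAI wires this up internally behind its own flags:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.enable_xformers_memory_efficient_attention()  # raises if xformers is not importable
```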
mauwii
d7217e3801 disable instable CI tests for windows runners
therefore enable all pytorch versions to verify installation
2023-01-20 23:30:25 +01:00
mauwii
fc5fdae562 update installation instructions 2023-01-20 23:30:25 +01:00
mauwii
a491644e56 fix dependencies/requirements 2023-01-20 23:30:24 +01:00
mauwii
ec2a509e01 make images in README.md compatible with PyPI
also add missing new-lines before/after headings
2023-01-20 23:30:24 +01:00
mauwii
6a3a0af676 update test-invoke-pip.yml
- remove stable-diffusion-model from matrix
- add windows-cuda-11_6 and linux-cuda-11_6
- enable linux-cpu
- disable windows-cpu
- change step order
- remove job env
- set runner.os specific env
- install editable
- cache models folder
- remove `--model` and `--root` arguments from invoke command
2023-01-20 23:30:24 +01:00
mauwii
ef4b03289a enable image generating step for windows as well
- also remove left over debug lines and development branch leftover
2023-01-20 23:30:24 +01:00
mauwii
963b666844 fix memory issue on windows runner
- use cpu version which is only 162.6 MB
- set `INVOKEAI_ROOT=C:\InvokeAI` on Windows runners
2023-01-20 23:30:24 +01:00
mauwii
5a788f8f73 fix test-invoke-pip.yml matrix 2023-01-20 23:30:24 +01:00
mauwii
5afb63e41b replace legacy setup.py with pyproject.toml
other changes which where required:
- move configure_invokeai.py into ldm.invoke
- update files which imported configure_invokeai to use new location:
    - ldm/invoke/CLI.py
    - scripts/load_models.py
    - scripts/preload_models.py
- update test-invoke-pip.yml:
    - remove pr type "converted_to_draft"
    - remove reference to dev/diffusers
    - remove no more needed requirements from matrix
    - add pytorch to matrix
    - install via `pip3 install --use-pep517 .`
    - use the created executables
        - this should also fix configure_invokeai not being executed on Windows
To install, use `pip install --use-pep517 -e .`, where `-e` is optional
2023-01-20 23:30:24 +01:00
Lincoln Stein
279ffcfe15 Merge branch 'main' into lstein/xformers-instructions 2023-01-20 17:29:39 -05:00
Lincoln Stein
9b73292fcb add pip install documentation for xformers 2023-01-20 17:28:14 -05:00
Lincoln Stein
67d91dc550 Merge branch 'bugfix/embed-loading-messages' of github.com:invoke-ai/InvokeAI into bugfix/embed-loading-messages 2023-01-20 17:16:50 -05:00
Lincoln Stein
a1c0818a08 ignore .DS_Store files when scanning Mac embeddings 2023-01-20 17:16:39 -05:00
Lincoln Stein
2cf825b169 Merge branch 'main' into bugfix/embed-loading-messages 2023-01-20 17:14:46 -05:00
Lincoln Stein
292b0d70d8 Merge branch 'lstein-improve-ti-frontend' of github.com:invoke-ai/InvokeAI into lstein-improve-ti-frontend 2023-01-20 17:14:08 -05:00
Lincoln Stein
c3aa3d48a0 ignore .DS_Store files when scanning Mac embeddings 2023-01-20 17:13:32 -05:00
Lincoln Stein
9e3c947cd3 Merge branch 'main' into lstein-improve-ti-frontend 2023-01-20 17:01:09 -05:00
Lincoln Stein
4b8aebabfb add diffusers repo as a reference for further reading 2023-01-20 16:59:34 -05:00
Lincoln Stein
080fc4b380 add documentation and minor bug fixes
- Added new documentation for textual inversion training process
- Move `main.py` into the deprecated scripts folder
- Fix bug in `textual_inversion.py` which was causing it to not load
  the globals module correctly.
- Sort models alphabetically in console front end
- Only show diffusers models in console front end
2023-01-20 16:55:50 -05:00
Lincoln Stein
195294e74f sort models alphabetically 2023-01-20 15:17:54 -05:00
michaelk71
da81165a4b Update index.md 2023-01-20 19:03:12 +01:00
Lincoln Stein
f3ff386491 [enhancement] Reorganize form for textual inversion training (#2375)
- Add num_train_epochs
- Reorganize widgets so all sliders that control # of steps are together
2023-01-20 10:58:26 -05:00
Lincoln Stein
da524f159e Merge branch 'main' into feat/enhance-ti-training-ui 2023-01-20 10:28:27 -05:00
Lincoln Stein
2d1eeec063 Save HFToken only if it is present (#2370)
Fixes https://github.com/invoke-ai/InvokeAI/issues/2083
2023-01-19 22:16:19 -05:00
Nicholas Koh
a8bb1a1109 Save HFToken only if it is present 2023-01-19 21:47:27 -05:00
Lincoln Stein
d9fa505412 [feat] Provide option to disable xformers from command line (#2373)
Starting `invoke.py` with --no-xformers will disable
memory-efficient-attention support if xformers is installed.

For symmetry, `--xformers` will enable support, but this is already the
default if xformers is available.
2023-01-19 19:15:57 -05:00
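A hedged sketch of how such a paired `--xformers` / `--no-xformers` switch is commonly expressed with argparse; the flag names come from the commit, the rest is illustrative rather than InvokeAI's actual argument parser:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument(
    "--xformers",
    action=argparse.BooleanOptionalAction,  # registers both --xformers and --no-xformers
    default=True,                           # on by default when xformers is installed
    help="enable or disable xformers memory-efficient attention",
)
print(parser.parse_args(["--no-xformers"]).xformers)  # False
```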
Lincoln Stein
02ce602a38 Merge branch 'main' into feat/disable-xformers 2023-01-19 18:45:59 -05:00
Lincoln Stein
9b1843307b [enhancement] Reorganize form for textual inversion training
- Add num_train_epochs
- Reorganize widgets so all sliders that control # of steps are together
2023-01-19 18:43:12 -05:00
Lincoln Stein
f0010919f2 Merge branch 'main' into bugfix/free-gpu-mem-diffuser 2023-01-19 18:03:36 -05:00
Lincoln Stein
d113b4ad41 [bugfix] suppress extraneous warning messages generated by diffusers (#2374)
This commit suppresses a few irrelevant warning messages that the
diffusers module produces:

1. The warning that turning off the NSFW detector makes you an
irresponsible person.
2. Warnings about running fp16 models stored on the CPU (we are not running
them on the CPU, just caching them in CPU RAM)
2023-01-19 18:00:31 -05:00
Lincoln Stein
895505976e [bugfix] suppress extraneous warning messages generated by diffusers
This commit suppresses a few irrelevant warning messages that the
diffusers module produces:

1. The warning that turning off the NSFW detector makes you an
irresponsible person.
2. Warnings about running fp16 models stored on the CPU (we are not running
   them on the CPU, just caching them in CPU RAM)
2023-01-19 16:49:40 -05:00
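One generic way to quiet those two classes of diffusers warnings is shown below; the commit may filter them more selectively, so treat this as an illustrative sketch rather than the actual change:

```python
import warnings

from diffusers.utils import logging as diffusers_logging

# Silence library-level warnings (e.g. the disabled-safety-checker notice).
diffusers_logging.set_verbosity_error()
# Also drop plain Python warnings emitted from diffusers modules,
# such as the fp16-weights-cached-on-CPU messages.
warnings.filterwarnings("ignore", module="diffusers")
```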
Lincoln Stein
171f4aa71b [feat] Provide option to disable xformers from command line
Starting `invoke.py` with --no-xformers will disable
memory-efficient-attention support if xformers is installed.

--xformers will enable support, but this is already the
default.
2023-01-19 16:16:35 -05:00
Lincoln Stein
775e1a21c7 improve embed trigger token not found error
- Now indicates that the trigger is *neither* a huggingface concept,
  nor the trigger of a locally loaded embed.
2023-01-19 15:46:58 -05:00
Lincoln Stein
3c3d893b9d improve status reporting when loading local and remote embeddings
- During trigger token processing, emit better status messages indicating
  which triggers were found.
- Suppress the message "<token> is not known to HuggingFace library" when the
  token is in fact a local embed.
2023-01-19 15:43:52 -05:00
Lincoln Stein
33a5c83c74 during ckpt->diffusers tell user when custom autoencoder can't be loaded
- When a ckpt or safetensors file uses an external autoencoder and we
  don't know which diffusers model corresponds to it (if any!), then
  we fall back to using stabilityai/sd-vae-ft-mse
- This commit improves error reporting so that the user knows what is happening.
2023-01-19 12:05:49 -05:00
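The fallback described above amounts to loading `stabilityai/sd-vae-ft-mse` and substituting it into the pipeline, roughly as sketched here (illustrative; InvokeAI's model_manager performs this internally):

```python
from diffusers import AutoencoderKL, StableDiffusionPipeline

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    vae=vae,  # replace the unknown custom autoencoder with the fallback
)
```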
Lincoln Stein
7ee0edcb9e when converting a ckpt/safetensors model, preserve vae in diffusers config
- After successfully converting a ckpt file to diffusers, model_manager
  will attempt to create an equivalent 'vae' entry to the resulting
  diffusers stanza.

- This is a bit of a hack, as it relies on a hard-coded dictionary
  to map ckpt VAEs to diffusers VAEs. The correct way to do this
  would be to convert the VAE to a diffusers model and then point
  to that. But since (almost) all models are using vae-ft-mse-840000-ema-pruned,
  I did it the easy way first and will work on the better solution later.
2023-01-19 11:02:49 -05:00
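The "hard-coded dictionary" mentioned above is essentially a lookup from known ckpt-era VAE filenames to diffusers repo_ids. A toy version follows; the entries and function name are examples only, not the real table:

```python
CKPT_VAE_TO_DIFFUSERS = {
    "vae-ft-mse-840000-ema-pruned.ckpt": "stabilityai/sd-vae-ft-mse",
}

def diffusers_vae_for(ckpt_vae_filename: str) -> str:
    # Unknown VAEs fall back to the default discussed in the previous commit.
    return CKPT_VAE_TO_DIFFUSERS.get(ckpt_vae_filename, "stabilityai/sd-vae-ft-mse")
```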
Lincoln Stein
7bd2220a24 fix two bugs in model import
1. !import_model did not allow user to specify VAE file. This is now fixed.
2. !del_model did not offer the user the opportunity to delete the underlying
   weights file or diffusers directory. This is now fixed.
2023-01-19 01:30:58 -05:00
Lincoln Stein
284b432ffd add triton install instructions 2023-01-18 22:34:36 -05:00
Lincoln Stein
ab675af264 Merge branch 'main' into lstein-improve-ti-frontend 2023-01-18 22:22:30 -05:00
Daya Adianto
be58a6bfbc Merge branch 'main' into bugfix/free-gpu-mem-diffuser 2023-01-19 10:21:06 +07:00
Daya Adianto
5a40aadbee Ensure free_gpu_mem option is passed into the generator (#2326) 2023-01-19 09:57:03 +07:00
Lincoln Stein
e11f15cf78 Merge branch 'main' into lstein-import-safetensors 2023-01-18 17:09:48 -05:00
Lincoln Stein
ce17051b28 Store & load 🤗 models at XDG_CACHE_HOME if HF_HOME is not set (#2359)
This commit allows InvokeAI to store & load 🤗 models at a location set
by `XDG_CACHE_HOME` environment variable if `HF_HOME` is not set.

By integrating this commit, a user who either use `HF_HOME` or
`XDG_CACHE_HOME` environment variables in their environment can let
InvokeAI to reuse the existing cache directory used by 🤗 library by
default. I happened to benefit from this commit because I have a Jupyter
Notebook that uses 🤗 diffusers model stored at `XDG_CACHE_HOME`
directory.

Reference:
https://huggingface.co/docs/huggingface_hub/main/en/package_reference/environment_variables#xdgcachehome
2023-01-18 17:05:06 -05:00
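The precedence described above (`HF_HOME` first, then `XDG_CACHE_HOME`, then a home-directory default) boils down to something like the following; the function name is illustrative, not InvokeAI's:

```python
import os
from pathlib import Path

def hf_cache_home() -> Path:
    if "HF_HOME" in os.environ:
        return Path(os.environ["HF_HOME"]).expanduser()
    xdg = os.environ.get("XDG_CACHE_HOME", "~/.cache")
    return Path(xdg).expanduser() / "huggingface"
```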
Lincoln Stein
a2bdc8b579 Merge branch 'lstein-import-safetensors' of github.com:invoke-ai/InvokeAI into lstein-import-safetensors 2023-01-18 12:16:06 -05:00
Lincoln Stein
1c62ae461e fix vae safetensor loading 2023-01-18 12:15:57 -05:00
Lincoln Stein
c5b802b596 Merge branch 'main' into feature/hub-in-xdg-cache-home 2023-01-18 11:53:46 -05:00
Lincoln Stein
b9ab9ffb4a Merge branch 'main' into lstein-import-safetensors 2023-01-18 10:58:38 -05:00
Lincoln Stein
f232068ab8 Update automated install doc - link to MS C libs (#2306)
Updated the link for the MS Visual C libraries - I'm not sure if MS
changed the location of the files but this new one leads right to the
file downloads.
2023-01-18 10:56:09 -05:00
Lincoln Stein
4556f29359 Merge branch 'main' into lstein/xformers-instructions 2023-01-18 09:33:17 -05:00
Lincoln Stein
c1521be445 add instructions for installing xFormers on linux 2023-01-18 09:31:19 -05:00
Daya Adianto
f3e952ecf0 Use global_cache_dir calls properly 2023-01-18 21:06:01 +07:00
Daya Adianto
aa4e8d8cf3 Migrate legacy models (pre-2.3.0) to 🤗 cache directory if exists 2023-01-18 21:02:31 +07:00
Daya Adianto
a7b2074106 Ignore free_gpu_mem when using 🤗 diffuser model (#2326) 2023-01-18 19:42:11 +07:00
Daya Adianto
2282e681f7 Store & load 🤗 models at XDG_CACHE_HOME if HF_HOME is not set
This commit allows InvokeAI to store & load 🤗 models at a location
set by `XDG_CACHE_HOME` environment variable if `HF_HOME` is not set.

Reference: https://huggingface.co/docs/huggingface_hub/main/en/package_reference/environment_variables#xdgcachehome
2023-01-18 19:32:09 +07:00
Lincoln Stein
6e2365f835 Merge branch 'main' into patch-1 2023-01-17 23:52:13 -05:00
Lincoln Stein
e4ea98c277 further improvements to initial load (#2330)
- Migration process will not crash if duplicate model files are found,
one in legacy location and the other in new location. The model in the
legacy location will be deleted in this case.

- Added a hint to stable-diffusion-2.1 telling people it will work best
with 768 pixel images.

- Added the anything-4.0 model.
2023-01-17 23:21:14 -05:00
Lincoln Stein
2fd5fe6c89 Merge branch 'main' into lstein-improve-migration 2023-01-17 22:55:58 -05:00
Lincoln Stein
4a9e93463d Merge branch 'lstein-import-safetensors' of github.com:invoke-ai/InvokeAI into lstein-import-safetensors 2023-01-17 22:52:50 -05:00
Lincoln Stein
0b5c0c374e load safetensors vaes 2023-01-17 22:51:57 -05:00
Lincoln Stein
5750f5dac2 Merge branch 'main' into lstein-import-safetensors 2023-01-17 21:31:56 -05:00
Kevin Turner
3fb095de88 do not use autocast for diffusers (#2349)
fixes #2345
2023-01-17 14:26:35 -08:00
Lincoln Stein
c5fecfe281 Merge branch 'main' into lstein-improve-migration 2023-01-17 17:05:12 -05:00
Kevin Turner
1fa6a3558e Merge branch 'main' into lstein-fix-autocast 2023-01-17 14:00:51 -08:00
Lincoln Stein
2ee68cecd9 tip fix (#2281)
Context: Small fix for the manual, added tab for a "!!! tip"
2023-01-17 16:25:09 -05:00
Lincoln Stein
c8d1d4d159 Merge branch 'main' into lstein-fix-autocast 2023-01-17 16:23:33 -05:00
Lincoln Stein
529b19f8f6 Merge branch 'main' into patch-1 2023-01-17 14:57:17 -05:00
Lincoln Stein
be4f44fafd [Enhancement] add --default_only arg to configure_invokeai.py, for CI use (#2355)
Added a --default_only argument that limits model downloads to the
single default model, for use in continuous integration.

New behavior

         - switch -
    --yes      --default_only           Behavior
    -----      --------------           --------

   <not set>     <not set>              interactive download

   --yes         <not set>              non-interactively download all
                                          recommended models

   --yes        --default_only          non-interactively download the
                                          default model
2023-01-17 14:56:50 -05:00
Kevin Turner
5aec48735e lint(generator): 🚮 remove unused imports 2023-01-17 11:44:45 -08:00
Kevin Turner
3c919f0337 Restore ldm/invoke/conditioning.py 2023-01-17 11:37:14 -08:00
mauwii
858ddffab6 add --default_only to run-preload-models step 2023-01-17 20:10:37 +01:00
Lincoln Stein
212fec669a add --default_only arg to configure_invokeai.py for CI use
Added a --default_only argument that limits model downloads to the single
default model, for use in continuous integration.

New behavior

         - switch -
    --yes      --default_only           Behavior
    -----      --------------           --------

   <not set>     <not set>              interactive download

   --yes         <not set>              non-interactively download all
                                          recommended models

   --yes        --default_only          non-interactively download the
                                          default model
2023-01-17 12:45:04 -05:00
Lincoln Stein
fc2098834d support direct loading of .safetensors models
- Small fix to allow ckpt files with the .safetensors suffix
  to be directly loaded, rather than undergo a conversion step
  first.
2023-01-17 08:11:19 -05:00
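Conceptually, the change lets the file suffix pick the loader instead of forcing a conversion first. A hedged sketch, not the model_manager code itself:

```python
from pathlib import Path

import torch
from safetensors.torch import load_file

def load_state_dict(path: str) -> dict:
    p = Path(path)
    if p.suffix == ".safetensors":
        return load_file(str(p))                   # safe format, no pickle execution
    return torch.load(str(p), map_location="cpu")  # legacy .ckpt route
```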
Lincoln Stein
8a31e5c5e3 allow safetensors models to be imported 2023-01-17 00:18:09 -05:00
Lincoln Stein
bcc0110c59 Merge branch 'lstein-fix-autocast' of github.com:invoke-ai/InvokeAI into lstein-fix-autocast 2023-01-16 23:18:54 -05:00
Lincoln Stein
ce1c5e70b8 fix autocast dependency in cross_attention_control 2023-01-16 23:18:43 -05:00
Lincoln Stein
ce00c9856f fix perlin noise and txt2img2img 2023-01-16 22:50:13 -05:00
Lincoln Stein
7e8f364d8d do not use autocast for diffusers
- All tensors in diffusers code path are now set explicitly to
  float32 or float16, depending on the --precision flag.
- autocast is still used in the ckpt path, since that path is being
  deprecated anyway.
2023-01-16 19:32:06 -05:00
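A minimal sketch of the explicit-dtype approach, assuming a `--precision` string of either "float16" or "float32"; the names here are illustrative:

```python
import torch

def dtype_for(precision: str) -> torch.dtype:
    return torch.float16 if precision == "float16" else torch.float32

dtype = dtype_for("float32")
latents = torch.randn(1, 4, 64, 64, dtype=dtype)  # tensors created at the target dtype
# Pipeline components would likewise be moved explicitly, e.g. unet.to(dtype=dtype),
# instead of wrapping the sampling loop in torch.autocast().
```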
Lincoln Stein
088cd2c4dd further tweaks to model management
- Work around problem with OmegaConf.update() that prevented model names
  from containing periods.
- Fix logic bug in !delete_model that didn't check for existence of model
  in config file.
2023-01-16 17:11:59 -05:00
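For context on the OmegaConf workaround above: `OmegaConf.update()` treats `.` as a path separator, so a model name like `stable-diffusion-2.1` gets split into nested keys. Plain key access keeps the dotted name intact, as this illustrative snippet shows (not the actual model_manager code):

```python
from omegaconf import OmegaConf

models = OmegaConf.create({"stable-diffusion-2.1": {"format": "diffusers"}})
print(models["stable-diffusion-2.1"]["format"])  # the dotted name stays one key
# By contrast, OmegaConf.update(models, "stable-diffusion-2.1", ...) would
# interpret the "." as nesting and write to models["stable-diffusion-2"]["1"].
```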
Lincoln Stein
9460763eff Merge branch 'main' into lstein-improve-migration 2023-01-16 16:47:08 -05:00
Lincoln Stein
fe46d9d0f7 Merge branch 'main' into patch-1 2023-01-16 16:46:46 -05:00
Damian Stewart
563196bd03 pass step count and step index to diffusion step func (#2342) 2023-01-16 19:56:54 +00:00
Lincoln Stein
d2a038200c Merge branch 'main' into lstein-improve-migration 2023-01-16 14:22:13 -05:00
Lincoln Stein
d6ac0eeffd make SD-1.5 the default again 2023-01-16 14:21:34 -05:00
Lincoln Stein
3a1724652e upgrade requirements to CUDA 11.7, torch 1.13 (#2331)
* upgrade requirements to CUDA 11.7, torch 1.13

* fix ROCm version number

Co-authored-by: Lincoln Stein <lstein@gmail.com>
2023-01-16 14:19:27 -05:00
Lincoln Stein
8c073a7818 Merge branch 'main' into patch-1 2023-01-16 08:38:14 -05:00
Lincoln Stein
8c94f6a234 Merge branch 'main' into patch-1 2023-01-16 08:35:25 -05:00
Lincoln Stein
5fa8f8be43 Merge branch 'main' into lstein-improve-migration 2023-01-16 08:33:20 -05:00
Daya Adianto
5b35fa53a7 Improve readability of the manual installation documentation (#2296)
* docs: Fix links to pip and Conda installation methods

* docs: Improve installation script readability

This commit adds a space between `-m` option and the module name.

* docs: Fix alignments of step 4 & 9 in `pip` installation method

* docs: Rewrite step 10 of the ` pip` installation method

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2023-01-15 22:37:02 +00:00
Lincoln Stein
a2ee32f57f Merge branch 'main' into lstein-improve-ti-frontend 2023-01-15 17:12:50 -05:00
Brian Racer
4486169a83 pin dnspython version (#2327)
Fixes dns-related errors that began January 14, 2023
2023-01-15 17:08:45 -05:00
Lincoln Stein
bfeafa8d5e improve UI of textual inversion frontend
- File selection box now accepts directories that don't exist yet.
- Fixed crash when resume is selected and no files available to resume from.
2023-01-15 17:04:14 -05:00
Lincoln Stein
f86c8b043c further improvements to initial load
- Migration process will not crash if duplicate model files are found,
  one in legacy location and the other in new location.
  The model in the legacy location will be deleted in this case.

- Added a hint to stable-diffusion-2.1 telling people it will work best
  with 768 pixel images.

- Added the anything-4.0 model.
2023-01-15 15:08:59 -05:00
Lincoln Stein
251a409087 adjust initial model defaults (#2322)
- Default to SD 1.5
- Add waifu diffusion 1.4
2023-01-15 15:18:41 +00:00
Kevin Turner
6fdbc1978d use 🧨diffusers model (#1583)
* initial commit of DiffusionPipeline class

* spike: proof of concept using diffusers for txt2img

* doc: type hints for Generator

* refactor(model_cache): factor out load_ckpt

* model_cache: add ability to load a diffusers model pipeline

and update associated things in Generate & Generator to not instantly fail when that happens

* model_cache: fix model default image dimensions

* txt2img: support switching diffusers schedulers

* diffusers: let the scheduler do its scaling of the initial latents

Remove IPNDM scheduler; it is not behaving.

* web server: update image_progress callback for diffusers data

* diffusers: restore prompt weighting feature

* diffusers: fix set-sampler error following model switch

* diffusers: use InvokeAIDiffuserComponent for conditioning

* cross_attention_control: stub (no-op) implementations for diffusers

* model_cache: let offload_model work with DiffusionPipeline, sorta.

* models.yaml.example: add diffusers-format model, set as default

* test-invoke-conda: use diffusers-format model
test-invoke-conda: put huggingface-token where the library can use it

* environment-mac: upgrade to diffusers 0.7 (from 0.6)

this was already done for linux; mac must have been lost in the merge.

* preload_models: explicitly load diffusers models

In non-interactive mode too, as long as you're logged in.

* fix(model_cache): don't check `model.config` in diffusers format

clean-up from recent merge.

* diffusers integration: support img2img

* dev: upgrade to diffusers 0.8 (from 0.7.1)

We get to remove some code by using methods that were factored out in the base class.

* refactor: remove backported img2img.get_timesteps

now that we can use it directly from diffusers 0.8.1

* ci: use diffusers model

* dev: upgrade to diffusers 0.9 (from 0.8.1)

* lint: correct annotations for Python 3.9.

* lint: correct AttributeError.name reference for Python 3.9.

* CI: prefer diffusers-1.4 because it no longer requires a token

The RunwayML models still do.

* build: there's yet another place to update requirements?

* configure: try to download models even without token

Models in the CompVis and stabilityai repos no longer require them. (But runwayml still does.)

* configure: add troubleshooting info for config-not-found

* fix(configure): prepend root to config path

* fix(configure): remove second `default: true` from models example

* CI: simplify test-on-push logic now that we don't need secrets

The "test on push but only in forks" logic was only necessary when tests didn't work for PRs-from-forks.

* create an embedding_manager for diffusers

* internal: avoid importing diffusers DummyObject

see https://github.com/huggingface/diffusers/issues/1479

* fix "config attributes…not expected" diffusers warnings.

* fix deprecated scheduler construction

* work around an apparent MPS torch bug that causes conditioning to have no effect

* 🚧 post-rebase repair

* preliminary support for outpainting (no masking yet)

* monkey-patch diffusers.attention and use Invoke lowvram code

* add always_use_cpu arg to bypass MPS

* add cross-attention control support to diffusers (fails on MPS)

For unknown reasons MPS produces garbage output with .swap(). Use
--always_use_cpu arg to invoke.py for now to test this code on MPS.

* diffusers support for the inpainting model

* fix debug_image to not crash with non-RGB images.

* inpainting for the normal model [WIP]

This seems to be performing well until the LAST STEP, at which point it dissolves to confetti.

* fix off-by-one bug in cross-attention-control (#1774)

Prompt token sequences begin with a "beginning-of-sequence" marker <bos> and end with a repeated "end-of-sequence" marker <eos>, making a default prompt length of <bos> + 75 prompt tokens + <eos>. The .swap() code was failing to take the column for <bos> at index 0 into account. The changes here do that, and also add extra handling for a single <eos> (which may be redundant but is included for completeness).

Based on my understanding and some assumptions about how this all works, the reason .swap() nevertheless seemed to do the right thing, to some extent, is that over multiple steps the conditioning process in Stable Diffusion operates as a feedback loop. A change to token n-1 has flow-on effects on how the [1x4x64x64] latent tensor is modified by all the tokens after it, and, as the next step is processed, all the tokens before it as well. Intuitively, a token's conditioning effects "echo" throughout the whole length of the prompt. So even though the token at n-1 was being edited when what the user actually wanted was to edit the token at n, it nevertheless still had some non-negligible effect, in roughly the right direction, often enough that it seemed like it was working properly.
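In concrete terms, prompt token n lives at column n+1 of the conditioning sequence because column 0 holds <bos>; a small self-contained illustration of the off-by-one:

```python
prompt_tokens = ["a", "cat", "on", "a", "mat"]
max_len = 77
sequence = ["<bos>"] + prompt_tokens + ["<eos>"] * (max_len - 1 - len(prompt_tokens))

n = 1  # the user wants to edit "cat"
assert sequence[n] != prompt_tokens[n]       # the buggy index hits the previous column
assert sequence[n + 1] == prompt_tokens[n]   # shifting by the <bos> column is required
```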

* refactor common CrossAttention stuff into a mixin so that the old ldm code can still work if necessary

* inpainting for the normal model. I think it works this time.

* diffusers: reset num_vectors_per_token

sync with 44a0055571

* diffusers: txt2img2img (hires_fix)

with so much slicing and dicing of pipeline methods to stitch them together

* refactor(diffusers): reduce some code duplication amongst the different tasks

* fixup! refactor(diffusers): reduce some code duplication amongst the different tasks

* diffusers: enable DPMSolver++ scheduler

* diffusers: upgrade to diffusers 0.10, add Heun scheduler

* diffusers(ModelCache): stopgap to make from_cpu compatible with diffusers

* CI: default to diffusers-1.5 now that runwayml token requirement is gone

* diffusers: update to 0.10 (and transformers to 4.25)

* diffusers: use xformers when available

diffusers no longer auto-enables this as of 0.10.2.

* diffusers: make masked img2img behave better with multi-step schedulers

re-randomizing the noise each step was confusing them.

* diffusers: work better with more models.

fixed relative path problem with local models.

fixed models on hub not always having a `fp16` branch.

* diffusers: stopgap fix for attention_maps_callback crash after recent merge

* fixup import merge conflicts

correction for 061c5369a2

* test: add tests/inpainting inputs for masked img2img

* diffusers(AddsMaskedGuidance): partial fix for k-schedulers

Prevents them from crashing, but results are still hot garbage.

* fix --safety_checker arg parsing

and add note to diffusers loader about where safety checker gets called

* generate: fix import error

* CI: don't try to read the old init location

* diffusers: support loading an alternate VAE

* CI: remove sh-syntax if-statement so it doesn't crash powershell

* CI: fold strings in yaml because backslash is not line-continuation in powershell

* attention maps callback stuff for diffusers

* build: fix syntax error in environment-mac

* diffusers: add INITIAL_MODELS with diffusers-compatible repos

* re-enable the embedding manager; closes #1778

* Squashed commit of the following:

commit e4a956abc37fcb5cf188388b76b617bc5c8fda7d
Author: Damian Stewart <d@damianstewart.com>
Date:   Sun Dec 18 15:43:07 2022 +0100

    import new load handling from EmbeddingManager and cleanup

commit c4abe91a5ba0d415b45bf734068385668b7a66e6
Merge: 032e856e 1efc6397
Author: Damian Stewart <d@damianstewart.com>
Date:   Sun Dec 18 15:09:53 2022 +0100

    Merge branch 'feature_textual_inversion_mgr' into dev/diffusers_with_textual_inversion_manager

commit 032e856eefb3bbc39534f5daafd25764bcfcef8b
Merge: 8b4f0fe9 bc515e24
Author: Damian Stewart <d@damianstewart.com>
Date:   Sun Dec 18 15:08:01 2022 +0100

    Merge remote-tracking branch 'upstream/dev/diffusers' into dev/diffusers_with_textual_inversion_manager

commit 1efc6397fc6e61c1aff4b0258b93089d61de5955
Author: Damian Stewart <d@damianstewart.com>
Date:   Sun Dec 18 15:04:28 2022 +0100

    cleanup and add performance notes

commit e400f804ac471a0ca2ba432fd658778b20c7bdab
Author: Damian Stewart <d@damianstewart.com>
Date:   Sun Dec 18 14:45:07 2022 +0100

    fix bug and update unit tests

commit deb9ae0ae1016750e93ce8275734061f7285a231
Author: Damian Stewart <d@damianstewart.com>
Date:   Sun Dec 18 14:28:29 2022 +0100

    textual inversion manager seems to work

commit 162e02505dec777e91a983c4d0fb52e950d25ff0
Merge: cbad4583 12769b3d
Author: Damian Stewart <d@damianstewart.com>
Date:   Sun Dec 18 11:58:03 2022 +0100

    Merge branch 'main' into feature_textual_inversion_mgr

commit cbad45836c6aace6871a90f2621a953f49433131
Author: Damian Stewart <d@damianstewart.com>
Date:   Sun Dec 18 11:54:10 2022 +0100

    use position embeddings

commit 070344c69b0e0db340a183857d0a787b348681d3
Author: Damian Stewart <d@damianstewart.com>
Date:   Sun Dec 18 11:53:47 2022 +0100

    Don't crash CLI on exceptions

commit b035ac8c6772dfd9ba41b8eeb9103181cda028f8
Author: Damian Stewart <d@damianstewart.com>
Date:   Sun Dec 18 11:11:55 2022 +0100

    add missing position_embeddings

commit 12769b3d3562ef71e0f54946b532ad077e10043c
Author: Damian Stewart <d@damianstewart.com>
Date:   Fri Dec 16 13:33:25 2022 +0100

    debugging why it don't work

commit bafb7215eabe1515ca5e8388fd3bb2f3ac5362cf
Author: Damian Stewart <d@damianstewart.com>
Date:   Fri Dec 16 13:21:33 2022 +0100

    debugging why it don't work

commit 664a6e9e14
Author: Damian Stewart <d@damianstewart.com>
Date:   Fri Dec 16 12:48:38 2022 +0100

    use TextualInversionManager in place of embeddings (wip, doesn't work)

commit 8b4f0fe9d6e4e2643b36dfa27864294785d7ba4e
Author: Damian Stewart <d@damianstewart.com>
Date:   Fri Dec 16 12:48:38 2022 +0100

    use TextualInversionManager in place of embeddings (wip, doesn't work)

commit ffbe1ab11163ba712e353d89404e301d0e0c6cdf
Merge: 6e4dad60 023df37e
Author: Damian Stewart <d@damianstewart.com>
Date:   Fri Dec 16 02:37:31 2022 +0100

    Merge branch 'feature_textual_inversion_mgr' into dev/diffusers

commit 023df37eff
Author: Damian Stewart <d@damianstewart.com>
Date:   Fri Dec 16 02:36:54 2022 +0100

    cleanup

commit 05fac594ea
Author: Damian Stewart <d@damianstewart.com>
Date:   Fri Dec 16 02:07:49 2022 +0100

    tweak error checking

commit 009f32ed39
Author: damian <null@damianstewart.com>
Date:   Thu Dec 15 21:29:47 2022 +0100

    unit tests passing for embeddings with vector length >1

commit beb1b08d9a
Author: Damian Stewart <d@damianstewart.com>
Date:   Thu Dec 15 13:39:09 2022 +0100

    more explicit equality tests when overwriting

commit 44d8a5a7c8
Author: Damian Stewart <d@damianstewart.com>
Date:   Thu Dec 15 13:30:13 2022 +0100

    wip textual inversion manager (unit tests passing for 1v embedding overwriting)

commit 417c2b57d9
Author: Damian Stewart <d@damianstewart.com>
Date:   Thu Dec 15 12:30:55 2022 +0100

    wip textual inversion manager (unit tests passing for base stuff + padding)

commit 2e80872e3b
Author: Damian Stewart <d@damianstewart.com>
Date:   Thu Dec 15 10:57:57 2022 +0100

    wip new TextualInversionManager

* stop using WeightedFrozenCLIPEmbedder

* store diffusion models locally

- configure_invokeai.py reconfigured to store diffusion models rather than
  CompVis models
- hugging face caching model is used, but cache is set to ~/invokeai/models/repo_id
- models.yaml does **NOT** use path, just repo_id
- "repo_name" changed to "repo_id" to following hugging face conventions
- Models are loaded with full precision pending further work.

* allow non-local files during development

* path takes priority over repo_id

* MVP for model_cache and configure_invokeai

- Feature complete (almost)

- configure_invokeai.py downloads both .ckpt and diffuser models,
  along with their VAEs. Both types of download are controlled by
  a unified INITIAL_MODELS.yaml file.

- model_cache can load both type of model and switches back and forth
  in CPU. No memory leaks detected

TO DO:

  1. I have not yet turned on the LocalOnly flag for diffuser models, so
     the code will check the Hugging Face repo for updates before using the
     locally cached models. This will break firewalled systems. I am thinking
     of putting in a global check for internet connectivity at startup time
     and setting the LocalOnly flag based on this. It would be good to check
     updates if there is connectivity.

  2. I have not gone completely through INITIAL_MODELS.yaml to check which
     models are available as diffusers and which are not. So models like
     PaperCut and VoxelArt may not load properly. The runway and stability
     models are checked, as well as the Trinart models.

  3. Add stanzas for SD 2.0 and 2.1 in INITIAL_MODELS.yaml

REMAINING PROBLEMS NOT DIRECTLY RELATED TO MODEL_CACHE:

  1. When loading a .ckpt file there are lots of messages like this:

     Warning! ldm.modules.attention.CrossAttention is no longer being
     maintained. Please use InvokeAICrossAttention instead.

     I'm not sure how to address this.

  2. The ckpt models ***don't actually run*** due to the lack of special-case
     support for them in the generator objects. For example, here's the hard
     crash you get when you run txt2img against the legacy waifu-diffusion-1.3
     model:
```
     >> An error occurred:
     Traceback (most recent call last):
       File "/data/lstein/InvokeAI/ldm/invoke/CLI.py", line 140, in main
           main_loop(gen, opt)
      File "/data/lstein/InvokeAI/ldm/invoke/CLI.py", line 371, in main_loop
         gen.prompt2image(
      File "/data/lstein/InvokeAI/ldm/generate.py", line 496, in prompt2image
	 results = generator.generate(
      File "/data/lstein/InvokeAI/ldm/invoke/generator/base.py", line 108, in generate
         image = make_image(x_T)
      File "/data/lstein/InvokeAI/ldm/invoke/generator/txt2img.py", line 33, in make_image
         pipeline_output = pipeline.image_from_embeddings(
      File "/home/lstein/invokeai/.venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1265, in __getattr__
         raise AttributeError("'{}' object has no attribute '{}'".format(
     AttributeError: 'LatentDiffusion' object has no attribute 'image_from_embeddings'
```

  3. The inpainting diffusion model isn't working. Here's the output of "banana
     sushi" when inpainting-1.5 is loaded:

```
    Traceback (most recent call last):
      File "/data/lstein/InvokeAI/ldm/generate.py", line 496, in prompt2image
        results = generator.generate(
      File "/data/lstein/InvokeAI/ldm/invoke/generator/base.py", line 108, in generate
        image = make_image(x_T)
      File "/data/lstein/InvokeAI/ldm/invoke/generator/txt2img.py", line 33, in make_image
        pipeline_output = pipeline.image_from_embeddings(
      File "/data/lstein/InvokeAI/ldm/invoke/generator/diffusers_pipeline.py", line 301, in image_from_embeddings
        result_latents, result_attention_map_saver = self.latents_from_embeddings(
      File "/data/lstein/InvokeAI/ldm/invoke/generator/diffusers_pipeline.py", line 330, in latents_from_embeddings
        result: PipelineIntermediateState = infer_latents_from_embeddings(
      File "/data/lstein/InvokeAI/ldm/invoke/generator/diffusers_pipeline.py", line 185, in __call__
        for result in self.generator_method(*args, **kwargs):
      File "/data/lstein/InvokeAI/ldm/invoke/generator/diffusers_pipeline.py", line 367, in generate_latents_from_embeddings
        step_output = self.step(batched_t, latents, guidance_scale,
      File "/home/lstein/invokeai/.venv/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
        return func(*args, **kwargs)
      File "/data/lstein/InvokeAI/ldm/invoke/generator/diffusers_pipeline.py", line 409, in step
        step_output = self.scheduler.step(noise_pred, timestep, latents, **extra_step_kwargs)
      File "/home/lstein/invokeai/.venv/lib/python3.9/site-packages/diffusers/schedulers/scheduling_lms_discrete.py", line 223, in step
        pred_original_sample = sample - sigma * model_output
    RuntimeError: The size of tensor a (9) must match the size of tensor b (4) at non-singleton dimension 1
```

* proper support for float32/float16

- configure script now correctly detects user's preference for
  fp16/32 and downloads the correct diffuser version. If fp16
  version not available, falls back to fp32 version.

- misc code cleanup and simplification in model_cache

* add on-the-fly conversion of .ckpt to diffusers models

1. On-the-fly conversion code can be found in the file ldm/invoke/ckpt_to_diffusers.py.

2. A new !optimize command has been added to the CLI. Should be ported to Web GUI.

User experience on the CLI is this:

```
invoke> !optimize /home/lstein/invokeai/models/ldm/stable-diffusion-v1/sd-v1-4.ckpt
INFO: Converting legacy weights file /home/lstein/invokeai/models/ldm/stable-diffusion-v1/sd-v1-4.ckpt to optimized diffuser model.
      This operation will take 30-60s to complete.
Success. Optimized model is now located at /home/lstein/tmp/invokeai/models/optimized-ckpts/sd-v1-4
Writing new config file entry for sd-v1-4...

>> New configuration:
sd-v1-4:
  description: Optimized version of sd-v1-4
  format: diffusers
  path: /home/lstein/tmp/invokeai/models/optimized-ckpts/sd-v1-4

OK to import [n]? y
>> Verifying that new model loads...
>> Current VRAM usage:  2.60G
>> Offloading stable-diffusion-2.1 to CPU
>> Loading diffusers model from /home/lstein/tmp/invokeai/models/optimized-ckpts/sd-v1-4
  | Using faster float16 precision
You have disabled the safety checker for <class 'ldm.invoke.generator.diffusers_pipeline.StableDiffusionGeneratorPipeline'> by passing `safety_checker=None`. Ensure that you abide to the conditions of the Stable Diffusion \
license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend to keep the safety filter enabled in all public facing circumstances,\
 disabling it only for use-cases that involve analyzing network behavior or auditing its results. For more information, please have a look at https://github.com/huggingface/diffusers/pull/254 .
  | training width x height = (512 x 512)
>> Model loaded in 3.48s
>> Max VRAM used to load the model: 2.17G
>> Current VRAM usage:2.17G
>> Textual inversions available:
>> Setting Sampler to k_lms (LMSDiscreteScheduler)
Keep model loaded? [y]
```

* add parallel set of generator files for ckpt legacy generation

* generation using legacy ckpt models now working

* diffusers: fix missing attention_maps_callback

fix for 23eb80b404

* associate legacy CrossAttention with .ckpt models

* enable autoconvert

New --autoconvert CLI option will scan a designated directory for
new .ckpt files, convert them into diffuser models, and import
them into models.yaml.

Works like this:

   invoke.py --autoconvert /path/to/weights/directory

In ModelCache added two new methods:

  autoconvert_weights(config_path, weights_directory_path, models_directory_path)
  convert_and_import(ckpt_path, diffuser_path)
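A hedged sketch of what the scan-and-convert loop amounts to, using the ModelCache method names quoted above; the loop itself is illustrative, not the shipped implementation:

```python
from pathlib import Path

def autoconvert_weights(cache, weights_directory_path: str, models_directory_path: str):
    for ckpt in sorted(Path(weights_directory_path).glob("*.ckpt")):
        diffuser_path = Path(models_directory_path) / "optimized-ckpts" / ckpt.stem
        if not diffuser_path.exists():  # only convert files we have not seen before
            cache.convert_and_import(str(ckpt), str(diffuser_path))
```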

* diffusers: update to diffusers 0.11 (from 0.10.2)

* fix vae loading & width/height calculation

* refactor: encapsulate these conditioning data into one container

* diffusers: fix some noise-scaling issues by pushing the noise-mixing down to the common function

* add support for safetensors and accelerate

* set local_files_only when internet unreachable

* diffusers: fix error-handling path when model repo has no fp16 branch

* fix generatorinpaint error

Fixes:
  "ModuleNotFoundError: No module named 'ldm.invoke.generatorinpaint'"
  https://github.com/invoke-ai/InvokeAI/pull/1583#issuecomment-1363634318

* quench diffuser safety-checker warning

* diffusers: support stochastic DDIM eta parameter

* fix conda env creation on macos

* fix cross-attention with diffusers 0.11

* diffusers: the VAE needs to be tiling as well as the U-Net

* diffusers: comment on subfolders

* diffusers: embiggen!

* diffusers: make model_cache.list_models serializable

* diffusers(inpaint): restore scaling functionality

* fix requirements clash between numba and numpy 1.24

* diffusers: allow inpainting model to do non-inpainting tasks

* start expanding model_cache functionality

* add import_ckpt_model() and import_diffuser_model() methods to model_manager

- in addition, model_cache.py is now renamed to model_manager.py

* allow "recommended" flag to be optional in INITIAL_MODELS.yaml

* configure_invokeai now downloads VAE diffusers in advance

* rename ModelCache to ModelManager

* remove support for `repo_name` in models.yaml

* check for and refuse to load embeddings trained on incompatible models

* models.yaml.example: s/repo_name/repo_id

and remove extra INITIAL_MODELS now that the main one has diffusers models in it.

* add MVP textual inversion script

* refactor(InvokeAIDiffuserComponent): factor out _combine()

* InvokeAIDiffuserComponent: implement threshold

* InvokeAIDiffuserComponent: diagnostic logs for threshold

...this does not look right

* add a curses-based frontend to textual inversion

- not quite working yet
- requires npyscreen installed
- on windows will also have the windows-curses requirement, but not added
  to requirements yet

* add curses-based interface for textual inversion

* fix crash in convert_and_import()

- This corrects a "local variable referenced before assignment" error
  in model_manager.convert_and_import()

* potential workaround for no 'state_dict' key error

- As reported in https://github.com/huggingface/diffusers/issues/1876

* create TI output dir if needed

* Update environment-lin-cuda.yml (#2159)

Fixing line 42 to be the proper order to define the transformers requirement: ~= instead of =~

* diffusers: update sampler-to-scheduler mapping

based on https://github.com/huggingface/diffusers/issues/277#issuecomment-1371428672

* improve user experience for ckpt to diffusers conversion

- !optimize_models command now operates on an existing ckpt file entry in models.yaml
- replaces existing entry, rather than adding a new one
- offers to delete the ckpt file after conversion

* web: adapt progress callback to deal with old generator or new diffusers pipeline

* clean-up model_manager code

- add_model() verified to work for .ckpt local paths,
  .ckpt remote URLs, diffusers local paths, and
  diffusers repo_ids

- convert_and_import() verified to work for local and
  remote .ckpt files

* handle edge cases for import_model() and convert_model()

* add support for safetensor .ckpt files

* fix name error

* code cleanup with pyflake

* improve model setting behavior

- If the user enters an invalid model name at startup time, InvokeAI will not
  try to load it; it will warn and use the default model
- CLI UI enhancement: include currently active model in the command
  line prompt.

* update test-invoke-pip.yml
- fix model cache path to point to runwayml/stable-diffusion-v1-5
- remove `skip-sd-weights` from configure_invokeai.py args

* exclude dev/diffusers from "fail for draft PRs"

* disable "fail on PR jobs"

* re-add `--skip-sd-weights` since no space

* update workflow environments
- include `INVOKE_MODEL_RECONFIGURE: '--yes'`

* clean up model load failure handling

- Allow CLI to run even when no model is defined or loadable.
- Inhibit stack trace when model load fails - only show last error
- Give user *option* to run configure_invokeai.py when no models
  successfully load.
- Restart invokeai after reconfiguration.

* further edge-case handling

1) only one model in models.yaml file, and that model is broken
2) no models in models.yaml
3) models.yaml doesn't exist at all

* fix incorrect model status listing

- "cached" was not being returned from list_models()
- normalize handling of exceptions during model loading:
   - Passing an invalid model name to generate.set_model() will return
     a KeyError
   - All other exceptions are returned as the appropriate Exception

* CI: do download weights (if not already cached)

* diffusers: fix scheduler loading in offline mode

* CI: fix model name (no longer has `diffusers-` prefix)

* Update txt2img2img.py (#2256)

* fixes to share models with HuggingFace cache system

- If HF_HOME environment variable is defined, then all huggingface models
  are stored in that directory following the standard conventions.
- For seamless interoperability, set HF_HOME to ~/.cache/huggingface
- If HF_HOME not defined, then models are stored in ~/invokeai/models.
  This is equivalent to setting HF_HOME to ~/invokeai/models

A future commit will add a migration mechanism so that this change doesn't
break previous installs.

* feat - make model storage compatible with hugging face caching system

This commit alters the InvokeAI model directory to be compatible with
hugging face, making it easier to share diffusers (and other models)
across different programs.

- If the HF_HOME environment variable is not set, then models are
  cached in ~/invokeai/models in a format that is identical to the
  HuggingFace cache.

- If HF_HOME is set, then models are cached wherever HF_HOME points.

- To enable sharing with other HuggingFace library clients, set
  HF_HOME to ~/.cache/huggingface to set the default cache location
  or to ~/invokeai/models to have huggingface cache inside InvokeAI.

* fixes to share models with HuggingFace cache system

    - If HF_HOME environment variable is defined, then all huggingface models
      are stored in that directory following the standard conventions.
    - For seamless interoperability, set HF_HOME to ~/.cache/huggingface
    - If HF_HOME not defined, then models are stored in ~/invokeai/models.
      This is equivalent to setting HF_HOME to ~/invokeai/models

    A future commit will add a migration mechanism so that this change doesn't
    break previous installs.

* fix error "no attribute CkptInpaint"

* model_manager.list_models() returns entire model config stanza+status

* Initial Draft - Model Manager Diffusers

* added hash function to diffusers

* implement sha256 hashes on diffusers models

* Add Model Manager Support for Diffusers

* fix various problems with model manager

- in cli import functions, fix not enough values to unpack from
  _get_name_and_desc()
- fix crash when using old-style vae: value with new-style diffuser

* rebuild frontend

* fix dictconfig-not-serializable issue

* fix NoneType' object is not subscriptable crash in model_manager

* fix "str has no attribute get" error in model_manager list_models()

* Add path and repo_id support for Diffusers Model Manager

Also fixes bugs

* Fix tooltip IT localization not working

* Add Version Number To WebUI

* Optimize Model Search

* Fix incorrect font on the Model Manager UI

* Fix image degradation on merge fixes - [Experimental]

This change should effectively fix a couple of things.

- Fix image degradation on subsequent merges of the canvas layers.
- Fix the slight transparent border that is left behind when filling the bounding box with a color.
- Fix the left over line of color when filling a bounding box with color.

So far there are no side effects for this. If any, please report.

* Add local model filtering for Diffusers / Checkpoints

* Go to home on modal close for the Add Modal UI

* Styling Fixes

* Model Manager Diffusers Localization Update

* Add Safe Tensor scanning to Model Manager

* Fix model edit form dispatching string values instead of numbers.

* Resolve VAE handling / edge cases for supplied repos

* defer injecting tokens for textual inversions until they're used for the first time

* squash a console warning

* implement model migration check

* add_model() overwrites previous config rather than merges

* fix model config file attribute merging

* fix precision handling in textual inversion script

* allow ckpt conversion script to work with safetensors .ckpts

Applied patch here:
beb932c5d1

* fix name "args" is not defined crash in textual_inversion_training

* fix a second NameError: name 'args' is not defined crash

* fix loading of the safety checker from the global cache dir

* add installation step to textual inversion frontend

- After a successful training run, the script will copy learned_embeds.bin
  to a subfolder of the embeddings directory.
- User given the option to delete the logs and intermediate checkpoints
  (which together use 7-8G of space)
- If textual inversion training fails, reports the error gracefully.

* don't crash out on incompatible embeddings

- put try: blocks around places where the system tries to load an embedding
  which is incompatible with the currently loaded model

* add support for checkpoint resuming

* textual inversion preferences are saved and restored between sessions

- Preferences are stored in a file named text-inversion-training/preferences.conf
- Currently the resume-from-checkpoint option is not working correctly. Possible
  bug in textual_inversion_training.py?

* copy learned_embeddings.bin into right location

* add front end for diffusers model merging

- Front end doesn't do anything yet!!!!
- Made change to model name parsing in CLI to support ability to have merged models
  with the "+" character in their names.

* improve inpainting experience

- recommend ckpt version of inpainting-1.5 to user
- fix get_noise() bug in ckpt version of omnibus.py

* update environment*yml

* tweak instructions to install HuggingFace token

* bump version number

* enhance update scripts

- update scripts will now fetch new INITIAL_MODELS.yaml so that
  configure_invokeai.py will know about the diffusers versions.

* enhance invoke.sh/invoke.bat launchers

- added configure_invokeai.py to menu
- menu defaults to browser-based invoke

* remove conda workflow (#2321)

* fix `token_ids has shape torch.Size([79]) - expected [77]`

* update CHANGELOG.md with 2.3.* info

- Add information on how formats have changed and the upgrade process.
- Add short bug list.

Co-authored-by: Damian Stewart <d@damianstewart.com>
Co-authored-by: Damian Stewart <null@damianstewart.com>
Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
Co-authored-by: Wybartel-luxmc <37852506+Wybartel-luxmc@users.noreply.github.com>
Co-authored-by: mauwii <Mauwii@outlook.de>
Co-authored-by: mickr777 <115216705+mickr777@users.noreply.github.com>
Co-authored-by: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Co-authored-by: Eugene Brodsky <ebr@users.noreply.github.com>
Co-authored-by: Matthias Wild <40327258+mauwii@users.noreply.github.com>
2023-01-15 09:22:46 -05:00
Lincoln Stein
c855d2a350 Consolidate version numbers (#2201)
* update version number

* print version number at startup

* move version number into ldm/invoke/_version.py

* bump version to 2.2.6+a0

* handle whitespace better

* resolve issues raised by mauwii during PR review
2023-01-15 04:07:21 +01:00
Kent Keirsey
4dd74cdc68 update Readme (#2278)
* Update Readme & Assets

* Update Canvas Assets

* Updated Readme to correct missing refs

* Correcting refs

* Updating Canvas Preview size

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2023-01-15 01:13:11 +00:00
Lincoln Stein
746e97ea1d enhance the installer (#2299)
1. create_installers.sh now asks before tagging and committing the
   current repo
2. trailing whitespace removed from user-provided location of invokeai
   directory in install.bat
2023-01-14 19:28:14 -05:00
gogurtenjoyer
241313c4a6 Update automated install doc - link to MS C libs
Updated the link for the MS Visual C libraries - I'm not sure if MS changed the location of the files but this new one leads right to the file downloads.
2023-01-12 14:09:35 -08:00
Edward Johan
b6d1a17a1e tip fix 2023-01-09 23:53:55 +06:00
Lincoln Stein
c73434c2a3 tweak install instructions (#2227)
- Removed links from the install instructions to the installer zip files.
- Replaced "2.2.4" with "2.X.X" globally, to avoid the docs going out of
  date.
2023-01-09 00:12:41 +00:00
Matthias Wild
69b15024a9 update python requirements (#2251)
since torch versions <1.13.1 have a critical security issue
2023-01-08 07:44:03 -05:00
William Chong
26e413ae9c Require huggingface-hub version 0.11.1 (#2222)
`import login` only works in huggingface-hub >= 0.11.0

Fixes https://github.com/invoke-ai/InvokeAI/issues/2149
2023-01-04 22:21:48 +00:00
Chris Dawson
91eb84c5d9 Allow multiple CORS origins (#2031)
* Permit cmd override for CORS modification

* Enable multiple origins for CORS

* Remove CMD_OVERRIDE

* Revert executable bit change

* Defensively convert list into string

* Bad if statement

* Retry rebase

* Retry rebase

Co-authored-by: Chris Dawson <chris@vivoh.com>
2023-01-04 14:26:42 -05:00
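A hedged sketch of the multiple-CORS-origins idea from the commit above. The `--cors` argument name and the Flask/flask-cors wiring are illustrative assumptions, not the exact InvokeAI web-server code.

```python
# Sketch: accept zero or more allowed origins on the command line and pass
# them to flask-cors, converting defensively to a list of strings.
import argparse
from flask import Flask
from flask_cors import CORS

parser = argparse.ArgumentParser()
parser.add_argument(
    "--cors",
    nargs="*",
    default=[],
    help="allowed CORS origins, e.g. --cors http://localhost:5173 https://example.com",
)
args = parser.parse_args()

app = Flask(__name__)
origins = [str(o) for o in args.cors]
CORS(app, origins=origins or "*")  # fall back to permissive CORS when none given
```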
Lincoln Stein
5d69bd408b fix facexlib weights being downloaded to .venv (#2221)
- fix problem of facexlib weights being downloaded into the .venv
  package directory when codeformer restoration requested.
- now uses pre-downloaded weights in ~/invokeai/models/gfpgan/weights
  (which is shared with gfpgan)

Co-authored-by: Matthias Wild <40327258+mauwii@users.noreply.github.com>
2023-01-04 14:22:49 -05:00
Minjune Song
21bf512056 Local embeddings support (CLI autocomplete) (#2211)
* integrate local embeds with HF embeds

* Update concepts_lib.py

* Update concepts_lib.py

Co-authored-by: BuildTools <unconfigured@null.spigotmc.org>
Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2023-01-04 06:22:10 +00:00
Lincoln Stein
6c6e534c1a fix codeformer facexlib files being downloaded into .venv
- Fixed codeformer module so that the facexlib files are downloaded
  into their pre-stored location in models/gfpgan/weights (shared
  with the GFPGAN module)
2023-01-04 00:13:33 -05:00
Name
010378153f spelling mistake fixed
wil -> will
2023-01-04 05:48:18 +13:00
Jeremy Clark
9091b6e24a Explicitly call python found in system (#2203)
Explicitly calls the python bin found in the system instead of calling `python` which may fail on systems where python is installed as `python3`
2023-01-02 13:47:01 +00:00
Matthias Wild
64700b07a8 fixing a typo in invoke.py (#2204) 2023-01-02 02:39:43 +00:00
Matthias Wild
34f8117241 Fix patchmatch-docs (#2111)
* use `uname -m` instead of `arch`
addressing #2105

* fix install patchmatch formatting

* fix 2 broken links

* remove instruction to do develop install of patchmatch

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2023-01-01 20:52:05 +00:00
Lincoln Stein
c3f82d4481 update version number (#2199) 2023-01-01 19:26:55 +00:00
Lincoln Stein
3929bd3e13 Lstein release candidate 2.2.5 (#2137)
* installer tweaks in preparation for v2.2.5

- pin numpy to 1.23.* to avoid requirements conflict with numba
- update.sh and update.bat now accept a tag or branch string, not a URL
- update scripts download latest requirements-base before updating.

* update.bat.in debugged and working

* update pulls from "latest" now

* bump version number

* fix permissions on create_installer.sh

* give Linux user option of installing ROCm or CUDA

* rc2.2.5 (install.sh) relative path fixes (#2155)

* (installer) fix bug in resolution of relative paths in linux install script

point installer at 2.2.5-rc1

selecting ~/Data/myapps/ as the location would create a ./~/Data/myapps
instead of expanding the ~/ to the value of ${HOME}

also, squash the trailing slash in path, if it was entered by the user

* (installer) add option to automatically start the app after install

also: when exiting, print the command to get back into the app

* remove extraneous whitespace

* model_cache applies rootdir to config path

* bring installers up to date with 2.2.5-rc2

* bump rc version

* create_installer now adds version number

* rebuild frontend

* bump rc#

* add locales to frontend dist package

- bump to patchlevel 6

* bump patchlevel

* use invoke-ai version of GFPGAN

- This version is very slightly modified to allow weights files
  to be pre-downloaded by the configure script.

* fix formatting error during startup

* bump patch level

* workaround #2 for GFPGAN facexlib() weights downloading

* bump patch

* ready for merge and release

* remove extraneous comment

* set PYTORCH_ENABLE_MPS_FALLBACK directly in invoke.py

Co-authored-by: Eugene Brodsky <ebr@users.noreply.github.com>
2023-01-01 17:54:45 +00:00
Yorzaren
caf7caddf7 Update WEBUIHOTKEYS.md - Key Display (#2190)
* Update WEBUIHOTKEYS.md

Fixed display errors so it no longer shows extra plus signs on the site

* Update WEBUIHOTKEYS.md

Correction to keycap look to have symbols on special keys like enter, shift, and ctrl.
2022-12-31 23:48:17 +01:00
blessedcoolant
9fded69f0c Frontend Lint 2022-12-31 06:22:32 +13:00
blessedcoolant
9f719883c8 WebUI 2.2.5 Bug Fix Build 2022-12-31 06:22:32 +13:00
blessedcoolant
5d4da31dcd Localization Updates 2022-12-31 06:22:32 +13:00
blessedcoolant
686640af3a Fix status localization not working when iteration count > 0 2022-12-31 06:22:32 +13:00
blessedcoolant
edc22e06c3 RU Localization for Model Manager and Tooltips
Co-Authored-By: Artur <83028930+netsvetaev@users.noreply.github.com>
2022-12-31 06:22:32 +13:00
blessedcoolant
409a46e2c4 Fix styling of slider input to accommodate 4 digit values 2022-12-31 06:22:32 +13:00
blessedcoolant
e7ee4ecac7 Fix NumberInput not respecting min and max values on stepper click 2022-12-31 06:22:32 +13:00
blessedcoolant
da6c690d7b WebUI 2.2.5 Build 2022-12-30 08:35:54 +13:00
blessedcoolant
7c4544f95e Fix Seed Shuffle layout to adjust to localization text 2022-12-30 08:35:54 +13:00
David Regla
f173e0a085 i18n: add Spanish (es) translations 2022-12-30 08:35:54 +13:00
David Regla
2a90e0c55f Remove extra trailing space in JSON file 2022-12-30 08:35:54 +13:00
Lincoln Stein
9d103ef030 attempt to address memory issues when loading ckpt models (#2128)
- A couple of users have reported that switching back and forth
  between ckpt models is causing a "GPU out of memory" crash.
  Traceback suggests there is actually a CPU RAM issue.

- This speculative test simply performs a round of garbage collection
  before the point where the crash occurs.
2022-12-29 09:00:50 -05:00
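A minimal sketch of the speculative fix described above: force a garbage-collection pass (and, on CUDA, a cache flush) before switching checkpoint models. The `model_cache` methods are hypothetical placeholders.

```python
# Sketch only: free CPU/GPU memory before loading the next ckpt model.
import gc
import torch

def switch_model(model_cache, new_model_name: str):
    model_cache.offload_current()       # hypothetical: drop references to the old model
    gc.collect()                        # reclaim CPU RAM held by the old model
    if torch.cuda.is_available():
        torch.cuda.empty_cache()        # release cached GPU allocations
    return model_cache.load(new_model_name)
```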
pejotr
4cc60669c1 [WebUI] Localize tooltips (#2136)
* [WebUI]: Localize tooltips

* fix: typo in seamCorrection translation

* [WebUI]: Localize tooltips

* fix: typo in seamCorrection translation

* Add Missing Language Placeholders for Tooltip Localization

* Fix UI displacement in RU localization for options

* Fix double options during merge.

* Fix tkinter leftover

Co-authored-by: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
2022-12-29 21:19:57 +13:00
Yorzaren
d456aea8f3 Update WEBUIHOTKEYS.md
Fixed display errors so it no longer shows extra plus signs on the site
2022-12-29 17:30:57 +13:00
Ryan Cao
4151883cb2 i18n: simplified chinese translations for model manager 2022-12-29 16:37:51 +13:00
blessedcoolant
a029d90630 Model Manager Final Build 2022-12-29 08:33:27 +13:00
blessedcoolant
211d6b3831 Improve add models guidance 2022-12-29 08:33:27 +13:00
blessedcoolant
b40faa98bd Model Manager Test Build 4 2022-12-29 08:33:27 +13:00
blessedcoolant
8d4ad0de4e Formatting Pass 2022-12-29 08:33:27 +13:00
blessedcoolant
e4b2f815e8 Improve interaction area for edit and stylize 2022-12-29 08:33:27 +13:00
blessedcoolant
0dd5804949 Normalize the config path to prevent write errors 2022-12-29 08:33:27 +13:00
blessedcoolant
53476af72e Add Italian and PT BR Localization for Model Manager 2022-12-29 08:33:27 +13:00
blessedcoolant
61ee597f4b Add Messages To Indicate to the user to add models. 2022-12-29 08:33:27 +13:00
blessedcoolant
ad0b366e47 Add option to scan loaded folder again 2022-12-29 08:33:27 +13:00
blessedcoolant
942f029a24 Model Manager Test Build 3 2022-12-29 08:33:27 +13:00
blessedcoolant
e0d7c466cc Add Scrollbar Styling 2022-12-29 08:33:27 +13:00
blessedcoolant
16c0132a6b Model Manager Test Build 2 2022-12-29 08:33:27 +13:00
blessedcoolant
7cb2fcf8b4 Remove folder picker 2022-12-29 08:33:27 +13:00
blessedcoolant
1a65d43569 Add Icon To the tkinter folder picker 2022-12-29 08:33:27 +13:00
blessedcoolant
1313e31f62 Add Italian Localization for Model Manager 2022-12-29 08:33:27 +13:00
blessedcoolant
aa213285bb Style fixes to accommodate localization in Model Manager 2022-12-29 08:33:27 +13:00
blessedcoolant
f691353570 Add Model Manager German Localization 2022-12-29 08:33:27 +13:00
blessedcoolant
1c75010f29 Model Manager Test Build 2022-12-29 08:33:27 +13:00
blessedcoolant
eba8fb58ed Change Settings Icon in the Site Header 2022-12-29 08:33:27 +13:00
blessedcoolant
83a7e60fe5 Add Missing Localization Files for Model Manager 2022-12-29 08:33:27 +13:00
blessedcoolant
d4e86feeeb Add Simplified Chinese Localization
Co-Authored-By: Ryan Cao <70191398+ryanccn@users.noreply.github.com>
2022-12-29 08:33:27 +13:00
blessedcoolant
427614d1df Populate en-US localization configs 2022-12-29 08:33:27 +13:00
blessedcoolant
ce6fb8ea29 Model Form Styling 2022-12-29 08:33:27 +13:00
blessedcoolant
df858eb3f9 Add Edit Model Functionality 2022-12-29 08:33:27 +13:00
blessedcoolant
6523fd07ab Model Edit Initial Implementation 2022-12-29 08:33:27 +13:00
Kent Keirsey
a823e37126 Fix storehook references 2022-12-29 08:33:27 +13:00
Kent Keirsey
4eed06903c Adding model edits (unstable/WIP) 2022-12-29 08:33:27 +13:00
blessedcoolant
79d577bff9 Model Manager Frontend Rebased 2022-12-29 08:33:27 +13:00
blessedcoolant
3521557541 Model Manager Backend Implementation 2022-12-29 08:33:27 +13:00
Artur
e66b1a685c Update README.md (#2115)
Copyright (c) 2020 -> 2022

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2022-12-25 16:41:11 +00:00
tomosuto
c351aa19eb Update 020_INSTALL_MANUAL.md (#2093)
Fixed a description that was overflowing from the warning box
2022-12-25 02:02:50 +00:00
Artur
aa1f46820f Update 020_INSTALL_MANUAL.md (#2114)
«git clone» step added for pip

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2022-12-24 21:11:51 +00:00
blessedcoolant
1d34405f4f [WebUI] Localization Support (#2050)
* Initial Localization Implementation

* Fix Initial Spinner

* Language Picker Dropdown

* RU Localization Update

Co-Authored-By: Artur <83028930+netsvetaev@users.noreply.github.com>

* Fixed localization breaking themes

* useUpdateTranslation Hook

To force trigger translations for data objects

* Localize Tab Data

* Localize Prompt Input & Current Image Buttons

* Localize Gallery & Bug FIxes

Fix a bug where deleting an image from the context menu wasn't working. Removed tooltips that were broken, as they don't work in the context menu.

* Fix localization breaking in production

* Add Toast Localization Support

* Localize Unified Canvas

* Localize WIP Tabs

* Localize Hotkeys

* Localize Settings

* RU Localization Update

Co-Authored-By: Artur <83028930+netsvetaev@users.noreply.github.com>

* Add Support for Italian and Portuguese

* Localize Toasts

* Fix width of language picker items

* Localize Backend Messages

* Disable Debug Messages

* Add Support for French

* Fix missing localization for a string in the SettingsModal

* Disable French

* Styling updates to normalize text and accommodate other langs

* Add Portuguese Brazilian

* Fix Hotkey headers not being localized.

* Fix styling issue on models tag in Settings

* Fix Slider Styling to accommodate different languages

* Fix slider styling in light mode.

* Add German

* Add Italian

* Add Polish

* Update Italian

* Localized Frontend Build

* Updated RU Translations

* Fresh Build with updated RU changes

* Bug Fixes and Loc Updates

* Updated Frontend Build

* Fresh Build

Co-authored-by: Artur <83028930+netsvetaev@users.noreply.github.com>
2022-12-24 18:23:21 +00:00
Matthias Wild
f961e865f5 use uname -m instead of arch (#2110)
addressing #2105

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2022-12-24 16:55:19 +00:00
Kent Keirsey
9eba6acb7f Fix of Hires Fix on Img2Img tab (#2096)
* Fix of Hires Fix on Img2Img tab

Fixed linting issues

* Attempting to fix prettier workflow issues

* More Prettier Attempts

* Finally Fixed Prettier Issues

* Fix of Hires Fix on Img2Img tab

Fixed linting issues

* Attempting to fix prettier workflow issues

* More Prettier Attempts

* Finally Fixed Prettier Issues

* updated with useEffect

* Update to fix Prettier

* Update useEffect dependencies

* Fix dispatch dependency error from prettier

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2022-12-24 10:56:40 -05:00
Lincoln Stein
e32dd1d703 [docs] Provide an example of reading prompts from a script (#2087)
* add example of using -from_file to read from a script

Addresses #1654, #473, #566, #1008 at least partially.

* fix bug in code example

* improve docs for !fetch and !replay

* enable rendering of images in GH WebUI
also fix indentation in some bullet lists

Co-authored-by: mauwii <Mauwii@outlook.de>
2022-12-23 14:06:59 +00:00
tomosuto
bbbfea488d Update 020_INSTALL_MANUAL.md (#2092)
The file name should be configure_invokeai.py
2022-12-23 04:58:40 +00:00
Lincoln Stein
c8a9848ad6 correct a crash in img2img under particular circumstances (#2088)
When using the inpainting model, the following sequence of events
would cause a predictable crash:

1. Use unified canvas to outcrop a portion of the image.
2. Accept outcropped image and import into img2img
3. Try any img2img operation

This closes #1596.

The crash was:

```
operands could not be broadcast together with shapes (320,512) (512,576)

Traceback (most recent call last):
  File "/data/lstein/InvokeAI/backend/invoke_ai_web_server.py", line 1125, in generate_images
    self.generate.prompt2image(
  File "/data/lstein/InvokeAI/ldm/generate.py", line 492, in prompt2image
    results = generator.generate(
  File "/data/lstein/InvokeAI/ldm/invoke/generator/base.py", line 98, in generate
    image = make_image(x_T)
  File "/data/lstein/InvokeAI/ldm/invoke/generator/omnibus.py", line 138, in make_image
    return self.sample_to_image(samples)
  File "/data/lstein/InvokeAI/ldm/invoke/generator/omnibus.py", line 173, in sample_to_image
    corrected_result = super(Img2Img, self).repaste_and_color_correct(gen_result, self.pil_image, self.pil_mask, self.mask_blur_radius)
  File "/data/lstein/InvokeAI/ldm/invoke/generator/base.py", line 148, in repaste_and_color_correct
    mask_pixels = init_a_pixels * init_mask_pixels > 0
ValueError: operands could not be broadcast together with shapes (320,512) (512,576)
```

This error was caused by the image and its mask not being of identical
size due to the outcropping operation. The ultimate cause of this
error has something to do with different code paths being followed in
the `inpaint` vs the `omnibus` modules.

Since omnibus will be obsoleted by diffusers, I have chosen just to
work around the problem rather than track it down to its source. The
only ill effect is that color correction will not be applied to the
first image created by `img2img` after applying the outcrop and
immediately importing into the img2img canvas. Since the inpainting
model has less of a color drift problem than the standard model, this
is unlikely to be problematic.
2022-12-22 14:53:23 +00:00
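The workaround described above amounts to skipping color correction when the image and mask sizes have diverged. A hedged sketch, using illustrative names rather than the actual generator methods:

```python
# Sketch only: bypass repaste/color-correction when image and mask shapes differ.
import numpy as np

def maybe_color_correct(gen_result, init_image, init_mask, correct_fn):
    init_a = np.asarray(init_image)
    mask_a = np.asarray(init_mask)
    if init_a.shape[:2] != mask_a.shape[:2]:
        # sizes diverged (e.g. after an outcrop): return the raw result rather
        # than triggering the broadcast error shown in the traceback above
        return gen_result
    return correct_fn(gen_result, init_image, init_mask)
```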
Eugene Brodsky
e88e274bf2 Add @ebr to Contributors (#2095)
* (docs) @ebr signs Statement of Values

* (docs) add @ebr to Contributors page
2022-12-21 14:33:08 -05:00
Lincoln Stein
cca8d14c79 defer patchmatch loading (#2039)
* defer patchmatch loading

Because of the way that patchmatch was loaded early at import time, it
was impossible to turn off the attempted loading with --no-patchmatch.

In addition, the patchmatch loading messages appear early on during
initialization, interfering with the ability to print out the version
cleanly when --version is provided to the invoke script.

This commit creates a thin wrapper class for patch_match that is only
loaded when needed, solving both problems.

* create a singleton patchmatch object for use in inpainting

This creates a thin wrapper to patchmatch which loads the module
on demand, respecting the global "trypatchmatch" option.

* address 2nd round of issues in PR 2039 comments

* Patchmatch->PatchMatch and misc cleanup
2022-12-20 15:32:35 -08:00
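A hedged sketch of the thin on-demand wrapper described in the commit above. The import path and `inpaint()` call are assumptions for illustration; only the lazy-loading/singleton pattern is the point.

```python
# Sketch only: load patchmatch the first time it is needed, and expose a
# module-level singleton so later callers share the same instance.
class PatchMatch:
    def __init__(self, try_patchmatch: bool = True):
        self.try_patchmatch = try_patchmatch   # global "trypatchmatch"-style option
        self._impl = None

    def _load(self):
        if self._impl is None and self.try_patchmatch:
            try:
                from patchmatch import patch_match  # heavy import deferred until here (assumed module)
                self._impl = patch_match
            except ImportError:
                print(">> patchmatch not loaded; infill will fall back")
                self.try_patchmatch = False
        return self._impl

    def inpaint(self, image, mask):
        impl = self._load()
        if impl is None:
            return image                       # graceful no-op when unavailable
        return impl.inpaint(image, mask)       # assumed signature

# module-level singleton used by the inpainting code
patch_match = PatchMatch()
```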
Kent Keirsey
464aafa862 Correct asset link (#2081)
* Correct asset link

Minor documentation fix to correct linked asset.

* fix switched graphics
also:
- add blanks before/after figure tag
  (makes the screenshot also appear in github)
- use a table in inpainting example to have the pics side by side

Co-authored-by: mauwii <Mauwii@outlook.de>
2022-12-20 17:29:54 +00:00
Lincoln Stein
6e98b5535d add --version to invoke.py arguments (#2038)
* add --version to invoke.py arguments

This commit allows invoke.py to print out its name and version
number when given the --version argument. I had to move some
status messages around in order to make the output clean.

There is still an early message about initializing patchmatch
that interferes with a clean print of the version, and in fact the
--no-patchmatch argument is not doing anything. This will be the
subject of a subsequent PR.

* export APP_ID and APP_VERSION

Needed to support the web backend.
2022-12-20 15:14:28 +00:00
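A minimal sketch of the --version wiring described above, using argparse's built-in `version` action; the APP_ID/APP_VERSION values below are placeholders.

```python
# Sketch only: print name and version, then exit, when --version is given.
import argparse

APP_ID = "invoke-ai/InvokeAI"   # placeholder values; the real ones are exported elsewhere
APP_VERSION = "2.X.X"

parser = argparse.ArgumentParser(prog="invoke.py")
parser.add_argument("--version", action="version", version=f"{APP_ID} {APP_VERSION}")
args = parser.parse_args()
```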
Eugene Brodsky
ab2972f320 Fix the configure script crash when no TTY is allocated (e.g. a container) (#2080)
* (config) avoid failure when huggingface token is not set

it is not required for model download, and we handle the
saving of the token during the huggingface authentication phase elsewhere.

* (config) safely print to non-tty terminals where width can not be determined
2022-12-20 03:52:58 +01:00
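A hedged sketch of the "safely print to non-tty terminals" fix: fall back to a default width when the terminal size cannot be determined (for example inside a container).

```python
# Sketch only: wrap output to the terminal width, with a safe fallback.
import os
import textwrap

def print_wrapped(text: str, default_width: int = 80) -> None:
    try:
        width = os.get_terminal_size().columns
    except OSError:  # no tty attached (container, redirected output, CI)
        width = default_width
    print(textwrap.fill(text, width=width))

print_wrapped("Welcome to the InvokeAI configuration script. " * 5)
```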
Matthias Wild
1ba40db361 optimize Dockerfile (#2036)
* remove build-essentials after opencv is built
also remove the unnecessary python3-opencv dependency (it's already in venv)

* use branch name as tag

* leave pip and setuptools at the preinstalled versions.

* Rename Argument from WORKDIR to APPDIR

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2022-12-19 23:53:19 +01:00
Damian Stewart
f69fc68e06 Revert "Don't crash CLI on exceptions (#2066)" (#2078)
This reverts commit 147834e99c.
2022-12-20 08:56:04 +13:00
Scott Lahteine
7d8d4bcafb Global replace [ \t]+$, add "GB" (#1751)
* "GB"

* Replace [ \t]+$ global

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2022-12-19 16:36:39 +00:00
Lincoln Stein
4fd97ceddd remove redundant code line (#2068)
* remove redundant code line

install.bat was copying the requirements file into the install folder
twice, causing an error message on the second try. This fixes the
issue.

* add further improvements to installer

- Windows version will unzip to have requirements.txt already in
  the right location, to prevent problems when users try to run
  the .bat script from within a mounted read-only zip file manager.

- Do not assume that "pip" is on the path in either the .bat or shell
  versions of the update script.
2022-12-19 14:57:41 +00:00
Eugene Brodsky
ded49523cd (docs) update installer links as per some Discord reports 2022-12-18 22:32:33 -05:00
Eugene Brodsky
914e5fc4f8 (docs) update python install instructions for Ubuntu. Add documentation for installing OpenGL libraries on Linux. Fixes #1987 2022-12-18 22:32:33 -05:00
Eugene Brodsky
ab4d391a3a (docs) call out that the root volume needs at least 6GB of free space for pip cache. Fixes #2056 2022-12-18 22:32:33 -05:00
Matthias Wild
82f59829b8 set workflow PR triggers to filter PR-types (#2065)
* set workflow PR triggers to filter PR-types
- `review_requested`
- `ready_for_review`

* fail tests if draft pr

* add more types to test pr triggers

* remove unneeded condition

* readd condition

* leave PR-types default, only verify PRs to main
and fail for draft-PRs

* set types to cancel when converted to draft
2022-12-18 20:54:07 +00:00
Damian Stewart
147834e99c Don't crash CLI on exceptions (#2066) 2022-12-18 16:28:47 +01:00
Eugene Brodsky
f41da11d66 Relax Huggingface login requirement during setup (#2046)
* (config) handle huggingface token more gracefully

* (docs) document HuggingFace token requirement for Concepts

* (cli) deprecate the --(no)-interactive CLI flag

It was previously only used to skip the SD weights download, and therefore
the prompt for Huggingface token (the "interactive" part).

Now that we don't need a Huggingface token
to download the SD weights at all, we can replace this flag with
"--skip-sd-weights", to clearly describe its purpose

The `--(no)-interactive` flag still functions the same, but shows a deprecation message

* (cli) fix emergency_model_reconfigure argument parsing

* (config) fix installation issues on systems with non-UTF8 locale

Co-authored-by: Matthias Wild <40327258+mauwii@users.noreply.github.com>
2022-12-18 10:44:50 +01:00
Eugene Brodsky
5c5454e4a5 (docs) add redirects for moved pages (#2063) 2022-12-18 08:04:58 +00:00
Shapor Naghibzadeh
dedbdeeafc Update scripts/configure_invokeai.py
Co-authored-by: Eugene Brodsky <ebr@users.noreply.github.com>
2022-12-18 01:21:47 -05:00
Shapor Naghibzadeh
d1770bff37 Accept --root_dir in addition to --root in configure_invokeai.py to be consistent with the documentation in 020_INSTALL_MANUAL. 2022-12-18 01:21:47 -05:00
JPPhoto
20652620d9 Added compiled TS changes 2022-12-17 20:56:42 -05:00
Jonathan
51613525a4 Updated to pull threshold from an existing image even if 0 (#2051)
Addresses #2049 but not other cases where the stored value is 0 (e.g. perlin noise). This should be investigated more thoroughly.
2022-12-17 19:03:09 -05:00
blessedcoolant
dc39f8d6a7 Fix broken embedding variants (#2037) 2022-12-17 03:07:05 +00:00
Lincoln Stein
f1748d7017 avoid leaking data to HuggingFace (#2021)
Before making a concept download request to HuggingFace, the concepts
library module now checks the concept name against a downloaded list
of all the concepts currently known to HuggingFace.  If the requested
concept is not on the list, then no download request is made.
2022-12-16 16:50:02 +00:00
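A hedged sketch of the guard described above: check the requested concept against a locally downloaded list of known concepts before issuing any request. `fetch_concept_list` and `download_concept` are hypothetical helper names.

```python
# Sketch only: no network request is made for unknown concept names.
def get_concept(name: str, library):
    known = set(library.fetch_concept_list())   # cached list of all concepts known to HuggingFace
    if name not in known:
        print(f'** "{name}" is not a known Hugging Face concept; no download request sent')
        return None
    return library.download_concept(name)
```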
Lincoln Stein
de7abce464 add an argument that lets user specify folders to scan for weights (#1977)
* add an argument that lets user specify folders to scan for weights

This PR adds a `--weight_folders` argument to invoke.py. Using
argparse, it adds a "weight_folders" attribute to the Args object, and
can be used like this:

```
'''test.py'''
from ldm.invoke.args import Args

args = Args().parse_args()

for folder in args.weight_folders:
    print(folder)
```

Example output:

```
python test.py --weight_folders /tmp/weights /home/fred/invokeai/weights "./my folder with spaces/weight files"
/tmp/weights
/home/fred/invokeai/weights
./my folder with spaces/weight files
```

* change --weight_folders to --weight_dirs
2022-12-16 15:14:49 +00:00
Kaspar Emanuel
2aa5bb6aad Auto-format frontend (#2009)
* Auto-format frontend

* Update lint-frontend GA workflow node and checkout

* Fix linter error in ThemeChanger

* Add a `on: pull_request` to lint-frontend workflow

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2022-12-16 13:56:39 +01:00
Matthias Wild
c0c4d7ca69 update (docker-)build scripts, .dockerignore and add patchmatch (#1970)
* update build scripts and dockerignore
updates to build and run script:
- read repository name
- include flavor in container name
- read arch via arch command
- use latest tag instead of arch
- don't bindmount `$HOME/.huggingface`
- make sure HUGGINGFACE_TOKEN is set

updates to .dockerignore
- include environment-and-requirements
- exclude binary_installer
- exclude docker-build
- exclude docs

* disable push and pr triggers of cloud image
also disable pushing.

This was decided since:
- it is not multiarch usable
- the default image is already cloud approved

* integrate patchmatch in container

* pin versions of recently introduced dependencies

* remove now unnecessary part from build.sh
move huggingface token to run script, so it can download missing models

* move GPU_FLAGS to run script
since not needed at build time

* update env.sh

- read REPOSITORY_NAME from env if available
- add comment to explain the intention of this file
- remove unnecessary exports

* get rid of repository_name_lc

* capitalize variables

* update INSTALL_DOCKER with new variables

* add comments pointing to the docs

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2022-12-16 13:53:37 +01:00
Eugene Brodsky
7d09d9da49 delete old 'server' package and the dependency_injector requirement (#2032)
fixes #1944
2022-12-16 06:28:16 -05:00
blessedcoolant
ffa54f4a35 Fix --config arg not being recognized 2022-12-16 18:29:47 +13:00
blessedcoolant
69cc0993f8 Add Embedding Parsing (#1973)
* Add Embedding Parsing

* Add Embedding Parsing

* Return token_dim in embedding_info

* fixes to handle other variants

1. Handle the case of a .bin file being mislabeled .pt (seen in the
wild at https://cyberes.github.io/stable-diffusion-textual-inversion-models/)

2. Handle the "broken" .pt files reported by https://github.com/invoke-ai/InvokeAI/issues/1829

3. When token name is not available, use the basename of the pt or bin file rather than the
   whole path.

fixes #1829

* remove whitespace

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2022-12-15 17:26:36 -05:00
mauwii
1050f2726a update links with new filenames 2022-12-16 10:26:02 +13:00
Lincoln Stein
f7170e4156 improve installation documentation
1. Added a big fat warning to the Windows installer to tell user
   to install Visual C++ redistributable.

2. Added a big fat warning to the automated installer doc to
   tell user the same thing.

3. Reordered entries on the table-of-contents sidebar for installation
   to prioritize the most important docs.

4. Moved older installation documentation into deprecated folder.

5. Moved developer-specific installation documentation into the
   developers folder.
2022-12-16 10:26:02 +13:00
mauwii
bfa8fed568 update site_name 2022-12-16 10:24:03 +13:00
mauwii
2923dfaed1 update index and changelog 2022-12-16 10:24:03 +13:00
zeptofine
0932b4affa Replace latest link
The link still points to 2.2.3.
2022-12-16 10:22:44 +13:00
Kaspar Emanuel
0b10835269 Fix initial theme setting 2022-12-16 10:20:26 +13:00
Kaspar Emanuel
6e0f3475b4 Reduce frontend eslint warnings to 0 2022-12-16 10:18:45 +13:00
Kaspar Emanuel
9b9e276491 Add lint-frontend github actions workflow 2022-12-16 10:16:01 +13:00
Kaspar Emanuel
392c0725f3 Remove circular dependencies in frontend 2022-12-16 10:16:01 +13:00
wfng92
2a2f38a016 Correct timestep for img2img initial noise addition (#1946)
* Correct timestep for img2img initial noise addition

* apply fix to inpaint and txt2img2img as well

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2022-12-15 15:59:19 -05:00
Kevin Turner
7a4e647287 build: GitHub Action to lint python files with pyflakes (#1332) 2022-12-15 19:30:58 +00:00
Lincoln Stein
b8e1151a9c change pypatchmatch only 2022-12-15 10:37:13 -05:00
Lincoln Stein
f39cb668fc update requirements and environment files
- Update pypatchmatch to 0.1.5 (see request from @Kyle0654 here
  https://discord.com/channels/1020123559063990373/1034740515209486387/1052465462757310464 )

- Removed basicsr workaround for environment files, now that we know
  problem was caused by Windows long path issue.
2022-12-15 10:37:13 -05:00
Lincoln Stein
6c015eedb3 better error reporting when root directory not found (#2001)
- The invoke.py script now checks that the root (runtime) directory contains
  the expected config/models.yaml file and, if it doesn't, exits with a helpful
  error message about how to set the proper root.

- Formerly the script would fail with a "bad model" message and try to redownload
  its models, which is not helpful in the case that the root is missing or
  damaged.
2022-12-15 09:34:10 -05:00
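A minimal sketch of the startup check described above, assuming the root is considered valid only if it contains config/models.yaml:

```python
# Sketch only: verify the runtime directory before doing anything else.
import os
import sys

def check_root(root: str) -> None:
    models_yaml = os.path.join(root, "config", "models.yaml")
    if not os.path.exists(models_yaml):
        print(f"** {root} does not contain the expected config/models.yaml file.")
        print("** Set the correct root with --root_dir or the INVOKEAI_ROOT environment")
        print("** variable, or run the configure script to (re)create it.")
        sys.exit(1)
```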
Kent Keirsey
834e56a513 update Contributors directive to merge to main 2022-12-14 23:42:53 -05:00
mauwii
652aaa809b add missing backticks and some icons for tab
also put the alf screenshot in a table to match the other examples
2022-12-14 18:36:07 -05:00
Lincoln Stein
89880e1f72 affirm that <concepts> work with the webGUI 2022-12-14 18:36:07 -05:00
Lincoln Stein
d94f955d9d fix manual install documentation 2022-12-14 18:36:07 -05:00
Damian Stewart
64339af2dc restrict to 75 tokens and correctly handle blends 2022-12-14 16:54:27 -05:00
Chris Dawson
5d20f47993 Permit usage of GPUs in docker script (#1985)
* Add gpu support to docker

Enable GPUs within docker

* Use gpus flag

* Add GPU information to readme

* Fix env var name for GPU
2022-12-14 05:21:33 +00:00
Ivan Efimov
ccf8a46320 Fix: define path as None before usage 2022-12-13 19:46:03 -05:00
mauwii
af3d72e001 re-enable wheel install in test-invoke-pip.yml 2022-12-13 23:29:08 +01:00
Matthias Wild
1d78e1af9c add concurrency to test actions (#1975)
configured to only cancel workflows in PRs, but not on main branch
originates in #1933, but optimized to not cancel workflows of non-PRs
2022-12-13 19:53:10 +01:00
Matthias Wild
1fd605604f remove redundant tests, only do 20 steps (#1972)
- remove tests already performed in PR
- remove tests pointing to non existing files
- reduce steps to 20

This should decrease test time a lot and also "fix" failing mac tests.
I still recommend investigating why mac invoke takes so much longer!

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2022-12-13 19:39:29 +01:00
Ivan Efimov
f0b04c5066 Typo fix in INSTALL_AUTOMATED.md (#1968) 2022-12-13 19:13:28 +01:00
Eugene Brodsky
2836976d6d (install) fix segfault on macos when using homebrew 2022-12-13 11:39:08 -05:00
blessedcoolant
474220ce8e Fresh Frontend Build 2022-12-13 20:45:37 +13:00
blessedcoolant
4074705194 Update WEBUIHOTKEYS.md
Update hotkeys docs. They were outdated.
2022-12-13 20:45:37 +13:00
blessedcoolant
e89ff01caf Update Hotkeys Modal
Update hotkeys modal to reflect the previous changes to the scale and restore hotkeys and also improve a few other descriptions.
2022-12-13 20:45:37 +13:00
blessedcoolant
2187d0f31c Change Restore and Upscale Hotkeys
Changed the hotkeys of Restore and Upscale from R and U to Shift R and Shift U. Users could accidentally press R and U to trigger these functions which can be annoying. Especially considering R is also a hotkey for Reset View in other tabs and it can become muscle memory.
2022-12-13 20:45:37 +13:00
Lincoln Stein
1219c39d78 Lstein installer improvements (#1954)
* add logic for finding the root (runtime) directory

This commit fixes the root search logic to be as follows:

1) The `--root_dir` command line argument
2) The contents of environment variable INVOKEAI_ROOT
3) The VIRTUAL_ENV environment variable, plus '..'
4) $HOME/invokeai

(3) is the new feature. Since we now recommend installing
InvokeAI and its dependencies into the .venv in the root directory,
this should be a reliable choice (a sketch of this search order follows this entry).

* make installer scripts more robust

This commits improves the installer .sh and .bat scripts in the following
ways:

1. They now handle folder/directory names containing spaces.
2. Pip will be installed into the .venv using the `ensurepip`
   module.

This addresses issues identified by @vargol in Issue #1941

* add --prefer-binary option to pip install

* fix unset variable crash

* add patch level to zip file name

* Fix crash introduced in #1948
2022-12-13 01:15:11 -05:00
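A hedged sketch of the four-step root search order listed in the entry above (simplified; argument parsing and validation are omitted):

```python
# Sketch only: resolve the runtime root in the documented order of precedence.
import os
from typing import Optional

def find_root(cli_root: Optional[str] = None) -> str:
    if cli_root:                                      # 1) the --root_dir command line argument
        return cli_root
    if os.environ.get("INVOKEAI_ROOT"):               # 2) the INVOKEAI_ROOT environment variable
        return os.environ["INVOKEAI_ROOT"]
    if os.environ.get("VIRTUAL_ENV"):                 # 3) the parent of the active .venv
        return os.path.dirname(os.environ["VIRTUAL_ENV"])
    return os.path.expanduser("~/invokeai")           # 4) $HOME/invokeai
```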
blessedcoolant
bc0b0e4752 Possible fix for crash introduced in #1948 (#1963)
* Possible fix for crash introduced in #1948

* fix root dir search logic

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2022-12-13 01:14:46 -05:00
Lincoln Stein
cd3da2900d close #1956 (#1962) 2022-12-13 01:12:53 -05:00
blessedcoolant
4402ca10b2 [WebUI 2.2.5] Unified Canvas Alternate UI Beta (#1951)
* Fix Prompt Placeholder Text Color

* Display Model Desc as tooltip in SiteHeader

This'll allow the user to quickly access info like activation token for that model if they set it in the description.

* Unified Canvas UI Beta

* Initial Test Build

* Make Snap Grid Hotkey Accessible Always
2022-12-12 19:36:05 -05:00
Matthias Wild
1a1625406c Make Dockerfile cloud ready (tested on runpod) (#1950)
* Push dockerfile (#18)

* update build-container.yml

* add login step to build-container.yml

* update job name

* update matrix: add registry and platforms
also set latest only for cuda image

* quote string

* use latest for amd and cuda image

* separate images for cuda and amd

* change latest from auto to true

* configure_invoke -y instead of --interactive

* fix argument to --yes

* update matrix:
- use flavor instead of pip-requirements
- add flavor `cloud`
- add `dockerfile`

* introduce INVOKE_MODEL_RECONFIGURE

* add `--cap-add=sys_nice` to run.sh

* update Dockerfile: install wheel

* only have main branch in action again

* disable push of cloud image for now
since it still has its own workflow, but the PoC succeeded

* remove now untrue comments in top

* install pip, setuptools and wheel in sep. step

* add labels to the image

* remove doubled installation of wheel
2022-12-12 17:54:42 -05:00
Lincoln Stein
36e6908266 add logic for finding the root (runtime) directory (#1948)
This commit fixes the root search logic to be as follows:

1) The `--root_dir` command line argument
2) The contents of environment variable INVOKEAI_ROOT
3) The VIRTUAL_ENV environment variable, plus '..'
4) $HOME/invokeai

(3) is the new feature. Since we now recommend installing
InvokeAI and its dependencies into the .venv in the root directory,
this should be a reliable choice.
2022-12-12 15:05:14 -05:00
Lincoln Stein
7314f1a862 add --karras_max option to invoke.py command line (#1762)
This addresses the image regression reported in #1754
2022-12-12 13:16:15 -05:00
psychedelicious
5c3cbd05f1 Improves configure_invokeai.py postscript (#1935)
The first few lines directed the user to run `python scripts/invoke.py`, which is not exactly correct anymore, and a holdover from previous versions.

Improves and clarifies the postscript messaging.
2022-12-12 13:13:46 -05:00
rmagur1203
f4e7383490 Load model in inpaint when using free_gpu_mem option (#1938)
* Load model in inpaint when using free_gpu_mem option

* Passing free_gpu_mem option to inpaint generator
2022-12-12 09:14:30 -05:00
rmagur1203
96a12099ed Fix the mistake of not importing the gc (#1939) 2022-12-12 09:14:09 -05:00
Lincoln Stein
e159bb3dce update installers for v2.2.4 tag (#1936) 2022-12-11 18:17:45 -05:00
rmagur1203
bd0c0d77d2 Reduce more memories on free_gpu_mem option (#1915)
* Enhance free_gpu_mem option
Unload cond_stage_model when the free_gpu_mem option is set

* Enhance free_gpu_mem option
Unload cond_stage_model when the free_gpu_mem option is set
2022-12-11 13:49:55 -05:00
Lincoln Stein
f745f78cb3 correct bug when trying to enhance JPG images (#1928)
This fix was authored by @mebelz and is reissued here to base it on
`main`.
2022-12-11 13:48:47 -05:00
Lincoln Stein
7efe0f3996 fix mkdocs formatting (#1927)
* fix mkdocs formatting

* update formatting, add some mkdocs specials

* fix wrong line break, use icon for tab key

Co-authored-by: mauwii <Mauwii@outlook.de>
2022-12-11 13:48:34 -05:00
Damian Stewart
9f855a358a fix for crash with inpainting model introduced by #1866 (#1922)
* fix for crash using inpainting model

* prevent crash due to invalid attention_maps_saver
2022-12-11 13:48:12 -05:00
Matthias Wild
62b80a81d3 Update dockerfile 2.2.4 (#1924)
* updated Dockerfile
- use `python:3.10-slim` as baseimage
- separate builder and runtime stages again
- get rid of unneeded packages
- pin packages for persistence
- remove outdir from entrypoint since invoke.init is available in /data
- shrank image size to <2 GB
- way better security score than before

* small output update to build.sh and run.sh

* update matrix in build-container.yml

* remove branches from build-container.yml
2022-12-11 17:33:54 +01:00
blessedcoolant
14587c9a95 Fresh Frontend Build 2022-12-11 11:19:22 -05:00
blessedcoolant
fcae5defe3 Add invokeai.init to gitignore 2022-12-11 11:19:22 -05:00
Lincoln Stein
e7144055d1 make webGUI model changing work again
- Using relative root addresses was causing problems when the
  current working directory was changed after start time.
- This commit makes the root address absolute at start time, such
  that changing the working directory later on doesn't break anything.
2022-12-11 11:19:22 -05:00
Lincoln Stein
c857c6cc62 rebuild frontend for 2.2.4 2022-12-11 11:19:22 -05:00
Lincoln Stein
7ecb11cf86 remove sampler questions (#1903) 2022-12-11 09:07:55 -05:00
Lincoln Stein
e4b61923ae fix InvokeAI download URLs (#1910)
- This fixes the .bat and .sh file URLs for the InvokeAI source
  code.
2022-12-11 07:10:17 -05:00
psychedelicious
aa68e4e0da Adds polyfill for Array.prototype.findLast() (#1909) 2022-12-11 06:54:15 -05:00
blessedcoolant
09365d6d2e Fix GUI not working (#1916) 2022-12-11 06:53:40 -05:00
AdamOStark
b77f34998c Responsive for devices under 600px
This doesn't work for Canvas Painting yet, but works on img2img and text2img
2022-12-11 22:10:46 +13:00
Lincoln Stein
0439b51a26 Simple Installer for Unified Directory Structure, Initial Implementation (#1819)
* partially working simple installer

* works on linux

* fix linux requirements files

* read root environment variable in right place

* fix cat invokeai.init in test workflows

* fix classical cp error in test-invoke-pip.yml

* respect --root argument now

* untested bat installers added

* windows install.bat now working

fix logic to find frontend files

* rename simple_install to "installer"

1. simple_install => 'installer'
2. source and binary install directories are removed

* enable update scripts to update requirements

- Also pin requirements to known working commits.
- This may be a breaking change; exercise with caution
- No functional testing performed yet!

* update docs and installation requirements

NOTE: This may be a breaking commit! Due to the way the installer
works, I have to push to a public branch in order to do full end-to-end
testing.

- Updated installation docs, removing binary and source installers and
  substituting the "simple" unified installer.
- Pin requirements for the "http:" downloads to known working commits.
- Removed as much as possible the invoke-ai forks of others' repos.

* fix directory path for installer

* correct requirement/environment errors

* exclude zip files in .gitignore

* possible fix for dockerbuild

* ready for torture testing

- final Windows bat file tweaks
- copy environments-and-requirements to the runtime directory so that
  the `update.sh` script can run.

  This is not ideal, since we lose control over the
  requirements. Better for the update script to pull the proper
  updated requirements script from the repository.

* allow update.sh/update.bat to install arbitrary InvokeAI versions

- Can pass the zip file path to any InvokeAI release, branch, commit or tag,
  and the installer will try to install it.
- Updated documentation
- Added Linux Python install hints.

* use binary installer's :err_exit function

* use diffusers 0.10.0

* added logic for CPPFLAGS on mac

* improve windows install documentation

- added information on a couple of gotchas I experienced during
  windows installation, including DLL loading errors experienced
  when Visual Studio C++ Redistributable was not present.

* tagged to pull from 2.2.4-rc1

- also fix error of shell window closing immediately if suitable
  python not found

Co-authored-by: mauwii <Mauwii@outlook.de>
2022-12-11 00:37:08 -05:00
blessedcoolant
ef6870c714 Fix Inpainting Model entry in models.yaml.example 2022-12-10 23:52:24 -05:00
Damian Stewart
8cbb50c204 avoid further crash under low-memory conditions 2022-12-10 15:32:11 -05:00
blessedcoolant
12a8d7fc14 Fix crash introduced in #1866 2022-12-10 15:32:11 -05:00
Matthias Wild
3d2b497eb0 Run more tests for PRs (#1895)
* run 3 tests for PR with different samplers
reduce tests for PR to do only 5 Iterations

* use correct txt file - delete unused old file
2022-12-10 20:07:14 +01:00
Damian Stewart
786b8878d6 Save and display per-token attention maps (#1866)
* attention maps saving to /tmp

* tidy up diffusers branch backporting of cross attention refactoring

* base64-encoding the attention maps image for generationResult

* cleanup/refactor conditioning.py

* attention maps and tokens being sent to web UI

* attention maps: restrict count to actual token count and improve robustness

* add argument type hint to image_to_dataURL function

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>

Co-authored-by: damian <git@damianstewart.com>
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2022-12-10 15:57:41 +01:00
Lincoln Stein
55132f6463 pin diffusers to 0.9.0 2022-12-09 09:09:22 -05:00
Matthias Wild
ed9186b099 Add windows to test workflows (#1809)
* add windows to test runners

* disable fail-fast for debugging

* re-enable login shell for conda workflow
also fix expression to exclude windows from run tests

* enable fail-fast again

* fix condition, pin runner versions

* remove feature branch from push trigger
since already being triggered now via PR

* use gfpgan from pypi for windows
curious if this would fix the installation here as well,
since it worked for #1802

* unpin basicsr for windows

* for curiosity enabling testing for windows as well

* disable pip cache
since windows failed with a memory error now
but was working before it had a cache

* use matrix.github-env

* set platform specific root and outdir

* disable tests for windows due to a memory error
I guess the windows installation uses more space than linux
and therefore has less swap memory available
2022-12-09 14:21:38 +01:00
wfng92
d2026d0509 Fix error when init_mask=None and invert_mask=True
In the event where no `init_mask` is given and `invert_mask` is set to True, the script will raise the following error:

```bash
AttributeError: 'NoneType' object has no attribute 'mode'
```

The new implementation will only run inversion when both variables are valid.
2022-12-08 22:37:11 -05:00
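A hedged sketch of the guard described above: only invert when a mask is actually present, so a missing mask can no longer trigger the AttributeError. Function and argument names are illustrative.

```python
# Sketch only: run mask inversion only when both conditions hold.
from PIL import ImageOps

def prepare_mask(init_mask, invert_mask: bool):
    if init_mask is not None and invert_mask:
        return ImageOps.invert(init_mask.convert("L"))
    return init_mask
```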
Artur
0bc4ed14cd Prompt placeholder changed in PromptInput.tsx
Syntax examples were added
2022-12-08 22:35:41 -05:00
Jonathan
06369d07c0 Update CLI.py 2022-12-08 22:34:49 -05:00
Jonathan
4e61069821 Update embiggen.py 2022-12-08 22:34:49 -05:00
Daya Adianto
d7ba041007 Enable force free GPU memory in img2img 2022-12-07 19:25:21 -05:00
Sammy
3859302f1c Remove -e from "INSTALL_PATCHMATCH.md
The -e flag does NOT work in this case and results in a RemoteNotFound Error
2022-12-07 19:24:31 -05:00
Sammy
865439114b Arch Specific Patchmatch Instructions + Fixing linux conda installation 2022-12-07 19:24:31 -05:00
Lynne Whitehorn
4d76116152 Update invoke.bat.in isolate environment variables
Without locally scoped (to the script) environment variables, this script can only be run once and then you need to start a new cmd session to get a clean environment.

Surrounding the script with setlocal/endlocal achieves this.

https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/setlocal
https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/endlocal
2022-12-07 17:45:19 -05:00
spezialspezial
42f5bd4e12 Account for flat models
Merged models from auto11 merge board are flat for some reason. Current behavior of invoke is not changed by this modification.
2022-12-07 12:11:37 -05:00
Vedant Madane
04e77f3858 Fix Broken Link To Notebook
* The link pointed to https://github.com/invoke-ai/InvokeAI/blob/main/notebooks/Stable-Diffusion-local-Windows.ipynb which does not exist so it has been replaced with https://github.com/invoke-ai/InvokeAI/blob/main/notebooks/Stable_Diffusion_AI_Notebook.ipynb

* Add buttons for running on Colab 

* Tried adding running InvokeAI on Binder but the error was:
ERROR: Ignored the following versions that require a different python version: 0.55.2 Requires-Python <3.5
ERROR: Could not find a version that satisfies the requirement clipseg (from invokeai) (from versions: none)
ERROR: No matching distribution found for clipseg
Removing intermediate container 25be65428187
The command '/bin/sh -c ${KERNEL_PYTHON_PREFIX}/bin/pip install --no-cache-dir .' returned a non-zero code: 1

`## Running Online On JupyterHub Binder
[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/invoke-ai/InvokeAI/main?labpath=https%3A%2F%2Fgithub.com%2Finvoke-ai%2FInvokeAI%2Fblob%2Fmain%2Fnotebooks%2FStable_Diffusion_AI_Notebook.ipynb)`

This will have to be added for having the Launch | Binder button after it runs properly.
2022-12-07 08:28:14 -05:00
Eugene Brodsky
1fc1eeec38 Fix docker push github action and expand with additional metadata (#1837)
* update docker build (cloud) action with additional metadata, new labels

* (docker) also add aarch64 cloud build and remove arch suffix

* (docker) architecture suffix is needed for now

* (docker) don't build aarch64 for now
2022-12-07 14:03:33 +01:00
Matthias Wild
556081695a disable pushing the cloud container (#1831) 2022-12-06 18:06:48 +01:00
Eugene Brodsky
ad7917c7aa Optimized Docker build with support for external working directory (#1544)
* add docker build optimized for size; do not copy models to image

Useful for cloud deployments. Attempts to utilize docker layer
caching as effectively as possible. Also adds some quick tools to help with
building.

* add workflow to build cloud img in ci

* push cloud image in addition to building

* (ci) also tag docker images with git SHA

* (docker) rework Makefile for easy cache population and local use

* support the new conda-less install; further optimize docker build

* (ci) clean up the build-cloud-img action

* improve the Makefile for local use

* move execution of invoke script from entrypoint to cmd, allows overriding the cmd if needed (e.g. in Runpod

* remove unnecessary copyright statements

* (docs) add a section on running InvokeAI in the cloud using Docker

* (docker) add patchmatch to the cloud image; improve build caching; simplify Makefile

* (docker) fix pip requirements path to use binary_installer directory
2022-12-06 13:28:07 +01:00
Kent Keirsey
39cca8139f Clean up readme 2022-12-06 06:58:26 -05:00
blessedcoolant
1d1988683b Fix Embedding Dir not working 2022-12-05 22:24:31 -05:00
Lincoln Stein
44a0055571 correct regression in loading of PaperCut and VoxelArt models (#1730)
This corrects a regression in loading of these models due to
a change of the embedding_manager parameter `num_vectors_per_token`

Fixes #1718
2022-12-05 19:04:34 +01:00
Lincoln Stein
0cc01143d8 invoke script cds to its location before running (#1805) 2022-12-05 19:03:20 +01:00
spezialspezial
1c0247d58a Eventually update APP_VERSION to 2.2.3
Not sure what the procedure is for the version number. Is this supposed to match every git tag or just major versions? Same question for setup.py
2022-12-04 14:33:16 -05:00
Damian Stewart
d335f51e5f fix off-by-one bug in cross-attention-control (#1774)
Prompt token sequences begin with a "beginning-of-sequence" marker <bos> and end with a repeated "end-of-sequence" marker <eos> - to make a default prompt length of <bos> + 75 prompt tokens + <eos>. The .swap() code was failing to take the column for <bos> at index 0 into account. The changes here do that, and also add extra handling for a single <eos> (which may be redundant but is included for completeness).

Based on my understanding and some assumptions about how this all works, the reason .swap() nevertheless seemed to do the right thing, to some extent, is that over multiple steps the conditioning process in Stable Diffusion operates as a feedback loop. A change to token n-1 has flow-on effects on how the [1x4x64x64] latent tensor is modified by all the tokens after it, and, as the next step is processed, all the tokens before it as well. Intuitively, a token's conditioning effects "echo" throughout the whole length of the prompt. So even though the token at n-1 was being edited when what the user actually wanted was to edit the token at n, it nevertheless still had some non-negligible effect, in roughly the right direction, often enough that it seemed like it was working properly.
2022-12-04 11:41:03 +01:00
Lincoln Stein
38cd968130 stability and use improvements to binary & source installers
- Pass command-line arguments through to invoke.py via the .bat and .sh scripts.
- Remove obsolete warning message from binary install.bat
- Make sure that current working directory matches where .bat file is installed
2022-12-03 21:25:12 -05:00
tildebyte
0111304982 fix(srcinstall) shell installer: cp scripts instead of linking 2022-12-03 21:24:18 -05:00
Eugene Brodsky
c607d4fe6c (config) clarify why we're setting the env var 2022-12-03 14:33:21 -05:00
Eugene Brodsky
6d6076d3c7 (config) fix permissions on configure_invokeai.py, improve documentation in globals.py comment 2022-12-03 14:33:21 -05:00
Eugene Brodsky
485fcc7fcb (config) do not cache HF token when using the non-canonical env var
this mirrors the behaviour when using the officially supported env var
2022-12-03 14:33:21 -05:00
Eugene Brodsky
76633f500a (config) make user aware of any problems downloading models
also implement a generic way of reporting issues at the end of installation
2022-12-03 14:33:21 -05:00
Eugene Brodsky
ed6194351c (config) try to authenticate to Huggingface more eagerly, using env vars 2022-12-03 14:33:21 -05:00
Eugene Brodsky
f237744ab1 (config) fix f-string in prompt for output location 2022-12-03 14:33:21 -05:00
ofirkris
678cf8519e typo fix 2022-12-03 14:30:48 -05:00
Damian Stewart
ee9de75b8d Make install instructions discoverable in readme (#1752)
also "Macintosh" → "macOS" to improve "We Support macOS Properly And Not Halfassed Like Other OSS Projects" signalling

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2022-12-03 14:20:50 -05:00
Andy Bearman
50f3847ef8 Fix Linux source URL in installation docs 2022-12-03 14:19:58 -05:00
Lincoln Stein
8596e3586c add documentation warning about 1650/60 cards
Several users have been trying to run InvokeAI on GTX 1650 and 1660
cards. They really can't because these cards don't work with
half-precision and only have 4-6GB of memory. Added a warning to
the docs (in two places) about this problem.
2022-12-03 13:16:22 -05:00
Lincoln Stein
5ef1e0714b Merge branch 'main' of github.com:/invoke-ai/InvokeAI into main 2022-12-03 12:25:30 +00:00
Lincoln Stein
be871c3ab3 Merge branch 'ebr-gh-link-src-installer' into main 2022-12-03 12:24:03 +00:00
Lincoln Stein
dec40d9b04 Update source_installer/install.sh.in
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2022-12-03 07:20:32 -05:00
Lincoln Stein
fe5c008dd5 Update docs/installation/INSTALL_SOURCE.md
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2022-12-03 07:20:32 -05:00
Lincoln Stein
72def2ae13 documentation fixes for 2.2.3
- Add Xcode installation instructions to source installer walkthrough
- Fix link to source installer page from installer overview
- If OSX install crashes, script will tell Mac users to go to the docs
  to learn how to install Xcode
2022-12-03 07:20:32 -05:00
Eugene Brodsky
31cd76a2af (docs) install ux: link directly to release zip files
NB: if we remove the version from the zip file names, we can link
directly to assets in the latest GH release from documentation without
the need to keep the links updated
2022-12-03 00:24:49 -05:00
Eugene Brodsky
00c78263ce (docs) install ux: link main README directly to source installer 2022-12-03 00:19:45 -05:00
Lincoln Stein
5c31feb3a1 Remove reference to binary installer 2022-12-02 22:02:51 -05:00
Shawn Zhong
26f129cef8 Fix broken link 2022-12-02 22:02:30 -05:00
Lincoln Stein
292ee06751 Fix description of source code installer
The mkdocs version of INSTALL_SOURCE.md has disappeared and I am patching this in
so that users find the correct installer.
2022-12-02 17:16:29 -05:00
Lincoln Stein
c00d53fcce fix link in documentation 2022-12-02 15:50:34 -05:00
Daya Adianto
a78a8728fe Fix FlaskUI initialization 2022-12-02 15:50:14 -05:00
Kevin Turner
6b5d19347a fix(invoke.sh.in): remove additional mystery character 2022-12-02 15:43:59 -05:00
Eugene Brodsky
26671d8eed (installer) fix syntax error in invoke.sh.in 2022-12-02 15:43:59 -05:00
Lincoln Stein
b487fa4391 fix basicsr conflict on windows 2022-12-02 12:53:13 -05:00
Lincoln Stein
12b98ba4ec make invoke.sh executable 2022-12-02 12:53:13 -05:00
Lincoln Stein
fa25a64d37 remove references to binary installer from docs 2022-12-02 12:48:26 -05:00
Lincoln Stein
29540452f2 fix bad naming of invoke.sh.in 2022-12-02 11:25:37 -05:00
Lincoln Stein
c7960f930a fix regression in copy function 2022-12-02 10:53:42 -05:00
Lincoln Stein
c1c8b5026a apply current directory patch to binary installer .sh file 2022-12-02 10:53:42 -05:00
Lincoln Stein
5da42e0ad2 add back PYTORCH_ENABLE_MPS_FALLBACK 2022-12-02 10:53:42 -05:00
Lincoln Stein
34d6f35408 run .bat file in directory potentially containing spaces
- The previous fix for the "install in Windows system directory" error would fail
   if the path includes directories with spaces in them. This fixes that.

- In addition, addressing the same issue in the source installer, although not
  yet reported in the wild.
2022-12-02 10:53:42 -05:00
mauwii
401165ba35 correctly link current core team 2022-12-02 09:33:19 -05:00
mauwii
6d8057c84f fix POSTPROCESS ToC 2022-12-02 09:33:19 -05:00
mauwii
3f23dee6f4 add title 2022-12-02 09:33:19 -05:00
mauwii
8cdd961ad2 update IMG2IMG.md 2022-12-02 09:33:19 -05:00
mauwii
470b267939 update CONCEPTS.md
- use table with correct syntax for screenshots
- switch Title and first Headline to look better in ToC
2022-12-02 09:33:19 -05:00
mauwii
bf399e303c add index.md to features
to prevent the menu being occupied by the expanded CLI ToC.
Should maybe be fleshed out a bit.
2022-12-02 09:33:19 -05:00
mauwii
b3d7ad7461 a lot of formatting updates to CLI.md 2022-12-02 09:33:19 -05:00
mauwii
cd66b2c76d fix links in older_docs_to_be_removed 2022-12-02 09:33:19 -05:00
psychedelicious
6b406e2b5e Adds tip for importing models on Windows 2022-12-02 09:25:36 -05:00
Lincoln Stein
6737cc1443 recompile for linux 2022-12-02 09:11:17 -05:00
Lincoln Stein
7fd0eeb9f9 update darwin requirements 2022-12-02 09:11:17 -05:00
Lincoln Stein
16e3b45fa2 update linux requirements file 2022-12-02 09:11:17 -05:00
Lincoln Stein
2f07ea03a9 binary installer fix
- bat file changes to directory it lives in rather than user's current directory
- restore incorrect requirements and compiled Darwin requirements file
2022-12-02 09:11:17 -05:00
Lincoln Stein
b563d75c58 restored mac requirements file 2022-12-02 09:11:17 -05:00
psychedelicious
a7b7b20d16 Updates docs release link to latest 2022-12-02 06:20:16 -05:00
Lincoln Stein
a47ef3ded9 change download links to release candidate 2022-12-01 23:24:23 -05:00
Lincoln Stein
7cb9b654f3 add compiled windows file 2022-12-01 23:07:48 -05:00
Lincoln Stein
8819e12a86 configure script changed from preload_models.py to configure_invokeai.py
This makes a cosmetic change. Instead of calling preload_models.py
(deprecated) it calls configure_invokeai.py. Currently the two do
the same thing.
2022-12-01 22:51:05 -05:00
Lincoln Stein
967eb60ea9 added the linux py3.10* file 2022-12-01 22:51:05 -05:00
psychedelicious
b1091ecda1 Fixes failed canvas generation when gallery is empty
There was some old logic from before Unified Canvas which aborted generation when there was no currentImage. 

If you have an image in the gallery, there is always a currentImage. But if gallery is empty, there is no currentImage. Generation would silently fail in this case.

We apparently never tested with an empty gallery and thus never ran into the issue. This removes this old and now-unused logic.
2022-12-01 22:29:56 -05:00
Lincoln Stein
2723dd9051 remove bad characters from end of user input
Some users were leaving whitespace at the end of their root
directories or ending them with a backslash. This caused the root
directory to become unusable. This removes whitespace and backslashes
from the end of the directory names.

Note that more needs to be done to cleanse the input, but for now
this will cover the cases we have seen so far in the wild.
2022-12-01 22:15:39 -05:00
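A minimal sketch of the input cleanup described above: strip surrounding whitespace and any trailing slashes or backslashes from the user-typed directory.

```python
# Sketch only: sanitize a user-provided root directory string.
def clean_root_input(raw: str) -> str:
    return raw.strip().rstrip("/\\")

print(clean_root_input("C:\\Users\\fred\\invokeai\\  "))  # -> C:\Users\fred\invokeai
```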
Lincoln Stein
8f050d992e documentation fixes for release 2022-12-01 22:02:50 -05:00
Lincoln Stein
0346095876 fix incorrect syntax for .bat 2022-12-01 22:02:27 -05:00
Lincoln Stein
f9bbc55f74 Merge branch 'source-installer-improvements' into main 2022-12-01 23:18:54 +00:00
Lincoln Stein
878a3907e9 defer loading of Hugging Face concepts until needed
Some users have been complaining that the CLI "freezes" for a while
before the invoke> prompt appears. I believe this is due to internet
delay while the concepts library names are downloaded by the autocompleter.
I have changed logic so that the concepts are downloaded the first time
the user types a < and tabs.
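
A rough sketch of the lazy-loading idea, assuming a readline completer; `download_concept_names()` is a hypothetical stand-in for the real (and potentially slow) concepts-library fetch:

```python
import readline

def download_concept_names():
    # Hypothetical helper: in reality this is the network call that fetches
    # the Hugging Face concepts-library names.
    return ["<midjourney-style>", "<moebius>", "<hoi4-leaders>"]

class ConceptCompleter:
    def __init__(self):
        self._concepts = None  # not fetched until first needed

    def complete(self, text, state):
        if text.startswith("<"):
            if self._concepts is None:       # first '<' + Tab triggers the download
                self._concepts = download_concept_names()
            matches = [c for c in self._concepts if c.startswith(text)]
            return matches[state] if state < len(matches) else None
        return None

readline.set_completer_delims(" \t\n")       # keep '<' as part of the word
readline.set_completer(ConceptCompleter().complete)
readline.parse_and_bind("tab: complete")
```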
2022-12-01 17:56:18 -05:00
Lincoln Stein
4cfb41d9ae configure_invokeai.py enhancement
- Adds a new option to download <a>ll the models, in addition
  to <r>ecommended and <c>ustomized.
2022-12-01 15:59:14 -05:00
Lincoln Stein
6ec64ecb3c fix commit conflict markers 2022-12-01 15:07:54 -05:00
Lincoln Stein
540315edaa rename to binary_installer in build docs 2022-12-01 14:58:07 -05:00
Lincoln Stein
cf10a1b736 Merge branch 'main' into source-installer-improvements 2022-12-01 19:45:47 +00:00
Lincoln Stein
9fb2a43780 rename "installer" to "binary_installer"
- Fix up internal names so scripts run properly
2022-12-01 19:40:47 +00:00
Lincoln Stein
1b743f7d9b source installer improvements and documentation
- Source installer provides more context for what it is doing, and
  sends user to help/troubleshooting pages when something goes wrong.

- install.sh and install.bat are renamed to install.sh.in and install.bat.in
  to discourage users from running them from within the repository.

- Documentation updated
2022-12-01 19:40:13 +00:00
Damian Stewart
d7bf3f7d7b make .sh/.bat files inside installer/ non executable (#1664)
* make binary installer executables non-executable inside the repo

* update docs to match previous commit
2022-12-01 19:35:21 +01:00
Lincoln Stein
eba31e7caf Documentation updates for 2.2 release 2022-12-01 08:09:31 -05:00
Lincoln Stein
bde456f9fa fix startup messages and a startup crash
- make the warnings about patchmatch less redundant
- only warn about being unable to load concepts from Hugging Face
  library once
- do not crash when unable to load concepts from Hugging Face
  due to network connectivity issues
2022-12-01 07:42:31 -05:00
Lincoln Stein
9ee83380e6 fix missing history file in output directory 2022-12-01 07:39:26 -05:00
Lincoln Stein
6982e6a469 rebuilt frontend 2022-11-30 19:20:57 -05:00
Lincoln Stein
0f4d71ed63 Merge dev into main for 2.2.0 (#1642)
* Fixes inpainting + code cleanup

* Disable stage info in Inpainting Tab

* Mask Brush Preview now always at 0.5 opacity

The new mask is only visible properly at max opacity, but at max opacity the brush preview becomes fully opaque, blocking the view. So the mask brush preview now remains at 0.5 no matter what the brush opacity is.

* Remove save button from Canvas Controls (cleanup)

* Implements invert mask

* Changes "Invert Mask" to "Preserve Masked Areas"

* Fixes (?) spacebar issues

* Patches redux-persist and redux-deep-persist with debounced persists

Our app changes redux state very, very often. As our undo/redo history grows, the calls to persist state start to take in the 100ms range, due to the deep cloning of the history. This causes very noticeable performance lag.

The deep cloning is required because we need to blacklist certain items in redux from being persisted (e.g. the app's connection status).

Debouncing the whole process of persistence is a simple and effective solution. Unfortunately, `redux-persist` dropped `debounce` between v4 and v5, replacing it with `throttle`. `throttle`, instead of delaying the expensive action until a period of X ms of inactivity, simply ensures the action is executed at least every X ms. Of course, this does not fix our performance issue. 

The patch is very simple. It adds a `debounce` argument - a number of milliseconds - and debounces `redux-persist`'s `update()` method (provided by `createPersistoid`) by that many ms.

Before this, I also tried writing a custom storage adapter for `redux-persist` to debounce the calls to `localStorage.setItem()`. While this worked and was far less invasive, it doesn't actually address the issue. It turns out `setItem()` is a very fast part of the process.

We use `redux-deep-persist` to simplify the `redux-persist` configuration, which can get complicated when you need to blacklist or whitelist deeply nested state. There is also a patch here for that library because it uses the same types as `redux-persist`.

Unfortunately, the last release of `redux-persist` used a package `flat-stream` which was malicious and has been removed from npm. The latest commits to `redux-persist` (about 1 year ago) do not build; we cannot use the master branch. And between the last release and last commit, the changes have all been breaking.

Patching this last release (about 3 years old at this point) directly is far simpler than attempting to fix the upstream library's master branch or figuring out an alternative to the malicious and now non-existent dependency.
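
The actual patch is to the JavaScript `redux-persist` package; the following is only a small Python sketch of the debounce-vs-throttle distinction described above, with all names illustrative:

```python
import threading

def debounce(wait_seconds):
    """Delay calls to fn until there has been `wait_seconds` of inactivity.

    A throttle, by contrast, would run fn at most once per interval even
    while calls keep arriving; a debounce waits for the calls to stop.
    """
    def decorator(fn):
        timer = None
        lock = threading.Lock()

        def wrapper(*args, **kwargs):
            nonlocal timer
            with lock:
                if timer is not None:
                    timer.cancel()  # a new call resets the countdown
                timer = threading.Timer(wait_seconds, fn, args, kwargs)
                timer.start()
        return wrapper
    return decorator

@debounce(0.3)
def persist_state():
    print("expensive persist happens only after 300 ms of quiet")
```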

* Adds debouncing

* Fixes AttributeError: 'dict' object has no attribute 'invert_mask'

* Updates package.json to use redux-persist patches

* Attempts to fix redux-persist debounce patch

* Fixes undo/redo

* Fixes invert mask

* Debounce > 300ms

* Limits history to 256 for each of undo and redo

* Canvas styling

* Hotkeys improvement

* Add Metadata To Viewer

* Increases CFG Scale max to 200

* Fix gallery width size for Outpainting

Also fixes the canvas resizing failing on fast pushes

* Fixes disappearing canvas grid lines

* Adds staging area

* Fixes "use all" not setting variationAmount

Now sets to 0 when the image had variations.

* Builds fresh bundle

* Outpainting tab loads to empty canvas instead of upload

* Fixes wonky canvas layer ordering & compositing

* Fixes error on inpainting paste back

`TypeError: 'float' object cannot be interpreted as an integer`

* Hides staging area outline on mouseover prev/next

* Fixes inpainting not doing img2img when no mask

* Fixes bbox not resizing in outpainting if partially off screen

* Fixes crashes during iterative outpaint. Still doesn't work correctly though.

* Fix iterative outpainting by restoring original images

* Moves image uploading to HTTP

- It all seems to work fine
- A lot of cleanup is still needed
- Logging needs to be added
- May need types to be reviewed

* Fixes: outpainting temp images show in gallery

* WIP refactor to unified canvas

* Removes console.log from redux-persist patch

* Initial unification of canvas

* Removes all references to split inpainting/outpainting canvas

* Add patchmatch and infill_method parameter to prompt2image (options are 'patchmatch' or 'tile').

* Fixes app after removing in/out-painting refs

* Rebases on dev, updates new env files w/ patchmatch

* Organises features/canvas

* Fixes bounding box ending up offscreen

* Organises features/canvas

* Stops unnecessary canvas rescales on gallery state change

* Fixes 2px layout shift on toggle canvas lock

* Clips lines drawn while canvas locked

When drawing with the locked canvas, if a brush stroke gets too close to the edge of the canvas and its stroke would extend past the edge of the canvas, the edge of that stroke will be seen after unlocking the canvas.

This could cause a problem if you unlock the canvas and now have a bunch of strokes just outside the init image area, which are far back in undo history and you cannot easily erase.

With this change, lines drawn while the canvas is locked get clipped to the initial image bbox, fixing this issue.

Additionally, the merge and save to gallery functions have been updated to respect the initial image bbox so they function how you'd expect.
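
A toy approximation of the clipping idea (the real canvas applies a clip region to the Konva layer; here, points outside the initial image bbox are simply dropped to show the intent):

```python
from dataclasses import dataclass

@dataclass
class BBox:
    x: float
    y: float
    width: float
    height: float

    def contains(self, px: float, py: float) -> bool:
        return (self.x <= px <= self.x + self.width
                and self.y <= py <= self.y + self.height)

def clip_stroke_to_bbox(points, bbox):
    # Keep only the stroke points that fall inside the initial image bbox;
    # a crude stand-in for the layer clip region used by the actual canvas.
    return [(px, py) for px, py in points if bbox.contains(px, py)]

init_bbox = BBox(x=0, y=0, width=512, height=512)
stroke = [(10, 10), (600, 20), (100, 100)]
print(clip_stroke_to_bbox(stroke, init_bbox))  # the off-canvas point is dropped
```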

* Fixes reset canvas view when locked

* Fixes send to buttons

* Fixes bounding box not being rounded to 64

* Abandons "inpainting" canvas lock

* Fixes save to gallery including empty area, adds download and copy image

* Fix Current Image display background going over image bounds

* Sets status immediately when clicking Invoke

* Adds hotkeys and refactors sharing of konva instances

Adds hotkeys to canvas. As part of this change, the access to konva instance objects was refactored:

Previously closure'd refs were used to indirectly get access to the konva instances outside of react components.

Now, getter and setter functions are used to provide access directly to the konva objects.

* Updates hotkeys

* Fixes canvas showing spinner on first load

Also adds good default canvas scale and positioning when no image is on it

* Fixes possible hang on MaskCompositer

* Improves behaviour when setting init canvas image/reset view

* Resets bounding box coords/dims when no image present

* Disables canvas actions which cannot be done during processing

* Adds useToastWatcher hook

- Dispatch an `addToast` action with standard Chakra toast options object to add a toast to the toastQueue
- The hook is called in App.tsx and just useEffect's w/ toastQueue as dependency to create the toasts
- So now you can add toasts anywhere you have access to `dispatch`, which includes middleware and thunks
- Adds first usage of this for the save image buttons in canvas

* Update Hotkey Info

Add missing tooltip hotkeys and update the hotkeys modal to reflect the new hotkeys for the Unified Canvas.

* Fix theme changer not displaying current theme on page refresh

* Fix tab count in hotkeys panel

* Unify Brush and Eraser Sizes

* Fix staging area display toggle not working

* Staging Area delete button is now red

So it doesn't feel blended in with the rest of them.

* Revert "Fix theme changer not displaying current theme on page refresh"

This reverts commit 903edfb803e743500242589ff093a8a8a0912726.

* Add arguments to use SSL to webserver

* Integrates #1487 - touch events

Need to add:
- Pinch zoom
- Touch-specific handling (some things aren't quite right)

* Refactors upload-related async thunks

- Now standard thunks instead of RTK createAsyncThunk()
- Adds toasts for all canvas upload-related actions

* Reorganises app file structure

* Fixes Canvas Auto Save to Gallery

* Fixes staging area outline

* Adds staging area hotkeys, disables gallery left/right when staging

* Fixes Use All Parameters

* Fix metadata viewer image url length when viewing intermediate

* Fixes intermediate images being tiny in txt2img/img2img

* Removes stale code

* Improves canvas status text and adds option to toggle debug info

* Fixes paste image to upload

* Adds model drop-down to site header

* Adds theme changer popover

* Fix missing key on ThemeChanger map

* Fixes stage position changing on zoom

* Hotkey Cleanup

- Viewer is now Z
- Canvas Move tool is V - sync with PS
- Removed some unused hotkeys

* Fix canvas resizing when both options and gallery are unpinned

* Implements thumbnails for gallery

- Thumbnails are saved whenever an image is saved, and when gallery requests images from server
- Thumbnails saved at original image aspect ratio with width of 128px as WEBP
- If the thumbnail property of an image is unavailable for whatever reason, the image's full size URL is used instead
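
A minimal Pillow sketch of the thumbnail step described above (function name and paths are illustrative; the width was later bumped and thumbnails moved to their own directory, as the follow-up items note):

```python
from pathlib import Path
from PIL import Image

def save_thumbnail(image_path: str, thumbnails_dir: str, width: int = 128) -> Path:
    # Save a WEBP thumbnail at a fixed width, preserving the original aspect ratio.
    image = Image.open(image_path)
    height = round(image.height * width / image.width)
    thumb = image.resize((width, height), Image.LANCZOS)
    out_path = Path(thumbnails_dir) / (Path(image_path).stem + ".webp")
    out_path.parent.mkdir(parents=True, exist_ok=True)
    thumb.save(out_path, format="WEBP")
    return out_path
```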

* Saves thumbnails to separate thumbnails directory

* Thumbnail size = 256px

* Fix Lightbox Issues

* Disables canvas image saving functions when processing

* Fix index error on going past last image in Gallery

* WIP - Lightbox Fixes

Still need to fix the images not being centered on load when the image res changes

* Fixes another similar index error, simplifies logic

* Reworks canvas toolbar

* Fixes canvas toolbar upload button

* Cleans up IAICanvasStatusText

* Improves metadata handling, fixes #1450

- Removes model list from metadata
- Adds generation's specific model to metadata
- Displays full metadata in JSON viewer

* Gracefully handles corrupted images; fixes #1486

- App does not crash if corrupted image loaded
- Error is displayed in the UI console and CLI output if an image cannot be loaded

* Adds hotkey to reset canvas interaction state

If the canvas' interaction state (e.g. isMovingBoundingBox, isDrawing, etc) get stuck somehow, user can press Escape to reset the state.

* Removes stray console.log()

* Fixes bug causing gallery to close on context menu open

* Minor bugfixes

- When doing long-running canvas image exporting actions, display indeterminate progress bar
- Fix staging area image outline not displaying after committing/discarding results

* Removes unused imports

* Fixes repo root .gitignore ignoring frontend things

* Builds fresh bundle

* Styling updates

* Removes reasonsWhyNotReady

The popover doesn't play well with the button being disabled, and I don't think it adds any value.

* Image gallery resize/style tweaks

* Styles buttons for clearing canvas history and mask

* First pass on Canvas options panel

* Fixes bug where discarding staged images results in loss of history

* Adds Save to Gallery button to staging toolbar

* Rearrange some canvas toolbar icons

Put brush stuff together and canvas movement stuff together

* Fix gallery maxwidth on unified canvas

* Update Layer hotkey display to UI

* Adds option to crop to bounding box on save

* Masking option tweaks

* Crop to Bounding Box > Save Box Region Only

* Adds clear temp folder

* Updates mask options popover behavior

* Builds fresh bundle

* Fix styling on alert modals

* Fix input checkbox styling being incorrect on light theme

* Styling fixes

* Improves gallery resize behaviour

* Cap gallery size on canvas tab so it doesn't overflow

* Fixes bug when postprocessing image with no metadata

* Adds IAIAlertDialog component

* Moves Loopback to app settings

* Fixes metadata viewer not showing metadata after refresh

Also adds Dream-style prompt to metadata

* Adds outpainting specific options

* Linting

* Fixes gallery width on lightbox, fixes gallery button expansion

* Builds fresh bundle

* Fix Lightbox images of different res not centering

* Update feature tooltip text

* Highlight mask icon when on mask layer

* Fix gallery not resizing correctly on open and close

* Add loopback to just img2img. Remove from settings.

* Fix to gallery resizing

* Removes Advanced checkbox, cleans up options panel for unified canvas

* Minor styling fixes to new options panel layout

* Styling Updates

* Adds infill method

* Tab Styling Fixes

* memoize outpainting options

* Fix unnecessary gallery re-renders

* Isolate Cursor Pos debug text on canvas to prevent rerenders

* Fixes missing postprocessed image metadata before refresh

* Builds fresh bundle

* Fix rerenders on model select

* Floating panel re-render fix

* Simplify fullscreen hotkey selector

* Add Training WIP Tab

* Adds Training icon

* Move full screen hotkey to floating to prevent tab rerenders

* Adds single-column gallery layout

* Fixes crash on cancel with intermediates enabled, fixes #1416

* Updates npm dependencies

* Fixes img2img attempting inpaint when init image has transparency

* Fixes missing threshold and perlin parameters in metadata viewer

* Renames "Threshold" > "Noise Threshold"

* Fixes postprocessing not being disabled when clicking use all

* Builds fresh bundle

* Adds color picker

* Lints & builds fresh bundle

* Fixes iterations being disabled when seed random & variations are off

* Un-floors cursor position

* Changes color picker preview to circles

* Fixes variation params not set correctly when recalled

* Fixes invoke hotkey not working in input fields

* Simplifies Accordion

Prep for adding reset buttons for each section

* Fixes mask brush preview color

* Committing color picker color changes tool to brush

* Color picker does not overwrite user-selected alpha

* Adds brush color alpha hotkey

* Lints

* Removes force_outpaint param

* Add inpaint size options to inpaint at a larger size than the actual inpaint image, then scale back down for recombination
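
Roughly, the scale-up/scale-down flow looks like the sketch below, where `run_inpaint` is a hypothetical stand-in for the actual inpainting pipeline:

```python
from PIL import Image

def inpaint_scaled(image: Image.Image, mask: Image.Image, scale: float, run_inpaint):
    # Upscale the image and mask, inpaint at the larger size for more detail,
    # then scale the result back down to the original dimensions for recombination.
    big_size = (int(image.width * scale), int(image.height * scale))
    big_image = image.resize(big_size, Image.LANCZOS)
    big_mask = mask.resize(big_size, Image.NEAREST)  # keep the mask hard-edged
    big_result = run_inpaint(big_image, big_mask)
    return big_result.resize(image.size, Image.LANCZOS)
```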

* Bug fix for inpaint size

* Adds inpaint size (as scale bounding box) to UI

* Adds auto-scaling for inpaint size

* Improves scaled bbox display logic

* Fixes bug with clear mask and history

* Fixes shouldShowStagingImage not resetting to true on commit

* Builds fresh bundle

* Fixes canvas failing to scale on first run

* Builds fresh bundle

* Fixes unnecessary canvas scaling

* Adds gallery drag and drop to img2img/canvas

* Builds fresh bundle

* Fix desktop mode being broken with new versions of flaskwebgui

* Fixes canvas dimensions not setting on first load

* Builds fresh bundle

* stop crash on !import_models call on model inside rootdir

- addresses bug report #1546

* prevent "!switch state gets confused if model switching fails"

- If !switch were to fail on a particular model, then generate got
  confused and wouldn't try again until you switch to a different working
  model and back again.

- This commit fixes and closes #1547

* Revert "make the docstring more readable and improve the list_models logic"

This reverts commit 248068fe5d.

* fix model cache path

* also set fail-fast to its default (true)
in this way the whole action fails if one job fails
this should unblock the runners!!!

* fix output path for Archive results

* disable checks for python 3.9

* Update-requirements and test-invoke-pip workflow (#1574)

* update requirements files

* update test-invoke-pip workflow

* move requirements-mkdocs.txt to docs folder (#1575)

* move requirements-mkdocs.txt to docs folder

* update copyright

* Fixes outpainting with resized inpaint size

* Interactive configuration (#1517)

* Update scripts/configure_invokeai.py

prevent crash if output exists

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>

* implement changes requested by reviews

* default to correct root and output directory on Windows systems

- Previously the script was relying on the readline buffer editing
  feature to set up the correct default. But this feature doesn't
  exist on windows.

- This commit detects when user typed return with an empty directory
  value and replaces with the default directory.
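
A minimal sketch of the empty-input fallback, assuming a plain `input()` prompt (the prompt text and default path are illustrative, not the actual configure script):

```python
import os

def prompt_for_directory(prompt: str, default: str) -> str:
    # readline buffer pre-filling isn't available on Windows, so show the
    # default in the prompt and fall back to it when the user just hits Return.
    answer = input(f"{prompt} [{default}]: ").strip()
    return answer or default

root = prompt_for_directory("InvokeAI root directory", os.path.expanduser("~/invokeai"))
```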

* improved readability of directory choices

* Update scripts/configure_invokeai.py

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>

* better error reporting at startup

- If user tries to run the script outside of the repo or runtime directory,
  a more informative message will appear explaining the problem.

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>

* Embedding merging (#1526)

* add whole <style token> to vocab for concept library embeddings

* add ability to load multiple concept .bin files

* make --log_tokenization respect custom tokens

* start working on concept downloading system

* preliminary support for dynamic loading and merging of multiple embedded models

- The embedding_manager is now enhanced with ldm.invoke.concepts_lib,
  which handles dynamic downloading and caching of embedded models from
  the Hugging Face concepts library (https://huggingface.co/sd-concepts-library)

- Downloading of an embedded model is triggered by the presence of one or more
  <concept> tags in the prompt.

- Once the embedded model is downloaded, its trigger phrase will be loaded
  into the embedding manager and the prompt's <concept> tag will be replaced
  with the <trigger_phrase>

- The downloaded model stays on disk for fast loading later.

- The CLI autocomplete will complete partial <concept> tags for you. Type a
  '<' and hit tab to get all ~700 concepts.
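
A simplified sketch of the tag-replacement step, with `load_concept()` as a hypothetical stand-in for the real ldm.invoke.concepts_lib download/cache logic:

```python
import re
from typing import Optional

CONCEPT_TAG = re.compile(r"<([\w-]+)>")

def load_concept(name: str) -> Optional[str]:
    # Hypothetical: download/cache the embedding for `name` and return its
    # trigger phrase, or None if the concept cannot be found.
    known = {"hoi4-leaders": "<HOI4-Leader>"}
    return known.get(name)

def expand_concepts(prompt: str) -> str:
    # Replace each <concept> tag with the trigger phrase of the downloaded
    # embedding; an unknown concept becomes <None>, as the bug list below notes.
    def substitute(match: re.Match) -> str:
        trigger = load_concept(match.group(1))
        return trigger if trigger is not None else "<None>"
    return CONCEPT_TAG.sub(substitute, prompt)

print(expand_concepts("a portrait in the style of <hoi4-leaders>"))
```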

BUGS AND LIMITATIONS:

- MODEL NAME VS TRIGGER PHRASE

  You must use the name of the concept embed model from the SD
  library, and not the trigger phrase itself. Usually these are the
  same, but not always. For example, the model named "hoi4-leaders"
  corresponds to the trigger "<HOI4-Leader>"

  One reason for this design choice is that there is no apparent
  constraint on the uniqueness of the trigger phrases and one trigger
  phrase may map onto multiple models. So we use the model name
  instead.

  The second reason is that there is no way I know of to search
  Hugging Face for models with certain trigger phrases. So we'd have
  to download all 700 models to index the phrases.

  The problem this presents is that this may confuse users, who will
  want to reuse prompts from distributions that use the trigger phrase
  directly. Usually this will work, but not always.

- WON'T WORK ON A FIREWALLED SYSTEM

  If the host running IAI has no internet connection, it can't
  download the concept libraries. I will add a script that allows
  users to preload a list of concept models.

- BUG IN PROMPT REPLACEMENT WHEN MODEL NOT FOUND

  There's a small bug that occurs when the user provides an invalid
  model name. The <concept> gets replaced with <None> in the prompt.

* fix loading .pt embeddings; allow multi-vector embeddings; warn on dupes

* simplify replacement logic and remove cuda assumption

* download list of concepts from hugging face

* remove misleading customization of '*' placeholder

the existing code as-is did not do anything; unclear what it was supposed to do.

the obvious alternative -- setting using 'placeholder_strings' instead of
'placeholder_tokens' to match model.params.personalization_config.params.placeholder_strings --
caused a crash. i think this is because the passed string also needed to be handed over
on init of the PersonalizedBase as the 'placeholder_token' argument.
this is weird config dict magic and i don't want to touch it. put a
breakpoint in personalized.py line 116 (top of PersonalizedBase.__init__) if
you want to have a crack at it yourself.

* address all the issues raised by damian0815 in review of PR #1526

* actually resize the token_embeddings

* multiple improvements to the concept loader based on code reviews

1. Activated the --embedding_directory option (alias --embedding_path)
   to load a single embedding or an entire directory of embeddings at
   startup time.

2. Can turn off automatic loading of embeddings using --no-embeddings.

3. Embedding checkpoints are scanned with the pickle scanner.

4. More informative error messages when a concept can't be loaded due
   either to a 404 not found error or a network error.

* autocomplete terms end with ">" now

* fix startup error and network unreachable

1. If the .invokeai file does not contain the --root and --outdir options,
  invoke.py will now fix it.

2. Catch and handle network problems when downloading hugging face textual
   inversion concepts.

* fix misformatted error string

Co-authored-by: Damian Stewart <d@damianstewart.com>

* model_cache.py: fix list_models

Signed-off-by: devops117 <55235206+devops117@users.noreply.github.com>

* add statement of values (#1584)

* this adds the Statement of Values

Google doc source = https://docs.google.com/document/d/1-PrUKDJcxy8OyNGc8CyiHhv2VgLvjt7LRGlEpbg1nmQ/edit?usp=sharing

* Fix heading

* Update InvokeAI_Statement_of_Values.md

* Update InvokeAI_Statement_of_Values.md

* Update InvokeAI_Statement_of_Values.md

* Update InvokeAI_Statement_of_Values.md

* Update InvokeAI_Statement_of_Values.md

* add keturn and mauwii to the team member list

* Fix punctuation

* this adds the Statement of Values

Google doc source = https://docs.google.com/document/d/1-PrUKDJcxy8OyNGc8CyiHhv2VgLvjt7LRGlEpbg1nmQ/edit?usp=sharing

* add keturn and mauwii to the team member list

* fix formatting
- make sub-bullets use * (decide to use either - or * throughout)
- indent sub-bullets
Sorry, I first only looked at the code version and found this only after
looking at the rendered markdown version

* use multiparagraph numbered sections

* Break up Statement Of Values as per comments on #1584

* remove duplicated word, reduce vagueness

it's important not to overstate how many artists we are consulting.

* fix typo (sorry blessedcoolant)

Co-authored-by: mauwii <Mauwii@outlook.de>
Co-authored-by: damian <git@damianstewart.com>

* update dockerfile (#1551)

* update dockerfile

* remove not existing file from .dockerignore

* remove bloat and unnecessary step
also use --no-cache-dir for pip install
image is now close to 2GB

* make Dockerfile a variable

* set base image to `ubuntu:22.10`

* add build-essential

* link outputs folder for persistence

* update tag variable

* update docs

* fix non-customizable build args, add reqs output

* !model_import autocompletes in ROOTDIR

* Adds psychedelicious to statement of values signature (#1602)

* add a --no-patchmatch option to disable patchmatch loading (#1598)

This feature was added to prevent the CI Macintosh tests from erroring
out when patchmatch is unable to retrieve its shared library from
github assets.

* Fix #1599 by relaxing the `match_trigger` regex (#1601)

* Fix #1599 by relaxing the `match_trigger` regex

Also simplify logic and reduce duplication.

* restrict trigger regex again (but not so far)

* make concepts library work with Web UI

This PR makes it possible to include a Hugging Face concepts library
<style-or-subject-trigger> in the WebUI prompt. The metadata seems
to be correctly handled.

* documentation enhancements (#1603)

- Add documentation for the Hugging Face concepts library and TI embedding.

- Fixup index.md to point to each of the feature documentation files,
  including ones that are pending.

* tweak setup and environment files for linux & pypatchmatch (#1580)

* tweak setup and environment files for linux & pypatchmatch

- Downgrade python requirements to 3.9 because 3.10 is not supported
  on Ubuntu 20.04 LTS (widely-used distro)
- Use our github pypatchmatch 0.1.3 in order to install Makefile
  where it needs to be.
- Restored "-e ." as the last install step on pip installs. Hopefully
  this will not trigger the high-CPU hang we've previously experienced.

* keep windows on basicsr 1.4.1

* keep windows on basicsr 1.4.1

* bump pypatchmatch requirement to 0.1.4

- This brings in a version of pypatchmatch that will gracefully
  handle internet connection not available at startup time.
- Also refactors and simplifies the handling of gfpgan's basicsr requirement
  across various platforms.

* revert to older version of list_models() (#1611)

This restores the correct behavior of list_models() and quenches
the bug of list_models() returning a single model entry named "name".

I have not investigated what was wrong with the new version, but I
think it may have to do with changes to the behavior in dict.update()

* Fixes for #1604 (#1605)

* Converts ESRGAN image input to RGB

- Also adds typing for image input.
- Partially resolves #1604

* ensure there are unmasked pixels before color matching

Co-authored-by: Kyle Schouviller <kyle0654@hotmail.com>

* update index.md (#1609)

- comment out non-existent link
- fix indentation
- add separator between feature categories

* Debloat-docker (#1612)

* debloat Dockerfile
- fewer options but more user-friendly
- better Entrypoint to simulate CLI usage
- without a command the container still starts the web host

* debloat build.sh

* better syntax in run.sh

* update Docker docs
- fix description of VOLUMENAME
- update run script example to reflect new entrypoint

* Test installer (#1618)

* test linux install

* try removing http from parsed requirements

* pip install confirmed working on linux

* ready for linux testing

- rebuilt py3.10-linux-x86_64-cuda-reqs.txt to include pypatchmatch
  dependency.
- point install.sh and install.bat to test-installer branch.

* Updates MPS reqs

* detect broken readline history files

* fix download.pytorch.org URL

* Test installer (Win 11) (#1620)

Co-authored-by: Cyrus Chan <cyruswkc@hku.hk>

* Test installer (MacOS 13.0.1 w/ torch==1.12.0) (#1621)

* Test installer (Win 11)

* Test installer (MacOS 13.0.1 w/ torch==1.12.0)

Co-authored-by: Cyrus Chan <cyruswkc@hku.hk>

* change sourceball to development for testing

* Test installer (MacOS 13.0.1 w/ torch==1.12.1 & torchvision==1.13.1) (#1622)

* Test installer (Win 11)

* Test installer (MacOS 13.0.1 w/ torch==1.12.0)

* Test installer (MacOS 13.0.1 w/ torch==1.12.1 & torchvision==1.13.1)

Co-authored-by: Cyrus Chan <cyruswkc@hku.hk>

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Co-authored-by: Cyrus Chan <82143712+cyruschan360@users.noreply.github.com>
Co-authored-by: Cyrus Chan <cyruswkc@hku.hk>

* 2.2 Doc Updates (#1589)

* Unified Canvas Docs & Assets

Unified Canvas draft

Advanced Tools Updates

Doc Updates (lstein feedback)

* copy edits to Unified Canvas docs

- consistent capitalisation and feature naming
- more intimate address (replace "the user" with "you") for improved User
  Engagement(tm)
- grammatical massaging and *poesie*

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
Co-authored-by: damian <git@damianstewart.com>

* include a step after config to `cat ~/.invokeai` (#1629)

* disable patchmatch in CI actions (#1626)

* disable patchmatch in CI actions

* fix indentation

* replace tab with spaces

Co-authored-by: Matthias Wild <40327258+mauwii@users.noreply.github.com>
Co-authored-by: mauwii <Mauwii@outlook.de>

* Fix installer script for macOS. (#1630)

* refer to the platform as 'osx' instead of 'mac', otherwise the
composed URL to micromamba is wrong.
* move the `-O` option to `tar` to be grouped with the other tar flags
to avoid the `-O` being interpreted as something to unarchive.

* Removes symlinked environment.yaml (#1631)

Was unintentionally added in #1621

* Fix inpainting with iterations (#1635)

* fix error when inpainting using runwayml inpainting model (#1634)

- error was "Omnibus object has no attribute pil_image"
- closes #1596

* add k_dpmpp_2_a and k_dpmpp_2 solvers options (#1389)

* add k_dpmpp_2_a and k_dpmpp_2 solvers options

* update frontend

Co-authored-by: Victor <victorca25@users.noreply.github.com>
Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>

* add .editorconfig (#1636)

* Web UI 2.2 bugfixes (#1572)

* Fixes bug preventing multiple images from being generated

* Fixes valid seam strength value range

* Update Delete Alert Text

Indicates to the user that images are not permanently deleted.

* Fixes left/right arrows not working on gallery

* Fixes initial image on load erroneously set to a user uploaded image

Should be a result gallery image.

* Lightbox Fixes

- Lightbox is now a button in the current image buttons
- Lightbox is also now available in the gallery context menu
- Lightbox zoom issues fixed
- Lightbox has a fade in animation.

* Fix image display wrapper in current preview not overflow bounds

* Revert "Fix image display wrapper in current preview not overflow bounds"

This reverts commit 5511c82714dbf1d1999d64e8bc357bafa34ddf37.

* Change Staging Area discard icon from Bin to X

* Expose Snap Threshold and Move Snap Settings to BBox Panel

* Changes img2img strength default to 0.75

* Fixes drawing triggering when mouse enters canvas w/ button down

When we only supported inpainting and no zoom, this was useful. It allowed the cursor to leave the canvas (which was easy to do given the limited canvas dimensions) without losing the "I am drawing" state.

With a zoomable canvas this is no longer as useful.

Additionally, we have more popovers and tools (like the color pickers) which result in unexpected brush strokes. This fixes that issue.

* Revert "Expose Snap Threshold and Move Snap Settings to BBox Panel"

We will handle this a bit differently - by allowing the grid origin to be moved. I will dig in at some point.

This reverts commit 33c92ecf4da724c2f17d9d91c7ea31a43a2f6deb.

* Adds Limit Strokes to Box

* Adds fill bounding box button

* Adds erase bounding box button

* Changes Staging area discard icon to match others

* Fixes right click breaking move tool

* Fixes brush preview visibility issue with "darken outside box"

* Fixes history bugs with addFillRect, addEraseRect, and other actions

* Adds missing `key`

* Fixes postprocessing being applied to canvas generations

* Fixes bbox not getting scaled in various situations

* Fixes staging area show image toggle not resetting on accept/discard

* Locks down canvas while generating/staging

* Fixes move tool breaking when canvas loses focus during move/transform

* Hides cursor when restrict strokes is on and mouse outside bbox

* Lints

* Builds fresh bundle

* Fix overlapping hotkey for Fill Bounding Box

* Build Fresh Bundle

* Fixes bug with mask and bbox overlay

* Builds fresh bundle

Co-authored-by: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>

* disable NSFW checker loading during the CI tests (#1641)

* disable NSFW checker loading during the CI tests

The NSFW filter apparently causes invoke.py to crash during CI testing,
possibly due to out of memory errors. This workaround disables NSFW
model loading.

* doc change

* fix formatting errors in yml files

* Configure the NSFW checker at install time with default on (#1624)

* configure the NSFW checker at install time with default on

1. Changes the --safety_checker argument to --nsfw_checker and
--no-nsfw_checker. The original argument is recognized for backward
compatibility.

2. The configure script asks users whether to enable the checker
(default yes). Also offers users ability to select default sampler and
number of generation steps.

3. Enables the pasting of the caution icon onto blurred images when
InvokeAI is installed into the package directory.

4. Adds documentation for the NSFW checker, including caveats about
accuracy, memory requirements, and intermediate image display.
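
The flags named here are the ones the commit describes (`--nsfw_checker`, `--no-nsfw_checker`, with `--safety_checker` recognized for backward compatibility); the actual wiring in InvokeAI's args code may differ. A sketch with argparse:

```python
import argparse

parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group()
group.add_argument("--nsfw_checker", "--safety_checker",  # old name kept as an alias
                   dest="nsfw_checker", action="store_true", default=True,
                   help="enable the NSFW checker (default)")
group.add_argument("--no-nsfw_checker",
                   dest="nsfw_checker", action="store_false",
                   help="disable the NSFW checker")

args = parser.parse_args(["--no-nsfw_checker"])
print(args.nsfw_checker)  # False
```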

* use better fitting icon

* NSFW defaults false for testing

* set default back to nsfw active

Co-authored-by: Matthias Wild <40327258+mauwii@users.noreply.github.com>
Co-authored-by: mauwii <Mauwii@outlook.de>

Signed-off-by: devops117 <55235206+devops117@users.noreply.github.com>
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Co-authored-by: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Co-authored-by: Kyle Schouviller <kyle0654@hotmail.com>
Co-authored-by: javl <mail@jaspervanloenen.com>
Co-authored-by: Kent Keirsey <31807370+hipsterusername@users.noreply.github.com>
Co-authored-by: mauwii <Mauwii@outlook.de>
Co-authored-by: Matthias Wild <40327258+mauwii@users.noreply.github.com>
Co-authored-by: Damian Stewart <d@damianstewart.com>
Co-authored-by: DevOps117 <55235206+devops117@users.noreply.github.com>
Co-authored-by: damian <git@damianstewart.com>
Co-authored-by: Damian Stewart <null@damianstewart.com>
Co-authored-by: Cyrus Chan <82143712+cyruschan360@users.noreply.github.com>
Co-authored-by: Cyrus Chan <cyruswkc@hku.hk>
Co-authored-by: Andre LaBranche <dre@mac.com>
Co-authored-by: victorca25 <41912303+victorca25@users.noreply.github.com>
Co-authored-by: Victor <victorca25@users.noreply.github.com>
2022-11-30 16:12:23 -05:00
Lincoln Stein
8f3f64b22e prevent crash that occurs when changing models.yaml on windows systems
On Windows, `os.rename()` fails if the destination file already exists, so the
rename was not atomic there. This PR changes it to `os.replace()`, which
overwrites the destination in a single step and behaves the same on POSIX systems.
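
A small sketch of the write-then-swap pattern that `os.replace()` enables (file name and content here are illustrative):

```python
import os
import tempfile

def write_atomically(path: str, text: str) -> None:
    # Write to a temporary file in the same directory, then swap it into place.
    # os.replace() overwrites an existing destination on every platform,
    # unlike os.rename(), which errors out on Windows.
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "w") as tmp:
            tmp.write(text)
        os.replace(tmp_path, path)
    except Exception:
        os.unlink(tmp_path)
        raise

write_atomically("models.yaml", "stable-diffusion-1.5:\n  default: true\n")
```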
2022-11-25 16:59:31 -05:00
slashtechno
dba0280790 Fix Colab requirements (again) (#1505) 2022-11-24 20:41:31 -05:00
Kevin Coakley
19e2cff18c Fix micromamba tar command for macOS
Moved the -O from after the file to after the tar command for compatibility with macOS

Signed-off-by: Kevin Coakley <kcoakley@sdsc.edu>
2022-11-18 16:08:17 -05:00
slashtechno
58f65d49b6 Fixed Google Colab requirements URL
Signed-off-by: slashtechno <77907286+slashtechno@users.noreply.github.com>
2022-11-16 22:31:18 -05:00
Lawrence Norton
e5edd025d6 Fixing Dead Link
I believe this is what you meant to link?
2022-11-14 17:34:51 -05:00
Peter Lin
29e229b409 adding troubleshooting tips to the newer doc 2022-11-14 10:31:19 -05:00
Lincoln Stein
93cdb476d9 change installer download repo to main.tar.gz 2022-11-13 16:44:21 -05:00
Lincoln Stein
1305e7a56c documentation hot fixes
- changes pointers to installation instructions from README
- Adds the changelog for the 2.1.3 release
2022-11-13 10:18:28 -05:00
majick
58edf262e4 Fix CWD being removed from path again. Refixes #723 again
Weirdly, I'm not even on a Mac or using a special Python distro.  Just
plain old Debian stable and a plain old venv.
2022-11-13 09:58:42 -05:00
blessedcoolant
fd67df9447 Remove gfpgan_dir
+ Update GFPGAN Model Path Defaults
>  Update them to match the new file hierarchy
2022-11-13 00:27:56 +00:00
Lincoln Stein
45e5053d06 added assets back 2022-11-13 00:23:51 +00:00
Lincoln Stein
9c5999ede1 added caveats to use of installer script 2022-11-12 20:02:01 +00:00
Lincoln Stein
7ddf7f0b7d use invoke-ai gfpgan to store weights in right place 2022-11-12 19:31:41 +00:00
psychedelicious
b8de5244b1 Fixes issue with intermediates size
Sorry @lstein !
2022-11-12 19:03:22 +00:00
Lincoln Stein
72e011a4e4 stop crash on text mask generation 2022-11-12 18:23:16 +00:00
Lincoln Stein
98db0d746c fix crash in inpaint when no seed in original image 2022-11-12 17:57:43 +00:00
Lincoln Stein
1a8e007066 merge release-candidate-1-3-2 into main.
Squashed commit of the following:

commit 9a1fe8e7fb
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Sat Nov 12 17:07:40 2022 +0000

    swap in release URLs for installers

commit ff56f5251b
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Sat Nov 12 17:03:21 2022 +0000

    fix up bad unicode chars in invoke.py

commit ed943bd6c7
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Sat Nov 12 16:05:45 2022 +0000

    outcrop improvements, hand-added

commit 7ad2355b1d
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Sat Nov 12 15:14:33 2022 +0000

    documentation fixes

commit 66c920fc19
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Thu Nov 10 16:49:25 2022 -0500

    Revert "Resize hires as an image"

    This reverts commit d05b1b3544.

commit 3fc5cb09f8
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Sat Nov 12 12:43:17 2022 +0000

    fix incorrect link in install

commit 1345ec77ab
Author: tildebyte <337875+tildebyte@users.noreply.github.com>
Date:   Sun Nov 6 19:07:31 2022 -0500

    toil(repo): add tildebyte as owner of installer/ directory

commit b116715490
Author: Kyle Schouviller <kyle0654@hotmail.com>
Date:   Thu Nov 10 21:43:56 2022 -0800

    Fix performance issue introduced by torch cuda cache clear during generation

commit fa3670270e
Author: mauwii <Mauwii@outlook.de>
Date:   Fri Nov 11 12:42:03 2022 +0100

    small update to dockers huggingface section

commit c304250ef6
Author: mauwii <Mauwii@outlook.de>
Date:   Fri Nov 11 12:19:27 2022 +0100

    fix format and Link in INSTALL_INVOKE.md

commit 802ce5dde5
Author: mauwii <Mauwii@outlook.de>
Date:   Fri Nov 11 11:17:49 2022 +0100

    small fixex to format and a link in INSTALL_MANUAL

commit 311ee320ec
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Fri Nov 11 07:23:35 2022 +0000

    ignore installer intermediate files

commit e9df17b374
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Fri Nov 11 07:19:25 2022 +0000

    fix backslash-related syntax error

commit 061fb4ef00
Merge: 52be0d23 4095acd1
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Fri Nov 11 06:50:04 2022 +0000

    Merge branch 'release-candidate-2-1-3' of github.com:/invoke-ai/InvokeAI into release-candidate-2-1-3

commit 52be0d2396
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Fri Nov 11 06:49:45 2022 +0000

    add WindowsLongFileName batfile to source installer

commit 4095acd10e
Author: mauwii <Mauwii@outlook.de>
Date:   Fri Nov 11 07:05:17 2022 +0100

    Doc Updates
    A lot of re-formatting of the new Installation Docs
    also some content updates/corrections

commit 201eb22d76
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Fri Nov 11 04:41:02 2022 +0000

    prevent two models from being marked default in models.yaml

commit 17ab982200
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Fri Nov 11 03:56:54 2022 +0000

    installers download branch HEAD not tag

commit a04965b0e9
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Fri Nov 11 03:48:21 2022 +0000

    improve messaging during installation process

commit 0b529f0c57
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Tue Nov 8 15:22:32 2022 +0000

    enable outcropping of random JPG/PNG images

    - Works best with runwayML inpainting model
    - Numerous code changes required to propagate seed to final metadata.
      Original code predicated on the image being generated within InvokeAI.

commit 6f9f848345
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Tue Nov 8 17:27:42 2022 +0000

    enhance outcropping with ability to direct contents of new regions

    - When outcropping an image you can now add a `--new_prompt` option, to specify
      a new prompt to be used instead of the original one used to generate the image.

    - Similarly you can provide a new seed using `--seed` (or `-S`). A seed of zero
      will pick one randomly.

    - This PR also fixes the crash that happened when trying to outcrop an image
      that does not contain InvokeAI metadata.

commit 918c1589ef
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Thu Nov 10 20:16:47 2022 +0000

    fix #1402

commit 116415b3fc
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Thu Nov 10 21:27:25 2022 +0000

    fix invoke.py crash if no models.yaml file present

    - Script will now offer the user the ability to create a
      minimal models.yaml and then gracefully exit.
    - Closes #1420

commit b4b6eabaac
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Thu Nov 10 16:49:25 2022 -0500

    Revert "Log strength with hires"

    This reverts commit 82d4904c07.

commit 4ef1f4a854
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Thu Nov 10 20:01:49 2022 +0000

    remove temporary directory from repo

commit 510fc4ebaa
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Thu Nov 10 19:59:03 2022 +0000

    remove -e from clipseg load in installer

commit a20914434b
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Thu Nov 10 19:37:07 2022 +0000

    change clipseg repo branch to avoid clipseg not found error

commit 0d134195fd
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Thu Nov 10 18:39:29 2022 +0000

    update repo URL to point to rc

commit 649d8c8573
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Thu Nov 10 18:13:28 2022 +0000

    integrate tildebyte installer

commit a358d370a0
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Thu Nov 10 17:48:14 2022 +0000

    add @tildebyte compiled pip installer

commit 94a9033c4f
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Thu Nov 10 14:52:00 2022 +0000

    ignore source installer zip files

commit 18a947c503
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Thu Nov 10 14:46:36 2022 +0000

    documentation and environment file fixes

    - Have clarified the relationship between the @tildebyte and @cmdr2 installers;
      However, @tildebyte installer merge is still a WIP due to conflicts over
      such things as `invoke.sh`.
    - Rechristened 1click installer as "source" installer. @tildebyte installer will be
      "the" installer. (We'll see which one generates the least support requests and
      maintenance work.)
    - Sync'd `environment-mac.yml` with `development`. The former was failing with a
      taming-transformers error as per https://discord.com/channels/@me/1037201214154231899/1040060947378749460

commit a23b031895
Author: Mike DiGiovanni <vinblau@gmail.com>
Date:   Wed Nov 9 16:44:59 2022 -0500

    Fixes typos in README.md

commit 23af68c7d7
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Thu Nov 10 07:02:27 2022 -0500

    downgrade win installs to basicsr==1.4.1

commit e258beeb51
Merge: 7460c069 e481bfac
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Thu Nov 10 06:37:45 2022 -0500

    Merge branch 'release-candidate-2-1-3' of github.com:invoke-ai/InvokeAI into release-candidate-2-1-3

commit 7460c069b8
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Thu Nov 10 06:36:48 2022 -0500

    remove --prefer-binary from requirements-base.txt

    It appears that some versions of pip do not recognize this option
    when it appears in the requirements file. Did not explore this further
    but recommend --prefer-binary in the manual install instructions on
    the command line.

commit e481bfac61
Merge: 5040747c d1ab65a4
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Thu Nov 10 11:21:56 2022 +0000

    Merge branch 'release-candidate-2-1-3' of github.com:/invoke-ai/InvokeAI into release-candidate-2-1-3

commit 5040747c67
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Thu Nov 10 11:21:43 2022 +0000

    fix windows install instructions & bat file

commit d1ab65a431
Author: mauwii <Mauwii@outlook.de>
Date:   Thu Nov 10 07:18:59 2022 +0100

    update WEBUIHOTKEYS.md

commit af4ee7feb8
Author: mauwii <Mauwii@outlook.de>
Date:   Thu Nov 10 06:33:49 2022 +0100

    update INSTALL_DOCKER.md

commit 764fb29ade
Author: mauwii <Mauwii@outlook.de>
Date:   Thu Nov 10 06:30:15 2022 +0100

    fix formatting in INSTALL.md

commit 1014d3ba44
Author: mauwii <Mauwii@outlook.de>
Date:   Thu Nov 10 06:29:14 2022 +0100

    fix build.sh invokeai_conda_env_file default value

commit 40a48aca88
Author: mauwii <Mauwii@outlook.de>
Date:   Thu Nov 10 05:25:30 2022 +0100

    fix environment-mac.yml
    moved taming-transformers-rom1504 to pip dependencies

commit 92abc00f16
Author: mauwii <Mauwii@outlook.de>
Date:   Thu Nov 10 05:19:52 2022 +0100

    fix test-invoke-conda
    - copy required conda environment yaml
    - use environment.yml
    - I use cp instead of ln since it would be compatible with Windows runners

commit a5719aabf8
Author: mauwii <Mauwii@outlook.de>
Date:   Thu Nov 10 04:14:35 2022 +0100

    update Dockerfile
    - link environment.yml from new environments path
    - change default conda_env_file
    - quote all variables to avoid splitting
    - also remove paths from conda-env-files in build-container.yml

commit 44a18511fa
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Wed Nov 9 20:51:06 2022 +0000

    update paths in container build workflow

commit b850dbadaf
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Wed Nov 9 20:16:57 2022 +0000

    finished reorganization of install docs

commit 9ef8b944d5
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Wed Nov 9 18:50:58 2022 +0000

    tweaks to manual install documentation

    --prefer-binary is an iffy option in the requirements file. It isn't
    supported by some versions of pip, so I removed it from
    requirements-base.txt and inserted it into the manual install
    instructions where it seems to do what it is supposed to.

commit efc5a98488
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Wed Nov 9 18:20:03 2022 +0000

    manual installation documentation tested on Linux

commit 1417c87928
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Wed Nov 9 17:37:06 2022 +0000

    change name of requirements.txt to avoid confusion

commit 2dd6fc2b93
Merge: 22213612 71ee44a8
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Wed Nov 9 17:26:24 2022 +0000

    Merge branch 'release-candidate-2-1-3' of github.com:/invoke-ai/InvokeAI into release-candidate-2-1-3

commit 22213612a0
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Wed Nov 9 17:25:59 2022 +0000

    directory cleanup; working on install docs

commit 71ee44a827
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Wed Nov 9 02:07:13 2022 +0000

    prevent crash when switching to an invalid model

commit b17ca0a5e7
Author: damian0815 <null@damianstewart.com>
Date:   Tue Nov 8 14:28:38 2022 +0100

    don't suppress exceptions when doing cross-attention control

commit 71bbfe4a1a
Author: damian0815 <null@damianstewart.com>
Date:   Tue Nov 8 12:59:34 2022 +0100

    Fix #1362 by improving VRAM usage patterns when doing .swap()

    commit ef3f7a26e242b73c2beb0195c7fd8f654ef47f55
    Author: damian0815 <null@damianstewart.com>
    Date:   Tue Nov 8 12:18:37 2022 +0100

        remove log spam

    commit 7189d649622d4668b120b0dd278388ad672142c4
    Author: damian0815 <null@damianstewart.com>
    Date:   Tue Nov 8 12:10:28 2022 +0100

        change the way saved slicing strategy is applied

    commit 01c40f751ab72955140165c16f95ae411732265b
    Author: damian0815 <null@damianstewart.com>
    Date:   Tue Nov 8 12:04:43 2022 +0100

        fix slicing_strategy_getter callsite

    commit f8cfe25150a346958903316bc710737d99839923
    Author: damian0815 <null@damianstewart.com>
    Date:   Tue Nov 8 11:56:22 2022 +0100

        cleanup, consistent dim=0 also tested

    commit 5bf9b1e890d48e962afd4a668a219b68271e5dc1
    Author: damian0815 <null@damianstewart.com>
    Date:   Tue Nov 8 11:34:09 2022 +0100

        refactored context, tested with non-sliced cross attention control

    commit d58a46e39bf562e7459290d2444256e8c08ad0b6
    Author: damian0815 <null@damianstewart.com>
    Date:   Sun Nov 6 00:41:52 2022 +0100

        cleanup

    commit 7e2c658b4c06fe239311b65b9bb16fa3adec7fd7
    Author: damian0815 <null@damianstewart.com>
    Date:   Sat Nov 5 22:57:31 2022 +0100

        disable logs

    commit 20ee89d93841b070738b3d8a4385c93b097d92eb
    Author: damian0815 <null@damianstewart.com>
    Date:   Sat Nov 5 22:36:58 2022 +0100

        slice saved attention if necessary

    commit 0a7684a22c880ec0f48cc22bfed4526358f71546
    Author: damian0815 <null@damianstewart.com>
    Date:   Sat Nov 5 22:32:38 2022 +0100

        raise instead of asserting

    commit 7083104c7f3a0d8fd96e94a2f391de50a3c942e4
    Author: damian0815 <null@damianstewart.com>
    Date:   Sat Nov 5 22:31:00 2022 +0100

        store dim when saving slices

    commit f7c0808ed383ec1dc70645288a798ed2aa4fa85c
    Author: damian0815 <null@damianstewart.com>
    Date:   Sat Nov 5 22:27:16 2022 +0100

        don't retry on exception

    commit 749a721e939b3fe7c1741e7998dab6bd2c85a0cb
    Author: damian0815 <null@damianstewart.com>
    Date:   Sat Nov 5 22:24:50 2022 +0100

        stuff

    commit 032ab90e9533be8726301ec91b97137e2aadef9a
    Author: damian0815 <null@damianstewart.com>
    Date:   Sat Nov 5 22:20:17 2022 +0100

        more logging

    commit 3dc34b387f033482305360e605809d95a40bf6f8
    Author: damian0815 <null@damianstewart.com>
    Date:   Sat Nov 5 22:16:47 2022 +0100

        logs

    commit 901c4c1aa4b9bcef695a6551867ec8149e6e6a93
    Author: damian0815 <null@damianstewart.com>
    Date:   Sat Nov 5 22:12:39 2022 +0100

        actually set save_slicing_strategy to True

    commit f780e0a0a7c6b6a3db320891064da82589358c8a
    Author: damian0815 <null@damianstewart.com>
    Date:   Sat Nov 5 22:10:35 2022 +0100

        store slicing strategy

    commit 93bb6d566fd18c5c69ef7dacc8f74ba2cf671cb7
    Author: damian <git@damianstewart.com>
    Date:   Sat Nov 5 20:43:48 2022 +0100

        still not it

    commit 5e3a9541f8ae00bde524046963910323e20c40b7
    Author: damian <git@damianstewart.com>
    Date:   Sat Nov 5 17:20:02 2022 +0100

        wip offloading attention slices on-demand

    commit 4c2966aa856b6f3b446216da3619ae931552ef08
    Author: damian0815 <null@damianstewart.com>
    Date:   Sat Nov 5 15:47:40 2022 +0100

        pre-emptive offloading, idk if it works

    commit 572576755e9f0a878d38e8173e485126c0efbefb
    Author: root <you@example.com>
    Date:   Sat Nov 5 11:25:32 2022 +0000

        push attention slices to cpu. slow but saves memory.

    commit b57c83a68f2ac03976ebc89ce2ff03812d6d185f
    Author: damian0815 <null@damianstewart.com>
    Date:   Sat Nov 5 12:04:22 2022 +0100

        verbose logging

    commit 3a5dae116f110a96585d9eb71d713b5ed2bc3d2b
    Author: damian0815 <null@damianstewart.com>
    Date:   Sat Nov 5 11:50:48 2022 +0100

        wip fixing mem strategy crash (4 test on runpod)

    commit 3cf237db5fae0c7b0b4cc3c47c81830bdb2ae7de
    Author: damian0815 <null@damianstewart.com>
    Date:   Fri Nov 4 09:02:40 2022 +0100

        wip, only works on cuda
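
The sub-commits above mention pushing saved attention slices to the CPU and reloading them on demand; purely as an illustration of that offload/reload pattern (not the actual cross-attention control code), assuming PyTorch:

```python
import torch

class SlicedAttentionStore:
    """Keep saved attention slices on the CPU and bring them back on demand.

    Illustrative only: shows the offload/reload idea, trading speed for VRAM.
    """

    def __init__(self):
        self._slices = {}

    def save(self, key, attention_slice: torch.Tensor) -> None:
        # Move the slice off the GPU so it does not occupy VRAM while stored.
        self._slices[key] = attention_slice.detach().to("cpu")

    def load(self, key, device) -> torch.Tensor:
        # Copy the slice back to the requested device just in time.
        return self._slices[key].to(device)

store = SlicedAttentionStore()
store.save("layer0.slice0", torch.randn(8, 64, 64))
restored = store.load("layer0.slice0", "cpu")
```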

commit 5702271991
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Wed Nov 9 14:09:36 2022 +0000

    speculative reorganization of the requirements & environment files

    - This is only a test!
    - The various environment*.yml and requirements*.txt files have all
      been moved into a directory named "environments-and-requirements".
    - The idea is to clean up our root directory so that the github home
      page is tidy.
    - The manual install instructions will start with the instructions to
      create a symbolic link from environment.yml to the appropriate file
      for OS and GPU.
    - The 1-click installers have been updated to accommodate this change.

commit 10781e7dc4
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Wed Nov 9 01:59:45 2022 +0000

    refactoring requirements

commit 099d1157c5
Author: mauwii <Mauwii@outlook.de>
Date:   Wed Nov 9 00:16:18 2022 +0100

    better way to make sure conda is usable

commit ab825bf7ee
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Tue Nov 8 22:05:33 2022 +0000

    add back --prefer-binaries to requirements

commit 10cfeb5ada
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Nov 8 22:27:19 2022 +0100

    add quotes to set and use `$environment_file`

commit e97515d045
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Nov 8 22:24:21 2022 +0100

    set environment file for conda update

commit 0f04bc5789
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Nov 8 22:21:25 2022 +0100

    use conda env update

commit 3f74aabecd
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Nov 8 22:20:44 2022 +0100

    use command instead of hash

commit b1a99a51b7
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Tue Nov 8 14:44:44 2022 -0500

    remove --global git config from 1-click installers

commit 8004f8a6d9
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Mon Nov 7 09:07:20 2022 -0500

    Revert "Use array slicing to calc ddim timesteps"

    This reverts commit 1f0c5b4cf1.

commit ff8ff2212a
Merge: 8e5363cd 636620b1
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Tue Nov 8 14:01:40 2022 +0000

    add initfile support from PR #1386

commit 8e5363cd83
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Tue Nov 8 13:26:18 2022 +0000

    move 'installer/' to '1-click-installer' to make room for tildebyte installer

commit 1450779146
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Tue Nov 8 12:56:36 2022 +0000

    update branch for installer to pull against

commit 8cd5d95b8a
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Tue Nov 8 05:30:20 2022 +0000

    move all models into subdirectories of ./models

    - this required an update to the invoke-ai fork of gfpgan
    - simultaneously reverted consolidation of environment and
      requirements files, as their presence in a directory
      triggered setup.py to try to install a sub-package.

commit abd6407394
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Tue Nov 8 03:52:46 2022 +0000

    leave a copy of environment-cuda.yml at top level

    - named it environment.yml
    - need to avoid a big change for users and breaking older support
      instructions.

commit 734dacfbe9
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Tue Nov 8 03:50:07 2022 +0000

    consolidate environment files

    - starting to remove unneeded entries and pins
    - no longer require -e in front of github dependencies
    - update setup.py with release number
    - update manual installation instructions

commit 636620b1d5
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Tue Nov 8 03:26:16 2022 +0000

    change initfile to ~/.invokeai

    - adjust documentation
    - also fix 'clipseg_models' to 'clipseg', which seems to be working now

commit 1fe41146f0
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Sat Nov 5 14:28:01 2022 -0400

    add support for an initialization file, invokeai.init

    - Place preferred startup command switches in a file named
      "invokeai.init". The file can consist of a single line of switches
      such as "--web --steps=28", a series of switches on each
      line, or any combination of the two.

     Example:
     ```
       --web
       --host=0.0.0.0
       --steps=28
       --grid
       -f 0.6 -C 11.0 -A k_euler_a
    ```

    - The following options, which were previously only available within
      the CLI, are now available on the command line as well:

      --steps
      --strength
      --cfg_scale
      --width
      --height
      --fit
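
A sketch of how such an init file can be folded into normal argument parsing; skipping '#' comment lines is an assumption of this sketch, and the real Args class handles much more:

```python
import os
import shlex
import sys

def read_init_switches(path: str = "invokeai.init") -> list[str]:
    # Collect startup switches from the init file (later moved to ~/.invokeai):
    # a single line of switches, one switch per line, or any mix of the two.
    path = os.path.expanduser(path)
    if not os.path.exists(path):
        return []
    switches: list[str] = []
    with open(path) as initfile:
        for line in initfile:
            line = line.strip()
            if line and not line.startswith("#"):
                switches.extend(shlex.split(line))
    return switches

# Init-file switches go first so that anything typed on the command line wins.
argv = read_init_switches() + sys.argv[1:]
```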

commit 2ad6ef355a
Merge: 865502ee 8b47c829
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Sun Nov 6 18:08:36 2022 +0000

    update discord link

commit 865502ee4f
Author: mauwii <Mauwii@outlook.de>
Date:   Sun Nov 6 18:00:16 2022 +0100

    update changelog

commit c7984f3299
Author: mauwii <Mauwii@outlook.de>
Date:   Sun Nov 6 17:07:27 2022 +0100

    update TROUBLESHOOT.md

commit 7f150ed833
Author: mauwii <Mauwii@outlook.de>
Date:   Sun Nov 6 16:56:58 2022 +0100

    remove `:` from headlines in CONTRIBUTORS.md

commit badf4e256c
Author: mauwii <Mauwii@outlook.de>
Date:   Sun Nov 6 16:56:37 2022 +0100

    enable navigation tabs
    Since the docs are growing, this way they look cleaner

commit e64c60bbb3
Author: mauwii <Mauwii@outlook.de>
Date:   Sun Nov 6 16:18:59 2022 +0100

    remove preflight checks from assets
    seems like somebody executed tests and committed them

commit 1780618543
Author: mauwii <Mauwii@outlook.de>
Date:   Sun Nov 6 16:15:06 2022 +0100

    update INSTALLING_MODELS.md

commit f91fd27624
Author: Kyle Schouviller <kyle0654@hotmail.com>
Date:   Sat Nov 5 14:47:53 2022 -0700

    Bug fix for inpaint size

commit 09e41e8f76
Author: Kyle Schouviller <kyle0654@hotmail.com>
Date:   Sat Nov 5 14:34:52 2022 -0700

    Add inpaint size options to inpaint at a larger size than the actual inpaint image, then scale back down for recombination
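
The approach in the commit message above can be sketched in a few lines: resize the image and mask up, run inpainting at the larger working size, then resize the result back and paste only the masked region over the original. This is an illustrative sketch only; `run_inpaint` is a hypothetical stand-in for the real generation call, not the project's API.

```python
from PIL import Image


def inpaint_scaled(image: Image.Image, mask: Image.Image, scale: float, run_inpaint):
    """Inpaint at a larger working size, then scale back down for recombination.

    `run_inpaint(image, mask) -> Image.Image` is a hypothetical callable
    standing in for the actual inpainting pipeline.
    """
    big_size = (int(image.width * scale), int(image.height * scale))
    big_image = image.resize(big_size, Image.LANCZOS)
    big_mask = mask.resize(big_size, Image.NEAREST)

    big_result = run_inpaint(big_image, big_mask)

    # Scale the result back to the original size and recombine through the mask,
    # so only the inpainted region replaces pixels in the source image.
    small_result = big_result.resize(image.size, Image.LANCZOS)
    combined = image.copy()
    combined.paste(small_result, (0, 0), mask.convert("L"))
    return combined
```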

commit 6eeb2107b3
Author: mauwii <Mauwii@outlook.de>
Date:   Sat Nov 5 21:01:14 2022 +0100

    remove create-caches.yml since not used anywhere

commit 17053ad8b7
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Sat Nov 5 16:01:55 2022 -0400

    fix duplicated argument introduced by conflict resolution

commit fefb4dc1f8
Merge: 762ca60a d05b1b35
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Sat Nov 5 12:47:35 2022 -0700

    Merge branch 'development' into fix_generate.py

commit d05b1b3544
Author: Craig <cwallen@users.noreply.github.com>
Date:   Sat Oct 29 20:40:30 2022 -0400

    Resize hires as an image

commit 82d4904c07
Author: Craig <cwallen@users.noreply.github.com>
Date:   Sat Oct 29 20:37:40 2022 -0400

    Log strength with hires

commit 1cdcf33cfa
Merge: 6616fa83 cbc029c6
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Sat Nov 5 09:57:38 2022 -0400

    Merge branch 'main' into development

    - this synchronizes recent document fixes by mauwii

commit 6616fa835a
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Fri Nov 4 00:47:03 2022 -0400

    fix Windows library dependency issues

    This commit addresses two bugs:

    1) invokeai.py crashes immediately with a message about an undefined
       attribute sigKILL (closes #1288). The fix is to pin torch at 1.12.1.

    2) Version 1.4.2 of basicsr fails to load properly on Windows, and is
       a requirement of realesrgan, however 1.4.1 works. Pinning basicsr
       in our requirements file resulted in a dependency conflict, so I
       ended up cloning realesrgan into the invoke-ai Git space and changing
       the requirements file there.

    If there is a more elegant solution, please advise.

commit 7b9a4564b1
Author: Matthias Wild <40327258+mauwii@users.noreply.github.com>
Date:   Sat Nov 5 14:36:45 2022 +0100

    Update-docs (#1382)

    * update IMG2IMG.md

    * update INPAINTING.md

    * update WEBUIHOTKEYS.md

    * more doc updates (mostly fix formatting):
    - OUTPAINTING.md
    - POSTPROCESS.md
    - PROMPTS.md
    - VARIATIONS.md
    - WEB.md
    - WEBUIHOTKEYS.md

commit fcdefa0620
Author: Matthias Wild <40327258+mauwii@users.noreply.github.com>
Date:   Fri Nov 4 20:47:31 2022 +0100

    Hotfix docs (#1376) (#1377)

commit ef8b3ce639
Merge: b7042095 36870a8f
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Fri Nov 4 12:08:44 2022 -0400

    Merge-main-into-development (#1373)

    To get rid of the difference between main and development.

    Since otherwise it will be a pain to start fixing the documentation
    (when the state between main and development is not the same ...)

    Also this should fix the problem of all tests failing since environment
    yamls get updated.

commit 36870a8f53
Merge: 6b89adfa b7042095
Author: Matthias Wild <40327258+mauwii@users.noreply.github.com>
Date:   Fri Nov 4 16:25:00 2022 +0100

    Merge branch 'development' into merge-main-into-development

commit b70420951d
Author: damian0815 <null@damianstewart.com>
Date:   Thu Nov 3 12:39:45 2022 +0100

    fix parsing error for prompts such as `forest ().swap(in winter)`

commit 1f0c5b4cf1
Author: wfng92 <43742196+wfng92@users.noreply.github.com>
Date:   Thu Nov 3 17:13:52 2022 +0800

    Use array slicing to calc ddim timesteps
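
For illustration, the slicing-based construction this commit refers to produces the same evenly spaced timestep schedule as building it with an explicit `range`; the constants below (1000 DDPM steps, 50 DDIM steps) are just example values.

```python
import numpy as np

num_ddpm_timesteps = 1000   # length of the full diffusion schedule (example value)
num_ddim_steps = 50         # number of sampling steps requested (example value)
c = num_ddpm_timesteps // num_ddim_steps

explicit = np.asarray(list(range(0, num_ddpm_timesteps, c)))  # range-based construction
sliced = np.arange(num_ddpm_timesteps)[::c]                   # array-slicing construction

assert np.array_equal(explicit, sliced)  # both give [0, 20, 40, ..., 980]
```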

commit 8648da8111
Author: mauwii <Mauwii@outlook.de>
Date:   Fri Nov 4 00:06:19 2022 +0100

    update environment-linux-aarch64 to use python 3.9

commit 45b4593563
Author: mauwii <Mauwii@outlook.de>
Date:   Thu Nov 3 22:31:46 2022 +0100

    update environment-linux-aarch64.yml
    - move getpass_asterisk to pip

commit 41b04316cf
Author: mauwii <Mauwii@outlook.de>
Date:   Thu Nov 3 20:40:08 2022 +0100

    rename job, remove debug branch from triggers

commit e97c6db2a3
Author: mauwii <Mauwii@outlook.de>
Date:   Thu Nov 3 20:34:01 2022 +0100

    include build matrix to build x86_64 and aarch64

commit 896820a349
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Nov 1 05:01:15 2022 +0100

    disable caching

commit 06c8f468bf
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Nov 1 04:26:39 2022 +0100

    disable PR-Validation
    since there are no files passed from context this is unnecessary

commit 61920e2701
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Nov 1 04:09:39 2022 +0100

    update action to use current branch
    also update build-args of dockerfile and build.sh

commit f34ba7ca70
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Nov 1 02:30:24 2022 +0100

    remove unnecessary mkdir command again

commit c30ef0895d
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Nov 1 01:51:12 2022 +0100

    remove symlink to GFPGANv1.4
    also re-add mkdir to prevent action from failing

commit aa3a774f73
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Nov 1 01:48:59 2022 +0100

    update build-container.yml to use cachev3

commit 2c30555b84
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Nov 1 01:34:20 2022 +0100

    update Dockerfile
    - create models.yaml from models.yaml.example
    - run preload_models.py with --no-interactive

commit 743f605773
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Nov 1 01:21:15 2022 +0100

    update build.sh to download sd-v1.5 model

commit 519c661abb
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Oct 25 01:26:50 2022 +0200

    replace old-fashioned markdown templates with forms
    this will help the readability of issues a lot 🤓

commit 22c956c75f
Merge: 13696adc 0196571a
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Thu Nov 3 10:20:21 2022 -0400

    Merge branch 'development' of github.com:invoke-ai/InvokeAI into development

commit 13696adc3a
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Thu Nov 3 10:20:10 2022 -0400

    speculative change to solve windows esrgan issues

commit 0196571a12
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Wed Nov 2 22:39:35 2022 -0400

    remove merge markers from preload_models.py

commit 9666f466ab
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Wed Nov 2 18:29:34 2022 -0400

    use refined model by default

commit 240e5486c8
Merge: 8164b6b9 aa247e68
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Wed Nov 2 18:35:00 2022 -0400

    Merge branch 'spezialspezial-patch-9' into development

commit 8164b6b9cf
Merge: 4fc82d55 dd5a88dc
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Wed Nov 2 17:06:46 2022 -0400

    Merge branch 'development' of github.com:invoke-ai/InvokeAI into development

commit 4fc82d554f
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date:   Thu Nov 3 04:17:28 2022 +1300

    [WebUI] Final 2.1 Release Build

commit 96b34c0f85
Author: damian0815 <null@damianstewart.com>
Date:   Wed Nov 2 09:08:11 2022 +0100

    Final WebUI build for Release 2.1
    - squashed commit of 52 commits from PR #1327

    don't log base64 progress images

    Fresh Build For WebUI

    [WebUI] Loopback Default False

    Fixes bugs/styling

    - Fixes missing web app state on new version:
    Adds stateReconciler to redux-persist.

    When we add more values to the state and then release the updated app, they will be automatically merged in.

    Resetting the web UI will be needed far less.
    7159ec

    - Fixes console z-index
    - Moves reset web UI button to visible area

    Decreases gallery width on inpainting

    Increases workarea split padding to 1rem

    Adds missing tooltips to site header

    Changes inpainting controls settings to hover

    Fixes hotkeys and settings buttons not working

    Improves bounding box interactions

    - Bounding box can now be moved by dragging any of its edges
    - Bounding box does not affect drawing if already drawing a stroke
    - Can lock bounding box to draw directly on the bounding box edges
    - Removes spacebar-hold behaviour due to technical issues

    Fixes silent crash when init image too large

    To send the mask to the server, the UI rendered the mask onto the init image and sent the whole image. The mask was then cropped by the server.

    If the image was too large, the app silently failed. Maybe it exceeds the websocket size limit.

    Fixed by cropping the mask in the UI layer, sending only bounding-box-sized mask image data.

    Disabled bounding box settings when locked

    Styles image uploader

    Builds fresh bundle

    Improves bounding box interaction

    Added spacebar-hold-to-transform back.

    Address bounding box feedback

    - Adds back toggle to hide bounding box
    - Box quick toggle = q, normal toggle = shift + q
    - Styles canvas alert icons

    Adds hints when unable to invoke

    - Popover on Invoke button indicates why exactly it is disabled, e.g. prompt is empty, something else is processing, etc.
    - There may be more than one reason; all are displayed.

    Fix Inpainting Alerts Styling

    Preventing unnecessary re-renders across the app

    Code Split Inpaint Options

    Isolate features to their own components so they don't re-render the other stuff each time.

    [TESTING] Remove global isReady checking

    I don't believe this is needed at all because the isReady state is constantly updated when needed and tracked in real time in the Redux store. This causes massive re-renders. @psychedelicious If this is absolutely essential for a reason that I do not see, please hit me up on Discord.

    Fresh Bundle

    Fix Bounding Box Settings re-rendering on brush stroke

    [Code Splitting] Bounding Box Options

    Isolated all bounding box components so they no longer trigger unnecessary re-renders. Still need to fix the bounding box triggering re-renders on the control panel inside the canvas itself. But the options panel should be good to go with this change.

    Inpainting Controls Code Splitting and Performance

    Codesplit the entirety of the inpainting controls. Created new selectors for each and every component to ensure there are no unnecessary re-renders. App feels a lot smoother.

    Fixes rerenders on ClearBrushHistory

    Fixes crash when requesting post-generation upscale/face restoration

    - Moves the inpainting paste to before the postprocessing.

    Removes unused isReady state

    Changes Report Bug icon to a bug

    Restores shift+q bounding box shortcut

    Adds alert for bounding box size to status icons

    Adds asCheckbox to IAIIconButton

    Rough draft of this. Not happy with the styling but it's clearer than having them look just like buttons.

    Fixes crash related to old value of progress_latents in state

    Styling changes and settings modal minor refactor

    Fixes: uploaded JPG images not loading

    Reworks CurrentImageButtons.tsx

    - Change all icons to FA iconset for consistency
    - Refactors IAIIconButton, IAIButton, IAIPopover to handle ref forwarding
    - Redesigns buttons into group

    Only generate 1 iteration when seed fixed & variations disabled

    Fixes progress images select

    Fixes edge case: upload over gets stuck while alt tabbing

    - Press esc to close it now

    Fixes display progress images select typing

    Fixes current image button rerenders

    Adds min width to ImageUploader

    Makes fast-latents in progress default

    Update Icon Button Checkbox Style Styling

    Fixes next/prev image buttons

    Refactor canvas buttons + more

    Add Save Intermediates Step Count

    For accurate mode only.

    Co-Authored-By: Richard Macarthy <richardmacarthy@protonmail.com>

    Restores "initial image" text

    Address feedback

    - moves mask clear button
    - fixes intermediates
    - shrinks inpainting icons by 10%

    Fix Loopback Styling

    Adds escape hotkey to close floating panels

    Readd Hotkey for Dual Display

    Updated Current Image Button Styling

commit dd5a88dcee
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date:   Thu Nov 3 04:17:28 2022 +1300

    [WebUI] Final 2.1 Release Build

commit 95ed56bf82
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date:   Thu Nov 3 04:16:31 2022 +1300

    Updated Current Image Button Styling

commit 1ae80f5ab9
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date:   Thu Nov 3 04:07:57 2022 +1300

    Readd Hotkey for Dual Display

commit 1f0bd3ca6c
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Thu Nov 3 02:07:00 2022 +1100

    Adds escape hotkey to close floating panels

commit a1971f6830
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date:   Thu Nov 3 03:38:15 2022 +1300

    Fix Loopback Styling

commit c6118e8898
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Thu Nov 3 01:29:51 2022 +1100

    Address feedback

    - moves mask clear button
    - fixes intermediates
    - shrinks inpainting icons by 10%

commit 7ba958cf7f
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Thu Nov 3 01:10:38 2022 +1100

    Restores "initial image" text

commit 383905d5d2
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date:   Thu Nov 3 02:59:11 2022 +1300

    Add Save Intermediates Step Count

    For accurate mode only.

    Co-Authored-By: Richard Macarthy <richardmacarthy@protonmail.com>

commit 6173e3e9ca
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Thu Nov 3 00:53:53 2022 +1100

    Refactor canvas buttons + more

commit 3feb7d8922
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Thu Nov 3 00:49:23 2022 +1100

    Fixes next/prev image buttons

commit 1d9edbd0dd
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date:   Thu Nov 3 00:50:44 2022 +1300

    Update Icon Button Checkbox Style Styling

commit d439abdb89
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Wed Nov 2 21:37:24 2022 +1100

    Makes fast-latents in progress default

commit ee47ea0c89
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Wed Nov 2 21:37:09 2022 +1100

    Adds min width to ImageUploader

commit 300bb2e627
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Wed Nov 2 21:28:22 2022 +1100

    Fixes current image button rerenders

commit ccf8593501
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Wed Nov 2 21:27:43 2022 +1100

    Fixes display progress images select typing

commit 0fda612f3f
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Wed Nov 2 21:02:01 2022 +1100

    Fixes edge case: upload over gets stuck while alt tabbing

    - Press esc to close it now

commit 5afff65b71
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Wed Nov 2 20:33:19 2022 +1100

    Fixes progress images select

commit 7e55bdefce
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Wed Nov 2 20:27:47 2022 +1100

    Only generate 1 iteration when seed fixed & variations disabled

commit 620cf84d3d
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Wed Nov 2 19:51:38 2022 +1100

    Reworks CurrentImageButtons.tsx

    - Change all icons to FA iconset for consistency
    - Refactors IAIIconButton, IAIButton, IAIPopover to handle ref forwarding
    - Redesigns buttons into group

commit cfe567c62a
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Wed Nov 2 16:14:50 2022 +1100

    Fixes: uploaded JPG images not loading

commit cefe12f1df
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Wed Nov 2 15:31:18 2022 +1100

    Styling changes and settings modal minor refactor

commit 1e51c39928
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Wed Nov 2 15:27:46 2022 +1100

    Fixes crash related to old value of progress_latents in state

commit 42a02bbb80
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Wed Nov 2 13:15:06 2022 +1100

    Adds asCheckbox to IAIIconButton

    Rough draft of this. Not happy with the styling but it's clearer than having them look just like buttons.

commit f1ae6dae4c
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Wed Nov 2 13:13:56 2022 +1100

    Adds alert for bounding box size to status icons

commit 6195579910
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Wed Nov 2 11:52:19 2022 +1100

    Restores shift+q bounding box shortcut

commit 16c8b23b34
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Wed Nov 2 11:32:07 2022 +1100

    Changes Report Bug icon to a bug

commit 07ae626b22
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Wed Nov 2 11:17:16 2022 +1100

    Removes unused isReady state

commit 8d171bb044
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Wed Nov 2 11:13:26 2022 +1100

    Fixes crash when requesting post-generation upscale/face restoration

    - Moves the inpainting paste to before the postprocessing.

commit 6e33ca7e9e
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Wed Nov 2 10:59:01 2022 +1100

    Fixes rerenders on ClearBrushHistory

commit db46e12f2b
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date:   Wed Nov 2 11:36:28 2022 +1300

    Inpainting Controls Code Splitting and Performance

    Codesplit the entirety of the inpainting controls. Created new selectors for each and every component to ensure there are no unnecessary re-renders. App feels a lot smoother.

commit 868e4b2db8
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date:   Wed Nov 2 07:40:31 2022 +1300

    [Code Splitting] Bounding Box Options

    Isolated all bounding box components so they no longer trigger unnecessary re-renders. Still need to fix the bounding box triggering re-renders on the control panel inside the canvas itself. But the options panel should be good to go with this change.

commit 2e562742c1
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date:   Wed Nov 2 06:40:27 2022 +1300

    Fix Bounding Box Settings re-rendering on brush stroke

commit 68e6958009
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date:   Wed Nov 2 06:28:34 2022 +1300

    Fresh Bundle

commit ea6e3a7949
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date:   Wed Nov 2 06:26:56 2022 +1300

    [TESTING] Remove global isReady checking

    I don't believe this is needed at all because the isReady state is constantly updated when needed and tracked in real time in the Redux store. This causes massive re-renders. @psychedelicious If this is absolutely essential for a reason that I do not see, please hit me up on Discord.

commit b2879ca99f
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date:   Wed Nov 2 06:08:59 2022 +1300

    Code Split Inpaint Options

    Isolate features to their own components so they don't re-render the other stuff each time.

commit 4e911566c3
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date:   Wed Nov 2 03:50:56 2022 +1300

    Preventing unnecessary re-renders across the app

commit 9bafda6a15
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date:   Wed Nov 2 03:02:35 2022 +1300

    Fix Inpainting Alerts Styling

commit 871a8a5375
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Tue Nov 1 23:52:07 2022 +1100

    Adds hints when unable to invoke

    - Popover on Invoke button indicates why exactly it is disabled, e.g. prompt is empty, something else is processing, etc.
    - There may be more than one reason; all are displayed.

commit 0eef74bc00
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Tue Nov 1 23:40:11 2022 +1100

    Address bounding box feedback

    - Adds back toggle to hide bounding box
    - Box quick toggle = q, normal toggle = shift + q
    - Styles canvas alert icons

commit 423ae32097
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Tue Nov 1 17:06:07 2022 +1100

    Improves bounding box interaction

    Added spacebar-hold-to-transform back.

commit 8282e5d045
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Tue Nov 1 16:57:07 2022 +1100

    Builds fresh bundle

commit 19305cdbdf
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Tue Nov 1 16:51:11 2022 +1100

    Styles image uploader

commit eb9028ab30
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Tue Nov 1 16:51:03 2022 +1100

    Disabled bounding box settings when locked

commit 21483f5d07
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Tue Nov 1 16:50:24 2022 +1100

    Fixes silent crash when init image too large

    To send the mask to the server, the UI rendered the mask onto the init image and sent the whole image. The mask was then cropped by the server.

    If the image was too large, the app silently failed. Maybe it exceeds the websocket size limit.

    Fixed by cropping the mask in the UI layer, sending only bounding-box-sized mask image data.
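
As a conceptual illustration of that fix (the real change lives in the TypeScript UI layer), cropping the mask to the bounding box before sending keeps the payload proportional to the masked region rather than the full init image. The payload keys and the `(left, top, right, bottom)` bounding-box convention below are assumptions for the sketch.

```python
import base64
import io

from PIL import Image


def mask_payload(mask: Image.Image, bbox: tuple[int, int, int, int]) -> dict:
    """Crop the mask to the bounding box and encode only that region.

    `bbox` is (left, top, right, bottom); the dictionary keys are illustrative,
    not the web UI's actual wire format.
    """
    cropped = mask.crop(bbox)
    buffer = io.BytesIO()
    cropped.save(buffer, format="PNG")
    return {
        "mask_b64": base64.b64encode(buffer.getvalue()).decode("ascii"),
        "bbox": bbox,  # the server still needs the offset to place the mask
    }
```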

commit 82dcbac28f
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Tue Nov 1 16:28:30 2022 +1100

    Improves bounding box interactions

    - Bounding box can now be moved by dragging any of its edges
    - Bounding box does not affect drawing if already drawing a stroke
    - Can lock bounding box to draw directly on the bounding box edges
    - Removes spacebar-hold behaviour due to technical issues

commit d43bd4625d
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Tue Nov 1 15:10:49 2022 +1100

    Fixes hotkeys and settings buttons not working

commit ea891324a2
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Tue Nov 1 11:04:02 2022 +1100

    Changes inpainting controls settings to hover

commit 8fd9ea2193
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Tue Nov 1 11:03:41 2022 +1100

    Adds missing tooltips to site header

commit fb02666856
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Tue Nov 1 11:03:25 2022 +1100

    Increases workarea split padding to 1rem

commit f6f5c2731b
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Tue Nov 1 11:03:10 2022 +1100

    Decreases gallery width on inpainting

commit b4e3f771e0
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Tue Nov 1 10:54:59 2022 +1100

    Fixes bugs/styling

    - Fixes missing web app state on new version:
    Adds stateReconciler to redux-persist.

    When we add more values to the state and then release the updated app, they will be automatically merged in.

    Resetting the web UI will be needed far less.
    7159ec

    - Fixes console z-index
    - Moves reset web UI button to visible area

commit 99bb9491ac
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date:   Tue Nov 1 08:35:45 2022 +1300

    [WebUI] Loopback Default False

commit 0453f21127
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date:   Wed Nov 2 23:23:51 2022 +1300

    Fresh Build For WebUI

commit 9fc09aa4bd
Author: damian0815 <null@damianstewart.com>
Date:   Wed Nov 2 09:08:11 2022 +0100

    don't log base64 progress images

commit 5e87062cf8
Author: spezialspezial <75758219+spezialspezial@users.noreply.github.com>
Date:   Wed Nov 2 00:21:27 2022 +0100

    Option to directly invert the grayscale heatmap - fix

commit 3e7a459990
Author: spezialspezial <75758219+spezialspezial@users.noreply.github.com>
Date:   Tue Nov 1 21:37:33 2022 +0100

    Update txt2mask.py

commit bbf4c03e50
Author: spezialspezial <75758219+spezialspezial@users.noreply.github.com>
Date:   Tue Nov 1 21:11:19 2022 +0100

    Option to directly invert the grayscale heatmap

    Theoretically it's less work to invert the image while it's small, but I can't measure a significant difference. Still, it's a handy option to have in some cases.
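
For illustration, inverting the heatmap while it is still small is a one-liner with Pillow; this is a sketch of the idea, not the actual txt2mask.py change.

```python
from PIL import Image, ImageOps


def invert_heatmap(heatmap: Image.Image) -> Image.Image:
    """Invert a grayscale heatmap before it is upscaled to full resolution,
    so the inversion operates on fewer pixels."""
    return ImageOps.invert(heatmap.convert("L"))
```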

commit 611a3a9753
Author: mauwii <Mauwii@outlook.de>
Date:   Wed Nov 2 02:23:09 2022 +0100

    fix name of caching step

commit 1611f0d181
Author: mauwii <Mauwii@outlook.de>
Date:   Wed Nov 2 02:18:46 2022 +0100

    readd caching of sd-models
    - this would remove the need for the secret to be available in PRs

commit 08835115e4
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Tue Nov 1 22:10:12 2022 -0400

    pin pytorch_lightning to 1.7.7, issue #1331

commit 2d84e28d32
Merge: 533fd04e ef17aae8
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Tue Nov 1 22:11:04 2022 -0400

    Merge branch 'development' of github.com:invoke-ai/InvokeAI into development

commit ef17aae8ab
Author: damian0815 <null@damianstewart.com>
Date:   Wed Nov 2 00:39:48 2022 +0100

    add damian0815 to contributors list

commit 0cc39f01a3
Author: damian0815 <null@damianstewart.com>
Date:   Wed Nov 2 01:18:50 2022 +0100

    report full size for fast latents and update conversion matrix for v1.5

commit 688d7258f1
Author: damian0815 <null@damianstewart.com>
Date:   Wed Nov 2 00:33:00 2022 +0100

    fix a bug that broke cross attention control index mapping

commit 4513320bf1
Author: damian0815 <null@damianstewart.com>
Date:   Wed Nov 2 00:31:58 2022 +0100

    save VRAM by not recombining tensors that have been sliced to save VRAM
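
The saving described above comes from writing each attention slice straight into a pre-allocated output tensor instead of accumulating the slices and concatenating (recombining) them at the end. Below is a minimal PyTorch sketch of that pattern, not the project's actual attention code; the shapes and the `slice_size` default are assumptions.

```python
import torch


def sliced_attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor,
                     slice_size: int = 1024) -> torch.Tensor:
    """Attention computed over query slices; q, k, v are (batch, seq, dim)."""
    scale = q.shape[-1] ** -0.5
    out = torch.empty_like(q)  # pre-allocated result, no list of slices to concatenate
    for start in range(0, q.shape[1], slice_size):
        end = min(start + slice_size, q.shape[1])
        attn = torch.softmax(q[:, start:end] @ k.transpose(-1, -2) * scale, dim=-1)
        out[:, start:end] = attn @ v  # write the slice directly into place
    return out
```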

commit 533fd04ef0
Merge: 6215592b dff5681c
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Tue Nov 1 17:40:36 2022 -0400

    Merge branch 'development' of github.com:invoke-ai/InvokeAI into development

commit dff5681cf0
Author: damian0815 <d@d.com>
Date:   Tue Nov 1 13:56:03 2022 +0100

    shorter strings

commit 5a2790a69b
Author: damian0815 <d@d.com>
Date:   Tue Nov 1 13:19:20 2022 +0100

    convert progress display to a drop-down

commit 7c5305ccba
Author: damian0815 <d@d.com>
Date:   Tue Nov 1 12:54:46 2022 +0100

    do not try to save base64 intermediates in gallery on cancellation

commit 4013e8ad6f
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Tue Nov 1 21:54:35 2022 +1100

    Fixes b64 image sending and displaying

commit d1dfd257f9
Author: damian <d@d.com>
Date:   Tue Nov 1 11:40:40 2022 +0100

    wip base64

commit 5322d735ee
Author: damian <d@d.com>
Date:   Tue Nov 1 11:31:42 2022 +0100

    update frontend

commit cdb107dcda
Author: damian <d@d.com>
Date:   Tue Nov 1 11:17:43 2022 +0100

    add option to show intermediate latent space

commit be1393a41c
Author: damian <d@d.com>
Date:   Tue Nov 1 10:16:55 2022 +0100

    ensure existing exception handling code also handles new exception class

commit e554c2607f
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Tue Nov 1 10:08:42 2022 +0100

    Rebuilt prompt parsing logic

    Complete re-write of the prompt parsing logic to be more readable and
    logical, and therefore also hopefully easier to debug, maintain, and
    augment.

    In the process it has also become more robust to badly-formed prompts.

    Squashed commit of the following:

    commit 8fcfa88a16e1390d41717e940d72aed64712171c
    Author: Damian at mba <damian@frey.NOSPAMco.nz>
    Date:   Sun Oct 30 17:05:57 2022 +0100

        further cleanup

    commit 1a1fd78bcfeb49d072e3e6d5808aa8df15441629
    Author: Damian at mba <damian@frey.NOSPAMco.nz>
    Date:   Sun Oct 30 16:07:57 2022 +0100

        cleanup and document

    commit 099c9659fa8b8135876f9a5a50fe80b20bc0635c
    Author: Damian at mba <damian@frey.NOSPAMco.nz>
    Date:   Sun Oct 30 15:54:58 2022 +0100

        works fully

    commit 5e6887ea8c25a1e21438ff6defb381fd027d25fd
    Author: Damian at mba <damian@frey.NOSPAMco.nz>
    Date:   Sun Oct 30 15:24:31 2022 +0100

        further...

    commit 492fda120844d9bc1ad4ec7dd408a3374762d0ff
    Author: Damian at mba <damian@frey.NOSPAMco.nz>
    Date:   Sun Oct 30 14:08:57 2022 +0100

        getting there...

    commit c6aab05a8450cc3c95c8691daf38fdc64c74f52d
    Author: Damian at mba <damian@frey.NOSPAMco.nz>
    Date:   Fri Oct 28 14:29:03 2022 +0200

        wip doesn't compile

    commit 5e533f731cfd20cd435330eeb0012e5689e87e81
    Author: Damian at mba <damian@frey.NOSPAMco.nz>
    Date:   Fri Oct 28 13:21:43 2022 +0200

        working with CrossAttentionControl but no Attention support yet

    commit 9678348773431e500e110e8aede99086bb7b5955
    Author: Damian at mba <damian@frey.NOSPAMco.nz>
    Date:   Fri Oct 28 13:04:52 2022 +0200

        wip rebuilding prompt parser

commit 6215592b12
Merge: ef24d76a 349cc254
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Tue Nov 1 17:34:55 2022 -0400

    Merge branch 'development' of github.com:invoke-ai/InvokeAI into development

commit 349cc25433
Author: damian0815 <d@d.com>
Date:   Tue Nov 1 20:08:52 2022 +0100

    fix crash (be a little less aggressive clearing out the attention slice)

commit 214d276379
Author: damian0815 <d@d.com>
Date:   Tue Nov 1 19:57:55 2022 +0100

    be more aggressive at clearing out saved_attn_slice

commit ef24d76adc
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Tue Nov 1 14:34:23 2022 -0400

    fix library problems in preload_models

commit ab2b5a691d
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Tue Nov 1 17:22:48 2022 -0400

    fix model_cache memory management issues

commit c7de2b2801
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Nov 1 02:02:14 2022 +0100

    disable checks with sd-V1.4 model...
    ...to save some resources, since V1.5 is the default now

commit e8075658ac
Author: mauwii <Mauwii@outlook.de>
Date:   Mon Oct 31 22:20:51 2022 +0100

    update test-invoke-conda.yml
    - fix model dl path for sd-v1-4.ckpt
    - copy configs/models.yaml.example to configs/models.yaml

commit 4202dabee1
Author: mauwii <Mauwii@outlook.de>
Date:   Mon Oct 31 22:17:21 2022 +0100

    fix models example weights for sd-v1.4

commit d67db2bcf1
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date:   Tue Nov 1 08:35:45 2022 +1300

    [WebUI] Loopback Default False

commit 7159ec885f
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Mon Oct 31 11:33:05 2022 -0400

    further improvements to preload_models.py

    - Faster startup for command line switch processing
    - Specify configuration file to modify using --config option:

      ./scripts/preload_models.py --config models/my-models-file.yaml

commit b5cf734ba9
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Mon Oct 31 11:08:19 2022 -0400

    improve behavior of preload_models.py

    - NEVER overwrite user's existing models.yaml
    - Instead, merge its contents into new config file,
      and rename original to models.yaml.orig (with
      message)
    - models.yaml has been removed from repository and renamed
      models.yaml.example
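
A minimal sketch of the merge-don't-overwrite behaviour described above (illustrative only; the real preload_models.py logic may differ, and the file names are taken from the commit message):

```python
import shutil
from pathlib import Path

import yaml  # PyYAML


def merge_models_config(existing: Path, template: Path) -> None:
    """Merge a user's existing models.yaml into a fresh template without
    losing it; the original is preserved as models.yaml.orig."""
    merged = yaml.safe_load(template.read_text()) or {}
    if existing.exists():
        user_config = yaml.safe_load(existing.read_text()) or {}
        merged.update(user_config)  # user entries take precedence
        backup = existing.parent / (existing.name + ".orig")
        shutil.copy2(existing, backup)
        print(f"existing config preserved as {backup}")
    existing.write_text(yaml.safe_dump(merged))


# Example usage (paths are illustrative):
# merge_models_config(Path("configs/models.yaml"), Path("configs/models.yaml.example"))
```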

commit f7dc8eafee
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Mon Oct 31 10:47:35 2022 -0400

    restore models.yaml to virgin state

commit 762ca60a30
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Tue Oct 4 22:55:10 2022 -0400

    Update INPAINTING.md

commit e7fb9f342c
Author: Hideyuki Katsushiro <h.katsushiro@qualia.tokyo.jp>
Date:   Wed Oct 5 10:08:53 2022 +0900

    add argument --outdir
2022-11-12 17:17:07 +00:00
Kent Keirsey
8b47c82992 Update README.md 2022-11-06 09:21:05 -08:00
Kent Keirsey
eab435da27 Update index.md 2022-11-06 09:21:05 -08:00
Lincoln Stein
cbc029c6f9 fix Windows library dependency issues
This commit addresses two bugs:

1) invokeai.py crashes immediately with a message about an undefined
   attribute sigKILL (closes #1288). The fix is to pin torch at 1.12.1.

2) Version 1.4.2 of basicsr fails to load properly on Windows, and is
   a requirement of realesrgan, however 1.4.1 works. Pinning basicsr
   in our requirements file resulted in a dependency conflict, so I
   ended up cloning realesrgan into the invoke-ai Git space and changing
   the requirements file there.

If there is a more elegant solution, please advise.
2022-11-05 06:45:28 -07:00
Lincoln Stein
d318968abe remove --no_interactive from preload_scripts.py example (#1378) 2022-11-05 06:23:56 +01:00
Matthias Wild
e71655237a Hotfix docs (#1376) 2022-11-04 15:17:28 -04:00
Lincoln Stein
6b89adfa7e change "python3" to "python" in instructions 2022-11-03 19:22:05 -04:00
mauwii
8aa4a258f4 replace old-fashioned markdown templates with forms
this will help the readability of issues a lot 🤓
2022-11-03 16:28:06 -04:00
Lincoln Stein
174a9b78b0 Bring main back into a consistent state with other branches
- Due to misuse of rebase command, main was transiently
  in an inconsistent state.

- This repairs the damage, and adds a few post-release
  patches that ensure stable conda installs on Mac and Windows.
2022-11-03 15:44:06 -04:00
Lincoln Stein
90d37eac03 update requirements to address #1149 2022-10-18 16:00:59 -04:00
Lincoln Stein
230de023ff resolve doc conflicts during merge 2022-10-18 08:27:33 -04:00
mauwii
febf86dedf Merge branch 'fix-gh-actions' of github.com:mauwii/stable-diffusion into fix-gh-actions 2022-10-18 13:26:03 +02:00
mauwii
76ae17abac update cache steps
remove restore-keys, make keys unique
2022-10-18 13:25:51 +02:00
mauwii
339ff4b464 fix conda pkg cache name
also change content of hashFile-function
2022-10-18 13:25:51 +02:00
mauwii
00c0e487dd move export behind the tests, upload with artifact
also switch to python between 3.9-3.10 and use conda-forge again
2022-10-18 13:25:50 +02:00
mauwii
5c8dfa38be readd pip dependency in environment-ma.yml 2022-10-18 13:25:50 +02:00
mauwii
acf85c66a5 add current branch to push trigger 2022-10-18 13:25:50 +02:00
mauwii
3619918954 rename step to export conda env 2022-10-18 13:25:50 +02:00
mauwii
65b14683a8 unpin conda package versions in environment.yml 2022-10-18 13:25:50 +02:00
mauwii
f4fc02a3da switch to default channel in environment-mac.yml 2022-10-18 13:25:50 +02:00
mauwii
c334170a93 pin versions only for pip packages 2022-10-18 13:25:50 +02:00
mauwii
deab6c64fc export conda env instead of only printing versions 2022-10-18 13:25:50 +02:00
mauwii
e1c9503951 list conda packages after activating env
also want to show how much faster it will run now with cached pkgs
2022-10-18 13:25:50 +02:00
mauwii
9a21812bf5 revert changes to environment.yml
@tildebyte this would not have been pointed out without PR-Validation
2022-10-18 13:25:50 +02:00
mauwii
347b5ce452 fix expression 2022-10-18 13:25:50 +02:00
mauwii
b39029521b use very short validation for Pull Requests 2022-10-18 13:25:49 +02:00
mauwii
97b26f3de2 remove doubled checkpoint cache 2022-10-18 13:25:49 +02:00
mauwii
e19a7a990d unpin versions in environment
as asked by @tildebyte
2022-10-18 13:25:49 +02:00
mauwii
3e424e1046 remove pip from dependencies 2022-10-18 13:25:49 +02:00
mauwii
db20b4af9c remove pr trigger 2022-10-18 13:25:15 +02:00
Matthias Wild
44ff8f8531 squash merge update-gh-actions into fix-gh-actions
* fix mkdocs deployment

* update path to python bin

* add trigger for current branch

* change path to python_bin for mac as well

* try to use setup-python@v4 instead of setting env

* remove setup conda action

* try to use $CONDA

* remove overlooked action

* change branch from master to main

* sort out if then else for faster syntax

* remove more if functions

* add updates to create-caches as well

* eliminate the rest of if functions

* try to unpin pytorch and torchvision

* restore pinned versions

* try switching from set-output to use env

* update test-invoke-conda as well

* fix env var creation

* quote variable

* add second equal to compare

* try another way to use outputs

* fix outputs

* pip install for mac before creating conda env

* fix output variable

* fix python bin path

* remove pip install for before creating conda env

* unpin streamlit version in conda mac env

* try to make git-workflows better readable

* remove forgotten trigger

* Update-gh-actions (#6)

* fix mkdocs deployment

* update path to python bin

* add trigger for current branch

* change path to python_bin for mac as well

* try to use setup-python@v4 instead of setting env

* remove setup conda action

* try to use $CONDA

* remove overseen action

* change branch from master to main

* sort out if then else for faster syntax

* remove more if functions

* add updates to create-caches as well

* eliminate the rest of if functions

* try to unpin pytorch and torchvision

* restore pinned versions

* try switching from set-output to use env

* update test-invoke-conda as well

* fix env var creation

* quote variable

* add second equal to compare

* try another way to use outputs

* fix outputs

* pip install for mac before creating conda env

* fix output variable

* fix python bin path

* remove pip install for before creating conda env

* unpin streamlit version in conda mac env

* try to make git-workflows better readable

* use macos-latest

* try to update conda before creating mac env

* better conda update trial

* re-pin streamlit version

* re-added trigger to run workflow in current branch

* try to find out if conda mac env could be updated

* install cmake, protobuf and rust before conda

* add yes to conda update

* lets try anaconda3-2022.05

* try environment.yml for mac as well

* reenable conda mac env, add pip install
also fix gitignore by changing from dream to invoke

* remove
- unnecessary virtualenv creation
- conda update

change != macos back to == linux

* remove cmake from brew install since pre-installed

* disable opencv-python pip requirement

* fixed commands to find latest package versions

* update requirements for mac env

* back to the roots - only install conda env
depending on runner_os with or without extra env variables

* check out macOS in azure-pipelines
since becoming kind of tired of the GitHub Runner which is broken as ...

* let's try to setup python and update conda env

* initialize conda before using it

* add trigger in azure-pipelines.yml

* And another go for update first ....

* update azure-pipelines.yml
- add caching
- add checkpoint download
- add paths to trigger
and more

* unquote checkpoint-url

* fix checkpoint-url variable

* mkdir before downloading model

* set pr trigger to main, rename anaconda cache

* unique cacheHitVariables

* try to use macos-latest instead of macos-12

* update test-invoke-conda.yml:
- remove unnecessary echo step
- use s-weigand/setup-conda@v1
- remove conda update from install deps step since updated with action

* update test-invoke-conda.yml:
- rename conda env cache from ldm to invokeai
- reorder steps:
  1. checkout sources
  2. setup python
  3. setup conda
  4. keep order after set platform variables

* change macos back to 12 since also fails with 11

* update condition in run the tests
make difference between main or not main

* fix path to cache invokeai conda env

* fix invokeai conda env cache path

* update mkdocs-flow.yml

* change conda-channel priority

* update create-caches

* update conda env also when cache was used

* os dependent conda env cache path

* use existing CONDA env pointing to conda root

* create CONDA_ROOT output from $CONDA

* use output variable to define test prompts

* use setup-python v4, get rid of PYTHON_BIN env

* add runner.os to result artifacts name

* update test-invoke-conda.yml:
- reuse macos-latest
- disable setup python 3.9
- setup conda with default python version
- create or update conda environment depending on cache success
- remove name parameter from conda update since name is set in env yml

* improve mkdocs-flow.yml

* disable cache-hugginface-torch
since preload_models.py downloads to more than one location

* update mkdocs-flow.yml with new name

* rename mkdocs action to mkdocs-material

* try to ignore error when creating conda env
maybe it would still be usable, lets see ;P

* remove bloat

* update environment-mac.yml
to match dependencies of invoke-ai/InvokeAI's main branch

* disable conda update, tweak prompt condition

* try to set some env vars for macOS to fix conda

* stop ignoring error, use env instead of outputs

* tweak `[[` conditions

* update python and pip dependencies
makes a difference of 1 sec per iteration compared to 3.9!!!
also I see no reason why using an old pip version would be beneficial

* remove unnecessary env for macOS
everything was pre-tested on my MacBook Air 2020 with M1

* update conda env in setup step

* activate conda env after installation

* update test-invoke-conda.yml
- set conda env dependent on matrix.os
- set CONDA_ENV_NAME to prevent breaking action when renaming conda env
- fix conda env activation

* fix activate conda env

* set bash -l as default shell

* use action to activate conda env

* add conda env file to env activation

* try to replace s-weigeand with conda-incubator

* remove azure-pipelines.yml
funniest part is that the macos runner is the same as the one on github!

* include environment-file in matrix
- also disable auto-activate-base and auto-update-conda
- include macos-latest and macos-12 for debugging purpose
- set miniforge-version in matrix

* fix miniforge-variant, set fail-fast to false

* add step to setup miniconda
- make default shell a matrix variable
- remove bloat

* use a mac env yml without pinned versions

* unpin nomkl, pytorch and torchvision
also removed opencv-python

* cache conda pkgs dir instead of conda env

* use python 3.10, exclude macos-12 from cache

* fix expression

* prepare for PR

* fix doubled id

* reuse pinned versions in mac conda env
- updated python pip version
- unpined pytorch and torchvision
- removed opencv-python
- updated versions to most recent (tested locally)

* fix classical copy/paste error

* remove unused env from shell-block comment

* fix hashFiles function to determine restore-keys

* reenable caching `~.cache`, update create-caches

* unpin all versions in mac conda env file
this was the only way I got it working in the action, also works locally
tested on MacBook Air 2020 M1
remove environment-mac-unpinned.yml

* prepare merge by removing this branch from trigger

* include pull_request trigger for main and dev

* remove pull_request trigger
2022-10-18 13:25:15 +02:00
mauwii
a8b794d7e0 update precision info 2022-10-17 22:27:27 -04:00
mauwii
f868362ca8 fix prompt in README.md 2022-10-17 22:27:27 -04:00
mauwii
8858f7e97c (re-) fix a lot in mkdocs 2022-10-17 22:27:27 -04:00
Matthias Wild
2db4969e18 Merge branch 'main' into fix-gh-actions 2022-10-17 23:41:36 +02:00
mauwii
2ecc1abf21 fix links to point to invoke-ai.github.io 2022-10-17 17:40:31 -04:00
mauwii
703bc9494a Merge remote-tracking branch 'upstream/main' into fix-gh-actions-fork 2022-10-17 21:40:16 +02:00
Lincoln Stein
e5ab07091d adding license using GitHub template
Did not attempt to add additional copyright information.
2022-10-17 12:09:24 -04:00
Lincoln Stein
891678b656 remove license files temporarily 2022-10-17 12:08:09 -04:00
Lincoln Stein
39ea2a257c remove additional copyrights from license file
Trying to get GitHub to recognize our MIT license. Perhaps the additional copyrights are confusing it.
2022-10-17 12:07:00 -04:00
Lincoln Stein
2d68eae16b Second try at getting GitHub to register license 2022-10-17 12:05:42 -04:00
spezialspezial
d65948c423 Update gitignore to ignore codeformer weights at new location
Eventually making it slightly more flexible
2022-10-17 11:54:45 -04:00
majick
9910a0b004 Fix broken links to CLI.md
* Looks like there was a bad paste
2022-10-16 23:40:27 -04:00
majick
ff96358cb3 Correct typo in the subtitle of the project
* “Formally” means that there is a formality such as a rule or declaration, “formerly” refers to a prior state.  The latter is almost certainly what is meant here.
2022-10-16 23:40:27 -04:00
mauwii
edf471f655 update cache steps
remove restore-keys, make keys unique
2022-10-17 04:43:06 +02:00
mauwii
5b02c8ca4a fix conda pkg cache name
also change content of hashFile-function
2022-10-17 04:02:38 +02:00
mauwii
e7688c53b8 move export behind the tests, upload with artifact
also switch to python between 3.9-3.10 and use conda-forge again
2022-10-17 03:27:15 +02:00
mauwii
87cada42db readd pip dependency in environment-ma.yml 2022-10-17 02:22:19 +02:00
mauwii
6fe67ee426 add current branch to push trigger 2022-10-17 02:12:46 +02:00
mauwii
5fbc81885a rename step to export conda env 2022-10-17 02:08:08 +02:00
mauwii
25ba5451f2 unpin conda package versions in environment.yml 2022-10-17 02:07:17 +02:00
mauwii
138c9cf7a8 switch to default channel in environment-mac.yml 2022-10-17 02:05:59 +02:00
mauwii
87981306a3 pin versions only for pip packages 2022-10-17 01:50:19 +02:00
mauwii
f7893b3ea9 export conda env instead of only printing versions 2022-10-17 01:48:22 +02:00
mauwii
87395fe6fe list conda packages after activating env
also want to show how much faster it will run now with cached pkgs
2022-10-16 22:48:53 +02:00
mauwii
15f876c66c revert changes to environment.yml
@tildebyte this would not have been pointed out without PR-Validation
2022-10-16 22:02:58 +02:00
mauwii
522c35ac5b fix expression 2022-10-16 21:52:49 +02:00
mauwii
bb2d6d640f use very short validation for Pull Requests 2022-10-16 21:50:57 +02:00
mauwii
2412d8dec1 remove doubled checkpoint cache 2022-10-16 20:53:07 +02:00
mauwii
2ab5a43663 unpin versions in environment
as asked by @tildebyte
2022-10-16 20:48:31 +02:00
mauwii
0ec3d6c10a remove pip from dependencies 2022-10-16 20:36:33 +02:00
mauwii
d208e1b0f5 Merge branch 'fix-gh-actions' of github.com:mauwii/stable-diffusion into fix-gh-actions 2022-10-16 20:35:57 +02:00
mauwii
8a6ba6a212 remove pr trigger 2022-10-16 13:56:45 -04:00
Matthias Wild
b793d69ff3 squash merge update-gh-actions into fix-gh-actions
* fix mkdocs deployment

* update path to python bin

* add trigger for current branch

* change path to python_bin for mac as well

* try to use setup-python@v4 instead of setting env

* remove setup conda action

* try to use $CONDA

* remove overlooked action

* change branch from master to main

* sort out if then else for faster syntax

* remove more if functions

* add updates to create-caches as well

* eliminate the rest of if functions

* try to unpin pytorch and torchvision

* restore pinned versions

* try switching from set-output to use env

* update test-invoke-conda as well

* fix env var creation

* quote variable

* add second equal to compare

* try another way to use outputs

* fix outputs

* pip install for mac before creating conda env

* fix output variable

* fix python bin path

* remove pip install for before creating conda env

* unpin streamlit version in conda mac env

* try to make git-workflows better readable

* remove forgotten trigger

* Update-gh-actions (#6)

* fix mkdocs deployment

* update path to python bin

* add trigger for current branch

* change path to python_bin for mac as well

* try to use setup-python@v4 instead of setting env

* remove setup conda action

* try to use $CONDA

* remove overseen action

* change branch from master to main

* sort out if then else for faster syntax

* remove more if functions

* add updates to create-caches as well

* eliminate the rest of if functions

* try to unpin pytorch and torchvision

* restore pinned versions

* try switching from set-output to use env

* update test-invoke-conda as well

* fix env var creation

* quote variable

* add second equal to compare

* try another way to use outputs

* fix outputs

* pip install for mac before creating conda env

* fix output variable

* fix python bin path

* remove pip install for before creating conda env

* unpin streamlit version in conda mac env

* try to make git-workflows better readable

* use macos-latest

* try to update conda before creating mac env

* better conda update trial

* re-pin streamlit version

* re-added trigger to run workflow in current branch

* try to find out if conda mac env could be updated

* install cmake, protobuf and rust before conda

* add yes to conda update

* lets try anaconda3-2022.05

* try environment.yml for mac as well

* reenable conda mac env, add pip install
also fix gitignore by changing from dream to invoke

* remove
- unnecessary virtualenv creation
- conda update

change != macos back to == linux

* remove cmake from brew install since pre-installed

* disable opencv-python pip requirement

* fixed commands to find latest package versions

* update requirements for mac env

* back to the roots - only install conda env
depending on runner_os with or without extra env variables

* check out macOS in azure-pipelines
since becoming kind of tired of the GitHub Runner which is broken as ...

* let's try to setup python and update conda env

* initialize conda before using it

* add trigger in azure-pipelines.yml

* And another go for update first ....

* update azure-pipelines.yml
- add caching
- add checkpoint download
- add paths to trigger
and more

* unquote checkpoint-url

* fix checkpoint-url variable

* mkdir before downloading model

* set pr trigger to main, rename anaconda cache

* unique cacheHitVariables

* try to use macos-latest instead of macos-12

* update test-invoke-conda.yml:
- remove unnecessary echo step
- use s-weigand/setup-conda@v1
- remove conda update from install deps step since updated with action

* update test-invoke-conda.yml:
- rename conda env cache from ldm to invokeai
- reorder steps:
  1. checkout sources
  2. setup python
  3. setup conda
  4. keep order after set platform variables

* change macos back to 12 since also fails with 11

* update condition in run the tests
make difference between main or not main

* fix path to cache invokeai conda env

* fix invokeai conda env cache path

* update mkdocs-flow.yml

* change conda-channel priority

* update create-caches

* update conda env also when cache was used

* os dependent conda env cache path

* use existing CONDA env pointing to conda root

* create CONDA_ROOT output from $CONDA

* use output variable to define test prompts

* use setup-python v4, get rid of PYTHON_BIN env

* add runner.os to result artifacts name

* update test-invoke-conda.yml:
- reuse macos-latest
- disable setup python 3.9
- setup conda with default python version
- create or update conda environment depending on cache success
- remove name parameter from conda update since name is set in env yml

* improve mkdocs-flow.yml

* disable cache-hugginface-torch
since preload_models.py downloads to more than one location

* update mkdocs-flow.yml with new name

* rename mkdocs action to mkdocs-material

* try to ignore error when creating conda env
maybe it would still be usable, lets see ;P

* remove bloat

* update environment-mac.yml
to match dependencies of invoke-ai/InvokeAI's main branch

* disable conda update, tweak prompt condition

* try to set some env vars for macOS to fix conda

* stop ignoring error, use env instead of outputs

* tweak `[[` conditions

* update python and pip dependencies
makes a difference of 1 sec per iteration compared to 3.9!!!
also I see no reason why using an old pip version would be beneficial

* remove unnecessary env for macOS
everything was pre-tested on my MacBook Air 2020 with M1

* update conda env in setup step

* activate conda env after installation

* update test-invoke-conda.yml
- set conda env dependent on matrix.os
- set CONDA_ENV_NAME to prevent breaking action when renaming conda env
- fix conda env activation

* fix activate conda env

* set bash -l as default shell

* use action to activate conda env

* add conda env file to env activation

* try to replace s-weigeand with conda-incubator

* remove azure-pipelines.yml
funniest part is that the macos runner is the same as the one on github!

* include environment-file in matrix
- also disable auto-activate-base and auto-update-conda
- include macos-latest and macos-12 for debugging purpose
- set miniforge-version in matrix

* fix miniforge-variant, set fail-fast to false

* add step to setup miniconda
- make default shell a matrix variable
- remove bloat

* use a mac env yml without pinned versions

* unpin nomkl, pytorch and torchvision
also removed opencv-python

* cache conda pkgs dir instead of conda env

* use python 3.10, exclude macos-12 from cache

* fix expression

* prepare for PR

* fix doubled id

* reuse pinned versions in mac conda env
- updated python pip version
- unpined pytorch and torchvision
- removed opencv-python
- updated versions to most recent (tested locally)

* fix classical copy/paste error

* remove unused env from shell-block comment

* fix hashFiles function to determine restore-keys

* reenable caching `~.cache`, update create-caches

* unpin all versions in mac conda env file
this was the only way I got it working in the action, also works locally
tested on MacBook Air 2020 M1
remove environment-mac-unpinned.yml

* prepare merge by removing this branch from trigger

* include pull_request trigger for main and dev

* remove pull_request trigger
2022-10-16 13:56:45 -04:00
mauwii
54f55471df remove pr trigger 2022-10-16 19:34:31 +02:00
Matthias Wild
cec7fb7dc6 squash merge update-gh-actions into fix-gh-actions
* fix mkdocs deployment

* update path to python bin

* add trigger for current branch

* change path to python_bin for mac as well

* try to use setup-python@v4 instead of setting env

* remove setup conda action

* try to use $CONDA

* remove overlooked action

* change branch from master to main

* sort out if then else for faster syntax

* remove more if functions

* add updates to create-caches as well

* eliminate the rest of if functions

* try to unpin pytorch and torchvision

* restore pinned versions

* try switching from set-output to use env

* update test-invoke-conda as well

* fix env var creation

* quote variable

* add second equal to compare

* try another way to use outputs

* fix outputs

* pip install for mac before creating conda env

* fix output variable

* fix python bin path

* remove pip install for before creating conda env

* unpin streamlit version in conda mac env

* try to make git-workflows better readable

* remove forgotten trigger

* Update-gh-actions (#6)

* fix mkdocs deployment

* update path to python bin

* add trigger for current branch

* change path to python_bin for mac as well

* try to use setup-python@v4 instead of setting env

* remove setup conda action

* try to use $CONDA

* remove overseen action

* change branch from master to main

* sort out if then else for faster syntax

* remove more if functions

* add updates to create-caches as well

* eliminate the rest of if functions

* try to unpin pytorch and torchvision

* restore pinned versions

* try switching from set-output to use env

* update test-invoke-conda as well

* fix env var creation

* quote variable

* add second equal to compare

* try another way to use outputs

* fix outputs

* pip install for mac before creating conda env

* fix output variable

* fix python bin path

* remove pip install for before creating conda env

* unpin streamlit version in conda mac env

* try to make git-workflows better readable

* use macos-latest

* try to update conda before creating mac env

* better conda update trial

* re-pin streamlit version

* re-added trigger to run workflow in current branch

* try to find out if conda mac env could be updated

* install cmake, protobuf and rust before conda

* add yes to conda update

* lets try anaconda3-2022.05

* try environment.yml for mac as well

* reenable conda mac env, add pip install
also fix gitignore by changing from dream to invoke

* remove
- unnecessary virtualenv creation
- conda update

change != macos back to == linux

* remove cmake from brew install since pre-installed

* disable opencv-python pip requirement

* fixed commands to find latest package versions

* update requirements for mac env

* back to the roots - only install conda env
depending on runner_os with or without extra env variables

* check out macOS in azure-pipelines
since becoming kind of tired of the GitHub Runner which is broken as ...

* let's try to setup python and update conda env

* initialize conda before using it

* add trigger in azure-pipelines.yml

* And another go for update first ....

* update azure-pipelines.yml
- add caching
- add checkpoint download
- add paths to trigger
and more

* unquote checkpoint-url

* fix checkpoint-url variable

* mkdir before downloading model

* set pr trigger to main, rename anaconda cache

* unique cacheHitVariables

* try to use macos-latest instead of macos-12

* update test-invoke-conda.yml:
- remove unnecessary echo step
- use s-weigand/setup-conda@v1
- remove conda update from install deps step since updated with action

* update test-invoke-conda.yml:
- rename conda env cache from ldm to invokeai
- reorder steps:
  1. checkout sources
  2. setup python
  3. setup conda
  4. keep order after set platform variables

* change macos back to 12 since also fails with 11

* update condition in run the tests
differentiate between main and non-main branches

* fix path to cache invokeai conda env

* fix invokeai conda env cache path

* update mkdocs-flow.yml

* change conda-channel priority

* update create-caches

* update conda env also when cache was used

* os dependent conda env cache path

* use existing CONDA env pointing to conda root

* create CONDA_ROOT output from $CONDA

* use output variable to define test prompts

* use setup-python v4, get rid of PYTHON_BIN env

* add runner.os to result artifacts name

* update test-invoke-conda.yml:
- reuse macos-latest
- disable setup python 3.9
- setup conda with default python version
- create or update conda environment depending on cache success
- remove name parameter from conda update since name is set in env yml

* improve mkdocs-flow.yml

* disable cache-hugginface-torch
since preload_models.py downloads to more than one location

* update mkdocs-flow.yml with new name

* rename mkdocs action to mkdocs-material

* try to ignore error when creating conda env
maybe it would still be usable, let's see ;P

* remove bloat

* update environment-mac.yml
to match dependencies of invoke-ai/InvokeAI's main branch

* disable conda update, tweak prompt condition

* try to set some env vars for macOS to fix conda

* stop ignoring error, use env instead of outputs

* tweak `[[` conditions

* update python and pip dependencies
makes a difference of 1 sec per iteration compared to 3.9!!!
also I see no reason why using an old pip version would be beneficial

* remove unnecessary env for macOS
everything was pre-tested on my MacBook Air 2020 with M1

* update conda env in setup step

* activate conda env after installation

* update test-invoke-conda.yml
- set conda env dependent on matrix.os
- set CONDA_ENV_NAME to prevent breaking action when renaming conda env
- fix conda env activation

* fix activate conda env

* set bash -l as default shell

* use action to activate conda env

* add conda env file to env activation

* try to replace s-weigeand with conda-incubator

* remove azure-pipelines.yml
funniest part is that the macos runner is the same as the one on github!

* include environment-file in matrix
- also disable auto-activate-base and auto-update-conda
- include macos-latest and macos-12 for debugging purpose
- set miniforge-version in matrix

* fix miniforge-variant, set fail-fast to false

* add step to setup miniconda
- make default shell a matrix variable
- remove bloat

* use a mac env yml without pinned versions

* unpin nomkl, pytorch and torchvision
also removed opencv-python

* cache conda pkgs dir instead of conda env

* use python 3.10, exclude macos-12 from cache

* fix expression

* prepare for PR

* fix doubled id

* reuse pinned versions in mac conda env
- updated python pip version
- unpinned pytorch and torchvision
- removed opencv-python
- updated versions to most recent (tested locally)

* fix classical copy/paste error

* remove unused env from shell-block comment

* fix hashFiles function to determine restore-keys

* reenable caching `~/.cache`, update create-caches

* unpin all versions in mac conda env file
this was the only way I got it working in the action, also works locally
tested on MacBook Air 2020 M1
remove environment-mac-unpinned.yml

* prepare merge by removing this branch from trigger

* include pull_request trigger for main and dev

* remove pull_request trigger
2022-10-16 19:19:49 +02:00
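
A recurring theme in the bullets above is moving step values out of the deprecated `set-output` command and into environment files. A minimal sketch of that pattern, assuming a step running under `bash -l {0}` and reusing the prompt-file paths that appear in the workflows further down this diff:

```sh
# Write values to $GITHUB_ENV so later steps can read them as ordinary env vars,
# instead of using the deprecated ::set-output workflow command.
if [[ "$GITHUB_REF" == 'refs/heads/main' ]]; then
  echo "TEST_PROMPTS=tests/preflight_prompts.txt" >> "$GITHUB_ENV"
else
  echo "TEST_PROMPTS=tests/dev_prompts.txt" >> "$GITHUB_ENV"
fi
echo "CONDA_ROOT=$CONDA" >> "$GITHUB_ENV"   # $CONDA points at the conda root on GitHub-hosted runners
```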
Lincoln Stein
b0b82efffe restore inline images
<div> around the inline images works great in gh-pages, but breaks plain old markdown in GitHub code display. This removes the <div>s, causing slight degradation in quality of gh-page appearance.
2022-10-16 12:07:21 -04:00
Lincoln Stein
e599604294 restore inline images
<div> seems to be messing with the ability of the plain-old markdown processor to display inline images. Slightly degrades appearance of gh-pages.
2022-10-16 12:05:33 -04:00
Eric Wolf
57a3ea9d7b Update 'ldm' env to 'invokeai' in troubleshooting steps 2022-10-16 11:23:00 -04:00
Conor Reid
a3a50bb886 Update generate.py
Fixed spelling mistake (open source king)
2022-10-15 16:02:14 -04:00
890 changed files with 274083 additions and 39179 deletions


@@ -1,3 +1,25 @@
# use this file as a whitelist
*
!environment*.yml
!docker-build
!invokeai
!ldm
!pyproject.toml
# Guard against pulling in any models that might exist in the directory tree
**/*.pt*
**/*.ckpt
# ignore frontend but whitelist dist
invokeai/frontend/
!invokeai/frontend/dist/
# ignore invokeai/assets but whitelist invokeai/assets/web
invokeai/assets/
!invokeai/assets/web/
# Byte-compiled / optimized / DLL files
**/__pycache__/
**/*.py[cod]
# Distribution / packaging
*.egg-info/
*.egg

12
.editorconfig Normal file

@@ -0,0 +1,12 @@
# All files
[*]
charset = utf-8
end_of_line = lf
indent_size = 2
indent_style = space
insert_final_newline = true
trim_trailing_whitespace = true
# Python
[*.py]
indent_size = 4

2
.gitattributes vendored

@@ -1,4 +1,4 @@
# Auto normalizes line endings on commit so devs don't need to change local settings.
# Only affects text files and ignores other file types.
# Only affects text files and ignores other file types.
# For more info see: https://www.aleksandrhovhannisyan.com/blog/crlf-vs-lf-normalizing-line-endings-in-git/
* text=auto

55
.github/CODEOWNERS vendored

@@ -1,4 +1,51 @@
ldm/invoke/pngwriter.py @CapableWeb
ldm/invoke/server_legacy.py @CapableWeb
scripts/legacy_api.py @CapableWeb
tests/legacy_tests.sh @CapableWeb
# continuous integration
/.github/workflows/ @mauwii @lstein @blessedcoolant
# documentation
/docs/ @lstein @mauwii @tildebyte @blessedcoolant
mkdocs.yml @lstein @mauwii @blessedcoolant
# installation and configuration
/pyproject.toml @mauwii @lstein @ebr @blessedcoolant
/docker/ @mauwii @lstein @blessedcoolant
/scripts/ @ebr @lstein @blessedcoolant
/installer/ @ebr @lstein @tildebyte @blessedcoolant
ldm/invoke/config @lstein @ebr @blessedcoolant
invokeai/assets @lstein @ebr @blessedcoolant
invokeai/configs @lstein @ebr @blessedcoolant
/ldm/invoke/_version.py @lstein @blessedcoolant
# web ui
/invokeai/frontend @blessedcoolant @psychedelicious @lstein
/invokeai/backend @blessedcoolant @psychedelicious @lstein
# generation and model management
/ldm/*.py @lstein @blessedcoolant
/ldm/generate.py @lstein @keturn @blessedcoolant
/ldm/invoke/args.py @lstein @blessedcoolant
/ldm/invoke/ckpt* @lstein @blessedcoolant
/ldm/invoke/ckpt_generator @lstein @blessedcoolant
/ldm/invoke/CLI.py @lstein @blessedcoolant
/ldm/invoke/config @lstein @ebr @mauwii @blessedcoolant
/ldm/invoke/generator @keturn @damian0815 @blessedcoolant
/ldm/invoke/globals.py @lstein @blessedcoolant
/ldm/invoke/merge_diffusers.py @lstein @blessedcoolant
/ldm/invoke/model_manager.py @lstein @blessedcoolant
/ldm/invoke/txt2mask.py @lstein @blessedcoolant
/ldm/invoke/patchmatch.py @Kyle0654 @blessedcoolant @lstein
/ldm/invoke/restoration @lstein @blessedcoolant
# attention, textual inversion, model configuration
/ldm/models @damian0815 @keturn @lstein @blessedcoolant
/ldm/modules @damian0815 @keturn @lstein @blessedcoolant
# Nodes
apps/ @Kyle0654 @lstein @blessedcoolant
# legacy REST API
# is CapableWeb still engaged?
/ldm/invoke/pngwriter.py @CapableWeb @lstein @blessedcoolant
/ldm/invoke/server_legacy.py @CapableWeb @lstein @blessedcoolant
/scripts/legacy_api.py @CapableWeb @lstein @blessedcoolant
/tests/legacy_tests.sh @CapableWeb @lstein @blessedcoolant

102
.github/ISSUE_TEMPLATE/BUG_REPORT.yml vendored Normal file

@@ -0,0 +1,102 @@
name: 🐞 Bug Report
description: File a bug report
title: '[bug]: '
labels: ['bug']
# assignees:
# - moderator_bot
# - lstein
body:
- type: markdown
attributes:
value: |
Thanks for taking the time to fill out this Bug Report!
- type: checkboxes
attributes:
label: Is there an existing issue for this?
description: |
Please use the [search function](https://github.com/invoke-ai/InvokeAI/issues?q=is%3Aissue+is%3Aopen+label%3Abug)
first to see if an issue already exists for the bug you encountered.
options:
- label: I have searched the existing issues
required: true
- type: markdown
attributes:
value: __Describe your environment__
- type: dropdown
id: os_dropdown
attributes:
label: OS
description: Which operating system did you use when the bug occurred?
multiple: false
options:
- 'Linux'
- 'Windows'
- 'macOS'
validations:
required: true
- type: dropdown
id: gpu_dropdown
attributes:
label: GPU
description: Which kind of graphics adapter is your system using?
multiple: false
options:
- 'cuda'
- 'amd'
- 'mps'
- 'cpu'
validations:
required: true
- type: input
id: vram
attributes:
label: VRAM
description: Size of the VRAM if known
placeholder: 8GB
validations:
required: false
- type: textarea
id: what-happened
attributes:
label: What happened?
description: |
Briefly describe what happened, what you expected to happen and how to reproduce this bug.
placeholder: When using the web interface and right-clicking on button X, error Y appears instead of the popup menu
validations:
required: true
- type: textarea
attributes:
label: Screenshots
description: If applicable, add screenshots to help explain your problem
placeholder: this is what the result looked like <screenshot>
validations:
required: false
- type: textarea
attributes:
label: Additional context
description: Add any other context about the problem here
placeholder: Only happens when there is full moon and Friday the 13th on Christmas Eve 🎅🏻
validations:
required: false
- type: input
id: contact
attributes:
label: Contact Details
description: __OPTIONAL__ How can we get in touch with you if we need more info (besides this issue)?
placeholder: ex. email@example.com, discordname, twitter, ...
validations:
required: false


@@ -0,0 +1,56 @@
name: Feature Request
description: Submit an idea or request a new feature
title: '[enhancement]: '
labels: ['enhancement']
# assignees:
# - lstein
# - tildebyte
body:
- type: markdown
attributes:
value: |
Thanks for taking the time to fill out this Feature request!
- type: checkboxes
attributes:
label: Is there an existing issue for this?
description: |
Please make use of the [search function](https://github.com/invoke-ai/InvokeAI/labels/enhancement)
to see if a similar issue already exists for the feature you want to request
options:
- label: I have searched the existing issues
required: true
- type: input
id: contact
attributes:
label: Contact Details
description: __OPTIONAL__ How could we get in touch with you if we need more info (besides this issue)?
placeholder: ex. email@example.com, discordname, twitter, ...
validations:
required: false
- type: textarea
id: whatisexpected
attributes:
label: What should this feature add?
description: Please try to explain the functionality this feature should add
placeholder: |
Instead of one huge textfield, it would be nice to have forms for bug-reports, feature-requests, ...
Great benefits with automatic labeling, assigning and other functionality not available in that form
via old-fashioned markdown-templates. I would also love to see the use of a moderator bot 🤖 like
https://github.com/marketplace/actions/issue-moderator-with-commands to auto close old issues and other things
validations:
required: true
- type: textarea
attributes:
label: Alternatives
description: Describe alternatives you've considered
placeholder: A clear and concise description of any alternative solutions or features you've considered.
- type: textarea
attributes:
label: Additional Content
description: Add any other context or screenshots about the feature request here.
placeholder: This is a Mockup of the design how I imagine it <screenshot>


@@ -1,36 +0,0 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: ''
assignees: ''
---
**Describe your environment**
- GPU: [cuda/amd/mps/cpu]
- VRAM: [if known]
- CPU arch: [x86/arm]
- OS: [Linux/Windows/macOS]
- Python: [Anaconda/miniconda/miniforge/pyenv/other (explain)]
- Branch: [if `git status` says anything other than "On branch main" paste it here]
- Commit: [run `git show` and paste the line that starts with "Merge" here]
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Additional context**
Add any other context about the problem here.

14
.github/ISSUE_TEMPLATE/config.yml vendored Normal file

@@ -0,0 +1,14 @@
blank_issues_enabled: false
contact_links:
- name: Project-Documentation
url: https://invoke-ai.github.io/InvokeAI/
about: Should be your first place to go when looking for manuals/FAQs regarding our InvokeAI Toolkit
- name: Discord
url: https://discord.gg/ZmtBAhwWhy
about: Our Discord Community could maybe help you out via live-chat
- name: GitHub Community Support
url: https://github.com/orgs/community/discussions
about: Please ask and answer questions regarding the GitHub Platform here.
- name: GitHub Security Bug Bounty
url: https://bounty.github.com/
about: Please report security vulnerabilities of the GitHub Platform here.


@@ -1,20 +0,0 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: ''
assignees: ''
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.


@@ -1,42 +1,111 @@
# Building the image without pushing to confirm it is still buildable
# confirming functionality would unfortunately need way more resources
name: build container image
on:
push:
branches:
- 'main'
- 'development'
pull_request:
branches:
- 'main'
- 'development'
- 'update/ci/docker/*'
- 'update/docker/*'
paths:
- 'pyproject.toml'
- 'ldm/**'
- 'invokeai/backend/**'
- 'invokeai/configs/**'
- 'invokeai/frontend/dist/**'
- 'docker/Dockerfile'
tags:
- 'v*.*.*'
workflow_dispatch:
jobs:
docker:
if: github.event.pull_request.draft == false
strategy:
fail-fast: false
matrix:
flavor:
- amd
- cuda
- cpu
include:
- flavor: amd
pip-extra-index-url: 'https://download.pytorch.org/whl/rocm5.2'
- flavor: cuda
pip-extra-index-url: ''
- flavor: cpu
pip-extra-index-url: 'https://download.pytorch.org/whl/cpu'
runs-on: ubuntu-latest
name: ${{ matrix.flavor }}
env:
PLATFORMS: 'linux/amd64,linux/arm64'
DOCKERFILE: 'docker/Dockerfile'
steps:
- name: prepare docker-tag
env:
repository: ${{ github.repository }}
run: echo "dockertag=${repository,,}" >> $GITHUB_ENV
- name: Checkout
uses: actions/checkout@v3
- name: Docker meta
id: meta
uses: docker/metadata-action@v4
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
images: |
ghcr.io/${{ github.repository }}
${{ vars.DOCKERHUB_REPOSITORY }}
tags: |
type=ref,event=branch
type=ref,event=tag
type=semver,pattern={{version}}
type=semver,pattern={{major}}.{{minor}}
type=semver,pattern={{major}}
type=sha,enable=true,prefix=sha-,format=short
flavor: |
latest=${{ matrix.flavor == 'cuda' && github.ref == 'refs/heads/main' }}
suffix=-${{ matrix.flavor }},onlatest=false
- name: Set up QEMU
uses: docker/setup-qemu-action@v2
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: Cache Docker layers
uses: actions/cache@v2
with:
path: /tmp/.buildx-cache
key: buildx-${{ hashFiles('docker-build/Dockerfile') }}
platforms: ${{ env.PLATFORMS }}
- name: Login to GitHub Container Registry
if: github.event_name != 'pull_request'
uses: docker/login-action@v2
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Login to Docker Hub
if: github.event_name != 'pull_request' && vars.DOCKERHUB_REPOSITORY != ''
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Build container
uses: docker/build-push-action@v3
id: docker_build
uses: docker/build-push-action@v4
with:
context: .
file: docker-build/Dockerfile
platforms: linux/amd64
push: false
tags: ${{ env.dockertag }}:latest
cache-from: type=local,src=/tmp/.buildx-cache
cache-to: type=local,dest=/tmp/.buildx-cache
file: ${{ env.DOCKERFILE }}
platforms: ${{ env.PLATFORMS }}
push: ${{ github.ref == 'refs/heads/main' || github.ref == 'refs/tags/*' }}
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
build-args: PIP_EXTRA_INDEX_URL=${{ matrix.pip-extra-index-url }}
cache-from: |
type=gha,scope=${{ github.ref_name }}-${{ matrix.flavor }}
type=gha,scope=main-${{ matrix.flavor }}
cache-to: type=gha,mode=max,scope=${{ github.ref_name }}-${{ matrix.flavor }}
- name: Docker Hub Description
if: github.ref == 'refs/heads/main' || github.ref == 'refs/tags/*' && vars.DOCKERHUB_REPOSITORY != ''
uses: peter-evans/dockerhub-description@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
repository: ${{ vars.DOCKERHUB_REPOSITORY }}
short-description: ${{ github.event.repository.description }}
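
The `prepare docker-tag` step above leans on bash's case-conversion parameter expansion, since image names pushed to a registry must be lowercase while `${{ github.repository }}` keeps the original casing. A standalone sketch of the same expansion (the repository string here is just an example value):

```sh
repository="Invoke-AI/InvokeAI"                      # example; the workflow injects ${{ github.repository }}
echo "dockertag=${repository,,}" >> "$GITHUB_ENV"    # ${var,,} lowercases the value: dockertag=invoke-ai/invokeai
```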

34
.github/workflows/clean-caches.yml vendored Normal file

@@ -0,0 +1,34 @@
name: cleanup caches by a branch
on:
pull_request:
types:
- closed
workflow_dispatch:
jobs:
cleanup:
runs-on: ubuntu-latest
steps:
- name: Check out code
uses: actions/checkout@v3
- name: Cleanup
run: |
gh extension install actions/gh-actions-cache
REPO=${{ github.repository }}
BRANCH=${{ github.ref }}
echo "Fetching list of cache key"
cacheKeysForPR=$(gh actions-cache list -R $REPO -B $BRANCH | cut -f 1 )
## Setting this to not fail the workflow while deleting cache keys.
set +e
echo "Deleting caches..."
for cacheKey in $cacheKeysForPR
do
gh actions-cache delete $cacheKey -R $REPO -B $BRANCH --confirm
done
echo "Done"
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
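
The cleanup job above is driven by the `gh-actions-cache` extension; roughly the same commands can be run by hand when a branch's caches need clearing outside the workflow. A sketch with placeholder repository and branch names:

```sh
gh extension install actions/gh-actions-cache      # one-time install of the cache extension
# list the cache keys for a branch, then delete them one by one
gh actions-cache list -R invoke-ai/InvokeAI -B my-feature-branch | cut -f 1 |
  while read -r key; do
    gh actions-cache delete "$key" -R invoke-ai/InvokeAI -B my-feature-branch --confirm
  done
```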


@@ -1,80 +0,0 @@
name: Create Caches
on: workflow_dispatch
jobs:
os_matrix:
strategy:
matrix:
os: [ubuntu-latest, macos-latest]
include:
- os: ubuntu-latest
environment-file: environment.yml
default-shell: bash -l {0}
- os: macos-latest
environment-file: environment-mac.yml
default-shell: bash -l {0}
name: Test invoke.py on ${{ matrix.os }} with conda
runs-on: ${{ matrix.os }}
defaults:
run:
shell: ${{ matrix.default-shell }}
steps:
- name: Checkout sources
uses: actions/checkout@v3
- name: setup miniconda
uses: conda-incubator/setup-miniconda@v2
with:
auto-activate-base: false
auto-update-conda: false
miniconda-version: latest
- name: set environment
run: |
[[ "$GITHUB_REF" == 'refs/heads/main' ]] \
&& echo "TEST_PROMPTS=tests/preflight_prompts.txt" >> $GITHUB_ENV \
|| echo "TEST_PROMPTS=tests/dev_prompts.txt" >> $GITHUB_ENV
echo "CONDA_ROOT=$CONDA" >> $GITHUB_ENV
echo "CONDA_ENV_NAME=invokeai" >> $GITHUB_ENV
- name: Use Cached Stable Diffusion v1.4 Model
id: cache-sd-v1-4
uses: actions/cache@v3
env:
cache-name: cache-sd-v1-4
with:
path: models/ldm/stable-diffusion-v1/model.ckpt
key: ${{ env.cache-name }}
restore-keys: ${{ env.cache-name }}
- name: Download Stable Diffusion v1.4 Model
if: ${{ steps.cache-sd-v1-4.outputs.cache-hit != 'true' }}
run: |
[[ -d models/ldm/stable-diffusion-v1 ]] \
|| mkdir -p models/ldm/stable-diffusion-v1
[[ -r models/ldm/stable-diffusion-v1/model.ckpt ]] \
|| curl \
-H "Authorization: Bearer ${{ secrets.HUGGINGFACE_TOKEN }}" \
-o models/ldm/stable-diffusion-v1/model.ckpt \
-L https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt
- name: Activate Conda Env
uses: conda-incubator/setup-miniconda@v2
with:
activate-environment: ${{ env.CONDA_ENV_NAME }}
environment-file: ${{ matrix.environment-file }}
- name: Use Cached Huggingface and Torch models
id: cache-hugginface-torch
uses: actions/cache@v3
env:
cache-name: cache-hugginface-torch
with:
path: ~/.cache
key: ${{ env.cache-name }}
restore-keys: |
${{ env.cache-name }}-${{ hashFiles('scripts/preload_models.py') }}
- name: run preload_models.py
run: python scripts/preload_models.py

29
.github/workflows/lint-frontend.yml vendored Normal file

@@ -0,0 +1,29 @@
name: Lint frontend
on:
pull_request:
paths:
- 'invokeai/frontend/**'
push:
paths:
- 'invokeai/frontend/**'
defaults:
run:
working-directory: invokeai/frontend
jobs:
lint-frontend:
if: github.event.pull_request.draft == false
runs-on: ubuntu-22.04
steps:
- name: Setup Node 18
uses: actions/setup-node@v3
with:
node-version: '18'
- uses: actions/checkout@v3
- run: 'yarn install --frozen-lockfile'
- run: 'yarn tsc'
- run: 'yarn run madge'
- run: 'yarn run lint --max-warnings=0'
- run: 'yarn run prettier --check'
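
The same checks can be reproduced locally before opening a pull request; a sketch assuming Node 18 and yarn are installed:

```sh
cd invokeai/frontend
yarn install --frozen-lockfile   # install with the lockfile pinned, as in CI
yarn tsc                         # type-check the frontend
yarn run madge                   # dependency-graph check via madge
yarn run lint --max-warnings=0   # lint, treating warnings as failures
yarn run prettier --check        # formatting check only, no writes
```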


@@ -1,28 +0,0 @@
name: Deploy
on:
push:
branches:
- main
# pull_request:
# branches:
# - main
jobs:
build:
name: Deploy docs to GitHub Pages
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Build
uses: Tiryoh/actions-mkdocs@v0
with:
mkdocs_version: 'latest' # option
requirements: '/requirements-mkdocs.txt' # option
configfile: '/mkdocs.yml' # option
- name: Deploy
uses: peaceiris/actions-gh-pages@v3
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
publish_dir: ./site


@@ -4,10 +4,10 @@ on:
branches:
- 'main'
- 'development'
- 'release-candidate-2-1'
jobs:
mkdocs-material:
if: github.event.pull_request.draft == false
runs-on: ubuntu-latest
steps:
- name: checkout sources
@@ -23,7 +23,7 @@ jobs:
- name: install requirements
run: |
python -m \
pip install -r requirements-mkdocs.txt
pip install -r docs/requirements-mkdocs.txt
- name: confirm buildability
run: |

20
.github/workflows/pyflakes.yml vendored Normal file

@@ -0,0 +1,20 @@
on:
pull_request:
push:
branches:
- main
- development
- 'release-candidate-*'
jobs:
pyflakes:
name: runner / pyflakes
if: github.event.pull_request.draft == false
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: pyflakes
uses: reviewdog/action-pyflakes@v1
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
reporter: github-pr-review

41
.github/workflows/pypi-release.yml vendored Normal file

@@ -0,0 +1,41 @@
name: PyPI Release
on:
push:
paths:
- 'ldm/invoke/_version.py'
workflow_dispatch:
jobs:
release:
if: github.repository == 'invoke-ai/InvokeAI'
runs-on: ubuntu-22.04
env:
TWINE_USERNAME: __token__
TWINE_PASSWORD: ${{ secrets.PYPI_API_TOKEN }}
TWINE_NON_INTERACTIVE: 1
steps:
- name: checkout sources
uses: actions/checkout@v3
- name: install deps
run: pip install --upgrade build twine
- name: build package
run: python3 -m build
- name: check distribution
run: twine check dist/*
- name: check PyPI versions
if: github.ref == 'refs/heads/main' || github.ref == 'refs/heads/v2.3'
run: |
pip install --upgrade requests
python -c "\
import scripts.pypi_helper; \
EXISTS=scripts.pypi_helper.local_on_pypi(); \
print(f'PACKAGE_EXISTS={EXISTS}')" >> $GITHUB_ENV
- name: upload package
if: env.PACKAGE_EXISTS == 'False' && env.TWINE_PASSWORD != ''
run: twine upload dist/*
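
The `check PyPI versions` step defers to `scripts/pypi_helper.py`, which is not shown in this diff; conceptually it decides whether the locally declared version has already been published. A hypothetical shell equivalent of that check (the `__version__` attribute name and the use of the public PyPI JSON API are assumptions, not the project's actual helper):

```sh
# Hypothetical stand-in for scripts.pypi_helper.local_on_pypi()
LOCAL_VERSION=$(python -c "from ldm.invoke import _version; print(_version.__version__)")
if curl -fsSL "https://pypi.org/pypi/InvokeAI/json" | grep -q "\"${LOCAL_VERSION}\""; then
  echo "PACKAGE_EXISTS=True"  >> "$GITHUB_ENV"   # already on PyPI; the upload step is skipped
else
  echo "PACKAGE_EXISTS=False" >> "$GITHUB_ENV"   # not published yet; the upload step runs
fi
```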


@@ -1,112 +0,0 @@
name: Test invoke.py
on:
push:
branches:
- 'main'
- 'development'
pull_request:
branches:
- 'main'
- 'development'
jobs:
matrix:
strategy:
fail-fast: false
matrix:
stable-diffusion-model:
# - 'https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt'
- 'https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt'
os:
- ubuntu-latest
- macOS-12
include:
- os: ubuntu-latest
environment-file: environment.yml
default-shell: bash -l {0}
- os: macOS-12
environment-file: environment-mac.yml
default-shell: bash -l {0}
# - stable-diffusion-model: https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt
# stable-diffusion-model-dl-path: models/ldm/stable-diffusion-v1/sd-v1-4.ckpt
# stable-diffusion-model-switch: stable-diffusion-1.4
- stable-diffusion-model: https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt
stable-diffusion-model-dl-path: models/ldm/stable-diffusion-v1/v1-5-pruned-emaonly.ckpt
stable-diffusion-model-switch: stable-diffusion-1.5
name: ${{ matrix.os }} with ${{ matrix.stable-diffusion-model-switch }}
runs-on: ${{ matrix.os }}
env:
CONDA_ENV_NAME: invokeai
defaults:
run:
shell: ${{ matrix.default-shell }}
steps:
- name: Checkout sources
id: checkout-sources
uses: actions/checkout@v3
- name: create models.yaml from example
run: cp configs/models.yaml.example configs/models.yaml
- name: Use cached conda packages
id: use-cached-conda-packages
uses: actions/cache@v3
with:
path: ~/conda_pkgs_dir
key: conda-pkgs-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles(matrix.environment-file) }}
- name: Activate Conda Env
id: activate-conda-env
uses: conda-incubator/setup-miniconda@v2
with:
activate-environment: ${{ env.CONDA_ENV_NAME }}
environment-file: ${{ matrix.environment-file }}
miniconda-version: latest
- name: set test prompt to main branch validation
if: ${{ github.ref == 'refs/heads/main' }}
run: echo "TEST_PROMPTS=tests/preflight_prompts.txt" >> $GITHUB_ENV
- name: set test prompt to development branch validation
if: ${{ github.ref == 'refs/heads/development' }}
run: echo "TEST_PROMPTS=tests/dev_prompts.txt" >> $GITHUB_ENV
- name: set test prompt to Pull Request validation
if: ${{ github.ref != 'refs/heads/main' && github.ref != 'refs/heads/development' }}
run: echo "TEST_PROMPTS=tests/validate_pr_prompt.txt" >> $GITHUB_ENV
- name: Download ${{ matrix.stable-diffusion-model-switch }}
id: download-stable-diffusion-model
run: |
[[ -d models/ldm/stable-diffusion-v1 ]] \
|| mkdir -p models/ldm/stable-diffusion-v1
curl \
-H "Authorization: Bearer ${{ secrets.HUGGINGFACE_TOKEN }}" \
-o ${{ matrix.stable-diffusion-model-dl-path }} \
-L ${{ matrix.stable-diffusion-model }}
- name: run preload_models.py
id: run-preload-models
run: |
python scripts/preload_models.py \
--no-interactive
- name: Run the tests
id: run-tests
run: |
time python scripts/invoke.py \
--model ${{ matrix.stable-diffusion-model-switch }} \
--from_file ${{ env.TEST_PROMPTS }}
- name: export conda env
id: export-conda-env
run: |
mkdir -p outputs/img-samples
conda env export --name ${{ env.CONDA_ENV_NAME }} > outputs/img-samples/environment-${{ runner.os }}-${{ runner.arch }}.yml
- name: Archive results
id: archive-results
uses: actions/upload-artifact@v3
with:
name: results_${{ matrix.os }}_${{ matrix.stable-diffusion-model-switch }}
path: outputs/img-samples


@@ -0,0 +1,67 @@
name: Test invoke.py pip
on:
pull_request:
paths-ignore:
- 'pyproject.toml'
- 'ldm/**'
- 'invokeai/backend/**'
- 'invokeai/configs/**'
- 'invokeai/frontend/dist/**'
merge_group:
workflow_dispatch:
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
matrix:
if: github.event.pull_request.draft == false
strategy:
matrix:
python-version:
# - '3.9'
- '3.10'
pytorch:
# - linux-cuda-11_6
- linux-cuda-11_7
- linux-rocm-5_2
- linux-cpu
- macos-default
- windows-cpu
# - windows-cuda-11_6
# - windows-cuda-11_7
include:
# - pytorch: linux-cuda-11_6
# os: ubuntu-22.04
# extra-index-url: 'https://download.pytorch.org/whl/cu116'
# github-env: $GITHUB_ENV
- pytorch: linux-cuda-11_7
os: ubuntu-22.04
github-env: $GITHUB_ENV
- pytorch: linux-rocm-5_2
os: ubuntu-22.04
extra-index-url: 'https://download.pytorch.org/whl/rocm5.2'
github-env: $GITHUB_ENV
- pytorch: linux-cpu
os: ubuntu-22.04
extra-index-url: 'https://download.pytorch.org/whl/cpu'
github-env: $GITHUB_ENV
- pytorch: macos-default
os: macOS-12
github-env: $GITHUB_ENV
- pytorch: windows-cpu
os: windows-2022
github-env: $env:GITHUB_ENV
# - pytorch: windows-cuda-11_6
# os: windows-2022
# extra-index-url: 'https://download.pytorch.org/whl/cu116'
# github-env: $env:GITHUB_ENV
# - pytorch: windows-cuda-11_7
# os: windows-2022
# extra-index-url: 'https://download.pytorch.org/whl/cu117'
# github-env: $env:GITHUB_ENV
name: ${{ matrix.pytorch }} on ${{ matrix.python-version }}
runs-on: ${{ matrix.os }}
steps:
- run: 'echo "No build required"'

148
.github/workflows/test-invoke-pip.yml vendored Normal file

@@ -0,0 +1,148 @@
name: Test invoke.py pip
on:
push:
branches:
- 'main'
paths:
- 'pyproject.toml'
- 'ldm/**'
- 'invokeai/backend/**'
- 'invokeai/configs/**'
- 'invokeai/frontend/dist/**'
pull_request:
paths:
- 'pyproject.toml'
- 'ldm/**'
- 'invokeai/backend/**'
- 'invokeai/configs/**'
- 'invokeai/frontend/dist/**'
types:
- 'ready_for_review'
- 'opened'
- 'synchronize'
merge_group:
workflow_dispatch:
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
matrix:
if: github.event.pull_request.draft == false
strategy:
matrix:
python-version:
# - '3.9'
- '3.10'
pytorch:
# - linux-cuda-11_6
- linux-cuda-11_7
- linux-rocm-5_2
- linux-cpu
- macos-default
- windows-cpu
# - windows-cuda-11_6
# - windows-cuda-11_7
include:
# - pytorch: linux-cuda-11_6
# os: ubuntu-22.04
# extra-index-url: 'https://download.pytorch.org/whl/cu116'
# github-env: $GITHUB_ENV
- pytorch: linux-cuda-11_7
os: ubuntu-22.04
github-env: $GITHUB_ENV
- pytorch: linux-rocm-5_2
os: ubuntu-22.04
extra-index-url: 'https://download.pytorch.org/whl/rocm5.2'
github-env: $GITHUB_ENV
- pytorch: linux-cpu
os: ubuntu-22.04
extra-index-url: 'https://download.pytorch.org/whl/cpu'
github-env: $GITHUB_ENV
- pytorch: macos-default
os: macOS-12
github-env: $GITHUB_ENV
- pytorch: windows-cpu
os: windows-2022
github-env: $env:GITHUB_ENV
# - pytorch: windows-cuda-11_6
# os: windows-2022
# extra-index-url: 'https://download.pytorch.org/whl/cu116'
# github-env: $env:GITHUB_ENV
# - pytorch: windows-cuda-11_7
# os: windows-2022
# extra-index-url: 'https://download.pytorch.org/whl/cu117'
# github-env: $env:GITHUB_ENV
name: ${{ matrix.pytorch }} on ${{ matrix.python-version }}
runs-on: ${{ matrix.os }}
env:
PIP_USE_PEP517: '1'
steps:
- name: Checkout sources
id: checkout-sources
uses: actions/checkout@v3
- name: set test prompt to main branch validation
if: ${{ github.ref == 'refs/heads/main' }}
run: echo "TEST_PROMPTS=tests/preflight_prompts.txt" >> ${{ matrix.github-env }}
- name: set test prompt to Pull Request validation
if: ${{ github.ref != 'refs/heads/main' }}
run: echo "TEST_PROMPTS=tests/validate_pr_prompt.txt" >> ${{ matrix.github-env }}
- name: setup python
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
cache: pip
cache-dependency-path: pyproject.toml
- name: install invokeai
env:
PIP_EXTRA_INDEX_URL: ${{ matrix.extra-index-url }}
run: >
pip3 install
--editable=".[test]"
- name: run pytest
id: run-pytest
run: pytest
- name: set INVOKEAI_OUTDIR
run: >
python -c
"import os;from ldm.invoke.globals import Globals;OUTDIR=os.path.join(Globals.root,str('outputs'));print(f'INVOKEAI_OUTDIR={OUTDIR}')"
>> ${{ matrix.github-env }}
- name: run invokeai-configure
id: run-preload-models
env:
HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGINGFACE_TOKEN }}
run: >
invokeai-configure
--yes
--default_only
--full-precision
# can't use fp16 weights without a GPU
- name: run invokeai
id: run-invokeai
env:
# Set offline mode to make sure configure preloaded successfully.
HF_HUB_OFFLINE: 1
HF_DATASETS_OFFLINE: 1
TRANSFORMERS_OFFLINE: 1
run: >
invokeai
--no-patchmatch
--no-nsfw_checker
--from_file ${{ env.TEST_PROMPTS }}
--outdir ${{ env.INVOKEAI_OUTDIR }}/${{ matrix.python-version }}/${{ matrix.pytorch }}
- name: Archive results
id: archive-results
uses: actions/upload-artifact@v3
with:
name: results
path: ${{ env.INVOKEAI_OUTDIR }}
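
The `set INVOKEAI_OUTDIR` step above folds a path computation into a single `python -c` expression so it fits a YAML folded scalar; spelled out as a plain shell step (assuming a bash runner) it amounts to:

```sh
# Resolve the output directory under the InvokeAI root and expose it to later steps.
OUTDIR=$(python -c "import os; from ldm.invoke.globals import Globals; print(os.path.join(Globals.root, 'outputs'))")
echo "INVOKEAI_OUTDIR=${OUTDIR}" >> "$GITHUB_ENV"
```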

33
.gitignore vendored

@@ -1,11 +1,14 @@
# ignore default image save location and model symbolic link
.idea/
embeddings/
outputs/
models/ldm/stable-diffusion-v1/model.ckpt
ldm/invoke/restoration/codeformer/weights
**/restoration/codeformer/weights
# ignore user models config
configs/models.user.yaml
config/models.user.yml
invokeai.init
# ignore the Anaconda/Miniconda installer used while building Docker image
anaconda.sh
@@ -70,6 +73,7 @@ coverage.xml
.hypothesis/
.pytest_cache/
cover/
junit/
# Translations
*.mo
@@ -184,7 +188,7 @@ src
**/__pycache__/
outputs
# Logs and associated folders
# Logs and associated folders
# created from generated embeddings.
logs
testtube
@@ -193,7 +197,7 @@ checkpoints
.DS_Store
# Let the frontend manage its own gitignore
!frontend/*
!invokeai/frontend/*
# Scratch folder
.scratch/
@@ -201,6 +205,7 @@ checkpoints
gfpgan/
models/ldm/stable-diffusion-v1/*.sha256
# GFPGAN model files
gfpgan/
@@ -208,4 +213,24 @@ gfpgan/
configs/models.yaml
# weights (will be created by installer)
models/ldm/stable-diffusion-v1/*.ckpt
models/ldm/stable-diffusion-v1/*.ckpt
models/clipseg
models/gfpgan
# ignore initfile
.invokeai
# ignore environment.yml and requirements.txt
# these are links to the real files in environments-and-requirements
environment.yml
requirements.txt
# source installer files
installer/*zip
installer/install.bat
installer/install.sh
installer/update.bat
installer/update.sh
# no longer stored in source directory
models

128
CODE_OF_CONDUCT.md Normal file

@@ -0,0 +1,128 @@
# Contributor Covenant Code of Conduct
## Our Pledge
We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, religion, or sexual identity
and orientation.
We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.
## Our Standards
Examples of behavior that contributes to a positive environment for our
community include:
* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes,
and learning from the experience
* Focusing on what is best not just for us as individuals, but for the
overall community
Examples of unacceptable behavior include:
* The use of sexualized language or imagery, and sexual attention or
advances of any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email
address, without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Enforcement Responsibilities
Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.
Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
decisions when appropriate.
## Scope
This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official e-mail address,
posting via an official social media account, or acting as an appointed
representative at an online or offline event.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior
may be reported to the community leaders responsible for enforcement
at https://github.com/invoke-ai/InvokeAI/issues. All complaints will
be reviewed and investigated promptly and fairly.
All community leaders are obligated to respect the privacy and security of the
reporter of any incident.
## Enforcement Guidelines
Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:
### 1. Correction
**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.
**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.
### 2. Warning
**Community Impact**: A violation through a single incident or series
of actions.
**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or
permanent ban.
### 3. Temporary Ban
**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.
**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.
### 4. Permanent Ban
**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.
**Consequence**: A permanent ban from any sort of public interaction within
the community.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 2.0, available at
https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.
Community Impact Guidelines were inspired by [Mozilla's code of conduct
enforcement ladder](https://github.com/mozilla/diversity).
[homepage]: https://www.contributor-covenant.org
For answers to common questions about this code of conduct, see the FAQ at
https://www.contributor-covenant.org/faq. Translations are available at
https://www.contributor-covenant.org/translations.


@@ -0,0 +1,84 @@
<img src="docs/assets/invoke_ai_banner.png" align="center">
Invoke-AI is a community of software developers, researchers, and user
interface experts who have come together on a voluntary basis to build
software tools which support cutting edge AI text-to-image
applications. This community is open to anyone who wishes to
contribute to the effort and has the skill and time to do so.
# Our Values
The InvokeAI team is a diverse community which includes individuals
from various parts of the world and many walks of life. Despite our
differences, we share a number of core values which we ask prospective
contributors to understand and respect. We believe:
1. That Open Source Software is a positive force in the world. We
create software that can be used, reused, and redistributed, without
restrictions, under a straightforward Open Source license (MIT). We
believe that Open Source benefits society as a whole by increasing the
availability of high quality software to all.
2. That those who create software should receive proper attribution
for their creative work. While we support the exchange and reuse of
Open Source Software, we feel strongly that the original authors of a
piece of code should receive credit for their contribution, and we
endeavor to do so whenever possible.
3. That there is moral ambiguity surrounding AI-assisted art. We are
aware of the moral and ethical issues surrounding the release of the
Stable Diffusion model and similar products. We are aware that, due to
the composition of their training sets, current AI-generated image
models are biased against certain ethnic groups, cultural concepts of
beauty, ethnic stereotypes, and gender roles.
1. We recognize the potential for harm to these groups that these biases
represent and trust that future AI models will take steps towards
reducing or eliminating the biases noted above, respect and give due
credit to the artists whose work is sourced, and call on developers
and users to favor these models over the older ones as they become
available.
4. We are deeply committed to ensuring that this technology benefits
everyone, including artists. We see AI art not as a replacement for
the artist, but rather as a tool to empower them. With that
in mind, we are constantly debating how to build systems that put
artists' needs first: tools which can be readily integrated into an
artist's existing workflows and practices, enhancing their work and
helping them to push it further. Every decision we take as a team,
which includes several artists, aims to build towards that goal.
5. That artificial intelligence can be a force for good in the world,
but must be used responsibly. Artificial intelligence technologies
have the potential to improve society, in everything from cancer care,
to customer service, to creative writing.
1. While we do not believe that software should arbitrarily limit what
users can do with it, we recognize that when used irresponsibly, AI
has the potential to do much harm. Our Discord server is actively
moderated in order to minimize the potential of harm from
user-contributed images. In addition, we ask users of our software to
refrain from using it in any way that would cause mental, emotional or
physical harm to individuals and vulnerable populations including (but
not limited to) women; minors; ethnic minorities; religious groups;
members of LGBTQIA communities; and people with disabilities or
impairments.
2. Note that some of the image generation AI models which the Invoke-AI
toolkit supports carry licensing agreements which impose restrictions
on how the model is used. We ask that our users read and agree to
these terms if they wish to make use of these models. These agreements
are distinct from the MIT license which applies to the InvokeAI
software and source code.
6. That mutual respect is key to a healthy software development
community. Members of the InvokeAI community are expected to treat
each other with respect, beneficence, and empathy. Each of us has a
different background and a unique set of skills. We strive to help
each other grow and gain new skills, and we apportion expectations in
a way that balances the members' time, skillset, and interest
area. Disputes are resolved by open and honest communication.
## Signature
This document has been collectively crafted and approved by the current InvokeAI team members, as of 28 Nov 2022: **lstein** (Lincoln Stein), **blessedcoolant**, **hipsterusername** (Kent Keirsey), **Kyle0654** (Kyle Schouviller), **damian0815**, **mauwii** (Matthias Wild), **Netsvetaev** (Artur Netsvetaev), **psychedelicious**, **tildebyte**, **keturn**, and **ebr** (Eugene Brodsky). Although individuals within the group may hold differing views on particular details and/or their implications, we are all in agreement about its fundamental statements, as well as their significance and importance to this project moving forward.

13
LICENSE

@@ -1,17 +1,6 @@
MIT License
Copyright (c) 2022 Lincoln Stein and InvokeAI Organization
This software is derived from a fork of the source code available from
https://github.com/pesser/stable-diffusion and
https://github.com/CompViz/stable-diffusion. They carry the following
copyrights:
Copyright (c) 2022 Machine Vision and Learning Group, LMU Munich
Copyright (c) 2022 Robin Rombach and Patrick Esser and contributors
Please see individual source code files for copyright and authorship
attributions.
Copyright (c) 2022 InvokeAI Team
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal

336
README.md

@@ -1,23 +1,19 @@
<div align="center">
![project logo](https://github.com/invoke-ai/InvokeAI/raw/main/docs/assets/invoke_ai_banner.png)
# InvokeAI: A Stable Diffusion Toolkit
_Formerly known as lstein/stable-diffusion_
![project logo](docs/assets/logo.png)
[![discord badge]][discord link]
[![latest release badge]][latest release link] [![github stars badge]][github stars link] [![github forks badge]][github forks link]
[![CI checks on main badge]][CI checks on main link] [![CI checks on dev badge]][CI checks on dev link] [![latest commit to dev badge]][latest commit to dev link]
[![CI checks on main badge]][CI checks on main link] [![latest commit to main badge]][latest commit to main link]
[![github open issues badge]][github open issues link] [![github open prs badge]][github open prs link]
[![github open issues badge]][github open issues link] [![github open prs badge]][github open prs link] [![translation status badge]][translation status link]
[CI checks on dev badge]: https://flat.badgen.net/github/checks/invoke-ai/InvokeAI/development?label=CI%20status%20on%20dev&cache=900&icon=github
[CI checks on dev link]: https://github.com/invoke-ai/InvokeAI/actions?query=branch%3Adevelopment
[CI checks on main badge]: https://flat.badgen.net/github/checks/invoke-ai/InvokeAI/main?label=CI%20status%20on%20main&cache=900&icon=github
[CI checks on main link]: https://github.com/invoke-ai/InvokeAI/actions/workflows/test-invoke-conda.yml
[CI checks on main link]:https://github.com/invoke-ai/InvokeAI/actions?query=branch%3Amain
[discord badge]: https://flat.badgen.net/discord/members/ZmtBAhwWhy?icon=discord
[discord link]: https://discord.gg/ZmtBAhwWhy
[github forks badge]: https://flat.badgen.net/github/forks/invoke-ai/InvokeAI?icon=github
@@ -28,178 +24,264 @@ _Formerly known as lstein/stable-diffusion_
[github open prs link]: https://github.com/invoke-ai/InvokeAI/pulls?q=is%3Apr+is%3Aopen
[github stars badge]: https://flat.badgen.net/github/stars/invoke-ai/InvokeAI?icon=github
[github stars link]: https://github.com/invoke-ai/InvokeAI/stargazers
[latest commit to dev badge]: https://flat.badgen.net/github/last-commit/invoke-ai/InvokeAI/development?icon=github&color=yellow&label=last%20dev%20commit&cache=900
[latest commit to dev link]: https://github.com/invoke-ai/InvokeAI/commits/development
[latest commit to main badge]: https://flat.badgen.net/github/last-commit/invoke-ai/InvokeAI/main?icon=github&color=yellow&label=last%20dev%20commit&cache=900
[latest commit to main link]: https://github.com/invoke-ai/InvokeAI/commits/main
[latest release badge]: https://flat.badgen.net/github/release/invoke-ai/InvokeAI/development?icon=github
[latest release link]: https://github.com/invoke-ai/InvokeAI/releases
[translation status badge]: https://hosted.weblate.org/widgets/invokeai/-/svg-badge.svg
[translation status link]: https://hosted.weblate.org/engage/invokeai/
</div>
This is a fork of
[CompVis/stable-diffusion](https://github.com/CompVis/stable-diffusion),
the open source text-to-image generator. It provides a streamlined
process with various new features and options to aid the image
generation process. It runs on Windows, Mac and Linux machines, with
GPU cards with as little as 4 GB of RAM. It provides both a polished
Web interface (see below), and an easy-to-use command-line interface.
InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry leading Web Interface, interactive Command Line Interface, and also serves as the foundation for multiple commercial products.
**Quick links**: [<a href="https://discord.gg/NwVCmKwY">Discord Server</a>] [<a href="https://invoke-ai.github.io/InvokeAI/">Documentation and Tutorials</a>] [<a href="https://github.com/invoke-ai/InvokeAI/">Code and Downloads</a>] [<a href="https://github.com/invoke-ai/InvokeAI/issues">Bug Reports</a>] [<a href="https://github.com/invoke-ai/InvokeAI/discussions">Discussion, Ideas & Q&A</a>]
**Quick links**: [[How to Install](https://invoke-ai.github.io/InvokeAI/#installation)] [<a href="https://discord.gg/ZmtBAhwWhy">Discord Server</a>] [<a href="https://invoke-ai.github.io/InvokeAI/">Documentation and Tutorials</a>] [<a href="https://github.com/invoke-ai/InvokeAI/">Code and Downloads</a>] [<a href="https://github.com/invoke-ai/InvokeAI/issues">Bug Reports</a>] [<a href="https://github.com/invoke-ai/InvokeAI/discussions">Discussion, Ideas & Q&A</a>]
<div align="center"><img src="docs/assets/invoke-web-server-1.png" width=640></div>
_Note: This fork is rapidly evolving. Please use the
_Note: InvokeAI is rapidly evolving. Please use the
[Issues](https://github.com/invoke-ai/InvokeAI/issues) tab to report bugs and make feature
requests. Be sure to use the provided templates. They will help aid diagnose issues faster._
requests. Be sure to use the provided templates. They will help us diagnose issues faster._
<div align="center">
![canvas preview](https://github.com/invoke-ai/InvokeAI/raw/main/docs/assets/canvas_preview.png)
</div>
## Table of Contents
1. [Installation](#installation)
2. [Hardware Requirements](#hardware-requirements)
3. [Features](#features)
4. [Latest Changes](#latest-changes)
5. [Troubleshooting](#troubleshooting)
6. [Contributing](#contributing)
7. [Contributors](#contributors)
8. [Support](#support)
9. [Further Reading](#further-reading)
1. [Quick Start](#getting-started-with-invokeai)
2. [Installation](#detailed-installation-instructions)
3. [Hardware Requirements](#hardware-requirements)
4. [Features](#features)
5. [Latest Changes](#latest-changes)
6. [Troubleshooting](#troubleshooting)
7. [Contributing](#contributing)
8. [Contributors](#contributors)
9. [Support](#support)
10. [Further Reading](#further-reading)
### Installation
## Getting Started with InvokeAI
This fork is supported across multiple platforms. You can find individual installation instructions
below.
For full installation and upgrade instructions, please see:
[InvokeAI Installation Overview](https://invoke-ai.github.io/InvokeAI/installation/)
- #### [Linux](docs/installation/INSTALL_LINUX.md)
### Automatic Installer (suggested for 1st time users)
- #### [Windows](docs/installation/INSTALL_WINDOWS.md)
1. Go to the bottom of the [Latest Release Page](https://github.com/invoke-ai/InvokeAI/releases/latest)
- #### [Macintosh](docs/installation/INSTALL_MAC.md)
2. Download the .zip file for your OS (Windows/macOS/Linux).
### Hardware Requirements
3. Unzip the file.
#### System
4. If you are on Windows, double-click on the `install.bat` script. On
macOS, open a Terminal window, drag the file `install.sh` from Finder
into the Terminal, and press return. On Linux, run `install.sh`.
You will need one of the following:
5. You'll be asked to confirm the location of the folder in which
to install InvokeAI and its image generation model files. Pick a
location with at least 15 GB of free memory. More if you plan on
installing lots of models.
6. Wait while the installer does its thing. After installing the software,
the installer will launch a script that lets you configure InvokeAI and
select a set of starting image generation models.
7. Find the folder that InvokeAI was installed into (it is not the
same as the unpacked zip file directory!) The default location of this
folder (if you didn't change it in step 5) is `~/invokeai` on
Linux/Mac systems, and `C:\Users\YourName\invokeai` on Windows. This directory will contain launcher scripts named `invoke.sh` and `invoke.bat`.
8. On Windows systems, double-click on the `invoke.bat` file. On
macOS, open a Terminal window, drag `invoke.sh` from the folder into
the Terminal, and press return. On Linux, run `invoke.sh`
9. Press 2 to open the "browser-based UI", press enter/return, wait a
minute or two for Stable Diffusion to start up, then open your browser
and go to http://localhost:9090.
10. Type `banana sushi` in the box on the top left and click `Invoke`
### Command-Line Installation (for users familiar with Terminals)
You must have Python 3.9 or 3.10 installed on your machine. Earlier or later versions are
not supported.
1. Open a command-line window on your machine. PowerShell is recommended for Windows.
2. Create a directory to install InvokeAI into. You'll need at least 15 GB of free space:
```terminal
mkdir invokeai
```
3. Create a virtual environment named `.venv` inside this directory and activate it:
```terminal
cd invokeai
python -m venv .venv --prompt InvokeAI
```
4. Activate the virtual environment (do it every time you run InvokeAI)
_For Linux/Mac users:_
```sh
source .venv/bin/activate
```
_For Windows users:_
```ps
.venv\Scripts\activate
```
5. Install the InvokeAI module and its dependencies. Choose the command suited for your platform & GPU.
_For Windows/Linux with an NVIDIA GPU:_
```terminal
pip install InvokeAI[xformers] --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu117
```
_For Linux with an AMD GPU:_
```sh
pip install InvokeAI --use-pep517 --extra-index-url https://download.pytorch.org/whl/rocm5.2
```
_For Macintoshes, either Intel or M1/M2:_
```sh
pip install InvokeAI --use-pep517
```
6. Configure InvokeAI and install a starting set of image generation models (you only need to do this once):
```terminal
invokeai-configure
```
7. Launch the web server (do it every time you run InvokeAI):
```terminal
invokeai --web
```
8. Point your browser to http://localhost:9090 to bring up the web interface.
9. Type `banana sushi` in the box on the top left and click `Invoke`.
Be sure to activate the virtual environment each time before re-launching InvokeAI,
using `source .venv/bin/activate` or `.venv\Scripts\activate`.
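Taken together, the command-line route boils down to a short sequence; a condensed sketch for Linux with an NVIDIA GPU (other platforms substitute the `pip install` variant from step 5):
```sh
mkdir invokeai && cd invokeai
python -m venv .venv --prompt InvokeAI
source .venv/bin/activate
pip install "InvokeAI[xformers]" --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu117
invokeai-configure    # one-time: download a starting set of models
invokeai --web        # then browse to http://localhost:9090
```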
### Detailed Installation Instructions
This fork is supported across Linux, Windows and Macintosh. Linux
users can use either an Nvidia-based card (with CUDA support) or an
AMD card (using the ROCm driver). For full installation and upgrade
instructions, please see:
[InvokeAI Installation Overview](https://invoke-ai.github.io/InvokeAI/installation/INSTALL_SOURCE/)
## Hardware Requirements
InvokeAI is supported across Linux, Windows and macOS. Linux
users can use either an Nvidia-based card (with CUDA support) or an
AMD card (using the ROCm driver).
### System
You will need one of the following:
- An NVIDIA-based graphics card with 4 GB or more VRAM memory.
- An Apple computer with an M1 chip.
- An AMD-based graphics card with 4GB or more VRAM memory. (Linux only)
#### Memory
We do not recommend the GTX 1650 or 1660 series video cards. They are
unable to run in half-precision mode and do not have sufficient VRAM
to render 512x512 images.
### Memory
- At least 12 GB Main Memory RAM.
#### Disk
### Disk
- At least 12 GB of free disk space for the machine learning model, Python, and all its dependencies.
**Note**
If you have an Nvidia 10xx series card (e.g. the 1080ti), please
run the dream script in full-precision mode as shown below.
Similarly, specify full-precision mode on Apple M1 hardware.
Precision is auto configured based on the device. If, however, you encounter
errors like 'expected type Float but found Half' or 'not implemented for Half',
you can try starting `invoke.py` with the `--precision=float32` flag:
```bash
(ldm) ~/stable-diffusion$ python scripts/invoke.py --precision=float32
```
## Features
Feature documentation can be reviewed by navigating to [the InvokeAI Documentation page](https://invoke-ai.github.io/InvokeAI/features/)
#### Major Features
- [Web Server](docs/features/WEB.md)
- [Interactive Command Line Interface](docs/features/CLI.md)
- [Image To Image](docs/features/IMG2IMG.md)
- [Inpainting Support](docs/features/INPAINTING.md)
- [Outpainting Support](docs/features/OUTPAINTING.md)
- [Upscaling, face-restoration and outpainting](docs/features/POSTPROCESS.md)
- [Seamless Tiling](docs/features/OTHER.md#seamless-tiling)
- [Google Colab](docs/features/OTHER.md#google-colab)
- [Reading Prompts From File](docs/features/PROMPTS.md#reading-prompts-from-a-file)
- [Shortcut: Reusing Seeds](docs/features/OTHER.md#shortcuts-reusing-seeds)
- [Prompt Blending](docs/features/PROMPTS.md#prompt-blending)
- [Thresholding and Perlin Noise Initialization Options](/docs/features/OTHER.md#thresholding-and-perlin-noise-initialization-options)
- [Negative/Unconditioned Prompts](docs/features/PROMPTS.md#negative-and-unconditioned-prompts)
- [Variations](docs/features/VARIATIONS.md)
- [Personalizing Text-to-Image Generation](docs/features/TEXTUAL_INVERSION.md)
- [Simplified API for text to image generation](docs/features/OTHER.md#simplified-api)
#### Other Features
- [Creating Transparent Regions for Inpainting](docs/features/INPAINTING.md#creating-transparent-regions-for-inpainting)
- [Preload Models](docs/features/OTHER.md#preload-models)
### *Web Server & UI*
InvokeAI offers a locally hosted Web Server & React Frontend, with an industry-leading user experience. The web-based UI allows for simple and intuitive workflows, and is responsive for use on mobile devices and tablets accessing the web server.
### *Unified Canvas*
The Unified Canvas is a fully integrated canvas implementation with support for all core generation capabilities, in/outpainting, brush tools, and more. This creative tool unlocks the capability for artists to create with AI as a creative collaborator, and can be used to augment AI-generated imagery, sketches, photography, renders, and more.
### *Advanced Prompt Syntax*
InvokeAI's advanced prompt syntax allows for token weighting, cross-attention control, and prompt blending, allowing for fine-tuned tweaking of your invocations and exploration of the latent space.
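As a rough sketch of what that can look like in practice, the lines below are purely illustrative; the operator names are assumptions on our part, so consult [Prompt Blending](docs/features/PROMPTS.md#prompt-blending) and the rest of the prompt documentation for the exact syntax supported by your release.
```
a cat playing with a ball++ in the forest              # illustrative '+' token weighting
a ("fluffy cat").swap("smiling dog") eating a hotdog   # illustrative cross-attention swap
("blue sphere", "red cube").blend(0.25, 0.75)          # illustrative prompt blending
```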
### *Command Line Interface*
For users utilizing a terminal-based environment, or who want to take advantage of CLI features, InvokeAI offers an extensive and actively supported command-line interface that provides the full suite of generation functionality available in the tool.
### Other features
- *Support for both ckpt and diffusers models*
- *SD 2.0, 2.1 support*
- *Noise Control & Thresholding*
- *Popular Sampler Support*
- *Upscaling & Face Restoration Tools*
- *Embedding Manager & Support*
- *Model Manager & Support*
### Coming Soon
- *Node-Based Architecture & UI*
- And more...
### Latest Changes
For our latest changes, view our [Release
Notes](https://github.com/invoke-ai/InvokeAI/releases) and the
[CHANGELOG](docs/CHANGELOG.md).
- v2.0.1 (13 October 2022)
  - fix noisy images at high step count when using k* samplers
  - dream.py script now calls invoke.py module directly rather than
    via a new python process (which could break the environment)
- v2.0.0 (9 October 2022)
  - `dream.py` script renamed `invoke.py`. A `dream.py` script wrapper remains
    for backward compatibility.
  - Completely new WebGUI - launch with `python3 scripts/invoke.py --web`
  - Support for <a href="https://github.com/invoke-ai/InvokeAI/blob/main/docs/features/INPAINTING.md">inpainting</a> and <a href="https://github.com/invoke-ai/InvokeAI/blob/main/docs/features/OUTPAINTING.md">outpainting</a>
  - img2img runs on all k* samplers
  - Support for <a href="https://github.com/invoke-ai/InvokeAI/blob/main/docs/features/PROMPTS.md#negative-and-unconditioned-prompts">negative prompts</a>
  - Support for CodeFormer face reconstruction
  - Support for Textual Inversion on Macintoshes
  - Support in both WebGUI and CLI for <a href="https://github.com/invoke-ai/InvokeAI/blob/main/docs/features/POSTPROCESS.md">post-processing of previously-generated images</a>
    using facial reconstruction, ESRGAN upscaling, outcropping (similar to DALL-E infinite canvas),
    and "embiggen" upscaling. See the `!fix` command.
  - New `--hires` option on `invoke>` line allows <a href="https://github.com/invoke-ai/InvokeAI/blob/main/docs/features/CLI.md#this-is-an-example-of-txt2img">larger images to be created without duplicating elements</a>, at the cost of some performance.
  - New `--perlin` and `--threshold` options allow you to add and control variation
    during image generation (see <a href="https://github.com/invoke-ai/InvokeAI/blob/main/docs/features/OTHER.md#thresholding-and-perlin-noise-initialization-options">Thresholding and Perlin Noise Initialization</a>).
  - Extensive metadata now written into PNG files, allowing reliable regeneration of images
    and tweaking of previous settings.
  - Command-line completion in `invoke.py` now works on Windows, Linux and Mac platforms.
  - Improved <a href="https://github.com/invoke-ai/InvokeAI/blob/main/docs/features/CLI.md">command-line completion behavior</a>. New commands added:
    * List command-line history with `!history`
    * Search command-line history with `!search`
    * Clear history with `!clear`
  - Deprecated `--full_precision` / `-F`. Simply omit it and `invoke.py` will auto
    configure. To switch away from auto use the new flag like `--precision=float32`.
For older changelogs, please visit the **[CHANGELOG](docs/features/CHANGELOG.md)**.
## Troubleshooting
Please check out our **[Q&A](https://invoke-ai.github.io/InvokeAI/help/TROUBLESHOOT/#faq)** to get solutions for common installation
problems and other issues.
## Contributing
Anyone who wishes to contribute to this project, whether documentation, features, bug fixes, code
cleanup, testing, or code reviews, is very much encouraged to do so.
To join, just raise your hand on the InvokeAI Discord server (#dev-chat) or the GitHub discussion board.
If you'd like to help with translation, please see our [translation guide](docs/other/TRANSLATION.md).
If you are unfamiliar with how to contribute to GitHub projects, here is a
[Getting Started Guide](https://opensource.com/article/19/7/create-pull-request-github). A full set of
contribution guidelines, along with templates, is in progress. You can **make your pull request
against the "main" branch**.
We hope you enjoy using our software as much as we enjoy creating it,
and we hope that some of those of you who are reading this will elect
to become part of our community.
Welcome to InvokeAI!
### Contributors
This fork is a combined effort of various people from across the world.
[Check out the list of all these amazing people](https://invoke-ai.github.io/InvokeAI/other/CONTRIBUTORS/). We thank them for
their time, hard work and effort.
Thanks to [Weblate](https://weblate.org/) for generously providing translation services to this project.
### Support
For support, please use this repository's GitHub Issues tracking service, or join the Discord.
Original portions of the software are Copyright (c) 2020 [Lincoln D. Stein](https://github.com/lstein) and Copyright (c) 2023 by respective contributors.
### Further Reading
Please see the original README for more information on this software and underlying algorithm,
located in the file [README-CompViz.md](docs/other/README-CompViz.md).


@@ -21,7 +21,7 @@ This model card focuses on the model associated with the Stable Diffusion model,
# Uses
## Direct Use
The model is intended for research purposes only. Possible research areas and
tasks include
@@ -68,11 +68,11 @@ Using the model to generate content that is cruel to individuals is a misuse of
considerations.
### Bias
While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
Stable Diffusion v1 was trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/),
which consists of images that are primarily limited to English descriptions.
Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for.
This affects the overall output of the model, as white and western cultures are often set as the default. Further, the
ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts.
@@ -84,7 +84,7 @@ The model developers used the following dataset for training the model:
- LAION-2B (en) and subsets thereof (see next section)
**Training Procedure**
Stable Diffusion v1 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training,
- Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4
- Text prompts are encoded through a ViT-L/14 text-encoder.
@@ -108,12 +108,12 @@ filtered to images with an original size `>= 512x512`, estimated aesthetics scor
- **Batch:** 32 x 8 x 2 x 4 = 2048
- **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant
## Evaluation Results
Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0,
5.0, 6.0, 7.0, 8.0) and 50 PLMS sampling
steps show the relative improvements of the checkpoints:
![pareto](assets/v1-variants-scores.jpg)
Evaluated using 50 PLMS steps and 10000 random prompts from the COCO2017 validation set, evaluated at 512x512 resolution. Not optimized for FID scores.
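For readers unfamiliar with the guidance-scale parameter swept above, classifier-free guidance combines the conditional and unconditional noise predictions as follows; this is the standard formulation rather than anything specific to this model card, with $s$ denoting the guidance scale:
$$\hat{\epsilon}_\theta(x_t, c) = \epsilon_\theta(x_t, \varnothing) + s\,\bigl(\epsilon_\theta(x_t, c) - \epsilon_\theta(x_t, \varnothing)\bigr)$$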
## Environmental Impact

File diff suppressed because it is too large

Binary file not shown.


@@ -0,0 +1,164 @@
@echo off
@rem This script will install git (if not found on the PATH variable)
@rem using micromamba (an 8mb static-linked single-file binary, conda replacement).
@rem For users who already have git, this step will be skipped.
@rem Next, it'll download the project's source code.
@rem Then it will download a self-contained, standalone Python and unpack it.
@rem Finally, it'll create the Python virtual environment and preload the models.
@rem This enables a user to install this project without manually installing git or Python
@rem change to the script's directory
PUSHD "%~dp0"
set "no_cache_dir=--no-cache-dir"
if "%1" == "use-cache" (
set "no_cache_dir="
)
echo ***** Installing InvokeAI.. *****
@rem Config
set INSTALL_ENV_DIR=%cd%\installer_files\env
@rem https://mamba.readthedocs.io/en/latest/installation.html
set MICROMAMBA_DOWNLOAD_URL=https://github.com/cmdr2/stable-diffusion-ui/releases/download/v1.1/micromamba.exe
set RELEASE_URL=https://github.com/invoke-ai/InvokeAI
set RELEASE_SOURCEBALL=/archive/refs/heads/main.tar.gz
set PYTHON_BUILD_STANDALONE_URL=https://github.com/indygreg/python-build-standalone/releases/download
set PYTHON_BUILD_STANDALONE=20221002/cpython-3.10.7+20221002-x86_64-pc-windows-msvc-shared-install_only.tar.gz
set PACKAGES_TO_INSTALL=
call git --version >.tmp1 2>.tmp2
if "%ERRORLEVEL%" NEQ "0" set PACKAGES_TO_INSTALL=%PACKAGES_TO_INSTALL% git
@rem Cleanup
del /q .tmp1 .tmp2
@rem (if necessary) install git into a contained environment
if "%PACKAGES_TO_INSTALL%" NEQ "" (
@rem download micromamba
echo ***** Downloading micromamba from %MICROMAMBA_DOWNLOAD_URL% to micromamba.exe *****
call curl -L "%MICROMAMBA_DOWNLOAD_URL%" > micromamba.exe
@rem test the mamba binary
echo ***** Micromamba version: *****
call micromamba.exe --version
@rem create the installer env
if not exist "%INSTALL_ENV_DIR%" (
call micromamba.exe create -y --prefix "%INSTALL_ENV_DIR%"
)
echo ***** Packages to install:%PACKAGES_TO_INSTALL% *****
call micromamba.exe install -y --prefix "%INSTALL_ENV_DIR%" -c conda-forge %PACKAGES_TO_INSTALL%
if not exist "%INSTALL_ENV_DIR%" (
echo ----- There was a problem while installing "%PACKAGES_TO_INSTALL%" using micromamba. Cannot continue. -----
pause
exit /b
)
)
del /q micromamba.exe
@rem For 'git' only
set PATH=%INSTALL_ENV_DIR%\Library\bin;%PATH%
@rem Download/unpack/clean up InvokeAI release sourceball
set err_msg=----- InvokeAI source download failed -----
echo Trying to download "%RELEASE_URL%%RELEASE_SOURCEBALL%"
curl -L %RELEASE_URL%%RELEASE_SOURCEBALL% --output InvokeAI.tgz
if %errorlevel% neq 0 goto err_exit
set err_msg=----- InvokeAI source unpack failed -----
tar -zxf InvokeAI.tgz
if %errorlevel% neq 0 goto err_exit
del /q InvokeAI.tgz
set err_msg=----- InvokeAI source copy failed -----
cd InvokeAI-*
xcopy . .. /e /h
if %errorlevel% neq 0 goto err_exit
cd ..
@rem cleanup
for /f %%i in ('dir /b InvokeAI-*') do rd /s /q %%i
rd /s /q .dev_scripts .github docker-build tests
del /q requirements.in requirements-mkdocs.txt shell.nix
echo ***** Unpacked InvokeAI source *****
@rem Download/unpack/clean up python-build-standalone
set err_msg=----- Python download failed -----
curl -L %PYTHON_BUILD_STANDALONE_URL%/%PYTHON_BUILD_STANDALONE% --output python.tgz
if %errorlevel% neq 0 goto err_exit
set err_msg=----- Python unpack failed -----
tar -zxf python.tgz
if %errorlevel% neq 0 goto err_exit
del /q python.tgz
echo ***** Unpacked python-build-standalone *****
@rem create venv
set err_msg=----- problem creating venv -----
.\python\python -E -s -m venv .venv
if %errorlevel% neq 0 goto err_exit
call .venv\Scripts\activate.bat
echo ***** Created Python virtual environment *****
@rem Print venv's Python version
set err_msg=----- problem calling venv's python -----
echo We're running under
.venv\Scripts\python --version
if %errorlevel% neq 0 goto err_exit
set err_msg=----- pip update failed -----
.venv\Scripts\python -m pip install %no_cache_dir% --no-warn-script-location --upgrade pip wheel
if %errorlevel% neq 0 goto err_exit
echo ***** Updated pip and wheel *****
set err_msg=----- requirements file copy failed -----
copy binary_installer\py3.10-windows-x86_64-cuda-reqs.txt requirements.txt
if %errorlevel% neq 0 goto err_exit
set err_msg=----- main pip install failed -----
.venv\Scripts\python -m pip install %no_cache_dir% --no-warn-script-location -r requirements.txt
if %errorlevel% neq 0 goto err_exit
echo ***** Installed Python dependencies *****
set err_msg=----- InvokeAI setup failed -----
.venv\Scripts\python -m pip install %no_cache_dir% --no-warn-script-location -e .
if %errorlevel% neq 0 goto err_exit
copy binary_installer\invoke.bat.in .\invoke.bat
echo ***** Installed invoke launcher script ******
@rem more cleanup
rd /s /q binary_installer installer_files
@rem preload the models
set err_msg=----- model download clone failed -----
call .venv\Scripts\python ldm\invoke\config\invokeai_configure.py
if %errorlevel% neq 0 goto err_exit
deactivate
echo ***** Finished downloading models *****
echo All done! Execute the file invoke.bat in this directory to start InvokeAI
pause
exit
:err_exit
echo %err_msg%
pause
exit


@@ -0,0 +1,235 @@
#!/usr/bin/env bash
# ensure we're in the correct folder in case user's CWD is somewhere else
scriptdir=$(dirname "$0")
cd "$scriptdir"
set -euo pipefail
IFS=$'\n\t'
function _err_exit {
if test "$1" -ne 0
then
echo -e "Error code $1; Error caught was '$2'"
read -p "Press any key to exit..."
exit
fi
}
# This script will install git (if not found on the PATH variable)
# using micromamba (an 8mb static-linked single-file binary, conda replacement).
# For users who already have git, this step will be skipped.
# Next, it'll download the project's source code.
# Then it will download a self-contained, standalone Python and unpack it.
# Finally, it'll create the Python virtual environment and preload the models.
# This enables a user to install this project without manually installing git or Python
echo -e "\n***** Installing InvokeAI into $(pwd)... *****\n"
export no_cache_dir="--no-cache-dir"
if [ $# -ge 1 ]; then
if [ "$1" = "use-cache" ]; then
export no_cache_dir=""
fi
fi
OS_NAME=$(uname -s)
case "${OS_NAME}" in
Linux*) OS_NAME="linux";;
Darwin*) OS_NAME="darwin";;
*) echo -e "\n----- Unknown OS: $OS_NAME! This script runs only on Linux or macOS -----\n" && exit
esac
OS_ARCH=$(uname -m)
case "${OS_ARCH}" in
x86_64*) ;;
arm64*) ;;
*) echo -e "\n----- Unknown system architecture: $OS_ARCH! This script runs only on x86_64 or arm64 -----\n" && exit
esac
# https://mamba.readthedocs.io/en/latest/installation.html
MAMBA_OS_NAME=$OS_NAME
MAMBA_ARCH=$OS_ARCH
if [ "$OS_NAME" == "darwin" ]; then
MAMBA_OS_NAME="osx"
fi
if [ "$OS_ARCH" == "arm64" ]; then
MAMBA_ARCH="aarch64"
fi
if [ "$OS_ARCH" == "x86_64" ]; then
MAMBA_ARCH="64"
fi
PY_ARCH=$OS_ARCH
if [ "$OS_ARCH" == "arm64" ]; then
PY_ARCH="aarch64"
fi
# Compute device ('cd' segment of reqs files) detect goes here
# This needs a ton of work
# Suggestions:
# - lspci
# - check $PATH for nvidia-smi, get CUDA/GPU version from output
# - Surely there's a similar utility for AMD?
CD="cuda"
if [ "$OS_NAME" == "darwin" ] && [ "$OS_ARCH" == "arm64" ]; then
CD="mps"
fi
# config
INSTALL_ENV_DIR="$(pwd)/installer_files/env"
MICROMAMBA_DOWNLOAD_URL="https://micro.mamba.pm/api/micromamba/${MAMBA_OS_NAME}-${MAMBA_ARCH}/latest"
RELEASE_URL=https://github.com/invoke-ai/InvokeAI
RELEASE_SOURCEBALL=/archive/refs/heads/main.tar.gz
PYTHON_BUILD_STANDALONE_URL=https://github.com/indygreg/python-build-standalone/releases/download
if [ "$OS_NAME" == "darwin" ]; then
PYTHON_BUILD_STANDALONE=20221002/cpython-3.10.7+20221002-${PY_ARCH}-apple-darwin-install_only.tar.gz
elif [ "$OS_NAME" == "linux" ]; then
PYTHON_BUILD_STANDALONE=20221002/cpython-3.10.7+20221002-${PY_ARCH}-unknown-linux-gnu-install_only.tar.gz
fi
echo "INSTALLING $RELEASE_SOURCEBALL FROM $RELEASE_URL"
PACKAGES_TO_INSTALL=""
if ! hash "git" &>/dev/null; then PACKAGES_TO_INSTALL="$PACKAGES_TO_INSTALL git"; fi
# (if necessary) install git and conda into a contained environment
if [ "$PACKAGES_TO_INSTALL" != "" ]; then
# download micromamba
echo -e "\n***** Downloading micromamba from $MICROMAMBA_DOWNLOAD_URL to micromamba *****\n"
curl -L "$MICROMAMBA_DOWNLOAD_URL" | tar -xvjO bin/micromamba > micromamba
chmod u+x ./micromamba
# test the mamba binary
echo -e "\n***** Micromamba version: *****\n"
./micromamba --version
# create the installer env
if [ ! -e "$INSTALL_ENV_DIR" ]; then
./micromamba create -y --prefix "$INSTALL_ENV_DIR"
fi
echo -e "\n***** Packages to install:$PACKAGES_TO_INSTALL *****\n"
./micromamba install -y --prefix "$INSTALL_ENV_DIR" -c conda-forge "$PACKAGES_TO_INSTALL"
if [ ! -e "$INSTALL_ENV_DIR" ]; then
echo -e "\n----- There was a problem while initializing micromamba. Cannot continue. -----\n"
exit
fi
fi
rm -f micromamba
export PATH="$INSTALL_ENV_DIR/bin:$PATH"
# Download/unpack/clean up InvokeAI release sourceball
_err_msg="\n----- InvokeAI source download failed -----\n"
curl -L $RELEASE_URL/$RELEASE_SOURCEBALL --output InvokeAI.tgz
_err_exit $? _err_msg
_err_msg="\n----- InvokeAI source unpack failed -----\n"
tar -zxf InvokeAI.tgz
_err_exit $? _err_msg
rm -f InvokeAI.tgz
_err_msg="\n----- InvokeAI source copy failed -----\n"
cd InvokeAI-*
cp -r . ..
_err_exit $? _err_msg
cd ..
# cleanup
rm -rf InvokeAI-*/
rm -rf .dev_scripts/ .github/ docker-build/ tests/ requirements.in requirements-mkdocs.txt shell.nix
echo -e "\n***** Unpacked InvokeAI source *****\n"
# Download/unpack/clean up python-build-standalone
_err_msg="\n----- Python download failed -----\n"
curl -L $PYTHON_BUILD_STANDALONE_URL/$PYTHON_BUILD_STANDALONE --output python.tgz
_err_exit $? _err_msg
_err_msg="\n----- Python unpack failed -----\n"
tar -zxf python.tgz
_err_exit $? _err_msg
rm -f python.tgz
echo -e "\n***** Unpacked python-build-standalone *****\n"
# create venv
_err_msg="\n----- problem creating venv -----\n"
if [ "$OS_NAME" == "darwin" ]; then
# patch sysconfig so that extensions can build properly
# adapted from https://github.com/cashapp/hermit-packages/commit/fcba384663892f4d9cfb35e8639ff7a28166ee43
PYTHON_INSTALL_DIR="$(pwd)/python"
SYSCONFIG="$(echo python/lib/python*/_sysconfigdata_*.py)"
TMPFILE="$(mktemp)"
chmod +w "${SYSCONFIG}"
cp "${SYSCONFIG}" "${TMPFILE}"
sed "s,'/install,'${PYTHON_INSTALL_DIR},g" "${TMPFILE}" > "${SYSCONFIG}"
rm -f "${TMPFILE}"
fi
./python/bin/python3 -E -s -m venv .venv
_err_exit $? _err_msg
source .venv/bin/activate
echo -e "\n***** Created Python virtual environment *****\n"
# Print venv's Python version
_err_msg="\n----- problem calling venv's python -----\n"
echo -e "We're running under"
.venv/bin/python3 --version
_err_exit $? _err_msg
_err_msg="\n----- pip update failed -----\n"
.venv/bin/python3 -m pip install $no_cache_dir --no-warn-script-location --upgrade pip
_err_exit $? _err_msg
echo -e "\n***** Updated pip *****\n"
_err_msg="\n----- requirements file copy failed -----\n"
cp binary_installer/py3.10-${OS_NAME}-"${OS_ARCH}"-${CD}-reqs.txt requirements.txt
_err_exit $? _err_msg
_err_msg="\n----- main pip install failed -----\n"
.venv/bin/python3 -m pip install $no_cache_dir --no-warn-script-location -r requirements.txt
_err_exit $? _err_msg
echo -e "\n***** Installed Python dependencies *****\n"
_err_msg="\n----- InvokeAI setup failed -----\n"
.venv/bin/python3 -m pip install $no_cache_dir --no-warn-script-location -e .
_err_exit $? _err_msg
echo -e "\n***** Installed InvokeAI *****\n"
cp binary_installer/invoke.sh.in ./invoke.sh
chmod a+rx ./invoke.sh
echo -e "\n***** Installed invoke launcher script ******\n"
# more cleanup
rm -rf binary_installer/ installer_files/
# preload the models
_err_msg="\n----- model download clone failed -----\n"
.venv/bin/python3 scripts/configure_invokeai.py
_err_exit $? _err_msg
deactivate
echo -e "\n***** Finished downloading models *****\n"
echo "All done! Run the command"
echo " $scriptdir/invoke.sh"
echo "to start InvokeAI."
read -p "Press any key to exit..."
exit
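As the argument handling at the top of this script shows, pip's download cache is disabled by default (`--no-cache-dir`); passing `use-cache` keeps the cache, which can speed up repeated installs:
```sh
./install.sh use-cache
```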


@@ -0,0 +1,36 @@
@echo off
PUSHD "%~dp0"
call .venv\Scripts\activate.bat
echo Do you want to generate images using the
echo 1. command-line
echo 2. browser-based UI
echo OR
echo 3. open the developer console
set /p choice="Please enter 1, 2 or 3: "
if /i "%choice%" == "1" (
echo Starting the InvokeAI command-line.
.venv\Scripts\python scripts\invoke.py %*
) else if /i "%choice%" == "2" (
echo Starting the InvokeAI browser-based UI.
.venv\Scripts\python scripts\invoke.py --web %*
) else if /i "%choice%" == "3" (
echo Developer Console
echo Python command is:
where python
echo Python version is:
python --version
echo *************************
echo You are now in the system shell, with the local InvokeAI Python virtual environment activated,
echo so that you can troubleshoot this InvokeAI installation as necessary.
echo *************************
echo *** Type `exit` to quit this shell and deactivate the Python virtual environment ***
call cmd /k
) else (
echo Invalid selection
pause
exit /b
)
deactivate


@@ -0,0 +1,46 @@
#!/usr/bin/env sh
set -eu
. .venv/bin/activate
# set required env var for torch on mac MPS
if [ "$(uname -s)" = "Darwin" ]; then
export PYTORCH_ENABLE_MPS_FALLBACK=1
fi
echo "Do you want to generate images using the"
echo "1. command-line"
echo "2. browser-based UI"
echo "OR"
echo "3. open the developer console"
echo "Please enter 1, 2, or 3:"
read choice
case $choice in
1)
printf "\nStarting the InvokeAI command-line..\n";
.venv/bin/python scripts/invoke.py $*;
;;
2)
printf "\nStarting the InvokeAI browser-based UI..\n";
.venv/bin/python scripts/invoke.py --web $*;
;;
3)
printf "\nDeveloper Console:\n";
printf "Python command is:\n\t";
which python;
printf "Python version is:\n\t";
python --version;
echo "*************************"
echo "You are now in your user shell ($SHELL) with the local InvokeAI Python virtual environment activated,";
echo "so that you can troubleshoot this InvokeAI installation as necessary.";
printf "*************************\n"
echo "*** Type \`exit\` to quit this shell and deactivate the Python virtual environment *** ";
/usr/bin/env "$SHELL";
;;
*)
echo "Invalid selection";
exit
;;
esac
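Because the launcher appends any extra arguments (`$*` above, `%*` in the Windows version) to the `invoke.py` command line, options such as the `--precision=float32` flag discussed earlier in this document can be passed straight through; for example:
```sh
./invoke.sh --precision=float32
```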

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -0,0 +1,17 @@
InvokeAI
Project homepage: https://github.com/invoke-ai/InvokeAI
Installation on Windows:
NOTE: You might need to enable Windows Long Paths. If you're not sure,
then you almost certainly need to. Simply double-click the 'WinLongPathsEnabled.reg'
file. Note that you will need to have admin privileges in order to
do this.
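For reference, the setting that file applies is the standard Win32 long-paths switch; a sketch of what such a .reg file typically contains is shown below (the shipped WinLongPathsEnabled.reg is authoritative):
```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem]
"LongPathsEnabled"=dword:00000001
```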
Please double-click the 'install.bat' file (while keeping it inside the invokeAI folder).
Installation on Linux and Mac:
Please open the terminal, and run './install.sh' (while keeping it inside the invokeAI folder).
After installation, please run the 'invoke.bat' file (on Windows) or 'invoke.sh'
file (on Linux/Mac) to start InvokeAI.


@@ -0,0 +1,33 @@
--prefer-binary
--extra-index-url https://download.pytorch.org/whl/torch_stable.html
--extra-index-url https://download.pytorch.org/whl/cu116
--trusted-host https://download.pytorch.org
accelerate~=0.15
albumentations
diffusers[torch]~=0.11
einops
eventlet
flask_cors
flask_socketio
flaskwebgui==1.0.3
getpass_asterisk
imageio-ffmpeg
pyreadline3
realesrgan
send2trash
streamlit
taming-transformers-rom1504
test-tube
torch-fidelity
torch==1.12.1 ; platform_system == 'Darwin'
torch==1.12.0+cu116 ; platform_system == 'Linux' or platform_system == 'Windows'
torchvision==0.13.1 ; platform_system == 'Darwin'
torchvision==0.13.0+cu116 ; platform_system == 'Linux' or platform_system == 'Windows'
transformers
picklescan
https://github.com/openai/CLIP/archive/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1.zip
https://github.com/invoke-ai/clipseg/archive/1f754751c85d7d4255fa681f4491ff5711c1c288.zip
https://github.com/invoke-ai/GFPGAN/archive/3f5d2397361199bc4a91c08bb7d80f04d7805615.zip ; platform_system=='Windows'
https://github.com/invoke-ai/GFPGAN/archive/c796277a1cf77954e5fc0b288d7062d162894248.zip ; platform_system=='Linux' or platform_system=='Darwin'
https://github.com/Birch-san/k-diffusion/archive/363386981fee88620709cf8f6f2eea167bd6cd74.zip
https://github.com/invoke-ai/PyPatchMatch/archive/129863937a8ab37f6bbcec327c994c0f932abdbc.zip


@@ -1,74 +0,0 @@
FROM ubuntu AS get_miniconda
SHELL ["/bin/bash", "-c"]
# install wget
RUN apt-get update \
&& apt-get install -y \
wget \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# download and install miniconda
ARG conda_version=py39_4.12.0-Linux-x86_64
ARG conda_prefix=/opt/conda
RUN wget --progress=dot:giga -O /miniconda.sh \
https://repo.anaconda.com/miniconda/Miniconda3-${conda_version}.sh \
&& bash /miniconda.sh -b -p ${conda_prefix} \
&& rm -f /miniconda.sh
FROM ubuntu AS invokeai
# use bash
SHELL [ "/bin/bash", "-c" ]
# clean bashrc
RUN echo "" > ~/.bashrc
# Install necessary packages
RUN apt-get update \
&& apt-get install -y \
--no-install-recommends \
gcc \
git \
libgl1-mesa-glx \
libglib2.0-0 \
pip \
python3 \
python3-dev \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# clone repository and create symlinks
ARG invokeai_git=https://github.com/invoke-ai/InvokeAI.git
ARG project_name=invokeai
RUN git clone ${invokeai_git} /${project_name} \
&& mkdir /${project_name}/models/ldm/stable-diffusion-v1 \
&& ln -s /data/models/sd-v1-4.ckpt /${project_name}/models/ldm/stable-diffusion-v1/model.ckpt \
&& ln -s /data/outputs/ /${project_name}/outputs
# set workdir
WORKDIR /${project_name}
# install conda env and preload models
ARG conda_prefix=/opt/conda
ARG conda_env_file=environment.yml
COPY --from=get_miniconda ${conda_prefix} ${conda_prefix}
RUN source ${conda_prefix}/etc/profile.d/conda.sh \
&& conda init bash \
&& source ~/.bashrc \
&& conda env create \
--name ${project_name} \
--file ${conda_env_file} \
&& rm -Rf ~/.cache \
&& conda clean -afy \
&& echo "conda activate ${project_name}" >> ~/.bashrc \
&& ln -s /data/models/GFPGANv1.4.pth ./src/gfpgan/experiments/pretrained_models/GFPGANv1.4.pth \
&& conda activate ${project_name} \
&& python scripts/preload_models.py
# Copy entrypoint and set env
ENV CONDA_PREFIX=${conda_prefix}
ENV PROJECT_NAME=${project_name}
COPY docker-build/entrypoint.sh /
ENTRYPOINT [ "/entrypoint.sh" ]


@@ -1,81 +0,0 @@
#!/usr/bin/env bash
set -e
# IMPORTANT: You need to have a token on huggingface.co to be able to download the checkpoint!!!
# configure values by using env when executing build.sh
# e.g. env ARCH=aarch64 GITHUB_INVOKE_AI=https://github.com/yourname/yourfork.git ./build.sh
source ./docker-build/env.sh || echo "please run from repository root" || exit 1
invokeai_conda_version=${INVOKEAI_CONDA_VERSION:-py39_4.12.0-${platform/\//-}}
invokeai_conda_prefix=${INVOKEAI_CONDA_PREFIX:-\/opt\/conda}
invokeai_conda_env_file=${INVOKEAI_CONDA_ENV_FILE:-environment.yml}
invokeai_git=${INVOKEAI_GIT:-https://github.com/invoke-ai/InvokeAI.git}
huggingface_token=${HUGGINGFACE_TOKEN?}
# print the settings
echo "You are using these values:"
echo -e "project_name:\t\t ${project_name}"
echo -e "volumename:\t\t ${volumename}"
echo -e "arch:\t\t\t ${arch}"
echo -e "platform:\t\t ${platform}"
echo -e "invokeai_conda_version:\t ${invokeai_conda_version}"
echo -e "invokeai_conda_prefix:\t ${invokeai_conda_prefix}"
echo -e "invokeai_conda_env_file: ${invokeai_conda_env_file}"
echo -e "invokeai_git:\t\t ${invokeai_git}"
echo -e "invokeai_tag:\t\t ${invokeai_tag}\n"
_runAlpine() {
docker run \
--rm \
--interactive \
--tty \
--mount source="$volumename",target=/data \
--workdir /data \
alpine "$@"
}
_copyCheckpoints() {
echo "creating subfolders for models and outputs"
_runAlpine mkdir models
_runAlpine mkdir outputs
echo -n "downloading sd-v1-4.ckpt"
_runAlpine wget --header="Authorization: Bearer ${huggingface_token}" -O models/sd-v1-4.ckpt https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt
echo "done"
echo "downloading GFPGANv1.4.pth"
_runAlpine wget -O models/GFPGANv1.4.pth https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.4.pth
}
_checkVolumeContent() {
_runAlpine ls -lhA /data/models
}
_getModelMd5s() {
_runAlpine \
alpine sh -c "md5sum /data/models/*"
}
if [[ -n "$(docker volume ls -f name="${volumename}" -q)" ]]; then
echo "Volume already exists"
if [[ -z "$(_checkVolumeContent)" ]]; then
echo "looks empty, copying checkpoint"
_copyCheckpoints
fi
echo "Models in ${volumename}:"
_checkVolumeContent
else
echo -n "creating docker volume "
docker volume create "${volumename}"
_copyCheckpoints
fi
# Build Container
docker build \
--platform="${platform}" \
--tag "${invokeai_tag}" \
--build-arg project_name="${project_name}" \
--build-arg conda_version="${invokeai_conda_version}" \
--build-arg conda_prefix="${invokeai_conda_prefix}" \
--build-arg conda_env_file="${invokeai_conda_env_file}" \
--build-arg invokeai_git="${invokeai_git}" \
--file ./docker-build/Dockerfile \
.


@@ -1,8 +0,0 @@
#!/bin/bash
set -e
source "${CONDA_PREFIX}/etc/profile.d/conda.sh"
conda activate "${PROJECT_NAME}"
python scripts/invoke.py \
${@:---web --host=0.0.0.0}


@@ -1,13 +0,0 @@
#!/usr/bin/env bash
project_name=${PROJECT_NAME:-invokeai}
volumename=${VOLUMENAME:-${project_name}_data}
arch=${ARCH:-x86_64}
platform=${PLATFORM:-Linux/${arch}}
invokeai_tag=${INVOKEAI_TAG:-${project_name}-${arch}}
export project_name
export volumename
export arch
export platform
export invokeai_tag


@@ -1,15 +0,0 @@
#!/usr/bin/env bash
set -e
source ./docker-build/env.sh || echo "please run from repository root" || exit 1
docker run \
--interactive \
--tty \
--rm \
--platform "$platform" \
--name "$project_name" \
--hostname "$project_name" \
--mount source="$volumename",target=/data \
--publish 9090:9090 \
"$invokeai_tag" ${1:+$@}

docker/Dockerfile

@@ -0,0 +1,103 @@
# syntax=docker/dockerfile:1
ARG PYTHON_VERSION=3.9
##################
## base image ##
##################
FROM python:${PYTHON_VERSION}-slim AS python-base
LABEL org.opencontainers.image.authors="mauwii@outlook.de"
# prepare for buildkit cache
RUN rm -f /etc/apt/apt.conf.d/docker-clean \
&& echo 'Binary::apt::APT::Keep-Downloaded-Packages "true";' >/etc/apt/apt.conf.d/keep-cache
# Install necessary packages
RUN \
--mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
apt-get update \
&& apt-get install -y \
--no-install-recommends \
libgl1-mesa-glx=20.3.* \
libglib2.0-0=2.66.* \
libopencv-dev=4.5.*
# set working directory and env
ARG APPDIR=/usr/src
ARG APPNAME=InvokeAI
WORKDIR ${APPDIR}
ENV PATH ${APPDIR}/${APPNAME}/bin:$PATH
# Keeps Python from generating .pyc files in the container
ENV PYTHONDONTWRITEBYTECODE 1
# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED 1
# don't fall back to legacy build system
ENV PIP_USE_PEP517=1
#######################
## build pyproject ##
#######################
FROM python-base AS pyproject-builder
# Install dependencies
RUN \
--mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
apt-get update \
&& apt-get install -y \
--no-install-recommends \
build-essential=12.9 \
gcc=4:10.2.* \
python3-dev=3.9.*
# prepare pip for buildkit cache
ARG PIP_CACHE_DIR=/var/cache/buildkit/pip
ENV PIP_CACHE_DIR ${PIP_CACHE_DIR}
RUN mkdir -p ${PIP_CACHE_DIR}
# create virtual environment
RUN --mount=type=cache,target=${PIP_CACHE_DIR},sharing=locked \
python3 -m venv "${APPNAME}" \
--upgrade-deps
# copy sources
COPY --link . .
# install pyproject.toml
ARG PIP_EXTRA_INDEX_URL
ENV PIP_EXTRA_INDEX_URL ${PIP_EXTRA_INDEX_URL}
RUN --mount=type=cache,target=${PIP_CACHE_DIR},sharing=locked \
"${APPNAME}/bin/pip" install .
# build patchmatch
RUN python3 -c "from patchmatch import patch_match"
#####################
## runtime image ##
#####################
FROM python-base AS runtime
# Create a new user
ARG UNAME=appuser
RUN useradd \
--no-log-init \
-m \
-U \
"${UNAME}"
# create volume directory
ARG VOLUME_DIR=/data
RUN mkdir -p "${VOLUME_DIR}" \
&& chown -R "${UNAME}" "${VOLUME_DIR}"
# setup runtime environment
USER ${UNAME}
COPY --chown=${UNAME} --from=pyproject-builder ${APPDIR}/${APPNAME} ${APPNAME}
ENV INVOKEAI_ROOT ${VOLUME_DIR}
ENV TRANSFORMERS_CACHE ${VOLUME_DIR}/.cache
ENV INVOKE_MODEL_RECONFIGURE "--yes --default_only"
EXPOSE 9090
ENTRYPOINT [ "invokeai" ]
CMD [ "--web", "--host", "0.0.0.0", "--port", "9090" ]
VOLUME [ "${VOLUME_DIR}" ]

docker/build.sh

@@ -0,0 +1,51 @@
#!/usr/bin/env bash
set -e
# If you want to build a specific flavor, set the CONTAINER_FLAVOR environment variable
# e.g. CONTAINER_FLAVOR=cpu ./build.sh
# Possible Values are:
# - cpu
# - cuda
# - rocm
# Don't forget to also set it when executing run.sh
# if it is not set, the script will try to detect the flavor by itself.
#
# Doc can be found here:
# https://invoke-ai.github.io/InvokeAI/installation/040_INSTALL_DOCKER/
SCRIPTDIR=$(dirname "${BASH_SOURCE[0]}")
cd "$SCRIPTDIR" || exit 1
source ./env.sh
DOCKERFILE=${INVOKE_DOCKERFILE:-./Dockerfile}
# print the settings
echo -e "You are using these values:\n"
echo -e "Dockerfile:\t\t${DOCKERFILE}"
echo -e "index-url:\t\t${PIP_EXTRA_INDEX_URL:-none}"
echo -e "Volumename:\t\t${VOLUMENAME}"
echo -e "Platform:\t\t${PLATFORM}"
echo -e "Container Registry:\t${CONTAINER_REGISTRY}"
echo -e "Container Repository:\t${CONTAINER_REPOSITORY}"
echo -e "Container Tag:\t\t${CONTAINER_TAG}"
echo -e "Container Flavor:\t${CONTAINER_FLAVOR}"
echo -e "Container Image:\t${CONTAINER_IMAGE}\n"
# Create docker volume
if [[ -n "$(docker volume ls -f name="${VOLUMENAME}" -q)" ]]; then
echo -e "Volume already exists\n"
else
echo -n "creating docker volume "
docker volume create "${VOLUMENAME}"
fi
# Build Container
DOCKER_BUILDKIT=1 docker build \
--platform="${PLATFORM:-linux/amd64}" \
--tag="${CONTAINER_IMAGE:-invokeai}" \
${CONTAINER_FLAVOR:+--build-arg="CONTAINER_FLAVOR=${CONTAINER_FLAVOR}"} \
${PIP_EXTRA_INDEX_URL:+--build-arg="PIP_EXTRA_INDEX_URL=${PIP_EXTRA_INDEX_URL}"} \
${PIP_PACKAGE:+--build-arg="PIP_PACKAGE=${PIP_PACKAGE}"} \
--file="${DOCKERFILE}" \
..

docker/env.sh

@@ -0,0 +1,51 @@
#!/usr/bin/env bash
# This file is used to set environment variables for the build.sh and run.sh scripts.
# Try to detect the container flavor if no PIP_EXTRA_INDEX_URL got specified
if [[ -z "$PIP_EXTRA_INDEX_URL" ]]; then
# Activate virtual environment if not already activated and exists
if [[ -z $VIRTUAL_ENV ]]; then
[[ -e "$(dirname "${BASH_SOURCE[0]}")/../.venv/bin/activate" ]] \
&& source "$(dirname "${BASH_SOURCE[0]}")/../.venv/bin/activate" \
&& echo "Activated virtual environment: $VIRTUAL_ENV"
fi
# Decide which container flavor to build if not specified
if [[ -z "$CONTAINER_FLAVOR" ]] && python -c "import torch" &>/dev/null; then
# Check for CUDA and ROCm
CUDA_AVAILABLE=$(python -c "import torch;print(torch.cuda.is_available())")
ROCM_AVAILABLE=$(python -c "import torch;print(torch.version.hip is not None)")
if [[ "${CUDA_AVAILABLE}" == "True" ]]; then
CONTAINER_FLAVOR="cuda"
elif [[ "${ROCM_AVAILABLE}" == "True" ]]; then
CONTAINER_FLAVOR="rocm"
else
CONTAINER_FLAVOR="cpu"
fi
fi
# Set PIP_EXTRA_INDEX_URL based on container flavor
if [[ "$CONTAINER_FLAVOR" == "rocm" ]]; then
PIP_EXTRA_INDEX_URL="https://download.pytorch.org/whl/rocm"
elif [[ "$CONTAINER_FLAVOR" == "cpu" ]]; then
PIP_EXTRA_INDEX_URL="https://download.pytorch.org/whl/cpu"
# elif [[ -z "$CONTAINER_FLAVOR" || "$CONTAINER_FLAVOR" == "cuda" ]]; then
# PIP_PACKAGE=${PIP_PACKAGE-".[xformers]"}
fi
fi
# Variables shared by build.sh and run.sh
REPOSITORY_NAME="${REPOSITORY_NAME-$(basename "$(git rev-parse --show-toplevel)")}"
REPOSITORY_NAME="${REPOSITORY_NAME,,}"
VOLUMENAME="${VOLUMENAME-"${REPOSITORY_NAME}_data"}"
ARCH="${ARCH-$(uname -m)}"
PLATFORM="${PLATFORM-linux/${ARCH}}"
INVOKEAI_BRANCH="${INVOKEAI_BRANCH-$(git branch --show)}"
CONTAINER_REGISTRY="${CONTAINER_REGISTRY-"ghcr.io"}"
CONTAINER_REPOSITORY="${CONTAINER_REPOSITORY-"$(whoami)/${REPOSITORY_NAME}"}"
CONTAINER_FLAVOR="${CONTAINER_FLAVOR-cuda}"
CONTAINER_TAG="${CONTAINER_TAG-"${INVOKEAI_BRANCH##*/}-${CONTAINER_FLAVOR}"}"
CONTAINER_IMAGE="${CONTAINER_REGISTRY}/${CONTAINER_REPOSITORY}:${CONTAINER_TAG}"
CONTAINER_IMAGE="${CONTAINER_IMAGE,,}"

docker/run.sh

@@ -0,0 +1,41 @@
#!/usr/bin/env bash
set -e
# How to use: https://invoke-ai.github.io/InvokeAI/installation/040_INSTALL_DOCKER/
SCRIPTDIR=$(dirname "${BASH_SOURCE[0]}")
cd "$SCRIPTDIR" || exit 1
source ./env.sh
# Create outputs directory if it does not exist
[[ -d ./outputs ]] || mkdir ./outputs
echo -e "You are using these values:\n"
echo -e "Volumename:\t${VOLUMENAME}"
echo -e "Invokeai_tag:\t${CONTAINER_IMAGE}"
echo -e "local Models:\t${MODELSPATH:-unset}\n"
docker run \
--interactive \
--tty \
--rm \
--platform="${PLATFORM}" \
--name="${REPOSITORY_NAME,,}" \
--hostname="${REPOSITORY_NAME,,}" \
--mount=source="${VOLUMENAME}",target=/data \
--mount type=bind,source="$(pwd)"/outputs,target=/data/outputs \
${MODELSPATH:+--mount="type=bind,source=${MODELSPATH},target=/data/models"} \
${HUGGING_FACE_HUB_TOKEN:+--env="HUGGING_FACE_HUB_TOKEN=${HUGGING_FACE_HUB_TOKEN}"} \
--publish=9090:9090 \
--cap-add=sys_nice \
${GPU_FLAGS:+--gpus="${GPU_FLAGS}"} \
"${CONTAINER_IMAGE}" ${@:+$@}
# Remove Trash folder
for f in outputs/.Trash*; do
if [ -e "$f" ]; then
rm -Rf "$f"
break
fi
done
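For orientation, a typical build-and-run sequence with these scripts might look like the following; the flavor and GPU-flag values are illustrative assumptions, so check the comments in `build.sh` and the variables in `env.sh` for what your system actually needs:
```sh
# build the CUDA flavor of the image, then run it with GPU access enabled
CONTAINER_FLAVOR=cuda ./docker/build.sh
CONTAINER_FLAVOR=cuda GPU_FLAGS=all ./docker/run.sh
```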


@@ -4,66 +4,447 @@ title: Changelog
# :octicons-log-16: **Changelog**
## v2.3.0 <small>(15 January 2023)</small>
**Transition to diffusers**
Version 2.3 provides support for both the traditional `.ckpt` weight
checkpoint files as well as the HuggingFace `diffusers` format. This
introduces several changes you should know about.
1. The models.yaml format has been updated. There are now two
different types of configuration stanza. The traditional ckpt
one will look like this, with a `format` of `ckpt` and a
`weights` field that points to the absolute or ROOTDIR-relative
location of the ckpt file.
```
inpainting-1.5:
description: RunwayML SD 1.5 model optimized for inpainting (4.27 GB)
repo_id: runwayml/stable-diffusion-inpainting
format: ckpt
width: 512
height: 512
weights: models/ldm/stable-diffusion-v1/sd-v1-5-inpainting.ckpt
config: configs/stable-diffusion/v1-inpainting-inference.yaml
vae: models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
```
A configuration stanza for a diffusers model hosted at HuggingFace will look like this,
with a `format` of `diffusers` and a `repo_id` that points to the
repository ID of the model on HuggingFace:
```
stable-diffusion-2.1:
description: Stable Diffusion version 2.1 diffusers model (5.21 GB)
repo_id: stabilityai/stable-diffusion-2-1
format: diffusers
```
A configuration stanza for a diffusers model stored locally should
look like this, with a `format` of `diffusers`, but a `path` field
that points at the directory that contains `model_index.json`:
```
waifu-diffusion:
description: Latest waifu diffusion 1.4
format: diffusers
path: models/diffusers/hakurei-haifu-diffusion-1.4
```
2. In order of precedence, InvokeAI will now use HF_HOME, then
XDG_CACHE_HOME, then finally default to `ROOTDIR/models` to
store HuggingFace diffusers models.
Consequently, the format of the models directory has changed to
mimic the HuggingFace cache directory. When HF_HOME and XDG_CACHE_HOME
are not set, diffusers models are now automatically downloaded
and retrieved from the directory `ROOTDIR/models/diffusers`,
while other models are stored in the directory
`ROOTDIR/models/hub`. This organization is the same as that used
by HuggingFace for its cache management.
This allows you to share diffusers and ckpt model files easily with
other machine learning applications that use the HuggingFace
libraries. To do this, set the environment variable HF_HOME
before starting up InvokeAI to tell it what directory to
cache models in. To tell InvokeAI to use the standard HuggingFace
cache directory, you would set HF_HOME like this (Linux/Mac):
`export HF_HOME=~/.cache/huggingface`
Both HuggingFace and InvokeAI will fall back to the XDG_CACHE_HOME
environment variable if HF_HOME is not set; this path
takes precedence over `ROOTDIR/models` to allow for the same sharing
with other machine learning applications that use HuggingFace
libraries.
3. If you upgrade to InvokeAI 2.3.* from an earlier version, there
will be a one-time migration from the old models directory format
to the new one. You will see a message about this the first time
you start `invoke.py`.
4. Both the front end and back end of the model manager have been
rewritten to accommodate diffusers. You can import models using
their local file path, using their URLs, or their HuggingFace
repo_ids. On the command line, all these syntaxes work:
```
!import_model stabilityai/stable-diffusion-2-1-base
!import_model /opt/sd-models/sd-1.4.ckpt
!import_model https://huggingface.co/Fictiverse/Stable_Diffusion_PaperCut_Model/blob/main/PaperCut_v1.ckpt
```
**KNOWN BUGS (15 January 2023)**
1. On CUDA systems, the 768 pixel stable-diffusion-2.0 and
stable-diffusion-2.1 models can only be run as `diffusers` models
when the `xformers` library is installed and configured. Without
`xformers`, InvokeAI returns black images.
2. Inpainting and outpainting have regressed in quality.
Both these issues are being actively worked on.
## v2.2.4 <small>(11 December 2022)</small>
**the `invokeai` directory**
Previously there were two directories to worry about, the directory that
contained the InvokeAI source code and the launcher scripts, and the `invokeai`
directory that contained the models files, embeddings, configuration and
outputs. With the 2.2.4 release, this dual system is done away with, and
everything, including the `invoke.bat` and `invoke.sh` launcher scripts, now
live in a directory named `invokeai`. By default this directory is located in
your home directory (e.g. `\Users\yourname` on Windows), but you can select
where it goes at install time.
After installation, you can delete the install directory (the one that the zip
file creates when it unpacks). Do **not** delete or move the `invokeai`
directory!
**Initialization file `invokeai/invokeai.init`**
You can place frequently-used startup options in this file, such as the default
number of steps or your preferred sampler. To keep everything in one place, this
file has now been moved into the `invokeai` directory and is named
`invokeai.init`.
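For illustration, such a file is simply a list of startup switches, one per line; the values below are hypothetical, and the exact option names depend on the version you have installed:
```
# hypothetical invokeai.init fragment -- the switches mirror invoke.py's
# command-line options, so adjust the names to your installed version
--outdir=/home/yourname/invokeai/outputs
--sampler=k_euler_a
--steps=30
--web
```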
**To update from Version 2.2.3**
The easiest route is to download and unpack one of the 2.2.4 installer files.
When it asks you for the location of the `invokeai` runtime directory, respond
with the path to the directory that contains your 2.2.3 `invokeai`. That is, if
`invokeai` lives at `C:\Users\fred\invokeai`, then answer with `C:\Users\fred`
and answer "Y" when asked if you want to reuse the directory.
The `update.sh` (`update.bat`) script that came with the 2.2.3 source installer
does not know about the new directory layout and won't be fully functional.
**To update to 2.2.5 (and beyond) there's now an update path**
As they become available, you can update to more recent versions of InvokeAI
using an `update.sh` (`update.bat`) script located in the `invokeai` directory.
Running it without any arguments will install the most recent version of
InvokeAI. Alternatively, you can install specific releases by running the `update.sh`
script with an argument in the command shell. This syntax accepts the path to
the desired release's zip file, which you can find by clicking on the green
"Code" button on this repository's home page.
**Other 2.2.4 Improvements**
- Fix InvokeAI GUI initialization by @addianto in #1687
- fix link in documentation by @lstein in #1728
- Fix broken link by @ShawnZhong in #1736
- Remove reference to binary installer by @lstein in #1731
- documentation fixes for 2.2.3 by @lstein in #1740
- Modify installer links to point closer to the source installer by @ebr in
#1745
- add documentation warning about 1650/60 cards by @lstein in #1753
- Fix Linux source URL in installation docs by @andybearman in #1756
- Make install instructions discoverable in readme by @damian0815 in #1752
- typo fix by @ofirkris in #1755
- Non-interactive model download (support HUGGINGFACE_TOKEN) by @ebr in #1578
- fix(srcinstall): shell installer - cp scripts instead of linking by @tildebyte
in #1765
- stability and usage improvements to binary & source installers by @lstein in
#1760
- fix off-by-one bug in cross-attention-control by @damian0815 in #1774
- Eventually update APP_VERSION to 2.2.3 by @spezialspezial in #1768
- invoke script cds to its location before running by @lstein in #1805
- Make PaperCut and VoxelArt models load again by @lstein in #1730
- Fix --embedding_directory / --embedding_path not working by @blessedcoolant in
#1817
- Clean up readme by @hipsterusername in #1820
- Optimized Docker build with support for external working directory by @ebr in
#1544
- disable pushing the cloud container by @mauwii in #1831
- Fix docker push github action and expand with additional metadata by @ebr in
#1837
- Fix Broken Link To Notebook by @VedantMadane in #1821
- Account for flat models by @spezialspezial in #1766
- Update invoke.bat.in isolate environment variables by @lynnewu in #1833
- Arch Linux Specific PatchMatch Instructions & fixing conda install on linux by
@SammCheese in #1848
- Make force free GPU memory work in img2img by @addianto in #1844
- New installer by @lstein
## v2.2.3 <small>(2 December 2022)</small>
!!! Note
This point release removes references to the binary installer from the
installation guide. The binary installer is not stable at the current
time. First time users are encouraged to use the "source" installer as
described in [Installing InvokeAI with the Source Installer](installation/deprecated_documentation/INSTALL_SOURCE.md)
With InvokeAI 2.2, this project now provides enthusiasts and professionals a
robust workflow solution for creating AI-generated and human facilitated
compositions. Additional enhancements have been made as well, improving safety,
ease of use, and installation.
Optimized for efficiency, InvokeAI needs only ~3.5GB of VRAM to generate a
512x768 image (and less for smaller images), and is compatible with
Windows/Linux/Mac (M1 & M2).
You can see the [release video](https://youtu.be/hIYBfDtKaus) here, which
introduces the main WebUI enhancement for version 2.2 -
[The Unified Canvas](features/UNIFIED_CANVAS.md). This new workflow is the
biggest enhancement added to the WebUI to date, and unlocks a stunning amount of
potential for users to create and iterate on their creations. The following
sections describe what's new for InvokeAI.
## v2.2.2 <small>(30 November 2022)</small>
!!! note
The binary installer is not ready for prime time. First time users are recommended to install via the "source" installer accessible through the links at the bottom of this page.
With InvokeAI 2.2, this project now provides enthusiasts and professionals a
robust workflow solution for creating AI-generated and human facilitated
compositions. Additional enhancements have been made as well, improving safety,
ease of use, and installation.
Optimized for efficiency, InvokeAI needs only ~3.5GB of VRAM to generate a
512x768 image (and less for smaller images), and is compatible with
Windows/Linux/Mac (M1 & M2).
You can see the [release video](https://youtu.be/hIYBfDtKaus) here, which
introduces the main WebUI enhancement for version 2.2 -
[The Unified Canvas](https://invoke-ai.github.io/InvokeAI/features/UNIFIED_CANVAS/).
This new workflow is the biggest enhancement added to the WebUI to date, and
unlocks a stunning amount of potential for users to create and iterate on their
creations. The following sections describe what's new for InvokeAI.
## v2.2.0 <small>(2 December 2022)</small>
With InvokeAI 2.2, this project now provides enthusiasts and professionals a
robust workflow solution for creating AI-generated and human facilitated
compositions. Additional enhancements have been made as well, improving safety,
ease of use, and installation.
Optimized for efficiency, InvokeAI needs only ~3.5GB of VRAM to generate a
512x768 image (and less for smaller images), and is compatible with
Windows/Linux/Mac (M1 & M2).
You can see the [release video](https://youtu.be/hIYBfDtKaus) here, which
introduces the main WebUI enhancement for version 2.2 -
[The Unified Canvas](features/UNIFIED_CANVAS.md). This new workflow is the
biggest enhancement added to the WebUI to date, and unlocks a stunning amount of
potential for users to create and iterate on their creations. The following
sections describe what's new for InvokeAI.
## v2.1.3 <small>(13 November 2022)</small>
- A choice of installer scripts that automate installation and configuration.
See
[Installation](installation/index.md).
- A streamlined manual installation process that works for both Conda and
PIP-only installs. See
[Manual Installation](installation/020_INSTALL_MANUAL.md).
- The ability to save frequently-used startup options (model to load, steps,
sampler, etc) in a `.invokeai` file. See
[Client](features/CLI.md)
- Support for AMD GPU cards (non-CUDA) on Linux machines.
- Multiple bugs and edge cases squashed.
## v2.1.0 <small>(2 November 2022)</small>
- Update generate.py by @unreleased in #1109
- Update 'ldm' env to 'invokeai' in troubleshooting steps by @19wolf in #1125
- Fix broken doc links, fix malaprop in the project subtitle by @majick in #1131
- Update gitignore to ignore codeformer weights at new location by
  @spezialspezial in #1136
- fix links to point to invoke-ai.github.io #1117 by @mauwii in #1143
- Rework-mkdocs by @mauwii in #1144
- Fix gh actions by @mauwii in #1128
- update mac instructions to use invokeai for env name by @willwillems in #1030
- Update .gitignore by @blessedcoolant in #1040
- reintroduce fix for m1 from #579 missing after merge by @skurovec in #1056
- Update Stable_Diffusion_AI_Notebook.ipynb (Take 2) by @ChloeL19 in #1060
- Print out the device type which is used by @manzke in #1073
- Hires Addition by @hipsterusername in #1063
- fix for "1 leaked semaphore objects to clean up at shutdown" on M1 by
@skurovec in #1081
- Forward dream.py to invoke.py using the same interpreter, add deprecation
warning by @db3000 in #1077
- fix noisy images at high step counts by @lstein in #1086
- Generalize facetool strength argument by @db3000 in #1078
- Enable fast switching among models at the invoke> command line by @lstein in
#1066
- Fix Typo, committed changing ldm environment to invokeai by @jdries3 in #1095
- Fixed documentation typos and resolved merge conflicts by @rupeshs in #1123
- Only output facetool parameters if enhancing faces by @db3000 in #1119
- add option to CLI and pngwriter that allows user to set PNG compression level
by @lstein in #1127
- Fix img2img DDIM index out of bound by @wfng92 in #1137
- Add text prompt to inpaint mask support by @lstein in #1133
- Respect http[s] protocol when making socket.io middleware by @damian0815 in
#976
- WebUI: Adds Codeformer support by @psychedelicious in #1151
- Skips normalizing prompts for web UI metadata by @psychedelicious in #1165
- Add Asymmetric Tiling by @carson-katri in #1132
- Web UI: Increases max CFG Scale to 200 by @psychedelicious in #1172
- Corrects color channels in face restoration; Fixes #1167 by @psychedelicious
in #1175
- Flips channels using array slicing instead of using OpenCV by @psychedelicious
in #1178
- Fix typo in docs: s/Formally/Formerly by @noodlebox in #1176
- fix clipseg loading problems by @lstein in #1177
- Correct color channels in upscale using array slicing by @wfng92 in #1181
- Web UI: Filters existing images when adding new images; Fixes #1085 by
@psychedelicious in #1171
- fix a number of bugs in textual inversion by @lstein in #1190
- Improve !fetch, add !replay command by @ArDiouscuros in #882
- Fix generation of image with s>1000 by @holstvoogd in #951
- Web UI: Gallery improvements by @psychedelicious in #1198
- Update CLI.md by @krummrey in #1211
- outcropping improvements by @lstein in #1207
- add support for loading VAE autoencoders by @lstein in #1216
- remove duplicate fix_func for MPS by @wfng92 in #1210
- Metadata storage and retrieval fixes by @lstein in #1204
- nix: add shell.nix file by @Cloudef in #1170
- Web UI: Changes vite dist asset paths to relative by @psychedelicious in #1185
- Web UI: Removes isDisabled from PromptInput by @psychedelicious in #1187
- Allow user to generate images with initial noise as on M1 / mps system by
@ArDiouscuros in #981
- feat: adding filename format template by @plucked in #968
- Web UI: Fixes broken bundle by @psychedelicious in #1242
- Support runwayML custom inpainting model by @lstein in #1243
- Update IMG2IMG.md by @talitore in #1262
- New dockerfile - including a build- and a run- script as well as a GH-Action
by @mauwii in #1233
- cut over from karras to model noise schedule for higher steps by @lstein in
#1222
- Prompt tweaks by @lstein in #1268
- Outpainting implementation by @Kyle0654 in #1251
- fixing aspect ratio on hires by @tjennings in #1249
- Fix-build-container-action by @mauwii in #1274
- handle all unicode characters by @damian0815 in #1276
- adds models.user.yml to .gitignore by @JakeHL in #1281
- remove debug branch, set fail-fast to false by @mauwii in #1284
- Protect-secrets-on-pr by @mauwii in #1285
- Web UI: Adds initial inpainting implementation by @psychedelicious in #1225
- fix environment-mac.yml - tested on x64 and arm64 by @mauwii in #1289
- Use proper authentication to download model by @mauwii in #1287
- Prevent indexing error for mode RGB by @spezialspezial in #1294
- Integrate sd-v1-5 model into test matrix (easily expandable), remove
  unnecessary caches by @mauwii in #1293
- add --no-interactive to configure_invokeai step by @mauwii in #1302
- 1-click installer and updater. Uses micromamba to install git and conda into a
contained environment (if necessary) before running the normal installation
script by @cmdr2 in #1253
- configure_invokeai.py script downloads the weight files by @lstein in #1290
## v2.0.1 <small>(13 October 2022)</small>
- fix noisy images at high step count when using k\* samplers
- dream.py script now calls invoke.py module directly rather than via a new
python process (which could break the environment)
## v2.0.0 <small>(9 October 2022)</small>
- `dream.py` script renamed `invoke.py`. A `dream.py` script wrapper remains for
backward compatibility.
- Completely new WebGUI - launch with `python3 scripts/invoke.py --web`
- Support for [inpainting](features/INPAINTING.md) and
[outpainting](features/OUTPAINTING.md)
- img2img runs on all k\* samplers
- Support for
[negative prompts](features/PROMPTS.md#negative-and-unconditioned-prompts)
- Support for CodeFormer face reconstruction
- Support for Textual Inversion on Macintoshes
- Support in both WebGUI and CLI for
[post-processing of previously-generated images](features/POSTPROCESS.md)
using facial reconstruction, ESRGAN upscaling, outcropping (similar to DALL-E
infinite canvas), and "embiggen" upscaling. See the `!fix` command.
- New `--hires` option on `invoke>` line allows
[larger images to be created without duplicating elements](features/CLI.md#this-is-an-example-of-txt2img),
at the cost of some performance.
- New `--perlin` and `--threshold` options allow you to add and control
variation during image generation (see
[Thresholding and Perlin Noise Initialization](features/OTHER.md#thresholding-and-perlin-noise-initialization-options))
- Extensive metadata now written into PNG files, allowing reliable regeneration
of images and tweaking of previous settings.
- Command-line completion in `invoke.py` now works on Windows, Linux and Mac
platforms.
- Improved [command-line completion behavior](features/CLI.md). New commands
  added:
- List command-line history with `!history`
- Search command-line history with `!search`
- Clear history with `!clear`
- Deprecated `--full_precision` / `-F`. Simply omit it and `invoke.py` will
  auto-configure. To switch away from auto, use the new flag, e.g.
  `--precision=float32` (see the usage sketch after this list).
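A hedged sketch of how several of the additions above might be exercised from the shell and the `invoke>` prompt; the prompt text, option values, and file path are illustrative only:
```bash
# Launch the new WebGUI (run from the repository root).
python3 scripts/invoke.py --web

# Or start the interactive CLI, overriding the auto-configured precision.
python3 scripts/invoke.py --precision=float32
# Example invoke> commands (prompt text and option values are illustrative):
#   invoke> a sunlit forest clearing --hires
#   invoke> a sunlit forest clearing --perlin 0.5 --threshold 3
#   invoke> !fix outputs/img-samples/000010.95183149.png
#   invoke> !history
#   invoke> !search forest
#   invoke> !clear
```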
## v1.14 <small>(11 September 2022)</small>
- Memory optimizations for small-RAM cards. 512x512 now possible on 4 GB GPUs.
- Full support for Apple hardware with M1 or M2 chips.
- Add "seamless mode" for circular tiling of image. Generates beautiful effects.
([prixt](https://github.com/prixt)).
- Inpainting support.
- Improved web server GUI.
- Lots of code and documentation cleanups.
## v1.13 <small>(3 September 2022)</small>
- Support image variations (see [VARIATIONS](features/VARIATIONS.md))
  ([Kevin Gibbons](https://github.com/bakkot) and many contributors and reviewers)
- Supports a Google Colab notebook for a standalone server running on Google hardware
[Arturo Mendivil](https://github.com/artmen1516)
- WebUI supports GFPGAN/ESRGAN facial reconstruction and upscaling
[Kevin Gibbons](https://github.com/bakkot)
- WebUI supports incremental display of in-progress images during generation
[Kevin Gibbons](https://github.com/bakkot)
- A new configuration file scheme that allows new models (including upcoming
stable-diffusion-v1.5) to be added without altering the code.
([David Wager](https://github.com/maddavid12))
- Can specify `--grid` on the `invoke.py` command line as the default (see the
  example after this list).
- Miscellaneous internal bug and stability fixes.
- Works on M1 Apple hardware.
- Multiple bug fixes.
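A minimal example of the `--grid` default described above; the prompt and batch count are illustrative:
```bash
# Make grid output the default for this session (run from the repository root).
python3 scripts/invoke.py --grid
# A multi-image run is then assembled into a single grid, e.g.:
#   invoke> a pastoral landscape -n4
```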
---
- Improved file handling, including the ability to read prompts from standard
  input. (kudos to [Yunsaki](https://github.com/yunsaki))
- The web server is now integrated with the invoke.py script. Invoke by adding
  --web to the invoke.py command arguments.
- Face restoration and upscaling via GFPGAN and Real-ESRGAN are now
  automatically enabled if the GFPGAN directory is located as a sibling to
  Stable Diffusion (see the layout sketch after this list). VRAM requirements
  are modestly reduced. Thanks to both
  [Blessedcoolant](https://github.com/blessedcoolant) and
  [Oceanswave](https://github.com/oceanswave) for their work on this.
- You can now swap samplers on the invoke> command line.
  [Blessedcoolant](https://github.com/blessedcoolant)
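A minimal sketch of the sibling-directory layout and launch command implied above; the paths are illustrative, not prescriptive:
```bash
# Illustrative layout in which GFPGAN is auto-detected as a sibling checkout:
#   ~/src/stable-diffusion/   <- this repository
#   ~/src/GFPGAN/             <- GFPGAN checked out alongside it
cd ~/src/stable-diffusion
python3 scripts/invoke.py --web   # start the integrated web server
```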
---
## v1.11 <small>(26 August 2022)</small>
- NEW FEATURE: Support upscaling and face enhancement using the GFPGAN module.
  (kudos to [Oceanswave](https://github.com/Oceanswave))
- You can now specify a seed of -1 to use the previous image's seed, -2 to use
  the seed for the image generated before that, and so on. Seed memory only
  extends back to the previous command, but it works on all images generated
  with the -n# switch (see the example after this list).
- Variant generation support temporarily disabled pending a more general
  solution.
- Created a feature branch named **yunsaki-morphing-invoke** which adds
  experimental support for iteratively modifying the prompt and its parameters.
  Please see [Pull Request #86](https://github.com/lstein/stable-diffusion/pull/86)
  for a synopsis of how this works. Note that when this feature is eventually
  added to the main branch, it may be modified significantly.
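A brief, hypothetical session showing the seed-memory feature; it assumes `-S` is the seed switch and `-n` the batch count, and the prompt text is illustrative:
```bash
# Hypothetical invoke> session; -S is assumed to be the seed switch.
#   invoke> "a watercolor lighthouse" -n3     # generate three images
#   invoke> "a watercolor lighthouse" -S -1   # reuse the seed of the last image
#   invoke> "a watercolor lighthouse" -S -2   # reuse the seed of the image before it
```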
---
## v1.10 <small>(25 August 2022)</small>
- A barebones but fully functional interactive web server for online generation
of txt2img and img2img.
---
## v1.09 <small>(24 August 2022)</small>
- A new -v option allows you to generate multiple variants of an initial image
  in img2img mode. (kudos to [Oceanswave](https://github.com/Oceanswave);
  [see this discussion in the PR for examples and details on use](https://github.com/lstein/stable-diffusion/pull/71#issuecomment-1226700810))
- Added ability to personalize text to image generation (kudos to
[Oceanswave](https://github.com/Oceanswave) and
[nicolai256](https://github.com/nicolai256))
- Enabled all of the samplers from k_diffusion
---
## v1.08 <small>(24 August 2022)</small>
- Escape single quotes on the invoke> command before trying to parse. This
avoids parse errors.
- Removed the instruction to get Python 3.8 as the first step in the Windows
  install; Anaconda3 does it for you.
- Added bounds checks for numeric arguments that could cause crashes.
---
## v1.07 <small>(23 August 2022)</small>
- Image filenames will now never fill gaps in the sequence, but will be assigned
the next higher name in the chosen directory. This ensures that the alphabetic
and chronological sort orders are the same.
---
## v1.06 <small>(23 August 2022)</small>
- Added weighted prompt support contributed by
[xraxra](https://github.com/xraxra)
- Example of using weighted prompts to tweak a demonic figure contributed by
[bmaltais](https://github.com/bmaltais)
---
## v1.05 <small>(22 August 2022 - after the drop)</small>
- Filenames now use the following formats:
    - 000010.95183149.png and 000010.26742632.png -- two files produced by the
      same command (e.g. -n2), distinguished by different seeds.
    - 000011.455191342.01.png and 000011.455191342.02.png -- two files produced
      by the same command using a batch size > 1 (e.g. -b2); they share the
      same seed.
    - 000011.4160627868.grid#1-4.png -- a grid of four images (-g); the whole
      grid can be regenerated with the indicated key.
- It should no longer be possible for one image to overwrite another.
- You can use the "cd" and "pwd" commands at the invoke> prompt to set and
  retrieve the path of the output directory (see the example after this list).
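A short, hypothetical `invoke>` session illustrating the new output-directory commands; the path is illustrative:
```bash
# Hypothetical invoke> session (the path is illustrative).
#   invoke> cd /home/me/pictures/sd-outputs   # set the output directory
#   invoke> pwd                               # print the current output directory
```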
---
## v1.03 <small>(22 August 2022)</small>
- The original txt2img and img2img scripts from the CompVis repository have been
  moved into a subfolder named "orig_scripts", to reduce confusion.
---
## v1.02 <small>(21 August 2022)</small>
- A copy of the prompt and all of its switches and options is now stored in the
  corresponding image in a tEXt metadata field named "Dream". You can read the
  prompt using scripts/images2prompt.py, or an image editor that allows you to
  explore the full metadata (see the example after this list). **Please run
  "conda env update" to load the k_lms dependencies!!**
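A quick sketch of reading the stored prompt back out; the output path is illustrative, and passing the image as a positional argument to the script is an assumption:
```bash
# Read the stored "Dream" prompt back from a generated image (path illustrative;
# the positional-argument usage is assumed).
python3 scripts/images2prompt.py outputs/img-samples/000010.95183149.png
```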
---
## v1.01 <small>(21 August 2022)</small>
- added k_lms sampling. **Please run "conda env update" to load the k_lms
  dependencies!!**
- use half-precision arithmetic by default, resulting in faster execution and
  lower memory requirements. Pass the --full_precision argument to invoke.py to
  get slower but more accurate image generation (see the example after this
  list).
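For reference, the two launch modes described above, following the `scripts/invoke.py` convention used elsewhere in this changelog:
```bash
# Default launch: half-precision arithmetic (faster, lower memory use).
python3 scripts/invoke.py

# Slower but more accurate generation with full precision.
python3 scripts/invoke.py --full_precision
```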
---
