Compare commits

...

129 Commits

Author SHA1 Message Date
Lincoln Stein
10d8d1bb25 Merge branch 'main' of github.com:invoke-ai/InvokeAI into main 2022-10-10 09:35:54 -04:00
Lincoln Stein
b30ae57731 update web gui walkthrough 2022-10-10 09:35:40 -04:00
Lincoln Stein
b0bfbafd3d update web gui walkthrough 2022-10-10 09:33:40 -04:00
Lincoln Stein
7c50bd2039 rebuild front end 2022-10-10 09:19:52 -04:00
Lincoln Stein
ae4e385abd merge changes to mac installation instructions 2022-10-10 09:18:48 -04:00
Lincoln Stein
e301cd3321 Update OUTPAINTING.md
fix typo
2022-10-10 09:13:35 -04:00
Lincoln Stein
2977680ca1 add links for history processing 2022-10-10 09:13:35 -04:00
Lincoln Stein
2a5aa6e986 fix typos 2022-10-10 09:13:35 -04:00
Lincoln Stein
3bba41ee89 add more features to changelog 2022-10-10 09:13:35 -04:00
Lincoln Stein
179b5f7839 frontend rebuild 2022-10-10 09:13:35 -04:00
Lincoln Stein
26d7712f03 fix link error 2022-10-10 09:13:35 -04:00
Lincoln Stein
c0b370e1b9 add perlin noise to list of new features 2022-10-10 09:13:34 -04:00
Lincoln Stein
15cc92e54a fix environment-mac.yml as per #964 2022-10-10 09:12:48 -04:00
Marco Labarile
acdd5b3922 Fix markdown typo in WEB.md 2022-10-10 09:09:04 -04:00
psychedelicious
9685fc210c Updates INSTALL_MAC.md 2022-10-10 09:08:14 -04:00
Jim Hays
f4cdc0001f Update references to lstein/stable-diffusion to invoke-ai/InvokeAI 2022-10-10 09:06:51 -04:00
Lincoln Stein
3f78e9a1a3 rebuild frontend 2022-10-10 09:06:06 -04:00
Eric Wolf
280e2899d7 fix typo 2022-10-10 09:05:45 -04:00
Lincoln Stein
82b0bb838c fix link error 2022-10-10 09:05:45 -04:00
Lincoln Stein
8482518618 add perlin noise to list of new features 2022-10-10 09:05:45 -04:00
Lincoln Stein
6425bda663 add short list of 2.0.0 new features 2022-10-10 09:05:45 -04:00
psychedelicious
12413b0be6 Fix safari display:grid lag 2022-10-10 03:13:56 +02:00
Lincoln Stein
275dca83be Update OUTPAINTING.md
fix typo
2022-10-09 18:46:23 -04:00
Lincoln Stein
be5bf03ccc add links for history processing 2022-10-09 18:44:31 -04:00
Lincoln Stein
0c479cd706 fix typos 2022-10-09 18:43:09 -04:00
Lincoln Stein
7325b73073 add more features to changelog 2022-10-09 18:41:57 -04:00
Lincoln Stein
49380f75a9 frontend rebuild 2022-10-09 18:25:18 -04:00
Lincoln Stein
3d4276439f merge prior to backing out PR #1000 2022-10-09 18:24:15 -04:00
Lincoln Stein
a4c36dbc15 fix link error 2022-10-09 18:21:13 -04:00
Lincoln Stein
4fbd11a1f2 add perlin noise to list of new features 2022-10-09 18:21:13 -04:00
Lincoln Stein
8ce3d4dd7f add short list of 2.0.0 new features 2022-10-09 18:21:13 -04:00
Lincoln Stein
b82c968278 fix references from lstein/stable-diffusion to invoke-ai/InvokeAI
- as per #989
2022-10-09 18:21:13 -04:00
Lincoln Stein
bc8e86e643 fix environment-mac.yml as per #964 2022-10-09 18:21:13 -04:00
Lincoln Stein
1b6fab59a4 run make_schedule() if it hasn't already been called
- fixes #984
2022-10-09 18:21:13 -04:00
Lincoln Stein
d1dd35a1d2 final tweak to embedded screenshots in WEB.md 2022-10-09 18:21:13 -04:00
Lincoln Stein
400f062771 make initial screenshot even larger 2022-10-09 18:21:13 -04:00
Lincoln Stein
40894d67ac fixup image sizes in WEB.md 2022-10-09 18:21:13 -04:00
Lincoln Stein
08a0b85111 fix image links in documentation 2022-10-09 18:21:13 -04:00
Lincoln Stein
7da6fad359 add missing doc files 2022-10-09 18:21:05 -04:00
rpagliuca
b24d182237 Update README.md
Small writing error
2022-10-09 18:17:11 -04:00
Marco Labarile
2bdcc106f2 Fix markdown typo in WEB.md 2022-10-09 18:16:27 -04:00
psychedelicious
7a98387e8d Updates INSTALL_MAC.md 2022-10-09 18:16:27 -04:00
Jim Hays
58d0f14d03 Update references to lstein/stable-diffusion to invoke-ai/InvokeAI 2022-10-09 18:16:27 -04:00
rpagliuca
bc9471987b Update README.md
Small writing error
2022-10-09 18:16:27 -04:00
Lincoln Stein
dc6e60cbcc Update INPAINTING.md
Changed Gimp instructions to indicate that partial transparency is better than full transparency.
2022-10-09 18:16:27 -04:00
Lincoln Stein
7dae5fb131 rebuild frontend 2022-10-09 18:16:24 -04:00
Eric Wolf
3bc1ff5e5a fix typo 2022-10-09 18:07:57 -04:00
Lincoln Stein
8ff9c69e2f fix link error 2022-10-09 16:41:05 -04:00
Lincoln Stein
988ace8029 add perlin noise to list of new features 2022-10-09 16:39:36 -04:00
Lincoln Stein
6e9d996ece add short list of 2.0.0 new features 2022-10-09 16:36:00 -04:00
Lincoln Stein
789714b0b1 fix references from lstein/stable-diffusion to invoke-ai/InvokeAI
- as per #989
2022-10-09 15:38:22 -04:00
Lincoln Stein
773a64d4c0 fix references from lstein/stable-diffusion to invoke-ai/InvokeAI
- as per #989
2022-10-09 15:37:45 -04:00
Lincoln Stein
bb7629d2b8 fix environment-mac.yml as per #964 2022-10-09 15:34:19 -04:00
Lincoln Stein
745c020aa2 fix environment-mac.yml as per #964 2022-10-09 15:33:56 -04:00
Lincoln Stein
c5344acb25 run make_schedule() if it hasn't already been called
- fixes #984
2022-10-09 15:30:23 -04:00
Lincoln Stein
318eb35ea0 run make_schedule() if it hasn't already been called
- fixes #984
2022-10-09 15:29:04 -04:00
Lincoln Stein
6e2fd2affe rebuild frontend 2022-10-09 14:52:00 -04:00
Lincoln Stein
8faa06fb15 Merge branch 'main' into development
- this syncs documentation and code
2022-10-09 14:47:27 -04:00
Lincoln Stein
ce8c238ac4 final tweak to embedded screenshots in WEB.md 2022-10-09 11:45:16 -04:00
Lincoln Stein
f6c37e46e1 make initial screenshot even larger 2022-10-09 11:43:42 -04:00
Lincoln Stein
2d69efccef fixup image sizes in WEB.md 2022-10-09 11:42:59 -04:00
Lincoln Stein
f9d2aafaeb fix image links in documentation 2022-10-09 11:42:03 -04:00
Kent Keirsey
22514aec2e Minor CSS & Link Updates
Also updated dist files with new CSS
2022-10-09 11:40:16 -04:00
Lincoln Stein
5a22a83f4c add missing doc files 2022-10-09 11:38:39 -04:00
Lincoln Stein
b1d43eae46 almost ready for public release
- merged release-candidate-2
- fix up documentation
- add web tutorial
2022-10-09 11:37:00 -04:00
Marco Labarile
0b8cdb6964 Fix markdown typo in WEB.md 2022-10-09 08:57:14 -04:00
psychedelicious
aed5ad22fb Updates INSTALL_MAC.md 2022-10-09 08:49:29 -04:00
Jim Hays
dc9c16b93d Update references to lstein/stable-diffusion to invoke-ai/InvokeAI 2022-10-09 08:48:23 -04:00
rpagliuca
f6e858a548 Update README.md
Small writing error
2022-10-09 08:45:55 -04:00
Lincoln Stein
4c2db171ca Update INPAINTING.md
Changed Gimp instructions to indicate that partial transparency is better than full transparency.
2022-10-09 08:45:33 -04:00
Lincoln Stein
1255127e49 rebuild frontend 2022-10-09 08:37:51 -04:00
blessedcoolant
1cb74a6357 [WebUI] Masonry Layout for Gallery 2022-10-09 08:36:28 -04:00
psychedelicious
5e2b250426 Images grow to fit space in gallery 2022-10-09 08:36:17 -04:00
blessedcoolant
ad190cfbb2 Smaller Gallery Images 2022-10-09 08:36:03 -04:00
blessedcoolant
542ceb051b Rework Gallery DIsplay 2022-10-09 08:34:57 -04:00
blessedcoolant
3473669458 WebUI Bug Fixes & Tweaks 2022-10-09 08:33:18 -04:00
blessedcoolant
3170c83d8d [WebUI] Masonry Layout for Gallery 2022-10-09 08:32:06 -04:00
psychedelicious
3046dabde2 Images grow to fit space in gallery 2022-10-09 08:32:06 -04:00
blessedcoolant
1b02074fea Smaller Gallery Images 2022-10-09 08:32:06 -04:00
blessedcoolant
f15fd2c3d3 Rework Gallery DIsplay 2022-10-09 08:32:06 -04:00
blessedcoolant
081271d6a1 WebUI Bug Fixes & Tweaks 2022-10-09 08:32:06 -04:00
Peter Baylies
27f62999c9 * Fix for Perlin noise issue for cuda as well. 2022-10-09 08:24:02 -04:00
Peter Baylies
89d130edf4 * Fix for Perlin noise issue for cuda as well. 2022-10-09 08:23:23 -04:00
Lincoln Stein
31869885d9 enhance the in-line -h command help text
- the prompt argument comes before the optional arguments
- usage statement shows 'invoke>' rather than 'invoke.py'
- use pydoc pager to help display long help message
2022-10-08 13:55:05 -04:00
Lincoln Stein
4c026d9d92 enhance the in-line -h command help text
- the prompt argument comes before the optional arguments
- usage statement shows 'invoke>' rather than 'invoke.py'
- use pydoc pager to help display long help message
2022-10-08 13:53:56 -04:00
Any-Winter-4079
435231ef08 Get for external TI .bin files to work
Issue referenced in https://github.com/invoke-ai/InvokeAI/issues/980#issuecomment-1272162880
Users whose embeddings are trained on a non-regular num_vectors_per_token (e.g. 6), should update this value in their local repo, to get that embedding to work.
2022-10-08 13:18:19 -04:00
Any-Winter-4079
19a79caf41 Get for external TI .bin files to work
Issue referenced in https://github.com/invoke-ai/InvokeAI/issues/980#issuecomment-1272162880
Users whose embeddings are trained on a non-regular num_vectors_per_token (e.g. 6), should update this value in their local repo, to get that embedding to work.
2022-10-08 13:17:44 -04:00
David Burnett
7b095f8f97 add realesrgan to requirements.txt, remove nightie for torch and torchvision due to performance issues 2022-10-08 12:01:45 -04:00
psychedelicious
9579a401b5 Fixes CORS handling 2022-10-08 11:56:38 -04:00
blessedcoolant
8ea88f49b1 Fix Gallery being open by default 2022-10-08 21:23:41 +13:00
blessedcoolant
a62541d976 Merge branch 'webui-image-drawer' of https://github.com/blessedcoolant/InvokeAI into webui-image-drawer 2022-10-08 17:39:50 +13:00
blessedcoolant
fbd9a49899 [WebUI] Gallery Drawer Release Build 2022-10-08 17:36:18 +13:00
blessedcoolant
4e571e12b8 Add Image Gallery Drawer 2022-10-08 17:33:47 +13:00
blessedcoolant
2567f5faa5 Add Image Gallery Drawer 2022-10-08 16:55:39 +13:00
blessedcoolant
3b0c4b74b6 [WebUI] Add Image To Image UI 2022-10-07 16:28:19 -04:00
Lincoln Stein
5157cbeda1 restore ability of ksamplers to process -v variation options
- supersedes #977
2022-10-07 16:21:16 -04:00
Lincoln Stein
b296933ba0 autorotate init images using exif orientation tag 2022-10-07 12:06:40 -04:00
Jakub Kolčář
45cc867b0c fixed perlin noise generation for mps (macos) - fix for cpu fallback 2022-10-07 10:35:42 -04:00
Lincoln Stein
333219be35 fix broken image generation on plms and ddim samplers 2022-10-07 08:26:53 -04:00
spezialspezial
c1230da3ab remove duplicated code 2022-10-07 08:13:34 -04:00
Kent Keirsey
4f247a3672 Web Docs Update 2022-10-07 13:41:27 +13:00
Lincoln Stein
1f25f52af9 Merge branch 'development' of github.com:invoke-ai/InvokeAI into development 2022-10-06 18:31:25 -04:00
Lincoln Stein
7541c7cf5d fix k_samplers in img2img - probably correct now 2022-10-06 18:31:04 -04:00
blessedcoolant
a6cdde3ce4 Change Invoke Button Text To Invoke 2022-10-07 10:37:35 +13:00
blessedcoolant
a53b9a443f Fix WebUI Not Working 2022-10-07 08:09:55 +13:00
blessedcoolant
6e1328d4c2 Fix WebUI Not Working 2022-10-07 08:02:10 +13:00
Lincoln Stein
440065f7f8 revert previous change 2022-10-06 14:57:06 -04:00
Lincoln Stein
2c27e759cd fix #889 - fuzzy k* img2img at low strength 2022-10-06 14:16:56 -04:00
blessedcoolant
74419f41a3 Release Candidate 2.0 WebUI 2022-10-07 06:50:34 +13:00
blessedcoolant
542ee56c77 [WebUI] Fix Threshold and Perlin Noise Styling 2022-10-07 06:48:16 +13:00
psychedelicious
461e662644 Adds hotkeys to modal 2022-10-07 06:44:47 +13:00
psychedelicious
58d73f5cae Adds next/prev image buttons/hotkeys 2022-10-07 06:44:47 +13:00
blessedcoolant
0c1c220bb9 Revert Auto Build Frontend Workflow 2022-10-07 06:41:03 +13:00
blessedcoolant
bf5ccfffa5 Merge branch 'development' of https://github.com/invoke-ai/InvokeAI into development 2022-10-07 06:29:24 +13:00
blessedcoolant
7b270ec3b0 Revert "[bot] builds dev bundle"
This reverts commit 7a0d4c3350.
2022-10-07 06:26:58 +13:00
Lincoln Stein
e4ef7bdbb9 Merge branch 'development' into webui-hotkeys 2022-10-06 13:25:12 -04:00
blessedcoolant
c7ccb9dacd Fix WebUI CORS Issue 2022-10-06 11:15:33 -04:00
GitHub Actions Bot
7a0d4c3350 [bot] builds dev bundle 2022-10-06 11:15:33 -04:00
blessedcoolant
40d7141a4d Add Basic Hotkey Support 2022-10-07 02:29:47 +13:00
psychedelicious
c430f5452b Resolves @bakkot's review 2022-10-06 07:27:45 -04:00
psychedelicious
97de5e31f9 Sets up GH actions to auto-build frontend bundle 2022-10-06 07:27:45 -04:00
Lincoln Stein
a99aab6309 enable --hires to use k* samplers 2022-10-05 20:10:21 -04:00
ArDiouscuros
5a40f7ad15 Fix for crashes in txt2img hires fix mode 2022-10-05 20:10:06 -04:00
Damian at mba
0f55d89e20 Improve IMG2IMG docs with deeper explanation of what is happening under the hood 2022-10-05 10:21:03 -04:00
Marco Labarile
8a8be92eac Fix markdown typo in WEB.md 2022-10-04 22:53:56 -04:00
psychedelicious
9318719b9e Updates INSTALL_MAC.md 2022-10-04 07:16:42 -04:00
Jim Hays
8e76bc2b5d Update references to lstein/stable-diffusion to invoke-ai/InvokeAI 2022-10-01 18:15:20 -04:00
rpagliuca
1af86618e3 Update README.md
Small writing error
2022-10-01 15:00:25 -04:00
Lincoln Stein
b732bcad2f Update INPAINTING.md
Changed Gimp instructions to indicate that partial transparency is better than full transparency.
2022-10-01 12:17:46 -04:00
64 changed files with 1466 additions and 931 deletions

View File

@@ -41,10 +41,13 @@ _This repository was formally known as lstein/stable-diffusion_
[latest release link]: https://github.com/invoke-ai/InvokeAI/releases
</div>
This is a fork of [CompVis/stable-diffusion](https://github.com/CompVis/stable-diffusion), the open
source text-to-image generator. It provides a streamlined process with various new features and
options to aid the image generation process. It runs on Windows, Mac and Linux machines, and runs on
GPU cards with as little as 4 GB or RAM.
This is a fork of
[CompVis/stable-diffusion](https://github.com/CompVis/stable-diffusion),
the open source text-to-image generator. It provides a streamlined
process with various new features and options to aid the image
generation process. It runs on Windows, Mac and Linux machines, on
GPU cards with as little as 4 GB of RAM. It provides both a polished
Web interface and an easy-to-use command-line interface.
_Note: This fork is rapidly evolving. Please use the
[Issues](https://github.com/invoke-ai/InvokeAI/issues) tab to report bugs and make feature
@@ -90,7 +93,12 @@ You wil need one of the following:
- At least 6 GB of free disk space for the machine learning model, Python, and all its dependencies.
#### Note
**Note**
If you have a Nvidia 10xx series card (e.g. the 1080ti), please
run the dream script in full-precision mode as shown below.
Similarly, specify full-precision mode on Apple M1 hardware.
Precision is auto-configured based on the device. If, however, you encounter
errors like 'expected type Float but found Half' or 'not implemented for Half'
@@ -104,6 +112,7 @@ you can try starting `invoke.py` with the `--precision=float32` flag:
#### Major Features
- [Web Server](docs/features/WEB.md)
- [Interactive Command Line Interface](docs/features/CLI.md)
- [Image To Image](docs/features/IMG2IMG.md)
- [Inpainting Support](docs/features/INPAINTING.md)
@@ -111,7 +120,6 @@ you can try starting `invoke.py` with the `--precision=float32` flag:
- [Upscaling, face-restoration and outpainting](docs/features/POSTPROCESS.md)
- [Seamless Tiling](docs/features/OTHER.md#seamless-tiling)
- [Google Colab](docs/features/OTHER.md#google-colab)
- [Web Server](docs/features/WEB.md)
- [Reading Prompts From File](docs/features/PROMPTS.md#reading-prompts-from-a-file)
- [Shortcut: Reusing Seeds](docs/features/OTHER.md#shortcuts-reusing-seeds)
- [Prompt Blending](docs/features/PROMPTS.md#prompt-blending)
@@ -128,8 +136,30 @@ you can try starting `invoke.py` with the `--precision=float32` flag:
### Latest Changes
- vNEXT (TODO 2022)
- v2.0.0 (9 October 2022)
- `dream.py` script renamed `invoke.py`. A `dream.py` script wrapper remains
for backward compatibility.
- Completely new WebGUI - launch with `python3 scripts/invoke.py --web`
- Support for <a href="https://github.com/invoke-ai/InvokeAI/blob/main/docs/features/INPAINTING.md">inpainting</a> and <a href="https://github.com/invoke-ai/InvokeAI/blob/main/docs/features/OUTPAINTING.md">outpainting</a>
- img2img runs on all k* samplers
- Support for <a href="https://github.com/invoke-ai/InvokeAI/blob/main/docs/features/PROMPTS.md#negative-and-unconditioned-prompts">negative prompts</a>
- Support for CodeFormer face reconstruction
- Support for Textual Inversion on Macintoshes
- Support in both WebGUI and CLI for <a href="https://github.com/invoke-ai/InvokeAI/blob/main/docs/features/POSTPROCESS.md">post-processing of previously-generated images</a>
using facial reconstruction, ESRGAN upscaling, outcropping (similar to DALL-E infinite canvas),
and "embiggen" upscaling. See the `!fix` command.
- New `--hires` option on `invoke>` line allows <a href="https://github.com/invoke-ai/InvokeAI/blob/main/docs/features/CLI.md#this-is-an-example-of-txt2img">larger images to be created without duplicating elements</a>, at the cost of some performance.
- New `--perlin` and `--threshold` options allow you to add and control variation
during image generation (see <a href="https://github.com/invoke-ai/InvokeAI/blob/main/docs/features/OTHER.md#thresholding-and-perlin-noise-initialization-options">Thresholding and Perlin Noise Initialization</a>)
- Extensive metadata now written into PNG files, allowing reliable regeneration of images
and tweaking of previous settings.
- Command-line completion in `invoke.py` now works on Windows, Linux and Mac platforms.
- Improved <a href="https://github.com/invoke-ai/InvokeAI/blob/main/docs/features/CLI.md">command-line completion behavior</a>.
New commands added:
* List command-line history with `!history`
* Search command-line history with `!search`
* Clear history with `!clear`
- Deprecated `--full_precision` / `-F`. Simply omit it and `invoke.py` will auto-configure. To switch away from auto, use the new flag like `--precision=float32`.
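To illustrate the new precision behavior, here is a minimal sketch of the two launch styles (only the `--precision=float32` flag comes from the changelog above; the first line is a plain default launch):

```bash
# Default launch: precision is auto-configured for your device.
python3 scripts/invoke.py

# If you see errors like "expected type Float but found Half",
# force full 32-bit precision instead:
python3 scripts/invoke.py --precision=float32
```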

View File

@@ -32,7 +32,7 @@ model:
placeholder_strings: ["*"]
initializer_words: ['face', 'man', 'photo', 'africanmale']
per_image_tokens: false
num_vectors_per_token: 6
num_vectors_per_token: 1
progressive_words: False
unet_config:

14 binary image files added (previews not shown); sizes range from 29 KiB to 1.1 MiB.

View File

@@ -4,7 +4,7 @@ title: Changelog
# :octicons-log-16: Changelog
## v1.13 <small>(in process)</small>
## v1.13
- Supports a Google Colab notebook for a standalone server running on Google
hardware [Arturo Mendivil](https://github.com/artmen1516)

View File

@@ -50,6 +50,8 @@ information underneath the transparent needs to be preserved, not erased.
More details can be found here:
[Creating Transparent Images For Inpainting](./INPAINTING.md#creating-transparent-regions-for-inpainting)
**IMPORTANT ISSUE** `img2img` does not work properly on initial images smaller than 512x512. Please scale your
image to at least 512x512 before using it. Larger images are not a problem, but may run out of VRAM on your
GPU card. To fix this, use the --fit option, which downscales the initial image to fit within the box specified
@@ -58,6 +60,7 @@ by width x height:
tree on a hill with a river, national geographic -I./test-pictures/big-sketch.png -H512 -W512 --fit
~~~
## How does it actually work, though?
The main difference between `img2img` and `prompt2img` is the starting point. While `prompt2img` always starts with pure
@@ -67,7 +70,11 @@ gaussian noise and progressively refines it over the requested number of steps,
**Let's start** by thinking about vanilla `prompt2img`, just generating an image from a prompt. If the step count is 10, then the "latent space" (Stable Diffusion's internal representation of the image) for the prompt "fire" with seed `1592514025` develops something like this:
```commandline
invoke> "fire" -s10 -W384 -H384 -S1592514025
```
![latent steps](../assets/img2img/000019.steps.png)
@@ -95,7 +102,11 @@ Notice how much more fuzzy the starting image is for strength `0.7` compared to
| | strength = 0.7 | strength = 0.4 |
| -- | -- | -- |
| initial image that SD sees | ![](../assets/img2img/000032.step-0.png) | ![](../assets/img2img/000030.step-0.png) |
| steps argument to `invoke>` | `-S10` | `-S10` |
| steps actually taken | 7 | 4 |
| latent space at each step | ![](../assets/img2img/000032.steps.gravity.png) | ![](../assets/img2img/000030.steps.gravity.png) |
| output | ![](../assets/img2img/000032.1592514025.png) | ![](../assets/img2img/000030.1592514025.png) |
@@ -106,10 +117,17 @@ Both of the outputs look kind of like what I was thinking of. With the strength
If you want to try this out yourself, all of these are using a seed of `1592514025` with a width/height of `384`, step count `10`, the default sampler (`k_lms`), and the single-word prompt `fire`:
```commandline
invoke> "fire" -s10 -W384 -H384 -S1592514025 -I /tmp/fire-drawing.png --strength 0.7
```
The code for rendering intermediates is on my (damian0815's) branch [document-img2img](https://github.com/damian0815/InvokeAI/tree/document-img2img) - run `invoke.py` and check your `outputs/img-samples/intermediates` folder while generating an image.
### Compensating for the reduced step count
@@ -118,7 +136,11 @@ After putting this guide together I was curious to see how the difference would
Here's strength `0.4` (note step count `50`, which is `20 ÷ 0.4` to make sure SD does `20` steps from my image):
```commandline
invoke> "fire" -s50 -W384 -H384 -S1592514025 -I /tmp/fire-drawing.png -f 0.4
```
![](../assets/img2img/000035.1592514025.png)
@@ -126,7 +148,11 @@ invoke> "fire" -s50 -W384 -H384 -S1592514025 -I /tmp/fire-drawing.png -f 0.4
and strength `0.7` (note step count `30`, which is roughly `20 ÷ 0.7` to make sure SD does `20` steps from my image):
```commandline
invoke> "fire" -s30 -W384 -H384 -S1592514025 -I /tmp/fire-drawing.png -f 0.7
```
![](../assets/img2img/000046.1592514025.png)
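To make the step arithmetic in this section explicit: the effective number of steps taken from your image is roughly the requested step count multiplied by the strength, so to hit a target effective step count you divide by the strength. A small shell sketch of that calculation (the numbers mirror the examples above):

```bash
# Effective steps ≈ requested steps × strength.
# To take ~20 real steps from the initial image, request desired ÷ strength:
desired=20
for strength in 0.4 0.7; do
    awk -v d="$desired" -v s="$strength" \
        'BEGIN { printf "strength %.1f: request about -s%.0f\n", s, d / s }'
done
# strength 0.4: request about -s50
# strength 0.7: request about -s29   (the guide rounds this up to 30)
```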

View File

@@ -38,8 +38,8 @@ We are hoping to get rid of the need for this workaround in an upcoming release.
2. Layer->Transparency->Add Alpha Channel
3. Use lasso tool to select region to mask
4. Choose Select -> Float to create a floating selection
5. Open the Layers toolbar (++ctrl+l++) and select "Floating Selection"
6. Set opacity to 0%
5. Open the Layers toolbar (^L) and select "Floating Selection"
6. Set opacity to a value between 0% and 99%
7. Export as PNG
8. In the export dialogue, make sure the "Save colour values from
transparent pixels" checkbox is selected.

View File

@@ -70,7 +70,7 @@ additional 64 pixels to the top of the image:
invoke> !fix images/curly.png --out_direction top 64
~~~
(you can abbreviate ``--out_direction` as `-D`.
(you can abbreviate `--out_direction` as `-D`.)
The result is shown here:

View File

@@ -20,39 +20,33 @@ The default face restoration module is GFPGAN. The default upscale is
Real-ESRGAN. For an alternative face restoration module, see [CodeFormer
Support] below.
As of version 1.14, environment.yaml will install the Real-ESRGAN package into
the standard install location for python packages, and will put GFPGAN into a
subdirectory of "src" in the InvokeAI directory. (The reason for this is
that the standard GFPGAN distribution has a minor bug that adversely affects
image color.) Upscaling with Real-ESRGAN should "just work" without further
intervention. Simply pass the --upscale (-U) option on the invoke> command line,
or indicate the desired scale on the popup in the Web GUI.
As of version 1.14, environment.yaml will install the Real-ESRGAN
package into the standard install location for python packages, and
will put GFPGAN into a subdirectory of "src" in the InvokeAI
directory. Upscaling with Real-ESRGAN should "just work" without
further intervention. Simply pass the --upscale (-U) option on the
invoke> command line, or indicate the desired scale on the popup in
the Web GUI.
For **GFPGAN** to work, there is one additional step needed. You will need to
download and copy the GFPGAN
[models file](https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.4.pth)
into **src/gfpgan/experiments/pretrained_models**. On Mac and Linux systems,
here's how you'd do it using **wget**:
**GFPGAN** requires a series of downloadable model files to
work. These are loaded when you run `scripts/preload_models.py`. If
GFPGAN is failing with an error, please run the following from the
InvokeAI directory:
```bash
wget https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.4.pth -P src/gfpgan/experiments/pretrained_models/
```
~~~~
python scripts/preload_models.py
~~~~
Make sure that you're in the InvokeAI directory when you do this.
If you do not run this script in advance, the GFPGAN module will attempt
to download the model files the first time you try to perform facial
reconstruction.
Alternatively, if you have GFPGAN installed elsewhere, or if you are using an
earlier version of this package which asked you to install GFPGAN in a sibling
directory, you may use the `--gfpgan_dir` argument with `invoke.py` to set a
custom path to your GFPGAN directory. _There are other GFPGAN related boot
arguments if you wish to customize further._
!!! warning "Internet connection needed"
Users whose GPU machines are isolated from the Internet (e.g.
on a University cluster) should be aware that the first time you run invoke.py with GFPGAN and
Real-ESRGAN turned on, it will try to download model files from the Internet. To rectify this, you
may run `python3 scripts/preload_models.py` after you have installed GFPGAN and all its
dependencies.
Alternatively, if you have GFPGAN installed elsewhere, or if you are
using an earlier version of this package which asked you to install
GFPGAN in a sibling directory, you may use the `--gfpgan_dir` argument
with `invoke.py` to set a custom path to your GFPGAN directory. _There
are other GFPGAN related boot arguments if you wish to customize
further._
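For example, a hedged sketch of the `--gfpgan_dir` usage just described (the relative path is an assumption about where your GFPGAN checkout lives):

```bash
# Point invoke.py at a GFPGAN installation in a sibling directory:
python3 scripts/invoke.py --gfpgan_dir ../GFPGAN
```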
## Usage
@@ -124,15 +118,15 @@ actions.
This repo also allows you to perform face restoration using
[CodeFormer](https://github.com/sczhou/CodeFormer).
In order to setup CodeFormer to work, you need to download the models like with
GFPGAN. You can do this either by running `preload_models.py` or by manually
downloading the
[model file](https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/codeformer.pth)
In order to set up CodeFormer, you need to download the models
like with GFPGAN. You can do this either by running
`preload_models.py` or by manually downloading the [model
file](https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/codeformer.pth)
and saving it to `ldm/restoration/codeformer/weights` folder.
You can use `-ft` prompt argument to swap between CodeFormer and the default
GFPGAN. The above mentioned `-G` prompt argument will allow you to control the
strength of the restoration effect.
You can use the `-ft` prompt argument to swap between CodeFormer and
the default GFPGAN. The above-mentioned `-G` prompt argument will
allow you to control the strength of the restoration effect.
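As a concrete sketch of the two arguments just described (the prompt is illustrative; `-ft` and `-G` are the switches named above):

```bash
# Restore faces with CodeFormer instead of the default GFPGAN,
# at a moderate restoration strength:
invoke> portrait of an old sailor -ft codeformer -G 0.8
```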
### Usage:

View File

@@ -1,21 +1,357 @@
---
title: Barebones Web Server
title: InvokeAI Web Server
---
# :material-web: InvokeAI Web Server
As of version 1.10, this distribution comes with a bare bones web server (see
screenshot). To use it, run the `invoke.py` script by adding the `--web`
option.
As of version 2.0.0, this distribution comes with a full-featured web
server (see screenshot). To use it, run the `invoke.py` script by
adding the `--web` option:
```bash
(ldm) ~/stable-diffusion$ python3 scripts/invoke.py --web
(ldm) ~/InvokeAI$ python3 scripts/invoke.py --web
```
You can then connect to the server by pointing your web browser at
http://localhost:9090, or to the network name or IP address of the server.
http://localhost:9090. To reach the server from a different machine on
your LAN, you may launch the web server with the `--host` argument and
either the IP address of the host you are running it on, or the
wildcard `0.0.0.0`. For example:
Kudos to [Tesseract Cat](https://github.com/TesseractCat) for contributing this
code, and to [dagf2101](https://github.com/dagf2101) for refining it.
```bash
(ldm) ~/InvokeAI$ python3 scripts/invoke.py --web --host 0.0.0.0
```
![Dream Web Server](../assets/invoke_web_server.png)
# Quick guided walkthrough of the WebGUI's features
While most of the WebGUI's features are intuitive, here is a guided
walkthrough of its various components.
<img src="../assets/invoke-web-server-1.png" width=640>
The screenshot above shows the Text to Image tab of the WebGUI. There
are three main sections:
1. A **control panel** on the left, which contains various settings
for text to image generation. The most important parts are the text
field (currently showing `strawberry sushi`) for entering the text
prompt, and the camera icon directly beneath it, which renders the
image. We'll call this the *Invoke* button from now on.
2. The **current image** section in the middle, which shows a large
format version of the image you are currently working on. A series of
buttons at the top ("image to image", "Use All", "Use Seed", etc) lets
you modify the image in various ways.
3. A **gallery** section on the right that contains a history of the
images you have generated. These images are read from and written to
the directory specified at launch time in `--outdir`.
In addition to these three elements, there are a series of icons for
changing global settings, reporting bugs, and changing the theme on
the upper right.
There are also a series of icons to the left of the control panel (see
the highlighted area in the screenshot below) which select among tabs
for performing different types of operations.
<img src="../assets/invoke-web-server-2.png" width=512>
From top to bottom, these are:
1. Text to Image - generate images from text
2. Image to Image - from an uploaded starting image (drawing or photograph) generate a new one, modified by the text prompt
3. Inpainting (pending) - Interactively erase portions of a starting image and have the AI fill in the erased region from a text prompt.
4. Outpainting (pending) - Interactively add blank space to the borders of a starting image and fill in the background from a text prompt.
5. Postprocessing (pending) - Interactively postprocess generated images using a variety of filters.
The inpainting, outpainting and postprocessing tabs are currently in
development. However, limited versions of their features can already
be accessed through the Text to Image and Image to Image tabs.
## Walkthrough
The following walkthrough will exercise most (but not all) of the
WebGUI's feature set.
### Text to Image
1. Launch the WebGUI using `python scripts/invoke.py --web` and
connect to it with your browser by accessing
`http://localhost:9090`. If the browser and server are running on
different machines on your LAN, add the option `--host 0.0.0.0` to the
launch command line and connect to the machine hosting the web server
using its IP address or domain name.
2. If all goes well, the WebGUI should come up and you'll see a green
`connected` message on the upper right.
#### Basics
3. Generate an image by typing *strawberry sushi* into the large
prompt field on the upper left and then clicking on the Invoke button
(the one with the Camera icon). After a short wait, you'll see a large
image of sushi in the image panel, and a new thumbnail in the gallery
on the right.
If you need more room on the screen, you can turn the gallery off
by clicking on the **x** to the right of "Your Invocations". You can
turn it back on later by clicking the image icon that appears in the
gallery's place.
The images are written into the directory indicated by the `--outdir`
option provided at script launch time. By default, this is
`outputs/img-samples` under the InvokeAI directory.
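If you want the images somewhere else, the same `--outdir` option can be supplied at launch time (the path below is just an example):

```bash
# Send generated images to a custom directory instead of outputs/img-samples:
python3 scripts/invoke.py --web --outdir /home/me/invokeai-output
```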
4. Generate a bunch of strawberry sushi images by increasing the
number of requested images by adjusting the Images counter just below
the Camera button. As each is generated, it will be added to the
gallery. You can switch the active image by clicking on the gallery
thumbnails.
5. Try playing with different settings, including image width and
height, the Sampler, the Steps and the CFG scale.
Image *Width* and *Height* do what you'd expect. However, be aware that
larger images consume more VRAM and take longer to generate.
The *Sampler* controls the algorithm the AI uses to denoise the
image. Some samplers are more "creative" than others and will produce
a wider range of variations (see next section). Some samplers run
faster than others.
*Steps* controls how many noising/denoising/sampling steps the AI will
take. The higher this value, the more refined the image will be, but
the longer the image will take to generate. A typical strategy is to
generate images with a low number of steps in order to select one to
work on further, and then regenerate it using a higher number of
steps.
The *CFG Scale* controls how hard the AI tries to match the generated
image to the input prompt. You can go as high or low as you like, but
generally values greater than 20 won't improve things much, and values
lower than 5 will produce unexpected images. There are complex
interactions between *Steps*, *CFG Scale* and the *Sampler*, so
experiment to find out what works for you.
6. To regenerate a previously-generated image, select the image you
want and click *Use All*. This loads the text prompt and other
original settings into the control panel. If you then press *Invoke*
it will regenerate the image exactly. You can also selectively modify
the prompt or other settings to tweak the image.
Alternatively, you may click on *Use Seed* to load just the image's
seed, and leave other settings unchanged.
7. To regenerate a Stable Diffusion image that was generated by
another SD package, you need to know its text prompt and its
*Seed*. Copy-paste the prompt into the prompt box, unset the
*Randomize Seed* control in the control panel, and copy-paste the
desired *Seed* into its text field. When you Invoke, you will get
something similar to the original image. It will not be exact unless
you also set the correct values for the original sampler, CFG,
steps and dimensions, but it will (usually) be close.
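The same reproduction recipe can be expressed at the `invoke>` prompt; a hedged sketch, assuming the standard switches for seed, steps, CFG scale, sampler and dimensions (all values illustrative):

```bash
# Recreate an image from its prompt plus original settings:
#   -S seed, -s steps, -C CFG scale, -A sampler, -W/-H dimensions
invoke> strawberry sushi -S1592514025 -s50 -C7.5 -A k_lms -W512 -H512
```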
#### Variations on a theme
8. Let's try generating some variations. Select your favorite sushi
image from the gallery to load it. Then select "Use All" from the list
of buttons above. This will load up all the settings used to generate
this image, including its unique seed.
Go down to the Variations section of the Control Panel and set the
button to On. Set Variation Amount to 0.2 to generate a modest
number of variations on the image, and also set the Image counter to
4. Press the `invoke` button. This will generate a series of related
images. To obtain smaller variations, just lower the Variation
Amount. You may also experiment with changing the Sampler. Some
samplers generate more variability than others. *k_euler_a* is
particularly creative, while *ddim* is pretty conservative.
9. For even more variations, experiment with increasing the setting
for *Perlin*. This adds a bit of noise to the image generation
process. Note that values of Perlin noise greater than 0.15 produce
poor images for several of the samplers.
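For reference, the CLI exposes the same variation controls; a sketch assuming the documented switches (`-v` variation amount, `-n` image count, `--perlin` Perlin noise):

```bash
# Four mild variations on a fixed seed, with a touch of Perlin noise:
invoke> strawberry sushi -S1592514025 -n4 -v0.2 --perlin 0.1
```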
#### Facial reconstruction and upscaling
Stable Diffusion frequently produces mangled faces, particularly when
there are multiple figures in the same scene. It has particular
trouble generating realistic eyes. InvokeAI provides the ability to
reconstruct faces using either the GFPGAN or CodeFormer libraries. For
more information see [POSTPROCESS](POSTPROCESS.md).
10. Invoke a prompt that generates a mangled face. A prompt that often
gives this is "portrait of a lawyer, 3/4 shot" (this is not intended
as a slur against lawyers!) Once you have an image that needs some
touching up, load it into the Image panel, and press the button with
the face icon (highlighted in the first screenshot below). A dialog
box will appear. Leave *Strength* at 0.8 and press *Restore Faces*. If
all goes well, the eyes and other aspects of the face will be improved
(see the second screenshot).
<img src="../assets/invoke-web-server-3.png">
<img src="../assets/invoke-web-server-4.png">
The facial reconstruction *Strength* field adjusts how aggressively
the face library will try to alter the face. It can be as high as 1.0,
but be aware that high values often soften the face in an airbrushed
style, losing some details. The default 0.8 is usually sufficient.
8. "Upscaling" is the process of increasing the size of an image while
retaining the sharpness. InvokeAI uses an external library called
"ESRGAN" to do this. To invoke upscaling, simply select an image and
press the *HD* button above it. You can select between 2X and 4X
upscaling, and adjust the upscaling strength, which has much the same
meaning as in facial reconstruction. Try running this on one of your
previously-generated images.
12. Finally, you can run facial reconstruction and/or upscaling
automatically after each Invocation. Go to the Advanced Options
section of the Control Panel and turn on *Restore Face* and/or
*Upscale*. The CLI counterparts are sketched below.
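A hedged sketch of those CLI counterparts, `-G` for facial restoration strength and `-U` for upscaling (values illustrative):

```bash
# Restore faces at strength 0.8 and upscale 2x at strength 0.75
# as part of generation itself:
invoke> portrait of a lawyer, 3/4 shot -G 0.8 -U 2 0.75
```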
### Image to Image
InvokeAI lets you take an existing image and use it as the basis for a
new creation. You can use any sort of image, including a photograph, a
scanned sketch, or a digital drawing, as long as it is in PNG or JPEG
format.
For this tutorial, we'll use files named
[Lincoln-and-Parrot-512.png](../assets/Lincoln-and-Parrot-512.png),
and
[Lincoln-and-Parrot-512-transparent.png](../assets/Lincoln-and-Parrot-512-transparent.png).
Download these images to your local machine now to continue with the walkthrough.
13. Click on the *Image to Image* tab icon, which is the second icon
from the top on the left-hand side of the screen:
<img src="../assets/invoke-web-server-5.png">
This will bring you to a screen similar to the one shown here:
<img src="../assets/invoke-web-server-6.png" width=640>
Drag-and-drop the Lincoln-and-Parrot image into the Image panel, or
click the blank area to get an upload dialog. The image will load into
an area marked *Initial Image*. (The WebGUI will also load the most
recently-generated image from the gallery into a section on the left,
but this image will be replaced in the next step.)
14. Go to the prompt box and type *old sea captain with raven on
shoulder* and press Invoke. A derived image will appear to the right
of the original one:
<img src="../assets/invoke-web-server-7.png" width=640>
15. Experiment with the different settings. The most influential one
in Image to Image is *Image to Image Strength*, located about midway
down the control panel. By default it is set to 0.75, but it can range
from 0.0 to 0.99. The higher the value, the more of the original image
the AI will replace. A value of 0 will leave the initial image
completely unchanged, while 0.99 will replace it almost completely.
However, the Sampler and CFG Scale also influence the final result.
You can also generate variations in the same way as described in Text
to Image. The equivalent CLI call is sketched below.
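Here is the sketch mentioned above: the equivalent CLI call, using `-I` for the initial image and `-f` for the Image to Image strength (the file path is the walkthrough's example image):

```bash
# Rework an existing picture, replacing about three quarters of it:
invoke> old sea captain with raven on shoulder -I Lincoln-and-Parrot-512.png -f 0.75
```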
16. What if we only want to change certain part(s) of the image and
leave the rest intact? This is called Inpainting, and a future version
of the InvokeAI web server will provide an interactive painting canvas
on which you can directly draw the areas you wish to Inpaint into. For
now, you can achieve this effect by using an external photo editor to
make one or more regions of the image transparent, as described in
[INPAINTING.md], and uploading that.
The file
[Lincoln-and-Parrot-512-transparent.png](../assets/Lincoln-and-Parrot-512-transparent.png)
is a version of the earlier image in which the area around the parrot
has been replaced with transparency. Click on the "x" in the upper
right of the Initial Image and upload the transparent version. Using
the same prompt "old sea captain with raven on shoulder" try Invoking
an image. This time, only the parrot will be replaced, leaving the
rest of the original image intact:
<img src="../assets/invoke-web-server-8.png" width=640>
17. Would you like to modify a previously-generated image using the
Image to Image facility? Easy! While in the Image to Image panel,
hover over any of the gallery images to see a little menu of icons pop
up. Click the picture icon to instantly send the selected image to
Image to Image as the initial image.
You can do the same from the Text to Image tab by clicking on the
picture icon above the central image panel. The screenshot below
shows where the "use as initial image" icons are located.
<img src="../assets/invoke-web-server-9.png" width=640>
## Parting remarks
This concludes the walkthrough, but there are several more features that you
can explore. Please check out the [Command Line Interface](CLI.md)
documentation for further explanation of the advanced features that
were not covered here.
The WebGUI is under rapid development. Check back regularly for
updates!
# Reference
## Additional Options
`--web_develop` - Starts the web server in development mode.
`--web_verbose` - Enables verbose logging
`--cors [CORS ...]` - Additional allowed origins, comma-separated
`--host HOST` - Web server: Host or IP to listen on. Set to 0.0.0.0 to
accept traffic from other devices on your network.
`--port PORT` - Web server: Port to listen on
`--gui` - Start InvokeAI GUI - This is the "desktop mode" version of the web app. It uses Flask
to create a desktop app experience of the webserver.
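Several of these options can be combined on one launch line (values are illustrative):

```bash
# Listen on all interfaces, choose a port, and allow an extra CORS origin:
python3 scripts/invoke.py --web --host 0.0.0.0 --port 9090 --cors http://localhost:3000
```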
## Web Specific Features
The web interface offers an incredibly easy-to-use experience for interacting with the InvokeAI toolkit.
For detailed guidance on individual features, see the Feature-specific help documents available in this directory.
Note that the latest functionality available in the CLI may not always be available in the Web interface.
### Dark Mode & Light Mode
The InvokeAI interface is available in a nano-carbon black & purple Dark Mode, and a "burn your eyes out Nosferatu" Light Mode. These can be toggled by clicking the Sun/Moon icons at the top right of the interface.
![InvokeAI Web Server - Dark Mode](../assets/invoke_web_dark.png)
![InvokeAI Web Server - Light Mode](../assets/invoke_web_light.png)
### Invocation Toolbar
The left side of the InvokeAI interface is available for customizing the prompt and the settings used for invoking your new image. Typing your prompt into the open text field and clicking the Invoke button will produce the image based on the settings configured in the toolbar.
See below for additional documentation related to each feature:
- [Core Prompt Settings](./CLI.md)
- [Variations](./VARIATIONS.md)
- [Upscaling](./UPSCALE.md)
- [Image to Image](./IMG2IMG.md)
- [Inpainting](./INPAINTING.md)
- [Other](./OTHER.md)
### Invocation Gallery
On load, the gallery displays all previously generated files found in the currently selected `--outdir` (or the default outputs folder). As new invocations are generated, they will be dynamically added to the gallery and can be previewed by selecting them. Each image also has a simple set of actions (e.g., Delete, Use Seed, Use All Parameters, etc.) that can be accessed by hovering over the image.
### Image Workspace
When an image from the Invocation Gallery is selected, or is generated, the image will be displayed within the center of the interface. A quickbar of common image interactions is displayed along the top of the image, including:
- Use image in the `Image to Image` workflow
- Initialize Face Restoration on the selected file
- Initialize Upscaling on the selected file
- View File metadata and details
- Delete the file
## Acknowledgements
A huge shout-out to the core team working to make this vision a
reality, including
[psychedelicious](https://github.com/psychedelicious),
[Kyle0654](https://github.com/Kyle0654) and
[blessedcoolant](https://github.com/blessedcoolant). [hipsterusername](https://github.com/hipsterusername)
was the team's unofficial cheerleader and added tooltips/docs.

View File

@@ -25,24 +25,24 @@ template: main.html
[![github open issues badge]][github open issues link] [![github open prs badge]][github open prs link]
[CI checks on dev badge]: https://flat.badgen.net/github/checks/lstein/stable-diffusion/development?label=CI%20status%20on%20dev&cache=900&icon=github
[CI checks on dev link]: https://github.com/lstein/stable-diffusion/actions?query=branch%3Adevelopment
[CI checks on main badge]: https://flat.badgen.net/github/checks/lstein/stable-diffusion/main?label=CI%20status%20on%20main&cache=900&icon=github
[CI checks on main link]: https://github.com/lstein/stable-diffusion/actions/workflows/test-invoke-conda.yml
[CI checks on dev badge]: https://flat.badgen.net/github/checks/invoke-ai/InvokeAI/development?label=CI%20status%20on%20dev&cache=900&icon=github
[CI checks on dev link]: https://github.com/invoke-ai/InvokeAI/actions?query=branch%3Adevelopment
[CI checks on main badge]: https://flat.badgen.net/github/checks/invoke-ai/InvokeAI/main?label=CI%20status%20on%20main&cache=900&icon=github
[CI checks on main link]: https://github.com/invoke-ai/InvokeAI/actions/workflows/test-invoke-conda.yml
[discord badge]: https://flat.badgen.net/discord/members/htRgbc7e?icon=discord
[discord link]: https://discord.com/invite/htRgbc7e
[github forks badge]: https://flat.badgen.net/github/forks/lstein/stable-diffusion?icon=github
[github forks badge]: https://flat.badgen.net/github/forks/invoke-ai/InvokeAI?icon=github
[github forks link]: https://useful-forks.github.io/?repo=lstein%2Fstable-diffusion
[github open issues badge]: https://flat.badgen.net/github/open-issues/lstein/stable-diffusion?icon=github
[github open issues link]: https://github.com/lstein/stable-diffusion/issues?q=is%3Aissue+is%3Aopen
[github open prs badge]: https://flat.badgen.net/github/open-prs/lstein/stable-diffusion?icon=github
[github open prs link]: https://github.com/lstein/stable-diffusion/pulls?q=is%3Apr+is%3Aopen
[github stars badge]: https://flat.badgen.net/github/stars/lstein/stable-diffusion?icon=github
[github stars link]: https://github.com/lstein/stable-diffusion/stargazers
[latest commit to dev badge]: https://flat.badgen.net/github/last-commit/lstein/stable-diffusion/development?icon=github&color=yellow&label=last%20dev%20commit&cache=900
[latest commit to dev link]: https://github.com/lstein/stable-diffusion/commits/development
[latest release badge]: https://flat.badgen.net/github/release/lstein/stable-diffusion/development?icon=github
[latest release link]: https://github.com/lstein/stable-diffusion/releases
[github open issues badge]: https://flat.badgen.net/github/open-issues/invoke-ai/InvokeAI?icon=github
[github open issues link]: https://github.com/invoke-ai/InvokeAI/issues?q=is%3Aissue+is%3Aopen
[github open prs badge]: https://flat.badgen.net/github/open-prs/invoke-ai/InvokeAI?icon=github
[github open prs link]: https://github.com/invoke-ai/InvokeAI/pulls?q=is%3Apr+is%3Aopen
[github stars badge]: https://flat.badgen.net/github/stars/invoke-ai/InvokeAI?icon=github
[github stars link]: https://github.com/invoke-ai/InvokeAI/stargazers
[latest commit to dev badge]: https://flat.badgen.net/github/last-commit/invoke-ai/InvokeAI/development?icon=github&color=yellow&label=last%20dev%20commit&cache=900
[latest commit to dev link]: https://github.com/invoke-ai/InvokeAI/commits/development
[latest release badge]: https://flat.badgen.net/github/release/invoke-ai/InvokeAI/development?icon=github
[latest release link]: https://github.com/invoke-ai/InvokeAI/releases
</div>
@@ -54,7 +54,7 @@ GPU cards with as little as 4 GB or RAM.
!!! note
This fork is rapidly evolving. Please use the
[Issues](https://github.com/lstein/stable-diffusion/issues) tab to report bugs and make feature
[Issues](https://github.com/invoke-ai/InvokeAI/issues) tab to report bugs and make feature
requests. Be sure to use the provided templates. They will help aid diagnose issues faster.
## :octicons-package-dependencies-24: Installation

View File

@@ -2,143 +2,110 @@
title: macOS
---
# :fontawesome-brands-apple: macOS
Invoke AI runs quite well on M1 Macs and we have a number of M1 users
in the community.
While the repo does run on Intel Macs, we only have a couple
reports. If you have an Intel Mac and run into issues, please create
an issue on Github and we will do our best to help.
## Requirements
- macOS 12.3 Monterey or later
- Python
- Patience
- Apple Silicon or Intel Mac
- About 10GB of storage (and 10GB of data if your internet connection has data caps)
- Any M1 Mac, or an Intel Mac with 4 GB+ of VRAM (ideally more)
Things have moved really fast and so these instructions change often which makes
them outdated pretty fast. One of the problems is that there are so many
different ways to run this.
## Installation
We are trying to build a testing setup so that when we make changes it doesn't
always break.
First you need to download a large checkpoint file.
## How to
1. Sign up at https://huggingface.co
2. Go to the [Stable diffusion diffusion model page](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original)
3. Accept the terms and click Access Repository
4. Download [sd-v1-4.ckpt (4.27 GB)](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/blob/main/sd-v1-4.ckpt) and note where you have saved it (probably the Downloads folder). You may want to move it somewhere else for longer term storage - SD needs this file to run.
(this hasn't been 100% tested yet)
While that is downloading, open Terminal and run the following commands one at a time, reading the comments and taking care to run the appropriate command for your Mac's architecture (Intel or M1).
First get the weights checkpoint download started since it's big and will take
some time:
Do not just copy and paste the whole thing into your terminal!
1. Sign up at [huggingface.co](https://huggingface.co)
2. Go to the
[Stable Diffusion model page](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original)
3. Accept the terms and click Access Repository:
4. Download
[sd-v1-4.ckpt (4.27 GB)](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/blob/main/sd-v1-4.ckpt)
and note where you have saved it (probably the Downloads folder)
```bash
# Install brew (and Xcode command line tools):
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
While that is downloading, open a Terminal and run the following commands:
# Now there are two options to get the Python (miniconda) environment up and running:
# 1. Alongside pyenv
# 2. Standalone
#
# If you don't know what we are talking about, choose 2.
#
# If you are familiar with python environments, you'll know there are other options
# for setting up the environment - you are on your own if you go one of those routes.
!!! todo "Homebrew"
##### BEGIN TWO DIFFERENT OPTIONS #####
=== "no brew installation yet"
### BEGIN OPTION 1: Installing alongside pyenv ###
brew install pyenv-virtualenv # you might have this from before, no problem
pyenv install anaconda3-2022.05
pyenv virtualenv anaconda3-2022.05
eval "$(pyenv init -)"
pyenv activate anaconda3-2022.05
### END OPTION 1 ###
```bash title="install brew (and Xcode command line tools)"
/bin/bash -c \
"$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
```
### BEGIN OPTION 2: Installing standalone ###
# Install cmake, protobuf, and rust:
brew install cmake protobuf rust
=== "brew is already installed"
Only if you installed protobuf in a previous version of this tutorial, otherwise skip
# BEGIN ARCHITECTURE-DEPENDENT STEP #
# For M1: install miniconda (M1 arm64 version):
curl https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-arm64.sh -o Miniconda3-latest-MacOSX-arm64.sh
/bin/bash Miniconda3-latest-MacOSX-arm64.sh
`#!bash brew uninstall protobuf`
# For Intel: install miniconda (Intel x86-64 version):
curl https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-x86_64.sh -o Miniconda3-latest-MacOSX-x86_64.sh
/bin/bash Miniconda3-latest-MacOSX-x86_64.sh
# END ARCHITECTURE-DEPENDENT STEP #
!!! todo "Conda Installation"
### END OPTION 2 ###
Now there are two different ways to set up the Python (miniconda) environment:
1. Standalone
2. with pyenv
If you don't know what we are talking about, choose Standalone
##### END TWO DIFFERENT OPTIONS #####
=== "Standalone"
```bash
# install cmake and rust:
brew install cmake rust
```
=== "M1 arm64"
```bash title="Install miniconda for M1 arm64"
curl https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-arm64.sh \
-o Miniconda3-latest-MacOSX-arm64.sh
/bin/bash Miniconda3-latest-MacOSX-arm64.sh
```
=== "Intel x86_64"
```bash title="Install miniconda for Intel"
curl https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-x86_64.sh \
-o Miniconda3-latest-MacOSX-x86_64.sh
/bin/bash Miniconda3-latest-MacOSX-x86_64.sh
```
=== "with pyenv"
```{.bash .annotate}
brew install rust pyenv-virtualenv # (1)!
pyenv install anaconda3-2022.05
pyenv virtualenv anaconda3-2022.05
eval "$(pyenv init -)"
pyenv activate anaconda3-2022.05
```
1. You might already have this installed, if that is the case just continue.
```{.bash .annotate title="local repo setup"}
# clone the repo
# Clone the Invoke AI repo
git clone https://github.com/invoke-ai/InvokeAI.git
cd InvokeAI
# wait until the checkpoint file has downloaded, then proceed
### WAIT FOR THE CHECKPOINT FILE TO DOWNLOAD, THEN PROCEED ###
# create symlink to checkpoint
# We will leave the big checkpoint wherever you stashed it for long-term storage,
# and make a link to it from the repo's folder. This allows you to use it for
# other repos, and if you need to delete Invoke AI, you won't have to download it again.
# Make the directory in the repo for the symlink
mkdir -p models/ldm/stable-diffusion-v1/
PATH_TO_CKPT="$HOME/Downloads" # (1)!
# This is the folder where you put the checkpoint file `sd-v1-4.ckpt`
PATH_TO_CKPT="$HOME/Downloads"
ln -s "$PATH_TO_CKPT/sd-v1-4.ckpt" \
models/ldm/stable-diffusion-v1/model.ckpt
```
# Create a link to the checkpoint
ln -s "$PATH_TO_CKPT/sd-v1-4.ckpt" models/ldm/stable-diffusion-v1/model.ckpt
1. or wherever you saved sd-v1-4.ckpt
# BEGIN ARCHITECTURE-DEPENDENT STEP #
# For M1: Create the environment & install packages
PIP_EXISTS_ACTION=w CONDA_SUBDIR=osx-arm64 conda env create -f environment-mac.yml
!!! todo "create Conda Environment"
# For Intel: Create the environment & install packages
PIP_EXISTS_ACTION=w CONDA_SUBDIR=osx-64 conda env create -f environment-mac.yml
=== "M1 arm64"
# END ARCHITECTURE-DEPENDENT STEP #
```bash
PIP_EXISTS_ACTION=w CONDA_SUBDIR=osx-arm64 \
conda env create \
-f environment-mac.yml \
&& conda activate ldm
```
# Activate the environment (you need to do this every time you want to run SD)
conda activate ldm
=== "Intel x86_64"
```bash
PIP_EXISTS_ACTION=w CONDA_SUBDIR=osx-64 \
conda env create \
-f environment-mac.yml \
&& conda activate ldm
```
```{.bash .annotate title="preload models and run script"}
# only need to do this once
# This will download some bits and pieces and may take a while
python scripts/preload_models.py
# now you can run SD in CLI mode
python scripts/invoke.py --full_precision # (1)!
# Run SD!
python scripts/dream.py
```
# or run the web interface!
python scripts/invoke.py --web
@@ -172,13 +139,13 @@ python ./scripts/orig_scripts/txt2img.py \
### Doesn't work anymore?
PyTorch nightly includes support for MPS. Because of this, this setup is
inherently unstable. One morning I woke up and it no longer worked, no matter
what I did, until I switched to miniforge. However, I have another Mac that
works just fine with Anaconda. If you can't get it to work, please search a
little first, because many of the errors will get posted and solved. If you
can't find a solution, please
[create an issue](https://github.com/invoke-ai/InvokeAI/issues).
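If you want to try the miniforge route, here is a minimal sketch, assuming you use Homebrew and that the `miniforge` cask is still available under that name:

```bash
# miniforge is a conda-forge based conda installer
brew install --cask miniforge
# then recreate the environment exactly as in the conda steps above
```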
One debugging step is to update to the latest version of PyTorch nightly.
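A minimal sketch of that update, assuming the `ldm` environment from above; the nightly index URL matches the one used in the Mac requirements file, so adjust it if it has moved:

```bash
conda activate ldm
# pull in the current PyTorch nightly (CPU/MPS build)
pip install --upgrade --pre torch torchvision \
    --extra-index-url https://download.pytorch.org/whl/nightly/cpu
```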
@@ -378,7 +345,7 @@ python scripts/preload_models.py
WARNING: this will be slower than running natively on MPS.
```
The InvokeAI version includes this fix in
[environment-mac.yml](https://github.com/invoke-ai/InvokeAI/blob/main/environment-mac.yml).
### "Could not build wheels for tokenizers"
@@ -463,13 +430,10 @@ C.
You don't have a virus. It's part of the project. Here's
[Rick](https://github.com/invoke-ai/InvokeAI/blob/main/assets/rick.jpeg)
and here's
[the code](https://github.com/invoke-ai/InvokeAI/blob/69ae4b35e0a0f6ee1af8bb9a5d0016ccb27e36dc/scripts/txt2img.py#L79)
that swaps him in. It's an NSFW filter which, IMO, doesn't work very well
(and we call this "computer vision", sheesh).

Actually, this could be happening because there's not enough RAM. You could try
the `model.half()` suggestion or specify smaller output images.
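For example, smaller output dimensions can be requested per image at the `invoke>` prompt; values must be multiples of 64, and the switches shown are illustrative:

```bash
# 384x384 needs noticeably less memory than the default 512x512
invoke> "a red juicy apple floating in space" -W 384 -H 384
```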
---
@@ -492,11 +456,9 @@ return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backen
RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.
```
Update to the latest version of invoke-ai/InvokeAI. We were patching PyTorch,
but we found a file in stable-diffusion that we could change instead. This is
a 32-bit vs 16-bit problem.
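Updating is the usual pull plus environment refresh; a minimal sketch, run from the repo root and assuming the conda setup from the install steps:

```bash
git pull
conda env update -f environment-mac.yml
conda activate ldm
```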
---
### The processor must support the Intel bla bla bla


@@ -58,6 +58,7 @@ We thank them for all of their time and hard work.
- [rabidcopy](https://github.com/rabidcopy)
- [Dominic Letz](https://github.com/dominicletz)
- [Dmitry T.](https://github.com/ArDiouscuros)
- [Kent Keirsey](https://github.com/hipsterusername)
## **Original CompVis Authors:**

(Vendored frontend build artifacts, including frontend/dist/assets/index.989a0ca2.js: file diffs suppressed because the lines are too long.)


@@ -6,8 +6,8 @@
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>InvokeAI - A Stable Diffusion Toolkit</title>
<link rel="shortcut icon" type="icon" href="/assets/favicon.0d253ced.ico" />
<script type="module" crossorigin src="/assets/index.bfda55e5.js"></script>
<link rel="stylesheet" href="/assets/index.22ee377a.css">
<script type="module" crossorigin src="/assets/index.989a0ca2.js"></script>
<link rel="stylesheet" href="/assets/index.58175ea1.css">
</head>
<body>


@@ -25,6 +25,7 @@
"react-dropzone": "^14.2.2",
"react-hotkeys-hook": "^3.4.7",
"react-icons": "^4.4.0",
"react-masonry-css": "^1.0.16",
"react-redux": "^8.0.2",
"redux-persist": "^6.0.0",
"socket.io": "^4.5.2",


@@ -2,6 +2,7 @@
.App {
display: grid;
width: max-content;
}
.app-content {
@@ -14,6 +15,7 @@
grid-auto-rows: max-content;
width: $app-width;
height: $app-height;
min-width: min-content;
}
.app-console {


@@ -1,7 +1,13 @@
import { IconButtonProps, IconButton, Tooltip } from '@chakra-ui/react';
import {
IconButtonProps,
IconButton,
Tooltip,
PlacementWithLogical,
} from '@chakra-ui/react';
interface Props extends IconButtonProps {
tooltip?: string;
tooltipPlacement?: PlacementWithLogical | undefined;
}
/**
@@ -10,10 +16,14 @@ interface Props extends IconButtonProps {
* TODO: Get rid of this.
*/
const IAIIconButton = (props: Props) => {
const { tooltip = '', onClick, ...rest } = props;
const { tooltip = '', tooltipPlacement = 'bottom', onClick, ...rest } = props;
return (
<Tooltip label={tooltip}>
<IconButton {...rest} cursor={onClick ? 'pointer' : 'unset'} onClick={onClick}/>
<Tooltip label={tooltip} hasArrow placement={tooltipPlacement}>
<IconButton
{...rest}
cursor={onClick ? 'pointer' : 'unset'}
onClick={onClick}
/>
</Tooltip>
);
};


@@ -71,7 +71,7 @@
.next-prev-button-trigger-area {
width: 7rem;
height: 100%;
width: 100%;
width: 15%;
display: grid;
align-items: center;
pointer-events: auto;


@@ -0,0 +1,59 @@
.hoverable-image {
display: flex;
justify-content: center;
transition: transform 0.2s ease-out;
&:hover {
cursor: pointer;
border-radius: 0.5rem;
z-index: 2;
}
.hoverable-image-image {
width: 100%;
height: 100%;
object-fit: cover;
max-width: 100%;
max-height: 100%;
}
.hoverable-image-content {
display: flex;
position: absolute;
top: 0;
left: 0;
width: 100%;
height: 100%;
align-items: center;
justify-content: center;
.hoverable-image-check {
fill: var(--status-good-color);
}
}
.hoverable-image-icons {
position: absolute;
bottom: -2rem;
display: grid;
width: min-content;
grid-template-columns: repeat(2, max-content);
border-radius: 0.4rem;
background-color: var(--background-color-secondary);
padding: 0.2rem;
gap: 0.2rem;
grid-auto-rows: max-content;
button {
width: 12px;
height: 12px;
border-radius: 0.2rem;
padding: 10px 0;
flex-shrink: 2;
svg {
width: 12px;
height: 12px;
}
}
}
}


@@ -1,12 +1,4 @@
import {
Box,
Flex,
Icon,
IconButton,
Image,
Tooltip,
useColorModeValue,
} from '@chakra-ui/react';
import { Box, Icon, IconButton, Image, Tooltip } from '@chakra-ui/react';
import { RootState, useAppDispatch, useAppSelector } from '../../app/store';
import { setCurrentImage } from './gallerySlice';
import { FaCheck, FaImage, FaSeedling, FaTrashAlt } from 'react-icons/fa';
@@ -42,13 +34,6 @@ const HoverableImage = memo((props: HoverableImageProps) => {
(state: RootState) => state.options.activeTab
);
const checkColor = useColorModeValue('green.600', 'green.300');
const bgColor = useColorModeValue('gray.200', 'gray.700');
const bgGradient = useColorModeValue(
'radial-gradient(circle, rgba(255,255,255,0.7) 0%, rgba(255,255,255,0.7) 20%, rgba(0,0,0,0) 100%)',
'radial-gradient(circle, rgba(0,0,0,0.7) 0%, rgba(0,0,0,0.7) 20%, rgba(0,0,0,0) 100%)'
);
const { image, isSelected } = props;
const { url, uuid, metadata } = image;
@@ -76,91 +61,80 @@ const HoverableImage = memo((props: HoverableImageProps) => {
const handleClickImage = () => dispatch(setCurrentImage(image));
return (
<Box position={'relative'} key={uuid}>
<Box
position={'relative'}
key={uuid}
className="hoverable-image"
onMouseOver={handleMouseOver}
onMouseOut={handleMouseOut}
>
<Image
width={120}
height={120}
objectFit="cover"
rounded={'md'}
src={url}
loading={'lazy'}
backgroundColor={bgColor}
className="hoverable-image-image"
/>
<Flex
cursor={'pointer'}
position={'absolute'}
top={0}
left={0}
rounded={'md'}
width="100%"
height="100%"
alignItems={'center'}
justifyContent={'center'}
background={isSelected ? bgGradient : undefined}
onClick={handleClickImage}
onMouseOver={handleMouseOver}
onMouseOut={handleMouseOut}
>
<div className="hoverable-image-content" onClick={handleClickImage}>
{isSelected && (
<Icon fill={checkColor} width={'50%'} height={'50%'} as={FaCheck} />
<Icon
width={'50%'}
height={'50%'}
as={FaCheck}
className="hoverable-image-check"
/>
)}
{isHovered && (
<Flex
direction={'column'}
gap={1}
position={'absolute'}
top={1}
right={1}
>
<Tooltip label={'Delete image'}>
<DeleteImageModal image={image}>
<IconButton
colorScheme="red"
aria-label="Delete image"
icon={<FaTrashAlt />}
size="xs"
variant={'imageHoverIconButton'}
fontSize={14}
/>
</DeleteImageModal>
</Tooltip>
{['txt2img', 'img2img'].includes(image?.metadata?.image?.type) && (
<Tooltip label="Use all parameters">
<IconButton
aria-label="Use all parameters"
icon={<IoArrowUndoCircleOutline />}
size="xs"
fontSize={18}
variant={'imageHoverIconButton'}
onClickCapture={handleClickSetAllParameters}
/>
</Tooltip>
)}
{image?.metadata?.image?.seed !== undefined && (
<Tooltip label="Use seed">
<IconButton
aria-label="Use seed"
icon={<FaSeedling />}
size="xs"
fontSize={16}
variant={'imageHoverIconButton'}
onClickCapture={handleClickSetSeed}
/>
</Tooltip>
)}
<Tooltip label="Send To Image To Image">
</div>
{isHovered && (
<div className="hoverable-image-icons">
<Tooltip label={'Delete image'} hasArrow>
<DeleteImageModal image={image}>
<IconButton
aria-label="Send To Image To Image"
icon={<FaImage />}
colorScheme="red"
aria-label="Delete image"
icon={<FaTrashAlt />}
size="xs"
variant={'imageHoverIconButton'}
fontSize={14}
/>
</DeleteImageModal>
</Tooltip>
{['txt2img', 'img2img'].includes(image?.metadata?.image?.type) && (
<Tooltip label="Use All Parameters" hasArrow>
<IconButton
aria-label="Use All Parameters"
icon={<IoArrowUndoCircleOutline />}
size="xs"
fontSize={18}
variant={'imageHoverIconButton'}
onClickCapture={handleClickSetAllParameters}
/>
</Tooltip>
)}
{image?.metadata?.image?.seed !== undefined && (
<Tooltip label="Use Seed" hasArrow>
<IconButton
aria-label="Use Seed"
icon={<FaSeedling />}
size="xs"
fontSize={16}
variant={'imageHoverIconButton'}
onClickCapture={handleSetInitImage}
onClickCapture={handleClickSetSeed}
/>
</Tooltip>
</Flex>
)}
</Flex>
)}
<Tooltip label="Send To Image To Image" hasArrow>
<IconButton
aria-label="Send To Image To Image"
icon={<FaImage />}
size="xs"
fontSize={16}
variant={'imageHoverIconButton'}
onClickCapture={handleSetInitImage}
/>
</Tooltip>
</div>
)}
</Box>
);
}, memoEqualityCheck);


@@ -2,10 +2,15 @@
.image-gallery-area {
.image-gallery-popup-btn {
position: absolute;
top: 50%;
right: 1rem;
border-radius: 0.5rem 0 0 0.5rem;
padding: 0 0.5rem;
@include Button(
$btn-width: 3rem,
$btn-height: 3rem,
$icon-size: 22px,
$btn-width: 1rem,
$btn-height: 6rem,
$icon-size: 20px,
$btn-color: var(--btn-grey),
$btn-color-hover: var(--btn-grey-hover)
);
@@ -14,21 +19,19 @@
.image-gallery-popup {
background-color: var(--tab-color);
position: fixed !important;
top: 0;
right: 0;
padding: 1rem;
animation: slideOut 0.3s ease-out;
display: grid;
grid-auto-rows: max-content;
display: flex;
flex-direction: column;
row-gap: 1rem;
border-radius: 0.5rem;
border-left-width: 0.2rem;
min-width: 300px;
border-color: var(--gallery-resizeable-color);
}
.image-gallery-header {
display: grid;
grid-template-columns: auto max-content;
display: flex;
align-items: center;
h1 {
@@ -44,19 +47,39 @@
}
.image-gallery-container {
display: grid;
display: flex;
flex-direction: column;
gap: 1rem;
max-height: $app-gallery-popover-height;
overflow-y: scroll;
@include HideScrollbar;
}
.image-gallery {
display: grid;
grid-template-columns: repeat(auto-fill, minmax(120px, auto));
gap: 0.6rem;
justify-items: center;
.masonry-grid {
display: -webkit-box; /* Not needed if autoprefixing */
display: -ms-flexbox; /* Not needed if autoprefixing */
display: flex;
margin-left: 0.5rem; /* gutter size offset */
width: auto;
}
.masonry-grid_column {
padding-left: 0.5rem; /* gutter size */
background-clip: padding-box;
}
/* Style your items */
.masonry-grid_column > .hoverable-image {
/* change div to reference your elements you put in <Masonry> */
background: var(--tab-color);
margin-bottom: 0.5rem;
}
// .image-gallery {
// display: flex;
// grid-template-columns: repeat(auto-fill, minmax(80px, auto));
// gap: 0.5rem;
// justify-items: center;
// }
.image-gallery-load-more-btn {
background-color: var(--btn-load-more) !important;
@@ -75,7 +98,7 @@
}
.image-gallery-container-placeholder {
display: grid;
display: flex;
background-color: var(--background-color-secondary);
border-radius: 0.5rem;
place-items: center;


@@ -1,27 +1,38 @@
import { Button, IconButton } from '@chakra-ui/button';
import { Resizable } from 're-resizable';
import React from 'react';
import React, { useState } from 'react';
import { useHotkeys } from 'react-hotkeys-hook';
import { MdClear, MdPhotoLibrary } from 'react-icons/md';
import Masonry from 'react-masonry-css';
import { requestImages } from '../../app/socketio/actions';
import { RootState, useAppDispatch, useAppSelector } from '../../app/store';
import {
selectNextImage,
selectPrevImage,
setShouldShowGallery,
} from './gallerySlice';
import IAIIconButton from '../../common/components/IAIIconButton';
import { selectNextImage, selectPrevImage } from './gallerySlice';
import HoverableImage from './HoverableImage';
import { setShouldShowGallery } from '../options/optionsSlice';
export default function ImageGallery() {
const {
images,
currentImageUuid,
areMoreImagesAvailable,
shouldShowGallery,
} = useAppSelector((state: RootState) => state.gallery);
const { images, currentImageUuid, areMoreImagesAvailable } = useAppSelector(
(state: RootState) => state.gallery
);
const shouldShowGallery = useAppSelector(
(state: RootState) => state.options.shouldShowGallery
);
const activeTab = useAppSelector(
(state: RootState) => state.options.activeTab
);
const dispatch = useAppDispatch();
const [column, setColumn] = useState<number | undefined>();
const handleResize = (event: MouseEvent | TouchEvent | any) => {
setColumn(Math.floor((window.innerWidth - event.x) / 120));
};
const handleShowGalleryToggle = () => {
dispatch(setShouldShowGallery(!shouldShowGallery));
};
@@ -61,21 +72,26 @@ export default function ImageGallery() {
return (
<div className="image-gallery-area">
{!shouldShowGallery && (
<Button
colorScheme="teal"
<IAIIconButton
tooltip="Show Gallery"
tooltipPlacement="top"
aria-label="Show Gallery"
onClick={handleShowGalleryToggle}
className="image-gallery-popup-btn"
>
<MdPhotoLibrary />
</Button>
</IAIIconButton>
)}
{shouldShowGallery && (
<Resizable
defaultSize={{ width: '300', height: '100%' }}
minWidth={'300'}
maxWidth={activeTab == 1 ? '300' : '600'}
className="image-gallery-popup"
onResize={handleResize}
>
{/* <div className="image-gallery-popup"></div> */}
<div className="image-gallery-header">
<h1>Your Invocations</h1>
<IconButton
@@ -88,7 +104,12 @@ export default function ImageGallery() {
</div>
<div className="image-gallery-container">
{images.length ? (
<div className="image-gallery">
<Masonry
className="masonry-grid"
columnClassName="masonry-grid_column"
breakpointCols={column}
>
{/* <div className="image-gallery"> */}
{images.map((image) => {
const { uuid } = image;
const isSelected = currentImageUuid === uuid;
@@ -100,7 +121,8 @@ export default function ImageGallery() {
/>
);
})}
</div>
{/* </div> */}
</Masonry>
) : (
<div className="image-gallery-container-placeholder">
<MdPhotoLibrary />


@@ -15,9 +15,9 @@
background: var(--tab-panel-bg);
border-radius: 0 0 0.4rem 0.4rem;
border: 2px solid var(--tab-hover-color);
padding: .75rem 1rem .75rem 1rem;
padding: 0.75rem 1rem 0.75rem 1rem;
display: grid;
grid-template-rows: repeat(auto-fill, 1fr);
grid-auto-rows: max-content;
grid-row-gap: 0.5rem;
justify-content: space-between;
}


@@ -11,14 +11,12 @@ export interface GalleryState {
areMoreImagesAvailable: boolean;
latest_mtime?: number;
earliest_mtime?: number;
shouldShowGallery: boolean;
}
const initialState: GalleryState = {
currentImageUuid: '',
images: [],
areMoreImagesAvailable: true,
shouldShowGallery: false,
};
export const gallerySlice = createSlice({
@@ -140,9 +138,6 @@ export const gallerySlice = createSlice({
state.areMoreImagesAvailable = areMoreImagesAvailable;
}
},
setShouldShowGallery: (state, action: PayloadAction<boolean>) => {
state.shouldShowGallery = action.payload;
},
},
});
@@ -155,7 +150,6 @@ export const {
setIntermediateImage,
selectNextImage,
selectPrevImage,
setShouldShowGallery,
} = gallerySlice.actions;
export default gallerySlice.reducer;


@@ -1,5 +1,9 @@
import React from 'react';
import { RootState, useAppDispatch, useAppSelector } from '../../../../app/store';
import {
RootState,
useAppDispatch,
useAppSelector,
} from '../../../../app/store';
import IAINumberInput from '../../../../common/components/IAINumberInput';
import { setPerlin } from '../../optionsSlice';
@@ -11,13 +15,13 @@ export default function Perlin() {
return (
<IAINumberInput
label='Perlin'
min={0}
max={1}
step={0.05}
onChange={handleChangePerlin}
value={perlin}
isInteger={false}
label="Perlin Noise"
min={0}
max={1}
step={0.05}
onChange={handleChangePerlin}
value={perlin}
isInteger={false}
/>
);
}


@@ -1,23 +1,29 @@
import React from 'react';
import { RootState, useAppDispatch, useAppSelector } from '../../../../app/store';
import {
RootState,
useAppDispatch,
useAppSelector,
} from '../../../../app/store';
import IAINumberInput from '../../../../common/components/IAINumberInput';
import { setThreshold } from '../../optionsSlice';
export default function Threshold() {
const dispatch = useAppDispatch();
const threshold = useAppSelector((state: RootState) => state.options.threshold);
const threshold = useAppSelector(
(state: RootState) => state.options.threshold
);
const handleChangeThreshold = (v: number) => dispatch(setThreshold(v));
return (
<IAINumberInput
label='Threshold'
min={0}
max={1000}
step={0.1}
onChange={handleChangeThreshold}
value={threshold}
isInteger={false}
label="Threshold"
min={0}
max={1000}
step={0.1}
onChange={handleChangeThreshold}
value={threshold}
isInteger={false}
/>
);
}


@@ -1,8 +1,7 @@
import React from 'react';
import { MdAddAPhoto } from 'react-icons/md';
import { generateImage } from '../../../app/socketio/actions';
import { useAppDispatch } from '../../../app/store';
import IAIIconButton from '../../../common/components/IAIIconButton';
import IAIButton from '../../../common/components/IAIButton';
import useCheckParameters from '../../../common/hooks/useCheckParameters';
export default function InvokeButton() {
@@ -14,9 +13,8 @@ export default function InvokeButton() {
};
return (
<IAIIconButton
icon={<MdAddAPhoto />}
tooltip="Invoke"
<IAIButton
label="Invoke"
aria-label="Invoke"
type="submit"
isDisabled={!isReady}


@@ -35,6 +35,7 @@ export interface OptionsState {
showAdvancedOptions: boolean;
activeTab: number;
shouldShowImageDetails: boolean;
shouldShowGallery: boolean;
}
const initialOptionsState: OptionsState = {
@@ -66,6 +67,7 @@ const initialOptionsState: OptionsState = {
showAdvancedOptions: true,
activeTab: 0,
shouldShowImageDetails: false,
shouldShowGallery: false,
};
const initialState: OptionsState = initialOptionsState;
@@ -281,6 +283,9 @@ export const optionsSlice = createSlice({
setShouldShowImageDetails: (state, action: PayloadAction<boolean>) => {
state.shouldShowImageDetails = action.payload;
},
setShouldShowGallery: (state, action: PayloadAction<boolean>) => {
state.shouldShowGallery = action.payload;
},
},
});
@@ -317,6 +322,7 @@ export const {
setShowAdvancedOptions,
setActiveTab,
setShouldShowImageDetails,
setShouldShowGallery,
} = optionsSlice.actions;
export default optionsSlice.reducer;


@@ -100,7 +100,7 @@ const Console = () => {
</Resizable>
)}
{shouldShowLogViewer && (
<Tooltip label={shouldAutoscroll ? 'Autoscroll On' : 'Autoscroll Off'}>
<Tooltip hasArrow label={shouldAutoscroll ? 'Autoscroll On' : 'Autoscroll Off'}>
<IconButton
className={`console-autoscroll-icon-button ${
shouldAutoscroll && 'autoscroll-enabled'
@@ -113,7 +113,7 @@ const Console = () => {
/>
</Tooltip>
)}
<Tooltip label={shouldShowLogViewer ? 'Hide Console' : 'Show Console'}>
<Tooltip hasArrow label={shouldShowLogViewer ? 'Hide Console' : 'Show Console'}>
<IconButton
className={`console-toggle-icon-button ${
(hasError || !wasErrorSeen) && 'error-seen'


@@ -21,7 +21,7 @@
.site-header-right-side {
display: grid;
grid-template-columns: repeat(6, max-content);
grid-template-columns: repeat(7, max-content);
align-items: center;
column-gap: 0.5rem;
}


@@ -1,7 +1,7 @@
import { IconButton, Link, useColorMode } from '@chakra-ui/react';
import { IconButton, Link, Tooltip, useColorMode } from '@chakra-ui/react';
import { useHotkeys } from 'react-hotkeys-hook';
import { FaSun, FaMoon, FaGithub } from 'react-icons/fa';
import { FaSun, FaMoon, FaGithub, FaDiscord } from 'react-icons/fa';
import { MdHelp, MdKeyboard, MdSettings } from 'react-icons/md';
import InvokeAILogo from '../../assets/images/logo.png';
@@ -61,41 +61,61 @@ const SiteHeader = () => {
/>
</HotkeysModal>
<IconButton
aria-label="Link to Github Issues"
variant="link"
fontSize={23}
size={'sm'}
icon={
<Link
isExternal
href="http://github.com/lstein/stable-diffusion/issues"
>
<MdHelp />
</Link>
}
/>
<Tooltip hasArrow label="Report Bug" placement={'bottom'}>
<IconButton
aria-label="Link to Github Issues"
variant="link"
fontSize={23}
size={'sm'}
icon={
<Link
isExternal
href="http://github.com/invoke-ai/InvokeAI/issues"
>
<MdHelp />
</Link>
}
/>
</Tooltip>
<IconButton
aria-label="Link to Github Repo"
variant="link"
fontSize={20}
size={'sm'}
icon={
<Link isExternal href="http://github.com/lstein/stable-diffusion">
<FaGithub />
</Link>
}
/>
<Tooltip hasArrow label="Github" placement={'bottom'}>
<IconButton
aria-label="Link to Github Repo"
variant="link"
fontSize={20}
size={'sm'}
icon={
<Link isExternal href="http://github.com/invoke-ai/InvokeAI">
<FaGithub />
</Link>
}
/>
</Tooltip>
<IconButton
aria-label="Toggle Dark Mode"
onClick={toggleColorMode}
variant="link"
size={'sm'}
fontSize={colorModeIconFontSize}
icon={colorModeIcon}
/>
<Tooltip hasArrow label="Discord" placement={'bottom'}>
<IconButton
aria-label="Link to Discord Server"
variant="link"
fontSize={20}
size={'sm'}
icon={
<Link isExternal href="https://discord.gg/ZmtBAhwWhy">
<FaDiscord />
</Link>
}
/>
</Tooltip>
<Tooltip hasArrow label="Theme" placement={'bottom'}>
<IconButton
aria-label="Toggle Dark Mode"
onClick={toggleColorMode}
variant="link"
size={'sm'}
fontSize={colorModeIconFontSize}
icon={colorModeIcon}
/>
</Tooltip>
</div>
</div>
);


@@ -18,17 +18,11 @@
.image-to-image-display-area {
display: grid;
grid-template-areas: 'image-to-image-display-area';
.image-to-image-display {
grid-area: image-to-image-display-area;
}
column-gap: 0.5rem;
.image-gallery-area {
grid-area: image-to-image-display-area;
z-index: 2;
place-self: end;
margin: 1rem;
height: 100%;
}
}
@@ -45,6 +39,7 @@
border-radius: 0.5rem;
background-color: var(--background-color-secondary);
display: grid;
height: $app-content-height;
.current-image-options {
grid-auto-columns: max-content;
@@ -68,7 +63,7 @@
.image-to-image-dual-preview {
grid-area: img2img-preview;
display: grid;
grid-template-columns: max-content max-content;
grid-template-columns: 1fr 1fr;
column-gap: 0.5rem;
padding: 0 1rem;
place-content: center;


@@ -2,12 +2,24 @@ import React from 'react';
import ImageToImagePanel from './ImageToImagePanel';
import ImageToImageDisplay from './ImageToImageDisplay';
import ImageGallery from '../../gallery/ImageGallery';
import { RootState, useAppSelector } from '../../../app/store';
export default function ImageToImage() {
const shouldShowGallery = useAppSelector(
(state: RootState) => state.options.shouldShowGallery
);
return (
<div className="image-to-image-workarea">
<ImageToImagePanel />
<div className="image-to-image-display-area">
<div
className="image-to-image-display-area"
style={
shouldShowGallery
? { gridTemplateColumns: 'auto max-content' }
: { gridTemplateColumns: 'auto' }
}
>
<ImageToImageDisplay />
<ImageGallery />
</div>


@@ -88,6 +88,7 @@ export default function InvokeTabs() {
tabsToRender.push(
<Tooltip
key={key}
hasArrow
label={tab_dict[key as keyof typeof tab_dict].tooltip}
placement={'right'}
>


@@ -18,16 +18,17 @@
.text-to-image-display {
display: grid;
grid-template-areas: 'text-to-image-display';
column-gap: 0.5rem;
.current-image-display,
.current-image-display-placeholder {
grid-area: text-to-image-display;
height: $app-content-height;
}
.image-gallery-area {
grid-area: text-to-image-display;
height: 100%;
z-index: 2;
place-self: end;
margin: 1rem;
}
}


@@ -2,12 +2,24 @@ import React from 'react';
import TextToImagePanel from './TextToImagePanel';
import CurrentImageDisplay from '../../gallery/CurrentImageDisplay';
import ImageGallery from '../../gallery/ImageGallery';
import { RootState, useAppSelector } from '../../../app/store';
export default function TextToImage() {
const shouldShowGallery = useAppSelector(
(state: RootState) => state.options.shouldShowGallery
);
return (
<div className="text-to-image-workarea">
<TextToImagePanel />
<div className="text-to-image-display">
<div
className="text-to-image-display"
style={
shouldShowGallery
? { gridTemplateColumns: 'auto max-content' }
: { gridTemplateColumns: 'auto' }
}
>
<CurrentImageDisplay />
<ImageGallery />
</div>


@@ -8,9 +8,7 @@ $app-width: calc(100vw - $app-cutoff);
$app-height: calc(100vh - $app-cutoff);
$app-content-height: calc(100vh - $app-content-height-cutoff);
$app-gallery-height: calc(100vh - ($app-content-height-cutoff + 6rem));
$app-gallery-popover-height: calc(
100vh - ($app-content-height-cutoff - 2.5rem)
);
$app-gallery-popover-height: calc(100vh - ($app-content-height-cutoff + 6rem));
$app-metadata-height: calc(100vh - ($app-content-height-cutoff + 4.4rem));
// option bar


@@ -6,3 +6,15 @@
transform: translateX(0);
}
}
@keyframes pulse {
0% {
transform: scale(1);
}
50% {
transform: scale(1.1);
}
100% {
transform: scale(1);
}
}


@@ -26,6 +26,7 @@
// gallery
@use '../features/gallery/CurrentImageDisplay.scss';
@use '../features/gallery/ImageGallery.scss';
@use '../features/gallery/HoverableImage.scss';
@use '../features/gallery/InvokePopover.scss';
@use '../features/gallery/ImageMetaDataViewer/ImageMetadataViewer.scss';


@@ -2850,6 +2850,11 @@ react-is@^18.0.0:
resolved "https://registry.yarnpkg.com/react-is/-/react-is-18.2.0.tgz#199431eeaaa2e09f86427efbb4f1473edb47609b"
integrity sha512-xWGDIW6x921xtzPkhiULtthJHoJvBbF3q26fzloPCK0hsvxtPVelvftw3zjbHWSkR2km9Z+4uxbDDK/6Zw9B8w==
react-masonry-css@^1.0.16:
version "1.0.16"
resolved "https://registry.yarnpkg.com/react-masonry-css/-/react-masonry-css-1.0.16.tgz#72b28b4ae3484e250534700860597553a10f1a2c"
integrity sha512-KSW0hR2VQmltt/qAa3eXOctQDyOu7+ZBevtKgpNDSzT7k5LA/0XntNa9z9HKCdz3QlxmJHglTZ18e4sX4V8zZQ==
react-redux@^8.0.2:
version "8.0.2"
resolved "https://registry.yarnpkg.com/react-redux/-/react-redux-8.0.2.tgz#bc2a304bb21e79c6808e3e47c50fe1caf62f7aad"


@@ -34,26 +34,6 @@ from ldm.invoke.image_util import InitImageResizer
from ldm.invoke.devices import choose_torch_device, choose_precision
from ldm.invoke.conditioning import get_uc_and_c
def fix_func(orig):
if hasattr(torch.backends, 'mps') and torch.backends.mps.is_available():
def new_func(*args, **kw):
device = kw.get("device", "mps")
kw["device"]="cpu"
return orig(*args, **kw).to(device)
return new_func
return orig
torch.rand = fix_func(torch.rand)
torch.rand_like = fix_func(torch.rand_like)
torch.randn = fix_func(torch.randn)
torch.randn_like = fix_func(torch.randn_like)
torch.randint = fix_func(torch.randint)
torch.randint_like = fix_func(torch.randint_like)
torch.bernoulli = fix_func(torch.bernoulli)
torch.multinomial = fix_func(torch.multinomial)
"""Simplified text to image API for stable diffusion/latent diffusion
Example Usage:


@@ -82,6 +82,7 @@ with metadata_from_png():
import argparse
from argparse import Namespace, RawTextHelpFormatter
import pydoc
import shlex
import json
import hashlib
@@ -115,6 +116,36 @@ PRECISION_CHOICES = [
APP_ID = 'lstein/stable-diffusion'
APP_VERSION = 'v1.15'
class ArgFormatter(argparse.RawTextHelpFormatter):
# use defined argument order to display usage
def _format_usage(self, usage, actions, groups, prefix):
if prefix is None:
prefix = 'usage: '
# if usage is specified, use that
if usage is not None:
usage = usage % dict(prog=self._prog)
# if no optionals or positionals are available, usage is just prog
elif usage is None and not actions:
usage = 'invoke>'
elif usage is None:
prog='invoke>'
# build full usage string
action_usage = self._format_actions_usage(actions, groups) # NEW
usage = ' '.join([s for s in [prog, action_usage] if s])
# omit the long line wrapping code
# prefix with 'usage:'
return '%s%s\n\n' % (prefix, usage)
class PagingArgumentParser(argparse.ArgumentParser):
'''
A custom ArgumentParser that uses pydoc to page its output.
'''
def print_help(self, file=None):
text = self.format_help()
pydoc.pager(text)
class Args(object):
def __init__(self,arg_parser=None,cmd_parser=None):
'''
@@ -480,8 +511,8 @@ class Args(object):
# This creates the parser that processes commands on the invoke> command line
def _create_dream_cmd_parser(self):
parser = argparse.ArgumentParser(
formatter_class=RawTextHelpFormatter,
parser = PagingArgumentParser(
formatter_class=ArgFormatter,
description=
"""
*Image generation:*


@@ -171,6 +171,14 @@ class KSampler(Sampler):
if img_callback is not None:
img_callback(k_callback_values['x'],k_callback_values['i'])
# if make_schedule() hasn't been called, we do it now
if self.sigmas is None:
self.make_schedule(
ddim_num_steps=S,
ddim_eta = eta,
verbose = False,
)
# sigmas are set up in make_schedule - we take the last steps items
total_steps = len(self.sigmas)
sigmas = self.sigmas[-S-1:]
@@ -248,11 +256,15 @@ class KSampler(Sampler):
return img, None, None
# REVIEW THIS METHOD: it has never been tested. In particular,
# we should not be multiplying by self.sigmas[0] if we
# are at an intermediate step in img2img. See similar in
# sample() which does work.
def get_initial_image(self,x_T,shape,steps):
print(f'WARNING: ksampler.get_initial_image(): get_initial_image needs testing')
x = (torch.randn(shape, device=self.device) * self.sigmas[0])
if x_T is not None:
return x_T + x_T * self.sigmas[0]
return x_T + x
else:
return x


@@ -20,6 +20,7 @@ from ldm.modules.diffusionmodules.util import (
class Sampler(object):
def __init__(self, model, schedule='linear', steps=None, device=None, **kwargs):
self.model = model
self.ddim_timesteps = None
self.ddpm_num_timesteps = steps
self.schedule = schedule
self.device = device or choose_torch_device()
@@ -157,6 +158,14 @@ class Sampler(object):
**kwargs,
):
# check to see if make_schedule() has run, and if not, run it
if self.ddim_timesteps is None:
self.make_schedule(
ddim_num_steps=S,
ddim_eta = eta,
verbose = False,
)
ts = self.get_timesteps(S)
# sampling


@@ -223,15 +223,15 @@ def rand_perlin_2d(shape, res, device, fade = lambda t: 6*t**5 - 15*t**4 + 10*t*
rand_val = torch.rand(res[0]+1, res[1]+1)
angles = 2*math.pi*rand_val
gradients = torch.stack((torch.cos(angles), torch.sin(angles)), dim = -1)
gradients = torch.stack((torch.cos(angles), torch.sin(angles)), dim = -1).to(device)
tile_grads = lambda slice1, slice2: gradients[slice1[0]:slice1[1], slice2[0]:slice2[1]].repeat_interleave(d[0], 0).repeat_interleave(d[1], 1)
dot = lambda grad, shift: (torch.stack((grid[:shape[0],:shape[1],0] + shift[0], grid[:shape[0],:shape[1], 1] + shift[1] ), dim = -1) * grad[:shape[0], :shape[1]]).sum(dim = -1)
n00 = dot(tile_grads([0, -1], [0, -1]), [0, 0])
n10 = dot(tile_grads([1, None], [0, -1]), [-1, 0])
n01 = dot(tile_grads([0, -1],[1, None]), [0, -1])
n11 = dot(tile_grads([1, None], [1, None]), [-1,-1])
n00 = dot(tile_grads([0, -1], [0, -1]), [0, 0]).to(device)
n10 = dot(tile_grads([1, None], [0, -1]), [-1, 0]).to(device)
n01 = dot(tile_grads([0, -1],[1, None]), [0, -1]).to(device)
n11 = dot(tile_grads([1, None], [1, None]), [-1,-1]).to(device)
t = fade(grid[:shape[0], :shape[1]])
return math.sqrt(2) * torch.lerp(torch.lerp(n00, n10, t[..., 0]), torch.lerp(n01, n11, t[..., 0]), t[..., 1])
return math.sqrt(2) * torch.lerp(torch.lerp(n00, n10, t[..., 0]), torch.lerp(n01, n11, t[..., 0]), t[..., 1]).to(device)


@@ -18,7 +18,7 @@
"---\n",
"<font color=\"red\">Note:</font> It takes some time to load, but after installing all dependencies you can use the bot all time you want while colab instance is up. <br>\n",
"<font color=\"red\">Requirements:</font> For this notebook to work you need to have [Stable-Diffusion-v-1-4](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original) stored in your Google Drive, it will be needed in cell #7\n",
"##### For more details visit Github repository: [lstein/stable-diffusion](https://github.com/lstein/stable-diffusion)\n",
"##### For more details visit Github repository: [invoke-ai/InvokeAI](https://github.com/invoke-ai/InvokeAI)\n",
"---\n"
]
},
@@ -57,7 +57,7 @@
"#@title 2. Download stable-diffusion Repository\n",
"from os.path import exists\n",
"\n",
"!git clone --quiet https://github.com/lstein/stable-diffusion.git # Original repo\n",
"!git clone --quiet https://github.com/invoke-ai/InvokeAI.git # Original repo\n",
"%cd /content/stable-diffusion/\n",
"!git checkout --quiet tags/release-1.14.1"
]
@@ -74,8 +74,8 @@
"#@title 3. Install dependencies\n",
"import gc\n",
"\n",
"!wget https://raw.githubusercontent.com/lstein/stable-diffusion/development/requirements.txt\n",
"!wget https://raw.githubusercontent.com/lstein/stable-diffusion/development/requirements-lin-win-colab-CUDA.txt\n",
"!wget https://raw.githubusercontent.com/invoke-ai/InvokeAI/development/requirements.txt\n",
"!wget https://raw.githubusercontent.com/invoke-ai/InvokeAI/development/requirements-lin-win-colab-CUDA.txt\n",
"!pip install colab-xterm\n",
"!pip install -r requirements-lin-win-colab-CUDA.txt\n",
"!pip install clean-fid torchtext\n",


@@ -1,8 +1,5 @@
-r requirements.txt
--pre
--extra-index-url https://download.pytorch.org/whl/nightly/cpu --trusted-host https://download.pytorch.org
protobuf==3.19.4
torch
torchvision


@@ -31,6 +31,7 @@ flaskwebgui==0.3.7
send2trash
dependency_injector==4.40.0
eventlet
realesrgan
git+https://github.com/openai/CLIP.git@main#egg=clip
git+https://github.com/Birch-san/k-diffusion.git@mps#egg=k-diffusion
git+https://github.com/TencentARC/GFPGAN.git#egg=gfpgan