Compare commits

...

35 Commits

Author SHA1 Message Date
psychedelicious
f5dfd5b0dc Fixes CORS handling 2022-10-08 11:57:18 -04:00
Lincoln Stein
47a97f7e97 rebuild front end 2022-10-08 11:50:25 -04:00
blessedcoolant
3c146ebf9e Fix Gallery being open by default 2022-10-08 11:47:11 -04:00
blessedcoolant
efbcbb0d91 Add Image Gallery Drawer 2022-10-08 11:44:42 -04:00
blessedcoolant
578d8b0cb4 Add Image Gallery Drawer 2022-10-08 11:43:02 -04:00
Lincoln Stein
2b1aaf4ee7 rename all modules from ldm.dream to ldm.invoke
- scripts and documentation updated to match
- ran preflight checks on both web and CLI and seems to be working
2022-10-08 11:37:23 -04:00
Lincoln Stein
4a7f5c7469 Merge branch 'release-candidate-2' of github.com:invoke-ai/InvokeAI into release-candidate-2 2022-10-08 09:34:11 -04:00
Lincoln Stein
98fe044dee rebrand CLI from "dream" to "invoke"
- rename dream.py to invoke.py
- create a compatibility script named dream.py that execs() invoke.py
- redo documentation
- change help message in args
- this does **not** rename the libraries, which are still ldm.dream.util, etc
2022-10-08 09:32:06 -04:00
Lincoln Stein
97684d78d3 rebuild webui package 2022-10-07 16:44:23 -04:00
blessedcoolant
57791834ab [WebUI] Add Image To Image UI 2022-10-07 16:41:09 -04:00
Lincoln Stein
7a701506a4 restore ability of ksamplers to process -v variation options
- supersedes PR #977
- works with both img2img and txt2img
2022-10-07 16:25:58 -04:00
Lincoln Stein
3d7bc074cf autorotate init images using exif orientation tag 2022-10-07 12:06:50 -04:00
Jakub Kolčář
70bb7f4a61 fixed perlin noise generation for mps (macos) - fix for cpu fallback 2022-10-07 10:36:45 -04:00
Lincoln Stein
9c9cb71544 rebuild frontend package 2022-10-07 10:20:02 -04:00
spezialspezial
a7515624b2 remove duplicated code 2022-10-07 08:12:55 -04:00
Lincoln Stein
9f34ddfcea fix crash on len(Nonetype) in k_sampler 2022-10-07 08:05:13 -04:00
Lincoln Stein
c6a7be63b8 fix crash in generate._transparency_check_and_warning() 2022-10-06 21:00:27 -04:00
Lincoln Stein
75165957c9 Revert "realesrgan inherits precision setting from main program"
This reverts commit 5f42d08945.

This fix was intended to solve issue #939, in which ESRGAN generates
dark images when upscaling 4X on certain GTX cards. However, the fix
apparently causes conflicts with some versions of the ESRGAN library,
and this fix will have to wait until after release of 2.0.
2022-10-06 20:52:38 -04:00
Lincoln Stein
d60df54f69 fix k_samplers in img2img - probably correct now 2022-10-06 18:53:54 -04:00
Lincoln Stein
82481a6f9c Merge branch 'release-candidate-2' of github.com:invoke-ai/InvokeAI into release-candidate-2 2022-10-06 13:58:53 -04:00
Lincoln Stein
90d64388ab Merge branch 'release-candidate-2' into release-candidate-2
- This includes #949 "Bug fixes for new Threshold and Perlin Options"
2022-10-06 13:57:43 -04:00
Lincoln Stein
3444c8e6b8 Merge branch 'release-candidate-2' into release-candidate-2 2022-10-06 13:53:27 -04:00
psychedelicious
d84321e080 Adds hotkeys to modal 2022-10-06 13:49:09 -04:00
psychedelicious
6542556ebd Adds next/prev image buttons/hotkeys 2022-10-06 13:48:59 -04:00
blessedcoolant
70bbb670ec Add Basic Hotkey Support 2022-10-06 13:27:42 -04:00
Lincoln Stein
5f42d08945 realesrgan inherits precision setting from main program 2022-10-06 12:23:30 -04:00
blessedcoolant
911c99f125 Fix WebUI CORS Issue 2022-10-06 11:17:48 -04:00
Lincoln Stein
2154dd2349 prevent crashes due to uninitialized free_gpu_mem 2022-10-06 10:54:05 -04:00
Lincoln Stein
f3050fefce bug and warning message fixes
- txt2img2img back to using DDIM as img2img sampler; results produced
  by some k* samplers are just not reliable enough for good user
  experience
- img2img progress message clarifies why img2img steps taken != steps requested
- warn of potential problems when user tries to run img2img on a small init image
2022-10-06 10:39:08 -04:00
Lincoln Stein
183b98384f set perlin & threshold to zero on generator initialization 2022-10-06 09:35:04 -04:00
Peter Baylies
6d475ee290 * Bug fixes for new Threshold and Perlin options 2022-10-06 08:46:27 -04:00
Lincoln Stein
2f29b78a00 enable --hires to use k* samplers 2022-10-05 17:18:32 -04:00
ArDiouscuros
bcb6e2e506 Fix for crashes in txt2img hires fix mode 2022-10-05 17:13:43 -04:00
Lincoln Stein
194b875cf3 Update IMG2IMG.md
Added information on the small initial image size bug.
2022-10-05 15:55:38 -04:00
Lincoln Stein
b2cd98259d rename img files with colons 2022-10-05 12:56:57 -04:00
133 changed files with 3588 additions and 2058 deletions

View File

@@ -1,4 +1,4 @@
name: Test Dream with Conda
name: Test Invoke with Conda
on:
push:
branches:
@@ -9,7 +9,7 @@ jobs:
strategy:
matrix:
os: [ ubuntu-latest, macos-12 ]
name: Test dream.py on ${{ matrix.os }} with conda
name: Test invoke.py on ${{ matrix.os }} with conda
runs-on: ${{ matrix.os }}
steps:
- run: |
@@ -85,9 +85,9 @@ jobs:
fi
# Utterly hacky, but I don't know how else to do this
if [[ ${{ github.ref }} == 'refs/heads/master' ]]; then
time ${{ steps.vars.outputs.PYTHON_BIN }} scripts/dream.py --from_file tests/preflight_prompts.txt
time ${{ steps.vars.outputs.PYTHON_BIN }} scripts/invoke.py --from_file tests/preflight_prompts.txt
elif [[ ${{ github.ref }} == 'refs/heads/development' ]]; then
time ${{ steps.vars.outputs.PYTHON_BIN }} scripts/dream.py --from_file tests/dev_prompts.txt
time ${{ steps.vars.outputs.PYTHON_BIN }} scripts/invoke.py --from_file tests/dev_prompts.txt
fi
mkdir -p outputs/img-samples
- name: Archive results

View File

@@ -24,7 +24,7 @@ _This repository was formerly known as lstein/stable-diffusion_
[CI checks on dev badge]: https://flat.badgen.net/github/checks/invoke-ai/InvokeAI/development?label=CI%20status%20on%20dev&cache=900&icon=github
[CI checks on dev link]: https://github.com/invoke-ai/InvokeAI/actions?query=branch%3Adevelopment
[CI checks on main badge]: https://flat.badgen.net/github/checks/invoke-ai/InvokeAI/main?label=CI%20status%20on%20main&cache=900&icon=github
[CI checks on main link]: https://github.com/invoke-ai/InvokeAI/actions/workflows/test-dream-conda.yml
[CI checks on main link]: https://github.com/invoke-ai/InvokeAI/actions/workflows/test-invoke-conda.yml
[discord badge]: https://flat.badgen.net/discord/members/ZmtBAhwWhy?icon=discord
[discord link]: https://discord.gg/ZmtBAhwWhy
[github forks badge]: https://flat.badgen.net/github/forks/invoke-ai/InvokeAI?icon=github
@@ -94,10 +94,10 @@ You will need one of the following:
Precision is auto configured based on the device. If however you encounter
errors like 'expected type Float but found Half' or 'not implemented for Half'
you can try starting `dream.py` with the `--precision=float32` flag:
you can try starting `invoke.py` with the `--precision=float32` flag:
```bash
(ldm) ~/stable-diffusion$ python scripts/dream.py --precision=float32
(ldm) ~/stable-diffusion$ python scripts/invoke.py --precision=float32
```
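Under the hood, the auto-configuration described above amounts to a device check. A purely illustrative sketch, with an assumed helper name and assumed rules rather than InvokeAI's actual logic:

```python
import torch

def choose_precision(device: torch.device) -> str:
    # Sketch only: assume half precision is safe except on devices where it
    # is known to fail with 'expected type Float but found Half' errors.
    if device.type in ("cpu", "mps"):
        return "float32"
    return "float16"
```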
### Features
@@ -130,7 +130,7 @@ you can try starting `dream.py` with the `--precision=float32` flag:
- vNEXT (TODO 2022)
- Deprecated `--full_precision` / `-F`. Simply omit it and `dream.py` will auto
- Deprecated `--full_precision` / `-F`. Simply omit it and `invoke.py` will auto
configure. To switch away from auto use the new flag like `--precision=float32`.
- v1.14 (11 September 2022)
@@ -156,7 +156,7 @@ you can try starting `dream.py` with the `--precision=float32` flag:
- A new configuration file scheme that allows new models (including upcoming
stable-diffusion-v1.5) to be added without altering the code.
([David Wager](https://github.com/maddavid12))
- Can specify --grid on dream.py command line as the default.
- Can specify --grid on invoke.py command line as the default.
- Miscellaneous internal bug and stability fixes.
- Works on M1 Apple hardware.
- Multiple bug fixes.

View File

@@ -12,9 +12,9 @@ from PIL import Image
from uuid import uuid4
from threading import Event
from ldm.dream.args import Args, APP_ID, APP_VERSION, calculate_init_img_hash
from ldm.dream.pngwriter import PngWriter, retrieve_metadata
from ldm.dream.conditioning import split_weighted_subprompts
from ldm.invoke.args import Args, APP_ID, APP_VERSION, calculate_init_img_hash
from ldm.invoke.pngwriter import PngWriter, retrieve_metadata
from ldm.invoke.conditioning import split_weighted_subprompts
from backend.modules.parameters import parameters_to_command
@@ -49,24 +49,16 @@ class InvokeAIWebServer:
engineio_logger = True if args.web_verbose else False
max_http_buffer_size = 10000000
# CORS Allowed Setup
cors_allowed_origins = [
'http://127.0.0.1:5173',
'http://localhost:5173',
]
additional_allowed_origins = (
opt.cors if opt.cors else []
) # additional CORS allowed origins
if self.host == '127.0.0.1':
cors_allowed_origins.extend(
[
f'http://{self.host}:{self.port}',
f'http://localhost:{self.port}',
]
)
cors_allowed_origins = (
cors_allowed_origins + additional_allowed_origins
)
socketio_args = {
'logger': logger,
'engineio_logger': engineio_logger,
'max_http_buffer_size': max_http_buffer_size,
'ping_interval': (50, 50),
'ping_timeout': 60,
}
if opt.cors:
socketio_args['cors_allowed_origins'] = opt.cors
self.app = Flask(
__name__, static_url_path='', static_folder='../frontend/dist/'
@@ -74,12 +66,7 @@ class InvokeAIWebServer:
self.socketio = SocketIO(
self.app,
logger=logger,
engineio_logger=engineio_logger,
max_http_buffer_size=max_http_buffer_size,
cors_allowed_origins=cors_allowed_origins,
ping_interval=(50, 50),
ping_timeout=60,
**socketio_args
)
# Keep Server Alive Route
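The hunk above replaces the hard-coded origin list with a keyword-argument dict that is unpacked into the `SocketIO` constructor, so CORS is enabled only when `--cors` is passed. A minimal standalone sketch of the same pattern, assuming Flask-SocketIO:

```python
from flask import Flask
from flask_socketio import SocketIO

def make_socketio(app: Flask, cors_origins=None) -> SocketIO:
    # Build the kwargs first; cors_allowed_origins is present only when the
    # caller supplied origins (mirroring the optional --cors flag).
    socketio_args = {
        'logger': False,
        'engineio_logger': False,
        'max_http_buffer_size': 10_000_000,
    }
    if cors_origins:
        socketio_args['cors_allowed_origins'] = cors_origins
    return SocketIO(app, **socketio_args)
```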
@@ -160,7 +147,7 @@ class InvokeAIWebServer:
self.init_image_path = os.path.join(self.result_path, 'init-images/')
self.mask_image_path = os.path.join(self.result_path, 'mask-images/')
# txt log
self.log_path = os.path.join(self.result_path, 'dream_log.txt')
self.log_path = os.path.join(self.result_path, 'invoke_log.txt')
# make all output paths
[
os.makedirs(path, exist_ok=True)

View File

@@ -1,6 +1,6 @@
import argparse
import os
from ldm.dream.args import PRECISION_CHOICES
from ldm.invoke.args import PRECISION_CHOICES
def create_cmd_parser():

View File

@@ -15,7 +15,7 @@ SAMPLER_CHOICES = [
def parameters_to_command(params):
"""
Converts dict of parameters into a `dream.py` REPL command.
Converts a dict of parameters into an `invoke.py` REPL command.
"""
switches = list()
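To illustrate the contract described in the docstring, a reduced, hypothetical converter might look like the following; the parameter names are assumptions, not the module's actual fields:

```python
def parameters_to_command_sketch(params: dict) -> str:
    # Hypothetical reduction: map a few well-known fields to REPL switches.
    switches = [f'"{params["prompt"]}"']
    if params.get("steps"):
        switches.append(f'-s {params["steps"]}')
    if params.get("seed") is not None:
        switches.append(f'-S {params["seed"]}')
    if params.get("width"):
        switches.append(f'-W {params["width"]}')
    if params.get("height"):
        switches.append(f'-H {params["height"]}')
    return " ".join(switches)

print(parameters_to_command_sketch(
    {"prompt": "waterfall and rainbow", "steps": 50, "seed": 416354203}
))  # "waterfall and rainbow" -s 50 -S 416354203
```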

View File

@@ -30,10 +30,10 @@ from send2trash import send2trash
from ldm.generate import Generate
from ldm.dream.restoration import Restoration
from ldm.dream.pngwriter import PngWriter, retrieve_metadata
from ldm.dream.args import APP_ID, APP_VERSION, calculate_init_img_hash
from ldm.dream.conditioning import split_weighted_subprompts
from ldm.invoke.restoration import Restoration
from ldm.invoke.pngwriter import PngWriter, retrieve_metadata
from ldm.invoke.args import APP_ID, APP_VERSION, calculate_init_img_hash
from ldm.invoke.conditioning import split_weighted_subprompts
from modules.parameters import parameters_to_command
@@ -125,7 +125,7 @@ class CanceledException(Exception):
try:
gfpgan, codeformer, esrgan = None, None, None
from ldm.dream.restoration.base import Restoration
from ldm.invoke.restoration.base import Restoration
restoration = Restoration()
gfpgan, codeformer = restoration.load_face_restore_models()
@@ -164,7 +164,7 @@ init_image_path = os.path.join(result_path, "init-images/")
mask_image_path = os.path.join(result_path, "mask-images/")
# txt log
log_path = os.path.join(result_path, "dream_log.txt")
log_path = os.path.join(result_path, "invoke_log.txt")
# make all output paths
[

View File

@@ -5,9 +5,9 @@
- Supports a Google Colab notebook for a standalone server running on Google hardware [Arturo Mendivil](https://github.com/artmen1516)
- WebUI supports GFPGAN/ESRGAN facial reconstruction and upscaling [Kevin Gibbons](https://github.com/bakkot)
- WebUI supports incremental display of in-progress images during generation [Kevin Gibbons](https://github.com/bakkot)
- Output directory can be specified on the dream> command line.
- Output directory can be specified on the invoke> command line.
- The grid was displaying duplicated images when not enough images to fill the final row [Muhammad Usama](https://github.com/SMUsamaShah)
- Can specify --grid on dream.py command line as the default.
- Can specify --grid on invoke.py command line as the default.
- Miscellaneous internal bug and stability fixes.
---
@@ -16,13 +16,13 @@
- Improved file handling, including ability to read prompts from standard input.
(kudos to [Yunsaki](https://github.com/yunsaki))
- The web server is now integrated with the dream.py script. Invoke by adding --web to
the dream.py command arguments.
- The web server is now integrated with the invoke.py script. Invoke by adding --web to
the invoke.py command arguments.
- Face restoration and upscaling via GFPGAN and Real-ESRGAN are now automatically
enabled if the GFPGAN directory is located as a sibling to Stable Diffusion.
VRAM requirements are modestly reduced. Thanks to both [Blessedcoolant](https://github.com/blessedcoolant) and
[Oceanswave](https://github.com/oceanswave) for their work on this.
- You can now swap samplers on the dream> command line. [Blessedcoolant](https://github.com/blessedcoolant)
- You can now swap samplers on the invoke> command line. [Blessedcoolant](https://github.com/blessedcoolant)
---
@@ -32,7 +32,7 @@
- You can now specify a seed of -1 to use the previous image's seed, -2 to use the seed for the image generated before that, etc.
Seed memory only extends back to the previous command, but will work on all images generated with the -n# switch.
- Variant generation support temporarily disabled pending more general solution.
- Created a feature branch named **yunsaki-morphing-dream** which adds experimental support for
- Created a feature branch named **yunsaki-morphing-invoke** which adds experimental support for
iteratively modifying the prompt and its parameters. Please see [Pull Request #86](https://github.com/lstein/stable-diffusion/pull/86)
for a synopsis of how this works. Note that when this feature is eventually added to the main branch, it may be modified
significantly.
@@ -57,7 +57,7 @@
## v1.08 (24 August 2022)
- Escape single quotes on the dream> command before trying to parse. This avoids
- Escape single quotes on the invoke> command before trying to parse. This avoids
parse errors.
- Removed instruction to get Python3.8 as first step in Windows install.
Anaconda3 does it for you.
@@ -94,7 +94,7 @@
be regenerated with the indicated key
- It should no longer be possible for one image to overwrite another
- You can use the "cd" and "pwd" commands at the dream> prompt to set and retrieve
- You can use the "cd" and "pwd" commands at the invoke> prompt to set and retrieve
the path of the output directory.
---
@@ -128,7 +128,7 @@
- added k_lms sampling.
**Please run "conda env update" to load the k_lms dependencies!!**
- use half precision arithmetic by default, resulting in faster execution and lower memory requirements
Pass argument --full_precision to dream.py to get slower but more accurate image generation
Pass argument --full_precision to invoke.py to get slower but more accurate image generation
---

View File

Image file: 501 KiB before, 501 KiB after (unchanged).

View File

Image file: 473 KiB before, 473 KiB after (unchanged).

View File

Image file: 618 KiB before, 618 KiB after (unchanged).

View File

Image file: 557 KiB before, 557 KiB after (unchanged).

View File

@@ -12,10 +12,10 @@ title: Changelog
[Kevin Gibbons](https://github.com/bakkot)
- WebUI supports incremental display of in-progress images during generation
[Kevin Gibbons](https://github.com/bakkot)
- Output directory can be specified on the dream> command line.
- Output directory can be specified on the invoke> command line.
- The grid was displaying duplicated images when not enough images to fill the
final row [Muhammad Usama](https://github.com/SMUsamaShah)
- Can specify --grid on dream.py command line as the default.
- Can specify --grid on invoke.py command line as the default.
- Miscellaneous internal bug and stability fixes.
---
@@ -24,14 +24,14 @@ title: Changelog
- Improved file handling, including ability to read prompts from standard input.
(kudos to [Yunsaki](https://github.com/yunsaki))
- The web server is now integrated with the dream.py script. Invoke by adding
--web to the dream.py command arguments.
- The web server is now integrated with the invoke.py script. Invoke by adding
--web to the invoke.py command arguments.
- Face restoration and upscaling via GFPGAN and Real-ESRGAN are now automatically
enabled if the GFPGAN directory is located as a sibling to Stable Diffusion.
VRAM requirements are modestly reduced. Thanks to both
[Blessedcoolant](https://github.com/blessedcoolant) and
[Oceanswave](https://github.com/oceanswave) for their work on this.
- You can now swap samplers on the dream> command line.
- You can now swap samplers on the invoke> command line.
[Blessedcoolant](https://github.com/blessedcoolant)
---
@@ -45,7 +45,7 @@ title: Changelog
back to the previous command, but will work on all images generated with the
-n# switch.
- Variant generation support temporarily disabled pending more general solution.
- Created a feature branch named **yunsaki-morphing-dream** which adds
- Created a feature branch named **yunsaki-morphing-invoke** which adds
experimental support for iteratively modifying the prompt and its parameters.
Please
see [Pull Request #86](https://github.com/lstein/stable-diffusion/pull/86) for
@@ -75,7 +75,7 @@ title: Changelog
## v1.08 <small>(24 August 2022)</small>
- Escape single quotes on the dream> command before trying to parse. This avoids
- Escape single quotes on the invoke> command before trying to parse. This avoids
parse errors.
- Removed instruction to get Python3.8 as first step in Windows install.
Anaconda3 does it for you.
@@ -112,7 +112,7 @@ title: Changelog
can be regenerated with the indicated key
- It should no longer be possible for one image to overwrite another
- You can use the "cd" and "pwd" commands at the dream> prompt to set and
- You can use the "cd" and "pwd" commands at the invoke> prompt to set and
retrieve the path of the output directory.
## v1.04 <small>(22 August 2022 - after the drop)</small>
@@ -139,5 +139,5 @@ title: Changelog
- added k_lms sampling. **Please run "conda env update -f environment.yaml" to
load the k_lms dependencies!!**
- use half precision arithmetic by default, resulting in faster execution and
lower memory requirements Pass argument --full_precision to dream.py to get
lower memory requirements Pass argument --full_precision to invoke.py to get
slower but more accurate image generation

View File

@@ -8,8 +8,8 @@ hide:
## **Interactive Command Line Interface**
The `dream.py` script, located in `scripts/dream.py`, provides an interactive
interface to image generation similar to the "dream mothership" bot that Stable
The `invoke.py` script, located in `scripts/invoke.py`, provides an interactive
interface to image generation similar to the "invoke mothership" bot that Stable
AI provided on its Discord server.
Unlike the `txt2img.py` and `img2img.py` scripts provided in the original
@@ -34,21 +34,21 @@ The script is confirmed to work on Linux, Windows and Mac systems.
currently rudimentary, but a much better replacement is on its way.
```bash
(ldm) ~/stable-diffusion$ python3 ./scripts/dream.py
(ldm) ~/stable-diffusion$ python3 ./scripts/invoke.py
* Initializing, be patient...
Loading model from models/ldm/text2img-large/model.ckpt
(...more initialization messages...)
* Initialization done! Awaiting your command...
dream> ashley judd riding a camel -n2 -s150
invoke> ashley judd riding a camel -n2 -s150
Outputs:
outputs/img-samples/00009.png: "ashley judd riding a camel" -n2 -s150 -S 416354203
outputs/img-samples/00010.png: "ashley judd riding a camel" -n2 -s150 -S 1362479620
dream> "there's a fly in my soup" -n6 -g
invoke> "there's a fly in my soup" -n6 -g
outputs/img-samples/00011.png: "there's a fly in my soup" -n6 -g -S 2685670268
seeds for individual rows: [2685670268, 1216708065, 2335773498, 822223658, 714542046, 3395302430]
dream> q
invoke> q
# this shows how to retrieve the prompt stored in the saved image's metadata
(ldm) ~/stable-diffusion$ python ./scripts/images2prompt.py outputs/img_samples/*.png
@@ -57,10 +57,10 @@ dream> q
00011.png: "there's a fly in my soup" -n6 -g -S 2685670268
```
![dream-py-demo](../assets/dream-py-demo.png)
![invoke-py-demo](../assets/dream-py-demo.png)
The `dream>` prompt's arguments are pretty much identical to those used in the
Discord bot, except you don't need to type "!dream" (it doesn't hurt if you do).
The `invoke>` prompt's arguments are pretty much identical to those used in the
Discord bot, except you don't need to type "!invoke" (it doesn't hurt if you do).
A significant change is that creation of individual images is now the default
unless `--grid` (`-g`) is given. A full list is given in
[List of prompt arguments](#list-of-prompt-arguments).
@@ -73,7 +73,7 @@ the location of the model weight files.
### List of arguments recognized at the command line
These command-line arguments can be passed to `dream.py` when you first run it
These command-line arguments can be passed to `invoke.py` when you first run it
from the Windows, Mac or Linux command line. Some set defaults that can be
overridden on a per-prompt basis (see [List of prompt arguments]
(#list-of-prompt-arguments). Others
@@ -112,15 +112,15 @@ These arguments are deprecated but still work:
| --laion400m | -l | False | Use older LAION400m weights; use `--model=laion400m` instead |
**A note on path names:** On Windows systems, you may run into
problems when passing the dream script standard backslashed path
problems when passing the invoke script standard backslashed path
names because the Python interpreter treats "\" as an escape.
You can either double your slashes (ick): C:\\\\path\\\\to\\\\my\\\\file, or
use Linux/Mac style forward slashes (better): C:/path/to/my/file.
## List of prompt arguments
After the dream.py script initializes, it will present you with a
**dream>** prompt. Here you can enter information to generate images
After the invoke.py script initializes, it will present you with an
**invoke>** prompt. Here you can enter information to generate images
from text (txt2img), to embellish an existing image or sketch
(img2img), or to selectively alter chosen regions of the image
(inpainting).
@@ -128,13 +128,13 @@ from text (txt2img), to embellish an existing image or sketch
### This is an example of txt2img:
~~~~
dream> waterfall and rainbow -W640 -H480
invoke> waterfall and rainbow -W640 -H480
~~~~
This will create the requested image with the dimensions 640 (width)
and 480 (height).
Here are the dream> command that apply to txt2img:
Here are the invoke> commands that apply to txt2img:
| Argument | Shortcut | Default | Description |
|--------------------|------------|---------------------|--------------|
@@ -167,7 +167,7 @@ the nearest multiple of 64.
### This is an example of img2img:
~~~~
dream> waterfall and rainbow -I./vacation-photo.png -W640 -H480 --fit
invoke> waterfall and rainbow -I./vacation-photo.png -W640 -H480 --fit
~~~~
This will modify the indicated vacation photograph by making it more
@@ -188,7 +188,7 @@ accepts additional options:
### This is an example of inpainting:
~~~~
dream> waterfall and rainbow -I./vacation-photo.png -M./vacation-mask.png -W640 -H480 --fit
invoke> waterfall and rainbow -I./vacation-photo.png -M./vacation-mask.png -W640 -H480 --fit
~~~~
This will do the same thing as img2img, but image alterations will
@@ -224,20 +224,20 @@ Some examples:
Upscale to 4X its original size and fix faces using codeformer:
~~~
dream> !fix 0000045.4829112.png -G1 -U4 -ft codeformer
invoke> !fix 0000045.4829112.png -G1 -U4 -ft codeformer
~~~
Use the GFPGAN algorithm to fix faces, then upscale to 3X using --embiggen:
~~~
dream> !fix 0000045.4829112.png -G0.8 -ft gfpgan
invoke> !fix 0000045.4829112.png -G0.8 -ft gfpgan
>> fixing outputs/img-samples/0000045.4829112.png
>> retrieved seed 4829112 and prompt "boy enjoying a banana split"
>> GFPGAN - Restoring Faces for image seed:4829112
Outputs:
[1] outputs/img-samples/000017.4829112.gfpgan-00.png: !fix "outputs/img-samples/0000045.4829112.png" -s 50 -S -W 512 -H 512 -C 7.5 -A k_lms -G 0.8
dream> !fix 000017.4829112.gfpgan-00.png --embiggen 3
invoke> !fix 000017.4829112.gfpgan-00.png --embiggen 3
...lots of text...
Outputs:
[2] outputs/img-samples/000018.2273800735.embiggen-00.png: !fix "outputs/img-samples/000017.243781548.gfpgan-00.png" -s 50 -S 2273800735 -W 512 -H 512 -C 7.5 -A k_lms --embiggen 3.0 0.75 0.25
@@ -251,9 +251,9 @@ provide either the name of a file in the current output directory, or
a full file path.
~~~
dream> !fetch 0000015.8929913.png
invoke> !fetch 0000015.8929913.png
# the script returns the next line, ready for editing and running:
dream> a fantastic alien landscape -W 576 -H 512 -s 60 -A plms -C 7.5
invoke> a fantastic alien landscape -W 576 -H 512 -s 60 -A plms -C 7.5
~~~
Note that this command may behave unexpectedly if given a PNG file that
@@ -261,7 +261,7 @@ was not generated by InvokeAI.
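`!fetch` works because every generated PNG carries its generation command in its metadata; the web backend above imports `retrieve_metadata` from `ldm.invoke.pngwriter` for the same purpose. A minimal Pillow sketch of the idea, where the text-chunk key is an assumption:

```python
from PIL import Image

def fetch_command(png_path):
    # Generated PNGs embed the command line as a PNG text chunk; the key
    # "Dream" used here is an illustrative assumption, not a confirmed name.
    with Image.open(png_path) as img:
        return img.info.get("Dream")
```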
## !history
The dream script keeps track of all the commands you issue during a
The invoke script keeps track of all the commands you issue during a
session, allowing you to re-run them. On Mac and Linux systems, it
also writes the command-line history out to disk, giving you access to
the most recent 1000 commands issued.
@@ -272,7 +272,7 @@ issued during the session (Windows), or the most recent 1000 commands
where "NNN" is the history line number. For example:
~~~
dream> !history
invoke> !history
...
[14] happy woman sitting under tree wearing broad hat and flowing garment
[15] beautiful woman sitting under tree wearing broad hat and flowing garment
@@ -280,8 +280,8 @@ dream> !history
[20] watercolor of beautiful woman sitting under tree wearing broad hat and flowing garment -v0.2 -n6 -S2878767194
[21] surrealist painting of beautiful woman sitting under tree wearing broad hat and flowing garment -v0.2 -n6 -S2878767194
...
dream> !20
dream> watercolor of beautiful woman sitting under tree wearing broad hat and flowing garment -v0.2 -n6 -S2878767194
invoke> !20
invoke> watercolor of beautiful woman sitting under tree wearing broad hat and flowing garment -v0.2 -n6 -S2878767194
~~~
## !search <search string>
@@ -290,7 +290,7 @@ This is similar to !history but it only returns lines that contain
`search string`. For example:
~~~
dream> !search surreal
invoke> !search surreal
[21] surrealist painting of beautiful woman sitting under tree wearing broad hat and flowing garment -v0.2 -n6 -S2878767194
~~~
@@ -312,16 +312,16 @@ command completion.
- To paste a cut section back in, position the cursor where you want to paste, and type CTRL-Y
Windows users can get similar, but more limited, functionality if they
launch dream.py with the "winpty" program and have the `pyreadline3`
launch invoke.py with the "winpty" program and have the `pyreadline3`
library installed:
~~~
> winpty python scripts\dream.py
> winpty python scripts\invoke.py
~~~
On the Mac and Linux platforms, when you exit dream.py, the last 1000
On the Mac and Linux platforms, when you exit invoke.py, the last 1000
lines of your command-line history will be saved. When you restart
dream.py, you can access the saved history using the up-arrow key.
invoke.py, you can access the saved history using the up-arrow key.
In addition, limited command-line completion is installed. In various
contexts, you can start typing your command and press tab. A list of
@@ -334,7 +334,7 @@ will attempt to complete pathnames for you. This is most handy for the
the path with a slash ("/") or "./". For example:
~~~
dream> zebra with a mustache -I./test-pictures<TAB>
invoke> zebra with a mustache -I./test-pictures<TAB>
-I./test-pictures/Lincoln-and-Parrot.png -I./test-pictures/zebra.jpg -I./test-pictures/madonna.png
-I./test-pictures/bad-sketch.png -I./test-pictures/man_with_eagle/
~~~
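On Mac and Linux, path completion like the above can be wired up with the standard library's `readline` module (its absence on Windows is why the winpty workaround gives only limited functionality). A minimal sketch:

```python
import glob
import readline

def path_completer(text, state):
    # Return the state-th filesystem match for the partial path typed so far.
    matches = sorted(glob.glob(text + "*"))
    return matches[state] if state < len(matches) else None

readline.set_completer_delims(" \t\n")
readline.set_completer(path_completer)
readline.parse_and_bind("tab: complete")
```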

View File

@@ -106,8 +106,8 @@ Running Embiggen with 512x512 tiles on an existing image, scaling up by a factor
and doing the same again (default ESRGAN strength is 0.75, default overlap between tiles is 0.25):
```bash
dream > a photo of a forest at sunset -s 100 -W 512 -H 512 -I outputs/forest.png -f 0.4 -embiggen 2.5
dream > a photo of a forest at sunset -s 100 -W 512 -H 512 -I outputs/forest.png -f 0.4 -embiggen 2.5 0.75 0.25
invoke > a photo of a forest at sunset -s 100 -W 512 -H 512 -I outputs/forest.png -f 0.4 -embiggen 2.5
invoke > a photo of a forest at sunset -s 100 -W 512 -H 512 -I outputs/forest.png -f 0.4 -embiggen 2.5 0.75 0.25
```
If your starting image was also 512x512 this should have taken 9 tiles.
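The nine-tile figure follows from the tile size, the scale factor, and the overlap fraction. A back-of-the-envelope sketch, with an invented helper name:

```python
import math

def embiggen_tile_count(src_px=512, scale=2.5, tile_px=512, overlap=0.25):
    target = src_px * scale            # 512 * 2.5 = 1280 target edge
    stride = tile_px * (1 - overlap)   # 512 * 0.75 = 384 between tile origins
    per_axis = math.ceil((target - tile_px) / stride) + 1   # 3 tiles per axis
    return per_axis ** 2

print(embiggen_tile_count())  # 9
```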
@@ -118,7 +118,7 @@ If there weren't enough clouds in the sky of that forest you just made
tiles:
```bash
dream> a photo of puffy clouds over a forest at sunset -s 100 -W 512 -H 512 -I outputs/000002.seed.png -f 0.5 -embiggen_tiles 1 2 3
invoke> a photo of puffy clouds over a forest at sunset -s 100 -W 512 -H 512 -I outputs/000002.seed.png -f 0.5 -embiggen_tiles 1 2 3
```
## Fixing Previously-Generated Images
@@ -129,7 +129,7 @@ syntax `!fix path/to/file.png <embiggen>`. For example, you can rewrite the
previous command to look like this:
~~~~
dream> !fix ./outputs/000002.seed.png -embiggen_tiles 1 2 3
invoke> !fix ./outputs/000002.seed.png -embiggen_tiles 1 2 3
~~~~
A new file named `000002.seed.fixed.png` will be created in the output directory. Note that

View File

@@ -10,18 +10,39 @@ top of the image you provide, preserving the original's basic shape and layout.
the `--init_img` option as shown here:
```commandline
dream> "waterfall and rainbow" --init_img=./init-images/crude_drawing.png --strength=0.5 -s100 -n4
tree on a hill with a river, nature photograph, national geographic -I./test-pictures/tree-and-river-sketch.png -f 0.85
```
This will take the original image shown here:
<img src="https://user-images.githubusercontent.com/50542132/193946000-c42a96d8-5a74-4f8a-b4c3-5213e6cadcce.png" width=350>
and generate a new image based on it as shown here:
<img src="https://user-images.githubusercontent.com/111189/194135515-53d4c060-e994-4016-8121-7c685e281ac9.png" width=350>
The `--init_img (-I)` option gives the path to the seed picture. `--strength (-f)` controls how much
the original will be modified, ranging from `0.0` (keep the original intact), to `1.0` (ignore the
original completely). The default is `0.75`, and ranges from `0.25-0.75` give interesting results.
original completely). The default is `0.75`, and ranges from `0.25-0.90` give interesting results.
Other relevant options include `-C` (classifier-free guidance scale) and `-s` (steps). Unlike `txt2img`,
adding steps will continuously change the resulting image and it will not converge.
You may also pass a `-v<variation_amount>` option to generate `-n<iterations>` count variants on
the original image. This is done by passing the first generated image
back into img2img the requested number of times. It generates
interesting variants.
Note that the prompt makes a big difference. For example, this slight variation on the prompt produces
a very different image:
`photograph of a tree on a hill with a river`
<img src="https://user-images.githubusercontent.com/111189/194135220-16b62181-b60c-4248-8989-4834a8fd7fbd.png" width=350>
(When designing prompts, think about how the images scraped from the internet were captioned. Very few photographs will
be labeled "photograph" or "photorealistic." They will, however, be captioned with the publication, photographer, camera
model, or film settings.)
If the initial image contains transparent regions, then Stable Diffusion will only draw within the
transparent regions, a process called "inpainting". However, for this to work correctly, the color
information underneath the transparent regions needs to be preserved, not erased.
@@ -29,6 +50,14 @@ information underneath the transparent needs to be preserved, not erased.
More details can be found here:
[Creating Transparent Images For Inpainting](./INPAINTING.md#creating-transparent-regions-for-inpainting)
**IMPORTANT ISSUE** `img2img` does not work properly on initial images smaller than 512x512. Please scale your
image to at least 512x512 before using it. Larger images are not a problem, but may exhaust the VRAM on your
GPU card. To avoid this, use the `--fit` option, which downscales the initial image to fit within the box specified
by width x height:
~~~
tree on a hill with a river, national geographic -I./test-pictures/big-sketch.png -H512 -W512 --fit
~~~
## How does it actually work, though?
The main difference between `img2img` and `prompt2img` is the starting point. While `prompt2img` always starts with pure
@@ -38,7 +67,7 @@ gaussian noise and progressively refines it over the requested number of steps,
**Let's start** by thinking about vanilla `prompt2img`, just generating an image from a prompt. If the step count is 10, then the "latent space" (Stable Diffusion's internal representation of the image) for the prompt "fire" with seed `1592514025` develops something like this:
```commandline
dream> "fire" -s10 -W384 -H384 -S1592514025
invoke> "fire" -s10 -W384 -H384 -S1592514025
```
![latent steps](../assets/img2img/000019.steps.png)
@@ -66,7 +95,7 @@ Notice how much more fuzzy the starting image is for strength `0.7` compared to
| | strength = 0.7 | strength = 0.4 |
| -- | -- | -- |
| initial image that SD sees | ![](../assets/img2img/000032.step-0.png) | ![](../assets/img2img/000030.step-0.png) |
| steps argument to `dream>` | `-S10` | `-S10` |
| steps argument to `invoke>` | `-S10` | `-S10` |
| steps actually taken | 7 | 4 |
| latent space at each step | ![](../assets/img2img/000032.steps.gravity.png) | ![](../assets/img2img/000030.steps.gravity.png) |
| output | ![](../assets/img2img/000032.1592514025.png) | ![](../assets/img2img/000030.1592514025.png) |
@@ -77,10 +106,10 @@ Both of the outputs look kind of like what I was thinking of. With the strength
If you want to try this out yourself, all of these are using a seed of `1592514025` with a width/height of `384`, step count `10`, the default sampler (`k_lms`), and the single-word prompt `fire`:
```commandline
dream> "fire" -s10 -W384 -H384 -S1592514025 -I /tmp/fire-drawing.png --strength 0.7
invoke> "fire" -s10 -W384 -H384 -S1592514025 -I /tmp/fire-drawing.png --strength 0.7
```
The code for rendering intermediates is on my (damian0815's) branch [document-img2img](https://github.com/damian0815/InvokeAI/tree/document-img2img) - run `dream.py` and check your `outputs/img-samples/intermediates` folder while generating an image.
The code for rendering intermediates is on my (damian0815's) branch [document-img2img](https://github.com/damian0815/InvokeAI/tree/document-img2img) - run `invoke.py` and check your `outputs/img-samples/intermediates` folder while generating an image.
### Compensating for the reduced step count
@@ -89,7 +118,7 @@ After putting this guide together I was curious to see how the difference would
Here's strength `0.4` (note step count `50`, which is `20 ÷ 0.4` to make sure SD does `20` steps from my image):
```commandline
dream> "fire" -s50 -W384 -H384 -S1592514025 -I /tmp/fire-drawing.png -f 0.4
invoke> "fire" -s50 -W384 -H384 -S1592514025 -I /tmp/fire-drawing.png -f 0.4
```
![](../assets/img2img/000035.1592514025.png)
@@ -97,7 +126,7 @@ dream> "fire" -s50 -W384 -H384 -S1592514025 -I /tmp/fire-drawing.png -f 0.4
and strength `0.7` (note step count `30`, which is roughly `20 ÷ 0.7` to make sure SD does `20` steps from my image):
```commandline
dream> "fire" -s30 -W384 -H384 -S1592514025 -I /tmp/fire-drawing.png -f 0.7
invoke> "fire" -s30 -W384 -H384 -S1592514025 -I /tmp/fire-drawing.png -f 0.7
```
![](../assets/img2img/000046.1592514025.png)
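The step arithmetic above (`50 = 20 ÷ 0.4`, `30 ≈ 20 ÷ 0.7`) generalizes to a one-line helper; this is just the inverse relationship, not project code:

```python
import math

def steps_to_request(desired_steps, strength):
    # img2img takes roughly steps * strength denoising steps, so over-request
    # by the reciprocal of the strength to hit the desired count.
    return math.ceil(desired_steps / strength)

print(steps_to_request(20, 0.4))  # 50
print(steps_to_request(20, 0.7))  # 29, which the text above rounds to 30
```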

View File

@@ -8,7 +8,7 @@ title: Inpainting
Inpainting is really cool. To do it, you start with an initial image and use a photoeditor to make
one or more regions transparent (i.e. they have a "hole" in them). You then provide the path to this
image at the dream> command line using the `-I` switch. Stable Diffusion will only paint within the
image at the invoke> command line using the `-I` switch. Stable Diffusion will only paint within the
transparent region.
There's a catch. In the current implementation, you have to prepare the initial image correctly so
@@ -17,13 +17,13 @@ applications will by default erase the color information under the transparent p
them with white or black, which will lead to suboptimal inpainting. You also must take care to
export the PNG file in such a way that the color information is preserved.
If your photoeditor is erasing the underlying color information, `dream.py` will give you a big fat
If your photoeditor is erasing the underlying color information, `invoke.py` will give you a big fat
warning. If you can't find a way to coax your photoeditor to retain color values under transparent
areas, then you can combine the `-I` and `-M` switches to provide both the original unedited image
and the masked (partially transparent) image:
```bash
dream> "man with cat on shoulder" -I./images/man.png -M./images/man-transparent.png
invoke> "man with cat on shoulder" -I./images/man.png -M./images/man-transparent.png
```
We are hoping to get rid of the need for this workaround in an upcoming release.
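Both the `-M` mask and the transparency warning come down to reading the alpha channel. A minimal Pillow sketch, with an invented function name:

```python
from PIL import Image

def mask_from_transparency(path):
    # Fully transparent pixels (alpha == 0) become white, marking the region
    # Stable Diffusion may repaint; everything else becomes black.
    alpha = Image.open(path).convert("RGBA").getchannel("A")
    return alpha.point(lambda a: 255 if a == 0 else 0)
```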
@@ -69,7 +69,7 @@ We are hoping to get rid of the need for this workaround in an upcoming release.
![step6](../assets/step6.png)
7. After following the inpainting instructions above (either through the CLI or the Web UI), marvel at your newfound ability to selectively dream. Lookin' good!
7. After following the inpainting instructions above (either through the CLI or the Web UI), marvel at your newfound ability to selectively invoke. Lookin' good!
![step7](../assets/step7.png)

View File

@@ -22,10 +22,10 @@ Output Example: ![Colab Notebook](../assets/colab_notebook.png)
The seamless tiling mode causes generated images to tile seamlessly with themselves. To use it, add the
`--seamless` option when starting the script, which will make all generated images tile, or set it
for each `dream>` prompt as shown here:
for each `invoke>` prompt as shown here:
```bash
dream> "pond garden with lotus by claude monet" --seamless -s100 -n4
invoke> "pond garden with lotus by claude monet" --seamless -s100 -n4
```
---
@@ -42,12 +42,12 @@ Here's an example of using this to do a quick refinement. It also illustrates us
switch to turn on upscaling and face enhancement (see previous section):
```bash
dream> a cute child playing hopscotch -G0.5
invoke> a cute child playing hopscotch -G0.5
[...]
outputs/img-samples/000039.3498014304.png: "a cute child playing hopscotch" -s50 -W512 -H512 -C7.5 -mk_lms -S3498014304
# I wonder what it will look like if I bump up the steps and set facial enhancement to full strength?
dream> a cute child playing hopscotch -G1.0 -s100 -S -1
invoke> a cute child playing hopscotch -G1.0 -s100 -S -1
reusing previous seed 3498014304
[...]
outputs/img-samples/000040.3498014304.png: "a cute child playing hopscotch" -G1.0 -s100 -W512 -H512 -C7.5 -mk_lms -S3498014304

View File

@@ -31,7 +31,7 @@ Pretty nice, but it's annoying that the top of her head is cut
off. She's also a bit off center. Let's fix that!
~~~~
dream> !fix images/curly.png --outcrop top 64 right 64
invoke> !fix images/curly.png --outcrop top 64 right 64
~~~~
This is saying to apply the `outcrop` extension by extending the top
@@ -67,7 +67,7 @@ differences. Starting with the same image, here is how we would add an
additional 64 pixels to the top of the image:
~~~
dream> !fix images/curly.png --out_direction top 64
invoke> !fix images/curly.png --out_direction top 64
~~~
(You can abbreviate `--out_direction` as `-D`.)

View File

@@ -25,7 +25,7 @@ the standard install location for python packages, and will put GFPGAN into a
subdirectory of "src" in the InvokeAI directory. (The reason for this is
that the standard GFPGAN distribution has a minor bug that adversely affects
image color.) Upscaling with Real-ESRGAN should "just work" without further
intervention. Simply pass the --upscale (-U) option on the dream> command line,
intervention. Simply pass the --upscale (-U) option on the invoke> command line,
or indicate the desired scale on the popup in the Web GUI.
For **GFPGAN** to work, there is one additional step needed. You will need to
@@ -42,14 +42,14 @@ Make sure that you're in the InvokeAI directory when you do this.
Alternatively, if you have GFPGAN installed elsewhere, or if you are using an
earlier version of this package which asked you to install GFPGAN in a sibling
directory, you may use the `--gfpgan_dir` argument with `dream.py` to set a
directory, you may use the `--gfpgan_dir` argument with `invoke.py` to set a
custom path to your GFPGAN directory. _There are other GFPGAN related boot
arguments if you wish to customize further._
!!! warning "Internet connection needed"
Users whose GPU machines are isolated from the Internet (e.g.
on a University cluster) should be aware that the first time you run dream.py with GFPGAN and
on a University cluster) should be aware that the first time you run invoke.py with GFPGAN and
Real-ESRGAN turned on, it will try to download model files from the Internet. To rectify this, you
may run `python3 scripts/preload_models.py` after you have installed GFPGAN and all its
dependencies.
@@ -94,13 +94,13 @@ too.
### Example Usage
```bash
dream> superman dancing with a panda bear -U 2 0.6 -G 0.4
invoke> superman dancing with a panda bear -U 2 0.6 -G 0.4
```
This also works with img2img:
```bash
dream> a man wearing a pineapple hat -I path/to/your/file.png -U 2 0.5 -G 0.6
invoke> a man wearing a pineapple hat -I path/to/your/file.png -U 2 0.5 -G 0.6
```
!!! note
@@ -168,7 +168,7 @@ previously-generated file. Just use the syntax `!fix path/to/file.png
just run:
```
dream> !fix ./outputs/img-samples/000044.2945021133.png -G 0.8 -U 2
invoke> !fix ./outputs/img-samples/000044.2945021133.png -G 0.8 -U 2
```
A new file named `000044.2945021133.fixed.png` will be created in the output
@@ -178,5 +178,5 @@ unlike the behavior at generate time.
### Disabling:
If, for some reason, you do not wish to load the GFPGAN and/or ESRGAN libraries,
you can disable them on the dream.py command line with the `--no_restore` and
you can disable them on the invoke.py command line with the `--no_restore` and
`--no_upscale` options, respectively.
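The second value of `-U` (the `0.6` in `-U 2 0.6`) is an upscaling strength. One plausible reading, offered here only as an assumption, is a blend between a plain resize and the Real-ESRGAN output:

```python
from PIL import Image

def apply_upscale_strength(original, upscaled, strength=0.75):
    # Assumed semantics: strength 1.0 keeps the full ESRGAN result, while
    # 0.0 falls back to a plain Lanczos resize of the original image.
    resized = original.resize(upscaled.size, Image.LANCZOS)
    return Image.blend(resized, upscaled, strength)
```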

View File

@@ -6,9 +6,9 @@ title: Prompting Features
## **Reading Prompts from a File**
You can automate `dream.py` by providing a text file with the prompts you want to run, one line per
You can automate `invoke.py` by providing a text file with the prompts you want to run, one line per
prompt. The text file must be composed with a text editor (e.g. Notepad) and not a word processor.
Each line should look like what you would type at the dream> prompt:
Each line should look like what you would type at the invoke> prompt:
```bash
a beautiful sunny day in the park, children playing -n4 -C10
@@ -16,16 +16,16 @@ stormy weather on a mountain top, goats grazing -s100
innovative packaging for a squid's dinner -S137038382
```
Then pass this file's name to `dream.py` when you invoke it:
Then pass this file's name to `invoke.py` when you invoke it:
```bash
(ldm) ~/stable-diffusion$ python3 scripts/dream.py --from_file "path/to/prompts.txt"
(ldm) ~/stable-diffusion$ python3 scripts/invoke.py --from_file "path/to/prompts.txt"
```
You may read a series of prompts from standard input by providing a filename of `-`:
```bash
(ldm) ~/stable-diffusion$ echo "a beautiful day" | python3 scripts/dream.py --from_file -
(ldm) ~/stable-diffusion$ echo "a beautiful day" | python3 scripts/invoke.py --from_file -
```
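Supporting both a file path and `-` for standard input is a one-line dispatch; a sketch of the pattern with an invented helper name:

```python
import sys

def open_prompt_stream(path):
    # "-" selects standard input; anything else is read as a text file with
    # one prompt per line.
    return sys.stdin if path == "-" else open(path, "r", encoding="utf-8")
```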
---
@@ -114,7 +114,7 @@ is depth there, so the enclosing frame is actually a cube.
### "blue sphere:0.25 red cube:0.75 hybrid"
<img src="../assets/prompt-blending/blue-sphere:0.25-red-cube:0.75-hybrid.png" width=256>
<img src="../assets/prompt-blending/blue-sphere-0.25-red-cube-0.75-hybrid.png" width=256>
Now that's interesting. We get neither a blue sphere nor a red cube,
but a red sphere embedded in a brick wall, which represents a melding
@@ -123,14 +123,14 @@ representations. Where is Ludwig Wittgenstein when you need him?
### "blue sphere:0.75 red cube:0.25 hybrid"
<img src="../assets/prompt-blending/blue-sphere:0.75-red-cube:0.25-hybrid.png" width=256>
<img src="../assets/prompt-blending/blue-sphere-0.75-red-cube-0.25-hybrid.png" width=256>
Definitely more blue-spherey. The cube is gone entirely, but it's
really cool abstract art.
### "blue sphere:0.5 red cube:0.5 hybrid"
<img src="../assets/prompt-blending/blue-sphere:0.5-red-cube:0.5-hybrid.png" width=256>
<img src="../assets/prompt-blending/blue-sphere-0.5-red-cube-0.5-hybrid.png" width=256>
Whoa...! I see blue and red, but no spheres or cubes. Is the word
"hybrid" summoning up the concept of some sort of scifi creature?
@@ -138,7 +138,7 @@ Let's find out.
### "blue sphere:0.5 red cube:0.5"
<img src="../assets/prompt-blending/blue-sphere:0.5-red-cube:0.5.png" width=256>
<img src="../assets/prompt-blending/blue-sphere-0.5-red-cube-0.5.png" width=256>
Indeed, removing the word "hybrid" produces an image that is more like
what we'd expect.
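The web backend imports `split_weighted_subprompts` from `ldm.invoke.conditioning`, and the captions above suggest its contract: split a prompt into `(text, weight)` pairs. A simplified sketch that leaves out the special handling of the trailing `hybrid` keyword:

```python
import re

def split_weighted_subprompts_sketch(text, default_weight=1.0):
    # Simplified: weighted pieces look like "some words:0.25"; unweighted
    # trailing text (such as "hybrid") receives the default weight.
    parsed, pos = [], 0
    for match in re.finditer(r"(.*?):\s*([-+]?\d*\.?\d+)\s*", text):
        parsed.append((match.group(1).strip(), float(match.group(2))))
        pos = match.end()
    leftover = text[pos:].strip()
    if leftover:
        parsed.append((leftover, default_weight))
    return parsed

print(split_weighted_subprompts_sketch("blue sphere:0.25 red cube:0.75 hybrid"))
# [('blue sphere', 0.25), ('red cube', 0.75), ('hybrid', 1.0)]
```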

View File

@@ -56,22 +56,22 @@ configs/stable_diffusion/v1-finetune.yaml (currently set to 4000000)
## **Run the Model**
Once the model is trained, specify the trained .pt or .bin file when starting
dream using
invoke using
```bash
python3 ./scripts/dream.py --embedding_path /path/to/embedding.pt
python3 ./scripts/invoke.py --embedding_path /path/to/embedding.pt
```
Then, to utilize your subject at the dream prompt
Then, to utilize your subject at the invoke prompt
```bash
dream> "a photo of *"
invoke> "a photo of *"
```
This also works with img2img
```bash
dream> "waterfall and rainbow in the style of *" --init_img=./init-images/crude_drawing.png --strength=0.5 -s100 -n4
invoke> "waterfall and rainbow in the style of *" --init_img=./init-images/crude_drawing.png --strength=0.5 -s100 -n4
```
For .pt files it's also possible to train multiple tokens (modify the

View File

@@ -34,7 +34,7 @@ First we let SD create a series of images in the usual way, in this case
requesting six iterations:
```bash
dream> lucy lawless as xena, warrior princess, character portrait, high resolution -n6
invoke> lucy lawless as xena, warrior princess, character portrait, high resolution -n6
...
Outputs:
./outputs/Xena/000001.1579445059.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -S1579445059
@@ -57,7 +57,7 @@ differing by a variation amount of 0.2. This number ranges from `0` to `1.0`,
with higher numbers being larger amounts of variation.
```bash
dream> "prompt" -n6 -S3357757885 -v0.2
invoke> "prompt" -n6 -S3357757885 -v0.2
...
Outputs:
./outputs/Xena/000002.784039624.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -V 784039624:0.2 -S3357757885
@@ -89,7 +89,7 @@ We combine the two variations using `-V` (`--with_variations`). Again, we must
provide the seed for the originally-chosen image in order for this to work.
```bash
dream> "prompt" -S3357757885 -V3647897225,0.1,1614299449,0.1
invoke> "prompt" -S3357757885 -V3647897225,0.1,1614299449,0.1
Outputs:
./outputs/Xena/000003.1614299449.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -V 3647897225:0.1,1614299449:0.1 -S3357757885
```
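Combining variations with `-V seed:weight,...` amounts to mixing the initial noise generated from each seed. A rough PyTorch sketch of one plausible implementation, not InvokeAI's actual code:

```python
import torch

def blended_noise(base_seed, variations, shape=(1, 4, 64, 64)):
    # variations: [(seed, weight), ...]; the weight left over after the listed
    # variations stays with the base seed's noise. The default shape is the
    # SD latent shape for a 512x512 output.
    def noise(seed):
        gen = torch.Generator().manual_seed(seed)
        return torch.randn(shape, generator=gen)

    remaining = 1.0 - sum(weight for _, weight in variations)
    mixed = remaining * noise(base_seed)
    for seed, weight in variations:
        mixed = mixed + weight * noise(seed)
    return mixed

# -S3357757885 -V3647897225,0.1,1614299449,0.1 from the example above:
latents = blended_noise(3357757885, [(3647897225, 0.1), (1614299449, 0.1)])
```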
@@ -105,7 +105,7 @@ latter, using both the `-V` (combining) and `-v` (variation strength) options.
Note that we use `-n6` to generate 6 variations:
```bash
dream> "prompt" -S3357757885 -V3647897225,0.1,1614299449,0.1 -v0.05 -n6
invoke> "prompt" -S3357757885 -V3647897225,0.1,1614299449,0.1 -v0.05 -n6
Outputs:
./outputs/Xena/000004.3279757577.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -V 3647897225:0.1,1614299449:0.1,3279757577:0.05 -S3357757885
./outputs/Xena/000004.2853129515.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -V 3647897225:0.1,1614299449:0.1,2853129515:0.05 -S3357757885

View File

@@ -5,11 +5,11 @@ title: Barebones Web Server
# :material-web: Barebones Web Server
As of version 1.10, this distribution comes with a bare bones web server (see
screenshot). To use it, run the `dream.py` script by adding the `--web`
screenshot). To use it, run the `invoke.py` script by adding the `--web`
option.
```bash
(ldm) ~/stable-diffusion$ python3 scripts/dream.py --web
(ldm) ~/stable-diffusion$ python3 scripts/invoke.py --web
```
You can then connect to the server by pointing your web browser at
@@ -18,4 +18,4 @@ http://localhost:9090, or to the network name or IP address of the server.
Kudos to [Tesseract Cat](https://github.com/TesseractCat) for contributing this
code, and to [dagf2101](https://github.com/dagf2101) for refining it.
![Dream Web Server](../assets/dream_web_server.png)
![Dream Web Server](../assets/invoke_web_server.png)

View File

@@ -51,7 +51,7 @@ rm ${PIP_LOG}
### **QUESTION**
`dream.py` crashes with the complaint that it can't find `ldm.simplet2i.py`. Or it complains that
`invoke.py` crashes with the complaint that it can't find `ldm.simplet2i.py`. Or it complains that
a function is being passed incorrect parameters.
### **SOLUTION**
@@ -63,7 +63,7 @@ Reinstall the stable diffusion modules. Enter the `stable-diffusion` directory a
### **QUESTION**
`dream.py` dies, complaining of various missing modules, none of which starts with `ldm`.
`invoke.py` dies, complaining of various missing modules, none of which starts with `ldm`.
### **SOLUTION**

View File

@@ -28,7 +28,7 @@ template: main.html
[CI checks on dev badge]: https://flat.badgen.net/github/checks/lstein/stable-diffusion/development?label=CI%20status%20on%20dev&cache=900&icon=github
[CI checks on dev link]: https://github.com/lstein/stable-diffusion/actions?query=branch%3Adevelopment
[CI checks on main badge]: https://flat.badgen.net/github/checks/lstein/stable-diffusion/main?label=CI%20status%20on%20main&cache=900&icon=github
[CI checks on main link]: https://github.com/lstein/stable-diffusion/actions/workflows/test-dream-conda.yml
[CI checks on main link]: https://github.com/lstein/stable-diffusion/actions/workflows/test-invoke-conda.yml
[discord badge]: https://flat.badgen.net/discord/members/htRgbc7e?icon=discord
[discord link]: https://discord.com/invite/htRgbc7e
[github forks badge]: https://flat.badgen.net/github/forks/lstein/stable-diffusion?icon=github
@@ -85,21 +85,21 @@ You will need one of the following:
!!! note
If you have an Nvidia 10xx series card (e.g. the 1080ti), please run the dream script in
If you have an Nvidia 10xx series card (e.g. the 1080ti), please run the invoke script in
full-precision mode as shown below.
Similarly, specify full-precision mode on Apple M1 hardware.
To run in full-precision mode, start `dream.py` with the `--full_precision` flag:
To run in full-precision mode, start `invoke.py` with the `--full_precision` flag:
```bash
(ldm) ~/stable-diffusion$ python scripts/dream.py --full_precision
(ldm) ~/stable-diffusion$ python scripts/invoke.py --full_precision
```
## :octicons-log-16: Latest Changes
### vNEXT <small>(TODO 2022)</small>
- Deprecated `--full_precision` / `-F`. Simply omit it and `dream.py` will auto
- Deprecated `--full_precision` / `-F`. Simply omit it and `invoke.py` will auto
configure. To switch away from auto use the new flag like `--precision=float32`.
### v1.14 <small>(11 September 2022)</small>
@@ -124,7 +124,7 @@ You will need one of the following:
[Kevin Gibbons](https://github.com/bakkot)
- A new configuration file scheme that allows new models (including upcoming stable-diffusion-v1.5)
to be added without altering the code. ([David Wager](https://github.com/maddavid12))
- Can specify --grid on dream.py command line as the default.
- Can specify --grid on invoke.py command line as the default.
- Miscellaneous internal bug and stability fixes.
- Works on M1 Apple hardware.
- Multiple bug fixes.

View File

@@ -136,7 +136,7 @@ $TAG_STABLE_DIFFUSION
## Startup
If you're on a **Linux container** the `dream` script is **automatically
If you're on a **Linux container** the `invoke` script is **automatically
started** and the output dir set to the Docker volume you created earlier.
If you're **directly on macOS follow these startup instructions**.
@@ -148,14 +148,14 @@ half-precision requires autocast and won't work.
By default the images are saved in `outputs/img-samples/`.
```Shell
python3 scripts/dream.py --full_precision
python3 scripts/invoke.py --full_precision
```
You'll get the script's prompt. You can see available options or quit.
```Shell
dream> -h
dream> q
invoke> -h
invoke> q
```
## Text to Image
@@ -166,10 +166,10 @@ Then increase steps to 100 or more for good (but slower) results.
The prompt can be in quotes or not.
```Shell
dream> The hulk fighting with sheldon cooper -s5 -n1
dream> "woman closeup highly detailed" -s 150
invoke> The hulk fighting with sheldon cooper -s5 -n1
invoke> "woman closeup highly detailed" -s 150
# Reuse previous seed and apply face restoration
dream> "woman closeup highly detailed" --steps 150 --seed -1 -G 0.75
invoke> "woman closeup highly detailed" --steps 150 --seed -1 -G 0.75
```
You'll need to experiment to see if face restoration is making it better or
@@ -210,28 +210,28 @@ If you're on a Docker container, copy your input image into the Docker volume
docker cp /Users/<your-user>/Pictures/sketch-mountains-input.jpg dummy:/data/
```
Try it out generating an image (or more). The `dream` script needs absolute
Try it out generating an image (or more). The `invoke` script needs absolute
paths to find the image so don't use `~`.
If you're on your Mac
```Shell
dream> "A fantasy landscape, trending on artstation" -I /Users/<your-user>/Pictures/sketch-mountains-input.jpg --strength 0.75 --steps 100 -n4
invoke> "A fantasy landscape, trending on artstation" -I /Users/<your-user>/Pictures/sketch-mountains-input.jpg --strength 0.75 --steps 100 -n4
```
If you're on a Linux container on your Mac
```Shell
dream> "A fantasy landscape, trending on artstation" -I /data/sketch-mountains-input.jpg --strength 0.75 --steps 50 -n1
invoke> "A fantasy landscape, trending on artstation" -I /data/sketch-mountains-input.jpg --strength 0.75 --steps 50 -n1
```
## Web Interface
You can use the `dream` script with a graphical web interface. Start the web
You can use the `invoke` script with a graphical web interface. Start the web
server with:
```Shell
python3 scripts/dream.py --full_precision --web
python3 scripts/invoke.py --full_precision --web
```
If it's running on your Mac point your Mac web browser to http://127.0.0.1:9090

View File

@@ -89,16 +89,16 @@ This will create the InvokeAI folder where you will follow the rest of the steps.
```
# for the pre-release weights use the -l or --laion400m switch
(ldm) ~/InvokeAI$ python3 scripts/dream.py -l
(ldm) ~/InvokeAI$ python3 scripts/invoke.py -l
# for the post-release weights do not use the switch
(ldm) ~/InvokeAI$ python3 scripts/dream.py
(ldm) ~/InvokeAI$ python3 scripts/invoke.py
# for additional configuration switches and arguments, use -h or --help
(ldm) ~/InvokeAI$ python3 scripts/dream.py -h
(ldm) ~/InvokeAI$ python3 scripts/invoke.py -h
```
9. Subsequently, to relaunch the script, be sure to run "conda activate ldm" (step 5, second command), enter the `InvokeAI` directory, and then launch the dream script (step 8). If you forget to activate the ldm environment, the script will fail with multiple `ModuleNotFound` errors.
9. Subsequently, to relaunch the script, be sure to run "conda activate ldm" (step 5, second command), enter the `InvokeAI` directory, and then launch the invoke script (step 8). If you forget to activate the ldm environment, the script will fail with multiple `ModuleNotFound` errors.
## Updating to newer versions of the script

View File

@@ -137,10 +137,10 @@ ln -s "$PATH_TO_CKPT/sd-v1-4.ckpt" \
python scripts/preload_models.py
# now you can run SD in CLI mode
python scripts/dream.py --full_precision # (1)!
python scripts/invoke.py --full_precision # (1)!
# or run the web interface!
python scripts/dream.py --web
python scripts/invoke.py --web
# The original scripts should work as well.
python scripts/orig_scripts/txt2img.py \
@@ -155,7 +155,7 @@ it isn't required but won't hurt.
## Common problems
After you've followed all the instructions and tried to run dream.py, you might
After you've followed all the instructions and tried to run invoke.py, you might
get several errors. Here are the errors I've seen and found solutions for.
### Is it slow?
@@ -220,9 +220,9 @@ There are several causes of these errors:
"(ldm)" then you activated it. If it begins with "(base)" or something else
you haven't.
2. You might've run `./scripts/preload_models.py` or `./scripts/dream.py`
2. You might've run `./scripts/preload_models.py` or `./scripts/invoke.py`
instead of `python ./scripts/preload_models.py` or
`python ./scripts/dream.py`. The cause of this error is long so it's below.
`python ./scripts/invoke.py`. The cause of this error is long so it's below.
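To make the distinction concrete, here is a sketch of both invocations, run from the repository root with the `ldm` environment active:
```bash
# may fail with ModuleNotFound errors -- the explanation is below
./scripts/preload_models.py
./scripts/invoke.py

# runs the scripts with the interpreter from the active environment
python ./scripts/preload_models.py
python ./scripts/invoke.py
```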
<!-- I could not find out where the error is, otherwise would have marked it as a footnote -->
@@ -519,7 +519,7 @@ use ARM packages, and use `nomkl` as described above.
May appear when just starting to generate, e.g.:
```bash
dream> clouds
invoke> clouds
Generating: 0%| | 0/1 [00:00<?, ?it/s]/Users/[...]/dev/stable-diffusion/ldm/modules/embedding_manager.py:152: UserWarning: The operator 'aten::nonzero' is not currently supported on the MPS backend and will fall back to run on the CPU. This may have performance implications. (Triggered internally at /Users/runner/work/_temp/anaconda/conda-bld/pytorch_1662016319283/work/aten/src/ATen/mps/MPSFallback.mm:11.)
placeholder_idx = torch.where(
loc("mps_add"("(mpsFileLoc): /AppleInternal/Library/BuildRoots/20d6c351-ee94-11ec-bcaf-7247572f23b4/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/MPSGraphUtilities.mm":219:0)): error: input types 'tensor<2x1280xf32>' and 'tensor<*xf16>' are not broadcast compatible

View File

@@ -101,13 +101,13 @@ you may instead create a shortcut to it from within `models\ldm\stable-diffusion
```bash
# for the pre-release weights
python scripts\dream.py -l
python scripts\invoke.py -l
# for the post-release weights
python scripts\dream.py
python scripts\invoke.py
```
10. Subsequently, to relaunch the script, first activate the Anaconda command window (step 3), enter the InvokeAI directory (step 5, `cd \path\to\InvokeAI`), run `conda activate ldm` (step 6b), and then launch the dream script (step 9).
10. Subsequently, to relaunch the script, first activate the Anaconda command window (step 3), enter the InvokeAI directory (step 5, `cd \path\to\InvokeAI`), run `conda activate ldm` (step 6b), and then launch the invoke script (step 9).
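Spelled out in the Anaconda command window, the relaunch sequence is a sketch like this (substitute your actual install path):
```bash
cd \path\to\InvokeAI
conda activate ldm
python scripts\invoke.py
```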
**Note:** Tildebyte has written an alternative
["Easy peasy Windows install"](https://github.com/invoke-ai/InvokeAI/wiki/Easy-peasy-Windows-install)

Binary file not shown.

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

frontend/dist/assets/index.bfda55e5.js vendored Normal file

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@@ -6,8 +6,8 @@
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>InvokeAI - A Stable Diffusion Toolkit</title>
<link rel="shortcut icon" type="icon" href="/assets/favicon.0d253ced.ico" />
<script type="module" crossorigin src="/assets/index.d9916e7a.js"></script>
<link rel="stylesheet" href="/assets/index.853a336f.css">
<script type="module" crossorigin src="/assets/index.bfda55e5.js"></script>
<link rel="stylesheet" href="/assets/index.22ee377a.css">
</head>
<body>

View File

@@ -23,6 +23,7 @@
"react": "^18.2.0",
"react-dom": "^18.2.0",
"react-dropzone": "^14.2.2",
"react-hotkeys-hook": "^3.4.7",
"react-icons": "^4.4.0",
"react-redux": "^8.0.2",
"redux-persist": "^6.0.0",

View File

@@ -15,3 +15,7 @@
width: $app-width;
height: $app-height;
}
.app-console {
z-index: 9999;
}

View File

@@ -26,7 +26,9 @@ const App = () => {
<SiteHeader />
<InvokeTabs />
</div>
<Console />
<div className="app-console">
<Console />
</div>
</div>
) : (
<Loading />

View File

@@ -6,6 +6,7 @@ import {
addLogEntry,
setIsProcessing,
} from '../../features/system/systemSlice';
import { tabMap, tab_dict } from '../../features/tabs/InvokeTabs';
import * as InvokeAI from '../invokeai';
/**
@@ -23,8 +24,14 @@ const makeSocketIOEmitters = (
emitGenerateImage: () => {
dispatch(setIsProcessing(true));
const options = { ...getState().options };
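// never send an initial image from the txt2img tab, even if one is set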
if (tabMap[options.activeTab] === 'txt2img') {
options.shouldUseInitImage = false;
}
const { generationParameters, esrganParameters, gfpganParameters } =
frontendToBackendParameters(getState().options, getState().system);
frontendToBackendParameters(options, getState().system);
socketio.emit(
'generateImage',

View File

@@ -0,0 +1,65 @@
import { Button, useToast } from '@chakra-ui/react';
import React, { useCallback } from 'react';
import { FileRejection } from 'react-dropzone';
import { useAppDispatch } from '../../app/store';
import ImageUploader from '../../features/options/ImageUploader';
interface InvokeImageUploaderProps {
label?: string;
icon?: any;
onMouseOver?: any;
onMouseOut?: any;
dispatcher: any;
styleClass?: string;
}
export default function InvokeImageUploader(props: InvokeImageUploaderProps) {
const { label, icon, dispatcher, styleClass, onMouseOver, onMouseOut } =
props;
const toast = useToast();
const dispatch = useAppDispatch();
// Callbacks for handling file upload attempts
const fileAcceptedCallback = useCallback(
(file: File) => dispatch(dispatcher(file)),
[dispatch, dispatcher]
);
const fileRejectionCallback = useCallback(
(rejection: FileRejection) => {
const msg = rejection.errors.reduce(
(acc: string, cur: { message: string }) => acc + '\n' + cur.message,
''
);
toast({
title: 'Upload failed',
description: msg,
status: 'error',
isClosable: true,
});
},
[toast]
);
return (
<ImageUploader
fileAcceptedCallback={fileAcceptedCallback}
fileRejectionCallback={fileRejectionCallback}
styleClass={styleClass}
>
<Button
size={'sm'}
fontSize={'md'}
fontWeight={'normal'}
onMouseOver={onMouseOver}
onMouseOut={onMouseOut}
leftIcon={icon}
width={'100%'}
>
{label ? label : null}
</Button>
</ImageUploader>
);
}

View File

@@ -18,6 +18,7 @@ export const optionsSelector = createSelector(
maskPath: options.maskPath,
initialImagePath: options.initialImagePath,
seed: options.seed,
activeTab: options.activeTab,
};
},
{
@@ -55,6 +56,7 @@ const useCheckParameters = (): boolean => {
maskPath,
initialImagePath,
seed,
activeTab,
} = useAppSelector(optionsSelector);
const { isProcessing, isConnected } = useAppSelector(systemSelector);
@@ -65,6 +67,10 @@ const useCheckParameters = (): boolean => {
return false;
}
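// on the img2img tab (activeTab === 1), an initial image is required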
if (prompt && !initialImagePath && activeTab === 1) {
return false;
}
// Cannot generate with a mask without img2img
if (maskPath && !initialImagePath) {
return false;
@@ -100,6 +106,7 @@ const useCheckParameters = (): boolean => {
shouldGenerateVariations,
seedWeights,
seed,
activeTab,
]);
};

View File

@@ -6,9 +6,11 @@ import * as InvokeAI from '../../app/invokeai';
import { useAppDispatch, useAppSelector } from '../../app/store';
import { RootState } from '../../app/store';
import {
setActiveTab,
setAllParameters,
setInitialImagePath,
setSeed,
setShouldShowImageDetails,
} from '../options/optionsSlice';
import DeleteImageModal from './DeleteImageModal';
import { SystemState } from '../system/systemSlice';
@@ -19,6 +21,8 @@ import { MdDelete, MdFace, MdHd, MdImage, MdInfo } from 'react-icons/md';
import InvokePopover from './InvokePopover';
import UpscaleOptions from '../options/AdvancedOptions/Upscale/UpscaleOptions';
import FaceRestoreOptions from '../options/AdvancedOptions/FaceRestore/FaceRestoreOptions';
import { useHotkeys } from 'react-hotkeys-hook';
import { useToast } from '@chakra-ui/react';
const systemSelector = createSelector(
(state: RootState) => state.system,
@@ -39,21 +43,21 @@ const systemSelector = createSelector(
type CurrentImageButtonsProps = {
image: InvokeAI.Image;
shouldShowImageDetails: boolean;
setShouldShowImageDetails: (b: boolean) => void;
};
/**
* Row of buttons for common actions:
* Use as init image, use all params, use seed, upscale, fix faces, details, delete.
*/
const CurrentImageButtons = ({
image,
shouldShowImageDetails,
setShouldShowImageDetails,
}: CurrentImageButtonsProps) => {
const CurrentImageButtons = ({ image }: CurrentImageButtonsProps) => {
const dispatch = useAppDispatch();
const shouldShowImageDetails = useAppSelector(
(state: RootState) => state.options.shouldShowImageDetails
);
const toast = useToast();
const intermediateImage = useAppSelector(
(state: RootState) => state.gallery.intermediateImage
);
@@ -69,28 +73,176 @@ const CurrentImageButtons = ({
const { isProcessing, isConnected, isGFPGANAvailable, isESRGANAvailable } =
useAppSelector(systemSelector);
const handleClickUseAsInitialImage = () =>
const handleClickUseAsInitialImage = () => {
dispatch(setInitialImagePath(image.url));
dispatch(setActiveTab(1));
};
useHotkeys(
'shift+i',
() => {
if (image) {
handleClickUseAsInitialImage();
toast({
title: 'Sent To Image To Image',
status: 'success',
duration: 2500,
isClosable: true,
});
} else {
toast({
title: 'No Image Loaded',
description: 'No image found to send to image to image module.',
status: 'error',
duration: 2500,
isClosable: true,
});
}
},
[image]
);
const handleClickUseAllParameters = () =>
dispatch(setAllParameters(image.metadata));
useHotkeys(
'a',
() => {
if (['txt2img', 'img2img'].includes(image?.metadata?.image?.type)) {
handleClickUseAllParameters();
toast({
title: 'Parameters Set',
status: 'success',
duration: 2500,
isClosable: true,
});
} else {
toast({
title: 'Parameters Not Set',
description: 'No metadata found for this image.',
status: 'error',
duration: 2500,
isClosable: true,
});
}
},
[image]
);
// Non-null assertion: this button is disabled if there is no seed.
// eslint-disable-next-line @typescript-eslint/no-non-null-assertion
const handleClickUseSeed = () => dispatch(setSeed(image.metadata.image.seed));
useHotkeys(
's',
() => {
if (image?.metadata?.image?.seed) {
handleClickUseSeed();
toast({
title: 'Seed Set',
status: 'success',
duration: 2500,
isClosable: true,
});
} else {
toast({
title: 'Seed Not Set',
description: 'Could not find seed for this image.',
status: 'error',
duration: 2500,
isClosable: true,
});
}
},
[image]
);
const handleClickUpscale = () => dispatch(runESRGAN(image));
useHotkeys(
'u',
() => {
if (
isESRGANAvailable &&
Boolean(!intermediateImage) &&
isConnected &&
!isProcessing &&
upscalingLevel
) {
handleClickUpscale();
} else {
toast({
title: 'Upscaling Failed',
status: 'error',
duration: 2500,
isClosable: true,
});
}
},
[
image,
isESRGANAvailable,
intermediateImage,
isConnected,
isProcessing,
upscalingLevel,
]
);
const handleClickFixFaces = () => dispatch(runGFPGAN(image));
useHotkeys(
'r',
() => {
if (
isGFPGANAvailable &&
Boolean(!intermediateImage) &&
isConnected &&
!isProcessing &&
gfpganStrength
) {
handleClickFixFaces();
} else {
toast({
title: 'Face Restoration Failed',
status: 'error',
duration: 2500,
isClosable: true,
});
}
},
[
image,
isGFPGANAvailable,
intermediateImage,
isConnected,
isProcessing,
gfpganStrength,
]
);
const handleClickShowImageDetails = () =>
setShouldShowImageDetails(!shouldShowImageDetails);
dispatch(setShouldShowImageDetails(!shouldShowImageDetails));
useHotkeys(
'i',
() => {
if (image) {
handleClickShowImageDetails();
} else {
toast({
title: 'Failed to load metadata',
status: 'error',
duration: 2500,
isClosable: true,
});
}
},
[image, shouldShowImageDetails]
);
return (
<div className="current-image-options">
<IAIIconButton
icon={<MdImage />}
tooltip="Use As Initial Image"
aria-label="Use As Initial Image"
tooltip="Send To Image To Image"
aria-label="Send To Image To Image"
onClick={handleClickUseAsInitialImage}
/>

View File

@@ -11,23 +11,7 @@
border-radius: 0.5rem;
}
.current-image-display-placeholder {
background-color: var(--background-color-secondary);
display: flex;
align-items: center;
justify-content: center;
width: 100%;
height: 100%;
svg {
width: 10rem;
height: 10rem;
color: var(--svg-color);
}
}
.current-image-tools {
grid-area: current-image-tools;
width: 100%;
height: 100%;
display: grid;
@@ -58,34 +42,69 @@
align-items: center;
display: grid;
width: 100%;
grid-template-areas: 'current-image-content';
img {
grid-area: current-image-content;
background-color: var(--img2img-img-bg-color);
border-radius: 0.5rem;
object-fit: contain;
width: auto;
height: $app-gallery-height;
max-height: $app-gallery-height;
}
}
.current-image-metadata-viewer {
border-radius: 0.5rem;
position: absolute;
top: 0;
left: 0;
width: calc(100% - 2rem);
padding: 0.5rem;
margin-left: 1rem;
background-color: var(--metadata-bg-color);
z-index: 1;
overflow: scroll;
height: calc($app-metadata-height - 1rem);
.current-image-metadata {
grid-area: current-image-preview;
}
.current-image-json-viewer {
border-radius: 0.5rem;
margin: 0 0.5rem 1rem 0.5rem;
padding: 1rem;
overflow-x: scroll;
word-break: break-all;
background-color: var(--metadata-json-bg-color);
.current-image-next-prev-buttons {
grid-area: current-image-content;
display: flex;
justify-content: space-between;
z-index: 1;
height: 100%;
pointer-events: none;
}
.next-prev-button-trigger-area {
width: 7rem;
height: 100%;
width: 100%;
display: grid;
align-items: center;
pointer-events: auto;
&.prev-button-trigger-area {
justify-content: flex-start;
}
&.next-button-trigger-area {
justify-content: flex-end;
}
}
.next-prev-button {
font-size: 4rem;
fill: var(--white);
filter: drop-shadow(0 0 1rem var(--text-color-secondary));
opacity: 70%;
}
.current-image-display-placeholder {
background-color: var(--background-color-secondary);
display: grid;
display: flex;
align-items: center;
justify-content: center;
width: 100%;
height: 100%;
border-radius: 0.5rem;
svg {
width: 10rem;
height: 10rem;
color: var(--svg-color);
}
}

View File

@@ -1,10 +1,8 @@
import { Image } from '@chakra-ui/react';
import { useAppSelector } from '../../app/store';
import { RootState } from '../../app/store';
import { useState } from 'react';
import ImageMetadataViewer from './ImageMetadataViewer';
import { RootState, useAppSelector } from '../../app/store';
import CurrentImageButtons from './CurrentImageButtons';
import { MdPhoto } from 'react-icons/md';
import CurrentImagePreview from './CurrentImagePreview';
import ImageMetadataViewer from './ImageMetaDataViewer/ImageMetadataViewer';
/**
* Displays the current image if there is one, plus associated actions.
@@ -14,33 +12,24 @@ const CurrentImageDisplay = () => {
(state: RootState) => state.gallery
);
const [shouldShowImageDetails, setShouldShowImageDetails] =
useState<boolean>(false);
const shouldShowImageDetails = useAppSelector(
(state: RootState) => state.options.shouldShowImageDetails
);
const imageToDisplay = intermediateImage || currentImage;
return imageToDisplay ? (
<div className="current-image-display">
<div className="current-image-tools">
<CurrentImageButtons
<CurrentImageButtons image={imageToDisplay} />
</div>
<CurrentImagePreview imageToDisplay={imageToDisplay} />
{shouldShowImageDetails && (
<ImageMetadataViewer
image={imageToDisplay}
shouldShowImageDetails={shouldShowImageDetails}
setShouldShowImageDetails={setShouldShowImageDetails}
styleClass="current-image-metadata"
/>
</div>
<div className="current-image-preview">
<Image
src={imageToDisplay.url}
fit="contain"
maxWidth={'100%'}
maxHeight={'100%'}
/>
{shouldShowImageDetails && (
<div className="current-image-metadata-viewer">
<ImageMetadataViewer image={imageToDisplay} />
</div>
)}
</div>
)}
</div>
) : (
<div className="current-image-display-placeholder">

View File

@@ -0,0 +1,105 @@
import { IconButton, Image } from '@chakra-ui/react';
import React, { useState } from 'react';
import { FaAngleLeft, FaAngleRight } from 'react-icons/fa';
import { RootState, useAppDispatch, useAppSelector } from '../../app/store';
import { GalleryState, selectNextImage, selectPrevImage } from './gallerySlice';
import * as InvokeAI from '../../app/invokeai';
import { createSelector } from '@reduxjs/toolkit';
import _ from 'lodash';
const imagesSelector = createSelector(
(state: RootState) => state.gallery,
(gallery: GalleryState) => {
const currentImageIndex = gallery.images.findIndex(
(i) => i.uuid === gallery?.currentImage?.uuid
);
const imagesLength = gallery.images.length;
return {
isOnFirstImage: currentImageIndex === 0,
isOnLastImage:
currentImageIndex !== -1 && currentImageIndex === imagesLength - 1,
};
},
{
memoizeOptions: {
resultEqualityCheck: _.isEqual,
},
}
);
interface CurrentImagePreviewProps {
imageToDisplay: InvokeAI.Image;
}
export default function CurrentImagePreview(props: CurrentImagePreviewProps) {
const { imageToDisplay } = props;
const dispatch = useAppDispatch();
const { isOnFirstImage, isOnLastImage } = useAppSelector(imagesSelector);
const shouldShowImageDetails = useAppSelector(
(state: RootState) => state.options.shouldShowImageDetails
);
const [shouldShowNextPrevButtons, setShouldShowNextPrevButtons] =
useState<boolean>(false);
const handleCurrentImagePreviewMouseOver = () => {
setShouldShowNextPrevButtons(true);
};
const handleCurrentImagePreviewMouseOut = () => {
setShouldShowNextPrevButtons(false);
};
const handleClickPrevButton = () => {
dispatch(selectPrevImage());
};
const handleClickNextButton = () => {
dispatch(selectNextImage());
};
return (
<div className="current-image-preview">
<Image
src={imageToDisplay.url}
fit="contain"
maxWidth={'100%'}
maxHeight={'100%'}
/>
{!shouldShowImageDetails && (
<div className="current-image-next-prev-buttons">
<div
className="next-prev-button-trigger-area prev-button-trigger-area"
onMouseOver={handleCurrentImagePreviewMouseOver}
onMouseOut={handleCurrentImagePreviewMouseOut}
>
{shouldShowNextPrevButtons && !isOnFirstImage && (
<IconButton
aria-label="Previous image"
icon={<FaAngleLeft className="next-prev-button" />}
variant="unstyled"
onClick={handleClickPrevButton}
/>
)}
</div>
<div
className="next-prev-button-trigger-area next-button-trigger-area"
onMouseOver={handleCurrentImagePreviewMouseOver}
onMouseOut={handleCurrentImagePreviewMouseOut}
>
{shouldShowNextPrevButtons && !isOnLastImage && (
<IconButton
aria-label="Next image"
icon={<FaAngleRight className="next-prev-button" />}
variant="unstyled"
onClick={handleClickNextButton}
/>
)}
</div>
</div>
)}
</div>
);
}

View File

@@ -27,6 +27,7 @@ import { deleteImage } from '../../app/socketio/actions';
import { RootState } from '../../app/store';
import { setShouldConfirmOnDelete, SystemState } from '../system/systemSlice';
import * as InvokeAI from '../../app/invokeai';
import { useHotkeys } from 'react-hotkeys-hook';
interface DeleteImageModalProps {
/**
@@ -67,6 +68,14 @@ const DeleteImageModal = forwardRef(
onClose();
};
useHotkeys(
'del',
() => {
shouldConfirmOnDelete ? onOpen() : handleDelete();
},
[image, shouldConfirmOnDelete]
);
const handleChangeShouldConfirmOnDelete = (
e: ChangeEvent<HTMLInputElement>
) => dispatch(setShouldConfirmOnDelete(!e.target.checked));

View File

@@ -7,12 +7,17 @@ import {
Tooltip,
useColorModeValue,
} from '@chakra-ui/react';
import { useAppDispatch } from '../../app/store';
import { RootState, useAppDispatch, useAppSelector } from '../../app/store';
import { setCurrentImage } from './gallerySlice';
import { FaCheck, FaSeedling, FaTrashAlt } from 'react-icons/fa';
import { FaCheck, FaImage, FaSeedling, FaTrashAlt } from 'react-icons/fa';
import DeleteImageModal from './DeleteImageModal';
import { memo, SyntheticEvent, useState } from 'react';
import { setAllParameters, setSeed } from '../options/optionsSlice';
import {
setActiveTab,
setAllParameters,
setInitialImagePath,
setSeed,
} from '../options/optionsSlice';
import * as InvokeAI from '../../app/invokeai';
import { IoArrowUndoCircleOutline } from 'react-icons/io5';
@@ -33,6 +38,10 @@ const HoverableImage = memo((props: HoverableImageProps) => {
const [isHovered, setIsHovered] = useState<boolean>(false);
const dispatch = useAppDispatch();
const activeTab = useAppSelector(
(state: RootState) => state.options.activeTab
);
const checkColor = useColorModeValue('green.600', 'green.300');
const bgColor = useColorModeValue('gray.200', 'gray.700');
const bgGradient = useColorModeValue(
@@ -56,6 +65,14 @@ const HoverableImage = memo((props: HoverableImageProps) => {
dispatch(setSeed(image.metadata.image.seed));
};
const handleSetInitImage = (e: SyntheticEvent) => {
e.stopPropagation();
dispatch(setInitialImagePath(image.url));
if (activeTab !== 1) {
dispatch(setActiveTab(1));
}
};
const handleClickImage = () => dispatch(setCurrentImage(image));
return (
@@ -131,6 +148,16 @@ const HoverableImage = memo((props: HoverableImageProps) => {
/>
</Tooltip>
)}
<Tooltip label="Send To Image To Image">
<IconButton
aria-label="Send To Image To Image"
icon={<FaImage />}
size="xs"
fontSize={16}
variant={'imageHoverIconButton'}
onClickCapture={handleSetInitImage}
/>
</Tooltip>
</Flex>
)}
</Flex>

View File

@@ -1,43 +1,67 @@
@use '../../styles/Mixins/' as *;
.image-gallery-area {
.image-gallery-popup-btn {
@include Button(
$btn-width: 3rem,
$btn-height: 3rem,
$icon-size: 22px,
$btn-color: var(--btn-grey),
$btn-color-hover: var(--btn-grey-hover)
);
}
}
.image-gallery-popup {
background-color: var(--tab-color);
position: fixed !important;
top: 0;
right: 0;
padding: 1rem;
animation: slideOut 0.3s ease-out;
display: grid;
grid-auto-rows: max-content;
row-gap: 1rem;
border-left-width: 0.2rem;
border-color: var(--gallery-resizeable-color);
}
.image-gallery-header {
display: grid;
grid-template-columns: auto max-content;
align-items: center;
h1 {
font-weight: bold;
}
}
.image-gallery-close-btn {
background-color: var(--btn-load-more) !important;
&:hover {
background-color: var(--btn-load-more-hover) !important;
}
}
.image-gallery-container {
display: grid;
row-gap: 1rem;
grid-auto-rows: max-content;
min-width: 16rem;
}
.image-gallery-container-placeholder {
display: grid;
background-color: var(--background-color-secondary);
border-radius: 0.5rem;
place-items: center;
padding: 2rem 0;
p {
color: var(--subtext-color-bright);
}
svg {
width: 5rem;
height: 5rem;
color: var(--svg-color);
}
gap: 1rem;
max-height: $app-gallery-popover-height;
overflow-y: scroll;
@include HideScrollbar;
}
.image-gallery {
display: grid;
grid-template-columns: repeat(2, max-content);
grid-template-columns: repeat(auto-fill, minmax(120px, auto));
gap: 0.6rem;
justify-items: center;
max-height: $app-gallery-height;
overflow-y: scroll;
@include HideScrollbar;
}
.image-gallery-load-more-btn {
background-color: var(--btn-load-more) !important;
font-size: 0.85rem !important;
font-family: Inter;
&:disabled {
&:hover {
@@ -49,3 +73,22 @@
background-color: var(--btn-load-more-hover) !important;
}
}
.image-gallery-container-placeholder {
display: grid;
background-color: var(--background-color-secondary);
border-radius: 0.5rem;
place-items: center;
padding: 2rem 0;
p {
color: var(--subtext-color-bright);
font-family: Inter;
}
svg {
width: 5rem;
height: 5rem;
color: var(--svg-color);
}
}

View File

@@ -1,66 +1,122 @@
import { Button } from '@chakra-ui/react';
import { MdPhotoLibrary } from 'react-icons/md';
import { Button, IconButton } from '@chakra-ui/button';
import { Resizable } from 're-resizable';
import React from 'react';
import { useHotkeys } from 'react-hotkeys-hook';
import { MdClear, MdPhotoLibrary } from 'react-icons/md';
import { requestImages } from '../../app/socketio/actions';
import { RootState, useAppDispatch } from '../../app/store';
import { useAppSelector } from '../../app/store';
import { RootState, useAppDispatch, useAppSelector } from '../../app/store';
import {
selectNextImage,
selectPrevImage,
setShouldShowGallery,
} from './gallerySlice';
import HoverableImage from './HoverableImage';
/**
* Simple image gallery.
*/
const ImageGallery = () => {
const { images, currentImageUuid, areMoreImagesAvailable } = useAppSelector(
(state: RootState) => state.gallery
);
export default function ImageGallery() {
const {
images,
currentImageUuid,
areMoreImagesAvailable,
shouldShowGallery,
} = useAppSelector((state: RootState) => state.gallery);
const dispatch = useAppDispatch();
/**
* I don't like that this needs to rerender whenever the current image is changed.
* What if we have a large number of images? I suppose pagination (planned) will
* mitigate this issue.
*
* TODO: Refactor if performance complaints, or after migrating to new API which supports pagination.
*/
const handleShowGalleryToggle = () => {
dispatch(setShouldShowGallery(!shouldShowGallery));
};
const handleGalleryClose = () => {
dispatch(setShouldShowGallery(false));
};
const handleClickLoadMore = () => {
dispatch(requestImages());
};
useHotkeys(
'g',
() => {
handleShowGalleryToggle();
},
[shouldShowGallery]
);
useHotkeys(
'left',
() => {
dispatch(selectPrevImage());
},
[]
);
useHotkeys(
'right',
() => {
dispatch(selectNextImage());
},
[]
);
return (
<div className="image-gallery-container">
{images.length ? (
<>
<p>
<strong>Your Invocations</strong>
</p>
<div className="image-gallery">
{images.map((image) => {
const { uuid } = image;
const isSelected = currentImageUuid === uuid;
return (
<HoverableImage
key={uuid}
image={image}
isSelected={isSelected}
/>
);
})}
</div>
</>
) : (
<div className="image-gallery-container-placeholder">
<div className="image-gallery-area">
{!shouldShowGallery && (
<Button
colorScheme="teal"
onClick={handleShowGalleryToggle}
className="image-gallery-popup-btn"
>
<MdPhotoLibrary />
<p>No Images In Gallery</p>
</div>
</Button>
)}
{shouldShowGallery && (
<Resizable
defaultSize={{ width: '300', height: '100%' }}
minWidth={'300'}
className="image-gallery-popup"
>
<div className="image-gallery-header">
<h1>Your Invocations</h1>
<IconButton
size={'sm'}
aria-label={'Close Gallery'}
onClick={handleGalleryClose}
className="image-gallery-close-btn"
icon={<MdClear />}
/>
</div>
<div className="image-gallery-container">
{images.length ? (
<div className="image-gallery">
{images.map((image) => {
const { uuid } = image;
const isSelected = currentImageUuid === uuid;
return (
<HoverableImage
key={uuid}
image={image}
isSelected={isSelected}
/>
);
})}
</div>
) : (
<div className="image-gallery-container-placeholder">
<MdPhotoLibrary />
<p>No Images In Gallery</p>
</div>
)}
<Button
onClick={handleClickLoadMore}
isDisabled={!areMoreImagesAvailable}
className="image-gallery-load-more-btn"
>
{areMoreImagesAvailable ? 'Load More' : 'All Images Loaded'}
</Button>
</div>
</Resizable>
)}
<Button
onClick={handleClickLoadMore}
isDisabled={!areMoreImagesAvailable}
className="image-gallery-load-more-btn"
>
{areMoreImagesAvailable ? 'Load More' : 'All Images Loaded'}
</Button>
</div>
);
};
export default ImageGallery;
}

View File

@@ -0,0 +1,129 @@
import {
Button,
Drawer,
DrawerBody,
DrawerCloseButton,
DrawerContent,
DrawerHeader,
useDisclosure,
} from '@chakra-ui/react';
import React from 'react';
import { useHotkeys } from 'react-hotkeys-hook';
import { MdPhotoLibrary } from 'react-icons/md';
import { requestImages } from '../../app/socketio/actions';
import { RootState, useAppDispatch } from '../../app/store';
import { useAppSelector } from '../../app/store';
import { selectNextImage, selectPrevImage } from './gallerySlice';
import HoverableImage from './HoverableImage';
/**
* Simple image gallery.
*/
const ImageGalleryOld = () => {
const { images, currentImageUuid, areMoreImagesAvailable } = useAppSelector(
(state: RootState) => state.gallery
);
const dispatch = useAppDispatch();
const { isOpen, onOpen, onClose } = useDisclosure();
/**
* I don't like that this needs to rerender whenever the current image is changed.
* What if we have a large number of images? I suppose pagination (planned) will
* mitigate this issue.
*
* TODO: Refactor if performance complaints, or after migrating to new API which supports pagination.
*/
const handleClickLoadMore = () => {
dispatch(requestImages());
};
useHotkeys(
'g',
() => {
if (isOpen) {
onClose();
} else {
onOpen();
}
},
[isOpen]
);
useHotkeys(
'left',
() => {
dispatch(selectPrevImage());
},
[]
);
useHotkeys(
'right',
() => {
dispatch(selectNextImage());
},
[]
);
return (
<div className="image-gallery-area">
<Button
colorScheme="teal"
onClick={onOpen}
className="image-gallery-popup-btn"
>
<MdPhotoLibrary />
</Button>
<Drawer
isOpen={isOpen}
placement="right"
onClose={onClose}
autoFocus={false}
trapFocus={false}
closeOnOverlayClick={false}
>
<DrawerContent className="image-gallery-popup">
<div className="image-gallery-header">
<DrawerHeader>Your Invocations</DrawerHeader>
<DrawerCloseButton />
</div>
<DrawerBody className="image-gallery-body">
<div className="image-gallery-container">
{images.length ? (
<div className="image-gallery">
{images.map((image) => {
const { uuid } = image;
const isSelected = currentImageUuid === uuid;
return (
<HoverableImage
key={uuid}
image={image}
isSelected={isSelected}
/>
);
})}
</div>
) : (
<div className="image-gallery-container-placeholder">
<MdPhotoLibrary />
<p>No Images In Gallery</p>
</div>
)}
<Button
onClick={handleClickLoadMore}
isDisabled={!areMoreImagesAvailable}
className="image-gallery-load-more-btn"
>
{areMoreImagesAvailable ? 'Load More' : 'All Images Loaded'}
</Button>
</div>
</DrawerBody>
</DrawerContent>
</Drawer>
</div>
);
};
export default ImageGalleryOld;

View File

@@ -0,0 +1,20 @@
@use '../../../styles/Mixins/' as *;
.image-metadata-viewer {
width: 100%;
border-radius: 0.5rem;
padding: 1rem;
background-color: var(--metadata-bg-color);
overflow: scroll;
max-height: $app-metadata-height;
z-index: 10;
}
.image-json-viewer {
border-radius: 0.5rem;
margin: 0 0.5rem 1rem 0.5rem;
padding: 1rem;
overflow-x: scroll;
word-break: break-all;
background-color: var(--metadata-json-bg-color);
}

View File

@@ -0,0 +1,360 @@
import {
Center,
Flex,
Heading,
IconButton,
Link,
Text,
Tooltip,
} from '@chakra-ui/react';
import { ExternalLinkIcon } from '@chakra-ui/icons';
import { memo } from 'react';
import { IoArrowUndoCircleOutline } from 'react-icons/io5';
import { useAppDispatch } from '../../../app/store';
import * as InvokeAI from '../../../app/invokeai';
import {
setCfgScale,
setGfpganStrength,
setHeight,
setImg2imgStrength,
setInitialImagePath,
setMaskPath,
setPrompt,
setSampler,
setSeamless,
setSeed,
setSeedWeights,
setShouldFitToWidthHeight,
setSteps,
setUpscalingLevel,
setUpscalingStrength,
setWidth,
} from '../../options/optionsSlice';
import promptToString from '../../../common/util/promptToString';
import { seedWeightsToString } from '../../../common/util/seedWeightPairs';
import { FaCopy } from 'react-icons/fa';
type MetadataItemProps = {
isLink?: boolean;
label: string;
onClick?: () => void;
value: number | string | boolean;
labelPosition?: string;
};
/**
* Component to display an individual metadata item or parameter.
*/
const MetadataItem = ({
label,
value,
onClick,
isLink,
labelPosition,
}: MetadataItemProps) => {
return (
<Flex gap={2}>
{onClick && (
<Tooltip label={`Recall ${label}`}>
<IconButton
aria-label="Use this parameter"
icon={<IoArrowUndoCircleOutline />}
size={'xs'}
variant={'ghost'}
fontSize={20}
onClick={onClick}
/>
</Tooltip>
)}
<Flex direction={labelPosition ? 'column' : 'row'}>
<Text fontWeight={'semibold'} whiteSpace={'pre-wrap'} pr={2}>
{label}:
</Text>
{isLink ? (
<Link href={value.toString()} isExternal wordBreak={'break-all'}>
{value.toString()} <ExternalLinkIcon mx="2px" />
</Link>
) : (
<Text overflowY={'scroll'} wordBreak={'break-all'}>
{value.toString()}
</Text>
)}
</Flex>
</Flex>
);
};
type ImageMetadataViewerProps = {
image: InvokeAI.Image;
styleClass?: string;
};
// TODO: I don't know if this is needed.
const memoEqualityCheck = (
prev: ImageMetadataViewerProps,
next: ImageMetadataViewerProps
) => prev.image.uuid === next.image.uuid;
// TODO: Show more interesting information in this component.
/**
* Image metadata viewer overlays currently selected image and provides
* access to any of its metadata for use in processing.
*/
const ImageMetadataViewer = memo(
({ image, styleClass }: ImageMetadataViewerProps) => {
const dispatch = useAppDispatch();
// const jsonBgColor = useColorModeValue('blackAlpha.100', 'whiteAlpha.100');
const metadata = image?.metadata?.image || {};
const {
type,
postprocessing,
sampler,
prompt,
seed,
variations,
steps,
cfg_scale,
seamless,
width,
height,
strength,
fit,
init_image_path,
mask_image_path,
orig_path,
scale,
} = metadata;
const metadataJSON = JSON.stringify(metadata, null, 2);
return (
<div className={`image-metadata-viewer ${styleClass}`}>
<Flex gap={1} direction={'column'} width={'100%'}>
<Flex gap={2}>
<Text fontWeight={'semibold'}>File:</Text>
<Link href={image.url} isExternal>
{image.url}
<ExternalLinkIcon mx="2px" />
</Link>
</Flex>
{Object.keys(metadata).length > 0 ? (
<>
{type && <MetadataItem label="Generation type" value={type} />}
{['esrgan', 'gfpgan'].includes(type) && (
<MetadataItem label="Original image" value={orig_path} />
)}
{type === 'gfpgan' && strength !== undefined && (
<MetadataItem
label="Fix faces strength"
value={strength}
onClick={() => dispatch(setGfpganStrength(strength))}
/>
)}
{type === 'esrgan' && scale !== undefined && (
<MetadataItem
label="Upscaling scale"
value={scale}
onClick={() => dispatch(setUpscalingLevel(scale))}
/>
)}
{type === 'esrgan' && strength !== undefined && (
<MetadataItem
label="Upscaling strength"
value={strength}
onClick={() => dispatch(setUpscalingStrength(strength))}
/>
)}
{prompt && (
<MetadataItem
label="Prompt"
labelPosition="top"
value={promptToString(prompt)}
onClick={() => dispatch(setPrompt(prompt))}
/>
)}
{seed !== undefined && (
<MetadataItem
label="Seed"
value={seed}
onClick={() => dispatch(setSeed(seed))}
/>
)}
{sampler && (
<MetadataItem
label="Sampler"
value={sampler}
onClick={() => dispatch(setSampler(sampler))}
/>
)}
{steps && (
<MetadataItem
label="Steps"
value={steps}
onClick={() => dispatch(setSteps(steps))}
/>
)}
{cfg_scale !== undefined && (
<MetadataItem
label="CFG scale"
value={cfg_scale}
onClick={() => dispatch(setCfgScale(cfg_scale))}
/>
)}
{variations && variations.length > 0 && (
<MetadataItem
label="Seed-weight pairs"
value={seedWeightsToString(variations)}
onClick={() =>
dispatch(setSeedWeights(seedWeightsToString(variations)))
}
/>
)}
{seamless && (
<MetadataItem
label="Seamless"
value={seamless}
onClick={() => dispatch(setSeamless(seamless))}
/>
)}
{width && (
<MetadataItem
label="Width"
value={width}
onClick={() => dispatch(setWidth(width))}
/>
)}
{height && (
<MetadataItem
label="Height"
value={height}
onClick={() => dispatch(setHeight(height))}
/>
)}
{init_image_path && (
<MetadataItem
label="Initial image"
value={init_image_path}
isLink
onClick={() => dispatch(setInitialImagePath(init_image_path))}
/>
)}
{mask_image_path && (
<MetadataItem
label="Mask image"
value={mask_image_path}
isLink
onClick={() => dispatch(setMaskPath(mask_image_path))}
/>
)}
{type === 'img2img' && strength && (
<MetadataItem
label="Image to image strength"
value={strength}
onClick={() => dispatch(setImg2imgStrength(strength))}
/>
)}
{fit && (
<MetadataItem
label="Image to image fit"
value={fit}
onClick={() => dispatch(setShouldFitToWidthHeight(fit))}
/>
)}
{postprocessing && postprocessing.length > 0 && (
<>
<Heading size={'sm'}>Postprocessing</Heading>
{postprocessing.map(
(
postprocess: InvokeAI.PostProcessedImageMetadata,
i: number
) => {
if (postprocess.type === 'esrgan') {
const { scale, strength } = postprocess;
return (
<Flex
key={i}
pl={'2rem'}
gap={1}
direction={'column'}
>
<Text size={'md'}>{`${
i + 1
}: Upscale (ESRGAN)`}</Text>
<MetadataItem
label="Scale"
value={scale}
onClick={() => dispatch(setUpscalingLevel(scale))}
/>
<MetadataItem
label="Strength"
value={strength}
onClick={() =>
dispatch(setUpscalingStrength(strength))
}
/>
</Flex>
);
} else if (postprocess.type === 'gfpgan') {
const { strength } = postprocess;
return (
<Flex
key={i}
pl={'2rem'}
gap={1}
direction={'column'}
>
<Text size={'md'}>{`${
i + 1
}: Face restoration (GFPGAN)`}</Text>
<MetadataItem
label="Strength"
value={strength}
onClick={() =>
dispatch(setGfpganStrength(strength))
}
/>
</Flex>
);
}
}
)}
</>
)}
<Flex gap={2} direction={'column'}>
<Flex gap={2}>
<Tooltip label={`Copy metadata JSON`}>
<IconButton
aria-label="Copy metadata JSON"
icon={<FaCopy />}
size={'xs'}
variant={'ghost'}
fontSize={14}
onClick={() =>
navigator.clipboard.writeText(metadataJSON)
}
/>
</Tooltip>
<Text fontWeight={'semibold'}>Metadata JSON:</Text>
</Flex>
<div className={'image-json-viewer'}>
<pre>{metadataJSON}</pre>
</div>
</Flex>
</>
) : (
<Center width={'100%'} pt={10}>
<Text fontSize={'lg'} fontWeight="semibold">
No metadata available
</Text>
</Center>
)}
</Flex>
</div>
);
},
memoEqualityCheck
);
export default ImageMetadataViewer;

View File

@@ -1,338 +0,0 @@
import {
Center,
Flex,
Heading,
IconButton,
Link,
Text,
Tooltip,
} from '@chakra-ui/react';
import { ExternalLinkIcon } from '@chakra-ui/icons';
import { memo } from 'react';
import { IoArrowUndoCircleOutline } from 'react-icons/io5';
import { useAppDispatch } from '../../app/store';
import * as InvokeAI from '../../app/invokeai';
import {
setCfgScale,
setGfpganStrength,
setHeight,
setImg2imgStrength,
setInitialImagePath,
setMaskPath,
setPrompt,
setSampler,
setSeed,
setSeedWeights,
setShouldFitToWidthHeight,
setSteps,
setUpscalingLevel,
setUpscalingStrength,
setWidth,
} from '../options/optionsSlice';
import promptToString from '../../common/util/promptToString';
import { seedWeightsToString } from '../../common/util/seedWeightPairs';
import { FaCopy } from 'react-icons/fa';
type MetadataItemProps = {
isLink?: boolean;
label: string;
onClick?: () => void;
value: number | string | boolean;
labelPosition?: string;
};
/**
* Component to display an individual metadata item or parameter.
*/
const MetadataItem = ({
label,
value,
onClick,
isLink,
labelPosition,
}: MetadataItemProps) => {
return (
<Flex gap={2}>
{onClick && (
<Tooltip label={`Recall ${label}`}>
<IconButton
aria-label="Use this parameter"
icon={<IoArrowUndoCircleOutline />}
size={'xs'}
variant={'ghost'}
fontSize={20}
onClick={onClick}
/>
</Tooltip>
)}
<Flex direction={labelPosition ? 'column' : 'row'}>
<Text fontWeight={'semibold'} whiteSpace={'nowrap'} pr={2}>
{label}:
</Text>
{isLink ? (
<Link href={value.toString()} isExternal wordBreak={'break-all'}>
{value.toString()} <ExternalLinkIcon mx="2px" />
</Link>
) : (
<Text overflowY={'scroll'} wordBreak={'break-all'}>
{value.toString()}
</Text>
)}
</Flex>
</Flex>
);
};
type ImageMetadataViewerProps = {
image: InvokeAI.Image;
};
// TODO: I don't know if this is needed.
const memoEqualityCheck = (
prev: ImageMetadataViewerProps,
next: ImageMetadataViewerProps
) => prev.image.uuid === next.image.uuid;
// TODO: Show more interesting information in this component.
/**
* Image metadata viewer overlays currently selected image and provides
* access to any of its metadata for use in processing.
*/
const ImageMetadataViewer = memo(({ image }: ImageMetadataViewerProps) => {
const dispatch = useAppDispatch();
// const jsonBgColor = useColorModeValue('blackAlpha.100', 'whiteAlpha.100');
const metadata = image?.metadata?.image || {};
const {
type,
postprocessing,
sampler,
prompt,
seed,
variations,
steps,
cfg_scale,
seamless,
width,
height,
strength,
fit,
init_image_path,
mask_image_path,
orig_path,
scale,
} = metadata;
const metadataJSON = JSON.stringify(metadata, null, 2);
return (
<Flex gap={1} direction={'column'} width={'100%'}>
<Flex gap={2}>
<Text fontWeight={'semibold'}>File:</Text>
<Link href={image.url} isExternal>
{image.url}
<ExternalLinkIcon mx="2px" />
</Link>
</Flex>
{Object.keys(metadata).length > 0 ? (
<>
{type && <MetadataItem label="Generation type" value={type} />}
{['esrgan', 'gfpgan'].includes(type) && (
<MetadataItem label="Original image" value={orig_path} />
)}
{type === 'gfpgan' && strength !== undefined && (
<MetadataItem
label="Fix faces strength"
value={strength}
onClick={() => dispatch(setGfpganStrength(strength))}
/>
)}
{type === 'esrgan' && scale !== undefined && (
<MetadataItem
label="Upscaling scale"
value={scale}
onClick={() => dispatch(setUpscalingLevel(scale))}
/>
)}
{type === 'esrgan' && strength !== undefined && (
<MetadataItem
label="Upscaling strength"
value={strength}
onClick={() => dispatch(setUpscalingStrength(strength))}
/>
)}
{prompt && (
<MetadataItem
label="Prompt"
labelPosition="top"
value={promptToString(prompt)}
onClick={() => dispatch(setPrompt(prompt))}
/>
)}
{seed !== undefined && (
<MetadataItem
label="Seed"
value={seed}
onClick={() => dispatch(setSeed(seed))}
/>
)}
{sampler && (
<MetadataItem
label="Sampler"
value={sampler}
onClick={() => dispatch(setSampler(sampler))}
/>
)}
{steps && (
<MetadataItem
label="Steps"
value={steps}
onClick={() => dispatch(setSteps(steps))}
/>
)}
{cfg_scale !== undefined && (
<MetadataItem
label="CFG scale"
value={cfg_scale}
onClick={() => dispatch(setCfgScale(cfg_scale))}
/>
)}
{variations && variations.length > 0 && (
<MetadataItem
label="Seed-weight pairs"
value={seedWeightsToString(variations)}
onClick={() =>
dispatch(setSeedWeights(seedWeightsToString(variations)))
}
/>
)}
{seamless && (
<MetadataItem
label="Seamless"
value={seamless}
onClick={() => dispatch(setWidth(seamless))}
/>
)}
{width && (
<MetadataItem
label="Width"
value={width}
onClick={() => dispatch(setWidth(width))}
/>
)}
{height && (
<MetadataItem
label="Height"
value={height}
onClick={() => dispatch(setHeight(height))}
/>
)}
{init_image_path && (
<MetadataItem
label="Initial image"
value={init_image_path}
isLink
onClick={() => dispatch(setInitialImagePath(init_image_path))}
/>
)}
{mask_image_path && (
<MetadataItem
label="Mask image"
value={mask_image_path}
isLink
onClick={() => dispatch(setMaskPath(mask_image_path))}
/>
)}
{type === 'img2img' && strength && (
<MetadataItem
label="Image to image strength"
value={strength}
onClick={() => dispatch(setImg2imgStrength(strength))}
/>
)}
{fit && (
<MetadataItem
label="Image to image fit"
value={fit}
onClick={() => dispatch(setShouldFitToWidthHeight(fit))}
/>
)}
{postprocessing && postprocessing.length > 0 && (
<>
<Heading size={'sm'}>Postprocessing</Heading>
{postprocessing.map(
(
postprocess: InvokeAI.PostProcessedImageMetadata,
i: number
) => {
if (postprocess.type === 'esrgan') {
const { scale, strength } = postprocess;
return (
<Flex key={i} pl={'2rem'} gap={1} direction={'column'}>
<Text size={'md'}>{`${i + 1}: Upscale (ESRGAN)`}</Text>
<MetadataItem
label="Scale"
value={scale}
onClick={() => dispatch(setUpscalingLevel(scale))}
/>
<MetadataItem
label="Strength"
value={strength}
onClick={() =>
dispatch(setUpscalingStrength(strength))
}
/>
</Flex>
);
} else if (postprocess.type === 'gfpgan') {
const { strength } = postprocess;
return (
<Flex key={i} pl={'2rem'} gap={1} direction={'column'}>
<Text size={'md'}>{`${
i + 1
}: Face restoration (GFPGAN)`}</Text>
<MetadataItem
label="Strength"
value={strength}
onClick={() => dispatch(setGfpganStrength(strength))}
/>
</Flex>
);
}
}
)}
</>
)}
<Flex gap={2} direction={'column'}>
<Flex gap={2}>
<Tooltip label={`Copy metadata JSON`}>
<IconButton
aria-label="Copy metadata JSON"
icon={<FaCopy />}
size={'xs'}
variant={'ghost'}
fontSize={14}
onClick={() => navigator.clipboard.writeText(metadataJSON)}
/>
</Tooltip>
<Text fontWeight={'semibold'}>Metadata JSON:</Text>
</Flex>
<div className={'current-image-json-viewer'}>
<pre>{metadataJSON}</pre>
</div>
</Flex>
</>
) : (
<Center width={'100%'} pt={10}>
<Text fontSize={'lg'} fontWeight="semibold">
No metadata available
</Text>
</Center>
)}
</Flex>
);
}, memoEqualityCheck);
export default ImageMetadataViewer;

View File

@@ -1,6 +1,6 @@
import { createSlice } from '@reduxjs/toolkit';
import type { PayloadAction } from '@reduxjs/toolkit';
import { clamp } from 'lodash';
import _, { clamp } from 'lodash';
import * as InvokeAI from '../../app/invokeai';
export interface GalleryState {
@@ -11,12 +11,14 @@ export interface GalleryState {
areMoreImagesAvailable: boolean;
latest_mtime?: number;
earliest_mtime?: number;
shouldShowGallery: boolean;
}
const initialState: GalleryState = {
currentImageUuid: '',
images: [],
areMoreImagesAvailable: true,
shouldShowGallery: false,
};
export const gallerySlice = createSlice({
@@ -85,6 +87,32 @@ export const gallerySlice = createSlice({
clearIntermediateImage: (state) => {
state.intermediateImage = undefined;
},
selectNextImage: (state) => {
const { images, currentImage } = state;
if (currentImage) {
const currentImageIndex = images.findIndex(
(i) => i.uuid === currentImage.uuid
);
// guard: only advance when a next image exists
if (_.inRange(currentImageIndex, 0, images.length - 1)) {
const newCurrentImage = images[currentImageIndex + 1];
state.currentImage = newCurrentImage;
state.currentImageUuid = newCurrentImage.uuid;
}
}
},
selectPrevImage: (state) => {
const { images, currentImage } = state;
if (currentImage) {
const currentImageIndex = images.findIndex(
(i) => i.uuid === currentImage.uuid
);
if (_.inRange(currentImageIndex, 1, images.length + 1)) {
const newCurrentImage = images[currentImageIndex - 1];
state.currentImage = newCurrentImage;
state.currentImageUuid = newCurrentImage.uuid;
}
}
},
addGalleryImages: (
state,
action: PayloadAction<{
@@ -112,6 +140,9 @@ export const gallerySlice = createSlice({
state.areMoreImagesAvailable = areMoreImagesAvailable;
}
},
setShouldShowGallery: (state, action: PayloadAction<boolean>) => {
state.shouldShowGallery = action.payload;
},
},
});
@@ -122,6 +153,9 @@ export const {
setCurrentImage,
addGalleryImages,
setIntermediateImage,
selectNextImage,
selectPrevImage,
setShouldShowGallery,
} = gallerySlice.actions;
export default gallerySlice.reducer;

View File

@@ -8,7 +8,7 @@ import React, { ReactElement } from 'react';
import { Feature } from '../../../app/features';
import GuideIcon from '../../../common/components/GuideIcon';
interface InvokeAccordionItemProps {
export interface InvokeAccordionItemProps {
header: ReactElement;
feature: Feature;
options: ReactElement;

View File

@@ -19,7 +19,7 @@ export default function ImageFit() {
return (
<IAISwitch
label="Fit initial image to output size"
label="Fit Initial Image To Output Size"
isChecked={shouldFitToWidthHeight}
onChange={handleChangeFit}
/>

View File

@@ -8,7 +8,7 @@ import {
import IAISwitch from '../../../../common/components/IAISwitch';
import { setShouldUseInitImage } from '../../optionsSlice';
export default function ImageToImage() {
export default function ImageToImageAccordion() {
const dispatch = useAppDispatch();
const initialImagePath = useAppSelector(

View File

@@ -7,7 +7,13 @@ import {
import IAINumberInput from '../../../../common/components/IAINumberInput';
import { setImg2imgStrength } from '../../optionsSlice';
export default function ImageToImageStrength() {
interface ImageToImageStrengthProps {
label?: string;
styleClass?: string;
}
export default function ImageToImageStrength(props: ImageToImageStrengthProps) {
const { label = 'Strength', styleClass } = props;
const img2imgStrength = useAppSelector(
(state: RootState) => state.options.img2imgStrength
);
@@ -18,7 +24,7 @@ export default function ImageToImageStrength() {
return (
<IAINumberInput
label="Strength"
label={label}
step={0.01}
min={0.01}
max={0.99}
@@ -26,6 +32,7 @@ export default function ImageToImageStrength() {
value={img2imgStrength}
width="90px"
isInteger={false}
styleClass={styleClass}
/>
);
}

View File

@@ -18,6 +18,8 @@ const SeedOptions = () => {
</Flex>
<Flex gap={2}>
<Threshold />
</Flex>
<Flex gap={2}>
<Perlin />
</Flex>
</Flex>

View File

@@ -15,6 +15,8 @@ type ImageUploaderProps = {
* Callback to handle a file being rejected.
*/
fileRejectionCallback: (rejection: FileRejection) => void;
// Styling
styleClass?: string;
};
/**
@@ -25,6 +27,7 @@ const ImageUploader = ({
children,
fileAcceptedCallback,
fileRejectionCallback,
styleClass,
}: ImageUploaderProps) => {
const onDrop = useCallback(
(acceptedFiles: Array<File>, fileRejections: Array<FileRejection>) => {
@@ -52,7 +55,7 @@ const ImageUploader = ({
};
return (
<Box {...getRootProps()} flexGrow={3}>
<Box {...getRootProps()} flexGrow={3} className={styleClass}>
<input {...getInputProps({ multiple: false })} />
{cloneElement(children, {
onClick: handleClickUploadIcon,

View File

@@ -1,4 +1,3 @@
import MainAdvancedOptions from './MainAdvancedOptions';
import MainCFGScale from './MainCFGScale';
import MainHeight from './MainHeight';
import MainIterations from './MainIterations';
@@ -23,7 +22,6 @@ export default function MainOptions() {
<MainHeight />
<MainSampler />
</div>
<MainAdvancedOptions />
</div>
</div>
);

View File

@@ -1,34 +1,25 @@
import {
Box,
Accordion,
ExpandedIndex,
// ExpandedIndex,
} from '@chakra-ui/react';
// import { RootState } from '../../app/store';
// import { useAppDispatch, useAppSelector } from '../../app/store';
// import { setOpenAccordions } from '../system/systemSlice';
import OutputOptions from './OutputOptions';
import ImageToImageOptions from './AdvancedOptions/ImageToImage/ImageToImageOptions';
import { Feature } from '../../app/features';
import SeedOptions from './AdvancedOptions/Seed/SeedOptions';
import Upscale from './AdvancedOptions/Upscale/Upscale';
import UpscaleOptions from './AdvancedOptions/Upscale/UpscaleOptions';
import FaceRestore from './AdvancedOptions/FaceRestore/FaceRestore';
import FaceRestoreOptions from './AdvancedOptions/FaceRestore/FaceRestoreOptions';
import ImageToImage from './AdvancedOptions/ImageToImage/ImageToImage';
import { Accordion, ExpandedIndex } from '@chakra-ui/react';
import { RootState, useAppDispatch, useAppSelector } from '../../app/store';
import { setOpenAccordions } from '../system/systemSlice';
import InvokeAccordionItem from './AccordionItems/InvokeAccordionItem';
import Variations from './AdvancedOptions/Variations/Variations';
import VariationsOptions from './AdvancedOptions/Variations/VariationsOptions';
import InvokeAccordionItem, {
InvokeAccordionItemProps,
} from './AccordionItems/InvokeAccordionItem';
import { ReactElement } from 'react';
type OptionsAccordionType = {
[optionAccordionKey: string]: InvokeAccordionItemProps;
};
type OptionAccordionsType = {
accordionInfo: OptionsAccordionType;
};
/**
* Main container for generation and processing parameters.
*/
const OptionsAccordion = () => {
const OptionsAccordion = (props: OptionAccordionsType) => {
const { accordionInfo } = props;
const openAccordions = useAppSelector(
(state: RootState) => state.system.openAccordions
);
@@ -41,6 +32,23 @@ const OptionsAccordion = () => {
const handleChangeAccordionState = (openAccordions: ExpandedIndex) =>
dispatch(setOpenAccordions(openAccordions));
const renderAccordions = () => {
const accordionsToRender: ReactElement[] = [];
if (accordionInfo) {
Object.keys(accordionInfo).forEach((key) => {
accordionsToRender.push(
<InvokeAccordionItem
key={key}
header={accordionInfo[key as keyof typeof accordionInfo].header}
feature={accordionInfo[key as keyof typeof accordionInfo].feature}
options={accordionInfo[key as keyof typeof accordionInfo].options}
/>
);
});
}
return accordionsToRender;
};
return (
<Accordion
defaultIndex={openAccordions}
@@ -49,49 +57,7 @@ const OptionsAccordion = () => {
onChange={handleChangeAccordionState}
className="advanced-settings"
>
<InvokeAccordionItem
header={
<Box flex="1" textAlign="left">
Seed
</Box>
}
feature={Feature.SEED}
options={<SeedOptions />}
/>
<InvokeAccordionItem
header={<Variations />}
feature={Feature.VARIATIONS}
options={<VariationsOptions />}
/>
<InvokeAccordionItem
header={<FaceRestore />}
feature={Feature.FACE_CORRECTION}
options={<FaceRestoreOptions />}
/>
<InvokeAccordionItem
header={<Upscale />}
feature={Feature.UPSCALE}
options={<UpscaleOptions />}
/>
<InvokeAccordionItem
header={<ImageToImage />}
feature={Feature.IMAGE_TO_IMAGE}
options={<ImageToImageOptions />}
/>
<InvokeAccordionItem
header={
<Box flex="1" textAlign="left">
Other
</Box>
}
feature={Feature.OTHER}
options={<OutputOptions />}
/>
{renderAccordions()}
</Accordion>
);
};
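With this refactor, each tab now supplies its own accordion set via `accordionInfo`. A hypothetical caller might look like the sketch below; the two entries reuse the `Feature` keys and option components that were previously hard-coded above, and the relative import paths are illustrative:
```typescript
import { Box } from '@chakra-ui/react';
import { Feature } from '../../app/features';
import SeedOptions from './AdvancedOptions/Seed/SeedOptions';
import OutputOptions from './OutputOptions';
import OptionsAccordion from './OptionsAccordion';

// Each key maps to one InvokeAccordionItemProps entry.
const accordionInfo = {
  seed: {
    header: (
      <Box flex="1" textAlign="left">
        Seed
      </Box>
    ),
    feature: Feature.SEED,
    options: <SeedOptions />,
  },
  other: {
    header: (
      <Box flex="1" textAlign="left">
        Other
      </Box>
    ),
    feature: Feature.OTHER,
    options: <OutputOptions />,
  },
};

export default function ExampleTabOptions() {
  return <OptionsAccordion accordionInfo={accordionInfo} />;
}
```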

View File

@@ -4,12 +4,23 @@ import { cancelProcessing } from '../../../app/socketio/actions';
import { useAppDispatch, useAppSelector } from '../../../app/store';
import IAIIconButton from '../../../common/components/IAIIconButton';
import { systemSelector } from '../../../common/hooks/useCheckParameters';
import { useHotkeys } from 'react-hotkeys-hook';
export default function CancelButton() {
const dispatch = useAppDispatch();
const { isProcessing, isConnected } = useAppSelector(systemSelector);
const handleClickCancel = () => dispatch(cancelProcessing());
useHotkeys(
'shift+x',
() => {
if (isConnected || isProcessing) {
handleClickCancel();
}
},
[isConnected, isProcessing]
);
return (
<IAIIconButton
icon={<MdCancel />}

View File

@@ -1,5 +1,5 @@
import { FormControl, Textarea } from '@chakra-ui/react';
import { ChangeEvent, KeyboardEvent } from 'react';
import { ChangeEvent, KeyboardEvent, useRef } from 'react';
import { RootState, useAppDispatch, useAppSelector } from '../../../app/store';
import { generateImage } from '../../../app/socketio/actions';
@@ -9,6 +9,7 @@ import { isEqual } from 'lodash';
import useCheckParameters, {
systemSelector,
} from '../../../common/hooks/useCheckParameters';
import { useHotkeys } from 'react-hotkeys-hook';
export const optionsSelector = createSelector(
(state: RootState) => state.options,
@@ -28,6 +29,7 @@ export const optionsSelector = createSelector(
* Prompt input text area.
*/
const PromptInput = () => {
const promptRef = useRef<HTMLTextAreaElement>(null);
const { prompt } = useAppSelector(optionsSelector);
const { isProcessing } = useAppSelector(systemSelector);
const dispatch = useAppDispatch();
@@ -37,6 +39,24 @@ const PromptInput = () => {
dispatch(setPrompt(e.target.value));
};
useHotkeys(
'ctrl+enter',
() => {
if (isReady) {
dispatch(generateImage());
}
},
[isReady]
);
useHotkeys(
'alt+a',
() => {
promptRef.current?.focus();
},
[]
);
const handleKeyDown = (e: KeyboardEvent<HTMLTextAreaElement>) => {
if (e.key === 'Enter' && e.shiftKey === false && isReady) {
e.preventDefault();
@@ -60,6 +80,7 @@ const PromptInput = () => {
onKeyDown={handleKeyDown}
resize="vertical"
height={30}
ref={promptRef}
/>
</FormControl>
</div>

View File

@@ -22,7 +22,7 @@ export interface OptionsState {
upscalingLevel: UpscalingLevel;
upscalingStrength: number;
shouldUseInitImage: boolean;
initialImagePath: string;
initialImagePath: string | null;
maskPath: string;
seamless: boolean;
shouldFitToWidthHeight: boolean;
@@ -33,6 +33,8 @@ export interface OptionsState {
shouldRunGFPGAN: boolean;
shouldRandomizeSeed: boolean;
showAdvancedOptions: boolean;
activeTab: number;
shouldShowImageDetails: boolean;
}
const initialOptionsState: OptionsState = {
@@ -49,7 +51,7 @@ const initialOptionsState: OptionsState = {
seamless: false,
shouldUseInitImage: false,
img2imgStrength: 0.75,
initialImagePath: '',
initialImagePath: null,
maskPath: '',
shouldFitToWidthHeight: true,
shouldGenerateVariations: false,
@@ -62,6 +64,8 @@ const initialOptionsState: OptionsState = {
gfpganStrength: 0.8,
shouldRandomizeSeed: true,
showAdvancedOptions: true,
activeTab: 0,
shouldShowImageDetails: false,
};
const initialState: OptionsState = initialOptionsState;
@@ -121,7 +125,7 @@ export const optionsSlice = createSlice({
setShouldUseInitImage: (state, action: PayloadAction<boolean>) => {
state.shouldUseInitImage = action.payload;
},
setInitialImagePath: (state, action: PayloadAction<string>) => {
setInitialImagePath: (state, action: PayloadAction<string | null>) => {
const newInitialImagePath = action.payload;
state.shouldUseInitImage = newInitialImagePath ? true : false;
state.initialImagePath = newInitialImagePath;
@@ -246,7 +250,9 @@ export const optionsSlice = createSlice({
if (steps) state.steps = steps;
if (cfg_scale) state.cfgScale = cfg_scale;
if (threshold) state.threshold = threshold;
if (typeof threshold === 'undefined') state.threshold = 0;
if (perlin) state.perlin = perlin;
if (typeof perlin === 'undefined') state.perlin = 0;
if (typeof seamless === 'boolean') state.seamless = seamless;
if (width) state.width = width;
if (height) state.height = height;
@@ -269,6 +275,12 @@ export const optionsSlice = createSlice({
setShowAdvancedOptions: (state, action: PayloadAction<boolean>) => {
state.showAdvancedOptions = action.payload;
},
setActiveTab: (state, action: PayloadAction<number>) => {
state.activeTab = action.payload;
},
setShouldShowImageDetails: (state, action: PayloadAction<boolean>) => {
state.shouldShowImageDetails = action.payload;
},
},
});
@@ -303,6 +315,8 @@ export const {
setShouldRunESRGAN,
setShouldRandomizeSeed,
setShowAdvancedOptions,
setActiveTab,
setShouldShowImageDetails,
} = optionsSlice.actions;
export default optionsSlice.reducer;

View File

@@ -7,6 +7,7 @@ import { FaAngleDoubleDown, FaCode, FaMinus } from 'react-icons/fa';
import { createSelector } from '@reduxjs/toolkit';
import { isEqual } from 'lodash';
import { Resizable } from 're-resizable';
import { useHotkeys } from 'react-hotkeys-hook';
const logSelector = createSelector(
(state: RootState) => state.system,
@@ -66,6 +67,14 @@ const Console = () => {
dispatch(setShouldShowLogViewer(!shouldShowLogViewer));
};
useHotkeys(
'`',
() => {
dispatch(setShouldShowLogViewer(!shouldShowLogViewer));
},
[shouldShowLogViewer]
);
return (
<>
{shouldShowLogViewer && (

View File

@@ -0,0 +1,53 @@
@use '../../../styles/Mixins/' as *;
.hotkeys-modal {
display: grid;
padding: 1rem;
background-color: var(--settings-modal-bg) !important;
row-gap: 1rem;
font-family: Inter;
h1 {
font-size: 1.2rem;
font-weight: bold;
}
}
.hotkeys-modal-items {
display: grid;
row-gap: 0.5rem;
max-height: 32rem;
overflow-y: scroll;
@include HideScrollbar;
}
.hotkey-modal-item {
display: grid;
grid-template-columns: auto max-content;
justify-content: space-between;
align-items: center;
background-color: var(--background-color);
padding: 0.5rem 1rem;
border-radius: 0.3rem;
.hotkey-info {
display: grid;
.hotkey-title {
font-weight: bold;
}
.hotkey-description {
font-size: 0.9rem;
color: var(--text-color-secondary);
}
}
.hotkey-key {
font-size: 0.8rem;
font-weight: bold;
border: 2px solid var(--settings-modal-bg);
padding: 0.2rem 0.5rem;
border-radius: 0.3rem;
}
}

View File

@@ -0,0 +1,118 @@
import {
Modal,
ModalCloseButton,
ModalContent,
ModalOverlay,
useDisclosure,
} from '@chakra-ui/react';
import React, { cloneElement, ReactElement } from 'react';
import HotkeysModalItem from './HotkeysModalItem';
type HotkeysModalProps = {
/* The button to open the Hotkeys Modal */
children: ReactElement;
};
export default function HotkeysModal({ children }: HotkeysModalProps) {
const {
isOpen: isHotkeyModalOpen,
onOpen: onHotkeysModalOpen,
onClose: onHotkeysModalClose,
} = useDisclosure();
const hotkeys = [
{ title: 'Invoke', desc: 'Generate an image', hotkey: 'Ctrl+Enter' },
{ title: 'Cancel', desc: 'Cancel image generation', hotkey: 'Shift+X' },
{
title: 'Toggle Gallery',
desc: 'Open and close the gallery drawer',
hotkey: 'G',
},
{
title: 'Set Seed',
desc: 'Use the seed of the current image',
hotkey: 'S',
},
{
title: 'Set Parameters',
desc: 'Use all parameters of the current image',
hotkey: 'A',
},
{ title: 'Restore Faces', desc: 'Restore the current image', hotkey: 'R' },
{ title: 'Upscale', desc: 'Upscale the current image', hotkey: 'U' },
{
title: 'Show Info',
desc: 'Show metadata info of the current image',
hotkey: 'I',
},
{
title: 'Send To Image To Image',
desc: 'Send the current image to Image to Image module',
hotkey: 'Shift+I',
},
{ title: 'Delete Image', desc: 'Delete the current image', hotkey: 'Del' },
{
title: 'Focus Prompt',
desc: 'Focus the prompt input area',
hotkey: 'Alt+A',
},
{
title: 'Previous Image',
desc: 'Display the previous image in the gallery',
hotkey: 'Arrow left',
},
{
title: 'Next Image',
desc: 'Display the next image in the gallery',
hotkey: 'Arrow right',
},
{
title: 'Change Tabs',
desc: 'Switch to another workspace',
hotkey: '1-6',
},
{
title: 'Theme Toggle',
desc: 'Switch between dark and light modes',
hotkey: 'Shift+D',
},
{
title: 'Console Toggle',
desc: 'Open and close console',
hotkey: '`',
},
];
const renderHotkeyModalItems = () => {
const hotkeyModalItemsToRender: ReactElement[] = [];
hotkeys.forEach((hotkey, i) => {
hotkeyModalItemsToRender.push(
<HotkeysModalItem
key={i}
title={hotkey.title}
description={hotkey.desc}
hotkey={hotkey.hotkey}
/>
);
});
return hotkeyModalItemsToRender;
};
return (
<>
{cloneElement(children, {
onClick: onHotkeysModalOpen,
})}
<Modal isOpen={isHotkeyModalOpen} onClose={onHotkeysModalClose}>
<ModalOverlay />
<ModalContent className="hotkeys-modal">
<ModalCloseButton />
<h1>Keyboard Shortcuts</h1>
<div className="hotkeys-modal-items">{renderHotkeyModalItems()}</div>
</ModalContent>
</Modal>
</>
);
}
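HotkeysModal accepts its trigger as a single child and injects the onOpen handler with cloneElement, so any clickable element can open it. A stripped-down sketch of that trigger-injection pattern (not part of the diff; component and prop names here are illustrative, and plain useState stands in for Chakra's useDisclosure):

import React, { cloneElement, ReactElement, useState } from 'react';

function WithOpenHandler({ children }: { children: ReactElement }) {
  const [isOpen, setIsOpen] = useState(false);
  return (
    <>
      {/* The single child is cloned with an onClick that opens the "modal". */}
      {cloneElement(children, { onClick: () => setIsOpen(true) })}
      {isOpen && <div onClick={() => setIsOpen(false)}>modal body</div>}
    </>
  );
}

// Usage: the wrapped button becomes the open trigger.
export const example = (
  <WithOpenHandler>
    <button>open hotkeys</button>
  </WithOpenHandler>
);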

View File

@@ -0,0 +1,20 @@
import React from 'react';
interface HotkeysModalProps {
hotkey: string;
title: string;
description?: string;
}
export default function HotkeysModalItem(props: HotkeysModalProps) {
const { title, hotkey, description } = props;
return (
<div className="hotkey-modal-item">
<div className="hotkey-info">
<p className="hotkey-title">{title}</p>
{description && <p className="hotkey-description">{description}</p>}
</div>
<div className="hotkey-key">{hotkey}</div>
</div>
);
}

View File

@@ -2,6 +2,7 @@
.settings-modal {
background-color: var(--settings-modal-bg) !important;
font-family: Inter;
.settings-modal-content {
display: grid;

View File

@@ -21,7 +21,7 @@
.site-header-right-side {
display: grid;
grid-template-columns: repeat(5, max-content);
grid-template-columns: repeat(6, max-content);
align-items: center;
column-gap: 0.5rem;
}

View File

@@ -1,9 +1,12 @@
import { IconButton, Link, useColorMode } from '@chakra-ui/react';
import { useHotkeys } from 'react-hotkeys-hook';
import { FaSun, FaMoon, FaGithub } from 'react-icons/fa';
import { MdHelp, MdSettings } from 'react-icons/md';
import { MdHelp, MdKeyboard, MdSettings } from 'react-icons/md';
import InvokeAILogo from '../../assets/images/logo.png';
import HotkeysModal from './HotkeysModal/HotkeysModal';
import SettingsModal from './SettingsModal/SettingsModal';
import StatusIndicator from './StatusIndicator';
@@ -13,6 +16,14 @@ import StatusIndicator from './StatusIndicator';
const SiteHeader = () => {
const { colorMode, toggleColorMode } = useColorMode();
useHotkeys(
'shift+d',
() => {
toggleColorMode();
},
[colorMode, toggleColorMode]
);
const colorModeIcon = colorMode == 'light' ? <FaMoon /> : <FaSun />;
// Make FaMoon and FaSun icon apparent size consistent
@@ -40,6 +51,16 @@ const SiteHeader = () => {
/>
</SettingsModal>
<HotkeysModal>
<IconButton
aria-label="Hotkeys"
variant="link"
fontSize={24}
size={'sm'}
icon={<MdKeyboard />}
/>
</HotkeysModal>
<IconButton
aria-label="Link to Github Issues"
variant="link"

View File

@@ -0,0 +1,148 @@
@use '../../../styles/Mixins/' as *;
.image-to-image-workarea {
display: grid;
grid-template-columns: max-content auto;
column-gap: 1rem;
}
.image-to-image-panel {
display: grid;
row-gap: 1rem;
grid-auto-rows: max-content;
width: $options-bar-max-width;
height: $app-content-height;
overflow-y: scroll;
@include HideScrollbar;
}
.image-to-image-display-area {
display: grid;
grid-template-areas: 'image-to-image-display-area';
.image-to-image-display {
grid-area: image-to-image-display-area;
}
.image-gallery-area {
grid-area: image-to-image-display-area;
z-index: 2;
place-self: end;
margin: 1rem;
}
}
.image-to-image-strength-main-option {
display: grid;
grid-template-columns: none !important;
.number-input-entry {
padding: 0 1rem;
}
}
.image-to-image-display {
border-radius: 0.5rem;
background-color: var(--background-color-secondary);
display: grid;
.current-image-options {
grid-auto-columns: max-content;
justify-self: center;
align-self: start;
}
}
.image-to-image-single-preview {
display: grid;
column-gap: 0.5rem;
padding: 0 1rem;
place-content: center;
}
.image-to-image-dual-preview-container {
display: grid;
grid-template-areas: 'img2img-preview';
}
.image-to-image-dual-preview {
grid-area: img2img-preview;
display: grid;
grid-template-columns: max-content max-content;
column-gap: 0.5rem;
padding: 0 1rem;
place-content: center;
.current-image-preview {
img {
height: calc($app-gallery-height - 2rem);
max-height: calc($app-gallery-height - 2rem);
}
}
}
.img2img-metadata {
grid-area: img2img-preview;
z-index: 3;
}
.init-image-preview {
display: grid;
grid-template-areas: 'init-image-content';
justify-content: center;
align-items: center;
border-radius: 0.5rem;
.init-image-preview-header {
grid-area: init-image-content;
z-index: 2;
display: grid;
grid-template-columns: auto max-content;
height: max-content;
align-items: center;
align-self: start;
padding: 1rem;
border-radius: 0.5rem;
h1 {
padding: 0.2rem 0.6rem;
border-radius: 0.4rem;
background-color: var(--tab-hover-color);
width: max-content;
font-weight: bold;
font-size: 0.85rem;
}
}
.init-image-image {
grid-area: init-image-content;
img {
border-radius: 0.5rem;
object-fit: contain;
background-color: var(--img2img-img-bg-color);
width: auto;
height: calc($app-gallery-height - 2rem);
max-height: calc($app-gallery-height - 2rem);
}
}
}
.image-to-image-upload-btn {
display: grid;
width: 100%;
height: $app-content-height;
button {
overflow: hidden;
width: 100%;
height: 100%;
font-size: 1.5rem;
color: var(--text-color-secondary);
background-color: var(--background-color-secondary);
&:hover {
background-color: var(--img2img-img-bg-color);
}
}
}

View File

@@ -0,0 +1,16 @@
import React from 'react';
import ImageToImagePanel from './ImageToImagePanel';
import ImageToImageDisplay from './ImageToImageDisplay';
import ImageGallery from '../../gallery/ImageGallery';
export default function ImageToImage() {
return (
<div className="image-to-image-workarea">
<ImageToImagePanel />
<div className="image-to-image-display-area">
<ImageToImageDisplay />
<ImageGallery />
</div>
</div>
);
}

View File

@@ -0,0 +1,74 @@
import React from 'react';
import { FaUpload } from 'react-icons/fa';
import { uploadInitialImage } from '../../../app/socketio/actions';
import { RootState, useAppSelector } from '../../../app/store';
import InvokeImageUploader from '../../../common/components/InvokeImageUploader';
import CurrentImageButtons from '../../gallery/CurrentImageButtons';
import CurrentImagePreview from '../../gallery/CurrentImagePreview';
import ImageMetadataViewer from '../../gallery/ImageMetaDataViewer/ImageMetadataViewer';
import InitImagePreview from './InitImagePreview';
export default function ImageToImageDisplay() {
const initialImagePath = useAppSelector(
(state: RootState) => state.options.initialImagePath
);
const { currentImage, intermediateImage } = useAppSelector(
(state: RootState) => state.gallery
);
const shouldShowImageDetails = useAppSelector(
(state: RootState) => state.options.shouldShowImageDetails
);
const imageToDisplay = intermediateImage || currentImage;
return (
<div
className="image-to-image-display"
style={
imageToDisplay
? { gridAutoRows: 'max-content auto' }
: { gridAutoRows: 'auto' }
}
>
{initialImagePath ? (
<>
{imageToDisplay ? (
<>
<CurrentImageButtons image={imageToDisplay} />
<div className="image-to-image-dual-preview-container">
<div className="image-to-image-dual-preview">
<InitImagePreview />
<div className="image-to-image-current-image-display">
<CurrentImagePreview imageToDisplay={imageToDisplay} />
</div>
</div>
{shouldShowImageDetails && (
<ImageMetadataViewer
image={imageToDisplay}
styleClass="img2img-metadata"
/>
)}
</div>
</>
) : (
<div className="image-to-image-single-preview">
<InitImagePreview />
</div>
)}
</>
) : (
<div className="upload-image">
<InvokeImageUploader
label="Upload or Drop Image Here"
icon={<FaUpload />}
styleClass="image-to-image-upload-btn"
dispatcher={uploadInitialImage}
/>
</div>
)}
</div>
);
}

View File

@@ -0,0 +1,78 @@
import { Box } from '@chakra-ui/react';
import React from 'react';
import { Feature } from '../../../app/features';
import { RootState, useAppSelector } from '../../../app/store';
import FaceRestore from '../../options/AdvancedOptions/FaceRestore/FaceRestore';
import FaceRestoreOptions from '../../options/AdvancedOptions/FaceRestore/FaceRestoreOptions';
import ImageFit from '../../options/AdvancedOptions/ImageToImage/ImageFit';
import ImageToImageStrength from '../../options/AdvancedOptions/ImageToImage/ImageToImageStrength';
import SeedOptions from '../../options/AdvancedOptions/Seed/SeedOptions';
import Upscale from '../../options/AdvancedOptions/Upscale/Upscale';
import UpscaleOptions from '../../options/AdvancedOptions/Upscale/UpscaleOptions';
import Variations from '../../options/AdvancedOptions/Variations/Variations';
import VariationsOptions from '../../options/AdvancedOptions/Variations/VariationsOptions';
import MainAdvancedOptions from '../../options/MainOptions/MainAdvancedOptions';
import MainOptions from '../../options/MainOptions/MainOptions';
import OptionsAccordion from '../../options/OptionsAccordion';
import OutputOptions from '../../options/OutputOptions';
import ProcessButtons from '../../options/ProcessButtons/ProcessButtons';
import PromptInput from '../../options/PromptInput/PromptInput';
export default function ImageToImagePanel() {
const showAdvancedOptions = useAppSelector(
(state: RootState) => state.options.showAdvancedOptions
);
const imageToImageAccordions = {
seed: {
header: (
<Box flex="1" textAlign="left">
Seed
</Box>
),
feature: Feature.SEED,
options: <SeedOptions />,
},
variations: {
header: <Variations />,
feature: Feature.VARIATIONS,
options: <VariationsOptions />,
},
face_restore: {
header: <FaceRestore />,
feature: Feature.FACE_CORRECTION,
options: <FaceRestoreOptions />,
},
upscale: {
header: <Upscale />,
feature: Feature.UPSCALE,
options: <UpscaleOptions />,
},
other: {
header: (
<Box flex="1" textAlign="left">
Other
</Box>
),
feature: Feature.OTHER,
options: <OutputOptions />,
},
};
return (
<div className="image-to-image-panel">
<PromptInput />
<ProcessButtons />
<MainOptions />
<ImageToImageStrength
label="Image To Image Strength"
styleClass="main-option-block image-to-image-strength-main-option"
/>
<ImageFit />
<MainAdvancedOptions />
{showAdvancedOptions ? (
<OptionsAccordion accordionInfo={imageToImageAccordions} />
) : null}
</div>
);
}

View File

@@ -0,0 +1,37 @@
import { IconButton, Image } from '@chakra-ui/react';
import React, { SyntheticEvent } from 'react';
import { MdClear } from 'react-icons/md';
import { RootState, useAppDispatch, useAppSelector } from '../../../app/store';
import { setInitialImagePath } from '../../options/optionsSlice';
export default function InitImagePreview() {
const initialImagePath = useAppSelector(
(state: RootState) => state.options.initialImagePath
);
const dispatch = useAppDispatch();
const handleClickResetInitialImage = (e: SyntheticEvent) => {
e.stopPropagation();
dispatch(setInitialImagePath(null));
};
return (
<div className="init-image-preview">
<div className="init-image-preview-header">
<h1>Initial Image</h1>
<IconButton
isDisabled={!initialImagePath}
size={'sm'}
aria-label={'Reset Initial Image'}
onClick={handleClickResetInitialImage}
icon={<MdClear />}
/>
</div>
{initialImagePath && (
<div className="init-image-image">
<Image fit={'contain'} src={initialImagePath} rounded={'md'} />
</div>
)}
</div>
);
}

View File

@@ -0,0 +1,18 @@
import { Image } from '@chakra-ui/react';
import React from 'react';
import { RootState, useAppSelector } from '../../../app/store';
export default function InitialImageOverlay() {
const initialImagePath = useAppSelector(
(state: RootState) => state.options.initialImagePath
);
return initialImagePath ? (
<Image
fit={'contain'}
src={initialImagePath}
rounded={'md'}
className={'checkerboard'}
/>
) : null;
}

View File

@@ -1,6 +1,8 @@
import { Tab, TabPanel, TabPanels, Tabs, Tooltip } from '@chakra-ui/react';
import _ from 'lodash';
import React, { ReactElement } from 'react';
import { ImageToImageWIP } from '../../common/components/WorkInProgress/ImageToImageWIP';
import { useHotkeys } from 'react-hotkeys-hook';
import { RootState, useAppDispatch, useAppSelector } from '../../app/store';
import InpaintingWIP from '../../common/components/WorkInProgress/InpaintingWIP';
import NodesWIP from '../../common/components/WorkInProgress/NodesWIP';
import OutpaintingWIP from '../../common/components/WorkInProgress/OutpaintingWIP';
@@ -11,41 +13,74 @@ import NodesIcon from '../../common/icons/NodesIcon';
import OutpaintIcon from '../../common/icons/OutpaintIcon';
import PostprocessingIcon from '../../common/icons/PostprocessingIcon';
import TextToImageIcon from '../../common/icons/TextToImageIcon';
import { setActiveTab } from '../options/optionsSlice';
import ImageToImage from './ImageToImage/ImageToImage';
import TextToImage from './TextToImage/TextToImage';
export const tab_dict = {
txt2img: {
title: <TextToImageIcon fill={'black'} boxSize={'2.5rem'} />,
panel: <TextToImage />,
tooltip: 'Text To Image',
},
img2img: {
title: <ImageToImageIcon fill={'black'} boxSize={'2.5rem'} />,
panel: <ImageToImage />,
tooltip: 'Image To Image',
},
inpainting: {
title: <InpaintIcon fill={'black'} boxSize={'2.5rem'} />,
panel: <InpaintingWIP />,
tooltip: 'Inpainting',
},
outpainting: {
title: <OutpaintIcon fill={'black'} boxSize={'2.5rem'} />,
panel: <OutpaintingWIP />,
tooltip: 'Outpainting',
},
nodes: {
title: <NodesIcon fill={'black'} boxSize={'2.5rem'} />,
panel: <NodesWIP />,
tooltip: 'Nodes',
},
postprocess: {
title: <PostprocessingIcon fill={'black'} boxSize={'2.5rem'} />,
panel: <PostProcessingWIP />,
tooltip: 'Post Processing',
},
};
export const tabMap = _.map(tab_dict, (tab, key) => key);
export default function InvokeTabs() {
const tab_dict = {
txt2img: {
title: <TextToImageIcon fill={'black'} boxSize={'2.5rem'} />,
panel: <TextToImage />,
tooltip: 'Text To Image',
},
img2img: {
title: <ImageToImageIcon fill={'black'} boxSize={'2.5rem'} />,
panel: <ImageToImageWIP />,
tooltip: 'Image To Image',
},
inpainting: {
title: <InpaintIcon fill={'black'} boxSize={'2.5rem'} />,
panel: <InpaintingWIP />,
tooltip: 'Inpainting',
},
outpainting: {
title: <OutpaintIcon fill={'black'} boxSize={'2.5rem'} />,
panel: <OutpaintingWIP />,
tooltip: 'Outpainting',
},
nodes: {
title: <NodesIcon fill={'black'} boxSize={'2.5rem'} />,
panel: <NodesWIP />,
tooltip: 'Nodes',
},
postprocess: {
title: <PostprocessingIcon fill={'black'} boxSize={'2.5rem'} />,
panel: <PostProcessingWIP />,
tooltip: 'Post Processing',
},
};
const activeTab = useAppSelector(
(state: RootState) => state.options.activeTab
);
const dispatch = useAppDispatch();
useHotkeys('1', () => {
dispatch(setActiveTab(0));
});
useHotkeys('2', () => {
dispatch(setActiveTab(1));
});
useHotkeys('3', () => {
dispatch(setActiveTab(2));
});
useHotkeys('4', () => {
dispatch(setActiveTab(3));
});
useHotkeys('5', () => {
dispatch(setActiveTab(4));
});
useHotkeys('6', () => {
dispatch(setActiveTab(5));
});
const renderTabs = () => {
const tabsToRender: ReactElement[] = [];
@@ -76,7 +111,16 @@ export default function InvokeTabs() {
};
return (
<Tabs className="app-tabs" variant={'unstyled'}>
<Tabs
isLazy
className="app-tabs"
variant={'unstyled'}
defaultIndex={activeTab}
index={activeTab}
onChange={(index: number) => {
dispatch(setActiveTab(index));
}}
>
<div className="app-tabs-list">{renderTabs()}</div>
<TabPanels className="app-tabs-panels">{renderTabPanels()}</TabPanels>
</Tabs>
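Keeping activeTab in Redux and wiring it through index/onChange makes the Tabs component fully controlled, so hotkeys and mouse clicks stay in sync. The six individual useHotkeys calls above could also be expressed as one binding, since react-hotkeys-hook accepts comma-separated combos. A sketch (not part of the diff; it assumes the v3 callback signature (event, handler), where handler.key holds the combo that fired):

import { useHotkeys } from 'react-hotkeys-hook';
import { useAppDispatch } from '../../app/store';
import { setActiveTab } from '../options/optionsSlice';

export function useTabHotkeys() {
  const dispatch = useAppDispatch();
  // One binding covers all six tabs; pressing '3' fires with
  // handler.key === '3', which maps to tab index 2.
  useHotkeys('1,2,3,4,5,6', (_event, handler) => {
    dispatch(setActiveTab(Number(handler.key) - 1));
  });
}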

View File

@@ -2,7 +2,7 @@
.text-to-image-workarea {
display: grid;
grid-template-columns: max-content auto max-content;
grid-template-columns: max-content auto;
column-gap: 1rem;
}
@@ -14,3 +14,20 @@
overflow-y: scroll;
@include HideScrollbar;
}
.text-to-image-display {
display: grid;
grid-template-areas: 'text-to-image-display';
.current-image-display,
.current-image-display-placeholder {
grid-area: text-to-image-display;
}
.image-gallery-area {
grid-area: text-to-image-display;
z-index: 2;
place-self: end;
margin: 1rem;
}
}

View File

@@ -1,14 +1,16 @@
import React from 'react';
import TextToImagePanel from './TextToImagePanel';
import CurrentImageDisplay from '../../gallery/CurrentImageDisplay';
import ImageGallery from '../../gallery/ImageGallery';
import TextToImagePanel from './TextToImagePanel';
export default function TextToImage() {
return (
<div className="text-to-image-workarea">
<TextToImagePanel />
<CurrentImageDisplay />
<ImageGallery />
<div className="text-to-image-display">
<CurrentImageDisplay />
<ImageGallery />
</div>
</div>
);
}

View File

@@ -1,7 +1,20 @@
import { Box } from '@chakra-ui/react';
import React from 'react';
import { Feature } from '../../../app/features';
import { RootState, useAppSelector } from '../../../app/store';
import FaceRestore from '../../options/AdvancedOptions/FaceRestore/FaceRestore';
import FaceRestoreOptions from '../../options/AdvancedOptions/FaceRestore/FaceRestoreOptions';
import ImageToImageAccordion from '../../options/AdvancedOptions/ImageToImage/ImageToImageAccordion';
import ImageToImageOptions from '../../options/AdvancedOptions/ImageToImage/ImageToImageOptions';
import SeedOptions from '../../options/AdvancedOptions/Seed/SeedOptions';
import Upscale from '../../options/AdvancedOptions/Upscale/Upscale';
import UpscaleOptions from '../../options/AdvancedOptions/Upscale/UpscaleOptions';
import Variations from '../../options/AdvancedOptions/Variations/Variations';
import VariationsOptions from '../../options/AdvancedOptions/Variations/VariationsOptions';
import MainAdvancedOptions from '../../options/MainOptions/MainAdvancedOptions';
import MainOptions from '../../options/MainOptions/MainOptions';
import OptionsAccordion from '../../options/OptionsAccordion';
import OutputOptions from '../../options/OutputOptions';
import ProcessButtons from '../../options/ProcessButtons/ProcessButtons';
import PromptInput from '../../options/PromptInput/PromptInput';
@@ -9,12 +22,57 @@ export default function TextToImagePanel() {
const showAdvancedOptions = useAppSelector(
(state: RootState) => state.options.showAdvancedOptions
);
const textToImageAccordions = {
seed: {
header: (
<Box flex="1" textAlign="left">
Seed
</Box>
),
feature: Feature.SEED,
options: <SeedOptions />,
},
variations: {
header: <Variations />,
feature: Feature.VARIATIONS,
options: <VariationsOptions />,
},
face_restore: {
header: <FaceRestore />,
feature: Feature.FACE_CORRECTION,
options: <FaceRestoreOptions />,
},
upscale: {
header: <Upscale />,
feature: Feature.UPSCALE,
options: <UpscaleOptions />,
},
// img2img: {
// header: <ImageToImageAccordion />,
// feature: Feature.IMAGE_TO_IMAGE,
// options: <ImageToImageOptions />,
// },
other: {
header: (
<Box flex="1" textAlign="left">
Other
</Box>
),
feature: Feature.OTHER,
options: <OutputOptions />,
},
};
return (
<div className="text-to-image-panel">
<PromptInput />
<ProcessButtons />
<MainOptions />
{showAdvancedOptions ? <OptionsAccordion /> : null}
<MainAdvancedOptions />
{showAdvancedOptions ? (
<OptionsAccordion accordionInfo={textToImageAccordions} />
) : null}
</div>
);
}

View File

@@ -8,6 +8,9 @@ $app-width: calc(100vw - $app-cutoff);
$app-height: calc(100vh - $app-cutoff);
$app-content-height: calc(100vh - $app-content-height-cutoff);
$app-gallery-height: calc(100vh - ($app-content-height-cutoff + 6rem));
$app-gallery-popover-height: calc(
100vh - ($app-content-height-cutoff - 2.5rem)
);
$app-metadata-height: calc(100vh - ($app-content-height-cutoff + 4.4rem));
// option bar

View File

@@ -0,0 +1,8 @@
@keyframes slideOut {
from {
transform: translateX(10rem);
}
to {
transform: translateX(0);
}
}

View File

@@ -46,7 +46,7 @@
--btn-red-hover: rgb(255, 75, 75);
--btn-load-more: rgb(30, 32, 42);
--btn-load-more-hover: rgb(36, 38, 48);
--btn-load-more-hover: rgb(54, 56, 66);
// Switch
--switch-bg-color: rgb(100, 102, 110);
@@ -89,4 +89,10 @@
--console-icon-button-bg-color: rgb(50, 53, 64);
--console-icon-button-bg-color-hover: rgb(70, 73, 84);
// Img2Img
--img2img-img-bg-color: rgb(30, 32, 42);
// Gallery
--gallery-resizeable-color: rgb(36, 38, 48);
}

View File

@@ -46,7 +46,7 @@
--btn-red-hover: rgb(255, 55, 55);
--btn-load-more: rgb(202, 204, 206);
--btn-load-more-hover: rgb(206, 208, 210);
--btn-load-more-hover: rgb(178, 180, 182);
// Switch
--switch-bg-color: rgb(178, 180, 182);
@@ -88,4 +88,10 @@
--console-border-color: rgb(160, 162, 164);
--console-icon-button-bg-color: var(--switch-bg-color);
--console-icon-button-bg-color-hover: var(--console-border-color);
// Img2Img
--img2img-img-bg-color: rgb(180, 182, 184);
// Gallery
--gallery-resizeable-color: rgb(192, 194, 196);
}

View File

@@ -2,6 +2,7 @@
@use 'Colors_Dark';
@use 'Colors_Light';
@use 'Fonts';
@use 'Animations';
// Component Styles
//app
@@ -11,6 +12,7 @@
@use '../features/system/SiteHeader.scss';
@use '../features/system/StatusIndicator.scss';
@use '../features/system/SettingsModal/SettingsModal.scss';
@use '../features/system/HotkeysModal/HotkeysModal.scss';
@use '../features/system/Console.scss';
// options
@@ -25,10 +27,12 @@
@use '../features/gallery/CurrentImageDisplay.scss';
@use '../features/gallery/ImageGallery.scss';
@use '../features/gallery/InvokePopover.scss';
@use '../features/gallery/ImageMetaDataViewer/ImageMetadataViewer.scss';
// Tabs
@use '../features/tabs/InvokeTabs.scss';
@use '../features/tabs/TextToImage/TextToImage.scss';
@use '../features/tabs/ImageToImage/ImageToImage.scss';
// Component Shared
@use '../common/components/IAINumberInput.scss';

View File

@@ -1582,6 +1582,11 @@ balanced-match@^1.0.0:
resolved "https://registry.yarnpkg.com/balanced-match/-/balanced-match-1.0.2.tgz#e83e3a7e3f300b34cb9d87f615fa0cbf357690ee"
integrity sha512-3oSeUO0TMV67hN1AmbXsK4yaqU7tjiHlbxRDZOpH0KW9+CeX4bRAaX0Anxt0tx2MrpRpWwQaPwIlISEJhYU5Pw==
base64id@2.0.0, base64id@~2.0.0:
version "2.0.0"
resolved "https://registry.yarnpkg.com/base64id/-/base64id-2.0.0.tgz#2770ac6bc47d312af97a8bf9a634342e0cd25cb6"
integrity sha512-lGe34o6EHj9y3Kts9R4ZYs/Gr+6N7MCaMlIFA3F1R2O5/m7K06AxfSeO5530PEERE6/WyEg3lsuyw4GHlPZHog==
binary-extensions@^2.0.0:
version "2.2.0"
resolved "https://registry.yarnpkg.com/binary-extensions/-/binary-extensions-2.2.0.tgz#75f502eeaf9ffde42fc98829645be4ea76bd9e2d"
@@ -2359,6 +2364,11 @@ hoist-non-react-statics@^3.3.0, hoist-non-react-statics@^3.3.1, hoist-non-react-
dependencies:
react-is "^16.7.0"
hotkeys-js@3.9.4:
version "3.9.4"
resolved "https://registry.yarnpkg.com/hotkeys-js/-/hotkeys-js-3.9.4.tgz#ce1aa4c3a132b6a63a9dd5644fc92b8a9b9cbfb9"
integrity sha512-2zuLt85Ta+gIyvs4N88pCYskNrxf1TFv3LR9t5mdAZIX8BcgQQ48F2opUptvHa6m8zsy5v/a0i9mWzTrlNWU0Q==
ignore@^5.2.0:
version "5.2.0"
resolved "https://registry.yarnpkg.com/ignore/-/ignore-5.2.0.tgz#6d3bac8fa7fe0d45d9f9be7bac2fc279577e345a"
@@ -2618,7 +2628,7 @@ normalize-path@^3.0.0, normalize-path@~3.0.0:
resolved "https://registry.yarnpkg.com/normalize-path/-/normalize-path-3.0.0.tgz#0dcd69ff23a1c9b11fd0978316644a0388216a65"
integrity sha512-6eZs5Ls3WtCisHWp9S2GUy8dqkpGi4BVSz3GaqiE6ezub0512ESztXUwUB6C6IKbQkY2Pnb/mD4WYojCRwcwLA==
object-assign@^4.1.1:
object-assign@^4, object-assign@^4.1.1:
version "4.1.1"
resolved "https://registry.yarnpkg.com/object-assign/-/object-assign-4.1.1.tgz#2109adc7965887cfc05cbbd442cac8bfbb360863"
integrity sha512-rJgTQnkUnH1sFw8yT6VSU3zD3sWmu6sZhIseY8VX+GRu3P6F7Fu+JNDoXfklElbLJSnc3FUQHVe4cU5hj+BcUg==
@@ -2818,6 +2828,13 @@ react-focus-lock@^2.9.1:
use-callback-ref "^1.3.0"
use-sidecar "^1.1.2"
react-hotkeys-hook@^3.4.7:
version "3.4.7"
resolved "https://registry.yarnpkg.com/react-hotkeys-hook/-/react-hotkeys-hook-3.4.7.tgz#e16a0a85f59feed9f48d12cfaf166d7df4c96b7a"
integrity sha512-+bbPmhPAl6ns9VkXkNNyxlmCAIyDAcWbB76O4I0ntr3uWCRuIQf/aRLartUahe9chVMPj+OEzzfk3CQSjclUEQ==
dependencies:
hotkeys-js "3.9.4"
react-icons@^4.4.0:
version "4.4.0"
resolved "https://registry.yarnpkg.com/react-icons/-/react-icons-4.4.0.tgz#a13a8a20c254854e1ec9aecef28a95cdf24ef703"
@@ -3044,6 +3061,18 @@ socket.io-parser@~4.2.0:
"@socket.io/component-emitter" "~3.1.0"
debug "~4.3.1"
socket.io@^4.5.2:
version "4.5.2"
resolved "https://registry.yarnpkg.com/socket.io/-/socket.io-4.5.2.tgz#1eb25fd380ab3d63470aa8279f8e48d922d443ac"
integrity sha512-6fCnk4ARMPZN448+SQcnn1u8OHUC72puJcNtSgg2xS34Cu7br1gQ09YKkO1PFfDn/wyUE9ZgMAwosJed003+NQ==
dependencies:
accepts "~1.3.4"
base64id "~2.0.0"
debug "~4.3.2"
engine.io "~6.2.0"
socket.io-adapter "~2.4.0"
socket.io-parser "~4.2.0"
"source-map-js@>=0.6.2 <2.0.0", source-map-js@^1.0.2:
version "1.0.2"
resolved "https://registry.yarnpkg.com/source-map-js/-/source-map-js-1.0.2.tgz#adbc361d9c62df380125e7f161f71c826f1e490c"

View File

@@ -1,4 +0,0 @@
'''
Initialization file for the ldm.dream.generator package
'''
from .base import Generator

View File

@@ -1,4 +0,0 @@
'''
Initialization file for the ldm.dream.restoration package
'''
from .base import Restoration

View File

@@ -19,7 +19,7 @@ import cv2
import skimage
from omegaconf import OmegaConf
from ldm.dream.generator.base import downsampling
from ldm.invoke.generator.base import downsampling
from PIL import Image, ImageOps
from torch import nn
from pytorch_lightning import seed_everything, logging
@@ -28,11 +28,11 @@ from ldm.util import instantiate_from_config
from ldm.models.diffusion.ddim import DDIMSampler
from ldm.models.diffusion.plms import PLMSSampler
from ldm.models.diffusion.ksampler import KSampler
from ldm.dream.pngwriter import PngWriter
from ldm.dream.args import metadata_from_png
from ldm.dream.image_util import InitImageResizer
from ldm.dream.devices import choose_torch_device, choose_precision
from ldm.dream.conditioning import get_uc_and_c
from ldm.invoke.pngwriter import PngWriter
from ldm.invoke.args import metadata_from_png
from ldm.invoke.image_util import InitImageResizer
from ldm.invoke.devices import choose_torch_device, choose_precision
from ldm.invoke.conditioning import get_uc_and_c
def fix_func(orig):
if hasattr(torch.backends, 'mps') and torch.backends.mps.is_available():
@@ -52,41 +52,7 @@ torch.randint_like = fix_func(torch.randint_like)
torch.bernoulli = fix_func(torch.bernoulli)
torch.multinomial = fix_func(torch.multinomial)
def fix_func(orig):
if hasattr(torch.backends, 'mps') and torch.backends.mps.is_available():
def new_func(*args, **kw):
device = kw.get("device", "mps")
kw["device"]="cpu"
return orig(*args, **kw).to(device)
return new_func
return orig
torch.rand = fix_func(torch.rand)
torch.rand_like = fix_func(torch.rand_like)
torch.randn = fix_func(torch.randn)
torch.randn_like = fix_func(torch.randn_like)
torch.randint = fix_func(torch.randint)
torch.randint_like = fix_func(torch.randint_like)
torch.bernoulli = fix_func(torch.bernoulli)
torch.multinomial = fix_func(torch.multinomial)
def fix_func(orig):
if hasattr(torch.backends, 'mps') and torch.backends.mps.is_available():
def new_func(*args, **kw):
device = kw.get("device", "mps")
kw["device"]="cpu"
return orig(*args, **kw).to(device)
return new_func
return orig
torch.rand = fix_func(torch.rand)
torch.rand_like = fix_func(torch.rand_like)
torch.randn = fix_func(torch.randn)
torch.randn_like = fix_func(torch.randn_like)
torch.randint = fix_func(torch.randint)
torch.randint_like = fix_func(torch.randint_like)
torch.bernoulli = fix_func(torch.bernoulli)
torch.multinomial = fix_func(torch.multinomial)
"""Simplified text to image API for stable diffusion/latent diffusion
@@ -174,7 +140,8 @@ class Generate:
config = None,
gfpgan=None,
codeformer=None,
esrgan=None
esrgan=None,
free_gpu_mem=False,
):
models = OmegaConf.load(conf)
mconfig = models[model]
@@ -201,6 +168,7 @@ class Generate:
self.gfpgan = gfpgan
self.codeformer = codeformer
self.esrgan = esrgan
self.free_gpu_mem = free_gpu_mem
# Note that in previous versions, there was an option to pass the
# device to Generate(). However the device was then ignored, so
@@ -327,9 +295,9 @@ class Generate:
def process_image(image,seed):
image.save(f'images/{seed}.png')
The callback used by the prompt2png() can be found in ldm/dream_util.py. It contains code
to create the requested output directory, select a unique informative name for each image, and
write the prompt into the PNG metadata.
The code used to save images to a directory can be found in ldm/invoke/pngwriter.py.
It contains code to create the requested output directory, select a unique informative
name for each image, and write the prompt into the PNG metadata.
"""
# TODO: convert this into a getattr() loop
steps = steps or self.steps
@@ -417,7 +385,8 @@ class Generate:
generator = self._make_txt2img()
generator.set_variation(
self.seed, variation_amount, with_variations)
self.seed, variation_amount, with_variations
)
results = generator.generate(
prompt,
iterations=iterations,
@@ -553,7 +522,7 @@ class Generate:
)
elif tool == 'outcrop':
from ldm.dream.restoration.outcrop import Outcrop
from ldm.invoke.restoration.outcrop import Outcrop
extend_instructions = {}
for direction,pixels in _pairwise(opt.outcrop):
extend_instructions[direction]=int(pixels)
@@ -590,7 +559,7 @@ class Generate:
image_callback = callback,
)
elif tool == 'outpaint':
from ldm.dream.restoration.outpaint import Outpaint
from ldm.invoke.restoration.outpaint import Outpaint
restorer = Outpaint(image,self)
return restorer.process(
opt,
@@ -626,18 +595,14 @@ class Generate:
height,
)
if image.width < self.width and image.height < self.height:
print(f'>> WARNING: img2img and inpainting may produce unexpected results with initial images smaller than {self.width}x{self.height} in both dimensions')
# if image has a transparent area and no mask was provided, then try to generate mask
if self._has_transparency(image) and not mask:
print(
'>> Initial image has transparent areas. Will inpaint in these regions.')
if self._check_for_erasure(image):
print(
'>> WARNING: Colors underneath the transparent region seem to have been erased.\n',
'>> Inpainting will be suboptimal. Please preserve the colors when making\n',
'>> a transparency mask, or provide mask explicitly using --init_mask (-M).'
)
if self._has_transparency(image):
self._transparency_check_and_warning(image, mask)
# this returns a torch tensor
init_mask = self._create_init_mask(image,width,height,fit=fit)
init_mask = self._create_init_mask(image, width, height, fit=fit)
if (image.width * image.height) > (self.width * self.height):
print(">> This input is larger than your defaults. If you run out of memory, please use a smaller image.")
@@ -653,39 +618,39 @@ class Generate:
def _make_base(self):
if not self.generators.get('base'):
from ldm.dream.generator import Generator
from ldm.invoke.generator import Generator
self.generators['base'] = Generator(self.model, self.precision)
return self.generators['base']
def _make_img2img(self):
if not self.generators.get('img2img'):
from ldm.dream.generator.img2img import Img2Img
from ldm.invoke.generator.img2img import Img2Img
self.generators['img2img'] = Img2Img(self.model, self.precision)
return self.generators['img2img']
def _make_embiggen(self):
if not self.generators.get('embiggen'):
from ldm.dream.generator.embiggen import Embiggen
from ldm.invoke.generator.embiggen import Embiggen
self.generators['embiggen'] = Embiggen(self.model, self.precision)
return self.generators['embiggen']
def _make_txt2img(self):
if not self.generators.get('txt2img'):
from ldm.dream.generator.txt2img import Txt2Img
from ldm.invoke.generator.txt2img import Txt2Img
self.generators['txt2img'] = Txt2Img(self.model, self.precision)
self.generators['txt2img'].free_gpu_mem = self.free_gpu_mem
return self.generators['txt2img']
def _make_txt2img2img(self):
if not self.generators.get('txt2img2'):
from ldm.dream.generator.txt2img2img import Txt2Img2Img
from ldm.invoke.generator.txt2img2img import Txt2Img2Img
self.generators['txt2img2'] = Txt2Img2Img(self.model, self.precision)
self.generators['txt2img2'].free_gpu_mem = self.free_gpu_mem
return self.generators['txt2img2']
def _make_inpaint(self):
if not self.generators.get('inpaint'):
from ldm.dream.generator.inpaint import Inpaint
from ldm.invoke.generator.inpaint import Inpaint
self.generators['inpaint'] = Inpaint(self.model, self.precision)
return self.generators['inpaint']
@@ -816,7 +781,7 @@ class Generate:
print(msg)
# Be warned: config is the path to the model config file, not the dream conf file!
# Be warned: config is the path to the model config file, not the invoke conf file!
# Also note that we can get config and weights from self, so why do we need to
# pass them as args?
def _load_model_from_config(self, config, weights):
@@ -881,6 +846,7 @@ class Generate:
print(
f'>> loaded input image of size {image.width}x{image.height}'
)
image = ImageOps.exif_transpose(image)
return image
def _create_init_image(self, image, width, height, fit=True):
@@ -889,7 +855,6 @@ class Generate:
image = self._fit_image(image, (width, height))
else:
image = self._squeeze_image(image)
image = np.array(image).astype(np.float32) / 255.0
image = image[None].transpose(0, 3, 1, 2)
image = torch.from_numpy(image)
@@ -906,7 +871,6 @@ class Generate:
image = self._fit_image(image, (width, height))
else:
image = self._squeeze_image(image)
image = image.resize((image.width//downsampling, image.height //
downsampling), resample=Image.Resampling.NEAREST)
image = np.array(image)
@@ -953,6 +917,17 @@ class Generate:
colored += 1
return colored == 0
def _transparency_check_and_warning(self,image, mask):
if not mask:
print(
'>> Initial image has transparent areas. Will inpaint in these regions.')
if self._check_for_erasure(image):
print(
'>> WARNING: Colors underneath the transparent region seem to have been erased.\n',
'>> Inpainting will be suboptimal. Please preserve the colors when making\n',
'>> a transparency mask, or provide mask explicitly using --init_mask (-M).'
)
def _squeeze_image(self, image):
x, y, resize_needed = self._resolution_check(image.width, image.height)
if resize_needed:

View File

@@ -1,7 +1,7 @@
"""Helper class for dealing with image generation arguments.
The Args class parses both the command line (shell) arguments, as well as the
command string passed at the dream> prompt. It serves as the definitive repository
command string passed at the invoke> prompt. It serves as the definitive repository
of all the arguments used by Generate and their default values, and implements the
preliminary metadata standards discussed here:
@@ -19,7 +19,7 @@ To use:
print('oops')
sys.exit(-1)
# read in a command passed to the dream> prompt:
# read in a command passed to the invoke> prompt:
opts = opt.parse_cmd('do androids dream of electric sheep? -H256 -W1024 -n4')
# The Args object acts like a namespace object
@@ -64,7 +64,7 @@ To generate a dict representing RFC266 metadata:
This will generate an RFC266 dictionary that can then be turned into a JSON
and written to the PNG file. The optional seeds, weights, model_hash and
postprocesser arguments are not available to the opt object and so must be
provided externally. See how dream.py does it.
provided externally. See how invoke.py does it.
Note that this function was originally called format_metadata() and a wrapper
is provided that issues a deprecation notice.
@@ -90,8 +90,8 @@ import re
import copy
import base64
import functools
import ldm.dream.pngwriter
from ldm.dream.conditioning import split_weighted_subprompts
import ldm.invoke.pngwriter
from ldm.invoke.conditioning import split_weighted_subprompts
SAMPLER_CHOICES = [
'ddim',
@@ -120,7 +120,7 @@ class Args(object):
'''
Initialize new Args class. It takes two optional arguments, an argparse
parser for switches given on the shell command line, and an argparse
parser for switches given on the dream> CLI line. If one or both are
parser for switches given on the invoke> CLI line. If one or both are
missing, it creates appropriate parsers internally.
'''
self._arg_parser = arg_parser or self._create_arg_parser()
@@ -137,7 +137,7 @@ class Args(object):
return None
def parse_cmd(self,cmd_string):
'''Parse a dream>-style command string '''
'''Parse an invoke>-style command string '''
command = cmd_string.replace("'", "\\'")
try:
elements = shlex.split(command)
@@ -478,23 +478,23 @@ class Args(object):
)
return parser
# This creates the parser that processes commands on the dream> command line
# This creates the parser that processes commands on the invoke> command line
def _create_dream_cmd_parser(self):
parser = argparse.ArgumentParser(
formatter_class=RawTextHelpFormatter,
description=
"""
*Image generation:*
dream> a fantastic alien landscape -W576 -H512 -s60 -n4
invoke> a fantastic alien landscape -W576 -H512 -s60 -n4
*postprocessing*
!fix applies upscaling/facefixing to a previously-generated image.
dream> !fix 0000045.4829112.png -G1 -U4 -ft codeformer
invoke> !fix 0000045.4829112.png -G1 -U4 -ft codeformer
*History manipulation*
!fetch retrieves the command used to generate an earlier image.
dream> !fetch 0000015.8929913.png
dream> a fantastic alien landscape -W 576 -H 512 -s 60 -A plms -C 7.5
invoke> !fetch 0000015.8929913.png
invoke> a fantastic alien landscape -W 576 -H 512 -s 60 -A plms -C 7.5
!history lists all the commands issued during the current session.
@@ -811,7 +811,7 @@ def metadata_from_png(png_file_path) -> Args:
an Args object containing the image metadata. Note that this
returns a single Args object, not multiple.
'''
meta = ldm.dream.pngwriter.retrieve_metadata(png_file_path)
meta = ldm.invoke.pngwriter.retrieve_metadata(png_file_path)
if 'sd-metadata' in meta and len(meta['sd-metadata'])>0 :
return metadata_loads(meta)[0]
else:

View File

@@ -0,0 +1,4 @@
'''
Initialization file for the ldm.invoke.generator package
'''
from .base import Generator

View File

@@ -1,5 +1,5 @@
'''
Base class for ldm.dream.generator.*
Base class for ldm.invoke.generator.*
including img2img, txt2img, and inpaint
'''
import torch
@@ -9,7 +9,7 @@ from tqdm import tqdm, trange
from PIL import Image
from einops import rearrange, repeat
from pytorch_lightning import seed_everything
from ldm.dream.devices import choose_autocast
from ldm.invoke.devices import choose_autocast
from ldm.util import rand_perlin_2d
downsampling = 8
@@ -21,6 +21,8 @@ class Generator():
self.seed = None
self.latent_channels = model.channels
self.downsampling_factor = downsampling # BUG: should come from model or config
self.perlin = 0.0
self.threshold = 0
self.variation_amount = 0
self.with_variations = []
@@ -122,8 +124,8 @@ class Generator():
raise NotImplementedError("get_noise() must be implemented in a descendent class")
def get_perlin_noise(self,width,height):
return torch.stack([rand_perlin_2d((height, width), (8, 8)).to(self.model.device) for _ in range(self.latent_channels)], dim=0)
fixdevice = 'cpu' if (self.model.device.type == 'mps') else self.model.device
return torch.stack([rand_perlin_2d((height, width), (8, 8), device = self.model.device).to(fixdevice) for _ in range(self.latent_channels)], dim=0).to(self.model.device)
def new_seed(self):
self.seed = random.randrange(0, np.iinfo(np.uint32).max)

View File

@@ -1,15 +1,15 @@
'''
ldm.dream.generator.embiggen descends from ldm.dream.generator
and generates with ldm.dream.generator.img2img
ldm.invoke.generator.embiggen descends from ldm.invoke.generator
and generates with ldm.invoke.generator.img2img
'''
import torch
import numpy as np
from tqdm import trange
from PIL import Image
from ldm.dream.generator.base import Generator
from ldm.dream.generator.img2img import Img2Img
from ldm.dream.devices import choose_autocast
from ldm.invoke.generator.base import Generator
from ldm.invoke.generator.img2img import Img2Img
from ldm.invoke.devices import choose_autocast
from ldm.models.diffusion.ddim import DDIMSampler
class Embiggen(Generator):
@@ -107,7 +107,7 @@ class Embiggen(Generator):
initsuperwidth = round(initsuperwidth*embiggen[0])
initsuperheight = round(initsuperheight*embiggen[0])
if embiggen[1] > 0: # No point in ESRGAN upscaling if strength is set zero
from ldm.dream.restoration.realesrgan import ESRGAN
from ldm.invoke.restoration.realesrgan import ESRGAN
esrgan = ESRGAN()
print(
f'>> ESRGAN upscaling init image prior to cutting with Embiggen with strength {embiggen[1]}')

View File

@@ -1,11 +1,11 @@
'''
ldm.dream.generator.img2img descends from ldm.dream.generator
ldm.invoke.generator.img2img descends from ldm.invoke.generator
'''
import torch
import numpy as np
from ldm.dream.devices import choose_autocast
from ldm.dream.generator.base import Generator
from ldm.invoke.devices import choose_autocast
from ldm.invoke.generator.base import Generator
from ldm.models.diffusion.ddim import DDIMSampler
class Img2Img(Generator):
@@ -49,6 +49,7 @@ class Img2Img(Generator):
img_callback = step_callback,
unconditional_guidance_scale=cfg_scale,
unconditional_conditioning=uc,
init_latent = self.init_latent, # changes how noising is performed in ksampler
)
return self.sample_to_image(samples)

View File

@@ -1,12 +1,12 @@
'''
ldm.dream.generator.inpaint descends from ldm.dream.generator
ldm.invoke.generator.inpaint descends from ldm.invoke.generator
'''
import torch
import numpy as np
from einops import rearrange, repeat
from ldm.dream.devices import choose_autocast
from ldm.dream.generator.img2img import Img2Img
from ldm.invoke.devices import choose_autocast
from ldm.invoke.generator.img2img import Img2Img
from ldm.models.diffusion.ddim import DDIMSampler
from ldm.models.diffusion.ksampler import KSampler
@@ -27,7 +27,7 @@ class Inpaint(Img2Img):
# klms samplers not supported yet, so ignore previous sampler
if isinstance(sampler,KSampler):
print(
f">> sampler '{sampler.__class__.__name__}' is not yet supported for inpainting, using DDIMSampler instead."
f">> Using recommended DDIM sampler for inpainting."
)
sampler = DDIMSampler(self.model, device=self.model.device)

View File

@@ -1,10 +1,10 @@
'''
ldm.dream.generator.txt2img inherits from ldm.dream.generator
ldm.invoke.generator.txt2img inherits from ldm.invoke.generator
'''
import torch
import numpy as np
from ldm.dream.generator.base import Generator
from ldm.invoke.generator.base import Generator
class Txt2Img(Generator):
def __init__(self, model, precision):

Some files were not shown because too many files have changed in this diff.