Compare commits

...

617 Commits

Author SHA1 Message Date
Lincoln Stein
8e2fd4c96a fix ROCm version 2023-03-27 22:38:04 -04:00
Lincoln Stein
2f424f29a0 generalized root directory version updating 2023-03-27 22:35:12 -04:00
Lincoln Stein
90f00db032 version 2.3.3-rc2
- installer now installs the pretty dialog-based console launcher
- added dialogrc for custom colors
- add updater to download new launcher when users do an update
2023-03-27 21:10:24 -04:00
Lincoln Stein
77a63e5310 this is release candidate 2.3.3-rc1 (#3033)
This includes a number of bug fixes described in the draft release
notes.

It also incorporates a modified version of the dialog-based invoke.sh
script suggested by JoshuaKimsey:
https://discord.com/channels/1020123559063990373/1089119602425995304
2023-03-27 12:09:56 -04:00
Lincoln Stein
8f921741a5 Update installer/templates/invoke.sh.in
Co-authored-by: Eugene Brodsky <ebr@users.noreply.github.com>
2023-03-26 23:45:00 -04:00
Lincoln Stein
589a817952 enhance model autodetection during import (#3043)
- Imported V2 legacy models will now autoconvert into diffusers at load
time regardless of setting of --ckpt_convert.

- model manager `heuristic_import()` function now looks for side-by-side
yaml and vae files for custom configuration and VAE respectively.

Example of this:

illuminati-v1.1.safetensors
illuminati-v1.1.vae.safetensors
illuminati-v1.1.yaml

When the user tries to import `illuminati-v1.1.safetensors`, the yaml
file will be used for its configuration, and the VAE will be used for
its VAE. Conversion to diffusers will happen if needed, and the yaml
file will be used to determine which V2 format (if any) to apply.

NOTE that the changes to `ckpt_to_diffusers.py` were previously reviewed
by @JPPhoto on the `main` branch and approved.
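
A minimal sketch of the side-by-side lookup described above, using a
hypothetical helper name (the real logic lives inside `heuristic_import()`):

```
from pathlib import Path
from typing import Optional, Tuple

def find_sidecar_files(weights: Path) -> Tuple[Optional[Path], Optional[Path]]:
    """For foo.safetensors, look for sibling foo.yaml and foo.vae.safetensors."""
    base = weights.with_suffix("")  # drops only the final suffix
    config = base.parent / f"{base.name}.yaml"
    vae = base.parent / f"{base.name}.vae.safetensors"
    return (config if config.exists() else None,
            vae if vae.exists() else None)
```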
2023-03-26 11:49:00 -04:00
Lincoln Stein
dcb21c0f46 enhance model autodetection during import
- Imported V2 legacy models will now autoconvert into diffusers
  at load time regardless of setting of --ckpt_convert.

- model manager `heuristic_import()` function now looks for
  side-by-side yaml and vae files for custom configuration and VAE
  respectively.

Example of this:

  illuminati-v1.1.safetensors
  illuminati-v1.1.vae.safetensors
  illuminati-v1.1.yaml

When the user tries to import `illuminati-v1.1.safetensors`, the yaml
file will be used for its configuration, and the VAE will be used for
its VAE. Conversion to diffusers will happen if needed, and the yaml
file will be used to determine which V2 format (if any) to apply.
2023-03-26 10:20:51 -04:00
Lincoln Stein
1cb88960fe this is release candidate 2.3.3-rc1
Incorporates a modified version of the dialog-based invoke.sh script
suggested by JoshuaKimsey:
https://discord.com/channels/1020123559063990373/1089119602425995304
2023-03-25 16:58:08 -04:00
Eugene Brodsky
610a1483b7 installer: fix indentation in invoke.sh template (tabs -> spaces) 2023-03-25 13:52:37 -04:00
Lincoln Stein
b4e7fc0d1d prevent infinite loop when launching developer's console 2023-03-25 13:52:37 -04:00
blessedcoolant
b792b7d68c Security patch: Scan all pickle files, including VAEs; default to safetensor loading (#3011)
Several related security fixes:

1. Port #2946 from main to 2.3.2 branch - this closes a hole that allows
a pickle checkpoint file to masquerade as a safetensors file.
2. Add pickle scanning to the checkpoint to diffusers conversion script.
3. Pickle scan VAE non-safetensors files
4. Avoid running scanner twice on same file during the probing and
conversion process.
5. Clean up diagnostic messages.
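
As an illustration of the masquerade hole closed in item 1: a genuine
safetensors file begins with an 8-byte little-endian header length followed
by a JSON header, so one plausible check (a sketch, not the shipped code) is:

```
import json
import struct

def looks_like_safetensors(path: str) -> bool:
    """Reject pickle files renamed to .safetensors by validating the header."""
    with open(path, "rb") as f:
        prefix = f.read(8)
        if len(prefix) < 8:
            return False
        (header_len,) = struct.unpack("<Q", prefix)
        if header_len > 100 * 1024 * 1024:  # implausibly large header
            return False
        try:
            header = json.loads(f.read(header_len))
        except (ValueError, UnicodeDecodeError):
            return False
    return isinstance(header, dict)
```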
2023-03-24 22:35:15 +13:00
blessedcoolant
abaa91195d Merge branch 'v2.3' into security/scan-ckpt-models 2023-03-24 22:11:34 +13:00
Lincoln Stein
1806bfb755 fix batch generation logfile name to be compatible with Windows OS (#3018)
- The command `invokeai-batch --invoke` was creating a time-stamped
logfile with colons in its name, which is a Windows no-no. This corrects
the problem by writing the timestamp out as "13-06-2023_8-35-10"

- Closes #3005
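
A sketch of the colon-free timestamp; the exact format string is an
assumption based on the example above:

```
from datetime import datetime

# Colons are illegal in Windows filenames, so the timestamp is rendered
# with dashes and an underscore, e.g. "13-06-2023_08-35-10".
stamp = datetime.now().strftime("%d-%m-%Y_%H-%M-%S")
logfile = f"invokeai_batch_{stamp}.log"  # hypothetical filename pattern
```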
2023-03-24 01:32:24 -04:00
blessedcoolant
7377855c02 Merge branch 'v2.3' into bugfix/batch-logfile-format 2023-03-24 18:10:00 +13:00
Lincoln Stein
5f2a6f24cf fix corrupted outputs/.next_prefix file (#3020)
- Since 2.3.2 invokeai stores the next PNG file's numeric prefix in a
file named `.next_prefix` in the outputs directory. This avoids the
overhead of doing a directory listing to find out what file number comes
next.

- The code uses advisory locking to prevent corruption of this file in
the event that multiple invokeai instances try to access it simultaneously, but
some users have experienced corruption of the file nevertheless.

- This PR addresses the problem by detecting a potentially corrupted
`.next_prefix` file and falling back to the directory listing method. A
fixed version of the file is then written out.

- Closes #3001
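
A simplified sketch of the detect-and-fall-back behavior (hypothetical
helper; the real code also takes an advisory lock around the file access):

```
from pathlib import Path

def next_prefix(outdir: Path) -> int:
    marker = outdir / ".next_prefix"
    try:
        value = int(marker.read_text().strip())
    except (FileNotFoundError, ValueError):
        # Missing or corrupted marker: fall back to a directory listing
        # and derive the next prefix from the highest existing PNG.
        prefixes = [int(p.name.split(".")[0])
                    for p in outdir.glob("*.png")
                    if p.name.split(".")[0].isdigit()]
        value = max(prefixes, default=-1) + 1
    marker.write_text(str(value + 1))  # write out a fixed marker file
    return value
```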
2023-03-23 23:53:10 -04:00
Lincoln Stein
5b8b92d957 Merge branch 'v2.3' into bugfix/batch-logfile-format 2023-03-23 23:34:05 -04:00
Lincoln Stein
352202a7bc Merge branch 'v2.3' into bugfix/fix-corrupted-image-sequence-file 2023-03-23 23:28:11 -04:00
blessedcoolant
82144de85f Fix textual inversion documentation and code (#3015)
This PR addresses issues raised by #3008.
    
1. Update documentation to indicate the correct maximum batch size for
TI training when xformers is and isn't used.
    
2. Update textual inversion code so that the default for batch size is
aware of xformers availability.
    
3. Add documentation for how to launch TI with distributed learning.
2023-03-24 16:14:47 +13:00
Lincoln Stein
b70d713e89 Merge branch 'v2.3' into bugfix/batch-logfile-format 2023-03-23 23:12:43 -04:00
blessedcoolant
e39dde4140 Merge branch 'v2.3' into feat/adjust-ti-param-for-xformers 2023-03-24 15:40:38 +13:00
blessedcoolant
c151541703 bump version to 2.3.3-rc1 (#3019)
Lots of little bugs have been squashed since 2.3.2 and a new minor point
release is imminent. This PR updates the version number in preparation
for a RC.
2023-03-24 15:27:57 +13:00
Lincoln Stein
29b348ece1 fix corrupted outputs/.next_prefix file
- Since 2.3.2 invokeai stores the next PNG file's numeric prefix in a
  file named `.next_prefix` in the outputs directory. This avoids the
  overhead of doing a directory listing to find out what file number
  comes next.

- The code uses advisory locking to prevent corruption of this file in
  the event that multiple invokeai instances try to access it simultaneously,
  but some users have experienced corruption of the file nevertheless.

- This PR addresses the problem by detecting a potentially corrupted
  `.next_prefix` file and falling back to the directory listing method.
  A fixed version of the file is then written out.

- Closes #3001
2023-03-23 22:07:05 -04:00
Lincoln Stein
9f7c86c33e bump version to 2.3.3-rc1
Lots of little bugs have been squashed since 2.3.2 and a new minor
point release is imminent. This PR updates the version number in
preparation for a RC.
2023-03-23 21:47:56 -04:00
Lincoln Stein
a79d40519c fix batch generation logfile name to be compatible with Windows OS
- `invokeai-batch --invoke` was creating a time-stamped logfile with colons in its
  name, which is a Windows no-no. This corrects the problem by writing
  the timestamp out as "13-06-2023_8-35-10"

- Closes #3005
2023-03-23 21:43:21 -04:00
Lincoln Stein
4515d52a42 fix textual inversion documentation and code
This PR addresses issues raised by #3008.

1. Update documentation to indicate the correct maximum batch size for
   TI training when xformers is and isn't used.

2. Update textual inversion code so that the default for batch size
   is aware of xformers availability.

3. Add documentation for how to launch TI with distributed learning.
2023-03-23 21:00:54 -04:00
Lincoln Stein
2a8513eee0 adjust textual inversion training parameters according to xformers availability
- If xformers is available, then default "use xformers" checkbox to on.
- Increase batch size to 8 (from 3).
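
A minimal sketch of these defaults, assuming availability is detected with
an import probe (the real check may differ):

```
import importlib.util

xformers_available = importlib.util.find_spec("xformers") is not None
default_use_xformers = xformers_available         # checkbox default
default_batch_size = 8 if xformers_available else 3
```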
2023-03-23 19:49:13 -04:00
Jonathan
b856fac713 Keep torch version at 1.13.1 (#2985)
Now that torch 2.0 is out, Invoke 2.3 should lock down its version to 1.13.1 for new installs and upgrades.
2023-03-23 15:27:12 -04:00
Lincoln Stein
4a3951681c prevent double-scanning during convert
- Avoid running scanner twice on same file during the probing and
  conversion process.

- Clean up diagnostic messages.
2023-03-23 14:24:10 -04:00
Lincoln Stein
ba89444e36 scan legacy checkpoint models in converter script prior to unpickling
Two related security fixes:

1. Port #2946 from main to 2.3.2 branch - this closes a hole that
   allows a pickle checkpoint file to masquerade as a safetensors
   file.

2. Add pickle scanning to the checkpoint to diffusers conversion
   script. This will be ported to main in a separate PR.
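
A sketch of the scan-before-unpickle guard using the picklescan package
(the conversion script's actual call sites may differ):

```
from picklescan.scanner import scan_file_path

def assert_safe_to_load(checkpoint: str) -> None:
    """Raise rather than unpickle a checkpoint that fails the malware scan."""
    result = scan_file_path(checkpoint)
    if result.infected_files != 0 or result.scan_err:
        raise RuntimeError(
            f"Refusing to load {checkpoint}: pickle scan flagged it as unsafe"
        )
```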
2023-03-23 13:44:08 -04:00
Lincoln Stein
a044403ac3 Bugfix/fix 2.3.2 upgrade path (#2943)
This fixes #2930 by adding a missing line in `pyproject.toml` needed to create the `config/stable-diffusion` directory.
2023-03-13 10:14:37 -07:00
Lincoln Stein
16dea46b79 remove outdated comment 2023-03-13 12:51:27 -04:00
Lincoln Stein
1f80b5335b reenable run_patches() 2023-03-13 10:38:08 -04:00
Lincoln Stein
eee7f13771 add back stable diffusion config files 2023-03-13 10:35:39 -04:00
Lincoln Stein
6db509a4ff add --upgrade to update script 2023-03-13 10:15:33 -04:00
Lincoln Stein
b7965e1ee6 restore find-packages to pyproject.toml 2023-03-13 10:11:37 -04:00
Lincoln Stein
c3d292e8f9 bump version to post1 2023-03-13 09:35:25 -04:00
Lincoln Stein
206593ec99 update version number 2023-03-13 09:34:00 -04:00
Lincoln Stein
1b62c781d7 temporarily disable run-patches 2023-03-13 09:33:32 -04:00
Lincoln Stein
c4de509983 fix failure to update to 2.3.2
- fixes #2930 #2941
2023-03-13 09:19:26 -04:00
Lincoln Stein
8d80802a35 improve support for V2 variant legacy checkpoints (#2926)
This commit enhances support for V2 variant (epsilon and v-predict)
import and conversion to diffusers, by prompting the user to select the
proper config file during startup time autoimport as well as in the
invokeai installer script. Previously the user was only prompted when
doing an `!import` from the command line or when using the WebUI Model
Manager.
2023-03-11 20:54:01 -05:00
Lincoln Stein
694925f427 improve support for V2 variant legacy checkpoints
This commit enhances support for V2 variant (epsilon and v-predict)
import and conversion to diffusers, by prompting the user to select
the proper config file during startup time autoimport as well as
in the invokeai installer script.
2023-03-11 19:34:10 -05:00
Lincoln Stein
61d5cb2536 rebuild frontend/dist 2023-03-11 18:34:17 -05:00
Lincoln Stein
c23fe4f6d2 Restore invokeai-update (#2909)
At some point `pyproject.toml` was modified to remove the
invokeai-update and invokeai-model-install scripts. This PR fixes the
issue.

If this was an intentional change, let me know and we'll discuss.
2023-03-11 18:31:30 -05:00
Lincoln Stein
e6e93bbb80 Merge branch 'v2.3' into bugfix/restore-update-command 2023-03-11 17:52:09 -05:00
blessedcoolant
b5bd5240b6 Support both v2-v and v2-e legacy ckpt models in v2.3 (#2907)
# Support SD version 2 "epsilon" and "v-predict" inference
configurations in v2.3

This is a port of the `main` PR #2870 back into V2.3. It allows both
"epsilon" inference V2 models (e.g. "v2-base") and "v-predict" models
(e.g. "V2-768") to be imported and converted into correct diffusers
models. This depends on picking the right configuration file to use, and
since there is no intrinsic difference between the two types of models,
when we detect that a V2 model is being imported, we fall back to asking
the user to select the model type.
2023-03-12 04:42:16 +13:00
blessedcoolant
827ac82d54 Merge branch 'v2.3' into bugfix/support-both-v2-variants 2023-03-12 04:18:11 +13:00
Lincoln Stein
9c2f3259ca use diffusers 0.14 cache layout, upgrade transformers, safetensors, accelerate (#2913)
This PR ports the `main` PR #2871 to the v2.3 branch. This adjusts the
global diffusers model cache to work with the 0.14 diffusers layout of
placing models in HF_HOME/hub rather than HF_HOME/diffusers. It also
implements the one-time migration action to the new layout.
2023-03-11 10:17:46 -05:00
Lincoln Stein
6abe2bfe42 Merge branch 'v2.3' into bugfix/support-both-v2-variants 2023-03-11 10:01:32 -05:00
Lincoln Stein
acf955fc7b upgrade transformers, accelerate, safetensors 2023-03-10 06:58:46 -05:00
Lincoln Stein
023db8ac41 use diffusers 0.14 cache layout
This PR ports the `main` PR #2871 to the v2.3 branch. This adjusts
the global diffusers model cache to work with the 0.14 diffusers
layout of placing models in HF_HOME/hub rather than HF_HOME/diffusers.
2023-03-09 22:35:43 -05:00
Lincoln Stein
65cf733a0c Merge branch 'v2.3' into bugfix/restore-update-command 2023-03-09 21:45:17 -05:00
Lincoln Stein
8323169864 Dynamic prompt generation script for parameter scans (#2831)
# Programmatically generate a large number of images varying by prompt
and other image generation parameters

This is a little standalone script named `dynamic_prompts.py` that
enables the generation of dynamic prompts. Using YAML syntax, you
specify a template of prompt phrases and lists of generation parameters,
and the script will generate a cross product of prompts and generation
settings for you. You can save these prompts to disk for later use, or
pipe them to the invokeai CLI to generate the images on the fly.

Typical uses are testing step and CFG values systematically while
holding the seed and prompt constant, testing out various artists'
styles, and comparing the results of the same prompt across different
models.

A typical template will look like this:

```
model: stable-diffusion-1.5
steps: 30;50;10
seed: 50
dimensions: 512x512
cfg:
  - 7
  - 12
sampler:
  - k_euler_a
  - k_lms
prompt:
  style:
       - greg rutkowski
       - gustav klimt
  location:
       - the mountains
       - a desert
  object:
       - luxurious dwelling
       - crude tent
  template: a {object} in {location}, in the style of {style}
```

This will generate 96 different images, each of which varies by one of
the dimensions specified in the template. For example, the prompt axis
will generate a cross product list like:
```
a luxurious dwelling in the mountains, in the style of greg rutkowski
a luxurious dwelling in the mountains, in the style of gustav klimt
a luxurious dwelling in a desert, in the style of greg rutkowski
... etc
```
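
The expansion is essentially a Cartesian product over the template's axes;
a simplified sketch of the idea:

```
from itertools import product

styles = ["greg rutkowski", "gustav klimt"]
locations = ["the mountains", "a desert"]
objects = ["luxurious dwelling", "crude tent"]
template = "a {object} in {location}, in the style of {style}"

for obj, loc, style in product(objects, locations, styles):
    print(template.format(object=obj, location=loc, style=style))
```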

A typical usage would be:
```
python scripts/dynamic_prompts.py --invoke --outdir=/tmp/scanning my_template.yaml
```
This will populate `/tmp/scanning` with each of the requested images,
and also generate a `log.md` file which you can open with an e-book
reader to show something like this:


![image](https://user-images.githubusercontent.com/111189/221970165-4bbd9070-3f32-4d89-8ff2-b03a82ada575.png)

Full instructions can be obtained using the `--instructions` switch, and
an example template can be printed out using `--example`:

```
python scripts/dynamic_prompts.py --instructions
python scripts/dynamic_prompts.py --example > my_first_template.yaml
```
2023-03-09 20:18:28 -05:00
Kevin Turner
bf5cd1bd3b Merge branch 'v2.3' into enhance/simple-param-scanner-script 2023-03-09 16:08:27 -08:00
Lincoln Stein
c9db01e272 Disable built-in NSFW checker on models converted with --ckpt_convert (#2908)
When a legacy ckpt model was converted into diffusers in RAM, the
built-in NSFW checker was not being disabled, in contrast to models
converted and saved to disk. Because InvokeAI does its NSFW checking as
a separate post-processing step (in order to generate blurred images
rather than black ones), this defeated the
--nsfw and --no-nsfw switches.

This closes #2836 and #2580.

Note - this fix will be applied to `main` as a separate PR.
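
At the diffusers level the fix amounts to dropping the pipeline's built-in
checker after the in-RAM conversion (a sketch; the model path is hypothetical):

```
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("path/to/converted-model")
pipe.safety_checker = None  # InvokeAI blurs NSFW outputs in post-processing instead
```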
2023-03-09 18:06:40 -05:00
Lincoln Stein
6d5e9161fb make version pep 440 compliant 2023-03-09 18:00:31 -05:00
Lincoln Stein
0636348585 bump version number to +a0 2023-03-09 17:57:19 -05:00
Lincoln Stein
4c44523ba0 Restore invokeai-update
At some point `pyproject.toml` was modified to remove the
invokeai-update script, which in turn breaks the update
function in the launcher scripts. This PR fixes the
issue.

If this was an intentional change, let me know and we'll discuss.
2023-03-09 17:49:58 -05:00
Lincoln Stein
5372800e60 Disable built-in NSFW checker on models converted with --ckpt_convert
When a legacy ckpt model was converted into diffusers in RAM, the
built-in NSFW checker was not being disabled, in contrast to models
converted and saved to disk. Because InvokeAI does its NSFW checking
as a separate post-processing step (in order to generate blurred
images rather than black ones), this defeated the
--nsfw and --no-nsfw switches.

This closes #2836 and #2580.
2023-03-09 17:38:58 -05:00
Lincoln Stein
2ae396640b Support both v2-v and v2-e legacy ckpt models 2023-03-09 15:35:17 -05:00
Lincoln Stein
252f222068 Merge branch 'v2.3' into enhance/simple-param-scanner-script 2023-03-09 12:02:40 -05:00
Lincoln Stein
142ba8c8ea add logging, support for prompts with shell metachars 2023-03-09 11:57:44 -05:00
Lincoln Stein
84dfd2003e fix documentation of range syntax 2023-03-09 02:29:07 -05:00
blessedcoolant
5a633ba811 [WebUI] Fix 'Use All' Params not Respecting Hi-Res Fix (#2840)
This is a different source/base branch from
https://github.com/invoke-ai/InvokeAI/pull/2823 but is otherwise the
same content. `yarn build` was run on this clean branch.

## What was the problem/requirement? (What/Why)
As part of a [change in
2.3.0](d74c4009cb),
the high resolution fix was no longer being applied when 'Use all' was
selected. This effectively meant that users had to manually analyze
images to ensure that the parameters were set to match.
~~Additionally, and never actually working, Upscaling and Face
Restoration parameters were also not pulling through with the action,
causing a similar usability issue.~~ See:
https://github.com/invoke-ai/InvokeAI/pull/2823#issuecomment-1445530362

## What was the solution? (How)
This change adds a new reducer to the `postprocessingSlice` file,
mimicking the `generationSlice` reducer to assign all parameters
appropriate for the post processing options. This reducer assigns:
* Hi-res's toggle button only if the type is `txt2img`, since `img2img`
hi-res was removed previously
* ~~Upscaling's toggle button, scale, denoising strength, and upscale
strength~~
* ~~Face Restoration's toggle button, type, strength, and fidelity (if
present/applicable)~~

### Minor
* Added `endOfLine: 'crlf'` to prettier's config to prevent all files
from showing as modified on Windows due to line-ending differences (git
otherwise not picking up those changes as modifications, causing ghost
modified files)

### Revision 2:
* Removed upscaling and face restoration parameter pulling
### Revision 3:
* More defensive coding for the `hires_fix` not present (assume false)

### Out of Scope
* Hi-res strength (applied as img2img strength in the initial image that
is generated) is not in the metadata of the final image and can't be
reconstructed easily
* Upscaling and face restoration have some peculiarities for multi-post
processing outside of the UI, which complicates it enough to scope out
of this PR.

## How were these changes tested?
* `yarn dev` => Server started successfully
* Manual testing on the development server to ensure parameters pulled
correctly
* `yarn build` => Success

## Notes
As with `generationSlice`, this code assumes `action.payload.image` is
valid and doesn't do a formal check on it to ensure it is valid.
2023-03-08 22:38:41 +13:00
Lincoln Stein
f207647f0f CLI now writes hires_fix to metadata 2023-03-07 17:22:16 -08:00
blhook
ad16581ab8 Change to auto EoL and fix property missing from assignment of hires fix 2023-03-07 17:22:16 -08:00
blhook
fd722ddf7d Fix High Resolution not Pulling for Use All Parameters 2023-03-07 17:22:16 -08:00
Jonathan
d669e69755 Merge branch 'v2.3' into enhance/simple-param-scanner-script 2023-03-07 11:45:45 -06:00
Lincoln Stein
d912bab4c2 install the script as "invokeai-batch" 2023-03-07 10:10:18 -05:00
Lincoln Stein
68c2722c02 Prevent crash when converting models from within CLI using legacy model URL (#2846)
- Crash would occur at the end of this sequence:
  - launch CLI
  - !convert <URL pointing to a legacy ckpt file>
  - Answer "Y" when asked to delete original .ckpt file

- This commit modifies model_manager.heuristic_import() to silently
delete the downloaded legacy file after it has been converted into a
diffusers model. The user is no longer asked to approve deletion.

NB: This should be cherry-picked into main once refactor is done.
2023-03-07 00:09:11 -05:00
Jonathan
426fea9681 Merge branch 'v2.3' into bugfix/crash-on-unlink-after-convert 2023-03-06 20:51:58 -06:00
Lincoln Stein
62cfdb9f11 fix newlines causing negative prompt to be parsed incorrectly (#2838)
This is the same fix that was applied to main in PR 2837.
2023-03-06 18:37:44 -05:00
Lincoln Stein
46b4d6497c Merge branch 'v2.3' into bugfix/crash-on-unlink-after-convert 2023-03-06 18:14:53 -05:00
Lincoln Stein
757c0a5775 Merge branch 'v2.3' into bugfix/negative_prompt_newline 2023-03-06 18:14:06 -05:00
Lincoln Stein
9c8f0b44ad propose more restrictive codeowners (#2781)
For your consideration, here is a revised set of codeowners for the v2.3
branch. The previous set had the bad property that both @blessedcoolant
and @lstein were codeowners of everything, meaning that we had the
superpower of being able to put in a PR and get full approval if any
other member of the team (not a codeowner) approved.

The proposed file is a bit more sensible but needs many eyes on it.
Please take a look and make improvements. I wasn't sure where to put
some people, such as @netsvetaev or @GreggHelt2

I don't think it makes sense to tinker with the `main` CODEOWNERS until
the "Big Freeze" code reorganization happens.

I subscribed everyone to this PR. Apologies
2023-03-06 18:12:29 -05:00
Lincoln Stein
21433a948c Merge branch 'v2.3' into dev/fix-codeowners 2023-03-06 18:11:19 -05:00
Lincoln Stein
183344b878 Merge branch 'v2.3' into bugfix/negative_prompt_newline 2023-03-06 12:06:58 -05:00
Kent Keirsey
fc164d5be2 updated template styles. 2023-03-06 00:34:49 -05:00
Lincoln Stein
45aa770cd1 implemented multiprocessing across multiple GPUs 2023-03-05 01:52:28 -05:00
Lincoln Stein
6d0e782d71 add perlin, init_img, threshold & strength 2023-03-04 17:28:19 -05:00
Lincoln Stein
117f70e1ec implement locking when acquiring next output file prefix 2023-03-04 09:13:17 -05:00
Lincoln Stein
c840bd8c12 this prevents a crash when converting models from CLI
- Crash would occur at the end of this sequence:
  - launch CLI
  - !convert <URL pointing to a legacy ckpt file>
  - Answer "Y" when asked to delete original .ckpt file

- This commit modifies model_manager.heuristic_import()
  to silently delete the downloaded legacy file after
  it has been converted into a diffusers model. The user
  is no longer asked to approve deletion.

NB: This should be cherry-picked into main once refactor
is done.
2023-03-02 10:49:53 -05:00
Lincoln Stein
3c64fad379 Merge branch 'v2.3' into enhance/simple-param-scanner-script 2023-03-02 08:11:57 -05:00
Lincoln Stein
bc813e4065 Introduce pre-commit, black, isort, ... (#2822)
basically the changes I tried to introduce in #2687 (which could imho be
closed then 🙈)
2023-02-28 23:11:28 -05:00
Lincoln Stein
7c1d2422f0 Merge branch 'v2.3' into dev/v2.3/add-dev-tools 2023-02-28 22:45:38 -05:00
Lincoln Stein
a5b11e1071 fix newlines causing negative prompt to be parsed incorrectly
This is the same fix that was applied to main in PR 2837.
2023-02-28 17:32:17 -05:00
Lincoln Stein
c7e4daf431 add support for templates written in JSON 2023-02-28 17:27:37 -05:00
Lincoln Stein
4c61f3a514 add multiple enhancements
- ability to cycle through models and dimensions
- process automatically through invokeai
- create an .md file to display the grid results
2023-02-28 15:10:20 -05:00
Lincoln Stein
2a179799d8 add a simple parameter scanning script to the scripts directory
Simple script to generate a file of InvokeAI prompts and settings
that scan across steps and other parameters.

To use, create a file named "template.yaml" (or similar) formatted like this
>>> cut here <<<
steps: "30:50:1"
seed: 50
cfg:
  - 7
  - 8
  - 12
sampler:
  - ddim
  - k_lms
prompt:
  - a sunny meadow in the mountains
  - a gathering storm in the mountains
>>> cut here <<<

Create sections named "steps", "seed", "cfg", "sampler" and "prompt".
- Each section can have a constant value such as this:
     steps: 50
- Or a range of numeric values in the format:
     steps: "<start>:<stop>:<step>"
- Or a list of values in the format:
     - value1
     - value2
     - value3

Be careful to: 1) put quotation marks around numeric ranges; 2) put a
space between the "-" and the value in a list of values; and 3) use spaces,
not tabs, at the beginnings of indented lines.
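
A hypothetical sketch of how the three value forms could be normalized (the
script's actual parser may differ, e.g. in whether the stop value is inclusive):

```
def expand(value):
    """Normalize a constant, a "<start>:<stop>:<step>" range, or a list."""
    if isinstance(value, str) and value.count(":") == 2:
        start, stop, step = (int(v) for v in value.split(":"))
        return list(range(start, stop + 1, step))  # assumes inclusive stop
    if isinstance(value, list):
        return value
    return [value]

print(expand("30:50:1"))  # [30, 31, ..., 50]
print(expand(50))         # [50]
```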

When you run this script, capture the output into a text file like this:

    python generate_param_scan.py template.yaml > output_prompts.txt

"output_prompts.txt" will now contain an expansion of all the list
values you provided. You can examine it in a text editor such as
Notepad.

Now start the CLI, and feed the expanded prompt file to it using the
"!replay" command:

   !replay output_prompts.txt

Alternatively, you can directly feed the output of this script
by issuing a command like this from the developer's console:

   python generate_param_scan.py template.yaml | invokeai

You can use the web interface to view the resulting images and their
metadata.
2023-02-27 17:30:57 -05:00
Lincoln Stein
650f4bb58c quote output, embedding and autoscan directories in invokeai.init (#2827)
This should prevent the errors that users are seeing with spaces in the
file paths
2023-02-27 00:17:37 -05:00
Lincoln Stein
7b92b27ceb Merge branch 'v2.3' into bugfix/quote-initfile-paths 2023-02-26 23:54:20 -05:00
Lincoln Stein
8f1b301d01 restore previous naming scheme for sd-2.x models: (#2820)
- stable-diffusion-2.1-base base model from
stabilityai/stable-diffusion-2-1-base

- stable-diffusion-2.1-768 768 pixel model from
stabilityai/stable-diffusion-2-1-768

- sd-inpainting-2.0 512 pixel inpainting model from
runwayml/stable-diffusion-inpainting

This PR also bumps the version number up to v2.3.1.post2
2023-02-26 23:54:06 -05:00
Lincoln Stein
e3a19d4f3e quote output, embedding and autoscan directories in invokeai.init
- this should prevent the errors that users are seeing with
  spaces in the file paths
2023-02-26 23:02:18 -05:00
mauwii
70283f7d8d increase line_length to 120 2023-02-26 22:11:11 +01:00
Lincoln Stein
ecbb385447 bump version number 2023-02-26 16:11:07 -05:00
mauwii
8dc56471ef fix compel version in pyproject.toml 2023-02-26 22:08:07 +01:00
mauwii
282ba201d2 Revert "parent 9eed1919c2071f9199996df747c8638c4a75e8fb"
This reverts commit 357601e2d6.
2023-02-26 21:54:13 +01:00
mauwii
2394f6458f Revert "[nodes] Removed InvokerServices, simplying service model"
This reverts commit 81fd2ee8c1.
2023-02-26 21:54:06 +01:00
mauwii
47c1be3322 Revert "doc(invoke_ai_web_server): put docstrings inside their functions"
This reverts commit 1e7a6dc676.
2023-02-26 21:53:38 +01:00
Lincoln Stein
741464b053 restore previous naming scheme for sd-2.x models:
- stable-diffusion-2.1-base
  base model from stabilityai/stable-diffusion-2-1-base

- stable-diffusion-2.1-768
  768 pixel model from stabilityai/stable-diffusion-2-1-768

- sd-inpainting-2.0
  512 pixel inpainting model from runwayml/stable-diffusion-inpainting
2023-02-26 15:31:43 -05:00
mauwii
3aab5e7e20 update .editorconfig
- set `max_line_length = 88` for .py
2023-02-26 21:28:00 +01:00
Kevin Turner
1e7a6dc676 doc(invoke_ai_web_server): put docstrings inside their functions
Documentation strings are the first thing inside the function body.
https://docs.python.org/3/tutorial/controlflow.html#defining-functions
2023-02-26 21:28:00 +01:00
Kyle Schouviller
81fd2ee8c1 [nodes] Removed InvokerServices, simplifying service model 2023-02-26 21:28:00 +01:00
Kyle Schouviller
357601e2d6 parent 9eed1919c2
author Kyle Schouviller <kyle0654@hotmail.com> 1669872800 -0800
committer Kyle Schouviller <kyle0654@hotmail.com> 1676240900 -0800

Adding base node architecture

Fix type annotation errors

Runs and generates, but breaks in saving session

Fix default model value setting. Fix deprecation warning.

Fixed node api

Adding markdown docs

Simplifying Generate construction in apps

[nodes] A few minor changes (#2510)

* Pin api-related requirements

* Remove confusing extra CORS origins list

* Adds response models for HTTP 200

[nodes] Adding graph_execution_state to soon replace session. Adding tests with pytest.

Minor typing fixes

[nodes] Fix some small output query hookups

[node] Fixing some additional typing issues

[nodes] Move and expand graph code. Add base item storage and sqlite implementation.

Update startup to match new code

[nodes] Add callbacks to item storage

[nodes] Adding an InvocationContext object to use for invocations to provide easier extensibility

[nodes] New execution model that handles iteration

[nodes] Fixing the CLI

[nodes] Adding a note to the CLI

[nodes] Split processing thread into separate service

[node] Add error message on node processing failure

Removing old files and duplicated packages

Adding python-multipart
2023-02-26 21:28:00 +01:00
mauwii
71ff759692 minor improvement to mermaid diagrams 2023-02-26 21:28:00 +01:00
mauwii
b0657d5fde just4fun 2023-02-26 21:27:59 +01:00
mauwii
fa391c0b78 fix pyproject.toml
- add missing asterisk for backend package
- remove old comment
2023-02-26 21:27:47 +01:00
mauwii
6082aace6d update docs/help/contributing/010_PULL_REQUEST
- prepend brand icons on tabs
2023-02-26 21:27:02 +01:00
mauwii
7ef63161ba add icons to some docs
- this also reformatted `docs/index.md`
2023-02-26 21:27:02 +01:00
mauwii
b731b55de4 update title in docs/help/contributing/index.md 2023-02-26 21:27:02 +01:00
mauwii
51956ba356 update vs-code.md, fix docs/help/index.md 2023-02-26 21:27:02 +01:00
mauwii
f494077003 enable content.code.copy
- to get a handy copy button in code blocks
- also sort the features alphabetically
2023-02-26 21:27:02 +01:00
mauwii
317165c410 remove previous attempt for contributing docs 2023-02-26 21:27:02 +01:00
mauwii
f5aadbc200 rename docs/help/contributing
- update vs-code.md
- update 30_DOCS.md
2023-02-26 21:27:02 +01:00
mauwii
774230f7b9 re-format docs/features/index.md 2023-02-26 21:27:02 +01:00
mauwii
72e25d99c7 add docs/help/contribute/030_DOCS.md 2023-02-26 21:27:02 +01:00
mauwii
7c7c1ba02d add docs/help/index.md 2023-02-26 21:27:01 +01:00
mauwii
9c6af74556 add docs/help/IDE-Settings 2023-02-26 21:27:01 +01:00
mauwii
57daa3e1c2 re-ignore .vscode 2023-02-26 21:27:01 +01:00
mauwii
ce98fdc5c4 after some complaints remove .vscode
I still think they would be beneficial, but too lazy to re-discuss this
2023-02-26 21:27:01 +01:00
mauwii
f901645c12 use pip517 2023-02-26 21:27:01 +01:00
mauwii
f514f17e92 add variables to define:
- repo_url
- repo_name
- site_url
2023-02-26 21:27:01 +01:00
mauwii
8744dd0c46 fix edit_uri in mkdocs.yml 2023-02-26 21:27:01 +01:00
mauwii
f3d669319e get rid of requirements-mkdocs.txt 2023-02-26 21:27:01 +01:00
mauwii
ace7032067 add docs/help/contribute/issues, update index 2023-02-26 21:27:01 +01:00
mauwii
d32819875a fix docs/requirements-mkdocs.txt 2023-02-26 21:27:01 +01:00
mauwii
5b5898827c update vscode settings 2023-02-26 21:27:00 +01:00
mauwii
8a233174de update MkDocs-Material to v9 2023-02-26 21:27:00 +01:00
mauwii
bec81170b5 move contribution docs to help section, add index 2023-02-26 21:27:00 +01:00
mauwii
2f25363d76 update "how to contribute" doc and md indentation 2023-02-26 21:27:00 +01:00
mauwii
2aa5688d90 update docs/.markdownlint.jsonc
- disable ul-indent
- disable list-marker-space
2023-02-26 21:27:00 +01:00
mauwii
ed06a70eca add pre-commit hook no-commit-to-branch
additional layer to prevent accidental commits directly to main branch
2023-02-26 21:27:00 +01:00
mauwii
e80160f8dd update config of black and isort
black:
- extend-exclude legacy scripts
- config for python 3.9 as long as we support it
isort:
- set atomic to true to only apply if no syntax errors are introduced
- config for python 3.9 as long as we support it
- extend_skip_glob legacy scripts
- filter_files
- match line_length with black
- remove_redundant_aliases
- skip_gitignore
- set src paths
- include virtual_env to detect third party modules
2023-02-26 21:27:00 +01:00
mauwii
bfe64b1510 align prettierrc with config in frontend 2023-02-26 21:27:00 +01:00
mauwii
bb1769abab remove non-working .editorconfig entries 2023-02-26 21:27:00 +01:00
mauwii
e3f906e90d update .flake8 - use extend-exclude
so that default excludes are not overwritten
2023-02-26 21:27:00 +01:00
mauwii
d77dc68119 better config of pre-commit hooks:
- better order of hooks
- add flake8-comprehensions and flake8-simplify
- remove unnecessary hooks which are covered by previous hooks
- add hooks
  - check-executables-have-shebangs
  - check-shebang-scripts-are-executable
2023-02-26 21:27:00 +01:00
mauwii
ee3d695e2e remove command from json to be compliant 2023-02-26 21:27:00 +01:00
mauwii
0443befd2f update pyproject.toml and vscode settings 2023-02-26 21:26:59 +01:00
mauwii
b4fd02b910 add more hooks, reorder hooks, update .flake8 2023-02-26 21:26:59 +01:00
mauwii
4e0fe4ad6e update black / flake8 related settings
- add flake8-black to dev extras
- update `.flake8`
- update flake8 pre-commit hook
2023-02-26 21:26:59 +01:00
mauwii
3231499992 update .vscode settings and extensions 2023-02-26 21:26:59 +01:00
mauwii
c134161a45 update .editorconfig 2023-02-26 21:26:59 +01:00
mauwii
c3f533f20f update .pre-commit-config.yaml 2023-02-26 21:26:59 +01:00
mauwii
519a9071a8 add "How to contribute" to docs
- not yet finished
2023-02-26 21:26:59 +01:00
mauwii
87b4663026 add /docs/.markdownlint.jsonc
- for now only disable `MD046`
2023-02-26 21:26:59 +01:00
mauwii
6c11e8ee06 update mkdocs.yml
- add feature `content.tabs.link`
2023-02-26 21:26:59 +01:00
mauwii
2a739890a3 add .pre-commit-config.yaml 2023-02-26 21:26:59 +01:00
mauwii
02e84c9565 add .flake8 2023-02-26 21:26:59 +01:00
mauwii
39715017f9 update pyproject.toml 2023-02-26 21:26:44 +01:00
mauwii
35518542f8 add .vscode files 2023-02-26 21:25:45 +01:00
mauwii
0aa1106c96 update .editorconfig 2023-02-26 21:25:45 +01:00
blessedcoolant
33f832e6ab [ui]: 2.3 hotfixes (#2806)
- Updated Spanish translation
- Updated Portuguese (Brazil) translation
- Fix a number of translation issues and add missing strings
- Fix vertical symmetry and symmetry steps issue when the number of
generation steps is adjusted
2023-02-26 12:30:59 +13:00
psychedelicious
281c788489 chore(ui): build frontend 2023-02-25 14:26:50 +11:00
psychedelicious
3858bef185 fix(ui): clamp symmetry steps to generation steps
Also renamed the variables to `horizontalSymmetrySteps` as `TimePercentage` is not accurate.
2023-02-25 14:26:46 +11:00
psychedelicious
f9a1afd09c fix(ui): fix #2802 vertical symmetry not working 2023-02-25 11:28:17 +11:00
psychedelicious
251e9c0294 fix(ui): add missing strings
Fixes #2797
Fixes #2798
2023-02-25 11:27:47 +11:00
psychedelicious
d8bf2e3c10 fix(ui): fix translation typing, fix strings
I had inadvertently un-safe-d our translation types when migrating to Weblate.

This PR fixes that, and a number of translation string bugs that went unnoticed due to the lack of type safety.
2023-02-25 11:26:35 +11:00
Gabriel Mackievicz Telles
218f30b7d0 translationBot(ui): update translation (Portuguese (Brazil))
Currently translated at 91.8% (431 of 469 strings)

Co-authored-by: Gabriel Mackievicz Telles <telles.gabriel@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/pt_BR/
Translation: InvokeAI/Web UI
2023-02-25 11:13:23 +11:00
Jeff Mahoney
da983c7773 translationBot(ui): added translation (Romanian)
Co-authored-by: Jeff Mahoney <jbmahoney@gmail.com>
2023-02-25 11:13:23 +11:00
gallegonovato
7012e16c43 translationBot(ui): update translation (Spanish)
Currently translated at 100.0% (469 of 469 strings)

Co-authored-by: gallegonovato <fran-carro@hotmail.es>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/es/
Translation: InvokeAI/Web UI
2023-02-25 11:13:23 +11:00
Lincoln Stein
b1050abf7f hotfix for broken merge function (#2801)
Bump version up to accommodate a hotfix on v2.3.1 release.
(model merge functionality was broken)
2023-02-24 15:33:54 -05:00
Lincoln Stein
210998081a use right pep-440 standard version number 2023-02-24 15:14:39 -05:00
Lincoln Stein
604acb9d91 use pep-440 standard version number 2023-02-24 15:07:54 -05:00
Lincoln Stein
5beeb1a897 hotfix for broken merge function 2023-02-24 15:00:22 -05:00
Lincoln Stein
de6304b729 fixes crashes on merge in both WebUI and console (#2800)
- an inadvertent change to the model manager broke the merging functions
- corrected here - will be a hotfix
2023-02-24 14:58:06 -05:00
Lincoln Stein
d0be79c33d fixes crashes on merge in both WebUI and console
- an inadvertent change to the model manager broke the merging functions
- corrected here - will be a hotfix
2023-02-24 14:54:23 -05:00
Lincoln Stein
b4ed8bc47a Merge branch 'main' into v2.3 2023-02-24 10:52:03 -05:00
Lincoln Stein
bd85e00530 Last PR needed for v2.3.1 (#2788)
- Add curated set of starter models based on team discussion. The final
list of starter models can be found in
`invokeai/configs/INITIAL_MODELS.yaml`

- To test model installation, I selected and installed all the models on
the list. This led to my discovering that when there are no more starter
models to display, the console front end crashes. So I made a fix to
this in which the entire starter model selection is no longer shown.

- Update model table in 050_INSTALLING_MODELS.md

- Add guide to dealing with low-memory situations
- Version is now `v2.3.1`
2023-02-24 10:31:38 -05:00
Lincoln Stein
4e446130d8 Merge branch 'v2.3' into enhance/curated-2.3.1-models 2023-02-24 10:30:42 -05:00
Lincoln Stein
4c93b514bb bump version to final 2.3.1 2023-02-24 10:04:41 -05:00
Lincoln Stein
d078941316 add low memory troubleshooting guide 2023-02-24 10:04:06 -05:00
Lincoln Stein
230d3a496d document starter models
- add new script `scripts/make_models_markdown_table.py` that parses
  INITIAL_MODELS.yaml and creates markdown table for the model installation
  documentation file

- update 050_INSTALLING_MODELS.md with above table, and add a warning
  about additional license terms that apply to some of the models.
2023-02-24 09:33:07 -05:00
Jonathan
ec2890c19b Run garbage collection to allow the CUDA cache to completely empty. (#2791) 2023-02-24 08:48:54 -05:00
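
In PyTorch terms this is the standard two-step release; a minimal sketch:

```
import gc
import torch

# empty_cache() can only return blocks whose tensors have actually been
# garbage-collected, so collect first, then release the cached blocks.
gc.collect()
if torch.cuda.is_available():
    torch.cuda.empty_cache()
```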
Lincoln Stein
a540cc537f add curated set of HuggingFace diffusers models for 2.3.1 release
- Final list can be found in invokeai/configs/INITIAL_MODELS.yaml

- After installing all the models, I discovered a bug in the file
  selection form that caused a crash when no uninstalled models
  remained. So had to fix this.
2023-02-24 00:53:48 -05:00
Lincoln Stein
39c57aa358 fix generate backend to generate "accurate" intermediate images (#2787)
The sample_to_image method in `ldm.invoke.generator.base` was still
using ckpt-era code. As a result when the WebUI was set to show
"accurate" intermediate images, there'd be a crash. This PR corrects the
problem.

- Closes #2784
- Closes #2775
2023-02-24 00:33:29 -05:00
Lincoln Stein
2d990c1f54 Merge branch 'v2.3' into bugfix/webui-accurate-intermediates 2023-02-23 22:07:18 -05:00
Lincoln Stein
7fb2da8741 fix generate backend to generate "accurate" intermediate images
- Closes #2784
- Closes #2775
2023-02-23 22:03:28 -05:00
Lincoln Stein
c69fcb1c10 fix ckpt_convert module to work with dreambooth v2 models (#2776)
- Discord member @marcus.llewellyn reported that some civitai
2.1-derived checkpoints were not converting properly (probably
dreambooth-generated):
https://discord.com/channels/1020123559063990373/1078386197589655582/1078387806122025070

- @blessedcoolant tracked this down to a missing key that was used to
derive vector length of the CLIP model used by fetching the second
dimension of the tensor at "cond_stage_model.model.text_projection".

- On inspection, I found that the same second dimension can be recovered
from key 'cond_stage_model.model.ln_final.bias', and use that instead. I
hope this is correct; tested on multiple v1, v2 and inpainting models
and they converted correctly.

- While debugging this, I found and fixed several other issues:

- model download script was not pre-downloading the OpenCLIP
text_encoder or text_tokenizer. This is fixed.
- got rid of legacy code in `ckpt_to_diffuser.py` and replaced with
calls into `model_manager`
  - more consistent status reporting in the CLI.
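
A sketch of the key fallback described above, using the state-dict keys
named in this message (hypothetical helper):

```
def clip_embedding_dim(state_dict: dict) -> int:
    """Recover the CLIP vector length from a legacy v2 checkpoint."""
    proj = state_dict.get("cond_stage_model.model.text_projection")
    if proj is not None:
        return proj.shape[1]
    # Some dreambooth-generated checkpoints lack text_projection; the
    # final layer-norm bias carries the same dimensionality.
    return state_dict["cond_stage_model.model.ln_final.bias"].shape[0]
```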
2023-02-23 21:51:57 -05:00
Lincoln Stein
0982548e1f Merge branch 'v2.3' into bugfix/v2-model-conversion 2023-02-23 21:27:49 -05:00
Matthias Wild
11a29fdc4d fix python 3.9 compatibility (#2780)
without this change, the project can be installed on 3.9 but not used
this also fixes the container images

Maybe we should re-enable Python 3.9 checks which would have prevented
this.
2023-02-24 00:49:25 +01:00
Lincoln Stein
24407048a5 Version 2.3.1-rc4 (#2782)
Just a version bump to use a format recognized by PyPi.
2023-02-23 18:09:43 -05:00
Matthias Wild
a7c2333312 Merge branch 'main' into fix/py39-compatibility 2023-02-23 23:53:38 +01:00
Lincoln Stein
b5b541c747 bump version; use correct format for PyPi 2023-02-23 17:47:36 -05:00
Lincoln Stein
ad6ea02c9c Update main with V2.3 fixes (#2774)
Until the nodes merge happens, we can continue to merge bugfixes from
the 2.3 branch into `main`. This will bring main into sync with
`v2.3.1+rc3`
2023-02-23 17:38:16 -05:00
Lincoln Stein
c22326f9f8 propose more restrictive codeowners 2023-02-23 17:28:30 -05:00
mauwii
1a6ed85d99 fix typing to be compatible with python 3.9
without this, the project can be installed on 3.9 but not used
this also fixes the container images
2023-02-23 23:27:16 +01:00
Lincoln Stein
a094bbd839 push to pypi from branch v2.3 (#2778)
This change will cause releases on the v2.3 branch to be pushed to PyPi.
2023-02-23 17:20:24 -05:00
Lincoln Stein
73dda812ea push to pypi from branch v2.3
This change will cause releases on the v2.3 branch to be pushed
to PyPi.
2023-02-23 16:55:25 -05:00
Lincoln Stein
8eaf1c4033 Revert "(updater) style 'pip' progress to use dark background"
This reverts commit 89239d1c54.

- This was making a subprocess call to 'bash', and hence crashing
  on Windows systems!
2023-02-23 16:33:57 -05:00
Lincoln Stein
4f44b64052 fix ckpt_convert module to work with dreambooth v2 models
- Discord member @marcus.llewellyn reported that some civitai 2.1-derived checkpoints were
  not converting properly (probably dreambooth-generated):
  https://discord.com/channels/1020123559063990373/1078386197589655582/1078387806122025070

- @blessedcoolant tracked this down to a missing key that was used to
  derive vector length of the CLIP model used by fetching the second
  dimension of the tensor at "cond_stage_model.model.text_projection".
  His proposed solution was to hardcode a value of 1024.

- On inspection, I found that the same second dimension can be
  recovered from key 'cond_stage_model.model.ln_final.bias', and use
  that instead. I hope this is correct; tested on multiple v1, v2 and
  inpainting models and they converted correctly.

- While debugging this, I found and fixed several other issues:

  - model download script was not pre-downloading the OpenCLIP
    text_encoder or text_tokenizer. This is fixed.
  - got rid of legacy code in `ckpt_to_diffuser.py` and replaced
    with calls into `model_manager`
  - more consistent status reporting in the CLI.
2023-02-23 15:43:58 -05:00
Lincoln Stein
c559bf3e10 Add a sanity check to root directory finding algorithm (#2772)
Root directory finding algorithm is:

1) use --root argument
2) use INVOKEAI_ROOT environment variable
3) use VIRTUAL_ENV environment variable
4) use ~/invokeai

Since developers are liable to put virtual environments in their
favorite places, not necessarily in the invokeai root directory, this PR
adds a sanity check that looks for the existence of
`VIRTUAL_ENV/invokeai.init`, and moves on to (4) if not found.
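
A sketch of the search order with the sanity check folded in (hypothetical
function; the real code lives in InvokeAI's argument handling):

```
import os
from pathlib import Path
from typing import Optional

def find_root(cli_root: Optional[str] = None) -> Path:
    if cli_root:                                   # 1) --root argument
        return Path(cli_root)
    if "INVOKEAI_ROOT" in os.environ:              # 2) INVOKEAI_ROOT env var
        return Path(os.environ["INVOKEAI_ROOT"])
    venv = os.environ.get("VIRTUAL_ENV")           # 3) VIRTUAL_ENV, sanity-checked
    if venv and (Path(venv) / "invokeai.init").exists():
        return Path(venv)
    return Path.home() / "invokeai"                # 4) fallback
```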
2023-02-23 11:37:11 -05:00
Lincoln Stein
a485515bc6 Merge branch 'v2.3' into bugfix/sanity-check-rootdir 2023-02-23 11:14:52 -05:00
Lincoln Stein
2c9b29725b Bugfix/windows install (#2770)
# This will constitute v2.3.1+rc2

## Windows installer enhancements
  
1. resize installer window to give more room for configure and download
   forms
2. replace '\' with '/' in directory names to allow user to drag-and-drop
   folders into the dialogue boxes that accept directories.
3. similar change in CLI for the !import_model and !convert_model commands
4. better error reporting when a model download fails due to network errors
5. put the launcher scripts into a loop so that menu reappears after
   invokeai, merge script, etc exits. User can quit with "Q".
6. do not try to download fp16 of sd-ft-mse-vae, since it doesn't exist.
7. cleaned up status reporting when installing models
8. Detect when install failed for some reason and print helpful error
   message rather than stack trace.
9. Detect window size and resize to minimum acceptable values to provide
   better display of configure and install forms.
10. Fix a bug in the CLI which prevented diffusers imported by their
    repo_ids from being correctly registered in the current session
    (though they install correctly)
11. Capitalize the "i" in Imported in the autogenerated descriptions.
2023-02-23 11:14:30 -05:00
Lincoln Stein
28612c899a add a sanity check to root directory finding algorithm
Root directory finding algorithm is:

1) use --root argument
2) use INVOKEAI_ROOT environment variable
3) use VIRTUAL_ENV environment variable
4) use ~/invokeai

Since developers are liable to put virtual environments in their
favorite places, not necessarily in the invokeai root directory, this
PR adds a sanity check that looks for the existence of
VIRTUAL_ENV/invokeai.init, and moves to (4) if not found.
2023-02-23 10:15:01 -05:00
Lincoln Stein
88acbeaa35 install creator tags but don't commit 2023-02-23 07:08:41 -05:00
Lincoln Stein
46729efe95 upgrade to compel 0.1.7 2023-02-23 07:06:40 -05:00
Lincoln Stein
b3d03e1146 Merge branch 'v2.3.1' into bugfix/windows-install 2023-02-23 01:04:39 -05:00
Lincoln Stein
e29c9a7d9e fix CLI import of diffusers by repo_id
- Fix a bug in the CLI which prevented diffusers imported by their repo_ids
  from being correctly registered in the current session (though they install
  correctly)

- Capitalize the "i" in Imported in the autogenerated descriptions.
2023-02-23 01:00:14 -05:00
Lincoln Stein
9b157b6532 fix several issues with Windows installs
1. resize installer window to give more room for configure and download forms
2. replace '\' with '/' in directory names to allow user to drag-and-drop
   folders into the dialogue boxes that accept directories.
3. similar change in CLI for the !import_model and !convert_model commands
4. better error reporting when a model download fails due to network errors
5. put the launcher scripts into a loop so that menu reappears after
   invokeai, merge script, etc exits. User can quit with "Q".
6. do not try to download fp16 of sd-ft-mse-vae, since it doesn't exist.
7. cleaned up status reporting when installing models
2023-02-23 00:49:59 -05:00
blessedcoolant
10a1e7962b docs: add TRANSLATION.md (#2769) 2023-02-23 15:37:15 +13:00
Lincoln Stein
cb672d7d00 Merge branch 'v2.3.1' into docs/add-translation-md 2023-02-22 21:35:39 -05:00
psychedelicious
e791fb6b0b docs: tweak messaging 2023-02-23 13:00:05 +11:00
psychedelicious
1c9001ad21 docs: add TRANSLATION.md 2023-02-23 12:53:03 +11:00
Lincoln Stein
3083356cf0 installer enhancements
- Detect when install failed for some reason and print helpful error
  message rather than stack trace.

- Detect window size and resize to minimum acceptable values to provide
  better display of configure and install forms.
2023-02-22 19:18:07 -05:00
blessedcoolant
179814e50a [WebUI] 2.3.1 Localization (#2765) 2023-02-23 10:29:14 +13:00
blessedcoolant
9515c07fca Merge branch 'v2.3.1' into localization-231 2023-02-23 10:29:02 +13:00
blessedcoolant
a45e94fde7 build: localization (2.3.1-final) 2023-02-23 09:47:01 +13:00
Lincoln Stein
8b6196e0a2 version 2.3.1 release candidate 1 2023-02-22 15:26:35 -05:00
Sergey Krashevich
ee2c0ab51b translationBot(ui): update translation (Russian)
Currently translated at 81.4% (382 of 469 strings)

translationBot(ui): update translation (Russian)

Currently translated at 81.6% (382 of 468 strings)

Co-authored-by: Sergey Krashevich <svk@svk.su>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/
Translation: InvokeAI/Web UI
2023-02-22 21:25:08 +01:00
Riccardo Giovanetti
ca5f129902 translationBot(ui): update translation (Italian)
Currently translated at 100.0% (469 of 469 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (468 of 468 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2023-02-22 21:25:08 +01:00
Lincoln Stein
cf2eca7c60 Add new console frontend to initial model selection, and other model mgmt improvements (#2644)
## Major Changes
The invokeai-configure script has now been refactored. The work of
selecting and downloading initial models at install time is now done by
a script named `invokeai-model-install` (module name is
`ldm.invoke.config.model_install`)

Screen 1 - adjust startup options:

![screenshot1](https://user-images.githubusercontent.com/111189/219976468-b642df78-a6fe-44a2-bf97-54ccf34e9656.png)

Screen 2 - select SD models:

![screenshot2](https://user-images.githubusercontent.com/111189/219976494-13c7d257-cc8d-4dae-9521-3b352aab010b.png)

The calling arguments for `invokeai-configure` have not changed, so
nothing should break. After initializing the root directory, the script
calls `invokeai-model-install` to let the user select the starting
models to install.

`invokeai-model-install` puts up a console GUI with checkboxes to
indicate which models to install. It respects the `--default_only` and
`--yes` arguments so that CI will continue to work. Here are the various
effects you can achieve:

`invokeai-configure`
       This will use console-based UI to initialize invokeai.init,
       download support models, and choose and download SD models

`invokeai-configure --yes`
       Without activating the GUI, populate invokeai.init with default
       values, download support models and download the "recommended"
       SD models

`invokeai-configure --default_only`
       Activate the GUI for changing init options, but don't show the SD
       download form, and automatically download the default SD model
       (currently SD-1.5)

`invokeai-model-install`
       Select and install models. This can be used to download arbitrary
       models from the Internet, install HuggingFace models using their
       repo_id, or watch a directory for models to load at startup time

`invokeai-model-install --yes`
       Import the recommended SD models without a GUI

`invokeai-model-install --default_only`
       As above, but only import the default model

## Flexible Model Imports

The console GUI allows the user to import arbitrary models into InvokeAI
using:

1. A HuggingFace Repo_id
2. A URL (http/https/ftp) that points to a checkpoint or safetensors
file
3. A local path on disk pointing to a checkpoint/safetensors file or
diffusers directory
4. A directory to be scanned for all checkpoint/safetensors files to be
imported

The UI allows the user to specify multiple models to bulk import. The
user can specify whether to import the ckpt/safetensors as-is, or
convert to `diffusers`. The user can also designate a directory to be
scanned at startup time for checkpoint/safetensors files.

## Backend Changes

To support the model selection GUI PR introduces a new method in
`ldm.invoke.model_manager` called `heuristic_import()`. This accepts a
string-like object which can be a repo_id, URL, local path or directory.
It will figure out what the object is and import it. It interrogates the
contents of checkpoint and safetensors files to determine what type of
SD model they are -- v1.x, v2.x or v1.x inpainting.

## Installer

I am attaching a zip file of the installer if you would like to try the
process from end to end.

[InvokeAI-installer-v2.3.0.zip](https://github.com/invoke-ai/InvokeAI/files/10785474/InvokeAI-installer-v2.3.0.zip)
2023-02-22 15:24:59 -05:00
Lincoln Stein
16aea1e869 Merge branch 'main' into install/refactor-configure-and-model-select 2023-02-22 14:22:52 -05:00
blessedcoolant
75ff6cd3c3 Refactor prompting code paths to use the compel library (#2729)
motivation: i want to be doing future prompting development work in the
`compel` lib (https://github.com/damian0815/compel) - which is currently
pip installable with `pip install compel`.
2023-02-23 08:09:52 +13:00
blessedcoolant
7b7b31637c Merge branch 'main' into refactor_use_compel 2023-02-23 07:43:30 +13:00
blessedcoolant
fca564c18a ui: fix use prompt when prompt has colon (#2760)
- Fixes wonky use prompt when prompt contains colon
2023-02-23 07:41:38 +13:00
Lincoln Stein
eb8d87e185 Merge branch 'main' into refactor_use_compel 2023-02-22 12:34:16 -05:00
Lincoln Stein
dbadb1d7b5 Merge branch 'main' into fix/ui/prompt-metadata 2023-02-22 12:33:54 -05:00
Lincoln Stein
a4afb69615 fix crash in textual inversion with "num_samples=0" error (#2762)
- At some point pathlib was added to the list of imported modules and
this broke the os.path code that assembled the sample data set.

- Now fixed by replacing os.path calls with Path methods
2023-02-22 12:31:28 -05:00
Lincoln Stein
8b7925edf3 fix crash in textual inversion with "num_samples=0" error
- At some point pathlib was added to the list of imported modules and this
broke the os.path code that assembled the sample data set.

- Now fixed by replacing os.path calls with Path methods
2023-02-22 11:29:30 -05:00
Lincoln Stein
168a51c5a6 fix textual inversion output directory path
- The configure script was misnaming the directory for text-inversion-output.
- Now fixed.
2023-02-22 10:06:04 -05:00
Damian Stewart
3f5d8c3e44 remove inaccurate docstring 2023-02-22 13:18:39 +01:00
Lincoln Stein
609bb19573 fixes to resizing and init file editing
- Disable responsive resizing below starting dimensions (you can make
  form larger, but not smaller than what it was at startup)

- Fix bug that caused multiple --ckpt_convert entries (and similar) to
  be written to init file.
2023-02-22 07:05:51 -05:00
psychedelicious
d561d6d3dd chore(ui): build frontend 2023-02-22 22:09:11 +11:00
psychedelicious
7ffaa17551 fix(ui): use prompt bug when prompt has colon
This bug is related to the format in which we stored prompts for some time: an array of weighted subprompts.

This caused some strife when recalling a prompt if the prompt had colons in it, due to our recently introduced handling of negative prompts.

Currently there is no need to store a prompt as anything other than a string, so we revert to doing that.

Compatibility with structured prompts is maintained via helper hook.
2023-02-22 20:33:58 +11:00
Damian Stewart
97eac58a50 fix blend tokenization reporting; fix LDM checkpoint support 2023-02-22 10:29:42 +01:00
Damian Stewart
cedbe8fcd7 fix .blend 2023-02-22 09:04:23 +01:00
Jonathan
a461875abd Merge branch 'main' into refactor_use_compel 2023-02-21 21:14:28 -06:00
Lincoln Stein
ab018ccdfe Fallback to using filename to trigger embeddings (#2752)
Lots of earlier embeds use a common trigger token such as * or the
Hebrew letter shin. Previously, the textual inversion manager would
refuse to load the second and subsequent embeddings that used a
previously-claimed trigger. Now, when this case is encountered, the
trigger token is replaced by <filename> and the user is informed of the
fact.
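
A simplified sketch of the fallback (hypothetical structure; the real logic
sits in the textual inversion manager):

```
from pathlib import Path

def register_trigger(claimed: set, trigger: str, embedding_file: str) -> str:
    """Fall back to <filename> when an embedding's trigger is already taken."""
    if trigger in claimed:
        fallback = f"<{Path(embedding_file).stem}>"
        print(f"** trigger {trigger!r} already claimed; using {fallback!r} instead")
        trigger = fallback
    claimed.add(trigger)
    return trigger
```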
2023-02-21 21:58:11 -05:00
Lincoln Stein
d41dcdfc46 move trigger_str registration into try block 2023-02-21 21:38:42 -05:00
Lincoln Stein
972aecc4c5 fix responsive resizing 2023-02-21 21:33:44 -05:00
Lincoln Stein
6b7be4e5dc remove dangling debug statement 2023-02-21 20:09:34 -05:00
Lincoln Stein
9b1a7b553f add "hit any key to exit" pause at end of install 2023-02-21 20:03:08 -05:00
Lincoln Stein
7f99efc5df require diffusers 0.13 2023-02-21 17:28:07 -05:00
Lincoln Stein
0a6d8b4855 Merge branch 'main' into refactor_use_compel 2023-02-21 17:19:48 -05:00
Lincoln Stein
5e41811fb5 move trigger text munging to upper level per review 2023-02-21 17:04:42 -05:00
Lincoln Stein
5a4967582e reformat with black and isort 2023-02-21 14:12:57 -05:00
Jonathan
1d0ba4a1a7 Merge branch 'main' into bugfix/filename-embedding-fallback 2023-02-21 13:12:34 -06:00
Lincoln Stein
4878c7a2d5 Merge branch 'main' into install/refactor-configure-and-model-select 2023-02-21 14:09:38 -05:00
blessedcoolant
9e5aa645a7 Fix crashing when using 2.1 model (#2757)
We now require more free memory to avoid attention slicing. 17.5% free
was not sufficient headroom in all cases, so now we require 25%.
2023-02-22 08:03:51 +13:00
Lincoln Stein
d01e23973e fix problem that was causing CI failures 2023-02-21 13:44:32 -05:00
Jonathan
71bbd78574 Fix crashing when using 2.1 model
We now require more free memory to avoid attention slicing. 17.5% free was not sufficient headroom, so now we require 25%.
2023-02-21 12:35:03 -06:00
Lincoln Stein
fff41a7349 merged with main 2023-02-21 12:20:59 -05:00
blessedcoolant
d5f524a156 Merge branch 'main' into bugfix/filename-embedding-fallback 2023-02-22 06:13:41 +13:00
Jonathan
3ab9d02883 Fixed embiggening crash due to clear_cuda_cache not being passed on and bad cuda stats initialization. (#2756) 2023-02-22 06:12:24 +13:00
Lincoln Stein
27a2e27c3a fix crash when installed models < number columns
1. Fixed display crash when the number of installed models is less than
   the number of desired columns to display them.

2. Added --ckpt_convert option to init file.
2023-02-21 12:09:34 -05:00
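The display fix reduces to a small clamp; a hypothetical sketch:

```
# Never lay out more columns than there are installed models
# (function and parameter names are illustrative).
def layout_columns(num_models: int, desired_columns: int) -> int:
    return max(1, min(desired_columns, num_models))
```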
Jonathan
da04b11a31 Merge branch 'main' into bugfix/filename-embedding-fallback 2023-02-21 10:52:13 -06:00
Lincoln Stein
3795b40f63 implemented the following fixes:
Enhancements:
1. Directory-based imports will not attempt to import components of diffusers models.
2. Diffuser directory imports now supported
3. Files that end with .ckpt that are not Stable Diffusion models (such as VAEs) are
   skipped during import.

Bugs identified in Psychedelicious's review:
1. The invokeai-configure form now tracks the current contents of `invokeai.init` correctly.
2. The autoencoders are no longer treated like installable models, but instead are
   mandatory support models. They will no longer appear in `models.yaml`

Bugs identified in Damian's review:
1. If invokeai-model-install is started before the root directory is initialized, it will
   call invokeai-configure to fix the matter.
2. Fix bug that was causing empty `models.yaml` under certain conditions.
3. Made import textbox smaller
4. Hide the "convert to diffusers" options if nothing to import.
2023-02-21 11:47:41 -05:00
Lincoln Stein
9436f2e3d1 alphabetize trigger strings 2023-02-21 06:23:34 -05:00
Lincoln Stein
7fadd5e5c4 performance: low-memory option for calculating guidance sequentially (#2732)
In theory, this reduces peak memory consumption by doing the conditioned
and un-conditioned predictions one after the other instead of in a
single mini-batch.

In practice, it doesn't reduce the reported "Max VRAM used for this
generation" for me, even without xformers. (But it does slow things down
by a good 18%.)

That suggests to me that the peak memory usage is during VAE decoding,
not the diffusion unet, but ymmv. It does [improve things for gogurt's
16 GB
M1](https://github.com/invoke-ai/InvokeAI/pull/2732#issuecomment-1436187407),
so it seems worthwhile.

To try it out, use the `--sequential_guidance` option:
2dded68267/ldm/invoke/args.py (L487-L492)
2023-02-20 23:00:54 -05:00
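A rough sketch of the idea (assumed names; not the InvokeAI implementation), using the standard classifier-free guidance formula:

```
import torch

# Run the unconditioned and conditioned predictions one after the other
# instead of in a single mini-batch: lower peak memory, more time.
def predict_noise(unet, latents, t, uncond_emb, cond_emb, scale, sequential=False):
    if sequential:
        noise_uncond = unet(latents, t, encoder_hidden_states=uncond_emb).sample
        noise_cond = unet(latents, t, encoder_hidden_states=cond_emb).sample
    else:
        # one batched pass over both conditionings
        both = unet(
            torch.cat([latents, latents]), t,
            encoder_hidden_states=torch.cat([uncond_emb, cond_emb]),
        ).sample
        noise_uncond, noise_cond = both.chunk(2)
    return noise_uncond + scale * (noise_cond - noise_uncond)
```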
Lincoln Stein
4c2a588e1f Merge branch 'main' into perf/lowmem_sequential_guidance 2023-02-20 22:40:31 -05:00
Lincoln Stein
5f9de762ff update installation docs for 2.3.1 installer screens (#2749)
This PR updates the manual page for automatic installation, and contains
screenshots of the new installer screens.
2023-02-20 22:40:02 -05:00
Lincoln Stein
91f7abb398 replace repeated triggers with <filename> 2023-02-20 22:33:13 -05:00
Damian Stewart
6420b81a5d Merge remote-tracking branch 'upstream/main' into refactor_use_compel 2023-02-20 23:34:38 +01:00
Lincoln Stein
b6ed5eafd6 update installation docs for 2.3.1 installer screens 2023-02-20 17:24:52 -05:00
blessedcoolant
694d5aa2e8 Add 'update' action to launcher script (#2636)
- Adds an update action to launcher script
- This action calls new python script `invokeai-update`, which prompts
user to update to latest release version, main development version, or
an arbitrary git tag or branch name.
- It then uses `pip` to update to whatever tag was specified.

The user interface (such as it is) looks like this:

![updater-screenshot](https://user-images.githubusercontent.com/111189/218291539-e5542662-6bfd-46ef-8ea9-655ca77392b7.png)
2023-02-21 11:17:22 +13:00
Lincoln Stein
833079140b Merge branch 'main' into enhance/update-menu 2023-02-20 17:16:20 -05:00
Lincoln Stein
fd27948c36 Merge branch 'main' into perf/lowmem_sequential_guidance 2023-02-20 17:15:33 -05:00
Damian Stewart
1dfaaa2a57 fix web ui issues 2023-02-20 22:58:07 +01:00
Lincoln Stein
bac6b50dd1 During textual inversion training, skip over non-image files (#2747)
- The TI script was looping over all files in the training image
directory, regardless of whether they were image files or not. This PR
adds a check for image file extensions.
- Closes #2715
2023-02-20 16:17:32 -05:00
blessedcoolant
a30c91f398 Merge branch 'main' into bugfix/textual-inversion-training 2023-02-21 09:58:19 +13:00
Lincoln Stein
17294bfa55 restore ability of textual inversion manager to read .pt files (#2746)
- Fixes longstanding bug in the token vector size code which caused .pt
files to be assigned the wrong token vector length. These were then
tossed out during directory scanning.
2023-02-20 15:34:56 -05:00
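A hedged sketch of the compatibility test (key names follow the common textual-inversion file layouts; the real code handles more variants):

```
import torch

# The embedding's vector length must match the model's text-encoder
# width, e.g. 768 for SD-1.x and 1024 for SD-2.x.
def embedding_dim(path: str) -> int:
    data = torch.load(path, map_location="cpu")
    if "string_to_param" in data:          # classic .pt layout
        tensor = next(iter(data["string_to_param"].values()))
    else:                                  # .bin layout: {trigger: tensor}
        tensor = next(iter(data.values()))
    return tensor.shape[-1]

def is_compatible(path: str, model_token_dim: int) -> bool:
    return embedding_dim(path) == model_token_dim
```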
Lincoln Stein
3fa1771cc9 Merge branch 'main' into perf/lowmem_sequential_guidance 2023-02-20 15:20:15 -05:00
Lincoln Stein
f3bd386ff0 Merge branch 'main' into bugfix/textual-inversion-training 2023-02-20 15:19:53 -05:00
Lincoln Stein
8486ce31de Merge branch 'main' into bugfix/embedding-vector-length 2023-02-20 15:19:36 -05:00
Lincoln Stein
1d9845557f reduced verbosity of embed loading messages 2023-02-20 15:18:55 -05:00
Lincoln Stein
55dce6cfdd remove more dead code 2023-02-20 15:08:07 -05:00
Lincoln Stein
58be915446 Merge branch 'main' into install/refactor-configure-and-model-select 2023-02-20 14:48:41 -05:00
blessedcoolant
dc9268f772 [WebUI] Symmetry Fix (#2745)
Symmetry now has a toggle on and off. Won't be passed if not enabled.
Symmetry settings now moved to their accordion.
2023-02-21 08:47:23 +13:00
Lincoln Stein
47ddc00c6a in textual inversion training, skip over non-image files
- Closes #2715
2023-02-20 14:44:10 -05:00
Lincoln Stein
0d22fd59ed restore ability of textual inversion manager to read .pt files
- Fixes longstanding bug in the token vector size code which caused
  .pt files to be assigned the wrong token vector length. These
  were then tossed out during directory scanning.
2023-02-20 14:34:14 -05:00
blessedcoolant
d5efd57c28 Merge branch 'symmetry-fix' of https://github.com/blessedcoolant/InvokeAI into symmetry-fix 2023-02-21 07:44:34 +13:00
blessedcoolant
b52a92da7e build: symmetry-fix-2 2023-02-21 07:43:56 +13:00
blessedcoolant
b949162e7e Revert Symmetry Big Size Input 2023-02-21 07:42:20 +13:00
blessedcoolant
5409991256 Merge branch 'main' into symmetry-fix 2023-02-21 07:29:53 +13:00
blessedcoolant
be1bcbc173 build: symmetry-fix 2023-02-21 07:28:25 +13:00
blessedcoolant
d6196e863d Move symmetry settings to their own accordion 2023-02-21 07:25:24 +13:00
blessedcoolant
63e790b79b fix crash in CLI when --save_intermediates called (#2744)
Fixes #2733
2023-02-21 07:16:45 +13:00
Lincoln Stein
cf53bba99e Merge branch 'main' into bugfix/save-intermediates 2023-02-20 12:51:53 -05:00
Lincoln Stein
ed4c8f6a8a fix crash in CLI when --save_intermediates called
Fixes #2733
2023-02-20 12:50:32 -05:00
Lincoln Stein
aab8263c31 Fix crash on calling diffusers' prepare_attention_mask (#2743)
Diffusers' `prepare_attention_mask` was crashing when we didn't pass in
a batch size.
2023-02-20 12:35:33 -05:00
Jonathan
b21bd6f428 Fix crash on calling diffusers' prepare_attention_mask
Diffusers' `prepare_attention_mask` was crashing when we didn't pass in a batch size.
2023-02-20 11:12:47 -06:00
Kevin Turner
cb6903dfd0 Merge branch 'main' into perf/lowmem_sequential_guidance 2023-02-20 08:03:11 -08:00
blessedcoolant
cd87ca8214 Correctly detect when an embedding is incompatible with the current model (#2736)
- Fixed the test for token length; tested on several .pt and .bin files
- Also added a __main__ entrypoint for CLI.py, to make pdb debugging a
bit more convenient.
2023-02-21 04:32:32 +13:00
blessedcoolant
58e5bf5a58 Merge branch 'main' into bugfix/embedding-compatibility-test 2023-02-21 04:09:18 +13:00
blessedcoolant
f17c7ca6f7 [WebUI] Symmetry Settings (#2741)
Add the newly added Symmetry settings to the WebUI.
2023-02-21 04:07:30 +13:00
blessedcoolant
c3dd28cff9 Merge branch 'main' into symmetry-webui 2023-02-21 04:06:54 +13:00
blessedcoolant
db4e1e8b53 add @lstein and @blessedcoolant to all codeowner paths (#2742)
- In an emergency, one or the other of these individuals will be
available to review any part of the code.
2023-02-21 04:06:23 +13:00
Lincoln Stein
3e43c3e698 add @lstein and @blessedcoolant to all paths
- In an emergency, one or the other of these individuals will
  be available to review any part of the code.
2023-02-20 10:02:32 -05:00
blessedcoolant
cc7733af1c Merge branch 'main' into enhance/update-menu 2023-02-21 03:54:40 +13:00
blessedcoolant
2a29734a56 Merge branch 'main' into symmetry-webui 2023-02-21 03:18:47 +13:00
blessedcoolant
f2e533f7c8 build: threshold slider fix 2023-02-21 03:17:41 +13:00
blessedcoolant
078f897b67 Revert Threshold Slider to older values 2023-02-21 02:57:00 +13:00
Matthias Wild
8352ab2076 remove old swagger related files since security issues (#2730) 2023-02-20 14:55:21 +01:00
Matthias Wild
1a3d47814b Merge branch 'main' into update/docs/remove-swagger-related-files 2023-02-20 14:54:22 +01:00
Lincoln Stein
e852ad0a51 fix bug that prevented converted files from being written into `models.yaml` 2023-02-20 08:48:54 -05:00
blessedcoolant
136cd0e868 Merge branch 'main' into symmetry-webui 2023-02-21 02:43:40 +13:00
blessedcoolant
7afe26320a build: symmetry-settings 2023-02-21 02:41:26 +13:00
Lincoln Stein
702da71515 swap y/n values for broken model reconfiguration prompt 2023-02-20 08:34:46 -05:00
blessedcoolant
b313cf8afd Add Symmetry Settings 2023-02-21 02:27:55 +13:00
Lincoln Stein
852d78d9ad Fix for issue #2707 (#2710)
When selecting the last model of the third model-list in the
model-merging-TUI it crashed because the code forgot about the "None"
element.

Additionally it seems that it accidentally always took the wrong model
as third model if selected?

This simple fix resolves both issues.
2023-02-20 08:02:00 -05:00
Lincoln Stein
5570a88858 Merge branch 'main' into update/docs/remove-swagger-related-files 2023-02-20 07:44:42 -05:00
Lincoln Stein
cfd897874b Merge branch 'main' into perf/lowmem_sequential_guidance 2023-02-20 07:42:35 -05:00
Lincoln Stein
1249147c57 Merge branch 'main' into enhance/update-menu 2023-02-20 07:38:56 -05:00
Lincoln Stein
eec5c3bbb1 Merge branch 'main' into main 2023-02-20 07:38:08 -05:00
Jonathan
ca8d9fb885 Add symmetry to generation (#2675)
Added symmetry to Invoke based on discussions with @damian0815. This can currently only be activated via the CLI with the `--h_symmetry_time_pct` and `--v_symmetry_time_pct` options. Those take values from 0.0-1.0, exclusive, indicating the percentage through generation at which symmetry is applied as a one-time operation. To have symmetry in either axis applied after the first step, use a very low value like 0.001.
2023-02-20 07:33:19 -05:00
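An illustrative sketch of the one-time operation (not the merged implementation; assumes an even latent width):

```
import torch

# Mirror the left half of the latents onto the right half.
def apply_h_symmetry(latents: torch.Tensor) -> torch.Tensor:
    half = latents.shape[-1] // 2
    left = latents[..., :half]
    return torch.cat([left, torch.flip(left, dims=[-1])], dim=-1)

# Apply symmetry once, as soon as the step count crosses the
# requested percentage of total steps.
def maybe_apply_symmetry(latents, step, total_steps, time_pct, already_applied):
    if time_pct and not already_applied and step / total_steps >= time_pct:
        return apply_h_symmetry(latents), True
    return latents, already_applied
```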
Lincoln Stein
7d77fb9691 fixed --default_only behavior 2023-02-20 01:29:39 -05:00
Lincoln Stein
a4c0dfb33c fix broken --ckpt_convert option
- not sure why, but at some point --ckpt_convert (which converts legacy
  checkpoints into diffusers in memory) stopped working due to float16/float32 issues.

- this commit repairs the problem

- also removed some debugging messages I found in passing
2023-02-20 01:12:02 -05:00
Kevin Turner
2dded68267 add --sequential_guidance option for low-RAM tradeoff 2023-02-19 21:21:14 -08:00
Lincoln Stein
172ce3dc25 correctly detect when an embedding is incompatible with the current model
- Fixed the test for token length; tested on several .pt and .bin files
- Also added a __main__ entrypoint for CLI.py, to make pdb debugging a bit
  more convenient.
2023-02-19 22:30:57 -05:00
Kevin Turner
6c8d4b091e dev(InvokeAIDiffuserComponent): mollify type checker's concern about the optional argument 2023-02-19 16:58:54 -08:00
Lincoln Stein
7beebc3659 resolved conflicts; ran black and isort 2023-02-19 19:48:01 -05:00
Lincoln Stein
5461318eda clean up diagnostic messages 2023-02-19 19:38:29 -05:00
Kevin Turner
d0abe13b60 performance(InvokeAIDiffuserComponent): add low-memory path for calculating conditioned and unconditioned predictions sequentially
Proof of concept. Still needs to be wired up to options or heuristics.
2023-02-19 16:04:54 -08:00
Kevin Turner
aca9d74489 refactor(InvokeAIDiffuserComponent): rename internal methods
Prefix with `_` as is tradition.
2023-02-19 15:33:16 -08:00
mauwii
a0c213a158 remove /static 2023-02-19 23:51:00 +01:00
mauwii
740210fc99 remove old unused files since security issues 2023-02-19 23:47:28 +01:00
Lincoln Stein
ca10d0652f show title of add models screen 2023-02-19 16:55:09 -05:00
Lincoln Stein
e1a85d8184 fix incorrect passing of precision to model installer 2023-02-19 16:24:31 -05:00
Lincoln Stein
9d8236c59d tested and working on Ubuntu
- You can now achieve several effects:

   `invokeai-configure`
   This will use console-based UI to initialize invokeai.init,
   download support models, and choose and download SD models

   `invokeai-configure --yes`
   Without activating the GUI, populate invokeai.init with default values,
   download support models and download the "recommended" SD models

   `invokeai-configure --default_only`
   As above, but only download the default SD model (currently SD-1.5)

   `invokeai-model-install`
   Select and install models. This can be used to download arbitrary
   models from the Internet, install HuggingFace models using their repo_id,
   or watch a directory for models to load at startup time

   `invokeai-model-install --yes`
   Import the recommended SD models without a GUI

   `invokeai-model-install --default_only`
   As above, but only import the default model
2023-02-19 16:08:58 -05:00
blessedcoolant
7eafcd47a6 [WebUI] Bug Fixes (#2728)
A few bugs fixed.

- After the recent update to the Cancel Button, it was no longer
respecting sizing in Floating Mode and the Beta Canvas. Fixed that.
- After the recent dependency update, useHotkeys was bugging out for the
fullscreen hotkey `f`. Realized this was happening because the hotkey
was initialized in two places -- in both the gallery and the parameter
floating button. Removed it from both those places and moved it to the
InvokeTabs component. It makes sense for it to reside here because it is a
global hotkey.
- Also added index `0` to the default Accordion index in state in order
to ensure that the main accordions stay open. Conveniently this works
great on all tabs. We have all the primary options in accordions so they
stay open. And as for advanced settings, the first one is always Seed
which is an important accordion, so it opens up by default.

Think there may be some more bugs. Looking in to them.
2023-02-20 09:39:48 +13:00
Damian Stewart
ded3f13a33 move all prompting stuff to use compel 2023-02-19 20:42:29 +01:00
Lincoln Stein
e5646d7241 both forms functional; need integration 2023-02-19 13:12:05 -05:00
blessedcoolant
79ac9698c1 build: webui-bug-fixes 2023-02-20 05:28:52 +13:00
blessedcoolant
d29f57c93d fix: Keep the first accordion open by default on reset
We need to do this now because we are using multiple accordions.
2023-02-20 05:26:48 +13:00
blessedcoolant
9b7cde8918 fix: Fullscreen Hotkey Bug
After upgrading the deps, the full screen hotkey started to bug out. I believe this was happening because it was triggered in two different components causing it to run twice. Removed it from both floating buttons and moved it to the Invoke tab. Makes sense to keep it there as it is a global hotkey.
2023-02-20 05:20:51 +13:00
blessedcoolant
8ae71303a5 fix: Cancel Button not maintaining min height
After the recent changes the Cancel button wasn't maintaining min height in floating mode. Also the new button group was not scaling in width correctly on the Canvas Beta UI. Fixed both.
2023-02-20 05:18:37 +13:00
blessedcoolant
2cd7bd4a8e docs: add translation info to readme (#2725)
- Adds a translation status badge
- Adds a blurb about contributing a translation (we want Weblate to be
the source of truth for translations, and to avoid updating translations
directly here)
2023-02-20 04:46:26 +13:00
blessedcoolant
b813298f2a Merge branch 'main' into docs/readme-translation 2023-02-20 04:43:13 +13:00
blessedcoolant
58f787f7d4 ui: update deps, fix husky script (#2726)
- Upgraded all dependencies
- Removed beta TS 5.0 as it conflicted with some packages
- Added types for `Array.prototype.findLast` and
`Array.prototype.findLastIndex` (these definitions are provided in TS
5.0)
- Fixed type import syntax in a few components
- Re-patched `redux-deep-persist` and tested to ensure the patch still
works

The husky pre-commit command was `npx run lint`, but it should run
`lint-staged`. Also, `npx` wasn't working for me. Changed the command to
`npm run lint-staged` and it all works. Extended the `lint-staged`
triggers to hit `json`, `scss` and `html`.
2023-02-20 04:14:24 +13:00
blessedcoolant
2bba543d20 Merge branch 'main' into chore/ui/update-deps 2023-02-20 04:13:47 +13:00
Jonathan
d3c1b747ee Fix behavior when encountering a bad embedding (#2721)
When encountering a bad embedding, InvokeAI was asking about reconfiguring models. This is because the embedding load error was never handled - it now is.
2023-02-19 14:04:59 +00:00
psychedelicious
b9ecf93ba3 ui: translations update from weblate (#2727)
Translations update from [Hosted Weblate](https://hosted.weblate.org)
for [InvokeAI/Web
UI](https://hosted.weblate.org/projects/invokeai/web-ui/).



Current translation status:

![Weblate translation
status](https://hosted.weblate.org/widgets/invokeai/-/web-ui/horizontal-auto.svg)
2023-02-19 23:12:05 +11:00
psychedelicious
487da8394d translationBot(ui): update translation (French)
Currently translated at 85.4% (398 of 466 strings)

Co-authored-by: psychedelicious <mabianfu@icloud.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/fr/
Translation: InvokeAI/Web UI
2023-02-19 13:00:55 +01:00
Riccardo Giovanetti
4c93bc56f8 translationBot(ui): update translation (Italian)
Currently translated at 100.0% (466 of 466 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2023-02-19 13:00:55 +01:00
Hosted Weblate
727dfeae43 translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2023-02-19 13:00:55 +01:00
psychedelicious
88d561dee7 chore(ui): build frontend 2023-02-19 22:32:05 +11:00
psychedelicious
7a379f1d4f chore(ui): update deps
- Upgraded all dependencies
- Removed beta TS 5.0 as it conflicted with some packages
- Added types for `Array.prototype.findLast` and `Array.prototype.findLastIndex` (these definitions are provided in TS 5.0)
- Fixed type import syntax in a few components
- Re-patched `redux-deep-persist` and tested to ensure the patch still works
2023-02-19 22:32:05 +11:00
psychedelicious
3ad89f99d2 build(ui): fix husky & lint-staged 2023-02-19 22:32:00 +11:00
psychedelicious
d76c5da514 docs: add translation info to readme 2023-02-19 19:13:38 +11:00
blessedcoolant
da5b0673e7 docs(ti): add using & troubleshooting sections (#2717)
Add `Using Embeddings` and `Troubleshooting` sections clarifying issues
I had when using TI for the first time.
2023-02-19 16:52:44 +13:00
blessedcoolant
d7180afe9d Merge branch 'main' into docs/ti/add-using-troubleshooting 2023-02-19 16:51:50 +13:00
psychedelicious
2e9c15711b docs(ti): add using & troubleshooting sections 2023-02-19 14:45:26 +11:00
psychedelicious
e19b08b149 [WebUI] Model Manager Lag Fix (#2720)
Model Manager lags a bit if you have a lot of models.

Basically added a fake delay to rendering the model list so the modal
has time to load first. Hacky but if it works it works.
2023-02-19 14:42:25 +11:00
blessedcoolant
234d76a269 build: webui-model-manager-lag-fix 2023-02-19 15:25:14 +13:00
blessedcoolant
826d941068 fix: Fix Model Manager Modal Lag
By hacking in a fake delay to load the list.
2023-02-19 15:23:25 +13:00
Kevin Turner
34e449213c add ability to retrieve current list of embedding trigger strings (#2650) 2023-02-18 18:05:00 -08:00
Kevin Turner
671c5943e4 Merge remote-tracking branch 'origin/main' into api/add-trigger-string-retrieval
# Conflicts:
#	ldm/generate.py
2023-02-18 17:44:59 -08:00
blessedcoolant
16c24ec367 [WebUI] Implement a "Cancel after current iteration" Button (#2642)
## What was the problem/requirement? (What/Why)
Frequently, I wish to cancel the processing of images, but also want the
current image to finalize before I do. To work around this, I need to
wait until the current one finishes before pressing the cancel.

## What was the solution? (How)
* Implemented a button that allows to "Cancel after current iteration,"
which stores a state in the UI that will attempt to cancel the
processing after the current image finishes
* If the button is pressed again, while it is spinning and before the
next iteration happens, this will stop the scheduling of the cancel, and
behave as if the button was never pressed.

### Minor
* Added `.yarn` to `.gitignore` as this was an output folder produced
from following Frontend's README

### Revision 2
#### Major 
* Changed from a standalone button to a context menu next to the
original cancel button. Pressing the context menu will give the
drop-down option to select which type of cancel method the user prefers,
and they can press that button for canceling in the specified type
* Moved states to system state for cross-screen and toggled cancel types
management
* Added in distribution for the target yarn version (allowing any
version of yarn to compile successfully), and updated the README to
ensure `--immutable` is passed for onboarding developers

#### Minor 
* Updated `.gitignore` to ignore specific yarn folders, as specified by
their team -
https://yarnpkg.com/getting-started/qa#which-files-should-be-gitignored

## How were these changes tested?
* `yarn dev` => Server started successfully
* Manual testing on the development server to ensure the button behaved
as expected
* `yarn run  build` => Success

### Artifacts
#### Revision 1
* Video showing the UI changes in action

https://user-images.githubusercontent.com/89283782/218347722-3a15ce61-2d8c-4c38-b681-e7a3e79dd595.mov

* Images showing the basic UI changes

![image](https://user-images.githubusercontent.com/89283782/218347124-4afbb699-2abc-4e71-a794-b04f7179cfe2.png)

![image](https://user-images.githubusercontent.com/89283782/218347826-443db351-7a3a-4111-80af-56d56a81f07b.png)

#### Revision 2
* Video showing the UI changes in action

https://user-images.githubusercontent.com/89283782/219901217-048d2912-9b61-4415-85fd-9e8fedb00c79.mov

* Images showing the basic UI changes
(Default state) 

![image](https://user-images.githubusercontent.com/89283782/219901228-918b263a-dc75-4e5d-8897-5fc62c71a790.png)
(Drop-down context menu active) 

![image](https://user-images.githubusercontent.com/89283782/219901241-021be07a-b768-40a2-988f-eb59be4a962d.png)
(Scheduled cancel selected and running)

![image](https://user-images.githubusercontent.com/89283782/219901243-59a9c61a-71a7-44b3-adab-7aa4c9ee1f8e.png)
(Scheduled cancel started)

![image](https://user-images.githubusercontent.com/89283782/219901266-b4c0adc1-d791-4989-9351-075758e06534.png)


## Notes
* Watching `SystemState`'s `currentStatus` variable for the value
`common:statusIterationComplete` is an alternative to this approach (and
would be more optimal as it should prevent the next iteration from even
starting), but since the names are within the translations, rather than
an enum or other type, this method of tracking the current iteration was
used instead.
* `isLoading` on `IAIIconButton` caused the Icon Button to also be
disabled, so the current solution works around that with conditionally
rendering the icon of the button instead of passing that value.
* I don't have context on the development expectation for `dist` folder
interactions (and couldn't find any documentation outside of the
`.gitignore` mentioning that the folder should remain). Let me know if
they need to be modified a certain way.
2023-02-19 14:35:34 +13:00
psychedelicious
e8240855e0 chore(ui): build frontend 2023-02-19 12:18:40 +11:00
psychedelicious
a5e065048e feat(ui): persist blacklist cancelAfter 2023-02-19 11:53:52 +11:00
blessedcoolant
a53c3269db build: cancel-after-iteration-webui 2023-02-19 13:30:15 +13:00
blessedcoolant
8bf93d3a32 Isolate Cancel Button Menu Styling 2023-02-19 13:23:04 +13:00
blessedcoolant
d42cc0fd1c Port Cancel Button Options Menu to New Component 2023-02-19 13:18:03 +13:00
blessedcoolant
d2553d783c Add IAISimpleMenu Component 2023-02-19 13:17:45 +13:00
blhook
10b747d22b Run yarn build once more due to merge 2023-02-18 14:45:00 -08:00
blhook
1d567fa593 Merge branch 'main' into scheduled-cancel 2023-02-18 14:43:05 -08:00
psychedelicious
3a3dd39d3a [ui] fix weblate merge conflict (#2716)
My last attempt to fix the Weblate missing keys was done incorrectly and
caused a merge conflict on the Weblate repo.

This PR follows these steps to fix it
https://docs.weblate.org/en/latest/faq.html#how-to-fix-merge-conflicts-in-translations

🤞
2023-02-19 09:14:35 +11:00
psychedelicious
f4b3d7dba2 fix(ui): add useSlidersForAll string 2023-02-19 09:12:14 +11:00
Riccardo Giovanetti
de2c7fd372 translationBot(ui): update translation (Italian)
Currently translated at 100.0% (462 of 462 strings)

Translation: InvokeAI/Web UI
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
2023-02-19 09:05:01 +11:00
Anonymous
b140e1c619 translationBot(ui): update translation (English)
Currently translated at 100.0% (462 of 462 strings)

Co-authored-by: Anonymous <noreply@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/en/
Translation: InvokeAI/Web UI
2023-02-19 09:05:01 +11:00
Riccardo Giovanetti
1308584289 translationBot(ui): update translation (Italian)
Currently translated at 100.0% (459 of 459 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2023-02-19 09:05:01 +11:00
blhook
2ac4778bcf Fix broken translation string location in Scheduled Cancel 2023-02-18 13:51:53 -08:00
blhook
6101d67dba Post-merge cleanup 2023-02-18 13:35:33 -08:00
blhook
3cd50fe3a1 Merge branch 'main' into scheduled-cancel 2023-02-18 13:30:45 -08:00
blhook
e683b574d1 Change scheduled send to be as part of context for Cancel button 2023-02-18 13:23:58 -08:00
blessedcoolant
0decd05913 fix conversion of checkpoints into incompatible diffusers models (#2714)
- The checkpoint conversion script was generating diffusers models with
the safety checker set to null. This resulted in models that could not
be merged with ones that have the safety checker activated.

- This PR fixes the issue by incorporating the safety checker into all
1.x-derived checkpoints, regardless of user's nsfw_checker setting.
2023-02-19 05:42:19 +13:00
Lincoln Stein
d01b7ea2d2 remove debug statement & actually do merge 2023-02-18 11:19:06 -05:00
Lincoln Stein
4fa91724d9 fix conversion of checkpoints into incompatible diffusers models
- The checkpoint conversion script was generating diffusers models
  with the safety checker set to null. This resulted in models
  that could not be merged with ones that have the safety checker
  activated.

- This PR fixes the issue by incorporating the safety checker into
  all 1.x-derived checkpoints, regardless of user's nsfw_checker setting.
2023-02-18 11:07:38 -05:00
blessedcoolant
e3d1c64b77 fix(diffusers_pipeline): ensure cuda.get_mem_info always gets a specific device index. (#2700)
Also tighten up the typing of `device` attributes in general.

Fixes 
> ValueError: Expected a torch.device with a specified index or an
integer, but got:cuda
2023-02-19 04:33:16 +13:00
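A sketch of the normalization (helper name assumed): `torch.cuda.mem_get_info` wants a device with an explicit index, so a bare "cuda" is resolved to the current device first.

```
import torch

def normalize_device(device: torch.device) -> torch.device:
    # "cuda" with no index becomes e.g. "cuda:0"
    if device.type == "cuda" and device.index is None:
        return torch.device("cuda", torch.cuda.current_device())
    return device

if torch.cuda.is_available():
    free_bytes, total_bytes = torch.cuda.mem_get_info(
        normalize_device(torch.device("cuda"))
    )
```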
blessedcoolant
17f35a7bba Merge branch 'main' into fix/expected-torch-device 2023-02-19 04:16:13 +13:00
blessedcoolant
ab2f0a6fbf fix(ui): fix translation files (#2708)
Weblate's first PR was it attempting to fix some translation issues we
had overlooked!

It wanted to remove some keys which it did not see in our translation
source due to typos.

This PR instead corrects the key names to resolve the issues.
2023-02-19 03:51:32 +13:00
blessedcoolant
41cbf2f7c4 Merge branch 'main' into feat/ui/fix-translations 2023-02-19 03:50:35 +13:00
Damian Stewart
d5d2e1d7a3 Merge branch 'main' into fix/expected-torch-device 2023-02-18 15:23:08 +01:00
Lincoln Stein
587faa3e52 preparation for startup option editor 2023-02-18 08:51:26 -05:00
psychedelicious
80229ab73e Fixed grammar in "other options" feature tooltip (#2711)
It bothered me, so I fixed it
2023-02-18 22:05:46 +11:00
ExperimentalCyborg
68b2911d2f Fixed grammar in "other options" feature tooltip 2023-02-18 11:58:33 +01:00
Iman Karim
2bf2f627e4 Fix for issue #2707 2023-02-18 11:40:12 +01:00
psychedelicious
58676b2ce2 fix(ui): fix translation files 2023-02-18 19:08:46 +11:00
psychedelicious
11f79dc1e1 [WebUI] Localization Port Bug Fixes (#2706)
- Fixed missing localization string for "useSlidersForAll"
- Fixed status messages being broken.
2023-02-18 18:59:05 +11:00
blessedcoolant
2a095ddc8e build: localization-bug-fixes 2023-02-18 19:35:39 +13:00
blessedcoolant
dd849d2e91 Fix Localization Porting Bugs 2023-02-18 19:32:55 +13:00
Lincoln Stein
8c63fac958 AttributeError: 'Namespace' object has no attribute 'log_tokenization' (#2698)
Could be fixed here or alternatively declared in file globals.py
2023-02-18 01:08:50 -05:00
blessedcoolant
11a70e9764 Merge branch 'main' into patch-14 2023-02-18 18:45:05 +13:00
blessedcoolant
33ce78e4a2 feat(ui): set up for weblate translation (#2702)
# Weblate Translation 

After doing a full integration test of 3 translation service providers
on my fork of InvokeAI, we have chosen
[Weblate](https://hosted.weblate.org). The other two viable options were
[Crowdin](https://crowdin.com/) and
[Transifex](https://www.transifex.com/).

Weblate was the choice because its hosted service provides a very solid
UX / DX, can scale as much as we may ever need, is FOSS itself, and
generously offers free hosted service to other libre projects like ours.

## How it works

Weblate hosts its own fork of our repo and establishes a kind of
unidirectional relationship between our repo and its fork.

### InvokeAI --> Weblate

The `invoke-ai/InvokeAI` repo has had the Weblate GitHub app added to
it. This app watches for changes to our translation source
(`invokeai/frontend/public/locales/en.json`) and then updates the
Weblate fork. The Weblate UI then knows there are new strings to be
translated, or changes to be made.

### Translation

Our translators can then update the translations on the Weblate UI. The
plan now is to invite individual community members who have expressed
interest in maintaining a language or two and give them access to the
app. We can also open the doors to the general public if desired.

### Weblate --> InvokeAI

When a translation is ready or changed, the system will make a PR to
`main`. We have a substantial degree of control over this and will
likely manually trigger these PRs instead of letting them fire off
automatically.

Once a PR is merged, we will still need to rebuild the web UI. I think
we can set things up so that we only need the rebuild when a totally new
language is added, but for now, we will stick to this relatively simple
setup.

## This PR 

This PR sets up the web UI's translation stuff to work with Weblate:
- merged each locale into a single file
- updated the i18next config and UI to work with this simpler file
structure
- update our eslint and prettier rules to ensure the locale files have
the same format as what Weblate outputs (`tabWidth: 4`)
- added a thank you to Weblate in our README

Once this is merged, I'll link Weblate to `main` and do a couple tests
to ensure it is all working as expected.
2023-02-18 18:42:03 +13:00
psychedelicious
4f78518858 chore(ui): build frontend 2023-02-18 15:26:24 +11:00
psychedelicious
fad99ac4d2 docs: add thanks to weblate for translation 2023-02-18 15:26:24 +11:00
psychedelicious
423b592b25 feat(ui): set up for weblate translation 2023-02-18 15:26:04 +11:00
Kevin Turner
8aa7d1da55 fix(xformers): shush about not having Triton available. (#2701)
It's not readily available on Windows and xformers only uses it on some very specific hardware anyway.
2023-02-17 18:02:32 -08:00
Kevin Turner
6b702c32ca fix(xformers): shush about not having Triton available.
It's not readily available on Windows and xformers only uses it on some very specific hardware anyway.
2023-02-17 17:41:27 -08:00
blessedcoolant
767012aec0 [WebUI] Model Merging (#2699)
This PR brings Model Merging to the WebUI.

Inside the Model Manager, you can now find a new button called Merge
Models. The rest is self-explanatory.


![firefox_BYCM4YNHEa](https://user-images.githubusercontent.com/54517381/219795631-dbb5c5c4-fc3a-4cdd-9549-18c2e5302835.png)
2023-02-18 14:34:35 +13:00
blessedcoolant
2267057e2b Merge branch 'main' into webui-model-merging 2023-02-18 14:13:44 +13:00
Kevin Turner
b8212e4dea fix(diffusers_pipeline): ensure cuda.get_mem_info always gets a specific device index.
Also tighten up the typing of `device` attributes in general.
2023-02-17 16:56:15 -08:00
blessedcoolant
5b7e4a5f5d Add Error Handling For Merging 2023-02-18 12:17:22 +13:00
Lincoln Stein
07f9fa63d0 Bugfixes on the merge_model GUI (#2697)
This fixes a few cosmetic bugs in the merge models console GUI:

1) Fix the minimum and maximum ranges on alpha. Was 0.05 to 0.95. Now
0.01 to 0.99.
2) Don't show the 'add_difference' interpolation method when 2 models
selected, or the other three methods when three models selected
2023-02-17 16:57:03 -05:00
Lincoln Stein
1ae8986451 add log_tokenization to globals 2023-02-17 16:47:32 -05:00
Lincoln Stein
b305c240de fix syntax errors introduced by github web-ui edits 2023-02-17 16:44:20 -05:00
blessedcoolant
248dc81ec3 build: [WebUI] model-merge 2023-02-18 10:18:29 +13:00
blessedcoolant
ebe0071ed2 feat: [WebUI] Model Merging 2023-02-18 10:13:56 +13:00
spezialspezial
7a518218e5 AttributeError: 'Namespace' object has no attribute 'log_tokenization'
Could be fixed here or alternatively declared in file globals.py
2023-02-17 22:11:49 +01:00
Lincoln Stein
fc14ac7faa Merge branch 'main' into api/add-trigger-string-retrieval 2023-02-17 15:53:57 -05:00
Lincoln Stein
95e2739c47 Merge branch 'main' into bugfix/merge-gui 2023-02-17 15:42:53 -05:00
Lincoln Stein
f129393a2e document add_difference on-screen 2023-02-17 15:42:06 -05:00
Lincoln Stein
c55bbd1a85 Merge branch 'main' into install/refactor-configure-and-model-select 2023-02-17 15:00:33 -05:00
Lincoln Stein
ccba41cdb2 Bugfix/convert v2 models (#2630)
## Convert v2 models in CLI

- This PR introduces a CLI prompt for the proper configuration file to
use when converting a ckpt file, in order to support both inpainting
and v2 model files.

- When user tries to directly !import a v2 model, it prints out a proper
warning that v2 ckpts are not directly supported and converts it into a
diffusers model automatically.

The user interaction looks like this:
```
(stable-diffusion-1.5) invoke> !import_model /home/lstein/graphic-art.ckpt
Short name for this model [graphic-art]: graphic-art-test
Description for this model [Imported model graphic-art]: Imported model graphic-art
What type of model is this?:
[1] A model based on Stable Diffusion 1.X
[2] A model based on Stable Diffusion 2.X
[3] An inpainting model based on Stable Diffusion 1.X
[4] Something else
Your choice: [1] 2
```

In addition, this PR enhances the bulk checkpoint import function. If a
directory path is passed to `!import_model` then it will be scanned for
`.ckpt` and `.safetensors` files. The user will be prompted to import
all the files found, or select which ones to import.

Addresses
https://discord.com/channels/1020123559063990373/1073730061380894740/1073954728544845855
2023-02-17 14:50:54 -05:00
Lincoln Stein
3d442bbf22 Merge branch 'main' into bugfix/convert-v2-models 2023-02-17 14:50:05 -05:00
Lincoln Stein
4888d0d832 fix slider and interpolations
- fix alpha slider to show values from 0.01 to 0.99
- fix interpolation list to show 'difference' method for 3 models,
  and weighted_sum, sigmoid and inverse_sigmoid methods for 2
2023-02-17 14:46:26 -05:00
Lincoln Stein
47de3fb007 correct display of 'add_difference' method when three models defined
- due to typo, the add_difference method was being displayed as "['add_difference']"
2023-02-17 14:41:02 -05:00
blessedcoolant
41bc160cb8 [WebUI] They see me slidin .. they hatin... (#2614)
Porting over as many usable options to slider as possible.

- Ported Face Restoration settings to Sliders.
- Ported Upscale Settings to Sliders.
- Ported Variation Amount to Sliders.
- Ported Noise Threshold to Sliders <-- Optimized slider so the values
actually make sense.
- Ported Perlin Noise to Sliders.
- Added a suboption hook for the High Res Strength Slider.
- Fixed a couple of small issues with the Slider component.
- Ported Main Options to Sliders.
2023-02-17 21:58:35 +13:00
psychedelicious
d0ba155c19 chore(ui): build frontend 2023-02-17 19:54:36 +11:00
blessedcoolant
5f0848bf7d feat(ui): add all-sliders option 2023-02-17 19:53:44 +11:00
Lincoln Stein
6551527fe2 Update 050_INSTALLING_MODELS.md (#2690)
Fix typo; "cute" to "cube"
2023-02-16 23:03:30 -05:00
Lincoln Stein
159ce2ea08 Merge branch 'main' into bugfix/convert-v2-models 2023-02-16 23:00:58 -05:00
Steven Frank
3715570d17 Update 050_INSTALLING_MODELS.md
Fix typo; "cute" to "cube"
2023-02-16 19:53:01 -08:00
Lincoln Stein
65a7432b5a disable xformers if cuda not available 2023-02-16 22:20:30 -05:00
Lincoln Stein
557e28f460 Fix workflow path filters (#2689)
remove leading Slash from paths
2023-02-16 22:15:31 -05:00
Lincoln Stein
62a7f252f5 Merge branch 'main' into fix/ci/workflow-path-filters 2023-02-16 22:14:45 -05:00
Lincoln Stein
2fa14200aa Merge branch 'main' into api/add-trigger-string-retrieval 2023-02-16 22:12:39 -05:00
mauwii
0605cf94f0 remove leading Slash from paths 2023-02-17 04:10:40 +01:00
Lincoln Stein
d69156c616 remove superseded code 2023-02-16 22:05:00 -05:00
Lincoln Stein
0963bbbe78 rebuild frontend after merge conflict 2023-02-16 21:52:20 -05:00
Lincoln Stein
f3351a5e47 Merge branch 'main' into install/refactor-configure-and-model-select 2023-02-16 21:51:15 -05:00
Lincoln Stein
f3f4c68acc fix model download and autodetection bugs
- Corrected error that caused --full-precision argument to be ignored
  when models are downloaded using the --yes argument.

- Improved autodetection of v1 inpainting files; no longer relies on the
  file having 'inpaint' in the name.
2023-02-16 21:37:50 -05:00
Lincoln Stein
5d617ce63d rebuild front end 2023-02-16 20:03:59 -05:00
Kevin Turner
8a0d45ac5a new OffloadingDevice loads one model at a time, on demand (#2596)
* new OffloadingDevice loads one model at a time, on demand

* fixup! new OffloadingDevice loads one model at a time, on demand

* fix(prompt_to_embeddings): call the text encoder directly instead of its forward method

allowing any associated hooks to run with it.

* more attempts to get things on the right device from the offloader

* more attempts to get things on the right device from the offloader

* make offloading methods an explicit part of the pipeline interface

* inlining some calls where device is only used once

* ensure model group is ready after pipeline.to is called

* fixup! Strategize slicing based on free [V]RAM (#2572)

* doc(offloading): docstrings for offloading.ModelGroup

* doc(offloading): docstrings for offloading-related pipeline methods

* refactor(offloading): s/SimpleModelGroup/FullyLoadedModelGroup

* refactor(offloading): s/HotSeatModelGroup/LazilyLoadedModelGroup

to frame it is the same terms as "FullyLoadedModelGroup"

---------

Co-authored-by: Damian Stewart <null@damianstewart.com>
2023-02-16 23:48:27 +00:00
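A rough sketch of the lazily loaded group (class and method names come from the PR notes above; the body is an assumption): only the model about to run sits on the execution device.

```
import torch

class LazilyLoadedModelGroup:
    def __init__(self, execution_device: torch.device):
        self.execution_device = execution_device
        self.models: list = []

    def install(self, model: torch.nn.Module) -> None:
        self.models.append(model)
        model.to("cpu")

    def load(self, model: torch.nn.Module) -> None:
        # offload everything else, then bring the requested model on-device
        for other in self.models:
            if other is not model:
                other.to("cpu")
        model.to(self.execution_device)
```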
Matthias Wild
2468ba7445 skip huge workflows if not needed (#2688)
- filter paths for `build-container.yml` and `test-invoke-pip.yml`
  - add workflow to pass required checks on PRs with `paths-ignore`
  - this triggers if `test-invoke-pip.yml` does not
- fix "CI checks on main link" in `/README.md`
2023-02-16 22:57:36 +01:00
mauwii
65b7d2db47 skip huge workflows if not needed
- filter paths for `build-container.yml` and `test-invoke-pip.yml`
  - add workflow to pass required checks on PRs with `paths-ignore`
  - this triggers if `test-invoke-pip.yml` does not
- fix "CI checks on main link" in `/README.md`
2023-02-16 22:56:39 +01:00
Ryan Cao
e07f1bb89c build frontend 2023-02-16 21:33:47 +01:00
Ryan Cao
f4f813d108 design: smooth progress bar animations 2023-02-16 21:33:47 +01:00
Lincoln Stein
6217edcb6c tweak wording of python version requirements 2023-02-16 12:55:13 -05:00
Lincoln Stein
c5cc832304 check maximum value of python version as well as minimum 2023-02-16 12:52:07 -05:00
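A minimal sketch of a bounded check (the exact bounds are assumed for illustration, not taken from the installer):

```
import sys

MIN_PYTHON = (3, 9)
MAX_PYTHON = (3, 11)  # exclusive upper bound

if not (MIN_PYTHON <= sys.version_info[:2] < MAX_PYTHON):
    sys.exit(
        f"Unsupported Python {sys.version_info[0]}.{sys.version_info[1]}: "
        f"need >= {MIN_PYTHON[0]}.{MIN_PYTHON[1]} and < {MAX_PYTHON[0]}.{MAX_PYTHON[1]}"
    )
```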
blessedcoolant
a76038bac4 [WebUI] Even off JSX string syntax (#2058)
Assuming that mixing `"literal strings"` and `{'JSX expressions'}`
throughout the code is not for an explicit reason but just a result of
IDE autocompletion, I changed all props to be consistent with the
conventional style of using simple string literals where it is
sufficient.

This is a somewhat trivial change, but it makes the code a little more
readable and uniform
2023-02-17 01:22:17 +13:00
blessedcoolant
ff4942f9b4 Merge branch 'main' into pr/2058 2023-02-17 01:05:20 +13:00
blessedcoolant
1ccad64871 build: lint/format ignores stats.html (#2681) 2023-02-17 00:42:51 +13:00
psychedelicious
19f0022bbe build: lint/format ignores stats.html 2023-02-16 20:02:52 +11:00
psychedelicious
ecc7b7a700 builds frontend 2023-02-16 19:54:38 +11:00
David Regla
e46102124e [WebUI] Even off JSX string props
Increased consistency and readability by replacing any unnecessary JSX expressions in places where string literals are sufficient
2023-02-16 19:54:25 +11:00
Lincoln Stein
314ed7d8f6 Merge branch 'main' into install/refactor-configure-and-model-select 2023-02-16 03:24:02 -05:00
Lincoln Stein
b1341bc611 fully functional and ready for review
- quashed multiple bugs in model conversion and importing
- found old issue in handling of resume of interrupted downloads
- will require extensive testing
2023-02-16 03:22:25 -05:00
Lincoln Stein
07be605dcb mostly working 2023-02-16 01:30:59 -05:00
Lincoln Stein
fe318775c3 bring in url download bugfix from PR 2630 2023-02-16 00:37:17 -05:00
Lincoln Stein
1bb07795d8 model installer downloads starter models + user-provided paths and repo_ids
- Ability to scan directory not yet implemented
- Can't download from Civitai due to incomplete URL download implementation
2023-02-16 00:34:15 -05:00
Eugene Brodsky
caf07479ec fix spelling mistake 2023-02-16 00:19:08 -05:00
Johnathon Selstad
508780d07f Also fix .bat file to point at correct configurer 2023-02-16 00:19:08 -05:00
Johnathon Selstad
05e67e924c Make configure_invokeai.py call invokeai_configure 2023-02-16 00:19:08 -05:00
blessedcoolant
fb2488314f fix minor typos (#2666)
Very, very minor typos I noticed.
2023-02-16 10:14:30 +13:00
blessedcoolant
062f58209b Merge branch 'main' into fix_typos 2023-02-16 10:01:28 +13:00
Matthias Wild
7cb9d6b1a6 [WebUI] Model Conversion (#2616)
### WebUI Model Conversion

**Model Search Updates**

- Model Search now has a radio group that allows users to pick the type
of model they are importing. If they know their model has a custom
config file, they can assign it right here. Based on their pick, the
model config data is automatically populated. And this same information
is used when converting the model to `diffusers`.


![firefox_q8b4Iog73A](https://user-images.githubusercontent.com/54517381/218283322-6bf31fd5-349a-410f-991a-2aa50ee8b6e1.png)

- Files named `model.safetensors` and
`diffusion_pytorch_model.safetensors` are excluded from the search
because these are naming conventions used by diffusers models and they
will end up showing on the list because our conversion saves safetensors
and not bin files.

**Model Conversion UI**

- The **Convert To Diffusers** button can be found on the Edit page of
any **Checkpoint Model**.


![firefox_VUzv10CZ7m](https://user-images.githubusercontent.com/54517381/218283424-d9864406-ebb3-44a4-9e00-b6adda72d817.png)

- When converting the model, the entire process is handled
automatically. The corresponding config while at the time of the Ckpt
addition is used in the process.
- Users are presented with the choice on where to save the diffusers
converted model - same location as the ckpt, InvokeAI models root folder
or a completely custom location.


![firefox_HJlR97KY0u](https://user-images.githubusercontent.com/54517381/218283443-b9136edd-b432-4569-a8cc-50961544f31f.png)

- When the model is converted, the checkpoint entry is replaced with the
diffusers model entry. A user can re-add the ckpt if they wish to.

--- 

More or less done. Might make some minor UX improvements as I refine
things.
2023-02-15 21:58:29 +01:00
blessedcoolant
fb721234ec final build (webui-model-conversion) 2023-02-16 09:32:54 +13:00
blessedcoolant
92906aeb08 Merge branch 'main' into webui-model-conversion 2023-02-16 09:31:28 +13:00
Jonathan
cab41f0538 Fix perlin noise generator for diffusers tensors (#2678)
Tensors with diffusers no longer have to be multiples of 8. This broke Perlin noise generation. We now generate noise for the next largest multiple of 8 and return a cropped result. Fixes #2674.
2023-02-15 19:37:42 +01:00
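A sketch of the pad-then-crop fix (the Perlin generator itself is assumed given and only works on multiples of 8):

```
import math
import torch

def perlin_noise_any_size(width: int, height: int, perlin_fn) -> torch.Tensor:
    # generate at the next multiple of 8 in each dimension...
    w8 = math.ceil(width / 8) * 8
    h8 = math.ceil(height / 8) * 8
    noise = perlin_fn(w8, h8)            # shape (..., h8, w8)
    # ...then crop back to the requested size
    return noise[..., :height, :width]
```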
Kent Keirsey
5d0dcaf81e Fix typo and Hi-Res Bug 2023-02-15 13:06:31 +01:00
psychedelicious
9591c8d4e0 builds frontend 2023-02-15 22:30:47 +11:00
psychedelicious
bcb1fbe031 add tooltips & status messages to model conversion 2023-02-15 22:28:36 +11:00
Lincoln Stein
e87a2fe14b model installer frontend done - needs to be hooked to backend 2023-02-15 01:07:39 -05:00
blhook
d00571b5a4 Revert yarn.lock 2023-02-14 18:05:24 -08:00
fattire
b08a514594 missed one. 2023-02-14 17:49:01 -08:00
Eugene Brodsky
265ccaca4a Merge branch 'main' into enhance/update-menu 2023-02-14 20:48:36 -05:00
fattire
7aa6c827f7 fix minor typos 2023-02-14 17:38:21 -08:00
Jonathan
093174942b Add thresholding for all diffusers types (#2479)
`generator` now asks `InvokeAIDiffuserComponent` to do postprocessing work on latents after every step. Thresholding - now implemented as replacing latents outside of the threshold with random noise - is called at this point. This postprocessing step is also where we can hook up symmetry and other image latent manipulations in the future.

Note: code at this layer doesn't need to worry about MPS as relevant torch functions are wrapped and made MPS-safe by `generator.py`.
2023-02-14 18:00:34 -06:00
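A hedged sketch of that postprocessing hook (the scale of the replacement noise is an assumption):

```
import torch

# Replace latent values outside the threshold with fresh random noise.
def threshold_latents(latents: torch.Tensor, threshold: float) -> torch.Tensor:
    if threshold <= 0:
        return latents
    outside = latents.abs() > threshold
    return torch.where(outside, torch.randn_like(latents) * threshold, latents)
```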
Lincoln Stein
f299f40763 convert existing model display to column format 2023-02-14 16:32:54 -05:00
Lincoln Stein
7545e38655 frontend design done; functionality not hooked up yet 2023-02-14 00:02:19 -05:00
Lincoln Stein
0bc55a0d55 Fix link to the installation documentation
Broken link in the README. Now pointing to correct mkdocs file.
2023-02-14 04:15:23 +01:00
Lincoln Stein
d38e7170fe fix broken !import_model downloads
1. Now works with sites that produce lots of redirects, such as CIVITAI
2. Derive name of destination model file from HTTP Content-Disposition header,
   if present.
3. Swap \\ for / in file paths provided by users, to hopefully fix issues with
   Windows.
2023-02-13 22:14:24 -05:00
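A sketch of deriving the destination filename (helper name assumed): follow redirects, prefer the Content-Disposition header, and fall back to the path of the final URL.

```
import re
from urllib.parse import urlparse

import requests

def remote_model_filename(url: str) -> str:
    resp = requests.get(url, stream=True, allow_redirects=True)
    match = re.search(r'filename="?([^";]+)"?',
                      resp.headers.get("Content-Disposition", ""))
    if match:
        return match.group(1)
    return urlparse(resp.url).path.split("/")[-1]
```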
mauwii
15a9412255 some small formatting fixes 2023-02-13 23:10:58 +01:00
Lincoln Stein
e29399e032 don't even try to load incompatible embeddings 2023-02-13 17:00:52 -05:00
Lincoln Stein
bc18a94d8c add ability to retrieve current list of embedding trigger strings
This PR adds a new attribute to ldm.generate, `embedding_trigger_strings`:

```
gen = Generate(...)
strings = gen.embedding_trigger_strings
```

The trigger strings will change when the model is updated to show only
those strings which are compatible with the current
model. Dynamically-downloaded triggers from the HF Concepts Library
will only show up after they are used for the first time. However, the
full list of concepts available for download can be retrieved
programmatically like this:

```
from ldm.invoke.concepts_lib import HuggingFaceConceptsLibrary
concepts = HuggingFaceConceptsLibrary()
trigger_strings = concepts.list_concepts()
```
2023-02-13 14:11:36 -05:00
Lincoln Stein
5d2bdd478c Merge branch 'main' into bugfix/convert-v2-models 2023-02-13 13:15:05 -05:00
Lincoln Stein
9cacba916b Merge branch 'main' into install/refactor-configure-and-model-select 2023-02-13 09:31:34 -05:00
Lincoln Stein
628e82fa79 Added arabic locale files (#2561)
I have added the Arabic locale files. There need to be some
modifications to the code in order to detect the language direction and
add it to the current document body properties.

For example, we can use this:

import { useTranslation } from "next-i18next";
import { useEffect } from "react";

function App({ children }) {
  const { i18n } = useTranslation();
  const direction = i18n.dir();
  // apply the detected direction (ltr or rtl) to the document body
  useEffect(() => {
    document.body.dir = direction;
  }, [direction]);
  return children;
}

This should be added to the app file. It uses next-i18next to
automatically get the current language and sets the body text direction
(ltr or rtl) depending on the selected language.
2023-02-13 07:45:16 -05:00
Lincoln Stein
fbbbba2fac correct crash on edge case 2023-02-13 07:40:15 -05:00
blessedcoolant
9cbf9d52b4 Merge branch 'main' into pr/2561 2023-02-13 23:48:18 +13:00
blessedcoolant
fb35fe1a41 Merge branch 'main' into pr/2561 2023-02-13 23:47:21 +13:00
psychedelicious
b60b5750af builds frontend 2023-02-13 21:23:26 +11:00
psychedelicious
3ff40114fa adds arabic to language picker 2023-02-13 21:22:39 +11:00
psychedelicious
71c6ae8789 fixes mislocated language file 2023-02-13 21:22:18 +11:00
psychedelicious
d9a7536fa8 moves languages to fallback lang (en) 2023-02-13 21:21:46 +11:00
Lincoln Stein
99f4417cd7 Improve error messages from Textual Inversion and Merge scripts (#2641)
## Provide informative error messages when TI and Merge scripts have
insufficient space for console UI

- The invokeai-ti and invokeai-merge scripts will crash if there is not
enough space in the console to fit the user interface (even after
responsive formatting).

- This PR intercepts the errors and prints a useful error message
advising user to make window larger.
2023-02-13 00:12:32 -05:00
Lincoln Stein
47f94bde04 Merge branch 'main' into install/refactor-configure-and-model-select 2023-02-12 23:59:31 -05:00
Lincoln Stein
197e6b95e3 add missing file 2023-02-12 23:59:18 -05:00
Lincoln Stein
8e47ca8d57 Merge branch 'main' into bugfix/prevent-ti-frontend-crash 2023-02-12 23:56:41 -05:00
Lincoln Stein
714fff39ba add new console frontend to initial model selection, and other improvements
1. The invokeai-configure script has now been refactored. The work of
   selecting and downloading initial models at install time is now done
   by a script named invokeai-initial-models (module
   name is ldm.invoke.config.initial_model_select)

   The calling arguments for invokeai-configure have not changed, so
   nothing should break. After initializing the root directory, the
   script calls invokeai-initial-models to let the user select the
   starting models to install.

2. invokeai-initial-models puts up a console GUI with checkboxes to
   indicate which models to install. It respects the --default_only
   and --yes arguments so that CI will continue to work.

3. User can now edit the VAE assigned to diffusers models in the CLI.

4. Fixed a bug that caused a crash during model loading when the VAE
   is set to None, rather than being empty.
2023-02-12 23:52:44 -05:00
Eugene Brodsky
89239d1c54 (updater) style 'pip' progress to use dark background 2023-02-12 19:10:11 -05:00
blhook
c03d98cf46 Implement a cancel after next iteration button 2023-02-12 15:56:03 -08:00
Lincoln Stein
d1ad46d6f1 ask user to make window larger if not enough space for textual inversion/merge gui
- The invokeai-ti and invokeai-merge scripts will crash if there is not enough space
  in the console to fit the user interface (even after responsive formatting).

- This PR intercepts the errors and prints a useful error message advising user to
  make window larger.
2023-02-12 17:38:46 -05:00
Lincoln Stein
6ae7560f66 Merge branch 'main' into webui-model-conversion 2023-02-12 17:22:32 -05:00
Lincoln Stein
e561d19206 a few adjustments
- fix unused variables and f-strings found by pyflakes
- use global_converted_ckpts_dir() to find location of diffusers
- fixed bug in model_manager that was causing the description of converted
  models to read 'Optimized version of {model_name}'
2023-02-12 17:20:13 -05:00
Jonathan
9eed1919c2 Strategize slicing based on free [V]RAM (#2572)
Strategize slicing based on free [V]RAM when not using xformers. Free [V]RAM is evaluated at every generation. When there's enough memory, the entire generation occurs without slicing. If there is not enough free memory, we use diffusers' sliced attention.
2023-02-12 18:24:15 +00:00
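An illustrative heuristic (names and the memory estimate are assumptions, not the merged code):

```
import torch

# Compare the memory a full attention pass would need against the free
# VRAM reported at generation time; slice only when necessary.
def choose_attention_slicing(pipeline, estimated_bytes: int) -> None:
    free_bytes, _total = torch.cuda.mem_get_info()
    if estimated_bytes > free_bytes:
        pipeline.enable_attention_slicing()
    else:
        pipeline.disable_attention_slicing()
```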
blessedcoolant
b87f7b1129 Update Model Conversion Help Text 2023-02-13 00:30:50 +13:00
blessedcoolant
7410a60208 Merge branch 'main' into webui-model-conversion 2023-02-12 23:35:49 +13:00
Matthias Wild
7c86130a3d add merge_group trigger to test-invoke-pip.yml (#2590) 2023-02-12 05:00:04 +01:00
Lincoln Stein
58a1d9aae0 Merge branch 'main' into update/ci/prepare-test-invoke-pip-for-queue 2023-02-11 22:38:55 -05:00
Lincoln Stein
24e32f6ae2 add 'update' action to launcher script
- Adds an update action to launcher script
- This action calls new python script `invokeai-update`, which prompts
  user to update to latest release version, main development version,
  or an arbitrary git tag or branch name.
- It then uses `pip` to update to whatever tag was specified.
2023-02-11 22:32:48 -05:00
Lincoln Stein
3dd7393984 Huge Docker Update - better caching, don't use root user, include dockerhub and more.... (#2597)
Some of the core features of this PR include:

- optional push image to dockerhub (will be skipped in repos which
didn't set it up)
- stop using the root user at runtime
- trigger builds also for update/docker/* and update/ci/docker/*
- always cache image from current branch and main branch
- separate caches for container flavors
- updated comments with instructions in build.sh and run.sh
2023-02-11 18:25:48 -05:00
Lincoln Stein
f18f743d03 Merge branch 'main' into update/docker/include-dockerhub 2023-02-11 18:03:03 -05:00
Lincoln Stein
c660dcdfcd improve ability to bulk import .ckpt and .safetensors
This commit cleans up the code that did bulk imports of legacy model
files. The code has been refactored, and the user is now offered the
option of importing all the model files found in the directory, or
selecting which ones to import.
2023-02-11 17:59:12 -05:00
blessedcoolant
9e0250c0b4 Merge branch 'main' into webui-model-conversion 2023-02-12 11:13:13 +13:00
blessedcoolant
08c747f1e0 test-build (model-conversion-v1) 2023-02-12 11:12:23 +13:00
blessedcoolant
04ae6fde80 Model Manager localization updates 2023-02-12 11:11:00 +13:00
blessedcoolant
b1a53c8ef0 {Model Manager] Backend update to support custom save locations and configs 2023-02-12 11:10:47 +13:00
blessedcoolant
cd64511f24 [Model Manager] Allows uses to pick Diffusers converted model save location
Users can now pick the folder to save their diffusers converted model. It can either be the same folder as the ckpt, or the invoke root models folder or a totally custom location.
2023-02-12 11:10:17 +13:00
blessedcoolant
1e98e0b159 [Model Manager] Allow users to pick model type
Users can now pick model type when adding a new model and the configuration files are automatically applied.
2023-02-12 11:09:09 +13:00
Lincoln Stein
4f7af55bc3 if importing a v2 ckpt model, convert to diffusers 2023-02-11 16:35:45 -05:00
Lincoln Stein
d0e6a57e48 make inpaint model conversion work
Fixed a couple of bugs:

1. The original config file for the ckpt file is derived from the entry in
   `models.yaml` rather than relying on the user to select one. The implication
   of this is that V2 ckpt models need to be assigned `v2-inference-v.yaml`
   when they are first imported. Otherwise they won't convert right. Note
   that currently V2 ckpts are imported with `v1-inference.yaml`, which
   isn't right either.

2. Fixed a backslash in the output diffusers path, which was causing
   load failures on Linux.

Remaining issues:

1. The radio buttons for selecting the model type are
   nonfunctional. It feels to me like these should be moved into the
   dialog for importing ckpt/safetensors files, because this is
   where the algorithm needs help from the user.

2. The output diffusers model is written into the same directory as
   the input ckpt file. The CLI does it differently and stores the
   diffusers model in `ROOTDIR/models/converted-ckpts`. We should
   settle on one way or the other.
2023-02-11 15:53:41 -05:00
Lincoln Stein
d28a486769 rebuild frontend 2023-02-11 15:07:12 -05:00
Lincoln Stein
84722d92f6 foo 2023-02-11 15:06:34 -05:00
Lincoln Stein
8a3b5ac21d rebuild frontend 2023-02-11 14:58:49 -05:00
Lincoln Stein
717d53a773 Merge branch 'main' into bugfix/convert-v2-models 2023-02-11 14:27:52 -05:00
blessedcoolant
96926d6648 v2 Conversion Support & Radio Picker
Converted the picker options to a Radio Group and also updated the backend to use the appropriate config if it is a v2 model that needs to be converted.
2023-02-12 05:00:29 +13:00
Lincoln Stein
f3639de8b1 add note in manual that directly running v2 models not supported 2023-02-11 09:43:14 -05:00
Lincoln Stein
b71e675e8d support conversion of v2 models
- This PR introduces a CLI prompt for the proper configuration file to
  use when converting a ckpt file, in order to support both inpainting
  and v2 models files.

- When user tries to directly !import a v2 model, it prints out a proper
  warning that v2 ckpts are not directly supported.
2023-02-11 09:39:41 -05:00
tyler
d3c850104b pulling esrgan denoise strength through to the generate API. 2023-02-12 02:47:37 +13:00
tyler
c00155f6a4 pulling esrgan denoise strength through to the generate API. 2023-02-12 02:47:37 +13:00
Lincoln Stein
8753070fc7 Fix Incorrect Windows Environment Activation Location (Manual Installation Documentation) (#2627)
## What was the problem/requirement? (What/Why)
* Windows location for the Python environment activate location is
currently incorrect
  * Due to this, this command will fail for Windows-based users
* The contributing link within the `Developer Install` sections leads to
a [404](https://invoke-ai.github.io/index.md#Contributing)
* `Developer Install`'s numbered list currently lists 1, 1, 2, . . .

## What was the solution? (How)
* Changed the location of Windows script based on actual location -
[reference](https://docs.python.org/3/library/venv.html)
* Moved the link to point to one directory higher -- the main index.md
* Minor format adjustments to allow for the numbered list to appear as
expected

## How were these changes tested?
* `mkdocs serve` => Verified on local server that the changes reflected
as expected

## Notes
The contributing guide says to set the upstream to the `development`
branch, but that branch has been untouched for several months, so I've
pointed to the `main` branch. Let me know if we need to switch to a
different one.
2023-02-11 08:15:17 -05:00
Lincoln Stein
ed8f9f021d Merge branch 'main' into update-installation-documents 2023-02-11 07:46:29 -05:00
Lincoln Stein
3ccc705396 fix two bugs in conversion of inpaint models from ckpt to diffusers m… (#2620)
…odels

- If CLI asked to convert the currently loaded model, the model would
crash on the first rendering. CLI will now refuse to convert a model
loaded in memory (probably a good idea in any case).

- CLI will offer the `v1-inpainting-inference.yaml` as the configuration
file when importing an inpainting .ckpt or .safetensors file that has
"inpainting" in the name. Otherwise it offers `v1-inference.yaml` as the
default.
2023-02-11 07:45:06 -05:00
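
The naming heuristic is simple enough to show directly (a sketch of the rule above, not the CLI's exact code):

```python
def suggested_config(checkpoint_name: str) -> str:
    # Offer the inpainting config when the filename says "inpainting";
    # otherwise fall back to the standard v1 config.
    if "inpainting" in checkpoint_name.lower():
        return "v1-inpainting-inference.yaml"
    return "v1-inference.yaml"

assert suggested_config("sd-v1-5-inpainting.ckpt") == "v1-inpainting-inference.yaml"
assert suggested_config("analog-diffusion.safetensors") == "v1-inference.yaml"
```
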
blessedcoolant
11e422cf29 Ignore two files names instead of the entire folder
Rather than bypassing any path with "diffusers" in it, I'm specifically bypassing model.safetensors and diffusion_pytorch_model.safetensors, both of which should be diffusers files in most cases.
2023-02-12 00:13:22 +13:00
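
In other words, the skip test goes from a substring match on the whole path to an exact match on the two filenames diffusers itself writes, roughly:

```python
from pathlib import Path

# An exact-name check avoids accidentally skipping a user checkpoint whose
# path merely contains the word "diffusers". (Sketch of the filter, not
# the shipped code.)
DIFFUSERS_WEIGHT_FILES = {"model.safetensors", "diffusion_pytorch_model.safetensors"}

def should_pickle_scan(path: Path) -> bool:
    return path.name not in DIFFUSERS_WEIGHT_FILES
```
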
blessedcoolant
7f695fed39 Ignore safetensor or ckpt files inside diffusers model folders.
Basically skips the path if it has the word "diffusers" anywhere inside it.
2023-02-12 00:03:42 +13:00
blessedcoolant
310501cd8a Add support for custom config files 2023-02-11 23:34:24 +13:00
blhook
106b3aea1b Fix incorrect Windows env activation location
Change broken link to Contributing inside of Developer Install
Minor format modification to allow for numbered list to appear properly
2023-02-11 00:30:07 -08:00
blessedcoolant
6e52ca3307 Model Convert Component 2023-02-11 20:41:49 +13:00
blessedcoolant
94c31f672f Add Initial Checks for Inpainting
The conversion itself is broken. But that's another issue.
2023-02-11 20:41:18 +13:00
Saifeddine ALOUI
240bbb9852 Merge branch 'main' into main 2023-02-11 01:17:42 +01:00
Matthias Wild
8cf2ed91a9 Merge branch 'main' into update/docker/include-dockerhub 2023-02-10 22:55:54 +01:00
mauwii
7be5b4ca8b update Dockerfile
- introduce build arg `VOLUME_DIR`
- fix permissions of the Volume
2023-02-10 22:55:19 +01:00
Lincoln Stein
d589ad96aa fix two bugs in conversion of inpaint models from ckpt to diffusers models
- If CLI asked to convert the currently loaded model, the model would crash
  on the first rendering. CLI will now refuse to convert a model loaded
  in memory (probably a good idea in any case).

- CLI will offer the `v1-inpainting-inference.yaml` as the configuration
  file when importing an inpainting .ckpt or .safetensors file that
  has "inpainting" in the name. Otherwise it offers `v1-inference.yaml`
  as the default.
2023-02-10 15:06:37 -05:00
Lincoln Stein
097e41e8d2 2.3.0 Documentation Fixes (#2609)
Found a couple of places where the formatting was messed up. I also
added a "Quick Start Guide" to the README for people who encounter
InvokeAI through PyPi. It features the PyPi install!
2023-02-10 13:00:14 -05:00
mauwii
4cf43b858d update 020_INSTALL_MANUAL.md
- some formatting changes / fixes
- updates venv creation commands
- remove extra index from Mac Installations
2023-02-10 17:29:12 +01:00
mauwii
13a4666a6e update README.md
- fix some formatting issues
- fix command to create venv
- some other small updates
2023-02-10 16:27:21 +01:00
blessedcoolant
9232290950 Initial Implementation - Model Conversion Frontend 2023-02-11 03:53:31 +13:00
blessedcoolant
f3153d45bc Initial Implementation - Model Conversion Backend 2023-02-11 03:53:15 +13:00
Lincoln Stein
d9cb6da951 Merge branch 'main' into update/docker/include-dockerhub 2023-02-10 09:42:07 -05:00
Saifeddine ALOUI
17535d887f Merge branch 'invoke-ai:main' into main 2023-02-10 07:58:28 +01:00
Lincoln Stein
35da7f5b96 Merge branch 'main' into doc/manual-install-fixes 2023-02-09 21:55:21 -05:00
Lincoln Stein
4e95a68582 adding support for ESRGAN denoising strength (#2598)
pulling in denoising support from upstream (it's already there, InvokeAI
just isn't using it). I've enabled this as a command line argument as
construction of the ESRGAN handler happens once. Ideally this would be a
UI option that could be adjusted for each upscaling task. Unfortunately
that is beyond my current level of InvokeAI-foo.

Upstream reference is here, starting on line 99 "use dni to control the
denoise strength"

https://github.com/xinntao/Real-ESRGAN/blob/master/inference_realesrgan.py
2023-02-09 21:55:00 -05:00
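
For context, the upstream "dni" trick is deep network interpolation: two compatible sets of Real-ESRGAN weights (one trained with denoising, one without) are blended, with the denoise strength as the mixing factor. A minimal sketch of that blend over two state dicts of tensors (not InvokeAI's actual wiring):

```python
def dni(denoise_sd: dict, plain_sd: dict, alpha: float) -> dict:
    # alpha is the denoise strength: 1.0 keeps the denoising weights,
    # 0.0 keeps the plain weights, and values in between interpolate.
    return {key: alpha * denoise_sd[key] + (1 - alpha) * plain_sd[key]
            for key in denoise_sd}
```
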
Lincoln Stein
9dfeb93f80 add quick install instructions to README 2023-02-09 20:28:20 -05:00
blessedcoolant
02247ffc79 resolved build (denoise_str) 2023-02-10 14:12:21 +13:00
blessedcoolant
48da030415 resolving conflicts 2023-02-10 14:03:31 +13:00
Lincoln Stein
817e04bee0 add quickstart instructions for PyPi 2023-02-09 19:37:04 -05:00
Lincoln Stein
e5d0b0c37d Formatting fixes, Manual Installation docs
Found a couple of places where the formatting was messed up. This corrects them.
2023-02-09 17:56:25 -05:00
blessedcoolant
950f450665 Merge branch 'main' into main 2023-02-10 11:52:26 +13:00
blessedcoolant
79daf8b039 clean build (esrgan-denoise-str) 2023-02-10 10:20:37 +13:00
blessedcoolant
383cbca896 lint-resolve 2023-02-10 10:16:55 +13:00
psychedelicious
07c55d5e2a adds upscaling denoising to metadata viewer 2023-02-10 07:30:17 +11:00
blessedcoolant
156151df45 build (esrgan-denoise-str) 2023-02-10 09:19:55 +13:00
blessedcoolant
03b1d71af9 Resolving Conflicts 2023-02-10 09:18:02 +13:00
Matthias Wild
f6ad107fdd Merge branch 'main' into update/ci/prepare-test-invoke-pip-for-queue 2023-02-09 08:48:06 +01:00
blessedcoolant
e2c392631a build (esrgan-denoise-str) 2023-02-09 20:21:22 +13:00
blessedcoolant
4a1b4d63ef Change denoise_str default to 0.75 2023-02-09 20:21:09 +13:00
blessedcoolant
83ecda977c Add frontend UI for denoise_str for ESRGAN 2023-02-09 20:19:25 +13:00
blessedcoolant
9601febef8 Add denoise_str to ESRGAN - frontend server 2023-02-09 20:16:47 +13:00
blessedcoolant
0503680efa Change denoise_str to an arg instead of a class variable 2023-02-09 20:16:23 +13:00
mauwii
57ccec1df3 remove metadata-to-summary step since it used a secret
- This PR will also close #2593
2023-02-09 07:24:28 +01:00
Matthias Wild
22f3634481 Merge branch 'main' into update/docker/include-dockerhub 2023-02-09 07:18:45 +01:00
blessedcoolant
5590c73af2 Prettified Frontend 2023-02-09 19:16:36 +13:00
coreco
1f76b30e54 adding support for ESRGAN denoising strength, which allows for improved detail retention when upscaling photorealistic faces 2023-02-08 22:36:35 -06:00
mauwii
8bd04654c7 remove Trash folder if existing 2023-02-09 04:00:51 +01:00
mauwii
0dce3188cc make DOCKERHUB_USERNAME a secret 2023-02-09 03:35:16 +01:00
mauwii
106c7aa956 bindmount outputs directory to ./docker/outputs 2023-02-09 02:48:12 +01:00
mauwii
b04f199035 revert caching to main; cache to ref_name 2023-02-09 01:33:08 +01:00
mauwii
a2b992dfd1 always cache-to main 2023-02-09 01:25:32 +01:00
mauwii
745e253a78 fix condition of Docker Hub Description step 2023-02-09 01:05:08 +01:00
mauwii
2ea551d37d create user home, expose port 2023-02-09 00:56:09 +01:00
mauwii
8d1481ca10 cache from main and ref_name 2023-02-09 00:28:04 +01:00
mauwii
307e7e00c2 only push if refs/heads/main or refs/tags/* 2023-02-09 00:15:46 +01:00
mauwii
c3ad1c8a9f remove long sha from container tags 2023-02-09 00:06:09 +01:00
mauwii
05d51d7b5b re-use --link, lock pip cache 2023-02-08 23:45:15 +01:00
mauwii
09f69a4d28 Add output when activating venv 2023-02-08 23:39:28 +01:00
mauwii
a338af17c8 Initialize PIP_CACHE_DIR after setting the env 2023-02-08 23:17:15 +01:00
Matthias Wild
bc82fc0cdd Merge branch 'invoke-ai:main' into main 2023-02-08 22:46:16 +01:00
Saifeddine
418a3d6e41 Merge branch 'main' of https://github.com/ParisNeo/ArtBot 2023-02-08 21:59:58 +01:00
Saifeddine
fbcc52ec3d upgraded Arabic localization 2023-02-08 21:59:53 +01:00
Saifeddine ALOUI
47e89f4ba1 Merge branch 'invoke-ai:main' into main 2023-02-08 21:59:27 +01:00
mauwii
888d3ae968 cleanup Dockerfile 2023-02-08 21:54:06 +01:00
mauwii
a28120abdd small improvements to env.sh 2023-02-08 21:53:57 +01:00
mauwii
4493d83aea update docker-scripts instructions 2023-02-08 21:35:19 +01:00
mauwii
eff0fb9a69 add merge_group trigger to test-invoke-pip.yml 2023-02-08 21:23:13 +01:00
mauwii
5bb0f9bedc update Dockerfile - more simple user creation 2023-02-08 19:57:41 +01:00
mauwii
bf812e6493 hotfix - use context github.ref_name for cache 2023-02-08 11:01:55 +01:00
Matthias Wild
a3da12d867 Merge branch 'invoke-ai:main' into main 2023-02-08 10:22:48 +01:00
Matthias Wild
6b4a06c3fc Merge branch 'invoke-ai:main' into main 2023-02-08 00:25:49 +01:00
mauwii
3833b28132 Create a new user for the container runtime
without root permission
2023-02-07 23:46:43 +01:00
Matthias Wild
e8f9ab82ed Merge branch 'invoke-ai:main' into main 2023-02-07 18:48:47 +01:00
psychedelicious
6ab364b16a build frontend 2023-02-07 17:06:47 +01:00
psychedelicious
a4dc11addc switch to @vitejs/plugin-react-swc 2023-02-07 17:06:47 +01:00
psychedelicious
0372702eb4 remove unneeded polyfill 2023-02-07 17:06:47 +01:00
psychedelicious
aa8eeea478 update app build configuration 2023-02-07 17:06:47 +01:00
blessedcoolant
e54ecc4c37 build (vite-4-code-quality) 2023-02-07 17:06:47 +01:00
blessedcoolant
4a12c76097 Remove build-dev 2023-02-07 17:06:47 +01:00
blessedcoolant
be72faf78e Upgrade to Vite 4 2023-02-07 17:06:46 +01:00
blessedcoolant
28d44d80ed Rebase Fix - ModelSelect 2023-02-07 17:06:46 +01:00
psychedelicious
9008d9996f builds frontend 2023-02-07 17:06:46 +01:00
psychedelicious
be2a9b78bb fixes rebase issues 2023-02-07 17:06:46 +01:00
Ryan Cao
70003ee5b1 feat: add copy image in share menu 2023-02-07 17:06:46 +01:00
psychedelicious
45a5ccba84 Updates code quality tooling and formats codebase
- `eslint` and `prettier` configs
- `husky` to format and lint via pre-commit hook
- `babel-plugin-transform-imports` to treeshake `lodash` and other packages if needed

Lints and formats codebase.
2023-02-07 17:06:46 +01:00
psychedelicious
f80a64a0f4 Reorganises internal state
`options` slice was huge and managed a mix of generation parameters and general app settings. It has been split up:

- Generation parameters are now in `generationSlice`.
- Postprocessing parameters are now in `postprocessingSlice`
- UI related things are now in `uiSlice`

There is probably more to be done; for example, `gallerySlice` should perhaps only manage internal gallery state, not whether the gallery is displayed.

Full-slice selectors have been made for each slice.

Other organisational tweaks.
2023-02-07 17:06:46 +01:00
Lincoln Stein
511df2963b remove debugging statement 2023-02-07 17:06:46 +01:00
Lincoln Stein
f92f62a91b enhance model_manager support for converting inpainting ckpt files
Previously conversions of .ckpt and .safetensors files to diffusers
models were failing with channel mismatch errors. This is corrected
with this PR.

- The model_manager convert_and_import() method now accepts the path
  to the checkpoint file's configuration file, using the parameter
  `original_config_file`. For inpainting files this should be set to
  the full path to `v1-inpainting-inference.yaml`.

- If no configuration file is provided in the call, then the presence
  of an inpainting file will be inferred at the
   `ldm.ckpt_to_diffuser.convert_ckpt_to_diffuser()` level by looking
  for the string "inpaint" in the path. AUTO1111 does something
  similar to this, but it is brittle and not recommended.

- This PR also changes the model manager model_names() method to return
  the model names in case folded sort order.
2023-02-07 17:06:45 +01:00
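
Two of these behaviors are compact enough to sketch (illustrative code, not the model manager's actual implementation):

```python
from typing import Optional

def original_config_for(ckpt_path: str, original_config_file: Optional[str]) -> str:
    # Prefer an explicitly supplied config file; otherwise fall back to
    # the brittle "inpaint"-in-path inference described above.
    if original_config_file:
        return original_config_file
    if "inpaint" in ckpt_path.lower():
        return "v1-inpainting-inference.yaml"
    return "v1-inference.yaml"

# Case-folded sort order, as model_names() now returns:
names = sorted(["SDXL", "Fantasy", "analog", "fantasy-v2"], key=str.casefold)
# -> ['analog', 'Fantasy', 'fantasy-v2', 'SDXL']
```
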
mauwii
7f41893da4 set scope for caches 2023-02-07 09:27:20 +01:00
mauwii
42da4f57c2 update .dockerignore 2023-02-07 09:27:20 +01:00
mauwii
c2e11dfe83 update build-container.yml
- add long sha tag
- update cache-from
Dockerfile:
- re-use `apt-get update`
env.sh/build.sh:
- rename platform to lowercase
2023-02-07 09:27:20 +01:00
mauwii
17e1930229 remove CONTAINER_FLAVOR build arg
also disable currently unused PIP_PACKAGE build arg
will start using it when problems with XFORMERS are sorted out
2023-02-07 09:27:20 +01:00
mauwii
bde94347d3 don't use --link in COPY 2023-02-07 09:27:20 +01:00
mauwii
b1612afff4 update .dockerignore 2023-02-07 09:27:20 +01:00
mauwii
1d10d952b2 use cleartext DOCKERHUB_USERNAME 2023-02-07 09:27:20 +01:00
mauwii
9150f9ef3c move LABEL to top 2023-02-07 09:27:20 +01:00
mauwii
7bc0f7cc6c update Docker Hub description 2023-02-07 09:27:20 +01:00
mauwii
c52d11b24c optionally push to DockerHub 2023-02-07 09:27:20 +01:00
mauwii
59486615dd update build-container.yml 2023-02-07 09:27:20 +01:00
mauwii
f0212cd361 update Dockerfile 2023-02-07 09:27:20 +01:00
mauwii
ee4cb5fdc9 add id to Build container 2023-02-07 09:27:20 +01:00
mauwii
75b919237b update cache-from 2023-02-07 09:27:20 +01:00
mauwii
07a9062e1f update .dockerignore and scripts 2023-02-07 09:27:20 +01:00
mauwii
cdb3e18b80 add flavor to pip cache id
to prevent cache invalidation
2023-02-07 09:27:20 +01:00
Saifeddine
01eb93d664 Added Arabic Localisation 2023-02-07 00:42:09 +01:00
Saifeddine
89f69c2d94 Merge branch 'main' of https://github.com/ParisNeo/ArtBot 2023-02-07 00:29:33 +01:00
Saifeddine
dc6f6fcab7 Added arabic locale files 2023-02-07 00:29:30 +01:00
Saifeddine ALOUI
6ca177e462 Added French localization 2023-02-04 09:54:30 +01:00
590 changed files with 220,600 additions and 31,930 deletions


@@ -3,21 +3,23 @@
!invokeai
!ldm
!pyproject.toml
!README.md
# Guard against pulling in any models that might exist in the directory tree
**/*.pt*
**/*.ckpt
# ignore frontend but whitelist dist
invokeai/frontend/**
!invokeai/frontend/dist
invokeai/frontend/
!invokeai/frontend/dist/
# ignore invokeai/assets but whitelist invokeai/assets/web
invokeai/assets
!invokeai/assets/web
invokeai/assets/
!invokeai/assets/web/
# ignore python cache
**/__pycache__
# Byte-compiled / optimized / DLL files
**/__pycache__/
**/*.py[cod]
**/*.egg-info
# Distribution / packaging
*.egg-info/
*.egg


@@ -1,5 +1,8 @@
root = true
# All files
[*]
max_line_length = 80
charset = utf-8
end_of_line = lf
indent_size = 2
@@ -10,3 +13,18 @@ trim_trailing_whitespace = true
# Python
[*.py]
indent_size = 4
max_line_length = 120
# css
[*.css]
indent_size = 4
# flake8
[.flake8]
indent_size = 4
# Markdown MkDocs
[docs/**/*.md]
max_line_length = 80
indent_size = 4
indent_style = unset

.flake8 (new file)

@@ -0,0 +1,37 @@
[flake8]
max-line-length = 120
extend-ignore =
# See https://github.com/PyCQA/pycodestyle/issues/373
E203,
# use Bugbear's B950 instead
E501,
# from black repo https://github.com/psf/black/blob/main/.flake8
E266, W503, B907
extend-select =
# Bugbear line length
B950
extend-exclude =
scripts/orig_scripts/*
ldm/models/*
ldm/modules/*
ldm/data/*
ldm/generate.py
ldm/util.py
ldm/simplet2i.py
per-file-ignores =
# B950 line too long
# W605 invalid escape sequence
# F841 assigned to but never used
# F401 imported but unused
tests/test_prompt_parser.py: B950, W605, F401
tests/test_textual_inversion.py: F841, B950
# B023 Function definition does not bind loop variable
scripts/legacy_api.py: F401, B950, B023, F841
ldm/invoke/__init__.py: F401
# B010 Do not call setattr with a constant attribute value
ldm/invoke/server_legacy.py: B010
# =====================
# flake-quote settings:
# =====================
# Set this to match black style:
inline-quotes = double

.github/CODEOWNERS

@@ -1,18 +1,18 @@
# continuous integration
/.github/workflows/ @mauwii
/.github/workflows/ @mauwii @lstein @blessedcoolant
# documentation
/docs/ @lstein @mauwii @tildebyte
mkdocs.yml @lstein @mauwii
/docs/ @lstein @mauwii @blessedcoolant
mkdocs.yml @mauwii @lstein
# installation and configuration
/pyproject.toml @mauwii @lstein @ebr
/docker/ @mauwii
/scripts/ @ebr @lstein
/installer/ @ebr @lstein @tildebyte
/scripts/ @ebr @lstein @blessedcoolant
/installer/ @ebr @lstein
ldm/invoke/config @lstein @ebr
invokeai/assets @lstein @ebr
invokeai/configs @lstein @ebr
invokeai/assets @lstein @blessedcoolant
invokeai/configs @lstein @ebr @blessedcoolant
/ldm/invoke/_version.py @lstein @blessedcoolant
# web ui
@@ -20,31 +20,42 @@ invokeai/configs @lstein @ebr
/invokeai/backend @blessedcoolant @psychedelicious
# generation and model management
/ldm/*.py @lstein
/ldm/*.py @lstein @blessedcoolant
/ldm/generate.py @lstein @keturn
/ldm/invoke/args.py @lstein @blessedcoolant
/ldm/invoke/ckpt* @lstein
/ldm/invoke/ckpt_generator @lstein
/ldm/invoke/CLI.py @lstein
/ldm/invoke/config @lstein @ebr @mauwii
/ldm/invoke/ckpt* @lstein @blessedcoolant
/ldm/invoke/ckpt_generator @lstein @blessedcoolant
/ldm/invoke/CLI.py @lstein @blessedcoolant
/ldm/invoke/config @lstein @ebr @mauwii @blessedcoolant
/ldm/invoke/generator @keturn @damian0815
/ldm/invoke/globals.py @lstein @blessedcoolant
/ldm/invoke/merge_diffusers.py @lstein
/ldm/invoke/globals.py @lstein @blessedcoolant
/ldm/invoke/merge_diffusers.py @lstein @blessedcoolant
/ldm/invoke/model_manager.py @lstein @blessedcoolant
/ldm/invoke/txt2mask.py @lstein
/ldm/invoke/patchmatch.py @Kyle0654
/ldm/invoke/txt2mask.py @lstein @blessedcoolant
/ldm/invoke/patchmatch.py @Kyle0654 @lstein
/ldm/invoke/restoration @lstein @blessedcoolant
# attention, textual inversion, model configuration
/ldm/models @damian0815 @keturn
/ldm/modules @damian0815 @keturn
/ldm/models @damian0815 @keturn @blessedcoolant
/ldm/modules/textual_inversion_manager.py @lstein @blessedcoolant
/ldm/modules/attention.py @damian0815 @keturn
/ldm/modules/diffusionmodules @damian0815 @keturn
/ldm/modules/distributions @damian0815 @keturn
/ldm/modules/ema.py @damian0815 @keturn
/ldm/modules/embedding_manager.py @lstein
/ldm/modules/encoders @damian0815 @keturn
/ldm/modules/image_degradation @damian0815 @keturn
/ldm/modules/losses @damian0815 @keturn
/ldm/modules/x_transformer.py @damian0815 @keturn
# Nodes
apps/ @Kyle0654
apps/ @Kyle0654 @jpphoto
# legacy REST API
# is CapableWeb still engaged?
/ldm/invoke/pngwriter.py @CapableWeb
/ldm/invoke/server_legacy.py @CapableWeb
/scripts/legacy_api.py @CapableWeb
/tests/legacy_tests.sh @CapableWeb
# these are dead code
#/ldm/invoke/pngwriter.py @CapableWeb
#/ldm/invoke/server_legacy.py @CapableWeb
#/scripts/legacy_api.py @CapableWeb
#/tests/legacy_tests.sh @CapableWeb


@@ -3,9 +3,19 @@ on:
push:
branches:
- 'main'
- 'update/ci/*'
- 'update/ci/docker/*'
- 'update/docker/*'
paths:
- 'pyproject.toml'
- 'ldm/**'
- 'invokeai/backend/**'
- 'invokeai/configs/**'
- 'invokeai/frontend/dist/**'
- 'docker/Dockerfile'
tags:
- 'v*.*.*'
workflow_dispatch:
jobs:
docker:
@@ -20,18 +30,15 @@ jobs:
include:
- flavor: amd
pip-extra-index-url: 'https://download.pytorch.org/whl/rocm5.2'
dockerfile: docker/Dockerfile
platforms: linux/amd64,linux/arm64
- flavor: cuda
pip-extra-index-url: ''
dockerfile: docker/Dockerfile
platforms: linux/amd64,linux/arm64
- flavor: cpu
pip-extra-index-url: 'https://download.pytorch.org/whl/cpu'
dockerfile: docker/Dockerfile
platforms: linux/amd64,linux/arm64
runs-on: ubuntu-latest
name: ${{ matrix.flavor }}
env:
PLATFORMS: 'linux/amd64,linux/arm64'
DOCKERFILE: 'docker/Dockerfile'
steps:
- name: Checkout
uses: actions/checkout@v3
@@ -41,7 +48,9 @@ jobs:
uses: docker/metadata-action@v4
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
images: ghcr.io/${{ github.repository }}
images: |
ghcr.io/${{ github.repository }}
${{ vars.DOCKERHUB_REPOSITORY }}
tags: |
type=ref,event=branch
type=ref,event=tag
@@ -52,13 +61,14 @@ jobs:
flavor: |
latest=${{ matrix.flavor == 'cuda' && github.ref == 'refs/heads/main' }}
suffix=-${{ matrix.flavor }},onlatest=false
- name: Set up QEMU
uses: docker/setup-qemu-action@v2
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
with:
platforms: ${{ matrix.platforms }}
platforms: ${{ env.PLATFORMS }}
- name: Login to GitHub Container Registry
if: github.event_name != 'pull_request'
@@ -68,25 +78,34 @@ jobs:
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Login to Docker Hub
if: github.event_name != 'pull_request' && vars.DOCKERHUB_REPOSITORY != ''
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Build container
id: docker_build
uses: docker/build-push-action@v4
with:
context: .
file: ${{ matrix.dockerfile }}
platforms: ${{ matrix.platforms }}
push: ${{ github.event_name != 'pull_request' }}
file: ${{ env.DOCKERFILE }}
platforms: ${{ env.PLATFORMS }}
push: ${{ github.ref == 'refs/heads/main' || github.ref == 'refs/tags/*' }}
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
build-args: PIP_EXTRA_INDEX_URL=${{ matrix.pip-extra-index-url }}
cache-from: type=gha
cache-to: type=gha,mode=max
cache-from: |
type=gha,scope=${{ github.ref_name }}-${{ matrix.flavor }}
type=gha,scope=main-${{ matrix.flavor }}
cache-to: type=gha,mode=max,scope=${{ github.ref_name }}-${{ matrix.flavor }}
- name: Output image, digest and metadata to summary
run: |
{
echo imageid: "${{ steps.docker_build.outputs.imageid }}"
echo digest: "${{ steps.docker_build.outputs.digest }}"
echo labels: "${{ steps.meta.outputs.labels }}"
echo tags: "${{ steps.meta.outputs.tags }}"
echo version: "${{ steps.meta.outputs.version }}"
} >> "$GITHUB_STEP_SUMMARY"
- name: Docker Hub Description
if: github.ref == 'refs/heads/main' || github.ref == 'refs/tags/*' && vars.DOCKERHUB_REPOSITORY != ''
uses: peter-evans/dockerhub-description@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
repository: ${{ vars.DOCKERHUB_REPOSITORY }}
short-description: ${{ github.event.repository.description }}


@@ -9,6 +9,10 @@ jobs:
mkdocs-material:
if: github.event.pull_request.draft == false
runs-on: ubuntu-latest
env:
REPO_URL: '${{ github.server_url }}/${{ github.repository }}'
REPO_NAME: '${{ github.repository }}'
SITE_URL: 'https://${{ github.repository_owner }}.github.io/InvokeAI'
steps:
- name: checkout sources
uses: actions/checkout@v3
@@ -19,11 +23,15 @@ jobs:
uses: actions/setup-python@v4
with:
python-version: '3.10'
cache: pip
cache-dependency-path: pyproject.toml
- name: install requirements
env:
PIP_USE_PEP517: 1
run: |
python -m \
pip install -r docs/requirements-mkdocs.txt
pip install ".[docs]"
- name: confirm buildability
run: |


@@ -28,7 +28,7 @@ jobs:
run: twine check dist/*
- name: check PyPI versions
if: github.ref == 'refs/heads/main'
if: github.ref == 'refs/heads/main' || github.ref == 'refs/heads/v2.3'
run: |
pip install --upgrade requests
python -c "\


@@ -0,0 +1,67 @@
name: Test invoke.py pip
on:
pull_request:
paths-ignore:
- 'pyproject.toml'
- 'ldm/**'
- 'invokeai/backend/**'
- 'invokeai/configs/**'
- 'invokeai/frontend/dist/**'
merge_group:
workflow_dispatch:
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
matrix:
if: github.event.pull_request.draft == false
strategy:
matrix:
python-version:
# - '3.9'
- '3.10'
pytorch:
# - linux-cuda-11_6
- linux-cuda-11_7
- linux-rocm-5_2
- linux-cpu
- macos-default
- windows-cpu
# - windows-cuda-11_6
# - windows-cuda-11_7
include:
# - pytorch: linux-cuda-11_6
# os: ubuntu-22.04
# extra-index-url: 'https://download.pytorch.org/whl/cu116'
# github-env: $GITHUB_ENV
- pytorch: linux-cuda-11_7
os: ubuntu-22.04
github-env: $GITHUB_ENV
- pytorch: linux-rocm-5_2
os: ubuntu-22.04
extra-index-url: 'https://download.pytorch.org/whl/rocm5.2'
github-env: $GITHUB_ENV
- pytorch: linux-cpu
os: ubuntu-22.04
extra-index-url: 'https://download.pytorch.org/whl/cpu'
github-env: $GITHUB_ENV
- pytorch: macos-default
os: macOS-12
github-env: $GITHUB_ENV
- pytorch: windows-cpu
os: windows-2022
github-env: $env:GITHUB_ENV
# - pytorch: windows-cuda-11_6
# os: windows-2022
# extra-index-url: 'https://download.pytorch.org/whl/cu116'
# github-env: $env:GITHUB_ENV
# - pytorch: windows-cuda-11_7
# os: windows-2022
# extra-index-url: 'https://download.pytorch.org/whl/cu117'
# github-env: $env:GITHUB_ENV
name: ${{ matrix.pytorch }} on ${{ matrix.python-version }}
runs-on: ${{ matrix.os }}
steps:
- run: 'echo "No build required"'


@@ -3,11 +3,24 @@ on:
push:
branches:
- 'main'
paths:
- 'pyproject.toml'
- 'ldm/**'
- 'invokeai/backend/**'
- 'invokeai/configs/**'
- 'invokeai/frontend/dist/**'
pull_request:
paths:
- 'pyproject.toml'
- 'ldm/**'
- 'invokeai/backend/**'
- 'invokeai/configs/**'
- 'invokeai/frontend/dist/**'
types:
- 'ready_for_review'
- 'opened'
- 'synchronize'
merge_group:
workflow_dispatch:
concurrency:

.gitignore

@@ -1,4 +1,5 @@
# ignore default image save location and model symbolic link
.idea/
embeddings/
outputs/
models/ldm/stable-diffusion-v1/model.ckpt
@@ -232,4 +233,4 @@ installer/update.bat
installer/update.sh
# no longer stored in source directory
models
models

.pre-commit-config.yaml (new file)

@@ -0,0 +1,41 @@
# See https://pre-commit.com for more information
# See https://pre-commit.com/hooks.html for more hooks
repos:
- repo: https://github.com/psf/black
rev: 23.1.0
hooks:
- id: black
- repo: https://github.com/pycqa/isort
rev: 5.12.0
hooks:
- id: isort
- repo: https://github.com/PyCQA/flake8
rev: 6.0.0
hooks:
- id: flake8
additional_dependencies:
- flake8-black
- flake8-bugbear
- flake8-comprehensions
- flake8-simplify
- repo: https://github.com/pre-commit/mirrors-prettier
rev: 'v3.0.0-alpha.4'
hooks:
- id: prettier
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.4.0
hooks:
- id: check-added-large-files
- id: check-executables-have-shebangs
- id: check-shebang-scripts-are-executable
- id: check-merge-conflict
- id: check-symlinks
- id: check-toml
- id: end-of-file-fixer
- id: no-commit-to-branch
args: ['--branch', 'main']
- id: trailing-whitespace

.prettierignore (new file)

@@ -0,0 +1,14 @@
invokeai/frontend/.husky
invokeai/frontend/patches
# Ignore artifacts:
build
coverage
static
invokeai/frontend/dist
# Ignore all HTML files:
*.html
# Ignore deprecated docs
docs/installation/deprecated_documentation


@@ -1,9 +1,9 @@
endOfLine: lf
tabWidth: 2
useTabs: false
singleQuote: true
quoteProps: as-needed
embeddedLanguageFormatting: auto
endOfLine: lf
singleQuote: true
semi: true
trailingComma: es5
useTabs: false
overrides:
- files: '*.md'
options:
@@ -11,3 +11,9 @@ overrides:
printWidth: 80
parser: markdown
cursorOffset: -1
- files: docs/**/*.md
options:
tabWidth: 4
- files: 'invokeai/frontend/public/locales/*.json'
options:
tabWidth: 4

README.md

@@ -1,6 +1,6 @@
<div align="center">
![project logo](https://github.com/mauwii/InvokeAI/raw/main/docs/assets/invoke_ai_banner.png)
![project logo](https://github.com/invoke-ai/InvokeAI/raw/main/docs/assets/invoke_ai_banner.png)
# InvokeAI: A Stable Diffusion Toolkit
@@ -10,10 +10,10 @@
[![CI checks on main badge]][CI checks on main link] [![latest commit to main badge]][latest commit to main link]
[![github open issues badge]][github open issues link] [![github open prs badge]][github open prs link]
[![github open issues badge]][github open issues link] [![github open prs badge]][github open prs link] [![translation status badge]][translation status link]
[CI checks on main badge]: https://flat.badgen.net/github/checks/invoke-ai/InvokeAI/main?label=CI%20status%20on%20main&cache=900&icon=github
[CI checks on main link]: https://github.com/invoke-ai/InvokeAI/actions/workflows/test-invoke-conda.yml
[CI checks on main link]:https://github.com/invoke-ai/InvokeAI/actions?query=branch%3Amain
[discord badge]: https://flat.badgen.net/discord/members/ZmtBAhwWhy?icon=discord
[discord link]: https://discord.gg/ZmtBAhwWhy
[github forks badge]: https://flat.badgen.net/github/forks/invoke-ai/InvokeAI?icon=github
@@ -28,12 +28,14 @@
[latest commit to main link]: https://github.com/invoke-ai/InvokeAI/commits/main
[latest release badge]: https://flat.badgen.net/github/release/invoke-ai/InvokeAI/development?icon=github
[latest release link]: https://github.com/invoke-ai/InvokeAI/releases
[translation status badge]: https://hosted.weblate.org/widgets/invokeai/-/svg-badge.svg
[translation status link]: https://hosted.weblate.org/engage/invokeai/
</div>
InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry leading Web Interface, interactive Command Line Interface, and also serves as the foundation for multiple commercial products.
**Quick links**: [[How to Install](#installation)] [<a href="https://discord.gg/ZmtBAhwWhy">Discord Server</a>] [<a href="https://invoke-ai.github.io/InvokeAI/">Documentation and Tutorials</a>] [<a href="https://github.com/invoke-ai/InvokeAI/">Code and Downloads</a>] [<a href="https://github.com/invoke-ai/InvokeAI/issues">Bug Reports</a>] [<a href="https://github.com/invoke-ai/InvokeAI/discussions">Discussion, Ideas & Q&A</a>]
**Quick links**: [[How to Install](https://invoke-ai.github.io/InvokeAI/#installation)] [<a href="https://discord.gg/ZmtBAhwWhy">Discord Server</a>] [<a href="https://invoke-ai.github.io/InvokeAI/">Documentation and Tutorials</a>] [<a href="https://github.com/invoke-ai/InvokeAI/">Code and Downloads</a>] [<a href="https://github.com/invoke-ai/InvokeAI/issues">Bug Reports</a>] [<a href="https://github.com/invoke-ai/InvokeAI/discussions">Discussion, Ideas & Q&A</a>]
_Note: InvokeAI is rapidly evolving. Please use the
[Issues](https://github.com/invoke-ai/InvokeAI/issues) tab to report bugs and make feature
@@ -41,38 +43,136 @@ requests. Be sure to use the provided templates. They will help us diagnose issu
<div align="center">
![canvas preview](https://github.com/mauwii/InvokeAI/raw/main/docs/assets/canvas_preview.png)
![canvas preview](https://github.com/invoke-ai/InvokeAI/raw/main/docs/assets/canvas_preview.png)
</div>
# Getting Started with InvokeAI
## Table of Contents
1. [Quick Start](#getting-started-with-invokeai)
2. [Installation](#detailed-installation-instructions)
3. [Hardware Requirements](#hardware-requirements)
4. [Features](#features)
5. [Latest Changes](#latest-changes)
6. [Troubleshooting](#troubleshooting)
7. [Contributing](#contributing)
8. [Contributors](#contributors)
9. [Support](#support)
10. [Further Reading](#further-reading)
## Getting Started with InvokeAI
For full installation and upgrade instructions, please see:
[InvokeAI Installation Overview](https://invoke-ai.github.io/InvokeAI/installation/)
### Automatic Installer (suggested for 1st time users)
1. Go to the bottom of the [Latest Release Page](https://github.com/invoke-ai/InvokeAI/releases/latest)
2. Download the .zip file for your OS (Windows/macOS/Linux).
3. Unzip the file.
4. If you are on Windows, double-click on the `install.bat` script. On macOS, open a Terminal window, drag the file `install.sh` from Finder into the Terminal, and press return. On Linux, run `install.sh`.
5. Wait a while, until it is done.
6. The folder where you ran the installer from will now be filled with lots of files. If you are on Windows, double-click on the `invoke.bat` file. On macOS, open a Terminal window, drag `invoke.sh` from the folder into the Terminal, and press return. On Linux, run `invoke.sh`
7. Press 2 to open the "browser-based UI", press enter/return, wait a minute or two for Stable Diffusion to start up, then open your browser and go to http://localhost:9090.
8. Type `banana sushi` in the box on the top left and click `Invoke`
4. If you are on Windows, double-click on the `install.bat` script. On
macOS, open a Terminal window, drag the file `install.sh` from Finder
into the Terminal, and press return. On Linux, run `install.sh`.
## Table of Contents
5. You'll be asked to confirm the location of the folder in which
to install InvokeAI and its image generation model files. Pick a
location with at least 15 GB of free disk space. More if you plan on
installing lots of models.
1. [Installation](#installation)
2. [Hardware Requirements](#hardware-requirements)
3. [Features](#features)
4. [Latest Changes](#latest-changes)
5. [Troubleshooting](#troubleshooting)
6. [Contributing](#contributing)
7. [Contributors](#contributors)
8. [Support](#support)
9. [Further Reading](#further-reading)
6. Wait while the installer does its thing. After installing the software,
the installer will launch a script that lets you configure InvokeAI and
select a set of starting image generation models.
## Installation
7. Find the folder that InvokeAI was installed into (it is not the
same as the unpacked zip file directory!) The default location of this
folder (if you didn't change it in step 5) is `~/invokeai` on
Linux/Mac systems, and `C:\Users\YourName\invokeai` on Windows. This directory will contain launcher scripts named `invoke.sh` and `invoke.bat`.
8. On Windows systems, double-click on the `invoke.bat` file. On
macOS, open a Terminal window, drag `invoke.sh` from the folder into
the Terminal, and press return. On Linux, run `invoke.sh`
9. Press 2 to open the "browser-based UI", press enter/return, wait a
minute or two for Stable Diffusion to start up, then open your browser
and go to http://localhost:9090.
10. Type `banana sushi` in the box on the top left and click `Invoke`
### Command-Line Installation (for users familiar with Terminals)
You must have Python 3.9 or 3.10 installed on your machine. Earlier or later versions are
not supported.
1. Open a command-line window on your machine. The PowerShell is recommended for Windows.
2. Create a directory to install InvokeAI into. You'll need at least 15 GB of free space:
```terminal
mkdir invokeai
```
3. Create a virtual environment named `.venv` inside this directory and activate it:
```terminal
cd invokeai
python -m venv .venv --prompt InvokeAI
```
4. Activate the virtual environment (do it every time you run InvokeAI)
_For Linux/Mac users:_
```sh
source .venv/bin/activate
```
_For Windows users:_
```ps
.venv\Scripts\activate
```
5. Install the InvokeAI module and its dependencies. Choose the command suited for your platform & GPU.
_For Windows/Linux with an NVIDIA GPU:_
```terminal
pip install InvokeAI[xformers] --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu117
```
_For Linux with an AMD GPU:_
```sh
pip install InvokeAI --use-pep517 --extra-index-url https://download.pytorch.org/whl/rocm5.4.2
```
_For Macintoshes, either Intel or M1/M2:_
```sh
pip install InvokeAI --use-pep517
```
6. Configure InvokeAI and install a starting set of image generation models (you only need to do this once):
```terminal
invokeai-configure
```
7. Launch the web server (do it every time you run InvokeAI):
```terminal
invokeai --web
```
8. Point your browser to http://localhost:9090 to bring up the web interface.
9. Type `banana sushi` in the box on the top left and click `Invoke`.
Be sure to activate the virtual environment each time before re-launching InvokeAI,
using `source .venv/bin/activate` or `.venv\Scripts\activate`.
### Detailed Installation Instructions
This fork is supported across Linux, Windows and Macintosh. Linux
users can use either an Nvidia-based card (with CUDA support) or an
@@ -80,13 +180,13 @@ AMD card (using the ROCm driver). For full installation and upgrade
instructions, please see:
[InvokeAI Installation Overview](https://invoke-ai.github.io/InvokeAI/installation/INSTALL_SOURCE/)
### Hardware Requirements
## Hardware Requirements
InvokeAI is supported across Linux, Windows and macOS. Linux
users can use either an Nvidia-based card (with CUDA support) or an
AMD card (using the ROCm driver).
#### System
### System
You will need one of the following:
@@ -98,11 +198,11 @@ We do not recommend the GTX 1650 or 1660 series video cards. They are
unable to run in half-precision mode and do not have sufficient VRAM
to render 512x512 images.
#### Memory
### Memory
- At least 12 GB Main Memory RAM.
#### Disk
### Disk
- At least 12 GB of free disk space for the machine learning model, Python, and all its dependencies.
@@ -152,13 +252,15 @@ Notes](https://github.com/invoke-ai/InvokeAI/releases) and the
Please check out our **[Q&A](https://invoke-ai.github.io/InvokeAI/help/TROUBLESHOOT/#faq)** to get solutions for common installation
problems and other issues.
# Contributing
## Contributing
Anyone who wishes to contribute to this project, whether documentation, features, bug fixes, code
cleanup, testing, or code reviews, is very much encouraged to do so.
To join, just raise your hand on the InvokeAI Discord server (#dev-chat) or the GitHub discussion board.
If you'd like to help with translation, please see our [translation guide](docs/other/TRANSLATION.md).
If you are unfamiliar with how
to contribute to GitHub projects, here is a
[Getting Started Guide](https://opensource.com/article/19/7/create-pull-request-github). A full set of contribution guidelines, along with templates, are in progress. You can **make your pull request against the "main" branch**.
@@ -175,6 +277,8 @@ This fork is a combined effort of various people from across the world.
[Check out the list of all these amazing people](https://invoke-ai.github.io/InvokeAI/other/CONTRIBUTORS/). We thank them for
their time, hard work and effort.
Thanks to [Weblate](https://weblate.org/) for generously providing translation services to this project.
### Support
For support, please use this repository's GitHub Issues tracking service, or join the Discord.


@@ -1,164 +0,0 @@
@echo off
@rem This script will install git (if not found on the PATH variable)
@rem using micromamba (an 8mb static-linked single-file binary, conda replacement).
@rem For users who already have git, this step will be skipped.
@rem Next, it'll download the project's source code.
@rem Then it will download a self-contained, standalone Python and unpack it.
@rem Finally, it'll create the Python virtual environment and preload the models.
@rem This enables a user to install this project without manually installing git or Python
@rem change to the script's directory
PUSHD "%~dp0"
set "no_cache_dir=--no-cache-dir"
if "%1" == "use-cache" (
set "no_cache_dir="
)
echo ***** Installing InvokeAI.. *****
@rem Config
set INSTALL_ENV_DIR=%cd%\installer_files\env
@rem https://mamba.readthedocs.io/en/latest/installation.html
set MICROMAMBA_DOWNLOAD_URL=https://github.com/cmdr2/stable-diffusion-ui/releases/download/v1.1/micromamba.exe
set RELEASE_URL=https://github.com/invoke-ai/InvokeAI
set RELEASE_SOURCEBALL=/archive/refs/heads/main.tar.gz
set PYTHON_BUILD_STANDALONE_URL=https://github.com/indygreg/python-build-standalone/releases/download
set PYTHON_BUILD_STANDALONE=20221002/cpython-3.10.7+20221002-x86_64-pc-windows-msvc-shared-install_only.tar.gz
set PACKAGES_TO_INSTALL=
call git --version >.tmp1 2>.tmp2
if "%ERRORLEVEL%" NEQ "0" set PACKAGES_TO_INSTALL=%PACKAGES_TO_INSTALL% git
@rem Cleanup
del /q .tmp1 .tmp2
@rem (if necessary) install git into a contained environment
if "%PACKAGES_TO_INSTALL%" NEQ "" (
@rem download micromamba
echo ***** Downloading micromamba from %MICROMAMBA_DOWNLOAD_URL% to micromamba.exe *****
call curl -L "%MICROMAMBA_DOWNLOAD_URL%" > micromamba.exe
@rem test the mamba binary
echo ***** Micromamba version: *****
call micromamba.exe --version
@rem create the installer env
if not exist "%INSTALL_ENV_DIR%" (
call micromamba.exe create -y --prefix "%INSTALL_ENV_DIR%"
)
echo ***** Packages to install:%PACKAGES_TO_INSTALL% *****
call micromamba.exe install -y --prefix "%INSTALL_ENV_DIR%" -c conda-forge %PACKAGES_TO_INSTALL%
if not exist "%INSTALL_ENV_DIR%" (
echo ----- There was a problem while installing "%PACKAGES_TO_INSTALL%" using micromamba. Cannot continue. -----
pause
exit /b
)
)
del /q micromamba.exe
@rem For 'git' only
set PATH=%INSTALL_ENV_DIR%\Library\bin;%PATH%
@rem Download/unpack/clean up InvokeAI release sourceball
set err_msg=----- InvokeAI source download failed -----
echo Trying to download "%RELEASE_URL%%RELEASE_SOURCEBALL%"
curl -L %RELEASE_URL%%RELEASE_SOURCEBALL% --output InvokeAI.tgz
if %errorlevel% neq 0 goto err_exit
set err_msg=----- InvokeAI source unpack failed -----
tar -zxf InvokeAI.tgz
if %errorlevel% neq 0 goto err_exit
del /q InvokeAI.tgz
set err_msg=----- InvokeAI source copy failed -----
cd InvokeAI-*
xcopy . .. /e /h
if %errorlevel% neq 0 goto err_exit
cd ..
@rem cleanup
for /f %%i in ('dir /b InvokeAI-*') do rd /s /q %%i
rd /s /q .dev_scripts .github docker-build tests
del /q requirements.in requirements-mkdocs.txt shell.nix
echo ***** Unpacked InvokeAI source *****
@rem Download/unpack/clean up python-build-standalone
set err_msg=----- Python download failed -----
curl -L %PYTHON_BUILD_STANDALONE_URL%/%PYTHON_BUILD_STANDALONE% --output python.tgz
if %errorlevel% neq 0 goto err_exit
set err_msg=----- Python unpack failed -----
tar -zxf python.tgz
if %errorlevel% neq 0 goto err_exit
del /q python.tgz
echo ***** Unpacked python-build-standalone *****
@rem create venv
set err_msg=----- problem creating venv -----
.\python\python -E -s -m venv .venv
if %errorlevel% neq 0 goto err_exit
call .venv\Scripts\activate.bat
echo ***** Created Python virtual environment *****
@rem Print venv's Python version
set err_msg=----- problem calling venv's python -----
echo We're running under
.venv\Scripts\python --version
if %errorlevel% neq 0 goto err_exit
set err_msg=----- pip update failed -----
.venv\Scripts\python -m pip install %no_cache_dir% --no-warn-script-location --upgrade pip wheel
if %errorlevel% neq 0 goto err_exit
echo ***** Updated pip and wheel *****
set err_msg=----- requirements file copy failed -----
copy binary_installer\py3.10-windows-x86_64-cuda-reqs.txt requirements.txt
if %errorlevel% neq 0 goto err_exit
set err_msg=----- main pip install failed -----
.venv\Scripts\python -m pip install %no_cache_dir% --no-warn-script-location -r requirements.txt
if %errorlevel% neq 0 goto err_exit
echo ***** Installed Python dependencies *****
set err_msg=----- InvokeAI setup failed -----
.venv\Scripts\python -m pip install %no_cache_dir% --no-warn-script-location -e .
if %errorlevel% neq 0 goto err_exit
copy binary_installer\invoke.bat.in .\invoke.bat
echo ***** Installed invoke launcher script ******
@rem more cleanup
rd /s /q binary_installer installer_files
@rem preload the models
call .venv\Scripts\python scripts\configure_invokeai.py
set err_msg=----- model download clone failed -----
if %errorlevel% neq 0 goto err_exit
deactivate
echo ***** Finished downloading models *****
echo All done! Execute the file invoke.bat in this directory to start InvokeAI
pause
exit
:err_exit
echo %err_msg%
pause
exit


@@ -1,235 +0,0 @@
#!/usr/bin/env bash
# ensure we're in the correct folder in case user's CWD is somewhere else
scriptdir=$(dirname "$0")
cd "$scriptdir"
set -euo pipefail
IFS=$'\n\t'
function _err_exit {
if test "$1" -ne 0
then
echo -e "Error code $1; Error caught was '$2'"
read -p "Press any key to exit..."
exit
fi
}
# This script will install git (if not found on the PATH variable)
# using micromamba (an 8mb static-linked single-file binary, conda replacement).
# For users who already have git, this step will be skipped.
# Next, it'll download the project's source code.
# Then it will download a self-contained, standalone Python and unpack it.
# Finally, it'll create the Python virtual environment and preload the models.
# This enables a user to install this project without manually installing git or Python
echo -e "\n***** Installing InvokeAI into $(pwd)... *****\n"
export no_cache_dir="--no-cache-dir"
if [ $# -ge 1 ]; then
if [ "$1" = "use-cache" ]; then
export no_cache_dir=""
fi
fi
OS_NAME=$(uname -s)
case "${OS_NAME}" in
Linux*) OS_NAME="linux";;
Darwin*) OS_NAME="darwin";;
*) echo -e "\n----- Unknown OS: $OS_NAME! This script runs only on Linux or macOS -----\n" && exit
esac
OS_ARCH=$(uname -m)
case "${OS_ARCH}" in
x86_64*) ;;
arm64*) ;;
*) echo -e "\n----- Unknown system architecture: $OS_ARCH! This script runs only on x86_64 or arm64 -----\n" && exit
esac
# https://mamba.readthedocs.io/en/latest/installation.html
MAMBA_OS_NAME=$OS_NAME
MAMBA_ARCH=$OS_ARCH
if [ "$OS_NAME" == "darwin" ]; then
MAMBA_OS_NAME="osx"
fi
if [ "$OS_ARCH" == "linux" ]; then
MAMBA_ARCH="aarch64"
fi
if [ "$OS_ARCH" == "x86_64" ]; then
MAMBA_ARCH="64"
fi
PY_ARCH=$OS_ARCH
if [ "$OS_ARCH" == "arm64" ]; then
PY_ARCH="aarch64"
fi
# Compute device ('cd' segment of reqs files) detect goes here
# This needs a ton of work
# Suggestions:
# - lspci
# - check $PATH for nvidia-smi, get CUDA/GPU version from output
# - Surely there's a similar utility for AMD?
CD="cuda"
if [ "$OS_NAME" == "darwin" ] && [ "$OS_ARCH" == "arm64" ]; then
CD="mps"
fi
# config
INSTALL_ENV_DIR="$(pwd)/installer_files/env"
MICROMAMBA_DOWNLOAD_URL="https://micro.mamba.pm/api/micromamba/${MAMBA_OS_NAME}-${MAMBA_ARCH}/latest"
RELEASE_URL=https://github.com/invoke-ai/InvokeAI
RELEASE_SOURCEBALL=/archive/refs/heads/main.tar.gz
PYTHON_BUILD_STANDALONE_URL=https://github.com/indygreg/python-build-standalone/releases/download
if [ "$OS_NAME" == "darwin" ]; then
PYTHON_BUILD_STANDALONE=20221002/cpython-3.10.7+20221002-${PY_ARCH}-apple-darwin-install_only.tar.gz
elif [ "$OS_NAME" == "linux" ]; then
PYTHON_BUILD_STANDALONE=20221002/cpython-3.10.7+20221002-${PY_ARCH}-unknown-linux-gnu-install_only.tar.gz
fi
echo "INSTALLING $RELEASE_SOURCEBALL FROM $RELEASE_URL"
PACKAGES_TO_INSTALL=""
if ! hash "git" &>/dev/null; then PACKAGES_TO_INSTALL="$PACKAGES_TO_INSTALL git"; fi
# (if necessary) install git and conda into a contained environment
if [ "$PACKAGES_TO_INSTALL" != "" ]; then
# download micromamba
echo -e "\n***** Downloading micromamba from $MICROMAMBA_DOWNLOAD_URL to micromamba *****\n"
curl -L "$MICROMAMBA_DOWNLOAD_URL" | tar -xvjO bin/micromamba > micromamba
chmod u+x ./micromamba
# test the mamba binary
echo -e "\n***** Micromamba version: *****\n"
./micromamba --version
# create the installer env
if [ ! -e "$INSTALL_ENV_DIR" ]; then
./micromamba create -y --prefix "$INSTALL_ENV_DIR"
fi
echo -e "\n***** Packages to install:$PACKAGES_TO_INSTALL *****\n"
./micromamba install -y --prefix "$INSTALL_ENV_DIR" -c conda-forge "$PACKAGES_TO_INSTALL"
if [ ! -e "$INSTALL_ENV_DIR" ]; then
echo -e "\n----- There was a problem while initializing micromamba. Cannot continue. -----\n"
exit
fi
fi
rm -f micromamba.exe
export PATH="$INSTALL_ENV_DIR/bin:$PATH"
# Download/unpack/clean up InvokeAI release sourceball
_err_msg="\n----- InvokeAI source download failed -----\n"
curl -L $RELEASE_URL/$RELEASE_SOURCEBALL --output InvokeAI.tgz
_err_exit $? _err_msg
_err_msg="\n----- InvokeAI source unpack failed -----\n"
tar -zxf InvokeAI.tgz
_err_exit $? _err_msg
rm -f InvokeAI.tgz
_err_msg="\n----- InvokeAI source copy failed -----\n"
cd InvokeAI-*
cp -r . ..
_err_exit $? _err_msg
cd ..
# cleanup
rm -rf InvokeAI-*/
rm -rf .dev_scripts/ .github/ docker-build/ tests/ requirements.in requirements-mkdocs.txt shell.nix
echo -e "\n***** Unpacked InvokeAI source *****\n"
# Download/unpack/clean up python-build-standalone
_err_msg="\n----- Python download failed -----\n"
curl -L $PYTHON_BUILD_STANDALONE_URL/$PYTHON_BUILD_STANDALONE --output python.tgz
_err_exit $? _err_msg
_err_msg="\n----- Python unpack failed -----\n"
tar -zxf python.tgz
_err_exit $? _err_msg
rm -f python.tgz
echo -e "\n***** Unpacked python-build-standalone *****\n"
# create venv
_err_msg="\n----- problem creating venv -----\n"
if [ "$OS_NAME" == "darwin" ]; then
# patch sysconfig so that extensions can build properly
# adapted from https://github.com/cashapp/hermit-packages/commit/fcba384663892f4d9cfb35e8639ff7a28166ee43
PYTHON_INSTALL_DIR="$(pwd)/python"
SYSCONFIG="$(echo python/lib/python*/_sysconfigdata_*.py)"
TMPFILE="$(mktemp)"
chmod +w "${SYSCONFIG}"
cp "${SYSCONFIG}" "${TMPFILE}"
sed "s,'/install,'${PYTHON_INSTALL_DIR},g" "${TMPFILE}" > "${SYSCONFIG}"
rm -f "${TMPFILE}"
fi
./python/bin/python3 -E -s -m venv .venv
_err_exit $? _err_msg
source .venv/bin/activate
echo -e "\n***** Created Python virtual environment *****\n"
# Print venv's Python version
_err_msg="\n----- problem calling venv's python -----\n"
echo -e "We're running under"
.venv/bin/python3 --version
_err_exit $? _err_msg
_err_msg="\n----- pip update failed -----\n"
.venv/bin/python3 -m pip install $no_cache_dir --no-warn-script-location --upgrade pip
_err_exit $? _err_msg
echo -e "\n***** Updated pip *****\n"
_err_msg="\n----- requirements file copy failed -----\n"
cp binary_installer/py3.10-${OS_NAME}-"${OS_ARCH}"-${CD}-reqs.txt requirements.txt
_err_exit $? _err_msg
_err_msg="\n----- main pip install failed -----\n"
.venv/bin/python3 -m pip install $no_cache_dir --no-warn-script-location -r requirements.txt
_err_exit $? _err_msg
echo -e "\n***** Installed Python dependencies *****\n"
_err_msg="\n----- InvokeAI setup failed -----\n"
.venv/bin/python3 -m pip install $no_cache_dir --no-warn-script-location -e .
_err_exit $? _err_msg
echo -e "\n***** Installed InvokeAI *****\n"
cp binary_installer/invoke.sh.in ./invoke.sh
chmod a+rx ./invoke.sh
echo -e "\n***** Installed invoke launcher script ******\n"
# more cleanup
rm -rf binary_installer/ installer_files/
# preload the models
.venv/bin/python3 scripts/configure_invokeai.py
_err_msg="\n----- model download clone failed -----\n"
_err_exit $? _err_msg
deactivate
echo -e "\n***** Finished downloading models *****\n"
echo "All done! Run the command"
echo " $scriptdir/invoke.sh"
echo "to start InvokeAI."
read -p "Press any key to exit..."
exit


@@ -1,36 +0,0 @@
@echo off
PUSHD "%~dp0"
call .venv\Scripts\activate.bat
echo Do you want to generate images using the
echo 1. command-line
echo 2. browser-based UI
echo OR
echo 3. open the developer console
set /p choice="Please enter 1, 2 or 3: "
if /i "%choice%" == "1" (
echo Starting the InvokeAI command-line.
.venv\Scripts\python scripts\invoke.py %*
) else if /i "%choice%" == "2" (
echo Starting the InvokeAI browser-based UI.
.venv\Scripts\python scripts\invoke.py --web %*
) else if /i "%choice%" == "3" (
echo Developer Console
echo Python command is:
where python
echo Python version is:
python --version
echo *************************
echo You are now in the system shell, with the local InvokeAI Python virtual environment activated,
echo so that you can troubleshoot this InvokeAI installation as necessary.
echo *************************
echo *** Type `exit` to quit this shell and deactivate the Python virtual environment ***
call cmd /k
) else (
echo Invalid selection
pause
exit /b
)
deactivate


@@ -1,46 +0,0 @@
#!/usr/bin/env sh
set -eu
. .venv/bin/activate
# set required env var for torch on mac MPS
if [ "$(uname -s)" == "Darwin" ]; then
export PYTORCH_ENABLE_MPS_FALLBACK=1
fi
echo "Do you want to generate images using the"
echo "1. command-line"
echo "2. browser-based UI"
echo "OR"
echo "3. open the developer console"
echo "Please enter 1, 2, or 3:"
read choice
case $choice in
1)
printf "\nStarting the InvokeAI command-line..\n";
.venv/bin/python scripts/invoke.py $*;
;;
2)
printf "\nStarting the InvokeAI browser-based UI..\n";
.venv/bin/python scripts/invoke.py --web $*;
;;
3)
printf "\nDeveloper Console:\n";
printf "Python command is:\n\t";
which python;
printf "Python version is:\n\t";
python --version;
echo "*************************"
echo "You are now in your user shell ($SHELL) with the local InvokeAI Python virtual environment activated,";
echo "so that you can troubleshoot this InvokeAI installation as necessary.";
printf "*************************\n"
echo "*** Type \`exit\` to quit this shell and deactivate the Python virtual environment *** ";
/usr/bin/env "$SHELL";
;;
*)
echo "Invalid selection";
exit
;;
esac

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -1,17 +0,0 @@
InvokeAI
Project homepage: https://github.com/invoke-ai/InvokeAI
Installation on Windows:
NOTE: You might need to enable Windows Long Paths. If you're not sure,
then you almost certainly need to. Simply double-click the 'WinLongPathsEnabled.reg'
file. Note that you will need to have admin privileges in order to
do this.
Please double-click the 'install.bat' file (while keeping it inside the invokeAI folder).
Installation on Linux and Mac:
Please open the terminal, and run './install.sh' (while keeping it inside the invokeAI folder).
After installation, please run the 'invoke.bat' file (on Windows) or 'invoke.sh'
file (on Linux/Mac) to start InvokeAI.


@@ -1,33 +0,0 @@
--prefer-binary
--extra-index-url https://download.pytorch.org/whl/torch_stable.html
--extra-index-url https://download.pytorch.org/whl/cu116
--trusted-host https://download.pytorch.org
accelerate~=0.15
albumentations
diffusers[torch]~=0.11
einops
eventlet
flask_cors
flask_socketio
flaskwebgui==1.0.3
getpass_asterisk
imageio-ffmpeg
pyreadline3
realesrgan
send2trash
streamlit
taming-transformers-rom1504
test-tube
torch-fidelity
torch==1.12.1 ; platform_system == 'Darwin'
torch==1.12.0+cu116 ; platform_system == 'Linux' or platform_system == 'Windows'
torchvision==0.13.1 ; platform_system == 'Darwin'
torchvision==0.13.0+cu116 ; platform_system == 'Linux' or platform_system == 'Windows'
transformers
picklescan
https://github.com/openai/CLIP/archive/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1.zip
https://github.com/invoke-ai/clipseg/archive/1f754751c85d7d4255fa681f4491ff5711c1c288.zip
https://github.com/invoke-ai/GFPGAN/archive/3f5d2397361199bc4a91c08bb7d80f04d7805615.zip ; platform_system=='Windows'
https://github.com/invoke-ai/GFPGAN/archive/c796277a1cf77954e5fc0b288d7062d162894248.zip ; platform_system=='Linux' or platform_system=='Darwin'
https://github.com/Birch-san/k-diffusion/archive/363386981fee88620709cf8f6f2eea167bd6cd74.zip
https://github.com/invoke-ai/PyPatchMatch/archive/129863937a8ab37f6bbcec327c994c0f932abdbc.zip


@@ -1,57 +1,63 @@
# syntax=docker/dockerfile:1
ARG PYTHON_VERSION=3.9
##################
## base image ##
##################
FROM python:${PYTHON_VERSION}-slim AS python-base
# prepare for buildkit cache
RUN rm -f /etc/apt/apt.conf.d/docker-clean
LABEL org.opencontainers.image.authors="mauwii@outlook.de"
# Install necesarry packages
# prepare for buildkit cache
RUN rm -f /etc/apt/apt.conf.d/docker-clean \
&& echo 'Binary::apt::APT::Keep-Downloaded-Packages "true";' >/etc/apt/apt.conf.d/keep-cache
# Install necessary packages
RUN \
--mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
apt-get update \
&& apt-get install \
-yqq \
&& apt-get install -y \
--no-install-recommends \
libgl1-mesa-glx=20.3.* \
libglib2.0-0=2.66.* \
libopencv-dev=4.5.* \
&& rm -rf /var/lib/apt/lists/*
libopencv-dev=4.5.*
# set working directory and path
# set working directory and env
ARG APPDIR=/usr/src
ARG APPNAME=InvokeAI
WORKDIR ${APPDIR}
ENV PATH=${APPDIR}/${APPNAME}/bin:$PATH
ENV PATH ${APPDIR}/${APPNAME}/bin:$PATH
# Keeps Python from generating .pyc files in the container
ENV PYTHONDONTWRITEBYTECODE 1
# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED 1
# don't fall back to legacy build system
ENV PIP_USE_PEP517=1
#######################
## build pyproject ##
#######################
FROM python-base AS pyproject-builder
ENV PIP_USE_PEP517=1
# prepare for buildkit cache
# Install dependencies
RUN \
--mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
apt-get update \
&& apt-get install -y \
--no-install-recommends \
build-essential=12.9 \
gcc=4:10.2.* \
python3-dev=3.9.*
# prepare pip for buildkit cache
ARG PIP_CACHE_DIR=/var/cache/buildkit/pip
ENV PIP_CACHE_DIR ${PIP_CACHE_DIR}
RUN mkdir -p ${PIP_CACHE_DIR}
# Install dependencies
RUN \
--mount=type=cache,target=${PIP_CACHE_DIR} \
--mount=type=cache,target=/var/cache/apt,sharing=locked \
apt-get update \
&& apt-get install \
-yqq \
--no-install-recommends \
build-essential=12.9 \
gcc=4:10.2.* \
python3-dev=3.9.* \
&& rm -rf /var/lib/apt/lists/*
# create virtual environment
RUN --mount=type=cache,target=${PIP_CACHE_DIR} \
RUN --mount=type=cache,target=${PIP_CACHE_DIR},sharing=locked \
python3 -m venv "${APPNAME}" \
--upgrade-deps
@@ -61,9 +67,8 @@ COPY --link . .
# install pyproject.toml
ARG PIP_EXTRA_INDEX_URL
ENV PIP_EXTRA_INDEX_URL ${PIP_EXTRA_INDEX_URL}
ARG PIP_PACKAGE=.
RUN --mount=type=cache,target=${PIP_CACHE_DIR} \
"${APPDIR}/${APPNAME}/bin/pip" install ${PIP_PACKAGE}
RUN --mount=type=cache,target=${PIP_CACHE_DIR},sharing=locked \
"${APPNAME}/bin/pip" install .
# build patchmatch
RUN python3 -c "from patchmatch import patch_match"
@@ -73,14 +78,26 @@ RUN python3 -c "from patchmatch import patch_match"
#####################
FROM python-base AS runtime
# setup environment
COPY --from=pyproject-builder --link ${APPDIR}/${APPNAME} ${APPDIR}/${APPNAME}
ENV INVOKEAI_ROOT=/data
ENV INVOKE_MODEL_RECONFIGURE="--yes --default_only"
# Create a new user
ARG UNAME=appuser
RUN useradd \
--no-log-init \
-m \
-U \
"${UNAME}"
# set Entrypoint and default CMD
# create volume directory
ARG VOLUME_DIR=/data
RUN mkdir -p "${VOLUME_DIR}" \
&& chown -R "${UNAME}" "${VOLUME_DIR}"
# setup runtime environment
USER ${UNAME}
COPY --chown=${UNAME} --from=pyproject-builder ${APPDIR}/${APPNAME} ${APPNAME}
ENV INVOKEAI_ROOT ${VOLUME_DIR}
ENV TRANSFORMERS_CACHE ${VOLUME_DIR}/.cache
ENV INVOKE_MODEL_RECONFIGURE "--yes --default_only"
EXPOSE 9090
ENTRYPOINT [ "invokeai" ]
CMD [ "--web", "--host=0.0.0.0" ]
VOLUME [ "/data" ]
LABEL org.opencontainers.image.authors="mauwii@outlook.de"
CMD [ "--web", "--host", "0.0.0.0", "--port", "9090" ]
VOLUME [ "${VOLUME_DIR}" ]


@@ -1,19 +1,24 @@
#!/usr/bin/env bash
set -e
# How to use: https://invoke-ai.github.io/InvokeAI/installation/INSTALL_DOCKER/#setup
# Some possible pip extra-index urls (cuda 11.7 is available without extra url):
# CUDA 11.6: https://download.pytorch.org/whl/cu116
# ROCm 5.2: https://download.pytorch.org/whl/rocm5.2
# CPU: https://download.pytorch.org/whl/cpu
# as found on https://pytorch.org/get-started/locally/
# If you want to build a specific flavor, set the CONTAINER_FLAVOR environment variable
# e.g. CONTAINER_FLAVOR=cpu ./build.sh
# Possible Values are:
# - cpu
# - cuda
# - rocm
# Don't forget to also set it when executing run.sh
# if it is not set, the script will try to detect the flavor by itself.
#
# Doc can be found here:
# https://invoke-ai.github.io/InvokeAI/installation/040_INSTALL_DOCKER/
SCRIPTDIR=$(dirname "$0")
SCRIPTDIR=$(dirname "${BASH_SOURCE[0]}")
cd "$SCRIPTDIR" || exit 1
source ./env.sh
DOCKERFILE=${INVOKE_DOCKERFILE:-Dockerfile}
DOCKERFILE=${INVOKE_DOCKERFILE:-./Dockerfile}
# print the settings
echo -e "You are using these values:\n"
@@ -21,23 +26,25 @@ echo -e "Dockerfile:\t\t${DOCKERFILE}"
echo -e "index-url:\t\t${PIP_EXTRA_INDEX_URL:-none}"
echo -e "Volumename:\t\t${VOLUMENAME}"
echo -e "Platform:\t\t${PLATFORM}"
echo -e "Registry:\t\t${CONTAINER_REGISTRY}"
echo -e "Repository:\t\t${CONTAINER_REPOSITORY}"
echo -e "Container Registry:\t${CONTAINER_REGISTRY}"
echo -e "Container Repository:\t${CONTAINER_REPOSITORY}"
echo -e "Container Tag:\t\t${CONTAINER_TAG}"
echo -e "Container Flavor:\t${CONTAINER_FLAVOR}"
echo -e "Container Image:\t${CONTAINER_IMAGE}\n"
# Create docker volume
if [[ -n "$(docker volume ls -f name="${VOLUMENAME}" -q)" ]]; then
echo -e "Volume already exists\n"
else
echo -n "createing docker volume "
echo -n "creating docker volume "
docker volume create "${VOLUMENAME}"
fi
# Build Container
DOCKER_BUILDKIT=1 docker build \
--platform="${PLATFORM}" \
--tag="${CONTAINER_IMAGE}" \
--platform="${PLATFORM:-linux/amd64}" \
--tag="${CONTAINER_IMAGE:-invokeai}" \
${CONTAINER_FLAVOR:+--build-arg="CONTAINER_FLAVOR=${CONTAINER_FLAVOR}"} \
${PIP_EXTRA_INDEX_URL:+--build-arg="PIP_EXTRA_INDEX_URL=${PIP_EXTRA_INDEX_URL}"} \
${PIP_PACKAGE:+--build-arg="PIP_PACKAGE=${PIP_PACKAGE}"} \
--file="${DOCKERFILE}" \


@@ -1,19 +1,31 @@
#!/usr/bin/env bash
# This file is used to set environment variables for the build.sh and run.sh scripts.
# Try to detect the container flavor if no PIP_EXTRA_INDEX_URL got specified
if [[ -z "$PIP_EXTRA_INDEX_URL" ]]; then
# Activate virtual environment if not already activated and exists
if [[ -z $VIRTUAL_ENV ]]; then
[[ -e "$(dirname "${BASH_SOURCE[0]}")/../.venv/bin/activate" ]] \
&& source "$(dirname "${BASH_SOURCE[0]}")/../.venv/bin/activate" \
&& echo "Activated virtual environment: $VIRTUAL_ENV"
fi
# Decide which container flavor to build if not specified
if [[ -z "$CONTAINER_FLAVOR" ]] && python -c "import torch" &>/dev/null; then
# Check for CUDA and ROCm
CUDA_AVAILABLE=$(python -c "import torch;print(torch.cuda.is_available())")
ROCM_AVAILABLE=$(python -c "import torch;print(torch.version.hip is not None)")
if [[ "$(uname -s)" != "Darwin" && "${CUDA_AVAILABLE}" == "True" ]]; then
if [[ "${CUDA_AVAILABLE}" == "True" ]]; then
CONTAINER_FLAVOR="cuda"
elif [[ "$(uname -s)" != "Darwin" && "${ROCM_AVAILABLE}" == "True" ]]; then
elif [[ "${ROCM_AVAILABLE}" == "True" ]]; then
CONTAINER_FLAVOR="rocm"
else
CONTAINER_FLAVOR="cpu"
fi
fi
# Set PIP_EXTRA_INDEX_URL based on container flavor
if [[ "$CONTAINER_FLAVOR" == "rocm" ]]; then
PIP_EXTRA_INDEX_URL="https://download.pytorch.org/whl/rocm"
@@ -26,9 +38,10 @@ fi
# Variables shared by build.sh and run.sh
REPOSITORY_NAME="${REPOSITORY_NAME-$(basename "$(git rev-parse --show-toplevel)")}"
VOLUMENAME="${VOLUMENAME-"${REPOSITORY_NAME,,}_data"}"
REPOSITORY_NAME="${REPOSITORY_NAME,,}"
VOLUMENAME="${VOLUMENAME-"${REPOSITORY_NAME}_data"}"
ARCH="${ARCH-$(uname -m)}"
PLATFORM="${PLATFORM-Linux/${ARCH}}"
PLATFORM="${PLATFORM-linux/${ARCH}}"
INVOKEAI_BRANCH="${INVOKEAI_BRANCH-$(git branch --show)}"
CONTAINER_REGISTRY="${CONTAINER_REGISTRY-"ghcr.io"}"
CONTAINER_REPOSITORY="${CONTAINER_REPOSITORY-"$(whoami)/${REPOSITORY_NAME}"}"


@@ -1,14 +1,16 @@
#!/usr/bin/env bash
set -e
# How to use: https://invoke-ai.github.io/InvokeAI/installation/INSTALL_DOCKER/#run-the-container
# IMPORTANT: You need to have a token on huggingface.co to be able to download the checkpoints!!!
# How to use: https://invoke-ai.github.io/InvokeAI/installation/040_INSTALL_DOCKER/
SCRIPTDIR=$(dirname "$0")
SCRIPTDIR=$(dirname "${BASH_SOURCE[0]}")
cd "$SCRIPTDIR" || exit 1
source ./env.sh
# Create outputs directory if it does not exist
[[ -d ./outputs ]] || mkdir ./outputs
echo -e "You are using these values:\n"
echo -e "Volumename:\t${VOLUMENAME}"
echo -e "Invokeai_tag:\t${CONTAINER_IMAGE}"
@@ -22,10 +24,18 @@ docker run \
--name="${REPOSITORY_NAME,,}" \
--hostname="${REPOSITORY_NAME,,}" \
--mount=source="${VOLUMENAME}",target=/data \
${MODELSPATH:+-u "$(id -u):$(id -g)"} \
--mount type=bind,source="$(pwd)"/outputs,target=/data/outputs \
${MODELSPATH:+--mount="type=bind,source=${MODELSPATH},target=/data/models"} \
${HUGGING_FACE_HUB_TOKEN:+--env="HUGGING_FACE_HUB_TOKEN=${HUGGING_FACE_HUB_TOKEN}"} \
--publish=9090:9090 \
--cap-add=sys_nice \
${GPU_FLAGS:+--gpus="${GPU_FLAGS}"} \
"${CONTAINER_IMAGE}" ${1:+$@}
"${CONTAINER_IMAGE}" ${@:+$@}
# Remove Trash folder
for f in outputs/.Trash*; do
if [ -e "$f" ]; then
rm -Rf "$f"
break
fi
done

docs/.markdownlint.jsonc Normal file

@@ -0,0 +1,5 @@
{
"MD046": false,
"MD007": false,
"MD030": false
}

(Binary image files changed: one updated from 20 KiB to 84 KiB; two new images added at 128 KiB and 114 KiB.)


@@ -214,6 +214,8 @@ Here are the invoke> command that apply to txt2img:
| `--variation <float>` | `-v<float>` | `0.0` | Add a bit of noise (0.0=none, 1.0=high) to the image in order to generate a series of variations. Usually used in combination with `-S<seed>` and `-n<int>` to generate a series of riffs on a starting image. See [Variations](./VARIATIONS.md). |
| `--with_variations <pattern>` | | `None` | Combine two or more variations. See [Variations](./VARIATIONS.md) for how to use this. |
| `--save_intermediates <n>` | | `None` | Save the image from every nth step into an "intermediates" folder inside the output directory |
| `--h_symmetry_time_pct <float>` | | `None` | Create symmetry along the X axis at the desired percent complete of the generation process. (Must be between 0.0 and 1.0; set to a very small number like 0.0001 for just after the first step of generation.) |
| `--v_symmetry_time_pct <float>` | | `None` | Create symmetry along the Y axis at the desired percent complete of the generation process. (Must be between 0.0 and 1.0; set to a very small number like 0.0001 for just after the first step of generation.) |
!!! note


@@ -40,7 +40,7 @@ for adj in adjectives:
print(f'a {adj} day -A{samp} -C{cg}')
```
It's output looks like this (abbreviated):
Its output looks like this (abbreviated):
```bash
a sunny day -Aklms -C7.5


@@ -154,8 +154,11 @@ training sets will converge with 2000-3000 steps.
This adjusts how many training images are processed simultaneously in
each step. Higher values will cause the training process to run more
quickly, but use more memory. The default size will run with GPUs with
as little as 12 GB.
quickly, but use more memory. The default size is selected based on
whether you have the `xformers` memory-efficient attention library
installed. If `xformers` is available, the batch size will be 8,
otherwise 3. These values were chosen to allow training to run with
GPUs with as little as 12 GB VRAM.
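If you are unsure whether `xformers` is available in your environment, a quick
check from the developer's console might look like this (a minimal sketch; it
only tests that the package imports from the active venv):
```sh
# succeeds silently when xformers is importable from the active venv
if python -c "import xformers" 2>/dev/null; then
    echo "xformers available: the default batch size will be 8"
else
    echo "xformers not found: the default batch size will be 3"
fi
```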
### Learning rate
@@ -172,8 +175,10 @@ learning rate to improve performance.
### Use xformers acceleration
This will activate XFormers memory-efficient attention. You need to
have XFormers installed for this to have an effect.
This will activate XFormers memory-efficient attention, which will
reduce memory requirements by half or more and allow you to select a
higher batch size. You need to have XFormers installed for this to
have an effect.
### Learning rate scheduler
@@ -250,6 +255,67 @@ invokeai-ti \
--only_save_embeds
```
## Using Distributed Training
If you have multiple GPUs on one machine, or a cluster of GPU-enabled
machines, you can activate distributed training. See the [HuggingFace
Accelerate pages](https://huggingface.co/docs/accelerate/index) for
full information, but the basic recipe is:
1. Enter the InvokeAI developer's console command line by selecting
option [8] from the `invoke.sh`/`invoke.bat` script.
2. Configure Accelerate using `accelerate config`:
```sh
accelerate config
```
This will guide you through the configuration process, including
specifying how many machines you will run training on and the number
of GPUs per machine.
You only need to do this once.
3. Launch training from the command line using `accelerate launch`. Be sure
that your current working directory is the InvokeAI root directory (usually
named `invokeai` in your home directory):
```sh
accelerate launch .venv/bin/invokeai-ti \
--model=stable-diffusion-1.5 \
--resolution=512 \
--learnable_property=object \
--initializer_token='*' \
--placeholder_token='<shraddha>' \
--train_data_dir=/home/lstein/invokeai/text-inversion-training-data/shraddha \
--output_dir=/home/lstein/invokeai/text-inversion-training/shraddha \
--scale_lr \
--train_batch_size=10 \
--gradient_accumulation_steps=4 \
--max_train_steps=2000 \
--learning_rate=0.0005 \
--lr_scheduler=constant \
--mixed_precision=fp16 \
--only_save_embeds
```
## Using Embeddings
After training completes, the resultant embeddings will be saved into your `$INVOKEAI_ROOT/embeddings/<trigger word>/learned_embeds.bin`.
These will be automatically loaded when you start InvokeAI.
Add the trigger word, surrounded by angle brackets, to use that embedding. For example, if your trigger word was `terence`, use `<terence>` in prompts. This is the same syntax used by the HuggingFace concepts library.
**Note:** `.pt` embeddings do not require the angle brackets.
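For example, a hypothetical prompt using a trigger word `terence` (the subject
and the switches are placeholders, not part of any real session):
```bash
invoke> portrait of <terence> as an oil painting -s50 -C7.5
```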
## Troubleshooting
### `Cannot load embedding for <trigger>. It was trained on a model with token dimension 1024, but the current model has token dimension 768`
Messages like this indicate you trained the embedding on a different base model than the currently selected one.
For example, in the error above, the training was done on SD2.1 (768x768) but it was used on SD1.5 (512x512).
## Reading
For more information on textual inversion, please see the following


@@ -2,62 +2,82 @@
title: Overview
---
Here you can find the documentation for InvokeAI's various features.
## The Basics
### * The [Web User Interface](WEB.md)
Guide to the Web interface. Also see the
[WebUI Hotkeys Reference Guide](WEBUIHOTKEYS.md)
### * The [Unified Canvas](UNIFIED_CANVAS.md)
Build complex scenes by combining and modifying multiple images in a
stepwise fashion. This feature combines img2img, inpainting and
outpainting in a single convenient digital artist-optimized user
interface.
### * The [Command Line Interface (CLI)](CLI.md)
Scriptable access to InvokeAI's features.
## Image Generation
### * [Prompt Engineering](PROMPTS.md)
Get the images you want with the InvokeAI prompt engineering language.
### * [Post-Processing](POSTPROCESS.md)
Restore mangled faces and make images larger with upscaling. Also see
the [Embiggen Upscaling Guide](EMBIGGEN.md).
### * The [Concepts Library](CONCEPTS.md)
Add custom subjects and styles using HuggingFace's repository of
embeddings.
### * [Image-to-Image Guide for the CLI](IMG2IMG.md)
Use a seed image to build new creations in the CLI.
### * [Inpainting Guide for the CLI](INPAINTING.md)
Selectively erase and replace portions of an existing image in the CLI.
### * [Outpainting Guide for the CLI](OUTPAINTING.md)
Extend the borders of the image with an "outcrop" function within the
CLI.
### * [Generating Variations](VARIATIONS.md)
Have an image you like and want to generate many more like it?
Variations are the ticket.
## Model Management
### * [Model Installation](../installation/050_INSTALLING_MODELS.md)
Learn how to import third-party models and switch among them. This
guide also covers optimizing models to load quickly.
### * [Merging Models](MODEL_MERGING.md)
Teach an old model new tricks. Merge 2-3 models together to create a
new model that combines characteristics of the originals.
### * [Textual Inversion](TEXTUAL_INVERSION.md)
Personalize models by adding your own style or subjects.
## Other Features
### * [The NSFW Checker](NSFW.md)
Prevent InvokeAI from displaying unwanted racy images.
### * [Miscellaneous](OTHER.md)
Run InvokeAI on Google Colab, generate images with repeating patterns,
batch process a file of prompts, increase the "creativity" of image
generation by adding initial noise, and more!


@@ -0,0 +1,4 @@
# :octicons-file-code-16: IDE-Settings
Here we will share settings for the IDEs used by our developers; maybe you can
find something interesting which will help to boost your development efficiency 🔥


@@ -0,0 +1,250 @@
---
title: Visual Studio Code
---
# :material-microsoft-visual-studio-code:Visual Studio Code
The workspace settings are stored in the project (repository) root and take
higher priority than your user settings.
This helps to have different settings for different projects, while the user
settings are used as default values if no workspace settings are provided.
## tasks.json
First we will create a task configuration that creates a virtual
environment and updates the base dependencies (pip, setuptools and wheel).
Into this venv we will then install pyproject.toml in editable mode with the
dev, docs and test dependencies.
```json title=".vscode/tasks.json"
{
// See https://go.microsoft.com/fwlink/?LinkId=733558
// for the documentation about the tasks.json format
"version": "2.0.0",
"tasks": [
{
"label": "Create virtual environment",
"detail": "Create .venv and upgrade pip, setuptools and wheel",
"command": "python3",
"args": [
"-m",
"venv",
".venv",
"--prompt",
"InvokeAI",
"--upgrade-deps"
],
"runOptions": {
"instanceLimit": 1,
"reevaluateOnRerun": true
},
"group": {
"kind": "build"
},
"presentation": {
"echo": true,
"reveal": "always",
"focus": false,
"panel": "shared",
"showReuseMessage": true,
"clear": false
}
},
{
"label": "build InvokeAI",
"detail": "Build pyproject.toml with extras dev, docs and test",
"command": "${workspaceFolder}/.venv/bin/python3",
"args": [
"-m",
"pip",
"install",
"--use-pep517",
"--editable",
".[dev,docs,test]"
],
"dependsOn": "Create virtual environment",
"dependsOrder": "sequence",
"group": {
"kind": "build",
"isDefault": true
},
"presentation": {
"echo": true,
"reveal": "always",
"focus": false,
"panel": "shared",
"showReuseMessage": true,
"clear": false
}
}
]
}
```
The fastest way to build InvokeAI now is ++cmd+shift+b++
## launch.json
This file is used to define debugger configurations, so that you can one-click
launch and monitor the application, set breakpoints to inspect specific states,
and so on.
```json title=".vscode/launch.json"
{
"version": "0.2.0",
"configurations": [
{
"name": "invokeai web",
"type": "python",
"request": "launch",
"program": ".venv/bin/invokeai",
"justMyCode": true
},
{
"name": "invokeai cli",
"type": "python",
"request": "launch",
"program": ".venv/bin/invokeai",
"justMyCode": true
},
{
"name": "mkdocs serve",
"type": "python",
"request": "launch",
"program": ".venv/bin/mkdocs",
"args": ["serve"],
"justMyCode": true
}
]
}
```
Then you only need to hit ++f5++ and the fun begins :nerd: (It is assumed that
you have created a virtual environment via the [tasks](#tasksjson) from the
previous step.)
## extensions.json
A list of recommended vscode-extensions to make your life easier:
```json title=".vscode/extensions.json"
{
"recommendations": [
"editorconfig.editorconfig",
"github.vscode-pull-request-github",
"ms-python.black-formatter",
"ms-python.flake8",
"ms-python.isort",
"ms-python.python",
"ms-python.vscode-pylance",
"redhat.vscode-yaml",
"tamasfe.even-better-toml",
"eamodio.gitlens",
"foxundermoon.shell-format",
"timonwong.shellcheck",
"esbenp.prettier-vscode",
"davidanson.vscode-markdownlint",
"yzhang.markdown-all-in-one",
"bierner.github-markdown-preview",
"ms-azuretools.vscode-docker",
"mads-hartmann.bash-ide-vscode"
]
}
```
## settings.json
With the settings below, your files get formatted automatically when you save
them (only your modifications, where supported), which will help you avoid
trouble with the pre-commit hooks. If the hooks fail, they will prevent you from
committing, but most hooks directly add a fixed version, so that you just need
to stage and commit the changes:
```json title=".vscode/settings.json"
{
"[json]": {
"editor.defaultFormatter": "esbenp.prettier-vscode",
"editor.quickSuggestions": {
"comments": false,
"strings": true,
"other": true
},
"editor.suggest.insertMode": "replace",
"gitlens.codeLens.scopes": ["document"]
},
"[jsonc]": {
"editor.defaultFormatter": "esbenp.prettier-vscode",
"editor.formatOnSave": true,
"editor.formatOnSaveMode": "modificationsIfAvailable"
},
"[python]": {
"editor.defaultFormatter": "ms-python.black-formatter",
"editor.formatOnSave": true,
"editor.formatOnSaveMode": "file"
},
"[toml]": {
"editor.defaultFormatter": "tamasfe.even-better-toml",
"editor.formatOnSave": true,
"editor.formatOnSaveMode": "modificationsIfAvailable"
},
"[yaml]": {
"editor.defaultFormatter": "esbenp.prettier-vscode",
"editor.formatOnSave": true,
"editor.formatOnSaveMode": "modificationsIfAvailable"
},
"[markdown]": {
"editor.defaultFormatter": "esbenp.prettier-vscode",
"editor.rulers": [80],
"editor.unicodeHighlight.ambiguousCharacters": false,
"editor.unicodeHighlight.invisibleCharacters": false,
"diffEditor.ignoreTrimWhitespace": false,
"editor.wordWrap": "on",
"editor.quickSuggestions": {
"comments": "off",
"strings": "off",
"other": "off"
},
"editor.formatOnSave": true,
"editor.formatOnSaveMode": "modificationsIfAvailable"
},
"[shellscript]": {
"editor.defaultFormatter": "foxundermoon.shell-format"
},
"[ignore]": {
"editor.defaultFormatter": "foxundermoon.shell-format"
},
"editor.rulers": [88],
"evenBetterToml.formatter.alignEntries": false,
"evenBetterToml.formatter.allowedBlankLines": 1,
"evenBetterToml.formatter.arrayAutoExpand": true,
"evenBetterToml.formatter.arrayTrailingComma": true,
"evenBetterToml.formatter.arrayAutoCollapse": true,
"evenBetterToml.formatter.columnWidth": 88,
"evenBetterToml.formatter.compactArrays": true,
"evenBetterToml.formatter.compactInlineTables": true,
"evenBetterToml.formatter.indentEntries": false,
"evenBetterToml.formatter.inlineTableExpand": true,
"evenBetterToml.formatter.reorderArrays": true,
"evenBetterToml.formatter.reorderKeys": true,
"evenBetterToml.formatter.compactEntries": false,
"evenBetterToml.schema.enabled": true,
"python.analysis.typeCheckingMode": "basic",
"python.formatting.provider": "black",
"python.languageServer": "Pylance",
"python.linting.enabled": true,
"python.linting.flake8Enabled": true,
"python.testing.unittestEnabled": false,
"python.testing.pytestEnabled": true,
"python.testing.pytestArgs": [
"tests",
"--cov=ldm",
"--cov-branch",
"--cov-report=term:skip-covered"
],
"yaml.schemas": {
"https://json.schemastore.org/prettierrc.json": "${workspaceFolder}/.prettierrc.yaml"
}
}
```
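As a tip, you can also run all of the pre-commit hooks by hand before
committing, a minimal sketch assuming `pre-commit install` has already been
run:
```sh
pre-commit run --all-files
```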


@@ -0,0 +1,135 @@
---
title: Pull-Request
---
# :octicons-git-pull-request-16: Pull-Request
## Prerequisites
To follow the steps in this tutorial you will need:
- [GitHub](https://github.com) account
- [git](https://git-scm.com/downloads) source control
- A text/code editor (personally I prefer
  [Visual Studio Code](https://code.visualstudio.com/Download))
- Terminal:
  - If you are on Linux/macOS you can use bash or zsh
  - For Windows users the commands are written for PowerShell
## Fork Repository
The first step in contributing to InvokeAI is to fork the
repository.
Since you are already reading this doc, the easiest way to do so is by clicking
[here](https://github.com/invoke-ai/InvokeAI/fork). You could also open
[InvokeAI](https://github.com/invoke-ai/InvokeAI) and click on the "Fork" button
in the top right.
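If you prefer the terminal and have the [GitHub CLI](https://cli.github.com)
installed and authenticated, forking and cloning in one step might look like
this sketch:
```sh
gh repo fork invoke-ai/InvokeAI --clone
```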
## Clone your fork
After you forked the Repository, you should clone it to your dev machine:
=== ":fontawesome-brands-linux:Linux / :simple-apple:macOS"
``` sh
git clone https://github.com/<github username>/InvokeAI \
&& cd InvokeAI
```
=== ":fontawesome-brands-windows:Windows"
``` powershell
git clone https://github.com/<github username>/InvokeAI `
&& cd InvokeAI
```
## Install in Editable Mode
To install InvokeAI in editable mode, we recommend (as always) creating and
activating a venv first. Afterwards you can install the InvokeAI package,
including the dev and docs extras, in editable mode, followed by installation of
the pre-commit hook:
=== ":fontawesome-brands-linux:Linux / :simple-apple:macOS"
``` sh
python -m venv .venv \
--prompt InvokeAI \
--upgrade-deps \
&& source .venv/bin/activate \
&& pip install \
--upgrade \
--use-pep517 \
--editable=".[dev,docs]" \
&& pre-commit install
```
=== ":fontawesome-brands-windows:Windows"
``` powershell
python -m venv .venv `
--prompt InvokeAI `
--upgrade-deps `
&& .venv/scripts/activate.ps1 `
&& pip install `
--upgrade `
--use-pep517 `
--editable=".[dev,docs]" `
&& pre-commit install
```
## Create a branch
Make sure you are on the main branch; from there, create your feature branch:
=== ":fontawesome-brands-linux:Linux / :simple-apple:macOS"
``` sh
git checkout main \
&& git pull \
&& git checkout -B <branch name>
```
=== ":fontawesome-brands-windows:Windows"
``` powershell
git checkout main `
&& git pull `
&& git checkout -B <branch name>
```
## Commit your changes
When you are done adding or updating content, you need to commit those
changes to your repository before you can actually open a PR:
```{ .sh .annotate }
git add <files you have changed> # (1)!
git commit -m "A commit message which describes your change"
git push
```
1. Replace this with a space-separated list of the files you changed, like:
   `README.md foo.sh bar.json baz`
## Create a Pull Request
After pushing your changes, you are ready to create a Pull Request. Just head
over to your fork on [GitHub](https://github.com), which should already show you
a message that there have been recent changes on your feature branch, together
with a green button you can use to create the PR.
The default target for your PRs is the main branch of
[invoke-ai/InvokeAI](https://github.com/invoke-ai/InvokeAI).
Another way is to create the PR in VS-Code or via the GitHub CLI (or even via
the GitHub CLI in a VS-Code terminal window 🤭):
```sh
gh pr create
```
The CLI will inform you if there are still unpushed commits on your branch. It
will also prompt you for things like the title and the body (description) if
you did not already pass them as arguments.


@@ -0,0 +1,26 @@
---
title: Issues
---
# :octicons-issue-opened-16: Issues
## :fontawesome-solid-bug: Report a bug
If you stumble over a bug while using InvokeAI, we would appreciate it a lot if
you
[open an issue](https://github.com/invoke-ai/InvokeAI/issues/new?assignees=&labels=bug&template=BUG_REPORT.yml&title=%5Bbug%5D%3A+)
to inform us about the details so that our developers can look into it.
If you also know how to fix the bug, take a look [here](010_PULL_REQUEST.md) to
find out how to create a Pull Request.
## Request a feature
If you have an idea for a new feature which you would like to see in
InvokeAI, there is a
[feature request](https://github.com/invoke-ai/InvokeAI/issues/new?assignees=&labels=bug&template=BUG_REPORT.yml&title=%5Bbug%5D%3A+)
template available in the issues section of the repository.
If you are just curious which features have already been requested, you can
find the overview of open requests
[here](https://github.com/invoke-ai/InvokeAI/labels/enhancement).


@@ -0,0 +1,32 @@
---
title: docs
---
# :simple-readthedocs: MkDocs-Material
If you want to contribute to the docs, there is an easy way to verify the
results of your changes before committing them.
Just follow the steps in the [Pull-Requests](010_PULL_REQUEST.md) docs, where we
already
[create a venv and install the docs extras](010_PULL_REQUEST.md#install-in-editable-mode).
Once everything is installed, it's as simple as:
```sh
mkdocs serve
```
This will build the docs locally and serve them on localhost; auto-refresh is
included, so you can just update a doc, save it and switch to the browser
without needing to restart `mkdocs serve`.
More information about the "mkdocs flavored markdown syntax" can be found
[here](https://squidfunk.github.io/mkdocs-material/reference/).
## :material-microsoft-visual-studio-code:VS-Code
We also provide a
[launch configuration for VS-Code](../IDE-Settings/vs-code.md#launchjson) which
includes a `mkdocs serve` entrypoint as well. You also don't have to worry about
formatting, since this is automated via Prettier, though this is of course not
limited to VS-Code.


@@ -0,0 +1,76 @@
# Transformation to nodes
## Current state
```mermaid
flowchart TD
web[WebUI];
cli[CLI];
web --> |img2img| generate(generate);
web --> |txt2img| generate(generate);
cli --> |txt2img| generate(generate);
cli --> |img2img| generate(generate);
generate --> model_manager;
generate --> generators;
generate --> ti_manager[TI Manager];
generate --> etc;
```
## Transitional Architecture
### first step
```mermaid
flowchart TD
web[WebUI];
cli[CLI];
web --> |img2img| img2img_node(Img2img node);
web --> |txt2img| generate(generate);
img2img_node --> model_manager;
img2img_node --> generators;
cli --> |txt2img| generate;
cli --> |img2img| generate;
generate --> model_manager;
generate --> generators;
generate --> ti_manager[TI Manager];
generate --> etc;
```
### second step
```mermaid
flowchart TD
web[WebUI];
cli[CLI];
web --> |img2img| img2img_node(img2img node);
img2img_node --> model_manager;
img2img_node --> generators;
web --> |txt2img| txt2img_node(txt2img node);
cli --> |txt2img| txt2img_node;
cli --> |img2img| generate(generate);
generate --> model_manager;
generate --> generators;
generate --> ti_manager[TI Manager];
generate --> etc;
txt2img_node --> model_manager;
txt2img_node --> generators;
txt2img_node --> ti_manager[TI Manager];
```
## Final Architecture
```mermaid
flowchart TD
web[WebUI];
cli[CLI];
web --> |img2img|img2img_node(img2img node);
cli --> |img2img|img2img_node;
web --> |txt2img|txt2img_node(txt2img node);
cli --> |txt2img|txt2img_node;
img2img_node --> model_manager;
txt2img_node --> model_manager;
img2img_node --> generators;
txt2img_node --> generators;
img2img_node --> ti_manager[TI Manager];
txt2img_node --> ti_manager[TI Manager];
```


@@ -0,0 +1,16 @@
---
title: Contributing
---
# :fontawesome-solid-code-commit: Contributing
There are different ways you can contribute to
[InvokeAI](https://github.com/invoke-ai/InvokeAI), like translations, or
opening issues for bugs or ideas for improvement.
This section of the docs will explain some of these ways of contributing, to
make it easier for newcomers as well as advanced users :nerd:
If you want to contribute code, but you do not have an exact idea yet, take a
look at the currently open
[:fontawesome-solid-bug: Bug Reports](https://github.com/invoke-ai/InvokeAI/issues?q=is%3Aissue+is%3Aopen+label%3Abug)

docs/help/index.md Normal file

@@ -0,0 +1,12 @@
# :material-help:Help
If you are looking for help with the installation of InvokeAI, please take a
look at the [Installation](../installation/index.md) section of the docs.
Here you will find help with topics like
- how to contribute
- configuration recommendations for IDEs
If you have an idea about what's missing and aren't afraid of contributing,
just take a look at [DOCS](./contributing/030_DOCS.md) to find out how to do so.


@@ -1,19 +0,0 @@
<!-- HTML for static distribution bundle build -->
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Swagger UI</title>
<link rel="stylesheet" type="text/css" href="swagger-ui/swagger-ui.css" />
<link rel="stylesheet" type="text/css" href="swagger-ui/index.css" />
<link rel="icon" type="image/png" href="swagger-ui/favicon-32x32.png" sizes="32x32" />
<link rel="icon" type="image/png" href="swagger-ui/favicon-16x16.png" sizes="16x16" />
</head>
<body>
<div id="swagger-ui"></div>
<script src="swagger-ui/swagger-ui-bundle.js" charset="UTF-8"> </script>
<script src="swagger-ui/swagger-ui-standalone-preset.js" charset="UTF-8"> </script>
<script src="swagger-ui/swagger-initializer.js" charset="UTF-8"> </script>
</body>
</html>


@@ -2,6 +2,8 @@
title: Home
---
# :octicons-home-16: Home
<!--
The Docs you find here (/docs/*) are built and deployed via mkdocs. If you want to run a local version to verify your changes, it's as simple as:
@@ -29,36 +31,36 @@ title: Home
[![github open prs badge]][github open prs link]
[ci checks on dev badge]:
https://flat.badgen.net/github/checks/invoke-ai/InvokeAI/development?label=CI%20status%20on%20dev&cache=900&icon=github
[ci checks on dev link]:
https://github.com/invoke-ai/InvokeAI/actions?query=branch%3Adevelopment
[ci checks on main badge]:
https://flat.badgen.net/github/checks/invoke-ai/InvokeAI/main?label=CI%20status%20on%20main&cache=900&icon=github
[ci checks on main link]:
https://github.com/invoke-ai/InvokeAI/actions/workflows/test-invoke-conda.yml
[discord badge]: https://flat.badgen.net/discord/members/ZmtBAhwWhy?icon=discord
[discord link]: https://discord.gg/ZmtBAhwWhy
[github forks badge]:
https://flat.badgen.net/github/forks/invoke-ai/InvokeAI?icon=github
[github forks link]:
https://useful-forks.github.io/?repo=lstein%2Fstable-diffusion
[github open issues badge]:
https://flat.badgen.net/github/open-issues/invoke-ai/InvokeAI?icon=github
[github open issues link]:
https://github.com/invoke-ai/InvokeAI/issues?q=is%3Aissue+is%3Aopen
[github open prs badge]:
https://flat.badgen.net/github/open-prs/invoke-ai/InvokeAI?icon=github
[github open prs link]:
https://github.com/invoke-ai/InvokeAI/pulls?q=is%3Apr+is%3Aopen
[github stars badge]:
https://flat.badgen.net/github/stars/invoke-ai/InvokeAI?icon=github
[github stars link]: https://github.com/invoke-ai/InvokeAI/stargazers
[latest commit to dev badge]:
https://flat.badgen.net/github/last-commit/invoke-ai/InvokeAI/development?icon=github&color=yellow&label=last%20dev%20commit&cache=900
[latest commit to dev link]:
https://github.com/invoke-ai/InvokeAI/commits/development
[latest release badge]:
https://flat.badgen.net/github/release/invoke-ai/InvokeAI/development?icon=github
[latest release link]: https://github.com/invoke-ai/InvokeAI/releases
</div>
@@ -87,24 +89,24 @@ Q&A</a>]
You will need one of the following:
- :simple-nvidia: An NVIDIA-based graphics card with 4 GB or more VRAM memory.
- :simple-amd: An AMD-based graphics card with 4 GB or more VRAM memory (Linux
only)
- :fontawesome-brands-apple: An Apple computer with an M1 chip.
We do **not recommend** the following video cards due to issues with their
running in half-precision mode and having insufficient VRAM to render 512x512
images in full-precision mode:
- NVIDIA 10xx series cards such as the 1080ti
- GTX 1650 series cards
- GTX 1660 series cards
### :fontawesome-solid-memory: Memory and Disk
- At least 12 GB Main Memory RAM.
- At least 18 GB of free disk space for the machine learning model, Python, and
all its dependencies.
## :octicons-package-dependencies-24: Installation
@@ -113,48 +115,65 @@ either an Nvidia-based card (with CUDA support) or an AMD card (using the ROCm
driver).
### [Installation Getting Started Guide](installation)
#### [Automated Installer](installation/010_INSTALL_AUTOMATED.md)
This method is recommended for 1st time users
#### [Manual Installation](installation/020_INSTALL_MANUAL.md)
This method is recommended for experienced users and developers
#### [Docker Installation](installation/040_INSTALL_DOCKER.md)
This method is recommended for those familiar with running Docker containers
### Other Installation Guides
- [PyPatchMatch](installation/060_INSTALL_PATCHMATCH.md)
- [XFormers](installation/070_INSTALL_XFORMERS.md)
- [CUDA and ROCm Drivers](installation/030_INSTALL_CUDA_AND_ROCM.md)
- [Installing New Models](installation/050_INSTALLING_MODELS.md)
## :octicons-gift-24: InvokeAI Features
### The InvokeAI Web Interface
- [WebUI overview](features/WEB.md)
- [WebUI hotkey reference guide](features/WEBUIHOTKEYS.md)
- [WebUI Unified Canvas for Img2Img, inpainting and outpainting](features/UNIFIED_CANVAS.md)
<!-- separator -->
### The InvokeAI Command Line Interface
- [Command Line Interface Reference Guide](features/CLI.md)
<!-- separator -->
### Image Management
- [Image2Image](features/IMG2IMG.md)
- [Inpainting](features/INPAINTING.md)
- [Outpainting](features/OUTPAINTING.md)
- [Adding custom styles and subjects](features/CONCEPTS.md)
- [Upscaling and Face Reconstruction](features/POSTPROCESS.md)
- [Embiggen upscaling](features/EMBIGGEN.md)
- [Other Features](features/OTHER.md)
<!-- separator -->
### Model Management
- [Installing](installation/050_INSTALLING_MODELS.md)
- [Model Merging](features/MODEL_MERGING.md)
- [Style/Subject Concepts and Embeddings](features/CONCEPTS.md)
- [Textual Inversion](features/TEXTUAL_INVERSION.md)
- [Not Safe for Work (NSFW) Checker](features/NSFW.md)
<!-- separator -->
### Prompt Engineering
- [Prompt Syntax](features/PROMPTS.md)
- [Generating Variations](features/VARIATIONS.md)
## :octicons-log-16: Latest Changes
@@ -162,84 +181,188 @@ This method is recommended for those familiar with running Docker containers
#### Migration to Stable Diffusion `diffusers` models
Previous versions of InvokeAI supported the original model file format
introduced with Stable Diffusion 1.4. In the original format, known variously as
"checkpoint", or "legacy" format, there is a single large weights file ending
with `.ckpt` or `.safetensors`. Though this format has served the community
well, it has a number of disadvantages, including file size, slow loading times,
and a variety of non-standard variants that require special-case code to handle.
In addition, because checkpoint files are actually a bundle of multiple machine
learning sub-models, it is hard to swap different sub-models in and out, or to
share common sub-models. A new format, introduced by the StabilityAI company in
collaboration with HuggingFace, is called `diffusers` and consists of a
directory of individual models. The most immediate benefit of `diffusers` is
that they load from disk very quickly. A longer term benefit is that in the near
future `diffusers` models will be able to share common sub-models, dramatically
reducing disk space when you have multiple fine-tune models derived from the
same base.
When you perform a new install of version 2.3.0, you will be offered the option
to install the `diffusers` versions of a number of popular SD models, including
Stable Diffusion versions 1.5 and 2.1 (including the 768x768 pixel version of
2.1). These will act and work just like the checkpoint versions. Do not be
concerned if you already have a lot of ".ckpt" or ".safetensors" models on disk!
InvokeAI 2.3.0 can still load these and generate images from them without any
extra intervention on your part.
To take advantage of the optimized loading times of `diffusers` models, InvokeAI
offers options to convert legacy checkpoint models into optimized `diffusers`
models. If you use the `invokeai` command line interface, the relevant commands
are:
- `!convert_model` -- Take the path to a local checkpoint file or a URL that
is pointing to one, convert it into a `diffusers` model, and import it into
InvokeAI's models registry file.
- `!optimize_model` -- If you already have a checkpoint model in your InvokeAI
models file, this command will accept its short name and convert it into a
like-named `diffusers` model, optionally deleting the original checkpoint
file.
- `!import_model` -- Take the local path of either a checkpoint file or a
`diffusers` model directory and import it into InvokeAI's registry file. You
may also provide the ID of any diffusers model that has been published on
the
[HuggingFace models repository](https://huggingface.co/models?pipeline_tag=text-to-image&sort=downloads)
and it will be downloaded and installed automatically.
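For example, a hypothetical CLI session using these commands (the file path and
model name are placeholders):
```bash
invoke> !import_model /path/to/my-model.safetensors
invoke> !optimize_model my-model
```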
The WebGUI offers similar functionality for model management.
For advanced users, new command-line options provide additional functionality.
Launching `invokeai` with the argument `--autoconvert <path to directory>` takes
the path to a directory of checkpoint files, automatically converts them into
`diffusers` models and imports them. Each time the script is launched, the
directory will be scanned for new checkpoint files to be loaded. Alternatively,
the `--ckpt_convert` argument will cause any checkpoint or safetensors model
that is already registered with InvokeAI to be converted into a `diffusers`
model on the fly, allowing you to take advantage of future diffusers-only
features without explicitly converting the model and saving it to disk.
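As a sketch (the directory path is a placeholder), automatic conversion at
launch might look like:
```bash
invokeai --autoconvert /path/to/checkpoint-directory
```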
Please see
[INSTALLING MODELS](https://invoke-ai.github.io/InvokeAI/installation/050_INSTALLING_MODELS/)
for more information on model management in both the command-line and Web
interfaces.
#### Support for the `XFormers` Memory-Efficient Crossattention Package
On CUDA (Nvidia) systems, version 2.3.0 supports the `XFormers` library. Once
installed, the`xformers` package dramatically reduces the memory footprint of
loaded Stable Diffusion models files and modestly increases image generation
speed. `xformers` will be installed and activated automatically if you specify a
CUDA system at install time.
The caveat with using `xformers` is that it introduces slightly
non-deterministic behavior, and images generated using the same seed and other
settings will be subtly different between invocations. Generally the changes are
unnoticeable unless you rapidly shift back and forth between images, but to
disable `xformers` and restore fully deterministic behavior, you may launch
InvokeAI using the `--no-xformers` option. This is most conveniently done by
opening the file `invokeai/invokeai.init` with a text editor, and adding the
line `--no-xformers` at the bottom.
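For instance, assuming your InvokeAI root directory is `~/invokeai`, appending
the option from the command line might look like:
```sh
# append the launch option to the init file (path is an assumption)
echo "--no-xformers" >> ~/invokeai/invokeai.init
```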
#### A Negative Prompt Box in the WebUI
There is now a separate text input box for negative prompts in the WebUI. This
is convenient for stashing frequently-used negative prompts ("mangled limbs, bad
anatomy"). The `[negative prompt]` syntax continues to work in the main prompt
box as well.
To see exactly how your prompts are being parsed, launch `invokeai` with the
`--log_tokenization` option. The console window will then display the
tokenization process for both positive and negative prompts.
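As a hypothetical example of the inline syntax (the prompt and switches are
placeholders):
```bash
invoke> portrait of a dancer [mangled limbs, bad anatomy] -s50
```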
#### Model Merging
Version 2.3.0 offers an intuitive user interface for merging up to three Stable
Diffusion models. Model merging allows you to
mix the behavior of models to achieve very interesting effects. To use this,
each of the models must already be imported into InvokeAI and saved in
`diffusers` format, then launch the merger using a new menu item in the InvokeAI
launcher script (`invoke.sh`, `invoke.bat`) or directly from the command line
with `invokeai-merge --gui`. You will be prompted to select the models to merge,
the proportions in which to mix them, and the mixing algorithm. The script will
create a new merged `diffusers` model and import it into InvokeAI for your use.
See
[MODEL MERGING](https://invoke-ai.github.io/InvokeAI/features/MODEL_MERGING/)
for more details.
#### Textual Inversion Training
Textual Inversion (TI) is a technique for training a Stable Diffusion model to
emit a particular subject or style when triggered by a keyword phrase. You can
perform TI training by placing a small number of images of the subject or style
in a directory, and choosing a distinctive trigger phrase, such as
"pointillist-style". After successful training, The subject or style will be
activated by including `<pointillist-style>` in your prompt.
Previous versions of InvokeAI were able to perform TI, but it required using a
command-line script with dozens of obscure command-line arguments. Version 2.3.0
features an intuitive TI frontend that will build a TI model on top of any
`diffusers` model. To access training you can launch from a new item in the
launcher script or from the command line using `invokeai-ti --gui`.
See
[TEXTUAL INVERSION](https://invoke-ai.github.io/InvokeAI/features/TEXTUAL_INVERSION/)
for further details.
#### A New Installer Experience
The InvokeAI installer has been upgraded in order to provide a smoother and
hopefully more glitch-free experience. In addition, InvokeAI is now packaged as
a PyPi project, allowing developers and power-users to install InvokeAI with the
command `pip install InvokeAI --use-pep517`. Please see
[Installation](#installation) for details.
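
As a minimal sketch, a pip-based install into a fresh virtual environment
(Linux with an NVIDIA GPU assumed; see the manual installation instructions
below for ROCm and CPU/MPS variants) looks like this:

```bash
# Create and activate a virtual environment, then install from PyPI.
python -m venv .venv --prompt InvokeAI
source .venv/bin/activate
pip install "InvokeAI[xformers]" --use-pep517 \
    --extra-index-url https://download.pytorch.org/whl/cu117
```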
Developers should be aware that the `pip` installation procedure has been
simplified and that the `conda` method is no longer supported at all.
Accordingly, the `environments_and_requirements` directory has been deleted from
the repository.
#### Command-Line Name Changes
All of InvokeAI's functionality, including the WebUI, command-line interface,
textual inversion training, and model merging, can be accessed from the
`invoke.sh` and `invoke.bat` launcher scripts. The menu of options has been
expanded to add the new functionality. For the convenience of developers and
power users, we have normalized the names of the InvokeAI command-line scripts:
- `invokeai` -- Command-line client
- `invokeai --web` -- Web GUI
- `invokeai-merge --gui` -- Model merging script with graphical front end
- `invokeai-ti --gui` -- Textual inversion script with graphical front end
- `invokeai-configure` -- Configuration tool for initializing the `invokeai`
directory and selecting popular starter models.
For backward compatibility, the old command names are also recognized, including
`invoke.py` and `configure-invokeai.py`. However, these are deprecated and will
eventually be removed.
Developers should be aware that the locations of the scripts' source code have
changed. The new locations are:

- `invokeai` => `ldm/invoke/CLI.py`
- `invokeai-configure` => `ldm/invoke/config/configure_invokeai.py`
- `invokeai-ti` => `ldm/invoke/training/textual_inversion.py`
- `invokeai-merge` => `ldm/invoke/merge_diffusers`
Developers are strongly encouraged to perform an "editable" install of InvokeAI
using `pip install -e . --use-pep517` in the Git repository, and then to call
the scripts using their 2.3.0 names, rather than executing the scripts directly.
Developers should also be aware that several important data files have been
relocated into a new directory named `invokeai`. This includes the WebGUI's
`frontend` and `backend` directories, and the `INITIAL_MODELS.yaml` files used
by the installer to select starter models. Eventually all InvokeAI modules will
be in subdirectories of `invokeai`.
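
A minimal sketch of that workflow (CPU/MPS assumed; add the `--extra-index-url`
shown in the installation docs for CUDA or ROCm):

```bash
# Clone the repository and perform an editable install; source changes
# take effect without reinstalling.
git clone https://github.com/invoke-ai/InvokeAI.git
cd InvokeAI
pip install -e . --use-pep517
# Call the scripts by their 2.3.0 names rather than running them directly:
invokeai --web
```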
Please see
[2.3.0 Release Notes](https://github.com/invoke-ai/InvokeAI/releases/tag/v2.3.0)
for further details. For older changelogs, please visit the
**[CHANGELOG](CHANGELOG/#v223-2-december-2022)**.
## :material-target: Troubleshooting
Please check out our
**[:material-frequently-asked-questions: Troubleshooting Guide](installation/010_INSTALL_AUTOMATED.md#troubleshooting)**
to get solutions for common installation problems and other issues.
## :octicons-repo-push-24: Contributing
@@ -265,8 +388,8 @@ thank them for their time, hard work and effort.
For support, please use this repository's GitHub Issues tracking service. Feel
free to send me an email if you use and like the script.
Original portions of the software are Copyright (c) 2022-23 by
[The InvokeAI Team](https://github.com/invoke-ai).
## :octicons-book-24: Further Reading


@@ -40,9 +40,10 @@ experimental versions later.
this, open up a command-line window ("Terminal" on Linux and
Macintosh, "Command" or "Powershell" on Windows) and type `python
--version`. If Python is installed, it will print out the version
number. If it is version `3.9.*` or `3.10.*`, you meet
requirements. We do not recommend using Python 3.11 or higher,
as not all the libraries that InvokeAI depends on work properly
with this version.
!!! warning "What to do if you have an unsupported version"
@@ -50,8 +51,7 @@ experimental versions later.
and download the appropriate installer package for your
platform. We recommend [Version
3.10.9](https://www.python.org/downloads/release/python-3109/),
which has been extensively tested with InvokeAI.
_Please select your platform in the section below for platform-specific
setup requirements._
@@ -150,7 +150,7 @@ experimental versions later.
```cmd
C:\Documents\Linco> cd InvokeAI-Installer
C:\Documents\Linco\invokeAI> .\install.bat
```
7. **Select the location to install InvokeAI**: The script will ask you to choose where to install InvokeAI. Select a
@@ -167,6 +167,11 @@ experimental versions later.
`/home/YourName/invokeai` on Linux systems, and `/Users/YourName/invokeai`
on Macintoshes, where "YourName" is your login name.
- If you have previously installed InvokeAI, you will be asked to
confirm whether you want to reinstall into this directory. You
may choose to reinstall, in which case your version will be upgraded,
or choose a different directory.
- The script uses tab autocompletion to suggest directory path completions.
Type part of the path (e.g. "C:\Users") and press ++tab++ repeatedly
to see suggested completions.
@@ -181,11 +186,6 @@ experimental versions later.
are unsure what GPU you are using, you can ask the installer to
guess.
<figure markdown>
![choose-gpu-screenshot](../assets/installer-walkthrough/choose-gpu.png)
</figure>
9. **Watch it go!**: Sit back and let the install script work. It will install the third-party
libraries needed by InvokeAI and the application itself.
@@ -197,25 +197,141 @@ experimental versions later.
minutes and nothing is happening, you can interrupt the script with ^C. You
may restart it and it will pick up where it left off.
10. **Post-install Configuration**: After installation completes, the
installer will launch the configuration form, which will guide you
through the first-time process of adjusting some of InvokeAI's
startup settings. To move around this form use ctrl-N for
&lt;N&gt;ext and ctrl-P for &lt;P&gt;revious, or use &lt;tab&gt;
and shift-&lt;tab&gt; to move forward and back. Once you are in a
multi-checkbox field use the up and down cursor keys to select the
item you want, and &lt;space&gt; to toggle it on and off. Within
a directory field, pressing &lt;tab&gt; will provide autocomplete
options.
<figure markdown>
![initial-settings-screenshot](../assets/installer-walkthrough/settings-form.png)
</figure>
Generally the defaults are fine, and you can come back to this screen at
any time to tweak your system. Here are the options you can adjust:
- ***Output directory for images***
This is the path to a directory in which InvokeAI will store all its
generated images.
- ***NSFW checker***
If checked, InvokeAI will test images for potential sexual content
and blur them out if found. Note that the NSFW checker consumes
an additional 0.6 GB of VRAM on top of the 2-3 GB of VRAM used
by most image models. If you have a low VRAM GPU (4-6 GB), you
can reduce out of memory errors by disabling the checker.
- ***HuggingFace Access Token***
InvokeAI has the ability to download embedded styles and subjects
from the HuggingFace Concept Library on-demand. However, some of
the concept library files are password protected. To make downloads
smoother, you can set up an account at huggingface.co, obtain an
access token, and paste it into this field. Note that you paste
into this screen using ctrl-shift-V.
- ***Free GPU memory after each generation***
This is useful for low-memory machines and helps minimize the
amount of GPU VRAM used by InvokeAI.
- ***Enable xformers support if available***
If the xformers library was successfully installed, this will activate
it to reduce memory consumption and increase rendering speed noticeably.
Note that xformers has the side effect of generating slightly different
images even when presented with the same seed and other settings.
- ***Force CPU to be used on GPU systems***
This will use the (slow) CPU rather than the accelerated GPU. This
can be used to generate images on systems that don't have a compatible
GPU.
- ***Precision***
This controls whether to use float32 or float16 arithmetic.
float16 uses less memory but is also slightly less accurate.
Ordinarily the right arithmetic is picked automatically ("auto"),
but you may have to use float32 to get images on certain systems
and graphics cards. The "autocast" option is deprecated and
shouldn't be used unless you are asked to by a member of the team.
- ***Number of models to cache in CPU memory***
This allows you to keep models in memory and switch rapidly among
them rather than having them load from disk each time. This slider
controls how many models to keep loaded at once. Each
model will use 2-4 GB of RAM, so use this cautiously.
- ***Directory containing embedding/textual inversion files***
This is the directory in which you can place custom embedding
files (.pt or .bin). During startup, this directory will be
scanned and InvokeAI will print out the text terms that
are available to trigger the embeddings.
At the bottom of the screen you will see a checkbox for accepting
the CreativeML Responsible AI License. You need to accept the license
in order to download Stable Diffusion models from the next screen.
_You can come back to the startup options form_ as many times as you like.
From the `invoke.sh` or `invoke.bat` launcher, select option (6) to relaunch
this script. On the command line, it is named `invokeai-configure`.
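
These settings are saved as command-line switches in the `invokeai.init`
file in the InvokeAI root directory. A minimal sketch of that file, using
switches that appear later in this document (paths illustrative):

```bash
--outdir="/home/fred/invokeai/outputs"
--no-nsfw_checker
```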
11. **Downloading Models**: After you press `[NEXT]` on the screen, you will be taken
to another screen that prompts you to download a series of starter models. The ones
we recommend are preselected for you, but you are encouraged to use the checkboxes to
pick and choose.
You will probably wish to download `autoencoder-840000` for use with models that
were trained with an older version of the Stability VAE.
<figure markdown>
![select-models-screenshot](../assets/installer-walkthrough/installing-models.png)
</figure>
Below the preselected list of starter models is a large text field which you can use
to specify a series of models to import. You can specify models in a variety of formats,
each separated by a space or newline. The formats accepted are:
- The path to a .ckpt or .safetensors file. On most systems, you can drag a file from
the file browser to the textfield to automatically paste the path. Be sure to remove
extraneous quotation marks and other things that come along for the ride.
- The path to a directory containing a combination of `.ckpt` and `.safetensors` files.
The directory will be scanned from top to bottom (including subfolders) and any
file that can be imported will be.
- A URL pointing to a `.ckpt` or `.safetensors` file. You can cut
and paste directly from a web page, or simply drag the link from the web page
or navigation bar. (You can also use ctrl-shift-V to paste into this field)
The file will be downloaded and installed.
- The HuggingFace repository ID (repo_id) for a `diffusers` model. These IDs have
the format _author_name/model_name_, as in `andite/anything-v4.0`
- The path to a local directory containing a `diffusers`
model. These directories always have the file `model_index.json`
at their top level.
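
For example, the following entries (paths and URLs illustrative) could all be
pasted into the field together:

```bash
# a local checkpoint file
/home/fred/Downloads/martians.safetensors
# a directory of checkpoint files (subfolders are scanned too)
/home/fred/Downloads/civitai_models
# a direct download URL
https://example.org/sd_models/martians.safetensors
# a HuggingFace repo_id for a diffusers model
andite/anything-v4.0
# a local diffusers directory (contains model_index.json)
/home/fred/models/my-diffusers-model
```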
_Select a directory for models to import:_ You may select a local
directory for autoimporting at startup time. If you select this
option, the directory you choose will be scanned for new
.ckpt/.safetensors files each time InvokeAI starts up, and any new
files will be automatically imported and made available for your
use.
_Convert imported models into diffusers:_ When legacy checkpoint
files are imported, you may select to use them unmodified (the
default) or to convert them into `diffusers` models. The latter
load much faster and have slightly better rendering performance,
but not all checkpoint files can be converted. Note that Stable Diffusion
Version 2.X files are **only** supported in `diffusers` format and will
be converted regardless.
_You can come back to the model install form_ as many times as you like.
From the `invoke.sh` or `invoke.bat` launcher, select option (5) to relaunch
this script. On the command line, it is named `invokeai-model-install`.
12. **Running InvokeAI for the first time**: The script will now exit and you'll be ready to generate some images. Look
for the directory `invokeai` installed in the location you chose at the
beginning of the install session. Look for a shell script named `invoke.sh`
(Linux/Mac) or `invoke.bat` (Windows). Launch the script by double-clicking
@@ -301,7 +417,7 @@ Then type the following commands:
=== "AMD System"
```bash
pip install torch torchvision --force-reinstall --extra-index-url https://download.pytorch.org/whl/rocm5.4.2
```
### Corrupted configuration file
@@ -327,6 +443,52 @@ the [InvokeAI Issues](https://github.com/invoke-ai/InvokeAI/issues) section, or
visit our [Discord Server](https://discord.gg/ZmtBAhwWhy) for interactive
assistance.
### Out of Memory Issues
The models are large, VRAM is expensive, and you may find yourself
faced with Out of Memory errors when generating images. Here are some
tips to reduce the problem:
* **4 GB of VRAM**
This should be adequate for 512x512 pixel images using Stable Diffusion 1.5
and derived models, provided that you **disable** the NSFW checker. To
disable the filter, do one of the following:
* Select option (6) "_change InvokeAI startup options_" from the
launcher. This will bring up the console-based startup settings
dialogue and allow you to unselect the "NSFW Checker" option.
* Start the startup settings dialogue directly by running
`invokeai-configure --skip-sd-weights --skip-support-models`
from the command line.
* Find the `invokeai.init` initialization file in the InvokeAI root
directory, open it in a text editor, and change `--nsfw_checker`
to `--no-nsfw_checker`
If you are on a CUDA system, you can realize significant memory
savings by activating the `xformers` library as described above. The
downside is `xformers` introduces non-deterministic behavior, such
that images generated with exactly the same prompt and settings will
be slightly different from each other. See above for more information.
* **6 GB of VRAM**
This is a borderline case. Using the SD 1.5 series you should be able to
generate images up to 640x640 with the NSFW checker enabled, and up to
1024x1024 with it disabled and `xformers` activated.
If you run into persistent memory issues there are a series of
environment variables that you can set before launching InvokeAI that
alter how the PyTorch machine learning library manages memory. See
https://pytorch.org/docs/stable/notes/cuda.html#memory-management for
a list of these tweaks.
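
For example, one commonly used tweak is `PYTORCH_CUDA_ALLOC_CONF` (the value
shown is illustrative; tune it for your GPU):

```bash
# Cap the size of cached allocation blocks to reduce fragmentation,
# then launch InvokeAI as usual.
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512
./invoke.sh
```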
* **12 GB of VRAM**
This should be sufficient to generate larger images up to about
1280x1280. If you wish to push further, consider activating
`xformers`.
### Other Problems
If you run into problems during or after installation, the InvokeAI team is
@@ -348,25 +510,11 @@ version (recommended), follow these steps:
1. Start the `invoke.sh`/`invoke.bat` launch script from within the
`invokeai` root directory.
2. Choose menu item (10) "Update InvokeAI".
3. This will launch a menu that gives you the option of:
1. Updating to the latest official release;
2. Updating to the bleeding-edge development version; or
3. Manually entering the tag or branch name of a version of
InvokeAI you wish to try out.


@@ -30,25 +30,35 @@ Installation](010_INSTALL_AUTOMATED.md), and in many cases will
already be installed (if, for example, you have used your system for
gaming):
* **Python**

  version 3.9 or 3.10 (3.11 is not recommended).

* **CUDA Tools**

  For those with _NVidia GPUs_, you will need to
  install the [CUDA toolkit and optionally the XFormers library](070_INSTALL_XFORMERS.md).

* **ROCm Tools**

  For _Linux users with AMD GPUs_, you will need
  to install the [ROCm toolkit](./030_INSTALL_CUDA_AND_ROCM.md). Note that
  InvokeAI does not support AMD GPUs on Windows systems due to
  lack of a Windows ROCm library.

* **Visual C++ Libraries**

  _Windows users_ must install the free
  [Visual C++ libraries from Microsoft](https://learn.microsoft.com/en-US/cpp/windows/latest-supported-vc-redist?view=msvc-170)

* **The Xcode command line tools**

  for _Macintosh users_. Instructions are available at
  [Free Code Camp](https://www.freecodecamp.org/news/install-xcode-command-line-tools/)

* _Macintosh users_ may also need to run the `Install Certificates` command
  if model downloads give lots of certificate errors. Run:
  `/Applications/Python\ 3.10/Install\ Certificates.command`
### Installation Walkthrough
@@ -75,7 +85,7 @@ manager, please follow these steps:
=== "Linux/Mac"
```bash
export INVOKEAI_ROOT="~/invokeai"
export INVOKEAI_ROOT=~/invokeai
mkdir $INVOKEAI_ROOT
```
@@ -99,35 +109,30 @@ manager, please follow these steps:
Windows environment variable using the Advanced System Settings dialogue.
Refer to your operating system documentation for details.
=== "Linux/Mac"
```bash
cd $INVOKEAI_ROOT
python -m venv create .venv
```
=== "Windows"
```bash
cd $INVOKEAI_ROOT
python -m venv create .venv
```
```terminal
cd $INVOKEAI_ROOT
python -m venv .venv --prompt InvokeAI
```
4. Activate the new environment:
=== "Linux/Mac"
```bash
source .venv/bin/activate
```
=== "Windows"
```ps
.venv\Scripts\activate
```

If you get a permissions error at this point, run this command and try again
`Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser`

The command-line prompt should change to show `(InvokeAI)` at the
beginning of the prompt. Note that all the following steps should be
run while inside the INVOKEAI_ROOT directory.
@@ -137,40 +142,47 @@ manager, please follow these steps:
python -m pip install --upgrade pip
```
6. Install the InvokeAI Package. The `--extra-index-url` option is used to select among
CUDA, ROCm and CPU/MPS drivers as shown below:
=== "CUDA (NVidia)"
```bash
pip install InvokeAI[xformers] --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu117
```
=== "ROCm (AMD)"
```bash
pip install InvokeAI --use-pep517 --extra-index-url https://download.pytorch.org/whl/rocm5.4.2
```
=== "CPU (Intel Macs & non-GPU systems)"
```bash
pip install InvokeAI --use-pep517 --extra-index-url https://download.pytorch.org/whl/cpu
```
=== "MPS (M1 and M2 Macs)"
```bash
pip install InvokeAI --use-pep517
```
7. Deactivate and reactivate your runtime directory so that the invokeai-specific commands
become available in the environment
=== "Linux/Macintosh"
```bash
deactivate && source .venv/bin/activate
```
=== "Windows"
```ps
deactivate
.venv\Scripts\activate
```
8. Set up the runtime directory
@@ -179,7 +191,7 @@ manager, please follow these steps:
models, model config files, directory for textual inversion embeddings, and
your outputs.
```terminal
invokeai-configure
```
@@ -283,13 +295,12 @@ on your system, please see the [Git Installation
Guide](https://github.com/git-guides/install-git)
1. From the command line, run this command:
```bash
git clone https://github.com/invoke-ai/InvokeAI.git
```
This will create a directory named `InvokeAI` and populate it with the
full source code from the InvokeAI repository.
2. Activate the InvokeAI virtual environment as per step (4) of the manual
installation protocol (important!)
@@ -314,7 +325,7 @@ installation protocol (important!)
=== "MPS (M1 and M2 Macs)"
```bash
pip install -e . --use-pep517
```
Be sure to pass `-e` (for an editable install) and don't forget the
@@ -330,5 +341,29 @@ installation protocol (important!)
repository. You can then use GitHub functions to create and submit
pull requests to contribute improvements to the project.
Please see [Contributing](../index.md#contributing) for hints
on getting started.
### Unsupported Conda Install
Congratulations, you found the "secret" Conda installation
instructions. If you really **really** want to use Conda with InvokeAI
you can do so using this unsupported recipe:
```bash
mkdir ~/invokeai
conda create -n invokeai python=3.10
conda activate invokeai
pip install InvokeAI[xformers] --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu117
invokeai-configure --root ~/invokeai
invokeai --root ~/invokeai --web
```
The `pip install` command shown in this recipe is for Linux/Windows
systems with an NVIDIA GPU. See step (6) above for the command to use
with other platforms/GPU combinations. If you don't wish to pass the
`--root` argument to `invokeai` with each launch, you may set the
environment variable INVOKEAI_ROOT to point to the installation directory.
Note that if you run into problems with the Conda installation, the InvokeAI
staff will **not** be able to help you out. Caveat Emptor!


@@ -110,7 +110,7 @@ recipes are available
When installing torch and torchvision manually with `pip`, remember to provide
the argument `--extra-index-url
https://download.pytorch.org/whl/rocm5.4.2` as described in the [Manual
Installation Guide](020_INSTALL_MANUAL.md).
This will be done automatically for you if you use the installer.


@@ -43,25 +43,31 @@ InvokeAI comes with support for a good set of starter models. You'll
find them listed in the master models file
`configs/INITIAL_MODELS.yaml` in the InvokeAI root directory. The
subset that are currently installed are found in
`configs/models.yaml`. As of v2.3.1, the list of starter models is:
|Model Name | HuggingFace Repo ID | Description | URL |
|---------- | ---------- | ----------- | --- |
|stable-diffusion-1.5|runwayml/stable-diffusion-v1-5|Stable Diffusion version 1.5 diffusers model (4.27 GB)|https://huggingface.co/runwayml/stable-diffusion-v1-5 |
|sd-inpainting-1.5|runwayml/stable-diffusion-inpainting|RunwayML SD 1.5 model optimized for inpainting, diffusers version (4.27 GB)|https://huggingface.co/runwayml/stable-diffusion-inpainting |
|stable-diffusion-2.1|stabilityai/stable-diffusion-2-1|Stable Diffusion version 2.1 diffusers model, trained on 768 pixel images (5.21 GB)|https://huggingface.co/stabilityai/stable-diffusion-2-1 |
|sd-inpainting-2.0|stabilityai/stable-diffusion-2-inpainting|Stable Diffusion version 2.0 inpainting model (5.21 GB)|https://huggingface.co/stabilityai/stable-diffusion-2-inpainting |
|analog-diffusion-1.0|wavymulder/Analog-Diffusion|An SD-1.5 model trained on diverse analog photographs (2.13 GB)|https://huggingface.co/wavymulder/Analog-Diffusion |
|deliberate-1.0|XpucT/Deliberate|Versatile model that produces detailed images up to 768px (4.27 GB)|https://huggingface.co/XpucT/Deliberate |
|d&d-diffusion-1.0|0xJustin/Dungeons-and-Diffusion|Dungeons & Dragons characters (2.13 GB)|https://huggingface.co/0xJustin/Dungeons-and-Diffusion |
|dreamlike-photoreal-2.0|dreamlike-art/dreamlike-photoreal-2.0|A photorealistic model trained on 768 pixel images based on SD 1.5 (2.13 GB)|https://huggingface.co/dreamlike-art/dreamlike-photoreal-2.0 |
|inkpunk-1.0|Envvi/Inkpunk-Diffusion|Stylized illustrations inspired by Gorillaz, FLCL and Shinkawa; prompt with "nvinkpunk" (4.27 GB)|https://huggingface.co/Envvi/Inkpunk-Diffusion |
|openjourney-4.0|prompthero/openjourney|An SD 1.5 model fine tuned on Midjourney; prompt with "mdjrny-v4 style" (2.13 GB)|https://huggingface.co/prompthero/openjourney |
|portrait-plus-1.0|wavymulder/portraitplus|An SD-1.5 model trained on close range portraits of people; prompt with "portrait+" (2.13 GB)|https://huggingface.co/wavymulder/portraitplus |
|seek-art-mega-1.0|coreco/seek.art_MEGA|A general use SD-1.5 "anything" model that supports multiple styles (2.1 GB)|https://huggingface.co/coreco/seek.art_MEGA |
|trinart-2.0|naclbit/trinart_stable_diffusion_v2|An SD-1.5 model finetuned with ~40K assorted high resolution manga/anime-style images (2.13 GB)|https://huggingface.co/naclbit/trinart_stable_diffusion_v2 |
|waifu-diffusion-1.4|hakurei/waifu-diffusion|An SD-1.5 model trained on 680k anime/manga-style images (2.13 GB)|https://huggingface.co/hakurei/waifu-diffusion |
Note that these files are covered by an "Ethical AI" license which
forbids certain uses. When you initially download them, you are asked
to accept the license terms. In addition, some of these models carry
additional license terms that limit their use in commercial
applications or on public servers. Be sure to familiarize yourself
with the model terms by visiting the URLs in the table above.
## Community-Contributed Models
@@ -80,6 +86,13 @@ only `.safetensors` and `.ckpt` models, but they can be easily loaded
into InvokeAI and/or converted into optimized `diffusers` models. Be
aware that CIVITAI hosts many models that generate NSFW content.
!!! note
InvokeAI 2.3.x does not support directly importing and
running Stable Diffusion version 2 checkpoint models. You may instead
convert them into `diffusers` models using the conversion methods
described below.
## Installation
There are multiple ways to install and manage models:
@@ -90,7 +103,7 @@ There are multiple ways to install and manage models:
models files.
3. The web interface (WebUI) has a GUI for importing and managing
models.
### Installation via `invokeai-configure`
@@ -106,7 +119,7 @@ confirm that the files are complete.
You can install a new model, including any of the community-supported ones, via
the command-line client's `!import_model` command.
#### Installing individual `.ckpt` and `.safetensors` models
If the model is already downloaded to your local disk, use
`!import_model /path/to/file.ckpt` to load it. For example:
@@ -131,15 +144,40 @@ invoke> !import_model https://example.org/sd_models/martians.safetensors
For this to work, the URL must not be password-protected. Otherwise
you will receive a 404 error.
When you import a legacy model, the CLI will first ask you what type
of model this is. You can indicate whether it is a model based on
Stable Diffusion 1.x (1.4 or 1.5), one based on Stable Diffusion 2.x,
or a 1.x inpainting model. Be careful to indicate the correct model
type, or it will not load correctly. You can correct the model type
after the fact using the `!edit_model` command.
The system will then ask you a few other questions about the model,
including what size image it was trained on (usually 512x512), what
name and description you wish to use for it, and whether you would
like to install a custom VAE (variable autoencoder) file for the
model. For recent models, the answer to the VAE question is usually
"no," but it won't hurt to answer "yes".
After importing, the model will load. If this is successful, you will
be asked if you want to keep the model loaded in memory to start
generating immediately. You'll also be asked if you wish to make this
the default model on startup. You can change this later using
`!edit_model`.
#### Importing a batch of `.ckpt` and `.safetensors` models from a directory
You may also point `!import_model` to a directory containing a set of
`.ckpt` or `.safetensors` files. They will be imported _en masse_.
!!! example
```console
invoke> !import_model C:/Users/fred/Downloads/civitai_models/
```
You will be given the option to import all models found in the
directory, or select which ones to import. If there are subfolders
within the directory, they will be searched for models to import.
#### Installing `diffusers` models
@@ -173,6 +211,26 @@ description for the model, whether to make this the default model that
is loaded at InvokeAI startup time, and whether to replace its
VAE. Generally the answer to the latter question is "no".
### Specifying a configuration file for legacy checkpoints
Some checkpoint files come with instructions to use a specific .yaml
configuration file. For InvokeAI to load this file correctly, please put
the config file in the same directory as the corresponding `.ckpt` or
`.safetensors` file and make sure the file has the same basename as
the weights file. Here is an example:
```bash
wonderful-model-v2.ckpt
wonderful-model-v2.yaml
```
Similarly, to use a custom VAE, name the VAE like this:
```bash
wonderful-model-v2.vae.pt
```
### Converting legacy models into `diffusers`
The CLI `!convert_model` will convert a `.safetensors` or `.ckpt`
@@ -279,19 +337,23 @@ After you save the modified `models.yaml` file relaunch
### Installation via the WebUI
To access the WebUI Model Manager, click on the button that looks like
a cube in the upper right side of the browser screen. This will bring
up a dialogue that lists the models you have already installed, and
allows you to load, delete or edit them:
<figure markdown>
![model-manager](../assets/installing-models/webui-models-1.png)
</figure>
To add a new model, click on **+ Add New** and select either a
checkpoint/safetensors model or a diffusers model:
<figure markdown>
![model-manager-add-new](../assets/installing-models/webui-models-2.png)
</figure>
In this example, we chose **Add Diffusers**. As shown in the figure
@@ -302,7 +364,9 @@ choose to enter a path to disk, the system will autocomplete for you
as you type:
<figure markdown>
![model-manager-add-diffusers](../assets/installing-models/webui-models-3.png)
</figure>
Press **Add Model** at the bottom of the dialogue (scrolled out of
@@ -317,7 +381,9 @@ directory and press the "Search" icon. This will display the
subfolders, and allow you to choose which ones to import:
<figure markdown>
![model-manager-add-checkpoint](../assets/installing-models/webui-models-4.png)
</figure>
## Model Management Startup Options
@@ -342,9 +408,8 @@ invoke.sh --autoconvert /home/fred/stable-diffusion-checkpoints
And here is what the same argument looks like in `invokeai.init`:
```bash
--outdir="/home/fred/invokeai/outputs
--no-nsfw_checker
--autoconvert /home/fred/stable-diffusion-checkpoints
```


@@ -1,73 +0,0 @@
openapi: 3.0.3
info:
title: Stable Diffusion
description: |-
TODO: Description Here
Some useful links:
- [Stable Diffusion Dream Server](https://github.com/lstein/stable-diffusion)
license:
name: MIT License
url: https://github.com/lstein/stable-diffusion/blob/main/LICENSE
version: 1.0.0
servers:
- url: http://localhost:9090/api
tags:
- name: images
description: Retrieve and manage generated images
paths:
/images/{imageId}:
get:
tags:
- images
summary: Get image by ID
description: Returns a single image
operationId: getImageById
parameters:
- name: imageId
in: path
description: ID of image to return
required: true
schema:
type: string
responses:
'200':
description: successful operation
content:
image/png:
schema:
type: string
format: binary
'404':
description: Image not found
/intermediates/{intermediateId}/{step}:
get:
tags:
- images
summary: Get intermediate image by ID
description: Returns a single intermediate image
operationId: getIntermediateById
parameters:
- name: intermediateId
in: path
description: ID of intermediate to return
required: true
schema:
type: string
- name: step
in: path
description: The generation step of the intermediate
required: true
schema:
type: string
responses:
'200':
description: successful operation
content:
image/png:
schema:
type: string
format: binary
'404':
description: Intermediate not found

docs/other/TRANSLATION.md Normal file

@@ -0,0 +1,19 @@
# Translation
InvokeAI uses [Weblate](https://weblate.org) for translation. Weblate is a FOSS project providing a scalable translation service. Weblate automates the tedious parts of managing translation of a growing project, and the service is generously provided at no cost to FOSS projects like InvokeAI.
## Contributing
If you'd like to contribute by adding or updating a translation, please visit our [Weblate project](https://hosted.weblate.org/engage/invokeai/). You'll need to sign in with your GitHub account (a number of other accounts are supported, including Google).
Once signed in, select a language and then the Web UI component. From here you can Browse and Translate strings from English to your chosen language. Zen mode offers a simpler translation experience.
Your changes will be attributed to you in the automated PR process; you don't need to do anything else.
## Help & Questions
Please check Weblate's [documentation](https://docs.weblate.org/en/latest/index.html) or ping @psychedelicious or @blessedcoolant on Discord if you have any questions.
## Thanks
Thanks to the InvokeAI community for their efforts to translate the project!


@@ -1,5 +0,0 @@
mkdocs
mkdocs-material>=8, <9
mkdocs-git-revision-date-localized-plugin
mkdocs-redirects==1.2.0

@@ -1,16 +0,0 @@
html {
box-sizing: border-box;
overflow: -moz-scrollbars-vertical;
overflow-y: scroll;
}
*,
*:before,
*:after {
box-sizing: inherit;
}
body {
margin: 0;
background: #fafafa;
}


@@ -1,79 +0,0 @@
<!doctype html>
<html lang="en-US">
<head>
<title>Swagger UI: OAuth2 Redirect</title>
</head>
<body>
<script>
'use strict';
function run () {
var oauth2 = window.opener.swaggerUIRedirectOauth2;
var sentState = oauth2.state;
var redirectUrl = oauth2.redirectUrl;
var isValid, qp, arr;
if (/code|token|error/.test(window.location.hash)) {
qp = window.location.hash.substring(1).replace('?', '&');
} else {
qp = location.search.substring(1);
}
arr = qp.split("&");
arr.forEach(function (v,i,_arr) { _arr[i] = '"' + v.replace('=', '":"') + '"';});
qp = qp ? JSON.parse('{' + arr.join() + '}',
function (key, value) {
return key === "" ? value : decodeURIComponent(value);
}
) : {};
isValid = qp.state === sentState;
if ((
oauth2.auth.schema.get("flow") === "accessCode" ||
oauth2.auth.schema.get("flow") === "authorizationCode" ||
oauth2.auth.schema.get("flow") === "authorization_code"
) && !oauth2.auth.code) {
if (!isValid) {
oauth2.errCb({
authId: oauth2.auth.name,
source: "auth",
level: "warning",
message: "Authorization may be unsafe, passed state was changed in server. The passed state wasn't returned from auth server."
});
}
if (qp.code) {
delete oauth2.state;
oauth2.auth.code = qp.code;
oauth2.callback({auth: oauth2.auth, redirectUrl: redirectUrl});
} else {
let oauthErrorMsg;
if (qp.error) {
oauthErrorMsg = "["+qp.error+"]: " +
(qp.error_description ? qp.error_description+ ". " : "no accessCode received from the server. ") +
(qp.error_uri ? "More info: "+qp.error_uri : "");
}
oauth2.errCb({
authId: oauth2.auth.name,
source: "auth",
level: "error",
message: oauthErrorMsg || "[Authorization failed]: no accessCode received from the server."
});
}
} else {
oauth2.callback({auth: oauth2.auth, token: qp, isValid: isValid, redirectUrl: redirectUrl});
}
window.close();
}
if (document.readyState !== 'loading') {
run();
} else {
document.addEventListener('DOMContentLoaded', function () {
run();
});
}
</script>
</body>
</html>


@@ -1,20 +0,0 @@
window.onload = function() {
//<editor-fold desc="Changeable Configuration Block">
// the following lines will be replaced by docker/configurator, when it runs in a docker-container
window.ui = SwaggerUIBundle({
url: "openapi3_0.yaml",
dom_id: '#swagger-ui',
deepLinking: true,
presets: [
SwaggerUIBundle.presets.apis,
SwaggerUIStandalonePreset
],
plugins: [
SwaggerUIBundle.plugins.DownloadUrl
],
layout: "StandaloneLayout"
});
//</editor-fold>
};


@@ -20,10 +20,9 @@ echo Building installer for version $VERSION
echo "Be certain that you're in the 'installer' directory before continuing."
read -p "Press any key to continue, or CTRL-C to exit..."
read -e -p "Commit and tag this repo with '${VERSION}' and '${LATEST_TAG}'? [n]: " input
read -e -p "Tag this repo with '${VERSION}' and '${LATEST_TAG}'? [n]: " input
RESPONSE=${input:='n'}
if [ "$RESPONSE" == 'y' ]; then
if ! git tag $VERSION ; then
echo "Existing/invalid tag"
@@ -32,6 +31,8 @@ if [ "$RESPONSE" == 'y' ]; then
git push origin :refs/tags/$LATEST_TAG
git tag -fa $LATEST_TAG
echo "remember to push --tags!"
fi
# ----------------------


@@ -67,6 +67,8 @@ del /q .tmp1 .tmp2
@rem -------------- Install and Configure ---------------
call python .\lib\main.py
pause
exit /b
@rem ------------------------ Subroutines ---------------
@rem routine to do comparison of semantic version numbers


@@ -9,13 +9,16 @@ cd $scriptdir
function version { echo "$@" | awk -F. '{ printf("%d%03d%03d%03d\n", $1,$2,$3,$4); }'; }
MINIMUM_PYTHON_VERSION=3.9.0
MAXIMUM_PYTHON_VERSION=3.11.0
PYTHON=""
for candidate in python3.10 python3.9 python3 python ; do
if ppath=`which $candidate`; then
python_version=$($ppath -V | awk '{ print $2 }')
if [ $(version $python_version) -ge $(version "$MINIMUM_PYTHON_VERSION") ]; then
if [ $(version $python_version) -lt $(version "$MAXIMUM_PYTHON_VERSION") ]; then
PYTHON=$ppath
break
fi
fi
fi
done
@@ -28,3 +31,4 @@ if [ -z "$PYTHON" ]; then
fi
exec $PYTHON ./lib/main.py ${@}
read -p "Press any key to exit"


@@ -241,14 +241,18 @@ class InvokeAiInstance:
from plumbum import FG, local
# Note that we're installing pinned versions of torch and
# torchvision here, which may not correspond to what is
# in pyproject.toml. This is a hack to prevent torch 2.0 from
# being installed and immediately uninstalled and replaced with 1.13
pip = local[self.pip]
(
pip[
"install",
"--require-virtualenv",
"torch",
"torchvision",
"torch~=1.13.1",
"torchvision>=0.14.1",
"--force-reinstall",
"--find-links" if find_links is not None else None,
find_links,
@@ -336,7 +340,8 @@ class InvokeAiInstance:
elif el in ['-y','--yes','--yes-to-all']:
new_argv.append(el)
sys.argv = new_argv
import requests # to catch download exceptions
from messages import introduction
introduction()
@@ -346,7 +351,21 @@ class InvokeAiInstance:
# NOTE: currently the config script does its own arg parsing! this means the command-line switches
# from the installer will also automatically propagate down to the config script.
# this may change in the future with config refactoring!
succeeded = False
try:
invokeai_configure.main()
succeeded = True
except requests.exceptions.ConnectionError as e:
print(f'\nA network error was encountered during configuration and download: {str(e)}')
except OSError as e:
print(f'\nAn OS error was encountered during configuration and download: {str(e)}')
except Exception as e:
print(f'\nA problem was encountered during the configuration and download steps: {str(e)}')
finally:
if not succeeded:
print('To try again, find the "invokeai" directory, run the script "invoke.sh" or "invoke.bat"')
print('and choose option 7 to fix a broken install, optionally followed by option 5 to install models.')
print('Alternatively you can relaunch the installer.')
def install_user_scripts(self):
"""
@@ -364,6 +383,9 @@ class InvokeAiInstance:
shutil.copy(src, dest)
os.chmod(dest, 0o0755)
if OS == "Linux":
shutil.copy(Path(__file__).parent / '..' / "templates" / "dialogrc", self.runtime / '.dialogrc')
def update(self):
pass


@@ -0,0 +1,27 @@
# Screen
use_shadow = OFF
use_colors = ON
screen_color = (BLACK, BLACK, ON)
# Box
dialog_color = (YELLOW, BLACK , ON)
title_color = (YELLOW, BLACK, ON)
border_color = (YELLOW, BLACK, OFF)
border2_color = (YELLOW, BLACK, OFF)
# Button
button_active_color = (RED, BLACK, OFF)
button_inactive_color = (YELLOW, BLACK, OFF)
button_label_active_color = (YELLOW,BLACK,ON)
button_label_inactive_color = (YELLOW,BLACK,ON)
# Menu box
menubox_color = (BLACK, BLACK, ON)
menubox_border_color = (YELLOW, BLACK, OFF)
menubox_border2_color = (YELLOW, BLACK, OFF)
# Menu window
item_color = (YELLOW, BLACK, OFF)
item_selected_color = (BLACK, YELLOW, OFF)
tag_key_color = (YELLOW, BLACK, OFF)
tag_key_selected_color = (BLACK, YELLOW, OFF)


@@ -6,15 +6,20 @@ setlocal
call .venv\Scripts\activate.bat
set INVOKEAI_ROOT=.
:start
echo Do you want to generate images using the
echo 1. command-line interface
echo 2. browser-based UI
echo 3. run textual inversion training
echo 4. merge models (diffusers type only)
echo 5. download and install models
echo 6. change InvokeAI startup options
echo 7. re-run the configure script to fix a broken install
echo 8. open the developer console
echo 9. update InvokeAI
echo 10. command-line help
echo Q - quit
set /P restore="Please enter 1-10, Q: [2] "
if not defined restore set restore=2
IF /I "%restore%" == "1" (
echo Starting the InvokeAI command-line..
@@ -24,14 +29,20 @@ IF /I "%restore%" == "1" (
python .venv\Scripts\invokeai.exe --web %*
) ELSE IF /I "%restore%" == "3" (
echo Starting textual inversion training..
python .venv\Scripts\invokeai-ti.exe --gui
) ELSE IF /I "%restore%" == "4" (
echo Starting model merging script..
python .venv\Scripts\invokeai-merge.exe --gui
) ELSE IF /I "%restore%" == "5" (
echo Running invokeai-model-install...
python .venv\Scripts\invokeai-model-install.exe
) ELSE IF /I "%restore%" == "6" (
echo Running invokeai-configure...
python .venv\Scripts\invokeai-configure.exe --skip-sd-weight --skip-support-models
) ELSE IF /I "%restore%" == "7" (
echo Running invokeai-configure...
python .venv\Scripts\invokeai-configure.exe --yes --default_only
) ELSE IF /I "%restore%" == "8" (
echo Developer Console
echo Python command is:
where python
@@ -43,14 +54,27 @@ IF /I "%restore%" == "1" (
echo *************************
echo *** Type `exit` to quit this shell and deactivate the Python virtual environment ***
call cmd /k
) ELSE IF /I "%restore%" == "7" (
) ELSE IF /I "%restore%" == "9" (
echo Running invokeai-update...
python .venv\Scripts\invokeai-update.exe %*
) ELSE IF /I "%restore%" == "10" (
echo Displaying command line help...
python .venv\Scripts\invokeai.exe --help %*
pause
exit /b
) ELSE IF /I "%restore%" == "q" (
echo Goodbye!
goto ending
) ELSE (
echo Invalid selection
pause
exit /b
)
goto start
endlocal
pause
:ending
exit /b


@@ -1,5 +1,10 @@
#!/bin/bash
# MIT License
# Coauthored by Lincoln Stein, Eugene Brodsky and Joshua Kimsey
# Copyright 2023, The InvokeAI Development Team
####
# This launch script assumes that:
# 1. it is located in the runtime directory,
@@ -11,65 +16,168 @@
set -eu
# Ensure we're in the correct folder in case user's CWD is somewhere else
scriptdir=$(dirname "$0")
cd "$scriptdir"
. .venv/bin/activate
export INVOKEAI_ROOT="$scriptdir"
PARAMS=$@
# Check to see if dialog is installed (it seems to be fairly standard, but good to check regardless) and if the user has passed the --no-tui argument to disable the dialog TUI
tui=true
if command -v dialog &>/dev/null; then
# This must use $@ to properly loop through the arguments passed by the user
for arg in "$@"; do
if [ "$arg" == "--no-tui" ]; then
tui=false
# Remove the --no-tui argument to avoid errors later on when passing arguments to InvokeAI
PARAMS=$(echo "$PARAMS" | sed 's/--no-tui//')
break
fi
done
else
tui=false
fi
# Set required env var for torch on mac MPS
if [ "$(uname -s)" == "Darwin" ]; then
export PYTORCH_ENABLE_MPS_FALLBACK=1
fi
if [ "$0" != "bash" ]; then
echo "Do you want to generate images using the"
echo "1. command-line"
echo "2. browser-based UI"
echo "3. run textual inversion training"
echo "4. merge models (diffusers type only)"
echo "5. open the developer console"
echo "6. re-run the configure script to download new models"
echo "7. command-line help "
echo ""
read -p "Please enter 1, 2, 3, 4, 5, 6 or 7: [2] " yn
choice=${yn:='2'}
case $choice in
1)
echo "Starting the InvokeAI command-line..."
exec invokeai $@
;;
2)
echo "Starting the InvokeAI browser-based UI..."
exec invokeai --web $@
;;
3)
echo "Starting Textual Inversion:"
exec invokeai-ti --gui $@
;;
4)
echo "Merging Models:"
exec invokeai-merge --gui $@
;;
5)
echo "Developer Console:"
file_name=$(basename "${BASH_SOURCE[0]}")
bash --init-file "$file_name"
;;
6)
exec invokeai-configure --root ${INVOKEAI_ROOT}
;;
7)
exec invokeai --help
;;
*)
echo "Invalid selection"
exit;;
# Primary function for the case statement to determine user input
do_choice() {
case $1 in
1)
clear
printf "Generate images with a browser-based interface\n"
invokeai --web $PARAMS
;;
2)
clear
printf "Generate images using a command-line interface\n"
invokeai $PARAMS
;;
3)
clear
printf "Textual inversion training\n"
invokeai-ti --gui $PARAMS
;;
4)
clear
printf "Merge models (diffusers type only)\n"
invokeai-merge --gui $PARAMS
;;
5)
clear
printf "Download and install models\n"
invokeai-model-install --root ${INVOKEAI_ROOT}
;;
6)
clear
printf "Change InvokeAI startup options\n"
invokeai-configure --root ${INVOKEAI_ROOT} --skip-sd-weights --skip-support-models
;;
7)
clear
printf "Re-run the configure script to fix a broken install\n"
invokeai-configure --root ${INVOKEAI_ROOT} --yes --default_only
;;
8)
clear
printf "Open the developer console\n"
file_name=$(basename "${BASH_SOURCE[0]}")
bash --init-file "$file_name"
;;
9)
clear
printf "Update InvokeAI\n"
invokeai-update
;;
10)
clear
printf "Command-line help\n"
invokeai --help
;;
"HELP 1")
clear
printf "Command-line help\n"
invokeai --help
;;
*)
clear
printf "Exiting...\n"
exit
;;
esac
clear
}
# Dialog-based TUI for launching Invoke functions
do_dialog() {
options=(
1 "Generate images with a browser-based interface"
2 "Generate images using a command-line interface"
3 "Textual inversion training"
4 "Merge models (diffusers type only)"
5 "Download and install models"
6 "Change InvokeAI startup options"
7 "Re-run the configure script to fix a broken install"
8 "Open the developer console"
9 "Update InvokeAI")
choice=$(dialog --clear \
--backtitle "\Zb\Zu\Z3InvokeAI" \
--colors \
--title "What would you like to run?" \
--ok-label "Run" \
--cancel-label "Exit" \
--help-button \
--help-label "CLI Help" \
--menu "Select an option:" \
0 0 0 \
"${options[@]}" \
2>&1 >/dev/tty) || clear
do_choice "$choice"
clear
}
# Command-line interface for launching Invoke functions
do_line_input() {
clear
printf " ** For a more attractive experience, please install the 'dialog' utility using your package manager. **\n\n"
printf "Do you want to generate images using the\n"
printf "1: Browser-based UI\n"
printf "2: Command-line interface\n"
printf "3: Run textual inversion training\n"
printf "4: Merge models (diffusers type only)\n"
printf "5: Download and install models\n"
printf "6: Change InvokeAI startup options\n"
printf "7: Re-run the configure script to fix a broken install\n"
printf "8: Open the developer console\n"
printf "9: Update InvokeAI\n"
printf "10: Command-line help\n"
printf "Q: Quit\n\n"
read -p "Please enter 1-10, Q: [1] " yn
choice=${yn:='1'}
do_choice $choice
clear
}
# Main entry point: launch Invoke through the dialog TUI or the plain CLI menu, or detect that we are already inside the developer console
if [ "$0" != "bash" ]; then
while true; do
if $tui; then
# .dialogrc must be located in the same directory as the invoke.sh script
export DIALOGRC="./.dialogrc"
do_dialog
else
do_line_input
fi
done
else # in developer console
python --version
echo "Press ^D to exit"
printf "Press ^D to exit\n"
export PS1="(InvokeAI) \u@\h \w> "
fi
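A note on the developer-console trick used above: `bash --init-file "$file_name"` starts a fresh interactive shell that sources this same script, but with `$0` set to `bash`, so the re-entry falls through to the `else` branch, which only reports the Python version and sets a custom prompt. A minimal sketch of the pattern, using a hypothetical `example.sh`:
if [ "$0" != "bash" ]; then
    echo "normal launch path"
    bash --init-file "example.sh"  # re-enter this same file as the new shell's init file
else
    export PS1="(dev) \u@\h \w> "  # runs only when sourced via --init-file
fi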

View File

@@ -25,12 +25,15 @@ from invokeai.backend.modules.parameters import parameters_to_command
import invokeai.frontend.dist as frontend
from ldm.generate import Generate
from ldm.invoke.args import Args, APP_ID, APP_VERSION, calculate_init_img_hash
from ldm.invoke.conditioning import get_tokens_for_prompt, get_prompt_structure
from ldm.invoke.conditioning import get_tokens_for_prompt_object, get_prompt_structure, split_weighted_subprompts, \
get_tokenizer
from ldm.invoke.generator.diffusers_pipeline import PipelineIntermediateState
from ldm.invoke.generator.inpaint import infill_methods
from ldm.invoke.globals import Globals
from ldm.invoke.globals import Globals, global_converted_ckpts_dir
from ldm.invoke.pngwriter import PngWriter, retrieve_metadata
from ldm.invoke.prompt_parser import split_weighted_subprompts, Blend
from compel.prompt_parser import Blend
from ldm.invoke.globals import global_models_dir
from ldm.invoke.merge_diffusers import merge_diffusion_models
# Loading Arguments
opt = Args()
@@ -43,7 +46,8 @@ if not os.path.isabs(args.outdir):
# normalize the config directory relative to root
if not os.path.isabs(opt.conf):
opt.conf = os.path.normpath(os.path.join(Globals.root,opt.conf))
opt.conf = os.path.normpath(os.path.join(Globals.root, opt.conf))
class InvokeAIWebServer:
def __init__(self, generate: Generate, gfpgan, codeformer, esrgan) -> None:
@@ -189,7 +193,8 @@ class InvokeAIWebServer:
(width, height) = pil_image.size
thumbnail_path = save_thumbnail(
pil_image, os.path.basename(file_path), self.thumbnail_image_path
pil_image, os.path.basename(
file_path), self.thumbnail_image_path
)
response = {
@@ -203,11 +208,7 @@ class InvokeAIWebServer:
return make_response(response, 200)
except Exception as e:
self.socketio.emit("error", {"message": (str(e))})
print("\n")
traceback.print_exc()
print("\n")
self.handle_exceptions(e)
return make_response("Error uploading file", 500)
self.load_socketio_listeners(self.socketio)
@@ -264,14 +265,16 @@ class InvokeAIWebServer:
# location for "finished" images
self.result_path = args.outdir
# temporary path for intermediates
self.intermediate_path = os.path.join(self.result_path, "intermediates/")
self.intermediate_path = os.path.join(
self.result_path, "intermediates/")
# path for user-uploaded init images and masks
self.init_image_path = os.path.join(self.result_path, "init-images/")
self.mask_image_path = os.path.join(self.result_path, "mask-images/")
# path for temp images e.g. gallery generations which are not committed
self.temp_image_path = os.path.join(self.result_path, "temp-images/")
# path for thumbnail images
self.thumbnail_image_path = os.path.join(self.result_path, "thumbnails/")
self.thumbnail_image_path = os.path.join(
self.result_path, "thumbnails/")
# txt log
self.log_path = os.path.join(self.result_path, "invoke_log.txt")
# make all output paths
@@ -290,7 +293,7 @@ class InvokeAIWebServer:
def load_socketio_listeners(self, socketio):
@socketio.on("requestSystemConfig")
def handle_request_capabilities():
print(f">> System config requested")
print(">> System config requested")
config = self.get_system_config()
config["model_list"] = self.generate.model_manager.list_models()
config["infill_methods"] = infill_methods()
@@ -301,20 +304,19 @@ class InvokeAIWebServer:
try:
if not search_folder:
socketio.emit(
"foundModels",
{'search_folder': None, 'found_models': None},
)
"foundModels",
{'search_folder': None, 'found_models': None},
)
else:
search_folder, found_models = self.generate.model_manager.search_models(search_folder)
search_folder, found_models = self.generate.model_manager.search_models(
search_folder)
socketio.emit(
"foundModels",
{'search_folder': search_folder, 'found_models': found_models},
{'search_folder': search_folder,
'found_models': found_models},
)
except Exception as e:
self.socketio.emit("error", {"message": (str(e))})
print("\n")
traceback.print_exc()
self.handle_exceptions(e)
print("\n")
@socketio.on("addNewModel")
@@ -344,11 +346,7 @@ class InvokeAIWebServer:
)
print(f">> New Model Added: {model_name}")
except Exception as e:
self.socketio.emit("error", {"message": (str(e))})
print("\n")
traceback.print_exc()
print("\n")
self.handle_exceptions(e)
@socketio.on("deleteModel")
def handle_delete_model(model_name: str):
@@ -364,11 +362,7 @@ class InvokeAIWebServer:
)
print(f">> Model Deleted: {model_name}")
except Exception as e:
self.socketio.emit("error", {"message": (str(e))})
print("\n")
traceback.print_exc()
print("\n")
self.handle_exceptions(e)
@socketio.on("requestModelChange")
def handle_set_model(model_name: str):
@@ -387,11 +381,110 @@ class InvokeAIWebServer:
{"model_name": model_name, "model_list": model_list},
)
except Exception as e:
self.socketio.emit("error", {"message": (str(e))})
print("\n")
self.handle_exceptions(e)
traceback.print_exc()
print("\n")
@socketio.on('convertToDiffusers')
def convert_to_diffusers(model_to_convert: dict):
try:
if (model_info := self.generate.model_manager.model_info(model_name=model_to_convert['model_name'])):
if 'weights' in model_info:
ckpt_path = Path(model_info['weights'])
original_config_file = Path(model_info['config'])
model_name = model_to_convert['model_name']
model_description = model_info['description']
else:
self.socketio.emit(
"error", {"message": "Model is not a valid checkpoint file"})
else:
self.socketio.emit(
"error", {"message": "Could not retrieve model info."})
if not ckpt_path.is_absolute():
ckpt_path = Path(Globals.root, ckpt_path)
if original_config_file and not original_config_file.is_absolute():
original_config_file = Path(
Globals.root, original_config_file)
diffusers_path = Path(
ckpt_path.parent.absolute(),
f'{model_name}_diffusers'
)
if model_to_convert['save_location'] == 'root':
diffusers_path = Path(
global_converted_ckpts_dir(), f'{model_name}_diffusers')
if model_to_convert['save_location'] == 'custom' and model_to_convert['custom_location'] is not None:
diffusers_path = Path(
model_to_convert['custom_location'], f'{model_name}_diffusers')
if diffusers_path.exists():
shutil.rmtree(diffusers_path)
self.generate.model_manager.convert_and_import(
ckpt_path,
diffusers_path,
model_name=model_name,
model_description=model_description,
vae=None,
original_config_file=original_config_file,
commit_to_conf=opt.conf,
)
new_model_list = self.generate.model_manager.list_models()
socketio.emit(
"modelConverted",
{"new_model_name": model_name,
"model_list": new_model_list, 'update': True},
)
print(f">> Model Converted: {model_name}")
except Exception as e:
self.handle_exceptions(e)
@socketio.on('mergeDiffusersModels')
def merge_diffusers_models(model_merge_info: dict):
try:
models_to_merge = model_merge_info['models_to_merge']
model_ids_or_paths = [
self.generate.model_manager.model_name_or_path(x) for x in models_to_merge]
merged_pipe = merge_diffusion_models(
model_ids_or_paths, model_merge_info['alpha'], model_merge_info['interp'], model_merge_info['force'])
dump_path = global_models_dir() / 'merged_models'
if model_merge_info['model_merge_save_path'] is not None:
dump_path = Path(model_merge_info['model_merge_save_path'])
os.makedirs(dump_path, exist_ok=True)
dump_path = dump_path / model_merge_info['merged_model_name']
merged_pipe.save_pretrained(dump_path, safe_serialization=1)
merged_model_config = dict(
model_name=model_merge_info['merged_model_name'],
description=f'Merge of models {", ".join(models_to_merge)}',
commit_to_conf=opt.conf
)
if vae := self.generate.model_manager.config[models_to_merge[0]].get("vae", None):
print(
f">> Using configured VAE assigned to {models_to_merge[0]}")
merged_model_config.update(vae=vae)
self.generate.model_manager.import_diffuser_model(
dump_path, **merged_model_config)
new_model_list = self.generate.model_manager.list_models()
socketio.emit(
"modelsMerged",
{"merged_models": models_to_merge,
"merged_model_name": model_merge_info['merged_model_name'],
"model_list": new_model_list, 'update': True},
)
print(f">> Models Merged: {models_to_merge}")
print(
f">> New Model Added: {model_merge_info['merged_model_name']}")
except Exception as e:
self.handle_exceptions(e)
@socketio.on("requestEmptyTempFolder")
def empty_temp_folder():
@@ -406,22 +499,20 @@ class InvokeAIWebServer:
)
os.remove(thumbnail_path)
except Exception as e:
socketio.emit("error", {"message": f"Unable to delete {f}: {str(e)}"})
socketio.emit(
"error", {"message": f"Unable to delete {f}: {str(e)}"})
pass
socketio.emit("tempFolderEmptied")
except Exception as e:
self.socketio.emit("error", {"message": (str(e))})
print("\n")
traceback.print_exc()
print("\n")
self.handle_exceptions(e)
@socketio.on("requestSaveStagingAreaImageToGallery")
def save_temp_image_to_gallery(url):
try:
image_path = self.get_image_path_from_url(url)
new_path = os.path.join(self.result_path, os.path.basename(image_path))
new_path = os.path.join(
self.result_path, os.path.basename(image_path))
shutil.copy2(image_path, new_path)
if os.path.splitext(new_path)[1] == ".png":
@@ -434,7 +525,8 @@ class InvokeAIWebServer:
(width, height) = pil_image.size
thumbnail_path = save_thumbnail(
pil_image, os.path.basename(new_path), self.thumbnail_image_path
pil_image, os.path.basename(
new_path), self.thumbnail_image_path
)
image_array = [
@@ -455,11 +547,7 @@ class InvokeAIWebServer:
)
except Exception as e:
self.socketio.emit("error", {"message": (str(e))})
print("\n")
traceback.print_exc()
print("\n")
self.handle_exceptions(e)
@socketio.on("requestLatestImages")
def handle_request_latest_images(category, latest_mtime):
@@ -497,7 +585,8 @@ class InvokeAIWebServer:
(width, height) = pil_image.size
thumbnail_path = save_thumbnail(
pil_image, os.path.basename(path), self.thumbnail_image_path
pil_image, os.path.basename(
path), self.thumbnail_image_path
)
image_array.append(
@@ -515,7 +604,8 @@ class InvokeAIWebServer:
}
)
except Exception as e:
socketio.emit("error", {"message": f"Unable to load {path}: {str(e)}"})
socketio.emit(
"error", {"message": f"Unable to load {path}: {str(e)}"})
pass
socketio.emit(
@@ -523,11 +613,7 @@ class InvokeAIWebServer:
{"images": image_array, "category": category},
)
except Exception as e:
self.socketio.emit("error", {"message": (str(e))})
print("\n")
traceback.print_exc()
print("\n")
self.handle_exceptions(e)
@socketio.on("requestImages")
def handle_request_images(category, earliest_mtime=None):
@@ -569,7 +655,8 @@ class InvokeAIWebServer:
(width, height) = pil_image.size
thumbnail_path = save_thumbnail(
pil_image, os.path.basename(path), self.thumbnail_image_path
pil_image, os.path.basename(
path), self.thumbnail_image_path
)
image_array.append(
@@ -588,7 +675,8 @@ class InvokeAIWebServer:
)
except Exception as e:
print(f">> Unable to load {path}")
socketio.emit("error", {"message": f"Unable to load {path}: {str(e)}"})
socketio.emit(
"error", {"message": f"Unable to load {path}: {str(e)}"})
pass
socketio.emit(
@@ -600,11 +688,7 @@ class InvokeAIWebServer:
},
)
except Exception as e:
self.socketio.emit("error", {"message": (str(e))})
print("\n")
traceback.print_exc()
print("\n")
self.handle_exceptions(e)
@socketio.on("generateImage")
def handle_generate_image_event(
@@ -626,7 +710,8 @@ class InvokeAIWebServer:
printable_parameters["init_mask"][:64] + "..."
)
print(f'\n>> Image Generation Parameters:\n\n{printable_parameters}\n')
print(
f'\n>> Image Generation Parameters:\n\n{printable_parameters}\n')
print(f'>> ESRGAN Parameters: {esrgan_parameters}')
print(f'>> Facetool Parameters: {facetool_parameters}')
@@ -636,11 +721,7 @@ class InvokeAIWebServer:
facetool_parameters,
)
except Exception as e:
self.socketio.emit("error", {"message": (str(e))})
print("\n")
traceback.print_exc()
print("\n")
self.handle_exceptions(e)
@socketio.on("runPostprocessing")
def handle_run_postprocessing(original_image, postprocessing_parameters):
@@ -662,16 +743,18 @@ class InvokeAIWebServer:
try:
seed = original_image["metadata"]["image"]["seed"]
except (KeyError) as e:
except KeyError:
seed = "unknown_seed"
pass
if postprocessing_parameters["type"] == "esrgan":
progress.set_current_status("common:statusUpscalingESRGAN")
progress.set_current_status("common.statusUpscalingESRGAN")
elif postprocessing_parameters["type"] == "gfpgan":
progress.set_current_status("common:statusRestoringFacesGFPGAN")
progress.set_current_status(
"common.statusRestoringFacesGFPGAN")
elif postprocessing_parameters["type"] == "codeformer":
progress.set_current_status("common:statusRestoringFacesCodeFormer")
progress.set_current_status(
"common.statusRestoringFacesCodeFormer")
socketio.emit("progressUpdate", progress.to_formatted_dict())
eventlet.sleep(0)
@@ -680,7 +763,8 @@ class InvokeAIWebServer:
image = self.esrgan.process(
image=image,
upsampler_scale=postprocessing_parameters["upscale"][0],
strength=postprocessing_parameters["upscale"][1],
denoise_str=postprocessing_parameters["upscale"][1],
strength=postprocessing_parameters["upscale"][2],
seed=seed,
)
elif postprocessing_parameters["type"] == "gfpgan":
@@ -704,7 +788,7 @@ class InvokeAIWebServer:
f'{postprocessing_parameters["type"]} is not a valid postprocessing type'
)
progress.set_current_status("common:statusSavingImage")
progress.set_current_status("common.statusSavingImage")
socketio.emit("progressUpdate", progress.to_formatted_dict())
eventlet.sleep(0)
@@ -751,15 +835,11 @@ class InvokeAIWebServer:
},
)
except Exception as e:
self.socketio.emit("error", {"message": (str(e))})
print("\n")
traceback.print_exc()
print("\n")
self.handle_exceptions(e)
@socketio.on("cancel")
def handle_cancel():
print(f">> Cancel processing requested")
print(">> Cancel processing requested")
self.canceled.set()
# TODO: I think this needs a safety mechanism.
@@ -780,11 +860,7 @@ class InvokeAIWebServer:
{"url": url, "uuid": uuid, "category": category},
)
except Exception as e:
self.socketio.emit("error", {"message": (str(e))})
print("\n")
traceback.print_exc()
print("\n")
self.handle_exceptions(e)
# App Functions
def get_system_config(self):
@@ -841,12 +917,10 @@ class InvokeAIWebServer:
So we need to convert each into a PIL Image.
"""
truncated_outpaint_image_b64 = generation_parameters["init_img"][:64]
truncated_outpaint_mask_b64 = generation_parameters["init_mask"][:64]
init_img_url = generation_parameters["init_img"]
original_bounding_box = generation_parameters["bounding_box"].copy()
original_bounding_box = generation_parameters["bounding_box"].copy(
)
initial_image = dataURL_to_image(
generation_parameters["init_img"]
@@ -923,7 +997,8 @@ class InvokeAIWebServer:
elif generation_parameters["generation_mode"] == "img2img":
init_img_url = generation_parameters["init_img"]
init_img_path = self.get_image_path_from_url(init_img_url)
generation_parameters["init_img"] = Image.open(init_img_path).convert('RGB')
generation_parameters["init_img"] = Image.open(
init_img_path).convert('RGB')
def image_progress(sample, step):
if self.canceled.is_set():
@@ -934,10 +1009,10 @@ class InvokeAIWebServer:
nonlocal progress
generation_messages = {
"txt2img": "common:statusGeneratingTextToImage",
"img2img": "common:statusGeneratingImageToImage",
"inpainting": "common:statusGeneratingInpainting",
"outpainting": "common:statusGeneratingOutpainting",
"txt2img": "common.statusGeneratingTextToImage",
"img2img": "common.statusGeneratingImageToImage",
"inpainting": "common.statusGeneratingInpainting",
"outpainting": "common.statusGeneratingOutpainting",
}
progress.set_current_step(step + 1)
@@ -982,9 +1057,9 @@ class InvokeAIWebServer:
},
)
if generation_parameters["progress_latents"]:
image = self.generate.sample_to_lowres_estimated_image(sample)
image = self.generate.sample_to_lowres_estimated_image(
sample)
(width, height) = image.size
width *= 8
height *= 8
@@ -1003,7 +1078,8 @@ class InvokeAIWebServer:
},
)
self.socketio.emit("progressUpdate", progress.to_formatted_dict())
self.socketio.emit(
"progressUpdate", progress.to_formatted_dict())
eventlet.sleep(0)
def image_done(image, seed, first_seed, attention_maps_image=None):
@@ -1015,7 +1091,6 @@ class InvokeAIWebServer:
nonlocal facetool_parameters
nonlocal progress
step_index = 1
nonlocal prior_variations
"""
@@ -1029,9 +1104,10 @@ class InvokeAIWebServer:
**generation_parameters["bounding_box"],
)
progress.set_current_status("common:statusGenerationComplete")
progress.set_current_status("common.statusGenerationComplete")
self.socketio.emit("progressUpdate", progress.to_formatted_dict())
self.socketio.emit(
"progressUpdate", progress.to_formatted_dict())
eventlet.sleep(0)
all_parameters = generation_parameters
@@ -1042,7 +1118,8 @@ class InvokeAIWebServer:
and all_parameters["variation_amount"] > 0
):
first_seed = first_seed or seed
this_variation = [[seed, all_parameters["variation_amount"]]]
this_variation = [
[seed, all_parameters["variation_amount"]]]
all_parameters["with_variations"] = (
prior_variations + this_variation
)
@@ -1056,14 +1133,16 @@ class InvokeAIWebServer:
raise CanceledException
if esrgan_parameters:
progress.set_current_status("common:statusUpscaling")
progress.set_current_status("common.statusUpscaling")
progress.set_current_status_has_steps(False)
self.socketio.emit("progressUpdate", progress.to_formatted_dict())
self.socketio.emit(
"progressUpdate", progress.to_formatted_dict())
eventlet.sleep(0)
image = self.esrgan.process(
image=image,
upsampler_scale=esrgan_parameters["level"],
denoise_str=esrgan_parameters['denoise_str'],
strength=esrgan_parameters["strength"],
seed=seed,
)
@@ -1071,6 +1150,7 @@ class InvokeAIWebServer:
postprocessing = True
all_parameters["upscale"] = [
esrgan_parameters["level"],
esrgan_parameters['denoise_str'],
esrgan_parameters["strength"],
]
@@ -1079,12 +1159,15 @@ class InvokeAIWebServer:
if facetool_parameters:
if facetool_parameters["type"] == "gfpgan":
progress.set_current_status("common:statusRestoringFacesGFPGAN")
progress.set_current_status(
"common.statusRestoringFacesGFPGAN")
elif facetool_parameters["type"] == "codeformer":
progress.set_current_status("common:statusRestoringFacesCodeFormer")
progress.set_current_status(
"common.statusRestoringFacesCodeFormer")
progress.set_current_status_has_steps(False)
self.socketio.emit("progressUpdate", progress.to_formatted_dict())
self.socketio.emit(
"progressUpdate", progress.to_formatted_dict())
eventlet.sleep(0)
if facetool_parameters["type"] == "gfpgan":
@@ -1113,8 +1196,9 @@ class InvokeAIWebServer:
]
all_parameters["facetool_type"] = facetool_parameters["type"]
progress.set_current_status("common:statusSavingImage")
self.socketio.emit("progressUpdate", progress.to_formatted_dict())
progress.set_current_status("common.statusSavingImage")
self.socketio.emit(
"progressUpdate", progress.to_formatted_dict())
eventlet.sleep(0)
# restore the stashed URLS and discard the paths, we are about to send the result to client
@@ -1125,12 +1209,14 @@ class InvokeAIWebServer:
)
if "init_mask" in all_parameters:
all_parameters["init_mask"] = "" # TODO: store the mask in metadata
# TODO: store the mask in metadata
all_parameters["init_mask"] = ""
if generation_parameters["generation_mode"] == "unifiedCanvas":
all_parameters["bounding_box"] = original_bounding_box
metadata = self.parameters_to_generated_image_metadata(all_parameters)
metadata = self.parameters_to_generated_image_metadata(
all_parameters)
command = parameters_to_command(all_parameters)
@@ -1160,17 +1246,20 @@ class InvokeAIWebServer:
if progress.total_iterations > progress.current_iteration:
progress.set_current_step(1)
progress.set_current_status("common:statusIterationComplete")
progress.set_current_status(
"common.statusIterationComplete")
progress.set_current_status_has_steps(False)
else:
progress.mark_complete()
self.socketio.emit("progressUpdate", progress.to_formatted_dict())
self.socketio.emit(
"progressUpdate", progress.to_formatted_dict())
eventlet.sleep(0)
parsed_prompt, _ = get_prompt_structure(generation_parameters["prompt"])
parsed_prompt, _ = get_prompt_structure(
generation_parameters["prompt"])
tokens = None if type(parsed_prompt) is Blend else \
get_tokens_for_prompt(self.generate.model, parsed_prompt)
get_tokens_for_prompt_object(get_tokenizer(self.generate.model), parsed_prompt)
attention_maps_image_base64_url = None if attention_maps_image is None \
else image_to_dataURL(attention_maps_image)
@@ -1221,11 +1310,7 @@ class InvokeAIWebServer:
# Clear the CUDA cache on an exception
self.empty_cuda_cache()
print(e)
self.socketio.emit("error", {"message": (str(e))})
print("\n")
traceback.print_exc()
print("\n")
self.handle_exceptions(e)
def empty_cuda_cache(self):
if self.generate.device.type == "cuda":
@@ -1287,7 +1372,8 @@ class InvokeAIWebServer:
{
"type": "esrgan",
"scale": int(parameters["upscale"][0]),
"strength": float(parameters["upscale"][1]),
"denoise_str": int(parameters["upscale"][1]),
"strength": float(parameters["upscale"][2]),
}
)
@@ -1298,13 +1384,6 @@ class InvokeAIWebServer:
# semantic drift
rfc_dict["sampler"] = parameters["sampler_name"]
# display weighted subprompts (liable to change)
subprompts = split_weighted_subprompts(
parameters["prompt"], skip_normalize=True
)
subprompts = [{"prompt": x[0], "weight": x[1]} for x in subprompts]
rfc_dict["prompt"] = subprompts
# 'variations' should always exist and be an array, empty or consisting of {'seed': seed, 'weight': weight} pairs
variations = []
@@ -1331,17 +1410,14 @@ class InvokeAIWebServer:
return metadata
except Exception as e:
self.socketio.emit("error", {"message": (str(e))})
print("\n")
traceback.print_exc()
print("\n")
self.handle_exceptions(e)
def parameters_to_post_processed_image_metadata(
self, parameters, original_image_path
):
try:
current_metadata = retrieve_metadata(original_image_path)["sd-metadata"]
current_metadata = retrieve_metadata(
original_image_path)["sd-metadata"]
postprocessing_metadata = {}
"""
@@ -1361,7 +1437,8 @@ class InvokeAIWebServer:
if parameters["type"] == "esrgan":
postprocessing_metadata["type"] = "esrgan"
postprocessing_metadata["scale"] = parameters["upscale"][0]
postprocessing_metadata["strength"] = parameters["upscale"][1]
postprocessing_metadata["denoise_str"] = parameters["upscale"][1]
postprocessing_metadata["strength"] = parameters["upscale"][2]
elif parameters["type"] == "gfpgan":
postprocessing_metadata["type"] = "gfpgan"
postprocessing_metadata["strength"] = parameters["facetool_strength"]
@@ -1380,16 +1457,13 @@ class InvokeAIWebServer:
postprocessing_metadata
)
else:
current_metadata["image"]["postprocessing"] = [postprocessing_metadata]
current_metadata["image"]["postprocessing"] = [
postprocessing_metadata]
return current_metadata
except Exception as e:
self.socketio.emit("error", {"message": (str(e))})
print("\n")
traceback.print_exc()
print("\n")
self.handle_exceptions(e)
def save_result_image(
self,
@@ -1419,7 +1493,7 @@ class InvokeAIWebServer:
if step_index:
filename += f".{step_index}"
if postprocessing:
filename += f".postprocessed"
filename += ".postprocessed"
filename += ".png"
@@ -1433,11 +1507,7 @@ class InvokeAIWebServer:
return os.path.abspath(path)
except Exception as e:
self.socketio.emit("error", {"message": (str(e))})
print("\n")
traceback.print_exc()
print("\n")
self.handle_exceptions(e)
def make_unique_init_image_filename(self, name):
try:
@@ -1446,11 +1516,7 @@ class InvokeAIWebServer:
name = f"{split[0]}.{uuid}{split[1]}"
return name
except Exception as e:
self.socketio.emit("error", {"message": (str(e))})
print("\n")
traceback.print_exc()
print("\n")
self.handle_exceptions(e)
def calculate_real_steps(self, steps, strength, has_init_image):
import math
@@ -1465,11 +1531,7 @@ class InvokeAIWebServer:
file.writelines(message)
except Exception as e:
self.socketio.emit("error", {"message": (str(e))})
print("\n")
traceback.print_exc()
print("\n")
self.handle_exceptions(e)
def get_image_path_from_url(self, url):
"""Given a url to an image used by the client, returns the absolute file path to that image"""
@@ -1492,18 +1554,15 @@ class InvokeAIWebServer:
)
elif "thumbnails" in url:
return os.path.abspath(
os.path.join(self.thumbnail_image_path, os.path.basename(url))
os.path.join(self.thumbnail_image_path,
os.path.basename(url))
)
else:
return os.path.abspath(
os.path.join(self.result_path, os.path.basename(url))
)
except Exception as e:
self.socketio.emit("error", {"message": (str(e))})
print("\n")
traceback.print_exc()
print("\n")
self.handle_exceptions(e)
def get_url_from_image_path(self, path):
"""Given an absolute file path to an image, returns the URL that the client can use to load the image"""
@@ -1521,11 +1580,7 @@ class InvokeAIWebServer:
else:
return os.path.join(self.result_url, os.path.basename(path))
except Exception as e:
self.socketio.emit("error", {"message": (str(e))})
print("\n")
traceback.print_exc()
print("\n")
self.handle_exceptions(e)
def save_file_unique_uuid_name(self, bytes, name, path):
try:
@@ -1544,11 +1599,13 @@ class InvokeAIWebServer:
return file_path
except Exception as e:
self.socketio.emit("error", {"message": (str(e))})
print("\n")
self.handle_exceptions(e)
traceback.print_exc()
print("\n")
def handle_exceptions(self, exception, emit_key: str = 'error'):
self.socketio.emit(emit_key, {"message": (str(exception))})
print("\n")
traceback.print_exc()
print("\n")
class Progress:
@@ -1569,7 +1626,7 @@ class Progress:
self.total_iterations = (
generation_parameters["iterations"] if generation_parameters else 1
)
self.current_status = "common:statusPreparing"
self.current_status = "common.statusPreparing"
self.is_processing = True
self.current_status_has_steps = False
self.has_error = False
@@ -1599,7 +1656,7 @@ class Progress:
self.has_error = has_error
def mark_complete(self):
self.current_status = "common:statusProcessingComplete"
self.current_status = "common.statusProcessingComplete"
self.current_step = 0
self.total_steps = 0
self.current_iteration = 0
@@ -1661,10 +1718,12 @@ def dataURL_to_image(dataURL: str) -> ImageType:
)
return image
"""
Converts an image into a base64 image dataURL.
"""
def image_to_dataURL(image: ImageType) -> str:
buffered = io.BytesIO()
image.save(buffered, format="PNG")
@@ -1674,7 +1733,6 @@ def image_to_dataURL(image: ImageType) -> str:
return image_base64
"""
Converts a base64 image dataURL into bytes.
The dataURL is split on the first comma.
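The body of this last function is truncated in this view, but the docstring fully specifies the conversion. A minimal sketch, with the helper's name assumed from the docstring rather than shown in this diff:
import base64
def dataURL_to_bytes(dataURL: str) -> bytes:
    # "data:image/png;base64,<payload>" -> decoded bytes of <payload>,
    # keeping only what follows the first comma
    return base64.decodebytes(dataURL.split(",", 1)[1].encode("utf-8"))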

View File

@@ -6,83 +6,83 @@ stable-diffusion-1.5:
repo_id: stabilityai/sd-vae-ft-mse
recommended: True
default: True
inpainting-1.5:
sd-inpainting-1.5:
description: RunwayML SD 1.5 model optimized for inpainting, diffusers version (4.27 GB)
repo_id: runwayml/stable-diffusion-inpainting
format: diffusers
vae:
repo_id: stabilityai/sd-vae-ft-mse
recommended: True
dreamlike-diffusion-1.0:
description: An SD 1.5 model fine tuned on high quality art by dreamlike.art, diffusers version (2.13 GB)
format: diffusers
repo_id: dreamlike-art/dreamlike-diffusion-1.0
vae:
repo_id: stabilityai/sd-vae-ft-mse
recommended: True
dreamlike-photoreal-2.0:
description: A photorealistic model trained on 768 pixel images based on SD 1.5 (2.13 GB)
format: diffusers
repo_id: dreamlike-art/dreamlike-photoreal-2.0
recommended: False
stable-diffusion-2.1-768:
description: Stable Diffusion version 2.1 diffusers model, trained on 768 pixel images (5.21 GB)
repo_id: stabilityai/stable-diffusion-2-1
format: diffusers
recommended: True
stable-diffusion-2.1-base:
description: Stable Diffusion version 2.1 diffusers base model, trained on 512 pixel images (5.21 GB)
description: Stable Diffusion version 2.1 diffusers model, trained on 512 pixel images (5.21 GB)
repo_id: stabilityai/stable-diffusion-2-1-base
format: diffusers
recommended: False
sd-inpainting-2.0:
description: Stable Diffusion version 2.0 inpainting model (5.21 GB)
repo_id: stabilityai/stable-diffusion-2-inpainting
format: diffusers
recommended: False
analog-diffusion-1.0:
description: An SD-1.5 model trained on diverse analog photographs (2.13 GB)
repo_id: wavymulder/Analog-Diffusion
format: diffusers
recommended: false
deliberate-1.0:
description: Versatile model that produces detailed images up to 768px (4.27 GB)
format: diffusers
repo_id: XpucT/Deliberate
recommended: False
d&d-diffusion-1.0:
description: Dungeons & Dragons characters (2.13 GB)
format: diffusers
repo_id: 0xJustin/Dungeons-and-Diffusion
recommended: False
dreamlike-photoreal-2.0:
description: A photorealistic model trained on 768 pixel images based on SD 1.5 (2.13 GB)
format: diffusers
repo_id: dreamlike-art/dreamlike-photoreal-2.0
recommended: False
inkpunk-1.0:
description: Stylized illustrations inspired by Gorillaz, FLCL and Shinkawa; prompt with "nvinkpunk" (4.27 GB)
format: diffusers
repo_id: Envvi/Inkpunk-Diffusion
recommended: False
openjourney-4.0:
description: An SD 1.5 model fine tuned on Midjourney images by PromptHero - include "mdjrny-v4 style" in your prompts (2.13 GB)
format: diffusers
repo_id: prompthero/openjourney
vae:
description: An SD 1.5 model fine tuned on Midjourney; prompt with "mdjrny-v4 style" (2.13 GB)
format: diffusers
repo_id: prompthero/openjourney
vae:
repo_id: stabilityai/sd-vae-ft-mse
recommended: False
nitro-diffusion-1.0:
description: An SD 1.5 model trained on three art styles - prompt with "archer style", "arcane style" and/or "modern disney style" (2.13 GB)
repo_id: nitrosocke/Nitro-Diffusion
recommended: False
portrait-plus-1.0:
description: An SD-1.5 model trained on close range portraits of people; prompt with "portrait+" (2.13 GB)
format: diffusers
repo_id: wavymulder/portraitplus
recommended: False
seek-art-mega-1.0:
description: A general use SD-1.5 "anything" model that supports multiple styles (2.1 GB)
repo_id: coreco/seek.art_MEGA
format: diffusers
vae:
repo_id: stabilityai/sd-vae-ft-mse
recommended: False
trinart-2.0:
description: An SD model finetuned with ~40,000 assorted high resolution manga/anime-style pictures, diffusers version (2.13 GB)
description: An SD-1.5 model finetuned with ~40K assorted high resolution manga/anime-style images (2.13 GB)
repo_id: naclbit/trinart_stable_diffusion_v2
format: diffusers
vae:
repo_id: stabilityai/sd-vae-ft-mse
recommended: False
trinart-characters-2_0:
description: An SD model finetuned with 19.2M anime/manga style images (ckpt version) (4.27 GB)
repo_id: naclbit/trinart_derrida_characters_v2_stable_diffusion
config: v1-inference.yaml
file: derrida_final.ckpt
format: ckpt
waifu-diffusion-1.4:
description: An SD-1.5 model trained on 680k anime/manga-style images (2.13 GB)
repo_id: hakurei/waifu-diffusion
format: diffusers
vae:
repo_id: naclbit/trinart_derrida_characters_v2_stable_diffusion
file: autoencoder_fix_kl-f8-trinart_characters.ckpt
width: 512
height: 512
recommended: False
ft-mse-improved-autoencoder-840000:
description: StabilityAI improved autoencoder fine-tuned for human faces. Improves legacy .ckpt models (335 MB)
repo_id: stabilityai/sd-vae-ft-mse-original
format: ckpt
config: VAE/default
file: vae-ft-mse-840000-ema-pruned.ckpt
width: 512
height: 512
recommended: True
trinart_vae:
description: Custom autoencoder for trinart_characters for legacy .ckpt models only (335 MB)
repo_id: naclbit/trinart_characters_19.2m_stable_diffusion_v1
config: VAE/trinart
format: ckpt
file: autoencoder_fix_kl-f8-trinart_characters.ckpt
width: 512
height: 512
repo_id: stabilityai/sd-vae-ft-mse
recommended: False

View File

@@ -0,0 +1,67 @@
model:
base_learning_rate: 1.0e-4
target: ldm.models.diffusion.ddpm.LatentDiffusion
params:
linear_start: 0.00085
linear_end: 0.0120
num_timesteps_cond: 1
log_every_t: 200
timesteps: 1000
first_stage_key: "jpg"
cond_stage_key: "txt"
image_size: 64
channels: 4
cond_stage_trainable: false
conditioning_key: crossattn
monitor: val/loss_simple_ema
scale_factor: 0.18215
use_ema: False # we set this to false because this is an inference only config
unet_config:
target: ldm.modules.diffusionmodules.openaimodel.UNetModel
params:
use_checkpoint: True
use_fp16: True
image_size: 32 # unused
in_channels: 4
out_channels: 4
model_channels: 320
attention_resolutions: [ 4, 2, 1 ]
num_res_blocks: 2
channel_mult: [ 1, 2, 4, 4 ]
num_head_channels: 64 # need to fix for flash-attn
use_spatial_transformer: True
use_linear_in_transformer: True
transformer_depth: 1
context_dim: 1024
legacy: False
first_stage_config:
target: ldm.models.autoencoder.AutoencoderKL
params:
embed_dim: 4
monitor: val/rec_loss
ddconfig:
#attn_type: "vanilla-xformers"
double_z: true
z_channels: 4
resolution: 256
in_channels: 3
out_ch: 3
ch: 128
ch_mult:
- 1
- 2
- 4
- 4
num_res_blocks: 2
attn_resolutions: []
dropout: 0.0
lossconfig:
target: torch.nn.Identity
cond_stage_config:
target: ldm.modules.encoders.modules.FrozenOpenCLIPEmbedder
params:
freeze: True
layer: "penultimate"

View File

@@ -2,4 +2,4 @@ dist/
.husky/
node_modules/
patches/
public/
stats.html

View File

@@ -25,4 +25,13 @@ dist-ssr
*.sw?
# build stats
stats.html
stats.html
# Yarn - https://yarnpkg.com/getting-started/qa#which-files-should-be-gitignored
.pnp.*
.yarn/*
!.yarn/patches
!.yarn/plugins
!.yarn/releases
!.yarn/sdks
!.yarn/versions

View File

@@ -1,4 +1,4 @@
#!/usr/bin/env sh
. "$(dirname -- "$0")/_/husky.sh"
cd invokeai/frontend/ && npx run lint
cd invokeai/frontend/ && npm run lint-staged

View File

@@ -2,4 +2,4 @@ dist/
.husky/
node_modules/
patches/
public/
stats.html

View File

@@ -1,6 +1,15 @@
module.exports = {
trailingComma: 'es5',
tabWidth: 2,
endOfLine: 'auto',
semi: true,
singleQuote: true,
overrides: [
{
files: ['public/locales/*.json'],
options: {
tabWidth: 4,
},
},
],
};

File diff suppressed because one or more lines are too long

View File

@@ -0,0 +1,5 @@
# THIS IS AN AUTOGENERATED FILE. DO NOT EDIT THIS FILE DIRECTLY.
# yarn lockfile v1
yarn-path ".yarn/releases/yarn-1.22.19.cjs"

View File

@@ -0,0 +1 @@
yarnPath: .yarn/releases/yarn-1.22.19.cjs

View File

@@ -7,7 +7,7 @@ The UI is in `invokeai/frontend`.
Install [node](https://nodejs.org/en/download/) (includes npm) and
[yarn](https://yarnpkg.com/getting-started/install).
From `invokeai/frontend/` run `yarn install` to get everything set up.
From `invokeai/frontend/` run `yarn install --immutable` to get everything set up.
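For example, a fresh checkout could be set up like this (the `dev` script name is an assumption here; check `package.json` for the actual scripts):
cd invokeai/frontend
yarn install --immutable   # aborts rather than rewriting yarn.lock (classic Yarn's --frozen-lockfile)
yarn dev                   # assumed dev-server script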
## Dev

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@@ -5,8 +5,8 @@
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>InvokeAI - A Stable Diffusion Toolkit</title>
<link rel="shortcut icon" type="icon" href="./assets/favicon-0d253ced.ico" />
<script type="module" crossorigin src="./assets/index-252612ad.js"></script>
<link rel="stylesheet" href="./assets/index-b0bf79f4.css">
<script type="module" crossorigin src="./assets/index-c09cf9ca.js"></script>
<link rel="stylesheet" href="./assets/index-14cb2922.css">
</head>
<body>

520
invokeai/frontend/dist/locales/ar.json vendored Normal file
View File

@@ -0,0 +1,520 @@
{
"common": {
"hotkeysLabel": "مفاتيح الأختصار",
"themeLabel": "الموضوع",
"languagePickerLabel": "منتقي اللغة",
"reportBugLabel": "بلغ عن خطأ",
"settingsLabel": "إعدادات",
"darkTheme": "داكن",
"lightTheme": "فاتح",
"greenTheme": "أخضر",
"text2img": "نص إلى صورة",
"img2img": "صورة إلى صورة",
"unifiedCanvas": "لوحة موحدة",
"nodes": "عقد",
"langArabic": "العربية",
"nodesDesc": "نظام مبني على العقد لإنتاج الصور قيد التطوير حاليًا. تبقى على اتصال مع تحديثات حول هذه الميزة المذهلة.",
"postProcessing": "معالجة بعد الإصدار",
"postProcessDesc1": "Invoke AI توفر مجموعة واسعة من ميزات المعالجة بعد الإصدار. تحسين الصور واستعادة الوجوه متاحين بالفعل في واجهة الويب. يمكنك الوصول إليهم من الخيارات المتقدمة في قائمة الخيارات في علامة التبويب Text To Image و Image To Image. يمكن أيضًا معالجة الصور مباشرةً باستخدام أزرار الإجراء على الصورة فوق عرض الصورة الحالي أو في العارض.",
"postProcessDesc2": "سيتم إصدار واجهة رسومية مخصصة قريبًا لتسهيل عمليات المعالجة بعد الإصدار المتقدمة.",
"postProcessDesc3": "واجهة سطر الأوامر Invoke AI توفر ميزات أخرى عديدة بما في ذلك Embiggen.",
"training": "تدريب",
"trainingDesc1": "تدفق خاص مخصص لتدريب تضميناتك الخاصة ونقاط التحقق باستخدام العكس النصي و دريم بوث من واجهة الويب.",
"trainingDesc2": " استحضر الذكاء الصناعي يدعم بالفعل تدريب تضمينات مخصصة باستخدام العكس النصي باستخدام السكريبت الرئيسي.",
"upload": "رفع",
"close": "إغلاق",
"load": "تحميل",
"back": "الى الخلف",
"statusConnected": "متصل",
"statusDisconnected": "غير متصل",
"statusError": "خطأ",
"statusPreparing": "جاري التحضير",
"statusProcessingCanceled": "تم إلغاء المعالجة",
"statusProcessingComplete": "اكتمال المعالجة",
"statusGenerating": "جاري التوليد",
"statusGeneratingTextToImage": "جاري توليد النص إلى الصورة",
"statusGeneratingImageToImage": "جاري توليد الصورة إلى الصورة",
"statusGeneratingInpainting": "جاري توليد Inpainting",
"statusGeneratingOutpainting": "جاري توليد Outpainting",
"statusGenerationComplete": "اكتمال التوليد",
"statusIterationComplete": "اكتمال التكرار",
"statusSavingImage": "جاري حفظ الصورة",
"statusRestoringFaces": "جاري استعادة الوجوه",
"statusRestoringFacesGFPGAN": "تحسيت الوجوه (جي إف بي جان)",
"statusRestoringFacesCodeFormer": "تحسين الوجوه (كود فورمر)",
"statusUpscaling": "تحسين الحجم",
"statusUpscalingESRGAN": "تحسين الحجم (إي إس آر جان)",
"statusLoadingModel": "تحميل النموذج",
"statusModelChanged": "تغير النموذج"
},
"gallery": {
"generations": "الأجيال",
"showGenerations": "عرض الأجيال",
"uploads": "التحميلات",
"showUploads": "عرض التحميلات",
"galleryImageSize": "حجم الصورة",
"galleryImageResetSize": "إعادة ضبط الحجم",
"gallerySettings": "إعدادات المعرض",
"maintainAspectRatio": "الحفاظ على نسبة الأبعاد",
"autoSwitchNewImages": "التبديل التلقائي إلى الصور الجديدة",
"singleColumnLayout": "تخطيط عمود واحد",
"pinGallery": "تثبيت المعرض",
"allImagesLoaded": "تم تحميل جميع الصور",
"loadMore": "تحميل المزيد",
"noImagesInGallery": "لا توجد صور في المعرض"
},
"hotkeys": {
"keyboardShortcuts": "مفاتيح الأزرار المختصرة",
"appHotkeys": "مفاتيح التطبيق",
"generalHotkeys": "مفاتيح عامة",
"galleryHotkeys": "مفاتيح المعرض",
"unifiedCanvasHotkeys": "مفاتيح اللوحةالموحدة ",
"invoke": {
"title": "أدعو",
"desc": "إنشاء صورة"
},
"cancel": {
"title": "إلغاء",
"desc": "إلغاء إنشاء الصورة"
},
"focusPrompt": {
"title": "تركيز الإشعار",
"desc": "تركيز منطقة الإدخال الإشعار"
},
"toggleOptions": {
"title": "تبديل الخيارات",
"desc": "فتح وإغلاق لوحة الخيارات"
},
"pinOptions": {
"title": "خيارات التثبيت",
"desc": "ثبت لوحة الخيارات"
},
"toggleViewer": {
"title": "تبديل العارض",
"desc": "فتح وإغلاق مشاهد الصور"
},
"toggleGallery": {
"title": "تبديل المعرض",
"desc": "فتح وإغلاق درابزين المعرض"
},
"maximizeWorkSpace": {
"title": "تكبير مساحة العمل",
"desc": "إغلاق اللوحات وتكبير مساحة العمل"
},
"changeTabs": {
"title": "تغيير الألسنة",
"desc": "التبديل إلى مساحة عمل أخرى"
},
"consoleToggle": {
"title": "تبديل الطرفية",
"desc": "فتح وإغلاق الطرفية"
},
"setPrompt": {
"title": "ضبط التشعب",
"desc": "استخدم تشعب الصورة الحالية"
},
"setSeed": {
"title": "ضبط البذور",
"desc": "استخدم بذور الصورة الحالية"
},
"setParameters": {
"title": "ضبط المعلمات",
"desc": "استخدم جميع المعلمات الخاصة بالصورة الحالية"
},
"restoreFaces": {
"title": "استعادة الوجوه",
"desc": "استعادة الصورة الحالية"
},
"upscale": {
"title": "تحسين الحجم",
"desc": "تحسين حجم الصورة الحالية"
},
"showInfo": {
"title": "عرض المعلومات",
"desc": "عرض معلومات البيانات الخاصة بالصورة الحالية"
},
"sendToImageToImage": {
"title": "أرسل إلى صورة إلى صورة",
"desc": "أرسل الصورة الحالية إلى صورة إلى صورة"
},
"deleteImage": {
"title": "حذف الصورة",
"desc": "حذف الصورة الحالية"
},
"closePanels": {
"title": "أغلق اللوحات",
"desc": "يغلق اللوحات المفتوحة"
},
"previousImage": {
"title": "الصورة السابقة",
"desc": "عرض الصورة السابقة في الصالة"
},
"nextImage": {
"title": "الصورة التالية",
"desc": "عرض الصورة التالية في الصالة"
},
"toggleGalleryPin": {
"title": "تبديل تثبيت الصالة",
"desc": "يثبت ويفتح تثبيت الصالة على الواجهة الرسومية"
},
"increaseGalleryThumbSize": {
"title": "زيادة حجم صورة الصالة",
"desc": "يزيد حجم الصور المصغرة في الصالة"
},
"decreaseGalleryThumbSize": {
"title": "انقاص حجم صورة الصالة",
"desc": "ينقص حجم الصور المصغرة في الصالة"
},
"selectBrush": {
"title": "تحديد الفرشاة",
"desc": "يحدد الفرشاة على اللوحة"
},
"selectEraser": {
"title": "تحديد الممحاة",
"desc": "يحدد الممحاة على اللوحة"
},
"decreaseBrushSize": {
"title": "تصغير حجم الفرشاة",
"desc": "يصغر حجم الفرشاة/الممحاة على اللوحة"
},
"increaseBrushSize": {
"title": "زيادة حجم الفرشاة",
"desc": "يزيد حجم فرشة اللوحة / الممحاة"
},
"decreaseBrushOpacity": {
"title": "تخفيض شفافية الفرشاة",
"desc": "يخفض شفافية فرشة اللوحة"
},
"increaseBrushOpacity": {
"title": "زيادة شفافية الفرشاة",
"desc": "يزيد شفافية فرشة اللوحة"
},
"moveTool": {
"title": "أداة التحريك",
"desc": "يتيح التحرك في اللوحة"
},
"fillBoundingBox": {
"title": "ملء الصندوق المحدد",
"desc": "يملأ الصندوق المحدد بلون الفرشاة"
},
"eraseBoundingBox": {
"title": "محو الصندوق المحدد",
"desc": "يمحو منطقة الصندوق المحدد"
},
"colorPicker": {
"title": "اختيار منتقي اللون",
"desc": "يختار منتقي اللون الخاص باللوحة"
},
"toggleSnap": {
"title": "تبديل التأكيد",
"desc": "يبديل تأكيد الشبكة"
},
"quickToggleMove": {
"title": "تبديل سريع للتحريك",
"desc": "يبديل مؤقتا وضع التحريك"
},
"toggleLayer": {
"title": "تبديل الطبقة",
"desc": "يبديل إختيار الطبقة القناع / الأساسية"
},
"clearMask": {
"title": "مسح القناع",
"desc": "مسح القناع بأكمله"
},
"hideMask": {
"title": "إخفاء الكمامة",
"desc": "إخفاء وإظهار الكمامة"
},
"showHideBoundingBox": {
"title": "إظهار / إخفاء علبة التحديد",
"desc": "تبديل ظهور علبة التحديد"
},
"mergeVisible": {
"title": "دمج الطبقات الظاهرة",
"desc": "دمج جميع الطبقات الظاهرة في اللوحة"
},
"saveToGallery": {
"title": "حفظ إلى صالة الأزياء",
"desc": "حفظ اللوحة الحالية إلى صالة الأزياء"
},
"copyToClipboard": {
"title": "نسخ إلى الحافظة",
"desc": "نسخ اللوحة الحالية إلى الحافظة"
},
"downloadImage": {
"title": "تنزيل الصورة",
"desc": "تنزيل اللوحة الحالية"
},
"undoStroke": {
"title": "تراجع عن الخط",
"desc": "تراجع عن خط الفرشاة"
},
"redoStroke": {
"title": "إعادة الخط",
"desc": "إعادة خط الفرشاة"
},
"resetView": {
"title": "إعادة تعيين العرض",
"desc": "إعادة تعيين عرض اللوحة"
},
"previousStagingImage": {
"title": "الصورة السابقة في المرحلة التجريبية",
"desc": "الصورة السابقة في منطقة المرحلة التجريبية"
},
"nextStagingImage": {
"title": "الصورة التالية في المرحلة التجريبية",
"desc": "الصورة التالية في منطقة المرحلة التجريبية"
},
"acceptStagingImage": {
"title": "قبول الصورة في المرحلة التجريبية",
"desc": "قبول الصورة الحالية في منطقة المرحلة التجريبية"
}
},
"modelManager": {
"modelManager": "مدير النموذج",
"model": "نموذج",
"allModels": "جميع النماذج",
"checkpointModels": "نقاط التحقق",
"diffusersModels": "المصادر المتعددة",
"safetensorModels": "التنسورات الآمنة",
"modelAdded": "تمت إضافة النموذج",
"modelUpdated": "تم تحديث النموذج",
"modelEntryDeleted": "تم حذف مدخل النموذج",
"cannotUseSpaces": "لا يمكن استخدام المساحات",
"addNew": "إضافة جديد",
"addNewModel": "إضافة نموذج جديد",
"addCheckpointModel": "إضافة نقطة تحقق / نموذج التنسور الآمن",
"addDiffuserModel": "إضافة مصادر متعددة",
"addManually": "إضافة يدويًا",
"manual": "يدوي",
"name": "الاسم",
"nameValidationMsg": "أدخل اسما لنموذجك",
"description": "الوصف",
"descriptionValidationMsg": "أضف وصفا لنموذجك",
"config": "تكوين",
"configValidationMsg": "مسار الملف الإعدادي لنموذجك.",
"modelLocation": "موقع النموذج",
"modelLocationValidationMsg": "موقع النموذج على الجهاز الخاص بك.",
"repo_id": "معرف المستودع",
"repoIDValidationMsg": "المستودع الإلكتروني لنموذجك",
"vaeLocation": "موقع فاي إي",
"vaeLocationValidationMsg": "موقع فاي إي على الجهاز الخاص بك.",
"vaeRepoID": "معرف مستودع فاي إي",
"vaeRepoIDValidationMsg": "المستودع الإلكتروني فاي إي",
"width": "عرض",
"widthValidationMsg": "عرض افتراضي لنموذجك.",
"height": "ارتفاع",
"heightValidationMsg": "ارتفاع افتراضي لنموذجك.",
"addModel": "أضف نموذج",
"updateModel": "تحديث النموذج",
"availableModels": "النماذج المتاحة",
"search": "بحث",
"load": "تحميل",
"active": "نشط",
"notLoaded": "غير محمل",
"cached": "مخبأ",
"checkpointFolder": "مجلد التدقيق",
"clearCheckpointFolder": "مسح مجلد التدقيق",
"findModels": "إيجاد النماذج",
"scanAgain": "فحص مرة أخرى",
"modelsFound": "النماذج الموجودة",
"selectFolder": "حدد المجلد",
"selected": "تم التحديد",
"selectAll": "حدد الكل",
"deselectAll": "إلغاء تحديد الكل",
"showExisting": "إظهار الموجود",
"addSelected": "أضف المحدد",
"modelExists": "النموذج موجود",
"selectAndAdd": "حدد وأضف النماذج المدرجة أدناه",
"noModelsFound": "لم يتم العثور على نماذج",
"delete": "حذف",
"deleteModel": "حذف النموذج",
"deleteConfig": "حذف التكوين",
"deleteMsg1": "هل أنت متأكد من رغبتك في حذف إدخال النموذج هذا من استحضر الذكاء الصناعي",
"deleteMsg2": "هذا لن يحذف ملف نقطة التحكم للنموذج من القرص الخاص بك. يمكنك إعادة إضافتهم إذا كنت ترغب في ذلك.",
"formMessageDiffusersModelLocation": "موقع النموذج للمصعد",
"formMessageDiffusersModelLocationDesc": "يرجى إدخال واحد على الأقل.",
"formMessageDiffusersVAELocation": "موقع فاي إي",
"formMessageDiffusersVAELocationDesc": "إذا لم يتم توفيره، سيبحث استحضر الذكاء الصناعي عن ملف فاي إي داخل موقع النموذج المعطى أعلاه."
},
"parameters": {
"images": "الصور",
"steps": "الخطوات",
"cfgScale": "مقياس الإعداد الذاتي للجملة",
"width": "عرض",
"height": "ارتفاع",
"sampler": "مزج",
"seed": "بذرة",
"randomizeSeed": "تبديل بذرة",
"shuffle": "تشغيل",
"noiseThreshold": "عتبة الضوضاء",
"perlinNoise": "ضجيج برلين",
"variations": "تباينات",
"variationAmount": "كمية التباين",
"seedWeights": "أوزان البذور",
"faceRestoration": "استعادة الوجه",
"restoreFaces": "استعادة الوجوه",
"type": "نوع",
"strength": "قوة",
"upscaling": "تصغير",
"upscale": "تصغير",
"upscaleImage": "تصغير الصورة",
"scale": "مقياس",
"otherOptions": "خيارات أخرى",
"seamlessTiling": "تجهيز بلاستيكي بدون تشققات",
"hiresOptim": "تحسين الدقة العالية",
"imageFit": "ملائمة الصورة الأولية لحجم الخرج",
"codeformerFidelity": "الوثوقية",
"seamSize": "حجم التشقق",
"seamBlur": "ضباب التشقق",
"seamStrength": "قوة التشقق",
"seamSteps": "خطوات التشقق",
"scaleBeforeProcessing": "تحجيم قبل المعالجة",
"scaledWidth": "العرض المحجوب",
"scaledHeight": "الارتفاع المحجوب",
"infillMethod": "طريقة التعبئة",
"tileSize": "حجم البلاطة",
"boundingBoxHeader": "صندوق التحديد",
"seamCorrectionHeader": "تصحيح التشقق",
"infillScalingHeader": "التعبئة والتحجيم",
"img2imgStrength": "قوة صورة إلى صورة",
"toggleLoopback": "تبديل الإعادة",
"invoke": "إطلاق",
"promptPlaceholder": "اكتب المحث هنا. [العلامات السلبية], (زيادة الوزن) ++, (نقص الوزن)--, التبديل و الخلط متاحة (انظر الوثائق)",
"sendTo": "أرسل إلى",
"sendToImg2Img": "أرسل إلى صورة إلى صورة",
"sendToUnifiedCanvas": "أرسل إلى الخطوط الموحدة",
"copyImage": "نسخ الصورة",
"copyImageToLink": "نسخ الصورة إلى الرابط",
"downloadImage": "تحميل الصورة",
"openInViewer": "فتح في العارض",
"closeViewer": "إغلاق العارض",
"usePrompt": "استخدم المحث",
"useSeed": "استخدام البذور",
"useAll": "استخدام الكل",
"useInitImg": "استخدام الصورة الأولية",
"info": "معلومات",
"deleteImage": "حذف الصورة",
"initialImage": "الصورة الأولية",
"showOptionsPanel": "إظهار لوحة الخيارات"
},
"settings": {
"models": "موديلات",
"displayInProgress": "عرض الصور المؤرشفة",
"saveSteps": "حفظ الصور كل n خطوات",
"confirmOnDelete": "تأكيد عند الحذف",
"displayHelpIcons": "عرض أيقونات المساعدة",
"useCanvasBeta": "استخدام مخطط الأزرار بيتا",
"enableImageDebugging": "تمكين التصحيح عند التصوير",
"resetWebUI": "إعادة تعيين واجهة الويب",
"resetWebUIDesc1": "إعادة تعيين واجهة الويب يعيد فقط ذاكرة التخزين المؤقت للمتصفح لصورك وإعداداتك المذكورة. لا يحذف أي صور من القرص.",
"resetWebUIDesc2": "إذا لم تظهر الصور في الصالة أو إذا كان شيء آخر غير ناجح، يرجى المحاولة إعادة تعيين قبل تقديم مشكلة على جيت هب.",
"resetComplete": "تم إعادة تعيين واجهة الويب. تحديث الصفحة لإعادة التحميل."
},
"toast": {
"tempFoldersEmptied": "تم تفريغ مجلد المؤقت",
"uploadFailed": "فشل التحميل",
"uploadFailedMultipleImagesDesc": "تم الصق صور متعددة، قد يتم تحميل صورة واحدة فقط في الوقت الحالي",
"uploadFailedUnableToLoadDesc": "تعذر تحميل الملف",
"downloadImageStarted": "بدأ تنزيل الصورة",
"imageCopied": "تم نسخ الصورة",
"imageLinkCopied": "تم نسخ رابط الصورة",
"imageNotLoaded": "لم يتم تحميل أي صورة",
"imageNotLoadedDesc": "لم يتم العثور على صورة لإرسالها إلى وحدة الصورة",
"imageSavedToGallery": "تم حفظ الصورة في المعرض",
"canvasMerged": "تم دمج الخط",
"sentToImageToImage": "تم إرسال إلى صورة إلى صورة",
"sentToUnifiedCanvas": "تم إرسال إلى لوحة موحدة",
"parametersSet": "تم تعيين المعلمات",
"parametersNotSet": "لم يتم تعيين المعلمات",
"parametersNotSetDesc": "لم يتم العثور على معلمات بيانية لهذه الصورة.",
"parametersFailed": "حدث مشكلة في تحميل المعلمات",
"parametersFailedDesc": "تعذر تحميل صورة البدء.",
"seedSet": "تم تعيين البذرة",
"seedNotSet": "لم يتم تعيين البذرة",
"seedNotSetDesc": "تعذر العثور على البذرة لهذه الصورة.",
"promptSet": "تم تعيين الإشعار",
"promptNotSet": "Prompt Not Set",
"promptNotSetDesc": "تعذر العثور على الإشعار لهذه الصورة.",
"upscalingFailed": "فشل التحسين",
"faceRestoreFailed": "فشل استعادة الوجه",
"metadataLoadFailed": "فشل تحميل البيانات الوصفية",
"initialImageSet": "تم تعيين الصورة الأولية",
"initialImageNotSet": "لم يتم تعيين الصورة الأولية",
"initialImageNotSetDesc": "تعذر تحميل الصورة الأولية"
},
"tooltip": {
"feature": {
"prompt": "هذا هو حقل التحذير. يشمل التحذير عناصر الإنتاج والمصطلحات الأسلوبية. يمكنك إضافة الأوزان (أهمية الرمز) في التحذير أيضًا، ولكن أوامر CLI والمعلمات لن تعمل.",
"gallery": "تعرض Gallery منتجات من مجلد الإخراج عندما يتم إنشاؤها. تخزن الإعدادات داخل الملفات ويتم الوصول إليها عن طريق قائمة السياق.",
"other": "ستمكن هذه الخيارات من وضع عمليات معالجة بديلة لـاستحضر الذكاء الصناعي. سيؤدي 'الزخرفة بلا جدران' إلى إنشاء أنماط تكرارية في الإخراج. 'دقة عالية' هي الإنتاج خلال خطوتين عبر صورة إلى صورة: استخدم هذا الإعداد عندما ترغب في توليد صورة أكبر وأكثر تجانبًا دون العيوب. ستستغرق الأشياء وقتًا أطول من نص إلى صورة المعتاد.",
"seed": "يؤثر قيمة البذور على الضوضاء الأولي الذي يتم تكوين الصورة منه. يمكنك استخدام البذور الخاصة بالصور السابقة. 'عتبة الضوضاء' يتم استخدامها لتخفيف العناصر الخللية في قيم CFG العالية (جرب مدى 0-10), و Perlin لإضافة ضوضاء Perlin أثناء الإنتاج: كلا منهما يعملان على إضافة التنوع إلى النتائج الخاصة بك.",
"variations": "جرب التغيير مع قيمة بين 0.1 و 1.0 لتغيير النتائج لبذور معينة. التغييرات المثيرة للاهتمام للبذور تكون بين 0.1 و 0.3.",
"upscale": "استخدم إي إس آر جان لتكبير الصورة على الفور بعد الإنتاج.",
"faceCorrection": "تصحيح الوجه باستخدام جي إف بي جان أو كود فورمر: يكتشف الخوارزمية الوجوه في الصورة وتصحح أي عيوب. قيمة عالية ستغير الصورة أكثر، مما يؤدي إلى وجوه أكثر جمالا. كود فورمر بدقة أعلى يحتفظ بالصورة الأصلية على حساب تصحيح وجه أكثر قوة.",
"imageToImage": "تحميل صورة إلى صورة أي صورة كأولية، والتي يتم استخدامها لإنشاء صورة جديدة مع التشعيب. كلما كانت القيمة أعلى، كلما تغيرت نتيجة الصورة. من الممكن أن تكون القيم بين 0.0 و 1.0، وتوصي النطاق الموصى به هو .25-.75",
"boundingBox": "مربع الحدود هو نفس الإعدادات العرض والارتفاع لنص إلى صورة أو صورة إلى صورة. فقط المنطقة في المربع سيتم معالجتها.",
"seamCorrection": "يتحكم بالتعامل مع الخطوط المرئية التي تحدث بين الصور المولدة في سطح اللوحة.",
"infillAndScaling": "إدارة أساليب التعبئة (المستخدمة على المناطق المخفية أو الممحوة في سطح اللوحة) والزيادة في الحجم (مفيدة لحجوزات الإطارات الصغيرة)."
}
},
"unifiedCanvas": {
"layer": "طبقة",
"base": "قاعدة",
"mask": "قناع",
"maskingOptions": "خيارات القناع",
"enableMask": "مكن القناع",
"preserveMaskedArea": "الحفاظ على المنطقة المقنعة",
"clearMask": "مسح القناع",
"brush": "فرشاة",
"eraser": "ممحاة",
"fillBoundingBox": "ملئ إطار الحدود",
"eraseBoundingBox": "مسح إطار الحدود",
"colorPicker": "اختيار اللون",
"brushOptions": "خيارات الفرشاة",
"brushSize": "الحجم",
"move": "تحريك",
"resetView": "إعادة تعيين العرض",
"mergeVisible": "دمج الظاهر",
"saveToGallery": "حفظ إلى المعرض",
"copyToClipboard": "نسخ إلى الحافظة",
"downloadAsImage": "تنزيل على شكل صورة",
"undo": "تراجع",
"redo": "إعادة",
"clearCanvas": "مسح سبيكة الكاملة",
"canvasSettings": "إعدادات سبيكة الكاملة",
"showIntermediates": "إظهار الوسطاء",
"showGrid": "إظهار الشبكة",
"snapToGrid": "الالتفاف إلى الشبكة",
"darkenOutsideSelection": "تعمية خارج التحديد",
"autoSaveToGallery": "حفظ تلقائي إلى المعرض",
"saveBoxRegionOnly": "حفظ منطقة الصندوق فقط",
"limitStrokesToBox": "تحديد عدد الخطوط إلى الصندوق",
"showCanvasDebugInfo": "إظهار معلومات تصحيح سبيكة الكاملة",
"clearCanvasHistory": "مسح تاريخ سبيكة الكاملة",
"clearHistory": "مسح التاريخ",
"clearCanvasHistoryMessage": "مسح تاريخ اللوحة تترك اللوحة الحالية عائمة، ولكن تمسح بشكل غير قابل للتراجع تاريخ التراجع والإعادة.",
"clearCanvasHistoryConfirm": "هل أنت متأكد من رغبتك في مسح تاريخ اللوحة؟",
"emptyTempImageFolder": "إفراغ مجلد الصور المؤقتة",
"emptyFolder": "إفراغ المجلد",
"emptyTempImagesFolderMessage": "إفراغ مجلد الصور المؤقتة يؤدي أيضًا إلى إعادة تعيين اللوحة الموحدة بشكل كامل. وهذا يشمل كل تاريخ التراجع / الإعادة والصور في منطقة التخزين وطبقة الأساس لللوحة.",
"emptyTempImagesFolderConfirm": "هل أنت متأكد من رغبتك في إفراغ مجلد الصور المؤقتة؟",
"activeLayer": "الطبقة النشطة",
"canvasScale": "مقياس اللوحة",
"boundingBox": "صندوق الحدود",
"scaledBoundingBox": "صندوق الحدود المكبر",
"boundingBoxPosition": "موضع صندوق الحدود",
"canvasDimensions": "أبعاد اللوحة",
"canvasPosition": "موضع اللوحة",
"cursorPosition": "موضع المؤشر",
"previous": "السابق",
"next": "التالي",
"accept": "قبول",
"showHide": "إظهار/إخفاء",
"discardAll": "تجاهل الكل",
"betaClear": "مسح",
"betaDarkenOutside": "ظل الخارج",
"betaLimitToBox": "تحديد إلى الصندوق",
"betaPreserveMasked": "المحافظة على المخفية"
}
}
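These nested `common` keys are what the backend's status strings resolve against, which is presumably why the web-server diff above switches `common:statusX` to `common.statusX`: in i18next (the library these locale files appear to be shaped for; an assumption, not confirmed by this diff), `.` is the default separator for nested keys, while `:` is reserved for namespaces. A minimal sketch:
// Minimal sketch, assuming i18next; the resource shape mirrors the locale file above.
import i18next from 'i18next';
i18next
  .init({
    lng: 'en',
    resources: {
      en: { translation: { common: { statusPreparing: 'Preparing' } } },
    },
  })
  .then(() => {
    console.log(i18next.t('common.statusPreparing')); // 'Preparing': '.' walks the nested object
    console.log(i18next.t('common:statusPreparing')); // ':' is parsed as a namespace separator, and no 'common' namespace exists here
  });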

View File

@@ -1,55 +0,0 @@
{
"hotkeysLabel": "Hotkeys",
"themeLabel": "Thema",
"languagePickerLabel": "Sprachauswahl",
"reportBugLabel": "Fehler melden",
"githubLabel": "Github",
"discordLabel": "Discord",
"settingsLabel": "Einstellungen",
"darkTheme": "Dunkel",
"lightTheme": "Hell",
"greenTheme": "Grün",
"langEnglish": "Englisch",
"langRussian": "Russisch",
"langItalian": "Italienisch",
"langPortuguese": "Portugiesisch",
"langFrench": "Französich",
"langGerman": "Deutsch",
"langSpanish": "Spanisch",
"text2img": "Text zu Bild",
"img2img": "Bild zu Bild",
"unifiedCanvas": "Unified Canvas",
"nodes": "Knoten",
"nodesDesc": "Ein knotenbasiertes System, für die Erzeugung von Bildern, ist derzeit in der Entwicklung. Bleiben Sie gespannt auf Updates zu dieser fantastischen Funktion.",
"postProcessing": "Nachbearbeitung",
"postProcessDesc1": "InvokeAI bietet eine breite Palette von Nachbearbeitungsfunktionen. Bildhochskalierung und Gesichtsrekonstruktion sind bereits in der WebUI verfügbar. Sie können sie über das Menü Erweiterte Optionen der Reiter Text in Bild und Bild in Bild aufrufen. Sie können Bilder auch direkt bearbeiten, indem Sie die Schaltflächen für Bildaktionen oberhalb der aktuellen Bildanzeige oder im Viewer verwenden.",
"postProcessDesc2": "Eine spezielle Benutzeroberfläche wird in Kürze veröffentlicht, um erweiterte Nachbearbeitungs-Workflows zu erleichtern.",
"postProcessDesc3": "Die InvokeAI Kommandozeilen-Schnittstelle bietet verschiedene andere Funktionen, darunter Embiggen.",
"training": "Training",
"trainingDesc1": "Ein spezieller Arbeitsablauf zum Trainieren Ihrer eigenen Embeddings und Checkpoints mit Textual Inversion und Dreambooth über die Weboberfläche.",
"trainingDesc2": "InvokeAI unterstützt bereits das Training von benutzerdefinierten Embeddings mit Textual Inversion unter Verwendung des Hauptskripts.",
"upload": "Upload",
"close": "Schließen",
"load": "Laden",
"statusConnected": "Verbunden",
"statusDisconnected": "Getrennt",
"statusError": "Fehler",
"statusPreparing": "Vorbereiten",
"statusProcessingCanceled": "Verarbeitung abgebrochen",
"statusProcessingComplete": "Verarbeitung komplett",
"statusGenerating": "Generieren",
"statusGeneratingTextToImage": "Erzeugen von Text zu Bild",
"statusGeneratingImageToImage": "Erzeugen von Bild zu Bild",
"statusGeneratingInpainting": "Erzeuge Inpainting",
"statusGeneratingOutpainting": "Erzeuge Outpainting",
"statusGenerationComplete": "Generierung abgeschlossen",
"statusIterationComplete": "Iteration abgeschlossen",
"statusSavingImage": "Speichere Bild",
"statusRestoringFaces": "Gesichter restaurieren",
"statusRestoringFacesGFPGAN": "Gesichter restaurieren (GFPGAN)",
"statusRestoringFacesCodeFormer": "Gesichter restaurieren (CodeFormer)",
"statusUpscaling": "Hochskalierung",
"statusUpscalingESRGAN": "Hochskalierung (ESRGAN)",
"statusLoadingModel": "Laden des Modells",
"statusModelChanged": "Modell Geändert"
}

View File

@@ -1,62 +0,0 @@
{
"hotkeysLabel": "Hotkeys",
"themeLabel": "Theme",
"languagePickerLabel": "Language Picker",
"reportBugLabel": "Report Bug",
"githubLabel": "Github",
"discordLabel": "Discord",
"settingsLabel": "Settings",
"darkTheme": "Dark",
"lightTheme": "Light",
"greenTheme": "Green",
"langEnglish": "English",
"langRussian": "Russian",
"langItalian": "Italian",
"langBrPortuguese": "Portuguese (Brazilian)",
"langGerman": "German",
"langPortuguese": "Portuguese",
"langFrench": "French",
"langPolish": "Polish",
"langSimplifiedChinese": "Simplified Chinese",
"langSpanish": "Spanish",
"langJapanese": "Japanese",
"langDutch": "Dutch",
"langUkranian": "Ukranian",
"text2img": "Text To Image",
"img2img": "Image To Image",
"unifiedCanvas": "Unified Canvas",
"nodes": "Nodes",
"nodesDesc": "A node based system for the generation of images is under development currently. Stay tuned for updates about this amazing feature.",
"postProcessing": "Post Processing",
"postProcessDesc1": "Invoke AI offers a wide variety of post processing features. Image Upscaling and Face Restoration are already available in the WebUI. You can access them from the Advanced Options menu of the Text To Image and Image To Image tabs. You can also process images directly, using the image action buttons above the current image display or in the viewer.",
"postProcessDesc2": "A dedicated UI will be released soon to facilitate more advanced post processing workflows.",
"postProcessDesc3": "The Invoke AI Command Line Interface offers various other features including Embiggen.",
"training": "Training",
"trainingDesc1": "A dedicated workflow for training your own embeddings and checkpoints using Textual Inversion and Dreambooth from the web interface.",
"trainingDesc2": "InvokeAI already supports training custom embeddings using Textual Inversion using the main script.",
"upload": "Upload",
"close": "Close",
"load": "Load",
"back": "Back",
"statusConnected": "Connected",
"statusDisconnected": "Disconnected",
"statusError": "Error",
"statusPreparing": "Preparing",
"statusProcessingCanceled": "Processing Canceled",
"statusProcessingComplete": "Processing Complete",
"statusGenerating": "Generating",
"statusGeneratingTextToImage": "Generating Text To Image",
"statusGeneratingImageToImage": "Generating Image To Image",
"statusGeneratingInpainting": "Generating Inpainting",
"statusGeneratingOutpainting": "Generating Outpainting",
"statusGenerationComplete": "Generation Complete",
"statusIterationComplete": "Iteration Complete",
"statusSavingImage": "Saving Image",
"statusRestoringFaces": "Restoring Faces",
"statusRestoringFacesGFPGAN": "Restoring Faces (GFPGAN)",
"statusRestoringFacesCodeFormer": "Restoring Faces (CodeFormer)",
"statusUpscaling": "Upscaling",
"statusUpscalingESRGAN": "Upscaling (ESRGAN)",
"statusLoadingModel": "Loading Model",
"statusModelChanged": "Model Changed"
}

View File

@@ -1,57 +0,0 @@
{
"hotkeysLabel": "Atajos de teclado",
"themeLabel": "Tema",
"languagePickerLabel": "Selector de idioma",
"reportBugLabel": "Reportar errores",
"githubLabel": "GitHub",
"discordLabel": "Discord",
"settingsLabel": "Ajustes",
"darkTheme": "Oscuro",
"lightTheme": "Claro",
"greenTheme": "Verde",
"langEnglish": "Inglés",
"langRussian": "Ruso",
"langItalian": "Italiano",
"langBrPortuguese": "Portugués (Brasil)",
"langGerman": "Alemán",
"langPortuguese": "Portugués",
"langFrench": "French",
"langPolish": "Polish",
"langSpanish": "Español",
"text2img": "Texto a Imagen",
"img2img": "Imagen a Imagen",
"unifiedCanvas": "Lienzo Unificado",
"nodes": "Nodos",
"nodesDesc": "Un sistema de generación de imágenes basado en nodos, actualmente se encuentra en desarrollo. Mantente pendiente a nuestras actualizaciones acerca de esta fabulosa funcionalidad.",
"postProcessing": "Post-procesamiento",
"postProcessDesc1": "Invoke AI ofrece una gran variedad de funciones de post-procesamiento, El aumento de tamaño y Restauración de Rostros ya se encuentran disponibles en la interfaz web, puedes acceder desde el menú de Opciones Avanzadas en las pestañas de Texto a Imagen y de Imagen a Imagen. También puedes acceder a estas funciones directamente mediante el botón de acciones en el menú superior de la imagen actual o en el visualizador",
"postProcessDesc2": "Una interfaz de usuario dedicada se lanzará pronto para facilitar flujos de trabajo de postprocesamiento más avanzado.",
"postProcessDesc3": "La Interfaz de Línea de Comandos de Invoke AI ofrece muchas otras características, incluyendo -Embiggen-.",
"training": "Entrenamiento",
"trainingDesc1": "Un flujo de trabajo dedicado para el entrenamiento de sus propios -embeddings- y puntos de control utilizando Inversión Textual y Dreambooth desde la interfaz web.",
"trainingDesc2": "InvokeAI already supports training custom embeddings using Textual Inversion using the main script.",
"trainingDesc2": "InvokeAI ya soporta el entrenamiento de -embeddings- personalizados utilizando la Inversión Textual mediante el script principal.",
"upload": "Subir imagen",
"close": "Cerrar",
"load": "Cargar",
"statusConnected": "Conectado",
"statusDisconnected": "Desconectado",
"statusError": "Error",
"statusPreparing": "Preparando",
"statusProcessingCanceled": "Procesamiento Cancelado",
"statusProcessingComplete": "Procesamiento Completo",
"statusGenerating": "Generando",
"statusGeneratingTextToImage": "Generando Texto a Imagen",
"statusGeneratingImageToImage": "Generando Imagen a Imagen",
"statusGeneratingInpainting": "Generando pintura interior",
"statusGeneratingOutpainting": "Generando pintura exterior",
"statusGenerationComplete": "Generación Completa",
"statusIterationComplete": "Iteración Completa",
"statusSavingImage": "Guardando Imagen",
"statusRestoringFaces": "Restaurando Rostros",
"statusRestoringFacesGFPGAN": "Restaurando Rostros (GFPGAN)",
"statusRestoringFacesCodeFormer": "Restaurando Rostros (CodeFormer)",
"statusUpscaling": "Aumentando Tamaño",
"statusUpscalingESRGAN": "Restaurando Rostros(ESRGAN)",
"statusLoadingModel": "Cargando Modelo",
"statusModelChanged": "Modelo cambiado"
}

Some files were not shown because too many files have changed in this diff.