Compare commits

...

738 Commits

Author SHA1 Message Date
Lincoln Stein
303a2495c7 fix broken url fetch in preload_models.py 2022-10-30 17:43:48 -04:00
Lincoln Stein
23d54ee69e fix mps crash with safety checker 2022-10-30 16:54:06 -04:00
Lincoln Stein
330b417a7b installer pulls from release-candidate-2-1 2022-10-30 12:20:28 -04:00
Lincoln Stein
f70af7afb9 remove debug image gen from outcrop 2022-10-30 12:19:43 -04:00
Lincoln Stein
e7368d7231 preload_models interactively downloads sd model files 2022-10-30 12:19:05 -04:00
Lincoln Stein
07c3c57cde folded in the 1-click installer
- The installer will pull from the branch release-candidate-2-1 for the
  purposes of testing.
- This needs to be changed to "main" before release.
2022-10-30 12:13:11 -04:00
Lincoln Stein
b774c8afc3 Merge branch 'main' of https://github.com/cmdr2/InvokeAI into cmdr2-main 2022-10-30 11:10:54 -04:00
Lincoln Stein
231dfe01f4 fix incorrect thresholding reporting for karras noise; close #1300 2022-10-30 10:35:55 -04:00
mauwii
5319796e58 add --no-interactive to preload_models step 2022-10-30 08:26:51 -04:00
Lincoln Stein
39daa5aea7 Merge branch 'integrate-models-into-test-matrix' of https://github.com/mauwii/stable-diffusion into mauwii-integrate-models-into-test-matrix 2022-10-30 01:09:29 -04:00
Lincoln Stein
a7517ce0de add pointer to hugging face concepts library 2022-10-30 00:54:00 -04:00
Lincoln Stein
fbfffe028f add --no-interactive mode 2022-10-30 00:33:48 -04:00
Lincoln Stein
19b6c671a6 further improvements to preload_models script
- User can choose to download just recommended models, customize list to download,
  or skip downloading altogether.
- Does direct download to models directory instead of to HuggingFace cache
- Able to resume interrupted downloads
2022-10-30 00:17:05 -04:00
spezialspezial
c2fab45a6e Prevent indexing error for mode RGB
I have not explicitly tested mode P
2022-10-29 18:20:53 -04:00
mauwii
0596ebd5a9 **IMPORTANT FIX**
- the pull_request_target trigger does not verify the requester's commits
- it is meant more for automations like labeling, commenting, ...
  - i.e. tasks where a token with write access to the repo is necessary
2022-10-29 23:48:25 +02:00
mauwii
338efa5a7a remove debug branch 2022-10-29 22:39:37 +02:00
mauwii
5d4d8f54df Merge branch 'update-workflows' into development 2022-10-29 22:34:49 +02:00
mauwii
3d4a9c2deb remove redundant information from pipeline names 2022-10-29 22:25:41 +02:00
cmdr2
74fad5f6ed Merge branch 'development' into main 2022-10-30 00:00:26 +05:30
cmdr2
9c264b42c3 Make create_installers.sh executable 2022-10-29 23:41:37 +05:30
cmdr2
09ee1b1877 Run the installer create script inside its own directory 2022-10-29 23:41:00 +05:30
cmdr2
4b27d8821d Script to create the installer zips 2022-10-29 23:40:03 +05:30
cmdr2
c49d9c2611 Open the developer console on windows, and print some debugging info 2022-10-29 23:26:21 +05:30
cmdr2
4134e2e9da Refactored invoke.sh to open a dev console only if the user wants it 2022-10-29 23:22:24 +05:30
mauwii
e4a212dfca properly integrate stable-diffusion-model in matrix 2022-10-29 13:52:16 -04:00
mauwii
19bb185fd9 remove '-O' from curl arguments 2022-10-29 13:52:16 -04:00
mauwii
1eaa58c970 use proper bearer authentication to download model
instead of --user username:token
2022-10-29 13:52:16 -04:00
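For reference, a minimal sketch of the change described above: downloading the model with a bearer-token Authorization header instead of `--user username:token`. The `MODEL_URL` and `HUGGINGFACE_TOKEN` variable names are placeholders, not necessarily the workflow's actual names.

~~~
# Hedged sketch: bearer-token authentication instead of --user username:token.
# MODEL_URL and HUGGINGFACE_TOKEN are placeholder names.
curl -L -H "Authorization: Bearer ${HUGGINGFACE_TOKEN}" \
  -o sd-v1-4.ckpt "${MODEL_URL}"
~~~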
mauwii
4245c9e0cd fix environment-mac.yml - tested on x64 and arm64
- using conda packages where possible according to conda docs
2022-10-29 13:50:06 -04:00
cmdr2
2b078c0d6e Don't need break 2022-10-29 23:14:58 +05:30
Lincoln Stein
0f4413da7d Merge branch 'inpainting-rebase' of https://github.com/psychedelicious/stable-diffusion into psychedelicious-inpainting-rebase 2022-10-29 13:42:00 -04:00
cmdr2
91b491b7e7 Don't need to create the models folder using this script 2022-10-29 23:07:48 +05:30
cmdr2
61e8916141 Merge branch 'development' into main 2022-10-29 23:05:45 +05:30
mauwii
da5de6a240 remove some bloating caches
since the free 10 GB limit has already been exceeded multiple times
2022-10-29 18:50:37 +02:00
Lincoln Stein
fdf9b1c40c fix CLI inpainting crash 2022-10-29 11:47:06 -04:00
cmdr2
bc7bfed0d3 Show the next steps to the user; Allow starting the command-line or web UI 2022-10-29 21:16:40 +05:30
Lincoln Stein
b532e6dd17 wording and formatting tweaks 2022-10-29 11:28:17 -04:00
Lincoln Stein
b46921c22d move model installation docs into installation dir 2022-10-29 11:15:57 -04:00
Lincoln Stein
13f26a99b8 documentation and usability fixes 2022-10-29 10:37:38 -04:00
mauwii
3d265e28ff call invoke.py with model parameter 2022-10-29 16:21:31 +02:00
cmdr2
29d9ce03ab Redownload micromamba if the download failed midway; Start the script in the script's directory, not where it was run from 2022-10-29 19:47:55 +05:30
Lincoln Stein
3caa95ced9 add more step-by-step documentation and links 2022-10-29 09:18:48 -04:00
cmdr2
94cf660848 Merge branch 'main' of github.com:cmdr2/InvokeAI 2022-10-29 18:42:18 +05:30
psychedelicious
e1cb5b8251 Fixes: inpaint canvas not cleared when its src image is deleted 2022-10-29 23:53:47 +11:00
psychedelicious
101fe9efa9 Adds full-app image drag and drop, also image paste 2022-10-29 23:34:21 +11:00
psychedelicious
2e9463089d Fixes bounding box being able to escape canvas 2022-10-29 23:32:16 +11:00
mauwii
8127f0691e fix os matrix 2022-10-29 10:14:24 +02:00
mauwii
b55dcf5943 remove id from test prompts 2022-10-29 10:11:54 +02:00
mauwii
bb5fe98e94 rename matrix-job, use macOS-12, add ids to steps 2022-10-29 10:03:03 +02:00
psychedelicious
0290cd6814 Fixes bounding box slider & adds canvas caching on gallery actions 2022-10-29 18:32:01 +11:00
mauwii
fc4d07f198 reenable preload models, move huggingface-cache...
... on top of conda env activation, since `~/.cache` also contains pip
2022-10-29 08:59:23 +02:00
mauwii
e7aeaa310c run without preload_models.py
since an upcoming update makes it interactive
2022-10-29 08:39:28 +02:00
mauwii
85b5fcd5e1 fix cache hit expression in download sd-model step
- also update sd-cache display name to include current model
2022-10-29 08:28:17 +02:00
mauwii
e5d0c9c224 include sd-switch in artifact name 2022-10-29 08:08:15 +02:00
psychedelicious
162e420e9c Adds socketio event for Ctrl+C cancel, style fixes 2022-10-29 16:57:05 +11:00
psychedelicious
bfbae09a9c Merge remote-tracking branch 'upstream/development' into inpainting-rebase 2022-10-29 16:35:46 +11:00
mauwii
d2e8ecbd4b fix missing matrix-parameters 2022-10-29 07:35:43 +02:00
psychedelicious
a701e4f90b Fixes compilation error; builds new bundle 2022-10-29 16:32:21 +11:00
psychedelicious
f22f81b4ff Refactors gallery resizing, persists width 2022-10-29 16:30:51 +11:00
mauwii
63202e2467 try to run matrix with different models 2022-10-29 07:25:19 +02:00
Lincoln Stein
ef68a419f1 preload_models.py script downloads the weight files
- user can select which weight files to download using huggingface cache
- user must log in to huggingface, generate an access token, and accept
  license terms the very first time this is run. After that, everything
  works automatically.
- added placeholder for docs for installing models
- also got rid of unused config files. hopefully they weren't needed
  for textual inversion, but I don't think so.
2022-10-29 01:02:45 -04:00
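A hedged sketch of the first-run flow described in this commit; the script path matches the commit, while the `huggingface-cli login` step is an assumption about how the access token is supplied.

~~~
# One-time setup (assumption: huggingface-cli handles the token login)
huggingface-cli login              # paste the access token generated on huggingface.co
python scripts/preload_models.py   # then select which weight files to download
~~~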
psychedelicious
9fc6ee0c4c WIP: Fixes gallery resize bug 2022-10-29 14:50:04 +11:00
mauwii
ea65650883 add conda pkgs cache, remove conda env cache
also directly setup correct conda env
2022-10-29 05:49:12 +02:00
psychedelicious
5d76c57ce2 Misc fixes 2022-10-29 14:18:07 +11:00
psychedelicious
2c250a515e Misc fixes 2022-10-29 14:17:38 +11:00
psychedelicious
4204740cb2 Fixes failed inpainting with float bounding box 2022-10-29 14:17:18 +11:00
mauwii
bd3ba596c2 fix hashFiles function 2022-10-29 04:58:10 +02:00
mauwii
0a89d350d9 update conda cache to use actions/cache@v3 2022-10-29 04:54:01 +02:00
mauwii
b7fcf6dc04 readd conda env cache 2022-10-29 04:49:43 +02:00
psychedelicious
accb1779cb Fixes react dom warnings 2022-10-29 13:36:07 +11:00
psychedelicious
387f39407a Fixes bounding box hotkey conditions 2022-10-29 13:28:53 +11:00
psychedelicious
6a32adb7ed Fixes brush strokes not compositing correct on initial load 2022-10-29 13:12:34 +11:00
mauwii
3ab3a7d37a use same environment-mac.yml as in #1289 2022-10-29 04:01:44 +02:00
mauwii
da5fd10bb9 pin nomkl 2022-10-29 03:34:12 +02:00
psychedelicious
9291fde960 Fixes responsive images (omg finally), removes app padding 2022-10-29 12:22:07 +11:00
mauwii
31ef15210d pin versions correlating between arm64 and x64 2022-10-29 03:21:26 +02:00
mauwii
aa01657678 very fast on m1, synced with output of main branch 2022-10-29 02:43:52 +02:00
psychedelicious
6fb6bc6d7f Fixes #940 2022-10-29 11:10:48 +11:00
mauwii
da33e038ca unpin pytorch / torchvision
also loosen versions of scipy, flask-socketio, flask_cors
2022-10-29 02:06:33 +02:00
mauwii
78f7094a0b set torchmetrics >=0.7.0 and use py-opencv=4.6.0 2022-10-29 01:24:38 +02:00
psychedelicious
0b046c95ef Merge remote-tracking branch 'origin/user-image-uploads' into inpainting-rebase 2022-10-29 09:41:07 +11:00
psychedelicious
c13d7aea56 Merge pull request #1 from blessedcoolant/user-image-uploads-styling
Styling Updates
2022-10-29 09:40:23 +11:00
mauwii
f7a47c1b67 reenable caching of sd model 2022-10-28 23:56:13 +02:00
mauwii
6c34b89cfb loosen pytorch and torchvision version 2022-10-28 23:51:43 +02:00
mauwii
7138faf5d3 include stable-diffusion-model in job name 2022-10-28 23:51:17 +02:00
mauwii
0d3a931e88 update mac environment
use conda packages where possible as mentioned in conda docs
https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-pkgs.html#installing-non-conda-packages
2022-10-28 23:11:07 +02:00
blessedcoolant
861e825ebf Styling Updates 2022-10-29 09:27:44 +13:00
mauwii
1ca1ab594c fix matrix 2022-10-28 21:11:18 +02:00
mauwii
9425389240 properly integrate stable-diffusion-model into matrix 2022-10-28 21:07:59 +02:00
mauwii
9f16ff1774 remove cache for debugging 2022-10-28 21:01:06 +02:00
mauwii
2ac3c9e8fd remove -O from curl arguments 2022-10-28 20:58:18 +02:00
mauwii
4a9209c5e8 add debug branch to trigger run 2022-10-28 20:42:14 +02:00
mauwii
b78d718357 use proper bearer authentication to download model
instead of --user username:token
2022-10-28 20:41:04 +02:00
mauwii
104466f5c0 use sd-model link from matrix
this enables running tests with different models
2022-10-28 13:47:45 -04:00
mauwii
2ecdfca52f also update create-caches.yml
imho this could also be deleted, not sure what it is used for
2022-10-28 13:47:45 -04:00
mauwii
e81df1a701 add forgotten output-file 2022-10-28 13:47:45 -04:00
mauwii
61013e8eee prevent secret leakage with pull_request_target
- in this way the action is used from the base repository
- also use new secret HUGGINGFACE_TOKEN (username:token)
  - e.g. `noreply@github.com:hf_lkaugfklagwrjglaslzfgkjzzf`
- change pr prompt file to validate_pr_prompt.txt
2022-10-28 13:47:45 -04:00
mauwii
48d4fccd61 add tests/validate_pr_prompt.txt 2022-10-28 13:47:45 -04:00
psychedelicious
2859af386c Fixes places where isCancelable could get stuck on 2022-10-29 04:44:03 +11:00
psychedelicious
8dee3387fd Merge branch 'user-image-uploads' into inpainting-rebase
Adds
- Separate user uploads gallery
- Drag and drop uploads for img2img and inpainting
- Many bugfixes, scss refactored
2022-10-29 04:40:38 +11:00
psychedelicious
63eeac49f8 Blacklists isCancelable 2022-10-29 04:25:27 +11:00
psychedelicious
d5fdee72d3 Builds fresh bundle 2022-10-29 04:25:27 +11:00
psychedelicious
765092eb12 Fixes bug with gallery not closing 2022-10-29 04:25:27 +11:00
psychedelicious
2c9747fd41 Fixes bug with infinite redux action loop 2022-10-29 04:25:27 +11:00
psychedelicious
62898b0f8f Adds gallery auto-switch toggle; ref #1272 2022-10-29 04:25:27 +11:00
psychedelicious
ac7ee9d0a5 Puts model switching into accordion, styling 2022-10-29 04:25:27 +11:00
psychedelicious
0adb7d4676 Adds error handling to & improves model switching UI 2022-10-29 04:25:27 +11:00
blessedcoolant
27a7980dad Update Site Header Icons Layout 2022-10-29 04:25:27 +11:00
psychedelicious
a5915ccd2c Adds initial model switching UI 2022-10-29 04:25:27 +11:00
psychedelicious
d6815f61ee Fixes build error 2022-10-29 04:25:27 +11:00
psychedelicious
d71f11f55c Fixes typo 2022-10-29 04:25:27 +11:00
psychedelicious
ed45dca7c1 Improves bounding box hotkeys/UX 2022-10-29 04:25:27 +11:00
psychedelicious
dd71066391 Fixes more bounding box bugs 2022-10-29 04:25:27 +11:00
psychedelicious
6f51b2078e Improves bounding box behavior 2022-10-29 04:25:27 +11:00
psychedelicious
d035e0e811 Fixes bounding box move ending when mouse leaves canvas 2022-10-29 04:25:27 +11:00
psychedelicious
55a8da0f02 Adds lock bounding box 2022-10-29 04:25:27 +11:00
Kyle Schouviller
43de16cae4 Don't try to tile fill if image doesn't have an alpha layer 2022-10-29 04:25:27 +11:00
psychedelicious
320cbdd62d Builds fresh bundle 2022-10-29 04:25:27 +11:00
blessedcoolant
f8dce07486 Add Space Hotkey to legend
Add space as a hotkey for moving the bounding box to the hotkeys modal legend.
2022-10-29 04:25:27 +11:00
blessedcoolant
37382042c1 Adding Bounding Box Reset Disables
Add disable conditions for reset buttons on bounding box width and height
2022-10-29 04:25:27 +11:00
blessedcoolant
2af8139029 Styling Updates
- Moved Inpaint Replace higher in the options panel
- Fixed the inpaint replace switch getting cut off slightly by adding a bit of padding.
2022-10-29 04:25:27 +11:00
blessedcoolant
a5c77ff926 Fully Updated Hotkeys + Categorization
Added the entire list of available hotkeys to the hotkey module and categorized them accordingly.
2022-10-29 04:25:27 +11:00
blessedcoolant
15df6c148a Fix galleryImageObjectFit not persisting 2022-10-29 04:25:27 +11:00
blessedcoolant
e6226b45de Change default of inpaintReplace to 1 2022-10-29 04:25:27 +11:00
psychedelicious
ab1e207765 Fixes gallery closing on context menu 2022-10-29 04:25:27 +11:00
psychedelicious
d2ed8883f7 Adds support for inpaint_replace 2022-10-29 04:25:27 +11:00
psychedelicious
3ddf1f6c3e Removes mask lines check 2022-10-29 04:25:27 +11:00
psychedelicious
5395707280 Adds Maintain Aspect Ratio checkbox to ImageGallery 2022-10-29 04:25:27 +11:00
blessedcoolant
710e465054 Add Inpainting Settings
- Enable and Disable Inpainting Box (with backend support)
- Enable and Disable Bounding Box Darkening
- Reset Bounding Box
2022-10-29 04:25:27 +11:00
psychedelicious
30bd79ffa1 Adds fn to checkIsMaskEmpty, tidy 2022-10-29 04:25:27 +11:00
psychedelicious
20c83d7568 Fixes bug with overflowing bounding box 2022-10-29 04:25:27 +11:00
psychedelicious
67e0e97eda Increases size of bounding box handles 2022-10-29 04:25:27 +11:00
psychedelicious
6bebc679c4 Fixes invoke button working when only eraser strokes 2022-10-29 04:25:26 +11:00
psychedelicious
9406b95518 Fixes missing border on brush 2022-10-29 04:25:26 +11:00
psychedelicious
8d8f93fd00 Fixes bug where bounding box could escape bounds of canvas 2022-10-29 04:25:26 +11:00
psychedelicious
20a3875f32 Fixes edge cases with bounding box 2022-10-29 04:25:26 +11:00
psychedelicious
8ab428e588 Adds bounding box handles 2022-10-29 04:25:26 +11:00
psychedelicious
e5dcae5fff Merges development 2022-10-29 04:25:26 +11:00
psychedelicious
329cd8a38b Adds inpainting image reset button 2022-10-29 04:13:15 +11:00
psychedelicious
39f0995d78 Reverts models.yaml 2022-10-29 04:12:36 +11:00
psychedelicious
0855ab4173 Adds user/result galleries, refactors workarea CSS 2022-10-29 03:54:46 +11:00
Lincoln Stein
fe7ab6e480 fix crash in !del_model command 2022-10-28 11:20:04 -04:00
mauwii
f8dd2df953 remove conda cache 2022-10-28 11:12:42 -04:00
mauwii
3795bec037 remove debug branch, set fail-fast to false
to find out if only mac or ubuntu is broken
(otherwise, if one fails, the other one is automatically cancelled)
2022-10-28 11:12:42 -04:00
jakehl
35face48da adds models.user.yml to .gitignore 2022-10-28 10:43:22 -04:00
Damian at mba
864d080502 handle all unicode characters 2022-10-28 10:39:12 -04:00
psychedelicious
3a7b495167 Initial user uploads implementation 2022-10-28 23:15:03 +11:00
psychedelicious
9d1594cbcc Builds fresh bundle 2022-10-28 21:10:23 +11:00
psychedelicious
c48a1092f7 Fixes bug with gallery not closing 2022-10-28 21:03:33 +11:00
psychedelicious
35dba1381c Fixes bug with infinite redux action loop 2022-10-28 20:50:10 +11:00
psychedelicious
631dce3aca Adds gallery auto-switch toggle; ref #1272 2022-10-28 20:22:01 +11:00
psychedelicious
ea6e998094 Puts model switching into accordion, styling 2022-10-28 20:04:57 +11:00
psychedelicious
d551de6e06 Adds error handling to & improves model switching UI 2022-10-28 18:51:50 +11:00
blessedcoolant
7ce1cf6f3e Update Site Header Icons Layout 2022-10-28 19:49:50 +13:00
psychedelicious
2e89997d29 Adds initial model switching UI 2022-10-28 16:47:15 +11:00
psychedelicious
a7e2a7037a Fixes build error 2022-10-28 15:44:04 +11:00
psychedelicious
75d8fc77c2 Fixes typo 2022-10-28 15:00:45 +11:00
psychedelicious
4ea954fd66 Improves bounding box hotkeys/UX 2022-10-28 14:53:07 +11:00
mauwii
8b8c1068d9 fix misleading name to Build container
since it is not pushing the container anywhere
2022-10-27 23:14:31 -04:00
mauwii
7793dbb4b4 change pull_request_target to pull_request
since no secrets are used in this action this should be totally fine.
2022-10-27 23:14:31 -04:00
mauwii
77b93ad0c2 remove debug branch from action trigger 2022-10-27 23:14:31 -04:00
mauwii
f99671b764 fix tag for repositories containing uppercase 2022-10-27 23:14:31 -04:00
psychedelicious
a8a30065a4 Fixes more bounding box bugs 2022-10-28 14:14:28 +11:00
Lincoln Stein
05b8de5300 fix --hires to support inpainting model 2022-10-27 23:12:21 -04:00
Lincoln Stein
387f796ebe Merge branch 'development' into development 2022-10-27 23:04:04 -04:00
psychedelicious
27ba91e74d Improves bounding box behavior 2022-10-28 13:59:52 +11:00
Lincoln Stein
3033331f65 remove unneeded warnings from attention.py 2022-10-27 22:50:06 -04:00
Lincoln Stein
362b234cd1 fix long-standing issue with metadata retrieval
The Args object would crap out when trying to retrieve metadata from
an image file that did not contain InvokeAI-generated metadata, such
as a JPG. This corrects that and returns dummy values (seed of zero,
prompt of '') to avoid downstream breakage.
2022-10-27 22:43:34 -04:00
psychedelicious
bbe53841e4 Fixes bounding box move ending when mouse leaves canvas 2022-10-28 11:50:39 +11:00
Kyle Schouviller
a825210bd3 Merge branch 'inpainting-rebase' of https://github.com/psychedelicious/stable-diffusion into inpainting-rebase 2022-10-27 17:49:30 -07:00
Kyle Schouviller
88fb2a6b46 Don't try to tile fill if image doesn't have an alpha layer 2022-10-27 17:49:26 -07:00
psychedelicious
042d3e866f Adds lock bounding box 2022-10-28 11:37:46 +11:00
psychedelicious
0ea711e520 Builds fresh bundle 2022-10-28 11:28:58 +11:00
blessedcoolant
ef5f9600e6 Add Space Hotkey to legend
Add space as a hotkey for moving the bounding box to the hotkeys modal legend.
2022-10-28 11:28:45 +11:00
blessedcoolant
acdffb1503 Adding Bounding Box Reset Disables
Add disable conditions for reset buttons on bounding box width and height
2022-10-28 11:28:39 +11:00
blessedcoolant
6679e5be69 Styling Updates
- Moved Inpaint Replace higher in the options panel
- Fixed the inpaint replace switch getting cut off slightly by adding a bit of padding.
2022-10-28 11:28:28 +11:00
blessedcoolant
89ad2e55d9 Fully Updated Hotkeys + Categorization
Added the entire list of available hotkeys to the hotkey module and categorized them accordingly.
2022-10-28 11:28:20 +11:00
blessedcoolant
f8dff5b6c2 Fix galleryImageObjectFit not persisting 2022-10-28 11:28:13 +11:00
blessedcoolant
104b0ef0ba Change default of inpaintReplace to 1 2022-10-28 11:28:03 +11:00
psychedelicious
07cdf6e9cb Fixes gallery closing on context menu 2022-10-28 11:27:41 +11:00
psychedelicious
4cf9c965d4 Adds support for inpaint_replace 2022-10-28 11:27:41 +11:00
psychedelicious
4039e9e368 Removes mask lines check 2022-10-28 11:27:41 +11:00
psychedelicious
38fd0668ba Adds Maintain Aspect Ratio checkbox to ImageGallery 2022-10-28 11:27:41 +11:00
blessedcoolant
5cae8206f9 Add Inpainting Settings
- Enable and Disable Inpainting Box (with backend support)
- Enable and Disable Bounding Box Darkening
- Reset Bounding Box
2022-10-28 11:27:41 +11:00
psychedelicious
3ce60161d2 Adds fn to checkIsMaskEmpty, tidy 2022-10-28 11:27:41 +11:00
psychedelicious
00b5466f0d Fixes bug with overflowing bounding box 2022-10-28 11:27:41 +11:00
psychedelicious
6eeef7c17e Increases size of bounding box handles 2022-10-28 11:27:41 +11:00
psychedelicious
219da47576 Fixes invoke button working when only eraser strokes 2022-10-28 11:27:41 +11:00
psychedelicious
47106eeeea Fixes missing border on brush 2022-10-28 11:27:41 +11:00
psychedelicious
07e21acab5 Fixes bug where bounding box could escape bounds of canvas 2022-10-28 11:27:41 +11:00
psychedelicious
65acdfb09b Fixes edge cases with bounding box 2022-10-28 11:27:35 +11:00
psychedelicious
9e2ce00f7b Adds bounding box handles 2022-10-28 11:27:35 +11:00
psychedelicious
44599a239f Merges development 2022-10-28 11:27:22 +11:00
Lincoln Stein
7b46d5f823 complete inpaint/outpaint documentation
- still need to write INSTALLING-MODELS.md documentation.
2022-10-27 18:43:17 -04:00
Lincoln Stein
2115874587 resolve conflicts with outpainting implementation 2022-10-27 18:06:38 -04:00
Lincoln Stein
cd5141f3d1 fix issues with outpaint merge 2022-10-27 18:02:08 -04:00
Lincoln Stein
b815aa2130 Merge branch 'development' into outpaint 2022-10-27 17:17:34 -04:00
Lincoln Stein
19a6e904ec resolved whitespace difference 2022-10-27 17:12:22 -04:00
Lincoln Stein
1200fbd3bd add threshold for switchover from Karras to LDM noise schedule 2022-10-27 17:07:50 -04:00
mauwii
343ae8b7af update docker docs 2022-10-27 17:06:50 -04:00
mauwii
442f584afa add action to build the container
it does not push the container but verifies buildability
2022-10-27 17:06:50 -04:00
mauwii
55482d7ce3 add conda env for linux-aarch64
- neither environment.yml nor environment-mac.yml was working
2022-10-27 17:06:50 -04:00
mauwii
0c3de595df update entrypoint
- when run without arguments it starts the web interface
- can also be run with your own arguments
  - if you do so, the web interface does not start unless you request it
2022-10-27 17:06:50 -04:00
mauwii
38ff75c7ea add script to easily run the container 2022-10-27 17:06:50 -04:00
mauwii
963e0f8a53 add env.sh with variables shared in run and build 2022-10-27 17:06:50 -04:00
mauwii
12f40cbbeb add build script which also creates volume
it needs a token from huggingface to be able to download the checkpoint
2022-10-27 17:06:50 -04:00
mauwii
e524fb2086 add .dockerignore to repo-root
since transferring the 2 GB context would not be really productive
2022-10-27 17:06:50 -04:00
mauwii
eb7ccc356f update Dockerfile 2022-10-27 17:06:50 -04:00
Taylor Kems
4635836ebc Update IMG2IMG.md 2022-10-27 17:06:49 -04:00
Lincoln Stein
d25bf7a55a cut over from karras to model noise schedule for higher steps
The k_samplers come with a "karras" noise schedule which performs
very well at low step counts but becomes noisy at higher ones.

This commit introduces a threshold (currently 30 steps) at which the
k samplers will switch over from using karras to the older model
noise schedule.
2022-10-27 17:06:49 -04:00
Damian at mba
3539f0a1da slightly more verbose docs 2022-10-27 22:50:32 +02:00
Damian at mba
737a7f779b tweak prompt syntax docs 2022-10-27 22:48:06 +02:00
Damian at mba
71dcc17fa0 fix prompt syntax doc table error 2022-10-27 22:45:59 +02:00
Damian at mba
a90ce61b1b fix broken images 2022-10-27 22:43:21 +02:00
Damian at mba
d43167ac0b improve documentation of "attention weighting" syntax 2022-10-27 22:41:06 +02:00
Damian at mba
245cf606a3 be more forgiving about prompts with ((words)) 2022-10-27 22:36:33 +02:00
Lincoln Stein
943616044a Merge branch 'switch-ksampler-noise-scheduler-adaptively' into development
- This sets a step switchover point at which the k-samplers stop using the
  Karras noise schedule and start using the LatentDiffusion noise schedule.
  The advantage of this is that the Karras schedule produces excellent
  results at low step counts but starts to become unstable at high
  steps.

- A new command argument, --karras_max, lets the user set where the
  switchover occurs. The default is 29 (steps 1-29 use Karras;
  30 or greater use LDM).

- Tildebyte, sorry to do a fast forward three-way merge for this
  but rebasing was just too painful due to extensive recent
  changes to the diffuser code.
2022-10-27 16:11:26 -04:00
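A hedged usage example for the new argument described above (the scripts/ path is an assumption):

~~~
# Move the Karras -> LDM noise-schedule switchover from the default 29 steps to 40
python scripts/invoke.py --karras_max 40
~~~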
Lincoln Stein
943808b925 add threshold for switchover from Karras to LDM noise schedule 2022-10-27 15:50:32 -04:00
Damian at mba
30745f163d add one more test case 2022-10-27 21:18:08 +02:00
Damian at mba
e20108878c fix attention weight inside .swap() 2022-10-27 21:17:23 +02:00
Damian at mba
f73d349dfe refactor hybrid and cross attention control codepaths for readability 2022-10-27 19:40:37 +02:00
Damian at mba
dc86fc92ce fix crash parsing empty prompt "" 2022-10-27 19:01:54 +02:00
Lincoln Stein
aa785c3ef1 ready for merge after documentation added 2022-10-27 11:55:00 -04:00
mauwii
fb4feb380b update docker docs 2022-10-27 11:51:36 -04:00
mauwii
9b15b228b8 add action to build the container
it does not push the container but verifies buildability
2022-10-27 11:51:36 -04:00
mauwii
99eb7e6ef2 add conda env for linux-aarch64
- neither environment.yml nor environment-mac.yml was working
2022-10-27 11:51:36 -04:00
mauwii
bf50a68eb5 update entrypoint
- when run without arguments it starts the web interface
- can also be run with your own arguments
  - if you do so, the web interface does not start unless you request it
2022-10-27 11:51:36 -04:00
mauwii
67a7d46a29 add script to easily run the container 2022-10-27 11:51:36 -04:00
mauwii
3e2cf8a259 add env.sh with variables shared in run and build 2022-10-27 11:51:36 -04:00
mauwii
624fe4794b add build script which also creates volume
it needs a token from huggingface to be able to download the checkpoint
2022-10-27 11:51:36 -04:00
mauwii
44731f8a37 add .dockerignore to repo-root
since transferring the 2 GB context would not be really productive
2022-10-27 11:51:36 -04:00
mauwii
b2a3c5cbe8 update Dockerfile 2022-10-27 11:51:36 -04:00
Taylor Kems
e9f690bf9d Update IMG2IMG.md 2022-10-27 11:16:15 -04:00
Lincoln Stein
0eb07b7488 Merge branch 'outpaint' of https://github.com/Kyle0654/InvokeAI into Kyle0654-outpaint 2022-10-27 09:16:40 -04:00
Lincoln Stein
16e7cbdb38 tweaks to documentation and call signature for advanced prompting 2022-10-27 08:30:09 -04:00
Damian at mba
135c62f1a4 fix issue with hot-dog, improve () suppression 2022-10-27 07:37:48 -04:00
Lincoln Stein
582e19056a Merge branch 'development' of github.com:invoke-ai/InvokeAI into development 2022-10-27 02:06:25 -04:00
Lincoln Stein
52de5c8b33 documentation fix 2022-10-27 01:58:20 -04:00
Lincoln Stein
799dc6d0df acceptable integration of new prompting system and inpainting
This was a difficult merge because both PR #1108 and #1243 made
changes to obscure parts of the diffusion code.

- prompt weighting, merging and cross-attention working
  - cross-attention does not work with runwayML inpainting
    model, but weighting and merging are tested and working
- CLI command parsing code rewritten in order to get embedded
  quotes right
- --hires now works with runwayML inpainting
- --embiggen does not work with runwayML and will give an error
- Added an --invert option to invert masks applied to inpainting
- Updated documentation
2022-10-27 01:51:35 -04:00
Damian at mba
79689e87ce fix crash making embeddings from too-long prompts with attention weights 2022-10-26 22:42:17 -04:00
Lincoln Stein
0d0481ce75 inpaint model progress
- working with plain prompts, weighted prompts and merge prompts
- not tested with prompt2prompt
2022-10-26 22:40:01 -04:00
Lincoln Stein
869d9e22c7 documentation fix 2022-10-26 22:37:30 -04:00
Lincoln Stein
3f77b68a9d fix mishandling of embedded quotes in prompt 2022-10-26 18:27:35 -04:00
Lincoln Stein
2daf187bdb working with 1.4, 1.5, not with inpainting 1.5 2022-10-26 18:25:48 -04:00
cmdr2
e73a2d68b5 Add Library\usr\bin to the PATH 2022-10-26 15:38:08 -04:00
cmdr2
2dd5c0696d Repo URL constant 2022-10-26 15:38:08 -04:00
cmdr2
f25ad03011 header 2022-10-26 15:38:08 -04:00
cmdr2
c00da1702f Single-file installer script, micromamba will now be downloaded automatically on the first run; Activate the base environment before running the rest of the conda commands; Don't download conda/git again if it's already been installed by the installer 2022-10-26 15:38:08 -04:00
cmdr2
83f20c23aa Use the correct conda os arch for mac x64 2022-10-26 15:38:08 -04:00
cmdr2
0050176d57 Don't continue if micromamba was required but didn't initialize properly 2022-10-26 15:38:08 -04:00
cmdr2
f7bb90234d Fix line endings for mac 2022-10-26 15:38:08 -04:00
cmdr2
1d3c43b67f Add a pause before the script ends 2022-10-26 15:38:08 -04:00
cmdr2
ef505d2bc5 Update How to create the installers.md 2022-10-26 15:38:08 -04:00
cmdr2
a9a59a3046 Prefer the locally installed conda over any global conda installation 2022-10-26 15:38:08 -04:00
cmdr2
da012e1bfd Prefer the locally installed conda over any global conda installation; activate the env before updating 2022-10-26 15:38:08 -04:00
cmdr2
90c8aa716d Typo in bash path 2022-10-26 15:38:08 -04:00
cmdr2
94cd20de05 Typo in the bash script 2022-10-26 15:38:08 -04:00
cmdr2
14725f9d59 Initialize conda for the shell before running the activate 2022-10-26 15:38:08 -04:00
cmdr2
c6c146f54f Remove -y in linux script 2022-10-26 15:38:08 -04:00
cmdr2
90d9d6ea00 Typo in install.sh 2022-10-26 15:38:08 -04:00
cmdr2
1f62517636 Don't close after updating 2022-10-26 15:38:08 -04:00
cmdr2
29eea93592 Fix the tmp file used for checking the existence of git and conda commands 2022-10-26 15:38:08 -04:00
cmdr2
7179cc7f25 Remove unnecessary quotes while checking if git and conda exist 2022-10-26 15:38:08 -04:00
cmdr2
b12c8a28d7 Updated the installer to simplify the use of micromamba, and use conda for the actual installation; Update conda during the update script 2022-10-26 15:38:08 -04:00
cmdr2
8c2e82cc54 Make the linux/mac scripts executable 2022-10-26 15:38:08 -04:00
cmdr2
3ae094b673 Create the env using -y 2022-10-26 15:38:08 -04:00
cmdr2
74e6ce3e6a Check for missing python/git before activating micromamba 2022-10-26 15:38:08 -04:00
cmdr2
71426d200e 1-click installer using micromamba to install git and python into a contained environment (if necessary) before running the normal installation script 2022-10-26 15:38:08 -04:00
Lincoln Stein
9b7159720f resolve conflicts between PR #1108 and #1243 2022-10-26 15:37:24 -04:00
Kyle Schouviller
e7c2b90bd1 Merge branch 'outpaint' of https://github.com/Kyle0654/InvokeAI into outpaint 2022-10-26 12:12:17 -07:00
Kyle Schouviller
d05373d35a Force RGB for img2img 2022-10-26 12:12:08 -07:00
Kyle Schouviller
bd8bb8c80b Adding outpainting implementation (as part of inpaint). 2022-10-26 12:12:08 -07:00
Kyle Schouviller
dac1ab0a05 Better inpainting color-correction 2022-10-26 12:12:08 -07:00
Kyle Schouviller
2a44411f5b Force RGB for img2img 2022-10-26 12:09:38 -07:00
Lincoln Stein
2f1c1e7695 Merge branch 'fix-prompts' of https://github.com/damian0815/InvokeAI into merge-prompt-and-inpaint-model 2022-10-26 08:50:55 -04:00
Lincoln Stein
2b6d78e436 minor cleanups
- remove --fnformat from canonicalized dream prompt arguments
  (not needed for image reproducibility)
- add -tm to canonicalized dream prompt arguments
  (definitely needed for image reproducibility)
2022-10-26 08:32:54 -04:00
Lincoln Stein
b1da13a984 minor cleanups
- change default model back to 1.4
- remove --fnformat from canonicalized dream prompt arguments
  (not needed for image reproducibility)
- add -tm to canonicalized dream prompt arguments
  (definitely needed for image reproducibility)
2022-10-26 08:29:56 -04:00
cmdr2
d03947a6ee Add Library\usr\bin to the PATH 2022-10-26 16:39:21 +05:30
cmdr2
422f2ecc91 Repo URL constant 2022-10-26 15:38:49 +05:30
cmdr2
f73a116f43 header 2022-10-26 15:35:42 +05:30
cmdr2
8aa40714e3 Single-file installer script, micromamba will now be downloaded automatically on the first run; Activate the base environment before running the rest of the conda commands; Don't download conda/git again if it's already been installed by the installer 2022-10-26 15:30:48 +05:30
Kyle Schouviller
eaf6d46a7b Adding outpainting implementation (as part of inpaint). 2022-10-26 00:39:36 -07:00
Lincoln Stein
906dafe3cd make variations work with inpainting model 2022-10-26 00:18:31 -04:00
Lincoln Stein
d3047c7cb0 do not encode init image in starting latent 2022-10-25 22:44:42 -04:00
tyler
62412f8398 fixing aspect ratio on hires 2022-10-25 21:28:50 -05:00
Kyle Schouviller
f1ca789097 Better inpainting color-correction 2022-10-25 17:10:28 -07:00
Lincoln Stein
4104ac6270 copied workflows from main to dev 2022-10-25 17:27:38 -04:00
Lincoln Stein
8d5a225011 allow for empty prompts (useful for inpaint removal) 2022-10-25 17:26:00 -04:00
Lincoln Stein
ca2f579f43 prevent crash when providing empty quoted prompt ("") 2022-10-25 15:56:07 -04:00
Lincoln Stein
b1a2f4ab44 Merge branch 'inpaint-model' of github.com:invoke-ai/InvokeAI into inpaint-model 2022-10-25 14:00:18 -04:00
Lincoln Stein
3c1ef48fe2 fix crash when doing img2img with ddim sampler and SD 1.5 2022-10-25 13:57:42 -04:00
Lincoln Stein
c732fd0740 Merge branch 'inpaint-model' of github.com:invoke-ai/InvokeAI into inpaint-model 2022-10-25 13:21:00 -04:00
Lincoln Stein
04c8937fb6 Merge branch 'inpaint-model' of github.com:invoke-ai/InvokeAI into inpaint-model 2022-10-25 13:17:20 -04:00
Lincoln Stein
4352eb6628 stop crashes on non-square images 2022-10-25 13:17:06 -04:00
Lincoln Stein
1ae269b8e0 Merge branch 'development' into inpaint-model 2022-10-25 11:50:08 -04:00
Lincoln Stein
dd07392045 Merge branch 'inpaint-model' of github.com:invoke-ai/InvokeAI into inpaint-model 2022-10-25 11:45:24 -04:00
Lincoln Stein
e33971fe2c plms works, bugs quashed
- The plms sampler now works with custom inpainting model
- Quashed bug that was causing generation on normal models to fail (oops!)
- Can now generate non-square images with custom inpainting model

Credits for advice and assistance during porting:

@any-winter-4079 (http://github.com/any-winter-4079)
@db3000 (Danny Beer http://github.com/db3000)
2022-10-25 11:44:01 -04:00
Lincoln Stein
83e1c39ab8 plms works, bugs quashed
- The plms sampler now works with custom inpainting model
- Quashed bug that was causing generation on normal models to fail (oops!)
- Can now generate non-square images with custom inpainting model
2022-10-25 11:42:30 -04:00
Lincoln Stein
b101be041b add support for runwayML custom inpainting model
This is still a work in progress but seems functional. It supports
inpainting, txt2img and img2img on the ddim and k* samplers (plms
still needs work, but I know what to do).

To test this, get the file `sd-v1-5-inpainting.ckpt` from
https://huggingface.co/runwayml/stable-diffusion-inpainting and place it
at `models/ldm/stable-diffusion-v1/sd-v1-5-inpainting.ckpt`

Launch invoke.py with --model inpainting-1.5 and proceed as usual.

Caveats:

1. The inpainting model takes about 800 Mb more memory than the standard
   1.5 model. This model will not work on 4 GB cards.

2. The inpainting model is temperamental. It wants you to describe the
   entire scene and not just the masked area to replace. So if you want
   to replace the parrot on a man's shoulder with a crow, the prompt
   "crow" may fail. Try "man with a crow on shoulder" instead. The
   symptom of a failed inpainting is that the area will be erased and
   replaced with background.

3. This has not been tested well. Please report bugs.
2022-10-25 10:45:15 -04:00
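A condensed sketch of the setup steps from the commit message above; the model path and the --model flag are quoted from the commit, while the scripts/ location of invoke.py is an assumption.

~~~
# Download sd-v1-5-inpainting.ckpt from
# https://huggingface.co/runwayml/stable-diffusion-inpainting and place it at:
#   models/ldm/stable-diffusion-v1/sd-v1-5-inpainting.ckpt
# Then launch with the inpainting model and proceed as usual:
python scripts/invoke.py --model inpainting-1.5
~~~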
psychedelicious
909740f430 Builds fresh bundle 2022-10-26 03:04:14 +13:00
Lincoln Stein
aaf7a4f1d3 inpaint and txt2img working with ddim sampler 2022-10-25 10:00:28 -04:00
Lincoln Stein
99d23c4d81 fix merge conflicts 2022-10-25 07:30:26 -04:00
Lincoln Stein
5e8d1ca19f resolve conflicts 2022-10-25 07:17:54 -04:00
Lincoln Stein
fb4dc7eaf9 Merge branch 'development' into fix-disabled-prompt 2022-10-25 07:13:57 -04:00
Lincoln Stein
175c7bddfc add missing inpainting yaml file 2022-10-25 07:12:31 -04:00
Lincoln Stein
71a1e0d0e1 Merge branch 'development' into vite-relative-paths 2022-10-25 07:09:14 -04:00
Jari Vetoniemi
ce1bfbc32d nix: add shell.nix file 2022-10-25 07:08:31 -04:00
Lincoln Stein
a2e53892ec fixed syntax errors; a channel mismatch issue remains 2022-10-25 00:47:13 -04:00
Lincoln Stein
7a923beb4c add missing image needed by nsfw filter 2022-10-25 00:39:00 -04:00
Lincoln Stein
be8a992b85 add missing file 2022-10-25 00:38:24 -04:00
Lincoln Stein
03353ce978 Merge branch 'development' of github.com:invoke-ai/InvokeAI into development 2022-10-25 00:31:58 -04:00
Lincoln Stein
c8f4a04196 fix clipseg install problem; close #1150 2022-10-25 00:31:43 -04:00
Lincoln Stein
9bef643bf5 fix a few more metadata bugs
- facetool and upscale arguments now written into metadata
- cleaned up handling of !fetch command
2022-10-25 00:31:43 -04:00
Lincoln Stein
f6b31d51e0 fix incorrect handling of single quotes in prompts 2022-10-25 00:31:43 -04:00
Lincoln Stein
62e1cb48fd developer documentation fixes 2022-10-25 00:31:43 -04:00
Lincoln Stein
543464182f inpainting fix per PR #1218
- This is a merge of the final version of PR #1218 "Inpainting
  Improvements"

  Various merge conflicts made it easier to commit directly.

Author: Kyle0654
Co-Author: lstein
2022-10-25 00:31:42 -04:00
Lincoln Stein
83a3cc9eb4 start support for 1.5 inpainting model, not complete 2022-10-25 00:30:48 -04:00
Damian at mba
d12ae3bab0 documentation for new prompt syntax 2022-10-24 14:58:38 +02:00
Damian at mba
61a4897b71 re-enable tokenization logging 2022-10-24 11:49:47 +02:00
Damian at mba
194c8e1c2e Merge branch 'development' into fix-prompts 2022-10-24 11:28:37 +02:00
Damian at mba
44e4090909 re-enable legacy blend syntax 2022-10-24 11:16:52 +02:00
Damian at mba
0564397ee6 cleanup logs 2022-10-24 11:16:43 +02:00
Lincoln Stein
3081b6b7dd fix clipseg install problem; close #1150 2022-10-23 23:46:16 -04:00
Lincoln Stein
37d38f196e fix a few more metadata bugs
- facetool and upscale arguments now written into metadata
- cleaned up handling of !fetch command
2022-10-23 23:01:32 -04:00
Lincoln Stein
17aee48734 fix incorrect handling of single quotes in prompts 2022-10-23 23:01:32 -04:00
Lincoln Stein
9cdd78c6cb developer documentation fixes 2022-10-23 22:56:58 -04:00
Lincoln Stein
5561a95232 inpainting fix per PR #1218
- This is a merge of the final version of PR #1218 "Inpainting
  Improvements"

  Various merge conflicts made it easier to commit directly.

Author: Kyle0654
Co-Author: lstein
2022-10-23 22:52:32 -04:00
Lincoln Stein
27f0f3e52b Merge branch 'inpaint-improvement' of https://github.com/Kyle0654/InvokeAI into add-safety-checker 2022-10-23 22:37:43 -04:00
Lincoln Stein
b159b2fe42 add support for safety checker (NSFW filter)
Now you can activate the Hugging Face `diffusers` library safety check
for NSFW and other potentially disturbing imagery.

To turn on the safety check, pass --safety_checker at the command
line. For developers, the flag is `safety_checker=True` passed to
ldm.generate.Generate(). Once the safety checker is turned on, it
cannot be turned off unless you reinitialize a new Generate object.

When the safety checker is active, suspect images will be blurred and
a warning icon is added. There is also a warning message printed in
the CLI, but it can be a little hard to see because of its positioning
in the output stream.

There is a slight but noticeable delay when the safety checker runs.

Note that invisible watermarking is *not* currently implemented. The
watermark code distributed by the CompViz distribution uses a library
that does not seem to be able to retrieve the watermarks it creates,
and it does not appear that Hugging Face `diffusers` or other SD
distributions are doing any watermarking.
2022-10-23 22:26:18 -04:00
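A hedged example of the two ways to enable the checker, per the commit message above (the scripts/ path is an assumption; the Generate() keyword is quoted from the commit):

~~~
# CLI: turn on the NSFW safety checker for the whole session
python scripts/invoke.py --safety_checker
# Python API, per the commit: ldm.generate.Generate(safety_checker=True)
# Once enabled, it stays on until a new Generate object is created.
~~~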
Damian at mba
63902f3d34 also apply conditioning during hires fix upscale 2022-10-24 02:08:55 +02:00
Damian at mba
1fb15d5c81 fix hires fix 2022-10-24 02:02:42 +02:00
Damian at mba
cc2042bd4c keep the effect of _start and _end arguments consistent across k* and other samplers 2022-10-24 01:43:35 +02:00
Damian at mba
ee4273d760 fix step count on ddim 2022-10-24 01:23:43 +02:00
Damian at mba
2619a0b286 allow longer substitutions without quotes for cross attention swap 2022-10-24 00:22:14 +02:00
Damian at mba
92c6a3812d catch fewer exceptions in prompt2image 2022-10-24 00:06:53 +02:00
Kyle Schouviller
230527b1fb Add back model description for 1.4 2022-10-23 14:08:41 -07:00
Kyle Schouviller
bfe36c9f8b Revert unintended model changes 2022-10-23 14:08:05 -07:00
Kyle Schouviller
40388b5b90 Merge branch 'development' into inpaint-improvement 2022-10-23 14:06:30 -07:00
Kyle Schouviller
0c34554170 Merge branch 'inpaint-improvement' of https://github.com/Kyle0654/InvokeAI into inpaint-improvement 2022-10-23 14:02:52 -07:00
Damian at mba
b0eb864a25 move attention weighting operations to postfix 2022-10-23 23:01:53 +02:00
Kyle Schouviller
1264cc2d36 Switch from dilate to erode to fix inpaint edges. Default model to 1.4 instead of 1.5. 2022-10-23 14:01:06 -07:00
Damian at mba
f7cd98c238 tweak default cross-attention values 2022-10-23 20:38:28 +02:00
Damian at mba
8e7d744c60 fix bad math 2022-10-23 19:43:35 +02:00
Damian at mba
9210bf7d3a also parse shape_freedom keyword 2022-10-23 19:40:00 +02:00
Damian at mba
8f35819ddf add shape_freedom arg to .swap() 2022-10-23 19:38:31 +02:00
Damian at mba
04d93f0445 for k* samplers, estimate step_index from sigma 2022-10-23 16:26:50 +02:00
Lincoln Stein
b7ce5b4f1b Merge branch 'development' of github.com:invoke-ai/InvokeAI into development 2022-10-23 09:42:28 -04:00
Lincoln Stein
7e27f189cf minor fixes to inpaint code
1. If tensors are passed to inpaint as init_image and/or init_mask, then
   the post-generation image fixup code will be skipped.

2. Post-generation image fixup will work with either a black and white "L"
   or "RGB"  mask, or an "RGBA" mask.
2022-10-23 09:33:15 -04:00
Lincoln Stein
9472945299 ported code refactor changes from PR #1221
- pass a PIL.Image to img2img and inpaint rather than tensor
- To support clipseg, inpaint needs to accept an "L" or "1" format
  mask. Made the appropriate change.
2022-10-23 09:33:15 -04:00
Lincoln Stein
f25c1f900f add support for loading VAE autoencoders
To add a VAE autoencoder to an existing model:

1. Download the appropriate autoencoder and put it into
   models/ldm/stable-diffusion

   Note that you MUST use a VAE that was written for the
   original CompViz Stable Diffusion codebase. For v1.4,
   that would be the file named vae-ft-mse-840000-ema-pruned.ckpt
   that you can download from https://huggingface.co/stabilityai/sd-vae-ft-mse-original

2. Edit config/models.yaml to contain the following stanza, modifying `weights`
   and `vae` as required to match the weights and vae model file names. There is
   no requirement to rename the VAE file.

~~~
stable-diffusion-1.4:
  weights: models/ldm/stable-diffusion-v1/sd-v1-4.ckpt
  description: Stable Diffusion v1.4
  config: configs/stable-diffusion/v1-inference.yaml
  vae: models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
  width: 512
  height: 512
~~~

3. Alternatively from within the `invoke.py` CLI, you may use the command
   `!editmodel stable-diffusion-1.4` to bring up a simple editor that will
   allow you to add the path to the VAE.

4. If you are just installing InvokeAI for the first time, you can also
   use `!import_model models/ldm/stable-diffusion/sd-v1.4.ckpt` instead
   to create the configuration from scratch.

5. That's it!
2022-10-23 09:33:15 -04:00
Kyle Schouviller
493eaa7389 Improve inpainting by color-correcting result and pasting init image over result using mask 2022-10-23 09:33:15 -04:00
Lincoln Stein
ce6d618e3b outcropping improvements
- catch syntax errors in the outcrop coordinates
- work (after a fashion) on non-Invoke generated images
2022-10-23 09:33:00 -04:00
wfng92
8254ca9492 Removed duplicate fix_func for MPS 2022-10-23 09:32:59 -04:00
Damian at mba
7d677a63b8 cross attention control options 2022-10-23 14:58:25 +02:00
Kyle Schouviller
a2fb2e0d6b Merge branch 'development' into inpaint-improvement 2022-10-22 20:12:04 -07:00
Lincoln Stein
93cba3fba5 Kyle0654 inpaint improvement - with refactoring from PR #1221 (#1)
* Removed duplicate fix_func for MPS

* add support for loading VAE autoencoders

To add a VAE autoencoder to an existing model:

1. Download the appropriate autoencoder and put it into
   models/ldm/stable-diffusion

   Note that you MUST use a VAE that was written for the
   original CompViz Stable Diffusion codebase. For v1.4,
   that would be the file named vae-ft-mse-840000-ema-pruned.ckpt
   that you can download from https://huggingface.co/stabilityai/sd-vae-ft-mse-original

2. Edit config/models.yaml to contain the following stanza, modifying `weights`
   and `vae` as required to match the weights and vae model file names. There is
   no requirement to rename the VAE file.

~~~
stable-diffusion-1.4:
  weights: models/ldm/stable-diffusion-v1/sd-v1-4.ckpt
  description: Stable Diffusion v1.4
  config: configs/stable-diffusion/v1-inference.yaml
  vae: models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
  width: 512
  height: 512
~~~

3. Alternatively from within the `invoke.py` CLI, you may use the command
   `!editmodel stable-diffusion-1.4` to bring up a simple editor that will
   allow you to add the path to the VAE.

4. If you are just installing InvokeAI for the first time, you can also
   use `!import_model models/ldm/stable-diffusion/sd-v1.4.ckpt` instead
   to create the configuration from scratch.

5. That's it!

* ported code refactor changes from PR #1221

- pass a PIL.Image to img2img and inpaint rather than tensor
- To support clipseg, inpaint needs to accept an "L" or "1" format
  mask. Made the appropriate change.

* minor fixes to inpaint code

1. If tensors are passed to inpaint as init_image and/or init_mask, then
   the post-generation image fixup code will be skipped.

2. Post-generation image fixup will work with either a black and white "L"
   or "RGB"  mask, or an "RGBA" mask.

Co-authored-by: wfng92 <43742196+wfng92@users.noreply.github.com>
2022-10-22 20:09:38 -07:00
Lincoln Stein
3e48b9ff85 cut over from karras to model noise schedule for higher steps
The k_samplers come with a "karras" noise schedule which performs
very well at low step counts but becomes noisy at higher ones.

This commit introduces a threshold (currently 30 steps) at which the
k samplers will switch over from using karras to the older model
noise schedule.
2022-10-22 23:02:50 -04:00
Lincoln Stein
a956bf9fda Merge branch 'development' into fix-disabled-prompt 2022-10-22 22:46:34 -04:00
Lincoln Stein
9f77df70c9 minor fixes to inpaint code
1. If tensors are passed to inpaint as init_image and/or init_mask, then
   the post-generation image fixup code will be skipped.

2. Post-generation image fixup will work with either a black and white "L"
   or "RGB"  mask, or an "RGBA" mask.
2022-10-22 22:28:54 -04:00
Lincoln Stein
c04133a512 ported code refactor changes from PR #1221
- pass a PIL.Image to img2img and inpaint rather than tensor
- To support clipseg, inpaint needs to accept an "L" or "1" format
  mask. Made the appropriate change.
2022-10-22 20:06:45 -04:00
Lincoln Stein
59747ecf24 Merge branch 'inpaint-improvement' of https://github.com/Kyle0654/InvokeAI into Kyle0654-inpaint-improvement 2022-10-22 19:30:52 -04:00
Lincoln Stein
a6e7aa8f97 Merge branch 'development' into patch-1 2022-10-22 19:28:50 -04:00
Lincoln Stein
51fdbe22d2 add support for loading VAE autoencoders
To add a VAE autoencoder to an existing model:

1. Download the appropriate autoencoder and put it into
   models/ldm/stable-diffusion

   Note that you MUST use a VAE that was written for the
   original CompViz Stable Diffusion codebase. For v1.4,
   that would be the file named vae-ft-mse-840000-ema-pruned.ckpt
   that you can download from https://huggingface.co/stabilityai/sd-vae-ft-mse-original

2. Edit config/models.yaml to contain the following stanza, modifying `weights`
   and `vae` as required to match the weights and vae model file names. There is
   no requirement to rename the VAE file.

~~~
stable-diffusion-1.4:
  weights: models/ldm/stable-diffusion-v1/sd-v1-4.ckpt
  description: Stable Diffusion v1.4
  config: configs/stable-diffusion/v1-inference.yaml
  vae: models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
  width: 512
  height: 512
~~~

3. Alternatively from within the `invoke.py` CLI, you may use the command
   `!editmodel stable-diffusion-1.4` to bring up a simple editor that will
   allow you to add the path to the VAE.

4. If you are just installing InvokeAI for the first time, you can also
   use `!import_model models/ldm/stable-diffusion/sd-v1.4.ckpt` instead
   to create the configuration from scratch.

5. That's it!
2022-10-22 19:27:46 -04:00
Kyle Schouviller
3b01e6e423 Improve inpainting by color-correcting result and pasting init image over result using mask 2022-10-22 14:56:33 -07:00
Lincoln Stein
2e14ba8716 Let the text-to-mask .mask.png file be used as a mask
Ironically, the black and white mask file generated by the
`invoke> !mask` command could not be passed as the mask to
`img2img`. This is now fixed and the documentation updated.
2022-10-22 13:53:23 -04:00
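A hedged example of the now-working flow (the -I/-M flag names are assumptions, not confirmed by the commit):

~~~
# From within the invoke.py CLI: reuse the !mask output as an inpainting mask
invoke> "a cat sitting on a bench" -I photo.png -M photo.mask.png
~~~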
Lincoln Stein
7308022bc7 outcropping improvements
- catch syntax errors in the outcrop coordinates
- work (after a fashion) on non-Invoke generated images
2022-10-22 13:38:32 -04:00
Damian at mba
8273c04575 wip implementing options in diffuse step 2022-10-22 12:15:34 +02:00
Damian at mba
ee7d4d712a parsing CrossAttentionControlSubstitute options works 2022-10-22 11:27:56 +02:00
krummrey
d8c1b78d83 Update CLI.md
Corrected path to script in line 11
2022-10-22 10:56:21 +02:00
Lincoln Stein
554445a985 remove debug statement 2022-10-21 21:31:41 -04:00
Lincoln Stein
b2bf2b08ff Merge branch 'model-switching' into development 2022-10-21 21:27:59 -04:00
wfng92
e7573ac90f Removed duplicate fix_func for MPS 2022-10-22 09:03:31 +08:00
Damian at mba
cdb664f6e5 Merge branch 'development' into fix-prompts 2022-10-21 21:34:09 +02:00
psychedelicious
a127eeff20 Fixes gallery bugs & adds gallery context menu 2022-10-22 07:54:39 +13:00
Lincoln Stein
1ca517d73b Merge branch 'fix-high-step-count' of https://github.com/holstvoogd/InvokeAI into holstvoogd-fix-high-step-count 2022-10-21 13:58:00 -04:00
Lincoln Stein
38b1dce7c3 Merge branch 'development' of github.com:invoke-ai/InvokeAI into development 2022-10-21 12:58:51 -04:00
Lincoln Stein
c9f9eed04e resolve numerous small merge bugs
- This merges PR #882

Coauthor: ArDiouscuros
2022-10-21 12:57:15 -04:00
Lincoln Stein
fbea657eff fix a number of bugs in textual inversion
- remove unsupported testtubelogger, use csvlogger instead
- fix logic for parsing --gpus option so that it won't crash if
  trailing comma absent
- change trainer accelerator from unsupported 'ddp' to 'auto'
2022-10-21 16:35:35 +02:00
Lincoln Stein
55db9dba0a Merge branch 'Improved-fetch-and-option-to-replay-commands-from-file' of https://github.com/ArDiouscuros/stable-diffusion into ArDiouscuros-Improved-fetch-and-option-to-replay-commands-from-file
- various small conflicts fixed
2022-10-21 10:12:35 -04:00
Damian at mba
64051d081c cleanup 2022-10-21 15:07:11 +02:00
Lincoln Stein
ddb007af65 Merge branch 'development' into fix-high-step-count 2022-10-21 06:55:17 -04:00
Damian at mba
e574a1574f txt2mask.py now tracking development again 2022-10-21 12:42:07 +02:00
Damian at mba
2bf9f1f0d8 rename StrcuturedConditioning to ExtraConditioningInfo 2022-10-21 12:18:40 +02:00
Damian at mba
8142b72bcd Merge remote-tracking branch 'upstream/development' into fix-prompts 2022-10-21 11:59:44 +02:00
Damian at mba
dc2f30a34e put back txt2mask import 2022-10-21 11:59:42 +02:00
Lincoln Stein
be7de4849c Merge branch 'development' into model-switching 2022-10-21 00:55:52 -04:00
Lincoln Stein
83e6ab08aa further improvements to model loading
- code for committing config changes to models.yaml now in module
  rather than in invoke script
- model marked "default" is now loaded if model not specified on
  command line
- uncache changed models when edited, so that they reload properly
- removed liaon from models.yaml and added stable-diffusion-1.5
2022-10-21 00:28:54 -04:00
Damian at mba
b385fdd7de non-normalized blend 2022-10-21 04:34:53 +02:00
Damian at mba
d965540103 more blend fixes 2022-10-21 04:23:19 +02:00
Damian at mba
404d59b1b8 fix blend 2022-10-21 04:18:17 +02:00
Lincoln Stein
9980c4baf9 Merge branch 'development' into vite-relative-paths 2022-10-20 22:12:52 -04:00
Damian at mba
4c1267338b bring in attention etc. 2022-10-21 03:54:13 +02:00
Damian at mba
2e0b1c4c8b ok now we're cooking 2022-10-21 03:29:50 +02:00
Damian at mba
da75876639 better support for word.swap(otherWord) without parentheses or quotes 2022-10-21 00:08:28 +02:00
Jan Skurovec
d4d1014c9f fix for 'model is not defined' when loading embedding 2022-10-20 17:31:46 -04:00
psychedelicious
213e12fe13 Filters existing images when adding new images; Fixes #1085; Builds fresh bundle 2022-10-20 16:53:48 -04:00
wfng92
3e0a7b6229 Correct color channels in upscale using array slicing 2022-10-20 16:52:07 -04:00
Damian at mba
da88097aba fix prompt handling in conditioning.py 2022-10-20 21:41:32 +02:00
Damian at mba
3f13dd3ae8 prompt parsing is now much more robust 2022-10-20 21:05:36 +02:00
psychedelicious
d3b0c54c14 Adds socket.io path 2022-10-20 23:03:48 +08:00
Damian at mba
79b4afeae7 parser working with basic escapes 2022-10-20 16:56:34 +02:00
psychedelicious
9c61aed7d0 Removes isDisabled from PromptInput; Resolves #1027 2022-10-20 22:28:07 +08:00
Damian at mba
da223dfe81 wip re-writing parts of prompt parser 2022-10-20 15:56:46 +02:00
psychedelicious
e035397dcf Changes vite dist asset paths to relative 2022-10-20 20:17:34 +08:00
psychedelicious
899ba975a6 Improves logic to determine if clipseg weights should be downloaded 2022-10-20 06:56:50 -04:00
psychedelicious
bfa65560eb Fixes torch.load() for MPS/CPU 2022-10-20 06:56:50 -04:00
psychedelicious
ed9307f469 Fix typo 2022-10-20 06:56:50 -04:00
Lincoln Stein
ff87239fb0 fix broken image in docs 2022-10-20 06:56:50 -04:00
Lincoln Stein
a357bf4f19 add !mask command to view output of clipseg
- The !mask command takes an image path, a text prompt, and
  (optionally) a masking threshold. It creates a mask over the region
  indicated by the prompt, and outputs several files that show which
  regions will be masked by the chosen prompt and threshold.

- The mask images should not be passed directly to img2img because
  they are designed for visualization only. Instead, use the
  --text_mask option to pass the selected prompt and threshold.

- See docs/features/INPAINTING.md for details.
2022-10-20 06:56:50 -04:00
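A hedged example of the command described above; the argument order and the -tm spelling of the prompt/threshold option are assumptions:

~~~
# Preview which region "the dog" selects at threshold 0.5 (visualization only;
# pass the prompt and threshold to img2img via --text_mask instead)
invoke> !mask photo.png -tm "the dog" 0.5
~~~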
Lincoln Stein
63f274f6df adjust environment & requirements files 2022-10-20 06:56:50 -04:00
Lincoln Stein
2ca4242f5f fix clipseg loading problems
- The directory "models" in the main InvokeAI directory was conflicting
  with loading "models.clipseg". To fix this issue, I have renamed the
  models.clipseg to clipseg_models.clipseg, and applied this change to
  the 'models-rename' branch of invoke-ai's fork of clipseg.
2022-10-20 06:56:50 -04:00
Damian at mba
c9d27634b4 bring in prompt parser from fix-prompts branch
attention is parsed but ignored, blend's old syntax doesn't work,
conjunctions are parsed but ignored; the only part that's used
here is the new .blend() syntax and cross-attention control
using .swap()
2022-10-20 12:01:48 +02:00
noodlebox
027990928e Fix typo in docs: s/Formally/Formerly 2022-10-20 02:44:16 -04:00
psychedelicious
87469a5fdd Flips channels using array slicing instead of using OpenCV 2022-10-19 23:44:47 -04:00
psychedelicious
4101127011 Corrects color channels in face restoration; Fixes #1167 2022-10-19 23:32:57 -04:00
psychedelicious
f6191a4f12 Builds fresh bundle 2022-10-19 20:12:19 -04:00
psychedelicious
8c5d614c38 Increases max CFG Scale to 200 2022-10-19 20:12:19 -04:00
Damian at mba
42883545f9 add prompt language support for cross-attention .swap 2022-10-20 01:42:04 +02:00
Damian at mba
61357e4e6e be less verbose when assembling prompt 2022-10-19 21:12:07 +02:00
Damian at mba
c6ae9f1176 remove unnecessary assertion 2022-10-19 21:12:07 +02:00
Damian at mba
11d7e6b92f undo unwanted changes 2022-10-19 21:12:07 +02:00
Damian at mba
c3b992db96 Squashed commit of the following:
commit 9bb0b5d0036c4dffbb72ce11e097fae4ab63defd
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Sat Oct 15 23:43:41 2022 +0200

    undo local_files_only stuff

commit eed93f5d30c34cfccaf7497618ae9af17a5ecfbb
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Sat Oct 15 23:40:37 2022 +0200

    Revert "Merge branch 'development-invoke' into fix-prompts"

    This reverts commit 7c40892a9f184f7e216f14d14feb0411c5a90e24, reversing
    changes made to e3f2dd62b0548ca6988818ef058093a4f5b022f2.

commit f06d6024e345c69e6d5a91ab5423925a68ee95a7
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Thu Oct 13 23:30:16 2022 +0200

    more efficiently handle multiple conditioning

commit 5efdfcbcd980ce6202ab74e7f90e7415ce7260da
Merge: b9c0dc5 ac08bb6
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Thu Oct 13 14:51:01 2022 +0200

    Merge branch 'optional-disable-karras-schedule' into fix-prompts

commit ac08bb6fd25e19a9d35cf6c199e66500fb604af1
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Thu Oct 13 14:50:43 2022 +0200

    append '*use_model_sigmas*' to prompt string to use model sigmas

commit 70d8c05a3ff329409f76204f4af94e55d468ab8b
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Thu Oct 13 12:12:17 2022 +0200

    make karras scheduling switchable

    commit d60df54f69 replaced the model's
    own scheduling with Karras scheduling. This changed image generation
    (seems worse now?)

    This commit wraps the change in a bool.

commit b9c0dc5f1a658a0e6c3936000e9ae559e1c7a1db
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Wed Oct 12 20:16:00 2022 +0200

    add test of more complex conjunction

commit 9ac0c15cc0d7b5f6df3289d3ad474260972a17be
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Wed Oct 12 17:18:25 2022 +0200

    improve comments

commit ad33bce60590b87b2a93e90f16dc9d3e935d04a5
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Wed Oct 12 17:04:46 2022 +0200

    put back thresholding stuff

commit 4852c698a325049834ba0d4b358f07210bc7171a
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Wed Oct 12 14:25:02 2022 +0200

    notes on improving conjunction efficiency

commit a53bb1e5b68025d09642b935ae6a9a015cfaf2d6
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Wed Oct 12 14:14:33 2022 +0200

    optional weights support for Conjunction

commit fec79ab15e4f0c84dd61cb1b45a5e6a72ae4aaeb
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Wed Oct 12 12:07:27 2022 +0200

    fix blend error and log parsing output

commit 1f751c2a039f9c97af57b18e0f019512631d5a25
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Wed Oct 12 10:33:33 2022 +0200

    fix broken euler sampler

commit 02f8148d17efe4b6bde8d29b827092a0626363ee
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Wed Oct 12 10:24:20 2022 +0200

    cleanup prompt parser

commit 8028d49ae6c16c0d6ec9c9de9c12d56c32201421
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Wed Oct 12 10:14:18 2022 +0200

    explicit conjunction, improve flattening logic

commit 8a1710892185f07eb77483f7edae0fc4d6bbb250
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Tue Oct 11 22:59:30 2022 +0200

    adapt multi-conditioning to also work with ddim

commit 53802a839850d0d1ff017c6bafe457c4bed750b0
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Tue Oct 11 22:31:42 2022 +0200

    unconditioning is also fancy-prompt-syntaxable

commit 7c40892a9f184f7e216f14d14feb0411c5a90e24
Merge: e3f2dd6 dbe0da4
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Tue Oct 11 21:39:54 2022 +0200

    Merge branch 'development-invoke' into fix-prompts

commit e3f2dd62b0548ca6988818ef058093a4f5b022f2
Merge: eef0e48 06f542e
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Tue Oct 11 21:38:09 2022 +0200

    Merge remote-tracking branch 'upstream/development' into fix-prompts

commit eef0e484c2eaa1bd4e0e0b1d3f8d7bba38478144
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Tue Oct 11 21:26:25 2022 +0200

    fix run-on paren-less attention, add some comments

commit fd29afdf0e9f5e0cdc60239e22480c36ca0aaeca
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Tue Oct 11 21:03:02 2022 +0200

    python 3.9 compatibility

commit 26f7646eef7f39bc8f7ce805e747df0f723464da
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Tue Oct 11 20:58:42 2022 +0200

    first pass connecting PromptParser to conditioning

commit ae53dff3796d7b9a5e7ed30fa1edb0374af6cd8d
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Tue Oct 11 20:51:15 2022 +0200

    update frontend dist

commit 9be4a59a2d76f49e635474b5984bfca826a5dab4
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Tue Oct 11 19:01:39 2022 +0200

    fix issues with correctness checking FlattenedPrompt

commit 3be212323eab68e72a363a654124edd9809e4cf0
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Tue Oct 11 18:43:16 2022 +0200

    parsing nested prompts seems to work pretty OK

commit acd73eb08cf67c27cac8a22934754321256f56a9
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Tue Oct 11 18:26:17 2022 +0200

    wip introducing FlattenedPrompt class

commit 71698d5c7c2ac855b690d8ef67e8830148c59eda
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Tue Oct 11 15:59:42 2022 +0200

    recursive attention weighting seems to actually work

commit a4e1ec6b20deb7cc0cd12737bdbd266e56144709
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Tue Oct 11 15:06:24 2022 +0200

    now apparently almost supports nested attention

commit da76fd1ddf22a3888cdc08fd4fed38d8b178e524
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Tue Oct 11 13:23:37 2022 +0200

    wip prompt parsing

commit dbe0da4572c2ac22f26a7afd722349a5680a9e47
Author: Kyle Schouviller <kyle0654@hotmail.com>
Date:   Mon Oct 10 22:32:35 2022 -0700

    Adding node-based invocation apps

commit 8f2a2ffc083366de74d7dae471b50b6f98a7c5f8
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Mon Oct 10 19:03:18 2022 +0200

    fix merge issues

commit 73118dee2a8f4891700756e014caf1c9ca629267
Merge: fd00844 12413b0
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Mon Oct 10 12:42:48 2022 +0200

    Merge remote-tracking branch 'upstream/development' into fix-prompts

commit fd0084413541013c2cf71e006af0392719bef53d
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Mon Oct 10 12:39:38 2022 +0200

    wip prompt parsing

commit 0be9363db9307859d2b65cffc6af01f57d7873a4
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Mon Oct 10 03:20:06 2022 +0200

    better +/- attention parsing

commit 5383f691874a58ab01cda1e4fac6cf330146526a
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Mon Oct 10 02:27:47 2022 +0200

    prompt parser seems to work

commit 591d098a33ce35462428d8c169501d8ed73615ab
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Sun Oct 9 20:25:37 2022 +0200

    supports weighting unconditioning, cross-attention with |

commit 7a7220563aa05a2980235b5b908362f66b728309
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Sun Oct 9 18:15:56 2022 +0200

    I think cross-attention might be working?

commit 951ed391e7126bff228c18b2db304ad28d59644a
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Sun Oct 9 16:04:54 2022 +0200

    weighted CFG denoiser working with a single item

commit ee532a0c2827368c9e45a6a5f3975666402873da
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Sun Oct 9 06:33:40 2022 +0200

    wip probably doesn't work or compile

commit 14654bcbd207b9ca28a6cbd37dbd967d699b062d
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Fri Oct 7 18:11:48 2022 +0200

    use tan() to calculate embedding weight for <1 attentions

commit 1a8e76b31aa5abf5150419ebf3b29d4658d07f2b
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Fri Oct 7 16:14:54 2022 +0200

    fix bad math.max reference

commit f697ff896875876ccaa1e5527405bdaa7ed27cde
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Fri Oct 7 15:55:57 2022 +0200

    respect http[s]x protocol when making socket.io middleware

commit 41d3dd4eeae8d4efb05dfb44fc6d8aac5dc468ab
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Fri Oct 7 13:29:54 2022 +0200

    fractional weighting works, by blending with prompts excluding the word

commit 087fb6dfb3e8f5e84de8c911f75faa3e3fa3553c
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Fri Oct 7 10:52:03 2022 +0200

    wip doing weights <1 by averaging with conditioning absent the lower-weighted fragment

commit 3c49e3f3ec7c18dc60f3e18ed2f7f0d97aad3a47
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Fri Oct 7 10:36:15 2022 +0200

    notate CFGDenoiser, perhaps

commit d2bcf1bb522026ebf209ad0103f6b370383e5070
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Thu Oct 6 05:04:47 2022 +0200

    hack blending syntax to test attention weighting more extensively

commit 94904ef2cf917f74ec23ef7a570e12ff8255b048
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Thu Oct 6 04:56:37 2022 +0200

    conditioning works, apparently

commit 7c6663ddd70f665fd1308b6dd74f92ca393a8df5
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Thu Oct 6 02:20:24 2022 +0200

    attention weighting, definitely works in positive direction

commit 5856d453a9b020bc1a28ff643ae1f58c12c9be73
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Tue Oct 4 19:02:14 2022 +0200

    wip bubbling weights down

commit a2ed14fd9b7d3cb36b6c5348018b364c76d1e892
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Tue Oct 4 17:35:39 2022 +0200

    bring in changes from PC
2022-10-19 21:12:07 +02:00
Damian at mba
1ffd4a9e06 refactored single diffusion path seems to be working for all samplers 2022-10-19 21:08:03 +02:00
Damian at mba
147d39cb7c wip refactoring shared InvokeAI diffuser mixin to component 2022-10-19 21:08:03 +02:00
Damian at mba
824cb201b1 pass img2img ddim/plms edited conditioning through kwargs 2022-10-19 21:08:03 +02:00
Damian at mba
582880b314 add cross-attention support to im2img; prevent inpainting from crashing 2022-10-19 21:08:03 +02:00
Damian at mba
2b79a716aa wip hi-res fix 2022-10-19 21:08:03 +02:00
Damian at mba
d572af2acf fix cross-attention on k* samplers 2022-10-19 21:08:03 +02:00
Damian at mba
54e6a68acb wip bringing cross-attention to PLMS and DDIM 2022-10-19 21:08:03 +02:00
Damian at mba
09f62032ec cleanup and clarify comments 2022-10-19 21:08:03 +02:00
Damian at mba
711ffd238f cleanup 2022-10-19 21:08:03 +02:00
Damian at mba
056cb0d8a8 sliced cross-attention wrangler works 2022-10-19 21:08:03 +02:00
Damian at mba
37a204324b go back to using InvokeAI attention 2022-10-19 21:08:03 +02:00
Damian at mba
1fc1f8bf05 cross-attention working with placeholder {} syntax 2022-10-19 21:06:42 +02:00
Damian at mba
8ff507b03b runs but doesn't work properly - see below for test prompt
test prompt:
"a cat sitting on a car {a dog sitting on a car}" -W 384 -H 256 -s 10 -S 12346 -A k_euler
note that the substitution of dog for cat is currently hard-coded (ksampler.py,
	lines 43-44)
2022-10-19 21:06:42 +02:00
Damian at mba
33d6603fef cleanup initial experiments 2022-10-19 21:06:42 +02:00
Damian at mba
b0b1993918 initial experiments 2022-10-19 21:06:42 +02:00
Ben Alkov
07a3df6001 DRAFT: Cross-Attention Control
Signed-off-by: Ben Alkov <ben.alkov@gmail.com>
2022-10-19 21:06:42 +02:00
Lincoln Stein
92d4dfaabf Merge branch 'asymmetric-tiling' of https://github.com/carson-katri/InvokeAI into carson-katri-asymmetric-tiling 2022-10-19 13:46:07 -04:00
psychedelicious
bc626af6ca Skips normalizing prompts for web UI metadata 2022-10-19 13:38:16 -04:00
psychedelicious
a45786ca2e Builds fresh bundle 2022-10-19 13:27:43 -04:00
psychedelicious
2926c8299c Fixes lingering references to GFPGAN vs Facetool 2022-10-19 13:27:43 -04:00
psychedelicious
32a5ffe436 Adds Codeformer support 2022-10-19 13:27:43 -04:00
Lincoln Stein
62dd3b7d7d resolve models.clipseg vs clipseg ambiguity 2022-10-18 23:09:26 -04:00
Carson Katri
15aa7593f6 Merge branch 'development' into asymmetric-tiling 2022-10-18 22:37:18 -04:00
Lincoln Stein
9b3ac92c24 fix incorrect import of clipseg 2022-10-18 19:28:30 -04:00
Lincoln Stein
66f6ef1b35 fix syntax errors in preload 2022-10-18 19:25:18 -04:00
Carson Katri
d93cd10b0d Merge branch 'development' into asymmetric-tiling 2022-10-18 17:27:29 -04:00
Lincoln Stein
a488b14373 prevent preload warning message 2022-10-18 17:09:17 -04:00
Lincoln Stein
0147dd6431 update requirements to address #1149 2022-10-18 16:28:58 -04:00
Carson Katri
9d19213b8a Merge branch 'development' of github.com:lstein/stable-diffusion into asymmetric-tiling 2022-10-18 13:34:10 -04:00
Damian at mba
71c3835f3e yarn built 2022-10-18 13:22:58 -04:00
Damian at mba
0fbd26e9bf simpler socketio setup URL handling 2022-10-18 13:22:58 -04:00
Lincoln Stein
2a78eb96d0 Merge branch 'development' of github.com:invoke-ai/InvokeAI into development 2022-10-18 08:30:02 -04:00
Lincoln Stein
3a1003f702 Fix typo
Taken from `main` PR #1147 
Author: eltociear
2022-10-18 08:29:26 -04:00
Lincoln Stein
329a9d0b11 Merge branch 'text-masking' of github.com:invoke-ai/InvokeAI into text-masking 2022-10-18 08:28:56 -04:00
Lincoln Stein
17d75f3da8 update environment/requirements for clipseg dependency 2022-10-18 08:27:49 -04:00
Lincoln Stein
20551857da add clipseg support for creating inpaint masks from text
On the command line, the new option is --text_mask or -tm.
Example:

```
invoke> a baseball -I /path/to/still_life.png -tm orange
```

This will find the orange fruit in the still life painting and replace
it with an image of a baseball.
2022-10-18 08:27:48 -04:00
Lincoln Stein
32122e0312 clipseg library and environment in place 2022-10-18 08:27:48 -04:00
Lincoln Stein
e6fc8af249 Fix typo
Taken from `main` PR #1147 
Author: eltociear
2022-10-18 08:08:58 -04:00
Lincoln Stein
c974c95e2b Merge branch 'development' of github.com:invoke-ai/InvokeAI into development 2022-10-17 23:14:55 -04:00
Lincoln Stein
3b2590243c ^C at invoke> cmd line exits gracefully 2022-10-17 23:14:32 -04:00
wfng92
1c2bd275fe Fix img2img DDIM index out of bounds
Added a [community solution](https://github.com/CompVis/stable-diffusion/issues/111#issuecomment-1229483511) to fix an index-out-of-bounds error when doing img2img generation with the `ddim` sampler. Also restored `steps_out` to be `ddim_timesteps + 1`, since its removal was only meant to fix the [1000 steps issue](https://github.com/CompVis/stable-diffusion/issues/111).
2022-10-17 22:32:15 -04:00
Lincoln Stein
0cf11ce488 add option to CLI and pngwriter that allows the user to set the PNG compression level
- In the CLI: the argument is --png_compression <0..9> (-z<0..9>)
- In the API, pass `compress_level` to PngWriter.save_image_and_prompt_to_png()

Compression ranges from 0 (no compression) to 9 (maximum compression).
The default value is 6 (as specified by the Pillow package).

This addresses an issue first raised in #652.
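A minimal sketch of the API usage, assuming the module path and the
positional argument order (only the `compress_level` keyword is
confirmed by this message):

```
from PIL import Image
from ldm.invoke.pngwriter import PngWriter  # module path assumed

writer = PngWriter('./outputs')
image = Image.new('RGB', (512, 512))
# compress_level mirrors --png_compression / -z: 0 = none .. 9 = maximum
writer.save_image_and_prompt_to_png(image, 'a baseball', '000001.png',
                                    compress_level=3)
```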
2022-10-17 22:27:47 -04:00
Carson Katri
d6195522aa Add seamless_axes docs to CLI.md 2022-10-17 20:17:34 -04:00
Carson Katri
3b79b935a3 Merge branch 'development' into asymmetric-tiling 2022-10-17 20:15:42 -04:00
Carson Katri
4079333e29 Document the seamless_axes argument 2022-10-17 19:33:17 -04:00
Carson Katri
99581dbbf7 Split seamless config into separate file 2022-10-17 19:31:20 -04:00
db3000
9e599c65c5 Only output facetool parameters if enhancing faces 2022-10-17 11:49:07 -04:00
Lincoln Stein
22267475eb update environment/requirements for clipseg dependency 2022-10-16 23:34:29 -04:00
Lincoln Stein
5eb0f8ffa7 add clipseg support for creating inpaint masks from text
On the command line, the new option is --text_mask or -tm.
Example:

```
invoke> a baseball -I /path/to/still_life.png -tm orange
```

This will find the orange fruit in the still life painting and replace
it with an image of a baseball.
2022-10-16 23:30:24 -04:00
Carson Katri
e03a3fcf68 Add seamless_axes options 2022-10-16 22:45:18 -04:00
Lincoln Stein
57bff2a663 clipseg library and environment in place 2022-10-16 16:45:07 -04:00
Lincoln Stein
528a183d42 add option to CLI and pngwriter that allows the user to set the PNG compression level
- In the CLI: the argument is --png_compression <0..9> (-z<0..9>)
- In the API, pass `compress_level` to PngWriter.save_image_and_prompt_to_png()

Compression ranges from 0 (no compression) to 9 (maximum compression).
The default value is 6 (as specified by the Pillow package).

This addresses an issue first raised in #652.
2022-10-16 11:53:15 -04:00
Lincoln Stein
b953f82346 Merge branch 'development' into fix-doc-typos 2022-10-16 11:28:59 -04:00
Lincoln Stein
ef2058824a add a strength value to inpaint_replace
- --inpaint_replace 0.X will cause inpainting to ignore what is under
  the masked region with a strength ranging from 0 (don't ignore at all)
  to 1.0 (ignore completely)
- sync with upstream development
- update docs
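Example of the --inpaint_replace option described above (hypothetical;
the paths are placeholders and the -I/-M init-image and mask flags are
assumed from the surrounding CLI conventions):

```
invoke> a red sports car -I /path/to/street.png -M /path/to/mask.png --inpaint_replace 0.8
```

Here a strength of 0.8 means the masked region largely ignores the
underlying pixels.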
2022-10-16 10:06:47 -04:00
Lincoln Stein
6f93dc7712 cleanup inpainting and img2img
- add a `--inpaint_replace` option that fills masked regions with
  latent noise. This allows radical changes to inpainted regions
  at the cost of losing context.
- fix up readline, arg processing and metadata writing to accommodate
  this change
- fixed bug in storage and retrieval of variations, discovered incidentally
  during testing
- update documentation
2022-10-16 08:50:55 -04:00
Rupesh Sreeraman
a6e28d2eb7 Fixed documentation typos and resolved merge conflicts. 2022-10-16 17:55:57 +05:30
Lincoln Stein
a705a5a0aa enhance support for model switching and editing
- Error checks for invalid model
- Add !del_model command to invoke.py
- Add del_model() method to model_cache
- Autocompleter kept in sync with model addition/subtraction.
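Example of the new !del_model command (hypothetical; the model name is
a placeholder):

```
invoke> !del_model waifu-1.3
```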
2022-10-15 15:46:29 -04:00
Joseph Dries III
f6bc13736a Fix typo committed when changing the ldm environment to invokeai 2022-10-15 08:48:18 -04:00
Lincoln Stein
194d4c75b3 Update license again
Added back copyright statements from latent diffusion and stable diffusion repos.
2022-10-14 16:40:35 -04:00
Lincoln Stein
bc9c60ae71 Modify MIT License using GitHub's template
The license has been there all along, but didn't use GitHub's template and wasn't being picked up automatically
2022-10-14 16:37:18 -04:00
Lincoln Stein
0a7005f2bc update changelogs 2022-10-14 16:25:47 -04:00
Lincoln Stein
c4fb8e304b fix noisy images at high step counts
At step counts greater than ~75, the ksamplers start producing noisy
images when using the Karras noise schedule. This PR reverts to using
the model's own noise schedule, which eliminates the problem at the
cost of slowing convergence at lower step counts.

This PR also introduces a new CLI `--save_intermediates <n>` argument,
which will save every nth intermediate image into a subdirectory
named `intermediates/<image_prefix>`.

Addresses issue #1083.
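Example (hypothetical; the prompt and values are placeholders):

```
invoke> a sunlit forest -s 100 --save_intermediates 10
```

This would save every 10th intermediate image into the
`intermediates/<image_prefix>` subdirectory described above.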
2022-10-14 16:19:45 -04:00
Lincoln Stein
fe2a2cfc8b Merge branch 'development' into model-switching 2022-10-14 13:18:59 -04:00
Lincoln Stein
32dab7d4bf close #1094, dangling gfpgan_strength reference 2022-10-14 07:45:10 -04:00
db3000
1ea541baa6 Reword deprecation warning for dream.py
- this plus previous commit closes #1087
2022-10-14 07:33:10 -04:00
db3000
82b7c118c4 Forward dream.py to invoke.py using the same interpreter, add deprecation warning 2022-10-14 07:31:35 -04:00
Lincoln Stein
1c501333e8 minor doc fixes 2022-10-14 07:30:26 -04:00
cmdr2
9a3c7800a7 Use the correct conda os arch for mac x64 2022-10-14 10:23:55 +05:30
cmdr2
11dc3ca1f8 Don't continue if micromamba was required but didn't initialize properly 2022-10-14 10:13:41 +05:30
db3000
ce5e57d828 Generalize facetool strength argument 2022-10-14 00:03:06 -04:00
Lincoln Stein
e98fe9c22d fix noisy images at high step counts
At step counts greater than ~75, the ksamplers start producing noisy
images when using the Karras noise schedule. This PR reverts to using
the model's own noise schedule, which eliminates the problem at the
cost of slowing convergence at lower step counts.

This PR also introduces a new CLI `--save_intermediates <n>` argument,
which will save every nth intermediate image into a subdirectory
named `intermediates/<image_prefix>`.

Addresses issue #1083.
2022-10-14 00:01:59 -04:00
Lincoln Stein
6afc0f9b38 add ability to import and edit alternative models online
- !import_model <path/to/model/weights> will import a new model,
  prompt the user for its name and description, write it to the
  models.yaml file, and load it.

- !edit_model <model_name> will bring up a previously-defined model
  and prompt the user to edit its descriptive fields.

Example of !import_model

<pre>
invoke> <b>!import_model models/ldm/stable-diffusion-v1/model-epoch08-float16.ckpt</b>
>> Model import in process. Please enter the values needed to configure this model:

Name for this model: <b>waifu-diffusion</b>
Description of this model: <b>Waifu Diffusion v1.3</b>
Configuration file for this model: <b>configs/stable-diffusion/v1-inference.yaml</b>
Default image width: <b>512</b>
Default image height: <b>512</b>
>> New configuration:
waifu-diffusion:
  config: configs/stable-diffusion/v1-inference.yaml
  description: Waifu Diffusion v1.3
  height: 512
  weights: models/ldm/stable-diffusion-v1/model-epoch08-float16.ckpt
  width: 512
OK to import [n]? <b>y</b>
>> Caching model stable-diffusion-1.4 in system RAM
>> Loading waifu-diffusion from models/ldm/stable-diffusion-v1/model-epoch08-float16.ckpt
   | LatentDiffusion: Running in eps-prediction mode
   | DiffusionWrapper has 859.52 M params.
   | Making attention of type 'vanilla' with 512 in_channels
   | Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
   | Making attention of type 'vanilla' with 512 in_channels
   | Using faster float16 precision
</pre>

Example of !edit_model

<pre>
invoke> <b>!edit_model waifu-diffusion</b>
>> Editing model waifu-diffusion from configuration file ./configs/models.yaml
description: <b>Waifu diffusion v1.4beta</b>
weights: models/ldm/stable-diffusion-v1/<b>model-epoch10-float16.ckpt</b>
config: configs/stable-diffusion/v1-inference.yaml
width: 512
height: 512

>> New configuration:
waifu-diffusion:
  config: configs/stable-diffusion/v1-inference.yaml
  description: Waifu diffusion v1.4beta
  weights: models/ldm/stable-diffusion-v1/model-epoch10-float16.ckpt
  height: 512
  width: 512

OK to import [n]? y
>> Caching model stable-diffusion-1.4 in system RAM
>> Loading waifu-diffusion from models/ldm/stable-diffusion-v1/model-epoch10-float16.ckpt
...
</pre>
2022-10-13 23:48:07 -04:00
cmdr2
065a1da9d1 Fix line endings for mac 2022-10-14 08:56:27 +05:30
Lincoln Stein
916f5bfbb2 gracefully recover from failed model load 2022-10-13 12:27:04 -04:00
db3000
7f491fd2d2 Reword deprecation warning for dream.py 2022-10-13 12:12:05 -04:00
db3000
203a6d8a00 Forward dream.py to invoke.py using the same interpreter, add deprecation warning 2022-10-13 12:12:05 -04:00
Jan Skurovec
cac3f5fc61 fix for "1 leaked semaphore objects to clean up at shutdown" on M1
Implements fix by @Any-Winter-4079 referenced in https://github.com/invoke-ai/InvokeAI/issues/1016#issuecomment-1276825640
2022-10-13 13:33:59 +02:00
hipsterusername
7e33560010 Hires Addition
Updated ImageMetaDataViewer with correct values
Updated tooltip text
Added arguments for Hires & Seamless Metadata
2022-10-13 23:57:24 +13:00
cmdr2
759f563b6d Add a pause before the script ends 2022-10-13 15:20:29 +05:30
cmdr2
8c47638eec Update How to create the installers.md 2022-10-13 11:25:02 +05:30
cmdr2
8233098136 Merge pull request #1 from invoke-ai/main
Merge upstream
2022-10-13 11:07:44 +05:30
cmdr2
1cb365fff1 Prefer the locally installed conda over any global conda installation 2022-10-13 11:01:09 +05:30
cmdr2
e405385e0d Prefer the locally installed conda over any global conda installation; activate the env before updating 2022-10-13 10:56:04 +05:30
cmdr2
15c5d6a5ef Typo in bash path 2022-10-13 10:24:44 +05:30
cmdr2
132e2b3ae5 Typo in the bash script 2022-10-13 10:00:53 +05:30
cmdr2
c16b7f090e Initialize conda for the shell before running the activate 2022-10-13 09:57:14 +05:30
Daniel Manzke
057fc95aa3 Print out the device type which is used
Print out the device type which is used for generating images.
2022-10-12 20:36:43 -04:00
CapableWeb
94bad8555a Add myself in CODEOWNERS for various "legacy" parts 2022-10-12 20:35:56 -04:00
CapableWeb
6c0dd9b5ef Add back old dream.py as legacy_api.py
This commit "reverts" the new API changes by extracting the old
functionality into new files.

The work is based on the commit `803a51d5adca7e6e28491fc414fd3937bee7cb79`

PngWriter regained PromptFormatter, as the old server used it.

`server_legacy.py` is the old server that `dream.py` used.

Finally `legacy_api.py` is what `dream.py` used to be at the mentioned
commit.

One manually run test has been added to check compatibility with the
old API; currently it just verifies that the API endpoint works the
same way and that the image hash is the same as it was before.
2022-10-12 20:35:56 -04:00
Lincoln Stein
1c102c71fc final fixups to memory_cache
- fixed backwards calculation of minimum available memory
- only execute m.padding adjustment code once upon load
2022-10-12 15:56:06 -04:00
cmdr2
75f23793df Remove -y in linux script 2022-10-12 23:01:08 +05:30
cmdr2
9dcfa8de25 Typo in install.sh 2022-10-12 22:56:28 +05:30
cmdr2
3d6650e59b Don't close after updating 2022-10-12 22:44:11 +05:30
cmdr2
7d201d7be0 Fix the tmp file used for checking the existence of git and conda commands 2022-10-12 22:27:22 +05:30
cmdr2
cafaef11f7 Remove unnecessary quotes while checking if git and conda exist 2022-10-12 22:20:06 +05:30
cmdr2
1e201132ed Merge branch 'main' of github.com:cmdr2/InvokeAI 2022-10-12 22:15:49 +05:30
cmdr2
8604fd2727 Updated the installer to simplify the use of micromamba, and use conda for the actual installation; Update conda during the update script 2022-10-12 22:15:38 +05:30
Lincoln Stein
aa6aa68753 proposed fix to work on mps systems 2022-10-12 11:08:27 -04:00
cmdr2
86b7b07c24 Make the linux/mac scripts executable 2022-10-12 16:58:19 +05:30
cmdr2
af56aee5c6 Create the env using -y 2022-10-12 16:56:40 +05:30
cmdr2
1ec92dd5f3 Check for missing python/git before activating micromamba 2022-10-12 16:53:07 +05:30
cmdr2
1c946561d3 1-click installer using micromamba to install git and python into a contained environment (if necessary) before running the normal installation script 2022-10-12 16:38:06 +05:30
Lincoln Stein
b537e92789 move tokenizer into cpu cache as well 2022-10-12 03:03:29 -04:00
Lincoln Stein
7c06849c4d Merge branch 'model-switching' of github.com:invoke-ai/InvokeAI into model-switching 2022-10-12 02:39:57 -04:00
Lincoln Stein
488334710b enable fast switching between models in invoke.py
- This PR enables two new commands in the invoke.py script

 !models         -- list the available models and their cache status
 !switch <model> -- switch to the indicated model

Example:

 invoke> !models
   laion400m            not loaded  Latent Diffusion LAION400M model
   stable-diffusion-1.4     active  Stable Diffusion inference model version 1.4
   waifu-1.3                cached  Waifu anime model version 1.3
 invoke> !switch waifu-1.3
   >> Caching model stable-diffusion-1.4 in system RAM
   >> Retrieving model waifu-1.3 from system RAM cache

The name and descriptions of the models are taken from
`config/models.yaml`. A future enhancement to `model_cache.py` will be
to enable new model stanzas to be added to the file
programmatically. This will be useful for the WebGUI.

More details:

- Use fast switching algorithm described in PR #948
- Models are selected using their configuration stanza name
  given in models.yaml.
- To avoid filling up CPU RAM with cached models, this PR
  implements an LRU cache that monitors available CPU RAM.
- The caching code allows the minimum value of available RAM
  to be adjusted, but invoke.py does not currently have a
  command-line argument that allows you to set it. The
  minimum free RAM is arbitrarily set to 2 GB.
- Add optional description field to configs/models.yaml

Unrelated fixes:
- Added ">>" to CompViz model loading messages in order to make user experience
  more consistent.
- When generating an image greater than defaults, will only warn about possible
  VRAM filling the first time.
- Fixed bug that was causing help message to be printed twice. This involved
  moving the import line for the web backend into the section where it is
  called.

Coauthored by: @ArDiouscuros
2022-10-12 02:37:42 -04:00
Lincoln Stein
19341e95a6 enable fast switching between models in invoke.py
- This PR enables two new commands in the invoke.py script

 !models         -- list the available models and their cache status
 !switch <model> -- switch to the indicated model

Example:

 invoke> !models
   laion400m            not loaded  Latent Diffusion LAION400M model
   stable-diffusion-1.4     active  Stable Diffusion inference model version 1.4
   waifu-1.3                cached  Waifu anime model version 1.3
 invoke> !switch waifu-1.3
   >> Caching model stable-diffusion-1.4 in system RAM
   >> Retrieving model waifu-1.3 from system RAM cache

More details:

- Use fast switching algorithm described in PR #948
- Models are selected using their configuration stanza name
  given in models.yaml.
- To avoid filling up CPU RAM with cached models, this PR
  implements an LRU cache that monitors available CPU RAM.
- The caching code allows the minimum value of available RAM
  to be adjusted, but invoke.py does not currently have a
  command-line argument that allows you to set it. The
  minimum free RAM is arbitrarily set to 2 GB.
- Add optional description field to configs/models.yaml

Unrelated fixes:
- Added ">>" to CompViz model loading messages in order to make user experience
  more consistent.
- When generating an image greater than defaults, will only warn about possible
  VRAM filling the first time.
- Fixed bug that was causing help message to be printed twice. This involved
  moving the import line for the web backend into the section where it is
  called.
2022-10-12 02:19:12 -04:00
Chloe
c82e94811b Update Stable_Diffusion_AI_Notebook.ipynb 2022-10-11 21:42:31 -04:00
Chloe
c15a902e8d Update Stable_Diffusion_AI_Notebook.ipynb
Making Stable_Diffusion_AI_Notebook.ipynb work smoothly on Google Colab
2022-10-11 21:42:31 -04:00
mauwii
ca6385e6fa fix TEXTUAL_INVERSION.md emoji 2022-10-11 21:41:52 -04:00
mauwii
828ec1fb5c fix emoji in PROMPTS.md 2022-10-11 21:41:52 -04:00
mauwii
1c687d6d03 more updates to many docs, including:
- better readability in dark mode since color change
- better looking changelog
- fix images which were not loading
- also center most of the images
- fix some syntax errors like
  - headlines ending with a colon
  - codeblocks with wrong fences
  - codeblocks without shell
- update conda prompts from ldm to invokeai
- ....
2022-10-11 21:41:52 -04:00
Lincoln Stein
b9e910b5f4 add mostly functional model caching module 2022-10-11 17:24:10 -04:00
Jan Skurovec
101cac6a21 reintroduce fix for m1 from PR#579 missing after merge
Make results reproducible (so runs with the same seed produce the same result).
Implements fix by @wbowling referenced in https://github.com/invoke-ai/InvokeAI/issues/397#issuecomment-1240679294
2022-10-11 23:00:20 +02:00
Jan Skurovec
8ea07f3bb0 reintroduce fix for m1 from PR#579 missing after merge
Make results reproducible (so runs with the same seed produce the same result).
Implements fix by @wbowling referenced in https://github.com/invoke-ai/InvokeAI/issues/397#issuecomment-1240679294
2022-10-11 21:50:59 +02:00
Lincoln Stein
79e79b78aa mkdocs fixes, PR #1032
Squashed commit of the following:

commit 2c1e0168bb03a2cd625f2d4aca40eee0fdf7e4af
Merge: 2325c6c 31f2733
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Tue Oct 11 08:33:18 2022 -0400

    Merge branch 'mkdocs-fixes' of https://github.com/mauwii/stable-diffusion into mauwii-mkdocs-fixes

commit 31f2733e89
Merge: d9d6d3a a61a690
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Tue Oct 11 08:05:52 2022 -0400

    Merge branch 'main' into mkdocs-fixes

commit d9d6d3af3f
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Oct 11 08:13:04 2022 +0200

    some more minor, overlooked fixes to IMG2IMG

commit 4ab5a2aeba
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Oct 11 07:49:11 2022 +0200

    add forgotten alt-text to images

commit f778bd9c0f
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Oct 11 07:18:11 2022 +0200

    update OTHER.md
    - fix codeblocks, add admonitions, embed graphic

commit a19f148a8e
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Oct 11 06:51:29 2022 +0200

    update IMG2IMG.md

commit c1f1dfa714
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Oct 11 06:10:25 2022 +0200

    update EMBIGGEN.md
    - fix codeblocks
    - fix toc
    - use admonitions

commit 791e6c63ef
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Oct 11 05:58:53 2022 +0200

    better admonitions for CLI.md

commit e078025f00
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Oct 11 05:50:32 2022 +0200

    huge update to CLI.md
    way too many updates to list them all, including:
    - render keys for keyboard-shortcuts
    - quote commands and "unhide" parameter-values (like `<int>`, `<string>`
    - fix codeblocks
    - quote commands
    - quote filenames
    - use admonitions
    - ....

commit bd98dd2307
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Oct 11 04:49:57 2022 +0200

    fix INPAINTING.md
    - fix numbered List
    - replace text key combos with actual rendered keyboard keys

commit 5392000335
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Oct 11 04:30:11 2022 +0200

    fix numbered list and codeblocks in INSTALL_WINDOWS

commit ffe9276f1e
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Oct 11 04:12:56 2022 +0200

    fix numbered list in INSTALL_LINUX.md
    also fix blank lines, codeblocks and admonition

commit 2c6a6a567f
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Oct 11 03:51:03 2022 +0200

    upgrade INSTALL_MAC.md:
    - use annotations and content-tabs

    yes, this looks ugly in the repo afterwards, but please also look at mkdocs:
    https://mauwii.github.io/stable-diffusion/installation/INSTALL_MAC/

commit 8f6c544480
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Oct 11 01:43:11 2022 +0200

    comment out PR part in mkdocs-flow.yml

commit b52c14a67f
Merge: 97ebe58 a1b0b91
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Oct 11 01:17:28 2022 +0200

    Merge branch 'mkdocs-fixes' of github.com:mauwii/stable-diffusion into mkdocs-fixes

commit a1b0b91bb3
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Oct 11 00:59:44 2022 +0200

    fix conda env in codeblock

commit 5f9f9a266e
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Oct 11 00:43:46 2022 +0200

    fix forgotten title in TEXTUAL_INVERSION

commit 8f025b034e
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Oct 11 00:41:52 2022 +0200

    quote repo_url and repo_name
    otherwise the version/stars/forks did not appear

commit 3a52b7deb3
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Oct 11 00:39:54 2022 +0200

    fix TEXTUAL_INVERSION headline to fit the others

commit 389b21f966
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Oct 11 00:35:48 2022 +0200

    fix SAMPLER_CONVERGENCE and add emoji

commit f26fc79a18
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Oct 11 00:32:04 2022 +0200

    fix INSTALL_DOCKER.md:
    - fix title (Docker instead of "Before you begin")
    - add headline with Emoji
    - fix headlines to render toc correct

commit cbc3520489
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Oct 11 00:24:58 2022 +0200

    add headline with emoji to INSTALL_MAC.md

commit 25f0614d66
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Oct 11 00:21:01 2022 +0200

    add log emoji to docs/CHANGELOG.md

commit 42005688fa
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Oct 11 00:20:47 2022 +0200

    use better fitting Icon for new Name

commit 0c65bad7f5
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Oct 11 00:09:07 2022 +0200

    add Headline with Emoji to WEB and POSTPROCESS

commit 1c1cf2692e
Author: mauwii <Mauwii@outlook.de>
Date:   Mon Oct 10 23:56:16 2022 +0200

    update index.md:
    - remove unused template reference
    - make headline rendered bold and underlined, add (kind of) subtitle
    - update discord badge and link
    - update Quick links to look like in GH-Readme
      - also remove self reference to docs
    - add screenshot as in GH-Readme
    - add note pointing to issues tab
    - update path in command line to reflect new Repo Name

commit 0e29b0737e
Author: mauwii <Mauwii@outlook.de>
Date:   Mon Oct 10 23:23:10 2022 +0200

    change site_name to `Stable Diffusion Toolkit Docs`

commit ad8a60d992
Author: mauwii <Mauwii@outlook.de>
Date:   Mon Oct 10 23:00:02 2022 +0200

    fix repo_url in mkdocs.yml

commit 234569d6b6
Author: mauwii <Mauwii@outlook.de>
Date:   Mon Oct 10 22:54:39 2022 +0200

    fix link to upscaling in WEB.md and TOC
    - TOC fixed by adding `#` to every headline after `## Parting remarks`
    - add missing blank lines

commit 97c84ad824
Author: mauwii <Mauwii@outlook.de>
Date:   Mon Oct 10 22:25:32 2022 +0200

    fix broken links in docs/CHANGELOG.md

commit bce62b3a32
Author: mauwii <Mauwii@outlook.de>
Date:   Mon Oct 10 22:15:37 2022 +0200

    add title to CHANGELOG.md to render the TOC without `**`
    alternatively, remove `**` around the headline

commit 97ebe58b5b
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Oct 11 00:59:44 2022 +0200

    fix conda env in codeblock

commit 87ac217e43
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Oct 11 00:43:46 2022 +0200

    fix forgotten title in TEXTUAL_INVERSION

commit 91439e8a52
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Oct 11 00:41:52 2022 +0200

    quote repo_url and repo_name
    otherwise the version/stars/forks did not appear

commit 8a632a9e8f
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Oct 11 00:39:54 2022 +0200

    fix TEXTUAL_INVERSION headline to fit the others

commit 7c8ffe2feb
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Oct 11 00:35:48 2022 +0200

    fix SAMPLER_CONVERGENCE and add emoji

commit e2e86d2d11
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Oct 11 00:32:04 2022 +0200

    fix INSTALL_DOCKER.md:
    - fix title (Docker instead of "Before you begin")
    - add headline with Emoji
    - fix headlines to render the TOC correctly

commit 8b54c083fe
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Oct 11 00:24:58 2022 +0200

    add headline with emoji to INSTALL_MAC.md

commit 8d8a032434
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Oct 11 00:21:01 2022 +0200

    add log emoji to docs/CHANGELOG.md

commit 76519f6fa4
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Oct 11 00:20:47 2022 +0200

    use better fitting Icon for new Name

commit aff0725533
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Oct 11 00:09:07 2022 +0200

    add Headline with Emoji to WEB and POSTPROCESS

commit 0f7898cbdd
Author: mauwii <Mauwii@outlook.de>
Date:   Mon Oct 10 23:56:16 2022 +0200

    update index.md:
    - remove unused template reference
    - make headline rendered bold and underlined, add (kind of) subtitle
    - update discord badge and link
    - update Quick links to look like in GH-Readme
      - also remove self reference to docs
    - add screenshot as in GH-Readme
    - add note pointing to issues tab
    - update path in command line to reflect new Repo Name

commit f4c04eadf8
Author: mauwii <Mauwii@outlook.de>
Date:   Mon Oct 10 23:23:10 2022 +0200

    change site_name to `Stable Diffusion Toolkit Docs`

commit 6e624827c0
Author: mauwii <Mauwii@outlook.de>
Date:   Mon Oct 10 23:00:02 2022 +0200

    fix repo_url in mkdocs.yml

commit 158848dd7e
Author: mauwii <Mauwii@outlook.de>
Date:   Mon Oct 10 22:54:39 2022 +0200

    fix link to upscaling in WEB.md and TOC
    - TOC fixed by adding `#` to every headline after `## Parting remarks`
    - add missing blank lines

commit 533736e135
Author: mauwii <Mauwii@outlook.de>
Date:   Mon Oct 10 22:29:46 2022 +0200

    fix link to truncation_comparison.jpg in OTHER.md

commit dd335142df
Author: mauwii <Mauwii@outlook.de>
Date:   Mon Oct 10 22:25:32 2022 +0200

    fix broken links in docs/CHANGELOG.md

commit 374dd54f30
Author: mauwii <Mauwii@outlook.de>
Date:   Mon Oct 10 22:15:37 2022 +0200

    add title to CHANGELOG.md to render the TOC without `**`
    alternatively, remove `**` around the headline
2022-10-11 08:36:00 -04:00
Lincoln Stein
2325c6cd40 Update index.md
Bump up disk requirements to 12 GB.
2022-10-11 08:17:37 -04:00
Lincoln Stein
3ec33414ec Update README.md
Bump up disk storage requirements to 12 GB.
2022-10-11 08:16:36 -04:00
hj
a61a690f6c Fix the url 2022-10-11 08:03:19 -04:00
blessedcoolant
06f542ed7a Update .gitignore 2022-10-11 16:28:48 +13:00
Lincoln Stein
8954171eea add steps for updating environment
Closes #1017
2022-10-10 18:16:08 -04:00
Lincoln Stein
e0e69ad279 Fix broken path in CodeFormer instructions
Closes #1023
2022-10-10 18:07:20 -04:00
Lincoln Stein
e3e8024e15 Update OTHER.md
Fix up reference to perlin demo image.
2022-10-10 18:04:14 -04:00
Lincoln Stein
c4cf888532 Update IMG2IMG.md
Fix merge messages inadvertently left in file.
2022-10-10 17:56:28 -04:00
Will
9eff9e5752 update mac instructions to use invokeai for env name 2022-10-10 17:45:18 -04:00
Nuno Coração
84c1825abc fixed old reference to ldm on activate env 2022-10-10 17:44:30 -04:00
Rich Jones
0621dd7ed4 Fix two broken links in README
Trivial change, two links went to `.m` rather than `.md`.
2022-10-10 17:43:58 -04:00
Lincoln Stein
67ddba9cff add discussion of samplers to VARIATIONS.md doc 2022-10-10 14:15:08 -04:00
Ben Alkov
cbf5426d27 fix(venv): rename 'ldm' -> 'invokeai' 2022-10-10 13:04:03 -04:00
Lincoln Stein
bac60ca21e Update index.md
- add Discord link
2022-10-10 12:38:31 -04:00
Lincoln Stein
8e0d671488 Update README.md 2022-10-10 12:36:50 -04:00
Lincoln Stein
ee6deef14c Update README.md 2022-10-10 12:33:00 -04:00
Lincoln Stein
5d8c048d0d Update README.md
Add quick links to documentation, bug reports and discussion.
2022-10-10 11:33:45 -04:00
Lincoln Stein
f8fd6e39a3 add quicklinks to README 2022-10-10 11:28:53 -04:00
Lincoln Stein
dafca16c8b Merge branch 'main' of github.com:invoke-ai/InvokeAI into main 2022-10-10 11:23:49 -04:00
Lincoln Stein
3449c05bf4 fix embedded images 2022-10-10 11:23:43 -04:00
Lincoln Stein
5c3fad22fd Update mkdocs.yml
Changed owner and repo name
2022-10-10 11:11:06 -04:00
Lincoln Stein
425cf67ee5 bring gh-page landing page up to date 2022-10-10 11:05:14 -04:00
Lincoln Stein
4f9529db9e Merge branch 'main' of github.com:invoke-ai/InvokeAI into main 2022-10-10 10:51:06 -04:00
Lincoln Stein
f3931a031d update changelog for gh-pages 2022-10-10 10:50:59 -04:00
Lincoln Stein
a4995b7878 README fixes
- add screenshot of WebGUI
- remove redundant TOC
2022-10-10 10:03:55 -04:00
Lincoln Stein
10d8d1bb25 Merge branch 'main' of github.com:invoke-ai/InvokeAI into main 2022-10-10 09:35:54 -04:00
Lincoln Stein
b30ae57731 update web gui walkthrough 2022-10-10 09:35:40 -04:00
Lincoln Stein
b0bfbafd3d update web gui walkthrough 2022-10-10 09:33:40 -04:00
Lincoln Stein
7c50bd2039 rebuild front end 2022-10-10 09:19:52 -04:00
Lincoln Stein
ae4e385abd merge changes to mac installation instructions 2022-10-10 09:18:48 -04:00
Lincoln Stein
e301cd3321 Update OUTPAINTING.md
fix typo
2022-10-10 09:13:35 -04:00
Lincoln Stein
2977680ca1 add links for history processing 2022-10-10 09:13:35 -04:00
Lincoln Stein
2a5aa6e986 fix typos 2022-10-10 09:13:35 -04:00
Lincoln Stein
3bba41ee89 add more features to changelog 2022-10-10 09:13:35 -04:00
Lincoln Stein
179b5f7839 frontend rebuild 2022-10-10 09:13:35 -04:00
Lincoln Stein
26d7712f03 fix link error 2022-10-10 09:13:35 -04:00
Lincoln Stein
c0b370e1b9 add perlin noise to list of new features 2022-10-10 09:13:34 -04:00
Lincoln Stein
15cc92e54a fix environment-mac.yml as per #964 2022-10-10 09:12:48 -04:00
Marco Labarile
acdd5b3922 Fix markdown typo in WEB.md 2022-10-10 09:09:04 -04:00
psychedelicious
9685fc210c Updates INSTALL_MAC.md 2022-10-10 09:08:14 -04:00
Jim Hays
f4cdc0001f Update references to lstein/stable-diffusion to invoke-ai/InvokeAI 2022-10-10 09:06:51 -04:00
Lincoln Stein
3f78e9a1a3 rebuild frontend 2022-10-10 09:06:06 -04:00
Eric Wolf
280e2899d7 fix typo 2022-10-10 09:05:45 -04:00
Lincoln Stein
82b0bb838c fix link error 2022-10-10 09:05:45 -04:00
Lincoln Stein
8482518618 add perlin noise to list of new features 2022-10-10 09:05:45 -04:00
Lincoln Stein
6425bda663 add short list of 2.0.0 new features 2022-10-10 09:05:45 -04:00
psychedelicious
12413b0be6 Fix safari display:grid lag 2022-10-10 03:13:56 +02:00
Lincoln Stein
275dca83be Update OUTPAINTING.md
fix typo
2022-10-09 18:46:23 -04:00
Lincoln Stein
be5bf03ccc add links for history processing 2022-10-09 18:44:31 -04:00
Lincoln Stein
0c479cd706 fix typos 2022-10-09 18:43:09 -04:00
Lincoln Stein
7325b73073 add more features to changelog 2022-10-09 18:41:57 -04:00
Lincoln Stein
49380f75a9 frontend rebuild 2022-10-09 18:25:18 -04:00
Lincoln Stein
3d4276439f merge prior to backing out PR #1000 2022-10-09 18:24:15 -04:00
Lincoln Stein
a4c36dbc15 fix link error 2022-10-09 18:21:13 -04:00
Lincoln Stein
4fbd11a1f2 add perlin noise to list of new features 2022-10-09 18:21:13 -04:00
Lincoln Stein
8ce3d4dd7f add short list of 2.0.0 new features 2022-10-09 18:21:13 -04:00
Lincoln Stein
b82c968278 fix references from lstein/stable-diffusion to invoke-ai/InvokeAI
- as per #989
2022-10-09 18:21:13 -04:00
Lincoln Stein
bc8e86e643 fix environment-mac.yml as per #964 2022-10-09 18:21:13 -04:00
Lincoln Stein
1b6fab59a4 run make_schedule() if it hasn't already been called
- fixes #984
2022-10-09 18:21:13 -04:00
Lincoln Stein
d1dd35a1d2 final tweak to embedded screenshots in WEB.md 2022-10-09 18:21:13 -04:00
Lincoln Stein
400f062771 make initial screenshot even larger 2022-10-09 18:21:13 -04:00
Lincoln Stein
40894d67ac fixup image sizes in WEB.md 2022-10-09 18:21:13 -04:00
Lincoln Stein
08a0b85111 fix image links in documentation 2022-10-09 18:21:13 -04:00
Lincoln Stein
7da6fad359 add missing doc files 2022-10-09 18:21:05 -04:00
rpagliuca
b24d182237 Update README.md
Small writing error
2022-10-09 18:17:11 -04:00
Marco Labarile
2bdcc106f2 Fix markdown typo in WEB.md 2022-10-09 18:16:27 -04:00
psychedelicious
7a98387e8d Updates INSTALL_MAC.md 2022-10-09 18:16:27 -04:00
Jim Hays
58d0f14d03 Update references to lstein/stable-diffusion to invoke-ai/InvokeAI 2022-10-09 18:16:27 -04:00
rpagliuca
bc9471987b Update README.md
Small writing error
2022-10-09 18:16:27 -04:00
Lincoln Stein
dc6e60cbcc Update INPAINTING.md
Changed Gimp instructions to indicate that partial transparency is better than full transparency.
2022-10-09 18:16:27 -04:00
Lincoln Stein
7dae5fb131 rebuild frontend 2022-10-09 18:16:24 -04:00
Eric Wolf
3bc1ff5e5a fix typo 2022-10-09 18:07:57 -04:00
Lincoln Stein
8ff9c69e2f fix link error 2022-10-09 16:41:05 -04:00
Lincoln Stein
988ace8029 add perlin noise to list of new features 2022-10-09 16:39:36 -04:00
Lincoln Stein
6e9d996ece add short list of 2.0.0 new features 2022-10-09 16:36:00 -04:00
Lincoln Stein
789714b0b1 fix references from lstein/stable-diffusion to invoke-ai/InvokeAI
- as per #989
2022-10-09 15:38:22 -04:00
Lincoln Stein
773a64d4c0 fix references from lstein/stable-diffusion to invoke-ai/InvokeAI
- as per #989
2022-10-09 15:37:45 -04:00
Lincoln Stein
bb7629d2b8 fix environment-mac.yml as per #964 2022-10-09 15:34:19 -04:00
Lincoln Stein
745c020aa2 fix environment-mac.yml as per #964 2022-10-09 15:33:56 -04:00
Lincoln Stein
c5344acb25 run make_schedule() if it hasn't already been called
- fixes #984
2022-10-09 15:30:23 -04:00
Lincoln Stein
318eb35ea0 run make_schedule() if it hasn't already been called
- fixes #984
2022-10-09 15:29:04 -04:00
Lincoln Stein
6e2fd2affe rebuild frontend 2022-10-09 14:52:00 -04:00
Lincoln Stein
8faa06fb15 Merge branch 'main' into development
- this syncs documentation and code
2022-10-09 14:47:27 -04:00
ArDiouscuros
0b7ca6a326 Allow user to generate images with initial noise as on M1 / mps system 2022-10-09 12:25:22 -04:00
Lincoln Stein
ce8c238ac4 final tweak to embedded screenshots in WEB.md 2022-10-09 11:45:16 -04:00
Lincoln Stein
f6c37e46e1 make initial screenshot even larger 2022-10-09 11:43:42 -04:00
Lincoln Stein
2d69efccef fixup image sizes in WEB.md 2022-10-09 11:42:59 -04:00
Lincoln Stein
f9d2aafaeb fix image links in documentation 2022-10-09 11:42:03 -04:00
Kent Keirsey
22514aec2e Minor CSS & Link Updates
Also updated dist files with new CSS
2022-10-09 11:40:16 -04:00
Lincoln Stein
5a22a83f4c add missing doc files 2022-10-09 11:38:39 -04:00
Lincoln Stein
b1d43eae46 almost ready for public release
- merged release-candidate-2
- fix up documentation
- add web tutorial
2022-10-09 11:37:00 -04:00
Marco Labarile
0b8cdb6964 Fix markdown typo in WEB.md 2022-10-09 08:57:14 -04:00
psychedelicious
aed5ad22fb Updates INSTALL_MAC.md 2022-10-09 08:49:29 -04:00
Jim Hays
dc9c16b93d Update references to lstein/stable-diffusion to invoke-ai/InvokeAI 2022-10-09 08:48:23 -04:00
rpagliuca
f6e858a548 Update README.md
Small writing error
2022-10-09 08:45:55 -04:00
Lincoln Stein
4c2db171ca Update INPAINTING.md
Changed Gimp instructions to indicate that partial transparency is better than full transparency.
2022-10-09 08:45:33 -04:00
Lincoln Stein
1255127e49 rebuild frontend 2022-10-09 08:37:51 -04:00
blessedcoolant
1cb74a6357 [WebUI] Masonry Layout for Gallery 2022-10-09 08:36:28 -04:00
psychedelicious
5e2b250426 Images grow to fit space in gallery 2022-10-09 08:36:17 -04:00
blessedcoolant
ad190cfbb2 Smaller Gallery Images 2022-10-09 08:36:03 -04:00
blessedcoolant
542ceb051b Rework Gallery Display 2022-10-09 08:34:57 -04:00
blessedcoolant
3473669458 WebUI Bug Fixes & Tweaks 2022-10-09 08:33:18 -04:00
blessedcoolant
3170c83d8d [WebUI] Masonry Layout for Gallery 2022-10-09 08:32:06 -04:00
psychedelicious
3046dabde2 Images grow to fit space in gallery 2022-10-09 08:32:06 -04:00
blessedcoolant
1b02074fea Smaller Gallery Images 2022-10-09 08:32:06 -04:00
blessedcoolant
f15fd2c3d3 Rework Gallery Display 2022-10-09 08:32:06 -04:00
blessedcoolant
081271d6a1 WebUI Bug Fixes & Tweaks 2022-10-09 08:32:06 -04:00
Peter Baylies
27f62999c9 * Fix for Perlin noise issue for CUDA as well. 2022-10-09 08:24:02 -04:00
Peter Baylies
89d130edf4 * Fix for Perlin noise issue for CUDA as well. 2022-10-09 08:23:23 -04:00
Rainer Bernhardt
0e551a3844 Merge branch 'development' into fnformat 2022-10-09 13:43:09 +02:00
Lincoln Stein
31869885d9 enhance the in-line -h command help text
- the prompt argument comes before the optional arguments
- usage statement shows 'invoke>' rather than 'invoke.py'
- use pydoc pager to help display long help message
2022-10-08 13:55:05 -04:00
Lincoln Stein
4c026d9d92 enhance the in-line -h command help text
- the prompt argument comes before the optional arguments
- usage statement shows 'invoke>' rather than 'invoke.py'
- use pydoc pager to help display long help message
2022-10-08 13:53:56 -04:00
Any-Winter-4079
435231ef08 Get external TI .bin files to work
Issue referenced in https://github.com/invoke-ai/InvokeAI/issues/980#issuecomment-1272162880
Users whose embeddings are trained with a non-standard num_vectors_per_token (e.g. 6) should update this value in their local repo to get that embedding to work.
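A minimal sketch for checking an embedding's num_vectors_per_token,
assuming the .bin file is a dict mapping the trigger token to a tensor
of shape (num_vectors_per_token, embedding_dim):

```
import torch

# the file path is a placeholder; the dict layout is an assumption
data = torch.load('my-embedding.bin', map_location='cpu')
for token, tensor in data.items():
    print(token, tuple(tensor.shape))  # e.g. ('<my-token>', (6, 768))
```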
2022-10-08 13:18:19 -04:00
Any-Winter-4079
19a79caf41 Get external TI .bin files to work
Issue referenced in https://github.com/invoke-ai/InvokeAI/issues/980#issuecomment-1272162880
Users whose embeddings are trained with a non-standard num_vectors_per_token (e.g. 6) should update this value in their local repo to get that embedding to work.
2022-10-08 13:17:44 -04:00
David Burnett
7b095f8f97 add realesrgan to requirements.txt, remove nightly for torch and torchvision due to performance issues 2022-10-08 12:01:45 -04:00
psychedelicious
f5dfd5b0dc Fixes CORS handling 2022-10-08 11:57:18 -04:00
psychedelicious
9579a401b5 Fixes CORS handling 2022-10-08 11:56:38 -04:00
Lincoln Stein
47a97f7e97 rebuild front end 2022-10-08 11:50:25 -04:00
blessedcoolant
3c146ebf9e Fix Gallery being open by default 2022-10-08 11:47:11 -04:00
blessedcoolant
efbcbb0d91 Add Image Gallery Drawer 2022-10-08 11:44:42 -04:00
blessedcoolant
578d8b0cb4 Add Image Gallery Drawer 2022-10-08 11:43:02 -04:00
Lincoln Stein
2b1aaf4ee7 rename all modules from ldm.dream to ldm.invoke
- scripts and documentation updated to match
- ran preflight checks on both web and CLI and seems to be working
2022-10-08 11:37:23 -04:00
Lincoln Stein
4a7f5c7469 Merge branch 'release-candidate-2' of github.com:invoke-ai/InvokeAI into release-candidate-2 2022-10-08 09:34:11 -04:00
Lincoln Stein
98fe044dee rebrand CLI from "dream" to "invoke"
- rename dream.py to invoke.py
- create a compatibility script named dream.py that execs() invoke.py
- redo documentation
- change help message in args
- this does **not** rename the libraries, which are still ldm.dream.util, etc
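A minimal sketch of such a forwarding shim, assuming the new entry
point lives at scripts/invoke.py (the repo's actual script may
differ):

```
import os
import sys

# warn, then re-exec invoke.py with the same interpreter and arguments
print('>> dream.py is deprecated; forwarding to invoke.py', file=sys.stderr)
os.execvp(sys.executable, [sys.executable, 'scripts/invoke.py'] + sys.argv[1:])
```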
2022-10-08 09:32:06 -04:00
ArDiouscuros
62d4bb05d4 Add exception handling during metadata processing 2022-10-08 13:42:30 +02:00
ArDiouscuros
02b1040264 Fix typo in ldm/dream/readline.py during merge, add more exception handling 2022-10-08 13:41:31 +02:00
ArDiouscuros
dfd5899611 Merge branch 'development' into Improved-fetch-and-option-to-replay-commands-from-file 2022-10-08 13:26:22 +02:00
blessedcoolant
8ea88f49b1 Fix Gallery being open by default 2022-10-08 21:23:41 +13:00
blessedcoolant
a62541d976 Merge branch 'webui-image-drawer' of https://github.com/blessedcoolant/InvokeAI into webui-image-drawer 2022-10-08 17:39:50 +13:00
blessedcoolant
fbd9a49899 [WebUI] Gallery Drawer Release Build 2022-10-08 17:36:18 +13:00
blessedcoolant
4e571e12b8 Add Image Gallery Drawer 2022-10-08 17:33:47 +13:00
blessedcoolant
2567f5faa5 Add Image Gallery Drawer 2022-10-08 16:55:39 +13:00
Lincoln Stein
97684d78d3 rebuild webui package 2022-10-07 16:44:23 -04:00
blessedcoolant
57791834ab [WebUI] Add Image To Image UI 2022-10-07 16:41:09 -04:00
blessedcoolant
3b0c4b74b6 [WebUI] Add Image To Image UI 2022-10-07 16:28:19 -04:00
Lincoln Stein
7a701506a4 restore ability of ksamplers to process -v variation options
- supersedes PR #977
- works with both img2img and txt2img
2022-10-07 16:25:58 -04:00
Lincoln Stein
5157cbeda1 restore ability of ksamplers to process -v variation options
- supersedes #977
2022-10-07 16:21:16 -04:00
Lincoln Stein
3d7bc074cf autorotate init images using exif orientation tag 2022-10-07 12:06:50 -04:00
Lincoln Stein
b296933ba0 autorotate init images using exif orientation tag 2022-10-07 12:06:40 -04:00
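One way to do this with Pillow (a sketch under assumptions, not
necessarily the commit's implementation; the path is a placeholder):

```
from PIL import Image, ImageOps

image = Image.open('/path/to/init.png')
# apply the EXIF Orientation tag, if present, and return the corrected image
image = ImageOps.exif_transpose(image)
```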
Jakub Kolčář
70bb7f4a61 fixed perlin noise generation for mps (macos) - fix for cpu fallback 2022-10-07 10:36:45 -04:00
Jakub Kolčář
45cc867b0c fixed perlin noise generation for mps (macos) - fix for cpu fallback 2022-10-07 10:35:42 -04:00
Lincoln Stein
9c9cb71544 rebuild frontend package 2022-10-07 10:20:02 -04:00
Rainer Bernhardt
173dc34194 Merge branch 'development' into fnformat 2022-10-07 15:39:41 +02:00
Lincoln Stein
333219be35 fix broken image generation on plms and ddim samplers 2022-10-07 08:26:53 -04:00
spezialspezial
c1230da3ab remove duplicated code 2022-10-07 08:13:34 -04:00
spezialspezial
a7515624b2 remove duplicated code 2022-10-07 08:12:55 -04:00
Lincoln Stein
9f34ddfcea fix crash on len(NoneType) in k_sampler 2022-10-07 08:05:13 -04:00
plucked
6499b99dad revert accidental edit 2022-10-07 10:26:14 +00:00
plucked
c6611b2ad6 doc: described how filename format works 2022-10-07 10:21:16 +00:00
plucked
395445e7b0 using string.format for filename formatting 2022-10-07 09:24:39 +00:00
plucked
89c6c11214 feat: adding filename format template 2022-10-07 08:32:39 +00:00
Lincoln Stein
c6a7be63b8 fix crash in generate._transparency_check_and_warning() 2022-10-06 21:00:27 -04:00
Lincoln Stein
75165957c9 Revert "realesrgan inherits precision setting from main program"
This reverts commit 5f42d08945.

This fix was intended to solve issue #939, in which ESRGAN generates
dark images when upscaling 4X on certain GTX cards. However, the fix
apparently causes conflicts with some versions of the ESRGAN library,
and this fix will have to wait until after the release of 2.0.
2022-10-06 20:52:38 -04:00
Kent Keirsey
4f247a3672 Web Docs Update 2022-10-07 13:41:27 +13:00
Lincoln Stein
d60df54f69 fix k_samplers in img2img - probably correct now 2022-10-06 18:53:54 -04:00
Lincoln Stein
1f25f52af9 Merge branch 'development' of github.com:invoke-ai/InvokeAI into development 2022-10-06 18:31:25 -04:00
Lincoln Stein
7541c7cf5d fix k_samplers in img2img - probably correct now 2022-10-06 18:31:04 -04:00
blessedcoolant
a6cdde3ce4 Change Invoke Button Text To Invoke 2022-10-07 10:37:35 +13:00
blessedcoolant
a53b9a443f Fix WebUI Not Working 2022-10-07 08:09:55 +13:00
blessedcoolant
6e1328d4c2 Fix WebUI Not Working 2022-10-07 08:02:10 +13:00
Lincoln Stein
440065f7f8 revert previous change 2022-10-06 14:57:06 -04:00
Lincoln Stein
2c27e759cd fix #889 - fuzzy k* img2img at low strength 2022-10-06 14:16:56 -04:00
Lincoln Stein
82481a6f9c Merge branch 'release-candidate-2' of github.com:invoke-ai/InvokeAI into release-candidate-2 2022-10-06 13:58:53 -04:00
Lincoln Stein
90d64388ab Merge branch 'release-candidate-2' into release-candidate-2
- This includes #949 "Bug fixes for new Threshold and Perlin Options"
2022-10-06 13:57:43 -04:00
Lincoln Stein
3444c8e6b8 Merge branch 'release-candidate-2' into release-candidate-2 2022-10-06 13:53:27 -04:00
blessedcoolant
74419f41a3 Release Candidate 2.0 WebUI 2022-10-07 06:50:34 +13:00
psychedelicious
d84321e080 Adds hotkeys to modal 2022-10-06 13:49:09 -04:00
psychedelicious
6542556ebd Adds next/prev image buttons/hotkeys 2022-10-06 13:48:59 -04:00
blessedcoolant
542ee56c77 [WebUI] Fix Threshold and Perlin Noise Styling 2022-10-07 06:48:16 +13:00
psychedelicious
461e662644 Adds hotkeys to modal 2022-10-07 06:44:47 +13:00
psychedelicious
58d73f5cae Adds next/prev image buttons/hotkeys 2022-10-07 06:44:47 +13:00
blessedcoolant
0c1c220bb9 Revert Auto Build Frontend Workflow 2022-10-07 06:41:03 +13:00
blessedcoolant
bf5ccfffa5 Merge branch 'development' of https://github.com/invoke-ai/InvokeAI into development 2022-10-07 06:29:24 +13:00
blessedcoolant
70bbb670ec Add Basic Hotkey Support 2022-10-06 13:27:42 -04:00
blessedcoolant
7b270ec3b0 Revert "[bot] builds dev bundle"
This reverts commit 7a0d4c3350.
2022-10-07 06:26:58 +13:00
Lincoln Stein
e4ef7bdbb9 Merge branch 'development' into webui-hotkeys 2022-10-06 13:25:12 -04:00
Lincoln Stein
5f42d08945 realesrgan inherits precision setting from main program 2022-10-06 12:23:30 -04:00
blessedcoolant
911c99f125 Fix WebUI CORS Issue 2022-10-06 11:17:48 -04:00
blessedcoolant
c7ccb9dacd Fix WebUI CORS Issue 2022-10-06 11:15:33 -04:00
GitHub Actions Bot
7a0d4c3350 [bot] builds dev bundle 2022-10-06 11:15:33 -04:00
Lincoln Stein
2154dd2349 prevent crashes due to uninitialized free_gpu_mem 2022-10-06 10:54:05 -04:00
Lincoln Stein
f3050fefce bug and warning message fixes
- txt2img2img back to using DDIM as img2img sampler; results produced
  by some k* samplers are just not reliable enough for good user
  experience
- img2img progress message clarifies why img2img steps taken != steps requested
- warn of potential problems when user tries to run img2img on a small init image
2022-10-06 10:39:08 -04:00
Arthur Holstvoogd
595d15455a Fix generation of image with s>1000 2022-10-06 15:49:35 +02:00
Lincoln Stein
183b98384f set perlin & threshold to zero on generator initialization 2022-10-06 09:35:04 -04:00
blessedcoolant
40d7141a4d Add Basic Hotkey Support 2022-10-07 02:29:47 +13:00
Peter Baylies
6d475ee290 * Bug fixes for new Threshold and Perlin options 2022-10-06 08:46:27 -04:00
psychedelicious
c430f5452b Resolves @bakkot's review 2022-10-06 07:27:45 -04:00
psychedelicious
97de5e31f9 Sets up GH actions to auto-build frontend bundle 2022-10-06 07:27:45 -04:00
Lincoln Stein
a99aab6309 enable --hires to use k* samplers 2022-10-05 20:10:21 -04:00
ArDiouscuros
5a40f7ad15 Fix for crashes in txt2img hires fix mode 2022-10-05 20:10:06 -04:00
Lincoln Stein
2f29b78a00 enable --hires to use k* samplers 2022-10-05 17:18:32 -04:00
ArDiouscuros
bcb6e2e506 Fix for crashes in txt2img hires fix mode 2022-10-05 17:13:43 -04:00
Lincoln Stein
194b875cf3 Update IMG2IMG.md
Added information on the small initial image size bug.
2022-10-05 15:55:38 -04:00
Lincoln Stein
b2cd98259d rename img files with colons 2022-10-05 12:56:57 -04:00
Damian at mba
0f55d89e20 Improve IMG2IMG docs with deeper explanation of what is happening under the hood 2022-10-05 10:21:03 -04:00
Marco Labarile
8a8be92eac Fix markdown typo in WEB.md 2022-10-04 22:53:56 -04:00
psychedelicious
9318719b9e Updates INSTALL_MAC.md 2022-10-04 07:16:42 -04:00
ArDiouscuros
935a9d3c75 Update !fetch command, add documentation and autocomplete list
-- !fetch takes a second optional argument: the name of the file to save commands to
2022-10-03 10:38:22 +02:00
Jim Hays
8e76bc2b5d Update references to lstein/stable-diffusion to invoke-ai/InvokeAI 2022-10-01 18:15:20 -04:00
ArDiouscuros
93b1298d46 Improve !fetch, add !replay
Allow saving fetched commands to a file; the !replay command reads commands from a file inside the interactive prompt
Related to #871
2022-10-01 21:24:10 +02:00
rpagliuca
1af86618e3 Update README.md
Small writing error
2022-10-01 15:00:25 -04:00
Lincoln Stein
b732bcad2f Update INPAINTING.md
Changed Gimp instructions to indicate that partial transparency is better than full transparency.
2022-10-01 12:17:46 -04:00
371 changed files with 19998 additions and 8477 deletions

3
.dockerignore Normal file

@@ -0,0 +1,3 @@
*
!environment*.yml
!docker-build

4
.github/CODEOWNERS vendored Normal file

@@ -0,0 +1,4 @@
ldm/invoke/pngwriter.py @CapableWeb
ldm/invoke/server_legacy.py @CapableWeb
scripts/legacy_api.py @CapableWeb
tests/legacy_tests.sh @CapableWeb

42
.github/workflows/build-container.yml vendored Normal file

@@ -0,0 +1,42 @@
# Building the image without pushing, to confirm it is still buildable
# confirming functionality would unfortunately need way more resources
name: build container image
on:
push:
branches:
- 'main'
- 'development'
pull_request:
branches:
- 'main'
- 'development'
jobs:
docker:
runs-on: ubuntu-latest
steps:
- name: prepare docker-tag
env:
repository: ${{ github.repository }}
run: echo "dockertag=${repository,,}" >> $GITHUB_ENV
- name: Checkout
uses: actions/checkout@v3
- name: Set up QEMU
uses: docker/setup-qemu-action@v2
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: Cache Docker layers
uses: actions/cache@v2
with:
path: /tmp/.buildx-cache
key: buildx-${{ hashFiles('docker-build/Dockerfile') }}
- name: Build container
uses: docker/build-push-action@v3
with:
context: .
file: docker-build/Dockerfile
platforms: linux/amd64
push: false
tags: ${{ env.dockertag }}:latest
cache-from: type=local,src=/tmp/.buildx-cache
cache-to: type=local,dest=/tmp/.buildx-cache


@@ -1,26 +1,43 @@
name: Create Caches
on:
workflow_dispatch
on: workflow_dispatch
jobs:
build:
os_matrix:
strategy:
matrix:
os: [ ubuntu-latest, macos-12 ]
name: Create Caches on ${{ matrix.os }} conda
os: [ubuntu-latest, macos-latest]
include:
- os: ubuntu-latest
environment-file: environment.yml
default-shell: bash -l {0}
- os: macos-latest
environment-file: environment-mac.yml
default-shell: bash -l {0}
name: Test invoke.py on ${{ matrix.os }} with conda
runs-on: ${{ matrix.os }}
defaults:
run:
shell: ${{ matrix.default-shell }}
steps:
- name: Set platform variables
id: vars
run: |
if [ "$RUNNER_OS" = "macOS" ]; then
echo "::set-output name=ENV_FILE::environment-mac.yml"
echo "::set-output name=PYTHON_BIN::/usr/local/miniconda/envs/ldm/bin/python"
elif [ "$RUNNER_OS" = "Linux" ]; then
echo "::set-output name=ENV_FILE::environment.yml"
echo "::set-output name=PYTHON_BIN::/usr/share/miniconda/envs/ldm/bin/python"
fi
- name: Checkout sources
uses: actions/checkout@v3
- name: setup miniconda
uses: conda-incubator/setup-miniconda@v2
with:
auto-activate-base: false
auto-update-conda: false
miniconda-version: latest
- name: set environment
run: |
[[ "$GITHUB_REF" == 'refs/heads/main' ]] \
&& echo "TEST_PROMPTS=tests/preflight_prompts.txt" >> $GITHUB_ENV \
|| echo "TEST_PROMPTS=tests/dev_prompts.txt" >> $GITHUB_ENV
echo "CONDA_ROOT=$CONDA" >> $GITHUB_ENV
echo "CONDA_ENV_NAME=invokeai" >> $GITHUB_ENV
- name: Use Cached Stable Diffusion v1.4 Model
id: cache-sd-v1-4
uses: actions/cache@v3
@@ -29,42 +46,35 @@ jobs:
with:
path: models/ldm/stable-diffusion-v1/model.ckpt
key: ${{ env.cache-name }}
restore-keys: |
${{ env.cache-name }}
restore-keys: ${{ env.cache-name }}
- name: Download Stable Diffusion v1.4 Model
if: ${{ steps.cache-sd-v1-4.outputs.cache-hit != 'true' }}
run: |
if [ ! -e models/ldm/stable-diffusion-v1 ]; then
mkdir -p models/ldm/stable-diffusion-v1
fi
if [ ! -e models/ldm/stable-diffusion-v1/model.ckpt ]; then
curl -o models/ldm/stable-diffusion-v1/model.ckpt ${{ secrets.SD_V1_4_URL }}
fi
- name: Use Cached Dependencies
id: cache-conda-env-ldm
uses: actions/cache@v3
env:
cache-name: cache-conda-env-ldm
[[ -d models/ldm/stable-diffusion-v1 ]] \
|| mkdir -p models/ldm/stable-diffusion-v1
[[ -r models/ldm/stable-diffusion-v1/model.ckpt ]] \
|| curl \
-H "Authorization: Bearer ${{ secrets.HUGGINGFACE_TOKEN }}" \
-o models/ldm/stable-diffusion-v1/model.ckpt \
-L https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt
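# note: -L follows the redirect to the model CDN; the Authorization header carries the Hugging Face token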
- name: Activate Conda Env
uses: conda-incubator/setup-miniconda@v2
with:
path: ~/.conda/envs/ldm
key: ${{ env.cache-name }}
restore-keys: |
${{ env.cache-name }}-${{ runner.os }}-${{ hashFiles(steps.vars.outputs.ENV_FILE) }}
- name: Install Dependencies
if: ${{ steps.cache-conda-env-ldm.outputs.cache-hit != 'true' }}
run: |
conda env create -f ${{ steps.vars.outputs.ENV_FILE }}
activate-environment: ${{ env.CONDA_ENV_NAME }}
environment-file: ${{ matrix.environment-file }}
- name: Use Cached Huggingface and Torch models
id: cache-huggingface-torch
id: cache-hugginface-torch
uses: actions/cache@v3
env:
cache-name: cache-huggingface-torch
cache-name: cache-hugginface-torch
with:
path: ~/.cache
key: ${{ env.cache-name }}
restore-keys: |
${{ env.cache-name }}-${{ hashFiles('scripts/preload_models.py') }}
- name: Download Huggingface and Torch models
if: ${{ steps.cache-huggingface-torch.outputs.cache-hit != 'true' }}
run: |
${{ steps.vars.outputs.PYTHON_BIN }} scripts/preload_models.py
- name: run preload_models.py
run: python scripts/preload_models.py


@@ -3,9 +3,9 @@ on:
push:
branches:
- main
pull_request:
branches:
- main
# pull_request:
# branches:
# - main
jobs:
build:
name: Deploy docs to GitHub Pages

40
.github/workflows/mkdocs-material.yml vendored Normal file

@@ -0,0 +1,40 @@
name: mkdocs-material
on:
push:
branches:
- 'main'
- 'development'
jobs:
mkdocs-material:
runs-on: ubuntu-latest
steps:
- name: checkout sources
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: setup python
uses: actions/setup-python@v4
with:
python-version: '3.10'
- name: install requirements
run: |
python -m \
pip install -r requirements-mkdocs.txt
- name: confirm buildability
run: |
python -m \
mkdocs build \
--clean \
--verbose
- name: deploy to gh-pages
if: ${{ github.ref == 'refs/heads/main' }}
run: |
python -m \
mkdocs gh-deploy \
--clean \
--force


@@ -1,97 +0,0 @@
name: Test Dream with Conda
on:
push:
branches:
- 'main'
- 'development'
jobs:
os_matrix:
strategy:
matrix:
os: [ ubuntu-latest, macos-12 ]
name: Test dream.py on ${{ matrix.os }} with conda
runs-on: ${{ matrix.os }}
steps:
- run: |
echo The PR was merged
- name: Set platform variables
id: vars
run: |
# Note, can't "activate" via github action; specifying the env's python has the same effect
if [ "$RUNNER_OS" = "macOS" ]; then
echo "::set-output name=ENV_FILE::environment-mac.yml"
echo "::set-output name=PYTHON_BIN::/usr/local/miniconda/envs/ldm/bin/python"
elif [ "$RUNNER_OS" = "Linux" ]; then
echo "::set-output name=ENV_FILE::environment.yml"
echo "::set-output name=PYTHON_BIN::/usr/share/miniconda/envs/ldm/bin/python"
fi
- name: Checkout sources
uses: actions/checkout@v3
- name: Use Cached Stable Diffusion v1.4 Model
id: cache-sd-v1-4
uses: actions/cache@v3
env:
cache-name: cache-sd-v1-4
with:
path: models/ldm/stable-diffusion-v1/model.ckpt
key: ${{ env.cache-name }}
restore-keys: |
${{ env.cache-name }}
- name: Download Stable Diffusion v1.4 Model
if: ${{ steps.cache-sd-v1-4.outputs.cache-hit != 'true' }}
run: |
if [ ! -e models/ldm/stable-diffusion-v1 ]; then
mkdir -p models/ldm/stable-diffusion-v1
fi
if [ ! -e models/ldm/stable-diffusion-v1/model.ckpt ]; then
curl -o models/ldm/stable-diffusion-v1/model.ckpt ${{ secrets.SD_V1_4_URL }}
fi
- name: Use Cached Dependencies
id: cache-conda-env-ldm
uses: actions/cache@v3
env:
cache-name: cache-conda-env-ldm
with:
path: ~/.conda/envs/ldm
key: ${{ env.cache-name }}
restore-keys: |
${{ env.cache-name }}-${{ runner.os }}-${{ hashFiles(steps.vars.outputs.ENV_FILE) }}
- name: Install Dependencies
if: ${{ steps.cache-conda-env-ldm.outputs.cache-hit != 'true' }}
run: |
conda env create -f ${{ steps.vars.outputs.ENV_FILE }}
- name: Use Cached Huggingface and Torch models
id: cache-hugginface-torch
uses: actions/cache@v3
env:
cache-name: cache-hugginface-torch
with:
path: ~/.cache
key: ${{ env.cache-name }}
restore-keys: |
${{ env.cache-name }}-${{ hashFiles('scripts/preload_models.py') }}
- name: Download Huggingface and Torch models
if: ${{ steps.cache-hugginface-torch.outputs.cache-hit != 'true' }}
run: |
${{ steps.vars.outputs.PYTHON_BIN }} scripts/preload_models.py
# - name: Run tmate
# uses: mxschmitt/action-tmate@v3
# timeout-minutes: 30
- name: Run the tests
run: |
# Note, can't "activate" via github action; specifying the env's python has the same effect
if [ $(uname) = "Darwin" ]; then
export PYTORCH_ENABLE_MPS_FALLBACK=1
fi
# Utterly hacky, but I don't know how else to do this
if [[ ${{ github.ref }} == 'refs/heads/master' ]]; then
time ${{ steps.vars.outputs.PYTHON_BIN }} scripts/dream.py --from_file tests/preflight_prompts.txt
elif [[ ${{ github.ref }} == 'refs/heads/development' ]]; then
time ${{ steps.vars.outputs.PYTHON_BIN }} scripts/dream.py --from_file tests/dev_prompts.txt
fi
mkdir -p outputs/img-samples
- name: Archive results
uses: actions/upload-artifact@v3
with:
name: results
path: outputs/img-samples

109
.github/workflows/test-invoke-conda.yml vendored Normal file

@@ -0,0 +1,109 @@
name: Test invoke.py
on:
push:
branches:
- 'main'
- 'development'
pull_request:
branches:
- 'main'
- 'development'
jobs:
matrix:
strategy:
fail-fast: false
matrix:
stable-diffusion-model:
- 'https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt'
- 'https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt'
os:
- ubuntu-latest
- macOS-12
include:
- os: ubuntu-latest
environment-file: environment.yml
default-shell: bash -l {0}
- os: macOS-12
environment-file: environment-mac.yml
default-shell: bash -l {0}
- stable-diffusion-model: https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt
stable-diffusion-model-dl-path: models/ldm/stable-diffusion-v1/model.ckpt
stable-diffusion-model-switch: stable-diffusion-1.4
- stable-diffusion-model: https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt
stable-diffusion-model-dl-path: models/ldm/stable-diffusion-v1/v1-5-pruned-emaonly.ckpt
stable-diffusion-model-switch: stable-diffusion-1.5
name: ${{ matrix.os }} with ${{ matrix.stable-diffusion-model-switch }}
runs-on: ${{ matrix.os }}
env:
CONDA_ENV_NAME: invokeai
defaults:
run:
shell: ${{ matrix.default-shell }}
steps:
- name: Checkout sources
id: checkout-sources
uses: actions/checkout@v3
- name: Use cached conda packages
id: use-cached-conda-packages
uses: actions/cache@v3
with:
path: ~/conda_pkgs_dir
key: conda-pkgs-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles(matrix.environment-file) }}
- name: Activate Conda Env
id: activate-conda-env
uses: conda-incubator/setup-miniconda@v2
with:
activate-environment: ${{ env.CONDA_ENV_NAME }}
environment-file: ${{ matrix.environment-file }}
miniconda-version: latest
- name: set test prompt to main branch validation
if: ${{ github.ref == 'refs/heads/main' }}
run: echo "TEST_PROMPTS=tests/preflight_prompts.txt" >> $GITHUB_ENV
- name: set test prompt to development branch validation
if: ${{ github.ref == 'refs/heads/development' }}
run: echo "TEST_PROMPTS=tests/dev_prompts.txt" >> $GITHUB_ENV
- name: set test prompt to Pull Request validation
if: ${{ github.ref != 'refs/heads/main' && github.ref != 'refs/heads/development' }}
run: echo "TEST_PROMPTS=tests/validate_pr_prompt.txt" >> $GITHUB_ENV
- name: Download ${{ matrix.stable-diffusion-model-switch }}
id: download-stable-diffusion-model
run: |
[[ -d models/ldm/stable-diffusion-v1 ]] \
|| mkdir -p models/ldm/stable-diffusion-v1
curl \
-H "Authorization: Bearer ${{ secrets.HUGGINGFACE_TOKEN }}" \
-o ${{ matrix.stable-diffusion-model-dl-path }} \
-L ${{ matrix.stable-diffusion-model }}
- name: run preload_models.py
id: run-preload-models
run: |
python scripts/preload_models.py \
--no-interactive
- name: Run the tests
id: run-tests
run: |
time python scripts/invoke.py \
--model ${{ matrix.stable-diffusion-model-switch }} \
--from_file ${{ env.TEST_PROMPTS }}
- name: export conda env
id: export-conda-env
run: |
mkdir -p outputs/img-samples
conda env export --name ${{ env.CONDA_ENV_NAME }} > outputs/img-samples/environment-${{ runner.os }}-${{ runner.arch }}.yml
- name: Archive results
id: archive-results
uses: actions/upload-artifact@v3
with:
name: results_${{ matrix.os }}_${{ matrix.stable-diffusion-model-switch }}
path: outputs/img-samples

14
.gitignore vendored

@@ -1,7 +1,11 @@
# ignore default image save location and model symbolic link
outputs/
models/ldm/stable-diffusion-v1/model.ckpt
ldm/dream/restoration/codeformer/weights
ldm/invoke/restoration/codeformer/weights
# ignore user models config
configs/models.user.yaml
config/models.user.yml
# ignore the Anaconda/Miniconda installer used while building Docker image
anaconda.sh
@@ -195,7 +199,13 @@ checkpoints
.scratch/
.vscode/
gfpgan/
models/ldm/stable-diffusion-v1/model.sha256
models/ldm/stable-diffusion-v1/*.sha256
# GFPGAN model files
gfpgan/
# config file (will be created by installer)
configs/models.yaml
# weights (will be created by installer)
models/ldm/stable-diffusion-v1/*.ckpt


@@ -1,6 +1,6 @@
MIT License
Copyright (c) 2022 Lincoln D. Stein (https://github.com/lstein)
Copyright (c) 2022 Lincoln Stein and InvokeAI Organization
This software is derived from a fork of the source code available from
https://github.com/pesser/stable-diffusion and

101
README.md

@@ -2,14 +2,7 @@
# InvokeAI: A Stable Diffusion Toolkit
_Note: This fork is rapidly evolving. Please use the
[Issues](https://github.com/invoke-ai/InvokeAI/issues) tab to
report bugs and make feature requests. Be sure to use the provided
templates. They will help diagnose issues faster._
_This repository was formerly known as lstein/stable-diffusion_
# **Table of Contents**
_Formerly known as lstein/stable-diffusion_
![project logo](docs/assets/logo.png)
@@ -24,7 +17,7 @@ _This repository was formally known as lstein/stable-diffusion_
[CI checks on dev badge]: https://flat.badgen.net/github/checks/invoke-ai/InvokeAI/development?label=CI%20status%20on%20dev&cache=900&icon=github
[CI checks on dev link]: https://github.com/invoke-ai/InvokeAI/actions?query=branch%3Adevelopment
[CI checks on main badge]: https://flat.badgen.net/github/checks/invoke-ai/InvokeAI/main?label=CI%20status%20on%20main&cache=900&icon=github
[CI checks on main link]: https://github.com/invoke-ai/InvokeAI/actions/workflows/test-dream-conda.yml
[CI checks on main link]: https://github.com/invoke-ai/InvokeAI/actions/workflows/test-invoke-conda.yml
[discord badge]: https://flat.badgen.net/discord/members/ZmtBAhwWhy?icon=discord
[discord link]: https://discord.gg/ZmtBAhwWhy
[github forks badge]: https://flat.badgen.net/github/forks/invoke-ai/InvokeAI?icon=github
@@ -41,10 +34,18 @@ _This repository was formally known as lstein/stable-diffusion_
[latest release link]: https://github.com/invoke-ai/InvokeAI/releases
</div>
This is a fork of [CompVis/stable-diffusion](https://github.com/CompVis/stable-diffusion), the open
source text-to-image generator. It provides a streamlined process with various new features and
options to aid the image generation process. It runs on Windows, Mac and Linux machines, and runs on
GPU cards with as little as 4 GB of RAM.
This is a fork of
[CompVis/stable-diffusion](https://github.com/CompVis/stable-diffusion),
the open source text-to-image generator. It provides a streamlined
process with various new features and options to aid the image
generation process. It runs on Windows, Mac and Linux machines, with
GPU cards with as little as 4 GB of RAM. It provides both a polished
Web interface (see below), and an easy-to-use command-line interface.
**Quick links**: [<a href="https://discord.gg/NwVCmKwY">Discord Server</a>] [<a href="https://invoke-ai.github.io/InvokeAI/">Documentation and Tutorials</a>] [<a href="https://github.com/invoke-ai/InvokeAI/">Code and Downloads</a>] [<a href="https://github.com/invoke-ai/InvokeAI/issues">Bug Reports</a>] [<a href="https://github.com/invoke-ai/InvokeAI/discussions">Discussion, Ideas & Q&A</a>]
<div align="center"><img src="docs/assets/invoke-web-server-1.png" width=640></div>
_Note: This fork is rapidly evolving. Please use the
[Issues](https://github.com/invoke-ai/InvokeAI/issues) tab to report bugs and make feature
@@ -88,22 +89,28 @@ You wil need one of the following:
#### Disk
- At least 6 GB of free disk space for the machine learning model, Python, and all its dependencies.
- At least 12 GB of free disk space for the machine learning model, Python, and all its dependencies.
#### Note
**Note**
If you have a Nvidia 10xx series card (e.g. the 1080ti), please
run the dream script in full-precision mode as shown below.
Similarly, specify full-precision mode on Apple M1 hardware.
Precision is auto configured based on the device. If however you encounter
errors like 'expected type Float but found Half' or 'not implemented for Half'
you can try starting `dream.py` with the `--precision=float32` flag:
you can try starting `invoke.py` with the `--precision=float32` flag:
```bash
(ldm) ~/stable-diffusion$ python scripts/dream.py --precision=float32
(ldm) ~/stable-diffusion$ python scripts/invoke.py --precision=float32
```
### Features
#### Major Features
- [Web Server](docs/features/WEB.md)
- [Interactive Command Line Interface](docs/features/CLI.md)
- [Image To Image](docs/features/IMG2IMG.md)
- [Inpainting Support](docs/features/INPAINTING.md)
@@ -111,7 +118,6 @@ you can try starting `dream.py` with the `--precision=float32` flag:
- [Upscaling, face-restoration and outpainting](docs/features/POSTPROCESS.md)
- [Seamless Tiling](docs/features/OTHER.md#seamless-tiling)
- [Google Colab](docs/features/OTHER.md#google-colab)
- [Web Server](docs/features/WEB.md)
- [Reading Prompts From File](docs/features/PROMPTS.md#reading-prompts-from-a-file)
- [Shortcut: Reusing Seeds](docs/features/OTHER.md#shortcuts-reusing-seeds)
- [Prompt Blending](docs/features/PROMPTS.md#prompt-blending)
@@ -128,39 +134,38 @@ you can try starting `dream.py` with the `--precision=float32` flag:
### Latest Changes
- vNEXT (TODO 2022)
- v2.0.1 (13 October 2022)
- fix noisy images at high step count when using k* samplers
- dream.py script now calls invoke.py module directly rather than
via a new python process (which could break the environment)
- Deprecated `--full_precision` / `-F`. Simply omit it and `dream.py` will auto
- v2.0.0 (9 October 2022)
- `dream.py` script renamed `invoke.py`. A `dream.py` script wrapper remains
for backward compatibility.
- Completely new WebGUI - launch with `python3 scripts/invoke.py --web`
- Support for <a href="https://github.com/invoke-ai/InvokeAI/blob/main/docs/features/INPAINTING.md">inpainting</a> and <a href="https://github.com/invoke-ai/InvokeAI/blob/main/docs/features/OUTPAINTING.md">outpainting</a>
- img2img runs on all k* samplers
- Support for <a href="https://github.com/invoke-ai/InvokeAI/blob/main/docs/features/PROMPTS.md#negative-and-unconditioned-prompts">negative prompts</a>
- Support for CodeFormer face reconstruction
- Support for Textual Inversion on Macintoshes
- Support in both WebGUI and CLI for <a href="https://github.com/invoke-ai/InvokeAI/blob/main/docs/features/POSTPROCESS.md">post-processing of previously-generated images</a>
using facial reconstruction, ESRGAN upscaling, outcropping (similar to DALL-E infinite canvas),
and "embiggen" upscaling. See the `!fix` command.
- New `--hires` option on `invoke>` line allows <a href="https://github.com/invoke-ai/InvokeAI/blob/main/docs/features/CLI.md#this-is-an-example-of-txt2img">larger images to be created without duplicating elements</a>, at the cost of some performance.
- New `--perlin` and `--threshold` options allow you to add and control variation
during image generation (see <a href="https://github.com/invoke-ai/InvokeAI/blob/main/docs/features/OTHER.md#thresholding-and-perlin-noise-initialization-options">Thresholding and Perlin Noise Initialization</a>)
- Extensive metadata now written into PNG files, allowing reliable regeneration of images
and tweaking of previous settings.
- Command-line completion in `invoke.py` now works on Windows, Linux and Mac platforms.
- Improved <a href="https://github.com/invoke-ai/InvokeAI/blob/main/docs/features/CLI.md">command-line completion behavior</a>.
New commands added:
* List command-line history with `!history`
* Search command-line history with `!search`
* Clear history with `!clear`
- Deprecated `--full_precision` / `-F`. Simply omit it and `invoke.py` will auto
configure. To switch away from auto use the new flag like `--precision=float32`.
- v1.14 (11 September 2022)
- Memory optimizations for small-RAM cards. 512x512 now possible on 4 GB GPUs.
- Full support for Apple hardware with M1 or M2 chips.
- Add "seamless mode" for circular tiling of image. Generates beautiful effects.
([prixt](https://github.com/prixt)).
- Inpainting support.
- Improved web server GUI.
- Lots of code and documentation cleanups.
- v1.13 (3 September 2022)
- Support image variations (see [VARIATIONS](docs/features/VARIATIONS.md))
([Kevin Gibbons](https://github.com/bakkot) and many contributors and reviewers)
- Supports a Google Colab notebook for a standalone server running on Google hardware
[Arturo Mendivil](https://github.com/artmen1516)
- WebUI supports GFPGAN/ESRGAN facial reconstruction and upscaling
[Kevin Gibbons](https://github.com/bakkot)
- WebUI supports incremental display of in-progress images during generation
[Kevin Gibbons](https://github.com/bakkot)
- A new configuration file scheme that allows new models (including upcoming
stable-diffusion-v1.5) to be added without altering the code.
([David Wager](https://github.com/maddavid12))
- Can specify --grid on dream.py command line as the default.
- Miscellaneous internal bug and stability fixes.
- Works on M1 Apple hardware.
- Multiple bug fixes.
For older changelogs, please visit the **[CHANGELOG](docs/features/CHANGELOG.md)**.
### Troubleshooting

BIN
assets/caution.png Normal file

Binary file not shown.


File diff suppressed because it is too large.


@@ -1,6 +1,6 @@
import argparse
import os
from ldm.dream.args import PRECISION_CHOICES
from ldm.invoke.args import PRECISION_CHOICES
def create_cmd_parser():


@@ -15,7 +15,7 @@ SAMPLER_CHOICES = [
def parameters_to_command(params):
"""
Converts dict of parameters into a `dream.py` REPL command.
Converts dict of parameters into a `invoke.py` REPL command.
"""
switches = list()
@@ -36,6 +36,8 @@ def parameters_to_command(params):
switches.append(f'-A {params["sampler_name"]}')
if "seamless" in params and params["seamless"] == True:
switches.append(f"--seamless")
if "hires_fix" in params and params["hires_fix"] == True:
switches.append(f"--hires")
if "init_img" in params and len(params["init_img"]) > 0:
switches.append(f'-I {params["init_img"]}')
if "init_mask" in params and len(params["init_mask"]) > 0:
@@ -46,8 +48,14 @@ def parameters_to_command(params):
switches.append(f'-f {params["strength"]}')
if "fit" in params and params["fit"] == True:
switches.append(f"--fit")
if "gfpgan_strength" in params and params["gfpgan_strength"]:
if "facetool" in params:
switches.append(f'-ft {params["facetool"]}')
if "facetool_strength" in params and params["facetool_strength"]:
switches.append(f'-G {params["facetool_strength"]}')
elif "gfpgan_strength" in params and params["gfpgan_strength"]:
switches.append(f'-G {params["gfpgan_strength"]}')
if "codeformer_fidelity" in params:
switches.append(f'-cf {params["codeformer_fidelity"]}')
if "upscale" in params and params["upscale"]:
switches.append(f'-U {params["upscale"][0]} {params["upscale"][1]}')
if "variation_amount" in params and params["variation_amount"] > 0:


@@ -1,821 +0,0 @@
import mimetypes
import transformers
import json
import os
import traceback
import eventlet
import glob
import shlex
import math
import shutil
import sys
sys.path.append(".")
from argparse import ArgumentTypeError
from modules.create_cmd_parser import create_cmd_parser
parser = create_cmd_parser()
opt = parser.parse_args()
from flask_socketio import SocketIO
from flask import Flask, send_from_directory, url_for, jsonify
from pathlib import Path
from PIL import Image
from pytorch_lightning import logging
from threading import Event
from uuid import uuid4
from send2trash import send2trash
from ldm.generate import Generate
from ldm.dream.restoration import Restoration
from ldm.dream.pngwriter import PngWriter, retrieve_metadata
from ldm.dream.args import APP_ID, APP_VERSION, calculate_init_img_hash
from ldm.dream.conditioning import split_weighted_subprompts
from modules.parameters import parameters_to_command
"""
USER CONFIG
"""
if opt.cors and "*" in opt.cors:
raise ArgumentTypeError('"*" is not an allowed CORS origin')
output_dir = "outputs/" # Base output directory for images
host = opt.host # Web & socket.io host
port = opt.port # Web & socket.io port
verbose = opt.verbose # enables copious socket.io logging
precision = opt.precision
free_gpu_mem = opt.free_gpu_mem
embedding_path = opt.embedding_path
additional_allowed_origins = (
opt.cors if opt.cors else []
) # additional CORS allowed origins
model = "stable-diffusion-1.4"
"""
END USER CONFIG
"""
print("* Initializing, be patient...\n")
"""
SERVER SETUP
"""
# fix missing mimetypes on windows due to registry wonkiness
mimetypes.add_type("application/javascript", ".js")
mimetypes.add_type("text/css", ".css")
app = Flask(__name__, static_url_path="", static_folder="../frontend/dist/")
app.config["OUTPUTS_FOLDER"] = "../outputs"
@app.route("/outputs/<path:filename>")
def outputs(filename):
return send_from_directory(app.config["OUTPUTS_FOLDER"], filename)
@app.route("/", defaults={"path": ""})
def serve(path):
return send_from_directory(app.static_folder, "index.html")
logger = True if verbose else False
engineio_logger = True if verbose else False
# default 1,000,000, needs to be higher for socketio to accept larger images
max_http_buffer_size = 10000000
cors_allowed_origins = [f"http://{host}:{port}"] + additional_allowed_origins
socketio = SocketIO(
app,
logger=logger,
engineio_logger=engineio_logger,
max_http_buffer_size=max_http_buffer_size,
cors_allowed_origins=cors_allowed_origins,
ping_interval=(50, 50),
ping_timeout=60,
)
"""
END SERVER SETUP
"""
"""
APP SETUP
"""
class CanceledException(Exception):
pass
try:
gfpgan, codeformer, esrgan = None, None, None
from ldm.dream.restoration.base import Restoration
restoration = Restoration()
gfpgan, codeformer = restoration.load_face_restore_models()
esrgan = restoration.load_esrgan()
# coreformer.process(self, image, strength, device, seed=None, fidelity=0.75)
except (ModuleNotFoundError, ImportError):
print(traceback.format_exc(), file=sys.stderr)
print(">> You may need to install the ESRGAN and/or GFPGAN modules")
canceled = Event()
# reduce logging outputs to error
transformers.logging.set_verbosity_error()
logging.getLogger("pytorch_lightning").setLevel(logging.ERROR)
# Initialize and load model
generate = Generate(
model,
precision=precision,
embedding_path=embedding_path,
)
generate.free_gpu_mem = free_gpu_mem
generate.load_model()
# location for "finished" images
result_path = os.path.join(output_dir, "img-samples/")
# temporary path for intermediates
intermediate_path = os.path.join(result_path, "intermediates/")
# path for user-uploaded init images and masks
init_image_path = os.path.join(result_path, "init-images/")
mask_image_path = os.path.join(result_path, "mask-images/")
# txt log
log_path = os.path.join(result_path, "dream_log.txt")
# make all output paths
[
os.makedirs(path, exist_ok=True)
for path in [result_path, intermediate_path, init_image_path, mask_image_path]
]
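# (the comprehension above is used only for its os.makedirs side effects)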
"""
END APP SETUP
"""
"""
SOCKET.IO LISTENERS
"""
@socketio.on("requestSystemConfig")
def handle_request_capabilities():
print(f">> System config requested")
config = get_system_config()
socketio.emit("systemConfig", config)
@socketio.on("requestImages")
def handle_request_images(page=1, offset=0, last_mtime=None):
chunk_size = 50
if last_mtime:
print(f">> Latest images requested")
else:
print(
f">> Page {page} of images requested (page size {chunk_size} offset {offset})"
)
paths = glob.glob(os.path.join(result_path, "*.png"))
sorted_paths = sorted(paths, key=lambda x: os.path.getmtime(x), reverse=True)
if last_mtime:
image_paths = filter(lambda x: os.path.getmtime(x) > last_mtime, sorted_paths)
else:
image_paths = sorted_paths[
slice(chunk_size * (page - 1) + offset, chunk_size * page + offset)
]
page = page + 1
image_array = []
for path in image_paths:
metadata = retrieve_metadata(path)
image_array.append(
{
"url": path,
"mtime": os.path.getmtime(path),
"metadata": metadata["sd-metadata"],
}
)
socketio.emit(
"galleryImages",
{
"images": image_array,
"nextPage": page,
"offset": offset,
"onlyNewImages": True if last_mtime else False,
},
)
@socketio.on("generateImage")
def handle_generate_image_event(
generation_parameters, esrgan_parameters, gfpgan_parameters
):
print(
f">> Image generation requested: {generation_parameters}\nESRGAN parameters: {esrgan_parameters}\nGFPGAN parameters: {gfpgan_parameters}"
)
generate_images(generation_parameters, esrgan_parameters, gfpgan_parameters)
@socketio.on("runESRGAN")
def handle_run_esrgan_event(original_image, esrgan_parameters):
print(
f'>> ESRGAN upscale requested for "{original_image["url"]}": {esrgan_parameters}'
)
progress = {
"currentStep": 1,
"totalSteps": 1,
"currentIteration": 1,
"totalIterations": 1,
"currentStatus": "Preparing",
"isProcessing": True,
"currentStatusHasSteps": False,
}
socketio.emit("progressUpdate", progress)
eventlet.sleep(0)
image = Image.open(original_image["url"])
seed = (
original_image["metadata"]["seed"]
if "seed" in original_image["metadata"]
else "unknown_seed"
)
progress["currentStatus"] = "Upscaling"
socketio.emit("progressUpdate", progress)
eventlet.sleep(0)
image = esrgan.process(
image=image,
upsampler_scale=esrgan_parameters["upscale"][0],
strength=esrgan_parameters["upscale"][1],
seed=seed,
)
progress["currentStatus"] = "Saving image"
socketio.emit("progressUpdate", progress)
eventlet.sleep(0)
esrgan_parameters["seed"] = seed
metadata = parameters_to_post_processed_image_metadata(
parameters=esrgan_parameters,
original_image_path=original_image["url"],
type="esrgan",
)
command = parameters_to_command(esrgan_parameters)
path = save_image(image, command, metadata, result_path, postprocessing="esrgan")
write_log_message(f'[Upscaled] "{original_image["url"]}" > "{path}": {command}')
progress["currentStatus"] = "Finished"
progress["currentStep"] = 0
progress["totalSteps"] = 0
progress["currentIteration"] = 0
progress["totalIterations"] = 0
progress["isProcessing"] = False
socketio.emit("progressUpdate", progress)
eventlet.sleep(0)
socketio.emit(
"esrganResult",
{
"url": os.path.relpath(path),
"mtime": os.path.getmtime(path),
"metadata": metadata,
},
)
@socketio.on("runGFPGAN")
def handle_run_gfpgan_event(original_image, gfpgan_parameters):
print(
f'>> GFPGAN face fix requested for "{original_image["url"]}": {gfpgan_parameters}'
)
progress = {
"currentStep": 1,
"totalSteps": 1,
"currentIteration": 1,
"totalIterations": 1,
"currentStatus": "Preparing",
"isProcessing": True,
"currentStatusHasSteps": False,
}
socketio.emit("progressUpdate", progress)
eventlet.sleep(0)
image = Image.open(original_image["url"])
seed = (
original_image["metadata"]["seed"]
if "seed" in original_image["metadata"]
else "unknown_seed"
)
progress["currentStatus"] = "Fixing faces"
socketio.emit("progressUpdate", progress)
eventlet.sleep(0)
image = gfpgan.process(
image=image, strength=gfpgan_parameters["gfpgan_strength"], seed=seed
)
progress["currentStatus"] = "Saving image"
socketio.emit("progressUpdate", progress)
eventlet.sleep(0)
gfpgan_parameters["seed"] = seed
metadata = parameters_to_post_processed_image_metadata(
parameters=gfpgan_parameters,
original_image_path=original_image["url"],
type="gfpgan",
)
command = parameters_to_command(gfpgan_parameters)
path = save_image(image, command, metadata, result_path, postprocessing="gfpgan")
write_log_message(f'[Fixed faces] "{original_image["url"]}" > "{path}": {command}')
progress["currentStatus"] = "Finished"
progress["currentStep"] = 0
progress["totalSteps"] = 0
progress["currentIteration"] = 0
progress["totalIterations"] = 0
progress["isProcessing"] = False
socketio.emit("progressUpdate", progress)
eventlet.sleep(0)
socketio.emit(
"gfpganResult",
{
"url": os.path.relpath(path),
"mtime": os.path.mtime(path),
"metadata": metadata,
},
)
@socketio.on("cancel")
def handle_cancel():
print(f">> Cancel processing requested")
canceled.set()
socketio.emit("processingCanceled")
# TODO: I think this needs a safety mechanism.
@socketio.on("deleteImage")
def handle_delete_image(path, uuid):
print(f'>> Delete requested "{path}"')
send2trash(path)
socketio.emit("imageDeleted", {"url": path, "uuid": uuid})
# TODO: I think this needs a safety mechanism.
@socketio.on("uploadInitialImage")
def handle_upload_initial_image(bytes, name):
print(f'>> Init image upload requested "{name}"')
uuid = uuid4().hex
split = os.path.splitext(name)
name = f"{split[0]}.{uuid}{split[1]}"
file_path = os.path.join(init_image_path, name)
os.makedirs(os.path.dirname(file_path), exist_ok=True)
with open(file_path, "wb") as new_file:
new_file.write(bytes)
socketio.emit("initialImageUploaded", {"url": file_path, "uuid": ""})
# TODO: I think this needs a safety mechanism.
@socketio.on("uploadMaskImage")
def handle_upload_mask_image(bytes, name):
print(f'>> Mask image upload requested "{name}"')
uuid = uuid4().hex
split = os.path.splitext(name)
name = f"{split[0]}.{uuid}{split[1]}"
file_path = os.path.join(mask_image_path, name)
os.makedirs(os.path.dirname(file_path), exist_ok=True)
with open(file_path, "wb") as new_file:
new_file.write(bytes)
socketio.emit("maskImageUploaded", {"url": file_path, "uuid": ""})
"""
END SOCKET.IO LISTENERS
"""
"""
ADDITIONAL FUNCTIONS
"""
def get_system_config():
return {
"model": "stable diffusion",
"model_id": model,
"model_hash": generate.model_hash,
"app_id": APP_ID,
"app_version": APP_VERSION,
}
def parameters_to_post_processed_image_metadata(parameters, original_image_path, type):
# top-level metadata minus `image` or `images`
metadata = get_system_config()
orig_hash = calculate_init_img_hash(original_image_path)
image = {"orig_path": original_image_path, "orig_hash": orig_hash}
if type == "esrgan":
image["type"] = "esrgan"
image["scale"] = parameters["upscale"][0]
image["strength"] = parameters["upscale"][1]
elif type == "gfpgan":
image["type"] = "gfpgan"
image["strength"] = parameters["gfpgan_strength"]
else:
raise TypeError(f"Invalid type: {type}")
metadata["image"] = image
return metadata
def parameters_to_generated_image_metadata(parameters):
# top-level metadata minus `image` or `images`
metadata = get_system_config()
# remove any image keys not mentioned in RFC #266
rfc266_img_fields = [
"type",
"postprocessing",
"sampler",
"prompt",
"seed",
"variations",
"steps",
"cfg_scale",
"threshold",
"perlin",
"step_number",
"width",
"height",
"extra",
"seamless",
]
rfc_dict = {}
for item in parameters.items():
key, value = item
if key in rfc266_img_fields:
rfc_dict[key] = value
postprocessing = []
# 'postprocessing' is either null or an array of applied postprocessing steps
if "gfpgan_strength" in parameters:
postprocessing.append(
{"type": "gfpgan", "strength": float(parameters["gfpgan_strength"])}
)
if "upscale" in parameters:
postprocessing.append(
{
"type": "esrgan",
"scale": int(parameters["upscale"][0]),
"strength": float(parameters["upscale"][1]),
}
)
rfc_dict["postprocessing"] = postprocessing if len(postprocessing) > 0 else None
# semantic drift: the generator takes 'sampler_name' but the RFC #266 metadata schema calls it 'sampler'
rfc_dict["sampler"] = parameters["sampler_name"]
# display weighted subprompts (liable to change)
subprompts = split_weighted_subprompts(parameters["prompt"])
subprompts = [{"prompt": x[0], "weight": x[1]} for x in subprompts]
rfc_dict["prompt"] = subprompts
# 'variations' should always exist and be an array, empty or consisting of {'seed': seed, 'weight': weight} pairs
variations = []
if "with_variations" in parameters:
variations = [
{"seed": x[0], "weight": x[1]} for x in parameters["with_variations"]
]
rfc_dict["variations"] = variations
if "init_img" in parameters:
rfc_dict["type"] = "img2img"
rfc_dict["strength"] = parameters["strength"]
rfc_dict["fit"] = parameters["fit"] # TODO: Noncompliant
rfc_dict["orig_hash"] = calculate_init_img_hash(parameters["init_img"])
rfc_dict["init_image_path"] = parameters["init_img"] # TODO: Noncompliant
rfc_dict["sampler"] = "ddim" # TODO: FIX ME WHEN IMG2IMG SUPPORTS ALL SAMPLERS
if "init_mask" in parameters:
rfc_dict["mask_hash"] = calculate_init_img_hash(
parameters["init_mask"]
) # TODO: Noncompliant
rfc_dict["mask_image_path"] = parameters["init_mask"] # TODO: Noncompliant
else:
rfc_dict["type"] = "txt2img"
metadata["image"] = rfc_dict
return metadata
def make_unique_init_image_filename(name):
uuid = uuid4().hex
split = os.path.splitext(name)
name = f"{split[0]}.{uuid}{split[1]}"
return name
def write_log_message(message, log_path=log_path):
"""Logs the filename and parameters used to generate or process that image to log file"""
message = f"{message}\n"
with open(log_path, "a", encoding="utf-8") as file:
file.writelines(message)
def save_image(
image, command, metadata, output_dir, step_index=None, postprocessing=False
):
pngwriter = PngWriter(output_dir)
prefix = pngwriter.unique_prefix()
seed = "unknown_seed"
if "image" in metadata:
if "seed" in metadata["image"]:
seed = metadata["image"]["seed"]
filename = f"{prefix}.{seed}"
if step_index:
filename += f".{step_index}"
if postprocessing:
filename += f".postprocessed"
filename += ".png"
path = pngwriter.save_image_and_prompt_to_png(
image=image, dream_prompt=command, metadata=metadata, name=filename
)
return path
def calculate_real_steps(steps, strength, has_init_image):
return math.floor(strength * steps) if has_init_image else steps
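# Worked example (illustrative): steps=50, strength=0.75 with an init image
# gives math.floor(0.75 * 50) = 37 denoising steps actually run, which is
# why img2img reports fewer steps taken than requested.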
def generate_images(generation_parameters, esrgan_parameters, gfpgan_parameters):
canceled.clear()
step_index = 1
prior_variations = (
generation_parameters["with_variations"]
if "with_variations" in generation_parameters
else []
)
"""
If a result image is used as an init image, and then deleted, we will want to be
able to use it as an init image in the future. Need to copy it.
If the init/mask image doesn't exist in the init_image_path/mask_image_path,
make a unique filename for it and copy it there.
"""
if "init_img" in generation_parameters:
filename = os.path.basename(generation_parameters["init_img"])
if not os.path.exists(os.path.join(init_image_path, filename)):
unique_filename = make_unique_init_image_filename(filename)
new_path = os.path.join(init_image_path, unique_filename)
shutil.copy(generation_parameters["init_img"], new_path)
generation_parameters["init_img"] = new_path
if "init_mask" in generation_parameters:
filename = os.path.basename(generation_parameters["init_mask"])
if not os.path.exists(os.path.join(mask_image_path, filename)):
unique_filename = make_unique_init_image_filename(filename)
new_path = os.path.join(mask_image_path, unique_filename)
shutil.copy(generation_parameters["init_mask"], new_path)
generation_parameters["init_mask"] = new_path
totalSteps = calculate_real_steps(
steps=generation_parameters["steps"],
strength=generation_parameters["strength"]
if "strength" in generation_parameters
else None,
has_init_image="init_img" in generation_parameters,
)
progress = {
"currentStep": 1,
"totalSteps": totalSteps,
"currentIteration": 1,
"totalIterations": generation_parameters["iterations"],
"currentStatus": "Preparing",
"isProcessing": True,
"currentStatusHasSteps": False,
}
socketio.emit("progressUpdate", progress)
eventlet.sleep(0)
def image_progress(sample, step):
if canceled.is_set():
raise CanceledException
nonlocal step_index
nonlocal generation_parameters
nonlocal progress
progress["currentStep"] = step + 1
progress["currentStatus"] = "Generating"
progress["currentStatusHasSteps"] = True
if (
generation_parameters["progress_images"]
and step % 5 == 0
and step < generation_parameters["steps"] - 1
):
image = generate.sample_to_image(sample)
metadata = parameters_to_generated_image_metadata(generation_parameters)
command = parameters_to_command(generation_parameters)
path = save_image(image, command, metadata, intermediate_path, step_index=step_index, postprocessing=False)
step_index += 1
socketio.emit(
"intermediateResult",
{
"url": os.path.relpath(path),
"mtime": os.path.getmtime(path),
"metadata": metadata,
},
)
socketio.emit("progressUpdate", progress)
eventlet.sleep(0)
def image_done(image, seed, first_seed):
nonlocal generation_parameters
nonlocal esrgan_parameters
nonlocal gfpgan_parameters
nonlocal progress
step_index = 1
nonlocal prior_variations
progress["currentStatus"] = "Generation complete"
socketio.emit("progressUpdate", progress)
eventlet.sleep(0)
all_parameters = generation_parameters
postprocessing = False
if (
"variation_amount" in all_parameters
and all_parameters["variation_amount"] > 0
):
first_seed = first_seed or seed
this_variation = [[seed, all_parameters["variation_amount"]]]
all_parameters["with_variations"] = prior_variations + this_variation
all_parameters["seed"] = first_seed
elif ("with_variations" in all_parameters):
all_parameters["seed"] = first_seed
else:
all_parameters["seed"] = seed
if esrgan_parameters:
progress["currentStatus"] = "Upscaling"
progress["currentStatusHasSteps"] = False
socketio.emit("progressUpdate", progress)
eventlet.sleep(0)
image = esrgan.process(
image=image,
upsampler_scale=esrgan_parameters["level"],
strength=esrgan_parameters["strength"],
seed=seed,
)
postprocessing = True
all_parameters["upscale"] = [
esrgan_parameters["level"],
esrgan_parameters["strength"],
]
if gfpgan_parameters:
progress["currentStatus"] = "Fixing faces"
progress["currentStatusHasSteps"] = False
socketio.emit("progressUpdate", progress)
eventlet.sleep(0)
image = gfpgan.process(
image=image, strength=gfpgan_parameters["strength"], seed=seed
)
postprocessing = True
all_parameters["gfpgan_strength"] = gfpgan_parameters["strength"]
progress["currentStatus"] = "Saving image"
socketio.emit("progressUpdate", progress)
eventlet.sleep(0)
metadata = parameters_to_generated_image_metadata(all_parameters)
command = parameters_to_command(all_parameters)
path = save_image(
image, command, metadata, result_path, postprocessing=postprocessing
)
print(f'>> Image generated: "{path}"')
write_log_message(f'[Generated] "{path}": {command}')
if progress["totalIterations"] > progress["currentIteration"]:
progress["currentStep"] = 1
progress["currentIteration"] += 1
progress["currentStatus"] = "Iteration finished"
progress["currentStatusHasSteps"] = False
else:
progress["currentStep"] = 0
progress["totalSteps"] = 0
progress["currentIteration"] = 0
progress["totalIterations"] = 0
progress["currentStatus"] = "Finished"
progress["isProcessing"] = False
socketio.emit("progressUpdate", progress)
eventlet.sleep(0)
socketio.emit(
"generationResult",
{
"url": os.path.relpath(path),
"mtime": os.path.getmtime(path),
"metadata": metadata,
},
)
eventlet.sleep(0)
try:
generate.prompt2image(
**generation_parameters,
step_callback=image_progress,
image_callback=image_done,
)
except KeyboardInterrupt:
raise
except CanceledException:
pass
except Exception as e:
socketio.emit("error", {"message": (str(e))})
print("\n")
traceback.print_exc()
print("\n")
"""
END ADDITIONAL FUNCTIONS
"""
if __name__ == "__main__":
print(f">> Starting server at http://{host}:{port}")
socketio.run(app, host=host, port=port)
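For orientation, a rough sketch of how this legacy server drives the Generate API; the constructor arguments and callback signatures mirror the file above, but the precision value and prompt parameters are assumptions, so treat it as an illustration rather than the module's contract:

```python
from ldm.generate import Generate

gr = Generate("stable-diffusion-1.4", precision="auto")  # precision value assumed
gr.load_model()

def on_step(sample, step):
    # per-step callback, as in image_progress() above
    print(f">> step {step + 1}")

def on_done(image, seed, first_seed):
    # per-image callback, as in image_done() above
    image.save(f"{seed}.png")

gr.prompt2image(
    prompt="a sunset over the mountains",
    steps=50,
    step_callback=on_step,
    image_callback=on_done,
)
```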


@@ -1,54 +0,0 @@
model:
base_learning_rate: 4.5e-6
target: ldm.models.autoencoder.AutoencoderKL
params:
monitor: "val/rec_loss"
embed_dim: 16
lossconfig:
target: ldm.modules.losses.LPIPSWithDiscriminator
params:
disc_start: 50001
kl_weight: 0.000001
disc_weight: 0.5
ddconfig:
double_z: True
z_channels: 16
resolution: 256
in_channels: 3
out_ch: 3
ch: 128
ch_mult: [ 1,1,2,2,4] # num_down = len(ch_mult)-1
num_res_blocks: 2
attn_resolutions: [16]
dropout: 0.0
data:
target: main.DataModuleFromConfig
params:
batch_size: 12
wrap: True
train:
target: ldm.data.imagenet.ImageNetSRTrain
params:
size: 256
degradation: pil_nearest
validation:
target: ldm.data.imagenet.ImageNetSRValidation
params:
size: 256
degradation: pil_nearest
lightning:
callbacks:
image_logger:
target: main.ImageLogger
params:
batch_frequency: 1000
max_images: 8
increase_log_steps: True
trainer:
benchmark: True
accumulate_grad_batches: 2


@@ -1,53 +0,0 @@
model:
base_learning_rate: 4.5e-6
target: ldm.models.autoencoder.AutoencoderKL
params:
monitor: "val/rec_loss"
embed_dim: 4
lossconfig:
target: ldm.modules.losses.LPIPSWithDiscriminator
params:
disc_start: 50001
kl_weight: 0.000001
disc_weight: 0.5
ddconfig:
double_z: True
z_channels: 4
resolution: 256
in_channels: 3
out_ch: 3
ch: 128
ch_mult: [ 1,2,4,4 ] # num_down = len(ch_mult)-1
num_res_blocks: 2
attn_resolutions: [ ]
dropout: 0.0
data:
target: main.DataModuleFromConfig
params:
batch_size: 12
wrap: True
train:
target: ldm.data.imagenet.ImageNetSRTrain
params:
size: 256
degradation: pil_nearest
validation:
target: ldm.data.imagenet.ImageNetSRValidation
params:
size: 256
degradation: pil_nearest
lightning:
callbacks:
image_logger:
target: main.ImageLogger
params:
batch_frequency: 1000
max_images: 8
increase_log_steps: True
trainer:
benchmark: True
accumulate_grad_batches: 2


@@ -1,54 +0,0 @@
model:
base_learning_rate: 4.5e-6
target: ldm.models.autoencoder.AutoencoderKL
params:
monitor: "val/rec_loss"
embed_dim: 3
lossconfig:
target: ldm.modules.losses.LPIPSWithDiscriminator
params:
disc_start: 50001
kl_weight: 0.000001
disc_weight: 0.5
ddconfig:
double_z: True
z_channels: 3
resolution: 256
in_channels: 3
out_ch: 3
ch: 128
ch_mult: [ 1,2,4 ] # num_down = len(ch_mult)-1
num_res_blocks: 2
attn_resolutions: [ ]
dropout: 0.0
data:
target: main.DataModuleFromConfig
params:
batch_size: 12
wrap: True
train:
target: ldm.data.imagenet.ImageNetSRTrain
params:
size: 256
degradation: pil_nearest
validation:
target: ldm.data.imagenet.ImageNetSRValidation
params:
size: 256
degradation: pil_nearest
lightning:
callbacks:
image_logger:
target: main.ImageLogger
params:
batch_frequency: 1000
max_images: 8
increase_log_steps: True
trainer:
benchmark: True
accumulate_grad_batches: 2


@@ -1,53 +0,0 @@
model:
base_learning_rate: 4.5e-6
target: ldm.models.autoencoder.AutoencoderKL
params:
monitor: "val/rec_loss"
embed_dim: 64
lossconfig:
target: ldm.modules.losses.LPIPSWithDiscriminator
params:
disc_start: 50001
kl_weight: 0.000001
disc_weight: 0.5
ddconfig:
double_z: True
z_channels: 64
resolution: 256
in_channels: 3
out_ch: 3
ch: 128
ch_mult: [ 1,1,2,2,4,4] # num_down = len(ch_mult)-1
num_res_blocks: 2
attn_resolutions: [16,8]
dropout: 0.0
data:
target: main.DataModuleFromConfig
params:
batch_size: 12
wrap: True
train:
target: ldm.data.imagenet.ImageNetSRTrain
params:
size: 256
degradation: pil_nearest
validation:
target: ldm.data.imagenet.ImageNetSRValidation
params:
size: 256
degradation: pil_nearest
lightning:
callbacks:
image_logger:
target: main.ImageLogger
params:
batch_frequency: 1000
max_images: 8
increase_log_steps: True
trainer:
benchmark: True
accumulate_grad_batches: 2


@@ -1,86 +0,0 @@
model:
base_learning_rate: 2.0e-06
target: ldm.models.diffusion.ddpm.LatentDiffusion
params:
linear_start: 0.0015
linear_end: 0.0195
num_timesteps_cond: 1
log_every_t: 200
timesteps: 1000
first_stage_key: image
image_size: 64
channels: 3
monitor: val/loss_simple_ema
unet_config:
target: ldm.modules.diffusionmodules.openaimodel.UNetModel
params:
image_size: 64
in_channels: 3
out_channels: 3
model_channels: 224
attention_resolutions:
# note: this isn't actually the resolution but
# the downsampling factor, i.e. this corresponds to
# attention on spatial resolution 8,16,32, as the
# spatial resolution of the latents is 64 for f4
- 8
- 4
- 2
num_res_blocks: 2
channel_mult:
- 1
- 2
- 3
- 4
num_head_channels: 32
first_stage_config:
target: ldm.models.autoencoder.VQModelInterface
params:
embed_dim: 3
n_embed: 8192
ckpt_path: models/first_stage_models/vq-f4/model.ckpt
ddconfig:
double_z: false
z_channels: 3
resolution: 256
in_channels: 3
out_ch: 3
ch: 128
ch_mult:
- 1
- 2
- 4
num_res_blocks: 2
attn_resolutions: []
dropout: 0.0
lossconfig:
target: torch.nn.Identity
cond_stage_config: __is_unconditional__
data:
target: main.DataModuleFromConfig
params:
batch_size: 48
num_workers: 5
wrap: false
train:
target: taming.data.faceshq.CelebAHQTrain
params:
size: 256
validation:
target: taming.data.faceshq.CelebAHQValidation
params:
size: 256
lightning:
callbacks:
image_logger:
target: main.ImageLogger
params:
batch_frequency: 5000
max_images: 8
increase_log_steps: False
trainer:
benchmark: True


@@ -1,98 +0,0 @@
model:
base_learning_rate: 1.0e-06
target: ldm.models.diffusion.ddpm.LatentDiffusion
params:
linear_start: 0.0015
linear_end: 0.0195
num_timesteps_cond: 1
log_every_t: 200
timesteps: 1000
first_stage_key: image
cond_stage_key: class_label
image_size: 32
channels: 4
cond_stage_trainable: true
conditioning_key: crossattn
monitor: val/loss_simple_ema
unet_config:
target: ldm.modules.diffusionmodules.openaimodel.UNetModel
params:
image_size: 32
in_channels: 4
out_channels: 4
model_channels: 256
attention_resolutions:
# note: this isn't actually the resolution but
# the downsampling factor, i.e. this corresponds to
# attention on spatial resolution 8,16,32, as the
# spatial resolution of the latents is 32 for f8
- 4
- 2
- 1
num_res_blocks: 2
channel_mult:
- 1
- 2
- 4
num_head_channels: 32
use_spatial_transformer: true
transformer_depth: 1
context_dim: 512
first_stage_config:
target: ldm.models.autoencoder.VQModelInterface
params:
embed_dim: 4
n_embed: 16384
ckpt_path: configs/first_stage_models/vq-f8/model.yaml
ddconfig:
double_z: false
z_channels: 4
resolution: 256
in_channels: 3
out_ch: 3
ch: 128
ch_mult:
- 1
- 2
- 2
- 4
num_res_blocks: 2
attn_resolutions:
- 32
dropout: 0.0
lossconfig:
target: torch.nn.Identity
cond_stage_config:
target: ldm.modules.encoders.modules.ClassEmbedder
params:
embed_dim: 512
key: class_label
data:
target: main.DataModuleFromConfig
params:
batch_size: 64
num_workers: 12
wrap: false
train:
target: ldm.data.imagenet.ImageNetTrain
params:
config:
size: 256
validation:
target: ldm.data.imagenet.ImageNetValidation
params:
config:
size: 256
lightning:
callbacks:
image_logger:
target: main.ImageLogger
params:
batch_frequency: 5000
max_images: 8
increase_log_steps: False
trainer:
benchmark: True


@@ -1,68 +0,0 @@
model:
base_learning_rate: 0.0001
target: ldm.models.diffusion.ddpm.LatentDiffusion
params:
linear_start: 0.0015
linear_end: 0.0195
num_timesteps_cond: 1
log_every_t: 200
timesteps: 1000
first_stage_key: image
cond_stage_key: class_label
image_size: 64
channels: 3
cond_stage_trainable: true
conditioning_key: crossattn
monitor: val/loss
use_ema: False
unet_config:
target: ldm.modules.diffusionmodules.openaimodel.UNetModel
params:
image_size: 64
in_channels: 3
out_channels: 3
model_channels: 192
attention_resolutions:
- 8
- 4
- 2
num_res_blocks: 2
channel_mult:
- 1
- 2
- 3
- 5
num_heads: 1
use_spatial_transformer: true
transformer_depth: 1
context_dim: 512
first_stage_config:
target: ldm.models.autoencoder.VQModelInterface
params:
embed_dim: 3
n_embed: 8192
ddconfig:
double_z: false
z_channels: 3
resolution: 256
in_channels: 3
out_ch: 3
ch: 128
ch_mult:
- 1
- 2
- 4
num_res_blocks: 2
attn_resolutions: []
dropout: 0.0
lossconfig:
target: torch.nn.Identity
cond_stage_config:
target: ldm.modules.encoders.modules.ClassEmbedder
params:
n_classes: 1001
embed_dim: 512
key: class_label


@@ -1,85 +0,0 @@
model:
base_learning_rate: 2.0e-06
target: ldm.models.diffusion.ddpm.LatentDiffusion
params:
linear_start: 0.0015
linear_end: 0.0195
num_timesteps_cond: 1
log_every_t: 200
timesteps: 1000
first_stage_key: image
image_size: 64
channels: 3
monitor: val/loss_simple_ema
unet_config:
target: ldm.modules.diffusionmodules.openaimodel.UNetModel
params:
image_size: 64
in_channels: 3
out_channels: 3
model_channels: 224
attention_resolutions:
# note: this isn't actually the resolution but
# the downsampling factor, i.e. this corresponds to
# attention on spatial resolution 8,16,32, as the
# spatial resolution of the latents is 64 for f4
- 8
- 4
- 2
num_res_blocks: 2
channel_mult:
- 1
- 2
- 3
- 4
num_head_channels: 32
first_stage_config:
target: ldm.models.autoencoder.VQModelInterface
params:
embed_dim: 3
n_embed: 8192
ckpt_path: configs/first_stage_models/vq-f4/model.yaml
ddconfig:
double_z: false
z_channels: 3
resolution: 256
in_channels: 3
out_ch: 3
ch: 128
ch_mult:
- 1
- 2
- 4
num_res_blocks: 2
attn_resolutions: []
dropout: 0.0
lossconfig:
target: torch.nn.Identity
cond_stage_config: __is_unconditional__
data:
target: main.DataModuleFromConfig
params:
batch_size: 42
num_workers: 5
wrap: false
train:
target: taming.data.faceshq.FFHQTrain
params:
size: 256
validation:
target: taming.data.faceshq.FFHQValidation
params:
size: 256
lightning:
callbacks:
image_logger:
target: main.ImageLogger
params:
batch_frequency: 5000
max_images: 8
increase_log_steps: False
trainer:
benchmark: True

View File

@@ -1,85 +0,0 @@
model:
base_learning_rate: 2.0e-06
target: ldm.models.diffusion.ddpm.LatentDiffusion
params:
linear_start: 0.0015
linear_end: 0.0195
num_timesteps_cond: 1
log_every_t: 200
timesteps: 1000
first_stage_key: image
image_size: 64
channels: 3
monitor: val/loss_simple_ema
unet_config:
target: ldm.modules.diffusionmodules.openaimodel.UNetModel
params:
image_size: 64
in_channels: 3
out_channels: 3
model_channels: 224
attention_resolutions:
# note: this isn't actually the resolution but
# the downsampling factor, i.e. this corresponds to
# attention on spatial resolution 8,16,32, as the
# spatial resolution of the latents is 64 for f4
- 8
- 4
- 2
num_res_blocks: 2
channel_mult:
- 1
- 2
- 3
- 4
num_head_channels: 32
first_stage_config:
target: ldm.models.autoencoder.VQModelInterface
params:
ckpt_path: configs/first_stage_models/vq-f4/model.yaml
embed_dim: 3
n_embed: 8192
ddconfig:
double_z: false
z_channels: 3
resolution: 256
in_channels: 3
out_ch: 3
ch: 128
ch_mult:
- 1
- 2
- 4
num_res_blocks: 2
attn_resolutions: []
dropout: 0.0
lossconfig:
target: torch.nn.Identity
cond_stage_config: __is_unconditional__
data:
target: main.DataModuleFromConfig
params:
batch_size: 48
num_workers: 5
wrap: false
train:
target: ldm.data.lsun.LSUNBedroomsTrain
params:
size: 256
validation:
target: ldm.data.lsun.LSUNBedroomsValidation
params:
size: 256
lightning:
callbacks:
image_logger:
target: main.ImageLogger
params:
batch_frequency: 5000
max_images: 8
increase_log_steps: False
trainer:
benchmark: True

View File

@@ -1,91 +0,0 @@
model:
base_learning_rate: 5.0e-5 # set to target_lr by starting main.py with '--scale_lr False'
target: ldm.models.diffusion.ddpm.LatentDiffusion
params:
linear_start: 0.0015
linear_end: 0.0155
num_timesteps_cond: 1
log_every_t: 200
timesteps: 1000
loss_type: l1
first_stage_key: "image"
cond_stage_key: "image"
image_size: 32
channels: 4
cond_stage_trainable: False
concat_mode: False
scale_by_std: True
monitor: 'val/loss_simple_ema'
scheduler_config: # 10000 warmup steps
target: ldm.lr_scheduler.LambdaLinearScheduler
params:
warm_up_steps: [10000]
cycle_lengths: [10000000000000]
f_start: [1.e-6]
f_max: [1.]
f_min: [ 1.]
unet_config:
target: ldm.modules.diffusionmodules.openaimodel.UNetModel
params:
image_size: 32
in_channels: 4
out_channels: 4
model_channels: 192
attention_resolutions: [ 1, 2, 4, 8 ] # 32, 16, 8, 4
num_res_blocks: 2
channel_mult: [ 1,2,2,4,4 ] # 32, 16, 8, 4, 2
num_heads: 8
use_scale_shift_norm: True
resblock_updown: True
first_stage_config:
target: ldm.models.autoencoder.AutoencoderKL
params:
embed_dim: 4
monitor: "val/rec_loss"
ckpt_path: "models/first_stage_models/kl-f8/model.ckpt"
ddconfig:
double_z: True
z_channels: 4
resolution: 256
in_channels: 3
out_ch: 3
ch: 128
ch_mult: [ 1,2,4,4 ] # num_down = len(ch_mult)-1
num_res_blocks: 2
attn_resolutions: [ ]
dropout: 0.0
lossconfig:
target: torch.nn.Identity
cond_stage_config: "__is_unconditional__"
data:
target: main.DataModuleFromConfig
params:
batch_size: 96
num_workers: 5
wrap: False
train:
target: ldm.data.lsun.LSUNChurchesTrain
params:
size: 256
validation:
target: ldm.data.lsun.LSUNChurchesValidation
params:
size: 256
lightning:
callbacks:
image_logger:
target: main.ImageLogger
params:
batch_frequency: 5000
max_images: 8
increase_log_steps: False
trainer:
benchmark: True
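The `scheduler_config` above ramps the learning-rate multiplier from `f_start` to `f_max` across the 10000 `warm_up_steps`. A rough sketch of the implied warmup curve, assuming a plain linear ramp (which is what `LambdaLinearScheduler` applies during warmup):

```bash
# Print the approximate LR multiplier at a few points of the warmup:
# a linear ramp from f_start=1e-6 to f_max=1.0 over the first 10000 steps.
awk 'BEGIN {
  f_start = 1e-6; f_max = 1.0; warmup = 10000
  for (s = 0; s <= warmup; s += 2500)
    printf "step %5d -> lr multiplier %.4f\n", s, f_start + (f_max - f_start) * s / warmup
}'
```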

View File

@@ -1,71 +0,0 @@
model:
base_learning_rate: 5.0e-05
target: ldm.models.diffusion.ddpm.LatentDiffusion
params:
linear_start: 0.00085
linear_end: 0.012
num_timesteps_cond: 1
log_every_t: 200
timesteps: 1000
first_stage_key: image
cond_stage_key: caption
image_size: 32
channels: 4
cond_stage_trainable: true
conditioning_key: crossattn
monitor: val/loss_simple_ema
scale_factor: 0.18215
use_ema: False
unet_config:
target: ldm.modules.diffusionmodules.openaimodel.UNetModel
params:
image_size: 32
in_channels: 4
out_channels: 4
model_channels: 320
attention_resolutions:
- 4
- 2
- 1
num_res_blocks: 2
channel_mult:
- 1
- 2
- 4
- 4
num_heads: 8
use_spatial_transformer: true
transformer_depth: 1
context_dim: 1280
use_checkpoint: true
legacy: False
first_stage_config:
target: ldm.models.autoencoder.AutoencoderKL
params:
embed_dim: 4
monitor: val/rec_loss
ddconfig:
double_z: true
z_channels: 4
resolution: 256
in_channels: 3
out_ch: 3
ch: 128
ch_mult:
- 1
- 2
- 4
- 4
num_res_blocks: 2
attn_resolutions: []
dropout: 0.0
lossconfig:
target: torch.nn.Identity
cond_stage_config:
target: ldm.modules.encoders.modules.BERTEmbedder
params:
n_embed: 1280
n_layer: 32

View File

@@ -1,18 +1,36 @@
# This file describes the alternative machine learning models
# available to the dream script.
# available to the InvokeAI script.
#
# To add a new model, follow the examples below. Each
# model requires a model config file, a weights file,
# and the width and height of the images it
# was trained on.
laion400m:
config: configs/latent-diffusion/txt2img-1p4B-eval.yaml
weights: models/ldm/text2img-large/model.ckpt
width: 256
height: 256
stable-diffusion-1.4:
config: configs/stable-diffusion/v1-inference.yaml
weights: models/ldm/stable-diffusion-v1/model.ckpt
width: 512
height: 512
config: ./configs/stable-diffusion/v1-inference.yaml
weights: ./models/ldm/stable-diffusion-v1/sd-v1-4.ckpt
vae: ./models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
description: The original Stable Diffusion version 1.4 weight file (4.27 GB)
width: 512
height: 512
stable-diffusion-1.5:
description: The newest Stable Diffusion version 1.5 weight file (4.27 GB)
weights: ./models/ldm/stable-diffusion-v1/v1-5-pruned-emaonly.ckpt
config: ./configs/stable-diffusion/v1-inference.yaml
width: 512
height: 512
vae: ./models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
default: true
inpainting-1.5:
description: RunwayML SD 1.5 model optimized for inpainting (4.27 GB)
weights: ./models/ldm/stable-diffusion-v1/sd-v1-5-inpainting.ckpt
config: ./configs/stable-diffusion/v1-inpainting-inference.yaml
width: 512
height: 512
vae: ./models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
waifu-diffusion-1.3:
description: Stable Diffusion 1.4 fine-tuned on anime-styled images (4.27 GB)
weights: ./models/ldm/stable-diffusion-v1/model-epoch09-float32.ckpt
config: ./configs/stable-diffusion/v1-inference.yaml
width: 512
height: 512
vae: ./models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
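With stanzas like the above in place, a model is addressed by its key. For example (assuming the listed weights files have already been downloaded into `models/ldm/stable-diffusion-v1/`):

```bash
# Start the CLI with a non-default model from configs/models.yaml:
python scripts/invoke.py --model inpainting-1.5

# Or switch on the fly inside a running session:
invoke> !switch waifu-diffusion-1.3
```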

View File

@@ -1,68 +0,0 @@
model:
base_learning_rate: 0.0001
target: ldm.models.diffusion.ddpm.LatentDiffusion
params:
linear_start: 0.0015
linear_end: 0.015
num_timesteps_cond: 1
log_every_t: 200
timesteps: 1000
first_stage_key: jpg
cond_stage_key: nix
image_size: 48
channels: 16
cond_stage_trainable: false
conditioning_key: crossattn
monitor: val/loss_simple_ema
scale_by_std: false
scale_factor: 0.22765929
unet_config:
target: ldm.modules.diffusionmodules.openaimodel.UNetModel
params:
image_size: 48
in_channels: 16
out_channels: 16
model_channels: 448
attention_resolutions:
- 4
- 2
- 1
num_res_blocks: 2
channel_mult:
- 1
- 2
- 3
- 4
use_scale_shift_norm: false
resblock_updown: false
num_head_channels: 32
use_spatial_transformer: true
transformer_depth: 1
context_dim: 768
use_checkpoint: true
first_stage_config:
target: ldm.models.autoencoder.AutoencoderKL
params:
monitor: val/rec_loss
embed_dim: 16
ddconfig:
double_z: true
z_channels: 16
resolution: 256
in_channels: 3
out_ch: 3
ch: 128
ch_mult:
- 1
- 1
- 2
- 2
- 4
num_res_blocks: 2
attn_resolutions:
- 16
dropout: 0.0
lossconfig:
target: torch.nn.Identity
cond_stage_config:
target: torch.nn.Identity

View File

@@ -32,7 +32,7 @@ model:
placeholder_strings: ["*"]
initializer_words: ['face', 'man', 'photo', 'africanmale']
per_image_tokens: false
num_vectors_per_token: 6
num_vectors_per_token: 1
progressive_words: False
unet_config:
@@ -76,4 +76,4 @@ model:
target: torch.nn.Identity
cond_stage_config:
target: ldm.modules.encoders.modules.FrozenCLIPEmbedder
target: ldm.modules.encoders.modules.WeightedFrozenCLIPEmbedder

View File

@@ -0,0 +1,79 @@
model:
base_learning_rate: 7.5e-05
target: ldm.models.diffusion.ddpm.LatentInpaintDiffusion
params:
linear_start: 0.00085
linear_end: 0.0120
num_timesteps_cond: 1
log_every_t: 200
timesteps: 1000
first_stage_key: "jpg"
cond_stage_key: "txt"
image_size: 64
channels: 4
cond_stage_trainable: false # Note: different from the one we trained before
conditioning_key: hybrid # important
monitor: val/loss_simple_ema
scale_factor: 0.18215
finetune_keys: null
scheduler_config: # 10000 warmup steps
target: ldm.lr_scheduler.LambdaLinearScheduler
params:
warm_up_steps: [ 2500 ] # NOTE for resuming. use 10000 if starting from scratch
cycle_lengths: [ 10000000000000 ] # incredibly large number to prevent corner cases
f_start: [ 1.e-6 ]
f_max: [ 1. ]
f_min: [ 1. ]
personalization_config:
target: ldm.modules.embedding_manager.EmbeddingManager
params:
placeholder_strings: ["*"]
initializer_words: ['face', 'man', 'photo', 'africanmale']
per_image_tokens: false
num_vectors_per_token: 1
progressive_words: False
unet_config:
target: ldm.modules.diffusionmodules.openaimodel.UNetModel
params:
image_size: 32 # unused
in_channels: 9 # 4 data + 4 downscaled image + 1 mask
out_channels: 4
model_channels: 320
attention_resolutions: [ 4, 2, 1 ]
num_res_blocks: 2
channel_mult: [ 1, 2, 4, 4 ]
num_heads: 8
use_spatial_transformer: True
transformer_depth: 1
context_dim: 768
use_checkpoint: True
legacy: False
first_stage_config:
target: ldm.models.autoencoder.AutoencoderKL
params:
embed_dim: 4
monitor: val/rec_loss
ddconfig:
double_z: true
z_channels: 4
resolution: 256
in_channels: 3
out_ch: 3
ch: 128
ch_mult:
- 1
- 2
- 4
- 4
num_res_blocks: 2
attn_resolutions: []
dropout: 0.0
lossconfig:
target: torch.nn.Identity
cond_stage_config:
target: ldm.modules.encoders.modules.WeightedFrozenCLIPEmbedder

View File

@@ -1,57 +1,74 @@
FROM debian
FROM ubuntu AS get_miniconda
ARG gsd
ENV GITHUB_STABLE_DIFFUSION $gsd
SHELL ["/bin/bash", "-c"]
ARG rsd
ENV REQS $rsd
# install wget
RUN apt-get update \
&& apt-get install -y \
wget \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
ARG cs
ENV CONDA_SUBDIR $cs
# download and install miniconda
ARG conda_version=py39_4.12.0-Linux-x86_64
ARG conda_prefix=/opt/conda
RUN wget --progress=dot:giga -O /miniconda.sh \
https://repo.anaconda.com/miniconda/Miniconda3-${conda_version}.sh \
&& bash /miniconda.sh -b -p ${conda_prefix} \
&& rm -f /miniconda.sh
ENV PIP_EXISTS_ACTION="w"
FROM ubuntu AS invokeai
# TODO: Optimize image size
# use bash
SHELL [ "/bin/bash", "-c" ]
SHELL ["/bin/bash", "-c"]
# clean bashrc
RUN echo "" > ~/.bashrc
WORKDIR /
RUN apt update && apt upgrade -y \
&& apt install -y \
git \
libgl1-mesa-glx \
libglib2.0-0 \
pip \
python3 \
&& git clone $GITHUB_STABLE_DIFFUSION
# Install necessary packages
RUN apt-get update \
&& apt-get install -y \
--no-install-recommends \
gcc \
git \
libgl1-mesa-glx \
libglib2.0-0 \
pip \
python3 \
python3-dev \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# Install Anaconda or Miniconda
COPY anaconda.sh .
RUN bash anaconda.sh -b -u -p /anaconda && /anaconda/bin/conda init bash
# clone repository and create symlinks
ARG invokeai_git=https://github.com/invoke-ai/InvokeAI.git
ARG project_name=invokeai
RUN git clone ${invokeai_git} /${project_name} \
&& mkdir /${project_name}/models/ldm/stable-diffusion-v1 \
&& ln -s /data/models/sd-v1-4.ckpt /${project_name}/models/ldm/stable-diffusion-v1/model.ckpt \
&& ln -s /data/outputs/ /${project_name}/outputs
# SD
WORKDIR /stable-diffusion
RUN source ~/.bashrc \
&& conda create -y --name ldm && conda activate ldm \
&& conda config --env --set subdir $CONDA_SUBDIR \
&& pip3 install -r $REQS \
&& pip3 install basicsr facexlib realesrgan \
&& mkdir models/ldm/stable-diffusion-v1 \
&& ln -s "/data/sd-v1-4.ckpt" models/ldm/stable-diffusion-v1/model.ckpt
# set workdir
WORKDIR /${project_name}
# Face restoration
# by default expected in a sibling directory to stable-diffusion
WORKDIR /
RUN git clone https://github.com/TencentARC/GFPGAN.git
# install conda env and preload models
ARG conda_prefix=/opt/conda
ARG conda_env_file=environment.yml
COPY --from=get_miniconda ${conda_prefix} ${conda_prefix}
RUN source ${conda_prefix}/etc/profile.d/conda.sh \
&& conda init bash \
&& source ~/.bashrc \
&& conda env create \
--name ${project_name} \
--file ${conda_env_file} \
&& rm -Rf ~/.cache \
&& conda clean -afy \
&& echo "conda activate ${project_name}" >> ~/.bashrc \
&& ln -s /data/models/GFPGANv1.4.pth ./src/gfpgan/experiments/pretrained_models/GFPGANv1.4.pth \
&& conda activate ${project_name} \
&& python scripts/preload_models.py
WORKDIR /GFPGAN
RUN pip3 install -r requirements.txt \
&& python3 setup.py develop \
&& ln -s "/data/GFPGANv1.4.pth" experiments/pretrained_models/GFPGANv1.4.pth
WORKDIR /stable-diffusion
RUN python3 scripts/preload_models.py
WORKDIR /
COPY entrypoint.sh .
ENTRYPOINT ["/entrypoint.sh"]
# Copy entrypoint and set env
ENV CONDA_PREFIX=${conda_prefix}
ENV PROJECT_NAME=${project_name}
COPY docker-build/entrypoint.sh /
ENTRYPOINT [ "/entrypoint.sh" ]

81
docker-build/build.sh Executable file
View File

@@ -0,0 +1,81 @@
#!/usr/bin/env bash
set -e
# IMPORTANT: You need to have a token on huggingface.co to be able to download the checkpoint!!!
# configure values by using env when executing build.sh
# e.g. env ARCH=aarch64 GITHUB_INVOKE_AI=https://github.com/yourname/yourfork.git ./build.sh
source ./docker-build/env.sh || { echo "please run from repository root"; exit 1; }
invokeai_conda_version=${INVOKEAI_CONDA_VERSION:-py39_4.12.0-${platform/\//-}}
invokeai_conda_prefix=${INVOKEAI_CONDA_PREFIX:-\/opt\/conda}
invokeai_conda_env_file=${INVOKEAI_CONDA_ENV_FILE:-environment.yml}
invokeai_git=${INVOKEAI_GIT:-https://github.com/invoke-ai/InvokeAI.git}
huggingface_token=${HUGGINGFACE_TOKEN?}
# print the settings
echo "You are using these values:"
echo -e "project_name:\t\t ${project_name}"
echo -e "volumename:\t\t ${volumename}"
echo -e "arch:\t\t\t ${arch}"
echo -e "platform:\t\t ${platform}"
echo -e "invokeai_conda_version:\t ${invokeai_conda_version}"
echo -e "invokeai_conda_prefix:\t ${invokeai_conda_prefix}"
echo -e "invokeai_conda_env_file: ${invokeai_conda_env_file}"
echo -e "invokeai_git:\t\t ${invokeai_git}"
echo -e "invokeai_tag:\t\t ${invokeai_tag}\n"
_runAlpine() {
docker run \
--rm \
--interactive \
--tty \
--mount source="$volumename",target=/data \
--workdir /data \
alpine "$@"
}
_copyCheckpoints() {
echo "creating subfolders for models and outputs"
_runAlpine mkdir models
_runAlpine mkdir outputs
echo -n "downloading sd-v1-4.ckpt"
_runAlpine wget --header="Authorization: Bearer ${huggingface_token}" -O models/sd-v1-4.ckpt https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt
echo "done"
echo "downloading GFPGANv1.4.pth"
_runAlpine wget -O models/GFPGANv1.4.pth https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.4.pth
}
_checkVolumeContent() {
_runAlpine ls -lhA /data/models
}
_getModelMd5s() {
_runAlpine \
sh -c "md5sum /data/models/*"
}
if [[ -n "$(docker volume ls -f name="${volumename}" -q)" ]]; then
echo "Volume already exists"
if [[ -z "$(_checkVolumeContent)" ]]; then
echo "looks empty, copying checkpoint"
_copyCheckpoints
fi
echo "Models in ${volumename}:"
_checkVolumeContent
else
echo -n "createing docker volume "
docker volume create "${volumename}"
_copyCheckpoints
fi
# Build Container
docker build \
--platform="${platform}" \
--tag "${invokeai_tag}" \
--build-arg project_name="${project_name}" \
--build-arg conda_version="${invokeai_conda_version}" \
--build-arg conda_prefix="${invokeai_conda_prefix}" \
--build-arg conda_env_file="${invokeai_conda_env_file}" \
--build-arg invokeai_git="${invokeai_git}" \
--file ./docker-build/Dockerfile \
.

View File

@@ -1,10 +1,8 @@
#!/bin/bash
set -e
cd /stable-diffusion
source "${CONDA_PREFIX}/etc/profile.d/conda.sh"
conda activate "${PROJECT_NAME}"
if [ $# -eq 0 ]; then
python3 scripts/dream.py --full_precision -o /data
# bash
else
python3 scripts/dream.py --full_precision -o /data "$@"
fi
python scripts/invoke.py \
${@:---web --host=0.0.0.0}
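The `${@:---web --host=0.0.0.0}` expansion carries the whole logic of the old if/else: it forwards any arguments passed to the container and falls back to starting the web server when there are none. A standalone illustration of the idiom:

```bash
#!/usr/bin/env bash
# Demonstrate the ${@:-default} idiom from entrypoint.sh: it expands to
# the positional parameters when any are set, otherwise to the default.
demo() {
  echo python scripts/invoke.py ${@:---web --host=0.0.0.0}
}
demo                    # prints: python scripts/invoke.py --web --host=0.0.0.0
demo --web --port=9090  # prints: python scripts/invoke.py --web --port=9090
```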

13
docker-build/env.sh Normal file
View File

@@ -0,0 +1,13 @@
#!/usr/bin/env bash
project_name=${PROJECT_NAME:-invokeai}
volumename=${VOLUMENAME:-${project_name}_data}
arch=${ARCH:-x86_64}
platform=${PLATFORM:-Linux/${arch}}
invokeai_tag=${INVOKEAI_TAG:-${project_name}-${arch}}
export project_name
export volumename
export arch
export platform
export invokeai_tag

15
docker-build/run.sh Executable file
View File

@@ -0,0 +1,15 @@
#!/usr/bin/env bash
set -e
source ./docker-build/env.sh || { echo "please run from repository root"; exit 1; }
docker run \
--interactive \
--tty \
--rm \
--platform "$platform" \
--name "$project_name" \
--hostname "$project_name" \
--mount source="$volumename",target=/data \
--publish 9090:9090 \
"$invokeai_tag" ${1:+$@}

View File

@@ -1,51 +1,106 @@
# **Changelog**
---
title: Changelog
---
## v1.13 (in process)
# :octicons-log-16: **Changelog**
- Supports a Google Colab notebook for a standalone server running on Google hardware [Arturo Mendivil](https://github.com/artmen1516)
- WebUI supports GFPGAN/ESRGAN facial reconstruction and upscaling [Kevin Gibbons](https://github.com/bakkot)
- WebUI supports incremental display of in-progress images during generation [Kevin Gibbons](https://github.com/bakkot)
- Output directory can be specified on the dream> command line.
- The grid was displaying duplicated images when not enough images to fill the final row [Muhammad Usama](https://github.com/SMUsamaShah)
- Can specify --grid on dream.py command line as the default.
- Miscellaneous internal bug and stability fixes.
## v2.0.1 (13 October 2022)
- fix noisy images at high step count when using k* samplers
- dream.py script now calls invoke.py module directly rather than
via a new python process (which could break the environment)
## v2.0.0 <small>(9 October 2022)</small>
- `dream.py` script renamed `invoke.py`. A `dream.py` script wrapper remains
for backward compatibility.
- Completely new WebGUI - launch with `python3 scripts/invoke.py --web`
- Support for <a href="https://github.com/invoke-ai/InvokeAI/blob/main/docs/features/INPAINTING.md">inpainting</a> and <a href="https://github.com/invoke-ai/InvokeAI/blob/main/docs/features/OUTPAINTING.md">outpainting</a>
- img2img runs on all k* samplers
- Support for <a href="https://github.com/invoke-ai/InvokeAI/blob/main/docs/features/PROMPTS.md#negative-and-unconditioned-prompts">negative prompts</a>
- Support for CodeFormer face reconstruction
- Support for Textual Inversion on Macintoshes
- Support in both WebGUI and CLI for <a href="https://github.com/invoke-ai/InvokeAI/blob/main/docs/features/POSTPROCESS.md">post-processing of previously-generated images</a>
using facial reconstruction, ESRGAN upscaling, outcropping (similar to DALL-E infinite canvas),
and "embiggen" upscaling. See the `!fix` command.
- New `--hires` option on `invoke>` line allows <a href="https://github.com/invoke-ai/InvokeAI/blob/main/docs/features/CLI.md#this-is-an-example-of-txt2img">larger images to be created without duplicating elements</a>, at the cost of some performance.
- New `--perlin` and `--threshold` options allow you to add and control variation
during image generation (see <a href="https://github.com/invoke-ai/InvokeAI/blob/main/docs/features/OTHER.md#thresholding-and-perlin-noise-initialization-options">Thresholding and Perlin Noise Initialization</a>
- Extensive metadata now written into PNG files, allowing reliable regeneration of images
and tweaking of previous settings.
- Command-line completion in `invoke.py` now works on Windows, Linux and Mac platforms.
- Improved <a href="https://github.com/invoke-ai/InvokeAI/blob/main/docs/features/CLI.md">command-line completion behavior</a>.
New commands added:
* List command-line history with `!history`
* Search command-line history with `!search`
* Clear history with `!clear`
- Deprecated `--full_precision` / `-F`. Simply omit it and `invoke.py` will
auto-configure. To switch away from auto, use the new flag like `--precision=float32`.
## v1.14 <small>(11 September 2022)</small>
- Memory optimizations for small-RAM cards. 512x512 now possible on 4 GB GPUs.
- Full support for Apple hardware with M1 or M2 chips.
- Add "seamless mode" for circular tiling of image. Generates beautiful effects.
([prixt](https://github.com/prixt)).
- Inpainting support.
- Improved web server GUI.
- Lots of code and documentation cleanups.
## v1.13 <small>(3 September 2022)</small>
- Support image variations (see [VARIATIONS](features/VARIATIONS.md))
([Kevin Gibbons](https://github.com/bakkot) and many contributors and reviewers)
- Supports a Google Colab notebook for a standalone server running on Google hardware
[Arturo Mendivil](https://github.com/artmen1516)
- WebUI supports GFPGAN/ESRGAN facial reconstruction and upscaling
[Kevin Gibbons](https://github.com/bakkot)
- WebUI supports incremental display of in-progress images during generation
[Kevin Gibbons](https://github.com/bakkot)
- A new configuration file scheme that allows new models (including upcoming
stable-diffusion-v1.5) to be added without altering the code.
([David Wager](https://github.com/maddavid12))
- Can specify --grid on invoke.py command line as the default.
- Miscellaneous internal bug and stability fixes.
- Works on M1 Apple hardware.
- Multiple bug fixes.
---
## v1.12 (28 August 2022)
## v1.12 <small>(28 August 2022)</small>
- Improved file handling, including ability to read prompts from standard input.
(kudos to [Yunsaki](https://github.com/yunsaki))
- The web server is now integrated with the dream.py script. Invoke by adding --web to
the dream.py command arguments.
- The web server is now integrated with the invoke.py script. Invoke by adding --web to
the invoke.py command arguments.
- Face restoration and upscaling via GFPGAN and Real-ESRGAN are now automatically
enabled if the GFPGAN directory is located as a sibling to Stable Diffusion.
VRAM requirements are modestly reduced. Thanks to both [Blessedcoolant](https://github.com/blessedcoolant) and
[Oceanswave](https://github.com/oceanswave) for their work on this.
- You can now swap samplers on the dream> command line. [Blessedcoolant](https://github.com/blessedcoolant)
- You can now swap samplers on the invoke> command line. [Blessedcoolant](https://github.com/blessedcoolant)
---
## v1.11 (26 August 2022)
## v1.11 <small>(26 August 2022)</small>
- NEW FEATURE: Support upscaling and face enhancement using the GFPGAN module. (kudos to [Oceanswave](https://github.com/Oceanswave))
- You now can specify a seed of -1 to use the previous image's seed, -2 to use the seed for the image generated before that, etc.
Seed memory only extends back to the previous command, but will work on all images generated with the -n# switch.
- Variant generation support temporarily disabled pending more general solution.
- Created a feature branch named **yunsaki-morphing-dream** which adds experimental support for
- Created a feature branch named **yunsaki-morphing-invoke** which adds experimental support for
iteratively modifying the prompt and its parameters. Please see [Pull Request #86](https://github.com/lstein/stable-diffusion/pull/86)
for a synopsis of how this works. Note that when this feature is eventually added to the main branch, it may be modified
significantly.
---
## v1.10 (25 August 2022)
## v1.10 <small>(25 August 2022)</small>
- A barebones but fully functional interactive web server for online generation of txt2img and img2img.
---
## v1.09 (24 August 2022)
## v1.09 <small>(24 August 2022)</small>
- A new -v option allows you to generate multiple variants of an initial image
in img2img mode (kudos to [Oceanswave](https://github.com/Oceanswave)). [
@@ -55,9 +110,9 @@
---
## v1.08 (24 August 2022)
## v1.08 <small>(24 August 2022)</small>
- Escape single quotes on the dream> command before trying to parse. This avoids
- Escape single quotes on the invoke> command before trying to parse. This avoids
parse errors.
- Removed instruction to get Python3.8 as first step in Windows install.
Anaconda3 does it for you.
@@ -66,7 +121,7 @@
---
## v1.07 (23 August 2022)
## v1.07 <small>(23 August 2022)</small>
- Image filenames will now never fill gaps in the sequence, but will be assigned the
next higher name in the chosen directory. This ensures that the alphabetic and chronological
@@ -74,14 +129,14 @@
---
## v1.06 (23 August 2022)
## v1.06 <small>(23 August 2022)</small>
- Added weighted prompt support contributed by [xraxra](https://github.com/xraxra)
- Example of using weighted prompts to tweak a demonic figure contributed by [bmaltais](https://github.com/bmaltais)
---
## v1.05 (22 August 2022 - after the drop)
## v1.05 <small>(22 August 2022 - after the drop)</small>
- Filenames now use the following formats:
000010.95183149.png -- Two files produced by the same command (e.g. -n2),
@@ -94,12 +149,12 @@
be regenerated with the indicated key
- It should no longer be possible for one image to overwrite another
- You can use the "cd" and "pwd" commands at the dream> prompt to set and retrieve
- You can use the "cd" and "pwd" commands at the invoke> prompt to set and retrieve
the path of the output directory.
---
## v1.04 (22 August 2022 - after the drop)
## v1.04 <small>(22 August 2022 - after the drop)</small>
- Updated README to reflect installation of the released weights.
- Suppressed very noisy and inconsequential warning when loading the frozen CLIP
@@ -107,14 +162,14 @@
---
## v1.03 (22 August 2022)
## v1.03 <small>(22 August 2022)</small>
- The original txt2img and img2img scripts from the CompViz repository have been moved into
a subfolder named "orig_scripts", to reduce confusion.
---
## v1.02 (21 August 2022)
## v1.02 <small>(21 August 2022)</small>
- A copy of the prompt and all of its switches and options is now stored in the corresponding
image in a tEXt metadata field named "Dream". You can read the prompt using scripts/images2prompt.py,
@@ -123,15 +178,15 @@
---
## v1.01 (21 August 2022)
## v1.01 <small>(21 August 2022)</small>
- added k_lms sampling.
**Please run "conda env update" to load the k_lms dependencies!!**
- use half precision arithmetic by default, resulting in faster execution and lower memory requirements
Pass argument --full_precision to dream.py to get slower but more accurate image generation
Pass argument --full_precision to invoke.py to get slower but more accurate image generation
---
## Links
- **[Read Me](../readme.md)**
- **[Read Me](index.md)**

Binary image files not shown (numerous image assets added or modified, ranging from 11 KiB to 1.1 MiB).

View File

@@ -4,7 +4,7 @@ title: Changelog
# :octicons-log-16: Changelog
## v1.13 <small>(in process)</small>
## v1.13
- Supports a Google Colab notebook for a standalone server running on Google
hardware [Arturo Mendivil](https://github.com/artmen1516)
@@ -12,10 +12,10 @@ title: Changelog
[Kevin Gibbons](https://github.com/bakkot)
- WebUI supports incremental display of in-progress images during generation
[Kevin Gibbons](https://github.com/bakkot)
- Output directory can be specified on the dream> command line.
- Output directory can be specified on the invoke> command line.
- The grid was displaying duplicated images when not enough images to fill the
final row [Muhammad Usama](https://github.com/SMUsamaShah)
- Can specify --grid on dream.py command line as the default.
- Can specify --grid on invoke.py command line as the default.
- Miscellaneous internal bug and stability fixes.
---
@@ -24,14 +24,14 @@ title: Changelog
- Improved file handling, including ability to read prompts from standard input.
(kudos to [Yunsaki](https://github.com/yunsaki))
- The web server is now integrated with the dream.py script. Invoke by adding
--web to the dream.py command arguments.
- The web server is now integrated with the invoke.py script. Invoke by adding
--web to the invoke.py command arguments.
- Face restoration and upscaling via GFPGAN and Real-ESRGAN are now automatically
enabled if the GFPGAN directory is located as a sibling to Stable Diffusion.
VRAM requirements are modestly reduced. Thanks to both
[Blessedcoolant](https://github.com/blessedcoolant) and
[Oceanswave](https://github.com/oceanswave) for their work on this.
- You can now swap samplers on the dream> command line.
- You can now swap samplers on the invoke> command line.
[Blessedcoolant](https://github.com/blessedcoolant)
---
@@ -45,7 +45,7 @@ title: Changelog
back to the previous command, but will work on all images generated with the
-n# switch.
- Variant generation support temporarily disabled pending more general solution.
- Created a feature branch named **yunsaki-morphing-dream** which adds
- Created a feature branch named **yunsaki-morphing-invoke** which adds
experimental support for iteratively modifying the prompt and its parameters.
Please
see [Pull Request #86](https://github.com/lstein/stable-diffusion/pull/86) for
@@ -75,7 +75,7 @@ title: Changelog
## v1.08 <small>(24 August 2022)</small>
- Escape single quotes on the dream> command before trying to parse. This avoids
- Escape single quotes on the invoke> command before trying to parse. This avoids
parse errors.
- Removed instruction to get Python3.8 as first step in Windows install.
Anaconda3 does it for you.
@@ -112,7 +112,7 @@ title: Changelog
can be regenerated with the indicated key
- It should no longer be possible for one image to overwrite another
- You can use the "cd" and "pwd" commands at the dream> prompt to set and
- You can use the "cd" and "pwd" commands at the invoke> prompt to set and
retrieve the path of the output directory.
## v1.04 <small>(22 August 2022 - after the drop)</small>
@@ -139,5 +139,5 @@ title: Changelog
- added k_lms sampling. **Please run "conda env update -f environment.yaml" to
load the k_lms dependencies!!**
- use half precision arithmetic by default, resulting in faster execution and
lower memory requirements Pass argument --full_precision to dream.py to get
lower memory requirements Pass argument --full_precision to invoke.py to get
slower but more accurate image generation

View File

@@ -8,8 +8,8 @@ hide:
## **Interactive Command Line Interface**
The `dream.py` script, located in `scripts/dream.py`, provides an interactive
interface to image generation similar to the "dream mothership" bot that Stable
The `invoke.py` script, located in `scripts/`, provides an interactive
interface to image generation similar to the "invoke mothership" bot that Stability
AI provided on its Discord server.
Unlike the `txt2img.py` and `img2img.py` scripts provided in the original
@@ -34,33 +34,33 @@ The script is confirmed to work on Linux, Windows and Mac systems.
currently rudimentary, but a much better replacement is on its way.
```bash
(ldm) ~/stable-diffusion$ python3 ./scripts/dream.py
(invokeai) ~/stable-diffusion$ python3 ./scripts/invoke.py
* Initializing, be patient...
Loading model from models/ldm/text2img-large/model.ckpt
(...more initialization messages...)
* Initialization done! Awaiting your command...
dream> ashley judd riding a camel -n2 -s150
invoke> ashley judd riding a camel -n2 -s150
Outputs:
outputs/img-samples/00009.png: "ashley judd riding a camel" -n2 -s150 -S 416354203
outputs/img-samples/00010.png: "ashley judd riding a camel" -n2 -s150 -S 1362479620
dream> "there's a fly in my soup" -n6 -g
invoke> "there's a fly in my soup" -n6 -g
outputs/img-samples/00011.png: "there's a fly in my soup" -n6 -g -S 2685670268
seeds for individual rows: [2685670268, 1216708065, 2335773498, 822223658, 714542046, 3395302430]
dream> q
invoke> q
# this shows how to retrieve the prompt stored in the saved image's metadata
(ldm) ~/stable-diffusion$ python ./scripts/images2prompt.py outputs/img_samples/*.png
(invokeai) ~/stable-diffusion$ python ./scripts/images2prompt.py outputs/img_samples/*.png
00009.png: "ashley judd riding a camel" -s150 -S 416354203
00010.png: "ashley judd riding a camel" -s150 -S 1362479620
00011.png: "there's a fly in my soup" -n6 -g -S 2685670268
```
![dream-py-demo](../assets/dream-py-demo.png)
![invoke-py-demo](../assets/dream-py-demo.png)
The `dream>` prompt's arguments are pretty much identical to those used in the
Discord bot, except you don't need to type "!dream" (it doesn't hurt if you do).
The `invoke>` prompt's arguments are pretty much identical to those used in the
Discord bot, except you don't need to type `!invoke` (it doesn't hurt if you do).
A significant change is that creation of individual images is now the default
unless `--grid` (`-g`) is given. A full list is given in
[List of prompt arguments](#list-of-prompt-arguments).
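To make the default concrete, compare (a sketch; the prompt is arbitrary):

```bash
invoke> "sunset over the ocean" -n4      # four separate image files (the default)
invoke> "sunset over the ocean" -n4 -g   # one grid image combining all four
```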
@@ -73,10 +73,9 @@ the location of the model weight files.
### List of arguments recognized at the command line
These command-line arguments can be passed to `dream.py` when you first run it
These command-line arguments can be passed to `invoke.py` when you first run it
from the Windows, Mac or Linux command line. Some set defaults that can be
overridden on a per-prompt basis (see [List of prompt arguments]
(#list-of-prompt-arguments). Others
overridden on a per-prompt basis (see [List of prompt arguments](#list-of-prompt-arguments)). Others
| Argument <img width="240" align="right"/> | Shortcut <img width="100" align="right"/> | Default <img width="320" align="right"/> | Description |
| ----------------------------------------- | ----------------------------------------- | ---------------------------------------------- | ---------------------------------------------------------------------------------------------------- |
@@ -86,6 +85,8 @@ overridden on a per-prompt basis (see [List of prompt arguments]
| `--from_file <path>` | | `None` | Read list of prompts from a file. Use `-` to read from standard input |
| `--model <modelname>` | | `stable-diffusion-1.4` | Loads model specified in configs/models.yaml. Currently one of "stable-diffusion-1.4" or "laion400m" |
| `--full_precision` | `-F` | `False` | Run in slower full-precision mode. Needed for Macintosh M1/M2 hardware and some older video cards. |
| `--png_compression <0-9>` | `-z<0-9>` | 6 | Select level of compression for output files, from 0 (no compression) to 9 (max compression) |
| `--safety-checker` | | False | Activate safety checker for NSFW and other potentially disturbing imagery |
| `--web` | | `False` | Start in web server mode |
| `--host <ip addr>` | | `localhost` | Which network interface web server should listen on. Set to 0.0.0.0 to listen on any. |
| `--port <port>` | | `9090` | Which port web server should listen for requests on. |
@@ -97,46 +98,52 @@ overridden on a per-prompt basis (see [List of prompt arguments]
| `--embedding_path <path>` | | `None` | Path to pre-trained embedding manager checkpoints, for custom models |
| `--gfpgan_dir` | | `src/gfpgan` | Path to where GFPGAN is installed. |
| `--gfpgan_model_path` | | `experiments/pretrained_models/GFPGANv1.4.pth` | Path to GFPGAN model file, relative to `--gfpgan_dir`. |
| `--device <device>` | `-d<device>` | `torch.cuda.current_device()` | Device to run SD on, e.g. "cuda:0" |
| `--free_gpu_mem` | | `False` | Free GPU memory after sampling, to allow image decoding and saving in low VRAM conditions |
| `--precision` | | `auto` | Set model precision, default is selected by device. Options: auto, float32, float16, autocast |
#### deprecated
!!! warning deprecated
These arguments are deprecated but still work:
<div align="center" markdown>
| Argument | Shortcut | Default | Description |
|--------------------|------------|---------------------|--------------|
| --weights <path> | | None | Pth to weights file; use `--model stable-diffusion-1.4` instead |
| --laion400m | -l | False | Use older LAION400m weights; use `--model=laion400m` instead |
| Argument | Shortcut | Default | Description |
|--------------------|------------|---------------------|--------------|
| `--weights <path>` | | `None` | Path to weights file; use `--model stable-diffusion-1.4` instead |
| `--laion400m` | `-l` | `False` | Use older LAION400m weights; use `--model=laion400m` instead |
**A note on path names:** On Windows systems, you may run into
problems when passing the dream script standard backslashed path
names because the Python interpreter treats "\" as an escape.
You can either double your slashes (ick): C:\\\\path\\\\to\\\\my\\\\file, or
use Linux/Mac style forward slashes (better): C:/path/to/my/file.
</div>
!!! tip
On Windows systems, you may run into
problems when passing the invoke script standard backslashed path
names because the Python interpreter treats "\" as an escape.
You can either double your slashes (ick): `C:\\path\\to\\my\\file`, or
use Linux/Mac style forward slashes (better): `C:/path/to/my/file`.
## List of prompt arguments
After the dream.py script initializes, it will present you with a
**dream>** prompt. Here you can enter information to generate images
from text (txt2img), to embellish an existing image or sketch
(img2img), or to selectively alter chosen regions of the image
(inpainting).
After the invoke.py script initializes, it will present you with a
`invoke>` prompt. Here you can enter information to generate images
from text ([txt2img](#txt2img)), to embellish an existing image or sketch
([img2img](#img2img)), or to selectively alter chosen regions of the image
([inpainting](#inpainting)).
### This is an example of txt2img:
### txt2img
~~~~
dream> waterfall and rainbow -W640 -H480
~~~~
!!! example
This will create the requested image with the dimensions 640 (width)
and 480 (height).
```bash
invoke> waterfall and rainbow -W640 -H480
```
Here are the dream> command that apply to txt2img:
This will create the requested image with the dimensions 640 (width)
and 480 (height).
| Argument | Shortcut | Default | Description |
Here are the invoke> commands that apply to txt2img:
| Argument <img width="680" align="right"/> | Shortcut <img width="420" align="right"/> | Default <img width="480" align="right"/> | Description |
|--------------------|------------|---------------------|--------------|
| "my prompt" | | | Text prompt to use. The quotation marks are optional. |
| --width <int> | -W<int> | 512 | Width of generated image |
@@ -146,18 +153,24 @@ Here are the dream> command that apply to txt2img:
| --cfg_scale <float>| -C<float> | 7.5 | How hard to try to match the prompt to the generated image; any number greater than 1.0 works, but the useful range is roughly 5.0 to 20.0 |
| --seed <int> | -S<int> | None | Set the random seed for the next series of images. This can be used to recreate an image generated previously.|
| --sampler <sampler>| -A<sampler>| k_lms | Sampler to use. Use -h to get list of available samplers. |
| --karras_max <int> | | 29 | When using k_* samplers, set the maximum number of steps before shifting from using the Karras noise schedule (good for low step counts) to the LatentDiffusion noise schedule (good for high step counts). This value is sticky. [29] |
| --hires_fix | | | Larger images often have duplication artefacts. This option suppresses duplicates by generating the image at low res, and then using img2img to increase the resolution |
| --png_compression <0-9> | -z<0-9> | 6 | Select level of compression for output files, from 0 (no compression) to 9 (max compression) |
| --grid | -g | False | Turn on grid mode to return a single image combining all the images generated by this prompt |
| --individual | -i | True | Turn off grid mode (deprecated; leave off --grid instead) |
| --outdir <path> | -o<path> | outputs/img_samples | Temporarily change the location of these images |
| --seamless | | False | Activate seamless tiling for interesting effects |
| --seamless_axes | | x,y | Specify which axes to use circular convolution on. |
| --log_tokenization | -t | False | Display a color-coded list of the parsed tokens derived from the prompt |
| --skip_normalization| -x | False | Weighted subprompts will not be normalized. See [Weighted Prompts](./OTHER.md#weighted-prompts) |
| --upscale <int> <float> | -U <int> <float> | -U 1 0.75| Upscale image by magnification factor (2, 4), and set strength of upscaling (0.0-1.0). If strength not set, will default to 0.75. |
| --gfpgan_strength <float> | -G <float> | -G0 | Fix faces using the GFPGAN algorithm; argument indicates how hard the algorithm should try (0.0-1.0) |
| --facetool_strength <float> | -G <float> | -G0 | Fix faces (defaults to using the GFPGAN algorithm); argument indicates how hard the algorithm should try (0.0-1.0) |
| --facetool <name> | -ft <name> | -ft gfpgan | Select face restoration algorithm to use: gfpgan, codeformer |
| --codeformer_fidelity | -cf <float> | 0.75 | Used along with CodeFormer. Takes values between 0 and 1. 0 produces high quality but low accuracy. 1 produces high accuracy but low quality |
| --save_original | -save_orig| False | When upscaling or fixing faces, this will cause the original image to be saved rather than replaced. |
| --variation <float> |-v<float>| 0.0 | Add a bit of noise (0.0=none, 1.0=high) to the image in order to generate a series of variations. Usually used in combination with -S<seed> and -n<int> to generate a series a riffs on a starting image. See [Variations](./VARIATIONS.md). |
| --with_variations <pattern> | -V<pattern>| None | Combine two or more variations. See [Variations](./VARIATIONS.md) for now to use this. |
| --with_variations <pattern> | | None | Combine two or more variations. See [Variations](./VARIATIONS.md) for how to use this. |
| --save_intermediates <n> | | None | Save the image from every nth step into an "intermediates" folder inside the output directory |
Note that the width and height of the image must be multiples of
64. You can provide different values, but they will be rounded down to
@@ -167,7 +180,7 @@ the nearest multiple of 64.
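For instance (a hypothetical prompt; the sizes follow from the rounding rule):

```bash
invoke> an old barn in the snow -W500 -H300
# 500 and 300 are not multiples of 64, so the image is generated
# at 448x256 (500 rounds down to 448, 300 rounds down to 256)
```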
### This is an example of img2img:
~~~~
dream> waterfall and rainbow -I./vacation-photo.png -W640 -H480 --fit
invoke> waterfall and rainbow -I./vacation-photo.png -W640 -H480 --fit
~~~~
This will modify the indicated vacation photograph by making it more
@@ -179,100 +192,281 @@ photo and you may run out of memory if it is large.
In addition to the command-line options recognized by txt2img, img2img
accepts additional options:
| Argument | Shortcut | Default | Description |
|--------------------|------------|---------------------|--------------|
| --init_img <path> | -I<path> | None | Path to the initialization image |
| --fit | -F | False | Scale the image to fit into the specified -H and -W dimensions |
| --strength <float> | -s<float> | 0.75 | How hard to try to match the prompt to the initial image. Ranges from 0.0-0.99, with higher values replacing the initial image completely.|
| Argument <img width="160" align="right"/> | Shortcut | Default | Description |
|----------------------|-------------|-----------------|--------------|
| `--init_img <path>` | `-I<path>` | `None` | Path to the initialization image |
| `--fit` | `-F` | `False` | Scale the image to fit into the specified -H and -W dimensions |
| `--strength <float>` | `-s<float>` | `0.75` | How hard to try to match the prompt to the initial image. Ranges from 0.0-0.99, with higher values replacing the initial image completely.|
### This is an example of inpainting:
### inpainting
~~~~
dream> waterfall and rainbow -I./vacation-photo.png -M./vacation-mask.png -W640 -H480 --fit
~~~~
!!! example
This will do the same thing as img2img, but image alterations will
only occur within transparent areas defined by the mask file specified
by -M. You may also supply just a single initial image with the areas
to overpaint made transparent, but you must be careful not to destroy
the pixels underneath when you create the transparent areas. See
[Inpainting](./INPAINTING.md) for details.
```bash
invoke> waterfall and rainbow -I./vacation-photo.png -M./vacation-mask.png -W640 -H480 --fit
```
This will do the same thing as img2img, but image alterations will
only occur within transparent areas defined by the mask file specified
by `-M`. You may also supply just a single initial image with the areas
to overpaint made transparent, but you must be careful not to destroy
the pixels underneath when you create the transparent areas. See
[Inpainting](./INPAINTING.md) for details.
inpainting accepts all the arguments used for txt2img and img2img, as
well as the --mask (-M) argument:
well as the --mask (-M) and --text_mask (-tm) arguments:
| Argument | Shortcut | Default | Description |
| Argument <img width="100" align="right"/> | Shortcut | Default | Description |
|--------------------|------------|---------------------|--------------|
| --init_mask <path> | -M<path> | None |Path to an image the same size as the initial_image, with areas for inpainting made transparent.|
| `--init_mask <path>` | `-M<path>` | `None` |Path to an image the same size as the initial_image, with areas for inpainting made transparent.|
| `--invert_mask ` | | False |If true, invert the mask so that transparent areas are opaque and vice versa.|
| `--text_mask <prompt> [<float>]` | `-tm <prompt> [<float>]` | `None` | Create a mask from a text prompt describing part of the image |
The mask may either be an image with transparent areas, in which case
the inpainting will occur in the transparent areas only, or a black
and white image, in which case all black areas will be painted into.
# Convenience commands
`--text_mask` (short form `-tm`) is a way to generate a mask using a
text description of the part of the image to replace. For example, if
you have an image of a breakfast plate with a bagel, toast and
scrambled eggs, you can selectively mask the bagel and replace it with
a piece of cake this way:
In addition to the standard image generation arguments, there are a
series of convenience commands that begin with !:
~~~
invoke> a piece of cake -I /path/to/breakfast.png -tm bagel
~~~
## !fix
The algorithm uses <a
href="https://github.com/timojl/clipseg">clipseg</a> to classify
different regions of the image. The classifier puts out a confidence
score for each region it identifies. Generally regions that score
above 0.5 are reliable, but if you are getting too much or too little
masking you can adjust the threshold down (to get more mask), or up
(to get less). In this example, by passing `-tm` a higher value, we
are insisting on a more stringent classification.
~~~
invoke> a piece of cake -I /path/to/breakfast.png -tm bagel 0.6
~~~
# Other Commands
The CLI offers a number of commands that begin with "!".
## Postprocessing images
To postprocess a file using face restoration or upscaling, use the
`!fix` command.
### `!fix`
This command runs a post-processor on a previously-generated image. It
takes a PNG filename or path and applies your choice of the -U, -G, or
--embiggen switches in order to fix faces or upscale. If you provide a
takes a PNG filename or path and applies your choice of the `-U`, `-G`, or
`--embiggen` switches in order to fix faces or upscale. If you provide a
filename, the script will look for it in the current output
directory. Otherwise you can provide a full or partial path to the
desired file.
Some examples:
Upscale to 4X its original size and fix faces using codeformer:
~~~
dream> !fix 0000045.4829112.png -G1 -U4 -ft codeformer
~~~
!!! example ""
Use the GFPGAN algorithm to fix faces, then upscale to 3X using --embiggen:
Upscale to 4X its original size and fix faces using codeformer:
~~~
dream> !fix 0000045.4829112.png -G0.8 -ft gfpgan
>> fixing outputs/img-samples/0000045.4829112.png
>> retrieved seed 4829112 and prompt "boy enjoying a banana split"
>> GFPGAN - Restoring Faces for image seed:4829112
Outputs:
[1] outputs/img-samples/000017.4829112.gfpgan-00.png: !fix "outputs/img-samples/0000045.4829112.png" -s 50 -S -W 512 -H 512 -C 7.5 -A k_lms -G 0.8
```bash
invoke> !fix 0000045.4829112.png -G1 -U4 -ft codeformer
```
dream> !fix 000017.4829112.gfpgan-00.png --embiggen 3
...lots of text...
Outputs:
[2] outputs/img-samples/000018.2273800735.embiggen-00.png: !fix "outputs/img-samples/000017.243781548.gfpgan-00.png" -s 50 -S 2273800735 -W 512 -H 512 -C 7.5 -A k_lms --embiggen 3.0 0.75 0.25
~~~
!!! example ""
## !fetch
Use the GFPGAN algorithm to fix faces, then upscale to 3X using --embiggen:
This command retrieves the generation parameters from a previously
generated image and loads them into the command line. You may
provide either the name of a file in the current output directory, or
a full file path.
```bash
invoke> !fix 0000045.4829112.png -G0.8 -ft gfpgan
>> fixing outputs/img-samples/0000045.4829112.png
>> retrieved seed 4829112 and prompt "boy enjoying a banana split"
>> GFPGAN - Restoring Faces for image seed:4829112
Outputs:
[1] outputs/img-samples/000017.4829112.gfpgan-00.png: !fix "outputs/img-samples/0000045.4829112.png" -s 50 -S -W 512 -H 512 -C 7.5 -A k_lms -G 0.8
```
~~~
dream> !fetch 0000015.8929913.png
# the script returns the next line, ready for editing and running:
dream> a fantastic alien landscape -W 576 -H 512 -s 60 -A plms -C 7.5
~~~
### !mask
Note that this command may behave unexpectedly if given a PNG file that
was not generated by InvokeAI.
This command takes an image, a text prompt, and uses the `clipseg`
algorithm to automatically generate a mask of the area that matches
the text prompt. It is useful for debugging the text masking process
prior to inpainting with the `--text_mask` argument. See
[INPAINTING.md](./INPAINTING.md) for details.
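A hypothetical session (the exact argument syntax and output are assumptions; the `-tm` prompt and optional threshold behave as described under inpainting above):

```bash
invoke> !mask /path/to/breakfast.png -tm bagel 0.5
# writes out a mask image highlighting the region clipseg classified
# as "bagel", for inspection before inpainting with the same -tm values
```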
## !history
## Model selection and importation
The dream script keeps track of all the commands you issue during a
The CLI allows you to add new models on the fly, as well as to switch
among them rapidly without leaving the script.
### !models
This prints out a list of the models defined in `config/models.yaml`.
The active model is bold-faced.
Example:
<pre>
laion400m not loaded <no description>
<b>stable-diffusion-1.4 active Stable Diffusion v1.4</b>
waifu-diffusion not loaded Waifu Diffusion v1.3
</pre>
### !switch <model>
This quickly switches from one model to another without leaving the
CLI script. `invoke.py` uses a memory caching system; once a model
has been loaded, switching back and forth is quick. The following
example shows this in action. Note how the second column of the
`!models` table changes to `cached` after a model is first loaded,
and that the long initialization step is not needed when loading
a cached model.
<pre>
invoke> !models
laion400m not loaded <no description>
<b>stable-diffusion-1.4 cached Stable Diffusion v1.4</b>
waifu-diffusion active Waifu Diffusion v1.3
invoke> !switch waifu-diffusion
>> Caching model stable-diffusion-1.4 in system RAM
>> Loading waifu-diffusion from models/ldm/stable-diffusion-v1/model-epoch08-float16.ckpt
| LatentDiffusion: Running in eps-prediction mode
| DiffusionWrapper has 859.52 M params.
| Making attention of type 'vanilla' with 512 in_channels
| Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
| Making attention of type 'vanilla' with 512 in_channels
| Using faster float16 precision
>> Model loaded in 18.24s
>> Max VRAM used to load the model: 2.17G
>> Current VRAM usage:2.17G
>> Setting Sampler to k_lms
invoke> !models
laion400m not loaded <no description>
stable-diffusion-1.4 cached Stable Diffusion v1.4
<b>waifu-diffusion active Waifu Diffusion v1.3</b>
invoke> !switch stable-diffusion-1.4
>> Caching model waifu-diffusion in system RAM
>> Retrieving model stable-diffusion-1.4 from system RAM cache
>> Setting Sampler to k_lms
invoke> !models
laion400m not loaded <no description>
<b>stable-diffusion-1.4 active Stable Diffusion v1.4</b>
waifu-diffusion cached Waifu Diffusion v1.3
</pre>
### !import_model <path/to/model/weights>
This command imports a new model weights file into InvokeAI, makes it
available for image generation within the script, and writes out the
configuration for the model into `config/models.yaml` for use in
subsequent sessions.
Provide `!import_model` with the path to a weights file ending in
`.ckpt`. If you type a partial path and press tab, the CLI will
autocomplete. Although it will also autocomplete to `.vae` files,
these are not currently supported (but will be soon).
When you hit return, the CLI will prompt you to fill in additional
information about the model, including the short name you wish to use
for it with the `!switch` command, a brief description of the model,
the default image width and height to use with this model, and the
model's configuration file. The latter three fields are automatically
filled with reasonable defaults. In the example below, the bold-faced
text shows what the user typed in with the exception of the width,
height and configuration file paths, which were filled in
automatically.
Example:
<pre>
invoke> <b>!import_model models/ldm/stable-diffusion-v1/model-epoch08-float16.ckpt</b>
>> Model import in process. Please enter the values needed to configure this model:
Name for this model: <b>waifu-diffusion</b>
Description of this model: <b>Waifu Diffusion v1.3</b>
Configuration file for this model: <b>configs/stable-diffusion/v1-inference.yaml</b>
Default image width: <b>512</b>
Default image height: <b>512</b>
>> New configuration:
waifu-diffusion:
   config: configs/stable-diffusion/v1-inference.yaml
   description: Waifu Diffusion v1.3
   height: 512
   weights: models/ldm/stable-diffusion-v1/model-epoch08-float16.ckpt
   width: 512
OK to import [n]? <b>y</b>
>> Caching model stable-diffusion-1.4 in system RAM
>> Loading waifu-diffusion from models/ldm/stable-diffusion-v1/model-epoch08-float16.ckpt
| LatentDiffusion: Running in eps-prediction mode
| DiffusionWrapper has 859.52 M params.
| Making attention of type 'vanilla' with 512 in_channels
| Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
| Making attention of type 'vanilla' with 512 in_channels
| Using faster float16 precision
invoke>
</pre>
### !edit_model <name_of_model>
The `!edit_model` command can be used to modify a model that is
already defined in `config/models.yaml`. Call it with the short
name of the model you wish to modify, and it will allow you to
modify the model's `description`, `weights` and other fields.
Example:
<pre>
invoke> <b>!edit_model waifu-diffusion</b>
>> Editing model waifu-diffusion from configuration file ./configs/models.yaml
description: <b>Waifu diffusion v1.4beta</b>
weights: models/ldm/stable-diffusion-v1/<b>model-epoch10-float16.ckpt</b>
config: configs/stable-diffusion/v1-inference.yaml
width: 512
height: 512
>> New configuration:
waifu-diffusion:
   config: configs/stable-diffusion/v1-inference.yaml
   description: Waifu diffusion v1.4beta
   weights: models/ldm/stable-diffusion-v1/model-epoch10-float16.ckpt
   height: 512
   width: 512
OK to import [n]? y
>> Caching model stable-diffusion-1.4 in system RAM
>> Loading waifu-diffusion from models/ldm/stable-diffusion-v1/model-epoch10-float16.ckpt
...
</pre>
The `!fix` command can also run Embiggen on a previously-generated image. For example:
<pre>
invoke> !fix 000017.4829112.gfpgan-00.png --embiggen 3
...lots of text...
Outputs:
[2] outputs/img-samples/000018.2273800735.embiggen-00.png: !fix "outputs/img-samples/000017.243781548.gfpgan-00.png" -s 50 -S 2273800735 -W 512 -H 512 -C 7.5 -A k_lms --embiggen 3.0 0.75 0.25
</pre>
## History processing
The CLI provides a series of convenient commands for reviewing previous
actions, retrieving them, modifying them, and re-running them.
### !history
The invoke script keeps track of all the commands you issue during a
session, allowing you to re-run them. On Mac and Linux systems, it
also writes the command-line history out to disk, giving you access to
the most recent 1000 commands issued.
The `!history` command will return a numbered list of all the commands
issued during the session (Windows), or the most recent 1000 commands
(Mac|Linux). You can then repeat a command by using the command `!NNN`,
where "NNN" is the history line number. For example:
```bash
invoke> !history
...
[14] happy woman sitting under tree wearing broad hat and flowing garment
[15] beautiful woman sitting under tree wearing broad hat and flowing garment
...
[20] watercolor of beautiful woman sitting under tree wearing broad hat and flowing garment -v0.2 -n6 -S2878767194
[21] surrealist painting of beautiful woman sitting under tree wearing broad hat and flowing garment -v0.2 -n6 -S2878767194
...
invoke> !20
invoke> watercolor of beautiful woman sitting under tree wearing broad hat and flowing garment -v0.2 -n6 -S2878767194
```
### !fetch
This command retrieves the generation parameters from a previously
generated image and either loads them into the command line
(Linux|Mac), or prints them out in a comment for copy-and-paste
(Windows). You may provide either the name of a file in the current
output directory, or a full file path. You may also give the path to a
folder of PNG images using a wildcard such as `*.png`, together with the
name of a text file (e.g. `commands.txt`); the command used to generate
each image will then be retrieved and saved to that file for further
processing.
This example loads the generation command for a single png file:
```bash
invoke> !fetch 0000015.8929913.png
# the script returns the next line, ready for editing and running:
invoke> a fantastic alien landscape -W 576 -H 512 -s 60 -A plms -C 7.5
```
This one fetches the generation commands from a batch of files and
stores them into `selected.txt`:
```bash
invoke> !fetch outputs\selected-imgs\*.png selected.txt
```
### !replay
This command replays a text file generated by `!fetch` or created manually:

```bash
invoke> !replay outputs\selected-imgs\selected.txt
```
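The replayed file is simply a list of commands, one per line, written exactly as you would type them at the `invoke>` prompt. As an illustration, a hypothetical `selected.txt` produced by `!fetch` might contain:

```text
a fantastic alien landscape -W 576 -H 512 -s 60 -A plms -C 7.5
a fantastic alien landscape -W 576 -H 512 -s 60 -A plms -C 7.5 -U 2 0.6
```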
Note that these commands may behave unexpectedly if given a PNG file that
was not generated by InvokeAI.
### !search <search string>
This is similar to !history but it only returns lines that contain
`search string`. For example:
```bash
invoke> !search surreal
[21] surrealist painting of beautiful woman sitting under tree wearing broad hat and flowing garment -v0.2 -n6 -S2878767194
```
### `!clear`
This clears the search history from memory and disk. Be advised that
this operation is irreversible and does not issue any warnings!
## Command-line editing and completion
The command-line offers convenient history tracking, editing, and
command completion.
- To scroll through previous commands and potentially edit/reuse them, use the ++up++ and ++down++ keys.
- To edit the current command, use the ++left++ and ++right++ keys to position the cursor, and then ++backspace++, ++delete++ or insert characters.
- To move to the very beginning of the command, type ++ctrl+a++ (or ++command+a++ on the Mac)
- To move to the end of the command, type ++ctrl+e++.
- To cut a section of the command, position the cursor where you want to start cutting and type ++ctrl+k++
- To paste a cut section back in, position the cursor where you want to paste, and type ++ctrl+y++
Windows users can get similar, but more limited, functionality if they
launch `invoke.py` with the `winpty` program and have the `pyreadline3`
library installed:
```batch
> winpty python scripts\invoke.py
```
On the Mac and Linux platforms, when you exit `invoke.py`, the last 1000
lines of your command-line history will be saved. When you restart
`invoke.py`, you can access the saved history using the ++up++ key.
In addition, limited command-line completion is installed. In various
contexts, you can start typing your command and press ++tab++. A list of
potential completions will be presented to you. You can then type a
little more, hit ++tab++ again, and eventually autocomplete what you want.
When specifying file paths using the one-letter shortcuts, the CLI
will attempt to complete pathnames for you. This is most handy for the
`-I` (init image) and `-M` (init mask) paths. To initiate completion, start
the path with a slash (`/`) or `./`. For example:
```bash
invoke> zebra with a mustache -I./test-pictures<TAB>
-I./test-pictures/Lincoln-and-Parrot.png -I./test-pictures/zebra.jpg -I./test-pictures/madonna.png
-I./test-pictures/bad-sketch.png -I./test-pictures/man_with_eagle/
```

---

it's similar to that, except it can work up to an arbitrarily large size
has extra logic to re-run any number of the tile sub-sections of the image
if for example a small part of a huge run got messed up.
### Usage
`-embiggen <scaling_factor> <esrgan_strength> <overlap_ratio OR overlap_pixels>`
Tiles are numbered starting with one, and left-to-right,
top-to-bottom. So, if you are generating a 3x3 tiled image, the
middle row would be `4 5 6`.
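Laid out as a grid, that 3x3 numbering looks like this:

```
1 2 3
4 5 6
7 8 9
```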
### Examples
!!! example ""

    Running Embiggen with 512x512 tiles on an existing image, scaling up by a factor of 2.5x;
    and doing the same again (default ESRGAN strength is 0.75, default overlap between tiles is 0.25):

    ```bash
    invoke > a photo of a forest at sunset -s 100 -W 512 -H 512 -I outputs/forest.png -f 0.4 -embiggen 2.5
    invoke > a photo of a forest at sunset -s 100 -W 512 -H 512 -I outputs/forest.png -f 0.4 -embiggen 2.5 0.75 0.25
    ```

    If your starting image was also 512x512 this should have taken 9 tiles.
!!! example ""

    If there weren't enough clouds in the sky of that forest you just made
    (and that image is about 1280 pixels (512*2.5) wide A.K.A. three
    512x512 tiles with 0.25 overlaps wide) we can replace that top row of
    tiles:

    ```bash
    invoke> a photo of puffy clouds over a forest at sunset -s 100 -W 512 -H 512 -I outputs/000002.seed.png -f 0.5 -embiggen_tiles 1 2 3
    ```
## Fixing Previously-Generated Images
It is easy to apply Embiggen to a previously-generated file without having to
look up the original prompt and provide an initial image. Just use the
syntax `!fix path/to/file.png <embiggen>`. For example, you can rewrite the
previous command to look like this:
```bash
invoke> !fix ./outputs/000002.seed.png -embiggen_tiles 1 2 3
```
A new file named `000002.seed.fixed.png` will be created in the output directory. Note that
the `!fix` command does not replace the original file, unlike the behavior at generate time.
You do not need to provide the prompt, and `!fix` automatically selects a good strength for
embiggen-ing.
!!! note

    Because the same prompt is used on all the tiled images, and the model
    doesn't have the context of anything outside the tile being run - it
    can end up creating repeated pattern (also called 'motifs') across all
    the tiles based on that prompt. The best way to combat this is
    lowering the `--strength` (`-f`) to stay more true to the init image,
    and increasing the number of steps so there is more compute-time to
    create the detail. Anecdotally `--strength` 0.35-0.45 works pretty
    well on most things. It may also work great in some examples even with
    the `--strength` set high for patterns, landscapes, or subjects that
    are more abstract. Because this is (relatively) fast, you can also
    preserve the best parts from each.
Author: [Travco](https://github.com/travco)

---
title: Image-to-Image
---
# :material-image-multiple: Image-to-Image
## `img2img`
This script also provides an `img2img` feature that lets you seed your creations with an initial
drawing or photo. This is a really cool feature that tells stable diffusion to build the prompt on
top of the image you provide, preserving the original's basic shape and layout. To use it, provide
the `--init_img` option as shown here:
```commandline
dream> "waterfall and rainbow" --init_img=./init-images/crude_drawing.png --strength=0.5 -s100 -n4
tree on a hill with a river, nature photograph, national geographic -I./test-pictures/tree-and-river-sketch.png -f 0.85
```
This will take the original image shown here:
<div align="center" markdown>
<img src="https://user-images.githubusercontent.com/50542132/193946000-c42a96d8-5a74-4f8a-b4c3-5213e6cadcce.png" width=350>
</div>
and generate a new image based on it as shown here:
<div align="center" markdown>
<img src="https://user-images.githubusercontent.com/111189/194135515-53d4c060-e994-4016-8121-7c685e281ac9.png" width=350>
</div>
The `--init_img` (`-I`) option gives the path to the seed picture. `--strength` (`-f`) controls how much
the original will be modified, ranging from `0.0` (keep the original intact), to `1.0` (ignore the
original completely). The default is `0.75`, and ranges from `0.25-0.90` give interesting results.
Other relevant options include `-C` (the classifier-free guidance scale), and `-s` (steps). Unlike `txt2img`,
adding steps will continuously change the resulting image and it will not converge.
You may also pass a `-v<variation_amount>` option to generate `-n<iterations>` count variants on
the original image. This is done by passing the first generated image
back into img2img the requested number of times. It generates
interesting variants.
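For example (reusing the sketch from above; the variation and iteration values here are only illustrative), the following generates six variants at a variation amount of 0.2:

```commandline
tree on a hill with a river, nature photograph, national geographic -I./test-pictures/tree-and-river-sketch.png -f 0.85 -v0.2 -n6
```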
Note that the prompt makes a big difference. For example, this slight variation on the prompt produces
a very different image:
`photograph of a tree on a hill with a river`
<div align="center" markdown>
<img src="https://user-images.githubusercontent.com/111189/194135220-16b62181-b60c-4248-8989-4834a8fd7fbd.png" width=350>
</div>
!!! tip

    When designing prompts, think about how the images scraped from the internet were captioned. Very few photographs will
    be labeled "photograph" or "photorealistic." They will, however, be captioned with the publication, photographer, camera
    model, or film settings.
If the initial image contains transparent regions, then Stable Diffusion will only draw within the
transparent regions, a process called "inpainting". However, for this to work correctly, the color
transparent regions, a process called [`inpainting`](./INPAINTING.md#creating-transparent-regions-for-inpainting). However, for this to work correctly, the color
information underneath the transparent needs to be preserved, not erased.
More details can be found here:
[Creating Transparent Images For Inpainting](./INPAINTING.md#creating-transparent-regions-for-inpainting)
!!! warning

    **IMPORTANT ISSUE** `img2img` does not work properly on initial images smaller than 512x512. Please scale your
    image to at least 512x512 before using it. Larger images are not a problem, but may run out of VRAM on your
    GPU card. To fix this, use the `--fit` option, which downscales the initial image to fit within the box specified
    by width x height:

    ```commandline
    tree on a hill with a river, national geographic -I./test-pictures/big-sketch.png -H512 -W512 --fit
    ```
## How does it actually work, though?
The main difference between `img2img` and `prompt2img` is the starting point. While `prompt2img` always starts with pure
gaussian noise and progressively refines it over the requested number of steps, `img2img` skips some of these earlier steps
(how many it skips is indirectly controlled by the `--strength` parameter), and uses instead your initial image mixed with gaussian noise as the starting image.
**Let's start** by thinking about vanilla `prompt2img`, just generating an image from a prompt. If the step count is 10, then the "latent space" (Stable Diffusion's internal representation of the image) for the prompt "fire" with seed `1592514025` develops something like this:
```commandline
dream> "fire" -s10 -W384 -H384 -S1592514025
invoke> "fire" -s10 -W384 -H384 -S1592514025
```
<div align="center" markdown>
![latent steps](../assets/img2img/000019.steps.png)
</div>
Put simply: starting from a frame of fuzz/static, SD finds details in each frame that it thinks look like "fire" and brings them a little bit more into focus, gradually scrubbing out the fuzz until a clear image remains.
**When you use `img2img`** some of the earlier steps are cut, and instead an initial image of your choice is used. But because of how the maths behind Stable Diffusion works, this image needs to be mixed with just the right amount of noise (fuzz/static) for where it is being inserted. This is where the strength parameter comes in. Depending on the set strength, your image will be inserted into the sequence at the appropriate point, with just the right amount of noise.
### A concrete example
I want SD to draw a fire based on this hand-drawn image:
<div align="center" markdown>
![drawing of a fireplace](../assets/img2img/fire-drawing.png)
</div>
Let's only do 10 steps, to make it easier to see what's happening. If strength is `0.7`, this is what the internal steps the algorithm has to take will look like:
<div align="center" markdown>
![gravity32](../assets/img2img/000032.steps.gravity.png)
</div>
With strength `0.4`, the steps look more like this:
<div align="center" markdown>
![gravity30](../assets/img2img/000030.steps.gravity.png)
</div>
Notice how much more fuzzy the starting image is for strength `0.7` compared to `0.4`, and notice also how much longer the sequence is with `0.7`:
| | strength = 0.7 | strength = 0.4 |
| -- | -- | -- |
| initial image that SD sees | ![](../assets/img2img/000032.step-0.png) | ![](../assets/img2img/000030.step-0.png) |
| steps argument to `invoke>` | `-s10` | `-s10` |
| steps actually taken | 7 | 4 |
| latent space at each step | ![gravity32](../assets/img2img/000032.steps.gravity.png) | ![gravity30](../assets/img2img/000030.steps.gravity.png) |
| output | ![000032.1592514025](../assets/img2img/000032.1592514025.png) | ![000030.1592514025](../assets/img2img/000030.1592514025.png) |
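A rule of thumb consistent with these numbers (and with the step counts used in the next section): the number of denoising steps actually performed is roughly `strength × steps`, so to get about `N` effective steps you can request `N ÷ strength` total steps:

```commandline
# 0.7 × 10 = 7 steps actually taken; 0.4 × 10 = 4 steps actually taken.
# To recover ~20 effective steps at strength 0.4, request 20 ÷ 0.4 = 50 steps:
invoke> "fire" -s50 -W384 -H384 -S1592514025 -I /tmp/fire-drawing.png -f 0.4
```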
Both of the outputs look kind of like what I was thinking of. With the strength higher, my input becomes more vague, *and* Stable Diffusion has more steps to refine its output. But it's not really making what I want, which is a picture of cheery open fire. With the strength lower, my input is more clear, *but* Stable Diffusion has less chance to refine itself, so the result ends up inheriting all the problems of my bad drawing.
If you want to try this out yourself, all of these are using a seed of `1592514025` with a width/height of `384`, step count `10`, the default sampler (`k_lms`), and the single-word prompt `"fire"`:
```commandline
dream> "fire" -s10 -W384 -H384 -S1592514025 -I /tmp/fire-drawing.png --strength 0.7
invoke> "fire" -s10 -W384 -H384 -S1592514025 -I /tmp/fire-drawing.png --strength 0.7
```
The code for rendering intermediates is on my (damian0815's) branch [document-img2img](https://github.com/damian0815/InvokeAI/tree/document-img2img) - run `invoke.py` and check your `outputs/img-samples/intermediates` folder while generating an image.
### Compensating for the reduced step count
After putting this guide together I was curious to see how the difference would look if I compensated for the reduced step count by increasing the total number of steps:
Here's strength `0.4` (note step count `50`, which is `20 ÷ 0.4` to make sure SD does `20` steps from my image):
```commandline
dream> "fire" -s50 -W384 -H384 -S1592514025 -I /tmp/fire-drawing.png -f 0.4
invoke> "fire" -s50 -W384 -H384 -S1592514025 -I /tmp/fire-drawing.png -f 0.4
```
<div align="center" markdown>
![000035.1592514025](../assets/img2img/000035.1592514025.png)
</div>
and here is strength `0.7` (note step count `30`, which is roughly `20 ÷ 0.7` to make sure SD does `20` steps from my image):
```commandline
dream> "fire" -s30 -W384 -H384 -S1592514025 -I /tmp/fire-drawing.png -f 0.7
invoke> "fire" -s30 -W384 -H384 -S1592514025 -I /tmp/fire-drawing.png -f 0.7
```
<div align="center" markdown>
![000046.1592514025](../assets/img2img/000046.1592514025.png)
</div>
In both cases the image is nice and clean and "finished", but because at strength `0.7` Stable Diffusion has been given so much more freedom to improve on my badly-drawn flames, they've come out looking much better. You can really see the difference when looking at the latent steps. There's more noise on the first image with strength `0.7`:
![gravity46](../assets/img2img/000046.steps.gravity.png)
than there is for strength `0.4`:
![gravity35](../assets/img2img/000035.steps.gravity.png)
and that extra noise gives the algorithm more choices when it is evaluating how to denoise any particular pixel in the image.
Unfortunately, it seems that `img2img` is very sensitive to the step count. Here's strength `0.7` with a step count of `29` (SD did 19 steps from my image):
<div align="center" markdown>
![gravity45](../assets/img2img/000045.1592514025.png)
</div>
By comparing the latents we can sort of see that something got interpreted differently enough on the third or fourth step to lead to a rather different interpretation of the flames.
![gravity46](../assets/img2img/000046.steps.gravity.png)
![gravity45](../assets/img2img/000045.steps.gravity.png)
This is the result of a difference in the de-noising "schedule" - basically the noise has to be cleaned by a certain degree each step or the model won't "converge" on the image properly (see [stable diffusion blog](https://huggingface.co/blog/stable_diffusion) for more about that). A different step count means a different schedule, which means things get interpreted slightly differently at every step.

---
title: Inpainting
---
## **Creating Transparent Regions for Inpainting**
Inpainting is really cool. To do it, you start with an initial image
and use a photoeditor to make one or more regions transparent
(i.e. they have a "hole" in them). You then provide the path to this
image at the `invoke>` command line using the `-I` switch. Stable
Diffusion will only paint within the transparent region.
There's a catch. In the current implementation, you have to prepare
the initial image correctly so that the underlying colors are
preserved under the transparent area. Many imaging editing
applications will by default erase the color information under the
transparent pixels and replace them with white or black, which will
lead to suboptimal inpainting. It often helps to apply incomplete
transparency, such as any value between 1% and 99%.
You also must take care to export the PNG file in such a way that the
color information is preserved. There is often an option in the export
dialog that lets you specify this.

If your photoeditor is erasing the underlying color information,
`invoke.py` will give you a big fat warning. If you can't find a way to
coax your photoeditor to retain color values under transparent areas,
then you can combine the `-I` and `-M` switches to provide both the
original unedited image and the masked (partially transparent) image:
```bash
dream> "man with cat on shoulder" -I./images/man.png -M./images/man-transparent.png
invoke> "man with cat on shoulder" -I./images/man.png -M./images/man-transparent.png
```
We are hoping to get rid of the need for this workaround in an upcoming release.
## **Masking using Text**
You can also create a mask using a text prompt to select the part of
the image you want to alter, using the [clipseg](https://github.com/timojl/clipseg)
algorithm. This works on any image, not just ones generated by InvokeAI.
The `--text_mask` (short form `-tm`) option takes two arguments. The
first argument is a text description of the part of the image you wish
to mask (paint over). If the text description contains a space, you must
surround it with quotation marks. The optional second argument is the
minimum threshold for the mask classifier's confidence score, described
in more detail below.
To see how this works in practice, here's an image of a still life
painting that I got off the web.
<img src="../assets/still-life-scaled.jpg">
You can selectively mask out the
orange and replace it with a baseball in this way:
```bash
invoke> a baseball -I /path/to/still_life.png -tm orange
```
<img src="../assets/still-life-inpainted.png">
The clipseg classifier produces a confidence score for each region it
identifies. Generally regions that score above 0.5 are reliable, but
if you are getting too much or too little masking you can adjust the
threshold down (to get more mask), or up (to get less). In this
example, by passing `-tm` a higher value, we are insisting on a tighter
mask. However, if you make it too high, the orange may not be picked
up at all!
```bash
invoke> a baseball -I /path/to/breakfast.png -tm orange 0.6
```
The `!mask` command may be useful for debugging problems with the
text2mask feature. The syntax is `!mask /path/to/image.png -tm <text>
<threshold>`
It will generate three files:
- The image with the selected area highlighted.
    - it will be named XXXXX.<imagename>.<prompt>.selected.png
- The image with the un-selected area highlighted.
    - it will be named XXXXX.<imagename>.<prompt>.deselected.png
- The image with the selected area converted into a black and white
  image according to the threshold level
    - it will be named XXXXX.<imagename>.<prompt>.masked.png
The `.masked.png` file can then be directly passed to the `invoke>`
prompt in the CLI via the `-M` argument. Do not attempt this with
the `selected.png` or `deselected.png` files, as they contain some
transparency throughout the image and will not produce the desired
results.
Here is an example of how `!mask` works:
```
invoke> !mask ./test-pictures/curly.png -tm hair 0.5
>> generating masks from ./test-pictures/curly.png
>> Initializing clipseg model for text to mask inference
Outputs:
[941.1] outputs/img-samples/000019.curly.hair.deselected.png: !mask ./test-pictures/curly.png -tm hair 0.5
[941.2] outputs/img-samples/000019.curly.hair.selected.png: !mask ./test-pictures/curly.png -tm hair 0.5
[941.3] outputs/img-samples/000019.curly.hair.masked.png: !mask ./test-pictures/curly.png -tm hair 0.5
```
**Original image "curly.png"**
<img src="../assets/outpainting/curly.png">
**000019.curly.hair.selected.png**
<img src="../assets/inpainting/000019.curly.hair.selected.png">
**000019.curly.hair.deselected.png**
<img src="../assets/inpainting/000019.curly.hair.deselected.png">
**000019.curly.hair.masked.png**
<img src="../assets/inpainting/000019.curly.hair.masked.png">
It looks like we selected the hair pretty well at the 0.5 threshold
(which is the default, so we didn't actually have to specify it), so
let's have some fun:
```
invoke> medusa with cobras -I ./test-pictures/curly.png -M 000019.curly.hair.masked.png -C20
>> loaded input image of size 512x512 from ./test-pictures/curly.png
...
Outputs:
[946] outputs/img-samples/000024.801380492.png: "medusa with cobras" -s 50 -S 801380492 -W 512 -H 512 -C 20.0 -I ./test-pictures/curly.png -A k_lms -f 0.75
```
<img src="../assets/inpainting/000024.801380492.png">
You can also skip the `!mask` creation step and just select the masked
region directly:
```
invoke> medusa with cobras -I ./test-pictures/curly.png -tm hair -C20
```
## Using the RunwayML inpainting model
The [RunwayML Inpainting Model
v1.5](https://huggingface.co/runwayml/stable-diffusion-inpainting) is
a specialized version of [Stable Diffusion
v1.5](https://huggingface.co/spaces/runwayml/stable-diffusion-v1-5)
that contains extra channels specifically designed to enhance
inpainting and outpainting. While it can do regular `txt2img` and
`img2img`, it really shines when filling in missing regions. It has an
almost uncanny ability to blend the new regions with existing ones in
a semantically coherent way.
To install the inpainting model, follow the
[instructions](INSTALLING-MODELS.md) for installing a new model. You
may use either the CLI (`invoke.py` script) or directly edit the
`configs/models.yaml` configuration file to do this. The main thing to
watch out for is that the model `config` option must be set up to
use `v1-inpainting-inference.yaml` rather than the `v1-inference.yaml`
file that is used by Stable Diffusion 1.4 and 1.5.
After installation, your `models.yaml` should contain an entry that
looks like this one:
```yaml
inpainting-1.5:
  weights: models/ldm/stable-diffusion-v1/sd-v1-5-inpainting.ckpt
  description: SD inpainting v1.5
  config: configs/stable-diffusion/v1-inpainting-inference.yaml
  vae: models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
  width: 512
  height: 512
```
As shown in the example, you may include a VAE fine-tuning weights
file as well. This is strongly recommended.
To use the custom inpainting model, launch `invoke.py` with the
argument `--model inpainting-1.5` or alternatively from within the
script use the `!switch inpainting-1.5` command to load and switch to
the inpainting model.
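For example, either of the following will activate it (assuming the `models.yaml` entry shown above):

```bash
# at launch time:
python scripts/invoke.py --model inpainting-1.5

# or from within a running CLI session:
invoke> !switch inpainting-1.5
```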
You can now do inpainting and outpainting exactly as described above,
but there will (likely) be a noticeable improvement in
coherence. Txt2img and Img2img will work as well.
There are a few caveats to be aware of:
1. The inpainting model is larger than the standard model, and will
use nearly 4 GB of GPU VRAM. This makes it unlikely to run on
a 4 GB graphics card.
2. When operating in Img2img mode, the inpainting model is much less
steerable than the standard model. It is great for making small
changes, such as changing the pattern of a fabric, or slightly
changing a subject's expression or hair, but the model will
resist making the dramatic alterations that the standard
model lets you do.
3. While the `--hires` option works fine with the inpainting model,
some special features, such as `--embiggen` are disabled.
4. Prompt weighting (`banana++ sushi`) and merging work well with
the inpainting model, but prompt swapping (`a ("fluffy cat").swap("smiling dog") eating a hotdog`)
will not have any effect due to the way the model is set up.
You may use text masking (with `-tm thing-to-mask`) as an
effective replacement.
5. The model tends to oversharpen the image if you use high step or CFG
values. If you need to use high step counts, use the standard model.
6. The `--strength` (`-f`) option has no effect on the inpainting
model due to its fundamental differences with the standard
model. It will always take the full number of steps you specify.
## Troubleshooting
Here are some troubleshooting tips for inpainting and outpainting.
### Inpainting is not changing the masked region enough!
One of the things to understand about how inpainting works is that it
is equivalent to running img2img on just the masked (transparent)
area. img2img builds on top of the existing image data, and therefore
will attempt to preserve colors, shapes and textures to the best of
its ability. Unfortunately this means that if you want to make a
dramatic change in the inpainted region, for example replacing a red
wall with a blue one, the algorithm will fight you.
You have a couple of options. The first is to increase the values of
the requested steps (`-sXXX`), strength (`-f0.XX`), and/or
classifier-free guidance (`-CXX.X`). If this is not working for you, a
more extreme step is to provide the `--inpaint_replace 0.X` (`-r0.X`)
option. This value ranges from 0.0 to 1.0. The higher it is the less
attention the algorithm will pay to the data underneath the masked
region. At high values this will enable you to replace colored regions
entirely, but beware that the masked region may not blend in with the
surrounding unmasked regions as well.
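As an illustrative sketch (the file name and parameter values here are hypothetical), you might first try boosting steps, strength and guidance, and only then reach for `--inpaint_replace`:

```bash
invoke> a cobalt blue wall -I ./room-with-red-wall.png -s 100 -f 0.9 -C 15
invoke> a cobalt blue wall -I ./room-with-red-wall.png -r0.9
```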
---
## Recipe for GIMP
1. Open image in GIMP.
2. Layer->Transparency->Add Alpha Channel
3. Use lasso tool to select region to mask
4. Choose Select -> Float to create a floating selection
5. Open the Layers toolbar (++ctrl+l++) and select "Floating Selection"
6. Set opacity to a value between 0% and 99%
7. Export as PNG
8. In the export dialogue, make sure the "Save colour values from
   transparent pixels" checkbox is selected.
---
## Recipe for Adobe Photoshop
1. Open image in Photoshop

    <div align="center" markdown>![step1](../assets/step1.png)</div>

2. Use any of the selection tools (Marquee, Lasso, or Wand) to select the area you desire to inpaint.

    <div align="center" markdown>![step2](../assets/step2.png)</div>

3. Because we'll be applying a mask over the area we want to preserve, you should now select the inverse by using the ++shift+ctrl+i++ shortcut, or right clicking and using the "Select Inverse" option.

4. You'll now create a mask by selecting the image layer, and Masking the selection. Make sure that you don't delete any of the underlying image, or your inpainting results will be dramatically impacted.

    <div align="center" markdown>![step4](../assets/step4.png)</div>

5. Make sure to hide any background layers that are present. You should see the mask applied to your image layer, and the image on your canvas should display the checkered background.

    <div align="center" markdown>![step5](../assets/step5.png)</div>

6. Save the image as a transparent PNG by using `File`-->`Save a Copy` from the menu bar, or by using the keyboard shortcut ++alt+ctrl+s++

    <div align="center" markdown>![step6](../assets/step6.png)</div>

7. After following the inpainting instructions above (either through the CLI or the Web UI), marvel at your newfound ability to selectively invoke. Lookin' good!

    <div align="center" markdown>![step7](../assets/step7.png)</div>

8. In the export dialogue, make sure the "Save colour values from transparent pixels" checkbox is selected.

---
title: Others
---
## **Google Colab**
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg){ align="right" }](https://colab.research.google.com/github/lstein/stable-diffusion/blob/main/notebooks/Stable_Diffusion_AI_Notebook.ipynb)

Open and follow instructions to use an isolated environment running Dream.

Output Example:

![Colab Notebook](../assets/colab_notebook.png)
---
## **Seamless Tiling**
The seamless tiling mode causes each generated image to tile seamlessly with itself. To use it, add the
`--seamless` option when starting the script (which will cause all generated images to tile), or supply
it for an individual `invoke>` prompt as shown here:
```bash
invoke> "pond garden with lotus by claude monet" --seamless -s100 -n4
```
By default this will tile on both the X and Y axes. However, you can also specify specific axes to tile on with `--seamless_axes`.
Possible values are `x`, `y`, and `x,y`:
```bash
invoke> "pond garden with lotus by claude monet" --seamless --seamless_axes=x -s100 -n4
```
---
## **Shortcuts: Reusing Seeds**
Since it is so common to reuse seeds while refining a prompt, there is now a shortcut as of version
1.11. Provide a `-S` (or `--seed`) switch of `-1` to use the seed of the most recent image
generated. If you produced multiple images with the `-n` switch, then you can go back further
using `-2`, `-3`, etc. up to the first image generated by the previous command. Sorry, but you can't go
back further than one command.
Here's an example of using this to do a quick refinement. It also illustrates using the new `-G`
switch to turn on upscaling and face enhancement (see previous section):
```bash
invoke> a cute child playing hopscotch -G0.5
[...]
outputs/img-samples/000039.3498014304.png: "a cute child playing hopscotch" -s50 -W512 -H512 -C7.5 -mk_lms -S3498014304
# I wonder what it will look like if I bump up the steps and set facial enhancement to full strength?
invoke> a cute child playing hopscotch -G1.0 -s100 -S -1
reusing previous seed 3498014304
[...]
outputs/img-samples/000040.3498014304.png: "a cute child playing hopscotch" -G1.0 -s100 -W512 -H512 -C7.5 -mk_lms -S3498014304
## **Weighted Prompts**
You may weight different sections of the prompt to tell the sampler to attach different levels of
priority to them, by adding `:<percent>` to the end of the section you wish to up- or downweight. For
example consider this prompt:
```bash
tabby cat:0.25 white duck:0.75 hybrid
```

This will tell the sampler to invest 25% of its effort on the tabby cat aspect of the image and 75%
on the white duck aspect. The weights can be any combination of integers and floating point numbers,
and they do not need to add up to 1.
---
## **Filename Format**
The argument `--fnformat` lets you specify the filename of the
image. Supported wildcards are all arguments that can be set, such as
`perlin`, `seed`, `threshold`, `height`, `width`, `gfpgan_strength`,
`sampler_name`, `steps`, `model`, `upscale`, `prompt`, `cfg_scale`,
`prefix`.
The following prompt
```bash
invoke> a red car --steps 25 -C 9.8 --perlin 0.1 --fnformat {prompt}_steps.{steps}_cfg.{cfg_scale}_perlin.{perlin}.png
```
generates a file with the name: `outputs/img-samples/a red car_steps.25_cfg.9.8_perlin.0.1.png`
---
## **Thresholding and Perlin Noise Initialization Options**
Two new options are the thresholding (`--threshold`) and the perlin noise initialization (`--perlin`) options. Thresholding limits the range of the latent values during optimization, which helps combat oversaturation with higher CFG scale values. Perlin noise initialization starts with a percentage (a value ranging from 0 to 1) of perlin noise mixed into the initial noise. Both features allow for more variations and options in the course of generating images.
For better intuition into what these options do in practice:
![here is a graphic demonstrating them both](../assets/truncation_comparison.jpg)
In generating this graphic, perlin noise at initialization was programmatically varied going across on the diagram by values 0.0, 0.1, 0.2, 0.4, 0.5, 0.6, 0.8, 0.9, 1.0; and the threshold was varied going down from
0, 1, 2, 3, 4, 5, 10, 20, 100. The other options are fixed, so the initial prompt is as follows (no thresholding or perlin noise):
```bash
invoke> "a portrait of a beautiful young lady" -S 1950357039 -s 100 -C 20 -A k_euler_a --threshold 0 --perlin 0
```
Here's an example of another prompt used when setting the threshold to 5 and perlin noise to 0.2:
```bash
invoke> "a portrait of a beautiful young lady" -S 1950357039 -s 100 -C 20 -A k_euler_a --threshold 5 --perlin 0.2
```
!!! note

    Currently the thresholding feature is only implemented for the k-diffusion style samplers, and empirically appears to work best with `k_euler_a` and `k_dpm_2_a`. Using 0 disables thresholding. Using 0 for perlin noise disables using perlin noise for initialization. Finally, using 1 for perlin noise uses only perlin noise for initialization.
---
internet. In the following runs, it will load up the cached versions of the required files from the
`.cache` directory of the system.
```bash
(invokeai) ~/stable-diffusion$ python3 ./scripts/preload_models.py
preloading bert tokenizer...
Downloading: 100%|██████████████████████████████████| 28.0/28.0 [00:00<00:00, 49.3kB/s]
Downloading: 100%|██████████████████████████████████| 226k/226k [00:00<00:00, 2.79MB/s]
```

---

InvokeAI supports two versions of outpainting, one called "outpaint"
and the other "outcrop." They work slightly differently and each has
its advantages and drawbacks.
### Outpainting
Outpainting is the same as inpainting, except that the painting occurs
in the regions outside of the original image. To outpaint using the
`invoke.py` command line script, prepare an image in which the borders
to be extended are pure black. Add an alpha channel (if there isn't one
already), and make the borders completely transparent and the interior
completely opaque. If you wish to modify the interior as well, you may
create transparent holes in the transparency layer, which `img2img` will
paint into as usual.
Pass the image as the argument to the `-I` switch as you would for
regular inpainting:
```bash
invoke> a stream by a river -I /path/to/transparent_img.png
```
You'll likely be delighted by the results.
### Tips
1. Do not try to expand the image too much at once. Generally it is best
to expand the margins in 64-pixel increments. 128 pixels often works,
but your mileage may vary depending on the nature of the image you are
trying to outpaint into.
2. There are a series of switches that can be used to adjust how the
inpainting algorithm operates. In particular, you can use these to
minimize the seam that sometimes appears between the original image
and the extended part. These switches are:
```
--seam_size SEAM_SIZE          Size of the mask around the seam between original and outpainted image (0)
--seam_blur SEAM_BLUR          The amount to blur the seam inwards (0)
--seam_strength STRENGTH       The img2img strength to use when filling the seam (0.7)
--seam_steps SEAM_STEPS        The number of steps to use to fill the seam. (10)
--tile_size TILE_SIZE          The tile size to use for filling outpaint areas (32)
```
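For instance (the seam values here are purely illustrative, reusing the transparent image from above):

```bash
invoke> a stream by a river -I /path/to/transparent_img.png --seam_size 16 --seam_blur 8 --seam_strength 0.7 --seam_steps 20
```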
### Outcrop
The `outcrop` extension gives you a convenient `!fix` postprocessing
command that allows you to extend a previously-generated image in 64
pixel increments in any direction. You can apply the module to any
image previously-generated by InvokeAI. Note that it works with
arbitrary PNG photographs, but not currently with JPG or other
formats. Outcropping is particularly effective when combined with the
[runwayML custom inpainting
model](INPAINTING.md#using-the-runwayml-inpainting-model).
Consider this image:
<div align="center" markdown>
![curly_woman](../assets/outpainting/curly.png)
</div>
Pretty nice, but it's annoying that the top of her head is cut
off. She's also a bit off center. Let's fix that!
```bash
invoke> !fix images/curly.png --outcrop top 64 right 64
```
This is saying to apply the `outcrop` extension by extending the top
of the image by 64 pixels, and the right of the image by the same
amount. You can specify any number of pixels to extend.
The result looks like this:
<div align="center" markdown>
![curly_woman_outcrop](../assets/outpainting/curly-outcrop.png)
</div>
The new image is actually slightly larger than the original (576x576,
because 64 pixels were added to the top and right sides.)
Each time you run `!fix` on an image, you'll get a slightly different result. You can run it repeatedly
until you get an image you like. Unfortunately `!fix` does not
currently respect the `-n` (`--iterations`) argument.
### Outpaint
The `outpaint` extension does the same thing, but with subtle
differences. Starting with the same image, here is how we would add an
additional 64 pixels to the top of the image:
```bash
invoke> !fix images/curly.png --out_direction top 64
```
(You can abbreviate `--out_direction` as `-D`.)
The result is shown here:
![curly_woman_outpaint](../assets/outpainting/curly-outpaint.png)
Although the effect is similar, there are significant differences from
outcropping:
1. You can only specify one direction to extend at a time.
2. The image is **not** resized. Instead, the image is shifted by the specified
number of pixels. If you look carefully, you'll see that less of the lady's
torso is visible in the image.
3. Because the image dimensions remain the same, there's no rounding
to multiples of 64.
4. Attempting to outpaint larger areas will frequently give rise to ugly
ghosting effects.
5. For best results, try increasing the step number.
6. If you don't specify a pixel value in `-D`, it will default to half
of the whole image, which is likely not what you want.
Neither `outpaint` nor `outcrop` is perfect, but we continue to tune
and improve them. If one doesn't work, try the other. You may also
wish to experiment with other `img2img` arguments, such as `-C`, `-f`
and `-s`.
---
title: Postprocessing
---
# :material-image-edit: Postprocessing
## Intro
This extension provides the ability to restore faces and upscale
images.

The default face restoration module is GFPGAN. The default upscale is
Real-ESRGAN. For an alternative face restoration module, see [CodeFormer
Support] below.
As of version 1.14, environment.yaml will install the Real-ESRGAN
package into the standard install location for python packages, and
will put GFPGAN into a subdirectory of "src" in the InvokeAI
directory. Upscaling with Real-ESRGAN should "just work" without
further intervention. Simply pass the `--upscale` (`-U`) option on the
`invoke>` command line, or indicate the desired scale on the popup in
the Web GUI.
**GFPGAN** requires a series of downloadable model files to
work. These are loaded when you run `scripts/preload_models.py`. If
GFPGAN is failing with an error, please run the following from the
InvokeAI directory:

```bash
python scripts/preload_models.py
```
If you do not run this script in advance, the GFPGAN module will attempt
to download the model files the first time you try to perform facial
reconstruction.
!!! warning "Internet connection needed"

    Users whose GPU machines are isolated from the Internet (e.g.
    on a University cluster) should be aware that the first time you run `invoke.py` with GFPGAN and
    Real-ESRGAN turned on, it will try to download model files from the Internet. To rectify this, you
    may run `python3 scripts/preload_models.py` after you have installed GFPGAN and all its
    dependencies.
Alternatively, if you have GFPGAN installed elsewhere, or if you are
using an earlier version of this package which asked you to install
GFPGAN in a sibling directory, you may use the `--gfpgan_dir` argument
with `invoke.py` to set a custom path to your GFPGAN directory. _There
are other GFPGAN related boot arguments if you wish to customize
further._
## Usage
If you do not explicitly specify an `upscaling_strength`, it will default to 0.75.
### Face Restoration
`-G : <facetool_strength>`
This prompt argument controls the strength of the face restoration that is being
applied. Similar to upscaling, values between `0.5 to 0.8` are recommended.
### Example Usage
```bash
invoke> "superman dancing with a panda bear" -U 2 0.6 -G 0.4
```
This also works with img2img:
```bash
invoke> "a man wearing a pineapple hat" -I path/to/your/file.png -U 2 0.5 -G 0.6
```
This repo also allows you to perform face restoration using
[CodeFormer](https://github.com/sczhou/CodeFormer).
In order to set up CodeFormer, you need to download the models
like with GFPGAN. You can do this either by running
`preload_models.py` or by manually downloading the [model
file](https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/codeformer.pth)
and saving it to the `ldm/invoke/restoration/codeformer/weights` folder.
You can use `-ft` prompt argument to swap between CodeFormer and the default
GFPGAN. The above mentioned `-G` prompt argument will allow you to control the
strength of the restoration effect.
You can use the `-ft` prompt argument to swap between CodeFormer and the
default GFPGAN. The above-mentioned `-G` prompt argument will allow
you to control the strength of the restoration effect.
### Usage:
### Usage
The following command will perform face restoration with CodeFormer instead of
the default GFPGAN.
`<prompt> -G 0.8 -ft codeformer`
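For example (the prompt itself is purely illustrative):

```bash
invoke> "portrait of an old fisherman" -G 0.8 -ft codeformer
```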
### Other Options:
### Other Options
- `-cf` - cf or CodeFormer Fidelity takes values between `0` and `1`. 0 produces
high quality results but low accuracy and 1 produces lower quality results but
@@ -167,16 +162,16 @@ previously-generated file. Just use the syntax `!fix path/to/file.png
2X for a file named `./outputs/img-samples/000044.2945021133.png`,
just run:
```
dream> !fix ./outputs/img-samples/000044.2945021133.png -G 0.8 -U 2
```bash
invoke> !fix ./outputs/img-samples/000044.2945021133.png -G 0.8 -U 2
```
A new file named `000044.2945021133.fixed.png` will be created in the output
directory. Note that the `!fix` command does not replace the original file,
unlike the behavior at generate time.
### Disabling:
### Disabling
If, for some reason, you do not wish to load the GFPGAN and/or ESRGAN libraries,
you can disable them on the dream.py command line with the `--no_restore` and
you can disable them on the invoke.py command line with the `--no_restore` and
`--no_upscale` options, respectively.
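For example, a launch line with both libraries disabled might look like this sketch:

```bash
python3 scripts/invoke.py --no_restore --no_upscale
```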

View File

@@ -1,14 +1,14 @@
---
title: Prompting Features
title: Prompting-Features
---
# :octicons-command-palette-24: Prompting Features
# :octicons-command-palette-24: Prompting-Features
## **Reading Prompts from a File**
You can automate `dream.py` by providing a text file with the prompts you want to run, one line per
You can automate `invoke.py` by providing a text file with the prompts you want to run, one line per
prompt. The text file must be composed with a text editor (e.g. Notepad) and not a word processor.
Each line should look like what you would type at the dream> prompt:
Each line should look like what you would type at the invoke> prompt:
```bash
a beautiful sunny day in the park, children playing -n4 -C10
@@ -16,17 +16,18 @@ stormy weather on a mountain top, goats grazing -s100
innovative packaging for a squid's dinner -S137038382
```
Then pass this file's name to `dream.py` when you invoke it:
Then pass this file's name to `invoke.py` when you invoke it:
```bash
(ldm) ~/stable-diffusion$ python3 scripts/dream.py --from_file "path/to/prompts.txt"
(invokeai) ~/stable-diffusion$ python3 scripts/invoke.py --from_file "path/to/prompts.txt"
```
You may read a series of prompts from standard input by providing a filename of `-`:
```bash
(ldm) ~/stable-diffusion$ echo "a beautiful day" | python3 scripts/dream.py --from_file -
(invokeai) ~/stable-diffusion$ echo "a beautiful day" | python3 scripts/invoke.py --from_file -
```
---
## **Negative and Unconditioned Prompts**
@@ -34,7 +35,7 @@ You may read a series of prompts from standard input by providing a filename of
Any words between a pair of square brackets will instruct Stable
Diffusion to attempt to ban the concept from the generated image.
```bash
```text
this is a test prompt [not really] to make you understand [cool] how this works.
```
@@ -44,27 +45,35 @@ Here's a prompt that depicts what it does.
original prompt:
`#!bash "A fantastical translucent poney made of water and foam, ethereal, radiant, hyperalism, scottish folklore, digital painting, artstation, concept art, smooth, 8 k frostbite 3 engine, ultra detailed, art by artgerm and greg rutkowski and magali villeneuve" -s 20 -W 512 -H 768 -C 7.5 -A k_euler_a -S 1654590180`
`#!bash "A fantastical translucent pony made of water and foam, ethereal, radiant, hyperalism, scottish folklore, digital painting, artstation, concept art, smooth, 8 k frostbite 3 engine, ultra detailed, art by artgerm and greg rutkowski and magali villeneuve" -s 20 -W 512 -H 768 -C 7.5 -A k_euler_a -S 1654590180`
<div align="center" markdown>
![step1](../assets/negative_prompt_walkthru/step1.png)
</div>
That image has a woman, so if we want the horse without a rider, we can influence the image not to have a woman by putting [woman] in the prompt, like this:
`#!bash "A fantastical translucent pony made of water and foam, ethereal, radiant, hyperalism, scottish folklore, digital painting, artstation, concept art, smooth, 8 k frostbite 3 engine, ultra detailed, art by artgerm and greg rutkowski and magali villeneuve [woman]" -s 20 -W 512 -H 768 -C 7.5 -A k_euler_a -S 1654590180`
<div align="center" markdown>
![step2](../assets/negative_prompt_walkthru/step2.png)
</div>
That's nice - but say we also don't want the image to be quite so blue. We can add "blue" to the list of negative prompts, so it's now [woman blue]:
`#!bash "A fantastical translucent pony made of water and foam, ethereal, radiant, hyperalism, scottish folklore, digital painting, artstation, concept art, smooth, 8 k frostbite 3 engine, ultra detailed, art by artgerm and greg rutkowski and magali villeneuve [woman blue]" -s 20 -W 512 -H 768 -C 7.5 -A k_euler_a -S 1654590180`
<div align="center" markdown>
![step3](../assets/negative_prompt_walkthru/step3.png)
</div>
Getting close - but there's no sense in having a saddle when our horse doesn't have a rider, so we'll add one more negative prompt: [woman blue saddle].
`#!bash "A fantastical translucent pony made of water and foam, ethereal, radiant, hyperalism, scottish folklore, digital painting, artstation, concept art, smooth, 8 k frostbite 3 engine, ultra detailed, art by artgerm and greg rutkowski and magali villeneuve [woman blue saddle]" -s 20 -W 512 -H 768 -C 7.5 -A k_euler_a -S 1654590180`
<div align="center" markdown>
![step4](../assets/negative_prompt_walkthru/step4.png)
</div>
!!! note "Notes about this feature"
@@ -75,6 +84,109 @@ Getting close - but there's no sense in having a saddle when our horse doesn't h
---
## **Prompt Syntax Features**
The InvokeAI prompting language has the following features:
### Attention weighting
Append `-` or `+` to a word or phrase, or an explicit weight between `0` and `2` (`1` = default), to decrease or increase "attention" (a mix of a per-token CFG weighting multiplier and, for `-`, a weighted blend with the prompt without the term).
The following syntax is recognised:
* single words without parentheses: `a tall thin man picking apricots+`
* single or multiple words with parentheses: `a tall thin man picking (apricots)+`, `a tall thin man picking (apricots)-`, `a tall thin man (picking apricots)+`, `a tall thin man (picking apricots)-`
* more effect with more symbols: `a tall thin man (picking apricots)++`
* nesting: `a tall thin man (picking apricots+)++` (`apricots` effectively gets `+++`)
* all of the above with explicit numbers: `a tall thin man picking (apricots)1.1`, `a tall thin man (picking (apricots)1.3)1.1` (`+` is equivalent to 1.1, `++` is pow(1.1,2), `+++` is pow(1.1,3), etc.; `-` means 0.9, `--` means pow(0.9,2), etc.)
* attention also applies to `[unconditioning]` so `a tall thin man picking apricots [(ladder)0.01]` will *very gently* nudge SD away from trying to draw the man on a ladder
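Putting several of these forms together, an illustrative (not prescriptive) prompt might read:

```bash
invoke> "a tall thin man (picking apricots)1.3 [(ladder)0.01] from a tree" -s50
```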
You can use this to increase or decrease the amount of something. Starting from this prompt of `a man picking apricots from a tree`, let's see what happens if we increase and decrease how much attention we want Stable Diffusion to pay to the word `apricots`:
![an AI generated image of a man picking apricots from a tree](../assets/prompt_syntax/apricots-0.png)
Using `-` to reduce apricot-ness:
| `a man picking apricots- from a tree` | `a man picking apricots-- from a tree` | `a man picking apricots--- from a tree` |
| -- | -- | -- |
| ![an AI generated image of a man picking apricots from a tree, with smaller apricots](../assets/prompt_syntax/apricots--1.png) | ![an AI generated image of a man picking apricots from a tree, with even smaller and fewer apricots](../assets/prompt_syntax/apricots--2.png) | ![an AI generated image of a man picking apricots from a tree, with very few very small apricots](../assets/prompt_syntax/apricots--3.png) |
Using `+` to increase apricot-ness:
| `a man picking apricots+ from a tree` | `a man picking apricots++ from a tree` | `a man picking apricots+++ from a tree` | `a man picking apricots++++ from a tree` | `a man picking apricots+++++ from a tree` |
| -- | -- | -- | -- | -- |
| ![an AI generated image of a man picking apricots from a tree, with larger, more vibrant apricots](../assets/prompt_syntax/apricots-1.png) | ![an AI generated image of a man picking apricots from a tree with even larger, even more vibrant apricots](../assets/prompt_syntax/apricots-2.png) | ![an AI generated image of a man picking apricots from a tree, but the man has been replaced by a pile of apricots](../assets/prompt_syntax/apricots-3.png) | ![an AI generated image of a man picking apricots from a tree, but the man has been replaced by a mound of giant melting-looking apricots](../assets/prompt_syntax/apricots-4.png) | ![an AI generated image of a man picking apricots from a tree, but the man and the leaves and parts of the ground have all been replaced by giant melting-looking apricots](../assets/prompt_syntax/apricots-5.png) |
You can also change the balance between different parts of a prompt. For example, below is a `mountain man`:
![an AI generated image of a mountain man](../assets/prompt_syntax/mountain-man.png)
And here he is with more mountain:
| `mountain+ man` | `mountain++ man` | `mountain+++ man` |
| -- | -- | -- |
| ![](../assets/prompt_syntax/mountain1-man.png) | ![](../assets/prompt_syntax/mountain2-man.png) | ![](../assets/prompt_syntax/mountain3-man.png) |
Or, alternatively, with more man:
| `mountain man+` | `mountain man++` | `mountain man+++` | `mountain man++++` |
| -- | -- | -- | -- |
| ![](../assets/prompt_syntax/mountain-man1.png) | ![](../assets/prompt_syntax/mountain-man2.png) | ![](../assets/prompt_syntax/mountain-man3.png) | ![](../assets/prompt_syntax/mountain-man4.png) |
### Blending between prompts
* `("a tall thin man picking apricots", "a tall thin man picking pears").blend(1,1)`
* The existing prompt blending using `:<weight>` will continue to be supported - `("a tall thin man picking apricots", "a tall thin man picking pears").blend(1,1)` is equivalent to `a tall thin man picking apricots:1 a tall thin man picking pears:1` in the old syntax.
* Attention weights can be nested inside blends.
* Non-normalized blends are supported by passing `no_normalize` as an additional argument to the blend weights, e.g. `("a tall thin man picking apricots", "a tall thin man picking pears").blend(1,-1,no_normalize)`. This is very fun for exploring local maxima in the feature space, but it is also easy to produce garbage output.
See the section below on "Prompt Blending" for more information about how this works.
### Cross-Attention Control ('prompt2prompt')
Sometimes an image you generate is almost right, and you just want to
change one detail without affecting the rest. You could use a photo editor and inpainting
to overpaint the area, but that's a pain. Here's where `prompt2prompt`
comes in handy.
Generate an image with a given prompt, record the seed of the image,
and then use the `prompt2prompt` syntax to substitute words in the
original prompt for words in a new prompt. This works for `img2img` as well.
* `a ("fluffy cat").swap("smiling dog") eating a hotdog`.
* quotes optional: `a (fluffy cat).swap(smiling dog) eating a hotdog`.
* for single word substitutions parentheses are also optional: `a cat.swap(dog) eating a hotdog`.
* Supports options `s_start`, `s_end`, `t_start`, `t_end` (each 0-1) loosely corresponding to bloc97's `prompt_edit_spatial_start/_end` and `prompt_edit_tokens_start/_end` but with the math swapped to make it easier to intuitively understand.
* Example usage:`a (cat).swap(dog, s_end=0.3) eating a hotdog` - the `s_end` argument means that the "spatial" (self-attention) edit will stop having any effect after 30% (=0.3) of the steps have been done, leaving Stable Diffusion with 70% of the steps where it is free to decide for itself how to reshape the cat-form into a dog form.
* The numbers represent a percentage through the step sequence where the edits should happen. 0 means the start (noisy starting image), 1 is the end (final image).
* For img2img, the step sequence does not start at 0 but instead at (1-strength) - so if strength is 0.7, s_start and s_end must both be greater than 0.3 (1-0.7) to have any effect.
* Convenience option `shape_freedom` (0-1) to specify how much "freedom" Stable Diffusion should have to change the shape of the subject being swapped.
* `a (cat).swap(dog, shape_freedom=0.5) eating a hotdog`.
The `prompt2prompt` code is based off [bloc97's
colab](https://github.com/bloc97/CrossAttentionControl).
Note that `prompt2prompt` is not currently working with the runwayML
inpainting model, and may never work due to the way this model is set
up. If you attempt to use `prompt2prompt` you will get the original
image back. However, since this model is so good at inpainting, a
good substitute is to use the `clipseg` text masking option:
```bash
invoke> a fluffy cat eating a hotdog
Outputs:
[1010] outputs/000025.2182095108.png: a fluffy cat eating a hotdog
invoke> a smiling dog eating a hotdog -I 000025.2182095108.png -tm cat
```
### Escaping parentheses () and speech marks ""
If the model you are using has parentheses () or speech marks "" as
part of its syntax, you will need to "escape" these using a backslash,
so that `(my_keyword)` becomes `\(my_keyword\)`. Otherwise, the prompt
parser will attempt to interpret the parentheses as part of the prompt
syntax and it will get confused.
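For example, assuming a model whose trigger keyword is literally `(my_keyword)`:

```bash
invoke> "a portrait in the style of \(my_keyword\)" -s50
```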
## **Prompt Blending**
You may blend together different sections of the prompt to explore the
@@ -101,44 +213,58 @@ illustrate, here are three images generated using various combinations
of blend weights. As usual, unless you fix the seed, the prompts will give you
different results each time you run them.
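As a sketch, the second example below can be requested at the CLI using the weighted-prompt syntax verbatim (the width and height flags are illustrative):

```bash
invoke> "blue sphere:0.25 red cube:0.75 hybrid" -W512 -H512
```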
---
<div align="center" markdown>
### "blue sphere, red cube, hybrid"
</div>
This example doesn't use blending at all and represents the default way
of mixing concepts.
<img src="../assets/prompt-blending/blue-sphere-red-cube-hybrid.png" width=256>
<div align="center" markdown>
![blue-sphere-red-cube-hybrid](../assets/prompt-blending/blue-sphere-red-cube-hybrid.png)
</div>
It's interesting to see how the AI expressed the concept of "cube" as
the four quadrants of the enclosing frame. If you look closely, there
is depth there, so the enclosing frame is actually a cube.
<div align="center" markdown>
### "blue sphere:0.25 red cube:0.75 hybrid"
<img src="../assets/prompt-blending/blue-sphere:0.25-red-cube:0.75-hybrid.png" width=256>
![blue-sphere-25-red-cube-75](../assets/prompt-blending/blue-sphere-0.25-red-cube-0.75-hybrid.png)
</div>
Now that's interesting. We get neither a blue sphere nor a red cube,
but a red sphere embedded in a brick wall, which represents a melding
of concepts within the AI's "latent space" of semantic
representations. Where is Ludwig Wittgenstein when you need him?
<div align="center" markdown>
### "blue sphere:0.75 red cube:0.25 hybrid"
<img src="../assets/prompt-blending/blue-sphere:0.75-red-cube:0.25-hybrid.png" width=256>
![blue-sphere-75-red-cube-25](../assets/prompt-blending/blue-sphere-0.75-red-cube-0.25-hybrid.png)
</div>
Definitely more blue-spherey. The cube is gone entirely, but it's
really cool abstract art.
<div align="center" markdown>
### "blue sphere:0.5 red cube:0.5 hybrid"
<img src="../assets/prompt-blending/blue-sphere:0.5-red-cube:0.5-hybrid.png" width=256>
![blue-sphere-5-red-cube-5-hybrid](../assets/prompt-blending/blue-sphere-0.5-red-cube-0.5-hybrid.png)
</div>
Whoa...! I see blue and red, but no spheres or cubes. Is the word
"hybrid" summoning up the concept of some sort of scifi creature?
Let's find out.
<div align="center" markdown>
### "blue sphere:0.5 red cube:0.5"
<img src="../assets/prompt-blending/blue-sphere:0.5-red-cube:0.5.png" width=256>
![blue-sphere-5-red-cube-5](../assets/prompt-blending/blue-sphere-0.5-red-cube-0.5.png)
</div>
Indeed, removing the word "hybrid" produces an image that is more like
what we'd expect.
@@ -146,4 +272,3 @@ what we'd expect.
In conclusion, prompt blending is great for exploring creative space,
but can be difficult to direct. A forthcoming release of InvokeAI will
feature more deterministic prompt weighting.

View File

@@ -1,8 +1,8 @@
---
title: TEXTUAL_INVERSION
title: Textual-Inversion
---
# :material-file-document-plus-outline: TEXTUAL_INVERSION
# :material-file-document: Textual Inversion
## **Personalizing Text-to-Image Generation**
@@ -23,13 +23,13 @@ As the default backend is not available on Windows, if you're using that
platform, set the environment variable `PL_TORCH_DISTRIBUTED_BACKEND` to `gloo`
```bash
python3 ./main.py --base ./configs/stable-diffusion/v1-finetune.yaml \
--actual_resume ./models/ldm/stable-diffusion-v1/model.ckpt \
-t \
-n my_cat \
--gpus 0 \
--data_root D:/textual-inversion/my_cat \
--init_word 'cat'
python3 ./main.py -t \
--base ./configs/stable-diffusion/v1-finetune.yaml \
--actual_resume ./models/ldm/stable-diffusion-v1/model.ckpt \
-n my_cat \
--gpus 0 \
--data_root D:/textual-inversion/my_cat \
--init_word 'cat'
```
During the training process, files will be created in
@@ -56,22 +56,23 @@ configs/stable_diffusion/v1-finetune.yaml (currently set to 4000000)
## **Run the Model**
Once the model is trained, specify the trained .pt or .bin file when starting
dream using
invoke using
```bash
python3 ./scripts/dream.py --embedding_path /path/to/embedding.pt
python3 ./scripts/invoke.py \
--embedding_path /path/to/embedding.pt
```
Then, to utilize your subject at the dream prompt
Then, to utilize your subject at the invoke prompt
```bash
dream> "a photo of *"
invoke> "a photo of *"
```
This also works with image2image
```bash
dream> "waterfall and rainbow in the style of *" --init_img=./init-images/crude_drawing.png --strength=0.5 -s100 -n4
invoke> "waterfall and rainbow in the style of *" --init_img=./init-images/crude_drawing.png --strength=0.5 -s100 -n4
```
For .pt files it's also possible to train multiple tokens (modify the
@@ -80,9 +81,9 @@ LDM checkpoints using:
```bash
python3 ./scripts/merge_embeddings.py \
--manager_ckpts /path/to/first/embedding.pt \
[</path/to/second/embedding.pt>,[...]] \
--output_path /path/to/output/embedding.pt
--manager_ckpts /path/to/first/embedding.pt \
[</path/to/second/embedding.pt>,[...]] \
--output_path /path/to/output/embedding.pt
```
Credit goes to rinongal and the repository

View File

@@ -25,16 +25,17 @@ variations to create the desired image of Xena, Warrior Princess.
## Step 1 -- Find a base image that you like
The prompt we will use throughout is
`lucy lawless as xena, warrior princess, character portrait, high resolution.`
The prompt we will use throughout is:
This will be indicated as `prompt` in the examples below.
`#!bash "lucy lawless as xena, warrior princess, character portrait, high resolution."`
This will be indicated as `#!bash "prompt"` in the examples below.
First we let SD create a series of images in the usual way, in this case
requesting six iterations:
```bash
dream> lucy lawless as xena, warrior princess, character portrait, high resolution -n6
invoke> lucy lawless as xena, warrior princess, character portrait, high resolution -n6
...
Outputs:
./outputs/Xena/000001.1579445059.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -S1579445059
@@ -45,7 +46,10 @@ Outputs:
./outputs/Xena/000001.3357757885.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -S3357757885
```
<figure markdown>
![var1](../assets/variation_walkthru/000001.3357757885.png)
<figcaption> Seed 3357757885 looks nice </figcaption>
</figure>
---
@@ -57,7 +61,7 @@ differing by a variation amount of 0.2. This number ranges from `0` to `1.0`,
with higher numbers being larger amounts of variation.
```bash
dream> "prompt" -n6 -S3357757885 -v0.2
invoke> "prompt" -n6 -S3357757885 -v0.2
...
Outputs:
./outputs/Xena/000002.784039624.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -V 784039624:0.2 -S3357757885
@@ -77,9 +81,15 @@ used to generate it.
This gives us a series of closely-related variations, including the two shown
here.
<figure markdown>
![var2](../assets/variation_walkthru/000002.3647897225.png)
<figcaption>subseed 3647897225</figcaption>
</figure>
<figure markdown>
![var3](../assets/variation_walkthru/000002.1614299449.png)
<figcaption>subseed 1614299449</figcaption>
</figure>
I like the expression on Xena's face in the first one (subseed 3647897225), and
the armor on her shoulder in the second one (subseed 1614299449). Can we combine
@@ -89,7 +99,7 @@ We combine the two variations using `-V` (`--with_variations`). Again, we must
provide the seed for the originally-chosen image in order for this to work.
```bash
dream> "prompt" -S3357757885 -V3647897225,0.1,1614299449,0.1
invoke> "prompt" -S3357757885 -V3647897225,0.1,1614299449,0.1
Outputs:
./outputs/Xena/000003.1614299449.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -V 3647897225:0.1,1614299449:0.1 -S3357757885
```
@@ -97,7 +107,10 @@ Outputs:
Here we are providing equal weights (0.1 and 0.1) for both the subseeds. The
resulting image is close, but not exactly what I wanted:
<figure markdown>
![var4](../assets/variation_walkthru/000003.1614299449.png)
<figcaption> subseed 1614299449 </figcaption>
</figure>
We could either try combining the images with different weights, or we can
generate more variations around the almost-but-not-quite image. We do the
@@ -105,7 +118,7 @@ latter, using both the `-V` (combining) and `-v` (variation strength) options.
Note that we use `-n6` to generate 6 variations:
```bash
dream> "prompt" -S3357757885 -V3647897225,0.1,1614299449,0.1 -v0.05 -n6
invoke> "prompt" -S3357757885 -V3647897225,0.1,1614299449,0.1 -v0.05 -n6
Outputs:
./outputs/Xena/000004.3279757577.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -V 3647897225:0.1,1614299449:0.1,3279757577:0.05 -S3357757885
./outputs/Xena/000004.2853129515.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -V 3647897225:0.1,1614299449:0.1,2853129515:0.05 -S3357757885
@@ -118,8 +131,23 @@ Outputs:
This produces six images, all slight variations on the combination of the chosen
two images. Here's the one I like best:
<figure markdown>
![var5](../assets/variation_walkthru/000004.3747154981.png)
<figcaption> subseed 3747154981 </figcaption>
</figure>
As you can see, this is a very powerful tool which, when combined with subprompt
weighting, gives you great control over the content and quality of your
generated images.
## Variations and Samplers
The sampler you choose has a strong effect on variation strength. Some
samplers, such as `k_euler_a`, are very "creative" and produce significant
amounts of image-to-image variation even when the seed is fixed and the
`-v` argument is very low. Others are more deterministic. Feel free to
experiment until you find the combination that you like.
Also be aware of the [Perlin Noise](OTHER.md#thresholding-and-perlin-noise-initialization-options)
feature, which provides another way of introducing variability into your
image generation requests.

View File

@@ -1,21 +1,369 @@
---
title: Barebones Web Server
title: InvokeAI Web Server
---
# :material-web: Barebones Web Server
# :material-web: InvokeAI Web Server
As of version 1.10, this distribution comes with a bare bones web server (see
screenshot). To use it, run the `dream.py` script by adding the `--web`
option.
As of version 2.0.0, this distribution comes with a full-featured web
server (see screenshot). To use it, run the `invoke.py` script by
adding the `--web` option:
```bash
(ldm) ~/stable-diffusion$ python3 scripts/dream.py --web
(invokeai) ~/InvokeAI$ python3 scripts/invoke.py --web
```
You can then connect to the server by pointing your web browser at
http://localhost:9090, or to the network name or IP address of the server.
http://localhost:9090. To reach the server from a different machine on
your LAN, you may launch the web server with the `--host` argument and
either the IP address of the host you are running it on, or the
wildcard `0.0.0.0`. For example:
Kudos to [Tesseract Cat](https://github.com/TesseractCat) for contributing this
code, and to [dagf2101](https://github.com/dagf2101) for refining it.
```bash
(invokeai) ~/InvokeAI$ python3 scripts/invoke.py --web --host 0.0.0.0
```
![Dream Web Server](../assets/dream_web_server.png)
# Quick guided walkthrough of the WebGUI's features
While most of the WebGUI's features are intuitive, here is a guided
walkthrough of its various components.
![Invoke Web Server - Major Components](../assets/invoke-web-server-1.png){:width="640px"}
The screenshot above shows the Text to Image tab of the WebGUI. There
are three main sections:
1. A **control panel** on the left, which contains various settings
for text to image generation. The most important part is the text
field (currently showing `strawberry sushi`) for entering the text
prompt, and the camera icon directly underneath it, which renders the
image. We'll call this the *Invoke* button from now on.
2. The **current image** section in the middle, which shows a large
format version of the image you are currently working on. A series of
buttons at the top ("image to image", "Use All", "Use Seed", etc) lets
you modify the image in various ways.
3. A **gallery** section on the right that contains a history of the
images you have generated. These images are read and written to the
directory specified at launch time in `--outdir`.
In addition to these three elements, there are a series of icons for
changing global settings, reporting bugs, and changing the theme on
the upper right.
There are also a series of icons to the left of the control panel (see
highlighted area in the screenshot below) which select among a series
of tabs for performing different types of operations.
<figure markdown>
![Invoke Web Server - Control Panel](../assets/invoke-web-server-2.png){:width="512px"}
</figure>
From top to bottom, these are:
1. Text to Image - generate images from text
2. Image to Image - from an uploaded starting image (drawing or photograph) generate a new one, modified by the text prompt
3. Inpainting (pending) - Interactively erase portions of a starting image and have the AI fill in the erased region from a text prompt.
4. Outpainting (pending) - Interactively add blank space to the borders of a starting image and fill in the background from a text prompt.
5. Postprocessing (pending) - Interactively postprocess generated images using a variety of filters.
The inpainting, outpainting and postprocessing tabs are currently in
development. However, limited versions of their features can already
be accessed through the Text to Image and Image to Image tabs.
## Walkthrough
The following walkthrough will exercise most (but not all) of the
WebGUI's feature set.
### Text to Image
1. Launch the WebGUI using `python scripts/invoke.py --web` and
connect to it with your browser by accessing
`http://localhost:9090`. If the browser and server are running on
different machines on your LAN, add the option `--host 0.0.0.0` to the
launch command line and connect to the machine hosting the web server
using its IP address or domain name.
2. If all goes well, the WebGUI should come up and you'll see a green
`connected` message on the upper right.
#### Basics
1. Generate an image by typing *strawberry sushi* into the large
prompt field on the upper left and then clicking on the Invoke button
(the one with the Camera icon). After a short wait, you'll see a large
image of sushi in the image panel, and a new thumbnail in the gallery
on the right.
If you need more room on the screen, you can turn the gallery off
by clicking on the **x** to the right of "Your Invocations". You can
turn it back on later by clicking the image icon that appears in the
gallery's place.
The images are written into the directory indicated by the `--outdir`
option provided at script launch time. By default, this is
`outputs/img-samples` under the InvokeAI directory.
2. Generate a bunch of strawberry sushi images by increasing the
number of requested images by adjusting the Images counter just below
the Camera button. As each is generated, it will be added to the
gallery. You can switch the active image by clicking on the gallery
thumbnails.
3. Try playing with different settings, including image width and
height, the Sampler, the Steps and the CFG scale.
Image *Width* and *Height* do what you'd expect. However, be aware that
larger images consume more VRAM and take longer to generate.
The *Sampler* controls the method the AI uses to converge on an image. Some
samplers are more "creative" than others and will produce a wider
range of variations (see next section). Some samplers run faster than
others.
*Steps* controls how many noising/denoising/sampling steps the AI will
take. The higher this value, the more refined the image will be, but
the longer the image will take to generate. A typical strategy is to
generate images with a low number of steps in order to select one to
work on further, and then regenerate it using a higher number of
steps.
The *CFG Scale* controls how hard the AI tries to match the generated
image to the input prompt. You can go as high or low as you like, but
generally values greater than 20 won't improve things much, and values
lower than 5 will produce unexpected images. There are complex
interactions between *Steps*, *CFG Scale* and the *Sampler*, so
experiment to find out what works for you.
4. To regenerate a previously-generated image, select the image you
want and click *Use All*. This loads the text prompt and other
original settings into the control panel. If you then press *Invoke*
it will regenerate the image exactly. You can also selectively modify
the prompt or other settings to tweak the image.
Alternatively, you may click on *Use Seed* to load just the image's
seed, and leave other settings unchanged.
5. To regenerate a Stable Diffusion image that was generated by
another SD package, you need to know its text prompt and its
*Seed*. Copy-paste the prompt into the prompt box, unset the
*Randomize Seed* control in the control panel, and copy-paste the
desired *Seed* into its text field. When you Invoke, you will get
something similar to the original image. It will not be exact unless
you also set the correct values for the original sampler, CFG,
steps and dimensions, but it will (usually) be close.
#### Variations on a theme
1. Let's try generating some variations. Select your favorite sushi
image from the gallery to load it. Then select "Use All" from the list
of buttons above. This will load up all the settings used to generate
this image, including its unique seed.
Go down to the Variations section of the Control Panel and set the
button to On. Set Variation Amount to 0.2 to generate modest
variations of the image, and set the Image counter to
`4`. Press the `invoke` button. This will generate a series of related
images. To obtain smaller variations, just lower the Variation
Amount. You may also experiment with changing the Sampler. Some
samplers generate more variability than others. *k_euler_a* is
particularly creative, while *ddim* is pretty conservative.
2. For even more variations, experiment with increasing the setting
for *Perlin*. This adds a bit of noise to the image generation
process. Note that values of Perlin noise greater than 0.15 produce
poor images for several of the samplers.
#### Facial reconstruction and upscaling
Stable Diffusion frequently produces mangled faces, particularly when
there are multiple figures in the same scene. Stable Diffusion has
particular issues with generating realistic eyes. InvokeAI provides
the ability to reconstruct faces using either the GFPGAN or CodeFormer
libraries. For more information see [POSTPROCESS](POSTPROCESS.md).
1. Invoke a prompt that generates a mangled face. A prompt that often
gives this is "portrait of a lawyer, 3/4 shot" (this is not intended
as a slur against lawyers!). Once you have an image that needs some
touching up, load it into the Image panel, and press the button with
the face icon (highlighted in the first screenshot below). A dialog
box will appear. Leave *Strength* at 0.8 and press *Restore Faces*. If
all goes well, the eyes and other aspects of the face will be improved
(see the second screenshot).
![Invoke Web Server - Original Image](../assets/invoke-web-server-3.png)
![Invoke Web Server - Retouched Image](../assets/invoke-web-server-4.png)
The facial reconstruction *Strength* field adjusts how aggressively
the face library will try to alter the face. It can be as high as 1.0,
but be aware that this often softens the face in an airbrushed style,
losing some details. The default 0.8 is usually sufficient.
2. "Upscaling" is the process of increasing the size of an image while
retaining the sharpness. InvokeAI uses an external library called
"ESRGAN" to do this. To invoke upscaling, simply select an image and
press the *HD* button above it. You can select between 2X and 4X
upscaling, and adjust the upscaling strength, which has much the same
meaning as in facial reconstruction. Try running this on one of your
previously-generated images.
3. Finally, you can run facial reconstruction and/or upscaling
automatically after each Invocation. Go to the Advanced Options
section of the Control Panel and turn on *Restore Face* and/or
*Upscale*.
### Image to Image
InvokeAI lets you take an existing image and use it as the basis for a
new creation. You can use any sort of image, including a photograph, a
scanned sketch, or a digital drawing, as long as it is in PNG or JPEG
format.
For this tutorial, we'll use files named
[Lincoln-and-Parrot-512.png](../assets/Lincoln-and-Parrot-512.png),
and
[Lincoln-and-Parrot-512-transparent.png](../assets/Lincoln-and-Parrot-512-transparent.png).
Download these images to your local machine now to continue with the walkthrough.
1. Click on the *Image to Image* tab icon, which is the second icon
from the top on the left-hand side of the screen:
<figure markdown>
![Invoke Web Server - Image to Image Icon](../assets/invoke-web-server-5.png)
</figure>
This will bring you to a screen similar to the one shown here:
<figure markdown>
![Invoke Web Server - Image to Image Tab](../assets/invoke-web-server-6.png){:width="640px"}
</figure>
2. Drag-and-drop the Lincoln-and-Parrot image into the Image panel, or
click the blank area to get an upload dialog. The image will load into
an area marked *Initial Image*. (The WebGUI will also load the most
recently-generated image from the gallery into a section on the left,
but this image will be replaced in the next step.)
3. Go to the prompt box and type *old sea captain with raven on
shoulder* and press Invoke. A derived image will appear to the right
of the original one:
![Invoke Web Server - Image to Image example](../assets/invoke-web-server-7.png){:width="640px"}
4. Experiment with the different settings. The most influential one
in Image to Image is *Image to Image Strength* located about midway
down the control panel. By default it is set to 0.75, but can range
from 0.0 to 0.99. The higher the value, the more of the original image
the AI will replace. A value of 0 will leave the initial image
completely unchanged, while 0.99 will replace it completely. However,
the Sampler and CFG Scale also influence the final result. You can
also generate variations in the same way as described in Text to
Image.
5. What if we only want to change certain part(s) of the image and
leave the rest intact? This is called Inpainting, and a future version
of the InvokeAI web server will provide an interactive painting canvas
on which you can directly draw the areas you wish to Inpaint into. For
now, you can achieve this effect by using an external photo editor tool
to make one or more regions of the image transparent as described in
[INPAINTING.md](INPAINTING.md) and uploading that.
The file
[Lincoln-and-Parrot-512-transparent.png](../assets/Lincoln-and-Parrot-512-transparent.png)
is a version of the earlier image in which the area around the parrot
has been replaced with transparency. Click on the "x" in the upper
right of the Initial Image and upload the transparent version. Using
the same prompt "old sea captain with raven on shoulder", try Invoking
an image. This time, only the parrot will be replaced, leaving the
rest of the original image intact:
<figure markdown>
![Invoke Web Server - Inpainting](../assets/invoke-web-server-8.png){:width="640px"}
</figure>
6. Would you like to modify a previously-generated image using the
Image to Image facility? Easy! While in the Image to Image panel,
hover over any of the gallery images to see a little menu of icons pop
up. Click the picture icon to instantly send the selected image to
Image to Image as the initial image.
You can do the same from the Text to Image tab by clicking on the
picture icon above the central image panel. The screenshot below
shows where the "use as initial image" icons are located.
![Invoke Web Server - Use as Image Links](../assets/invoke-web-server-9.png){:width="640px"}
## Parting remarks
This concludes the walkthrough, but there are several more features that you
can explore. Please check out the [Command Line Interface](CLI.md)
documentation for further explanation of the advanced features that
were not covered here.
The WebGUI is under rapid development. Check back regularly for
updates!
## Reference
### Additional Options
parameter | effect
-- | --
`--web_develop` | Starts the web server in development mode.
`--web_verbose` | Enables verbose logging
`--cors [CORS ...]` | Additional allowed origins, comma-separated
`--host HOST` | Web server: Host or IP to listen on. Set to 0.0.0.0 to accept traffic from other devices on your network.
`--port PORT` | Web server: Port to listen on
`--gui` | Start InvokeAI GUI - This is the "desktop mode" version of the web app. It uses Flask to create a desktop app experience of the webserver.
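Combining several of these, a LAN-visible launch might look like the following sketch (the host, port, and origin values are illustrative):

```bash
python3 scripts/invoke.py --web --host 0.0.0.0 --port 9090 --cors http://192.168.1.10:9090
```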
### Web Specific Features
The web interface offers an incredibly easy-to-use way of interacting with the InvokeAI toolkit.
For detailed guidance on individual features, see the Feature-specific help documents available in this directory.
Note that the latest functionality available in the CLI may not always be available in the Web interface.
#### Dark Mode & Light Mode
The InvokeAI interface is available in a nano-carbon black & purple Dark Mode, and a "burn your eyes out Nosferatu" Light Mode. These can be toggled by clicking the Sun/Moon icons at the top right of the interface.
![InvokeAI Web Server - Dark Mode](../assets/invoke_web_dark.png)
![InvokeAI Web Server - Light Mode](../assets/invoke_web_light.png)
#### Invocation Toolbar
The left side of the InvokeAI interface is available for customizing the prompt and the settings used for invoking your new image. Typing your prompt into the open text field and clicking the Invoke button will produce the image based on the settings configured in the toolbar.
See below for additional documentation related to each feature:
- [Core Prompt Settings](./CLI.md)
- [Variations](./VARIATIONS.md)
- [Upscaling](./POSTPROCESS.md#upscaling)
- [Image to Image](./IMG2IMG.md)
- [Inpainting](./INPAINTING.md)
- [Other](./OTHER.md)
#### Invocation Gallery
The currently selected `--outdir` (or the default outputs folder) will display all previously generated files on load. As new invocations are generated, these will be dynamically added to the gallery, and can be previewed by selecting them. Each image also has a simple set of actions (e.g., Delete, Use Seed, Use All Parameters, etc.) that can be accessed by hovering over the image.
#### Image Workspace
When an image from the Invocation Gallery is selected, or is generated, the image will be displayed within the center of the interface. A quickbar of common image interactions is displayed along the top of the image, including:
- Use image in the `Image to Image` workflow
- Initialize Face Restoration on the selected file
- Initialize Upscaling on the selected file
- View File metadata and details
- Delete the file
## Acknowledgements
A huge shout-out to the core team working to make this vision a
reality, including
[psychedelicious](https://github.com/psychedelicious),
[Kyle0654](https://github.com/Kyle0654) and
[blessedcoolant](https://github.com/blessedcoolant). [hipsterusername](https://github.com/hipsterusername)
was the team's unofficial cheerleader and added tooltips/docs.

View File

@@ -0,0 +1,58 @@
# **WebUI Hotkey List**
## General
| Hotkey | Action |
| ------------ | ---------------------- |
| a | Set All Parameters |
| s | Set Seed |
| u | Upscale |
| r | Restoration |
| i | Show Metadata |
| Del | Delete Image |
| alt + a | Focus prompt input |
| shift + i | Send To Image to Image |
| ctrl + enter | Start processing |
| shift + x | Cancel processing |
| shift + d | Toggle Dark Mode |
| ` | Toggle console |
## Tabs
| Hotkey | Action |
| ------- | ------------------------- |
| 1 | Go to Text To Image Tab |
| 2 | Go to Image to Image Tab |
| 3 | Go to Inpainting Tab |
| 4 | Go to Outpainting Tab |
| 5 | Go to Nodes Tab |
| 6 | Go to Post Processing Tab |
## Gallery
| Hotkey | Action |
| ------------ | ------------------------------- |
| g | Toggle Gallery |
| left arrow | Go to previous image in gallery |
| right arrow | Go to next image in gallery |
| shift + p | Pin gallery |
| shift + up | Increase gallery image size |
| shift + down | Decrease gallery image size |
| shift + r | Reset image gallery size |
## Inpainting
| Hotkey | Action |
| -------------------------- | --------------------- |
| [ | Decrease brush size |
| ] | Increase brush size |
| alt + [ | Decrease mask opacity |
| alt + ] | Increase mask opacity |
| b | Select brush |
| e | Select eraser |
| ctrl + z | Undo brush stroke |
| ctrl + shift + z, ctrl + y | Redo brush stroke |
| h | Hide mask |
| shift + m | Invert mask |
| shift + c | Clear mask |
| shift + j | Expand canvas |

View File

@@ -1,8 +1,8 @@
---
title: SAMPLER CONVERGENCE
title: Sampler Convergence
---
## *Sampler Convergence*
# :material-palette-advanced: *Sampler Convergence*
As features keep increasing, making the right choices for your needs can become increasingly difficult. What sampler to use? And for how many steps? Do you change the CFG value? Do you use prompt weighting? Do you allow variations?
@@ -14,12 +14,14 @@ In this document, we will talk about sampler convergence.
Looking for a short version? Here's a TL;DR in 3 tables.
| Remember |
|:---|
| Results converge as steps (`-s`) are increased (except for `K_DPM_2_A` and `K_EULER_A`). Often at ≥ `-s100`, but may require ≥ `-s700`). |
| Producing a batch of candidate images at low (`-s8` to `-s30`) step counts can save you hours of computation. |
| `K_HEUN` and `K_DPM_2` converge in less steps (but are slower). |
| `K_DPM_2_A` and `K_EULER_A` incorporate a lot of creativity/variability. |
!!! note "Remember"
- Results converge as steps (`-s`) are increased (except for `K_DPM_2_A` and `K_EULER_A`). Often at ≥ `-s100`, but may require ≥ `-s700`.
- Producing a batch of candidate images at low (`-s8` to `-s30`) step counts can save you hours of computation.
- `K_HEUN` and `K_DPM_2` converge in fewer steps (but are slower).
- `K_DPM_2_A` and `K_EULER_A` incorporate a lot of creativity/variability.
<div align="center" markdown>
| Sampler | (3 sample avg) it/s (M1 Max 64GB, 512x512) |
|---|---|
@@ -32,10 +34,13 @@ Looking for a short version? Here's a TL;DR in 3 tables.
| `K_DPM_2_A` | 0.95 (slower) |
| `K_EULER_A` | 1.86 |
| Suggestions |
|:---|
| For most use cases, `K_LMS`, `K_HEUN` and `K_DPM_2` are the best choices (the latter 2 run 0.5x as quick, but tend to converge 2x as quick as `K_LMS`). At very low steps (≤ `-s8`), `K_HEUN` and `K_DPM_2` are not recommended. Use `K_LMS` instead.|
| For variability, use `K_EULER_A` (runs 2x as quick as `K_DPM_2_A`). |
</div>
!!! tip "Suggestions"
For most use cases, `K_LMS`, `K_HEUN` and `K_DPM_2` are the best choices (the latter two run at half the speed, but tend to converge twice as quickly as `K_LMS`). At very low steps (≤ `-s8`), `K_HEUN` and `K_DPM_2` are not recommended. Use `K_LMS` instead.
For variability, use `K_EULER_A` (it runs twice as quickly as `K_DPM_2_A`).
---
@@ -60,15 +65,15 @@ This realization is very useful because it means you don't need to create a batc
You can produce the same 100 images at `-s10` to `-s30` using a K-sampler (since they converge faster), get a rough idea of the final result, choose your 2 or 3 favorite ones, and then run `-s100` on those images to polish some details.
The latter technique is 3-8x as quick.
Example:
!!! example
At 60s per 100 steps.
At 60s per 100 steps.
(Option A) 60s * 100 images = 6000s (100 images at `-s100`, manually picking 3 favorites)
A) 60s * 100 images = 6000s (100 images at `-s100`, manually picking 3 favorites)
(Option B) 6s * 100 images + 60s * 3 images = 780s (100 images at `-s10`, manually picking 3 favorites, and running those 3 at `-s100` to polish details)
B) 6s \* 100 images + 60s \* 3 images = 780s (100 images at `-s10`, manually picking 3 favorites, and running those 3 at `-s100` to polish details)
The result is 1 hour and 40 minutes (Option A) vs 13 minutes (Option B).
The result is __1 hour and 40 minutes__ for Option A, vs __13 minutes__ for Option B.
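In CLI terms, the two-stage workflow might look like this sketch (the prompt and seed are placeholders):

```bash
invoke> "a lighthouse at dusk" -n100 -s10 -A k_lms
invoke> "a lighthouse at dusk" -s100 -A k_lms -S1579445059
```

The second line reruns one favorite seed at `-s100` to polish its details.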
### *Topic convergence*
@@ -110,9 +115,12 @@ Note also the point of convergence may not be the most desirable state (e.g. I p
Once we understand the concept of sampler convergence, we must look into the performance of each sampler in terms of steps (iterations) per second, as not all samplers run at the same speed.
On my M1 Max with 64GB of RAM, for a 512x512 image:
| Sampler | (3 sample average) it/s |
|---|---|
<div align="center" markdown>
On my M1 Max with 64GB of RAM, for a 512x512 image:
| Sampler | (3 sample average) it/s |
| :--- | :--- |
| `DDIM` | 1.89 |
| `PLMS` | 1.86 |
| `K_EULER` | 1.86 |
@@ -122,11 +130,13 @@ On my M1 Max with 64GB of RAM, for a 512x512 image:
| `K_DPM_2_A` | 0.95 (slower) |
| `K_EULER_A` | 1.86 |
</div>
Combining our results with the steps per second of each sampler, three choices come out on top: `K_LMS`, `K_HEUN` and `K_DPM_2` (where the latter two run 0.5x as quick but tend to converge 2x as quick as `K_LMS`). For creativity and a lot of variation between iterations, `K_EULER_A` can be a good choice (which runs 2x as quick as `K_DPM_2_A`).
Additionally, image generation at very low steps (≤ `-s8`) is not recommended for `K_HEUN` and `K_DPM_2`. Use `K_LMS` instead.
<img width="397" alt="192044949-67d5d441-a0d5-4d5a-be30-5dda4fc28a00-min" src="https://user-images.githubusercontent.com/50542132/192046823-2714cb29-bbf3-4eb1-9213-e27a0963905c.png">
![K-compare](https://user-images.githubusercontent.com/50542132/192046823-2714cb29-bbf3-4eb1-9213-e27a0963905c.png){ width=600}
### *Three key points*

View File

@@ -1,5 +1,7 @@
---
title: F.A.Q.
hide:
- toc
---
# :material-frequently-asked-questions: F.A.Q.
@@ -51,7 +53,7 @@ rm ${PIP_LOG}
### **QUESTION**
`dream.py` crashes with the complaint that it can't find `ldm.simplet2i.py`. Or it complains that
`invoke.py` crashes with the complaint that it can't find `ldm.simplet2i.py`. Or it complains that
a function is being passed incorrect parameters.
### **SOLUTION**
@@ -63,7 +65,7 @@ Reinstall the stable diffusion modules. Enter the `stable-diffusion` directory a
### **QUESTION**
`dream.py` dies, complaining of various missing modules, none of which starts with `ldm``.
`invoke.py` dies, complaining of various missing modules, none of which starts with `ldm`.
### **SOLUTION**
@@ -87,9 +89,7 @@ Usually this will be sufficient, but if you start to see errors about
missing or incorrect modules, use the command `pip install -e .`
and/or `conda env update` (These commands won't break anything.)
`pip install -e .` and/or
`conda env update -f environment.yaml`
`pip install -e .` and/or `conda env update -f environment.yaml`
(These commands won't break anything.)
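Concretely, from the top-level directory of the repository:

```bash
pip install -e .
conda env update -f environment.yaml
```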

View File

@@ -1,6 +1,5 @@
---
title: Home
template: main.html
---
<!--
@@ -13,7 +12,7 @@ template: main.html
-->
<div align="center" markdown>
# :material-script-text-outline: Stable Diffusion Dream Script
# ^^**InvokeAI: A Stable Diffusion Toolkit**^^ :tools: <br> <small>Formerly known as lstein/stable-diffusion</small>
![project logo](assets/logo.png)
@@ -25,37 +24,41 @@ template: main.html
[![github open issues badge]][github open issues link] [![github open prs badge]][github open prs link]
[CI checks on dev badge]: https://flat.badgen.net/github/checks/lstein/stable-diffusion/development?label=CI%20status%20on%20dev&cache=900&icon=github
[CI checks on dev link]: https://github.com/lstein/stable-diffusion/actions?query=branch%3Adevelopment
[CI checks on main badge]: https://flat.badgen.net/github/checks/lstein/stable-diffusion/main?label=CI%20status%20on%20main&cache=900&icon=github
[CI checks on main link]: https://github.com/lstein/stable-diffusion/actions/workflows/test-dream-conda.yml
[discord badge]: https://flat.badgen.net/discord/members/htRgbc7e?icon=discord
[discord link]: https://discord.com/invite/htRgbc7e
[github forks badge]: https://flat.badgen.net/github/forks/lstein/stable-diffusion?icon=github
[CI checks on dev badge]: https://flat.badgen.net/github/checks/invoke-ai/InvokeAI/development?label=CI%20status%20on%20dev&cache=900&icon=github
[CI checks on dev link]: https://github.com/invoke-ai/InvokeAI/actions?query=branch%3Adevelopment
[CI checks on main badge]: https://flat.badgen.net/github/checks/invoke-ai/InvokeAI/main?label=CI%20status%20on%20main&cache=900&icon=github
[CI checks on main link]: https://github.com/invoke-ai/InvokeAI/actions/workflows/test-invoke-conda.yml
[discord badge]: https://flat.badgen.net/discord/members/ZmtBAhwWhy?icon=discord
[discord link]: https://discord.gg/ZmtBAhwWhy
[github forks badge]: https://flat.badgen.net/github/forks/invoke-ai/InvokeAI?icon=github
[github forks link]: https://useful-forks.github.io/?repo=lstein%2Fstable-diffusion
[github open issues badge]: https://flat.badgen.net/github/open-issues/lstein/stable-diffusion?icon=github
[github open issues link]: https://github.com/lstein/stable-diffusion/issues?q=is%3Aissue+is%3Aopen
[github open prs badge]: https://flat.badgen.net/github/open-prs/lstein/stable-diffusion?icon=github
[github open prs link]: https://github.com/lstein/stable-diffusion/pulls?q=is%3Apr+is%3Aopen
[github stars badge]: https://flat.badgen.net/github/stars/lstein/stable-diffusion?icon=github
[github stars link]: https://github.com/lstein/stable-diffusion/stargazers
[latest commit to dev badge]: https://flat.badgen.net/github/last-commit/lstein/stable-diffusion/development?icon=github&color=yellow&label=last%20dev%20commit&cache=900
[latest commit to dev link]: https://github.com/lstein/stable-diffusion/commits/development
[latest release badge]: https://flat.badgen.net/github/release/lstein/stable-diffusion/development?icon=github
[latest release link]: https://github.com/lstein/stable-diffusion/releases
[github open issues badge]: https://flat.badgen.net/github/open-issues/invoke-ai/InvokeAI?icon=github
[github open issues link]: https://github.com/invoke-ai/InvokeAI/issues?q=is%3Aissue+is%3Aopen
[github open prs badge]: https://flat.badgen.net/github/open-prs/invoke-ai/InvokeAI?icon=github
[github open prs link]: https://github.com/invoke-ai/InvokeAI/pulls?q=is%3Apr+is%3Aopen
[github stars badge]: https://flat.badgen.net/github/stars/invoke-ai/InvokeAI?icon=github
[github stars link]: https://github.com/invoke-ai/InvokeAI/stargazers
[latest commit to dev badge]: https://flat.badgen.net/github/last-commit/invoke-ai/InvokeAI/development?icon=github&color=yellow&label=last%20dev%20commit&cache=900
[latest commit to dev link]: https://github.com/invoke-ai/InvokeAI/commits/development
[latest release badge]: https://flat.badgen.net/github/release/invoke-ai/InvokeAI/development?icon=github
[latest release link]: https://github.com/invoke-ai/InvokeAI/releases
</div>
This is a fork of [CompVis/stable-diffusion](https://github.com/CompVis/stable-diffusion), the open
source text-to-image generator. It provides a streamlined process with various new features and
options to aid the image generation process. It runs on Windows, Mac and Linux machines, and runs on
GPU cards with as little as 4 GB or RAM.
<a href="https://github.com/invoke-ai/InvokeAI">InvokeAI</a> is an
implementation of Stable Diffusion, the open source text-to-image and
image-to-image generator. It provides a streamlined process with
various new features and options to aid the image generation
process. It runs on Windows, Mac and Linux machines, and runs on GPU
cards with as little as 4 GB of RAM.
**Quick links**: [<a href="https://discord.gg/NwVCmKwY">Discord Server</a>] [<a href="https://github.com/invoke-ai/InvokeAI/">Code and Downloads</a>] [<a href="https://github.com/invoke-ai/InvokeAI/issues">Bug Reports</a>] [<a href="https://github.com/invoke-ai/InvokeAI/discussions">Discussion, Ideas & Q&A</a>]
<div align="center"><img src="assets/invoke-web-server-1.png" width=640></div>
!!! note
This fork is rapidly evolving. Please use the
[Issues](https://github.com/lstein/stable-diffusion/issues) tab to report bugs and make feature
requests. Be sure to use the provided templates. They will help aid diagnose issues faster.
This fork is rapidly evolving. Please use the [Issues tab](https://github.com/invoke-ai/InvokeAI/issues) to report bugs and make feature requests. Be sure to use the provided templates. They will help diagnose issues faster.
## :octicons-package-dependencies-24: Installation
@@ -81,25 +84,46 @@ You wil need one of the following:
### :fontawesome-regular-hard-drive: Disk
- At least 6 GB of free disk space for the machine learning model, Python, and all its dependencies.
- At least 12 GB of free disk space for the machine learning model, Python, and all its dependencies.
!!! note
If you are have a Nvidia 10xx series card (e.g. the 1080ti), please run the dream script in
If you have an Nvidia 10xx series card (e.g. the 1080ti), please run the invoke script in
full-precision mode as shown below.
Similarly, specify full-precision mode on Apple M1 hardware.
To run in full-precision mode, start `dream.py` with the `--full_precision` flag:
To run in full-precision mode, start `invoke.py` with the `--full_precision` flag:
```bash
(ldm) ~/stable-diffusion$ python scripts/dream.py --full_precision
(invokeai) ~/InvokeAI$ python scripts/invoke.py --full_precision
```
## :octicons-log-16: Latest Changes
### vNEXT <small>(TODO 2022)</small>
- Deprecated `--full_precision` / `-F`. Simply omit it and `dream.py` will auto
### v2.0.0 <small>(9 October 2022)</small>
- `dream.py` script renamed `invoke.py`. A `dream.py` script wrapper remains
for backward compatibility.
- Completely new WebGUI - launch with `python3 scripts/invoke.py --web`
- Support for <a href="https://github.com/invoke-ai/InvokeAI/blob/main/docs/features/INPAINTING.md">inpainting</a> and <a href="https://github.com/invoke-ai/InvokeAI/blob/main/docs/features/OUTPAINTING.md">outpainting</a>
- img2img runs on all k* samplers
- Support for <a href="https://github.com/invoke-ai/InvokeAI/blob/main/docs/features/PROMPTS.md#negative-and-unconditioned-prompts">negative prompts</a>
- Support for CodeFormer face reconstruction
- Support for Textual Inversion on Macintoshes
- Support in both WebGUI and CLI for <a href="https://github.com/invoke-ai/InvokeAI/blob/main/docs/features/POSTPROCESS.md">post-processing of previously-generated images</a>
using facial reconstruction, ESRGAN upscaling, outcropping (similar to DALL-E infinite canvas),
and "embiggen" upscaling. See the `!fix` command.
- New `--hires` option on `invoke>` line allows <a href="https://github.com/invoke-ai/InvokeAI/blob/main/docs/features/CLI.md#this-is-an-example-of-txt2img">larger images to be created without duplicating elements</a>, at the cost of some performance.
- New `--perlin` and `--threshold` options allow you to add and control variation
during image generation (see <a href="https://github.com/invoke-ai/InvokeAI/blob/main/docs/features/OTHER.md#thresholding-and-perlin-noise-initialization-options">Thresholding and Perlin Noise Initialization</a>
- Extensive metadata now written into PNG files, allowing reliable regeneration of images
and tweaking of previous settings.
- Command-line completion in `invoke.py` now works on Windows, Linux and Mac platforms.
- Improved <a href="https://github.com/invoke-ai/InvokeAI/blob/main/docs/features/CLI.md">command-line completion behavior</a>.
New commands added:
* List command-line history with `!history`
* Search command-line history with `!search`
* Clear history with `!clear`
- Deprecated `--full_precision` / `-F`. Simply omit it and `invoke.py` will auto
configure. To switch away from auto use the new flag like `--precision=float32`.
### v1.14 <small>(11 September 2022)</small>
@@ -124,7 +148,7 @@ You will need one of the following:
[Kevin Gibbons](https://github.com/bakkot)
- A new configuration file scheme that allows new models (including upcoming stable-diffusion-v1.5)
to be added without altering the code. ([David Wager](https://github.com/maddavid12))
- Can specify --grid on dream.py command line as the default.
- Can specify --grid on invoke.py command line as the default.
- Miscellaneous internal bug and stability fixes.
- Works on M1 Apple hardware.
- Multiple bug fixes.


@@ -0,0 +1,267 @@
---
title: Installing Models
---
# :octicons-paintbrush-16: Installing Models
## Model Weight Files
The model weight files ('*.ckpt') are the Stable Diffusion "secret
sauce". They are the product of training the AI on millions of
captioned images gathered from multiple sources.
Originally there was only a single Stable Diffusion weights file,
which many people named `model.ckpt`. Now there are dozens or more
that have been "fine tuned" to provide particular styles, genres, or
other features. InvokeAI allows you to install and run multiple model
weight files and switch between them quickly in the command-line and
web interfaces.
This manual will guide you through installing and configuring model
weight files.
## Base Models
InvokeAI comes with support for a good initial set of models listed in
the model configuration file `configs/models.yaml`. They are:
| Model | Weight File | Description | DOWNLOAD FROM |
| ---------------------- | ----------------------------- |--------------------------------- | ----------------|
| stable-diffusion-1.5 | v1-5-pruned-emaonly.ckpt | Most recent version of base Stable Diffusion model| https://huggingface.co/runwayml/stable-diffusion-v1-5 |
| stable-diffusion-1.4 | sd-v1-4.ckpt | Previous version of base Stable Diffusion model | https://huggingface.co/CompVis/stable-diffusion-v-1-4-original |
| inpainting-1.5 | sd-v1-5-inpainting.ckpt | Stable Diffusion 1.5 model specialized for inpainting | https://huggingface.co/runwayml/stable-diffusion-inpainting |
| waifu-diffusion-1.3 | model-epoch09-float32.ckpt | Stable Diffusion 1.4 trained to produce anime images | https://huggingface.co/hakurei/waifu-diffusion-v1-3 |
| <all models> | vae-ft-mse-840000-ema-pruned.ckpt | A fine-tuned VAE add-on that improves face generation | https://huggingface.co/stabilityai/sd-vae-ft-mse-original/ |
Note that these files are covered by an "Ethical AI" license which
forbids certain uses. You will need to create an account on the
Hugging Face website and accept the license terms before you can
access the files.
The predefined configuration file for InvokeAI (located at
`configs/models.yaml`) provides entries for each of these weights
files. `stable-diffusion-1.5` is the default model used, and we
strongly recommend that you install this weights file if nothing else.
## Community-Contributed Models
There are too many to list here and more are being contributed every
day. Hugging Face maintains a [fast-growing
repository](https://huggingface.co/sd-concepts-library) of fine-tune
(".bin") models that can be imported into InvokeAI by passing the
`--embedding_path` option to the `invoke.py` command.
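For instance, launching with a downloaded concepts-library embedding looks like this (a minimal sketch; the `.bin` path is a hypothetical example):
```bash
# Load a downloaded textual-inversion embedding at startup.
# Replace the path with wherever you saved the .bin file.
python scripts/invoke.py --embedding_path models/embeddings/my-concept/learned_embeds.bin
```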
[This page](https://rentry.org/sdmodels) hosts a large list of
official and unofficial Stable Diffusion models and where they can be
obtained.
## Installation
There are three ways to install weights files:
1. During InvokeAI installation, the `preload_models.py` script can
download them for you.
2. You can use the command-line interface (CLI) to import, configure
and modify new models files.
3. You can download the files manually and add the appropriate entries
to `models.yaml`.
### Installation via `preload_models.py`
This is the most automatic way. Run `scripts/preload_models.py` from
the console. It will ask you to select which models to download and
lead you through the steps of setting up a Hugging Face account if you
haven't done so already.
To start, from within the InvokeAI directory run the command `python
scripts/preload_models.py` (Linux/MacOS) or `python
scripts\preload_models.py` (Windows):
```
Loading Python libraries...
** INTRODUCTION **
Welcome to InvokeAI. This script will help download the Stable Diffusion weight files
and other large models that are needed for text to image generation. At any point you may interrupt
this program and resume later.
** WEIGHT SELECTION **
Would you like to download the Stable Diffusion model weights now? [y]
Choose the weight file(s) you wish to download. Before downloading you
will be given the option to view and change your selections.
[1] stable-diffusion-1.5:
The newest Stable Diffusion version 1.5 weight file (4.27 GB) (recommended)
Download? [y]
[2] inpainting-1.5:
RunwayML SD 1.5 model optimized for inpainting (4.27 GB) (recommended)
Download? [y]
[3] stable-diffusion-1.4:
The original Stable Diffusion version 1.4 weight file (4.27 GB)
Download? [n] n
[4] waifu-diffusion-1.3:
Stable Diffusion 1.4 fine tuned on anime-styled images (4.27 GB)
Download? [n] y
[5] ft-mse-improved-autoencoder-840000:
StabilityAI improved autoencoder fine-tuned for human faces (335 MB) (recommended)
Download? [y] y
The following weight files will be downloaded:
[1] stable-diffusion-1.5*
[2] inpainting-1.5
[4] waifu-diffusion-1.3
[5] ft-mse-improved-autoencoder-840000
*default
Ok to download? [y]
** LICENSE AGREEMENT FOR WEIGHT FILES **
1. To download the Stable Diffusion weight files you need to read and accept the
CreativeML Responsible AI license. If you have not already done so, please
create an account using the "Sign Up" button:
https://huggingface.co
You will need to verify your email address as part of the HuggingFace
registration process.
2. After creating the account, login under your account and accept
the license terms located here:
https://huggingface.co/CompVis/stable-diffusion-v-1-4-original
Press <enter> when you are ready to continue:
...
```
When the script is complete, you will find the downloaded weights
files in `models/ldm/stable-diffusion-v1` and a matching configuration
file in `configs/models.yaml`.
You can run the script again to add any models you didn't select the
first time. Note that as a safety measure the script will _never_
remove a previously-installed weights file. You will have to do this
manually.
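If you do want to retire a model by hand, the steps are to delete its weights file and then remove its stanza from `configs/models.yaml` (the file name below is illustrative):
```bash
# Delete the unwanted weights file...
rm models/ldm/stable-diffusion-v1/old-model-1.0.ckpt
# ...then open configs/models.yaml in a text editor and delete
# the stanza that points at the file you just removed.
```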
### Installation via the CLI
You can install a new model, including any of the community-supported
ones, via the command-line client's `!import_model` command.
1. First download the desired model weights file and place it under `models/ldm/stable-diffusion-v1/`.
You may rename the weights file to something more memorable if you wish. Record the path of the
weights file (e.g. `models/ldm/stable-diffusion-v1/arabian-nights-1.0.ckpt`)
2. Launch the `invoke.py` CLI with `python scripts/invoke.py`.
3. At the `invoke>` command-line, enter the command `!import_model <path to model>`.
For example:
`invoke> !import_model models/ldm/stable-diffusion-v1/arabian-nights-1.0.ckpt`
(Hint - the CLI supports file path autocompletion. Type a bit of the path
name and hit <tab> in order to get a choice of possible completions.)
4. Follow the wizard's instructions to complete installation as shown in the example
here:
```
invoke> <b>!import_model models/ldm/stable-diffusion-v1/arabian-nights-1.0.ckpt</b>
>> Model import in process. Please enter the values needed to configure this model:
Name for this model: <b>arabian-nights</b>
Description of this model: <b>Arabian Nights Fine Tune v1.0</b>
Configuration file for this model: <b>configs/stable-diffusion/v1-inference.yaml</b>
Default image width: <b>512</b>
Default image height: <b>512</b>
>> New configuration:
arabian-nights:
config: configs/stable-diffusion/v1-inference.yaml
description: Arabian Nights Fine Tune v1.0
height: 512
weights: models/ldm/stable-diffusion-v1/arabian-nights-1.0.ckpt
width: 512
OK to import [n]? <b>y</b>
>> Caching model stable-diffusion-1.4 in system RAM
>> Loading arabian-nights from models/ldm/stable-diffusion-v1/arabian-nights-1.0.ckpt
| LatentDiffusion: Running in eps-prediction mode
| DiffusionWrapper has 859.52 M params.
| Making attention of type 'vanilla' with 512 in_channels
| Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
| Making attention of type 'vanilla' with 512 in_channels
| Using faster float16 precision
```
If you've previously installed the fine-tune VAE file `vae-ft-mse-840000-ema-pruned.ckpt`,
the wizard will also ask you if you want to add this VAE to the model.
The appropriate entry for this model will be added to `configs/models.yaml` and it will
be available to use in the CLI immediately.
The CLI has additional commands for switching among, viewing, editing, and
deleting the available models. These are described in [Command Line
Client](../features/CLI.md#model-selection-and-importation), but the two most
frequently-used are `!models` and `!switch <name of model>`. The first
prints a table of models that InvokeAI knows about and their load
status. The second will load the requested model and lets you switch
back and forth quickly among loaded models.
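A typical session might look something like this (the model names and load states shown are illustrative, not literal program output):
```
invoke> !models
stable-diffusion-1.4   not loaded   Stable Diffusion v1.4
stable-diffusion-1.5   active       Stable Diffusion v1.5
waifu-diffusion-1.3    cached       Waifu Diffusion v1.3
invoke> !switch waifu-diffusion-1.3
>> Caching model stable-diffusion-1.5 in system RAM
>> Retrieving model waifu-diffusion-1.3 from system RAM cache
```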
### Manually editing `configs/models.yaml`
If you are comfortable with a text editor then you may simply edit
`models.yaml` directly.
First you need to download the desired .ckpt file and place it in
`models/ldm/stable-diffusion-v1` as described in step #1 in the
previous section. Record the path to the weights file,
e.g. `models/ldm/stable-diffusion-v1/arabian-nights-1.0.ckpt`
Then using a **text** editor (e.g. the Windows Notepad application),
open the file `configs/models.yaml`, and add a new stanza modeled on
this example:
```yaml
arabian-nights-1.0:
  description: A great fine-tune in Arabian Nights style
  weights: ./models/ldm/stable-diffusion-v1/arabian-nights-1.0.ckpt
  config: ./configs/stable-diffusion/v1-inference.yaml
  width: 512
  height: 512
  vae: ./models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
  default: false
```
* arabian-nights-1.0
- This is the name of the model that you will refer to from within the
CLI and the WebGUI when you need to load and use the model.
* description
- Any description that you want to add to the model to remind you what
it is.
* weights
- Relative path to the .ckpt weights file for this model.
* config
- This is the confusingly-named configuration file for the model itself.
Use `./configs/stable-diffusion/v1-inference.yaml` unless the model happens
to need a custom configuration, in which case the place you downloaded it
from will tell you what to use instead. For example, the runwayML custom
inpainting model requires the file `configs/stable-diffusion/v1-inpainting-inference.yaml`.
This is already included in the InvokeAI distribution and is configured automatically
for you by the `preload_models.py` script.
* vae
- If you want to add a VAE file to the model, then enter its path here.
* width, height
- These are the width and height of the images used to train the model.
Currently they are always 512 and 512.
Save the `models.yaml` and relaunch InvokeAI. The new model should now be
available for your use.
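Because a stray tab or mis-indented key will prevent the file from parsing, it can be worth sanity-checking your edit before relaunching. A quick check (assuming the `invokeai` conda environment is active, which provides PyYAML):
```bash
# Parse configs/models.yaml and report success or the first syntax error
python -c "import yaml; yaml.safe_load(open('configs/models.yaml')); print('models.yaml parses OK')"
```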


@@ -1,4 +1,10 @@
# Before you begin
---
title: Docker
---
# :fontawesome-brands-docker: Docker
## Before you begin
- For end users: Install Stable Diffusion locally using the instructions for
your OS.
@@ -6,7 +12,7 @@
deployment to other environments (on-premises or cloud), follow these
instructions. For general use, install locally to leverage your machine's GPU.
# Why containers?
## Why containers?
They provide a flexible, reliable way to build and deploy Stable Diffusion.
You'll also use a Docker volume to store the largest model files and image
@@ -26,117 +32,78 @@ development purposes it's fine. Once you're done with development tasks on your
laptop you can build for the target platform and architecture and deploy to
another environment with NVIDIA GPUs on-premises or in the cloud.
# Installation on a Linux container
## Installation on a Linux container
## Prerequisites
### Prerequisites
### Get the data files
Go to
[Hugging Face](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original),
and click "Access repository" to download the model file `sd-v1-4.ckpt` (~4 GB)
to `~/Downloads`. You'll need to create an account but it's quick and free.
Also download the face restoration model.
```Shell
cd ~/Downloads
wget https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.4.pth
```
### Install [Docker](https://github.com/santisbon/guides#docker)
#### Install [Docker](https://github.com/santisbon/guides#docker)
On the Docker Desktop app, go to Preferences, Resources, Advanced. Increase the
CPUs and Memory to avoid this
[Issue](https://github.com/invoke-ai/InvokeAI/issues/342). You may need to
increase Swap and Disk image size too.
## Setup
#### Get a Huggingface-Token
Go to [Hugging Face](https://huggingface.co/settings/tokens), create a token and
temporarily place it somewhere such as an open text editor window (but don't save it!
Just keep it open; we will need it in the next step.)
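If you ever want to fetch a weights file by hand with that token, a Bearer-authenticated download looks roughly like this (a sketch only; the build and run scripts normally handle this for you):
```bash
# Hypothetical manual download using your Hugging Face token
HUGGINGFACE_TOKEN="hf_..."   # paste your token here
curl -L -H "Authorization: Bearer ${HUGGINGFACE_TOKEN}" \
  -o sd-v1-4.ckpt \
  https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt
```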
### Setup
Set the fork you want to use and other variables.
```Shell
TAG_STABLE_DIFFUSION="santisbon/stable-diffusion"
PLATFORM="linux/arm64"
GITHUB_STABLE_DIFFUSION="-b orig-gfpgan https://github.com/santisbon/stable-diffusion.git"
REQS_STABLE_DIFFUSION="requirements-linux-arm64.txt"
CONDA_SUBDIR="osx-arm64"
!!! tip
echo $TAG_STABLE_DIFFUSION
echo $PLATFORM
echo $GITHUB_STABLE_DIFFUSION
echo $REQS_STABLE_DIFFUSION
echo $CONDA_SUBDIR
I prefer to save my env vars
in a `.env` (or `.envrc`) file in the repository root so they are automatically re-applied
when I come back.
The build and run scripts contain default values for almost everything,
besides the [Hugging Face Token](https://huggingface.co/settings/tokens) you
created in the last step.
Some suggestions for variables you may want to change besides the token:
| Environment-Variable | Description |
| ------------------------------------------------------------------- | ------------------------------------------------------------------------ |
| `HUGGINGFACE_TOKEN="hg_aewirhghlawrgkjbarug2"` | This is the only required variable; without it you can't download the checkpoint |
| `ARCH=aarch64` | if you are using an ARM-based CPU |
| `INVOKEAI_TAG=yourname/invokeai:latest` | the container repository / tag which will be used |
| `INVOKEAI_CONDA_ENV_FILE=environment-linux-aarch64.yml` | since `environment.yml` does not work on aarch64 |
| `INVOKEAI_GIT="-b branchname https://github.com/username/reponame"` | if you want to use your own fork |
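As an example, a `.env` along these lines could hold your overrides (every value below is a placeholder; loading the file, e.g. with `source .env` or direnv, is up to you):
```bash
# .env - personal overrides for the docker build/run scripts
HUGGINGFACE_TOKEN="hf_yourtokenhere"                    # required
ARCH=aarch64                                            # only on ARM-based CPUs
INVOKEAI_TAG=yourname/invokeai:latest
INVOKEAI_CONDA_ENV_FILE=environment-linux-aarch64.yml   # aarch64 only
INVOKEAI_GIT="-b mybranch https://github.com/yourname/InvokeAI"
```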
#### Build the Image
I provided a build script, located at `docker-build/build.sh`, which still
needs to be executed from the repository root.
```bash
docker-build/build.sh
```
Create a Docker volume for the downloaded model files.
The build script not only builds the container, but also creates the Docker
volume if it does not exist yet; if the volume is empty it will download the models. When
it is done you can run the container via the run script
```Shell
docker volume create my-vol
```bash
docker-build/run.sh
```
Copy the data files to the Docker volume using a lightweight Linux container.
We'll need the models at run time. You just need to create the container with
the mountpoint; no need to run this dummy container.
When used without arguments, the container will start the web interface and print
the link to open it. But if you want to pass some other parameters you can
do so as well.
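For example, assuming `run.sh` forwards its arguments to `invoke.py` inside the container (an assumption; check the script to confirm), you could start a full-precision CLI session like so:
```bash
# Hypothetical: pass flags through run.sh to invoke.py
docker-build/run.sh --full_precision
```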
```Shell
cd ~/Downloads # or wherever you saved the files
!!! warning "Deprecated"
docker create --platform $PLATFORM --name dummy --mount source=my-vol,target=/data alpine
From here on, the rest of the previous Docker docs follow; they may still
provide useful information here and there.
docker cp sd-v1-4.ckpt dummy:/data
docker cp GFPGANv1.4.pth dummy:/data
```
## Usage (time to have fun)
Get the repo and download the Miniconda installer (we'll need it at build time).
Replace the URL with the version matching your container OS and the architecture
it will run on.
### Startup
```Shell
cd ~
git clone $GITHUB_STABLE_DIFFUSION
cd stable-diffusion/docker-build
chmod +x entrypoint.sh
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-aarch64.sh -O anaconda.sh && chmod +x anaconda.sh
```
Build the Docker image. Give it any tag `-t` that you want.
Choose the Linux container's host platform: x86-64/Intel is `amd64`. Apple
silicon is `arm64`. If deploying the container to the cloud to leverage powerful
GPU instances you'll be on amd64 hardware but if you're just trying this out
locally on Apple silicon choose arm64.
The application uses libraries that need to match the host environment so use
the appropriate requirements file.
Tip: Check that your shell session has the env variables set above.
```Shell
docker build -t $TAG_STABLE_DIFFUSION \
--platform $PLATFORM \
--build-arg gsd=$GITHUB_STABLE_DIFFUSION \
--build-arg rsd=$REQS_STABLE_DIFFUSION \
--build-arg cs=$CONDA_SUBDIR \
.
```
Run a container using your built image.
Tip: Make sure you've created and populated the Docker volume (above).
```Shell
docker run -it \
--rm \
--platform $PLATFORM \
--name stable-diffusion \
--hostname stable-diffusion \
--mount source=my-vol,target=/data \
$TAG_STABLE_DIFFUSION
```
# Usage (time to have fun)
## Startup
If you're on a **Linux container** the `dream` script is **automatically
If you're on a **Linux container** the `invoke` script is **automatically
started** and the output dir set to the Docker volume you created earlier.
If you're **directly on macOS follow these startup instructions**.
@@ -148,17 +115,17 @@ half-precision requires autocast and won't work.
By default the images are saved in `outputs/img-samples/`.
```Shell
python3 scripts/dream.py --full_precision
python3 scripts/invoke.py --full_precision
```
You'll get the script's prompt. You can see available options or quit.
```Shell
dream> -h
dream> q
invoke> -h
invoke> q
```
## Text to Image
### Text to Image
For quick (but bad) image results test with 5 steps (default 50) and 1 sample
image. This will let you know that everything is set up correctly.
@@ -166,10 +133,10 @@ Then increase steps to 100 or more for good (but slower) results.
The prompt can be in quotes or not.
```Shell
dream> The hulk fighting with sheldon cooper -s5 -n1
dream> "woman closeup highly detailed" -s 150
invoke> The hulk fighting with sheldon cooper -s5 -n1
invoke> "woman closeup highly detailed" -s 150
# Reuse previous seed and apply face restoration
dream> "woman closeup highly detailed" --steps 150 --seed -1 -G 0.75
invoke> "woman closeup highly detailed" --steps 150 --seed -1 -G 0.75
```
You'll need to experiment to see if face restoration is making it better or
@@ -188,7 +155,7 @@ volume):
docker cp dummy:/data/000001.928403745.png /Users/<your-user>/Pictures
```
## Image to Image
### Image to Image
You can also do text-guided image-to-image translation. For example, turning a
sketch into a detailed drawing.
@@ -210,35 +177,36 @@ If you're on a Docker container, copy your input image into the Docker volume
docker cp /Users/<your-user>/Pictures/sketch-mountains-input.jpg dummy:/data/
```
Try it out generating an image (or more). The `dream` script needs absolute
Try it out generating an image (or more). The `invoke` script needs absolute
paths to find the image so don't use `~`.
If you're on your Mac
```Shell
dream> "A fantasy landscape, trending on artstation" -I /Users/<your-user>/Pictures/sketch-mountains-input.jpg --strength 0.75 --steps 100 -n4
invoke> "A fantasy landscape, trending on artstation" -I /Users/<your-user>/Pictures/sketch-mountains-input.jpg --strength 0.75 --steps 100 -n4
```
If you're on a Linux container on your Mac
```Shell
dream> "A fantasy landscape, trending on artstation" -I /data/sketch-mountains-input.jpg --strength 0.75 --steps 50 -n1
invoke> "A fantasy landscape, trending on artstation" -I /data/sketch-mountains-input.jpg --strength 0.75 --steps 50 -n1
```
## Web Interface
### Web Interface
You can use the `dream` script with a graphical web interface. Start the web
You can use the `invoke` script with a graphical web interface. Start the web
server with:
```Shell
python3 scripts/dream.py --full_precision --web
python3 scripts/invoke.py --full_precision --web
```
If it's running on your Mac point your Mac web browser to http://127.0.0.1:9090
If it's running on your Mac point your Mac web browser to
<http://127.0.0.1:9090>
Press Control-C at the command line to stop the web server.
## Notes
### Notes
Some text you can add at the end of the prompt to make it very pretty:


@@ -1,5 +1,5 @@
---
title: Linux
title: Manual Installation, Linux
---
# :fontawesome-brands-linux: Linux
@@ -26,38 +26,36 @@ title: Linux
3. Copy the InvokeAI source code from GitHub:
```
(base) ~$ git clone https://github.com/invoke-ai/InvokeAI.git
```
```bash
(base) ~$ git clone https://github.com/invoke-ai/InvokeAI.git
```
This will create the InvokeAI folder where you will follow the rest of the steps.
4. Enter the newly-created InvokeAI folder. From this step forward make sure that you are working in the InvokeAI directory!
```
(base) ~$ cd InvokeAI
(base) ~/InvokeAI$
```
```bash
(base) ~$ cd InvokeAI
(base) ~/InvokeAI$
```
5. Use anaconda to copy necessary python packages, create a new python
environment named `ldm` and activate the environment.
environment named `invokeai` and activate the environment.
```bash
(base) ~/InvokeAI$ conda env create
(base) ~/InvokeAI$ conda activate invokeai
(invokeai) ~/InvokeAI$
```
```
(base) ~/InvokeAI$ conda env create
(base) ~/InvokeAI$ conda activate ldm
(ldm) ~/InvokeAI$
```
After these steps, your command prompt will be prefixed by `(ldm)` as shown
After these steps, your command prompt will be prefixed by `(invokeai)` as shown
above.
6. Load a couple of small machine-learning models required by stable diffusion:
```
(ldm) ~/InvokeAI$ python3 scripts/preload_models.py
```
```bash
(invokeai) ~/InvokeAI$ python3 scripts/preload_models.py
```
!!! note
@@ -65,48 +63,39 @@ This will create InvokeAI folder where you will follow the rest of the steps.
model loading scheme to allow the script to work on GPU machines that are not
internet connected. See [Preload Models](../features/OTHER.md#preload-models)
7. Now you need to install the weights for the stable diffusion model.
7. Install the weights for the stable diffusion model.
- For running with the released weights, you will first need to set up an account
with [Hugging Face](https://huggingface.co).
- Use your credentials to log in, and then point your browser [here](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original).
- You may be asked to sign a license agreement at this point.
- Click on "Files and versions" near the top of the page, and then click on the
file named "sd-v1-4.ckpt". You'll be taken to a page that prompts you to click
the "download" link. Save the file somewhere safe on your local machine.
- Sign up at https://huggingface.co
- Go to the [Stable Diffusion model page](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original)
- Accept the terms and click Access Repository
- Download [v1-5-pruned-emaonly.ckpt (4.27 GB)](https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned-emaonly.ckpt)
and move it into this directory under `models/ldm/stable-diffusion-v1/v1-5-pruned-emaonly.ckpt`
Now run the following commands from within the stable-diffusion directory.
This will create a symbolic link from the stable-diffusion model.ckpt file, to
the true location of the `sd-v1-4.ckpt` file.
```
(ldm) ~/InvokeAI$ mkdir -p models/ldm/stable-diffusion-v1
(ldm) ~/InvokeAI$ ln -sf /path/to/sd-v1-4.ckpt models/ldm/stable-diffusion-v1/model.ckpt
```
There are many other models that you can use. Please see
[Installing Models](../features/INSTALLING_MODELS.md) for details.
8. Start generating images!
```
# for the pre-release weights use the -l or --liaon400m switch
(ldm) ~/InvokeAI$ python3 scripts/dream.py -l
```bash
# for the pre-release weights use the -l or --laion400m switch
(invokeai) ~/InvokeAI$ python3 scripts/invoke.py -l
# for the post-release weights do not use the switch
(ldm) ~/InvokeAI$ python3 scripts/dream.py
# for the post-release weights do not use the switch
(invokeai) ~/InvokeAI$ python3 scripts/invoke.py
# for additional configuration switches and arguments, use -h or --help
(ldm) ~/InvokeAI$ python3 scripts/dream.py -h
```
# for additional configuration switches and arguments, use -h or --help
(invokeai) ~/InvokeAI$ python3 scripts/invoke.py -h
```
9. Subsequently, to relaunch the script, be sure to run "conda activate ldm" (step 5, second command), enter the `InvokeAI` directory, and then launch the dream script (step 8). If you forget to activate the ldm environment, the script will fail with multiple `ModuleNotFound` errors.
9. Subsequently, to relaunch the script, be sure to run "conda activate invokeai" (step 5, second command), enter the `InvokeAI` directory, and then launch the invoke script (step 8). If you forget to activate the 'invokeai' environment, the script will fail with multiple `ModuleNotFound` errors.
## Updating to newer versions of the script
This distribution is changing rapidly. If you used the `git clone` method (step 3) to download the InvokeAI directory, then to update to the latest and greatest version, launch the Anaconda window, enter `InvokeAI` and type:
```
(ldm) ~/InvokeAI$ git pull
```bash
(invokeai) ~/InvokeAI$ git pull
(invokeai) ~/InvokeAI$ conda env update -f environment.yml
```
This will bring your local copy into sync with the remote one.


@@ -1,69 +1,72 @@
---
title: macOS
title: Manual Installation, macOS
---
# :fontawesome-brands-apple: macOS
Invoke AI runs quite well on M1 Macs and we have a number of M1 users
in the community.
While the repo does run on Intel Macs, we only have a couple of
reports. If you have an Intel Mac and run into issues, please create
an issue on GitHub and we will do our best to help.
## Requirements
- macOS 12.3 Monterey or later
- Python
- Patience
- Apple Silicon or Intel Mac
- About 10 GB of storage (and 10 GB of download data if your internet connection has data caps)
- Any M1 Mac or an Intel Mac with 4 GB+ of VRAM (ideally more)
Things have moved fast, so these instructions change often and can become
outdated quickly. One of the problems is that there are so many different
ways to run this.
## Installation
We are trying to build a testing setup so that when we make changes it doesn't
always break.
First you need to download a large checkpoint file.
## How to
1. Sign up at https://huggingface.co
2. Go to the [Stable Diffusion model page](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original)
3. Accept the terms and click Access Repository
4. Download [v1-5-pruned-emaonly.ckpt (4.27 GB)](https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned-emaonly.ckpt)
and move it into this directory under `models/ldm/stable-diffusion-v1/v1-5-pruned-emaonly.ckpt`
(this hasn't been 100% tested yet)
There are many other models that you can try. Please see
[Installing Models](../features/INSTALLING_MODELS.md) for details.
First get the weights checkpoint download started since it's big and will take
some time:
1. Sign up at [huggingface.co](https://huggingface.co)
2. Go to the
[Stable Diffusion model page](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original)
3. Accept the terms and click Access Repository:
4. Download
[sd-v1-4.ckpt (4.27 GB)](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/blob/main/sd-v1-4.ckpt)
and note where you have saved it (probably the Downloads folder)
While that is downloading, open a Terminal and run the following commands:
While that is downloading, open Terminal and run the following
commands one at a time, reading the comments and taking care to run
the appropriate command for your Mac's architecture (Intel or M1).
!!! todo "Homebrew"
=== "no brew installation yet"
If you have no brew installation yet (otherwise skip):
```bash title="install brew (and Xcode command line tools)"
/bin/bash -c \
"$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
```
=== "brew is already installed"
Only if you installed protobuf in a previous version of this tutorial, otherwise skip
`#!bash brew uninstall protobuf`
```bash title="install brew (and Xcode command line tools)"
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
```
!!! todo "Conda Installation"
Now there are two different ways to set up the Python (miniconda) environment:
1. Standalone
2. with pyenv
If you don't know what we are talking about, choose Standalone
1. Standalone
2. with pyenv
If you don't know what we are talking about, choose Standalone. If you are familiar with Python environments, choose "with pyenv".
=== "Standalone"
```bash
# install cmake and rust:
brew install cmake rust
```bash title="Install cmake, protobuf, and rust"
brew install cmake protobuf rust
```
Then clone the InvokeAI repository:
```bash title="Clone the InvokeAI repository:
# Clone the Invoke AI repo
git clone https://github.com/invoke-ai/InvokeAI.git
cd InvokeAI
```
Choose the appropriate architecture for your system and install miniconda:
=== "M1 arm64"
```bash title="Install miniconda for M1 arm64"
@@ -82,80 +85,82 @@ While that is downloading, open a Terminal and run the following commands:
=== "with pyenv"
```{.bash .annotate}
brew install rust pyenv-virtualenv # (1)!
```bash
brew install pyenv-virtualenv
pyenv install anaconda3-2022.05
pyenv virtualenv anaconda3-2022.05
eval "$(pyenv init -)"
pyenv activate anaconda3-2022.05
```
1. You might already have this installed, if that is the case just continue.
```{.bash .annotate title="local repo setup"}
# clone the repo
git clone https://github.com/invoke-ai/InvokeAI.git
!!! todo "Clone the Invoke AI repo"
cd InvokeAI
```bash
git clone https://github.com/invoke-ai/InvokeAI.git
cd InvokeAI
```
# wait until the checkpoint file has downloaded, then proceed
!!! todo "Wait until the checkpoint-file download finished, then proceed"
# create symlink to checkpoint
mkdir -p models/ldm/stable-diffusion-v1/
We will leave the big checkpoint wherever you stashed it for long-term storage,
and make a link to it from the repo's folder. This allows you to use it for
other repos, or if you need to delete Invoke AI, you won't have to download it again.
PATH_TO_CKPT="$HOME/Downloads" # (1)!
```{.bash .annotate}
# Make the directory in the repo for the symlink
mkdir -p models/ldm/stable-diffusion-v1/
ln -s "$PATH_TO_CKPT/sd-v1-4.ckpt" \
models/ldm/stable-diffusion-v1/model.ckpt
```
# This is the folder where you put the checkpoint file `sd-v1-4.ckpt`
PATH_TO_CKPT="$HOME/Downloads" # (1)!
1. or wherever you saved sd-v1-4.ckpt
# Create a link to the checkpoint
ln -s "$PATH_TO_CKPT/sd-v1-4.ckpt" models/ldm/stable-diffusion-v1/model.ckpt
```
!!! todo "create Conda Environment"
1. replace `$HOME/Downloads` with the location where you actually stored the checkpoint (`sd-v1-4.ckpt`)
=== "M1 arm64"
!!! todo "Create the environment & install packages"
=== "M1 Mac"
```bash
PIP_EXISTS_ACTION=w CONDA_SUBDIR=osx-arm64 \
conda env create \
-f environment-mac.yml \
&& conda activate ldm
PIP_EXISTS_ACTION=w CONDA_SUBDIR=osx-arm64 conda env create -f environment-mac.yml
```
=== "Intel x86_64"
=== "Intel x86_64 Mac"
```bash
PIP_EXISTS_ACTION=w CONDA_SUBDIR=osx-64 \
conda env create \
-f environment-mac.yml \
&& conda activate ldm
PIP_EXISTS_ACTION=w CONDA_SUBDIR=osx-64 conda env create -f environment-mac.yml
```
```{.bash .annotate title="preload models and run script"}
# only need to do this once
python scripts/preload_models.py
```bash
# Activate the environment (you need to do this every time you want to run SD)
conda activate invokeai
# now you can run SD in CLI mode
python scripts/dream.py --full_precision # (1)!
# This will download some bits and pieces and may take a while
(invokeai) python scripts/preload_models.py
# or run the web interface!
python scripts/dream.py --web
# Run SD!
(invokeai) python scripts/dream.py
# The original scripts should work as well.
python scripts/orig_scripts/txt2img.py \
--prompt "a photograph of an astronaut riding a horse" \
--plms
```
# or run the web interface!
(invokeai) python scripts/invoke.py --web
Note, `export PIP_EXISTS_ACTION=w` is a precaution to fix `conda env
create -f environment-mac.yml` never finishing in some situations. So
it isn't required but wont hurt.
# The original scripts should work as well.
(invokeai) python scripts/orig_scripts/txt2img.py \
--prompt "a photograph of an astronaut riding a horse" \
--plms
```
!!! info
`export PIP_EXISTS_ACTION=w` is a precaution to fix `conda env
create -f environment-mac.yml` never finishing in some situations, so
it isn't required but won't hurt.
---
## Common problems
After you followed all the instructions and try to run dream.py, you might
After you have followed all the instructions and try to run invoke.py, you might
get several errors. Here are the errors I've seen and found solutions for.
### Is it slow?
@@ -172,13 +177,13 @@ python ./scripts/orig_scripts/txt2img.py \
### Doesn't work anymore?
PyTorch nightly includes support for MPS. Because of this, this setup is
inherently unstable. One morning I woke up and it no longer worked no matter
what I did until I switched to miniforge. However, I have another Mac that works
just fine with Anaconda. If you can't get it to work, please search a little
first because many of the errors will get posted and solved. If you can't find a
solution please
[create an issue](https://github.com/invoke-ai/InvokeAI/issues).
PyTorch nightly includes support for MPS. Because of this, this setup
is inherently unstable. One morning I woke up and it no longer worked
no matter what I did until I switched to miniforge. However, I have
another Mac that works just fine with Anaconda. If you can't get it to
work, please search a little first because many of the errors will get
posted and solved. If you can't find a solution please [create an
issue](https://github.com/invoke-ai/InvokeAI/issues).
One debugging step is to update to the latest version of PyTorch nightly.
@@ -187,10 +192,9 @@ conda install \
pytorch \
torchvision \
-c pytorch-nightly \
-n ldm
-n invokeai
```
If it takes forever to run `conda env create -f environment-mac.yml`, try this:
```bash
@@ -202,27 +206,27 @@ conda clean \
Or you could try to completely reset Anaconda:
```bash
conda update \
--force-reinstall \
-y \
-n base \
-c defaults conda
```
---
### "No module named cv2", torch, 'ldm', 'transformers', 'taming', etc
### "No module named cv2", torch, 'invokeai', 'transformers', 'taming', etc
There are several causes of these errors:
1. Did you remember to `conda activate ldm`? If your terminal prompt begins with
"(ldm)" then you activated it. If it begins with "(base)" or something else
1. Did you remember to `conda activate invokeai`? If your terminal prompt begins with
"(invokeai)" then you activated it. If it begins with "(base)" or something else
you haven't.
2. You might've run `./scripts/preload_models.py` or `./scripts/dream.py`
2. You might've run `./scripts/preload_models.py` or `./scripts/invoke.py`
instead of `python ./scripts/preload_models.py` or
`python ./scripts/dream.py`. The cause of this error is long so it's below.
`python ./scripts/invoke.py`. The cause of this error is long so it's below.
<!-- I could not find out where the error is, otherwise would have marked it as a footnote -->
@@ -231,17 +235,17 @@ There are several causes of these errors:
```bash
conda deactivate
conda env remove -n ldm
conda env remove -n invokeai
conda env create -f environment-mac.yml
```
4. If you have activated the ldm virtual environment and tried rebuilding it,
4. If you have activated the invokeai virtual environment and tried rebuilding it,
maybe the problem could be that I have something installed that you don't and
you'll just need to manually install it. Make sure you activate the virtual
environment so it installs there instead of globally.
```bash
conda activate ldm
conda activate invokeai
pip install <package name>
```
@@ -299,12 +303,12 @@ should actually be the _same python_, which you can verify by comparing the
output of `python3 -V` and `python -V`.
```bash
(ldm) % which python
/Users/name/miniforge3/envs/ldm/bin/python
(invokeai) % which python
/Users/name/miniforge3/envs/invokeai/bin/python
```
The above is what you'll see if you have miniforge and correctly activated the
ldm environment, while usingd the standalone setup instructions above.
invokeai environment while using the standalone setup instructions above.
If you otherwise installed via pyenv, you will get this result:
@@ -378,7 +382,7 @@ python scripts/preload_models.py
WARNING: this will be slower than running natively on MPS.
```
This fork already includes a fix for this in
The InvokeAI version includes this fix in
[environment-mac.yml](https://github.com/invoke-ai/InvokeAI/blob/main/environment-mac.yml).
### "Could not build wheels for tokenizers"
@@ -463,13 +467,10 @@ C.
You don't have a virus. It's part of the project. Here's
[Rick](https://github.com/invoke-ai/InvokeAI/blob/main/assets/rick.jpeg)
and here's
[the code](https://github.com/invoke-ai/InvokeAI/blob/69ae4b35e0a0f6ee1af8bb9a5d0016ccb27e36dc/scripts/txt2img.py#L79)
that swaps him in. It's a NSFW filter, which IMO, doesn't work very good (and we
call this "computer vision", sheesh).
Actually, this could be happening because there's not enough RAM. You could try
the `model.half()` suggestion or specify smaller output images.
and here's [the
code](https://github.com/invoke-ai/InvokeAI/blob/69ae4b35e0a0f6ee1af8bb9a5d0016ccb27e36dc/scripts/txt2img.py#L79)
that swaps him in. It's an NSFW filter which, IMO, doesn't work very
well (and we call this "computer vision", sheesh).
---
@@ -487,16 +488,14 @@ this issue too. I should probably test it.
### "view size is not compatible with input tensor's size and stride"
```bash
File "/opt/anaconda3/envs/ldm/lib/python3.10/site-packages/torch/nn/functional.py", line 2511, in layer_norm
File "/opt/anaconda3/envs/invokeai/lib/python3.10/site-packages/torch/nn/functional.py", line 2511, in layer_norm
return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.
```
Update to the latest version of invoke-ai/InvokeAI. We were patching
pytorch but we found a file in stable-diffusion that we could change instead.
This is a 32-bit vs 16-bit problem.
---
Update to the latest version of invoke-ai/InvokeAI. We were
patching pytorch but we found a file in stable-diffusion that we could
change instead. This is a 32-bit vs 16-bit problem.
### The processor must support the Intel bla bla bla
@@ -519,13 +518,13 @@ use ARM packages, and use `nomkl` as described above.
May appear when just starting to generate, e.g.:
```bash
dream> clouds
invoke> clouds
Generating: 0%| | 0/1 [00:00<?, ?it/s]/Users/[...]/dev/stable-diffusion/ldm/modules/embedding_manager.py:152: UserWarning: The operator 'aten::nonzero' is not currently supported on the MPS backend and will fall back to run on the CPU. This may have performance implications. (Triggered internally at /Users/runner/work/_temp/anaconda/conda-bld/pytorch_1662016319283/work/aten/src/ATen/mps/MPSFallback.mm:11.)
placeholder_idx = torch.where(
loc("mps_add"("(mpsFileLoc): /AppleInternal/Library/BuildRoots/20d6c351-ee94-11ec-bcaf-7247572f23b4/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/MPSGraphUtilities.mm":219:0)): error: input types 'tensor<2x1280xf32>' and 'tensor<*xf16>' are not broadcast compatible
LLVM ERROR: Failed to infer result type(s).
Abort trap: 6
/Users/[...]/opt/anaconda3/envs/ldm/lib/python3.9/multiprocessing/resource_tracker.py:216: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
/Users/[...]/opt/anaconda3/envs/invokeai/lib/python3.9/multiprocessing/resource_tracker.py:216: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
warnings.warn('resource_tracker: There appear to be %d '
```


@@ -1,5 +1,5 @@
---
title: Windows
title: Manual Installation, Windows
---
# :fontawesome-brands-windows: Windows
@@ -39,7 +39,7 @@ in the wiki
4. Run the command:
```bash
```batch
git clone https://github.com/invoke-ai/InvokeAI.git
```
@@ -48,17 +48,21 @@ in the wiki
5. Enter the newly-created InvokeAI folder. From this step forward make sure that you are working in the InvokeAI directory!
```
cd InvokeAI
```
```batch
cd InvokeAI
```
6. Run the following two commands:
```
conda env create (step 6a)
conda activate ldm (step 6b)
```
This will install all python requirements and activate the "ldm" environment
```batch title="step 6a"
conda env create
```
```batch title="step 6b"
conda activate invokeai
```
This will install all python requirements and activate the "invokeai" environment
which sets PATH and other environment variables properly.
Note that the long form of the first command is `conda env create -f environment.yml`. If the
@@ -67,7 +71,7 @@ conda activate ldm (step 6b)
7. Run the command:
```bash
```batch
python scripts\preload_models.py
```
@@ -79,45 +83,35 @@ conda activate ldm (step 6b)
8. Now you need to install the weights for the big stable diffusion model.
- For running with the released weights, you will first need to set up an account with Hugging Face (https://huggingface.co).
- Use your credentials to log in, and then point your browser at https://huggingface.co/CompVis/stable-diffusion-v-1-4-original.
- You may be asked to sign a license agreement at this point.
- Click on "Files and versions" near the top of the page, and then click on the file named `sd-v1-4.ckpt`. You'll be taken to a page that
prompts you to click the "download" link. Now save the file somewhere safe on your local machine.
- The weight file is >4 GB in size, so
downloading may take a while.
- Sign up at https://huggingface.co
- Go to the [Stable Diffusion model page](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original)
- Accept the terms and click Access Repository
- Download [v1-5-pruned-emaonly.ckpt (4.27 GB)](https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned-emaonly.ckpt)
and move it into this directory under `models/ldm/stable-diffusion-v1/v1-5-pruned-emaonly.ckpt`
Now run the following commands from **within the InvokeAI directory** to copy the weights file to the right place:
```
mkdir -p models\ldm\stable-diffusion-v1
copy C:\path\to\sd-v1-4.ckpt models\ldm\stable-diffusion-v1\model.ckpt
```
Please replace `C:\path\to\sd-v1-4.ckpt` with the correct path to wherever you stashed this file. If you prefer not to copy or move the .ckpt file,
you may instead create a shortcut to it from within `models\ldm\stable-diffusion-v1\`.
There are many other models that you can use. Please see
[Installing Models](../features/INSTALLING_MODELS.md) for details.
9. Start generating images!
```bash
# for the pre-release weights
python scripts\dream.py -l
# for the post-release weights
python scripts\dream.py
```batch title="for the pre-release weights"
python scripts\invoke.py -l
```
10. Subsequently, to relaunch the script, first activate the Anaconda command window (step 3),enter the InvokeAI directory (step 5, `cd \path\to\InvokeAI`), run `conda activate ldm` (step 6b), and then launch the dream script (step 9).
```batch title="for the post-release weights"
python scripts\invoke.py
```
10. Subsequently, to relaunch the script, first activate the Anaconda command window (step 3), enter the InvokeAI directory (step 5, `cd \path\to\InvokeAI`), run `conda activate invokeai` (step 6b), and then launch the invoke script (step 9).
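Put together, a relaunch from the Anaconda command window looks like this (replace the path with your own clone location):
```batch
cd \path\to\InvokeAI
conda activate invokeai
python scripts\invoke.py
```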
!!! tip "Tildebyte has written an alternative"
**Note:** Tildebyte has written an alternative
["Easy peasy Windows install"](https://github.com/invoke-ai/InvokeAI/wiki/Easy-peasy-Windows-install)
which uses the Windows Powershell and pew. If you are having trouble with
Anaconda on Windows, give this a try (or try it first!)
---
This distribution is changing rapidly. If you used the `git clone` method (step 4) to download the InvokeAI directory, then to update to the latest and greatest version, launch the Anaconda window, enter `InvokeAI`, and type:
This distribution is changing rapidly. If you used the `git clone` method
(step 5) to download the stable-diffusion directory, then to update to the
latest and greatest version, launch the Anaconda window, enter


@@ -58,6 +58,7 @@ We thank them for all of their time and hard work.
- [rabidcopy](https://github.com/rabidcopy)
- [Dominic Letz](https://github.com/dominicletz)
- [Dmitry T.](https://github.com/ArDiouscuros)
- [Kent Keirsey](https://github.com/hipsterusername)
## **Original CompVis Authors:**

Some files were not shown because too many files have changed in this diff.