Compare commits

...

2866 Commits

Author SHA1 Message Date
Sergey Borisov
e4a45341c8 Controlnet implementation for sequential execution 2023-06-16 02:42:32 +03:00
blessedcoolant
4ca325e8e6 chore: Rebuild API 2023-06-15 03:20:49 +12:00
blessedcoolant
6b8e88ad7f Merge branch 'main' into feat/controlnet-control-modes 2023-06-15 03:18:41 +12:00
psychedelicious
0497bea264 fix: add dynamicprompts to pyproject.toml 2023-06-15 01:05:16 +10:00
psychedelicious
b8e32fa459 chore(ui): regen api client 2023-06-15 01:05:16 +10:00
psychedelicious
34ebee67b7 fix(nodes): fix revert conflict 2023-06-15 01:05:16 +10:00
psychedelicious
e0c998d192 Revert "feat(ui): add warning socket event handling"
This reverts commit e7a61e631a42190e4b64e0d5e22771c669c5b30c.
2023-06-15 01:05:16 +10:00
psychedelicious
b51e9a6bdb Revert "feat(nodes): add warning socket event"
This reverts commit cefdd9d634e515239bd85666c872a0d64bb9d772.
2023-06-15 01:05:16 +10:00
psychedelicious
09f396ce84 feat(ui): add warning socket event handling 2023-06-15 01:05:16 +10:00
psychedelicious
abee37eab3 feat(nodes): add warning socket event 2023-06-15 01:05:16 +10:00
psychedelicious
42e48b2bef feat(nodes): add dynamic prompt node 2023-06-15 01:05:16 +10:00
blessedcoolant
70ece4364c refactor(minor): Image & Latent File Storage (#3538)
- `DiskImageStorage` and `DiskLatentsStorage` have both been updated
to work exclusively with `Path` objects rather than relying on the `os` lib
to handle path-related functions.
- We now also validate the existence of the required image output
folders and latent output folders to ensure that the app does not break
in case the required folders get tampered with mid-session.
- Just overall general cleanup.

Tested it. Nothing seems to be breaking.
2023-06-15 02:43:27 +12:00
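To illustrate the `Path`-based refactor described in this PR, here is a minimal sketch of a disk storage class that validates (and recreates) its output folders with `pathlib`. The class name, folder layout, and method shape are illustrative assumptions, not the actual InvokeAI implementation:

```python
from __future__ import annotations

from pathlib import Path


class DiskStorageBase:
    """Minimal sketch: a disk storage service that works purely with Path objects."""

    def __init__(self, output_folder: str | Path) -> None:
        self.__output_folder: Path = Path(output_folder)
        self.__validate_storage_folders()

    def __validate_storage_folders(self) -> None:
        # Recreate the required output folder if it was removed mid-session,
        # so the app does not break when the folders are tampered with.
        self.__output_folder.mkdir(parents=True, exist_ok=True)

    def get_path(self, file_name: str) -> Path:
        # Re-validate on access, in case the folder disappeared after startup.
        self.__validate_storage_folders()
        return self.__output_folder / file_name
```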
psychedelicious
f9d5f9d52c fix(nodes): minor fixes for folder validation
- fix type for `__output_folder`
- prefix `validate_storage_folders()` with `__` to indicate private method
2023-06-15 00:40:39 +10:00
blessedcoolant
587297878a refactor(minor): Latent Disk Storage 2023-06-15 02:21:49 +12:00
blessedcoolant
b4c998a9ae refactor(minor): Image File Storage 2023-06-15 01:58:58 +12:00
psychedelicious
88e8e3977b feat(ui): update UI to not use image_origin
see commit `8ad8de8` ("feat(nodes): remove `image_origin` from most places") for details.
2023-06-14 23:08:27 +10:00
psychedelicious
24b86cffe9 chore(ui): regen api client & types 2023-06-14 23:08:27 +10:00
psychedelicious
a1773197e9 feat(nodes): remove image_origin from most places
- remove `image_origin` from most places where we interact with images
- consolidate image file storage into a single `images/` dir

Images have an `image_origin` attribute but it is not actually used when retrieving images, nor will it ever be. It is still used when creating images and helps to differentiate between internally generated images and uploads.

It was included in eg API routes and image service methods as a holdover from the previous app implementation where images were not managed in a database. Now that we have images in a db, we can do away with this and simplify basically everything that touches images.

The one potentially controversial change is to no longer separate internal and external images on disk. If we retain this separation, we have to keep `image_origin` around in a number of spots, and it makes getting image paths on disk painful.

So, I have gotten rid of this organisation. Images are now all stored in `images`, regardless of their origin. As we improve the image management features, this change will hopefully become transparent.
2023-06-14 23:08:27 +10:00
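A rough sketch of the consolidation described above (the function and argument names are illustrative, not the actual image service code): path lookup no longer involves `image_origin` at all.

```python
from pathlib import Path


def get_image_path(output_root: Path, image_name: str) -> Path:
    # All images live under a single images/ folder, regardless of origin.
    # image_origin is still stored on the image record in the database, but it
    # no longer affects the on-disk layout or the path lookup.
    return output_root / "images" / image_name
```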
blessedcoolant
6c53abc034 feat: Add ControlMode to Linear UI 2023-06-14 20:01:17 +12:00
blessedcoolant
eb7047b21d chore: Rebuild WebAPI 2023-06-14 19:26:02 +12:00
blessedcoolant
43419ac761 Merge branch 'main' into feat/controlnet-control-modes 2023-06-14 19:04:42 +12:00
user1
5cd0e90816 Renamed ControlNet control_mode option "even_more_control" to "unbalanced" 2023-06-13 22:30:17 -07:00
user1
cfd49e3921 Removing vestigial comments. 2023-06-13 21:33:15 -07:00
user1
a8e0490133 Merge branch 'feat/controlnet-control-modes' of https://github.com/invoke-ai/InvokeAI into feat/controlnet-control-modes 2023-06-13 21:21:13 -07:00
psychedelicious
1e08d865c9 chore: dummy commit to trigger actions 2023-06-14 14:14:24 +10:00
blessedcoolant
f8bb650cc1 revert: IAIScrollArea 2023-06-14 14:14:24 +10:00
psychedelicious
2cee8bebb2 fix(ui): revert offset scrollbars
The wonky padding is too janky. Just overlay for now.
2023-06-14 14:14:24 +10:00
psychedelicious
ade4ec5fd8 fix(ui): fix crash when toggling pinned parameters panel 2023-06-14 14:14:24 +10:00
psychedelicious
70ffd6b03f fix(ui): fix controlnet selects data types 2023-06-14 14:14:24 +10:00
psychedelicious
6c551df311 fix(ui): fix rebase conflicts 2023-06-14 14:14:24 +10:00
blessedcoolant
24f605629e cleanup: Remove OverlayScrollable component 2023-06-14 14:14:24 +10:00
blessedcoolant
2af1ec9d02 fix: Minor padding issue in unpinned drawer 2023-06-14 14:14:24 +10:00
blessedcoolant
79d53341de fix: Stretch scroll area so it retains parent width 2023-06-14 14:14:24 +10:00
blessedcoolant
e40b3506c4 fix: Options squishing on accordion collapse 2023-06-14 14:14:24 +10:00
blessedcoolant
33912382e3 feat: Introduce Mantine's ScrollArea 2023-06-14 14:14:24 +10:00
blessedcoolant
d282810e53 cleanup: Remove IAICustomSelect and port types 2023-06-14 14:14:24 +10:00
psychedelicious
9df502fc77 fix(ui): fix mantine select props 2023-06-14 14:14:24 +10:00
psychedelicious
705573f0a8 feat(ui): even more pedantic mantine select theming 2023-06-14 14:14:24 +10:00
blessedcoolant
1878ea94f6 feat: Port Canvas Layer Select to IAIMantineSelect 2023-06-14 14:14:24 +10:00
psychedelicious
4ba5086b9a feat(ui): add tooltip to IAIMantineSelect 2023-06-14 14:14:24 +10:00
psychedelicious
4a991b4daa feat(ui): more pedantic mantine select theming 2023-06-14 14:14:24 +10:00
psychedelicious
80474d26f9 feat(ui): mantine scrollbar theming 2023-06-14 14:14:24 +10:00
blessedcoolant
9a77bd9140 feat: Port IAISelect's to IAIMantineSelect's
Ported everything except Model Manager selects and the Canvas Layer Select (this needs tooltip support)
2023-06-14 14:14:24 +10:00
psychedelicious
14cdc800c3 feat(ui): pedantic mantine select theming 2023-06-14 14:14:24 +10:00
blessedcoolant
9cfbea4c25 feat: Match styling of Mantine Select with InvokeAI 2023-06-14 14:14:24 +10:00
blessedcoolant
5fe674e223 feat: Standardize IAIMantineSelect Component 2023-06-14 14:14:24 +10:00
blessedcoolant
32200efce8 feat: Change default font to Inter 2023-06-14 14:14:24 +10:00
blessedcoolant
68a02da990 feat: Use Mantine Select for Scheduler 2023-06-14 14:14:24 +10:00
blessedcoolant
5b20766ea3 chore: Move Mantine Theme Override to own file 2023-06-14 14:14:24 +10:00
blessedcoolant
9a914250a0 feat: Change Model Select To Mantine 2023-06-14 14:14:24 +10:00
blessedcoolant
0e3106f631 feat: Add Mantine Support 2023-06-14 14:14:24 +10:00
user1
de3e6cdb02 Switched over to ControlNet control_mode with 4 options: balanced, more_prompt, more_control, even_more_control. Based on True/False combinations of internal booleans cfg_injection and soft_injection 2023-06-13 21:08:34 -07:00
user1
8495764d45 Moving from ControlNet guess_mode to separate booleans for cfg_injection and soft_injection for testing control modes 2023-06-13 00:41:36 -07:00
user1
8b7fac75ed First pass at ControlNet "guess mode" implementation. 2023-06-13 00:41:36 -07:00
user1
9e0e26f4c4 Moving from ControlNet guess_mode to separate booleans for cfg_injection and soft_injection for testing control modes 2023-06-12 23:57:57 -07:00
blessedcoolant
46cac6468e Upgrade to Diffusers 0.17.0 (#3514)
Diffusers is due for an update soon. #3512

Opening up a PR now with the required changes for when the new version
is live.

I've tested it out on Windows and nothing has broken from what I could
tell. I'd like someone to run some tests on Linux / Mac just to make
sure. Refer to the PR above on how to test it or install the release
branch.

```
pip install diffusers[torch]==0.17.0
```

Feel free to push any other changes to this PR you see fit.
2023-06-13 07:11:02 +12:00
blessedcoolant
2a814d886b Merge branch 'main' into diffusers-upgrade 2023-06-13 05:29:15 +12:00
psychedelicious
60a2fbec41 feat(ui): improve controlnet-related config types 2023-06-13 00:04:21 +10:00
psychedelicious
f15a328b80 fix(ui): allow controlnet with preprocessed control image 2023-06-13 00:04:21 +10:00
psychedelicious
811d9ab55a fix(ui): disable shouldAutoConfig switch while processing 2023-06-13 00:04:21 +10:00
psychedelicious
e00fed5c46 feat(ui): support disabling controlnet models & processors 2023-06-13 00:04:21 +10:00
psychedelicious
a3fa38b353 fix(ui): revert IAICustomSelect usage to IAISelect
There are some bugs with it that I cannot figure out related to `floating-ui` and `downshift`'s handling of refs.

Will need to revisit this component in the future.
2023-06-13 00:04:21 +10:00
psychedelicious
2e42a4bdd9 feat(ui): disable controlnets during processing 2023-06-13 00:04:21 +10:00
psychedelicious
36f72b5a49 fix(ui): check for valid controlnets before adding to graph 2023-06-13 00:04:21 +10:00
psychedelicious
af42d7d347 feat(ui): support negative controlnet weights 2023-06-13 00:04:21 +10:00
psychedelicious
8607b1994c fix(ui): fix crash when controlnet enabled but no controlnets added 2023-06-13 00:04:21 +10:00
blessedcoolant
e051c450ed fix: git stash (#3528) 2023-06-12 08:55:36 +12:00
blessedcoolant
50135b726e fix: git stash 2023-06-12 08:53:09 +12:00
user1
fd715026a7 First pass at ControlNet "guess mode" implementation. 2023-06-11 02:00:39 -07:00
Gregg Helt
c647056287 Feat/easy param (#3504)
* Testing change to LatentsToText to allow setting different cfg_scale values per diffusion step.

* Adding first attempt at float param easing node, using Penner easing functions.

* Core implementation of ControlNet and MultiControlNet.

* Added support for ControlNet and MultiControlNet to legacy non-nodal Txt2Img in backend/generator. Although backend/generator will likely disappear by v3.x, right now they are very useful for testing core ControlNet and MultiControlNet functionality while node codebase is rapidly evolving.

* Added example of using ControlNet with legacy Txt2Img generator

* Resolving rebase conflict

* Added first controlnet preprocessor node for canny edge detection.

* Initial port of controlnet node support from generator-based TextToImageInvocation node to latent-based TextToLatentsInvocation node

* Switching to ControlField for output from controlnet nodes.

* Resolving conflicts in rebase to origin/main

* Refactored ControlNet nodes so they subclass from PreprocessedControlInvocation, and only need to override run_processor(image) (instead of reimplementing invoke())

* changes to base class for controlnet nodes

* Added HED, LineArt, and OpenPose ControlNet nodes

* Added an additional "raw_processed_image" output port to controlnets, mainly so an ImageField could be routed to a ShowImage node

* Added more preprocessor nodes for:
      MidasDepth
      ZoeDepth
      MLSD
      NormalBae
      Pidi
      LineartAnime
      ContentShuffle
Removed pil_output options, ControlNet preprocessors should always output as PIL. Removed diagnostics and other general cleanup.

* Prep for splitting pre-processor and controlnet nodes

* Refactored controlnet nodes: split out controlnet stuff into separate node, stripped controlnet stuff from image processing/analysis nodes.

* Added resizing of controlnet image based on noise latent. Fixes a tensor mismatch issue.

* More rebase repair.

* Added support for using multiple control nets. Unfortunately this breaks direct usage of Control node output port  ==> TextToLatent control input port -- passing through a Collect node is now required. Working on fixing this...

* Fixed use of ControlNet control_weight parameter

* Fixed lint-ish formatting error

* Refactored controlnet node to output ControlField that bundles control info.

* Cleaning up TextToLatent arg testing

* Cleaning up mistakes after rebase.

* Removed last bits of dtype and device hardwiring from controlnet section

* Refactored ControlNet support to consolidate multiple parameters into a data struct. Also redid how multiple controlnets are handled.

* Added support for specifying which step iteration to start using
each ControlNet, and which step to end using each controlnet (specified as fraction of total steps)

* Cleaning up prior to submitting ControlNet PR. Mostly turning off diagnostic printing. Also fixed error when there is no controlnet input.

* Added dependency on controlnet-aux v0.0.3

* Commented out ZoeDetector. Will re-instate once there's a controlnet-aux release that supports it.

* Switched ControlNet node modelname input from free text to a default list of popular ControlNet model names.

* Fix to work with current stable release of controlnet_aux (v0.0.3). Turned off pre-processor params that were added post v0.0.3. Also changed defaults for shuffle.

* Refactored most of controlnet code into its own method to declutter TextToLatents.invoke(), and make upcoming integration with LatentsToLatents easier.

* Cleaning up after ControlNet refactor in TextToLatentsInvocation

* Extended node-based ControlNet support to LatentsToLatentsInvocation.

* chore(ui): regen api client

* fix(ui): add value to conditioning field

* fix(ui): add control field type

* fix(ui): fix node ui type hints

* fix(nodes): controlnet input accepts list or single controlnet

* Moved to controlnet_aux v0.0.4, reinstated Zoe controlnet preprocessor. Also, in pyproject.toml, had to specify a downgrade of timm to 0.6.13 _after_ controlnet-aux installs timm >= 0.9.2, because timm > 0.6.13 breaks the Zoe preprocessor.

* Added Mediapipe image processor for use as ControlNet preprocessor.
Also hacked in ability to specify HF subfolder when loading ControlNet models from string.

* Fixed bug where MediapipeFaceProcessorInvocation was ignoring max_faces and min_confidence params.

* Added nodes for float params: ParamFloatInvocation and FloatCollectionOutput. Also added FloatOutput.

* Added mediapipe install requirement. Should be able to remove once controlnet_aux package adds mediapipe to its requirements.

* Added float to FIELD_TYPE_MAP in constants.ts

* Progress toward improvement in fieldTemplateBuilder.ts  getFieldType()

* Fixed controlnet preprocessors and controlnet handling in TextToLatents to work with revised Image services.

* Cleaning up from merge, re-adding cfg_scale to FIELD_TYPE_MAP

* Making sure cfg_scale of type list[float] can be used in image metadata, to support param easing for cfg_scale

* Fixed math for per-step param easing.

* Added option to show plot of param value at each step

* Just cleaning up after adding param easing plot option, removing vestigial code.

* Modified control_weight ControlNet param to be polymorphic --
it can now be either a single float weight applied for all steps, or a list of floats of size total_steps that specifies the weight for each step.

* Added a more informative error message when _validate_edge() throws an error.

* Just improving the param easing bar chart title to include the easing type.

* Added requirement for easing-functions package

* Taking out some diagnostic prints.

* Added option to use both easing function and mirror of easing function together.

* Fixed recently introduced problem (when main was pulled in), triggered by num_steps in StepParamEasingInvocation not having a default value -- just added a default.

---------

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2023-06-11 16:27:44 +10:00
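The per-step parameter easing and the ControlNet begin/end step fractions described in this PR can be sketched roughly as follows. This is a hand-rolled illustration -- a simple quadratic ease-in-out standing in for the Penner easing-functions package, with invented helper names -- not the actual node implementations:

```python
from typing import List, Union


def ease_in_out_quad(t: float) -> float:
    """Quadratic ease-in-out over t in [0, 1] (stand-in for a Penner easing)."""
    return 2 * t * t if t < 0.5 else 1 - ((-2 * t + 2) ** 2) / 2


def eased_values(start: float, end: float, num_steps: int) -> List[float]:
    """One value per diffusion step, eased from start to end (e.g. cfg_scale)."""
    if num_steps <= 1:
        return [end] * max(num_steps, 0)
    return [start + (end - start) * ease_in_out_quad(i / (num_steps - 1))
            for i in range(num_steps)]


def controlnet_active(step: int, total_steps: int,
                      begin_step_percent: float, end_step_percent: float) -> bool:
    """Whether a ControlNet is applied at this step, given start/end fractions."""
    frac = step / max(total_steps - 1, 1)
    return begin_step_percent <= frac <= end_step_percent


def weight_for_step(control_weight: Union[float, List[float]], step: int) -> float:
    """control_weight is polymorphic: a single float, or one float per step."""
    return control_weight[step] if isinstance(control_weight, list) else float(control_weight)


# Example: ease cfg_scale from 4.0 to 9.0 over 10 steps, with a ControlNet
# active only for the first 60% of the schedule at weight 0.8.
if __name__ == "__main__":
    scales = eased_values(4.0, 9.0, num_steps=10)
    for i, cfg in enumerate(scales):
        on = controlnet_active(i, 10, begin_step_percent=0.0, end_step_percent=0.6)
        w = weight_for_step(0.8, i) if on else 0.0
        print(f"step {i}: cfg_scale={cfg:.2f} controlnet_weight={w}")
```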
Lincoln Stein
30f20b55d5 fix logger behavior so that it is initialized after command line parsed (#3509)
In some cases the command-line was getting parsed before the logger was
initialized, causing the logger not to pick up custom logging
instructions from `--log_handlers`. This PR fixes the issue.
2023-06-09 08:24:47 -07:00
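The fix described above is essentially an ordering change: read the command line first, then configure logging from the parsed options. A trivial standard-library illustration of that ordering (not the actual InvokeAI `--log_handlers` machinery):

```python
import argparse
import logging


def main() -> None:
    # 1. Parse the command line first, so the logging options are known...
    parser = argparse.ArgumentParser()
    parser.add_argument("--log_level", default="info",
                        choices=["debug", "info", "warning", "error", "critical"])
    args = parser.parse_args()

    # 2. ...then initialize the logger, after the arguments have been read.
    logging.basicConfig(level=getattr(logging, args.log_level.upper()))
    logging.getLogger("InvokeAI").info("logger configured after argument parsing")


if __name__ == "__main__":
    main()
```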
Lincoln Stein
1bca32ed16 Merge branch 'main' into lstein/fix-logger-reconfiguration 2023-06-09 06:27:26 -07:00
psychedelicious
7f91139e21 fix(ui): fix crash when using dropdown on certain device resolutions 2023-06-09 22:19:30 +10:00
blessedcoolant
c53b7c7389 ui: misc fixes (#3525)
[fix(ui): blur tab on click](93f3658a4a)

Fixes issue where after clicking a tab, using the arrow keys changes tab
instead of changing selected image

[fix(ui): fix canvas not filling screen on first load](68be95acbb)

[feat(ui): remove clear temp folder canvas button](813f79f0f9)

This button is nonfunctional.

Soon we will introduce a different way to handle clearing out
intermediate images (likely automated).
2023-06-09 23:44:21 +12:00
psychedelicious
93f3658a4a fix(ui): blur tab on click
Fixes issue where after clicking a tab, using the arrow keys changes tab instead of changing selected image
2023-06-09 18:20:52 +10:00
psychedelicious
68be95acbb fix(ui): fix canvas not filling screen on first load 2023-06-09 17:55:11 +10:00
psychedelicious
813f79f0f9 feat(ui): remove clear temp folder canvas button
This button is nonfunctional.

Soon we will introduce a different way to handle clearing out intermediate images (likely automated).
2023-06-09 17:33:17 +10:00
blessedcoolant
c3ec86bc70 feat(ui): enhance IAICustomSelect (#3523)
Now accepts an array of strings or array of `IAICustomSelectOption`s.
This supports custom labels and tooltips within the select component.
2023-06-09 18:26:20 +12:00
psychedelicious
05a19753c6 feat(ui): remove controlnet model descriptions
These are not yet exposed in the UI - somebody who understands what they do better than I do can add them when we have a place to expose them
2023-06-09 16:20:30 +10:00
psychedelicious
a33327c651 feat(ui): enhance IAICustomSelect
Now accepts an array of strings or array of `IAICustomSelectOption`s. This supports custom labels and tooltips within the select component.
2023-06-09 16:00:17 +10:00
blessedcoolant
6ad7cc4f2a feat(ui): decrease delay on dnd to 150ms (#3522) 2023-06-09 17:54:24 +12:00
psychedelicious
c506355b8b feat(ui): decrease delay on dnd to 150ms 2023-06-09 15:53:17 +10:00
psychedelicious
d54168b8fb feat(nodes): add tests for depth-first execution 2023-06-09 14:53:45 +10:00
psychedelicious
c91b071c47 fix(nodes): use DFS with preorder traversal 2023-06-09 14:53:45 +10:00
psychedelicious
9c57b18008 fix(nodes): update Invoker.invoke() docstring 2023-06-09 14:53:45 +10:00
psychedelicious
69539a0472 feat(nodes): depth-first execution
There was an issue where for graphs w/ iterations, your images were output all at once, at the very end of processing. So if you canceled halfway through an execution of 10 nodes, you wouldn't get any images - even though you'd completed 5 images' worth of inference.

## Cause

Because graphs executed breadth-first (i.e. depth-by-depth), leaf nodes were necessarily processed last. For image generation graphs, your `LatentsToImage` nodes will be leaf nodes, and will be the last depth to be executed.

For example, a `TextToLatents` graph w/ 3 iterations would execute all 3 `TextToLatents` nodes fully before moving to the next depth, where the `LatentsToImage` nodes produce output images, resulting in a node execution order like this:

1. TextToLatents
2. TextToLatents
3. TextToLatents
4. LatentsToImage
5. LatentsToImage
6. LatentsToImage

## Solution

This PR makes two changes to graph execution so that it executes as deeply as it can along each branch of the graph.

### Eager node preparation

We now prepare as many nodes as possible, instead of just a single node at a time.

We also need to change the conditions in which nodes are prepared. Previously, nodes were prepared only when all of their direct ancestors were executed.

The updated logic prepares nodes that:
- are *not* `Iterate` nodes whose inputs have *not* been executed
- do *not* have any unexecuted `Iterate` ancestor nodes

This results in graphs always being maximally prepared.

### Always execute the deepest prepared node

We now choose the next node to execute by traversing from the bottom of the graph instead of the top, choosing the first node whose inputs are all executed.

This means we always execute the deepest node possible.

## Result

Graphs now execute depth-first, so instead of an execution order like this:

1. TextToLatents
2. TextToLatents
3. TextToLatents
4. LatentsToImage
5. LatentsToImage
6. LatentsToImage

... we get an execution order like this:

1. TextToLatents
2. LatentsToImage
3. TextToLatents
4. LatentsToImage
5. TextToLatents
6. LatentsToImage

Immediately after inference, the image is decoded and sent to the gallery.

fixes #3400
2023-06-09 14:53:45 +10:00
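A minimal sketch of the selection rule described in this commit: prepare eligible nodes, then always execute the deepest prepared node whose direct ancestors have all finished. The graph representation, node ids, and helper name are illustrative assumptions, not the actual Invoker/graph code:

```python
from typing import Dict, List, Optional, Set


def next_node_to_execute(
    children: Dict[str, List[str]],  # node id -> ids of its direct children
    prepared: List[str],             # nodes that have been prepared so far
    executed: Set[str],              # nodes that have finished executing
) -> Optional[str]:
    """Pick the deepest prepared node whose direct ancestors have all executed.

    Choosing from the bottom of the graph up is what turns the old
    breadth-first order (all TextToLatents, then all LatentsToImage) into a
    depth-first one, so each LatentsToImage runs right after its parent.
    """
    parents: Dict[str, List[str]] = {n: [] for n in children}
    for node, kids in children.items():
        for kid in kids:
            parents.setdefault(kid, []).append(node)

    def depth(node: str) -> int:
        ps = parents.get(node, [])
        return 0 if not ps else 1 + max(depth(p) for p in ps)

    ready = [n for n in prepared
             if n not in executed and all(p in executed for p in parents.get(n, []))]
    return max(ready, key=depth, default=None)


# Example: 3 iterations of TextToLatents -> LatentsToImage executes depth-first.
if __name__ == "__main__":
    graph = {f"t2l_{i}": [f"l2i_{i}"] for i in range(3)}
    graph.update({f"l2i_{i}": [] for i in range(3)})
    prepared, executed = list(graph), set()
    while (node := next_node_to_execute(graph, prepared, executed)) is not None:
        print("executing", node)
        executed.add(node)
```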
blessedcoolant
7bce455d16 Merge branch 'main' into diffusers-upgrade 2023-06-09 16:27:52 +12:00
blessedcoolant
3f45294c61 feat(ui): restore reset button for init image (#3521) 2023-06-09 16:02:26 +12:00
psychedelicious
fd03c7eebe feat(ui): restore reset button for init image 2023-06-09 14:00:23 +10:00
blessedcoolant
07c49a5726 feat(ui): skip resize on img2img if not needed (#3520) 2023-06-09 15:56:22 +12:00
psychedelicious
8c688f8e29 feat(ui): skip resize on img2img if not needed 2023-06-09 13:54:23 +10:00
Lincoln Stein
3d13167d32 Merge branch 'main' into lstein/fix-logger-reconfiguration 2023-06-08 13:41:24 -07:00
Lincoln Stein
f2bb507ebb allow logger to be reconfigured after startup 2023-06-08 09:23:11 -04:00
blessedcoolant
fe8f3381fc create databases directory on startup (#3518)
This PR creates the databases directory at app startup time. It also
removes a couple of debugging statements that were inadvertently left in
the model manager.
2023-06-08 23:40:32 +12:00
Lincoln Stein
2a6d11e645 create databases directory on startup 2023-06-08 07:17:54 -04:00
Lincoln Stein
01f46d3c7d Merge branch 'main' into lstein/fix-logger-reconfiguration 2023-06-07 19:50:44 -07:00
Lincoln Stein
5f76b62553 Update installer support for main (#3448)
#  Make InvokeAI package installable by mere mortals
    
This commit makes InvokeAI 3.0 installable via PyPI.org and/or the
installer script. The install process is now pretty much identical to
the 2.3 process, including creating launcher scripts `invoke.sh` and
`invoke.bat`.
    
Main changes:
    
1. Moved static web pages into `invokeai/frontend/web` and modified the
API to look for them there. This allows pip to copy the files into the
distribution directory so that the user no longer has to be in the repo root to
launch, and enables PyPI installation with `pip install invokeai`.
    
2. Update invoke.sh and invoke.bat to launch the new web application
properly. This also changes the wording for launching the CLI from
"generate images" to "explore the InvokeAI node system," since I would
not recommend using the CLI to generate images routinely.
    
3. Fix a bug in the checkpoint converter script that was identified
during testing.
    
4. Better error reporting when checkpoint converter fails.
    
5. Rebuild front end.

# Major improvements to the model installer.

1. The text user interface for `invokeai-model-install` has been
expanded to allow the user to install controlnet, LoRA, textual
inversion, diffusers and checkpoint models. The user can install
interactively (without leaving the TUI), or in batch mode after exiting
the application.
 

![image](https://github.com/invoke-ai/InvokeAI/assets/111189/f8f7ac23-3e18-4973-b7fe-729864c703a0)

2. The `invokeai-model-install` command now lets you list, add and
delete models from the command line:

## Listing models
```
$ invokeai-model-install --list diffusers
Diffuser models:
analog-diffusion-1.0      not loaded  diffusers  An SD-1.5 model trained on diverse analog photographs (2.13 GB)
d&d-diffusion-1.0         not loaded  diffusers  Dungeons & Dragons characters (2.13 GB)
deliberate-1.0            not loaded  diffusers  Versatile model that produces detailed images up to 768px (4.27 GB)
DreamShaper               not loaded  diffusers  Imported diffusers model DreamShaper
sd-inpainting-1.5         not loaded  diffusers  RunwayML SD 1.5 model optimized for inpainting, diffusers version (4.27 GB)
sd-inpainting-2.0         not loaded  diffusers  Stable Diffusion version 2.0 inpainting model (5.21 GB)
stable-diffusion-1.5      not loaded  diffusers  Stable Diffusion version 1.5 diffusers model (4.27 GB)
stable-diffusion-2.1      not loaded  diffusers  Stable Diffusion version 2.1 diffusers model, trained on 768 pixel images (5.21 GB)
```

```
$ invokeai-model-install --list tis
Loading Python libraries...

Installed Textual Inversion Embeddings:
   EasyNegative
   ahx-beta-453407d
```

## Installing models

(this example shows correct handling of a server side error at Civitai)
```
$ invokeai-model-install --diffusers https://civitai.com/api/download/models/46259 Linaqruf/anything-v3.0
Loading Python libraries...

[2023-06-05 22:17:23,556]::[InvokeAI]::INFO --> INSTALLING EXTERNAL MODELS
[2023-06-05 22:17:23,557]::[InvokeAI]::INFO --> Probing https://civitai.com/api/download/models/46259 for import
[2023-06-05 22:17:23,557]::[InvokeAI]::INFO --> https://civitai.com/api/download/models/46259 appears to be a URL
[2023-06-05 22:17:23,763]::[InvokeAI]::ERROR --> An error occurred during downloading /home/lstein/invokeai-test/models/ldm/stable-diffusion-v1/46259: Internal Server Error
[2023-06-05 22:17:23,763]::[InvokeAI]::ERROR --> ERROR DOWNLOADING https://civitai.com/api/download/models/46259: {"error":"Invalid database operation","cause":{"clientVersion":"4.12.0"}}
[2023-06-05 22:17:23,764]::[InvokeAI]::INFO --> Probing Linaqruf/anything-v3.0 for import
[2023-06-05 22:17:23,764]::[InvokeAI]::DEBUG --> Linaqruf/anything-v3.0 appears to be a HuggingFace diffusers repo_id
[2023-06-05 22:17:23,768]::[InvokeAI]::INFO --> Loading diffusers model from Linaqruf/anything-v3.0
[2023-06-05 22:17:23,769]::[InvokeAI]::DEBUG --> Using faster float16 precision
[2023-06-05 22:17:23,883]::[InvokeAI]::ERROR --> An unexpected error occurred while downloading the model: 404 Client Error. (Request ID: Root=1-647e9733-1b0ee3af67d6ac3456b1ebfc)

Revision Not Found for url: https://huggingface.co/Linaqruf/anything-v3.0/resolve/fp16/model_index.json.
Invalid rev id: fp16)
Downloading (…)ain/model_index.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████| 511/511 [00:00<00:00, 2.57MB/s]
Downloading (…)cial_tokens_map.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████| 472/472 [00:00<00:00, 6.13MB/s]
Downloading (…)cheduler_config.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████| 341/341 [00:00<00:00, 3.30MB/s]
Downloading (…)okenizer_config.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████| 807/807 [00:00<00:00, 11.3MB/s]
```

## Deleting models

```
 invokeai-model-install --delete --diffusers anything-v3
Loading Python libraries...

[2023-06-05 22:19:45,927]::[InvokeAI]::INFO --> Processing requested deletions
[2023-06-05 22:19:45,927]::[InvokeAI]::INFO --> anything-v3...
[2023-06-05 22:19:45,927]::[InvokeAI]::INFO --> Deleting the cached model directory for Linaqruf/anything-v3.0
[2023-06-05 22:19:45,948]::[InvokeAI]::WARNING --> Deletion of this model is expected to free 4.3G

```
2023-06-07 19:25:07 -07:00
Lincoln Stein
4bbe3b0d00 Merge branch 'main' into release/make-web-dist-startable 2023-06-07 19:21:01 -07:00
Lincoln Stein
9ed86a08f1 multiple small fixes
1. Contents of autoscan directory field are restored after doing an installation.
2. Activate dialogue to choose V2 parameterization when importing from a directory.
3. Remove autoscan directory from init file when its checkbox is unselected.
4. Add widget cycling behavior to install models form.
2023-06-07 17:32:00 -04:00
blessedcoolant
68405910ba Upgrade to Diffusers 0.17.0 2023-06-08 04:42:52 +12:00
blessedcoolant
0a50e2638c fix(ui): default controlnet autoprocess to true (#3513)
I had accidentally defaulted it to false
2023-06-08 01:56:53 +12:00
psychedelicious
fc7c5da4dd fix(ui): default controlnet autoprocess to true
I had accidentally defaulted it to false
2023-06-07 23:55:24 +10:00
Lincoln Stein
a3357e073c refactor exception handling 2023-06-07 07:35:34 -04:00
Lincoln Stein
d114833a12 pause after printing exception 2023-06-07 07:26:14 -04:00
Lincoln Stein
96038bd075 print exception on TUI crash 2023-06-07 07:23:14 -04:00
blessedcoolant
2f383c2598 docs(nodes): update INVOCATIONS.md (#3511) 2023-06-07 20:47:57 +12:00
psychedelicious
702a8d1f72 docs(nodes): update INVOCATIONS.md 2023-06-07 18:44:43 +10:00
psychedelicious
0a8390356f feat(ui): enhance autoprocessing
The processor is automatically selected when model is changed.

But if the user manually changes the processor, processor settings, or disables the new `Auto configure processor` switch, auto processing is disabled.

The user can enable auto configure by turning the switch back on.

When auto configure is disabled, a small dot is overlaid on the expand button to remind the user that the system is not auto configuring the processor for them.

If auto configure is enabled, the processor settings are reset to the default for the selected model.
2023-06-07 18:25:30 +10:00
psychedelicious
844058c0a5 feat(ui): make prompt not required
- also change the placeholder text
2023-06-07 18:25:30 +10:00
psychedelicious
7d74cbe29c fix(ui): make progress image not draggable 2023-06-07 18:25:30 +10:00
psychedelicious
62ac0ed2dc feat(ui): tweak cnet model change
If there is no control image, and the model does not have a default processor, set the processor to `none`.
2023-06-07 18:25:30 +10:00
psychedelicious
ae14adec2a feat(ui): add reset button for control image 2023-06-07 18:25:30 +10:00
psychedelicious
6c2b39d1df feat(ui): improve controlnet image style
css is terrible
2023-06-07 18:25:30 +10:00
psychedelicious
0843028e6e fix(ui): improve dragging activation
- delay of 250ms
- prevent gallery images from accidentally activating native drag and drop
2023-06-07 18:25:30 +10:00
psychedelicious
de0fd87035 fix(ui): when a session errors, reset controlnet processing spinner 2023-06-07 18:25:30 +10:00
psychedelicious
8b6c0be259 feat(ui): fix IAIDndImage button styles when upload disabled 2023-06-07 18:25:30 +10:00
psychedelicious
58fec84858 feat(ui): add upload to IAIDndImage
Add uploading to IAIDndImage
- add `postUploadAction` arg to `imageUploaded` thunk, with several current valid options (set control image, set init, set nodes image, set canvas, or toast)
- updated IAIDndImage to optionally allow click to upload
2023-06-07 18:25:30 +10:00
psychedelicious
f223ad7776 fix(ui): only show loading indicator on processing control images 2023-06-07 18:25:30 +10:00
psychedelicious
00eabf630d fix(ui): fix control image not used if processor type is none 2023-06-07 18:25:30 +10:00
psychedelicious
6245a27650 feat(ui): auto-select controlnet processor
- when the controlnet model is changed, if there is a default processor for the model set, the processor is changed.
- once a control image is selected (and processed), changing the model does not change the processor - must be manually changed
2023-06-07 18:25:30 +10:00
blessedcoolant
fa1ac57c90 Graph overlay was expanding off the screen to the size of the prompt line (#3510)
Sure, this isn't really important at the moment.

Just limited the width and gave it a shadow.

![image](https://github.com/invoke-ai/InvokeAI/assets/115216705/96e2db0a-9edb-48b8-9040-56ce054b5ecf)
2023-06-07 18:01:35 +12:00
mickr777
0f16b1c98d Remove Shadow 2023-06-07 15:51:37 +10:00
mickr777
08e66c5451 Update NodeGraphOverlay.tsx
Graph overlay was expanding off the screen to the size of the prompt
2023-06-07 14:49:03 +10:00
Lincoln Stein
563bf70c95 fix CI failure in configure non-interactive mode; merged with main 2023-06-06 23:24:40 -04:00
Lincoln Stein
49d29420c4 Merge branch 'main' into release/make-web-dist-startable 2023-06-06 23:24:16 -04:00
Lincoln Stein
ae9d0c6c1b fix logger behavior so that it is initialized after command line parsed 2023-06-06 23:19:10 -04:00
Lincoln Stein
d8d11f9bbb quench fp16 rev id not found warning 2023-06-06 22:01:05 -04:00
Lincoln Stein
13fa0d3bc0 make log message textbox deeper 2023-06-06 17:23:13 -04:00
Lincoln Stein
5eeb4b8e06 allow user to abort conversion of V2 models from within TUI 2023-06-06 17:21:50 -04:00
Lincoln Stein
f5044c290d fix crash during model conversion 2023-06-06 17:05:29 -04:00
Lincoln Stein
1b43276e5d make widget selection wrap around 2023-06-06 13:53:11 -07:00
Lincoln Stein
294f086857 configure/install working correctly on windows11 2023-06-06 12:51:34 -07:00
Lincoln Stein
e5024bf5e9 fix conhost launch-with args 2023-06-06 15:17:15 -04:00
blessedcoolant
79198b4bba feat(ui): fix bugs with image deletion (#3506)
- `imageUsage` object was always stale due to react component lifecycle,
fixed this
- cleaned up the deletion listener and context
2023-06-07 05:33:05 +12:00
blessedcoolant
1a2f0984db Merge branch 'main' into feat/ui/fix-stale-imageUsage 2023-06-07 04:35:16 +12:00
psychedelicious
454683e6eb feat(ui): update image urls on connect (#3507)
* feat(ui): update image urls on connect

Add `updateImageUrlsOnConnect` RTK listener:
- requests URLs for *every* image the app knows about, on connect: gallery, selectedImage, initialImage, canvas images, nodes images, controlnet images
- only fires when `shouldUpdateImagesOnConnect` config is enabled

* remove prop

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2023-06-06 10:23:51 -04:00
psychedelicious
bbb2a08e8f feat(ui): fix bugs with image deletion
- `imageUsage` object was always stale due to react component lifecycle, fixed this
- cleaned up the deletion listener and context
2023-06-06 20:01:27 +10:00
psychedelicious
bf116927e1 feat(ui): clear features if image used by them is deleted
This handles the case when an image is deleted but is still in use as e.g. an init image on canvas, or a control image. If we just delete the image, canvas/controlnet/etc may break (the image would just fail to load).

When an image is deleted, the app checks to see if it is in use in:
- Image to Image
- ControlNet
- Unified Canvas
- Node Editor

The delete dialog will always open if the image is in use anywhere, and the user is advised that deleting the image will reset the feature(s).

Even if the user has ticked the box to not confirm on delete, the dialog will still show if the image is in use somewhere.
2023-06-06 14:35:07 +10:00
psychedelicious
3d249c4fa3 feat(ui): refactor image deletion
Add `DeleteImageContext`:
- provide a single function to delete an image
- opens the modal or immediately deletes, if confirm is off
2023-06-06 14:35:07 +10:00
psychedelicious
fa338ddb6a feat(ui): add useGetIsImageInUse
Checks if an image is currently being used eg in canvas, nodes, controlnet, init image.
2023-06-06 14:35:07 +10:00
psychedelicious
b200451330 feat(ui): add nodesSelector 2023-06-06 14:35:07 +10:00
psychedelicious
8283d23b74 feat(ui): remove shouldTransformUrls
This is no longer used.
2023-06-06 14:35:07 +10:00
psychedelicious
2fc0a4d53b feat(ui): improve handling for urls/metadata received
Update images everywhere when urls or metadata is received:
- control images
- init images
- canvas
- nodes
- init image

Also renamed the variable.
2023-06-06 14:35:07 +10:00
psychedelicious
3ff732d583 feat(ui): clear controlnet image when image deleted 2023-06-06 14:35:07 +10:00
psychedelicious
840c632c0a feat(ui): sort images by updated_at instead of created_at
fixes issue where saved staging area images were not sorted as expected in the gallery.
2023-06-06 14:30:53 +10:00
psychedelicious
40d6e4f287 fix(ui): fix canvas auto-save not working 2023-06-06 14:30:53 +10:00
psychedelicious
fc5f9c30a6 fix(ui): fix metadata viewer not working for canvas images 2023-06-06 14:30:53 +10:00
psychedelicious
229de2dbb8 feat(ui): fix canvas saving
- fix "bounding box region only" not being respected when saving
- add toasts for each action
- improve workflow `take()` predicates to use the requestId
2023-06-06 14:30:53 +10:00
psychedelicious
cc22427f25 feat(ui): improve UI on smaller screens
- responsive changes were causing a lot of weird layout issues, had to remove the rest of them
- canvas (non-beta) toolbar now wraps
- reduces minH for prompt boxes a bit
2023-06-06 14:29:57 +10:00
Lincoln Stein
90333c0074 merge with main 2023-06-05 22:03:44 -04:00
Lincoln Stein
54e5301b35 Multiple fixes
1. Model installer works correctly under Windows 11 Terminal
2. Fixed crash when configure script hands control off to installer
3. Kill install subprocess on keyboard interrupt
4. Command-line functionality for --yes configuration and model installation
   restored.
5. New command-line features:
   - install/delete lists of diffusers, LoRAS, controlnets and textual inversions
     using repo ids, paths or URLs.

Help:

```
usage: invokeai-model-install [-h] [--diffusers [DIFFUSERS ...]] [--loras [LORAS ...]] [--controlnets [CONTROLNETS ...]] [--textual-inversions [TEXTUAL_INVERSIONS ...]] [--delete] [--full-precision | --no-full-precision]
                              [--yes] [--default_only] [--list-models {diffusers,loras,controlnets,tis}] [--config_file CONFIG_FILE] [--root_dir ROOT]

InvokeAI model downloader

options:
  -h, --help            show this help message and exit
  --diffusers [DIFFUSERS ...]
                        List of URLs or repo_ids of diffusers to install/delete
  --loras [LORAS ...]   List of URLs or repo_ids of LoRA/LyCORIS models to install/delete
  --controlnets [CONTROLNETS ...]
                        List of URLs or repo_ids of controlnet models to install/delete
  --textual-inversions [TEXTUAL_INVERSIONS ...]
                        List of URLs or repo_ids of textual inversion embeddings to install/delete
  --delete              Delete models listed on command line rather than installing them
  --full-precision, --no-full-precision
                        use 32-bit weights instead of faster 16-bit weights (default: False)
  --yes, -y             answer "yes" to all prompts
  --default_only        only install the default model
  --list-models {diffusers,loras,controlnets,tis}
                        list installed models
  --config_file CONFIG_FILE, -c CONFIG_FILE
                        path to configuration file to create
  --root_dir ROOT       path to root of install directory
```
2023-06-05 21:45:35 -04:00
Lincoln Stein
b31fc43bfa Fix potential race condition in config system (#3466)
There was a potential gotcha in the config system that was previously
merged with main. The `InvokeAIAppConfig` object was configuring itself
from the command line and configuration file within its initialization
routine. However, this could cause it to read `argv` from the command
line at unexpected times. This PR fixes the object so that it only reads
from the init file and command line when its `parse_args()` method is
explicitly called, which should be done at startup time in any top level
script that uses it.

In addition, using the `get_invokeai_config()` function to get a global
version of the config object didn't feel pythonic to me, so I have
changed this to `InvokeAIAppConfig.get_config()` throughout.

## Updated Usage

In the main script, at startup time, do the following:

```
from invokeai.app.services.config import InvokeAIAppConfig
config = InvokeAIAppConfig.get_config()
config.parse_args()
```

In non-main scripts, it is not necessary (or recommended) to call
`parse_args()`:
```
from invokeai.app.services.config import InvokeAIAppConfig
config = InvokeAIAppConfig.get_config()
```

The configuration object properties can be overridden when
`get_config()` is called by passing initialization values in the usual
way. If a property is set this way, then it will not be changed by
subsequent calls to `parse_args()`, but can only be changed by
explicitly setting the property.

```
config = InvokeAIAppConfig.get_config(nsfw_checker=True)
config.parse_args(argv=['--no-nsfw_checker'])
config.nsfw_checker
# True
```

You may specify alternative argv lists and configuration files in
`parse_args()`:

```
config.parse_args(argv=['--no-nsfw_checker'],
                  conf=OmegaConf.load('/tmp/test.yaml'))
```

For backward compatibility, the `get_invokeai_config()` function is
still available from the module, but has been removed from the rest of
the source tree.
2023-06-05 15:26:50 -07:00
Lincoln Stein
9bcf0b2251 Merge branch 'main' into lstein/config-management-fixes 2023-06-05 15:10:33 -07:00
Lincoln Stein
d4bc98c383 revert to conhost method 2023-06-05 11:46:01 -07:00
blessedcoolant
bc892c535c feat(ui): fix image fit (#3501)
- Prevent init, current & control images from overflowing
2023-06-05 20:48:55 +12:00
psychedelicious
099e1e7c08 feat(ui): fix image fit
- Prevent init, current & control images from overflowing
2023-06-05 17:16:30 +10:00
psychedelicious
b1000e30c1 feat(ui): disable keyboard dnd
Need to fix a bug w/ collision detection before enabling it. Will pursue later.
2023-06-05 15:24:24 +10:00
psychedelicious
7bd94eac0e feat(ui): support image dnd to canvas 2023-06-05 15:24:24 +10:00
psychedelicious
2c77563dcc feat(ui): move DropOverlay into its own IAIDropOverlay component 2023-06-05 15:24:24 +10:00
Lincoln Stein
603c9a587e open Windows Terminal maximized 2023-06-05 00:24:13 -04:00
Lincoln Stein
1a5a2dfda9 increased window size 2023-06-04 23:54:52 -04:00
Lincoln Stein
090b7eeaf3 workaround to get adequate window size on Windows Terminal 2023-06-04 23:44:07 -04:00
Lincoln Stein
117536324c the "restore" env variable in .bat launcher confuses pydantic 2023-06-04 22:53:46 -04:00
Lincoln Stein
999c092b6a fix mouse and window resizing issues 2023-06-04 22:00:11 -04:00
Lincoln Stein
9e31b1f387 Merge branch 'main' into lstein/config-management-fixes 2023-06-04 18:17:43 -04:00
Lincoln Stein
cb157ea530 fix crash when install-models launched from config script 2023-06-04 14:55:51 -04:00
Lincoln Stein
5f6f38074d merge with main 2023-06-04 13:59:31 -04:00
blessedcoolant
25b8dd340a Prompting: enable long prompts and compel's new .and() concatenating feature (#3497)
this PR adds long prompt support and enables compel's new `.and()`
concatenation feature which improves image quality especially with SD2.1

example of a long prompt:
> a moist sloppy pindlesackboy sloppy hamblin' bogomadong, Clem Fandango
is pissed-off, Wario's Woods in background, making a noise like
ga-woink-a
![000075 6dfd7adf 466129594](https://github.com/invoke-ai/InvokeAI/assets/144366/051608b6-8d52-463b-af10-04b695cda9c1)

the same prompt broken into fragments and concatenated using `.and()`
(syntax works like `.blend()`):
```
("a moist sloppy pindlesackboy sloppy hamblin' bogomadong", 
"Clem Fandango is pissed-off", 
"Wario's Woods in background", 
"making a noise like ga-woink-a").and()
```
![000076 68b1c320 466129594](https://github.com/invoke-ai/InvokeAI/assets/144366/3fee291f-5562-40f9-9c3c-a73765fc893a)


and a less silly example:

> A dream of a distant galaxy, by Caspar David Friedrich, matte
painting, trending on artstation, HQ
![000129 1b33b559 2793529321](https://github.com/invoke-ai/InvokeAI/assets/144366/d4113756-ed0d-49cd-bb2e-a2fc4a09e0af)

the same prompt broken into two fragments and concatenated:
```
("A dream of a distant galaxy, by Caspar David Friedrich, matte painting", 
"trending on artstation, HQ").and()
```
![000128 b5d5cd62 2793529321](https://github.com/invoke-ai/InvokeAI/assets/144366/c373c009-05db-4c42-8a1d-c89fbdb334ec)

as with `.blend()` you can also weight the parts eg `("a man eating an
apple", "sitting on the roof of a car", "high quality, trending on
artstation, 8K UHD").and(1, 0.5, 0.5)` which will assign weight `1` to
`a man eating an apple` and `0.5` to `sitting on the roof of a car` and
`high quality, trending on artstation, 8K UHD`.
2023-06-05 04:53:08 +12:00
blessedcoolant
fb06f5b892 Merge branch 'main' into feat_compel_longprompts_and_concat 2023-06-05 04:34:39 +12:00
Lincoln Stein
1a7fb601dc ask user for v2 variant when model manager can't infer it 2023-06-04 11:27:44 -04:00
Damian Stewart
cdcfda164d enable long prompts, upgrade compel to enable .and() (concatenating prompts) 2023-06-04 15:30:54 +02:00
blessedcoolant
966b154a1f Update web README.md (#3496) 2023-06-05 00:56:00 +12:00
psychedelicious
95fa66661c dummy commit to make github actions run 2023-06-04 22:55:35 +10:00
psychedelicious
6247b79111 docs(ui): update API_CLIENT 2023-06-04 22:46:53 +10:00
psychedelicious
5831364f9c Update web README.md 2023-06-04 22:44:18 +10:00
psychedelicious
919b81cff1 fix(ui): fix rebase issue 2023-06-04 22:34:58 +10:00
psychedelicious
065fff7db5 fix(ui): fix wonkiness with image dnd 2023-06-04 22:34:58 +10:00
psychedelicious
a664ee30a2 feat(ui): do not change images if the dropped image is the same image 2023-06-04 22:34:58 +10:00
psychedelicious
03f3ad435a feat(ui): updated controlnet logic/ui 2023-06-04 22:34:58 +10:00
psychedelicious
2270c270ef feat(ui): add tooltip to IAISwitch 2023-06-04 22:34:58 +10:00
psychedelicious
4f7820719b feat(ui): add ellipsis direction to IAICustomSelect 2023-06-04 22:34:58 +10:00
psychedelicious
fa285883ad feat(ui): make OverlayDragImage translucent 2023-06-04 22:34:58 +10:00
psychedelicious
474fca8e6a feat(ui): add controlNetDenylist 2023-06-04 22:34:58 +10:00
psychedelicious
5dc0250b00 feat(ui): ControlNet layout tweaks 2023-06-04 22:34:58 +10:00
psychedelicious
f269377a01 feat(ui): "ProcessorOptionsContainer" -> "ProcessorWrapper", organise 2023-06-04 22:34:58 +10:00
psychedelicious
d0406024e3 feat(ui): IAICustomSelect tweak styles 2023-06-04 22:34:58 +10:00
blessedcoolant
aa3a969bd2 feat: Update ControlNet Model List & Map 2023-06-04 22:34:58 +10:00
blessedcoolant
73a95973a8 wip: Add Wrapper Container for Preprocessor Options
For fast altering of the layout across all pre-processors.
2023-06-04 22:34:58 +10:00
blessedcoolant
bf4fe3c1ac wip: Fixing layout shifts with the ControlNet tab 2023-06-04 22:34:58 +10:00
psychedelicious
d6c08ba469 feat(ui): add mini/advanced controlnet ui 2023-06-04 22:34:58 +10:00
psychedelicious
69f0ba65f1 chore(ui): bump react-icons 2023-06-04 22:34:58 +10:00
psychedelicious
828c86964d feat(ui): IAICustomSelect prevent label wrap 2023-06-04 22:34:58 +10:00
psychedelicious
54b7ddd63f feat(ui): IAIDndImage cursor: 'grab' 2023-06-04 22:34:58 +10:00
psychedelicious
a0dde66b5d feat(ui): more work on controlnet mini 2023-06-04 22:34:58 +10:00
psychedelicious
b6b3b9f99c feat(ui): make scrollbar less bright 2023-06-04 22:34:58 +10:00
psychedelicious
faa69f8a47 feat(ui): add alpha colors 2023-06-04 22:34:58 +10:00
psychedelicious
d92c7f5483 feat(ui): organize IAIDndImage component 2023-06-04 22:34:58 +10:00
psychedelicious
6b824eb112 feat(ui): initial mini controlnet UI, dnd improvements 2023-06-04 22:34:58 +10:00
psychedelicious
72b4371804 feat(ui): control image auto-process 2023-06-04 22:34:58 +10:00
psychedelicious
fa290aff8d feat(ui): add defaults for all processors 2023-06-04 22:34:58 +10:00
psychedelicious
3d99d7ae8b feat(ui): update handling of inProgess, do not allow cnet process when processing 2023-06-04 22:34:58 +10:00
psychedelicious
2eb367969c feat(ui): do not autoprocess control if invocation in progress 2023-06-04 22:34:58 +10:00
psychedelicious
9cdad95f48 feat(ui): add rest of controlnet processors 2023-06-04 22:34:58 +10:00
psychedelicious
707ed39300 chore(ui): regen api client 2023-06-04 22:34:58 +10:00
psychedelicious
6bbb5f061a feat(nodes): update controlnet names/descriptions 2023-06-04 22:34:58 +10:00
psychedelicious
6896e69e95 fix(ui): fix multiple controlnets 2023-06-04 22:34:58 +10:00
psychedelicious
b17f4c1650 feat(ui): more tweaking controlnet ui 2023-06-04 22:34:58 +10:00
psychedelicious
98493ed9e2 feat(ui): reorg parameter panel to make room for controlnet 2023-06-04 22:34:58 +10:00
psychedelicious
94c953deab feat(ui): get processed images back into controlnet ui 2023-06-04 22:34:58 +10:00
psychedelicious
fa4d88e163 feat(ui): improve drag and drop ux 2023-06-04 22:34:58 +10:00
psychedelicious
b1e1e3efc7 fix(ui): fix IAISelectableImage fallback 2023-06-04 22:34:58 +10:00
psychedelicious
3b9426eb72 feat(ui): controlnet/image dnd wip
Implement `dnd-kit` for image drag and drop
- vastly simplifies logic because we can drag and drop non-serializable data (like an `ImageDTO`)
- also much prettier
- also will fix conflicts with file upload via OS drag and drop, bc `dnd-kit` does not use native HTML drag and drop API
- Implemented for Init image, controlnet, and node editor so far

More progress on the ControlNet UI
2023-06-04 22:34:58 +10:00
psychedelicious
e2e07696fc feat(ui): wip controlnet ui 2023-06-04 22:34:58 +10:00
psychedelicious
d6a959b000 feat(nodes): tidy controlnet processor nodes & improve descriptions 2023-06-04 22:34:58 +10:00
Lincoln Stein
c3935d3849 feat(nodes): add separate scripts to launch cli and web (#3495) 2023-06-04 08:13:14 -04:00
psychedelicious
383e3d77cb feat(nodes): add separate scripts to launch cli and web 2023-06-04 22:02:47 +10:00
Lincoln Stein
31e97ead2a move invokeai.db to ~/invokeai/databases
- The invokeai.db database file has now been moved into
  `INVOKEAIROOT/databases`. Using the plural here to allow for a possible
  future with more than one database file.

- Removed a few dangling debug messages that appeared during
  testing.

- Rebuilt frontend to test web.
2023-06-03 20:25:34 -04:00
Lincoln Stein
0b49995659 merge with main 2023-06-03 20:06:27 -04:00
Lincoln Stein
ff204db6b2 Add logging configuration (#3460)
This PR provides a number of options for controlling how InvokeAI logs
messages, including options to log to a file, syslog and a web server.
Several logging handlers can be configured simultaneously.

## Controlling How InvokeAI Logs Status Messages

InvokeAI logs status messages using a configurable logging system. You
can log to the terminal window, to a designated file on the local
machine, to the syslog facility on a Linux or Mac, or to a properly
configured web server. You can configure several logs at the same time,
and control the level of message logged and the logging format (to a
limited extent).

Three command-line options control logging:

### `--log_handlers <handler1> <handler2> ...`

This option activates one or more log handlers. Options are "console",
"file", "syslog" and "http". To specify more than one, separate them by
spaces:

```bash
invokeai-web --log_handlers console syslog=/dev/log file=C:\Users\fred\invokeai.log
```

The format of these options is described below.

### `--log_format {plain|color|legacy|syslog}`

This controls the format of log messages written to the console. Only
the "console" log handler is currently affected by this setting.

* "plain" provides formatted messages like this:

```bash

[2023-05-24 23:18:50,352]::[InvokeAI]::DEBUG --> this is a debug message
[2023-05-24 23:18:50,352]::[InvokeAI]::INFO --> this is an informational message
[2023-05-24 23:18:50,352]::[InvokeAI]::WARNING --> this is a warning
[2023-05-24 23:18:50,352]::[InvokeAI]::ERROR --> this is an error
[2023-05-24 23:18:50,352]::[InvokeAI]::CRITICAL --> this is a critical error
```

* "color" produces similar output, but the text will be color coded to
indicate the severity of the message.

* "legacy" produces output similar to InvokeAI versions 2.3 and earlier:

```bash
### this is a critical error
*** this is an error
** this is a warning
>> this is an informational message
   | this is a debug message
```

* "syslog" produces messages suitable for syslog entries:

```bash
InvokeAI [2691178] <CRITICAL> this is a critical error
InvokeAI [2691178] <ERROR> this is an error
InvokeAI [2691178] <WARNING> this is a warning
InvokeAI [2691178] <INFO> this is an informational message
InvokeAI [2691178] <DEBUG> this is a debug message
```

(note that the date, time and hostname will be added by the syslog
system)

### `--log_level {debug|info|warning|error|critical}`

Providing this command-line option will cause only messages at the
specified level or above to be emitted.

## Console logging

When "console" is provided to `--log_handlers`, messages will be written
to the command line window in which InvokeAI was launched. By default,
the color formatter will be used unless overridden by `--log_format`.

## File logging

When "file" is provided to `--log_handlers`, entries will be written to
the file indicated in the path argument. By default, the "plain" format
will be used:

```bash
invokeai-web --log_handlers file=/var/log/invokeai.log
```

## Syslog logging

When "syslog" is requested, entries will be sent to the syslog system.
There are a variety of ways to control where the log message is sent:

* Send to the local machine using the `/dev/log` socket:

```
invokeai-web --log_handlers syslog=/dev/log
```

* Send to the local machine using a UDP message:

```
invokeai-web --log_handlers syslog=localhost
```

* Send to the local machine using a UDP message on a nonstandard port:

```
invokeai-web --log_handlers syslog=localhost:512
```

* Send to a remote machine named "loghost" on the local LAN using
facility LOG_USER and UDP packets:

```
invokeai-web --log_handlers syslog=loghost,facility=LOG_USER,socktype=SOCK_DGRAM
```

This can be abbreviated `syslog=loghost`, as LOG_USER and SOCK_DGRAM are the
defaults.

* Send to a remote machine named "loghost" using the facility LOCAL0 and
using a TCP socket:

```
invokeai-web --log_handlers syslog=loghost,facility=LOG_LOCAL0,socktype=SOCK_STREAM
```

If no arguments are specified (just a bare "syslog"), then the logging
system will look for a UNIX socket named `/dev/log`, and if not found
try to send a UDP message to `localhost`. The Macintosh OS used to
support logging to a socket named `/var/run/syslog`, but this feature
has since been disabled.

## Web logging

If you have access to a web server that is configured to log messages
when a particular URL is requested, you can log using the "http" method:

```
invokeai-web --log_handlers http=http://my.server/path/to/logger,method=POST
```

The optional [,method=] part can be used to specify whether the URL
accepts GET (default) or POST messages.

Currently password authentication and SSL are not supported.

## Using the configuration file

You can set and forget logging options by adding a "Logging" section to
`invokeai.yaml`:

```
InvokeAI:
  [... other settings...]
  Logging:
    log_handlers:
       - console
       - syslog=/dev/log
    log_level: info
    log_format: color
```
2023-06-03 20:03:40 -04:00
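
For readers who want to see roughly how the handler options above map onto Python's standard `logging` module, here is a minimal sketch (the formatter string and file path are illustrative; this is not InvokeAI's actual logging code):

```python
import logging
import logging.handlers

# Roughly equivalent to:
#   invokeai-web --log_handlers console syslog=/dev/log file=/var/log/invokeai.log --log_level info
logger = logging.getLogger("InvokeAI")
logger.setLevel(logging.INFO)

plain = logging.Formatter("[%(asctime)s]::[%(name)s]::%(levelname)s --> %(message)s")

console = logging.StreamHandler()          # "console" handler
console.setFormatter(plain)
logger.addHandler(console)

file_handler = logging.FileHandler("/var/log/invokeai.log")  # "file" handler
file_handler.setFormatter(plain)
logger.addHandler(file_handler)

logger.addHandler(logging.handlers.SysLogHandler(address="/dev/log"))  # "syslog" handler

logger.info("this is an informational message")
```
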
Lincoln Stein
f74f3d6a3a many TUI improvements:
1. Separated the "starter models" and "more models" sections. This
   gives us room to list all installed diffusers models, not just
   those that are on the starter list.

2. Support mouse-based paste into the textboxes with either middle
   or right mouse buttons.

3. Support terminal-style cursor movement:
     ^A to move to beginning of line
     ^E to move to end of line
     ^K kill text to right and put in killring
     ^Y yank text back

4. Internal code cleanup.
2023-06-03 16:17:53 -04:00
Lincoln Stein
713fb061e8 Merge branch 'main' into release/make-web-dist-startable 2023-06-02 23:19:33 -04:00
Lincoln Stein
77b7680b32 slight refactoring of code; configure --yes should work now 2023-06-02 23:19:14 -04:00
Lincoln Stein
ff63433591 Merge branch 'main' into lstein/config-management-fixes 2023-06-02 22:56:43 -04:00
Lincoln Stein
31281d7181 Merge branch 'main' into lstein/logging-improvements 2023-06-02 22:56:13 -04:00
Lincoln Stein
72d1e4e404 fix bug in model_manager that prevented import of inpainting models 2023-06-02 22:39:26 -04:00
Lincoln Stein
91918e648b dynamic display of log messages now working 2023-06-02 22:24:46 -04:00
Lincoln Stein
1390b65a9c new TUI is fully functional; needs some polishing 2023-06-02 17:20:50 -04:00
blessedcoolant
82231369d3 Make Invoke Button also the progress bar (#3492)
On some screens the progress bar at the top is hard to see; the bar should
only show when generation is in progress


![Animation](https://github.com/invoke-ai/InvokeAI/assets/115216705/04f945d3-377b-4646-b125-1355e74b6b09)
2023-06-02 19:30:45 +12:00
blessedcoolant
7620bacc01 feat: Add temporary NodeInvokeButton 2023-06-02 17:55:15 +12:00
blessedcoolant
ea9cf04765 fix: Remove progress bg instead of altering button bg 2023-06-02 17:36:14 +12:00
blessedcoolant
47301e6f85 fix: Do the same without zIndex 2023-06-02 17:33:38 +12:00
blessedcoolant
f143fb7254 feat: Make Invoke Button also the progress bar 2023-06-02 17:24:40 +12:00
mickr777
2bdb655375 Change to absolute 2023-06-02 14:59:10 +10:00
Lincoln Stein
41f7758977 listing, downloading and deleting LoRAs working; TI support pending 2023-06-02 00:40:15 -04:00
mickr777
8ae1eaaccc Add Progress bar under invoke Button
On some screens the progress bar at the top of the screen gets cut off
2023-06-02 14:19:02 +10:00
Mary Hipp
d66979073b add optional config for settings modal 2023-06-02 00:36:45 +10:00
psychedelicious
c9e621093e fix(ui): fix looping gallery images fetch
The gallery could get into a state where it thought it had just reached the end of the list and endlessly fetched more images, even though there were no more images to fetch (weird, I know).

Add some logic to remove the `end reached` handler when there are no more images to load.
2023-06-02 00:34:03 +10:00
psychedelicious
e06ba40795 fix(ui): do not allow dpmpp_2s to be used ever
it doesn't work for the img2img pipelines, but the implemented conditional display could break the scheduler selection dropdown.

simple fix until diffusers merges the fix - never use this scheduler.
2023-06-02 00:30:01 +10:00
psychedelicious
6571e4c2fd feat(ui): refactor parameter recall
- use zod to validate parameters before recalling
- update recall params hook to handle all validation and UI feedback
2023-06-02 00:30:01 +10:00
Lincoln Stein
ff9240b51d slight code cleanup 2023-06-01 00:45:07 -04:00
Lincoln Stein
18466e01fd tab selection seems very natural; not wired to backend yet 2023-06-01 00:43:28 -04:00
Lincoln Stein
e9821ab711 implemented tabbed model selection; not wired to backend yet 2023-06-01 00:31:46 -04:00
Lincoln Stein
d6530df635 rename invokeai.backend.config to invokeai.backend.install 2023-05-31 21:34:20 -04:00
psychedelicious
062b2cf46f fix(ui): fix width and height not working on txt2img tab
I missed a spot when working on the graph logic yesterday.
2023-05-30 18:41:09 -04:00
Lincoln Stein
082ecf6f25 minor formatting improvements 2023-05-30 13:59:32 -04:00
Lincoln Stein
1632ac6b9f add controlnet model downloading 2023-05-30 13:49:43 -04:00
psychedelicious
877959b413 fix(ui): ensure download image opens in new tab 2023-05-30 09:22:54 -04:00
psychedelicious
6e60f7517b feat(ui): add model description tooltips 2023-05-30 09:06:13 -04:00
psychedelicious
296ee6b7ea feat(ui): tidy ParamScheduler component 2023-05-30 09:06:13 -04:00
psychedelicious
7c7ffddb2b feat(ui): upgrade IAICustomSelect to optionally display tooltips for each item 2023-05-30 09:06:13 -04:00
psychedelicious
e1ae7842ff feat(ui): add defaultModel to config 2023-05-30 09:06:13 -04:00
psychedelicious
9687fe7bac fix(ui): set default model to first model (alpha sort) 2023-05-30 09:06:13 -04:00
psychedelicious
a9a2bd90c2 fix(nodes): set min and max for l2l strength 2023-05-30 09:06:13 -04:00
psychedelicious
47ca71a7eb fix(nodes): set cfg_scale min to 1 in latents 2023-05-30 09:06:13 -04:00
psychedelicious
a9c47237b1 fix(ui): mark img2img resize node intermediate 2023-05-30 09:06:13 -04:00
psychedelicious
33bbae2f47 fix(ui): fix missing init image when fit disabled 2023-05-30 09:06:13 -04:00
psychedelicious
fab7a1d337 fix(ui): fix bug with staging bbox not resetting 2023-05-30 09:06:13 -04:00
psychedelicious
cffcf80977 fix(ui): remove w/h from canvas params, add bbox w/h 2023-05-30 09:06:13 -04:00
psychedelicious
1a3fd05b81 fix(ui): fix canvas bbox autoscale 2023-05-30 09:06:13 -04:00
psychedelicious
c22c6ca135 fix(ui): fix img2img fit 2023-05-30 09:06:13 -04:00
psychedelicious
3afb6a387f chore(ui): regen api 2023-05-30 09:06:13 -04:00
psychedelicious
33e5ed7180 fix(ui): fix edge case in nodes graph building
Inputs with explicit values are validated by pydantic even if they also
have a connection (which is the actual value that is used).

Fix this by omitting explicit values for inputs that have a connection.
2023-05-30 09:06:13 -04:00
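
A minimal sketch of the idea described above: when serialising a node for the graph payload, drop any field that is also the destination of an edge, so pydantic never validates the stale explicit value. The graph/edge shapes here are hypothetical, not InvokeAI's actual node types.

```python
from typing import Any

def build_node_payload(
    node_id: str,
    fields: dict[str, Any],
    edges: list[dict[str, str]],
) -> dict[str, Any]:
    """Omit explicit values for inputs that are fed by a connection."""
    # Collect the input field names on this node that have an incoming edge.
    connected_inputs = {
        e["destination_field"] for e in edges if e["destination_node"] == node_id
    }
    # Keep only the fields that are NOT connected; connected fields get their
    # value from the edge at execution time, so a stale explicit value would
    # only trip pydantic validation.
    return {k: v for k, v in fields.items() if k not in connected_inputs}

# Example: "image" is connected, so it is dropped from the payload.
payload = build_node_payload(
    "resize_1",
    {"width": 512, "height": 512, "image": None},
    [{"destination_node": "resize_1", "destination_field": "image"}],
)
assert payload == {"width": 512, "height": 512}
```
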
psychedelicious
2067757fab feat(ui): enable progress images by default 2023-05-30 09:06:13 -04:00
user1
b1b94a3d56 Fixed problem with inpainting after controlnet support was added to main.
The problem was that controlnet support involved adding **kwargs to method calls down in the denoising loop, and AddsMaskLatents didn't accept a **kwargs argument. So it was changed to accept and pass on **kwargs.
2023-05-30 08:01:21 -04:00
Lincoln Stein
c9ee42450e added controlnet models to frontend; backend needs to be done 2023-05-30 00:38:37 -04:00
Lincoln Stein
10fe31c2a1 Merge branch 'main' into lstein/config-management-fixes 2023-05-29 21:03:03 -04:00
Lincoln Stein
dc54cbb1fc Merge branch 'main' into release/make-web-dist-startable 2023-05-29 14:16:10 -04:00
psychedelicious
070218aba7 feat(ui): add progress image toggle to current image buttons 2023-05-29 09:07:46 -04:00
psychedelicious
f1c226b171 fix(ui): remove console.log() 2023-05-29 09:07:46 -04:00
psychedelicious
7004430380 feat(ui): gallery filter dropdown -> Images/Assets toggle 2023-05-29 09:07:46 -04:00
psychedelicious
1ddc620192 feat(ui): only cancel on staging commit if processing 2023-05-29 09:07:46 -04:00
psychedelicious
a7cebbd970 feat(ui): cancel session when staging image accepted 2023-05-29 09:07:46 -04:00
psychedelicious
d97438b0b3 fix(ui): fix typo in actionsDenylist 2023-05-29 09:07:46 -04:00
psychedelicious
4522f3f4c9 fix(ui): fix progress images in canvas 2023-05-29 09:07:46 -04:00
psychedelicious
6fe28980b0 feat(ui): revert in-gallery progress
wasn't fully baked. Will revisit in the future.
2023-05-29 09:07:46 -04:00
psychedelicious
4aec5d8ffc fix(ui): typo 2023-05-29 09:07:46 -04:00
psychedelicious
bbb4e8f5ef feat(nodes): add resize image and scale image nodes 2023-05-29 09:07:46 -04:00
psychedelicious
bce33ea62e fix(ui): when session is complete, null out progress image
This may cause minor gallery jumpiness at the very end of processing, but is necessary to prevent the progress image from sticking around if the last node in a session did not have an image output.
2023-05-29 09:07:46 -04:00
psychedelicious
e4705d5ce7 fix(ui): add additional socket event layer to gate handling socket events
Some socket events should not be handled by the slice reducers. For example, generation progress should not be handled for a canceled session.

Added another layer of socket actions.

Example:
- `socketGeneratorProgress` is dispatched when the actual socket event is received
- Listener middleware exclusively handles this event and determines if the application should also handle it
- If so, it dispatches `appSocketGeneratorProgress`, which the slices can handle

Needed to fix issues related to canceling invocations.
2023-05-29 09:07:46 -04:00
psychedelicious
6764b2a854 fix(ui): fix save to gallery without bounding box 2023-05-29 09:07:46 -04:00
psychedelicious
970340cf62 fix(ui): infill and scaling options label 2023-05-29 09:07:46 -04:00
psychedelicious
043f9d9ba4 fix(ui): fix auto-switch to new images 2023-05-29 09:07:46 -04:00
psychedelicious
6f82801d07 fix(ui): fix canvas save to gallery incorrect is_intermediate flag 2023-05-28 20:19:56 -04:00
psychedelicious
3e3dd39ae4 fix(nodes): fix images service update() for is_intermediate 2023-05-28 20:19:56 -04:00
psychedelicious
89aa06e014 feat(ui): consolidate images slice
Now that images are in a database and we can make filtered queries, we can do away with the cumbersome `resultsSlice` and `uploadsSlice`.

- Remove `resultsSlice` and `uploadsSlice` entirely
- Add `imagesSlice`, which fills the same role
- Convert the application to use `imagesSlice`, reducing a lot of messy logic where we had to check which category was selected
- Add a simple filter popover to the gallery, which lets you select any number of image categories
2023-05-28 20:19:56 -04:00
psychedelicious
6cc00ef4b7 chore(ui): regen api client 2023-05-28 20:19:56 -04:00
psychedelicious
f31e62afad feat(nodes): make list images route use offset pagination
Because we dynamically insert images into the DB and UI's images state, `page`/`per_page` pagination makes loading the images awkward.

Using `offset`/`limit` pagination lets us query for images with an offset equal to the number of images already loaded (which match the query parameters).

The result is that we always get the correct next page of images when loading more.
2023-05-28 20:19:56 -04:00
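
A rough sketch of the offset/limit pattern described above, against a throwaway SQLite table (the table and column names are assumptions, not the real schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE images (image_name TEXT, created_at INTEGER)")
conn.executemany(
    "INSERT INTO images VALUES (?, ?)",
    [(f"img_{i}.png", i) for i in range(25)],
)

def list_images(offset: int, limit: int) -> list[str]:
    # The client passes offset = number of images it has already loaded
    # (that match the query), so each request returns the next page of results.
    rows = conn.execute(
        "SELECT image_name FROM images ORDER BY created_at DESC LIMIT ? OFFSET ?",
        (limit, offset),
    ).fetchall()
    return [name for (name,) in rows]

first_page = list_images(offset=0, limit=10)
next_page = list_images(offset=len(first_page), limit=10)
```
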
psychedelicious
38fd2ad45d fix(ui): fix metadata viewer crash 2023-05-28 20:19:56 -04:00
psychedelicious
05b99b5377 fix(ui): fix erroneously displays is_intermediate field on nodes 2023-05-28 20:19:56 -04:00
psychedelicious
08a14ee6d5 fix(nodes): fix conflicts with controlnet 2023-05-28 20:19:56 -04:00
psychedelicious
29fcc92da9 feat(ui): handle new image origin/category setup
- Update all thunks & network related things
- Update gallery

What I have not done yet is rename the gallery tabs and the relevant slices, but I believe the functionality is all there.

Also I fixed several bugs along the way but couldn't really commit them separately because I was refactoring. Can't remember what they were, but they were related to the gallery image switching.
2023-05-28 20:19:56 -04:00
psychedelicious
d78e3572e3 chore(ui): regen api client 2023-05-28 20:19:56 -04:00
psychedelicious
160267c71a feat(nodes): refactor image types
- Remove `ImageType` entirely, it is confusing
- Create `ResourceOrigin`, may be `internal` or `external`
- Revamp `ImageCategory`, may be `general`, `mask`, `control`, `user`, `other`. Expect to add more as time goes on
- Update images `list` route to accept `include_categories` OR `exclude_categories` query parameters to afford finer-grained querying. All services are updated to accommodate this change.

The new setup should account for our types of images, including the combinations we couldn't really handle until now:
- Canvas init and masks
- Canvas when saved-to-gallery or merged
2023-05-28 20:19:56 -04:00
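
A hedged sketch of the types described above; the enum values come from the commit message, while the filtering helper is purely illustrative:

```python
from enum import Enum
from typing import Optional

class ResourceOrigin(str, Enum):
    """Where a resource came from."""
    INTERNAL = "internal"
    EXTERNAL = "external"

class ImageCategory(str, Enum):
    """What an image is used for; more categories are expected over time."""
    GENERAL = "general"
    MASK = "mask"
    CONTROL = "control"
    USER = "user"
    OTHER = "other"

def resolve_categories(
    include_categories: Optional[list[ImageCategory]] = None,
    exclude_categories: Optional[list[ImageCategory]] = None,
) -> set[ImageCategory]:
    # The list route accepts include_categories OR exclude_categories;
    # with neither, everything is returned.
    if include_categories is not None:
        return set(include_categories)
    if exclude_categories is not None:
        return set(ImageCategory) - set(exclude_categories)
    return set(ImageCategory)

assert resolve_categories(exclude_categories=[ImageCategory.MASK]) == set(ImageCategory) - {ImageCategory.MASK}
```
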
psychedelicious
fd47e70c92 feat(nodes): use higher precision timestamps in db 2023-05-28 20:19:56 -04:00
psychedelicious
9317b42e5f feat(nodes, ui): wip image types 2023-05-28 20:19:56 -04:00
psychedelicious
bdab73701f fix(ui): canvas images not added to staging 2023-05-28 20:19:56 -04:00
psychedelicious
3ea5e78322 fix(nodes): fix list images route param descriptions 2023-05-28 20:19:56 -04:00
psychedelicious
f609ee21a2 fix(ui): handle intermediates when fetching gallery 2023-05-28 20:19:56 -04:00
psychedelicious
f51defeeb3 chore(ui): regen api client 2023-05-28 20:19:56 -04:00
psychedelicious
ee0225f4ba fix(nodes): handle intermediates during images.get_many() 2023-05-28 20:19:56 -04:00
psychedelicious
33a0af4637 feat(nodes): add nameservice
Currently only used to make names for images, but when latents, conditioning, etc. are managed in the DB, it will do the same for them.

Intended to eventually support custom naming schemes.
2023-05-28 20:19:56 -04:00
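
A minimal sketch of what a swappable name service could look like (the class and method names are assumptions based on the commit message, not the real interface):

```python
from abc import ABC, abstractmethod
from uuid import uuid4

class NameServiceBase(ABC):
    """Creates names for stored resources (currently just images)."""

    @abstractmethod
    def create_image_name(self) -> str:
        ...

class SimpleNameService(NameServiceBase):
    """Default scheme: a random UUID; custom schemes could be swapped in later."""

    def create_image_name(self) -> str:
        return f"{uuid4()}.png"

print(SimpleNameService().create_image_name())
```
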
Lincoln Stein
d37b08a7dd Merge branch 'main' into release/make-web-dist-startable 2023-05-28 19:46:09 -04:00
user1
9a796364da Fixed controlnet preprocessors and controlnet handling in TextToLatents to work with revised Image services. 2023-05-26 21:44:00 -04:00
user1
1ad4eb3a7b Progress toward improvement in fieldTemplateBuilder.ts getFieldType() 2023-05-26 21:44:00 -04:00
user1
3767a453bb Added float to FIELD_TYPE_MAP ins constants.ts 2023-05-26 21:44:00 -04:00
user1
b0892d30a4 Added mediapipe install requirement. Should be able to remove once controlnet_aux package adds mediapipe to its requirements. 2023-05-26 21:44:00 -04:00
user1
d9b1e4a98c Added nodes for float params: ParamFloatInvocation and FloatCollectionOutput. Also added FloatOutput. 2023-05-26 21:44:00 -04:00
user1
a4dec8c1d6 Fixed bug where MediapipFaceProcessorInvocation was ignoring max_faces and min_confidence params. 2023-05-26 21:44:00 -04:00
user1
8960ceb98b Added Mediapipe image processor for use as ControlNet preprocessor.
Also hacked in ability to specify HF subfolder when loading ControlNet models from string.
2023-05-26 21:44:00 -04:00
psychedelicious
be79d088c0 fix(nodes): controlnet input accepts list or single controlnet 2023-05-26 21:44:00 -04:00
psychedelicious
009407ea3f fix(ui): fix node ui type hints 2023-05-26 21:44:00 -04:00
psychedelicious
6999d28c7f chore(ui): regen api client 2023-05-26 21:44:00 -04:00
user1
324e9eb74b Extended node-based ControlNet support to LatentsToLatentsInvocation. 2023-05-26 21:44:00 -04:00
user1
56cff40362 Cleaning up after ControlNet refactor in TextToLatentsInvocation 2023-05-26 21:44:00 -04:00
user1
2ba40c5e52 Refactored most of controlnet code into its own method to declutter TextToLatents.invoke(), and make upcoming integration with LatentsToLatents easier. 2023-05-26 21:44:00 -04:00
user1
3ab147204c Fix to work with current stable release of controlnet_aux (v0.0.3). Turned off pre-processor params that were added post v0.0.3. Also changed defaults for shuffle. 2023-05-26 21:44:00 -04:00
user1
e4c89cba9c Switched ControlNet node modelname input from free text to default list of popular ControlNet model names. 2023-05-26 21:44:00 -04:00
user1
322ea84c4e Commented out ZoeDetector. Will re-instate once there's a controlnet-aux release that supports it. 2023-05-26 21:44:00 -04:00
user1
f2b41c60ff Cleaning up prior to submitting ControlNet PR. Mostly turning off diagnostic printing. Also fixed error when there is no controlnet input. 2023-05-26 21:44:00 -04:00
user1
754acec92f Added support for specifying which step iteration to start using
each ControlNet, and which step to end using each controlnet (specified as fraction of total steps)
2023-05-26 21:44:00 -04:00
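
A small sketch of the step-gating idea described above: each ControlNet is applied only between a begin and end fraction of the total steps. The function and parameter names here are illustrative, not the real node fields.

```python
def controlnet_is_active(step: int, total_steps: int,
                         begin_step_percent: float, end_step_percent: float) -> bool:
    """Return True if this ControlNet should be applied at `step`.

    begin/end are fractions of the total step count, e.g. 0.0-0.5 means
    "apply during the first half of denoising only".
    """
    fraction = step / max(total_steps - 1, 1)
    return begin_step_percent <= fraction <= end_step_percent

# Apply one controlnet during the first half and another during the second half.
total = 20
for step in range(total):
    first_half = controlnet_is_active(step, total, 0.0, 0.5)
    second_half = controlnet_is_active(step, total, 0.5, 1.0)
```
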
user1
11fc7e40a5 Refactored ControlNet support to consolidate multiple parameters into data struct. Also redid how multiple controlnets are handled. 2023-05-26 21:44:00 -04:00
user1
d15bb88eb2 Removed last bits of dtype and device hardwiring from controlnet section 2023-05-26 21:44:00 -04:00
user1
70ba36eefc Cleaning up mistakes after rebase. 2023-05-26 21:44:00 -04:00
user1
7e70391c2b Cleaning up TextToLatent arg testing 2023-05-26 21:44:00 -04:00
user1
e2a94be336 Added resizing of controlnet image based on noise latent. Fixes a tensor mismatch issue. 2023-05-26 21:44:00 -04:00
user1
63a86eefb4 Refactored controlnet nodes: split out controlnet stuff into separate node, stripped controlnet stuff from image processing/analysis nodes. 2023-05-26 21:44:00 -04:00
user1
b0727b9d47 Prep for splitting pre-processor and controlnet nodes 2023-05-26 21:44:00 -04:00
user1
d96e727dd5 Added more preprocessor nodes for:
MidasDepth
      ZoeDepth
      MLSD
      NormalBae
      Pidi
      LineartAnime
      ContentShuffle
Removed pil_output options; ControlNet preprocessors should always output as PIL. Removed diagnostics and other general cleanup.
2023-05-26 21:44:00 -04:00
user1
fe480886dc changes to base class for controlnet nodes 2023-05-26 21:44:00 -04:00
user1
8031d1827b Refactored controlnet node to output ControlField that bundles control info. 2023-05-26 21:44:00 -04:00
user1
b5acdb322d Switching to ControlField for output from controlnet nodes. 2023-05-26 21:44:00 -04:00
user1
a4d1fe8819 Initial port of controlnet node support from generator-based TextToImageInvocation node to latent-based TextToLatentsInvocation node 2023-05-26 21:44:00 -04:00
user1
10b7a58887 Added first controlnet preprocessor node for canny edge detection. 2023-05-26 21:44:00 -04:00
user1
901a277959 Core implementation of ControlNet and MultiControlNet. 2023-05-26 21:44:00 -04:00
user1
aaa093bef1 Fixed use of ControlNet control_weight parameter 2023-05-26 21:44:00 -04:00
user1
bb96543d66 Added support for using multiple control nets. Unfortunately this breaks direct usage of Control node output port ==> TextToLatent control input port -- passing through a Collect node is now required. Working on fixing this... 2023-05-26 21:44:00 -04:00
user1
a2a2cfa765 Added resizing of controlnet image based on noise latent. Fixes a tensor mismatch issue. 2023-05-26 21:44:00 -04:00
user1
18e6a2b410 Refactored controlnet nodes: split out controlnet stuff into separate node, stripped controlnet stuff from image processing/analysis nodes. 2023-05-26 21:44:00 -04:00
user1
db27263bc2 Prep for splitting pre-processor and controlnet nodes 2023-05-26 21:44:00 -04:00
user1
0e027ec3ef Added more preprocessor nodes for:
MidasDepth
      ZoeDepth
      MLSD
      NormalBae
      Pidi
      LineartAnime
      ContentShuffle
Removed pil_output options; ControlNet preprocessors should always output as PIL. Removed diagnostics and other general cleanup.
2023-05-26 21:44:00 -04:00
user1
5acbbeecaa Added HED, LineArt, and OpenPose ControlNet nodes 2023-05-26 21:44:00 -04:00
user1
6ef2168b67 changes to base class for controlnet nodes 2023-05-26 21:44:00 -04:00
user1
6d958a214c Refactored ControlNet nodes so they subclass from PreprocessedControlInvocation, and only need to override run_processor(image) (instead of reimplementing invoke()) 2023-05-26 21:44:00 -04:00
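
A hedged sketch of the pattern that commit describes: a base class owns the shared `invoke()` flow and subclasses override only `run_processor(image)`. The class names and PIL-based transform below are illustrative, not InvokeAI's real invocation classes.

```python
from PIL import Image, ImageFilter

class ImageProcessorBase:
    """Base: shared load/store plumbing lives in invoke(); subclasses
    only supply the actual image transform via run_processor()."""

    def run_processor(self, image: Image.Image) -> Image.Image:
        return image  # default: pass-through

    def invoke(self, image: Image.Image) -> Image.Image:
        processed = self.run_processor(image)
        # ...store the processed image, build the control output, etc...
        return processed

class BlurProcessor(ImageProcessorBase):
    # Only the transform is overridden; invoke() is inherited unchanged.
    def run_processor(self, image: Image.Image) -> Image.Image:
        return image.filter(ImageFilter.GaussianBlur(radius=4))

result = BlurProcessor().invoke(Image.new("RGB", (64, 64), "white"))
```
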
user1
4ae4bf4ff9 Resolving conflicts in rebase to origin/main 2023-05-26 21:44:00 -04:00
user1
fdef53b2de Switching to ControlField for output from controlnet nodes. 2023-05-26 21:44:00 -04:00
user1
11bd038b9d Added first controlnet preprocessor node for canny edge detection. 2023-05-26 21:44:00 -04:00
user1
768cfe3aab Core implementation of ControlNet and MultiControlNet. 2023-05-26 21:44:00 -04:00
user1
c4277b0662 Moved to controlnet_aux v0.0.4, reinstated Zoe controlnet preprocessor. Also in pyproject.toml had to specify downgrade of timm to 0.6.13 _after_ controlnet-aux installs timm >= 0.9.2, because timm >0.6.13 breaks Zoe preprocessor. 2023-05-26 21:44:00 -04:00
psychedelicious
020f3ccf07 fix(nodes): controlnet input accepts list or single controlnet 2023-05-26 21:44:00 -04:00
psychedelicious
7467fa5e57 fix(ui): fix node ui type hints 2023-05-26 21:44:00 -04:00
psychedelicious
e19ef7ed2f fix(ui): add control field type 2023-05-26 21:44:00 -04:00
psychedelicious
71003be6b8 fix(ui): add value to conditioning field 2023-05-26 21:44:00 -04:00
user1
c1dbafc2df chore(ui): regen api client 2023-05-26 21:44:00 -04:00
user1
dcebd71381 Extended node-based ControlNet support to LatentsToLatentsInvocation. 2023-05-26 21:44:00 -04:00
user1
d855a65e73 Cleaning up after ControlNet refactor in TextToLatentsInvocation 2023-05-26 21:44:00 -04:00
user1
a9007c7e0f Refactored most of controlnet code into its own method to declutter TextToLatents.invoke(), and make upcoming integration with LatentsToLatents easier. 2023-05-26 21:44:00 -04:00
user1
af60304f97 Fix to work with current stable release of controlnet_aux (v0.0.3). Turned off pre-processor params that were added post v0.0.3. Also changed defaults for shuffle. 2023-05-26 21:44:00 -04:00
user1
6de241eead Switched ControlNet node modelname input from free text to default list of popular ControlNet model names. 2023-05-26 21:44:00 -04:00
user1
51032dc0b2 Commented out ZoeDetector. Will re-instate once there's a controlnet-aux release that supports it. 2023-05-26 21:44:00 -04:00
user1
9ec3d2bc0c Added dependency on controlnet-aux v0.0.3 2023-05-26 21:44:00 -04:00
user1
297931f5d9 Cleaning up prior to submitting ControlNet PR. Mostly turning off diagnostic printing. Also fixed error when there is no controlnet input. 2023-05-26 21:44:00 -04:00
user1
f613c073c1 Added support for specifying which step iteration to start using
each ControlNet, and which step to end using each controlnet (specified as fraction of total steps)
2023-05-26 21:44:00 -04:00
user1
63d248622c Refactored ControlNet support to consolidate multiple parameters into data struct. Also redid how multiple controlnets are handled. 2023-05-26 21:44:00 -04:00
user1
48485fe92f Removed last bits of dtype and device hardwiring from controlnet section 2023-05-26 21:44:00 -04:00
user1
07726af703 Cleaning up mistakes after rebase. 2023-05-26 21:44:00 -04:00
user1
ad1004b485 Cleaning up TextToLatent arg testing 2023-05-26 21:44:00 -04:00
user1
0096fb2790 Added resizing of controlnet image based on noise latent. Fixes a tensor mismatch issue. 2023-05-26 21:44:00 -04:00
user1
9c8c2e49d6 Refactored controlnet nodes: split out controlnet stuff into separate node, stripped controlnet stuff from image processing/analysis nodes. 2023-05-26 21:44:00 -04:00
user1
2005a96847 Prep for splitting pre-processor and controlnet nodes 2023-05-26 21:44:00 -04:00
user1
00a8d60c1b Added more preprocessor nodes for:
MidasDepth
      ZoeDepth
      MLSD
      NormalBae
      Pidi
      LineartAnime
      ContentShuffle
Removed pil_output options; ControlNet preprocessors should always output as PIL. Removed diagnostics and other general cleanup.
2023-05-26 21:44:00 -04:00
user1
3aa182390a changes to base class for controlnet nodes 2023-05-26 21:44:00 -04:00
user1
e44f1d6d4e Refactored controlnet node to output ControlField that bundles control info. 2023-05-26 21:44:00 -04:00
user1
dfdf8e2ead Switching to ControlField for output from controlnet nodes. 2023-05-26 21:44:00 -04:00
user1
3a645c4e80 Initial port of controlnet node support from generator-based TextToImageInvocation node to latent-based TextToLatentsInvocation node 2023-05-26 21:44:00 -04:00
user1
113129daf9 Added first controlnet preprocessor node for canny edge detection. 2023-05-26 21:44:00 -04:00
user1
940e3b6635 Core implementation of ControlNet and MultiControlNet. 2023-05-26 21:44:00 -04:00
user1
7fb29dabff Fixed lint-ish formatting error 2023-05-26 21:44:00 -04:00
user1
714ad6dbb8 Fixed use of ControlNet control_weight parameter 2023-05-26 21:44:00 -04:00
user1
c0863fa20f Added support for using multiple control nets. Unfortunately this breaks direct usage of Control node output port ==> TextToLatent control input port -- passing through a Collect node is now required. Working on fixing this... 2023-05-26 21:44:00 -04:00
user1
78b0b37ba6 More rebase repair. 2023-05-26 21:44:00 -04:00
user1
5d5cdc7716 Added resizing of controlnet image based on noise latent. Fixes a tensor mismatch issue. 2023-05-26 21:44:00 -04:00
user1
93cd818f6a Refactored controlnet nodes: split out controlnet stuff into separate node, stripped controlnet stuff from image processing/analysis nodes. 2023-05-26 21:44:00 -04:00
user1
598a628790 Prep for splitting pre-processor and controlnet nodes 2023-05-26 21:44:00 -04:00
user1
f3666eda63 Added more preprocessor nodes for:
MidasDepth
      ZoeDepth
      MLSD
      NormalBae
      Pidi
      LineartAnime
      ContentShuffle
Removed pil_output options; ControlNet preprocessors should always output as PIL. Removed diagnostics and other general cleanup.
2023-05-26 21:44:00 -04:00
user1
754017b59e Added an additional "raw_processed_image" output port to controlnets, mainly so could route ImageField to a ShowImage node 2023-05-26 21:44:00 -04:00
user1
21251ce12c Added HED, LineArt, and OpenPose ControlNet nodes 2023-05-26 21:44:00 -04:00
user1
dc12fa6cd6 changes to base class for controlnet nodes 2023-05-26 21:44:00 -04:00
user1
f2f4c37f19 Refactored ControlNet nodes so they subclass from PreprocessedControlInvocation, and only need to override run_processor(image) (instead of reimplementing invoke()) 2023-05-26 21:44:00 -04:00
user1
0864fca641 Resolving conflicts in rebase to origin/main 2023-05-26 21:44:00 -04:00
user1
5e4c0217c7 Switching to ControlField for output from controlnet nodes. 2023-05-26 21:44:00 -04:00
user1
78cd106c23 Initial port of controlnet node support from generator-based TextToImageInvocation node to latent-based TextToLatentsInvocation node 2023-05-26 21:44:00 -04:00
user1
6ed0efa938 Added first controlnet preprocessor node for canny edge detection. 2023-05-26 21:44:00 -04:00
user1
ca0669c337 Resolving rebase conflict 2023-05-26 21:44:00 -04:00
user1
b59a749627 Added example of using ControlNet with legacy Txt2Img generator 2023-05-26 21:44:00 -04:00
user1
a91dee87d0 Added support for ControlNet and MultiControlNet to legacy non-nodal Txt2Img in backend/generator. Although backend/generator will likely disappear by v3.x, right now they are very useful for testing core ControlNet and MultiControlNet functionality while node codebase is rapidly evolving. 2023-05-26 21:44:00 -04:00
user1
5ff98a4179 Core implementation of ControlNet and MultiControlNet. 2023-05-26 21:44:00 -04:00
Lincoln Stein
36b2f12219 Merge branch 'main' into release/make-web-dist-startable 2023-05-26 12:56:24 -04:00
Kent Keirsey
5569f205ee Update CODEOWNERS 2023-05-26 08:59:10 -04:00
Kent Keirsey
a76cf8aab2 Update CODEOWNERS 2023-05-26 08:59:10 -04:00
Lincoln Stein
5c0f0d1808 Merge branch 'main' into lstein/logging-improvements 2023-05-26 08:57:17 -04:00
Lincoln Stein
951900a86a Merge branch 'main' into lstein/config-management-fixes 2023-05-26 08:56:41 -04:00
psychedelicious
582f516fef Merge branch 'main' into release/make-web-dist-startable 2023-05-26 18:06:38 +10:00
psychedelicious
a25bae2545 fix(ui): tweak log levels 2023-05-26 18:06:08 +10:00
psychedelicious
0ea35b1e3d feat(ui): improve session canceled handling 2023-05-26 18:06:08 +10:00
psychedelicious
c6f935bf1a feat(ui): improve gallery page handling 2023-05-26 18:06:08 +10:00
psychedelicious
96b4d35d43 fix(ui): fix uploads not loading more images correctly after generation 2023-05-26 18:06:08 +10:00
psychedelicious
7b0938e7e4 feat(ui): add comments for weird stuff 2023-05-26 18:06:08 +10:00
psychedelicious
249522b568 fix(ui): fix gallery not loading more images correctly after generation 2023-05-26 18:06:08 +10:00
psychedelicious
39088e42cc fix(ui): remove console logs 2023-05-26 18:06:08 +10:00
psychedelicious
30e0033ebe fix(ui): fix results not added to gallery 2023-05-26 18:06:08 +10:00
psychedelicious
b599c40099 feat(ui): improve session invoked handling 2023-05-26 18:06:08 +10:00
psychedelicious
8f190169db feat(ui): improve session creation handling 2023-05-26 18:06:08 +10:00
psychedelicious
1d4d705795 feat(ui): improve image urls handling 2023-05-26 18:06:08 +10:00
psychedelicious
b3f71b3078 feat(ui): improve image metadata handling 2023-05-26 18:06:08 +10:00
psychedelicious
6059db4f15 feat(ui): improve image delete handling 2023-05-26 18:06:08 +10:00
psychedelicious
0d5f44b153 feat(ui): improve image upload handling 2023-05-26 18:06:08 +10:00
psychedelicious
17164a37a8 fix(ui): fix gallery auto switch 2023-05-26 18:06:08 +10:00
psychedelicious
f88ccabe30 fix(ui): gallery not loading on page load 2023-05-26 18:06:08 +10:00
psychedelicious
e1c85f1234 Merge branch 'main' into release/make-web-dist-startable 2023-05-26 18:04:09 +10:00
psychedelicious
57a3eb3652 feat(ui): unset progress image inside invocationComplete listener 2023-05-26 13:25:50 +10:00
Mary Hipp
82a8972bde create listener for imageMetdataReceived to swap our progressImage 2023-05-26 13:25:50 +10:00
Lincoln Stein
497a885c85 Merge branch 'main' into release/make-web-dist-startable 2023-05-25 22:49:18 -04:00
Lincoln Stein
4d9f55d0f6 replace deleted get_root() 2023-05-25 22:48:50 -04:00
psychedelicious
0c3b4bb70d chore(ui): regen api client 2023-05-25 22:17:14 -04:00
psychedelicious
33e13820fc feat(nodes): remove meta node field; use individual is_intermediate field instead
as suggested by @Kyle0654
2023-05-25 22:17:14 -04:00
psychedelicious
43d991cfdb fix(ui): fix incorrect comment 2023-05-25 22:17:14 -04:00
psychedelicious
291e9cf14b fix(nodes): add is_intermediate to all image-outputting nodes 2023-05-25 22:17:14 -04:00
psychedelicious
a2de5c9963 feat(ui): change intermediates handling
- Update the canvas graph generation to flag its uploaded init and mask images as `intermediate`.
- During canvas setup, hit the update route to associate the uploaded images with the session id.
- Organize the socketio and RTK listener middleware better. Needed to facilitate the updated canvas logic.
- Add a new action `sessionReadyToInvoke`. The `sessionInvoked` action is *only* ever run in response to this event. This lets us do whatever complicated setup is needed (eg canvas) and then explicitly invoke. Previously, invoking was tied to the socket subscribe events.
- Some minor tidying.
2023-05-25 22:17:14 -04:00
psychedelicious
5025f84627 chore(ui): regen api client 2023-05-25 22:17:14 -04:00
psychedelicious
d2c8a53c55 feat(nodes): change intermediates handling
- `ImageType` is now restricted to `results` and `uploads`.
- Add a reserved `meta` field to nodes to hold the `is_intermediate` boolean. We can extend it in the future to support other node `meta`.
- Add a `is_intermediate` column to the `images` table to hold this. (When `latents`, `conditioning` etc are added to the DB, they will also have this column.)
- All nodes default to *not* intermediate. Nodes must explicitly be marked `intermediate` for their outputs to be `intermediate`.
- When building a graph, you can set `node.meta.is_intermediate=True` and it will be handled as an intermediate.
- Add a new `update()` method to the `ImageService`, and a route to call it. Updates have a strict model, currently only `session_id` and `image_category` may be updated.
- Add a new `update()` method to the `ImageRecordStorageService` to update the image record using the model.
2023-05-25 22:17:14 -04:00
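
A rough sketch of the strict update model described above: only `session_id` and `image_category` may be changed, and unknown fields are rejected. Pydantic v1 style; the field and class names follow the commit message, everything else is assumed.

```python
import sqlite3
from typing import Optional

from pydantic import BaseModel, Extra

class ImageRecordChanges(BaseModel):
    """Only these fields may be updated; anything else fails validation."""
    session_id: Optional[str] = None
    image_category: Optional[str] = None

    class Config:
        extra = Extra.forbid

def update_image(conn: sqlite3.Connection, image_name: str, changes: ImageRecordChanges) -> None:
    # Apply only the fields that were actually provided in the update.
    fields = changes.dict(exclude_unset=True)
    if not fields:
        return
    assignments = ", ".join(f"{column} = ?" for column in fields)
    conn.execute(
        f"UPDATE images SET {assignments} WHERE image_name = ?",
        (*fields.values(), image_name),
    )
```
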
Lincoln Stein
5659d10778 remove unused function get_root() 2023-05-25 22:06:37 -04:00
Lincoln Stein
46cab81d6f fix missing web_dir 2023-05-25 22:01:48 -04:00
Lincoln Stein
dd157bce85 Merge branch 'main' into release/make-web-dist-startable 2023-05-25 21:52:05 -04:00
Lincoln Stein
2f25dd7d0d Merge branch 'main' into lstein/config-management-fixes 2023-05-25 21:10:12 -04:00
Lincoln Stein
e56965ad76 documentation tweaks; fixed initialization in a couple more places 2023-05-25 21:10:00 -04:00
Lincoln Stein
2273b3a8c8 fix potential race condition in config system 2023-05-25 20:41:26 -04:00
Kent Keirsey
05fb0ac2b2 Update latent.py 2023-05-26 10:27:33 +10:00
Kent Keirsey
d4acd49ee3 Update generate.py 2023-05-26 10:27:33 +10:00
Kent Keirsey
d98868e524 Update generationSlice.ts to change Default Scheduler 2023-05-26 10:27:33 +10:00
Mary Hipp
93bb27f2c7 fix gallery navigation 2023-05-26 10:01:06 +10:00
Mary Hipp
a4c44edf8d more use parameter fixes 2023-05-26 10:01:06 +10:00
Mary Hipp
1e94d7739a fix metadata references, add support for negative_conditioning syntax 2023-05-26 10:01:06 +10:00
Lincoln Stein
9110838fe4 Merge branch 'main' into release/make-web-dist-startable 2023-05-25 19:06:09 -04:00
Lincoln Stein
ca7b267326 raise error if syslogging requested and syslog lib not available 2023-05-25 10:10:46 -04:00
Lincoln Stein
7f5992d6a5 Merge branch 'lstein/logging-improvements' of github.com:invoke-ai/InvokeAI into lstein/logging-improvements 2023-05-25 09:39:56 -04:00
Lincoln Stein
88776fb2de get invokeai_configure working again 2023-05-25 09:39:45 -04:00
Lincoln Stein
34f567abd4 Merge branch 'main' into lstein/logging-improvements 2023-05-25 08:48:47 -04:00
Lincoln Stein
b87f3043ae add logging configuration 2023-05-24 23:57:15 -04:00
psychedelicious
3829ffbe66 fix(tests): add --use_memory_db flag; use it in tests 2023-05-25 12:12:31 +10:00
psychedelicious
ad619ae880 fix(tests): log db_location 2023-05-25 12:12:31 +10:00
psychedelicious
d22ebe08be fix(tests): log db_location 2023-05-25 12:12:31 +10:00
psychedelicious
ee0c6ad86e fix(cli): fix invocation services for cli 2023-05-25 12:12:31 +10:00
psychedelicious
96adb56633 fix(tests): fix missing services in tests; fix ImageField instantiation 2023-05-25 12:12:31 +10:00
psychedelicious
3000436121 chore(nodes): remove unused imports 2023-05-25 12:12:31 +10:00
psychedelicious
37cdd91f5d fix(nodes): use forward declarations for InvocationServices
Also use `TYPE_CHECKING` to get IDE hints.
2023-05-25 12:12:31 +10:00
Rohan Barar
6f3c6ddf3f Update 020_INSTALL_MANUAL.md
Corrected a markdown formatting error (missing backtick).
2023-05-24 11:33:32 -04:00
psychedelicious
0bfbda512d build(nodes): remove references to metadata service in tests 2023-05-24 11:30:47 -04:00
psychedelicious
295b98a13c build(nodes): remove outdated metadata test
I will add tests for the new service soon
2023-05-24 11:30:47 -04:00
psychedelicious
ff6b345d45 fix(nodes): rebase fixes 2023-05-24 11:30:47 -04:00
psychedelicious
1fb307abf4 feat(nodes): restore canvas functionality (non-latents) 2023-05-24 11:30:47 -04:00
psychedelicious
29c952dcf6 feat(ui): restore canvas functionality 2023-05-24 11:30:47 -04:00
psychedelicious
010f63a50d feat(ui): misc tidy 2023-05-24 11:30:47 -04:00
psychedelicious
068bbe3a39 fix(ui): fix uploads tab in gallery 2023-05-24 11:30:47 -04:00
psychedelicious
ad39680feb feat(nodes): wip inpainting nodes prep 2023-05-24 11:30:47 -04:00
psychedelicious
1e0ae8404c feat(nodes): comment out seamless
this will be a model config feature when model manager is ready
2023-05-24 11:30:47 -04:00
psychedelicious
460d555a3d feat(nodes): add image mul, channel, convert nodes
also make img node names consistent
2023-05-24 11:30:47 -04:00
psychedelicious
66ad04fcfc feat(nodes): add mask image category 2023-05-24 11:30:47 -04:00
psychedelicious
c7c0836721 feat(ui): migrate linear workflows to latents 2023-05-24 11:30:47 -04:00
psychedelicious
d2c223de8f feat(nodes): move fully* to new images service
* except I haven't rebuilt inpaint in latents
2023-05-24 11:30:47 -04:00
psychedelicious
dd16f788ed fix(nodes): fix RangeOfSizeInvocation off-by-one error 2023-05-24 11:30:47 -04:00
psychedelicious
b25c1af018 feat(nodes): add RangeOfSizeInvocation
The `RangeInvocation` is a simple wrapper around `range()`, but you must provide `stop > start`.

`RangeOfSizeInvocation` replaces the `stop` parameter with `size`, so that you can just provide the `start` and `step` and get a range of `size` length.
2023-05-24 11:30:47 -04:00
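
A tiny sketch of the difference between the two nodes described above, with plain functions standing in for the invocation classes:

```python
def range_invocation(start: int, stop: int, step: int = 1) -> list[int]:
    # Thin wrapper around range(); the caller must ensure stop > start.
    if stop <= start:
        raise ValueError("stop must be greater than start")
    return list(range(start, stop, step))

def range_of_size_invocation(start: int, size: int, step: int = 1) -> list[int]:
    # Instead of a stop value, take the desired length of the range.
    return list(range(start, start + size * step, step))

assert range_of_size_invocation(start=5, size=4, step=2) == [5, 7, 9, 11]
```
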
psychedelicious
8f393b64b8 feat(nodes): add seed validator
If `seed>SEED_MAX`, we can still continue if we parse the seed as `seed % SEED_MAX`.
2023-05-24 11:30:47 -04:00
psychedelicious
55b3193629 fix(nodes): add RangeInvocation validator
`stop` must be greater than `start`.
2023-05-24 11:30:47 -04:00
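
A hedged sketch of the two validators described in the commits above, using pydantic v1-style validators; the model and field names, and the SEED_MAX value, are illustrative.

```python
from pydantic import BaseModel, validator

SEED_MAX = 2**32 - 1  # illustrative bound; the real SEED_MAX may differ

class NoiseParams(BaseModel):
    seed: int = 0

    @validator("seed")
    def wrap_seed(cls, v: int) -> int:
        # Seeds above SEED_MAX are folded back into range rather than rejected.
        return v if v <= SEED_MAX else v % SEED_MAX

class RangeParams(BaseModel):
    start: int = 0
    stop: int = 10
    step: int = 1

    @validator("stop")
    def stop_gt_start(cls, v: int, values: dict) -> int:
        if "start" in values and v <= values["start"]:
            raise ValueError("stop must be greater than start")
        return v

assert NoiseParams(seed=SEED_MAX + 5).seed == 5
```
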
psychedelicious
6f78c073ed fix(ui): fix uploads & other bugs 2023-05-24 11:30:47 -04:00
psychedelicious
c406be6f4f fix(ui): fix image deletion 2023-05-24 11:30:47 -04:00
psychedelicious
aeaf3737aa fix(ui): fix gallery bugs 2023-05-24 11:30:47 -04:00
psychedelicious
23d9d58c08 fix(nodes): fix bugs with serving images
When returning a `FileResponse`, we must provide a valid path, else an exception is raised outside the route handler.

Add the `validate_path` method back to the service so we can validate paths before returning the file.

I don't like this, but apparently this is just how `starlette` and `fastapi` work with `FileResponse`.
2023-05-24 11:30:47 -04:00
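
A small sketch of the pattern described above: validate the path up front so a missing file becomes a clean 404 instead of an exception raised outside the route handler. The route and folder names are illustrative, not the real API surface.

```python
from pathlib import Path

from fastapi import FastAPI, HTTPException
from fastapi.responses import FileResponse

app = FastAPI()
IMAGES_DIR = Path("images")  # illustrative location

def validate_path(path: Path) -> Path:
    """Raise a 404 if the file does not exist, so FileResponse never sees a bad path."""
    if not path.is_file():
        raise HTTPException(status_code=404, detail="Image not found")
    return path

@app.get("/api/v1/images/{image_name}")
async def get_image(image_name: str) -> FileResponse:
    return FileResponse(validate_path(IMAGES_DIR / image_name))
```
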
psychedelicious
4c331a5d7e chore(ui): regen api client 2023-05-24 11:30:47 -04:00
psychedelicious
035425ef24 feat(nodes): address feedback
- Address database feedback:
  - Remove all the extraneous tables. Only an `images` table now:
  - `image_type` and `image_category` are unrestricted strings. When creating images, the provided values are checked to ensure they are a valid type and category.
  - Add `updated_at` and `deleted_at` columns. `deleted_at` is currently unused.
  - Use SQLite's built-in timestamp features to populate these. Add a trigger to update `updated_at` when the row is updated. Currently no way to update a row.
  - Rename the `id` column in `images` to `image_name`
- Rename `ImageCategory.IMAGE` to `ImageCategory.GENERAL`
- Move all exceptions outside their base classes to make them more portable.
- Add `width` and `height` columns to the database. These store the actual dimensions of the image file, whereas the metadata's `width` and `height` refer to the respective generation parameters and are nullable.
- Make `deserialize_image_record` take a `dict` instead of `sqlite3.Row`
- Improve comments throughout
- Tidy up unused code/files and some minor organisation
2023-05-24 11:30:47 -04:00
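
A rough sketch of the schema changes described above, in plain SQLite (column set abbreviated; names follow the commit message where given, the rest is assumed):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    CREATE TABLE images (
        image_name     TEXT PRIMARY KEY,          -- renamed from "id"
        image_type     TEXT NOT NULL,             -- unrestricted string, checked in code
        image_category TEXT NOT NULL,             -- unrestricted string, checked in code
        width          INTEGER NOT NULL,          -- actual file dimensions
        height         INTEGER NOT NULL,
        created_at     DATETIME NOT NULL DEFAULT (CURRENT_TIMESTAMP),
        updated_at     DATETIME NOT NULL DEFAULT (CURRENT_TIMESTAMP),
        deleted_at     DATETIME                   -- currently unused
    );

    -- Keep updated_at current whenever a row changes.
    CREATE TRIGGER tg_images_updated_at
    AFTER UPDATE ON images FOR EACH ROW
    BEGIN
        UPDATE images SET updated_at = CURRENT_TIMESTAMP
        WHERE image_name = old.image_name;
    END;
    """
)
```
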
psychedelicious
021e5a2aa3 feat(nodes): improve metadata service comments 2023-05-24 11:30:47 -04:00
psychedelicious
7a1de3887e feat(ui): wip update UI for migration 2023-05-24 11:30:47 -04:00
psychedelicious
4a7a5234df fix(ui): fix image nodes losing image 2023-05-24 11:30:47 -04:00
psychedelicious
6aebe1614d feat(ui): wip use new images service 2023-05-24 11:30:47 -04:00
psychedelicious
74292eba28 chore(ui): regen api client 2023-05-24 11:30:47 -04:00
psychedelicious
c31ff364ab fix(nodes): tidy images service 2023-05-24 11:30:47 -04:00
psychedelicious
f310a39381 feat(nodes): finalize image routes 2023-05-24 11:30:47 -04:00
psychedelicious
5a7e611e0a fix(nodes): fix image url 2023-05-24 11:30:47 -04:00
psychedelicious
4e29a751d8 feat(ui): add POC image record fetching 2023-05-24 11:30:47 -04:00
psychedelicious
3f94f81acd chore(ui): regen api client 2023-05-24 11:30:47 -04:00
psychedelicious
5de3c41d19 feat(nodes): add metadata handling 2023-05-24 11:30:47 -04:00
psychedelicious
f071b03ceb chore(ui): regen api client 2023-05-24 11:30:47 -04:00
psychedelicious
b9375186a5 feat(nodes): consolidate image routers 2023-05-24 11:30:47 -04:00
psychedelicious
11bd932cba feat(nodes): revert invocation_complete url hack 2023-05-24 11:30:47 -04:00
psychedelicious
b77ccfaf32 chore(ui): regen api client 2023-05-24 11:30:47 -04:00
psychedelicious
96653eebb6 build(ui): do not export schemas on api client generation 2023-05-24 11:30:47 -04:00
psychedelicious
60d25f105f fix(nodes): restore metadata traverser 2023-05-24 11:30:47 -04:00
psychedelicious
734b653a5f fix(nodes): add base images router 2023-05-24 11:30:47 -04:00
psychedelicious
52c9e6ec91 feat(nodes): organise/tidy 2023-05-24 11:30:47 -04:00
psychedelicious
c0f132e41a hack(nodes): hack to get image urls in the invocation complete event 2023-05-24 11:30:47 -04:00
psychedelicious
cc1160a43a feat(nodes): streamline urlservice 2023-05-24 11:30:47 -04:00
psychedelicious
adde8450bc fix(nodes): remove bad import 2023-05-24 11:30:47 -04:00
psychedelicious
5bf9891553 feat(nodes): it works 2023-05-24 11:30:47 -04:00
psychedelicious
22c34c343a feat(nodes): fix types for InvocationServices 2023-05-24 11:30:47 -04:00
psychedelicious
f7804f6126 feat(nodes): add logger to images service 2023-05-24 11:30:47 -04:00
psychedelicious
d14b02e93f feat(logger): fix logger type issues 2023-05-24 11:30:47 -04:00
psychedelicious
1b75d899ae feat(nodes): wip image storage implementation 2023-05-24 11:30:47 -04:00
psychedelicious
d4aa79acd7 fix(nodes): use save instead of set
`set` is a python builtin
2023-05-24 11:30:47 -04:00
psychedelicious
33d199c007 feat(nodes): image records router 2023-05-24 11:30:47 -04:00
psychedelicious
9c89d3452c feat(nodes): add high-level images service
feat(nodes): add ResultsServiceABC & SqliteResultsService

**Doesn't actually work because of circular imports. Can't even test it.**

- add a base class for ResultsService and SQLite implementation
- use `graph_execution_manager` `on_changed` callback to keep `results` table in sync

fix(nodes): fix results service bugs

chore(ui): regen api

fix(ui): fix type guards

feat(nodes): add `result_type` to results table, fix types

fix(nodes): do not shadow `list` builtin

feat(nodes): add results router

It doesn't work due to circular imports still

fix(nodes): Result class should use outputs classes, not fields

feat(ui): crude results router

fix(ui): send to canvas in currentimagebuttons not working

feat(nodes): add core metadata builder

feat(nodes): add design doc

feat(nodes): wip latents db stuff

feat(nodes): images_db_service and resources router

feat(nodes): wip images db & router

feat(nodes): update image related names

feat(nodes): update urlservice

feat(nodes): add high-level images service
2023-05-24 11:30:47 -04:00
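
One piece of the squashed work above is easy to illustrate: using the graph execution manager's `on_changed` callback to keep a results store in sync. The sketch below is entirely illustrative; none of these names are guaranteed to match the real services, and a dict stands in for the SQLite table.

```python
from typing import Callable

class GraphExecutionManager:
    """Stand-in for the real manager: notifies subscribers when execution state changes."""

    def __init__(self) -> None:
        self._listeners: list[Callable[[dict], None]] = []

    def on_changed(self, listener: Callable[[dict], None]) -> None:
        self._listeners.append(listener)

    def set(self, state: dict) -> None:
        for listener in self._listeners:
            listener(state)

class ResultsService:
    """Subscribes to execution-state changes and upserts one row per node output."""

    def __init__(self, manager: GraphExecutionManager) -> None:
        self.rows: dict[str, dict] = {}  # dict in place of the SQLite results table
        manager.on_changed(self._handle_changed)

    def _handle_changed(self, state: dict) -> None:
        for node_id, output in state.get("outputs", {}).items():
            self.rows[f"{state['id']}:{node_id}"] = output

manager = GraphExecutionManager()
results = ResultsService(manager)
manager.set({"id": "session_1", "outputs": {"noise_1": {"type": "latents"}}})
```
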
psychedelicious
fb0b63c580 fix(nodes): fix seam painting
The problem was that the same seed was getting used for the seam painting pass, causing the fried look.

Same issue as if you do img2img on a txt2img with the same seed/prompt.

Thanks to @hipsterusername for teaming up to debug this. We got pretty deep into the weeds.
2023-05-25 00:58:03 +10:00
Lincoln Stein
bb2c6e5925 Merge branch 'main' into release/make-web-dist-startable 2023-05-24 10:55:51 -04:00
blessedcoolant
928caff2a6 fix: attempt to fix actions (#3454)
I think this conditional needs to be removed.
2023-05-25 02:37:39 +12:00
psychedelicious
670c79f2c7 fix: attempt to fix actions
I think this conditional needs to be removed.
2023-05-25 00:31:48 +10:00
psychedelicious
d6efb98953 build: fix test-invoke-pip.yml
- Restore conditional which ensures tests are only run on `main`
- Fix `yaml` syntax error
2023-05-24 21:48:12 +10:00
psychedelicious
19da795274 fix(ui): send to canvas in currentimagebuttons not working 2023-05-24 21:46:58 +10:00
Mary Hipp
454ba9b893 add crossOrigin = anonymous attribute to konva image 2023-05-24 10:32:41 +10:00
Lincoln Stein
d2dc1ed26f make InvokeAI package installable
This commit makes InvokeAI 3.0 installable via PyPI and the
installer script.

Main changes.

1. Move static web pages into `invokeai/frontend/web` and modify the
API to look for them there. This allows pip to copy the files into the
distribution directory so that the user no longer has to be in the repo
root to launch.

2. Update invoke.sh and invoke.bat to launch the new web application
properly. This also changes the wording for launching the CLI from
"generate images" to "explore the InvokeAI node system," since I would
not recommend using the CLI to generate images routinely.

3. Fix a bug in the checkpoint converter script that was identified
during testing.

4. Better error reporting when checkpoint converter fails.

5. Rebuild front end.
2023-05-22 17:51:47 -04:00
Lincoln Stein
d4fb16825e move static into invokeai.frontend.web directory for dist install 2023-05-22 16:48:17 -04:00
Mary Hipp Rogers
650d69ef5b added optional middleware prop and new actions needed (#3437)
* added optional middleware prop and new actions needed

* accidental import

* make middleware an array

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2023-05-22 08:16:11 -04:00
Mary Hipp
ff0e79fa9a add id for invoke button 2023-05-19 21:44:31 +10:00
Mary Hipp
127b54f812 add some IDs 2023-05-19 21:44:31 +10:00
Lincoln Stein
7025c00581 Add configuration system, remove legacy globals, args, generate and CLI (#3340)
# Application-wide configuration service

This PR creates a new `InvokeAIAppConfig` object that reads
application-wide settings from an init file, the environment, and the
command line.

Arguments and fields are taken from the pydantic definition of the
model. Defaults can be set by creating a yaml configuration file that
has a top-level key of "InvokeAI" and subheadings for each of the
categories returned by `invokeai --help`.

The file looks like this:

[file: invokeai.yaml]
```
InvokeAI:
  Paths:
    root: /home/lstein/invokeai-main
    conf_path: configs/models.yaml
    legacy_conf_dir: configs/stable-diffusion
    outdir: outputs
    embedding_dir: embeddings
    lora_dir: loras
    autoconvert_dir: null
    gfpgan_model_dir: models/gfpgan/GFPGANv1.4.pth
  Models:
    model: stable-diffusion-1.5
    embeddings: true
  Memory/Performance:
    xformers_enabled: false
    sequential_guidance: false
    precision: float16
    max_loaded_models: 4
    always_use_cpu: false
    free_gpu_mem: false
  Features:
    nsfw_checker: true
    restore: true
    esrgan: true
    patchmatch: true
    internet_available: true
    log_tokenization: false
  Cross-Origin Resource Sharing:
    allow_origins: []
    allow_credentials: true
    allow_methods:
    - '*'
    allow_headers:
    - '*'
  Web Server:
    host: 127.0.0.1
    port: 8081

```

The default name of the configuration file is `invokeai.yaml`, located
in INVOKEAI_ROOT. You can use any OmegaConf dictionary by passing it to
the config object at initialization time:

```
 omegaconf = OmegaConf.load('/tmp/init.yaml')
 conf = InvokeAIAppConfig(conf=omegaconf)
```

By default, InvokeAIAppConfig will parse the contents of `sys.argv` at
initialization time. You may pass a list of strings in the optional
`argv` argument to use instead of the system argv:

```
conf = InvokeAIAppConfig(argv=['--xformers_enabled'])
```

It is also possible to set a value at initialization time. This value
has highest priority.
```
conf = InvokeAIAppConfig(xformers_enabled=True)
```
Any setting can be overwritten by setting an environment variable of the
form "INVOKEAI_<setting>", as in:

```
export INVOKEAI_port=8080
```

Order of precedence (from highest):
   1) initialization options
   2) command line options
   3) environment variable options
   4) config file options
   5) pydantic defaults

Typical usage:

```
from invokeai.app.services.config import InvokeAIAppConfig

# get global configuration and print its nsfw_checker value
conf = InvokeAIAppConfig()
print(conf.nsfw_checker)
```
Finally, the configuration object is able to recreate its (modified)
yaml file, by calling its `to_yaml()` method:

```
conf = InvokeAIAppConfig(outdir='/tmp', port=8080)
print(conf.to_yaml())
```

# Legacy code removal and porting

This PR replaces Globals with the InvokeAIAppConfig system throughout,
and therefore removes the `globals.py` and `args.py` modules. It also
removes `generate` and the legacy CLI. ***The old CLI and web servers
are now gone.***

I have ported the functionality of the configuration script, the model
installer, and the merge and textual inversion scripts. The `invokeai`
command will now launch `invokeai-node-cli`, and `invokeai-web` will
launch the web server.

I have changed the continuous invocation tests to accommodate the new
command syntax in `invokeai-node-cli`. As a convenience, you
can also pass invocations to `invokeai-node-cli` (or its alias
`invokeai`) on the command line or as standard input:

```
invokeai-node-cli "t2i --positive_prompt 'banana sushi' --seed 42"
invokeai < invocation_commands.txt
```
2023-05-18 13:37:09 -04:00
Lincoln Stein
7ea995149e fixes to env parsing, textual inversion & help text
- Make environment variable settings case InSenSiTive:
  INVOKEAI_MAX_LOADED_MODELS and InvokeAI_Max_Loaded_Models
  environment variables will both set `max_loaded_models`

- Updated realesrgan to use new config system.

- Updated textual_inversion_training to use new config system.

- Discovered a race condition when InvokeAIAppConfig is created
  at module load time, which makes it impossible to customize
  or replace the help message produced with --help on the command
  line. To fix this, moved all instances of get_invokeai_config()
  from module load time to object initialization time. Makes code
  cleaner, too.

- Added `--from_file` argument to `invokeai-node-cli` and changed
  github action to match. CI tests will hopefully work now.
2023-05-18 10:48:23 -04:00
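
A minimal sketch of case-insensitive environment overrides using pydantic v1's `BaseSettings` (the setting name and default are illustrative, not the real `InvokeAIAppConfig` fields):

```python
import os

from pydantic import BaseSettings  # pydantic v1

class AppSettings(BaseSettings):
    max_loaded_models: int = 2

    class Config:
        env_prefix = "INVOKEAI_"
        case_sensitive = False  # INVOKEAI_MAX_LOADED_MODELS and InvokeAI_Max_Loaded_Models both match

os.environ["InvokeAI_Max_Loaded_Models"] = "4"
assert AppSettings().max_loaded_models == 4
```
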
Eugene
f9710dd6ed remove reference to legacy opt.hf_token, clean up whitespace in invokeai_configure 2023-05-17 20:39:00 -04:00
Eugene
4e7dd7d3f6 ci: remove reference to Globals in a workflow 2023-05-17 20:26:26 -04:00
Eugene
20ca9e1fc1 config: move 'CORS' settings to 'Web Server' in the docstring to match the actual category 2023-05-17 19:45:51 -04:00
Eugene
8a8b09a953 api_app: rename web_config to app_config for consistency 2023-05-17 19:42:13 -04:00
Eugene
9e4e386c9b web and formatting fixes
- remove non-existent import InvokeAIWebConfig
- fix workflow file formatting
- clean up whitespace
2023-05-17 19:12:03 -04:00
Lincoln Stein
eca1e449a8 Merge branch 'lstein/global-configuration' of github.com:invoke-ai/InvokeAI into lstein/global-configuration 2023-05-17 15:23:21 -04:00
Lincoln Stein
ffaadb9d05 reorder options in help text 2023-05-17 15:22:58 -04:00
Lincoln Stein
8adff96e29 Merge branch 'main' into lstein/global-configuration 2023-05-17 14:37:09 -04:00
Lincoln Stein
7593dc19d6 complete several steps needed to make 3.0 installable
- invokeai-configure updated to work with new config system
- migrate invokeai.init to invokeai.yaml during configure
- replace legacy invokeai with invokeai-node-cli
- add ability to run an invocation directly from invokeai-node-cli command line
- update CI tests to work with new invokeai syntax
2023-05-17 14:13:27 -04:00
Lincoln Stein
b7c5a39685 make invokeai.yaml more hierarchical; fix list configuration bug 2023-05-17 12:19:19 -04:00
Mary Hipp Rogers
bd1b84f7d0 tell user to refresh page on image load error (#3425)
* refetch images list if error loading

* tell user to refresh instead of refetching

* unused import

* feat(ui): use `useAppToaster` to make toast

* fix(ui): clear selected/initial image on error

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2023-05-17 11:52:37 -04:00
Lincoln Stein
eadfd239a8 update config script to work with new config system 2023-05-17 00:18:19 -04:00
Lincoln Stein
8d75e50435 partial port of invokeai-configure 2023-05-16 01:50:01 -04:00
psychedelicious
1d9c115225 feat(nodes): add low and high to RandomIntInvocation 2023-05-16 13:50:52 +10:00
blessedcoolant
30af20a056 ui: cleanup (#3418)
- tidy up a lot of cruft
- `sampler` --> `scheduler`
2023-05-16 15:27:12 +12:00
psychedelicious
cc21fb216c chore(ui): clean up GalleryPanel 2023-05-16 10:43:26 +10:00
psychedelicious
6fe62a2705 feat(ui): sampler --> scheduler 2023-05-16 10:40:26 +10:00
psychedelicious
da87378713 chore(ui): regen api client 2023-05-16 10:39:40 +10:00
psychedelicious
b6f5267385 chore(ui): clean up generationSlice 2023-05-16 10:21:18 +10:00
psychedelicious
f9e78d3c64 chore(ui): clean up gallerySlice 2023-05-16 10:16:36 +10:00
psychedelicious
b7b5bd1b46 chore(ui): clean up uiSlice 2023-05-16 09:57:19 +10:00
psychedelicious
9a3727d3ad chore(ui): clean up systemSlice 2023-05-16 09:48:58 +10:00
psychedelicious
d68c14516c chore(ui): clean up persist denylists 2023-05-16 09:46:03 +10:00
psychedelicious
9f4d39aa42 chore(ui): clean up modelSlice 2023-05-16 09:45:49 +10:00
blessedcoolant
84b801d88f ui: restore canvas and upload functionality (#3414)
- refactor image uploading, fix init image upload button 
- refactor toast and hotkey hooks into logical components
- restore canvas save/download/copy/merge functionality
- clean up unused files and packages
- fix canvas rendering issue resulting from fractional stage coords
2023-05-16 02:23:39 +12:00
blessedcoolant
2fc70c509b Merge branch 'main' into feat/ui/fix-uploading 2023-05-16 02:20:59 +12:00
Lincoln Stein
34fb1c4b19 make conditioning.py work with compel 1.1.5 (#3383)
This PR fixes the ValueError issue that was preventing all prompts from
working.
2023-05-15 09:46:04 -04:00
Lincoln Stein
80bdd550cf Merge branch 'main' into lstein/bugfix/compel 2023-05-15 09:25:21 -04:00
Lincoln Stein
7ef0d2aa35 merge with main 2023-05-15 09:07:17 -04:00
psychedelicious
2359b92b46 chore(ui): tidy unused component ref 2023-05-15 22:58:15 +10:00
psychedelicious
a404fb2d32 docs(ui): update PACKAGE_SCRIPTS.md 2023-05-15 22:49:28 +10:00
psychedelicious
513eb11616 chore(ui): clean up unused files/packages 2023-05-15 22:48:06 +10:00
psychedelicious
d2c9140e69 feat(ui): restore save/copy/download/merge functionality 2023-05-15 22:21:03 +10:00
psychedelicious
d95fe5925a feat(ui): restore image post-upload actions
eg set init image if on img2img when uploading
2023-05-15 18:52:48 +10:00
psychedelicious
835922ea8f fix(ui): floor canvas coords to prevent partial pixel offset rendering issues 2023-05-15 18:50:34 +10:00
psychedelicious
e1e5266fc3 feat(ui): refactor base image uploading logic 2023-05-15 17:45:05 +10:00
psychedelicious
5e4457445f feat(ui): make toast/hotkey into logical components 2023-05-15 15:25:27 +10:00
psychedelicious
0221ca8f49 fix(ui): use cloned canvas for retrieving dataURL/Blobs 2023-05-15 13:54:30 +10:00
Eugene Brodsky
cf36e4029e fix(ui): fix syntax error in the logo component flexbox 2023-05-15 08:24:33 +10:00
Eugene Brodsky
c8a98a9a22 Merge branch 'main' into lstein/bugfix/compel 2023-05-14 14:43:18 -04:00
blessedcoolant
38ecca9362 Logging Improvements (#3401)
This PR improves the logging module a tad bit along with the
documentation.

**New Look:**


![WindowsTerminal_XaijwCqFpo](https://github.com/invoke-ai/InvokeAI/assets/54517381/49a97411-1927-4a49-80ff-f4d9665be55f)

## Usage

**General Logger**

InvokeAI has a module level logger. You can call it this way.

In the example below, you will use the default logger `InvokeAI` and
all your messages will be logged under that name.

```python

from invokeai.backend.util.logging import logger

logger.critical("CriticalMessage") // In Bold Red
logger.error("Info Message") // In Red
logger.warning("Info Message") // In Yellow
logger.info("Info Message") // In Grey 
logger.debug("Debug Message") // In Grey
```

Results:

```
[12-05-2023 20]::[InvokeAI]::CRITICAL --> This is an info message [In Bold Red]
[12-05-2023 20]::[InvokeAI]::ERROR --> This is an info message [In Red]
[12-05-2023 20]::[InvokeAI]::WARNING --> This is an info message [In Yellow]
[12-05-2023 20]::[InvokeAI]::INFO --> This is an info message [In Grey]
[12-05-2023 20]::[InvokeAI]::DEBUG --> This is an info message [In Grey]
```

**Custom Logger**

If you want to use a custom logger for your module, you can import it in
the following way.

```python
from invokeai.backend.util.logging import logging
logger = logging.getLogger(name='Model Manager')

logger.critical("Critical Message")  # In Bold Red
logger.error("Error Message")        # In Red
logger.warning("Warning Message")    # In Yellow
logger.info("Info Message")          # In Grey
logger.debug("Debug Message")        # In Grey
```

Results:

```
[12-05-2023 20]::[Model Manager]::CRITICAL --> Critical Message [In Bold Red]
[12-05-2023 20]::[Model Manager]::ERROR --> Error Message [In Red]
[12-05-2023 20]::[Model Manager]::WARNING --> Warning Message [In Yellow]
[12-05-2023 20]::[Model Manager]::INFO --> Info Message [In Grey]
[12-05-2023 20]::[Model Manager]::DEBUG --> Debug Message [In Grey]
```

**When to use a custom logger?**

It is recommended to use a custom logger if your module is not part of
base InvokeAI, for example custom extensions or nodes.
2023-05-15 02:18:20 +12:00
blessedcoolant
c4681774a5 Merge branch 'main' into logging-facelift 2023-05-15 02:08:29 +12:00
Damian Stewart
050add58d2 fix getting conditionings 2023-05-14 12:20:54 +02:00
blessedcoolant
3d60c958c7 ui: commercial fixes (#3409)
minor commercial fixes
2023-05-14 20:44:06 +12:00
psychedelicious
f5df150097 feat(ui): add callback to signal app is ready
needed for commercial
2023-05-14 18:42:15 +10:00
psychedelicious
dac82adb5b fix(ui): make logo component non-selectable 2023-05-14 18:41:11 +10:00
Eugene
b72c9787a9 Revert "comment out customer_attention_context"
This reverts commit 8f8cd90787.

Due to NameError: name 'options' is not defined
2023-05-14 00:37:55 -04:00
Eugene Brodsky
2623941d91 Merge branch 'main' into lstein/bugfix/compel 2023-05-13 22:23:59 -04:00
psychedelicious
d3a7fea939 Revert "fix: Rework the layout of the parameters scrollbar"
This reverts commit 6f1fc397f7.
2023-05-14 11:45:08 +10:00
psychedelicious
5a7b687c84 fix(ui): add missing packages 2023-05-14 11:45:08 +10:00
psychedelicious
0020457fc7 fix(ui): tweak settings scheduler styling 2023-05-14 11:45:08 +10:00
psychedelicious
658b556544 feat(ui): IAICustomSelect v2, implement for scheduler & model 2023-05-14 11:45:08 +10:00
psychedelicious
37da0fc075 feat(ui): IAICustomSelect v1 2023-05-14 11:45:08 +10:00
psychedelicious
6d3e8507cc fix(ui): fix "no image" fallbacks 2023-05-14 11:45:08 +10:00
blessedcoolant
0e9470503f fix: Rework the layout of the parameters scrollbar 2023-05-14 11:45:08 +10:00
blessedcoolant
d2ebc6741b feat: Add setting to hide / display schedulers 2023-05-14 11:45:08 +10:00
blessedcoolant
026d3260b4 Add Heun Karras Scheduler 2023-05-14 11:45:08 +10:00
Lincoln Stein
1103ab2844 merge with main 2023-05-13 21:35:19 -04:00
Lincoln Stein
11b2076b46 implement change to web_config suggested by ebr 2023-05-13 21:33:19 -04:00
blessedcoolant
78533714e3 Merge branch 'main' into logging-facelift 2023-05-14 09:07:51 +12:00
blessedcoolant
691e1bf829 Make debug messages cyan/blue 2023-05-14 09:06:57 +12:00
Mary Hipp
47a088d685 rehydrate selectedImage URL when results and uploads are fetched 2023-05-13 09:48:38 +10:00
Eugene Brodsky
63db3fc22f reduce queue check interval to 0.5s 2023-05-12 17:54:26 -04:00
Eugene
ad0bb3f61a fix: queue error should not crash InvocationProcessor
1. If retrieving an item from the queue raises an exception, the
   InvocationProcessor thread crashes, but the API continues running in
   a non-functional state. This fixes that issue.
2. When there are no items in the queue, sleep 1 second before checking
   again.
3. Also ensures the thread doesn't crash if an exception is raised from
   the invoker, and emits the error event.

Intentionally using base Exceptions because for now we don't know which
specific exception to expect.

Fixes (sort of)? #3222
2023-05-12 17:54:26 -04:00
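
The pattern this commit describes (keep the processor thread alive when reading from the queue or running an invocation fails, and back off briefly when the queue is empty) looks roughly like the following. This is a minimal sketch, not the actual InvocationProcessor: the `process_queue`, `invoke` and `emit_error` names are placeholders for illustration.

```python
import queue
import threading
import time


def process_queue(work_queue: "queue.Queue[str]", invoke, emit_error) -> None:
    """Worker loop that keeps running even when a single item fails."""
    while True:
        try:
            item = work_queue.get_nowait()
        except queue.Empty:
            # Nothing to do; wait a moment before polling again.
            time.sleep(1.0)
            continue
        except Exception as e:  # base Exception: we don't yet know which specific one to expect
            emit_error(f"failed to read from queue: {e}")
            continue

        try:
            invoke(item)
        except Exception as e:  # keep the thread alive and surface the error as an event
            emit_error(f"invocation failed: {e}")


work_queue: "queue.Queue[str]" = queue.Queue()
work_queue.put("example graph execution")

worker = threading.Thread(
    target=process_queue,
    args=(work_queue, print, lambda msg: print("ERROR:", msg)),
    daemon=True,
)
worker.start()
time.sleep(0.5)  # give the worker a moment to drain the queue before the demo exits
```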
Kent Keirsey
8f8cd90787 comment out customer_attention_context 2023-05-12 13:59:00 -04:00
blessedcoolant
d796ea7bec feat: Logging Improvements 2023-05-13 02:13:49 +12:00
psychedelicious
e5b7dd63e9 fix(nodes): temporarily disable librarygraphs
- Do not retrieve graphs from the DB until we resolve the issue of changing node schemas causing the application to fail to start up due to invalid graphs
2023-05-12 22:33:49 +10:00
Eugene Brodsky
af060188bd Merge branch 'main' into lstein/bugfix/compel 2023-05-12 08:22:18 -04:00
blessedcoolant
4270e7ae25 Feat/ui/improve-language (#3399) 2023-05-12 23:32:50 +12:00
psychedelicious
60a565d7de feat(ui): use chakra menu for theme changer 2023-05-12 20:04:29 +10:00
psychedelicious
78cf70eaad fix(ui): tweak lang picker style 2023-05-12 20:04:10 +10:00
psychedelicious
eebaa50710 fix(ui): fix language picker tooltip 2023-05-12 19:52:21 +10:00
psychedelicious
7d582553f2 feat(ui): use chakra menu for language picker 2023-05-12 19:50:34 +10:00
psychedelicious
4d6eea7e81 feat(ui): store language in redux 2023-05-12 19:35:03 +10:00
blessedcoolant
f44593331d ui: misc fixes (#3398)
- do not show canvas intermediates in gallery
- do not show progress image in uploads gallery category
- use custom dark mode `localStorage` key (prevents collision with
commercial)
- use variable font (reduce bundle size by factor of 10)
- change how custom headers are used
- use style injection for building package
- fix tab icon sizes
2023-05-12 21:00:47 +12:00
psychedelicious
3d9ecbf3c7 fix(ui): add missing package 2023-05-12 18:55:59 +10:00
psychedelicious
032aa1d59c fix(ui): excise most zIndexs
our stacking contexts are accurate, `zIndex` isn't needed
2023-05-12 18:50:54 +10:00
psychedelicious
35e0863bdb fix(ui): fix tab icon sizes 2023-05-12 17:56:18 +10:00
psychedelicious
14070d674e build(ui): add style injection plugin
When building for package, CSS is all in JS files. When used as a package, it is then injected into the page. Bit of a hack to fix missing CSS in the commercial product.
2023-05-12 17:56:18 +10:00
psychedelicious
108ce06c62 feat(ui): change custom header to be a prop instead of children 2023-05-12 17:56:18 +10:00
psychedelicious
da364f3444 feat(ui): use variable font
reduces package build's CSS by an order of magnitude
2023-05-12 17:56:18 +10:00
psychedelicious
df5ba75c14 feat(ui): use custom dark mode localStorage key 2023-05-12 17:56:18 +10:00
psychedelicious
e4fb9cb33f chore(ui): regen api client 2023-05-12 17:56:18 +10:00
psychedelicious
65b527eb20 fix(ui): do not show progress images in uploads gallery category 2023-05-12 17:56:18 +10:00
psychedelicious
7dc9d18052 fix(ui): do not show intermediates uploads in gallery 2023-05-12 17:56:18 +10:00
blessedcoolant
5013a4b9f3 feat(ui): expand config options (#3393)
Individual SD features (e.g. Noise, Variation, etc.) may now be disabled - stuff
which is not ready for consumption in the commercial product.
2023-05-12 16:10:17 +12:00
blessedcoolant
f929359322 Merge branch 'main' into feat/ui/expand-config 2023-05-12 16:06:31 +12:00
blessedcoolant
6522c71971 feat(nodes): add RandomIntInvocation (#3390)
just outputs a single random int
2023-05-12 16:06:06 +12:00
blessedcoolant
9c1e65f3a3 Merge branch 'main' into feat/nodes/add-randomintinvocation 2023-05-12 15:56:41 +12:00
psychedelicious
ebec200ba6 Remove unused import 2023-05-12 13:56:02 +10:00
blessedcoolant
e559730b6e feat(nodes): add w/h to latents outputs (#3389)
This reduces the number of nodes needed when working with latents (i.e.
fewer plain integer value nodes).

Also corrects a few mistakes in the fields.
2023-05-12 15:40:46 +12:00
blessedcoolant
0acb8ed85d Merge branch 'main' into feat/nodes/add-w-h-latentsoutput 2023-05-12 15:23:29 +12:00
blessedcoolant
8c1c9cd702 Merge branch 'main' into feat/nodes/add-randomintinvocation 2023-05-12 15:21:49 +12:00
blessedcoolant
0ece4686aa fix(nodes): remove Optionals on ImageOutputs (#3392) 2023-05-12 15:21:42 +12:00
blessedcoolant
af95cef7f9 Merge branch 'main' into fix/nodes/fix-imageoutput-optionals 2023-05-12 15:08:19 +12:00
blessedcoolant
1eca7a918a feat(ui): make core parameters layout consistent (#3394) 2023-05-12 15:08:07 +12:00
blessedcoolant
9e6b958023 Merge branch 'main' into feat/ui/consistent-param-layout 2023-05-12 15:06:16 +12:00
blessedcoolant
f7b99d93ae docs(ui): update ui readme (#3396) 2023-05-12 15:05:55 +12:00
blessedcoolant
85d03dcd90 Merge branch 'main' into docs/ui/update-ui-readme 2023-05-12 15:04:12 +12:00
blessedcoolant
032555bcfe fix(model manager): fix string formatting error on model checksum timer (#3397)
The error occurs when loading a model for the first time (or, probably, after
removing its checksum file).
2023-05-12 15:04:01 +12:00
Kevin Turner
4caa1f19b2 fix(model manager): fix string formatting error on model checksum timer 2023-05-11 19:06:02 -07:00
Lincoln Stein
95d4bd3012 Merge branch 'lstein/bugfix/compel' of github.com:invoke-ai/InvokeAI into lstein/bugfix/compel 2023-05-11 21:13:29 -04:00
Lincoln Stein
037078c8ad make InvokeAIDiffuserComponent.custom_attention_control a classmethod 2023-05-11 21:13:18 -04:00
psychedelicious
6de2f66b50 docs(ui): update ui readme 2023-05-12 11:11:59 +10:00
blessedcoolant
cd7b248eda Add UniPC / Euler Karras / DPMPP_2 Karras / DEIS / DDPM Schedulers (#3388)
**Features:**

- Add UniPC Scheduler
- Add Euler Karras Scheduler
- Add DPMPP_2 Karras Scheduler
- Add DEIS Scheduler
- Add DDPM Scheduler

**Other:**

- Renamed schedulers to their accurate names: _a = Ancestral, _k =
Karras
- Fix scheduler not defaulting correctly to DDIM.
- Code split SCHEDULER_MAP so it's consistently loaded from the same
place.

**Known Bugs:**

- dpmpp_2s not working in img2img for denoising values < 0.8. This
seems to be an upstream bug; I've disabled it in img2img and canvas
until the upstream bug is fixed.
https://github.com/huggingface/diffusers/issues/1866
2023-05-12 09:06:22 +12:00
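
The SCHEDULER_MAP mentioned above is essentially a single shared mapping from UI-facing scheduler names to diffusers scheduler classes plus their extra constructor kwargs, with `_a` marking Ancestral and `_k` marking Karras variants. A minimal sketch, assuming the installed diffusers version exposes these classes and that the Karras variants accept `use_karras_sigmas`; the entries and names here are illustrative, not the project's exact map.

```python
from diffusers import (
    DDIMScheduler,
    DDPMScheduler,
    DEISMultistepScheduler,
    DPMSolverMultistepScheduler,
    EulerAncestralDiscreteScheduler,
    EulerDiscreteScheduler,
    HeunDiscreteScheduler,
    UniPCMultistepScheduler,
)

# One shared map: UI-facing name -> (scheduler class, extra kwargs).
SCHEDULER_MAP = {
    "ddim": (DDIMScheduler, {}),
    "ddpm": (DDPMScheduler, {}),
    "deis": (DEISMultistepScheduler, {}),
    "euler": (EulerDiscreteScheduler, {}),
    "euler_a": (EulerAncestralDiscreteScheduler, {}),
    "heun": (HeunDiscreteScheduler, {}),
    "unipc": (UniPCMultistepScheduler, {}),
    # Karras ("_k") variants reuse a class with use_karras_sigmas=True,
    # where the installed diffusers version supports that kwarg.
    "dpmpp_2m": (DPMSolverMultistepScheduler, {}),
    "dpmpp_2m_k": (DPMSolverMultistepScheduler, {"use_karras_sigmas": True}),
}


def build_scheduler(name: str, config: dict):
    """Instantiate the named scheduler from an existing scheduler config,
    falling back to DDIM when the name is unknown."""
    cls, extra_kwargs = SCHEDULER_MAP.get(name, SCHEDULER_MAP["ddim"])
    return cls.from_config(config, **extra_kwargs)


# Usage (given a loaded pipeline):
# pipeline.scheduler = build_scheduler("dpmpp_2m_k", pipeline.scheduler.config)
```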
blessedcoolant
6d8c077f4e Merge branch 'main' into unipc-sched 2023-05-12 05:59:13 +12:00
blessedcoolant
97127e560e Disable dpmpp_2s in img2img & unifiedCanvas
... until upstream bug is fixed.
2023-05-12 04:51:58 +12:00
Sergey Borisov
27dc07d95a Set zero eta by default (fix ddim scheduler error) 2023-05-11 18:49:27 +03:00
blessedcoolant
f7dc171c4f Rename default schedulers across the app 2023-05-12 03:44:20 +12:00
blessedcoolant
4b957edfec Add DDPM Scheduler 2023-05-12 03:18:34 +12:00
blessedcoolant
46ca7718d9 Add DEIS Scheduler 2023-05-12 03:10:30 +12:00
blessedcoolant
b928d7a6e6 Change scheduler names to be accurate
_a = Ancestral
_k = Karras
2023-05-12 02:59:43 +12:00
blessedcoolant
8a836247c8 Add DPMPP Single, Euler Karras and DPMPP2 Multi Karras Schedulers 2023-05-12 02:23:33 +12:00
Mary Hipp
95c3644564 fix it again 2023-05-12 00:10:39 +10:00
psychedelicious
799cd07174 feat(ui): make core parameters layout consistent 2023-05-11 22:45:53 +10:00
psychedelicious
9af385468d feat(ui): expand config options
Individual SD features (e.g. Noise, Variation, etc.) may now be disabled - stuff which is not ready for consumption in the commercial product.
2023-05-11 22:42:13 +10:00
blessedcoolant
3487388788 Merge branch 'unipc-sched' of https://github.com/blessedcoolant/InvokeAI into unipc-sched 2023-05-12 00:40:24 +12:00
blessedcoolant
9a383e456d Codesplit SCHEDULER_MAP for reusage 2023-05-12 00:40:03 +12:00
blessedcoolant
805f9f8f4a Merge branch 'main' into unipc-sched 2023-05-12 00:24:55 +12:00
blessedcoolant
52aa0c9bbd ui: miscellaneous fixes (#3386) 2023-05-12 00:21:29 +12:00
psychedelicious
7f5f4689cc fix(ui): clear progress image on cancel 2023-05-11 22:20:37 +10:00
psychedelicious
a3f81f4b98 fix(ui): fix results not displaying
- fix for commercial product
2023-05-11 22:20:37 +10:00
psychedelicious
15c59e606f feat(ui): add spinner to gallery progress images
- otherwise you may think you can click it but you cannot
2023-05-11 22:20:37 +10:00
psychedelicious
40d4cabecd feat(ui): improve image overlay 2023-05-11 22:20:37 +10:00
psychedelicious
3493c8119b feat(ui): improve image preview css and fallback 2023-05-11 22:20:30 +10:00
blessedcoolant
c1e7460d39 Merge branch 'main' into unipc-sched 2023-05-12 00:11:09 +12:00
blessedcoolant
3ffff023b2 Add missing key to scheduler_map
It was breaking because the sampler was not being reset, so each needs a key. Will simplify this later.
2023-05-12 00:08:50 +12:00
psychedelicious
f9384be59b fix(ui): fix init image causing overflow 2023-05-11 20:55:30 +10:00
psychedelicious
6cf308004a fix(nodes): remove Optionals on ImageOutputs 2023-05-11 20:54:57 +10:00
blessedcoolant
d1029138d2 Default to DDIM if scheduler is missing 2023-05-11 22:54:35 +12:00
blessedcoolant
06b5800d28 Add UniPC Scheduler 2023-05-11 22:43:18 +12:00
psychedelicious
483f2ccb56 feat(nodes): add RandomIntInvocation
just outputs a single random int
2023-05-11 20:33:32 +10:00
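
A rough idea of what a node like this looks like: a small invocation class with typed fields that returns a typed output. The base classes, field names (`IntOutput`, `a`) and the standalone `invoke()` signature below are simplified stand-ins for the real nodes API, shown only to illustrate the shape (including the `low`/`high` bounds that a follow-up commit adds).

```python
from random import randint

from pydantic import BaseModel, Field


class IntOutput(BaseModel):
    """Hypothetical stand-in for the nodes API's integer output type."""
    a: int = 0


class RandomIntInvocation(BaseModel):
    """Outputs a single random integer between `low` and `high` (inclusive)."""
    low: int = Field(default=0, description="Inclusive lower bound")
    high: int = Field(default=100, description="Inclusive upper bound")

    def invoke(self) -> IntOutput:
        return IntOutput(a=randint(self.low, self.high))


print(RandomIntInvocation(low=1, high=6).invoke().a)
```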
psychedelicious
93ced0bec6 feat(nodes): add w/h to latents outputs
This reduces the number of nodes needed when working with latents (i.e. fewer plain integer value nodes).

Also corrects a few mistakes in the fields.
2023-05-11 20:32:55 +10:00
psychedelicious
4333852c37 fix(nodes): fix missing context arg in LatentsToLatents 2023-05-11 19:28:42 +10:00
Eugene Brodsky
3baa230077 Merge branch 'main' into lstein/bugfix/compel 2023-05-11 00:50:45 -04:00
Eugene
9e594f9018 pad conditioning tensors to same length
fixes crash when prompt length is greater than 75 tokens
2023-05-11 00:34:15 -04:00
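
Padding two conditioning tensors to the same token length (so a long, multi-chunk prompt and a short negative prompt can be fed through the model together) amounts to padding the shorter embedding along its token dimension. A minimal torch sketch; zero padding here is a simplification of whatever padding value the real conditioning code uses.

```python
import torch


def pad_conditioning_to_same_length(cond: torch.Tensor, uncond: torch.Tensor):
    """Zero-pad the shorter of two [batch, tokens, dim] embeddings along the
    token dimension so both can be used together in the model call."""
    max_len = max(cond.shape[1], uncond.shape[1])

    def pad(t: torch.Tensor) -> torch.Tensor:
        if t.shape[1] == max_len:
            return t
        padding = torch.zeros(
            t.shape[0], max_len - t.shape[1], t.shape[2], dtype=t.dtype, device=t.device
        )
        return torch.cat([t, padding], dim=1)

    return pad(cond), pad(uncond)


# e.g. a two-chunk (154-token) prompt embedding and a single-chunk (77-token) negative
c, uc = pad_conditioning_to_same_length(torch.randn(1, 154, 768), torch.randn(1, 77, 768))
print(c.shape, uc.shape)  # torch.Size([1, 154, 768]) torch.Size([1, 154, 768])
```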
Mary Hipp Rogers
b0c41b4828 filter our websocket errors (#3382)
Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2023-05-11 01:58:40 +00:00
psychedelicious
e0d6946b6b fix(nodes): fix metadata test
- `progress_images` is no longer a parameter
- `seamless` needs to be reworked as a model config, removed as a param
2023-05-11 11:55:51 +10:00
psychedelicious
bf7ea8309f fix(ui): change tab to img2img when selected initial image 2023-05-11 11:55:51 +10:00
psychedelicious
54b65f725f fix(ui): rescale canvas on gallery resize 2023-05-11 11:55:51 +10:00
psychedelicious
8ef49c2640 fix(ui): fix canvas img2img if no init image selected 2023-05-11 11:55:51 +10:00
psychedelicious
f488b1a7f2 fix(nodes): fix usage of Optional 2023-05-11 11:55:51 +10:00
psychedelicious
d2edb7c402 build(ui): add yalc to gitignore 2023-05-11 11:55:51 +10:00
psychedelicious
f0a3f07b45 feat(ui): antialias progress images 2023-05-11 11:55:51 +10:00
psychedelicious
b42b630583 fix(ui): h/w disabled bug 2023-05-11 11:55:51 +10:00
psychedelicious
31a78d571b feat(ui): canvas antialiasing 2023-05-11 11:55:51 +10:00
psychedelicious
fdc2232ea0 feat(ui): progress images in gallery and viewer 2023-05-11 11:55:51 +10:00
psychedelicious
e94d0b2d40 fix(ui): fix janky gallery image delete 2023-05-11 11:55:51 +10:00
psychedelicious
75ccbaee9c fix(ui): disable invoke button as soon as pressed 2023-05-11 11:55:51 +10:00
psychedelicious
2848c8397c fix(ui): fix missing images on reload issue
- Mainly an issue for commercial due to incomplete metadata handling
2023-05-11 11:55:51 +10:00
psychedelicious
fe8b5193de feat(ui): half-baked use all parameters
until we have a better system for metadata, this will remain half-baked
2023-05-11 11:55:51 +10:00
psychedelicious
3d1470399c fix(ui): fix metadataviewer styling 2023-05-11 11:55:51 +10:00
psychedelicious
fcf9c63049 fix(ui): fix copying image link 2023-05-11 11:55:51 +10:00
blessedcoolant
7bfb5640ad cleanup(ui): Remove unused vars + minor bug fixes 2023-05-11 11:55:51 +10:00
psychedelicious
15e57e3a3d fix(ui): duplicate gallery in nodes editor 2023-05-11 11:55:51 +10:00
psychedelicious
279468c0e8 feat(ui): restore tab names 2023-05-11 11:55:51 +10:00
psychedelicious
c565812723 feat(ui): organize parameters panels 2023-05-11 11:55:51 +10:00
psychedelicious
ec6c8e2a38 feat(ui): wip layout 2023-05-11 11:55:51 +10:00
psychedelicious
77f2690711 fix(ui): remove duplicate gallery 2023-05-11 11:55:51 +10:00
psychedelicious
c4b3a24ed7 feat(ui): revert tabs to txt2img/img2img 2023-05-11 11:55:51 +10:00
psychedelicious
33c69359c2 feat(ui): add IAICollapse for parameters 2023-05-11 11:55:51 +10:00
psychedelicious
864f4bb4af feat(ui): wip img2img layouting 2023-05-11 11:55:51 +10:00
psychedelicious
5365f42a04 feat(ui): wip layouting 2023-05-11 11:55:51 +10:00
psychedelicious
3dc60254b9 feat(ui): support collect nodes 2023-05-11 11:55:51 +10:00
psychedelicious
027a8562d7 fix(ui): default node model selection 2023-05-11 11:55:51 +10:00
psychedelicious
34f3a0f0e3 feat(nodes): improve default model choosing output 2023-05-11 11:55:51 +10:00
psychedelicious
d0bac1675e fix(nodes): fix ImageOutput Config 2023-05-11 11:55:51 +10:00
psychedelicious
4e56c962f4 fix(nodes): fix infill docstrings 2023-05-11 11:55:51 +10:00
psychedelicious
4ef0e43759 fix(nodes): remove dataURL invocation 2023-05-11 11:55:51 +10:00
psychedelicious
6945d10297 chore(ui): regen api client 2023-05-11 11:55:51 +10:00
psychedelicious
4d6cef7ac8 fix(ui): fix types bug 2023-05-11 11:55:51 +10:00
psychedelicious
a7786d5ff2 fix(nodes): restore seamless to TextToLatents 2023-05-11 11:55:51 +10:00
psychedelicious
6c1de975d9 feat(nodes): add infill nodes 2023-05-11 11:55:51 +10:00
psychedelicious
a1079e455a feat(nodes): cleanup unused params, seed generation 2023-05-11 11:55:51 +10:00
psychedelicious
5457c7f069 fix(ui): use lodash-es instead of lodash 2023-05-11 11:55:51 +10:00
psychedelicious
b8c1a3f96c chore(ui): remove unused babelrc & npm script 2023-05-11 11:55:51 +10:00
psychedelicious
cee8e85f76 chore(ui): bump redux-remember 2023-05-11 11:55:51 +10:00
psychedelicious
09f166577e feat(ui): migrate to redux-remember 2023-05-11 11:55:51 +10:00
psychedelicious
bcc21531fb feat(ui): update for InfillInvocation 2023-05-11 11:55:51 +10:00
psychedelicious
da4eacdffe feat(nodes): add InfillInvocation 2023-05-11 11:55:51 +10:00
psychedelicious
6102e560ba feat(nodes): add LatentsToImage node (VAE encode) 2023-05-11 11:55:51 +10:00
psychedelicious
ff3aa57117 feat(ui): fix endless gallery scroll for single col layout 2023-05-11 11:55:51 +10:00
psychedelicious
49db6f4fac fix(nodes): fix trivial typing issues 2023-05-11 11:55:51 +10:00
psychedelicious
20f6a597ab fix(nodes): add MetadataColorField 2023-05-11 11:55:51 +10:00
psychedelicious
04c453721c feat(ui): tweak gallery loading indicator 2023-05-11 11:55:51 +10:00
psychedelicious
350ffecc1f feat(ui): endless gallery scroll 2023-05-11 11:55:51 +10:00
psychedelicious
b0557aa16b fix(ui): fix currentimagepreview not working for uploads 2023-05-11 11:55:51 +10:00
psychedelicious
1c9429a6ea feat(ui): wip canvas 2023-05-11 11:55:51 +10:00
psychedelicious
206e6b1730 feat(nodes): wip inpaint node 2023-05-11 11:55:51 +10:00
psychedelicious
357cee2849 fix(nodes): fix cfg scale min value 2023-05-11 11:55:51 +10:00
psychedelicious
0b49997bb6 feat(nodes): allow uploaded images to be any ImageType (eg intermediates) 2023-05-11 11:55:51 +10:00
psychedelicious
5e09dd380d Revert "feat(nodes): free gpu mem after invocation"
This reverts commit 99cb33f477306d5dcc455efe04053ce41b8d85bd.
2023-05-11 11:55:51 +10:00
psychedelicious
c7303adb0d feat(ui): fix generation mode logic 2023-05-11 11:55:51 +10:00
psychedelicious
ed1f096a6f feat(ui): wip canvas migration 4 2023-05-11 11:55:51 +10:00
psychedelicious
6ab5d28cf3 feat(ui): wip canvas migration, createListenerMiddleware 2023-05-11 11:55:51 +10:00
psychedelicious
a75148cb16 feat(nodes): free gpu mem after invocation 2023-05-11 11:55:51 +10:00
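
Freeing GPU memory after an invocation is commonly done with standard torch calls like the ones below. Note that this particular feature was reverted in a later commit (visible further up this log), so treat this only as an illustration of the technique, not the project's final behaviour.

```python
import gc

import torch


def free_gpu_memory() -> None:
    """Release Python garbage and return cached CUDA blocks to the driver."""
    gc.collect()
    if torch.cuda.is_available():
        torch.cuda.empty_cache()


free_gpu_memory()
```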
psychedelicious
f7bbc4004a feat(ui): wip canvas nodes migration 3 2023-05-11 11:55:51 +10:00
psychedelicious
cee21ca082 feat(ui): wip canvas nodes migration 2 2023-05-11 11:55:51 +10:00
psychedelicious
08ec12b391 feat(ui): wip canvas nodes migration 2023-05-11 11:55:51 +10:00
psychedelicious
ff5e2a9a8c chore(ui): regen api client 2023-05-11 11:55:51 +10:00
psychedelicious
e0b9b5cc6c feat(nodes): add dataURL to image node 2023-05-11 11:55:51 +10:00
Lincoln Stein
aca4770481 fixed compel.py as requested 2023-05-10 21:40:44 -04:00
Lincoln Stein
5d5157fc65 make conditioning.py work with compel 1.1.5 2023-05-10 18:08:33 -04:00
Mary Hipp Rogers
fb6ef61a4d change path for locale (#3381)
Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2023-05-10 10:30:17 -04:00
psychedelicious
ee24ad7b13 fix(nodes): fix broken docs routes 2023-05-10 08:28:17 -04:00
psychedelicious
f8e90ba3f0 feat(nodes): add ui build static route 2023-05-10 08:28:17 -04:00
blessedcoolant
ad0b70ca23 fix(nodes): fix #3306 (#3377)
Check if the cache has the object before deleting it.
2023-05-10 17:39:45 +12:00
psychedelicious
7dfa135b2c fix(nodes): fix #3306
Check if the cache has the object before deleting it.
2023-05-10 15:29:10 +10:00
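
The guard described above is just a containment check before the delete. A minimal sketch with a stand-in cache class; the real cache lives in the nodes item-storage code.

```python
class SimpleObjectCache:
    """Minimal stand-in for the item cache touched by this fix."""

    def __init__(self) -> None:
        self._items: dict[str, object] = {}

    def set(self, key: str, value: object) -> None:
        self._items[key] = value

    def delete(self, key: str) -> None:
        # Guard against deleting a key that was never cached (or was already
        # evicted), which previously raised and broke the caller.
        if key in self._items:
            del self._items[key]


cache = SimpleObjectCache()
cache.delete("missing-key")  # no KeyError
```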
Lincoln Stein
beeaa05658 Update dependencies to get deterministic image generation behavior (main branch) (#3354)
This PR updates to `xformers ~= 0.0.19` and `torch ~= 2.0.0`, which
together seem to solve the non-deterministic image generation issue that
was previously seen with earlier versions of `xformers`.
2023-05-10 00:10:51 -04:00
Lincoln Stein
6b6d654f60 Merge branch 'main' into enhance/update-dependencies 2023-05-09 23:56:46 -04:00
Mary Hipp
853c83d0c2 surface detail field for 403 errors 2023-05-09 12:40:19 +10:00
Mary Hipp
1809990ed4 if backend returns an error, show it in toast 2023-05-09 11:09:36 +10:00
Eugene
79d49853d2 use websocket transport first for socket.io 2023-05-09 11:01:02 +10:00
Lincoln Stein
1f608d3743 add v2.3 branch to push trigger (#3363)
Update the push trigger with the branch which should deploy the docs,
also bring over the updates to the workflow from the v2.3 branch and:

- remove main and development branch from trigger
  - they would fail without the updated toml
- cache pip environment
- update install method (`pip install ".[docs]"`)
2023-05-08 16:26:06 -04:00
mauwii
df024dd982 bring changes from v2.3 branch over
- remove main and development branch from trigger
  - they would fail without the updated toml
- cache pip environment
- update install method
2023-05-08 21:50:00 +02:00
mauwii
45da85765c add v2.3 branch to push trigger 2023-05-08 21:10:20 +02:00
Lincoln Stein
bd0ad59c27 bump compel version 2023-05-07 15:22:46 -04:00
Lincoln Stein
cce40acba5 Merge branch 'enhance/update-dependencies' of github.com:invoke-ai/InvokeAI into enhance/update-dependencies 2023-05-07 15:22:31 -04:00
Lincoln Stein
bc9491ab69 bump compel version 2023-05-07 15:21:24 -04:00
Lincoln Stein
f28632980d Merge branch 'main' into lstein/global-configuration 2023-05-07 07:52:46 -04:00
blessedcoolant
b909bac0dc Merge branch 'main' into enhance/update-dependencies 2023-05-07 21:44:43 +12:00
blessedcoolant
8618e41b32 Deploy documentation from v2.3 branch rather than main (#3356)
This PR instructs github to deploy documentation pages from the v2.3
branch.
2023-05-07 21:43:44 +12:00
blessedcoolant
4687f94141 Merge branch 'main' into actions/mkdocs-deploy 2023-05-07 21:43:18 +12:00
psychedelicious
440912dcff feat(ui): make base log level debug 2023-05-07 15:36:37 +10:00
psychedelicious
8b87a26e7e feat(ui): support collect nodes 2023-05-07 15:36:37 +10:00
Lincoln Stein
44ae93df3e Deploy documentation from v2.3 branch rather than main 2023-05-06 23:56:04 -04:00
Lincoln Stein
42d938fda5 remove debugging statement 2023-05-06 23:54:11 -04:00
Lincoln Stein
8f80ba9520 update dependencies to get deterministic image generation 2023-05-06 23:09:24 -04:00
Lincoln Stein
25ce47c44f remove reference to globals in compel.py 2023-05-06 22:49:35 -04:00
Lincoln Stein
afd2e32092 Merge branch 'main' into lstein/global-configuration 2023-05-06 21:20:25 -04:00
Lincoln Stein
2b213da967 add -y to the automated install instructions (#3349)
Hi there, love the project! I noticed a small typo when going over the
install process.

When copying the automated install instructions from the docs into a
terminal, the line to install the Python packages failed as it was
missing the `-y` flag.
2023-05-06 13:34:37 -04:00
Lincoln Stein
e91e1eb9aa Merge branch 'main' into patch-1 2023-05-06 13:34:12 -04:00
Lincoln Stein
b24129fb3e Fix logger namespace clash in web server (#3344)
This PR fixes a bug that appeared in the legacy web server after the
logging PR was merged.

closes #3343
2023-05-06 08:35:13 -04:00
Lincoln Stein
350b1421bb Merge branch 'main' into lstein/bugfix/logger-namespace 2023-05-06 08:14:44 -04:00
Steve Martinelli
f01c79a94f add -y to the automated install instructions
When copying the automated install instructions from the docs into a terminal, the line to install the Python packages failed as it was missing the `-y` flag.
2023-05-05 21:28:00 -04:00
blessedcoolant
463f6352ce Add compel node and conditioning field type (#3265)
Done as I said in the title, but need to test (and understand) how the CLI works,
as previously it used a single prompt and now it's positive and negative.
2023-05-06 13:05:04 +12:00
StAlKeR7779
a80fe05e23 Rename compel node 2023-05-05 21:30:16 +03:00
StAlKeR7779
58d7833c5c Review changes 2023-05-05 21:09:29 +03:00
StAlKeR7779
5012f61599 Separate conditionings back to positive and negative 2023-05-05 15:47:51 +03:00
blessedcoolant
85c33823c3 Merge branch 'main' into feat/compel_node 2023-05-05 14:41:45 +12:00
blessedcoolant
c83a112669 Fix inpaint node (#3284)
Seems like this is the only change needed for the existing inpaint code
to work as a node. Kyle said on Discord that inpaint shouldn't be a
node, so feel free to just reject this if this code is going to be gone
soon.
2023-05-05 14:41:13 +12:00
psychedelicious
e04ada1319 Merge branch 'main' into patch-1 2023-05-05 10:38:45 +10:00
Lincoln Stein
d866dcb3d2 close #3343 2023-05-04 20:30:59 -04:00
StAlKeR7779
81ec476f3a Revert seed field addition 2023-05-04 21:50:40 +03:00
StAlKeR7779
1e6adf0a06 Fix default graph and test 2023-05-04 21:14:31 +03:00
StAlKeR7779
7d221e2518 Combine conditioning to one field(better fits for multiple type conditioning like perp-neg) 2023-05-04 20:14:22 +03:00
Lincoln Stein
742ed19d66 add missing config module 2023-05-04 01:20:30 -04:00
Lincoln Stein
29c2ada23c add test for the configuration module 2023-05-04 00:45:52 -04:00
Lincoln Stein
e4196bbe5b adjust non-app modules to use new config system 2023-05-04 00:43:51 -04:00
Lincoln Stein
15ffb53e59 remove globals, args, generate and the legacy CLI 2023-05-03 23:36:51 -04:00
Lincoln Stein
90054ddf0d use InvokeAISettings for app-wide configuration 2023-05-03 22:30:30 -04:00
StAlKeR7779
56d3cbead0 Merge branch 'main' into feat/compel_node 2023-05-04 00:28:33 +03:00
Lincoln Stein
5e8c97f1ba [Enhancement] Regularize logging messages (#3176)
# Intro

This commit adds invokeai.backend.util.logging, which provides support
for formatted console and logfile messages that follow the status
reporting conventions of earlier InvokeAI versions:

```
 ### A critical error
 *** A non-fatal error
 ** A warning
  >> Informational message
        | Debugging message
```

Internally, the invokeai logging module creates a new default logger
named "invokeai" so that its logging does not interfere with other
modules' use of the vanilla logging module. So `logging.error("foo")`
will go through the regular logging path and not add InvokeAI's
informational message decorations, while `logger.error("foo")` (with
`logger` imported as shown below) will add the decorations.
    
# Usage:

This is a thin wrapper around the standard Python logging module. It can
be used in several ways:


## Module-level logging style
 
This style logs everything through a single default logging object and
is identical to using Python's `logging` module. The commonly-used
module-level logging functions are implemented as simple pass-thrus to
logging:
    
```
import logging

import invokeai.backend.util.logging as logger

logger.debug('this is a debugging message')
logger.info('this is an informational message')
logger.log(logging.CRITICAL, 'get out of dodge')

logger.disable(level=logging.INFO)
logger.basicConfig(filename='/var/log/invokeai.log')
logger.error('this will be logged to console and to invokeai.log')
```    

Internally these functions all go through a custom logging object named
"invokeai". You can access it to perform additional customization in
either of these ways:

```
logger = logger.getLogger()
logger = logger.getLogger('invokeai')
```
    
## Object-oriented style

For more control, the logging module's object-oriented logging style is
also supported. The API is identical to the vanilla logging usage. In
fact, the only thing that has changed is that the getLogger() method
adds a custom formatter to the log messages.
    
```
     import logging
     from invokeai.backend.util.logging import InvokeAILogger
    
     logger = InvokeAILogger.getLogger(__name__)
     fh = logging.FileHandler('/var/invokeai.log')
     logger.addHandler(fh)
     logger.critical('this will be logged to both the console and the log file')
```

## Within the nodes API

From within the nodes API, the logger module is stored in the `logger`
slot of InvocationServices during dependency initialization. For
example, in a router, the idiom is:

```
from ..dependencies import ApiDependencies
logger = ApiDependencies.invoker.services.logger
logger.warning('uh oh')
```

Currently, to change the logger used by the API, one must change the
logging module passed to `ApiDependencies.initialize()` in `api_app.py`.
However, this will eventually be replaced with a method to select the
preferred logging module using the configuration file (dependent on
merging of PR #3221)
2023-05-03 15:00:05 -04:00
Lincoln Stein
4687ad4ed6 Merge branch 'main' into enhance/invokeai-logs 2023-05-03 13:36:06 -04:00
psychedelicious
994b247f8e feat(ui): do not persist gallery images
- I've sorted out the issues that make *not* persisting troublesome; these will be rolled out with canvas
- Also realized that persisting gallery images very quickly fills up localStorage, so we can't really do it anyway
2023-05-03 23:41:48 +10:00
psychedelicious
0419f50ab0 chore(ui): bump react-virtuoso
- Resolves an issue with gallery not rendering all items
2023-05-02 20:15:29 +10:00
psychedelicious
f9f40adcdc fix(nodes): fix t2i graph
Removed width and height edges.
2023-05-02 13:11:28 +10:00
psychedelicious
3264d30b44 feat(nodes): allow multiples of 8 for dimensions 2023-05-02 12:01:52 +10:00
psychedelicious
4d885653e9 feat(ui): tidy 2023-05-02 11:27:08 +10:00
psychedelicious
475b6bef53 feat(ui): use windowing for gallery
vastly improves the gallery performance when many images are loaded.

- `react-virtuoso` to do the virtualized list
- `overlayscrollbars` for a scrollbar
2023-05-02 11:27:08 +10:00
Eugene
d39de0ad38 fix(nodes): fix duplicate Invoker start/stop events 2023-05-01 18:24:37 -04:00
Eugene
d14a7d756e nodes-api: enforce single thread for the processor
On hyperthreaded CPUs we get two threads operating on the queue by
default on each core. This causes two threads to process queue items.
This results in PyTorch errors and sometimes generates garbage.

Locking this to a single thread makes sense because we are bound by the
number of GPUs in the system, not by CPU cores. And to parallelize
across GPUs we should just start multiple processors (and use async
instead of threading).

Fixes #3289
2023-05-01 18:24:37 -04:00
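
Bounding the processor to a single worker means queue items are executed strictly one at a time, regardless of how many cores or hyperthreads are available. One way to express that in Python is a one-worker executor; the names below are illustrative rather than the actual processor code.

```python
from concurrent.futures import ThreadPoolExecutor

# A single worker guarantees queue items are processed one at a time, since
# throughput is bounded by the GPU rather than by CPU cores.
invocation_executor = ThreadPoolExecutor(max_workers=1, thread_name_prefix="invoker")


def run_invocation(item: str) -> str:
    return f"processed {item}"


futures = [invocation_executor.submit(run_invocation, f"item-{i}") for i in range(3)]
print([f.result() for f in futures])
invocation_executor.shutdown()
```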
Lincoln Stein
b050c1bb8f use logger in ApiDependencies 2023-05-01 16:27:44 -04:00
psychedelicious
276dfc591b feat(ui): disable w/h when img2img & not fit 2023-05-01 17:28:22 +10:00
psychedelicious
b49d76ebee feat(nodes): fix image to image fit param
it was ignored previously.
2023-05-01 17:28:22 +10:00
psychedelicious
a6be44789b fix(ui): progress image rerender, checkbox 2023-05-01 11:16:49 +10:00
blessedcoolant
a4313c26cb fix: Do not hide Preview button & color code it 2023-05-01 11:16:49 +10:00
blessedcoolant
d4b250d509 feat(ui): Add auto show progress previews setting 2023-05-01 11:16:49 +10:00
psychedelicious
29743a9e02 fix(ui): next/prev image buttons 2023-05-01 11:16:49 +10:00
psychedelicious
fecb77e344 feat(ui): dndkit --> rnd for draggable 2023-05-01 11:16:49 +10:00
psychedelicious
779671753d feat(ui): tweak floating preview 2023-05-01 11:16:49 +10:00
psychedelicious
d5e152b35e fix(ui): ignore events after canceling session 2023-05-01 11:16:49 +10:00
psychedelicious
270657a62c feat(ui): gallery & progress image refactor 2023-05-01 11:16:49 +10:00
psychedelicious
3601b9c860 feat(ui): revamp status indicator 2023-05-01 11:16:49 +10:00
psychedelicious
c8fe12cd91 feat(ui): init image tweaks 2023-05-01 11:16:49 +10:00
psychedelicious
deae5fbaec fix(ui): socket event types 2023-05-01 11:16:49 +10:00
psychedelicious
5b558af2b3 fix(ui): fix metadata viewer scroll 2023-05-01 11:16:49 +10:00
psychedelicious
4150d5306f chore(ui): regen api client 2023-05-01 11:16:49 +10:00
psychedelicious
8c2e4700f9 feat(ui): persist gallery state 2023-05-01 11:16:49 +10:00
psychedelicious
adaecada20 fix(ui): fix current image seed button 2023-05-01 11:16:49 +10:00
psychedelicious
258895bcc9 feat(ui): begin dismantling old sio stuff, fix recall seed/prompt/init
- still need to fix up metadataviewer's recall features
2023-05-01 11:16:49 +10:00
psychedelicious
2eb7c25bae feat(ui): clean up and simplify socketio middleware 2023-05-01 11:16:49 +10:00
psychedelicious
2e4e9434c1 fix(ui): fix initial image for uploads 2023-05-01 11:16:49 +10:00
psychedelicious
0cad204e74 feat(ui): add error handling for linear graph generation 2023-05-01 11:16:49 +10:00
Lincoln Stein
0bc2edc044 Merge branch 'main' into enhance/invokeai-logs 2023-04-29 11:00:18 -04:00
Lincoln Stein
16488e7db8 fix tests 2023-04-29 10:59:50 -04:00
Lincoln Stein
974841926d logger is an interchangeable service 2023-04-29 10:48:50 -04:00
Lincoln Stein
8db20e0d95 rename log to logger throughout 2023-04-29 09:43:40 -04:00
psychedelicious
d00d29d6b5 feat(ui): update settings modal 2023-04-29 18:28:19 +10:00
psychedelicious
dc976cd665 feat(ui): add switch for logging 2023-04-29 18:28:19 +10:00
psychedelicious
6d6b986a66 feat(ui): remove Console and redux logging state 2023-04-29 18:28:19 +10:00
psychedelicious
bffdede0fa feat(ui): improve log messages 2023-04-29 18:28:19 +10:00
psychedelicious
a4c258e9ec feat(ui): add roarr logger 2023-04-29 18:28:19 +10:00
psychedelicious
8d837558ac fix(ui): fix spelling of systemPersistDenylist.ts 2023-04-29 18:28:19 +10:00
psychedelicious
e673ed08ec fix(ui): restore missing chakra-cli package
(amending to try and get the workflow to run)
2023-04-29 12:21:11 +10:00
Lincoln Stein
f0e07bff5a fix bad logging path in config script 2023-04-28 15:39:00 -04:00
Lincoln Stein
3ec06a1fc3 Merge branch 'main' into enhance/invokeai-logs 2023-04-28 10:10:33 -04:00
Lincoln Stein
6b79e2b407 Merge branch 'main' into enhance/invokeai-logs
- resolve conflicts
- remove unused code identified by pyflakes
2023-04-28 10:09:46 -04:00
blessedcoolant
0eed9dbc44 fix(ui): fix packaging import issue (#3294)
I accidentally merged a broken #3292 (merge conflicts incorrectly
resolved). Fixing it
2023-04-29 00:39:56 +12:00
psychedelicious
53c7832fd1 fix(ui): fix packaging import issue 2023-04-28 22:37:51 +10:00
psychedelicious
ca1cc0e2c2 feat(ui): rerender mitigation sweep 2023-04-28 22:00:18 +10:00
psychedelicious
5d8728c7ef feat(ui): persist socket session ids and re-sub on connect 2023-04-28 22:00:18 +10:00
psychedelicious
a8cec4c7e6 fix(ui): improve schema parsing error handling 2023-04-28 22:00:18 +10:00
psychedelicious
2b5ccdc55f build(ui): treeshake lodash via lodash-es 2023-04-28 21:56:43 +10:00
psychedelicious
d92d5b5258 build(ui): fix types exports 2023-04-28 21:56:43 +10:00
psychedelicious
a591184d2a build(ui): remove unneeded types file 2023-04-28 21:56:43 +10:00
psychedelicious
ee881e4c78 build(ui): add react/react-dom peer deps 2023-04-28 21:56:43 +10:00
psychedelicious
61fbb24e36 feat(ui): set up for packaging 2023-04-28 21:56:43 +10:00
psychedelicious
d582949488 feat(ui): rename main app components 2023-04-28 21:56:43 +10:00
psychedelicious
de574eb4d9 chore(ui): upgrade all packages 2023-04-28 21:56:43 +10:00
psychedelicious
bfd90968f1 chore(ui): tidy npm structure 2023-04-28 21:56:43 +10:00
psychedelicious
4a924c9b54 feat(nodes): hardcode resize latents downsampling 2023-04-28 09:52:09 +10:00
psychedelicious
0453d60c64 fix(nodes): fix slatents and rlatents bugs 2023-04-28 09:52:09 +10:00
psychedelicious
c4f4f8b1b8 fix(nodes): remove unused width and height from t2l 2023-04-28 09:52:09 +10:00
psychedelicious
3e80eaa342 feat(nodes): add resize and scale latents nodes
- this resize/scale latents functionality is what is needed for hires fix
- also remove unused `seed` from t2l
2023-04-28 09:52:09 +10:00
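
Resizing or scaling latents (the building block of a hires-fix style workflow, where a low-resolution latent is upscaled before a second denoising pass) reduces to interpolating over the spatial dimensions of the latent tensor. A minimal sketch assuming the usual Stable Diffusion VAE downsampling factor of 8; the function names are illustrative, not the actual nodes.

```python
import torch
import torch.nn.functional as F


def resize_latents(latents: torch.Tensor, width: int, height: int, downsample: int = 8) -> torch.Tensor:
    """Resize a [batch, channels, h, w] latent tensor to the latent size
    matching the requested pixel width/height."""
    return F.interpolate(latents, size=(height // downsample, width // downsample), mode="bilinear")


def scale_latents(latents: torch.Tensor, scale_factor: float) -> torch.Tensor:
    """Scale a latent tensor by a factor, e.g. 2.0 for a hires pass."""
    return F.interpolate(latents, scale_factor=scale_factor, mode="bilinear")


low_res = torch.randn(1, 4, 64, 64)             # latents for a 512x512 image
print(scale_latents(low_res, 2.0).shape)        # torch.Size([1, 4, 128, 128])
print(resize_latents(low_res, 768, 512).shape)  # torch.Size([1, 4, 64, 96])
```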
Mary Hipp
00a0cb3403 fix(ui): update exported types 2023-04-28 09:20:09 +10:00
Mary Hipp
ea93cad5ff fix(ui): update to match change in route params 2023-04-28 09:19:03 +10:00
Mary Hipp
4453a0d20d feat(ui): remove toasts for network bc we have status to tell us 2023-04-28 09:18:19 +10:00
Mary Hipp Rogers
1e837e3c9d fix(ui): add formatted neg prompt for linear nodes (#3282)
* fix(ui): add formatted neg prompt for linear nodes

* remove conditional

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2023-04-27 15:05:35 -04:00
Andy Luhrs
0f95f7cea3 Fix inpaint node
Seems like this is the only change needed for the existing inpaint node to work.
2023-04-27 11:03:07 -07:00
StAlKeR7779
0b0068ab86 Merge branch 'main' into feat/compel_node 2023-04-27 14:53:10 +03:00
psychedelicious
31c7fa833e feat(ui): simplify image display 2023-04-27 14:10:44 +10:00
blessedcoolant
db16ca0079 fix(ui): Current Image Buttons position 2023-04-27 14:10:44 +10:00
psychedelicious
a824f47bc6 fix(nodes): use absolute path when deleting 2023-04-27 14:10:44 +10:00
psychedelicious
99392debe8 feat(ui): refactor DeleteImageModal
- refactor the component
- use translations
- add config for systems where deleted images are not sent to bin (only changes the messaging)
2023-04-27 14:10:44 +10:00
psychedelicious
0cc739afc8 feat(nodes): use send2trash to delete images, fix thumbnail_path 2023-04-27 14:10:44 +10:00
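
Using `send2trash` rather than a hard delete moves files to the OS recycle bin/trash where one exists. A minimal sketch; the paths and the `delete_image` helper are made up for illustration.

```python
from pathlib import Path

from send2trash import send2trash


def delete_image(image_path: Path, thumbnail_path: Path) -> None:
    """Move an image and its thumbnail to the OS trash instead of permanently
    deleting them."""
    for path in (image_path, thumbnail_path):
        if path.exists():
            send2trash(str(path))


delete_image(Path("outputs/images/example.png"), Path("outputs/thumbnails/example.webp"))
```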
psychedelicious
0ab62b0343 feat(ui): "blacklist" -> "denylist" 2023-04-27 14:10:44 +10:00
psychedelicious
75d25dd5cc feat(ui): restore image deletion functionality 2023-04-27 14:10:44 +10:00
psychedelicious
2e54da13d8 chore(ui): regen api client 2023-04-27 14:10:44 +10:00
psychedelicious
f34f416bf5 fix(ui): handle floats in NumberInputFieldComponent 2023-04-27 14:10:44 +10:00
psychedelicious
021c63891d fix(ui): fix config types and merging 2023-04-27 14:10:44 +10:00
blessedcoolant
a968862e6b feat(ui): Move img2img badge info to top right 2023-04-27 14:10:44 +10:00
blessedcoolant
a08189d457 ui: Match styling of img2img to the rest of the accordions 2023-04-27 14:10:44 +10:00
psychedelicious
0a936696c3 feat(ui): add config slice, configuration default values 2023-04-27 14:10:44 +10:00
blessedcoolant
55e33eaf4c docs: add note on README about migration (#3277) 2023-04-27 13:17:43 +12:00
psychedelicious
3da5fb223f docs: add note on README about migration 2023-04-27 11:05:32 +10:00
Mary Hipp Rogers
a3c5a664e5 fix(ui): update UI to handle uploads with alternate URLs (#3274) 2023-04-26 07:14:08 -07:00
Mary Hipp
b638fb2f30 fix(ui): use name in response instead of parsing out of URL to handle alternative URLs 2023-04-26 09:48:16 -04:00
psychedelicious
c1b10b2222 feat(ui): open in new tab @ hoverable image 2023-04-26 12:40:10 +10:00
psychedelicious
bee29714d9 fix(ui): fix templates not refreshing correctly 2023-04-26 12:40:10 +10:00
psychedelicious
d40d5276dd feat(ui): wip img2img ui 2023-04-26 12:40:10 +10:00
psychedelicious
568f0aad71 feat(ui): wip img2img ui 2023-04-26 12:40:10 +10:00
psychedelicious
38474fa9d4 feat(ui): add lil spinner to loading 2023-04-26 12:17:01 +10:00
psychedelicious
f7f974a28b fix(ui): fix inverted conditional 2023-04-26 12:17:01 +10:00
psychedelicious
3c150b384c fix(ui): fix export of ApplicationFeature type 2023-04-26 12:17:01 +10:00
psychedelicious
65816049ba feat(ui): add secret loading screen override button 2023-04-26 12:17:01 +10:00
psychedelicious
c1c881ded5 feat(ui): support disabledFeatures, add nicer loading
- `disabledParametersPanels` -> `disabledFeatures`
- handle disabling `faceRestore`, `upscaling`, `lightbox`, `modelManager` and OSS header links/buttons
- wait until models are loaded to hide loading screen
- also wait until schema is parsed if `nodes` is an enabled tab
2023-04-26 12:17:01 +10:00
maryhipp
82c4dd8b86 fix(api): return same URL on location header 2023-04-26 06:29:30 +10:00
psychedelicious
711d09a107 feat(nodes): add get_uri method to image storage
- gets the external URI of an image
2023-04-26 06:29:30 +10:00
psychedelicious
74013b6611 fix(nodes): address feedback 2023-04-26 06:29:30 +10:00
psychedelicious
790f399986 feat(nodes): tidy images routes 2023-04-26 06:29:30 +10:00
psychedelicious
73cdd36594 feat(nodes): raise HTTPExceptions instead of returning Responses 2023-04-26 06:29:30 +10:00
psychedelicious
50ac3eb28d feat(nodes): add delete_image & delete_images routes 2023-04-26 06:29:30 +10:00
StAlKeR7779
d753cff91a Undo debug message 2023-04-25 13:18:50 +03:00
StAlKeR7779
89f1909e4b Update default graph 2023-04-25 13:11:50 +03:00
StAlKeR7779
37916a22ad Use textual inversion manager from pipeline, remove extra conditioning info for uc 2023-04-25 12:53:13 +03:00
blessedcoolant
76e5d0595d fix(ui): fix no progress images when gallery is empty (#3268)
When the gallery was empty (and there was therefore no selected image), no
progress images were displayed.

- fix by correcting the logic in CurrentImageDisplay
- also fix app crash introduced by fixing the first bug
2023-04-25 17:48:24 +12:00
psychedelicious
f03cb8f134 fix(ui): fix no progress images when gallery is empty 2023-04-25 15:00:54 +10:00
Lincoln Stein
c2a0e8afc3 [Bugfix] prevent cli crash (#3132)
Prevent legacy CLI crash caused by removal of convert option
    
- Compensatory change to the CLI that prevents it from crashing when it
tries to import a model.
- Bug introduced when the "convert" option was removed from the model
manager.
2023-04-25 03:55:33 +01:00
Lincoln Stein
31a904b903 Merge branch 'main' into bugfix/prevent-cli-crash 2023-04-25 03:28:45 +01:00
Lincoln Stein
c174cab3ee [Bugfix] fixes and code cleanup to update and installation routines (#3101)
- Fix the update script to work again and fix the ambiguity between
updating to a tag vs. updating to a branch, by making these two
operations explicitly separate.
- Remove dangling functions and arguments related to legacy checkpoint
conversion. These are no longer needed now that all legacy models are
either converted at import time, or on-the-fly in RAM.
2023-04-25 03:28:23 +01:00
Lincoln Stein
fe12938c23 update to diffusers 0.15 and fix code for name changes (#3201)
- This is a port of #3184 to the main branch
2023-04-25 03:23:24 +01:00
Lincoln Stein
4fa5c963a1 Merge branch 'main' into bugfix/prevent-cli-crash 2023-04-25 03:10:51 +01:00
Lincoln Stein
48ce256ba2 Merge branch 'main' into lstein/enhance/diffusers-0.15 2023-04-25 02:49:59 +01:00
StAlKeR7779
8cb2fa8600 Restore log_tokenization check 2023-04-25 04:29:17 +03:00
StAlKeR7779
8f460b92f1 Make latent generation nodes use conditions instead of prompt 2023-04-25 04:21:03 +03:00
StAlKeR7779
d99a08a441 Add compel node and conditioning field type 2023-04-25 03:48:44 +03:00
blessedcoolant
7555b1f876 Event service will now sleep for 100ms between polls instead of 1ms, reducing CPU usage significantly (#3256)
I noticed that the current invokeai-new.py was using almost all of a CPU
core. After a bit of profiling I noticed that there were many thousands
of calls to epoll(), which suggested to me that something wasn't sleeping
properly in asyncio's loop.

A bit of further investigation with Python profiling revealed that the
__dispatch_from_queue() method in FastAPIEventService
(app/api/events.py:33) was also being called thousands of times.

I believe the asyncio.sleep(0.001) in that method is too aggressive (it
means that the queue will be polled every 1ms) and that 0.1 (100ms) is
still entirely reasonable.
2023-04-24 19:35:27 +12:00
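
This change is the classic polling trade-off: an `asyncio.sleep(0.001)` wakes the event loop about a thousand times per second even when idle, while `asyncio.sleep(0.1)` keeps latency negligible for UI events and cuts the wakeups by two orders of magnitude. A minimal sketch of such a dispatch loop; the names are illustrative, not the actual FastAPIEventService.

```python
import asyncio
from queue import Empty, Queue

POLL_INTERVAL = 0.1  # 100 ms: low enough latency for UI events, ~100x fewer wakeups than 1 ms


async def dispatch_from_queue(event_queue: Queue, emit) -> None:
    """Poll a thread-safe queue from the event loop and dispatch pending events."""
    while True:
        try:
            event = event_queue.get(block=False)
            emit(event)
        except Empty:
            # Nothing pending; yield to the loop and wait before polling again.
            await asyncio.sleep(POLL_INTERVAL)


async def main() -> None:
    q: Queue = Queue()
    for i in range(3):
        q.put({"event": "progress", "step": i})

    # Run the dispatcher briefly for demonstration purposes.
    task = asyncio.create_task(dispatch_from_queue(q, print))
    await asyncio.sleep(0.5)
    task.cancel()
    try:
        await task
    except asyncio.CancelledError:
        pass


asyncio.run(main())
```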
blessedcoolant
a537231f19 Merge branch 'main' into reduce-event-polling 2023-04-24 19:14:10 +12:00
ismail ihsan bülbül
8044d1b840 translationBot(ui): update translation (Turkish)
Currently translated at 11.3% (58 of 512 strings)

translationBot(ui): added translation (Turkish)

Co-authored-by: ismail ihsan bülbül <e-ben@msn.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/tr/
Translation: InvokeAI/Web UI
2023-04-24 16:05:16 +10:00
Patrick Tien
2b58ce4ae4 translationBot(ui): update translation (Chinese (Simplified))
Currently translated at 75.0% (380 of 506 strings)

Co-authored-by: Patrick Tien <ivetien@outlook.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/zh_Hans/
Translation: InvokeAI/Web UI
2023-04-24 16:05:16 +10:00
Fabian Bahl
ef605cd76c translationBot(ui): update translation (German)
Currently translated at 81.8% (414 of 506 strings)

Co-authored-by: Fabian Bahl <fabian98@bahl-netz.de>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
2023-04-24 16:05:16 +10:00
figgefigge
a84b5b168f translationBot(ui): update translation (Swedish)
Currently translated at 34.7% (176 of 506 strings)

translationBot(ui): added translation (Swedish)

Co-authored-by: figgefigge <qvintuz@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/sv/
Translation: InvokeAI/Web UI
2023-04-24 16:05:16 +10:00
Alexander Eichhorn
16f6ee04d0 translationBot(ui): update translation (German)
Currently translated at 81.8% (414 of 506 strings)

translationBot(ui): update translation (German)

Currently translated at 80.8% (409 of 506 strings)

Co-authored-by: Alexander Eichhorn <pfannkuchensack@einfach-doof.de>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
2023-04-24 16:05:16 +10:00
System X - Files
44be057aa3 translationBot(ui): update translation (Ukrainian)
Currently translated at 100.0% (512 of 512 strings)

translationBot(ui): update translation (Russian)

Currently translated at 100.0% (512 of 512 strings)

translationBot(ui): update translation (English)

Currently translated at 100.0% (512 of 512 strings)

translationBot(ui): update translation (Ukrainian)

Currently translated at 100.0% (506 of 506 strings)

translationBot(ui): update translation (Russian)

Currently translated at 100.0% (506 of 506 strings)

translationBot(ui): update translation (Russian)

Currently translated at 100.0% (506 of 506 strings)

Co-authored-by: System X - Files <vasyasos@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/en/
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/uk/
Translation: InvokeAI/Web UI
2023-04-24 16:05:16 +10:00
mitien
422f6967b2 translationBot(ui): update translation (Ukrainian)
Currently translated at 75.8% (384 of 506 strings)

translationBot(ui): update translation (Russian)

Currently translated at 85.5% (433 of 506 strings)

Co-authored-by: mitien <mitien@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/uk/
Translation: InvokeAI/Web UI
2023-04-24 16:05:16 +10:00
Riccardo Giovanetti
4528cc8ba6 translationBot(ui): update translation (Italian)
Currently translated at 100.0% (512 of 512 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (511 of 511 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (506 of 506 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2023-04-24 16:05:16 +10:00
gallegonovato
87e91ebc1d translationBot(ui): update translation (Spanish)
Currently translated at 100.0% (512 of 512 strings)

translationBot(ui): update translation (Spanish)

Currently translated at 100.0% (511 of 511 strings)

translationBot(ui): update translation (Spanish)

Currently translated at 100.0% (506 of 506 strings)

Co-authored-by: gallegonovato <fran-carro@hotmail.es>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/es/
Translation: InvokeAI/Web UI
2023-04-24 16:05:16 +10:00
Dennis
fd00d111ea translationBot(ui): update translation (Dutch)
Currently translated at 100.0% (504 of 504 strings)

Co-authored-by: Dennis <dennis@vanzoerlandt.nl>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/nl/
Translation: InvokeAI/Web UI
2023-04-24 16:05:16 +10:00
Jaulustus
b8dc9000bd translationBot(ui): update translation (German)
Currently translated at 73.4% (370 of 504 strings)

Co-authored-by: Jaulustus <jaulustus@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
2023-04-24 16:05:16 +10:00
Juuso V
58c1066765 translationBot(ui): update translation (Finnish)
Currently translated at 18.2% (92 of 504 strings)

translationBot(ui): added translation (Finnish)

Co-authored-by: Juuso V <juuso.vantola@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/fi/
Translation: InvokeAI/Web UI
2023-04-24 16:05:16 +10:00
Bouncyknighter
37096a697b translationBot(ui): added translation (Mongolian)
Co-authored-by: Bouncyknighter <gebifirm@gmail.com>
2023-04-24 16:05:16 +10:00
唐澤 克幸
17d0920186 translationBot(ui): update translation (Japanese)
Currently translated at 73.0% (368 of 504 strings)

Co-authored-by: 唐澤 克幸 <4ranci0ne@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ja/
Translation: InvokeAI/Web UI
2023-04-24 16:05:16 +10:00
techybrain-dev
1e05538364 translationBot(ui): added translation (Vietnamese)
Co-authored-by: techybrain-dev <techybrain.dev@gmail.com>
2023-04-24 16:05:16 +10:00
Chris Jones
cf28617cd6 Event service will now sleep for 100ms between polls instead of 1ms, reducing CPU usage significantly 2023-04-23 21:27:02 +01:00
blessedcoolant
d0d8640711 feat(ui): add reload schema button (#3252) 2023-04-23 19:51:37 +12:00
psychedelicious
e6158d1874 feat(ui): add reload schema button 2023-04-23 17:49:02 +10:00
psychedelicious
2e9d1ea8a3 feat(ui): add support for shouldFetchImages if UI needs to re-fetch an image URL (#3250)
* if `shouldFetchImages` is passed in, UI will make an additional
request to get valid image URL when an invocation is complete
* this is necessary in order to have optional authorization for images
2023-04-23 16:00:13 +10:00
Mary Hipp
59b0153236 add to types 2023-04-23 15:59:55 +10:00
Mary Hipp
9f8ff912c4 feat(ui): add support for shouldFetchImages if UI needs to re-fetch an image URL 2023-04-23 15:59:55 +10:00
blessedcoolant
f0e4a2124a [Nodes UI] More Work (#3248)
- Style the Minimap
- Made the Node UI Legend Responsive
- Set Min Width for nodes on Spawn so resize doesn't snap.
- Initial Implementation of Node Search
- Added FuseJS to handle the node filtering
2023-04-23 17:51:40 +12:00
blessedcoolant
11ab5c7d56 fix(ui): Fix up arrow not working on unfiltered list 2023-04-23 15:18:35 +12:00
blessedcoolant
3f334d9e5e feat(ui): Add fusejs to NodeSearch 2023-04-23 15:14:44 +12:00
blessedcoolant
ff891b1ff2 feat(ui): Basic Node Search Component
Very buggy
2023-04-23 13:35:02 +12:00
Lincoln Stein
2914ee10b0 Merge branch 'main' into lstein/enhance/diffusers-0.15 2023-04-22 20:21:59 +01:00
blessedcoolant
e29c2fb782 Merge branch 'more-nodes-work' of https://github.com/blessedcoolant/InvokeAI into more-nodes-work 2023-04-23 02:53:25 +12:00
blessedcoolant
b763f1809e feat(ui): Stylize Node Minimap 2023-04-23 02:52:32 +12:00
psychedelicious
d26b44104a fix(ui): minor tidy 2023-04-23 00:45:03 +10:00
blessedcoolant
b73fd2a6d2 fix(ui): Set Min Width for Nodes 2023-04-23 00:55:43 +12:00
blessedcoolant
f258aba6d1 chore(ui): Make the Node UI Legend Responsive 2023-04-23 00:55:22 +12:00
psychedelicious
2e70848aa0 Responsive Mobile Layout (#3207)
The first draft for a Responsive Mobile Layout for InvokeAI. Some basic
documentation to help contributors. // Notes from: @blessedcoolant

---

The whole rework needs to be done using the `mobile first` concept where
the base design will be catered to mobile and we add responsive changes
as we grow to larger screens.

**Added**

- Basic breakpoints have been added to the `theme.ts` file that indicate
at which values Chakra makes the responsive changes.
- A basic `useResolution` hook has been added that either returns
`mobile`, `tablet` or `desktop` based on the breakpoint. We can
customize this hook further to do more complex checks for us if need be.

**Syntax**

- Any Chakra component is directly capable of taking different values
for the different breakpoints set in our `theme.ts` file. These can be
passed in a few ways with the most descriptive being an object. For
example:

`flexDir={{ base: 'column', xl: 'row' }}` - This would set the `0em and
above` to be column for the flex direction but change to row
automatically when we hit `xl` and above resolutions which in our case
is `80em or 1280px`. This same format is applicable for any element in
Chakra.

`flexDir={['column', null, null, 'row', null]}` - The above syntax can
also be passed as an array to the property with each value in the array
corresponding to each breakpoint we have. Setting `null` just bypasses
it. This is a good shorthand, but I think we should stick to the above syntax
for readability.

**Note**: I've modified a few elements here and there to give an idea on
how the responsive syntax works for reference.

---

**Problems to be solved** @SammCheese 

- Some issues you might run into are with the Resizable components.
We've decided we will not use resizable components for smaller
resolutions, as it doesn't make sense there. So you'll need to make
conditional renderings around these.
- Some components that need custom layouts for different screens might
be better ported over to `Grid`, using `gridTemplateAreas` to swap
out the design layout. I've demonstrated an example of this in a commit
I've made. I'll let you be the judge of where we might need this.
- The header will probably need to be converted to a burger menu of some
sort, with the model changing handled correctly UX-wise. We'll
discuss this on Discord.

---

Anyone willing to contribute to this PR can feel free to join the
discussion on discord.

https://discord.com/channels/1020123559063990373/1020839344170348605/threads/1097323866780606615
2023-04-22 22:34:30 +10:00
Sammy
e973aeef0d Merge branch 'main' into responsive-ui 2023-04-22 14:31:19 +02:00
psychedelicious
50e1ac731d fix(ui): make input/outputs renderfn callback 2023-04-22 22:25:17 +10:00
psychedelicious
43addc1548 fix(ui): memoize everything nodes 2023-04-22 22:25:17 +10:00
psychedelicious
4901911c1a fix(ui): improve nodes performance 2023-04-22 22:25:17 +10:00
psychedelicious
44a653925a feat(ui): node styling, controls
- custom node controls
- fix some types
- fix badge colors via colorScheme
- style nodes
2023-04-22 22:25:17 +10:00
blessedcoolant
94a07a8da7 feat(ui): Make Nodes always spawn in center of work area 2023-04-22 22:25:17 +10:00
blessedcoolant
ad41afe65e feat(ui): Make Nodes Resizable 2023-04-22 22:25:17 +10:00
blessedcoolant
77fa7519c4 chore(ui): Cleanup Invocation Component 2023-04-22 22:25:17 +10:00
SammCheese
6e29148d4d delete ImageToImageContent.tsx 2023-04-22 08:43:14 +02:00
SammCheese
3044f3bfe5 fix(ui): adapt NodeEditor for smaller screens 2023-04-22 08:33:05 +02:00
SammCheese
67a8627cf6 add dev:host script 2023-04-22 08:30:09 +02:00
SammCheese
3fb433cb91 Merge branch 'main' of https://github.com/invoke-ai/InvokeAI into responsive-ui 2023-04-22 08:27:00 +02:00
psychedelicious
5f498e10bd Partial migration of UI to nodes API (#3195)
* feat(ui): add axios client generator and simple example

* fix(ui): update client & nodes test code w/ new Edge type

* chore(ui): organize generated files

* chore(ui): update .eslintignore, .prettierignore

* chore(ui): update openapi.json

* feat(backend): fixes for nodes/generator

* feat(ui): generate object args for api client

* feat(ui): more nodes api prototyping

* feat(ui): nodes cancel

* chore(ui): regenerate api client

* fix(ui): disable OG web server socket connection

* fix(ui): fix scrollbar styles typing and prop

just noticed the typo, and made the types stronger.

* feat(ui): add socketio types

* feat(ui): wip nodes

- extract api client method arg types instead of manually declaring them
- update example to display images
- general tidy up

* start building out node translations from frontend state and add notes about missing features

* use reference to sampler_name

* use reference to sampler_name

* add optional apiUrl prop

* feat(ui): start hooking up dynamic txt2img node generation, create middleware for session invocation

* feat(ui): write separate nodes socket layer, txt2img generating and rendering w single node

* feat(ui): img2img implementation

* feat(ui): get intermediate images working but types are stubbed out

* chore(ui): add support for package mode

* feat(ui): add nodes mode script

* feat(ui): handle random seeds

* fix(ui): fix middleware types

* feat(ui): add rtk action type guard

* feat(ui): disable NodeAPITest

This was polluting the network/socket logs.

* feat(ui): fix parameters panel border color

This commit should be elsewhere but I don't want to break my flow

* feat(ui): make thunk types more consistent

* feat(ui): add type guards for outputs

* feat(ui): load images on socket connect

Rudimentary

* chore(ui): bump redux-toolkit

* docs(ui): update readme

* chore(ui): regenerate api client

* chore(ui): add typescript as dev dependency

I am having trouble with TS versions after vscode updated and now uses TS 5. `madge` has installed 3.9.10 and for whatever reason my vscode wants to use that. Manually specifying 4.9.5 and then setting vscode to use that as the workspace TS fixes the issue.

* feat(ui): begin migrating gallery to nodes

Along the way, migrate to use RTK `createEntityAdapter` for gallery images, and separate `results` and `uploads` into separate slices. Much cleaner this way.

* feat(ui): clean up & comment results slice

* fix(ui): separate thunk for initial gallery load so it properly gets index 0

* feat(ui): POST upload working

* fix(ui): restore removed type

* feat(ui): patch api generation for headers access

* chore(ui): regenerate api

* feat(ui): wip gallery migration

* feat(ui): wip gallery migration

* chore(ui): regenerate api

* feat(ui): wip refactor socket events

* feat(ui): disable panels based on app props

* feat(ui): invert logic to be disabled

* disable panels when app mounts

* feat(ui): add support to disableTabs

* docs(ui): organise and update docs

* lang(ui): add toast strings

* feat(ui): wip events, comments, and general refactoring

* feat(ui): add optional token for auth

* feat(ui): export StatusIndicator and ModelSelect for header use

* feat(ui) working on making socket URL dynamic

* feat(ui): dynamic middleware loading

* feat(ui): prep for socket jwt

* feat(ui): migrate cancelation

also updated action names to be event-like instead of declaration-like

sorry, i was scattered and this commit has a lot of unrelated stuff in it.

* fix(ui): fix img2img type

* chore(ui): regenerate api client

* feat(ui): improve InvocationCompleteEvent types

* feat(ui): increase StatusIndicator font size

* fix(ui): fix middleware order for multi-node graphs

* feat(ui): add exampleGraphs object w/ iterations example

* feat(ui): generate iterations graph

* feat(ui): update ModelSelect for nodes API

* feat(ui): add hi-res functionality for txt2img generations

* feat(ui): "subscribe" to particular nodes

feels like a dirty hack but oh well it works

* feat(ui): first steps to node editor ui

* fix(ui): disable event subscription

it is not fully baked just yet

* feat(ui): wip node editor

* feat(ui): remove extraneous field types

* feat(ui): nodes before deleting stuff

* feat(ui): cleanup nodes ui stuff

* feat(ui): hook up nodes to redux

* fix(ui): fix handle

* fix(ui): add basic node edges & connection validation

* feat(ui): add connection validation styling

* feat(ui): increase edge width

* feat(ui): it blends

* feat(ui): wip model handling and graph topology validation

* feat(ui): validation connections w/ graphlib

* docs(ui): update nodes doc

* feat(ui): wip node editor

* chore(ui): rebuild api, update types

* add redux-dynamic-middlewares as a dependency

* feat(ui): add url host transformation

* feat(ui): handle already-connected fields

* feat(ui): rewrite SqliteItemStore in sqlalchemy

* fix(ui): fix sqlalchemy dynamic model instantiation

* feat(ui, nodes): metadata wip

* feat(ui, nodes): models

* feat(ui, nodes): more metadata wip

* feat(ui): wip range/iterate

* fix(nodes): fix sqlite typing

* feat(ui): export new type for invoke component

* tests(nodes): fix test instantiation of ImageField

* feat(nodes): fix LoadImageInvocation

* feat(nodes): add `title` ui hint

* feat(nodes): make ImageField attrs optional

* feat(ui): wip nodes etc

* feat(nodes): roll back sqlalchemy

* fix(nodes): partially address feedback

* fix(backend): roll back changes to pngwriter

* feat(nodes): wip address metadata feedback

* feat(nodes): add seeded rng to RandomRange

* feat(nodes): address feedback

* feat(nodes): move GET images error handling to DiskImageStorage

* feat(nodes): move GET images error handling to DiskImageStorage

* fix(nodes): fix image output schema customization

* feat(ui): img2img/txt2img -> linear

- remove txt2img and img2img tabs
- add linear tab
- add initial image selection to linear parameters accordion

* feat(ui): tidy graph builders

* feat(ui): tidy misc

* feat(ui): improve invocation union types

* feat(ui): wip metadata viewer recall

* feat(ui): move fonts to normal deps

* feat(nodes): fix broken upload

* feat(nodes): add metadata module + tests, thumbnails

- `MetadataModule` is stateless and needed in places where the `InvocationContext` is not available, so have not made it a `service`
- Handles loading/parsing/building metadata, and creating png info objects
- added tests for MetadataModule
- Lifted thumbnail stuff to util

* fix(nodes): revert change to RandomRangeInvocation

* feat(nodes): address feedback

- make metadata a service
- rip out pydantic validation, implement metadata parsing as simple functions
- update tests
- address other minor feedback items

* fix(nodes): fix other tests

* fix(nodes): add metadata service to cli

* fix(nodes): fix latents/image field parsing

* feat(nodes): customise LatentsField schema

* feat(nodes): move metadata parsing to frontend

* fix(nodes): fix metadata test

---------

Co-authored-by: maryhipp <maryhipp@gmail.com>
Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2023-04-22 13:10:20 +10:00
Lincoln Stein
fdad62e88b chore: add ".version" and ".last_model" to gitignore (#3208)
Mistakenly closed the previous pr.
2023-04-20 18:26:27 +01:00
Lincoln Stein
955c81acef Merge branch 'main' into patch-1 2023-04-20 18:26:06 +01:00
Lincoln Stein
e1058f3416 update CODEOWNERS for changed team composition (#3234)
Remove @mauwii and @keturn until they are able to reengage with the
development effort. @GreggHelt2 is designated co-codeowner for the
backend.
2023-04-20 17:19:10 +01:00
Sammy
edf16a253d Merge branch 'main' into patch-1 2023-04-20 14:16:10 +02:00
Lincoln Stein
46f5ef4100 Merge branch 'main' into dev/codeowner-fix-main 2023-04-19 22:40:56 +01:00
Lincoln Stein
b843255236 update CODEOWNERS for changed team composition 2023-04-19 17:37:48 -04:00
Alexandre D. Roberge
3a968e5072 Update NSFW.md
Outdated doc said to change the '.invokeai' file, but it's now named 'invokeai.init' afaik.
2023-04-18 21:18:32 -04:00
Lincoln Stein
b164330e3c replaced remaining print statements with log.*() 2023-04-18 20:49:00 -04:00
Lincoln Stein
69433c9f68 Merge branch 'main' into lstein/enhance/diffusers-0.15 2023-04-18 19:21:53 -04:00
Lincoln Stein
bd8ffd36bf bump to diffusers 0.15.1, remove dangling module 2023-04-18 19:20:38 -04:00
Lincoln Stein
fd80e84ea6 Merge branch 'main' into patch-1 2023-04-18 19:14:28 -04:00
Lincoln Stein
4824237a98 Added CPU instruction for README (#3225)
Since the change itself is quite straightforward, I'll just describe
the context. Tried using the automatic installer on my laptop; it kept
erroring out on line 140-something of installer.py with "ERROR: Can not
perform a '--user' install. User site-packages are not visible in this
virtualenv."
Got tired of fighting with pip, so I moved on to a command line install.
That worked immediately, but at the time the readme lacked instructions
for CPU, so instead of opening any helpful hyperlinks in it, I took a
few minutes to grab the link from installer.py - thus this pr.
2023-04-18 19:07:37 -04:00
Leo Pasanen
2c9a05eb59 Added CPU instruction for README 2023-04-18 18:46:55 +03:00
blessedcoolant
ecb5bdaf7e [bug] #3218 HuggingFace API off when --no-internet set (#3219)
#3218 

Huggingface API will not be queried if --no-internet flag is set
2023-04-18 14:34:34 +12:00
blessedcoolant
2feeb1f44c fix(ui): more responsive layout work 2023-04-18 04:29:31 +12:00
blessedcoolant
554f353773 fix(ui): Fix Width and Height showing 0 as input 2023-04-18 04:28:58 +12:00
Tim Cabbage
f6cdff2c5b [bug] #3218 HuggingFace API off when --no-internet set
https://github.com/invoke-ai/InvokeAI/issues/3218

Huggingface API will not be queried if --no-internet flag is set
2023-04-17 16:53:31 +02:00
blessedcoolant
aee27e94c9 fix(ui): Fix site header on really small screens 2023-04-18 01:25:53 +12:00
blessedcoolant
695893e1ac fix(ui): Improve parameters panel and preview display 2023-04-18 01:09:48 +12:00
blessedcoolant
b800a8eb2e feat(ui): responsive wip
- Fixed a bunch of padding and margin issues across the app
- Fixed the Invoke logo compressing
- Disabled the visibility of the options panel pin button in tablet and mobile views
- Refined the header menu options in mobile and tablet views
- Refined other site header elements in mobile and tablet views
- Aligned Tab Icons to center in mobile and tablet views
2023-04-18 00:50:09 +12:00
SammCheese
9749ef34b5 layout improvements 2023-04-17 13:30:33 +02:00
blessedcoolant
9a43362127 Revert "Merge branch 'responsive-ui' of https://github.com/SammCheese/InvokeAI into pr/3207"
This reverts commit 866024ea6c, reversing
changes made to 601cc1f92c.
2023-04-17 13:51:08 +12:00
blessedcoolant
866024ea6c Merge branch 'responsive-ui' of https://github.com/SammCheese/InvokeAI into pr/3207 2023-04-17 13:50:44 +12:00
blessedcoolant
601cc1f92c help(ui): Basic responsive updates to demonstrate
Made some basic responsive changes to demonstrate how to go about making changes.

There are a bunch of problems not addressed yet, like dealing with the resizable component, etc.
2023-04-17 13:50:13 +12:00
blessedcoolant
d6a9a4464d feat(ui): Add Basic useResolution Component
This hook just classifies `base` and `sm` as mobile, `md` and `lg` as tablet, and `xl` and `2xl` as desktop.

This is a basic hook for quicker work with resolutions. It can be modified and adjusted to our needs; all resolution-related work can go into this hook.
2023-04-17 13:48:42 +12:00
blessedcoolant
dac271725a feat(ui): Add Basic Breakpoints 2023-04-17 13:26:10 +12:00
blessedcoolant
e1fbecfcf7 fix(ui): Syntax issue with the HidePreview icon 2023-04-17 12:42:06 +12:00
Eugene
63d10027a4 nodes: invocation queue item - make more pydantic 2023-04-16 09:39:33 -04:00
Eugene
ef0773b8a3 nodes: set default for InvocationQueueItem.invoke_all 2023-04-16 09:39:33 -04:00
Eugene
3daaddf15b nodes: remove duplicate LatentsToLatentsInvocation 2023-04-16 09:39:33 -04:00
Eugene
570c3fe690 nodes: ensure Graph and GraphExecutionState ids are cast to str on instantiation 2023-04-16 09:39:33 -04:00
Eugene
cbd1a7263a nodes: fix typing of GraphExecutionState.id 2023-04-16 09:39:33 -04:00
Eugene
7fc5fbd4ce nodes: convert InvocationQueueItem to Pydantic class 2023-04-16 09:39:33 -04:00
Eugene Brodsky
6f6de402ad make InvocationQueueItem serializable 2023-04-16 09:39:33 -04:00
SammCheese
2ec4f5af10 remove unused import to pass lint & revert package.json 2023-04-15 21:53:33 +02:00
Sammy
281662a6e1 chore: add ".version" and ".last_model" to gitignore
Mistakenly closed the previous pr
2023-04-15 21:46:47 +02:00
SammCheese
2edd032ec7 draft mobile layout 2023-04-15 21:34:03 +02:00
SammCheese
50eb02f68b chore(ui): build 2023-04-15 20:45:17 +10:00
SammCheese
d73f3adc43 moving shouldHidePreview from gallery to ui slice. 2023-04-15 20:45:17 +10:00
SammCheese
116107f464 chore(ui): build 2023-04-15 20:45:17 +10:00
SammCheese
da44bb1707 rename setter 2023-04-15 20:45:17 +10:00
SammCheese
f43aed677e chore(ui): build 2023-04-15 20:45:17 +10:00
SammCheese
0d051aaae2 rename hidden variable to something more descriptive 2023-04-15 20:45:17 +10:00
SammCheese
e4e48ff995 i forgot to push the locale 2023-04-15 20:45:17 +10:00
SammCheese
442a6bffa4 feat: add "Hide Preview" Button 2023-04-15 20:45:17 +10:00
Lincoln Stein
aab262d991 Merge branch 'main' into bugfix/prevent-cli-crash 2023-04-14 20:12:38 -04:00
Lincoln Stein
47b9910b48 update to diffusers 0.15 and fix code for name changes
- This is a port of #3184 to the main branch
2023-04-14 15:35:03 -04:00
Lincoln Stein
0b0e6fe448 convert remainder of print() to log.info() 2023-04-14 15:15:14 -04:00
Kyle Schouviller
23d65e7162 [nodes] Add subgraph library, subgraph usage in CLI, and fix subgraph execution (#3180)
* Add latent to latent (img2img equivalent)
Fix a CLI bug with multiple links per node

* Using "latents" instead of "latent"

* [nodes] In-progress implementation of graph library

* Add linking to CLI for graph nodes (still broken)

* Fix subgraph execution, fix subgraph linking in CLI

* Fix LatentsToLatents
2023-04-14 06:41:06 +00:00
blessedcoolant
024fd54d0b Fixed a Typo. (#3190) 2023-04-14 14:33:31 +12:00
Nicholas Körfer
c44c19e911 Fixed a Typo. 2023-04-13 17:42:34 +02:00
Lincoln Stein
c132dbdefa change "ialog" to "log" 2023-04-11 18:48:20 -04:00
Lincoln Stein
f3081e7013 add module-level getLogger() method 2023-04-11 12:23:13 -04:00
Lincoln Stein
f904f14f9e add missing module-level methods 2023-04-11 11:10:43 -04:00
Lincoln Stein
8917a6d99b add logging support
This commit adds invokeai.backend.util.logging, which provides support
for formatted console and logfile messages that follow the status
reporting conventions of earlier InvokeAI versions.

Examples:

   ### A critical error     (logging.CRITICAL)
   *** A non-fatal error    (logging.ERROR)
   ** A warning             (logging.WARNING)
   >> Informational message (logging.INFO)
      | Debugging message   (logging.DEBUG)

This style logs everything through a single logging object and is
identical to using Python's `logging` module. The commonly-used
module-level logging functions are implemented as simple pass-thrus
to logging:

  import logging
  import invokeai.backend.util.logging as ialog

  ialog.debug('this is a debugging message')
  ialog.info('this is an informational message')
  ialog.log(logging.CRITICAL, 'get out of dodge')
  ialog.disable(level=logging.INFO)
  ialog.basicConfig(filename='/var/log/invokeai.log')

Internally, the invokeai logging module creates a new default logger
named "invokeai" so that its logging does not interfere with other
module's use of the vanilla logging module. So `logging.error("foo")`
will go through the regular logging path and not add the additional
message decorations.

For more control, the logging module's object-oriented logging style
is also supported. The API is identical to the vanilla logging
usage. In fact, the only thing that has changed is that the
getLogger() method adds a custom formatter to the log messages.

 import logging
 from invokeai.backend.util.logging import InvokeAILogger

 logger = InvokeAILogger.getLogger(__name__)
 fh = logging.FileHandler('/var/invokeai.log')
 logger.addHandler(fh)
 logger.critical('this will be logged to both the console and the log file')
2023-04-11 10:46:38 -04:00
Lincoln Stein
5a4765046e add logging support
This commit adds invokeai.backend.util.logging, which provides support
for formatted console and logfile messages that follow the status
reporting conventions of earlier InvokeAI versions.

Examples:

   ### A critical error     (logging.CRITICAL)
   *** A non-fatal error    (logging.ERROR)
   ** A warning             (logging.WARNING)
   >> Informational message (logging.INFO)
      | Debugging message   (logging.DEBUG)
2023-04-11 09:33:28 -04:00
psychedelicious
d923d1d66b fix(nodes): fix naming of CvInvocationConfig 2023-04-11 12:13:53 +10:00
psychedelicious
1f2c1e14db fix(nodes): move InvocationConfig to baseinvocation.py 2023-04-11 12:13:53 +10:00
psychedelicious
07e3a0ec15 feat(nodes): add invocation schema customisation, add model selection
- add invocation schema customisation

done via fastapi's `Config` class and `schema_extra`. when using `Config`, inherit from `InvocationConfig` to get type hints.

where it makes sense - like for all math invocations - define a `MathInvocationConfig` class and have all invocations inherit from it.

this customisation can provide any arbitrary additional data to the UI. currently it provides tags and field type hints.

this is necessary for `model` type fields, which are actually string fields. without something like this, we can't reliably differentiate  `model` fields from normal `string` fields.

can also be used for future field types.

all invocations now have tags, and all `model` fields have ui type hints (a rough sketch follows at the end of this message).

- fix model handling for invocations

added a helper to fall back to the default model if an invalid model name is chosen. model names in graphs now work.

- fix latents progress callback

noticed this wasn't correct while working on everything else.
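
Roughly, the schema customisation idea looks like this (pydantic v1 style; the class, fields, and the exact keys the UI reads are made up for illustration, not the actual invocations):

```
from pydantic import BaseModel, Field


class ExampleTextToImageInvocation(BaseModel):
    """Illustrative invocation whose generated schema carries extra UI hints."""

    prompt: str = Field(default="", description="The prompt to generate from")
    model: str = Field(default="", description="The model to use")

    class Config:
        # Arbitrary extra data merged into the OpenAPI schema for this class,
        # e.g. tags plus a hint that the `model` string field is a model picker.
        schema_extra = {
            "ui": {
                "tags": ["image", "generation"],
                "type_hints": {"model": "model"},
            }
        }
```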
2023-04-11 12:13:53 +10:00
psychedelicious
427db7c7e2 feat(nodes): fix typo in PasteImageInvocation 2023-04-10 21:33:08 +10:00
psychedelicious
dad3a7f263 fix(nodes): sampler_name --> scheduler
the name of this was changed at some point. nodes still used the old name, so scheduler selection did nothing. simple fix.
2023-04-10 19:54:09 +10:00
psychedelicious
5bd0bb637f fix(nodes): add missing type to ImageField 2023-04-10 19:33:15 +10:00
Lincoln Stein
f05095770c Increase chunk size when computing diffusers SHAs (#3159)
When running this app for the first time in a WSL2 environment, which is
notoriously slow when it comes to IO, computing the SHAs of the models
takes an eternity.

Computing shas for sd2.1
```
| Calculating sha256 hash of model files
| sha256 = 1e4ce085102fe6590d41ec1ab6623a18c07127e2eca3e94a34736b36b57b9c5e (49 files hashed in 510.87s)
```

I increased the chunk size to 16MB to reduce the number of round trips
when loading the data. New results:

```
| Calculating sha256 hash of model files
| sha256 = 1e4ce085102fe6590d41ec1ab6623a18c07127e2eca3e94a34736b36b57b9c5e (49 files hashed in 59.89s)
```

Higher values don't seem to make an impact.
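
For illustration, here is a minimal sketch of the chunked-hashing idea with the chunk size exposed as a parameter; `hash_model_files` is a hypothetical helper, not the actual InvokeAI hashing code:

```
import hashlib
from pathlib import Path


def hash_model_files(model_dir: str, chunk_size: int = 2**24) -> str:
    """Combined sha256 over all files under model_dir, read in 16MB chunks by default."""
    sha = hashlib.sha256()
    for file in sorted(Path(model_dir).rglob("*")):
        if not file.is_file():
            continue
        with open(file, "rb") as f:
            # Larger chunks mean fewer read round trips, which matters on slow IO (e.g. WSL2).
            while chunk := f.read(chunk_size):
                sha.update(chunk)
    return sha.hexdigest()
```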
2023-04-09 22:29:43 -04:00
AbdBarho
de189f2db6 Increase chunk size when computing SHAs 2023-04-09 21:53:59 +02:00
Lincoln Stein
cee159dfa3 Merge branch 'main' into bugfix/prevent-cli-crash 2023-04-09 12:08:09 -04:00
psychedelicious
4463124bdd feat(nodes): mark ImageField properties required, add docs 2023-04-09 22:53:17 +10:00
psychedelicious
34402cc46a feat(nodes): add list_images endpoint
- add `list_images` endpoint at `GET api/v1/images`
- extend `ImageStorageBase` with `list()` method, implemented it for `DiskImageStorage` (a rough sketch of the idea follows below)
- add `ImageReponse` class for image responses, which includes urls and metadata
- add `ImageMetadata` class (basically a stub at the moment)
- uploaded images now named `"{uuid}_{timestamp}.png"`
- add `models` modules. besides separating concerns more clearly, this helps to mitigate circular dependencies
- improve thumbnail handling
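
As a rough, self-contained sketch of what a paginated disk-backed listing could look like; the function, pagination parameters, and response shape here are assumptions for illustration, not the actual InvokeAI API:

```
from dataclasses import dataclass
from pathlib import Path


@dataclass
class PaginatedImages:
    items: list[str]
    page: int
    pages: int
    total: int


def list_images(output_dir: str, page: int = 0, per_page: int = 10) -> PaginatedImages:
    """List image filenames newest-first, one page at a time."""
    files = sorted(
        Path(output_dir).glob("*.png"),
        key=lambda p: p.stat().st_mtime,
        reverse=True,
    )
    start = page * per_page
    return PaginatedImages(
        items=[f.name for f in files[start : start + per_page]],
        page=page,
        pages=(len(files) + per_page - 1) // per_page,
        total=len(files),
    )
```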
2023-04-09 13:48:44 +10:00
Kent Keirsey
54d9833db0 Else. 2023-04-08 12:08:51 -04:00
Kent Keirsey
5fe8cb56fc Correct response note 2023-04-08 12:08:51 -04:00
Kent Keirsey
7919d81fb1 Update to address feedback 2023-04-08 12:08:51 -04:00
Kent Keirsey
9d80b28a4f Begin Convert Work 2023-04-08 12:08:51 -04:00
Kent Keirsey
1fcd91bcc5 Add/Update and Delete Models 2023-04-08 12:08:51 -04:00
blessedcoolant
e456e2e63a fix typo (#3147)
fix typo.

reference:
21f79e5919/invokeai/configs/INITIAL_MODELS.yaml (L21-L25)
2023-04-08 20:25:31 +12:00
c67e708d
ee41b99049 Update 050_INSTALLING_MODELS.md
fix typo
2023-04-08 17:02:47 +09:00
psychedelicious
111d674e71 fix(nodes): use correct torch device in NoiseInvocation 2023-04-08 12:32:03 +10:00
Lincoln Stein
8f048cfbd9 Add python-multipart, which is needed by nodes (#3141)
I'm not quite sure why this isn't being installed by fastapi's
dependencies, but running without it installed yields:

```
root@gnubert:/srv/ssdtank/docker/invokeai/git/InvokeAI# docker run --gpus all -p 9989:9090 -v /srv/ssdtank/docker/invokeai/data:/data -v /srv/ssdtank/docker/invokeai/git/InvokeAI/static/dream_web/:/static/dream_web --rm -ti -u root --entrypoint /bin/bash ghcr.io/cmsj/invokeai-nodes@sha256:426ebc414936cb67e02f5f64d963196500a77b2f485df8122a2d462797293938
root@7a77b56a5771:/usr/src# /invoke-new.py --web
Form data requires "python-multipart" to be installed.
You can install "python-multipart" with:

pip install python-multipart

╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /invoke-new.py:22 in <module>                                                                    │
│                                                                                                  │
│   19                                                                                             │
│   20                                                                                             │
│   21 if __name__ == '__main__':                                                                  │
│ ❱ 22 │   main()                                                                                  │
│   23                                                                                             │
│                                                                                                  │
│ /invoke-new.py:13 in main                                                                        │
│                                                                                                  │
│   10 │   os.chdir(os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))                │
│   11 │                                                                                           │
│   12 │   if '--web' in sys.argv:                                                                 │
│ ❱ 13 │   │   from invokeai.app.api_app import invoke_api                                         │
│   14 │   │   invoke_api()                                                                        │
│   15 │   else:                                                                                   │
│   16 │   │   # TODO: Parse some top-level args here.                                             │
│                                                                                                  │
│ /usr/src/InvokeAI/lib/python3.10/site-packages/invokeai/app/api_app.py:17 in <module>            │
│                                                                                                  │
│    14                                                                                            │
│    15 from ..backend import Args                                                                 │
│    16 from .api.dependencies import ApiDependencies                                              │
│ ❱  17 from .api.routers import images, sessions, models                                          │
│    18 from .api.sockets import SocketIO                                                          │
│    19 from .invocations import *                                                                 │
│    20 from .invocations.baseinvocation import BaseInvocation                                     │
│                                                                                                  │
│ /usr/src/InvokeAI/lib/python3.10/site-packages/invokeai/app/api/routers/images.py:45 in <module> │
│                                                                                                  │
│   42 │   │   404: {"description": "Session not found"},                                          │
│   43 │   },                                                                                      │
│   44 )                                                                                           │
│ ❱ 45 async def upload_image(file: UploadFile, request: Request):                                 │
│   46 │   if not file.content_type.startswith("image"):                                           │
│   47 │   │   return Response(status_code=415)                                                    │
│   48                                                                                             │
│                                                                                                  │
│ /usr/src/InvokeAI/lib/python3.10/site-packages/fastapi/routing.py:630 in decorator               │
│                                                                                                  │
│    627 │   │   ),                                                                                │
│    628 │   ) -> Callable[[DecoratedCallable], DecoratedCallable]:                                │
│    629 │   │   def decorator(func: DecoratedCallable) -> DecoratedCallable:                      │
│ ❱  630 │   │   │   self.add_api_route(                                                           │
│    631 │   │   │   │   path,                                                                     │
│    632 │   │   │   │   func,                                                                     │
│    633 │   │   │   │   response_model=response_model,                                            │
│                                                                                                  │
│ /usr/src/InvokeAI/lib/python3.10/site-packages/fastapi/routing.py:569 in add_api_route           │
│                                                                                                  │
│    566 │   │   current_generate_unique_id = get_value_or_default(                                │
│    567 │   │   │   generate_unique_id_function, self.generate_unique_id_function                 │
│    568 │   │   )                                                                                 │
│ ❱  569 │   │   route = route_class(                                                              │
│    570 │   │   │   self.prefix + path,                                                           │
│    571 │   │   │   endpoint=endpoint,                                                            │
│    572 │   │   │   response_model=response_model,                                                │
│                                                                                                  │
│ /usr/src/InvokeAI/lib/python3.10/site-packages/fastapi/routing.py:444 in __init__                │
│                                                                                                  │
│    441 │   │   │   │   0,                                                                        │
│    442 │   │   │   │   get_parameterless_sub_dependant(depends=depends, path=self.path_format),  │
│    443 │   │   │   )                                                                             │
│ ❱  444 │   │   self.body_field = get_body_field(dependant=self.dependant, name=self.unique_id)   │
│    445 │   │   self.app = request_response(self.get_route_handler())                             │
│    446 │                                                                                         │
│    447 │   def get_route_handler(self) -> Callable[[Request], Coroutine[Any, Any, Response]]:    │
│                                                                                                  │
│ /usr/src/InvokeAI/lib/python3.10/site-packages/fastapi/dependencies/utils.py:756 in              │
│ get_body_field                                                                                   │
│                                                                                                  │
│   753 │   │   alias="body",                                                                      │
│   754 │   │   field_info=BodyFieldInfo(**BodyFieldInfo_kwargs),                                  │
│   755 │   )                                                                                      │
│ ❱ 756 │   check_file_field(final_field)                                                          │
│   757 │   return final_field                                                                     │
│   758                                                                                            │
│                                                                                                  │
│ /usr/src/InvokeAI/lib/python3.10/site-packages/fastapi/dependencies/utils.py:111 in              │
│ check_file_field                                                                                 │
│                                                                                                  │
│   108 │   │   │   │   raise RuntimeError(multipart_incorrect_install_error) from None            │
│   109 │   │   except ImportError:                                                                │
│   110 │   │   │   logger.error(multipart_not_installed_error)                                    │
│ ❱ 111 │   │   │   raise RuntimeError(multipart_not_installed_error) from None                    │
│   112                                                                                            │
│   113                                                                                            │
│   114 def get_param_sub_dependant(                                                               │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
RuntimeError: Form data requires "python-multipart" to be installed.
You can install "python-multipart" with:

pip install python-multipart
```
2023-04-07 19:17:37 -04:00
Lincoln Stein
cd1b350dae Merge branch 'main' into bugfix/release-updater 2023-04-07 18:56:21 -04:00
Lincoln Stein
8334757af9 Merge branch 'main' into bugfix/prevent-cli-crash 2023-04-07 18:55:54 -04:00
Chris Jones
7103ac6a32 Add python-multipart, which is needed by nodes 2023-04-07 19:43:42 +01:00
blessedcoolant
f6b131e706 remove vestiges of non-functional autoimport code for legacy checkpoints (#3076)
- the functionality to automatically import and run legacy checkpoint
files in a designated folder has been removed from the backend but there
are vestiges of the code remaining in the frontend that are causing
crashes.
- This fixes the problem.

- Closes #3075
2023-04-08 02:21:23 +12:00
Lincoln Stein
d1b2b99226 Merge branch 'main' into bugfix/remove-autoimport-dead-code 2023-04-07 09:59:58 -04:00
psychedelicious
e356f2511b chore: configure stale bot 2023-04-07 20:45:08 +10:00
Lincoln Stein
e5f8b22a43 add a new method to model_manager that retrieves individual pipeline components (#3120)
This PR introduces a new set of ModelManager methods that enable you to
retrieve the individual parts of a stable diffusion pipeline model,
including the vae, text_encoder, unet, tokenizer, etc.

To use:

```
from invokeai.backend import ModelManager

manager = ModelManager('/path/to/models.yaml')

# get the VAE
vae = manager.get_model_vae('stable-diffusion-1.5')

# get the unet
unet = manager.get_model_unet('stable-diffusion-1.5')

# get the tokenizer
tokenizer = manager.get_model_tokenizer('stable-diffusion-1.5')

# etc etc
feature_extractor = manager.get_model_feature_extractor('stable-diffusion-1.5')
scheduler = manager.get_model_scheduler('stable-diffusion-1.5')
text_encoder = manager.get_model_text_encoder('stable-diffusion-1.5')

# if no model provided, then defaults to the one currently in GPU, if any
vae = manager.get_model_vae()
```
2023-04-07 01:39:57 -04:00
blessedcoolant
45b84fb4bb Merge branch 'main' into bugfix/remove-autoimport-dead-code 2023-04-07 17:07:25 +12:00
Lincoln Stein
f022c89249 Merge branch 'main' into feat/return-submodels 2023-04-06 22:03:31 -04:00
Lincoln Stein
ab05144716 Change where !replay looks for its infile (#3129)
!fetch puts its output file into the output directory; it may be
beneficial to have !replay look in the output directory as well.
2023-04-06 22:02:06 -04:00
Lincoln Stein
aeb4914e67 Merge branch 'main' into replay-file_path 2023-04-06 21:45:23 -04:00
blessedcoolant
76bcd4d44f Fix typo (#3133)
'hotdot' to 'hotdog'; the world's least important PR :)
2023-04-07 12:38:05 +12:00
Steven Frank
50f5e1bc83 Fix typo
'hotdot' to 'hotdog'; the world's least important PR :)
2023-04-06 16:47:57 -07:00
Lincoln Stein
4c339dd4b0 refactor get_submodels() into individual methods 2023-04-06 17:08:23 -04:00
Lincoln Stein
bc2b9500e3 Merge branch 'main' into bugfix/prevent-cli-crash 2023-04-06 15:38:46 -04:00
Lincoln Stein
32857d81c5 prevent legacy CLI crash caused by removal of convert option
- Compensatory change to the CLI that prevents it from crashing
  when it tries to import a model.
- Bug introduced when the "convert" option was removed from the model
  manager.
2023-04-06 15:36:05 -04:00
Thomas
7268131f57 change where !replay looks for its infile
!fetch puts its output file into the output directory; it may be beneficial to have !replay look in the output directory as well.
2023-04-06 08:14:11 -04:00
Kyle Schouviller
85b020f76c [nodes] Add latent nodes, storage, and fix iteration bugs (#3091)
* Add latents nodes.
* Fix iteration expansion.
* Add collection generator nodes, math nodes.
* Add noise node.
* Add some graph debug commands to the CLI.
* Fix negative id linking in CLI.
* Fix a CLI bug with multiple links per node.
2023-04-06 04:06:05 +00:00
Kyle Schouviller
a7833cc9a9 [api] Add models router and list model API. 2023-04-05 23:59:07 -04:00
Lincoln Stein
28f75d80d5 Merge branch 'main' into bugfix/release-updater 2023-04-05 18:25:33 -04:00
Matthias Wild
919294e977 fix build-container.yml (#3117)
Add permission to write packages to GITHUB_TOKEN
2023-04-06 00:25:00 +02:00
Lincoln Stein
b917ffa4d7 Merge branch 'main' into bugfix/release-updater 2023-04-05 17:37:27 -04:00
Lincoln Stein
d44151d6ff add a new method to model_manager that retrieves individual pipeline parts
- New method is ModelManager.get_sub_model(model_name:str,model_part:SDModelComponent)

To use:

```
from invokeai.backend import ModelManager, SDModelComponent as sdmc
manager = ModelManager('/path/to/models.yaml')
vae = manager.get_sub_model('stable-diffusion-1.5', sdmc.vae)
```
2023-04-05 17:25:42 -04:00
mauwii
7640acfb1f update build-container.yml
- add packages write permission
2023-04-05 15:44:26 +02:00
psychedelicious
aed9ecef2a feat(nodes): add thumbnail generation to DiskImageStorage 2023-04-05 08:22:23 +10:00
Lincoln Stein
18cddd7972 Right link on pytorch installer for linux rocm (#3084)
Right link on pytorch installer for linux rocm
2023-04-04 17:40:42 -04:00
Lincoln Stein
e6b25f4ae3 Merge branch 'main' into patch-1 2023-04-04 17:40:12 -04:00
Lincoln Stein
d1c0050e65 fix(nodes): fix typo in list_sessions handler (#3109)
The typo happened not to affect functionality: when `query==""`, it
`search()`ed but found everything due to the empty query, then paginated
the results, so it behaved the same as `list()`.

Fixed it anyway.
2023-04-03 21:24:48 -04:00
psychedelicious
ecdfa136a0 fix(nodes): fix typo in list_sessions handler 2023-04-04 00:34:32 +10:00
blessedcoolant
5cd513ee63 [deps] bump compel version to fix crash on invalid (auto111) syntax (#3107)
Currently, if users input e.g. `happy (camper:0.3)`, it gets parsed
incorrectly, which causes crashes if it's in the negative prompt. Bumping
to compel 1.0.5 fixes the parser to avoid this (note the weight is
parsed as plain text; it's not converted to proper invoke syntax).
2023-04-04 02:30:17 +12:00
blessedcoolant
ab45086546 Merge branch 'main' into deps_bump_compel 2023-04-04 02:05:40 +12:00
psychedelicious
77ba7359f4 fix(nodes): commit changes to db 2023-04-03 19:09:49 +10:00
Damian Stewart
8cbe2e14d9 bump compel version to fix on invalid (auto111) syntax 2023-04-03 10:37:01 +02:00
Lincoln Stein
f682fb8040 fix invokeai-update script
- This commit fixes the update script to work again, as well as fixing
  the ambiguity between updating to a tag and updating to a branch.
2023-04-02 11:08:12 -04:00
creachec
ee86eedf01 Right link on pytorch installer for linux rocm
Right link on pytorch installer for linux rocm
2023-03-31 17:22:00 -03:00
Lincoln Stein
1f89cf3343 remove vestiges of non-functional autoimport code for legacy checkpoints
- Closes #3075
2023-03-31 04:27:03 -04:00
Lincoln Stein
c4e6511a59 Add support for yet another TI embedding format (main version) (#3050)
- This PR adds support for embedding files that contain a single key
"emb_params". The only example of this format I know of is the
"EasyNegative" embedding on HuggingFace, but there are certainly others
(a simplified loading sketch follows below).

- This PR also adds support for loading embedding files that have been
saved in safetensors format.

- It also cleans up the code so that the logic of probing for and
selecting the right format parser is clear.

- This is the same as #3045, which is on the 2.3 branch.
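
A simplified sketch of the probing idea, assuming the file is either a torch pickle or a `.safetensors` file; the key handling beyond `"emb_params"` follows the common textual-inversion layout and is not the actual InvokeAI parser:

```
from pathlib import Path

import torch
from safetensors.torch import load_file


def load_embedding_tensor(path: str) -> torch.Tensor:
    p = Path(path)
    data = load_file(p) if p.suffix == ".safetensors" else torch.load(p, map_location="cpu")
    if "emb_params" in data:
        # Single-key format, e.g. the "EasyNegative" embedding.
        return data["emb_params"]
    if "string_to_param" in data:
        # Classic textual-inversion layout: a dict of token -> tensor.
        return next(iter(data["string_to_param"].values()))
    raise ValueError(f"Unrecognized embedding format: {path}")
```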
2023-03-31 03:57:57 -04:00
Lincoln Stein
44843be4c8 Merge branch 'main' into enhance/support-another-embedding-format-main 2023-03-30 23:16:52 -04:00
Lincoln Stein
054e963bef add basic autocomplete functionality to node cli (#3035)
- Commands, invocations and their parameters will now autocomplete using
introspection (a rough completer sketch follows below).
- Two types of parameter *arguments* will also autocomplete:
  - --sampler_name  will autocomplete the scheduler name
  - --model will autocomplete the model name
- There don't seem to be commands for reading/writing image files yet,
so path autocompletion is not implemented
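
To show the general shape of the mechanism, here is a rough completer sketch using Python's standard `readline` module; the command and model lists below are placeholders standing in for the real introspection, not the CLI's actual completer:

```
import readline

COMMANDS = ["!fetch", "!replay", "--sampler_name", "--model"]
MODELS = ["stable-diffusion-1.5", "stable-diffusion-2.1"]


def complete(text: str, state: int):
    # Complete model names right after --model, otherwise complete commands/arguments.
    buffer = readline.get_line_buffer()
    candidates = MODELS if "--model" in buffer.split()[-2:] else COMMANDS
    matches = [c for c in candidates if c.startswith(text)]
    return matches[state] if state < len(matches) else None


readline.set_completer(complete)
readline.parse_and_bind("tab: complete")
```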
2023-03-30 08:25:36 -04:00
Lincoln Stein
afb66a7884 Merge branch 'main' into feat/node-cli-autocompleter 2023-03-30 07:51:51 -04:00
Lincoln Stein
b9df9e26f2 Merge branch 'main' into enhance/support-another-embedding-format-main 2023-03-30 07:51:23 -04:00
Lincoln Stein
25ae36ceb5 I18n build mode (#3051)
Add build mode option to bundle english translation with UI
2023-03-29 22:26:45 -04:00
Lincoln Stein
3ae8daedaa Merge branch 'main' into i18n-build-mode 2023-03-29 22:26:17 -04:00
Lincoln Stein
e11c1d66ab handle multiple tokens and embeddings in single file 2023-03-29 22:05:06 -04:00
Lincoln Stein
b913e1e11e improve importation and conversion of legacy checkpoint files (#3053)
A long-standing issue with importing legacy checkpoints (both ckpt and
safetensors) is that the user has to identify the correct config file,
either by providing its path or by selecting which type of model the
checkpoint is (e.g. "v1 inpainting"). In addition, some users wish to
provide custom VAEs for use with the model. Currently this is done in
the WebUI by importing the model, editing it, and then typing in the
path to the VAE.

## Model configuration file selection

To improve the user experience, the model manager's `heuristic_import()`
method has been enhanced as follows:

1. When initially called, the caller can pass a config file path, in
which case it will be used.

2. If no config file is provided, the method looks for a .yaml file in
the same directory as the model which bears the same basename, e.g.
```
   my-new-model.safetensors
   my-new-model.yaml
```
The yaml file is then used as the configuration file for importation and
conversion.

3. If no such file is found, then the method opens up the checkpoint and
probes it to determine whether it is V1, V1-inpaint or V2. If it is a V1
format, then the appropriate v1-inference.yaml config file is used.
Unfortunately there are two V2 variants that cannot be distinguished by
introspection.

4. If the probe algorithm is unable to determine the model type, then
its last-ditch effort is to execute an optional callback function that
can be provided by the caller. This callback, named
`config_file_callback`, receives the path to the legacy checkpoint and
returns the path to the config file to use (see the sketch after this
list). The CLI uses this to put up a multiple-choice prompt to the user.
The WebUI **could** use this to prompt the user to choose from a
radio-button selection.

5. If the config file cannot be determined, then the import is
abandoned.
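
Here is a hedged sketch of how a caller might wire up the callback from step 4; the exact `heuristic_import()` signature may differ from the real ModelManager API, and `choose_config_interactively` is a hypothetical helper:

```
from pathlib import Path

from invokeai.backend import ModelManager


def choose_config_interactively(checkpoint_path: Path) -> Path:
    # Last-ditch fallback: ask the user which config file matches this checkpoint.
    print(f"Could not determine the model type of {checkpoint_path}")
    return Path(input("Path to the config file to use: "))


manager = ModelManager('/path/to/models.yaml')
manager.heuristic_import(
    'my-new-model.safetensors',
    # Only consulted if the basename .yaml lookup and probing both fail.
    config_file_callback=choose_config_interactively,
)
```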

## Custom VAE Selection

The user can attach a custom VAE to the imported and converted model by
copying the desired VAE into the same directory as the file to be
imported, and giving it the same basename. E.g.:

```
    my-new-model.safetensors
    my-new-model.vae.pt
```

For this to work, the VAE must end with ".vae.pt", ".vae.ckpt", or
".vae.safetensors". The indicated VAE will be converted into diffusers
format and stored with the converted models file, so the ".pt" file can
be deleted after conversion.

No facility is currently provided to swap a diffusers VAE at import
time, but this can be done after the fact using the WebUI and CLI's
model editing functions.

Note that this is the same fix that was applied to the 2.3 branch in
#3043 . This applies to `main`.
2023-03-29 17:22:15 -04:00
Lincoln Stein
3c4b6d5735 Merge branch 'main' into enhance/heuristic-import-improvements 2023-03-29 16:54:43 -04:00
Mary Hipp Rogers
e6123eac19 Merge branch 'main' into i18n-build-mode 2023-03-29 05:33:14 -07:00
Lincoln Stein
30ca25897e Fix bugs in online ckpt conversion of 2.0 models (#3057)
## Enable the on-the-fly conversion of models based on SD 2.0/2.1 into
diffusers

This commit fixes bugs related to the on-the-fly conversion and loading
of legacy checkpoint models built on SD-2.0 base.

- When legacy checkpoints built on SD-2.0 models were converted
on-the-fly using --ckpt_convert, generation would crash with a precision
incompatibility error. This problem has been found and fixed.
2023-03-28 23:34:53 -04:00
Lincoln Stein
abaee6b9ed Merge branch 'main' into feat/node-cli-autocompleter 2023-03-28 23:32:10 -04:00
Lincoln Stein
4d7c9e1ab7 Merge branch 'main' into bugfix/convert-2.0-models 2023-03-28 23:01:36 -04:00
Eugene
cc5687f26c [nodes] downgrade fastapi+uvicorn to fix openapi schema 2023-03-28 22:53:20 -04:00
Lincoln Stein
cdb3616dca Merge branch 'main' into enhance/support-another-embedding-format-main 2023-03-28 21:03:06 -04:00
Mary Hipp Rogers
78e76f26f9 Merge branch 'main' into i18n-build-mode 2023-03-28 11:04:32 -04:00
Lincoln Stein
9a7580dedd fix bugs in online ckpt conversion of 2.0 models
This commit fixes bugs related to the on-the-fly conversion and loading of
legacy checkpoint models built on SD-2.0 base.

- When legacy checkpoints built on SD-2.0 models were converted
  on-the-fly using --ckpt_convert, generation would crash with a
  precision incompatibility error.
2023-03-28 00:17:20 -04:00
Lincoln Stein
dc2da8cff4 Doc: updating ROCm version in documentation (#3041)
The PyTorch ROCm version in the documentation is outdated (`rocm5.2`),
which leads to errors during the installation of InvokeAI.

This PR updates the documentation with the latest Pytorch ROCm `5.4.2`
version.
2023-03-27 22:37:43 -04:00
Lincoln Stein
019a9f0329 address change requests in PR
1. Prompt has changed to "invoke> ".
2. Function to initialize the autocompleter has been renamed "set_autocompleter()"
2023-03-27 12:20:24 -04:00
Lincoln Stein
fe5d9ad171 improve importation and conversion of legacy checkpoint files
A long-standing issue with importing legacy checkpoints (both ckpt and
safetensors) is that the user has to identify the correct config file,
either by providing its path or by selecting which type of model the
checkpoint is (e.g. "v1 inpainting"). In addition, some users wish to
provide custom VAEs for use with the model. Currently this is done in
the WebUI by importing the model, editing it, and then typing in the
path to the VAE.

To improve the user experience, the model manager's
`heuristic_import()` method has been enhanced as follows:

1. When initially called, the caller can pass a config file path, in
which case it will be used.

2. If no config file is provided, the method looks for a .yaml file in the
same directory as the model which bears the same basename. e.g.
```
   my-new-model.safetensors
   my-new-model.yaml
```
   The yaml file is then used as the configuration file for
   importation and conversion.

3. If no such file is found, then the method opens up the checkpoint
   and probes it to determine whether it is V1, V1-inpaint or V2.
   If it is a V1 format, then the appropriate v1-inference.yaml config
   file is used. Unfortunately there are two V2 variants that cannot be
   distinguished by introspection.

4. If the probe algorithm is unable to determine the model type, then its
   last-ditch effort is to execute an optional callback function that can
   be provided by the caller. This callback, named `config_file_callback`,
   receives the path to the legacy checkpoint and returns the path to the
   config file to use. The CLI uses this to put up a multiple-choice
   prompt to the user. The WebUI **could** use this to prompt the user to
   choose from a radio-button selection.

5. If the config file cannot be determined, then the import is abandoned.

The user can attach a custom VAE to the imported and converted model
by copying the desired VAE into the same directory as the file to be
imported, and giving it the same basename. E.g.:

```
    my-new-model.safetensors
    my-new-model.vae.pt
```

For this to work, the VAE must end with ".vae.pt", ".vae.ckpt", or
".vae.safetensors". The indicated VAE will be converted into diffusers
format and stored with the converted models file, so the ".pt" file
can be deleted after conversion.

No facility is currently provided to swap a diffusers VAE at import
time, but this can be done after the fact using the WebUI and CLI's
model editing functions.
2023-03-27 11:27:45 -04:00
Mary Hipp
dbc0093b31 Merge remote-tracking branch 'origin' into i18n-build-mode 2023-03-27 10:57:41 -04:00
Mary Hipp
92e512b8b6 add package mode option for i18next 2023-03-27 10:49:52 -04:00
Lincoln Stein
abe4dc8ac1 Add support for yet another textual inversion embedding format
- This PR adds support for embedding files that contain a single key
  "emb_params". The only example of this format I know of is the
  "EasyNegative" embedding on HuggingFace, but there are certainly
  others.

- This PR also adds support for loading embedding files that have been
  saved in safetensors format.

- It also cleans up the code so that the logic of probing for and
  selecting the right format parser is clear.
2023-03-27 09:39:03 -04:00
Lincoln Stein
dc14701d20 Merge branch 'main' into feat/node-cli-autocompleter 2023-03-26 23:46:10 -04:00
Tom Gouville
737e0f3085 doc: fixing error in rocm version 2023-03-26 12:40:20 +02:00
Tom Gouville
81b7ea4362 doc: updating ROCm version for pip install 2023-03-26 12:32:12 +02:00
blessedcoolant
09dfde0ba1 fix(ui): fix viewer tooltip localisation strings (#3037)
fixes #2923
2023-03-26 20:35:52 +13:00
blessedcoolant
3ba7e966b5 Merge branch 'main' into fix/ui/viewer-localisation 2023-03-26 20:35:12 +13:00
blessedcoolant
a1cd4834d1 nodes: add cancelation, updated progress callback, typing fixes (#3036)
keeping `main` up to date with my api nodes branch:
- bd7e515290: [nodes] Add cancelation to
the API @Kyle0654
- 5fe38f7: fix(backend): simple typing fixes
  - just picking some low-hanging fruit to improve IDE hinting
- c34ac91: fix(nodes): fix cancel; fix callback for img2img, inpaint
- makes nodes cancel immediate, use fix progress images on nodes, fix
callbacks for img2img/inpaint
- 4221cf7: fix(nodes): fix schema generation for output classes
- did this previously for some other class; needed to not have node
outputs be optional
2023-03-26 20:34:27 +13:00
psychedelicious
a724038dc6 fix(ui): fix viewer tooltip localisation strings
fixes #2923
2023-03-26 17:43:00 +11:00
psychedelicious
4221cf7731 fix(nodes): fix schema generation for output classes
All output classes need to have their properties flagged as `required` for the schema generation to work as needed.
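
As a small illustration of that idea (pydantic v1 style; the output class and its fields are made up, not the real node outputs):

```
from typing import Optional

from pydantic import BaseModel


class ExampleImageOutput(BaseModel):
    """Illustrative output class whose fields have defaults, so pydantic would
    normally mark them optional in the generated schema."""

    image_name: Optional[str] = None
    width: int = 0
    height: int = 0

    class Config:
        @staticmethod
        def schema_extra(schema: dict, model) -> None:
            # Force every property to be listed as required so client code
            # generation does not treat the output fields as optional.
            schema["required"] = list(schema.get("properties", {}).keys())
```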
2023-03-26 17:20:10 +11:00
psychedelicious
c34ac91ff0 fix(nodes): fix cancel; fix callback for img2img, inpaint 2023-03-26 17:07:40 +11:00
psychedelicious
5fe38f7c88 fix(backend): simple typing fixes 2023-03-26 17:07:03 +11:00
Kyle Schouviller
bd7e515290 [nodes] Add cancelation to the API 2023-03-26 15:47:32 +11:00
Lincoln Stein
076fac07eb feat[web]: use the predicted denoised image for previews (#2915)
Some schedulers report not only the noisy latents at the current
timestep, but also their estimate so far of what the de-noised latents
will be.

It makes for a more legible preview than the noisy latents do (a small sketch of the fallback appears at the end of this message).

I think this is a huge improvement, but there are a few considerations:
- Need to not spook @JPPhoto by changing how previews look.
- Some schedulers (most notably **DPM Solver++**) don't provide this
data, and it falls back to the current behavior there. That's not
terrible, but seeing such a big difference in how _previews_ look from
one scheduler to the next might mislead people into thinking there's a
bigger difference in their overall effectiveness than there really is.

My fear of configuration-option-overwhelm leaves me inclined to _not_
add a configuration option for this, but we could.
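
For reference, a small sketch of the fallback described above; the attribute names follow diffusers' scheduler step outputs, but the helper itself is illustrative rather than the actual preview wiring:

```
def latents_for_preview(step_output):
    # Many diffusers schedulers expose `pred_original_sample` (their estimate of
    # the fully de-noised latents) on the object returned by `scheduler.step()`.
    # Some, e.g. DPM Solver++, do not, so fall back to the noisy latents.
    predicted = getattr(step_output, "pred_original_sample", None)
    return predicted if predicted is not None else step_output.prev_sample
```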
2023-03-26 00:29:00 -04:00
Lincoln Stein
9348161600 add basic autocomplete functionality to node cli
- Commands, invocations and their parameters will now autocomplete
  using introspection.
- Two types of parameter *arguments* will also autocomplete:
  - --sampler_name  will autocomplete the scheduler name
  - --model will autocomplete the model name
- There don't seem to be commands for reading/writing image files yet, so
  path autocompletion is not implemented
2023-03-26 00:24:27 -04:00
Lincoln Stein
dac3c158a5 Merge branch 'main' into feat/preview_predicted_x0
- resolve conflicts with generate.py invocation
- remove unused symbols that pyflakes complains about
- add **untested** code for passing intermediate latent image to the
  step callback in the format expected.
2023-03-25 16:07:18 -04:00
Lincoln Stein
17d8bbf330 ask for escalated privileges in push workflows 2023-03-25 15:22:25 -04:00
Eugene Brodsky
9344687a56 installer: fix indentation in invoke.sh template (tabs -> spaces) 2023-03-25 13:57:09 -04:00
Lincoln Stein
cf534d735c duplicate of PR #3016, but based on main 2023-03-25 13:57:09 -04:00
Lincoln Stein
501924bc60 do not reexport PipelineIntermediateState 2023-03-25 13:57:09 -04:00
Lincoln Stein
d117251747 make step_callback work again in generate() call
This PR fixes #2951 and restores the step_callback argument in the
refactored generate() method. Note that this issue states that
"something is still wrong because steps and step are zero." However,
I think this is confusion over the call signature of the callback, which
since the diffusers merge has been `callback(state:PipelineIntermediateState)`

This is the test script that I used to determine that `step` is being passed
correctly:

```

from pathlib import Path
from invokeai.backend import ModelManager, PipelineIntermediateState
from invokeai.backend.globals import global_config_dir
from invokeai.backend.generator import Txt2Img

def my_callback(state:PipelineIntermediateState, total_steps:int):
    print(f'callback(step={state.step}/{total_steps})')

def main():
    manager = ModelManager(Path(global_config_dir()) / "models.yaml")
    model = manager.get_model('stable-diffusion-1.5')
    print ('=== TXT2IMG TEST ===')
    steps=30
    output = next(Txt2Img(model).generate(prompt='banana sushi',
                                          iterations=None,
                                          steps=steps,
                                          step_callback=lambda x: my_callback(x,steps)
                                          )
                  )
    print(f'image={output.image}, seed={output.seed}, steps={output.params.steps}')

if __name__=='__main__':
    main()
```
2023-03-25 13:57:09 -04:00
blessedcoolant
6ea61a8486 fix issue with embeddings being loaded twice (#3029)
This bug was causing a bunch of annoying warnings about not overwriting
previously loaded tokens.

- as noted by JPPhoto
2023-03-26 04:45:20 +13:00
blessedcoolant
e4d903af20 Merge branch 'main' into bugfix/load-embeddings-once 2023-03-26 04:19:43 +13:00
Lincoln Stein
2d9797da35 (fix)[docs] Fixed snippet/code formatting (#2918)
It was pasted as plain text, now it's a code fence.
2023-03-25 10:49:13 -04:00
Lincoln Stein
07ea806553 Merge branch 'main' into patch-1 2023-03-25 10:48:25 -04:00
Lincoln Stein
5ac0316c62 fix issue with embeddings being loaded twice
- as noted by JPPhoto
2023-03-25 10:45:03 -04:00
Lincoln Stein
9536ba22af Convert custom VAEs during legacy checkpoint loading (#3010)
- When a legacy checkpoint model is loaded via --convert_ckpt and its
models.yaml stanza refers to a custom VAE path (using the 'vae:' key),
the custom VAE will be converted and used within the diffusers model.
Otherwise the VAE contained within the legacy model will be used.
    
- Note that the checkpoint import functions in the CLI or Web UIs
continue to default to the standard stabilityai/sd-vae-ft-mse VAE. This
can be fixed after the fact by editing the VAE key using either the CLI or
Web UI.
   
- Fixes issue #2917
2023-03-25 00:37:12 -04:00
blessedcoolant
5503749085 Merge branch 'main' into feat/use-custom-vaes 2023-03-25 17:09:38 +13:00
Lincoln Stein
9bfe2fa371 add github API token to mkdocs workflow (#3023)
The mkdocs-workflow has been failing over the past week due to
permission denied errors. I *think* this is the result of not passing
the GitHub API token to the workflow, and this is a speculative fix for
the issue.
2023-03-24 17:59:53 -04:00
Lincoln Stein
d8ce6e4426 Merge branch 'bugfix/mkdocs-workflow' of github.com:invoke-ai/InvokeAI into bugfix/mkdocs-workflow 2023-03-24 17:58:16 -04:00
Lincoln Stein
43d2d6d98c add blessedcoolant as backup to mauwii codeowner 2023-03-24 17:58:03 -04:00
Lincoln Stein
64c233efd4 Merge branch 'main' into bugfix/mkdocs-workflow 2023-03-24 17:47:14 -04:00
Lincoln Stein
2245a4e117 doc(readme): fix incorrect install command (#3024)
Hi, I am trying to install InvokeAI on my Linux machine; the command in
README.md does not install the correct dependency.
2023-03-24 17:46:58 -04:00
Lincoln Stein
9ceec40b76 Merge branch 'main' into feat/use-custom-vaes 2023-03-24 17:45:02 -04:00
Yeung Yiu Hung
0f13b90059 doc(readme): fix incorrect install command 2023-03-24 23:21:51 +08:00
Lincoln Stein
d91fc16ae4 add github API token to mkdocs workflow 2023-03-24 09:17:30 -04:00
Lincoln Stein
bc01a96f9d re-implement model scanning when loading legacy checkpoint files (#3012)
- This PR turns on pickle scanning before a legacy checkpoint file is
loaded from disk within the checkpoint_to_diffusers module.

- Also miscellaneous diagnostic message cleanup.

- See also #3011 for a similar patch to the 2.3 branch.
2023-03-24 08:57:07 -04:00
Lincoln Stein
85b2822f5e Merge branch 'main' into security/scan-ckpt-files-main 2023-03-24 08:39:59 -04:00
blessedcoolant
c33d8694bb build: do not run python tests on ui build (#2987)
`invokeai/frontend/web/dist/**` should not be triggering the full test
suite.
2023-03-25 00:54:40 +13:00
blessedcoolant
685bd027f0 Merge branch 'main' into build/no-test-on-ui-build 2023-03-25 00:51:26 +13:00
psychedelicious
f592d620d5 ui: translations update from weblate (#3021)
Translations update from [Hosted Weblate](https://hosted.weblate.org)
for [InvokeAI/Web
UI](https://hosted.weblate.org/projects/invokeai/web-ui/).



Current translation status:

![Weblate translation
status](https://hosted.weblate.org/widgets/invokeai/-/web-ui/horizontal-auto.svg)
2023-03-24 19:25:17 +11:00
Tom
2b127b73ac translationBot(ui): update translation (French)
Currently translated at 82.7% (417 of 504 strings)

Co-authored-by: Tom <tom.fouthier@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/fr/
Translation: InvokeAI/Web UI
2023-03-24 04:49:27 +01:00
gallegonovato
8855902cfe translationBot(ui): update translation (Spanish)
Currently translated at 100.0% (504 of 504 strings)

translationBot(ui): update translation (Spanish)

Currently translated at 100.0% (501 of 501 strings)

Co-authored-by: gallegonovato <fran-carro@hotmail.es>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/es/
Translation: InvokeAI/Web UI
2023-03-24 04:49:27 +01:00
Hosted Weblate
9d8ddc6a08 translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2023-03-24 04:49:27 +01:00
Riccardo Giovanetti
4ca5189e73 translationBot(ui): update translation (Italian)
Currently translated at 100.0% (504 of 504 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (501 of 501 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (500 of 500 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2023-03-24 04:49:27 +01:00
Lincoln Stein
873597cb84 Allow loading all types of dreambooth models - Fix issue #2932 (#2933)
Allows loading models with EMA using `model_ema.diffusion_model.xxxx` or
`model_ema.xxxx` weights.

Fixes #2932
2023-03-23 23:40:04 -04:00
Lincoln Stein
44d742f232 Merge branch 'main' into security/scan-ckpt-files-main 2023-03-23 23:33:49 -04:00
Lincoln Stein
6e7dbf99f3 Merge branch 'main' into bugfix/dreambooth_ema 2023-03-23 23:24:15 -04:00
Lincoln Stein
1ba1076888 Tidy up Tests and Provide Documentation (#2869)
Bit of basic housekeeping and documentation to explain to people how to
get local development environment running (including the tests).
2023-03-23 23:23:20 -04:00
Lincoln Stein
cafa108f69 Merge branch 'main' into tests 2023-03-23 23:22:27 -04:00
Lincoln Stein
deeff36e16 Merge branch 'main' into security/scan-ckpt-files-main 2023-03-23 23:20:52 -04:00
Lincoln Stein
d770b14358 [deps] upgrade compel for better .swap defaults and a bugfix (#3014) 2023-03-23 19:01:12 -04:00
Lincoln Stein
20414ba4ad Merge branch 'main' into deps_upgrade_compel 2023-03-23 18:38:46 -04:00
Lincoln Stein
92721a1d45 do not reexport PipelineIntermediateState 2023-03-24 09:32:47 +11:00
Lincoln Stein
f329fddab9 make step_callback work again in generate() call
This PR fixes #2951 and restores the step_callback argument in the
refactored generate() method. Note that this issue states that
"something is still wrong because steps and step are zero." However,
I think this is confusion over the call signature of the callback, which
since the diffusers merge has been `callback(state:PipelineIntermediateState)`.

This is the test script that I used to determine that `step` is being passed
correctly:

```

from pathlib import Path
from invokeai.backend import ModelManager, PipelineIntermediateState
from invokeai.backend.globals import global_config_dir
from invokeai.backend.generator import Txt2Img

def my_callback(state:PipelineIntermediateState, total_steps:int):
    print(f'callback(step={state.step}/{total_steps})')

def main():
    manager = ModelManager(Path(global_config_dir()) / "models.yaml")
    model = manager.get_model('stable-diffusion-1.5')
    print ('=== TXT2IMG TEST ===')
    steps=30
    output = next(Txt2Img(model).generate(prompt='banana sushi',
                                          iterations=None,
                                          steps=steps,
                                          step_callback=lambda x: my_callback(x,steps)
                                          )
                  )
    print(f'image={output.image}, seed={output.seed}, steps={output.params.steps}')

if __name__=='__main__':
    main()
```
2023-03-24 09:32:47 +11:00
Lincoln Stein
f2efde27f6 load embeddings after a ckpt legacy model is converted to diffusers (#3013)
This PR corrects a bug in which embeddings were not being applied when a
non-diffusers model was loaded.

- Fixes #2954
- Also improves diagnostic reporting during embedding loading.
2023-03-23 18:10:19 -04:00
Damian Stewart
02c58f22be upgrade compel for better .swap defaults and a bugfix 2023-03-23 22:34:54 +01:00
Lincoln Stein
f751dcd245 load embeddings after a ckpt legacy model is converted to diffusers
- Fixes #2954
- Also improves diagnostic reporting during embedding loading.
2023-03-23 15:21:58 -04:00
Lincoln Stein
a97107bd90 handle VAEs that do not have a "state_dict" key 2023-03-23 15:11:29 -04:00
Lincoln Stein
b2ce45a417 re-implement model scanning when loading legacy checkpoint files
- This PR turns on pickle scanning before a legacy checkpoint file
  is loaded from disk within the checkpoint_to_diffusers module.

- Also miscellaneous diagnostic message cleanup.
2023-03-23 15:03:30 -04:00
Lincoln Stein
4e0b5d85ba convert custom VAEs into diffusers
- When a legacy checkpoint model is loaded via --convert_ckpt and its
  models.yaml stanza refers to a custom VAE path (using the 'vae:'
  key), the custom VAE will be converted and used within the diffusers
  model. Otherwise the VAE contained within the legacy model will be
  used.

- Note that the heuristic_import() method, which imports arbitrary
  legacy files on disk and URLs, will continue to default to the
  standard stabilityai/sd-vae-ft-mse VAE. This can be fixed after
  the fact by editing the models.yaml stanza using the Web or CLI
  UIs (a diffusers-level sketch of swapping in a custom VAE follows
  below).

- Fixes issue #2917
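As a rough, diffusers-level illustration of the end result (not the InvokeAI conversion code itself; both model ids below are examples only):

```python
from diffusers import AutoencoderKL, StableDiffusionPipeline

# Load a standalone VAE and attach it to the pipeline in place of the
# VAE bundled with the checkpoint.
vae = AutoencoderKL.from_pretrained('stabilityai/sd-vae-ft-mse')
pipe = StableDiffusionPipeline.from_pretrained(
    'runwayml/stable-diffusion-v1-5', vae=vae
)
```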
2023-03-23 13:14:19 -04:00
Lincoln Stein
a958ae5e29 Merge branch 'main' into feat/use-custom-vaes 2023-03-23 10:32:56 -04:00
blessedcoolant
4d50fbf8dc Merge branch 'main' into build/no-test-on-ui-build 2023-03-23 01:08:24 +13:00
blessedcoolant
485f6e5954 Export more for header (#2996)
* export more items needed for dynamic header
* remove build mode that is no longer needed
2023-03-23 01:07:16 +13:00
Mary Hipp Rogers
1f6ce838ba Merge branch 'main' into export-more-for-header 2023-03-22 07:49:15 -04:00
blessedcoolant
0dc5773849 [nodes] Update fastapi packages to latest (except FastAPI, which has an annotation bug in the newest version) (#3004) 2023-03-22 19:12:45 +13:00
Kyle Schouviller
bc347f749c [nodes] Update fastapi packages to latest (except FastAPI, which has an annotation bug in the newest version) 2023-03-21 19:45:17 -07:00
Mary Hipp Rogers
1b215059e7 Merge branch 'main' into export-more-for-header 2023-03-21 16:29:53 -04:00
Mary Hipp
db079a2733 remove unneeded build:package code 2023-03-21 10:29:27 -04:00
Mary Hipp
26f71d3536 change back 2023-03-21 10:28:29 -04:00
Mary Hipp
eb7ae2588c unused var 2023-03-21 10:21:58 -04:00
Mary Hipp
278c14ba2e try jsx.element 2023-03-21 10:18:38 -04:00
Mary Hipp
74e83dda54 update type 2023-03-21 10:10:48 -04:00
blessedcoolant
28c1fca477 Merge branch 'main' into build/no-test-on-ui-build 2023-03-20 02:21:40 +13:00
psychedelicious
1f0324102a chore(ui): build 2023-03-19 23:16:29 +11:00
psychedelicious
a782ad092d feat(ui): localise iaialertdialog defaults 2023-03-19 23:16:29 +11:00
psychedelicious
eae4eb419a fix(ui): popovers trigger on click (accessibility) 2023-03-19 23:16:29 +11:00
psychedelicious
fb7f38f46e fix(ui): make alertdialogs centered 2023-03-19 23:16:29 +11:00
psychedelicious
93d0cae455 fix(ui): fix alertdialogs closing immediately 2023-03-19 23:16:29 +11:00
psychedelicious
35f6b5d562 fix(ui): make invoketabs not lazy 2023-03-19 23:16:29 +11:00
blessedcoolant
2aefa06ef1 fix(ui): Clean up manual add forms 2023-03-19 23:16:29 +11:00
psychedelicious
5906888477 feat(ui): add current image loading fallback 2023-03-19 23:16:29 +11:00
psychedelicious
f22c7d0da6 feat(ui): add more w/h options 2023-03-19 23:16:29 +11:00
psychedelicious
93b38707b2 feat(ui): tidy up model manager styling
fixes #2970
2023-03-19 23:16:29 +11:00
blessedcoolant
6ecf53078f fix(ui): Misalignment of model search entries 2023-03-19 23:16:29 +11:00
psychedelicious
9c93b7cb59 build: do not run python tests on ui build
`invokeai/frontend/web/dist/**` should not be triggering the full test suite.
2023-03-19 23:01:30 +11:00
blessedcoolant
7789e8319c Fix some text and a link (#2910)
- Fix link to `070_INSTALL_XFORMERS.md`.
- Fix some spelling.
2023-03-19 05:55:18 +13:00
Lincoln Stein
7d7a28beb3 Merge branch 'main' into main-text-fixup-PR 2023-03-18 09:54:41 -07:00
psychedelicious
27a113d872 nodes: api fixes (#2959)
- 86932469e76f1315ee18bfa2fc52b588241dace1 add image_to_dataURL util
- 0c2611059711b45bb6142d30b1d1343ac24268f3 make fast latents method
static
- this method doesn't really need `self` and should be able to be called
without instantiating `Generator`
- 2360bfb6558ea511e9c9576f3d4b5535870d84b4 fix schema gen for
GraphExecutionState
- `GraphExecutionState` uses `default_factory` in its fields; the result
is that the OpenAPI schema marks those fields as optional, which propagates
to the generated API client, which means we need a lot of unnecessary
type guards to use this data type. The [simple
fix](https://github.com/pydantic/pydantic/discussions/4577) is to add
config to explicitly say all class properties are required (see the hedged
sketch after this list). It looks like this will be resolved in a future
pydantic release.
- 3cd7319cfdb0f07c6bb12d62d7d02efe1ab12675 fix step callback and fast
latent generation on nodes. have this working in UI. depends on the
small change in #2957
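As noted above, a hedged sketch of the "mark everything required" approach using pydantic v1's callable `schema_extra`; `ExampleState` is a stand-in, not the real `GraphExecutionState`:

```python
from typing import Dict, List
from pydantic import BaseModel, Field

class ExampleState(BaseModel):
    # Stand-in fields; the real model uses default_factory similarly.
    results: Dict[str, str] = Field(default_factory=dict)
    history: List[str] = Field(default_factory=list)

    class Config:
        @staticmethod
        def schema_extra(schema: dict, model) -> None:
            # Mark every property as required so the generated OpenAPI
            # client does not treat default_factory fields as optional.
            schema['required'] = list(schema.get('properties', {}).keys())

print(ExampleState.schema()['required'])  # ['results', 'history']
```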
2023-03-16 20:24:28 +11:00
psychedelicious
67f8f222d9 fix(nodes): fix step_callback + fast latents generation
this depends on the small change in #2957
2023-03-16 20:03:08 +11:00
psychedelicious
5347c12fed fix(nodes): fix schema gen for GraphExecutionState 2023-03-16 20:03:08 +11:00
psychedelicious
b194180f76 feat(backend): make fast latents method static 2023-03-16 20:03:08 +11:00
psychedelicious
fb30b7d17a feat(backend): add image_to_dataURL util 2023-03-16 20:03:08 +11:00
blessedcoolant
c341dcaa3d update compel to fix black screens and use new downweighting algorithm (#2961)
Update `compel` to 1.0.0.

This fixes #2832.

It also changes the way downweighting is applied. In particular,
downweighting should now be much better and more controllable.

From the [compel
changelog](https://github.com/damian0815/compel#changelog):

> Downweighting now works by applying an attention mask to remove the
downweighted tokens, rather than literally removing them from the
sequence. This behaviour is the default, but the old behaviour can be
re-enabled by passing `downweight_mode=DownweightMode.REMOVE` on init of
the `Compel` instance.
>
> Formerly, downweighting a token worked by both multiplying the
weighting of the token's embedding, and doing an inverse-weighted blend
with a copy of the token sequence that had the downweighted tokens
removed. The intuition is that as weight approaches zero, the tokens
being downweighted should be actually removed from the sequence.
However, removing the tokens resulted in the positioning of all
downstream tokens becoming messed up. The blend ended up blending a lot
more than just the tokens in question.
> 
> As of v1.0.0, taking advice from @keturn and @bonlime
(https://github.com/damian0815/compel/issues/7) the procedure is by
default different. Downweighting still involves a blend but what is
blended is a version of the token sequence with the downweighted tokens
masked out, rather than removed. This correctly preserves positioning
embeddings of the other tokens.
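For reference, a hedged usage sketch of opting back into the old behaviour (exact import paths may differ; the model id is just an example):

```python
from compel import Compel, DownweightMode
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained('runwayml/stable-diffusion-v1-5')

# Masking is the default in compel 1.0; DownweightMode.REMOVE restores the
# pre-1.0 token-removal behaviour described in the changelog above.
compel = Compel(
    tokenizer=pipe.tokenizer,
    text_encoder=pipe.text_encoder,
    downweight_mode=DownweightMode.REMOVE,
)
conditioning = compel.build_conditioning_tensor('a (cat)0.5 sitting on a mat')
```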
2023-03-16 17:49:47 +13:00
Damian Stewart
b695a2574b bump compel version 2023-03-16 01:55:39 +01:00
Damian Stewart
aa68a326c8 update compel 2023-03-15 23:05:55 +01:00
Mary Hipp
c2922d5991 add settingsmodal 2023-03-15 16:12:51 -04:00
Mary Hipp
85888030c3 more things needed for header 2023-03-15 14:38:22 -04:00
blessedcoolant
7cf59c1e60 Merge branch 'main' into main-text-fixup-PR 2023-03-16 04:43:22 +13:00
psychedelicious
9738b0ff69 [nodes] Add Edge data type (#2958)
Adds an `Edge` data type, replacing the current tuple used for edges.
2023-03-15 18:41:56 +11:00
Kyle Schouviller
3021c78390 [nodes] Add Edge data type 2023-03-14 23:09:30 -07:00
blessedcoolant
6eeaf8d9fb Allow for dynamic header (#2955)
* Update root component to allow optional children that will render as
dynamic header of UI
* Export additional components (logo & themeChanger) for use in said
dynamic header (more to come here)
2023-03-15 07:41:24 +13:00
Mary Hipp
fa9afec0c2 fix npm deps 2023-03-14 14:15:03 -04:00
Mary Hipp
d6862bf8c1 fix npm deps 2023-03-14 14:14:16 -04:00
Mary Hipp
de01c38bbe fresh build 2023-03-14 14:11:42 -04:00
Mary Hipp
7e811908e0 remove 2023-03-14 14:09:16 -04:00
Mary Hipp
5f59f24f92 cleanup 2023-03-14 14:08:42 -04:00
Mary Hipp
e414fcf3fb bump version 2023-03-14 13:26:49 -04:00
Mary Hipp
079ad8f35a fix props 2023-03-14 13:22:57 -04:00
Mary Hipp
a4d7e0c78e export other components 2023-03-14 12:37:28 -04:00
blessedcoolant
e9c2f173c5 fix(inpaint): Seam painting being broken (#2952)
After #2942, seed needs to be passed down from inpaint to seam_paint.
Not doing so breaks inpainting and outpainting. This PR fixes it.
2023-03-15 00:38:26 +13:00
Jonathan
44f489d581 Merge branch 'main' into fix-seampaint 2023-03-14 06:19:25 -05:00
blessedcoolant
cb48bbd806 Removed file-extension-based arbitrary code execution attack vector (#2946)
# The Problem
Pickle files (.pkl, .ckpt, etc) are extremely unsafe as they can be
trivially crafted to execute arbitrary code when parsed using
`torch.load`
Right now the conventional wisdom among ML researchers and users is to
simply `not run untrusted pickle files ever` and instead only use
Safetensor files, which cannot be injected with arbitrary code. This is
very good advice.

Unfortunately, **I have discovered a vulnerability inside of InvokeAI
that allows an attacker to disguise a pickle file as a safetensor and
have the payload execute within InvokeAI.**

# How It Works
Within `model_manager.py` and `convert_ckpt_to_diffusers.py` there are
if-statements that decide which `load` method to use based on the file
extension of the model file. The logic (written in a slightly more
readable format than it exists in the codebase) is as follows:
```
if Path(file).suffix == '.safetensors':
   safetensor_load(file)
else:
   unsafe_pickle_load(file)
```

A malicious actor would only need to create an infected .ckpt file, and
then rename the extension to something that does not pass the `==
'.safetensors'` check, but still appears to a user to be a safetensors
file.
For example, this might be something like `.Safetensors`,
`.SAFETENSORS`, `SafeTensors`, etc.

InvokeAI will happily import the file in the Model Manager and execute
the payload.

# Proof of Concept
1. Create a malicious pickle file.
(https://gist.github.com/CodeZombie/27baa20710d976f45fb93928cbcfe368)
2. Rename the `.ckpt` extension to some variation of `.Safetensors`,
ensuring there is a capital letter anywhere in the extension (eg.
`malicious_pickle.SAFETENSORS`)
3. Import the 'model' like you would normally with any other safetensors
file with the Model Manager.
4. Upon trying to select the model in the web ui, it will be loaded (or
attempt to be converted to a Diffuser) with `torch.load` and the payload
will execute.


![image](https://user-images.githubusercontent.com/466103/224835490-4cf97ff3-41b3-4a31-85df-922cc99042d2.png)


# The Fix
This pull request changes the logic InvokeAI uses to decide which model
loader to use so that the safe behavior is the default. Instead of
loading as a pickle if the extension is not exactly `.safetensors`, it
will now **always** load as a safetensors file unless the extension is
**exactly** `.ckpt`.
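A minimal sketch of the reversed decision logic (illustrative only, with a hypothetical helper name, not the literal InvokeAI code):

```python
from pathlib import Path

import torch
from safetensors.torch import load_file

def load_model_weights(path: str):
    # Safe by default: anything that is not *exactly* ".ckpt" goes through
    # the safetensors loader, so a disguised pickle fails to parse instead
    # of executing its payload.
    if Path(path).suffix == '.ckpt':
        return torch.load(path, map_location='cpu')
    return load_file(path)
```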

# Notes:
I think support for pickle files should be totally dropped ASAP as a
matter of security, but I understand that there are reasons this would
be difficult.

In the meantime, I think `RestrictedUnpickler` or something similar
should be implemented as a replacement for `torch.load`, as this
significantly reduces the amount of Python methods that an attacker has
to work with when crafting malicious payloads
inside a pickle file. 
Automatic1111 already uses this with some success.
(https://github.com/AUTOMATIC1111/stable-diffusion-webui/blob/master/modules/safe.py)
2023-03-15 00:09:17 +13:00
blessedcoolant
0a761d7c43 fix(inpaint): Seam painting being broken 2023-03-15 00:00:08 +13:00
Damian Stewart
a0f47aa72e Merge branch 'main' into main 2023-03-14 11:41:29 +01:00
blessedcoolant
f9abc6fc85 fix --png_compression command line argument (#2950)
- The value of png_compression was always 6, despite the value provided
to the --png_compression argument. This fixes the bug.
- It also fixes an inconsistency between the maximum range of
png_compression and the help text.

- Closes #2945
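For context, a hedged sketch of what the option ultimately controls: Pillow's PNG encoder accepts a `compress_level` between 0 and 9.

```python
from PIL import Image

# Write the same image at the two extremes of the valid range.
img = Image.new('RGB', (64, 64), 'white')
img.save('fast.png', compress_level=0)   # no compression, larger file
img.save('small.png', compress_level=9)  # maximum compression, slower
```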
2023-03-14 18:20:17 +13:00
Lincoln Stein
d840c597b5 fix --png_compression command line argument
- The value of png_compression was always 6, despite the value provided to the
  --png_compression argument. This fixes the bug.
- It also fixes an inconsistency between the maximum range of png_compression
  and the help text.

- Closes #2945
2023-03-14 00:24:05 -04:00
Lincoln Stein
3ca654d256 speculative fix for alternative vaes 2023-03-13 23:27:29 -04:00
jeremy
e0e01f6c50 Reduced Pickle ACE attack surface
Prior to this commit, all models would be loaded with the extremely unsafe `torch.load` method, except those with the exact extension `.safetensors`. Even a change in casing (e.g. `saFetensors`, `Safetensors`, etc.) would cause the file to be loaded with `torch.load` instead of the much safer `safetensors.torch.load_file`.
If a malicious actor renamed an infected `.ckpt` to something like `.SafeTensors` or `.SAFETENSORS`, an unsuspecting user would think they are loading a safe .safetensors file, but would in fact be parsing an unsafe pickle file and executing an attacker's payload. This commit fixes this vulnerability by reversing the loading-method decision logic to only use the unsafe `torch.load` when the file extension is exactly `.ckpt`.
2023-03-13 16:16:30 -04:00
Kent Keirsey
d9dab1b6c7 Update BUG_REPORT.yml 2023-03-13 11:17:38 -04:00
Kent Keirsey
3b2ef6e1a8 Update BUG_REPORT.yml 2023-03-13 11:14:53 -04:00
Kent Keirsey
c125a3871a Update BUG_REPORT.yml 2023-03-13 11:14:04 -04:00
blessedcoolant
0996bd5acf Merge branch 'main' into patch-1 2023-03-14 03:18:58 +13:00
blessedcoolant
ea77d557da add additional build mode (#2904)
*`yarn build:package` will build default component 
* moved some devDependencies to dependencies that are needed for
postinstall script
2023-03-14 03:15:51 +13:00
blessedcoolant
1b01161ea4 Merge branch 'main' into pr/2904 2023-03-14 03:14:35 +13:00
blessedcoolant
2230cb9562 chore(UI, accessibility): Icons. Header links & radio button (#2935)
# Overview
- Links should be parent of icon
- _Added style to links so they still line up with sibling
components_
- Radio icon buttons
2023-03-14 03:13:19 +13:00
Mary Hipp Rogers
9e0c7c46a2 Merge branch 'main' into add-a-build-config 2023-03-13 09:58:17 -04:00
Mary Hipp
be305588d3 merged and rebuilt 2023-03-13 09:55:56 -04:00
blessedcoolant
9f994df814 Merge branch 'main' into chore/UI_more-accessibility-items 2023-03-14 02:49:47 +13:00
blessedcoolant
3062580006 Fix bug #2931 (#2942)
#2931 was caused by new code that held onto the PRNG in `get_make_image`
and used it in `make_image` for img2img and inpainting. This
functionality has been moved elsewhere so that we can generate multiple
images again.
2023-03-14 02:48:07 +13:00
JPPhoto
596ba754b1 Removed seed from get_make_image. 2023-03-13 08:15:46 -05:00
JPPhoto
b980e563b9 Fix bug #2931 2023-03-13 08:11:09 -05:00
blessedcoolant
7fe2606cb3 [nodes] Fixes calls into image to image and inpaint from nodes (#2940) 2023-03-13 19:05:32 +13:00
Kyle Schouviller
0c3b1fe3c4 [nodes] Fixes calls into image to image and inpaint from nodes 2023-03-12 22:12:42 -07:00
ElrikUnderlake
c9ee2e351c yarn build 2023-03-12 23:29:29 -05:00
ElrikUnderlake
e3aef20f42 chore(UI, accessibility): more items
- radio icon buttons
- links should be parent of icon
styled links to still line up with sibling components
2023-03-12 23:27:47 -05:00
blessedcoolant
60614badaf [nodes-api] Fix API generation to correctly reference outputs (#2939)
Correctly reference output types in node schemas
2023-03-13 17:02:55 +13:00
Kevin Turner
288cee9611 Merge remote-tracking branch 'origin/main' into feat/preview_predicted_x0
# Conflicts:
#	invokeai/app/invocations/generate.py
2023-03-12 20:56:02 -07:00
Kyle Schouviller
24aca37538 Just set output value in node schemas. Don't use additionalProperties, which would impact the schema. 2023-03-12 20:40:29 -07:00
Kyle Schouviller
b853ceea65 [nodes-api] Fix API generation to correctly reference outputs 2023-03-12 20:03:26 -07:00
Kyle Schouviller
3ee2798ede [fix] Get the model again if current model is empty 2023-03-12 22:26:11 -04:00
Fabio 'MrWHO' Torchetti
5c5106c14a Add keys when non EMA 2023-03-12 16:22:22 -05:00
Fabio 'MrWHO' Torchetti
c367b21c71 Fix issue #2932 2023-03-12 15:40:33 -05:00
blessedcoolant
2eef6df66a [ui]: add resizable pinnable drawer component (#2874)
WIP

This is based on the branch in #2873.
2023-03-12 22:46:48 +13:00
psychedelicious
300aa8d86c chore(ui): build 2023-03-12 20:13:58 +11:00
psychedelicious
727f1638d7 chore(ui): lint 2023-03-12 20:13:58 +11:00
psychedelicious
ee6df5852a fix(ui): fix lightbox 2023-03-12 20:13:38 +11:00
psychedelicious
90525b1c43 fix(ui): fix scrollable shadow 2023-03-12 20:13:38 +11:00
psychedelicious
bbb95dbc5b fix(ui): add color mode watcher 2023-03-12 20:13:38 +11:00
psychedelicious
f4b7f80d59 fix(ui): remove key prop 2023-03-12 20:13:38 +11:00
blessedcoolant
220f7373c8 feat(ui): Basic IAIOption Component & Fix Select Dropdown 2023-03-12 20:13:38 +11:00
blessedcoolant
4bb5785f29 fix(ui): Move Form Components to the correct folder 2023-03-12 20:13:38 +11:00
psychedelicious
f9a7a7d161 fix(ui): set colorMode to fix native selects 2023-03-12 20:13:38 +11:00
psychedelicious
de94c780d9 fix(ui): fix canvas status text bg 2023-03-12 20:13:38 +11:00
psychedelicious
0b9230380c fix(ui): default gallery category buttons to icon 2023-03-12 20:13:38 +11:00
psychedelicious
209a55b681 fix(ui): canvas rescale when toggle gallery 2023-03-12 20:13:38 +11:00
psychedelicious
dc2f69f5d1 fix(ui): process buttons display on canvas beta 2023-03-12 20:13:38 +11:00
psychedelicious
ad2f1b7b36 fix(ui): hack for hiding pinned panels 2023-03-12 20:13:38 +11:00
blessedcoolant
dd2d96a50f fix(ui): Bad styling on form elements 2023-03-12 20:13:38 +11:00
blessedcoolant
2bff28e305 fix(ui): Remove size limitation off the theme changer button 2023-03-12 20:13:38 +11:00
blessedcoolant
d68234d879 fix(ui): Gallery placeholder text not being centered 2023-03-12 20:13:38 +11:00
blessedcoolant
b3babf26a5 fix(ui): Fix current image buttons overflow 2023-03-12 20:13:38 +11:00
psychedelicious
ecca0eff31 fix(ui): hotkey accordion spacing 2023-03-12 20:13:38 +11:00
psychedelicious
28677f9621 fix(ui): process buttons display on canvas beta layout 2023-03-12 20:13:38 +11:00
psychedelicious
caecfadf11 fix(ui): fix shadow 2023-03-12 20:13:38 +11:00
psychedelicious
5cf8e3aa53 chore(ui): build 2023-03-12 20:13:38 +11:00
psychedelicious
76cf2c61db feat(ui): drawer almost done
TODO:
- hide while pinned
- lightbox interaction with gallery
2023-03-12 20:13:38 +11:00
psychedelicious
b4d976f2db fix(ui): fix flash of mini preview image
Restored the code that fixes this after having ripped it out thinking it didn't do anything. Spotted in #2915
2023-03-12 20:13:38 +11:00
psychedelicious
777d127c74 feat(ui): wip drawer component and build 2023-03-12 20:13:38 +11:00
psychedelicious
0678803803 lang(ui): update show canvas debug info string 2023-03-12 20:13:37 +11:00
blessedcoolant
d2fbc9f5e3 feat(ui): Add ThemeTypes & Move Grid Line Color 2023-03-12 20:13:37 +11:00
psychedelicious
d81088dff7 feat(ui): wip resizable pinnable drawer
fix(ui): remove old scrollbar css

fix(ui): make guidepopover lazy

feat(ui): wip resizable drawer

feat(ui): wip resizable drawer

feat(ui): add scroll-linked shadow

feat(ui): organize files

Align Scrollbar next to content

Move resizable drawer underneath the progress bar

Add InvokeLogo to unpinned & align

Adds Invoke Logo to Unpinned Parameters panel and aligns to make it feel seamless.
2023-03-12 20:13:37 +11:00
Lincoln Stein
1aaad9336f Remove image generation node dependencies on generate.py (#2902)
# Remove node dependencies on generate.py

This is a draft PR in which I am replacing `generate.py` with a cleaner,
more structured interface to the underlying image generation routines.
The basic code pattern to generate an image using the new API is this:

```
from invokeai.backend import ModelManager, Txt2Img, Img2Img

manager = ModelManager('/data/lstein/invokeai-main/configs/models.yaml')
model = manager.get_model('stable-diffusion-1.5')
txt2img = Txt2Img(model)
outputs = txt2img.generate(prompt='banana sushi', steps=12, scheduler='k_euler_a', iterations=5)

# generate() returns an iterator
for next_output in outputs:
    print(next_output.image, next_output.seed)

outputs = Img2Img(model).generate(prompt='strawberry sushi', init_img='./banana_sushi.png')
output = next(outputs)
output.image.save('strawberries.png')
```

### model management

The `ModelManager` handles model selection and initialization. Its
`get_model()` method will return a `dict` with the following keys:
`model`, `model_name`, `hash`, `width`, and `height`, where `model` is
the actual StableDiffusionGeneratorPipeline. If `get_model()` is called
without a model name, it will return whatever is defined as the default
in `models.yaml`, or the first entry if no default is designated.
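A short sketch of consuming that return value (the config path is a placeholder):

```python
from invokeai.backend import ModelManager

manager = ModelManager('/path/to/configs/models.yaml')
model_info = manager.get_model()   # no name -> default entry from models.yaml
pipeline = model_info['model']     # the StableDiffusionGeneratorPipeline
print(model_info['model_name'], model_info['hash'],
      model_info['width'], model_info['height'])
```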

### InvokeAIGenerator

The abstract base class `InvokeAIGenerator` is subclassed into
`Txt2Img`, `Img2Img`, `Inpaint` and `Embiggen`. The constructor for
these classes takes the model dict returned by
`model_manager.get_model()` and optionally an
`InvokeAIGeneratorBasicParams` object, which encapsulates all the
parameters in common among `Txt2Img`, `Img2Img` etc. If you don't
provide the basic params, a reasonable set of defaults will be chosen.
Any of these parameters can be overridden at `generate()` time.

These classes are defined in `invokeai.backend.generator`, but they are
also exported by `invokeai.backend` as shown in the example below.

```
from invokeai.backend import InvokeAIGeneratorBasicParams, Img2Img
params = InvokeAIGeneratorBasicParams(
    perlin=0.15,
    steps=30,
    scheduler='k_lms',
)
img2img = Img2Img(model, params)
outputs = img2img.generate(scheduler='k_heun')
```

Note that we were able to override the basic params in the call to
`generate()`.

The `generate()` method returns an iterator over a series of
`InvokeAIGeneratorOutput` objects. These objects contain the PIL image,
the seed, the model name and hash, and attributes for all the parameters
used to generate the object (you can also get these as a dict). The
`iterations` argument controls how many objects will be returned,
defaulting to 1. Pass `None` to get an infinite iterator.
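For example (a sketch reusing the `model` object from the snippet above), an infinite iterator can be bounded explicitly on the consumer side:

```python
from itertools import islice

# iterations=None yields an endless stream of InvokeAIGeneratorOutput
# objects; islice() takes just the first three.
outputs = Txt2Img(model).generate(prompt='banana sushi', iterations=None)
for output in islice(outputs, 3):
    print(output.seed, output.params.steps)
```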

Given the proposed use of `compel` to generate a templated series of
prompts, I thought the API would benefit from a style that lets you loop
over the output results indefinitely. I did consider returning a single
`InvokeAIGeneratorOutput` object in the event that `iterations=1`, but I
think it's dangerous for a method to return different types of result
under different circumstances.

Changing the model is as easy as this:
```
model = manager.get_model('inkspot-2.0')
txt2img = Txt2Img(model)
```

### Node and legacy support

With respect to `Nodes`, I have written `model_manager_initializer` and
`restoration_services` modules that return `model_manager` and
`restoration` services respectively. The latter is used by the face
reconstruction and upscaling nodes. There is no longer any reference to
`Generate` in the `app` tree.

I have confirmed that `txt2img` and `img2img` work in the nodes client.
I have not tested `embiggen` or `inpaint` yet. pytests are passing, with
some warnings that I don't think are related to what I did.

The legacy WebUI and CLI are still working off `Generate` (which has not
yet been removed from the source tree) and are fully functional.

I've finished all the tasks on my TODO list:

- [x] Update the pytests, which are failing due to dangling references
to `generate`
- [x] Rewrite the `reconstruct.py` and `upscale.py` nodes to call
directly into the postprocessing modules rather than going through
`Generate`
2023-03-11 21:48:23 -05:00
Lincoln Stein
1f3c024d9d Merge branch 'main' into refactor/nodes-on-generator 2023-03-11 21:31:42 -05:00
Lincoln Stein
74a480f94e add back static web directory 2023-03-11 21:23:41 -05:00
blessedcoolant
c6e8d3269c build: exclude ui from test-invoke-pip (#2892)
Prior to the folder restructure, the `paths` for `test-invoke-pip` did
not include the UI's path `invokeai/frontend/`:

```yaml
    paths:
      - 'pyproject.toml'
      - 'ldm/**'
      - 'invokeai/backend/**'
      - 'invokeai/configs/**'
      - 'invokeai/frontend/dist/**'
```

After the restructure, more code was moved into the `invokeai/frontend/`
folder, and `paths` was updated:

```yaml
    paths:
      - 'pyproject.toml'
      - 'invokeai/**'
      - 'invokeai/backend/**'
      - 'invokeai/configs/**'
      - 'invokeai/frontend/web/dist/**'
```

Now, the second path includes the UI. The UI now needs to be excluded,
and must be excluded prior to `invokeai/frontend/web/dist/**` being
included.

On `test-invoke-pip-skip`, we need to do a bit of logic juggling to
invert the folder selection. First, include the web folder, then exclude
everything around it, and finally exclude the `dist/` folder.
2023-03-12 14:18:51 +13:00
blessedcoolant
dcb5a3a740 Merge branch 'main' into build/exclude-ui-actions 2023-03-12 14:18:03 +13:00
Lincoln Stein
c0ef546b02 Merge branch 'refactor/nodes-on-generator' of github.com:invoke-ai/InvokeAI into refactor/nodes-on-generator 2023-03-11 18:31:47 -05:00
Matthias Wild
7a78a83651 raise operations-per-run for issue workflow to 500 (#2925)
- default value is 30
- limit per hour is 1000

This should help getting the count of open issues down.
2023-03-12 00:10:55 +01:00
Lincoln Stein
10cbf99310 add TODO comments 2023-03-11 18:08:45 -05:00
Jonathan
b63aefcda9 Merge branch 'main' into refactor/nodes-on-generator 2023-03-11 16:22:29 -06:00
Lincoln Stein
6a77634b34 remove unneeded generate initializer routines 2023-03-11 17:14:03 -05:00
Lincoln Stein
8ca91b1774 add restoration services to nodes 2023-03-11 17:00:00 -05:00
mauwii
1c9d9e79d5 raise operations-per-run to 500
- default value is 30
- limit per hour is 1000
2023-03-11 22:32:13 +01:00
Lincoln Stein
3aa1ee1218 restore NSFW checker 2023-03-11 16:16:44 -05:00
Jonathan
06aa5a8120 Merge branch 'main' into feat/preview_predicted_x0 2023-03-11 14:50:30 -06:00
Lincoln Stein
580f9ecded simplify passing of config options 2023-03-11 11:32:57 -05:00
psychedelicious
270032670a build: exclude ui from test-invoke-pip 2023-03-12 03:27:49 +11:00
psychedelicious
4f056cdb55 ui: translations update from weblate (#2922)
Translations update from [Hosted Weblate](https://hosted.weblate.org)
for [InvokeAI/Web
UI](https://hosted.weblate.org/projects/invokeai/web-ui/).



Current translation status:

![Weblate translation
status](https://hosted.weblate.org/widgets/invokeai/-/web-ui/horizontal-auto.svg)
2023-03-12 03:18:23 +11:00
Lincoln Stein
c14241436b move ModelManager initialization into its own module and restore embedding support 2023-03-11 10:56:53 -05:00
ssantos
50b56d6088 translationBot(ui): update translation (Portuguese)
Currently translated at 99.2% (496 of 500 strings)

Co-authored-by: ssantos <ssantos@web.de>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/pt/
Translation: InvokeAI/Web UI
2023-03-11 16:56:06 +01:00
Sergey Krashevich
8ec2ae7954 translationBot(ui): update translation (Russian)
Currently translated at 86.3% (416 of 482 strings)

Co-authored-by: Sergey Krashevich <svk@svk.su>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/
Translation: InvokeAI/Web UI
2023-03-11 16:56:05 +01:00
wa.code
40d82b29cf translationBot(ui): update translation (Chinese (Traditional))
Currently translated at 7.0% (34 of 480 strings)

Co-authored-by: wa.code <adt107118@gm.ntcu.edu.tw>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/zh_Hant/
Translation: InvokeAI/Web UI
2023-03-11 16:56:05 +01:00
Felipe Nogueira
0b953d98f5 translationBot(ui): update translation (Portuguese (Brazil))
Currently translated at 98.1% (471 of 480 strings)

Co-authored-by: Felipe Nogueira <contato.fnog@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/pt_BR/
Translation: InvokeAI/Web UI
2023-03-11 16:56:04 +01:00
Riccardo Giovanetti
8833d76709 translationBot(ui): update translation (Italian)
Currently translated at 100.0% (500 of 500 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (500 of 500 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (482 of 482 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (480 of 480 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2023-03-11 16:56:04 +01:00
gallegonovato
027b316fd2 translationBot(ui): update translation (Spanish)
Currently translated at 100.0% (500 of 500 strings)

translationBot(ui): update translation (Spanish)

Currently translated at 100.0% (482 of 482 strings)

translationBot(ui): update translation (Spanish)

Currently translated at 100.0% (480 of 480 strings)

Co-authored-by: gallegonovato <fran-carro@hotmail.es>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/es/
Translation: InvokeAI/Web UI
2023-03-11 16:56:03 +01:00
Lincoln Stein
d612f11c11 initialize InvokeAIGenerator object with model, not manager 2023-03-11 09:06:46 -05:00
Lincoln Stein
250b0ab182 add seamless tiling support 2023-03-11 08:33:23 -05:00
Lincoln Stein
675dd12b6c add attention map images to output object 2023-03-11 08:07:01 -05:00
Lincoln Stein
7e76eea059 add embiggen, remove complicated constructor 2023-03-11 07:50:39 -05:00
Jonathan
f45483e519 Merge branch 'main' into feat/preview_predicted_x0 2023-03-10 22:25:26 -06:00
blessedcoolant
65047bf976 Chore/accessibility add all aria labels to translation (#2919)
# Overview
Setting up the `aria-label` props as translations
2023-03-11 16:16:02 +13:00
ElrikUnderlake
d586a82a53 yarn build 2023-03-10 20:54:59 -06:00
ElrikUnderlake
28709961e9 add import 2023-03-10 20:53:42 -06:00
ElrikUnderlake
e9f237f39d chore(accessibility): add all aria-labels 2023-03-10 20:49:16 -06:00
Félix Sanz
4156bfd810 Fixed snippet/code formatting
It was pasted as plain text, now it's a code fence.
2023-03-11 02:08:59 +01:00
Lincoln Stein
fe75b95464 Merge branch 'refactor/nodes-on-generator' of github.com:invoke-ai/InvokeAI into refactor/nodes-on-generator 2023-03-10 19:36:40 -05:00
Lincoln Stein
95954188b2 remove factory pattern
The factory pattern is now removed. Typical usage of `InvokeAIGenerator` is now:

```
from invokeai.backend.generator import (
    InvokeAIGeneratorBasicParams,
    Txt2Img,
    Img2Img,
    Inpaint,
)
params = InvokeAIGeneratorBasicParams(
    model_name='stable-diffusion-1.5',
    steps=30,
    scheduler='k_lms',
    cfg_scale=8.0,
    height=640,
    width=640,
)
print('=== TXT2IMG TEST ===')
txt2img = Txt2Img(manager, params)
outputs = txt2img.generate(prompt='banana sushi', iterations=2)

for output in outputs:
    print(f'image={output.image}, seed={output.seed}, model={output.params.model_name}, hash={output.model_hash}, steps={output.params.steps}')
```

The `params` argument is optional, so if you wish to accept default
parameters and selectively override them, just do this:

```
    outputs = Txt2Img(manager).generate(
        prompt='banana sushi',
        steps=50,
        scheduler='k_heun',
        model_name='stable-diffusion-2.1',
    )
```
2023-03-10 19:33:04 -05:00
Jonathan
63f59201f8 Merge branch 'main' into feat/preview_predicted_x0 2023-03-10 12:34:07 -06:00
Jonathan
370e8281b3 Merge branch 'main' into refactor/nodes-on-generator 2023-03-10 12:34:00 -06:00
Lincoln Stein
685df33584 fix bug that caused black images when converting ckpts to diffusers in RAM (#2914)
The cause of the problem was inadvertent activation of the safety checker.

When conversion occurs on disk, the safety checker is disabled during loading.
However, when converting in RAM, the safety checker was not removed, resulting
in it activating even when the user specified --no-nsfw_checker.

This PR fixes the problem by detecting when the caller has requested the InvokeAI
StableDiffusionGeneratorPipeline class to be returned and setting the safety checker
to None. Do not do this with diffusers models destined for disk because then they
will be incompatible with the merge script!!

Closes #2836
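For illustration only (not the actual conversion code; the model id is an example), the diffusers-level equivalent of keeping the checker out of an in-memory pipeline is:

```python
from diffusers import StableDiffusionPipeline

# Passing safety_checker=None keeps the checker out of the in-RAM pipeline;
# models written back to disk should keep it for merge-script compatibility.
pipe = StableDiffusionPipeline.from_pretrained(
    'runwayml/stable-diffusion-v1-5', safety_checker=None
)
```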
2023-03-10 18:11:32 +00:00
Mary Hipp
4332c9c7a6 add generic jsx type definition for default export 2023-03-10 12:14:49 -05:00
Jonathan
4a00f1cc74 Merge branch 'main' into feat/preview_predicted_x0 2023-03-10 09:20:01 -06:00
blessedcoolant
7ff77504cb Make sure command also works with Oh-my-zsh (#2905)
Many people use oh-my-zsh for their command line: https://ohmyz.sh/

Adding `""` should work on both oh-my-zsh and native bash.
2023-03-10 19:05:22 +13:00
blessedcoolant
0d1854e44a Merge branch 'main' into patch-1 2023-03-10 19:04:42 +13:00
Kevin Turner
fe6858f2d9 feat: use the predicted denoised image for previews
Some schedulers report not only the noisy latents at the current timestep,
but also their estimate so far of what the de-noised latents will be.

It makes for a more legible preview than the noisy latents do.
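A small runnable sketch of the mechanism in plain diffusers terms (random tensors stand in for real UNet output):

```python
import torch
from diffusers import DDIMScheduler

scheduler = DDIMScheduler()
scheduler.set_timesteps(10)

latents = torch.randn(1, 4, 8, 8)      # current noisy latents
noise_pred = torch.randn(1, 4, 8, 8)   # stand-in for the UNet's prediction

out = scheduler.step(noise_pred, scheduler.timesteps[0], latents)
print(out.prev_sample.shape)           # next noisy sample
print(out.pred_original_sample.shape)  # predicted de-noised latents, usable for previews
```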
2023-03-09 20:28:06 -08:00
Lincoln Stein
12c7db3a16 backend: more post-ldm-removal cleanup (#2911) 2023-03-09 23:11:10 -05:00
Lincoln Stein
3ecdec02bf Merge branch 'main' into cleanup/more_ldm_cleanup 2023-03-09 22:49:09 -05:00
Lincoln Stein
d6c24d59b0 Revert "Remove label from stale issues on comment event" (#2912)
Reverts invoke-ai/InvokeAI#2903

@mauwii has a point here. It looks like triggering on a comment results
in an action for each of the stale issues, even ones that have been
previously dealt with. I'd like to revert this back to the original
behavior of running once every time the cron job executes.

What's the original motivation for having more frequent labeling of the
issues?
2023-03-09 22:28:49 -05:00
Lincoln Stein
bb3d1bb6cb Revert "Remove label from stale issues on comment event" 2023-03-09 22:24:43 -05:00
Lincoln Stein
14c8738a71 fix dangling reference to _model_to_cpu and missing variable model_description 2023-03-09 21:41:45 -05:00
Kevin Turner
1a829bb998 pipeline: remove code for legacy model 2023-03-09 18:15:12 -08:00
Kevin Turner
9d339e94f2 backend..conditioning: remove code for legacy model 2023-03-09 18:15:12 -08:00
Kevin Turner
ad7b1fa6fb model_manager: model to/from CPU methods are implemented on the Pipeline 2023-03-09 18:15:12 -08:00
Kevin Turner
42355b70c2 fix(Pipeline.debug_latents): fix import for moved utility function 2023-03-09 18:15:12 -08:00
Kevin Turner
faa2558e2f chore: add new argument to overridden method to match new signature upstream 2023-03-09 18:15:12 -08:00
Kevin Turner
081397737b typo: docstring spelling fixes
looks like they've already been corrected in the upstream copy
2023-03-09 18:15:12 -08:00
Kevin Turner
55d36eaf4f fix: image_resized_to_grid_as_tensor: reconnect dropped multiple_of argument 2023-03-09 18:15:12 -08:00
Scott Lahteine
26cd1728ac Fix some text and a link 2023-03-09 20:03:11 -06:00
Lincoln Stein
a0065da4a4 Remove label from stale issues on comment event (#2903)
I found it to be a chore to remove labels manually in order to
"un-stale" issues. This is contrary to the bot message which says
commenting should remove "stale" status. On the current `cron` schedule,
there may be a delay of up to 24 hours before the label is removed. This
PR will trigger the workflow on issue comments in addition to the
schedule.

Also adds a condition to not run this job on PRs (Github treats issues
and PRs equivalently in this respect), and rewords the messages for
clarity.
2023-03-09 20:17:54 -05:00
Lincoln Stein
c11e823ff3 remove unused _wrap_results 2023-03-09 16:30:06 -05:00
Mary Hipp
197e50a298 unstage some changes 2023-03-09 15:33:18 -05:00
Patrick von Platen
507e12520e Make sure command also works with Oh-my-zsh
Many people use oh-my-zsh for their command line: https://ohmyz.sh/

Adding `""` should work on both oh-my-zsh and native bash.
2023-03-09 19:21:57 +01:00
Mary Hipp
2cc04de397 dont care about linting build 2023-03-09 11:46:20 -05:00
Mary Hipp
f4150a7829 add new build command for building package 2023-03-09 11:10:18 -05:00
Eugene Brodsky
5418bd3b24 (ci) unlabel stale issues when commented 2023-03-09 09:22:29 -05:00
blessedcoolant
76d5fa4694 Bypass the 77 token limit (#2896)
This ought to be working, but I don't know how it's supposed to behave, so
I haven't been able to verify. At least I know the numbers are getting
pushed all the way to the SD UNet; I just have been unable to verify that
what's coming out is what is expected. Please test.

You'll need to `pip install -e .` after switching to the branch, because
it's currently pulling from a non-main `compel` branch. Once it's
verified as working as intended, I'll promote the compel branch to PyPI.
2023-03-09 23:52:28 +13:00
blessedcoolant
386dda8233 Merge branch 'main' into feat_longer_prompts 2023-03-09 22:37:10 +13:00
Damian Stewart
8076c1697c Merge branch 'feat_longer_prompts' of github.com:damian0815/InvokeAI into feat_longer_prompts 2023-03-09 10:28:13 +01:00
Damian Stewart
65fc9a6e0e bump compel version to address issues 2023-03-09 10:28:07 +01:00
Lincoln Stein
cde0b6ae8d Merge branch 'main' into refactor/nodes-on-generator 2023-03-09 01:52:45 -05:00
blessedcoolant
b12760b976 [ui] chore(Accessibility): various additions (#2888)
# Overview
Adding a few accessibility items (I think 9 total items). Mostly
`aria-label`, but also a `<VisuallyHidden>` to the left-side nav tab
icons. Tried to match existing copy that was being used. Feedback
welcome
2023-03-09 19:14:42 +13:00
Lincoln Stein
b679a6ba37 model manager defaults to consistent values of device and precision 2023-03-09 01:09:54 -05:00
ElrikUnderlake
2f5f08c35d yarn build 2023-03-08 23:51:46 -06:00
Elrik
8f48c14ed4 Merge branch 'main' into chore/accessability_various-additions 2023-03-08 23:49:08 -06:00
Lincoln Stein
5d37fa6e36 node-based txt2img working without generate 2023-03-09 00:18:29 -05:00
Jonathan
f51581bd1b Merge branch 'main' into feat_longer_prompts 2023-03-08 23:08:49 -06:00
blessedcoolant
50ca6b6ffc add back pytorch-lightning dependency (#2899)
- Closes #2893
2023-03-09 17:22:17 +13:00
blessedcoolant
63b9ec4c5e Merge branch 'main' into bugfix/restore-pytorch-lightning 2023-03-09 16:57:14 +13:00
blessedcoolant
b115bc4247 [cli] Execute commands in-order with nodes (#2901)
Executes piped commands in the order they were provided (instead of
executing CLI commands immediately).
2023-03-09 16:55:23 +13:00
blessedcoolant
dadc30f795 Merge branch 'main' into bugfix/restore-pytorch-lightning 2023-03-09 16:46:08 +13:00
blessedcoolant
111d8391e2 Merge branch 'main' into kyle0654/cli_execution_order 2023-03-09 16:37:15 +13:00
blessedcoolant
1157b454b2 decouple default component from react root (#2897)
Decouple default component from react root
2023-03-09 16:34:47 +13:00
Kyle Schouviller
8a6473610b [cli] Execute commands in-order with nodes 2023-03-08 19:25:03 -08:00
Elrik
ea7911be89 Merge branch 'main' into chore/accessability_various-additions 2023-03-08 17:15:28 -06:00
Damian Stewart
9ee648e0c3 Merge branch 'main' into feat_longer_prompts 2023-03-09 00:13:01 +01:00
Damian Stewart
543682fd3b Merge branch 'feat_longer_prompts' of github.com:damian0815/InvokeAI into feat_longer_prompts 2023-03-08 23:24:41 +01:00
Damian Stewart
88cb63e4a1 pin new compel version 2023-03-08 23:24:30 +01:00
Lincoln Stein
76212d1cca Merge branch 'main' into bugfix/restore-pytorch-lightning 2023-03-08 17:05:11 -05:00
Mary Hipp Rogers
a8df9e5122 Merge branch 'main' into decouple-component-from-root 2023-03-08 16:58:34 -05:00
Jonathan
2db180d909 Make img2img strength 1 behave the same as txt2img (#2895)
* Fix img2img and inpainting code so a strength of 1 behaves the same as txt2img.

* Make generated images identical to their txt2img counterparts when strength is 1.
2023-03-08 22:50:16 +01:00
Lincoln Stein
b716fe8f06 add pytorch-lightning dependency back in
- Closes #2893
2023-03-08 16:48:39 -05:00
damian
69e2dc0404 update for compel changes 2023-03-08 20:45:01 +01:00
Damian Stewart
a38b75572f don't log excess tokens as truncated 2023-03-08 20:00:18 +01:00
Mary Hipp Rogers
e18de761b6 Merge branch 'main' into decouple-component-from-root 2023-03-08 13:13:43 -05:00
Mary Hipp
816ea39827 decouple default component from react root 2023-03-08 12:48:49 -05:00
Lincoln Stein
1cd4cdd0e5 Merge branch 'main' into tests 2023-03-08 12:19:04 -05:00
damian
768e969c90 cleanup and fix kwarg 2023-03-08 18:00:54 +01:00
Damian Stewart
57db66634d longer prompts wip 2023-03-08 14:25:48 +01:00
Lincoln Stein
87789c1de8 add InvokeAIGenerator and InvokeAIGeneratorFactory classes 2023-03-07 23:52:53 -05:00
ElrikUnderlake
c3c1511ec6 add accessibility to localization
only set fallback english values
implement on ModelSelect and ProgressBar
2023-03-07 21:30:51 -06:00
Elrik
6b41127421 Merge branch 'main' into chore/accessability_various-additions 2023-03-07 17:44:55 -06:00
blessedcoolant
d232a439f7 build: update actions (#2883)
- Updates triggers for UI workflow `lint-frontend`
- Corrects UI paths for `test-invoke-pip` and `test-invoke-pip-skip`
2023-03-08 11:51:32 +13:00
blessedcoolant
c04f21e83e Merge branch 'main' into build/update-actions 2023-03-08 11:50:50 +13:00
blessedcoolant
8762069b37 ui: update readme & scripts (#2884)
- Update ui readme
- Update scripts to use `yarn` instead of `npm` and use `concurrently`
to watch/build the theme token types along with SPA
2023-03-08 00:20:46 +13:00
psychedelicious
d9ebdd2684 build(ui): use concurrently to run dev 2023-03-07 21:58:46 +11:00
psychedelicious
3e4c10ef9c docs(ui): update readme 2023-03-07 21:58:42 +11:00
psychedelicious
17eb2ca5a2 build: update actions
- Updates triggers for UI workflow `lint-frontend`
- Corrects UI paths for `test-invoke-pip` and `test-invoke-pip-skip`
2023-03-07 21:25:43 +11:00
mastercaster9000
63725d7534 add .pytest.ini to .gitignore 2023-03-07 09:08:27 +00:00
mastercaster
00f30ea457 Merge branch 'main' into tests 2023-03-07 09:03:18 +00:00
blessedcoolant
1b2a3c7144 ui: translations update from weblate (#2882)
Translations update from [Hosted Weblate](https://hosted.weblate.org)
for [InvokeAI/Web
UI](https://hosted.weblate.org/projects/invokeai/web-ui/).



Current translation status:

![Weblate translation
status](https://hosted.weblate.org/widgets/invokeai/-/web-ui/horizontal-auto.svg)
2023-03-07 21:51:09 +13:00
psychedelicious
01a1777370 translationBot(ui): update translation (Chinese (Traditional))
Currently translated at 4.1% (20 of 480 strings)

translationBot(ui): update translation (Portuguese (Brazil))

Currently translated at 97.2% (467 of 480 strings)

translationBot(ui): update translation (Dutch)

Currently translated at 97.2% (467 of 480 strings)

translationBot(ui): update translation (French)

Currently translated at 83.1% (399 of 480 strings)

Co-authored-by: psychedelicious <mabianfu@icloud.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/fr/
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/nl/
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/pt_BR/
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/zh_Hant/
Translation: InvokeAI/Web UI
2023-03-07 09:09:42 +01:00
Hosted Weblate
32945c7f45 translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2023-03-07 09:09:42 +01:00
ElrikUnderlake
b0b8846430 Add aria-label to icon variant of IAISimpleMenu
Uses whatever the iconTooltip copy is
2023-03-06 22:43:41 -06:00
ElrikUnderlake
fdb146a43a add aria-label to UnifiedCanvasLayerSelect
matching tooltip copy
2023-03-06 22:42:39 -06:00
ElrikUnderlake
42c1f1fc9d add VisuallyHidden tab text to InvokeTabs 2023-03-06 22:42:04 -06:00
ElrikUnderlake
89a8ef86b5 add an aria-label to ProgressBar 2023-03-06 22:41:45 -06:00
ElrikUnderlake
f0fb767f57 add aria-label to ModelSelect 2023-03-06 22:39:08 -06:00
blessedcoolant
4bd93464bf [cli] Update CLI to define commands as Pydantic objects (#2861)
Updates the CLI to define CLI commands as Pydantic objects, similar to
how Invocations (nodes) work. For example:

```py
class HelpCommand(BaseCommand):
    """Shows help"""
    type: Literal['help'] = 'help'

    def run(self, context: CliContext) -> None:
        context.parser.print_help()
```
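Another command in the same style might look like the hedged sketch below (`BaseCommand` and `CliContext` are the CLI types referenced above; the exit behaviour is purely illustrative):

```python
from typing import Literal

class ExitCommand(BaseCommand):
    """Exits the CLI"""
    type: Literal['exit'] = 'exit'

    def run(self, context: CliContext) -> None:
        raise SystemExit(0)  # end the session in this sketch
```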
2023-03-07 13:25:06 +13:00
blessedcoolant
3d3de82ca9 Merge branch 'main' into kyle/cli_commands 2023-03-07 12:56:30 +13:00
Jonathan
c3ff9e6be8 Fixed startup issues with the web UI. (#2876) 2023-03-06 18:29:28 -05:00
blessedcoolant
21f79e5919 add missing package (#2878)
Added missing dependency declaration `@chakra-ui/styled-system`
2023-03-07 10:34:50 +13:00
Mary Hipp
0342e25c74 add missing package 2023-03-06 16:13:17 -05:00
blessedcoolant
91f982fb0b feat(ui): migrate theming to chakra ui (#2873)
*Looks like #2814 was reverted accidentally. Instead of trying to
revert the revert, this PR can simply be re-accepted and will fix the
UI.*

- Migrate UI from SCSS to Chakra's CSS-in-JS system 
  - better dx
  - more capable theming 
  - full RTL language support (we now have Arabic and Hebrew)
  - general cleanup of the whole UI's styling
- Tidy npm packages and update scripts, necessitates update to github
actions

To test this PR in dev mode, you will need to do a `yarn install` as a
lot has changed.

thanks to @blessedcoolant for helping out on this, it was a big effort.
2023-03-07 08:43:26 +13:00
blessedcoolant
b9ab43a4bb build(ui): clean build chakra migration 2023-03-07 08:39:44 +13:00
blessedcoolant
6e0e48bf8a Merge branch 'main' into pr/2873 2023-03-07 08:36:09 +13:00
Lincoln Stein
dcc8313dbf support both epsilon and v-prediction v2 inference (#2870)
There are actually two Stable Diffusion v2 legacy checkpoint
configurations:

1. "epsilon" prediction type for Stable Diffusion v2 Base 
2. "v-prediction" type for Stable Diffusion v2-768

This commit adds the configuration file needed for epsilon prediction
type models as well as the UI that prompts the user to select the
appropriate configuration file when the code can't do so automatically.
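In diffusers terms the two configurations differ in the scheduler's `prediction_type`; a hedged sketch:

```python
from diffusers import DDIMScheduler

# SD v2 Base uses epsilon prediction; SD v2-768 uses v-prediction.
base_sched = DDIMScheduler(prediction_type='epsilon')
v_sched = DDIMScheduler(prediction_type='v_prediction')
print(base_sched.config.prediction_type, v_sched.config.prediction_type)
```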
2023-03-06 14:29:35 -05:00
Lincoln Stein
bf5831faa3 Merge branch 'main' into kyle/cli_commands 2023-03-06 08:52:38 -05:00
Lincoln Stein
5eff035f55 Merge branch 'main' into tests 2023-03-06 08:37:07 -05:00
Lincoln Stein
7c60068388 Merge branch 'main' into bugfix/fix-convert-sd-to-diffusers-error 2023-03-06 08:20:29 -05:00
psychedelicious
d843fb078a feat(ui): remove references to dark mode 2023-03-06 20:40:59 +11:00
psychedelicious
41b2e4633f chore(ui): remove unused scss files 2023-03-06 20:06:23 +11:00
psychedelicious
57144ac0cf feat(ui): migrate theming to chakra ui 2023-03-06 20:03:39 +11:00
Lincoln Stein
a305b6adbf fix call signature of import_diffuser_model() (#2871)
This fixes the borked #2867 PR.
2023-03-05 23:58:08 -05:00
Lincoln Stein
94daaa4abf fix call signature of import_diffuser_model() 2023-03-05 23:37:59 -05:00
Lincoln Stein
901337186d add .git-blame-ignore-revs file to maintain provenance (#2855)
To avoid `git blame` recording all the autoformatting changes under the
name 'lstein', this PR adds a `.git-blame-ignore-revs` file that will ignore
any provenance changes that occurred during the recent refactor merge.
2023-03-05 22:58:34 -05:00
Lincoln Stein
7e2f64f60b Merge branch 'main' into refactor/maintain-blame-provenance 2023-03-05 22:57:50 -05:00
Lincoln Stein
126cba2324 Bugfix/reenable ckpt conversion to ram (#2868)
This fixes the crash that was occurring when trying to load a legacy
checkpoint file.

Note that this PR includes commits from #2867 to keep diffusers files
from re-downloading at startup time.
2023-03-05 22:57:19 -05:00
Lincoln Stein
2f9dcd7906 support both epsilon and v-prediction v2 inference
There are actually two Stable Diffusion v2 legacy checkpoint
configurations:

1) "epsilon" prediction type for Stable Diffusion v2 Base
2) "v-prediction" type for Stable Diffusion v2-768

This commit adds the configuration file needed for epsilon prediction
type models as well as the UI that prompts the user to select the
appropriate configuration file when the code can't do so
automatically.
2023-03-05 22:51:40 -05:00
blessedcoolant
e537b5d8e1 Revert "Merge branch 'main' into bugfix/reenable-ckpt-conversion-to-ram"
This reverts commit e0e70c9222, reversing
changes made to 0b184913b9.
2023-03-06 14:29:39 +13:00
blessedcoolant
e0e70c9222 Merge branch 'main' into bugfix/reenable-ckpt-conversion-to-ram 2023-03-06 14:27:30 +13:00
blessedcoolant
1b21e5df54 Migrate to new HF diffusers cache location (#2867)
# Migrate to new HF diffusers cache location

This PR adjusts the model cache directory to use the layout of
`diffusers 0.14`. This will automatically migrate any diffusers models
located in `INVOKEAI_ROOT/models/diffusers` to
`INVOKEAI_ROOT/models/hub`, and cache new downloaded diffusers files
into the same location.

As before, if environment variable `HF_HOME` is set, then both
HuggingFace `from_pretrained()` calls as well as all InvokeAI methods
will use `HF_HOME/hub` as their cache.
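A hedged sketch of that cache-resolution rule (the fallback root below just stands in for `INVOKEAI_ROOT`):

```python
import os
from pathlib import Path

def hf_cache_root(invokeai_root: Path) -> Path:
    # If HF_HOME is set, both HuggingFace and InvokeAI use HF_HOME/hub;
    # otherwise diffusers files live under INVOKEAI_ROOT/models/hub.
    hf_home = os.environ.get('HF_HOME')
    if hf_home:
        return Path(hf_home) / 'hub'
    return invokeai_root / 'models' / 'hub'

print(hf_cache_root(Path.home() / 'invokeai'))
```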
2023-03-06 13:05:13 +13:00
blessedcoolant
4b76af37ae Merge branch 'main' into enhance/use-new-diffusers-path 2023-03-06 12:42:30 +13:00
mastercaster9000
486c445afb fix typos and replace frontend README content 2023-03-05 21:05:09 +00:00
mastercaster9000
4547c48013 add docs for local development including tests 2023-03-05 19:59:06 +00:00
blessedcoolant
8f21201c91 [ui]: migrate all styling to chakra-ui theme (#2814)
- Migrate UI from SCSS to Chakra's CSS-in-JS system 
  - better dx
  - more capable theming 
  - full RTL language support (we now have Arabic and Hebrew)
  - general cleanup of the whole UI's styling
- Tidy npm packages and update scripts, necessitates update to github
actions

To test this PR in dev mode, you will need to do a `yarn install` as a
lot has changed.

thanks to @blessedcoolant for helping out on this, it was a big effort.
2023-03-06 07:23:59 +13:00
blessedcoolant
532b74a206 Merge branch 'main' into feat/ui/chakra-theme 2023-03-06 06:54:33 +13:00
Lincoln Stein
0b184913b9 Merge branch 'main' into bugfix/reenable-ckpt-conversion-to-ram 2023-03-05 12:37:43 -05:00
Matthias Wild
97719e40e4 fix Dockerfile after restructure (#2863)
this PR should close #2862
2023-03-05 18:33:00 +01:00
Lincoln Stein
5ad3062b66 Merge branch 'main' into fix/broken-dockerfile-2862 2023-03-05 12:32:25 -05:00
Lincoln Stein
92d012a92d Merge branch 'main' into enhance/use-new-diffusers-path 2023-03-05 12:30:24 -05:00
Lincoln Stein
fc187f263e deal with non-directories in diffusers/ 2023-03-05 12:29:52 -05:00
Lincoln Stein
fd94f85abe remove legacy ldm code (#2866)
This removes modules that appear to be no longer used by any code under
the `invokeai` package now that the `ckpt_generator` is gone.

There are a few small changes in here to code that was referencing code
in a conditional branch for ckpt, or to swap out a function for a
🤗 one, but only as much as was strictly necessary to get things to
run. We'll follow up with more clean-up to remove lingering `if isinstance` or
`except AttributeError` branches later.
2023-03-05 12:10:38 -05:00
Lincoln Stein
4e9e1b660d respect HF_HOME setting when migrating 2023-03-05 12:08:29 -05:00
Lincoln Stein
d01adedff5 give user chance to back out before migration 2023-03-05 12:04:31 -05:00
mastercaster9000
c247f430f7 combine pytest.ini with pyproject.toml 2023-03-05 17:00:08 +00:00
mastercaster9000
3d6a358042 remove .coveragerc from source control 2023-03-05 16:59:12 +00:00
Lincoln Stein
4d1dcd11de Merge branch 'main' into dev/rm_legacy_deps 2023-03-05 11:50:53 -05:00
Lincoln Stein
b33655b0d6 restore automatic conversion of legacy files to diffusers pipelines 2023-03-05 11:45:25 -05:00
Lincoln Stein
81dee04dc9 during migration do not overwrite symlinks 2023-03-05 08:40:12 -05:00
Jonathan
114018e3e6 Unified spelling of Hugging Face 2023-03-05 07:30:35 -06:00
Lincoln Stein
ef8cf83b28 migrate to new HF diffusers cache location 2023-03-05 08:20:24 -05:00
blessedcoolant
633857b0e3 build(ui): Migrate UI to Chakra 2023-03-05 21:50:50 +13:00
blessedcoolant
214574d11f Merge branch 'feat/ui/chakra-theme' of https://github.com/psychedelicious/InvokeAI into pr/2814 2023-03-05 21:48:08 +13:00
psychedelicious
8584665ade feat(ui): migrate theming to chakra 2023-03-05 19:41:57 +11:00
blessedcoolant
516c56d0c5 feat(ui): Model Manager Cleanup 2023-03-05 21:41:55 +13:00
blessedcoolant
5891b43ce2 Merge branch 'feat/ui/chakra-theme' of https://github.com/psychedelicious/InvokeAI into pr/2814 2023-03-05 21:41:12 +13:00
psychedelicious
62e75f95aa feat(ui): migrate theming to chakra 2023-03-05 19:39:51 +11:00
psychedelicious
b07621e27e chore(ui): build frontend 2023-03-05 19:30:28 +11:00
psychedelicious
545d8968fd feat(ui): migrated theming to chakra
build(ui): fix husky path

build(ui): fix hmr issue, remove emotion cache

build(ui): clean up package.json

build(ui): update gh action and npm scripts

feat(ui): wip port lightbox to chakra theme

feat(ui): wip use chakra theme tokens

feat(ui): Add status text to main loading spinner

feat(ui): wip chakra theme tweaking

feat(ui): simplify IAISimpleMenu button

feat(ui): wip chakra theming

feat(ui): Theme Management

feat(ui): Add Ocean Blue Theme

feat(ui): wip lightbox

fix(ui): fix lightbox mouse

feat(ui): set default theme variants

feat(ui): model manager chakra theme

chore(ui): lint

feat(ui): remove last scss

feat(ui): fix switch theme

feat(ui): Theme Cleanup

feat(ui): Stylize Search Models Found List

feat(ui): hide scrollbars

feat(ui): fix floating button position

feat(ui): Scrollbar Styling

fix broken scripts

This PR fixes the following scripts:

1) Scripts that can be executed within the repo's scripts directory.
   Note that these are for development testing and are not intended
   to be exposed to the user.

   configure_invokeai.py - configuration
   dream.py              - the legacy CLI
   images2prompt.py      - legacy "dream prompt" retriever
   invoke-new.py         - new nodes-based CLI
   invoke.py             - the legacy CLI under another name
   make_models_markdown_table.py - a utility used during the release/doc process
   pypi_helper.py        - another utility used during the release process
   sd-metadata.py        - retrieve JSON-formatted metadata from a PNG file

2) Scripts that are installed by pip install. They get placed into the venv's
   PATH and are intended to be the official entry points:

   invokeai-node-cli      - new nodes-based CLI
   invokeai-node-web      - new nodes-based web server
   invokeai               - legacy CLI
   invokeai-configure     - install time configuration script
   invokeai-merge         - model merging script
   invokeai-ti            - textual inversion script
   invokeai-model-install - model installer
   invokeai-update        - update script
   invokeai-metadata"     - retrieve JSON-formatted metadata from PNG files

protect invocations against black autoformatting

deps: upgrade to diffusers 0.14, safetensors 0.3, transformers 4.26, accelerate 0.16
2023-03-05 19:30:02 +11:00
Lincoln Stein
7cf2f58513 deps: upgrade to diffusers 0.14, safetensors 0.3, transformers 4.26, accelerate 0.16 (#2865)
Things to check for in this version:

- `diffusers` cache location is now more consistent with other
huggingface-hub using code (i.e. `transformers`) as of
https://github.com/huggingface/diffusers/pull/2005. I think ultimately
this should make @damian0815 (and other folks with multiple
diffusers-using projects) happier, but it's worth taking a look to make
sure the way @lstein set things up to respect `HF_HOME` is still
functioning as intended.
- I've gone ahead and updated `transformers` to the current version
(4.26), but I have a vague memory that we were holding it back at some
point? Need to look that up and see if that's the case and why.
2023-03-05 01:53:01 -05:00
Kevin Turner
618e3e5e91 deps: add explicit dependency on rich
was previously pulled in as a secondary dependency of something else.
2023-03-04 18:37:39 -08:00
Kevin Turner
c703b60986 remove legacy ldm code 2023-03-04 18:16:59 -08:00
mauwii
7c0ce5c282 fix push expression
- make use of `github.ref_type`
2023-03-05 02:58:13 +01:00
mauwii
82fe34b1f7 update build-container workflow
- switch versioning from semver to pep440
- remove unnecessary paths
- include `.dockerignore` in paths
2023-03-05 02:13:57 +01:00
Kevin Turner
65f9aae81d deps: upgrade to diffusers 0.14, safetensors 0.3, transformers 4.26, accelerate 0.16 2023-03-04 16:32:16 -08:00
mauwii
2d9fac23e7 fix Dockerfile
- update broken paths after restructure
2023-03-04 23:51:07 +01:00
Kyle Schouviller
ebc4b52f41 [cli] Update CLI to define commands as Pydantic objects 2023-03-04 14:46:02 -08:00
Lincoln Stein
c4e6d4b348 fix broken scripts (#2857)
This PR fixes the following scripts:

1) Scripts that can be executed within the repo's scripts directory.
   Note that these are for development testing and are not intended
   to be exposed to the user.
```
   configure_invokeai.py - configuration
   dream.py              - the legacy CLI
   images2prompt.py      - legacy "dream prompt" retriever
   invoke-new.py         - new nodes-based CLI
   invoke.py             - the legacy CLI under another name
   make_models_markdown_table.py - a utility used during the release/doc process
   pypi_helper.py        - another utility used during the release process
   sd-metadata.py        - retrieve JSON-formatted metadata from a PNG file
```

2) Scripts that are installed by pip install. They get placed into the
venv's
   PATH and are intended to be the official entry points:
```
   invokeai-node-cli      - new nodes-based CLI
   invokeai-node-web      - new nodes-based web server
   invokeai               - legacy CLI
   invokeai-configure     - install time configuration script
   invokeai-merge         - model merging script
   invokeai-ti            - textual inversion script
   invokeai-model-install - model installer
   invokeai-update        - update script
   invokeai-metadata"     - retrieve JSON-formatted metadata from PNG files
```
2023-03-04 16:57:45 -05:00
Jonathan
eab32bce6c Merge branch 'main' into bugfix/fix-scripts 2023-03-04 13:19:02 -06:00
Lincoln Stein
55d2094094 Protect invocations against black autoformatting (#2854)
This places `#fmt: off` and `#fmt: on` blocks around sections of the
invocation code that shouldn't be reformatted by Black.
2023-03-04 12:26:43 -05:00
Lincoln Stein
a0d50a2b23 Merge branch 'main' into formatting/undo-black-formatting-of-invocations 2023-03-04 12:05:11 -05:00
Jonathan
9efeb1b2ec Merge branch 'main' into bugfix/fix-scripts 2023-03-03 20:36:29 -06:00
blessedcoolant
86e2cb0428 Fix for txt2img2img.py (#2856)
Fix error when using txt2img 
ModuleNotFoundError: No module named 'invokeai.backend.models'
and
ModuleNotFoundError: No module named
'invokeai.backend.generator.diffusers_pipeline'
2023-03-04 15:24:39 +13:00
mickr777
53c2c0f91d Update txt2img2img.py 2023-03-04 12:58:33 +11:00
Lincoln Stein
bdc7b8b75a fix broken scripts
This PR fixes the following scripts:

1) Scripts that can be executed within the repo's scripts directory.
   Note that these are for development testing and are not intended
   to be exposed to the user.

   configure_invokeai.py - configuration
   dream.py              - the legacy CLI
   images2prompt.py      - legacy "dream prompt" retriever
   invoke-new.py         - new nodes-based CLI
   invoke.py             - the legacy CLI under another name
   make_models_markdown_table.py - a utility used during the release/doc process
   pypi_helper.py        - another utility used during the release process
   sd-metadata.py        - retrieve JSON-formatted metadata from a PNG file

2) Scripts that are installed by pip install. They get placed into the venv's
   PATH and are intended to be the official entry points:

   invokeai-node-cli      - new nodes-based CLI
   invokeai-node-web      - new nodes-based web server
   invokeai               - legacy CLI
   invokeai-configure     - install time configuration script
   invokeai-merge         - model merging script
   invokeai-ti            - textual inversion script
   invokeai-model-install - model installer
   invokeai-update        - update script
   invokeai-metadata"     - retrieve JSON-formatted metadata from PNG files
2023-03-03 20:19:37 -05:00
mickr777
1bfdd54810 Update txt2img2img.py 2023-03-04 11:23:21 +11:00
Lincoln Stein
b4bf6c12a5 add .git-blame-ignore-revs file to maintain provenance
To avoid `git blame` recording all the autoformatting changes
under the name 'lstein', this PR adds a `.git-blame-ignore-revs`
that will ignore any provenance changes that occurred during the
recent refactor merge.
2023-03-03 16:23:48 -05:00
Lincoln Stein
ab35c241c2 protect invocations against black autoformatting 2023-03-03 15:25:08 -05:00
Lincoln Stein
b3dccfaeb6 Final phase of source tree restructure (#2833)
# All python code has been moved under `invokeai`. All vestiges of `ldm`
and `ldm.invoke` are now gone.

***You will need to run `pip install -e .` before the code will work
again!***

Everything seems to be functional, but extensive testing is advised.

A guide to where the files have gone is forthcoming.
2023-03-03 15:05:41 -05:00
Lincoln Stein
6477e31c1e revert and disable auto-formatting of invocations 2023-03-03 14:59:17 -05:00
Lincoln Stein
dd4a1c998b merge localisation files that were added in main 2023-03-03 14:47:01 -05:00
Lincoln Stein
70203e6e5a CODEOWNERS coarse draft 2023-03-03 14:36:43 -05:00
psychedelicious
d778a7c5ca ui: translations update from weblate (#2850)
Translations update from [Hosted Weblate](https://hosted.weblate.org)
for [InvokeAI/Web
UI](https://hosted.weblate.org/projects/invokeai/web-ui/).



Current translation status:

![Weblate translation
status](https://hosted.weblate.org/widgets/invokeai/-/web-ui/horizontal-auto.svg)
2023-03-03 20:07:34 +11:00
LemonDouble
f8e59636cd translationBot(ui): update translation (Korean)
Currently translated at 15.5% (73 of 469 strings)

translationBot(ui): added translation (Korean)

Co-authored-by: LemonDouble <lemondouble2@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ko/
Translation: InvokeAI/Web UI
2023-03-03 10:06:13 +01:00
Airton Silva
2d1a0b0a05 translationBot(ui): update translation (Portuguese)
Currently translated at 12.7% (60 of 469 strings)

Co-authored-by: Airton Silva <airtonsilva2009@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/pt/
Translation: InvokeAI/Web UI
2023-03-03 10:06:13 +01:00
Dennis
c9b2234d90 translationBot(ui): update translation (Dutch)
Currently translated at 100.0% (469 of 469 strings)

Co-authored-by: Dennis <dennis@vanzoerlandt.nl>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/nl/
Translation: InvokeAI/Web UI
2023-03-03 10:06:12 +01:00
Netzer R
82b224539b translationBot(ui): update translation (Hebrew)
Currently translated at 100.0% (469 of 469 strings)

translationBot(ui): added translation (Hebrew)

Co-authored-by: Netz <pixi@pixelabs.net>
Co-authored-by: Netzer R <pixi@pixelabs.net>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/he/
Translation: InvokeAI/Web UI
2023-03-03 10:06:12 +01:00
Gabriel Mackievicz Telles
0b15ffb95b translationBot(ui): update translation (Portuguese)
Currently translated at 12.5% (59 of 469 strings)

translationBot(ui): added translation (Portuguese)

Co-authored-by: Gabriel Mackievicz Telles <telles.gabriel@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/pt/
Translation: InvokeAI/Web UI
2023-03-03 10:06:11 +01:00
psychedelicious
ce9aaab22f translationBot(ui): added translation (Chinese (Traditional))
Co-authored-by: psychedelicious <mabianfu@icloud.com>
2023-03-03 10:06:11 +01:00
Lincoln Stein
3f53f1186d move diagnostic message to stderr; was confusing CI 2023-03-03 01:54:48 -05:00
Lincoln Stein
c0aff396d2 fix ldm->invokeai pathnames in workflows 2023-03-03 01:44:55 -05:00
Lincoln Stein
955900507f fix issue with invokeai.version 2023-03-03 01:34:38 -05:00
Lincoln Stein
d606abc544 fix weblint call 2023-03-03 01:09:56 -05:00
Lincoln Stein
44400d2a66 fix incorrect import of merge code 2023-03-03 01:07:31 -05:00
Lincoln Stein
60a98cacef all vestiges of ldm.invoke removed 2023-03-03 01:02:00 -05:00
Lincoln Stein
6a990565ff all files migrated; tweaks needed 2023-03-03 00:02:15 -05:00
Lincoln Stein
3f0b0f3250 almost all of backend migrated; restoration next 2023-03-02 13:28:17 -05:00
Lincoln Stein
1a7371ea17 remove unused embeddings code 2023-03-01 21:09:22 -05:00
Lincoln Stein
850d1ee984 move models and modules under invokeai/backend/ldm 2023-03-01 18:24:18 -05:00
Lincoln Stein
2c7928b163 remove pycaches from repo 2023-02-28 23:25:35 -05:00
Lincoln Stein
87d1ec6a4c Merge branch 'main' into refactor/move-models-and-generators 2023-02-28 17:34:05 -05:00
Lincoln Stein
53c62537f7 fix newlines causing negative prompt to be parsed incorrectly (#2837)
closes #2753
2023-02-28 17:29:46 -05:00
Damian Stewart
418d93fdfd fix newlines causing negative prompt to be parsed incorrectly 2023-02-28 22:37:28 +01:00
Lincoln Stein
f2ce2f1778 fix import of moved model_manager module 2023-02-28 08:38:14 -05:00
Lincoln Stein
5b6c61fc75 move models and generator into backend 2023-02-28 08:32:11 -05:00
Lincoln Stein
1d77581d96 restore behavior of !import_model; fix initial models bug 2023-02-28 00:45:56 -05:00
Lincoln Stein
3b921cf393 add more missing files 2023-02-28 00:37:13 -05:00
Lincoln Stein
d334f7f1f6 add missing files 2023-02-28 00:31:15 -05:00
Lincoln Stein
8c9764476c first phase of source tree restructure
This is the first phase of a big shifting of files and directories
in the source tree.

You will need to run `pip install -e .` before the code will work again!

Here's what's in the current commit:

1) Remove a lot of dead code that dealt with checkpoint and safetensor loading.
2) Entire ckpt_generator hierarchy is now gone!
3) ldm.invoke.generator.*   => invokeai.generator.*
4) ldm.model.*              => invokeai.model.*
5) ldm.invoke.model_manager => invokeai.model.model_manager

6) In addition, a number of frequently-accessed classes can be imported
   from the invokeai.model and invokeai.generator modules:

   from invokeai.generator import ( Generator, PipelineIntermediateState,
                                    StableDiffusionGeneratorPipeline, infill_methods)

   from invokeai.models import ( ModelManager, SDLegacyType,
                                 InvokeAIDiffuserComponent, AttentionMapSaver,
                                 DDIMSampler, KSampler, PLMSSampler,
                                 PostprocessingSettings )
2023-02-27 23:52:46 -05:00
Kyle Schouviller
b7d5a3e0b5 [nodes] Add better error handling to processor and CLI (#2828)
* [nodes] Add better error handling to processor and CLI

* [nodes] Use more explicit name for marking node execution error

* [nodes] Update the processor call to error
2023-02-27 10:01:07 -08:00
Lincoln Stein
e0405031a7 add a workflow to close stale issues (#2808)
with values set as discussed in discord
2023-02-26 16:14:42 -05:00
Lincoln Stein
ee24b686b3 Merge branch 'main' into dev/ci/add-close-inactive-issues 2023-02-26 16:14:03 -05:00
Lincoln Stein
835eb14c79 Split requirements / pyproject installation in Dockerfile (#2815)
This should make caching way easier and therefore speed up the image
(re-)creation a lot.

Other small improvements:
- reorder .dockerignore
- rename amd flavor to rocm to align with cuda flavor
- use `user:group` for definitions
- add `--platform=${TARGETPLATFORM}` to base
2023-02-26 13:48:32 -05:00
Lincoln Stein
9aadf7abc1 Merge branch 'main' into dev/ci/add-close-inactive-issues 2023-02-26 13:13:42 -05:00
Lincoln Stein
243f9e8377 Merge branch 'main' into dev/docker/separate-req-inst 2023-02-26 13:13:07 -05:00
blessedcoolant
6e0c6d9cc9 perf(invoke_ai_web_server): encode intermediate result previews as jpeg (#2817)
For size savings of about 80%, and jpeg encoding is still plenty fast.
2023-02-26 18:47:51 +13:00
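A rough illustration of the change above, assuming Pillow is used for the encoding; the helper name and quality setting are illustrative rather than the server's actual code:

```
import io

from PIL import Image  # Pillow

def encode_preview_jpeg(image: Image.Image, quality: int = 80) -> bytes:
    """Encode an intermediate preview as JPEG bytes (illustrative helper)."""
    buf = io.BytesIO()
    image.convert("RGB").save(buf, format="JPEG", quality=quality)
    return buf.getvalue()

if __name__ == "__main__":
    demo = Image.new("RGB", (512, 512), color=(40, 90, 160))
    jpeg_bytes = encode_preview_jpeg(demo)
    print(f"{len(jpeg_bytes)} bytes as JPEG (a PNG of the same preview is much larger)")
```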
Kevin Turner
a3076cf951 perf(invoke_ai_web_server): encode intermediate result previews as jpeg
For size savings of about 80%, and jpeg encoding is still plenty fast.
2023-02-25 21:23:25 -08:00
blessedcoolant
6696882c71 doc(invoke_ai_web_server): put docstrings inside their functions (#2816)
Documentation strings are the first thing inside the function body.
https://docs.python.org/3/tutorial/controlflow.html#defining-functions
2023-02-26 18:20:10 +13:00
Kevin Turner
17b039e85d doc(invoke_ai_web_server): put docstrings inside their functions
Documentation strings are the first thing inside the function body.
https://docs.python.org/3/tutorial/controlflow.html#defining-functions
2023-02-25 20:21:47 -08:00
mauwii
81539e6ab4 Merge remote-tracking branch 'upstream/main' into dev/docker/separate-req-inst 2023-02-26 00:55:03 +01:00
mauwii
92304b9f8a remove pip-tools, still split requirements install
- also use user:group for definitions
- add `--platform=${TARGETPLATFORM}` to base
2023-02-26 00:53:43 +01:00
mauwii
ec1de5ae8b more detailed volume parameters 2023-02-26 00:51:30 +01:00
mauwii
49198a61ef enable BuildKit in env.sh 2023-02-26 00:50:13 +01:00
blessedcoolant
c22d529528 Add node-based invocation system (#1650)
This PR adds the core of the node-based invocation system first
discussed in https://github.com/invoke-ai/InvokeAI/discussions/597 and
implements it through a basic CLI and API. This supersedes #1047, which
was too far behind to rebase.

## Architecture

### Invocations
The core of the new system is **invocations**, found in
`/ldm/invoke/app/invocations`. These represent individual nodes of
execution, each with inputs and outputs. Core invocations are already
implemented (`txt2img`, `img2img`, `upscale`, `face_restore`) as well as
a debug invocation (`show_image`). To implement a new invocation, all
that is required is to add a new implementation in this folder (there is
a markdown document describing the specifics, though it is slightly
out-of-date).

### Sessions
Invocations and links between them are maintained in a **session**.
These can be queued for invocation (either the next ready node, or all
nodes). Some notes:
* Sessions may be added to at any time (including after invocation), but
may not be modified.
* Links are always added with a node, and are always links from existing
nodes to the new node. These links can be relative "history" links, e.g.
`-1` to link from a previously executed node, and can link either
specific outputs, or can opportunistically link all matching outputs by
name and type by using `*`.
* There are no iteration/looping constructs. Most needs for this could
be solved by either duplicating nodes or cloning sessions. This is open
for discussion, but is a difficult problem to solve in a way that
doesn't make the code even more complex/confusing (especially regarding
node ids and history).

### Services
These make up the core of the invocation system, found in
`/ldm/invoke/app/services`. One of the key design philosophies here is
that most components should be replaceable when possible. For example,
if someone wants to use cloud storage for their images, they should be
able to replace the image storage service easily.

The services are broken down as follows (several of these are
intentionally implemented with an initial simple/naïve approach):
* Invoker: Responsible for creating and executing **sessions** and
managing services used to do so.
* Session Manager: Manages session history. An on-disk implementation is
provided, which stores sessions as json files on disk, and caches
recently used sessions for quick access.
* Image Storage: Stores images of multiple types. An on-disk
implementation is provided, which stores images on disk and retains
recently used images in an in-memory cache.
* Invocation Queue: Used to queue invocations for execution. An
in-memory implementation is provided.
* Events: An event system, primarily used with socket.io to support
future web UI integration.

## Apps

Apps are available through the `/scripts/invoke-new.py` script (to-be
integrated/renamed).

### CLI
```
python scripts/invoke-new.py
```

Implements a simple CLI. The CLI creates a single session, and
automatically links all inputs to the previous node's output. Commands
are automatically generated from all invocations, with command options
being automatically generated from invocation inputs. Help is also
available for the cli and for each command, and is very verbose.
Additionally, the CLI supports command piping for single-line entry of
multiple commands. Example:

```
> txt2img --prompt "a cat eating sushi" --steps 20 --seed 1234 | upscale | show_image
```

### API
```
python scripts/invoke-new.py --api --host 0.0.0.0
```

Implements an API using FastAPI with Socket.io support for signaling.
API documentation is available at `http://localhost:9090/docs` or
`http://localhost:9090/redoc`. This includes OpenAPI schema for all
available invocations, session interaction APIs, and image APIs.
Socket.io signals are per-session, and can be subscribed to by session
id. These aren't currently auto-documented, though the code for event
emission is centralized in `/ldm/invoke/app/services/events.py`.

A very simple test html and script are available at
`http://localhost:9090/static/test.html` This demonstrates creating a
session from a graph, invoking it, and receiving signals from Socket.io.

## What's left?

* There are a number of features not currently covered by invocations. I
kept the set of invocations small during core development in order to
simplify refactoring as I went. Now that the invocation code has
stabilized, I'd love some help filling those out!
* There's no image metadata generated. It would be fairly
straightforward (and would make good sense) to serialize either a
session and node reference into an image, or the entire node into the
image. There are a lot of questions to answer around source images,
linked images, etc. though. This history is all stored in the session as
well, and with complex sessions, the metadata in an image may lose its
value. This needs some further discussion.
* We need a list of features (both current and future) that would be
difficult to implement without looping constructs so we can have a good
conversation around it. I'm really hoping we can avoid needing
looping/iteration in the graph execution, since it'll necessitate
separating an execution of a graph into its own concept/system, and will
further complicate the system.
* The API likely needs further filling out to support the UI. I think
using the new API for the current UI is possible, and potentially
interesting, since it could work like the new/demo CLI in a "single
operation at a time" workflow. I don't know how compatible that will be
with our UI goals though. It would be nice to support only a single API
though.
* Deeper separation of systems. I intentionally tried to not touch
Generate or other systems too much, but a lot could be gained by
breaking those apart. Even breaking apart Args into two pieces (command
line arguments and the parser for the current CLI) would make it easier
to maintain. This is probably in the future though.
2023-02-26 12:25:41 +13:00
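The sketch below is a schematic analogue of an invocation node under the architecture described above; it uses plain dataclasses and made-up names rather than the actual Pydantic-based classes in `/ldm/invoke/app/invocations`:

```
from dataclasses import dataclass

# Schematic analogue of an "invocation": a node with typed inputs and
# outputs that a session can link together. Names here are invented.

@dataclass
class UpscaleOutput:
    image_name: str

@dataclass
class UpscaleInvocation:
    image_name: str
    scale: int = 2

    def invoke(self) -> UpscaleOutput:
        # A real node would load the image, run the upscaler, and store
        # the result via the image-storage service; here we just rename.
        return UpscaleOutput(image_name=f"{self.image_name}@{self.scale}x")

if __name__ == "__main__":
    node = UpscaleInvocation(image_name="a_cat_eating_sushi.png", scale=2)
    print(node.invoke())
```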
mauwii
8c5773abc1 add a workflow to close stale issues
with values set as discussed in discord
2023-02-25 13:20:05 +01:00
Kyle Schouviller
cd98d88fe7 [nodes] Removed InvokerServices, simplying service model 2023-02-24 20:11:28 -08:00
Kyle Schouviller
34e3aa1f88 parent 9eed1919c2
author Kyle Schouviller <kyle0654@hotmail.com> 1669872800 -0800
committer Kyle Schouviller <kyle0654@hotmail.com> 1676240900 -0800

Adding base node architecture

Fix type annotation errors

Runs and generates, but breaks in saving session

Fix default model value setting. Fix deprecation warning.

Fixed node api

Adding markdown docs

Simplifying Generate construction in apps

[nodes] A few minor changes (#2510)

* Pin api-related requirements

* Remove confusing extra CORS origins list

* Adds response models for HTTP 200

[nodes] Adding graph_execution_state to soon replace session. Adding tests with pytest.

Minor typing fixes

[nodes] Fix some small output query hookups

[node] Fixing some additional typing issues

[nodes] Move and expand graph code. Add base item storage and sqlite implementation.

Update startup to match new code

[nodes] Add callbacks to item storage

[nodes] Adding an InvocationContext object to use for invocations to provide easier extensibility

[nodes] New execution model that handles iteration

[nodes] Fixing the CLI

[nodes] Adding a note to the CLI

[nodes] Split processing thread into separate service

[node] Add error message on node processing failure

Removing old files and duplicated packages

Adding python-multipart
2023-02-24 18:57:02 -08:00
psychedelicious
49ffb64ef3 ui: translations update from weblate (#2804)
Translations update from [Hosted Weblate](https://hosted.weblate.org)
for [InvokeAI/Web
UI](https://hosted.weblate.org/projects/invokeai/web-ui/).



Current translation status:

![Weblate translation
status](https://hosted.weblate.org/widgets/invokeai/-/web-ui/horizontal-auto.svg)
2023-02-25 10:09:37 +11:00
Gabriel Mackievicz Telles
ec14e2db35 translationBot(ui): update translation (Portuguese (Brazil))
Currently translated at 91.8% (431 of 469 strings)

Co-authored-by: Gabriel Mackievicz Telles <telles.gabriel@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/pt_BR/
Translation: InvokeAI/Web UI
2023-02-24 17:54:54 +01:00
Jeff Mahoney
5725fcb3e0 translationBot(ui): added translation (Romanian)
Co-authored-by: Jeff Mahoney <jbmahoney@gmail.com>
2023-02-24 17:54:54 +01:00
gallegonovato
1447b6df96 translationBot(ui): update translation (Spanish)
Currently translated at 100.0% (469 of 469 strings)

Co-authored-by: gallegonovato <fran-carro@hotmail.es>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/es/
Translation: InvokeAI/Web UI
2023-02-24 17:54:54 +01:00
Lincoln Stein
e700da23d8 Sync main with v2.3.1 (#2792)
This PR will bring `main` up to date with released v2.3.1
2023-02-24 11:54:46 -05:00
Lincoln Stein
b4ed8bc47a Merge branch 'main' into v2.3 2023-02-24 10:52:03 -05:00
Lincoln Stein
bd85e00530 Last PR needed for v2.3.1 (#2788)
- Add curated set of starter models based on team discussion. The final
list of starter models can be found in
`invokeai/configs/INITIAL_MODELS.yaml`

- To test model installation, I selected and installed all the models on
the list. This led to my discovering that when there are no more starter
models to display, the console front end crashes. So I made a fix to
this in which the entire starter model selection is no longer shown.

- Update model table in 050_INSTALL_MODELS.md

- Add guide to dealing with low-memory situations
- Version is now `v2.3.1`
2023-02-24 10:31:38 -05:00
Lincoln Stein
4e446130d8 Merge branch 'v2.3' into enhance/curated-2.3.1-models 2023-02-24 10:30:42 -05:00
Lincoln Stein
4c93b514bb bump version to final 2.3.1 2023-02-24 10:04:41 -05:00
Lincoln Stein
d078941316 add low memory troubleshooting guide 2023-02-24 10:04:06 -05:00
Lincoln Stein
230d3a496d document starter models
- add new script `scripts/make_models_markdown_table.py` that parses
  INITIAL_MODELS.yaml and creates markdown table for the model installation
  documentation file

- update 050_INSTALLING_MODELS.md with above table, and add a warning
  about additional license terms that apply to some of the models.
2023-02-24 09:33:07 -05:00
Jonathan
ec2890c19b Run garbage collection to allow the CUDA cache to completely empty. (#2791) 2023-02-24 08:48:54 -05:00
Lincoln Stein
a540cc537f add curated set of HuggingFace diffusers models for 2.3.1 release
- Final list can be found in invokeai/configs/INITIAL_MODELS.yaml

- After installing all the models, I discovered a bug in the file
  selection form that caused a crash when no remaining uninstalled
  models remained. So had to fix this.
2023-02-24 00:53:48 -05:00
Lincoln Stein
39c57aa358 fix generate backend to generate "accurate" intermediate images (#2787)
The sample_to_image method in `ldm.invoke.generator.base` was still
using ckpt-era code. As a result when the WebUI was set to show
"accurate" intermediate images, there'd be a crash. This PR corrects the
problem.

- Closes #2784
- Closes #2775
2023-02-24 00:33:29 -05:00
mauwii
01f8c37bd3 rename amd flavor to rocm 2023-02-24 06:20:44 +01:00
Lincoln Stein
2d990c1f54 Merge branch 'v2.3' into bugfix/webui-accurate-intermediates 2023-02-23 22:07:18 -05:00
Lincoln Stein
7fb2da8741 fix generate backend to generate "accurate" intermediate images
- Closes #2784
- Closes #2775
2023-02-23 22:03:28 -05:00
mauwii
b7718985d5 update build-container.yml
- add branches 'dev/ci/docker/*' and 'dev/docker/*'
2023-02-24 03:58:22 +01:00
Lincoln Stein
c69fcb1c10 fix ckpt_convert module to work with dreambooth v2 models (#2776)
- Discord member @marcus.llewellyn reported that some civitai
2.1-derived checkpoints were not converting properly (probably
dreambooth-generated):
https://discord.com/channels/1020123559063990373/1078386197589655582/1078387806122025070

- @blessedcoolant tracked this down to a missing key that was used to
derive vector length of the CLIP model used by fetching the second
dimension of the tensor at "cond_stage_model.model.text_projection".

- On inspection, I found that the same second dimension can be recovered
from key 'cond_stage_model.model.ln_final.bias', and use that instead. I
hope this is correct; tested on multiple v1, v2 and inpainting models
and they converted correctly.

- While debugging this, I found and fixed several other issues:

- model download script was not pre-downloading the OpenCLIP
text_encoder or text_tokenizer. This is fixed.
- got rid of legacy code in `ckpt_to_diffuser.py` and replaced with
calls into `model_manager`
  - more consistent status reporting in the CLI.
2023-02-23 21:51:57 -05:00
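A minimal sketch of the fallback described above, using only the state-dict keys named in the PR; the function name is illustrative and the real conversion code does considerably more:

```
import torch

def clip_embedding_width(state_dict: dict) -> int:
    """Recover the CLIP embedding width from a legacy checkpoint (sketch).

    Prefer the second dimension of text_projection; when that key is
    missing, fall back to the length of ln_final.bias.
    """
    proj = state_dict.get("cond_stage_model.model.text_projection")
    if proj is not None:
        return proj.shape[1]
    return state_dict["cond_stage_model.model.ln_final.bias"].shape[0]

if __name__ == "__main__":
    fake_sd = {"cond_stage_model.model.ln_final.bias": torch.zeros(1024)}
    print(clip_embedding_width(fake_sd))  # -> 1024
```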
mauwii
90cda11868 separate installation of requirements and source
this should greatly speed up rebuilding of the image when:
- version did not change
- requirements didn't change
2023-02-24 03:51:18 +01:00
Lincoln Stein
0982548e1f Merge branch 'v2.3' into bugfix/v2-model-conversion 2023-02-23 21:27:49 -05:00
mauwii
5cb877e096 reorder .dockerignore 2023-02-24 02:53:27 +01:00
Matthias Wild
11a29fdc4d fix python 3.9 compatibility (#2780)
without this change, the project can be installed on 3.9 but not used
this also fixes the container images

Maybe we should re-enable Python 3.9 checks which would have prevented
this.
2023-02-24 00:49:25 +01:00
Lincoln Stein
24407048a5 Version 2.3.1-rc4 (#2782)
Just a version bump to use a format recognized by PyPi.
2023-02-23 18:09:43 -05:00
Matthias Wild
a7c2333312 Merge branch 'main' into fix/py39-compatibility 2023-02-23 23:53:38 +01:00
Lincoln Stein
b5b541c747 bump version; use correct format for PyPi 2023-02-23 17:47:36 -05:00
Lincoln Stein
ad6ea02c9c Update main with V2.3 fixes (#2774)
Until the nodes merge happens, we can continue to merge bugfixes from
the 2.3 branch into `main`. This will bring main into sync with
`v2.3.1+rc3`
2023-02-23 17:38:16 -05:00
mauwii
1a6ed85d99 fix typing to be compatible with Python 3.9
without this, the project can be installed on 3.9 but not used
this also fixes the container images
2023-02-23 23:27:16 +01:00
Lincoln Stein
a094bbd839 push to pypi from branch v2.3 (#2778)
This change will cause releases on the v2.3 branch to be pushed to PyPi.
2023-02-23 17:20:24 -05:00
Lincoln Stein
73dda812ea push to pypi from branch v2.3
This change will cause releases on the v2.3 branch to be pushed
to PyPi.
2023-02-23 16:55:25 -05:00
Lincoln Stein
8eaf1c4033 Revert "(updater) style 'pip' progress to use dark background"
This reverts commit 89239d1c54.

- This was making a subprocess call to 'bash', and hence crashing
  on Windows systems!
2023-02-23 16:33:57 -05:00
Lincoln Stein
4f44b64052 fix ckpt_convert module to work with dreambooth v2 models
- Discord member @marcus.llewellyn reported that some civitai 2.1-derived checkpoints were
  not converting properly (probably dreambooth-generated):
  https://discord.com/channels/1020123559063990373/1078386197589655582/1078387806122025070

- @blessedcoolant tracked this down to a missing key that was used to
  derive vector length of the CLIP model used by fetching the second
  dimension of the tensor at "cond_stage_model.model.text_projection".
  His proposed solution was to hardcode a value of 1024.

- On inspection, I found that the same second dimension can be
  recovered from key 'cond_stage_model.model.ln_final.bias', and use
  that instead. I hope this is correct; tested on multiple v1, v2 and
  inpainting models and they converted correctly.

- While debugging this, I found and fixed several other issues:

  - model download script was not pre-downloading the OpenCLIP
    text_encoder or text_tokenizer. This is fixed.
  - got rid of legacy code in `ckpt_to_diffuser.py` and replaced
    with calls into `model_manager`
  - more consistent status reporting in the CLI.
2023-02-23 15:43:58 -05:00
Lincoln Stein
c559bf3e10 Add a sanity check to root directory finding algorithm (#2772)
Root directory finding algorithm is:

1) use --root argument
2) use INVOKEAI_ROOT environment variable
3) use VIRTUAL_ENV environment variable
4) use ~/invokeai

Since developers are liable to put virtual environments in their
favorite places, not necessarily in the invokeai root directory, this PR
adds a sanity check that looks for the existence of
`VIRTUAL_ENV/invokeai.init`, and moves on to (4) if not found.
2023-02-23 11:37:11 -05:00
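A small sketch of that lookup order with the added sanity check; function and argument names are illustrative:

```
import os
from pathlib import Path
from typing import Optional

def find_root(root_arg: Optional[str] = None) -> Path:
    """Sketch of the root-finding order described above.

    1) --root argument, 2) INVOKEAI_ROOT, 3) VIRTUAL_ENV (only if it
    contains invokeai.init), 4) ~/invokeai.
    """
    if root_arg:
        return Path(root_arg)
    if "INVOKEAI_ROOT" in os.environ:
        return Path(os.environ["INVOKEAI_ROOT"])
    venv = os.environ.get("VIRTUAL_ENV")
    if venv and (Path(venv) / "invokeai.init").exists():
        return Path(venv)
    return Path.home() / "invokeai"

if __name__ == "__main__":
    print(find_root())
```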
Lincoln Stein
a485515bc6 Merge branch 'v2.3' into bugfix/sanity-check-rootdir 2023-02-23 11:14:52 -05:00
Lincoln Stein
2c9b29725b Bugfix/windows install (#2770)
# This will constitute v2.3.1+rc2

## Windows installer enhancements
  
1. resize installer window to give more room for configure and download
forms
2. replace '\' with '/' in directory names to allow user to
drag-and-drop
       folders into the dialogue boxes that accept directories.
3. similar change in CLI for the !import_model and !convert_model
commands
4. better error reporting when a model download fails due to network
errors
5. put the launcher scripts into a loop so that menu reappears after
       invokeai, merge script, etc exits. User can quit with "Q".
6. do not try to download fp16 of sd-ft-mse-vae, since it doesn't exist.
7. cleaned up status reporting when installing models
8. Detect when install failed for some reason and print helpful error
      message rather than stack trace.
9. Detect window size and resize to minimum acceptable values to provide
      better display of configure and install forms.
10. Fix a bug in the CLI which prevented diffusers imported by their
repo_ids
from being correctly registered in the current session (though they
install
      correctly)
11. Capitalize the "i" in Imported in the autogenerated descriptions.
2023-02-23 11:14:30 -05:00
Lincoln Stein
28612c899a add a sanity check to root directory finding algorithm
Root directory finding algorithm is:

1) use --root argument
2) use INVOKEAI_ROOT environment variable
3) use VIRTUAL_ENV environment variable
4) use ~/invokeai

Since developers are liable to put virtual environments in their
favorite places, not necessarily in the invokeai root directory, this
PR adds a sanity check that looks for the existence of
VIRTUAL_ENV/invokeai.init, and moves to (4) if not found.
2023-02-23 10:15:01 -05:00
Lincoln Stein
88acbeaa35 install creator tags but don't commit 2023-02-23 07:08:41 -05:00
Lincoln Stein
46729efe95 upgrade to compel 0.1.7 2023-02-23 07:06:40 -05:00
Lincoln Stein
b3d03e1146 Merge branch 'v2.3.1' into bugfix/windows-install 2023-02-23 01:04:39 -05:00
Lincoln Stein
e29c9a7d9e fix CLI import of diffusers by repo_id
- Fix a bug in the CLI which prevented diffusers imported by their repo_ids
  from being correctly registered in the current session (though they install
  correctly)

- Capitalize the "i" in Imported in the autogenerated descriptions.
2023-02-23 01:00:14 -05:00
Lincoln Stein
9b157b6532 fix several issues with Windows installs
1. resize installer window to give more room for configure and download forms
2. replace '\' with '/' in directory names to allow user to drag-and-drop
   folders into the dialogue boxes that accept directories.
3. similar change in CLI for the !import_model and !convert_model commands
4. better error reporting when a model download fails due to network errors
5. put the launcher scripts into a loop so that menu reappears after
   invokeai, merge script, etc exits. User can quit with "Q".
6. do not try to download fp16 of sd-ft-mse-vae, since it doesn't exist.
7. cleaned up status reporting when installing models
2023-02-23 00:49:59 -05:00
blessedcoolant
10a1e7962b docs: add TRANSLATION.md (#2769) 2023-02-23 15:37:15 +13:00
Lincoln Stein
cb672d7d00 Merge branch 'v2.3.1' into docs/add-translation-md 2023-02-22 21:35:39 -05:00
psychedelicious
e791fb6b0b docs: tweak messaging 2023-02-23 13:00:05 +11:00
psychedelicious
1c9001ad21 docs: add TRANSLATION.md 2023-02-23 12:53:03 +11:00
Lincoln Stein
3083356cf0 installer enhancements
- Detect when install failed for some reason and print helpful error
  message rather than stack trace.

- Detect window size and resize to minimum acceptable values to provide
  better display of configure and install forms.
2023-02-22 19:18:07 -05:00
blessedcoolant
179814e50a [WebUI] 2.3.1 Localization (#2765) 2023-02-23 10:29:14 +13:00
blessedcoolant
9515c07fca Merge branch 'v2.3.1' into localization-231 2023-02-23 10:29:02 +13:00
blessedcoolant
a45e94fde7 build: localization (2.3.1-final) 2023-02-23 09:47:01 +13:00
Lincoln Stein
8b6196e0a2 version 2.3.1 release candidate 1 2023-02-22 15:26:35 -05:00
Sergey Krashevich
ee2c0ab51b translationBot(ui): update translation (Russian)
Currently translated at 81.4% (382 of 469 strings)

translationBot(ui): update translation (Russian)

Currently translated at 81.6% (382 of 468 strings)

Co-authored-by: Sergey Krashevich <svk@svk.su>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/
Translation: InvokeAI/Web UI
2023-02-22 21:25:08 +01:00
Riccardo Giovanetti
ca5f129902 translationBot(ui): update translation (Italian)
Currently translated at 100.0% (469 of 469 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (468 of 468 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2023-02-22 21:25:08 +01:00
Lincoln Stein
cf2eca7c60 Add new console frontend to initial model selection, and other model mgmt improvements (#2644)
## Major Changes
The invokeai-configure script has now been refactored. The work of
selecting and downloading initial models at install time is now done by
a script named `invokeai-model-install` (module name is
`ldm.invoke.config.model_install`)

Screen 1 - adjust startup options:

![screenshot1](https://user-images.githubusercontent.com/111189/219976468-b642df78-a6fe-44a2-bf97-54ccf34e9656.png)

Screen 2 - select SD models:

![screenshot2](https://user-images.githubusercontent.com/111189/219976494-13c7d257-cc8d-4dae-9521-3b352aab010b.png)

The calling arguments for `invokeai-configure` have not changed, so
nothing should break. After initializing the root directory, the script
calls `invokeai-model-install` to let the user select the starting
models to install.

`invokeai-model-install` puts up a console GUI with checkboxes to
indicate which models to install. It respects the `--default_only` and
`--yes` arguments so that CI will continue to work. Here are the various
effects you can achieve:

`invokeai-configure`
       This will use console-based UI to initialize invokeai.init,
       download support models, and choose and download SD models
    
`invokeai-configure --yes`
Without activating the GUI, populate invokeai.init with default values,
       download support models and download the "recommended" SD models
    
`invokeai-configure --default_only`
Activate the GUI for changing init options, but don't show the SD
download
form, and automatically download the default SD model (currently SD-1.5)
    
`invokeai-model-install`
       Select and install models. This can be used to download arbitrary
models from the Internet, install HuggingFace models using their
repo_id,
       or watch a directory for models to load at startup time
    
`invokeai-model-install --yes`
       Import the recommended SD models without a GUI
    
`invokeai-model-install --default_only`
       As above, but only import the default model

## Flexible Model Imports

The console GUI allows the user to import arbitrary models into InvokeAI
using:

1. A HuggingFace Repo_id
2. A URL (http/https/ftp) that points to a checkpoint or safetensors
file
3. A local path on disk pointing to a checkpoint/safetensors file or
diffusers directory
4. A directory to be scanned for all checkpoint/safetensors files to be
imported

The UI allows the user to specify multiple models to bulk import. The
user can specify whether to import the ckpt/safetensors as-is, or
convert to `diffusers`. The user can also designate a directory to be
scanned at startup time for checkpoint/safetensors files.

## Backend Changes

To support the model selection GUI PR introduces a new method in
`ldm.invoke.model_manager` called `heuristic_import()`. This accepts a
string-like object which can be a repo_id, URL, local path or directory.
It will figure out what the object is and import it. It interrogates the
contents of checkpoint and safetensors files to determine what type of
SD model they are -- v1.x, v2.x or v1.x inpainting.

## Installer

I am attaching a zip file of the installer if you would like to try the
process from end to end.

[InvokeAI-installer-v2.3.0.zip](https://github.com/invoke-ai/InvokeAI/files/10785474/InvokeAI-installer-v2.3.0.zip)
2023-02-22 15:24:59 -05:00
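The sketch below gives a rough feel for how a `heuristic_import()`-style classifier can distinguish the four input kinds listed above; it is illustrative only and does not inspect checkpoint contents the way the real method does:

```
from pathlib import Path

def classify_model_spec(spec: str) -> str:
    """Rough classification in the spirit of heuristic_import() (illustrative).

    The real method goes further and interrogates checkpoint/safetensors
    contents to tell v1.x, v2.x and inpainting models apart.
    """
    if spec.startswith(("http://", "https://", "ftp://")):
        return "url"
    path = Path(spec).expanduser()
    if path.is_dir():
        return "directory to scan"
    if path.suffix in (".ckpt", ".safetensors"):
        return "checkpoint/safetensors file"
    if spec.count("/") == 1:
        return "huggingface repo_id"
    return "unknown"

if __name__ == "__main__":
    for spec in ("stabilityai/stable-diffusion-2-1", str(Path.home()), "model.safetensors"):
        print(spec, "->", classify_model_spec(spec))
```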
Lincoln Stein
16aea1e869 Merge branch 'main' into install/refactor-configure-and-model-select 2023-02-22 14:22:52 -05:00
blessedcoolant
75ff6cd3c3 Refactor prompting code paths to use the compel library (#2729)
motivation: i want to be doing future prompting development work in the
`compel` lib (https://github.com/damian0815/compel) - which is currently
pip installable with `pip install compel`.
2023-02-23 08:09:52 +13:00
blessedcoolant
7b7b31637c Merge branch 'main' into refactor_use_compel 2023-02-23 07:43:30 +13:00
blessedcoolant
fca564c18a ui: fix use prompt when prompt has colon (#2760)
- Fixes wonky use prompt when prompt contains colon
2023-02-23 07:41:38 +13:00
Lincoln Stein
eb8d87e185 Merge branch 'main' into refactor_use_compel 2023-02-22 12:34:16 -05:00
Lincoln Stein
dbadb1d7b5 Merge branch 'main' into fix/ui/prompt-metadata 2023-02-22 12:33:54 -05:00
Lincoln Stein
a4afb69615 fix crash in textual inversion with "num_samples=0" error (#2762)
- At some point pathlib was added to the list of imported modules, and
this broke the os.path code that assembled the sample data set.

- Now fixed by replacing os.path calls with Path methods.
2023-02-22 12:31:28 -05:00
Lincoln Stein
8b7925edf3 fix crash in textual inversion with "num_samples=0" error
- At some point pathlib was added to the list of imported modules, and this
broke the os.path code that assembled the sample data set.

- Now fixed by replacing os.path calls with Path methods.
2023-02-22 11:29:30 -05:00
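A small sketch of the pathlib-based approach described above, which also skips non-image files (the related training-script fix appears further down in this log); the extension filter and example directory are illustrative:

```
from pathlib import Path

IMAGE_EXTENSIONS = {".png", ".jpg", ".jpeg", ".webp"}  # illustrative set

def collect_training_images(data_dir: str) -> list:
    """Gather training images with pathlib instead of os.path (sketch)."""
    root = Path(data_dir).expanduser()
    return sorted(p for p in root.iterdir()
                  if p.is_file() and p.suffix.lower() in IMAGE_EXTENSIONS)

if __name__ == "__main__":
    samples = collect_training_images(".")  # point at a real training dir in practice
    print(f"num_samples={len(samples)}")
```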
Lincoln Stein
168a51c5a6 fix textual inversion output directory path
- The configure script was misnaming the directory for text-inversion-output.
- Now fixed.
2023-02-22 10:06:04 -05:00
Damian Stewart
3f5d8c3e44 remove inaccurate docstring 2023-02-22 13:18:39 +01:00
Lincoln Stein
609bb19573 fixes to resizing and init file editing
- Disable responsive resizing below starting dimensions (you can make
  form larger, but not smaller than what it was at startup)

- Fix bug that caused multiple --ckpt_convert entries (and similar) to
  be written to init file.
2023-02-22 07:05:51 -05:00
psychedelicious
d561d6d3dd chore(ui): build frontend 2023-02-22 22:09:11 +11:00
psychedelicious
7ffaa17551 fix(ui): use prompt bug when prompt has colon
This bug is related to the format in which we stored prompts for some time: an array of weighted subprompts.

This caused some strife when recalling a prompt if the prompt had colons in it, due to our recently introduced handling of negative prompts.

Currently there is no need to store a prompt as anything other than a string, so we revert to doing that.

Compatibility with structured prompts is maintained via helper hook.
2023-02-22 20:33:58 +11:00
Damian Stewart
97eac58a50 fix blend tokenization reporting; fix LDM checkpoint support 2023-02-22 10:29:42 +01:00
Damian Stewart
cedbe8fcd7 fix .blend 2023-02-22 09:04:23 +01:00
Jonathan
a461875abd Merge branch 'main' into refactor_use_compel 2023-02-21 21:14:28 -06:00
Lincoln Stein
ab018ccdfe Fallback to using filename to trigger embeddings (#2752)
Lots of earlier embeds use a common trigger token such as * or the
Hebrew letter shin. Previously, the textual inversion manager would
refuse to load the second and subsequent embeddings that used a
previously-claimed trigger. Now, when this case is encountered, the
trigger token is replaced by <filename> and the user is informed of the
fact.
2023-02-21 21:58:11 -05:00
Lincoln Stein
d41dcdfc46 move trigger_str registration into try block 2023-02-21 21:38:42 -05:00
Lincoln Stein
972aecc4c5 fix responsive resizing 2023-02-21 21:33:44 -05:00
Lincoln Stein
6b7be4e5dc remove dangling debug statement 2023-02-21 20:09:34 -05:00
Lincoln Stein
9b1a7b553f add "hit any key to exit" pause at end of install 2023-02-21 20:03:08 -05:00
Lincoln Stein
7f99efc5df require diffusers 0.13 2023-02-21 17:28:07 -05:00
Lincoln Stein
0a6d8b4855 Merge branch 'main' into refactor_use_compel 2023-02-21 17:19:48 -05:00
Lincoln Stein
5e41811fb5 move trigger text munging to upper level per review 2023-02-21 17:04:42 -05:00
Lincoln Stein
5a4967582e reformat with black and isort 2023-02-21 14:12:57 -05:00
Jonathan
1d0ba4a1a7 Merge branch 'main' into bugfix/filename-embedding-fallback 2023-02-21 13:12:34 -06:00
Lincoln Stein
4878c7a2d5 Merge branch 'main' into install/refactor-configure-and-model-select 2023-02-21 14:09:38 -05:00
blessedcoolant
9e5aa645a7 Fix crashing when using 2.1 model (#2757)
We now require more free memory to avoid attention slicing. 17.5% free
was not sufficient headroom in all cases, so now we require 25%.
2023-02-22 08:03:51 +13:00
Lincoln Stein
d01e23973e fix problem that was causing CI failures 2023-02-21 13:44:32 -05:00
Jonathan
71bbd78574 Fix crashing when using 2.1 model
We now require more free memory to avoid attention slicing. 17.5% free was not sufficient headroom, so now we require 25%.
2023-02-21 12:35:03 -06:00
Lincoln Stein
fff41a7349 merged with main 2023-02-21 12:20:59 -05:00
blessedcoolant
d5f524a156 Merge branch 'main' into bugfix/filename-embedding-fallback 2023-02-22 06:13:41 +13:00
Jonathan
3ab9d02883 Fixed embiggening crash due to clear_cuda_cache not being passed on and bad cuda stats initialization. (#2756) 2023-02-22 06:12:24 +13:00
Lincoln Stein
27a2e27c3a fix crash when installed models < number columns
1. Fixed display crash when the number of installed models is less than
   the number of desired columns to display them.

2. Added --ckpt_convert option to init file.
2023-02-21 12:09:34 -05:00
Jonathan
da04b11a31 Merge branch 'main' into bugfix/filename-embedding-fallback 2023-02-21 10:52:13 -06:00
Lincoln Stein
3795b40f63 implemented the following fixes:
Enhancements:
1. Directory-based imports will not attempt to import components of diffusers models.
2. Diffuser directory imports now supported
3. Files that end with .ckpt that are not Stable Diffusion models (such as VAEs) are
   skipped during import.

Bugs identified in Psychedelicious's review:
1. The invokeai-configure form now tracks the current contents of `invokeai.init` correctly.
2. The autoencoders are no longer treated like installable models, but instead are
   mandatory support models. They will no longer appear in `models.yaml`

Bugs identified in Damian's review:
1. If invokeai-model-install is started before the root directory is initialized, it will
   call invokeai-configure to fix the matter.
2. Fix bug that was causing empty `models.yaml` under certain conditions.
3. Made import textbox smaller
4. Hide the "convert to diffusers" options if nothing to import.
2023-02-21 11:47:41 -05:00
Lincoln Stein
9436f2e3d1 alphabetize trigger strings 2023-02-21 06:23:34 -05:00
Lincoln Stein
7fadd5e5c4 performance: low-memory option for calculating guidance sequentially (#2732)
In theory, this reduces peak memory consumption by doing the conditioned
and un-conditioned predictions one after the other instead of in a
single mini-batch.

In practice, it doesn't reduce the reported "Max VRAM used for this
generation" for me, even without xformers. (But it does slow things down
by a good 18%.)

That suggests to me that the peak memory usage is during VAE decoding,
not the diffusion unet, but ymmv. It does [improve things for gogurt's
16 GB
M1](https://github.com/invoke-ai/InvokeAI/pull/2732#issuecomment-1436187407),
so it seems worthwhile.

To try it out, use the `--sequential_guidance` option:
2dded68267/ldm/invoke/args.py (L487-L492)
2023-02-20 23:00:54 -05:00
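A toy sketch of the batched-versus-sequential trade-off described above; `predict_noise` stands in for the UNet forward pass, and the tensor shapes are illustrative:

```
import torch

def predict_noise(latents: torch.Tensor, embedding: torch.Tensor) -> torch.Tensor:
    """Stand-in for the UNet forward pass (the real model is far heavier)."""
    return latents + embedding.mean(dim=-1, keepdim=True)

def guided_noise(latents, cond, uncond, scale=7.5, sequential=False):
    """Classifier-free guidance, batched or one prediction at a time.

    Sequential mode halves the peak batch size at the cost of a second
    forward pass, mirroring the --sequential_guidance trade-off above.
    """
    if sequential:
        noise_cond = predict_noise(latents, cond.unsqueeze(0))
        noise_uncond = predict_noise(latents, uncond.unsqueeze(0))
    else:
        both = predict_noise(latents.repeat(2, 1), torch.stack([uncond, cond]))
        noise_uncond, noise_cond = both.chunk(2)
    return noise_uncond + scale * (noise_cond - noise_uncond)

if __name__ == "__main__":
    latents = torch.zeros(1, 4)
    cond, uncond = torch.ones(4), torch.zeros(4)
    print(guided_noise(latents, cond, uncond, sequential=True))
    print(guided_noise(latents, cond, uncond, sequential=False))
```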
Lincoln Stein
4c2a588e1f Merge branch 'main' into perf/lowmem_sequential_guidance 2023-02-20 22:40:31 -05:00
Lincoln Stein
5f9de762ff update installation docs for 2.3.1 installer screens (#2749)
This PR updates the manual page for automatic installation, and contains
screenshots of the new installer screens.
2023-02-20 22:40:02 -05:00
Lincoln Stein
91f7abb398 replace repeated triggers with <filename> 2023-02-20 22:33:13 -05:00
Damian Stewart
6420b81a5d Merge remote-tracking branch 'upstream/main' into refactor_use_compel 2023-02-20 23:34:38 +01:00
Lincoln Stein
b6ed5eafd6 update installation docs for 2.3.1 installer screens 2023-02-20 17:24:52 -05:00
blessedcoolant
694d5aa2e8 Add 'update' action to launcher script (#2636)
- Adds an update action to launcher script
- This action calls new python script `invokeai-update`, which prompts
user to update to latest release version, main development version, or
an arbitrary git tag or branch name.
- It then uses `pip` to update to whatever tag was specified.

The user interface (such as it is) looks like this:

![updater-screenshot](https://user-images.githubusercontent.com/111189/218291539-e5542662-6bfd-46ef-8ea9-655ca77392b7.png)
2023-02-21 11:17:22 +13:00
Lincoln Stein
833079140b Merge branch 'main' into enhance/update-menu 2023-02-20 17:16:20 -05:00
Lincoln Stein
fd27948c36 Merge branch 'main' into perf/lowmem_sequential_guidance 2023-02-20 17:15:33 -05:00
Damian Stewart
1dfaaa2a57 fix web ui issues 2023-02-20 22:58:07 +01:00
Lincoln Stein
bac6b50dd1 During textual inversion training, skip over non-image files (#2747)
- The TI script was looping over all files in the training image
directory, regardless of whether they were image files or not. This PR
adds a check for image file extensions.
- Closes #2715
2023-02-20 16:17:32 -05:00
blessedcoolant
a30c91f398 Merge branch 'main' into bugfix/textual-inversion-training 2023-02-21 09:58:19 +13:00
Lincoln Stein
17294bfa55 restore ability of textual inversion manager to read .pt files (#2746)
- Fixes longstanding bug in the token vector size code which caused .pt
files to be assigned the wrong token vector length. These were then
tossed out during directory scanning.
2023-02-20 15:34:56 -05:00
Lincoln Stein
3fa1771cc9 Merge branch 'main' into perf/lowmem_sequential_guidance 2023-02-20 15:20:15 -05:00
Lincoln Stein
f3bd386ff0 Merge branch 'main' into bugfix/textual-inversion-training 2023-02-20 15:19:53 -05:00
Lincoln Stein
8486ce31de Merge branch 'main' into bugfix/embedding-vector-length 2023-02-20 15:19:36 -05:00
Lincoln Stein
1d9845557f reduced verbosity of embed loading messages 2023-02-20 15:18:55 -05:00
Lincoln Stein
55dce6cfdd remove more dead code 2023-02-20 15:08:07 -05:00
Lincoln Stein
58be915446 Merge branch 'main' into install/refactor-configure-and-model-select 2023-02-20 14:48:41 -05:00
blessedcoolant
dc9268f772 [WebUI] Symmetry Fix (#2745)
Symmetry now has a toggle on and off. Won't be passed if not enabled.
Symmetry settings now moved to their accordion.
2023-02-21 08:47:23 +13:00
Lincoln Stein
47ddc00c6a in textual inversion training, skip over non-image files
- Closes #2715
2023-02-20 14:44:10 -05:00
Lincoln Stein
0d22fd59ed restore ability of textual inversion manager to read .pt files
- Fixes longstanding bug in the token vector size code which caused
  .pt files to be assigned the wrong token vector length. These
  were then tossed out during directory scanning.
2023-02-20 14:34:14 -05:00
blessedcoolant
d5efd57c28 Merge branch 'symmetry-fix' of https://github.com/blessedcoolant/InvokeAI into symmetry-fix 2023-02-21 07:44:34 +13:00
blessedcoolant
b52a92da7e build: symmetry-fix-2 2023-02-21 07:43:56 +13:00
blessedcoolant
b949162e7e Revert Symmetry Big Size Input 2023-02-21 07:42:20 +13:00
blessedcoolant
5409991256 Merge branch 'main' into symmetry-fix 2023-02-21 07:29:53 +13:00
blessedcoolant
be1bcbc173 build: symmetry-fix 2023-02-21 07:28:25 +13:00
blessedcoolant
d6196e863d Move symmetry settings to their own accordion 2023-02-21 07:25:24 +13:00
blessedcoolant
63e790b79b fix crash in CLI when --save_intermediates called (#2744)
Fixes #2733
2023-02-21 07:16:45 +13:00
Lincoln Stein
cf53bba99e Merge branch 'main' into bugfix/save-intermediates 2023-02-20 12:51:53 -05:00
Lincoln Stein
ed4c8f6a8a fix crash in CLI when --save_intermediates called
Fixes #2733
2023-02-20 12:50:32 -05:00
Lincoln Stein
aab8263c31 Fix crash on calling diffusers' prepare_attention_mask (#2743)
Diffusers' `prepare_attention_mask` was crashing when we didn't pass in
a batch size.
2023-02-20 12:35:33 -05:00
Jonathan
b21bd6f428 Fix crash on calling diffusers' prepare_attention_mask
Diffusers' `prepare_attention_mask` was crashing when we didn't pass in a batch size.
2023-02-20 11:12:47 -06:00
Kevin Turner
cb6903dfd0 Merge branch 'main' into perf/lowmem_sequential_guidance 2023-02-20 08:03:11 -08:00
blessedcoolant
cd87ca8214 Correctly detect when an embedding is incompatible with the current model (#2736)
- Fixed the test for token length; tested on several .pt and .bin files
- Also added a __main__ entrypoint for CLI.py, to make pdb debugging a
bit more convenient.
2023-02-21 04:32:32 +13:00
blessedcoolant
58e5bf5a58 Merge branch 'main' into bugfix/embedding-compatibility-test 2023-02-21 04:09:18 +13:00
blessedcoolant
f17c7ca6f7 [WebUI] Symmetry Settings (#2741)
Add the newly added Symmetry settings to the WebUI.
2023-02-21 04:07:30 +13:00
blessedcoolant
c3dd28cff9 Merge branch 'main' into symmetry-webui 2023-02-21 04:06:54 +13:00
blessedcoolant
db4e1e8b53 add @lstein and @blessedcoolant to all codeowner paths (#2742)
- In an emergency, one or the other of these individuals will be
available to review any part of the code.
2023-02-21 04:06:23 +13:00
Lincoln Stein
3e43c3e698 add @lstein and @blessedcoolant to all paths
- In an emergency, one or the other of these individuals will
  be available to review any part of the code.
2023-02-20 10:02:32 -05:00
blessedcoolant
cc7733af1c Merge branch 'main' into enhance/update-menu 2023-02-21 03:54:40 +13:00
blessedcoolant
2a29734a56 Merge branch 'main' into symmetry-webui 2023-02-21 03:18:47 +13:00
blessedcoolant
f2e533f7c8 build: threshold slider fix 2023-02-21 03:17:41 +13:00
blessedcoolant
078f897b67 Revert Threshold Slider to older values 2023-02-21 02:57:00 +13:00
Matthias Wild
8352ab2076 remove old swagger related files since security issues (#2730) 2023-02-20 14:55:21 +01:00
Matthias Wild
1a3d47814b Merge branch 'main' into update/docs/remove-swagger-related-files 2023-02-20 14:54:22 +01:00
Lincoln Stein
e852ad0a51 fix bug that prevented converted files from being written into `models.yaml` 2023-02-20 08:48:54 -05:00
blessedcoolant
136cd0e868 Merge branch 'main' into symmetry-webui 2023-02-21 02:43:40 +13:00
blessedcoolant
7afe26320a build: symmetry-settings 2023-02-21 02:41:26 +13:00
Lincoln Stein
702da71515 swap y/n values for broken model reconfiguration prompt 2023-02-20 08:34:46 -05:00
blessedcoolant
b313cf8afd Add Symmetry Settings 2023-02-21 02:27:55 +13:00
Lincoln Stein
852d78d9ad Fix for issue #2707 (#2710)
When selecting the last model of the third model-list in the
model-merging-TUI it crashed because the code forgot about the "None"
element.

Additionally, it seems that it may have accidentally always taken the wrong
model as the third model when one was selected.

This simple fix resolves both issues.
2023-02-20 08:02:00 -05:00
Lincoln Stein
5570a88858 Merge branch 'main' into update/docs/remove-swagger-related-files 2023-02-20 07:44:42 -05:00
Lincoln Stein
cfd897874b Merge branch 'main' into perf/lowmem_sequential_guidance 2023-02-20 07:42:35 -05:00
Lincoln Stein
1249147c57 Merge branch 'main' into enhance/update-menu 2023-02-20 07:38:56 -05:00
Lincoln Stein
eec5c3bbb1 Merge branch 'main' into main 2023-02-20 07:38:08 -05:00
Jonathan
ca8d9fb885 Add symmetry to generation (#2675)
Added symmetry to Invoke based on discussions with @damian0815. This can currently only be activated via the CLI with the `--h_symmetry_time_pct` and `--v_symmetry_time_pct` options. Those take values from 0.0-1.0, exclusive, indicating the percentage through generation at which symmetry is applied as a one-time operation. To have symmetry in either axis applied after the first step, use a very low value like 0.001.
2023-02-20 07:33:19 -05:00
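The symmetry option described in the commit above amounts to mirroring half of the latent tensor once the sampler passes the requested fraction of steps. A minimal sketch of that idea, assuming a standard NCHW latent tensor; the function name and hook point are illustrative, not Invoke's actual code:

```
import torch

def maybe_apply_h_symmetry(latents, step, total_steps, h_symmetry_time_pct, already_applied):
    """One-time horizontal symmetry: once we pass the requested fraction of the
    schedule, mirror the left half of the latents onto the right half."""
    if already_applied or h_symmetry_time_pct is None:
        return latents, already_applied
    if step / total_steps >= h_symmetry_time_pct:
        w = latents.shape[-1]
        half = w // 2
        mirrored = torch.flip(latents[..., :half], dims=[-1])
        latents = torch.cat([latents[..., : w - half], mirrored], dim=-1)
        already_applied = True
    return latents, already_applied
```

A very low value such as 0.001 triggers the mirror right after the first step, matching the behaviour described above.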
Lincoln Stein
7d77fb9691 fixed --default_only behavior 2023-02-20 01:29:39 -05:00
Lincoln Stein
a4c0dfb33c fix broken --ckpt_convert option
- not sure why, but at some point --ckpt_convert (which converts legacy checkpoints
  into diffusers in memory) stopped working due to float16/float32 issues.

- this commit repairs the problem

- also removed some debugging messages I found in passing
2023-02-20 01:12:02 -05:00
Kevin Turner
2dded68267 add --sequential_guidance option for low-RAM tradeoff 2023-02-19 21:21:14 -08:00
Lincoln Stein
172ce3dc25 correctly detect when an embedding is incompatible with the current model
- Fixed the test for token length; tested on several .pt and .bin files
- Also added a __main__ entrypoint for CLI.py, to make pdb debugging a bit
  more convenient.
2023-02-19 22:30:57 -05:00
Kevin Turner
6c8d4b091e dev(InvokeAIDiffuserComponent): mollify type checker's concern about the optional argument 2023-02-19 16:58:54 -08:00
Lincoln Stein
7beebc3659 resolved conflicts; ran black and isort 2023-02-19 19:48:01 -05:00
Lincoln Stein
5461318eda clean up diagnostic messages 2023-02-19 19:38:29 -05:00
Kevin Turner
d0abe13b60 performance(InvokeAIDiffuserComponent): add low-memory path for calculating conditioned and unconditioned predictions sequentially
Proof of concept. Still needs to be wired up to options or heuristics.
2023-02-19 16:04:54 -08:00
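The low-memory path above trades speed for peak memory by running the unconditioned and conditioned predictions one after another instead of batching them. A rough sketch under the assumption of a diffusers-style UNet; `unet`, the embeddings, and the scale are placeholders:

```
import torch

def guided_noise_prediction(unet, latents, t, uncond_emb, cond_emb, guidance_scale,
                            sequential_guidance=False):
    """Classifier-free guidance: the batched path uses one forward pass with a
    doubled batch (faster, roughly twice the activation memory); the sequential
    path makes two smaller passes to lower peak memory."""
    if sequential_guidance:
        noise_uncond = unet(latents, t, encoder_hidden_states=uncond_emb).sample
        noise_cond = unet(latents, t, encoder_hidden_states=cond_emb).sample
    else:
        noise_both = unet(
            torch.cat([latents, latents]), t,
            encoder_hidden_states=torch.cat([uncond_emb, cond_emb]),
        ).sample
        noise_uncond, noise_cond = noise_both.chunk(2)
    return noise_uncond + guidance_scale * (noise_cond - noise_uncond)
```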
Kevin Turner
aca9d74489 refactor(InvokeAIDiffuserComponent): rename internal methods
Prefix with `_` as is tradition.
2023-02-19 15:33:16 -08:00
mauwii
a0c213a158 remove /static 2023-02-19 23:51:00 +01:00
mauwii
740210fc99 remove old unused files since security issues 2023-02-19 23:47:28 +01:00
Lincoln Stein
ca10d0652f show title of add models screen 2023-02-19 16:55:09 -05:00
Lincoln Stein
e1a85d8184 fix incorrect passing of precision to model installer 2023-02-19 16:24:31 -05:00
Lincoln Stein
9d8236c59d tested and working on Ubuntu
- You can now achieve several effects:

   `invokeai-configure`
   This will use console-based UI to initialize invokeai.init,
   download support models, and choose and download SD models

   `invokeai-configure --yes`
   Without activating the GUI, populate invokeai.init with default values,
   download support models and download the "recommended" SD models

   `invokeai-configure --default_only`
   As above, but only download the default SD model (currently SD-1.5)

   `invokeai-model-install`
   Select and install models. This can be used to download arbitrary
   models from the Internet, install HuggingFace models using their repo_id,
   or watch a directory for models to load at startup time

   `invokeai-model-install --yes`
   Import the recommended SD models without a GUI

   `invokeai-model-install --default_only`
   As above, but only import the default model
2023-02-19 16:08:58 -05:00
blessedcoolant
7eafcd47a6 [WebUI] Bug Fixes (#2728)
A few bugs fixed.

- After the recent update to the Cancel Button, it was no longer
respecting sizing in Floating Mode and the Beta Canvas. Fixed that.
- After the recent dependency update, useHotkeys was bugging out for the
fullscreen hotkey `f`. Realized this was happening because the hotkey
was initialized in two places -- in both the gallery and the parameter
floating button. Removed it from both those places and moved it to the
InvokeTabs component. It makes sense for it to live there because it is a
global hotkey.
- Also added index `0` to the default Accordion index in state in order
to ensure that the main accordions stay open. Conveniently this works
great on all tabs. We have all the primary options in accordions so they
stay open. And as for advanced settings, the first one is always Seed
which is an important accordion, so it opens up by default.

Think there may be some more bugs. Looking into them.
2023-02-20 09:39:48 +13:00
Damian Stewart
ded3f13a33 move all prompting stuff to use compel 2023-02-19 20:42:29 +01:00
Lincoln Stein
e5646d7241 both forms functional; need integration 2023-02-19 13:12:05 -05:00
blessedcoolant
79ac9698c1 build: webui-bug-fixes 2023-02-20 05:28:52 +13:00
blessedcoolant
d29f57c93d fix: Keep the first accordion open by default on reset
We need to do this now because we are using multiple accordions.
2023-02-20 05:26:48 +13:00
blessedcoolant
9b7cde8918 fix: Fullscreen Hotkey Bug
After upgrading the deps, the fullscreen hotkey started to bug out. I believe this was happening because it was triggered in two different components, causing it to run twice. Removed it from both floating buttons and moved it to the Invoke tab. Makes sense to keep it there as it is a global hotkey.
2023-02-20 05:20:51 +13:00
blessedcoolant
8ae71303a5 fix: Cancel Button not maintaining min height
After the recent changes the Cancel button wasn't maintaining min height in floating mode. Also the new button group was not scaling in width correctly on the Canvas Beta UI. Fixed both.
2023-02-20 05:18:37 +13:00
blessedcoolant
2cd7bd4a8e docs: add translation info to readme (#2725)
- Adds a translation status badge
- Adds a blurb about contributing a translation (we want Weblate to be
the source of truth for translations, and to avoid updating translations
directly here)
2023-02-20 04:46:26 +13:00
blessedcoolant
b813298f2a Merge branch 'main' into docs/readme-translation 2023-02-20 04:43:13 +13:00
blessedcoolant
58f787f7d4 ui: update deps, fix husky script (#2726)
- Upgraded all dependencies
- Removed beta TS 5.0 as it conflicted with some packages
- Added types for `Array.prototype.findLast` and
`Array.prototype.findLastIndex` (these definitions are provided in TS
5.0)
- Fixed type import syntax in a few components
- Re-patched `redux-deep-persist` and tested to ensure the patch still
works

The husky pre-commit command was `npx run lint`, but it should run
`lint-staged`. Also, `npx` wasn't working for me. Changed the command to
`npm run lint-staged` and it all works. Extended the `lint-staged`
triggers to hit `json`, `scss` and `html`.
2023-02-20 04:14:24 +13:00
blessedcoolant
2bba543d20 Merge branch 'main' into chore/ui/update-deps 2023-02-20 04:13:47 +13:00
Jonathan
d3c1b747ee Fix behavior when encountering a bad embedding (#2721)
When encountering a bad embedding, InvokeAI was asking about reconfiguring models. This is because the embedding load error was never handled - it now is.
2023-02-19 14:04:59 +00:00
psychedelicious
b9ecf93ba3 ui: translations update from weblate (#2727)
Translations update from [Hosted Weblate](https://hosted.weblate.org)
for [InvokeAI/Web
UI](https://hosted.weblate.org/projects/invokeai/web-ui/).



Current translation status:

![Weblate translation
status](https://hosted.weblate.org/widgets/invokeai/-/web-ui/horizontal-auto.svg)
2023-02-19 23:12:05 +11:00
psychedelicious
487da8394d translationBot(ui): update translation (French)
Currently translated at 85.4% (398 of 466 strings)

Co-authored-by: psychedelicious <mabianfu@icloud.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/fr/
Translation: InvokeAI/Web UI
2023-02-19 13:00:55 +01:00
Riccardo Giovanetti
4c93bc56f8 translationBot(ui): update translation (Italian)
Currently translated at 100.0% (466 of 466 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2023-02-19 13:00:55 +01:00
Hosted Weblate
727dfeae43 translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2023-02-19 13:00:55 +01:00
psychedelicious
88d561dee7 chore(ui): build frontend 2023-02-19 22:32:05 +11:00
psychedelicious
7a379f1d4f chore(ui): update deps
- Upgraded all dependencies
- Removed beta TS 5.0 as it conflicted with some packages
- Added types for `Array.prototype.findLast` and `Array.prototype.findLastIndex` (these definitions are provided in TS 5.0)
- Fixed type import syntax in a few components
- Re-patched `redux-deep-persist` and tested to ensure the patch still works
2023-02-19 22:32:05 +11:00
psychedelicious
3ad89f99d2 build(ui): fix husky & lint-staged 2023-02-19 22:32:00 +11:00
psychedelicious
d76c5da514 docs: add translation info to readme 2023-02-19 19:13:38 +11:00
blessedcoolant
da5b0673e7 docs(ti): add using & troubleshooting sections (#2717)
Add `Using Embeddings` and `Troubleshooting` sections clarifying issues
I had when using TI for the first time.
2023-02-19 16:52:44 +13:00
blessedcoolant
d7180afe9d Merge branch 'main' into docs/ti/add-using-troubleshooting 2023-02-19 16:51:50 +13:00
psychedelicious
2e9c15711b docs(ti): add using & troubleshooting sections 2023-02-19 14:45:26 +11:00
psychedelicious
e19b08b149 [WebUI] Model Manager Lag Fix (#2720)
Model Manager lags a bit if you have a lot of models.

Basically added a fake delay to rendering the model list so the modal
has time to load first. Hacky but if it works it works.
2023-02-19 14:42:25 +11:00
blessedcoolant
234d76a269 build: webui-model-manager-lag-fix 2023-02-19 15:25:14 +13:00
blessedcoolant
826d941068 fix: Fix Model Manager Modal Lag
By hacking in a fake delay to load the list.
2023-02-19 15:23:25 +13:00
Kevin Turner
34e449213c add ability to retrieve current list of embedding trigger strings (#2650) 2023-02-18 18:05:00 -08:00
Kevin Turner
671c5943e4 Merge remote-tracking branch 'origin/main' into api/add-trigger-string-retrieval
# Conflicts:
#	ldm/generate.py
2023-02-18 17:44:59 -08:00
blessedcoolant
16c24ec367 [WebUI] Implement a "Cancel after current iteration" Button (#2642)
## What was the problem/requirement? (What/Why)
Frequently, I wish to cancel the processing of images, but also want the
current image to finalize before I do. To work around this, I need to
wait until the current one finishes before pressing the cancel.

## What was the solution? (How)
* Implemented a button that allows the user to "Cancel after current iteration,"
which stores a state in the UI that will attempt to cancel the
processing after the current image finishes
* If the button is pressed again, while it is spinning and before the
next iteration happens, this will stop the scheduling of the cancel, and
behave as if the button was never pressed.

### Minor
* Added `.yarn` to `.gitignore` as this was an output folder produced
from following Frontend's README

### Revision 2
#### Major 
* Changed from a standalone button to a context menu next to the
original cancel button. Pressing the context menu gives a drop-down
option to select which cancel method the user prefers, and they can
press that button to cancel in the chosen way
* Moved states to system state for cross-screen and toggled cancel types
management
* Added in distribution for the target yarn version (allowing any
version of yarn to compile successfully), and updated the README to
ensure `--immutable` is passed for onboarding developers

#### Minor 
* Updated `.gitignore` to ignore specific yarn folders, as specified by
their team -
https://yarnpkg.com/getting-started/qa#which-files-should-be-gitignored

## How were these changes tested?
* `yarn dev` => Server started successfully
* Manual testing on the development server to ensure the button behaved
as expected
* `yarn run build` => Success

### Artifacts
#### Revision 1
* Video showing the UI changes in action

https://user-images.githubusercontent.com/89283782/218347722-3a15ce61-2d8c-4c38-b681-e7a3e79dd595.mov

* Images showing the basic UI changes

![image](https://user-images.githubusercontent.com/89283782/218347124-4afbb699-2abc-4e71-a794-b04f7179cfe2.png)

![image](https://user-images.githubusercontent.com/89283782/218347826-443db351-7a3a-4111-80af-56d56a81f07b.png)

#### Revision 2
* Video showing the UI changes in action

https://user-images.githubusercontent.com/89283782/219901217-048d2912-9b61-4415-85fd-9e8fedb00c79.mov

* Images showing the basic UI changes
(Default state) 

![image](https://user-images.githubusercontent.com/89283782/219901228-918b263a-dc75-4e5d-8897-5fc62c71a790.png)
(Drop-down context menu active) 

![image](https://user-images.githubusercontent.com/89283782/219901241-021be07a-b768-40a2-988f-eb59be4a962d.png)
(Scheduled cancel selected and running)

![image](https://user-images.githubusercontent.com/89283782/219901243-59a9c61a-71a7-44b3-adab-7aa4c9ee1f8e.png)
(Scheduled cancel started)

![image](https://user-images.githubusercontent.com/89283782/219901266-b4c0adc1-d791-4989-9351-075758e06534.png)


## Notes
* Watching `SystemState`'s `currentStatus` variable for the value
`common:statusIterationComplete` is an alternative to this approach (and
would be more optimal, as it should prevent the next iteration from even
starting), but since the names are within the translations, rather than
an enum or other type, this method of tracking the current iteration was
used instead.
* `isLoading` on `IAIIconButton` caused the Icon Button to also be
disabled, so the current solution works around that with conditionally
rendering the icon of the button instead of passing that value.
* I don't have context on the development expectation for `dist` folder
interactions (and couldn't find any documentation outside of the
`.gitignore` mentioning that the folder should remain). Let me know if
they need to be modified a certain way.
2023-02-19 14:35:34 +13:00
psychedelicious
e8240855e0 chore(ui): build frontend 2023-02-19 12:18:40 +11:00
psychedelicious
a5e065048e feat(ui): persist blacklist cancelAfter 2023-02-19 11:53:52 +11:00
blessedcoolant
a53c3269db build: cancel-after-iteration-webui 2023-02-19 13:30:15 +13:00
blessedcoolant
8bf93d3a32 Isolate Cancel Button Menu Styling 2023-02-19 13:23:04 +13:00
blessedcoolant
d42cc0fd1c Port Cancel Button Options Menu to New Component 2023-02-19 13:18:03 +13:00
blessedcoolant
d2553d783c Add IAISimpleMenu Component 2023-02-19 13:17:45 +13:00
blhook
10b747d22b Run yarn build once more due to merge 2023-02-18 14:45:00 -08:00
blhook
1d567fa593 Merge branch 'main' into scheduled-cancel 2023-02-18 14:43:05 -08:00
psychedelicious
3a3dd39d3a [ui] fix weblate merge conflict (#2716)
My last attempt to fix the Weblate missing keys was done incorrectly and
caused a merge conflict on the Weblate repo.

This PR follows these steps to fix it
https://docs.weblate.org/en/latest/faq.html#how-to-fix-merge-conflicts-in-translations

🤞
2023-02-19 09:14:35 +11:00
psychedelicious
f4b3d7dba2 fix(ui): add useSlidersForAll string 2023-02-19 09:12:14 +11:00
Riccardo Giovanetti
de2c7fd372 translationBot(ui): update translation (Italian)
Currently translated at 100.0% (462 of 462 strings)

Translation: InvokeAI/Web UI
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
2023-02-19 09:05:01 +11:00
Anonymous
b140e1c619 translationBot(ui): update translation (English)
Currently translated at 100.0% (462 of 462 strings)

Co-authored-by: Anonymous <noreply@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/en/
Translation: InvokeAI/Web UI
2023-02-19 09:05:01 +11:00
Riccardo Giovanetti
1308584289 translationBot(ui): update translation (Italian)
Currently translated at 100.0% (459 of 459 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2023-02-19 09:05:01 +11:00
blhook
2ac4778bcf Fix broken translation string location in Scheduled Cancel 2023-02-18 13:51:53 -08:00
blhook
6101d67dba Post-merge cleanup 2023-02-18 13:35:33 -08:00
blhook
3cd50fe3a1 Merge branch 'main' into scheduled-cancel 2023-02-18 13:30:45 -08:00
blhook
e683b574d1 Change scheduled send to be as part of context for Cancel button 2023-02-18 13:23:58 -08:00
blessedcoolant
0decd05913 fix conversion of checkpoints into incompatible diffusers models (#2714)
- The checkpoint conversion script was generating diffusers models with
the safety checker set to null. This resulted in models that could not
be merged with ones that have the safety checker activated.

- This PR fixes the issue by incorporating the safety checker into all
1.x-derived checkpoints, regardless of user's nsfw_checker setting.
2023-02-19 05:42:19 +13:00
Lincoln Stein
d01b7ea2d2 remove debug statement & actually do merge 2023-02-18 11:19:06 -05:00
Lincoln Stein
4fa91724d9 fix conversion of checkpoints into incompatible diffusers models
- The checkpoint conversion script was generating diffusers models
  with the safety checker set to null. This resulted in models
  that could not be merged with ones that have the safety checker
  activated.

- This PR fixes the issue by incorporating the safety checker into
  all 1.x-derived checkpoints, regardless of user's nsfw_checker setting.
2023-02-18 11:07:38 -05:00
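One way to picture the fix above: a converted 1.x pipeline gets the standard safety checker attached instead of `None`, so it can later be merged with checkpoints that include one. A hedged sketch, assuming `pipe` is an already-converted `StableDiffusionPipeline`; not the conversion script's actual code:

```
from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
from transformers import AutoFeatureExtractor

def attach_safety_checker(pipe):
    """Attach the stock SD-1.x safety checker and its feature extractor to a
    converted pipeline rather than leaving safety_checker=None."""
    pipe.safety_checker = StableDiffusionSafetyChecker.from_pretrained(
        "CompVis/stable-diffusion-safety-checker"
    )
    pipe.feature_extractor = AutoFeatureExtractor.from_pretrained(
        "CompVis/stable-diffusion-safety-checker"
    )
    return pipe
```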
blessedcoolant
e3d1c64b77 fix(diffusers_pipeline): ensure cuda.get_mem_info always gets a specific device index. (#2700)
Also tighten up the typing of `device` attributes in general.

Fixes 
> ValueError: Expected a torch.device with a specified index or an
integer, but got:cuda
2023-02-19 04:33:16 +13:00
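The underlying issue is that memory queries such as `torch.cuda.mem_get_info()` want a device with an explicit index, while a bare "cuda" device triggers the ValueError quoted above. A small sketch of the kind of normalisation the fix implies (names are illustrative):

```
import torch

def normalize_device(device):
    """Return a torch.device that always carries an index for CUDA devices."""
    device = torch.device(device)
    if device.type == "cuda" and device.index is None:
        return torch.device("cuda", torch.cuda.current_device())
    return device

if torch.cuda.is_available():
    free_bytes, total_bytes = torch.cuda.mem_get_info(normalize_device("cuda"))
```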
blessedcoolant
17f35a7bba Merge branch 'main' into fix/expected-torch-device 2023-02-19 04:16:13 +13:00
blessedcoolant
ab2f0a6fbf fix(ui): fix translation files (#2708)
Weblate's first PR was an attempt to fix some translation issues we
had overlooked!

It wanted to remove some keys which it did not see in our translation
source due to typos.

This PR instead corrects the key names to resolve the issues.
2023-02-19 03:51:32 +13:00
blessedcoolant
41cbf2f7c4 Merge branch 'main' into feat/ui/fix-translations 2023-02-19 03:50:35 +13:00
Damian Stewart
d5d2e1d7a3 Merge branch 'main' into fix/expected-torch-device 2023-02-18 15:23:08 +01:00
Lincoln Stein
587faa3e52 preparation for startup option editor 2023-02-18 08:51:26 -05:00
psychedelicious
80229ab73e Fixed grammar in "other options" feature tooltip (#2711)
It bothered me, so I fixed it
2023-02-18 22:05:46 +11:00
ExperimentalCyborg
68b2911d2f Fixed grammar in "other options" feature tooltip 2023-02-18 11:58:33 +01:00
Iman Karim
2bf2f627e4 Fix for issue #2707 2023-02-18 11:40:12 +01:00
psychedelicious
58676b2ce2 fix(ui): fix translation files 2023-02-18 19:08:46 +11:00
psychedelicious
11f79dc1e1 [WebUI] Localization Port Bug Fixes (#2706)
- Fixed missing localization string for "useSlidersForAll"
- Fixed status messages being broken.
2023-02-18 18:59:05 +11:00
blessedcoolant
2a095ddc8e build: localization-bug-fixes 2023-02-18 19:35:39 +13:00
blessedcoolant
dd849d2e91 Fix Localization Porting Bugs 2023-02-18 19:32:55 +13:00
Lincoln Stein
8c63fac958 AttributeError: 'Namespace' object has no attribute 'log_tokenization' (#2698)
Could be fixed here or alternatively declared in file globals.py
2023-02-18 01:08:50 -05:00
blessedcoolant
11a70e9764 Merge branch 'main' into patch-14 2023-02-18 18:45:05 +13:00
blessedcoolant
33ce78e4a2 feat(ui): set up for weblate translation (#2702)
# Weblate Translation 

After doing a full integration test of 3 translation service providers
on my fork of InvokeAI, we have chosen
[Weblate](https://hosted.weblate.org). The other two viable options were
[Crowdin](https://crowdin.com/) and
[Transifex](https://www.transifex.com/).

Weblate was the choice because its hosted service provides a very solid
UX / DX, can scale as much as we may ever need, is FOSS itself, and
generously offers free hosted service to other libre projects like ours.

## How it works

Weblate hosts its own fork of our repo and establishes a kind of
unidirectional relationship between our repo and its fork.

### InvokeAI --> Weblate

The `invoke-ai/InvokeAI` repo has had the Weblate GitHub app added to
it. This app watches for changes to our translation source
(`invokeai/frontend/public/locales/en.json`) and then updates the
Weblate fork. The Weblate UI then knows there are new strings to be
translated, or changes to be made.

### Translation

Our translators can then update the translations on the Weblate UI. The
plan now is to invite individual community members who have expressed
interest in maintaining a language or two and give them access to the
app. We can also open the doors to the general public if desired.

### Weblate --> InvokeAI

When a translation is ready or changed, the system will make a PR to
`main`. We have a substantial degree of control over this and will
likely manually trigger these PRs instead of letting them fire off
automatically.

Once a PR is merged, we will still need to rebuild the web UI. I think
we can set things up so that we only need the rebuild when a totally new
language is added, but for now, we will stick to this relatively simple
setup.

## This PR 

This PR sets up the web UI's translation stuff to work with Weblate:
- merged each locale into a single file
- updated the i18next config and UI to work with this simpler file
structure
- update our eslint and prettier rules to ensure the locale files have
the same format as what Weblate outputs (`tabWidth: 4`)
- added a thank you to Weblate in our README

Once this is merged, I'll link Weblate to `main` and do a couple tests
to ensure it is all working as expected.
2023-02-18 18:42:03 +13:00
psychedelicious
4f78518858 chore(ui): build frontend 2023-02-18 15:26:24 +11:00
psychedelicious
fad99ac4d2 docs: add thanks to weblate for translation 2023-02-18 15:26:24 +11:00
psychedelicious
423b592b25 feat(ui): set up for weblate translation 2023-02-18 15:26:04 +11:00
Kevin Turner
8aa7d1da55 fix(xformers): shush about not having Triton available. (#2701)
It's not readily available on Windows and xformers only uses it on some very specific hardware anyway.
2023-02-17 18:02:32 -08:00
Kevin Turner
6b702c32ca fix(xformers): shush about not having Triton available.
It's not readily available on Windows and xformers only uses it on some very specific hardware anyway.
2023-02-17 17:41:27 -08:00
blessedcoolant
767012aec0 [WebUI] Model Merging (#2699)
This PR brings Model Merging to the WebUI.

Inside the Model Manager, you can now find a new button called Merge
Models. The rest is self-explanatory.


![firefox_BYCM4YNHEa](https://user-images.githubusercontent.com/54517381/219795631-dbb5c5c4-fc3a-4cdd-9549-18c2e5302835.png)
2023-02-18 14:34:35 +13:00
blessedcoolant
2267057e2b Merge branch 'main' into webui-model-merging 2023-02-18 14:13:44 +13:00
Kevin Turner
b8212e4dea fix(diffusers_pipeline): ensure cuda.get_mem_info always gets a specific device index.
Also tighten up the typing of `device` attributes in general.
2023-02-17 16:56:15 -08:00
blessedcoolant
5b7e4a5f5d Add Error Handling For Merging 2023-02-18 12:17:22 +13:00
Lincoln Stein
07f9fa63d0 Bugfixes on the merge_model GUI (#2697)
This fixes a few cosmetic bugs in the merge models console GUI:

1) Fix the minimum and maximum ranges on alpha. Was 0.05 to 0.95. Now
0.01 to 0.99.
2) Don't show the 'add_difference' interpolation method when two models
are selected, or the other three methods when three models are selected
2023-02-17 16:57:03 -05:00
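For context on the second fix, the interpolation methods depend on how many models are being merged: `add_difference` needs three models (it applies the difference of two models to a third), while the other methods blend exactly two. A sketch of the selection logic; the `add_difference` formula in the comment is an assumption based on common checkpoint-merging conventions, not this PR's code:

```
def available_interpolations(model_count):
    """Only offer merge methods that make sense for the number of selected models."""
    if model_count == 3:
        # add_difference: merged = A + alpha * (B - C)
        return ["add_difference"]
    return ["weighted_sum", "sigmoid", "inverse_sigmoid"]

def clamp_alpha(alpha):
    """Keep alpha inside the corrected 0.01-0.99 range from the first fix."""
    return min(max(alpha, 0.01), 0.99)
```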
Lincoln Stein
1ae8986451 add log_tokenization to globals 2023-02-17 16:47:32 -05:00
Lincoln Stein
b305c240de fix syntax errors introduced by github web-ui edits 2023-02-17 16:44:20 -05:00
blessedcoolant
248dc81ec3 build: [WebUI] model-merge 2023-02-18 10:18:29 +13:00
blessedcoolant
ebe0071ed2 feat: [WebUI] Model Merging 2023-02-18 10:13:56 +13:00
spezialspezial
7a518218e5 AttributeError: 'Namespace' object has no attribute 'log_tokenization'
Could be fixed here or alternatively declared in file globals.py
2023-02-17 22:11:49 +01:00
Lincoln Stein
fc14ac7faa Merge branch 'main' into api/add-trigger-string-retrieval 2023-02-17 15:53:57 -05:00
Lincoln Stein
95e2739c47 Merge branch 'main' into bugfix/merge-gui 2023-02-17 15:42:53 -05:00
Lincoln Stein
f129393a2e document add_difference on-screen 2023-02-17 15:42:06 -05:00
Lincoln Stein
c55bbd1a85 Merge branch 'main' into install/refactor-configure-and-model-select 2023-02-17 15:00:33 -05:00
Lincoln Stein
ccba41cdb2 Bugfix/convert v2 models (#2630)
## Convert v2 models in CLI

- This PR introduces a CLI prompt for the proper configuration file to
use when converting a ckpt file, in order to support both inpainting
and v2 model files.
    
- When the user tries to directly !import a v2 model, it prints out a proper
warning that v2 ckpts are not directly supported and converts it into a
diffusers model automatically.

The user interaction looks like this:
```
(stable-diffusion-1.5) invoke> !import_model /home/lstein/graphic-art.ckpt
Short name for this model [graphic-art]: graphic-art-test
Description for this model [Imported model graphic-art]: Imported model graphic-art
What type of model is this?:
[1] A model based on Stable Diffusion 1.X
[2] A model based on Stable Diffusion 2.X
[3] An inpainting model based on Stable Diffusion 1.X
[4] Something else
Your choice: [1] 2
```

In addition, this PR enhances the bulk checkpoint import function. If a
directory path is passed to `!import_model` then it will be scanned for
`.ckpt` and `.safetensors` files. The user will be prompted to import
all the files found, or select which ones to import.

Addresses
https://discord.com/channels/1020123559063990373/1073730061380894740/1073954728544845855
2023-02-17 14:50:54 -05:00
Lincoln Stein
3d442bbf22 Merge branch 'main' into bugfix/convert-v2-models 2023-02-17 14:50:05 -05:00
Lincoln Stein
4888d0d832 fix slider and interpolations
- fix alpha slider to show values from 0.01 to 0.99
- fix interpolation list to show 'difference' method for 3 models,
  and weighted_sum, sigmoid and inverse_sigmoid methods for 2
2023-02-17 14:46:26 -05:00
Lincoln Stein
47de3fb007 correct display of 'add_difference' method when three models defined
- due to a typo, the add_difference method was being displayed as "['add_difference']"
2023-02-17 14:41:02 -05:00
blessedcoolant
41bc160cb8 [WebUI] They see me slidin .. they hatin... (#2614)
Porting over as many usable options to sliders as possible.

- Ported Face Restoration settings to Sliders.
- Ported Upscale Settings to Sliders.
- Ported Variation Amount to Sliders.
- Ported Noise Threshold to Sliders <-- Optimized slider so the values
actually make sense.
- Ported Perlin Noise to Sliders.
- Added a suboption hook for the High Res Strength Slider.
- Fixed a couple of small issues with the Slider component.
- Ported Main Options to Sliders.
2023-02-17 21:58:35 +13:00
psychedelicious
d0ba155c19 chore(ui): build frontend 2023-02-17 19:54:36 +11:00
blessedcoolant
5f0848bf7d feat(ui): add all-sliders option 2023-02-17 19:53:44 +11:00
Lincoln Stein
6551527fe2 Update 050_INSTALLING_MODELS.md (#2690)
Fix typo; "cute" to "cube"
2023-02-16 23:03:30 -05:00
Lincoln Stein
159ce2ea08 Merge branch 'main' into bugfix/convert-v2-models 2023-02-16 23:00:58 -05:00
Steven Frank
3715570d17 Update 050_INSTALLING_MODELS.md
Fix typo; "cute" to "cube"
2023-02-16 19:53:01 -08:00
Lincoln Stein
65a7432b5a disable xformers if cuda not available 2023-02-16 22:20:30 -05:00
Lincoln Stein
557e28f460 Fix workflow path filters (#2689)
remove leading Slash from paths
2023-02-16 22:15:31 -05:00
Lincoln Stein
62a7f252f5 Merge branch 'main' into fix/ci/workflow-path-filters 2023-02-16 22:14:45 -05:00
Lincoln Stein
2fa14200aa Merge branch 'main' into api/add-trigger-string-retrieval 2023-02-16 22:12:39 -05:00
mauwii
0605cf94f0 remove leading Slash from paths 2023-02-17 04:10:40 +01:00
Lincoln Stein
d69156c616 remove superseded code 2023-02-16 22:05:00 -05:00
Lincoln Stein
0963bbbe78 rebuild frontend after merge conflict 2023-02-16 21:52:20 -05:00
Lincoln Stein
f3351a5e47 Merge branch 'main' into install/refactor-configure-and-model-select 2023-02-16 21:51:15 -05:00
Lincoln Stein
f3f4c68acc fix model download and autodetection bugs
- Corrected an error that caused the --full-precision argument to be ignored
  when models were downloaded using the --yes argument.

- Improved autodetection of v1 inpainting files; no longer relies on the
  file having 'inpaint' in the name.
2023-02-16 21:37:50 -05:00
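The commit above doesn't spell out the improved heuristic, but a common way to detect a v1 inpainting checkpoint without trusting the filename is to inspect the UNet's first convolution: inpainting models take 9 input channels instead of 4. A sketch under that assumption, for .ckpt files:

```
import torch

def looks_like_v1_inpainting(checkpoint_path):
    """Heuristic: v1 inpainting UNets have a 9-channel input conv
    (4 latent + 4 masked-image latent + 1 mask) instead of 4."""
    sd = torch.load(checkpoint_path, map_location="cpu")
    sd = sd.get("state_dict", sd)
    weight = sd.get("model.diffusion_model.input_blocks.0.0.weight")
    return weight is not None and weight.shape[1] == 9
```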
Lincoln Stein
5d617ce63d rebuild front end 2023-02-16 20:03:59 -05:00
Kevin Turner
8a0d45ac5a new OffloadingDevice loads one model at a time, on demand (#2596)
* new OffloadingDevice loads one model at a time, on demand

* fixup! new OffloadingDevice loads one model at a time, on demand

* fix(prompt_to_embeddings): call the text encoder directly instead of its forward method

allowing any associated hooks to run with it.

* more attempts to get things on the right device from the offloader

* more attempts to get things on the right device from the offloader

* make offloading methods an explicit part of the pipeline interface

* inlining some calls where device is only used once

* ensure model group is ready after pipeline.to is called

* fixup! Strategize slicing based on free [V]RAM (#2572)

* doc(offloading): docstrings for offloading.ModelGroup

* doc(offloading): docstrings for offloading-related pipeline methods

* refactor(offloading): s/SimpleModelGroup/FullyLoadedModelGroup

* refactor(offloading): s/HotSeatModelGroup/LazilyLoadedModelGroup

to frame it in the same terms as "FullyLoadedModelGroup"

---------

Co-authored-by: Damian Stewart <null@damianstewart.com>
2023-02-16 23:48:27 +00:00
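A stripped-down illustration of the "one model at a time, on demand" idea behind the lazily loaded model group: everything lives on the offload device until it is needed, and loading one model evicts the previously loaded one. The real implementation wires this into the pipeline via hooks; this is only a conceptual sketch:

```
import torch

class LazilyLoadedModelGroup:
    """Keep installed models on an offload device and move exactly one of them
    to the execution device when it is about to be used."""

    def __init__(self, execution_device, offload_device=torch.device("cpu")):
        self.execution_device = torch.device(execution_device)
        self.offload_device = offload_device
        self._models = []
        self._current = None

    def install(self, *models):
        for model in models:
            self._models.append(model)
            model.to(self.offload_device)

    def load(self, model):
        if model is self._current:
            return model
        if self._current is not None:
            self._current.to(self.offload_device)  # evict the previous occupant
        model.to(self.execution_device)
        self._current = model
        return model
```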
Matthias Wild
2468ba7445 skip huge workflows if not needed (#2688)
- filter paths for `build-container.yml` and `test-invoke-pip.yml`
  - add workflow to pass required checks on PRs with `paths-ignore`
  - this triggers if `test-invoke-pip.yml` does not
- fix "CI checks on main link" in `/README.md`
2023-02-16 22:57:36 +01:00
mauwii
65b7d2db47 skip huge workflows if not needed
- filter paths for `build-container.yml` and `test-invoke-pip.yml`
  - add workflow to pass required checks on PRs with `paths-ignore`
  - this triggers if `test-invoke-pip.yml` does not
- fix "CI checks on main link" in `/README.md`
2023-02-16 22:56:39 +01:00
Ryan Cao
e07f1bb89c build frontend 2023-02-16 21:33:47 +01:00
Ryan Cao
f4f813d108 design: smooth progress bar animations 2023-02-16 21:33:47 +01:00
Lincoln Stein
6217edcb6c tweak wording of python version requirements 2023-02-16 12:55:13 -05:00
Lincoln Stein
c5cc832304 check maximum value of python version as well as minimum 2023-02-16 12:52:07 -05:00
blessedcoolant
a76038bac4 [WebUI] Even off JSX string syntax (#2058)
Assuming that mixing `"literal strings"` and `{'JSX expressions'}`
throughout the code is not for an explicit reason but just a result of IDE
autocompletion, I changed all props to be consistent with the
conventional style of using simple string literals where it is
sufficient.

This is a somewhat trivial change, but it makes the code a little more
readable and uniform
2023-02-17 01:22:17 +13:00
blessedcoolant
ff4942f9b4 Merge branch 'main' into pr/2058 2023-02-17 01:05:20 +13:00
blessedcoolant
1ccad64871 build: lint/format ignores stats.html (#2681) 2023-02-17 00:42:51 +13:00
psychedelicious
19f0022bbe build: lint/format ignores stats.html 2023-02-16 20:02:52 +11:00
psychedelicious
ecc7b7a700 builds frontend 2023-02-16 19:54:38 +11:00
David Regla
e46102124e [WebUI] Even off JSX string props
Increased consistency and readability by replacing any unnecessary JSX expressions in places where string literals are sufficient
2023-02-16 19:54:25 +11:00
Lincoln Stein
314ed7d8f6 Merge branch 'main' into install/refactor-configure-and-model-select 2023-02-16 03:24:02 -05:00
Lincoln Stein
b1341bc611 fully functional and ready for review
- quashed multiple bugs in model conversion and importing
- found old issue in handling of resume of interrupted downloads
- will require extensive testing
2023-02-16 03:22:25 -05:00
Lincoln Stein
07be605dcb mostly working 2023-02-16 01:30:59 -05:00
Lincoln Stein
fe318775c3 bring in url download bugfix from PR 2630 2023-02-16 00:37:17 -05:00
Lincoln Stein
1bb07795d8 model installer downloads starter models + user-provided paths and repo_ids
- Ability to scan directory not yet implemented
- Can't download from Civitai due to incomplete URL download implementation
2023-02-16 00:34:15 -05:00
Eugene Brodsky
caf07479ec fix spelling mistake 2023-02-16 00:19:08 -05:00
Johnathon Selstad
508780d07f Also fix .bat file to point at correct configurer 2023-02-16 00:19:08 -05:00
Johnathon Selstad
05e67e924c Make configure_invokeai.py call invokeai_configure 2023-02-16 00:19:08 -05:00
blessedcoolant
fb2488314f fix minor typos (#2666)
Very, very minor typos I noticed.
2023-02-16 10:14:30 +13:00
blessedcoolant
062f58209b Merge branch 'main' into fix_typos 2023-02-16 10:01:28 +13:00
Matthias Wild
7cb9d6b1a6 [WebUI] Model Conversion (#2616)
### WebUI Model Conversion

**Model Search Updates**

- Model Search now has a radio group that allows users to pick the type
of model they are importing. If they know their model has a custom
config file, they can assign it right here. Based on their pick, the
model config data is automatically populated. And this same information
is used when converting the model to `diffusers`.


![firefox_q8b4Iog73A](https://user-images.githubusercontent.com/54517381/218283322-6bf31fd5-349a-410f-991a-2aa50ee8b6e1.png)

- Files named `model.safetensors` and
`diffusion_pytorch_model.safetensors` are excluded from the search
because these are naming conventions used by diffusers models; they
would otherwise end up showing in the list, since our conversion saves
safetensors and not bin files.

**Model Conversion UI**

- The **Convert To Diffusers** button can be found on the Edit page of
any **Checkpoint Model**.


![firefox_VUzv10CZ7m](https://user-images.githubusercontent.com/54517381/218283424-d9864406-ebb3-44a4-9e00-b6adda72d817.png)

- When converting the model, the entire process is handled
automatically. The corresponding config assigned at the time of the ckpt
addition is used in the process.
- Users are presented with the choice on where to save the diffusers
converted model - same location as the ckpt, InvokeAI models root folder
or a completely custom location.


![firefox_HJlR97KY0u](https://user-images.githubusercontent.com/54517381/218283443-b9136edd-b432-4569-a8cc-50961544f31f.png)

- When the model is converted, the checkpoint entry is replaced with the
diffusers model entry. A user can re-add the ckpt if they wish to.

--- 

More or less done. Might make some minor UX improvements as I refine
things.
2023-02-15 21:58:29 +01:00
blessedcoolant
fb721234ec final build (webui-model-conversion) 2023-02-16 09:32:54 +13:00
blessedcoolant
92906aeb08 Merge branch 'main' into webui-model-conversion 2023-02-16 09:31:28 +13:00
Jonathan
cab41f0538 Fix perlin noise generator for diffusers tensors (#2678)
Tensors with diffusers no longer have to be multiples of 8. This broke Perlin noise generation. We now generate noise for the next largest multiple of 8 and return a cropped result. Fixes #2674.
2023-02-15 19:37:42 +01:00
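The workaround described above can be written as a thin wrapper around the existing noise generator: round the latent dimensions up to the next multiple of 8, generate, then crop. A sketch; `perlin_fn` stands in for the project's generator and is assumed to take a width and height:

```
import math
import torch

def perlin_noise_for(latents, perlin_fn):
    """Generate perlin noise for latents whose H/W need not be multiples of 8."""
    _, _, height, width = latents.shape
    h8 = math.ceil(height / 8) * 8
    w8 = math.ceil(width / 8) * 8
    noise = perlin_fn(w8, h8)          # assumed to return an (h8, w8) tensor
    return noise[..., :height, :width]
```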
Kent Keirsey
5d0dcaf81e Fix typo and Hi-Res Bug 2023-02-15 13:06:31 +01:00
psychedelicious
9591c8d4e0 builds frontend 2023-02-15 22:30:47 +11:00
psychedelicious
bcb1fbe031 add tooltips & status messages to model conversion 2023-02-15 22:28:36 +11:00
Lincoln Stein
e87a2fe14b model installer frontend done - needs to be hooked to backend 2023-02-15 01:07:39 -05:00
blhook
d00571b5a4 Revert yarn.lock 2023-02-14 18:05:24 -08:00
fattire
b08a514594 missed one. 2023-02-14 17:49:01 -08:00
Eugene Brodsky
265ccaca4a Merge branch 'main' into enhance/update-menu 2023-02-14 20:48:36 -05:00
fattire
7aa6c827f7 fix minor typos 2023-02-14 17:38:21 -08:00
Jonathan
093174942b Add thresholding for all diffusers types (#2479)
`generator` now asks `InvokeAIDiffuserComponent` to do postprocessing work on latents after every step. Thresholding - now implemented as replacing latents outside of the threshold with random noise - is called at this point. This postprocessing step is also where we can hook up symmetry and other image latent manipulations in the future.

Note: code at this layer doesn't need to worry about MPS as relevant torch functions are wrapped and made MPS-safe by `generator.py`.
2023-02-14 18:00:34 -06:00
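A minimal sketch of the thresholding step as described: after each denoising step, latent values whose magnitude exceeds the threshold are swapped for fresh random noise rather than clamped. Function and argument names are illustrative:

```
import torch

def threshold_latents(latents, threshold):
    """Replace out-of-range latent values with random noise after a step."""
    if threshold <= 0:
        return latents
    replacement = torch.randn_like(latents)
    return torch.where(latents.abs() > threshold, replacement, latents)
```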
Lincoln Stein
f299f40763 convert existing model display to column format 2023-02-14 16:32:54 -05:00
Lincoln Stein
7545e38655 frontend design done; functionality not hooked up yet 2023-02-14 00:02:19 -05:00
Lincoln Stein
0bc55a0d55 Fix link to the installation documentation
Broken link in the README. Now pointing to correct mkdocs file.
2023-02-14 04:15:23 +01:00
Lincoln Stein
d38e7170fe fix broken !import_model downloads
1. Now works with sites that produce lots of redirects, such as CIVITAI
2. Derive name of destination model file from HTTP Content-Disposition header,
   if present.
3. Swap \\ for / in file paths provided by users, to hopefully fix issues with
   Windows.
2023-02-13 22:14:24 -05:00
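The three fixes above amount to: follow redirect chains, prefer the server's `Content-Disposition` filename, and normalise user-supplied backslashes. A rough sketch using `requests`, not the actual downloader:

```
import re
from pathlib import Path

import requests

def download_model(url, dest_dir):
    """Download a model file, following redirects (e.g. Civitai) and using the
    Content-Disposition filename when the server provides one."""
    dest_dir = Path(str(dest_dir).replace("\\", "/"))  # tolerate Windows-style input
    resp = requests.get(url, stream=True, allow_redirects=True, timeout=60)
    resp.raise_for_status()
    match = re.search(r'filename="?([^";]+)"?', resp.headers.get("Content-Disposition", ""))
    filename = match.group(1) if match else Path(resp.url).name
    dest = dest_dir / filename
    with open(dest, "wb") as out:
        for chunk in resp.iter_content(chunk_size=1 << 20):
            out.write(chunk)
    return dest
```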
mauwii
15a9412255 some small formatting fixes 2023-02-13 23:10:58 +01:00
Lincoln Stein
e29399e032 don't even try to load incompatible embeddings 2023-02-13 17:00:52 -05:00
Lincoln Stein
bc18a94d8c add ability to retrieve current list of embedding trigger strings
This PR adds a new attribute to ldm.generate, `embedding_trigger_strings`:

```
gen = Generate(...)
strings = gen.embedding_trigger_strings
strings = gen.embedding_trigger_strings()
```

The trigger strings will change when the model is updated to show only
those strings which are compatible with the current
model. Dynamically-downloaded triggers from the HF Concepts Library
will only show up after they are used for the first time. However, the
full list of concepts available for download can be retrieved
programmatically like this:

```
from ldm.invoke.concepts_lib import HuggingFaceConceptsLibrary
concepts = HuggingFaceConceptsLibrary()
trigger_strings = concepts.list_concepts()
```
2023-02-13 14:11:36 -05:00
Lincoln Stein
5d2bdd478c Merge branch 'main' into bugfix/convert-v2-models 2023-02-13 13:15:05 -05:00
Lincoln Stein
9cacba916b Merge branch 'main' into install/refactor-configure-and-model-select 2023-02-13 09:31:34 -05:00
Lincoln Stein
628e82fa79 Added arabic locale files (#2561)
I have added the Arabic locale files. There need to be some
modifications to the code in order to detect the language direction and
add it to the current document body properties.

For example, we can use this:

import { appWithTranslation, useTranslation } from "next-i18next";
import React, { useEffect } from "react";

  const { t, i18n } = useTranslation();
  const direction = i18n.dir();
  useEffect(() => {
    document.body.dir = direction;
  }, [direction]);

This should be added to the app file. It uses next-i18next to
automatically get the current language and sets the body text direction
(ltr or rtl) depending on the selected language.
2023-02-13 07:45:16 -05:00
Lincoln Stein
fbbbba2fac correct crash on edge case 2023-02-13 07:40:15 -05:00
blessedcoolant
9cbf9d52b4 Merge branch 'main' into pr/2561 2023-02-13 23:48:18 +13:00
blessedcoolant
fb35fe1a41 Merge branch 'main' into pr/2561 2023-02-13 23:47:21 +13:00
psychedelicious
b60b5750af builds frontend 2023-02-13 21:23:26 +11:00
psychedelicious
3ff40114fa adds arabic to language picker 2023-02-13 21:22:39 +11:00
psychedelicious
71c6ae8789 fixes mislocated language file 2023-02-13 21:22:18 +11:00
psychedelicious
d9a7536fa8 moves languages to fallback lang (en) 2023-02-13 21:21:46 +11:00
Lincoln Stein
99f4417cd7 Improve error messages from Textual Inversion and Merge scripts (#2641)
## Provide informative error messages when TI and Merge scripts have
insufficient space for console UI

- The invokeai-ti and invokeai-merge scripts will crash if there is not
enough space in the console to fit the user interface (even after
responsive formatting).

- This PR intercepts the errors and prints a useful error message
advising user to make window larger.
2023-02-13 00:12:32 -05:00
Lincoln Stein
47f94bde04 Merge branch 'main' into install/refactor-configure-and-model-select 2023-02-12 23:59:31 -05:00
Lincoln Stein
197e6b95e3 add missing file 2023-02-12 23:59:18 -05:00
Lincoln Stein
8e47ca8d57 Merge branch 'main' into bugfix/prevent-ti-frontend-crash 2023-02-12 23:56:41 -05:00
Lincoln Stein
714fff39ba add new console frontend to initial model selection, and other improvements
1. The invokeai-configure script has now been refactored. The work of
   selecting and downloading initial models at install time is now done
   by a script named invokeai-initial-models (module
   name is ldm.invoke.config.initial_model_select)

   The calling arguments for invokeai-configure have not changed, so
   nothing should break. After initializing the root directory, the
   script calls invokeai-initial-models to let the user select the
   starting models to install.

2. invokeai-initial-models puts up a console GUI with checkboxes to
   indicate which models to install. It respects the --default_only
   and --yes arguments so that CI will continue to work.

3. User can now edit the VAE assigned to diffusers models in the CLI.

4. Fixed a bug that caused a crash during model loading when the VAE
   is set to None, rather than being empty.
2023-02-12 23:52:44 -05:00
Eugene Brodsky
89239d1c54 (updater) style 'pip' progress to use dark background 2023-02-12 19:10:11 -05:00
blhook
c03d98cf46 Implement a cancel after next iteration button 2023-02-12 15:56:03 -08:00
Lincoln Stein
d1ad46d6f1 ask user to make window larger if not enough space for textual inversion/merge gui
- The invokeai-ti and invokeai-merge scripts will crash if there is not enough space
  in the console to fit the user interface (even after responsive formatting).

- This PR intercepts the errors and prints a useful error message advising user to
  make window larger.
2023-02-12 17:38:46 -05:00
Lincoln Stein
6ae7560f66 Merge branch 'main' into webui-model-conversion 2023-02-12 17:22:32 -05:00
Lincoln Stein
e561d19206 a few adjustments
- fix unused variables and f-strings found by pyflakes
- use global_converted_ckpts_dir() to find location of diffusers
- fixed bug in model_manager that was causing the description of converted
  models to read "Optimized version of {model_name}"
2023-02-12 17:20:13 -05:00
Jonathan
9eed1919c2 Strategize slicing based on free [V]RAM (#2572)
Strategize slicing based on free [V]RAM when not using xformers. Free [V]RAM is evaluated at every generation. When there's enough memory, the entire generation occurs without slicing. If there is not enough free memory, we use diffusers' sliced attention.
2023-02-12 18:24:15 +00:00
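Conceptually, the strategy above checks free VRAM before each generation and only enables diffusers' sliced attention when the estimated attention workload would not fit. A hedged sketch; the memory estimate here is a stand-in, not the PR's actual heuristic:

```
import torch

def choose_attention_strategy(pipeline, latents, bytes_per_element=2):
    """Enable sliced attention only when a rough attention-memory estimate
    exceeds the free VRAM reported by the driver."""
    if latents.device.type != "cuda":
        pipeline.enable_attention_slicing()
        return
    free_vram, _ = torch.cuda.mem_get_info()  # free/total bytes on the current device
    batch, _, h, w = latents.shape
    tokens = h * w
    estimated = batch * tokens * tokens * bytes_per_element * 8  # crude per-head fudge factor
    if estimated > free_vram:
        pipeline.enable_attention_slicing()
    else:
        pipeline.disable_attention_slicing()
```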
blessedcoolant
b87f7b1129 Update Model Conversion Help Text 2023-02-13 00:30:50 +13:00
blessedcoolant
7410a60208 Merge branch 'main' into webui-model-conversion 2023-02-12 23:35:49 +13:00
Matthias Wild
7c86130a3d add merge_group trigger to test-invoke-pip.yml (#2590) 2023-02-12 05:00:04 +01:00
Lincoln Stein
58a1d9aae0 Merge branch 'main' into update/ci/prepare-test-invoke-pip-for-queue 2023-02-11 22:38:55 -05:00
Lincoln Stein
24e32f6ae2 add 'update' action to launcher script
- Adds an update action to launcher script
- This action calls new python script `invokeai-update`, which prompts
  user to update to latest release version, main development version,
  or an arbitrary git tag or branch name.
- It then uses `pip` to update to whatever tag was specified.
2023-02-11 22:32:48 -05:00
Lincoln Stein
3dd7393984 Huge Docker Update - better caching, don't use root user, include dockerhub and more.... (#2597)
Some of the core features of this PR include:

- optional push image to dockerhub (will be skipped in repos which
didn't set it up)
- stop using the root user at runtime
- trigger builds also for update/docker/* and update/ci/docker/*
- always cache image from current branch and main branch
- separate caches for container flavors
- updated comments with instructions in build.sh and run.sh
2023-02-11 18:25:48 -05:00
Lincoln Stein
f18f743d03 Merge branch 'main' into update/docker/include-dockerhub 2023-02-11 18:03:03 -05:00
Lincoln Stein
c660dcdfcd improve ability to bulk import .ckpt and .safetensors
This commit cleans up the code that did bulk imports of legacy model
files. The code has been refactored, and the user is now offered the
option of importing all the model files found in the directory, or
selecting which ones to import.
2023-02-11 17:59:12 -05:00
blessedcoolant
9e0250c0b4 Merge branch 'main' into webui-model-conversion 2023-02-12 11:13:13 +13:00
blessedcoolant
08c747f1e0 test-build (model-conversion-v1) 2023-02-12 11:12:23 +13:00
blessedcoolant
04ae6fde80 Model Manager localization updates 2023-02-12 11:11:00 +13:00
blessedcoolant
b1a53c8ef0 {Model Manager] Backend update to support custom save locations and configs 2023-02-12 11:10:47 +13:00
blessedcoolant
cd64511f24 [Model Manager] Allows uses to pick Diffusers converted model save location
Users can now pick the folder to save their diffusers-converted model. It can be the same folder as the ckpt, the Invoke root models folder, or a totally custom location.
2023-02-12 11:10:17 +13:00
blessedcoolant
1e98e0b159 [Model Manager] Allow users to pick model type
Users can now pick model type when adding a new model and the configuration files are automatically applied.
2023-02-12 11:09:09 +13:00
Lincoln Stein
4f7af55bc3 if importing a v2 ckpt model, convert to diffusers 2023-02-11 16:35:45 -05:00
Lincoln Stein
d0e6a57e48 make inpaint model conversion work
Fixed a couple of bugs:

1. The original config file for the ckpt file is derived from the entry in
   `models.yaml` rather than relying on the user to select. The implication
   of this is that V2 ckpt models need to be assigned `v2-inference-v.yaml`
   when they are first imported. Otherwise they won't convert right. Note
   that currently V2 ckpts are imported with `v1-inference.yaml`, which
   isn't right either.

2. Fixed a backslash in the output diffusers path, which was causing
   load failures on Linux.

Remaining issues:

1. The radio buttons for selecting the model type are
   nonfunctional. It feels to me like these should be moved into the
   dialogue for importing ckpt/safetensors files, because this is
   where the algorithm needs help from the user.

2. The output diffusers model is written into the same directory as
   the input ckpt file. The CLI does it differently and stores the
   diffusers model in `ROOTDIR/models/converted-ckpts`. We should
   settle on one way or the other.
2023-02-11 15:53:41 -05:00
Lincoln Stein
d28a486769 rebuild frontend 2023-02-11 15:07:12 -05:00
Lincoln Stein
84722d92f6 foo 2023-02-11 15:06:34 -05:00
Lincoln Stein
8a3b5ac21d rebuild frontend 2023-02-11 14:58:49 -05:00
Lincoln Stein
717d53a773 Merge branch 'main' into bugfix/convert-v2-models 2023-02-11 14:27:52 -05:00
blessedcoolant
96926d6648 v2 Conversion Support & Radio Picker
Converted the picker options to a Radio Group and also updated the backend to use the appropriate config if it is a v2 model that needs to be converted.
2023-02-12 05:00:29 +13:00
Lincoln Stein
f3639de8b1 add note in manual that directly running v2 models not supported 2023-02-11 09:43:14 -05:00
Lincoln Stein
b71e675e8d support conversion of v2 models
- This PR introduces a CLI prompt for the proper configuration file to
  use when converting a ckpt file, in order to support both inpainting
  and v2 model files.

- When the user tries to directly !import a v2 model, it prints out a proper
  warning that v2 ckpts are not directly supported.
2023-02-11 09:39:41 -05:00
tyler
d3c850104b pulling esrgan denoise strength through to the generate API. 2023-02-12 02:47:37 +13:00
tyler
c00155f6a4 pulling esrgan denoise strength through to the generate API. 2023-02-12 02:47:37 +13:00
Lincoln Stein
8753070fc7 Fix Incorrect Windows Environment Activation Location (Manual Installation Documentation) (#2627)
## What was the problem/requirement? (What/Why)
* The Windows location of the Python environment's activate script is
currently incorrect
  * Due to this, this command will fail for Windows-based users
* The contributing link within the `Developer Install` sections leads to
a [404](https://invoke-ai.github.io/index.md#Contributing)
* `Developer Install`'s numbered list currently lists 1, 1, 2, . . .

## What was the solution? (How)
* Changed the location of Windows script based on actual location -
[reference](https://docs.python.org/3/library/venv.html)
* Moved the link to point to one directory higher -- the main index.md
* Minor format adjustments to allow for the numbered list to appear as
expected

## How were these changes tested?
* `mkdocs serve` => Verified on local server that the changes reflected
as expected

## Notes
The Contributing doc says to set the upstream to the `development`
branch, but that branch has been untouched for several months, so I've
pointed to the `main` branch. Let me know if we need to switch to a
different one.
2023-02-11 08:15:17 -05:00
Lincoln Stein
ed8f9f021d Merge branch 'main' into update-installation-documents 2023-02-11 07:46:29 -05:00
Lincoln Stein
3ccc705396 fix two bugs in conversion of inpaint models from ckpt to diffusers m… (#2620)
…odels

- If the CLI was asked to convert the currently loaded model, the model would
crash on the first rendering. The CLI will now refuse to convert a model
loaded in memory (probably a good idea in any case).

- The CLI will offer `v1-inpainting-inference.yaml` as the configuration
file when importing an inpainting .ckpt or .safetensors file that has
"inpainting" in the name. Otherwise it offers `v1-inference.yaml` as the
default.
2023-02-11 07:45:06 -05:00
blessedcoolant
11e422cf29 Ignore two files names instead of the entire folder
Rather than bypassing any path with "diffusers" in it, I'm specifically bypassing model.safetensors and diffusion_pytorch_model.safetensors, both of which should be diffusers files in most cases.
2023-02-12 00:13:22 +13:00
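Put together with the bulk-import scanning mentioned elsewhere in this log, the exclusion rule can be expressed as a simple filter over the directory walk. A sketch under those assumptions, not the actual search code:

```
from pathlib import Path

DIFFUSERS_INTERNAL_NAMES = {"model.safetensors", "diffusion_pytorch_model.safetensors"}

def find_importable_checkpoints(root):
    """Collect .ckpt/.safetensors candidates, skipping the filenames that
    diffusers uses internally so converted models don't appear in the results."""
    return sorted(
        path
        for path in Path(root).rglob("*")
        if path.suffix in {".ckpt", ".safetensors"}
        and path.name not in DIFFUSERS_INTERNAL_NAMES
    )
```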
blessedcoolant
7f695fed39 Ignore safetensor or ckpt files inside diffusers model folders.
Basically skips the path if the path has the word diffusers anywhere inside it.
2023-02-12 00:03:42 +13:00
blessedcoolant
310501cd8a Add support for custom config files 2023-02-11 23:34:24 +13:00
blhook
106b3aea1b Fix incorrect Windows env activation location
Change broken link to Contributing inside of Developer Install
Minor format modification to allow for numbered list to appear properly
2023-02-11 00:30:07 -08:00
blessedcoolant
6e52ca3307 Model Convert Component 2023-02-11 20:41:49 +13:00
blessedcoolant
94c31f672f Add Initial Checks for Inpainting
The conversion itself is broken. But that's another issue.
2023-02-11 20:41:18 +13:00
Saifeddine ALOUI
240bbb9852 Merge branch 'main' into main 2023-02-11 01:17:42 +01:00
Matthias Wild
8cf2ed91a9 Merge branch 'main' into update/docker/include-dockerhub 2023-02-10 22:55:54 +01:00
mauwii
7be5b4ca8b update Dockerfile
- introduce build arg `VOLUME_DIR`
- fix permissions of the Volume
2023-02-10 22:55:19 +01:00
Lincoln Stein
d589ad96aa fix two bugs in conversion of inpaint models from ckpt to diffusers models
- If the CLI was asked to convert the currently loaded model, the model would crash
  on the first rendering. The CLI will now refuse to convert a model loaded
  in memory (probably a good idea in any case).

- The CLI will offer `v1-inpainting-inference.yaml` as the configuration
  file when importing an inpainting .ckpt or .safetensors file that
  has "inpainting" in the name. Otherwise it offers `v1-inference.yaml`
  as the default.
2023-02-10 15:06:37 -05:00
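The configuration-file rules scattered through these commits (inpainting names get `v1-inpainting-inference.yaml`, SD 2.x checkpoints need `v2-inference-v.yaml`, everything else defaults to `v1-inference.yaml`) can be summarised in one small helper. A sketch; `model_type` is a hypothetical hint from the importer, not an actual parameter in the codebase:

```
def default_config_for(ckpt_name, model_type=""):
    """Pick the ckpt-to-diffusers conversion config to offer by default."""
    if model_type == "v2":
        return "v2-inference-v.yaml"
    if "inpaint" in ckpt_name.lower():
        return "v1-inpainting-inference.yaml"
    return "v1-inference.yaml"
```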
Lincoln Stein
097e41e8d2 2.3.0 Documentation Fixes (#2609)
Found a couple of places where the formatting was messed up. I also
added a "Quick Start Guide" to the README for people who encounter
InvokeAI through PyPi. It features the PyPi install!
2023-02-10 13:00:14 -05:00
mauwii
4cf43b858d update 020_INSTALL_MANUAL.md
- some formatting changes / fixes
- updates venv creation commands
- remove extra index from Mac Installations
2023-02-10 17:29:12 +01:00
mauwii
13a4666a6e update README.md
- fix some formatting issues
- fix command to create venv
- some other small updates
2023-02-10 16:27:21 +01:00
blessedcoolant
9232290950 Initial Implementation - Model Conversion Frontend 2023-02-11 03:53:31 +13:00
blessedcoolant
f3153d45bc Initial Implementation - Model Conversion Backend 2023-02-11 03:53:15 +13:00
Lincoln Stein
d9cb6da951 Merge branch 'main' into update/docker/include-dockerhub 2023-02-10 09:42:07 -05:00
Saifeddine ALOUI
17535d887f Merge branch 'invoke-ai:main' into main 2023-02-10 07:58:28 +01:00
Lincoln Stein
35da7f5b96 Merge branch 'main' into doc/manual-install-fixes 2023-02-09 21:55:21 -05:00
Lincoln Stein
4e95a68582 adding support for ESRGAN denoising strength (#2598)
pulling in denoising support from upstream (it's already there, Invoke
just isn't using it). I've enabled this as a command line argument as
construction of the ESRGAN handler happens once. Ideally this would be a
UI option that could be adjusted for each upscaling task. Unfortunately
that is beyond my current level of InvokeAI-foo.

Upstream reference is here, starting on line 99 "use dni to control the
denoise strength"

https://github.com/xinntao/Real-ESRGAN/blob/master/inference_realesrgan.py
2023-02-09 21:55:00 -05:00
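The upstream "dni" trick the PR points to is, at heart, a linear interpolation between two ESRGAN weight sets (a weak-denoise one and a strong-denoise one), with the denoise strength as the mixing weight. A generic sketch of that idea, not the Real-ESRGAN API itself:

```
def blend_esrgan_weights(sd_weak_denoise, sd_strong_denoise, denoise_strength):
    """Interpolate two ESRGAN state dicts so denoise_strength (0..1) controls
    how aggressively noise is removed during upscaling."""
    return {
        key: denoise_strength * sd_strong_denoise[key]
        + (1.0 - denoise_strength) * sd_weak_denoise[key]
        for key in sd_weak_denoise
    }
```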
Lincoln Stein
9dfeb93f80 add quick install instructions to README 2023-02-09 20:28:20 -05:00
blessedcoolant
02247ffc79 resolved build (denoise_str) 2023-02-10 14:12:21 +13:00
blessedcoolant
48da030415 resolving conflicts 2023-02-10 14:03:31 +13:00
Lincoln Stein
817e04bee0 add quickstart instructions for PyPi 2023-02-09 19:37:04 -05:00
Lincoln Stein
e5d0b0c37d Formatting fixes, Manual Installation docs
Found a couple of places where the formatting was messed up. This corrects them.
2023-02-09 17:56:25 -05:00
blessedcoolant
950f450665 Merge branch 'main' into main 2023-02-10 11:52:26 +13:00
Lincoln Stein
f5d1fbd896 Update main to release v2.3.0 (#2608)
# Release 2.3.0

This will bring `main` up to date with release 2.3.0. I will need
approvals from @mauwii (docs) and @blessedcoolant (for _version.py).
2023-02-09 17:28:55 -05:00
Lincoln Stein
424cee63f1 Merge branch 'main' into release/2.3.0-last-tweaks 2023-02-09 16:36:51 -05:00
blessedcoolant
79daf8b039 clean build (esrgan-denoise-str) 2023-02-10 10:20:37 +13:00
blessedcoolant
383cbca896 lint-resolve 2023-02-10 10:16:55 +13:00
psychedelicious
07c55d5e2a adds upscaling denoising to metadata viewer 2023-02-10 07:30:17 +11:00
blessedcoolant
156151df45 build (esrgan-denoise-str) 2023-02-10 09:19:55 +13:00
blessedcoolant
03b1d71af9 Resolving Conflicts 2023-02-10 09:18:02 +13:00
blessedcoolant
da193ecd4a ESLint EOL Fix 2023-02-10 09:11:07 +13:00
psychedelicious
56fd202e21 builds frontend 2023-02-10 08:24:40 +13:00
Jonathan
29454a2974 Update generationSlice.ts 2023-02-10 08:24:40 +13:00
Jonathan
c977d295f5 Update generationSlice.ts 2023-02-10 08:24:40 +13:00
Jonathan
28eaffa188 Update generationSlice.ts
Added perlin noise state restoration.
2023-02-10 08:24:40 +13:00
psychedelicious
3feff09fb3 fixes #2049 use threshold not setting correct value 2023-02-10 08:24:40 +13:00
Lincoln Stein
158d1ef384 bump version number; update contributors 2023-02-09 13:01:08 -05:00
Matthias Wild
f6ad107fdd Merge branch 'main' into update/ci/prepare-test-invoke-pip-for-queue 2023-02-09 08:48:06 +01:00
blessedcoolant
e2c392631a build (esrgan-denoise-str) 2023-02-09 20:21:22 +13:00
blessedcoolant
4a1b4d63ef Change denoise_str default to 0.75 2023-02-09 20:21:09 +13:00
blessedcoolant
83ecda977c Add frontend UI for denoise_str for ESRGAN 2023-02-09 20:19:25 +13:00
blessedcoolant
9601febef8 Add denoise_str to ESRGARN - frontend server 2023-02-09 20:16:47 +13:00
blessedcoolant
0503680efa Change denoise_str to an arg instead of a class variable 2023-02-09 20:16:23 +13:00
mauwii
57ccec1df3 remove metadata to summary step since secret use
- This PR will also close #2593
2023-02-09 07:24:28 +01:00
Matthias Wild
22f3634481 Merge branch 'main' into update/docker/include-dockerhub 2023-02-09 07:18:45 +01:00
blessedcoolant
5590c73af2 Prettified Frontend 2023-02-09 19:16:36 +13:00
coreco
1f76b30e54 adding support for ESRGAN denoising strength, which allows for improved detail retention when upscaling photorealistic faces 2023-02-08 22:36:35 -06:00
Lincoln Stein
4785a1cd05 Up version to 2.3.0-rc7 (#2591)
This brings `main` up to date with 2.3.0 release candidate 7.
2023-02-08 22:06:58 -05:00
mauwii
8bd04654c7 remove Trash folder if existing 2023-02-09 04:00:51 +01:00
Lincoln Stein
2876c4ddec Merge branch 'main' into 2.3.0rc7 2023-02-08 21:40:14 -05:00
mauwii
0dce3188cc make DOCKERHUB_USERNAME a secret 2023-02-09 03:35:16 +01:00
mauwii
106c7aa956 bindmount outputs directory to ./docker/outputs 2023-02-09 02:48:12 +01:00
mauwii
b04f199035 revert caching to main; cache to ref_name 2023-02-09 01:33:08 +01:00
mauwii
a2b992dfd1 always cache-to main 2023-02-09 01:25:32 +01:00
mauwii
745e253a78 fix condition of Docker Hub Description step 2023-02-09 01:05:08 +01:00
mauwii
2ea551d37d create user home, expose port 2023-02-09 00:56:09 +01:00
mauwii
8d1481ca10 cache from main and ref_name 2023-02-09 00:28:04 +01:00
mauwii
307e7e00c2 only push if refs/heads/main or refs/tags/* 2023-02-09 00:15:46 +01:00
Lincoln Stein
4bce81de26 blank out lstein's employer info 2023-02-08 18:08:02 -05:00
mauwii
c3ad1c8a9f remove long sha from container tags 2023-02-09 00:06:09 +01:00
mauwii
05d51d7b5b re-use --link, lock pip cache 2023-02-08 23:45:15 +01:00
mauwii
09f69a4d28 Add output when activating venv 2023-02-08 23:39:28 +01:00
mauwii
a338af17c8 Initialize PIP_CACHE_DIR after setting the env 2023-02-08 23:17:15 +01:00
Matthias Wild
bc82fc0cdd Merge branch 'invoke-ai:main' into main 2023-02-08 22:46:16 +01:00
Saifeddine
418a3d6e41 Merge branch 'main' of https://github.com/ParisNeo/ArtBot 2023-02-08 21:59:58 +01:00
Saifeddine
fbcc52ec3d upgraded Arabic localization 2023-02-08 21:59:53 +01:00
Saifeddine ALOUI
47e89f4ba1 Merge branch 'invoke-ai:main' into main 2023-02-08 21:59:27 +01:00
Lincoln Stein
12d15a1a3f Up version to 2.3.0-rc7 2023-02-08 15:55:35 -05:00
mauwii
888d3ae968 cleanup Dockerfile 2023-02-08 21:54:06 +01:00
mauwii
a28120abdd small improvements to env.sh 2023-02-08 21:53:57 +01:00
Lincoln Stein
2aad4dab90 Initial Slider & Img2Img=1 Updates (#2467)
Adding a slider for Hi Res Fix to control Img2Img

Updated Img2img to accept values of 1 (replacing Inpaint Replace)
2023-02-08 15:50:44 -05:00
mauwii
4493d83aea update docker-scripts instructions 2023-02-08 21:35:19 +01:00
mauwii
eff0fb9a69 add merge_group trigger to test-invoke-pip.yml 2023-02-08 21:23:13 +01:00
Lincoln Stein
c19107e0a8 Merge branch 'main' into Img2Img-Slider-Updates 2023-02-08 15:21:46 -05:00
Lincoln Stein
eaf29e1751 Make menu options in invoke.bat the same as options in invoke.sh (#2588)
- This makes the launcher options menu on Windows look and act the same
as the Linux/Mac launcher, which previously was lacking the command-line
help option and didn't list item (6) as an option.
2023-02-08 15:20:43 -05:00
psychedelicious
d964374a91 builds frontend 2023-02-09 07:03:58 +11:00
Kent Keirsey
9826f80d7f Initial Slider & Img2Img=1 Updates 2023-02-09 07:02:39 +11:00
Lincoln Stein
ec89bd19dc Merge branch 'main' into installer/fix-launcher-menu 2023-02-08 14:54:36 -05:00
Lincoln Stein
23aaf54f56 Documentation for 2.3.0 (#2564)
Work in progress. I am reviewing and updating the documentation for
2.3.0. The following sections need to be done:

- [x] index.md
- [x] installation/010_INSTALL_AUTOMATED.md
- [x] installation/020_INSTALL_MANUAL.md
- [x] installation/030_INSTALL_CUDA_AND_ROCM.md (needs to be written
from scratch)
- [x] installation/040_INSTALL_DOCKER.md
- [x] installation/050_INSTALLING_MODELS.md
- [x] features/CLI.md
- [x] features/WEB.md
2023-02-08 14:54:20 -05:00
Lincoln Stein
6d3cc25bca Merge branch 'main' into 2.3-documentation-fixes 2023-02-08 14:29:35 -05:00
Lincoln Stein
c9d246c4ec Update 050_INSTALLING_MODELS.md (#2576)
Using Windows 10 I found I needed to use double backslashes to import a
new model, when using single backslash the output would say
"e:_ProjectsCodemodelsldmstable-diffusion-model-to-import.ckpt is
neither the path to a .ckpt file nor a diffusers repository id. Can't
import." This added tip in the documentation will help Windows users
overcome this.
2023-02-08 14:25:36 -05:00
mauwii
74406456f2 Fix links (ignored deprecated folder) 2023-02-08 20:07:27 +01:00
Lincoln Stein
8e0cd2df18 add 2.3.0 release date 2023-02-08 14:06:53 -05:00
Lincoln Stein
4d4b1777db Merge branch 'main' into patch-1 2023-02-08 13:59:47 -05:00
Lincoln Stein
d6e5da6e37 deprecated out of date FAQ 2023-02-08 13:58:17 -05:00
mauwii
5bb0f9bedc update Dockerfile - more simple user creation 2023-02-08 19:57:41 +01:00
Lincoln Stein
dec7d8b160 fix up the features/overview document 2023-02-08 13:52:02 -05:00
Lincoln Stein
4ecf016ace Merge branch 'main' into 2.3-documentation-fixes 2023-02-08 12:47:27 -05:00
Lincoln Stein
4d74af2363 Update docs/installation/030_INSTALL_CUDA_AND_ROCM.md
Co-authored-by: Eugene Brodsky <ebr@users.noreply.github.com>
2023-02-08 12:46:36 -05:00
Lincoln Stein
c6a2ba12e2 finished CLI, IMG2IMG and WEB updates 2023-02-08 12:45:56 -05:00
Lincoln Stein
350b5205a3 fix crash when --prompt="prompt" is used in CLI (#2579)
- The following were supposed to be equivalent, but the latter crashes:
```
invoke> banana sushi
invoke> --prompt="banana sushi"
```
This PR fixes the problem.

- Fixes #2548
2023-02-08 11:36:20 -05:00
Lincoln Stein
06028e0131 Merge branch 'main' into bugfix/cli-crash-on-prompt-arg 2023-02-08 11:06:48 -05:00
Lincoln Stein
c6d13e679f make menu options in invoke.bat the same as options in invoke.sh
- This makes the launcher options menu on Windows look and act the same
  as the Linux/Mac launcher, which previously was lacking the command-line
  help option and didn't list item (6) as an option.
2023-02-08 11:04:00 -05:00
psychedelicious
72357266a6 fixes #2578 use prompt bug on webkit browsers 2023-02-09 02:25:57 +13:00
Lincoln Stein
9d69843a9d fix screenshot directory name 2023-02-08 07:57:46 -05:00
Lincoln Stein
0547d20b2f crop screenshots 2023-02-08 07:54:27 -05:00
Lincoln Stein
2af6b8fbd8 screenshot revision 2023-02-08 07:46:47 -05:00
psychedelicious
0cee72dba5 fixes #2525 del hotkey doesn't work after canceling
The `useHotkeys` hook for this hotkey didn't have `isConnected` or `isProcessing` in its dependencies array. This prevented `handleDelete()` from dispatching the delete request.
2023-02-09 01:37:55 +13:00
psychedelicious
77c11a42ee fixes #2505 add preserve masked to status text 2023-02-09 01:10:59 +13:00
mauwii
bf812e6493 hotfix - use context github.ref_name for cache 2023-02-08 11:01:55 +01:00
Matthias Wild
a3da12d867 Merge branch 'invoke-ai:main' into main 2023-02-08 10:22:48 +01:00
Lincoln Stein
1d62b4210f First draft of CODEOWNERS (#2558)
This is an early draft of a codeowners file for InvokeAI. It has plenty
of gaps in it. Please use this PR to add yourself and others where
appropriate.
2023-02-08 01:13:45 -05:00
Lincoln Stein
d5a3571c00 Merge branch 'main' into dev/codeowner-assignment 2023-02-08 00:46:31 -05:00
Lincoln Stein
8b2ed9b8fd finished work on INSTALLING MODELS 2023-02-08 00:40:21 -05:00
Lincoln Stein
24792eb5da add CUDA and ROCm installation instructions 2023-02-07 23:02:45 -05:00
Lincoln Stein
614220576f add that forward slashes work too 2023-02-07 23:01:59 -05:00
Lincoln Stein
70bcbc7401 Better AMD clarification (#2536)
To better clarify that AMD is supported when using Linux
2023-02-07 22:36:40 -05:00
Lincoln Stein
492605ac3e Merge branch 'main' into patch-1 2023-02-07 22:14:39 -05:00
Lincoln Stein
67f892455f fix crash when --prompt="prompt" is used in CLI
- The following were supposed to be equivalent, but the latter crashes:
```
invoke> banana sushi
invoke> --prompt="banana sushi"
```
This PR fixes the problem.

- Fixes #2548
2023-02-07 22:09:34 -05:00
Lincoln Stein
ae689d1a4a add platform-specific help instructions to installer (#2530)
This adds some platform-specific help messages to the installer welcome
screen:

- For Windows, the message encourages them to install VC++ core
libraries and the registry long name patch
- For MacOSX, the message warns the user to install the XCode tools.
2023-02-07 20:47:58 -05:00
Lincoln Stein
10990799db Merge branch 'main' into dev/codeowner-assignment 2023-02-07 20:29:38 -05:00
Lincoln Stein
c5b4397212 Merge branch 'main' into installer/platform-specific-help 2023-02-07 20:25:02 -05:00
LoganPederson
f62bbef9f7 Update 050_INSTALLING_MODELS.md
I found I needed to use double backslashes to import a new model, when using single backslash the output would say "e:_ProjectsCodemodelsldmstable-diffusion-model-to-import.ckpt is neither the path to a .ckpt file nor a diffusers repository id. Can't import." This added tip in the documentation will help Windows users overcome this.
2023-02-07 18:19:59 -06:00
Matthias Wild
6b4a06c3fc Merge branch 'invoke-ai:main' into main 2023-02-08 00:25:49 +01:00
mauwii
9157da8237 Begun to fill the empty CUDA/ROCm doc
🤡
2023-02-08 00:05:24 +01:00
Lincoln Stein
9c2b9af3a8 Bring main up to 2.3.0-rc6 (#2563)
This bumps up the version number, and also applies a hotfix to the
configure script to fix the problem described in PR #2562
2023-02-07 18:02:13 -05:00
mauwii
3833b28132 Create a new user for the container runtime
without root permission
2023-02-07 23:46:43 +01:00
Lincoln Stein
e3419c82e8 Merge branch 'main' into patch-1 2023-02-07 17:45:15 -05:00
Lincoln Stein
65f3d22649 Merge branch 'main' into dev/codeowner-assignment 2023-02-07 17:44:37 -05:00
Lincoln Stein
39b0288595 Merge branch 'main' into 2.3.0rc6 2023-02-07 17:43:38 -05:00
Lincoln Stein
13d12a0ceb Merge branch 'main' into 2.3-documentation-fixes 2023-02-07 17:08:10 -05:00
Lincoln Stein
b92dc8db83 add developer install instructions 2023-02-07 17:04:01 -05:00
Lincoln Stein
b49188a39d doc updates; clean up install directory
- Large rewrite of documentation for automated and manual install.
- Reorganize installer zip file to reduce visual clutter for user.
2023-02-07 16:35:22 -05:00
Lincoln Stein
b9c8270ee6 update manual install doc 2023-02-07 14:19:55 -05:00
Jonathan
f0f3520bca Switch to using max for attention slicing in all cases for the time being. (#2569) 2023-02-07 19:28:57 +01:00
Matthias Wild
e8f9ab82ed Merge branch 'invoke-ai:main' into main 2023-02-07 18:48:47 +01:00
psychedelicious
6ab364b16a build frontend 2023-02-07 17:06:47 +01:00
psychedelicious
a4dc11addc switch to @vitejs/plugin-react-swc 2023-02-07 17:06:47 +01:00
psychedelicious
0372702eb4 remove unneeded polyfill 2023-02-07 17:06:47 +01:00
psychedelicious
aa8eeea478 update app build configuration 2023-02-07 17:06:47 +01:00
blessedcoolant
e54ecc4c37 build (vite-4-code-quality) 2023-02-07 17:06:47 +01:00
blessedcoolant
4a12c76097 Remove build-dev 2023-02-07 17:06:47 +01:00
blessedcoolant
be72faf78e Upgrade to Vite 4 2023-02-07 17:06:46 +01:00
blessedcoolant
28d44d80ed Rebase Fix - ModelSelect 2023-02-07 17:06:46 +01:00
psychedelicious
9008d9996f builds frontend 2023-02-07 17:06:46 +01:00
psychedelicious
be2a9b78bb fixes rebase issues 2023-02-07 17:06:46 +01:00
Ryan Cao
70003ee5b1 feat: add copy image in share menu 2023-02-07 17:06:46 +01:00
psychedelicious
45a5ccba84 Updates code quality tooling and formats codebase
- `eslint` and `prettier` configs
- `husky` to format and lint via pre-commit hook
- `babel-plugin-transform-imports` to treeshake `lodash` and other packages if needed

Lints and formats codebase.
2023-02-07 17:06:46 +01:00
psychedelicious
f80a64a0f4 Reorganises internal state
`options` slice was huge and managed a mix of generation parameters and general app settings. It has been split up:

- Generation parameters are now in `generationSlice`.
- Postprocessing parameters are now in `postprocessingSlice`.
- UI-related things are now in `uiSlice`.

There is probably more to be done; for example, `gallerySlice` should perhaps only manage internal gallery state, not whether the gallery is displayed.

Full-slice selectors have been made for each slice.

Other organisational tweaks.
2023-02-07 17:06:46 +01:00
Lincoln Stein
511df2963b remove debugging statement 2023-02-07 17:06:46 +01:00
Lincoln Stein
f92f62a91b enhance model_manager support for converting inpainting ckpt files
Previously conversions of .ckpt and .safetensors files to diffusers
models were failing with channel mismatch errors. This is corrected
with this PR.

- The model_manager convert_and_import() method now accepts the path
  to the checkpoint file's configuration file, using the parameter
  `original_config_file`. For inpainting files this should be set to
  the full path to `v1-inpainting-inference.yaml`.

- If no configuration file is provided in the call, then the presence
  of an inpainting file will be inferred at the
  `ldm.ckpt_to_diffuser.convert_ckpt_to_diffUser()` level by looking
  for the string "inpaint" in the path. AUTO1111 does something
  similar to this, but it is brittle and not recommended.

- This PR also changes the model manager model_names() method to return
  the model names in case folded sort order.
2023-02-07 17:06:45 +01:00
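As an aside, a minimal sketch of the config-selection logic described in the commit above, assuming a hypothetical helper name and a guessed default config path (this is not the actual model_manager API):

```python
from pathlib import Path
from typing import Optional

def choose_original_config_file(ckpt_path: Path,
                                original_config_file: Optional[Path] = None) -> Path:
    """Pick the legacy .yaml config used when converting a ckpt/safetensors file.

    Prefer an explicitly supplied config; otherwise infer "inpainting" from the
    filename, which (as the commit notes) is brittle and not recommended.
    """
    if original_config_file is not None:
        return original_config_file
    if "inpaint" in ckpt_path.name.lower():
        return Path("v1-inpainting-inference.yaml")
    return Path("v1-inference.yaml")  # assumed non-inpainting default

if __name__ == "__main__":
    print(choose_original_config_file(Path("sd-v1-5-inpainting.ckpt")))
```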
psychedelicious
3efe9899c2 build frontend 2023-02-08 01:53:34 +13:00
psychedelicious
bdbe4660fc switch to @vitejs/plugin-react-swc 2023-02-08 01:53:34 +13:00
psychedelicious
8af9432f63 remove unneeded polyfill 2023-02-08 01:53:34 +13:00
psychedelicious
668d9cdb9d update app build configuration 2023-02-08 01:53:34 +13:00
blessedcoolant
90f5811e59 build (vite-4-code-quality) 2023-02-08 01:53:34 +13:00
blessedcoolant
15d21206a3 Remove build-dev 2023-02-08 01:53:34 +13:00
blessedcoolant
b622286f17 Upgrade to Vite 4 2023-02-08 01:53:34 +13:00
blessedcoolant
176add58b2 Rebase Fix - ModelSelect 2023-02-08 01:53:34 +13:00
psychedelicious
33c5f5a9c2 builds frontend 2023-02-08 01:53:34 +13:00
psychedelicious
2b7752b72e fixes rebase issues 2023-02-08 01:53:34 +13:00
Ryan Cao
5478d2a15e feat: add copy image in share menu 2023-02-08 01:53:34 +13:00
psychedelicious
9ad76fe80c Updates code quality tooling and formats codebase
- `eslint` and `prettier` configs
- `husky` to format and lint via pre-commit hook
- `babel-plugin-transform-imports` to treeshake `lodash` and other packages if needed

Lints and formats codebase.
2023-02-08 01:53:34 +13:00
psychedelicious
d74c4009cb Reorganises internal state
`options` slice was huge and managed a mix of generation parameters and general app settings. It has been split up:

- Generation parameters are now in `generationSlice`.
- Postprocessing parameters are now in `postprocessingSlice`.
- UI-related things are now in `uiSlice`.

There is probably more to be done; for example, `gallerySlice` should perhaps only manage internal gallery state, not whether the gallery is displayed.

Full-slice selectors have been made for each slice.

Other organisational tweaks.
2023-02-08 01:53:34 +13:00
Lincoln Stein
ffe0e81ec9 Support conversion of inpainting ckpt files to diffusers (#2550)
enhance model_manager support for converting inpainting ckpt files

Previously conversions of .ckpt and .safetensors files to diffusers
models were failing with channel mismatch errors. This is corrected
with this PR.

- The model_manager convert_and_import() method now accepts the path
  to the checkpoint file's configuration file, using the parameter
  `original_config_file`. For inpainting files this should be set to
  the full path to `v1-inpainting-inference.yaml`.

- If no configuration file is provided in the call, then the presence
  of an inpainting file will be inferred at the
  `ldm.ckpt_to_diffuser.convert_ckpt_to_diffUser()` level by looking
  for the string "inpaint" in the path. AUTO1111 does something
  similar to this, but it is brittle and not recommended.

- This PR also changes the model manager model_names() method to return
  the model names in case folded sort order.
2023-02-07 07:25:30 -05:00
Lincoln Stein
bdf683ec41 Merge branch 'main' into enhance/convert-inpaint-models 2023-02-07 06:59:35 -05:00
mauwii
7f41893da4 set scope for caches 2023-02-07 09:27:20 +01:00
mauwii
42da4f57c2 update .dockerignore 2023-02-07 09:27:20 +01:00
mauwii
c2e11dfe83 update build-container.yml
- add long sha tag
- update cache-from
Dockerfile:
- re-use `apt-get update`
env.sh/build.sh:
- rename platform to lowercase
2023-02-07 09:27:20 +01:00
mauwii
17e1930229 remove CONTAINER_FLAVOR build arg
also disable currently unused PIP_PACKAGE build arg
will start using it when problems with XFORMERS are sorted out
2023-02-07 09:27:20 +01:00
mauwii
bde94347d3 don't use --linkin COPY 2023-02-07 09:27:20 +01:00
mauwii
b1612afff4 update .dockerignore 2023-02-07 09:27:20 +01:00
mauwii
1d10d952b2 use cleartext DOCKERHUB_USERNAME 2023-02-07 09:27:20 +01:00
mauwii
9150f9ef3c move LABEL to top 2023-02-07 09:27:20 +01:00
mauwii
7bc0f7cc6c update Docker Hub description 2023-02-07 09:27:20 +01:00
mauwii
c52d11b24c optionally push to DockerHub 2023-02-07 09:27:20 +01:00
mauwii
59486615dd update build-container.yml 2023-02-07 09:27:20 +01:00
mauwii
f0212cd361 update Dockerfile 2023-02-07 09:27:20 +01:00
mauwii
ee4cb5fdc9 add id to Build container 2023-02-07 09:27:20 +01:00
mauwii
75b919237b update cache-from 2023-02-07 09:27:20 +01:00
mauwii
07a9062e1f update .dockerignore and scripts 2023-02-07 09:27:20 +01:00
mauwii
cdb3e18b80 add flavor to pip cache id
to prevent cache invalidation
2023-02-07 09:27:20 +01:00
Lincoln Stein
28a5424242 Update textual inversion doc with the correct CLI name. (#2560) 2023-02-07 01:22:03 -05:00
Lincoln Stein
8d418af20b Merge branch 'main' into ti-doc-update 2023-02-07 00:59:53 -05:00
Lincoln Stein
055badd611 Diffusers Samplers (#2565)
- The Diffusers sampler list is independent of the CKPT sampler list, and the
app will load the correct list based on which model is loaded.
- Isolated the activeModelSelector because it is used in multiple places.
- Possible fix for the white-screen bug that some users face. This was
happening because of a possible null in the active model list
description tag, which should hopefully now be fixed with the new
activeModelSelector.

I'll keep tabs on the last thing. Good to go.
2023-02-07 00:59:32 -05:00
blessedcoolant
944f9e98a7 build (diffusers-samplers) 2023-02-07 18:29:14 +13:00
blessedcoolant
fcffcf5602 Diffusers Samplers
Display sampler list based on the active model.
2023-02-07 18:26:06 +13:00
blessedcoolant
f121dfe120 Update model select to use new active model selector
Hopefully this also fixes the white screen error that some users face.
2023-02-07 18:25:45 +13:00
blessedcoolant
a7dd7b4298 Add activeModelSelector
Active model details are used in multiple places, so it makes sense to have a selector for them.
2023-02-07 18:25:12 +13:00
Lincoln Stein
d94780651c Merge branch 'main' into patch-1 2023-02-07 00:07:31 -05:00
Lincoln Stein
d26abd7f01 add empty CUDA/ROCM install guide 2023-02-07 00:04:56 -05:00
Lincoln Stein
7e2b122105 updated manual install instructions 2023-02-06 23:59:48 -05:00
Lincoln Stein
8a21fc1c50 bump version to 2.3.0-rc6 2023-02-06 23:36:49 -05:00
Lincoln Stein
275d5040f4 Merge branch 'bugfix/configure-script' into 2.3.0rc6 2023-02-06 23:35:32 -05:00
Lincoln Stein
1b5930dcad do not merge diffusers and ckpt stanzas 2023-02-06 23:23:07 -05:00
Lincoln Stein
d5810f6270 Bring main up to date with RC5 (#2555)
Updated the version number
2023-02-06 22:23:58 -05:00
Lincoln Stein
ebc51dc535 incomplete work on manual install 2023-02-06 21:47:29 -05:00
Lincoln Stein
ac6e9238f1 Merge branch 'main' into ti-doc-update 2023-02-06 20:06:33 -05:00
Saifeddine
01eb93d664 Added Arabic Localisation 2023-02-07 00:42:09 +01:00
Saifeddine
89f69c2d94 Merge branch 'main' of https://github.com/ParisNeo/ArtBot 2023-02-07 00:29:33 +01:00
Saifeddine
dc6f6fcab7 Added arabic locale files 2023-02-07 00:29:30 +01:00
Dan Sully
6343b245ef Update textual inversion doc with the correct CLI name. 2023-02-06 14:51:22 -08:00
Lincoln Stein
8c80da2844 Merge branch 'main' into 2.3.0rc5 2023-02-06 17:38:25 -05:00
Lincoln Stein
a12189e088 fix build-container.yml (#2557)
This should fix the build-container workflow when triggered by a Tag
(that it is failing was mentioned in #2555 )
2023-02-06 15:09:04 -05:00
cosmii02
472c97e4e8 Merge branch 'main' into patch-1 2023-02-06 22:05:47 +02:00
mauwii
5baf0ae755 add mkdocs.yml and pyproject.toml
also make docs separate header
2023-02-06 20:47:20 +01:00
Lincoln Stein
a56e3014a4 Merge branch 'main' into update/ci/refine-build-container 2023-02-06 14:42:02 -05:00
Lincoln Stein
f3eff38f90 add tildebyte areas 2023-02-06 14:38:42 -05:00
Lincoln Stein
53d2d34b3d Merge branch 'main' into 2.3.0rc5 2023-02-06 14:34:16 -05:00
Lincoln Stein
ede7d1a8f7 first draft of codeowners 2023-02-06 14:33:46 -05:00
blessedcoolant
ac23a321b0 build (hires-strength-slider) 2023-02-07 08:22:39 +13:00
blessedcoolant
f52b233205 Add Hi Res Strength Slider 2023-02-07 08:22:39 +13:00
mauwii
8242fc8bad update metadata 2023-02-06 19:58:48 +01:00
Matthias Wild
09b6f7572b Merge branch 'invoke-ai:main' into main 2023-02-06 19:50:40 +01:00
Lincoln Stein
bde6e96800 Merge branch 'main' into 2.3.0rc5 2023-02-06 12:55:47 -05:00
Lincoln Stein
13474e985b Merge branch 'main' into patch-1 2023-02-06 12:54:07 -05:00
Jonathan
28b40bebbe Refactor CUDA cache clearing to add statistical reporting. (#2553) 2023-02-06 12:53:30 -05:00
Lincoln Stein
1c9fd00f98 this is likely the penultimate rc 2023-02-06 12:03:08 -05:00
Lincoln Stein
8ab66a211c force torch reinstall (#2532)
For the torch and torchvision libraries **only**, the installer will now
pass the pip `--force-reinstall` option. This is intended to fix issues
with the user getting a CPU-only version of torch and then not being
able to replace it.
2023-02-06 11:58:57 -05:00
Lincoln Stein
bc03ff8b30 Merge branch 'main' into install/force-torch-reinstall 2023-02-06 11:31:57 -05:00
blessedcoolant
0247d63511 Build (negative-prompt-box) 2023-02-07 05:21:09 +13:00
blessedcoolant
7604b36577 Add Negative Prompts Box 2023-02-07 05:21:09 +13:00
blessedcoolant
4a026bd46e Organize language picker items alphabetically 2023-02-07 05:21:09 +13:00
blessedcoolant
6241fc19e0 Fix the model manager edit placeholder not being full height 2023-02-07 05:21:09 +13:00
blessedcoolant
25d7d71dd8 Slightly decrease the size of the tab list icons 2023-02-07 05:21:09 +13:00
Jonathan
2432adb38f In exception handlers, clear the torch CUDA cache (if we're using CUDA) to free up memory for other programs using the GPU and to reduce fragmentation. (#2549) 2023-02-06 10:33:24 -05:00
Lincoln Stein
91acae30bf Merge branch 'main' into patch-1 2023-02-06 10:14:27 -05:00
Lincoln Stein
ca749b7de1 remove debugging statement 2023-02-06 09:45:21 -05:00
Lincoln Stein
7486aa8608 enhance model_manager support for converting inpainting ckpt files
Previously conversions of .ckpt and .safetensors files to diffusers
models were failing with channel mismatch errors. This is corrected
with this PR.

- The model_manager convert_and_import() method now accepts the path
  to the checkpoint file's configuration file, using the parameter
  `original_config_file`. For inpainting files this should be set to
  the full path to `v1-inpainting-inference.yaml`.

- If no configuration file is provided in the call, then the presence
  of an inpainting file will be inferred at the
  `ldm.ckpt_to_diffuser.convert_ckpt_to_diffUser()` level by looking
  for the string "inpaint" in the path. AUTO1111 does something
  similar to this, but it is brittle and not recommended.

- This PR also changes the model manager model_names() method to return
  the model names in case folded sort order.
2023-02-06 09:35:23 -05:00
mauwii
0402766f4d add author label 2023-02-06 14:05:27 +01:00
mauwii
a9ef5d1532 update tags 2023-02-06 14:05:27 +01:00
Matthias Wild
a485d45400 Update test-invoke-pip.yml (#2524)
test-invoke-pip.yml:
- enable caching of pip dependencies in `actions/setup-python@v4`
- add workflow_dispatch trigger
- fix indentation in concurrency
- set env `PIP_USE_PEP517: '1'`
- cache python dependencies
- remove models cache (since we currently use 190.96 GB of 10 GB while I
am writing this)
- add step to set `INVOKEAI_OUTDIR`
- add outdir arg to invokeai
- fix path in archive results

model_manager.py:
- read files in chunks when calculating sha (windows runner is crashing
otherwise)
2023-02-06 12:56:15 +01:00
mauwii
a40bdef29f update model_manager.py
- read files in chunks when calculating sha
  - windows runner is crashing without
2023-02-06 12:30:10 +01:00
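For context, hashing large model files in chunks keeps memory use flat no matter how big the checkpoint is. A minimal sketch, assuming SHA-256 and a 1 MiB chunk size (the helper name is illustrative, not the actual model_manager code):

```python
import hashlib
from pathlib import Path

def sha256_of_file(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute a file's SHA-256 by reading it 1 MiB at a time, so even
    multi-gigabyte checkpoints never have to be loaded into memory at once."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# usage: sha256_of_file(Path("models/ldm/stable-diffusion-v1/model.ckpt"))
```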
mauwii
fc2670b4d6 update test-invoke-pip.yml
- add workflow_dispatch trigger
- fix indentation in concurrency
- set env `PIP_USE_PEP517: '1'`
- cache python dependencies
- remove models cache (since currently 183.59 GB of 10 GB are used)
- add step to set `INVOKEAI_OUTDIR`
- add outdir arg to invokeai
- fix path in archive results
2023-02-06 12:30:10 +01:00
Eugene Brodsky
f0cd1aa736 highlight key elements of installer welcome message
- help users to avoid glossing over per-platform prerequisites
- better link colouring
- update link to community instructions to install xcode command line tools
2023-02-06 00:57:29 -05:00
Lincoln Stein
c3807b044d Merge branch 'main' into install/force-torch-reinstall 2023-02-06 00:18:38 -05:00
Jonathan
b7ab025f40 Update base.py (#2543)
Free up CUDA cache right after each image is generated. VRAM usage drops down to pre-generation levels.
2023-02-06 05:14:35 +00:00
Lincoln Stein
633f702b39 fix crash in txt2img and img2img w/ inpainting models and perlin > 0 (#2544)
- get_perlin_noise() was returning 9 channels; fixed code to return
noise for just the 4 image channels and not the mask ones.

- Closes Issue #2541
2023-02-05 23:50:32 -05:00
Lincoln Stein
3969637488 remove misleading completion message from merge_diffusers 2023-02-05 23:39:43 -05:00
Lincoln Stein
658ef829d4 tweak initial model descriptions 2023-02-05 23:23:09 -05:00
Lincoln Stein
0240656361 fix crash in txt2img and img2img w/ inpainting models and perlin > 0
- get_perlin_noise() was returning 9 channels; fixed code to return
  noise for just the 4 image channels and not the mask ones.

- Closes Issue #2541
2023-02-05 22:55:08 -05:00
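Roughly, the fix amounts to keeping noise only for the latent image channels. A sketch under the assumption of a 9-channel inpainting-model tensor; the function name is hypothetical and this is not the actual get_perlin_noise() code:

```python
import torch

def image_channel_noise(noise: torch.Tensor, num_image_channels: int = 4) -> torch.Tensor:
    """Keep perlin noise only for the 4 latent image channels; an inpainting
    UNet has extra mask/conditioning channels that should not be perturbed."""
    return noise[:, :num_image_channels, :, :]

# a (1, 9, 64, 64) tensor from an inpainting model becomes (1, 4, 64, 64)
print(image_channel_noise(torch.randn(1, 9, 64, 64)).shape)
```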
Lincoln Stein
719a5de506 Merge branch 'main' into patch-1 2023-02-05 21:43:13 -05:00
Matthias Wild
05bb9e444b update pypi_helper.py (#2533)
- don't rename requests
- remove dash in version (`2.3.0-rc3` becomes `2.3.0rc3`)
- read package_name instead of hard-coding it
2023-02-06 03:34:52 +01:00
Lincoln Stein
0076757767 Merge branch 'main' into dev/ci/update-pypi-helper 2023-02-05 21:10:49 -05:00
Lincoln Stein
6ab03c4d08 fix crash in both textual_inversion and merge front ends when not enough models defined (#2540)
- Issue is that if insufficient diffusers models are defined in
models.yaml the frontend would ungraciously crash.

- Now it emits appropriate error messages telling user what the problem
is.
2023-02-05 19:34:07 -05:00
Lincoln Stein
142016827f fix formatting bugs in both textual_inversion and merge front ends
- Issue is that if insufficient diffusers models are defined in
  models.yaml the frontend would ungraciously crash.

- Now it emits appropriate error messages telling user what the problem
  is.
2023-02-05 18:35:01 -05:00
Lincoln Stein
466a82bcc2 Updates frontend README.md (#2539) 2023-02-05 17:25:25 -05:00
Lincoln Stein
05349f6cdc Merge branch 'main' into dev/ci/update-pypi-helper 2023-02-05 17:13:09 -05:00
psychedelicious
ab585aefae Update README.md 2023-02-06 09:07:44 +11:00
Matthias Wild
083ce9358b hotfix build-container.yml (#2537)
fix broken tag
2023-02-05 22:30:23 +01:00
Lincoln Stein
f56cf2400a Merge branch 'main' into install/force-torch-reinstall 2023-02-05 15:40:35 -05:00
cosmii02
5de5e659d0 Better AMD clarification
To better clarify that AMD is supported when using Linux
2023-02-05 12:29:50 -08:00
mauwii
fc53f6d47c hotfix build-container.yml 2023-02-05 21:25:44 +01:00
Matthias Wild
2f70daef8f Issue/2487/address docker issues (#2517)
Address issues of #2487
2023-02-05 21:20:13 +01:00
mauwii
fc2a136eb0 add requested change 2023-02-05 21:15:39 +01:00
Lincoln Stein
ce3da40434 Merge branch 'main' into install/force-torch-reinstall 2023-02-05 15:01:56 -05:00
mauwii
7933f27a72 update pypi_helper.py
- don't rename requests
- remove dash in version (`2.3.0-rc3` becomes `2.3.0rc3`)
- read package_name instead of hard-coding it
2023-02-05 20:45:31 +01:00
mauwii
1c197c602f update Dockerfile, .dockerignore and workflow
- don't build frontend due to complications with QEMU
- set pip cache dir
- add pip cache to all pip-related build steps
- don't lock pip cache
- update dockerignore to exclude unneeded files
2023-02-05 20:20:50 +01:00
mauwii
90656aa7bf update Dockerfile
- add build arg `FRONTEND_DIR`
2023-02-05 20:20:50 +01:00
mauwii
394b4a771e update Dockerfile
- remove yarn install args `--prefer-offline` and `--production=false`
2023-02-05 20:20:50 +01:00
mauwii
9c3f548900 update settings output in build.sh 2023-02-05 20:20:50 +01:00
mauwii
5662d2daa8 add invokeai/frontend/dist/** to .dockerignore 2023-02-05 20:20:50 +01:00
mauwii
fc0f966ad2 fix docs 2023-02-05 20:20:50 +01:00
mauwii
eb702a5049 fix env.sh, update Dockerfile, update build.sh
env.sh:
- move check for torch to CONTAINER_FLAVOR detection

Dockerfile
- only mount `/var/cache/apt` for apt related steps
- remove `docker-clean` from `/etc/apt/apt.conf.d` for BuildKit cache
- remove apt-get clean for BuildKit cache
- only copy frontend to frontend-builder
- mount `/usr/local/share/.cache/yarn` in frontend-builder
- separate steps for yarn install and yarn build
- build pytorch in pyproject-builder

build.sh
- prepare for installation with extras
2023-02-05 20:20:50 +01:00
mauwii
1386d73302 fix env.sh
only try to auto-detect CUDA/ROCm if torch is installed
2023-02-05 20:20:50 +01:00
mauwii
6089f33e54 fix HUGGING_FACE_HUB_TOKEN 2023-02-05 20:20:50 +01:00
mauwii
3a260cf54f update directory from docker-build to docker 2023-02-05 20:20:50 +01:00
mauwii
9949a438f4 update docs with newly added variables
also remove outdated information
2023-02-05 20:20:50 +01:00
mauwii
84c1122208 fix build.sh and env.sh 2023-02-05 20:20:50 +01:00
Lincoln Stein
cc3d431928 2.3.0rc4 (#2514)
This will bring main up to date with v2.3.0-rc4
2023-02-05 14:05:15 -05:00
Lincoln Stein
c44b060a2e Merge branch 'main' into 2.3.0rc4 2023-02-05 13:40:56 -05:00
Lincoln Stein
eff7fb89d8 installer will --force-reinstall torch 2023-02-05 13:39:46 -05:00
Lincoln Stein
cd5c112fcd Allow multiple models to be imported by passing a directory. (#2529)
This change allows passing a directory with multiple models in it to be
imported.

Ensures that diffusers directories will still work.

Fixed up some minor type issues.
2023-02-05 13:36:00 -05:00
Lincoln Stein
563867fa99 Merge branch 'main' into main 2023-02-05 12:51:03 -05:00
Lincoln Stein
2e230774c2 Merge branch 'main' into 2.3.0rc4 2023-02-05 12:44:44 -05:00
Lincoln Stein
9577410be4 add platform-specific help instructions to installer 2023-02-05 12:43:13 -05:00
Lincoln Stein
4ada4c9f1f Add --log_tokenization to sysargs (#2523)
This allows the --log_tokenization option to be used as a command line
argument (or from invokeai.init), making it possible to view
tokenization information in the terminal when using the web interface.
2023-02-05 11:55:26 -05:00
blessedcoolant
9a6966924c Merge branch 'main' into main 2023-02-06 05:33:48 +13:00
Lincoln Stein
0d62525f3d reword help message slightly 2023-02-05 08:11:02 -08:00
Dan Sully
2ec864e37e Allow multiple models to be imported by passing a directory. 2023-02-05 08:11:02 -08:00
Lincoln Stein
9307ce3dc3 this fixes a crash in the TI frontend (#2527)
- This fixes an edge case crash when the textual inversion frontend
  tried to display the list of models and no default model defined
  in models.yaml

Co-authored-by: Jonathan <34005131+JPPhoto@users.noreply.github.com>
2023-02-05 16:05:33 +00:00
Lincoln Stein
15996446e0 Merge branch 'main' into 2.3.0rc4 2023-02-05 10:54:53 -05:00
blessedcoolant
7a06c8fd89 Merge branch 'main' into main 2023-02-06 04:43:49 +13:00
Lincoln Stein
4895fe8395 fix crash when text mask applied to img2img (#2526)
This PR fixes the crash reported at https://discord.com/channels/1020123559063990373/1031668022294884392/1071782238137630800

It also quiets-down the "NSFW is disabled" nag during img2img generation.
2023-02-05 15:26:40 +00:00
Lincoln Stein
1e793a2dfe Merge branch 'main' into 2.3.0rc4 2023-02-05 10:24:09 -05:00
blessedcoolant
9c8fcaaf86 Beautify & Cleanup WebUI Logs 2023-02-05 22:55:57 +13:00
blessedcoolant
bf4344be51 Beautify Usage Stats Log 2023-02-05 22:55:40 +13:00
blessedcoolant
f7532cdfd4 Beautify Token Log Outputs 2023-02-05 22:55:29 +13:00
blessedcoolant
f1dd76c20b Remove Deprecation Warning from Diffusers Pipeline 2023-02-05 22:55:10 +13:00
whosawhatsis
3016eeb6fb Merge branch 'invoke-ai:main' into main 2023-02-04 22:56:59 -05:00
whosawhatsis
75b62d6ca8 Add --log_tokenization to sysargs
This allows the --log_tokenization option to be used as a command line argument (or from invokeai.init), making it possible to view tokenization information in the terminal when using the web interface.
2023-02-04 19:56:20 -08:00
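A minimal argparse sketch of exposing such a boolean switch; InvokeAI's real argument handling (Args and invokeai.init) is more involved, so this is illustrative only:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument(
    "--log_tokenization",
    action="store_true",
    help="Print prompt tokenization to the terminal, even when using the web interface.",
)

args = parser.parse_args(["--log_tokenization"])
print(args.log_tokenization)  # True
```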
Lincoln Stein
82ae2769c8 Configuration script tidying up (#2513)
- Rename configure_invokeai.py to invokeai_configure.py to be consistent
with installed script name
- Remove warning message about half-precision models not being available
during the model download process.
- adjust estimated file size reported by configure
- guesstimate disk space needed for "all" models
- fix up the "latest" tag to be named 'v2.3-latest'
2023-02-04 21:58:56 -05:00
Lincoln Stein
61149abd2f Merge branch 'main' into lstein/normalize-names 2023-02-04 21:41:22 -05:00
Lincoln Stein
eff126af6e Merge branch 'main' into 2.3.0rc4 2023-02-04 21:40:47 -05:00
Matthias Wild
0ca499cf96 Add workflow for PyPI Release (#2516) 2023-02-05 00:31:00 +01:00
mauwii
3abf85e658 fix conditions
workflow will only run in official repo
2023-02-04 23:58:07 +01:00
mauwii
5095285854 fix pypi-release.yml 2023-02-04 23:46:10 +01:00
mauwii
93623a4449 add conditions to check for Repo and Secret 2023-02-04 23:22:23 +01:00
mauwii
0197459b02 change back to current version 2023-02-04 23:07:20 +01:00
mauwii
1578bc68cc change version to test workflow 2023-02-04 23:06:29 +01:00
mauwii
4ace397a99 remove debug steps 2023-02-04 23:05:29 +01:00
mauwii
d85a710211 rename pypi_helper.py 2023-02-04 23:00:39 +01:00
mauwii
536d534ab4 add pypi-release.yml and pypi-helper.py 2023-02-04 22:58:21 +01:00
Lincoln Stein
fc752a4e75 move old .venv directory away during install
- To ensure a clean environment, the installer will now detect whether a
  previous .venv exists in the install location, and move it to .venv-backup
  before creating a fresh .venv.

- Any previous .venv-backup is deleted.

- User is informed of process.
2023-02-04 16:14:29 -05:00
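A minimal sketch, assuming a hypothetical installer helper, of the backup-and-replace behaviour described above:

```python
import shutil
from pathlib import Path

def backup_existing_venv(install_dir: Path) -> None:
    """Move an existing .venv aside to .venv-backup (deleting any previous
    backup) so a fresh virtual environment can be created in its place."""
    venv = install_dir / ".venv"
    backup = install_dir / ".venv-backup"
    if venv.exists():
        if backup.exists():
            shutil.rmtree(backup)   # any previous backup is discarded
        venv.rename(backup)         # keep the old environment as a fallback
        print(f"Existing virtual environment moved to {backup}")
```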
Lincoln Stein
3c06d114c3 fix name of latest tag 2023-02-04 14:04:24 -05:00
Lincoln Stein
00d79c1fe3 bump version number to rc4 2023-02-04 14:00:58 -05:00
Lincoln Stein
60213893ab configuration script tidying up
- Rename configure_invokeai.py to invokeai_configure.py to be
  consistent with installed script name
- Remove warning message about half-precision models not being
  available during the model download process.

- adjust estimated file size reported by configure

- guesstimate disk space needed for "all" models

- fix up the "latest" tag to be named 'v2.3-latest'
2023-02-04 13:55:36 -05:00
Lincoln Stein
3b58413d9f Fixes PYTORCH_ENABLE_MPS_FALLBACK not set correctly (#2508)
`torch` wasn't seeing the environment variable. I suspect this is
because it was imported before the variable was set, so was running with
a different environment.

Many `torch` ops are supported on MPS so this wasn't noticed
immediately, but some samplers like k_dpm_2 still use unsupported
operations and need this fallback.
2023-02-04 11:32:52 -05:00
Lincoln Stein
1139884493 Merge branch 'main' into fix/mps-fallback 2023-02-04 11:11:59 -05:00
Lincoln Stein
17e8f966d0 Fix registration of text masks (#2501)
- Scale and crop not applied correctly
- Problem found and fixed by @spezialspezial
- Closes #2470
2023-02-04 10:48:27 -05:00
Lincoln Stein
a42b25339f Merge branch 'main' into bugfix/txt2mask 2023-02-04 10:25:30 -05:00
Lincoln Stein
1b0731dd1a use torch-cu117 from download.torch.org rather than pypi (#2492)
This PR forces the installer to install the official torch-cu117 wheel
from download.torch.org, rather than relying on PyPi.org to return the
correct version. It ought to correct the problems that some people have
experienced with cuda support not being installed.
2023-02-04 10:04:22 -05:00
Lincoln Stein
61c3886843 Merge branch 'main' into bugfix/use-cu117-wheel 2023-02-04 09:43:52 -05:00
Lincoln Stein
f76d57637e Fix bugs in merge and convert process (#2491)
1. The convert module was converting ckpt models into
StableDiffusionGeneratorPipeline objects for use in-memory, but then
when saved to disk created files that could not be merged with
StableDiffusionPipeline models. I have added a flag that selects which
pipeline class to return, so that both in-memory and disk conversions
work properly.

2. This PR also fixes an issue with `invoke.sh` not using the correct
path for the textual inversion and merge scripts.

3. Quench nags during the merge process about the safety checker being
turned off.
2023-02-04 09:40:09 -05:00
Lincoln Stein
6bf73a0cf9 Merge branch 'main' into bugfix/use-cu117-wheel 2023-02-04 09:17:45 -05:00
Lincoln Stein
5145df21d9 Merge branch 'main' into bugfix/merge-fixes 2023-02-04 09:17:01 -05:00
blessedcoolant
e96ac61cb3 Add Ukrainian Localization (#2486)
* Add Ukrainian & Update Italian

* Frontend Build (Ukrainian Localization)

* Update invokeai/frontend/dist/locales/hotkeys/ua.json

Co-authored-by: Eugene Brodsky <ebr@users.noreply.github.com>

* UA Localization Fixes

* Build (ua-fixes)

* Clean Build

* Clear Build

* Clean Build (resolving main conflicts)

* Clear Build

* Frontend Build (ua-localization-rebased)

---------

Co-authored-by: Eugene Brodsky <ebr@users.noreply.github.com>
2023-02-05 00:24:24 +13:00
blessedcoolant
0e35d829c1 Build (french-localization) 2023-02-04 23:14:25 +13:00
blessedcoolant
d08f048621 Add French Localization 2023-02-04 23:14:25 +13:00
Saifeddine ALOUI
cfd453c1c7 Added French localization 2023-02-04 23:14:25 +13:00
Saifeddine ALOUI
6ca177e462 Added French localization 2023-02-04 09:54:30 +01:00
psychedelicious
a1b1a48fb3 Fixes PYTORCH_ENABLE_MPS_FALLBACK not set correctly
`torch` wasn't seeing the environment variable. I suspect this is because it was imported before the variable was set, so was running with a different environment.

Many `torch` ops are supported on MPS so this wasn't noticed immediately, but some samplers like k_dpm_2 still use unsupported operations and need this fallback.
2023-02-04 17:27:33 +11:00
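The essence of the fix is ordering: the variable must be in the environment before torch is imported, or torch will never see it. A minimal sketch:

```python
import os

# Set the fallback flag *before* importing torch, otherwise torch is initialised
# with the old environment and unsupported MPS ops (e.g. in k_dpm_2) will fail.
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"

import torch  # noqa: E402  -- deliberately imported after the env var is set

print(torch.backends.mps.is_available())
```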
Eugene Brodsky
b5160321bf fix finding the wheel when running from outside the installer directory
in case of calling python script instead of shell
2023-02-03 23:50:57 -05:00
Lincoln Stein
0cc2a8176e bump version 2023-02-03 23:50:57 -05:00
Lincoln Stein
9ac81c1dc4 change latest tag to v2.2.3-latest, won't conflict with 2.2.5 latest tag 2023-02-03 23:50:57 -05:00
Lincoln Stein
50191774fc this fixes an issue when the install script is called outside its directory
- Also reimplements the python-path finding logic of the older install.sh script.
2023-02-03 23:50:57 -05:00
Eugene Brodsky
fcd9b813e3 Merge branch 'main' into bugfix/use-cu117-wheel 2023-02-03 23:13:22 -05:00
Lincoln Stein
813f92a1ae do not install the "update" script
- The update script doesn't work yet, so we shouldn't install it.
- For now, users update by re-running the installer.
2023-02-03 20:26:10 -05:00
Lincoln Stein
0d141c1d84 small f-string syntax fix in generate.py (#2483)
Probably low priority, but helps the error message be more clear by
hopefully displaying model_name.
2023-02-03 18:29:50 -05:00
Lincoln Stein
2e3cd03b27 Merge branch 'main' into bugfix/use-cu117-wheel 2023-02-03 18:15:54 -05:00
Lincoln Stein
4500c8b244 Merge branch 'main' into patch-2 2023-02-03 18:03:29 -05:00
Lincoln Stein
d569c9dec6 remove dead code 2023-02-03 17:35:35 -05:00
Matthias Wild
01a2b8c05b Adapt latest changes to Dockerfile (#2478)
* remove non maintained Dockerfile

* adapt Docker related files to latest changes
- also build the frontend when building the image
- skip user response if INVOKE_MODEL_RECONFIGURE is set
- split INVOKE_MODEL_RECONFIGURE to support more than one argument

* rename `docker-build` dir to `docker`

* update build-container.yml
- rename image to invokeai
- add cpu flavor
- add metadata to build summary
- enable caching
- remove build-cloud-img.yml

* fix yarn cache path, link copyjob
2023-02-03 22:34:47 +00:00
Lincoln Stein
b23664c794 registration of mask images was off due to typo
- Problem found and fixed by @spezialspezial
- Closes #2470
2023-02-03 17:32:35 -05:00
Lincoln Stein
f06fefcacc Merge branch 'main' into patch-2 2023-02-03 17:15:29 -05:00
Lincoln Stein
7fa3a499bb fix crash on Windows 10 when configure script is given no HF token
Crashes would occur in the invokeai-configure script if no HF token
was found in cache and the user declined to provide one when prompted.
The reason appears to be that on Linux systems getpass_asterisk()
raises an EOFError when no input is provided.

On Windows 10, getpass_asterisk() does not raise the EOFError, but
returns an empty string instead. This patch detects this and raises
the exception so that the control logic is preserved.
2023-02-03 16:06:49 -05:00
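A hedged sketch of the workaround; the import below is an assumed path for the third-party getpass_asterisk helper, and the prompt wrapper is hypothetical:

```python
from getpass_asterisk import getpass_asterisk  # assumed import path

def prompt_for_hf_token() -> str:
    """Normalise platform behaviour: on Windows 10 an empty response comes back
    as an empty string instead of raising EOFError, so raise it ourselves to
    keep the caller's existing control logic intact."""
    token = getpass_asterisk("HuggingFace access token (leave blank to skip): ")
    if not token.strip():
        raise EOFError
    return token
```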
Lincoln Stein
c50b64ec1d correct default menu entry in install.bat file 2023-02-03 13:30:21 -05:00
Lincoln Stein
76b0bdb6f9 Fix: upgrade fails if existing venv was created with symlinks (#2489)
if reinstalling over an existing installation where the .venv was
created with symlinks to system python instead of copies of the python
executable, the installer would raise a `SameFileError`, because it
would attempt to copy Python over itself. This fixes the issue.

Copying the executable is still preferred for new environments, because
this guarantees the stable Python version.
2023-02-03 13:29:20 -05:00
Lincoln Stein
b0ad109886 Merge branch 'main' into fix-samefile 2023-02-03 13:02:01 -05:00
Lincoln Stein
66b312c353 enhance console gui for invokeai-merge (#2480)
- Added modest adaptive behavior; if the screen is wide enough the three
checklists of models will be arranged in a horizontal row.
- Added color support
# What it looks like
On a wide window:

![image](https://user-images.githubusercontent.com/111189/216495149-0ceed761-b829-4b21-8e90-0b7faf2c7b72.png)
On a narrow window:

![image](https://user-images.githubusercontent.com/111189/216495239-1d6615cf-0e7e-44fe-83d7-513819635d8a.png)
2023-02-03 13:00:16 -05:00
Lincoln Stein
fc857f9d91 Merge branch 'main' into lstein/enhance-merge-models-gui 2023-02-03 12:36:23 -05:00
Lincoln Stein
d6bd0cbf61 Bugfixes for path finding during manual install (#2490)
- fixes bug in finding the source of the configs dir;
- updates the docs for manual install to clarify the preference for
keeping the `.venv` inside the runtime dir, and the caveat/extra steps
required if done otherwise
2023-02-03 11:02:47 -05:00
Lincoln Stein
a32f6e9ea7 use torch-cu117 from download.torch.org rather than pypi 2023-02-03 10:57:15 -05:00
Lincoln Stein
b41342a779 Merge branch 'main' into bugfix/config-manual-install 2023-02-03 10:28:18 -05:00
Lincoln Stein
7603c8982c feat: add copy image in share menu (#2484)
<img width="233" alt="Screenshot 2023-02-03 at 12 11 46"
src="https://user-images.githubusercontent.com/70191398/216510761-3e5013a3-5346-45d4-92e5-d913d035f1bc.png">
2023-02-03 10:27:54 -05:00
Lincoln Stein
d351e365d6 Merge branch 'main' into lstein/enhance-merge-models-gui 2023-02-03 10:27:32 -05:00
Lincoln Stein
d453afbf6b Merge branch 'main' into fix-samefile 2023-02-03 10:27:03 -05:00
Lincoln Stein
9ae55c91cc quench safety checker warnings from diffusers 2023-02-03 10:14:51 -05:00
Lincoln Stein
9e46badc40 convert no longer creates StableDiffusionGenerator pipelines unless asked to 2023-02-03 10:04:32 -05:00
Lincoln Stein
ca0f3ec0e4 fix launcher shell script to use correct names for ti and merge functions 2023-02-03 09:45:57 -05:00
Eugene Brodsky
4b9be6113d (docs) remove an obsolete symlink to a documentation file 2023-02-03 09:01:54 -05:00
Eugene Brodsky
31964c7c4c (docs) remove an obsolete manual install doc 2023-02-03 09:01:30 -05:00
Eugene Brodsky
64f9fbda2f (docs) update manual install documentation 2023-02-03 08:51:46 -05:00
psychedelicious
3ece2f19f0 Merge branch 'main' into share-copy-image 2023-02-04 00:46:48 +11:00
Eugene Brodsky
c38b0b906d (config) fix invokeai-configure path handling after manual install 2023-02-03 08:06:27 -05:00
Lincoln Stein
c79678a643 prevent crash when no default model defined 2023-02-03 02:27:50 -05:00
Lincoln Stein
2217998010 remove the environments-and-requirements directory 2023-02-03 01:49:30 -05:00
Eugene Brodsky
3b43f3a5a1 (installer) fix failure to create venv over an existing venv
if reinstalling over an existing installation where the .venv
was created with symlinks to system python instead of copies
of the python executable, the installer would raise a
SameFileError, because it would attempt to copy Python over
itself. This fixes the issue.
2023-02-03 00:36:28 -05:00
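A minimal sketch, with hypothetical path arguments, of tolerating SameFileError so that reinstalling over a symlinked venv does not abort:

```python
import shutil
from pathlib import Path

def copy_interpreter(src_python: Path, venv_python: Path) -> None:
    """Copy the Python executable into the new .venv, tolerating the case where
    the destination is a symlink back to the very same interpreter."""
    try:
        shutil.copy2(src_python, venv_python)
    except shutil.SameFileError:
        # Reinstalling over a venv that symlinked the system python: the file
        # is already in place, so there is nothing to copy.
        pass
```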
Lincoln Stein
3f193d2b97 attempted correction of white screen issue 2023-02-02 23:47:55 -05:00
Ryan Cao
9fe660c515 feat: add copy image in share menu 2023-02-03 12:10:33 +08:00
gogurtenjoyer
16356d5225 small f-string fix in generate.py
Probably low priority, but helps the error message be more clear by hopefully displaying model_name.
2023-02-02 19:33:17 -08:00
Lincoln Stein
e04cb70c7c rebuild front end 2023-02-02 21:55:01 -05:00
Lincoln Stein
ddd5137cc6 Update version 2023-02-02 21:17:53 -05:00
Lincoln Stein
b9aef33ae8 enhance console gui for invokeai-merge
- Added modest adaptive behavior; if the screen is wide enough the three
  checklists of models will be arranged in a horizontal row.
- Added color support
2023-02-02 20:26:45 -05:00
gogurtenjoyer
797e2f780d Add python version warning from the docs
Just a quick update about Python 3.11.
2023-02-02 19:28:49 -05:00
Lincoln Stein
0642728484 remove requirements step from install manual (#2442)
removing the step to link the requirements file from the docs for manual
installation, after commenting about it in #2431
2023-02-02 16:50:29 -05:00
Lincoln Stein
fe9b4f4a3c Merge branch 'main' into update/docs/remove-requirements-step 2023-02-02 16:14:45 -05:00
Lincoln Stein
756e50f641 Installer rewrite in Python (#2448)
## Summary

This PR rewrites the core of the installer in Python for cross-platform
compatibility. Filesystem path manipulation, platform/arch decisions and
various edge cases are handled in a more convenient fashion. The
original `install.bat.in`/`install.sh.in` scripts are kept as
entrypoints for their respective OSs, but only serve as thin wrappers to
the Python module.

In addition, it:

- builds and **packages the .whl with the installer**, so that
downloading a versioned installer will guarantee installation of the
same version of the application.
- updates shell entrypoints: 
- new commands are `invokeai`, `invokeai-configure`, `invokeai-ti`,
`invokeai-merge`.
- these commands will be available in the activated `.venv` or via the
launch scripts
- `invoke.py` and `configure_invokeai.py` scripts are deprecated but
kept around for backwards compatibility and keeping users' surprise to a
minimum.
- introduces a new `ldm/invoke/config` package and moves the
`configure_invokeai` script into it. Similarly, moves the Textual Inversion
script and TUI to `ldm/invoke/training`.
- moves the `configs` directory into the `ldm/invoke/config` package for
easy distribution.
- updates documentation to reflect all of the above changes
- fixes a failing test
- reduces wheel size to 3MB (from 27MB) by excluding unnecessary image
files under `assets`

⚠️ self-updating functionality and ability to install arbitrary
versions are still WIP. For now we can recommend downloading and running
the installer for a specific version as desired.

## Testing the source install

From the cloned source, check out this branch, and:

`$ python3 installer/main.py --root <path_to_destination>`

Also try:

`$ python3 installer/main.py ` - will prompt for paths
`$ python3 installer/main.py --yes` - will not prompt for any input

- try to combine the `--yes` and `--root` options
- try to install in destinations with "quirky" paths, such as paths
containing spaces in the directory name, etc.

## Testing the packaged install ("Automated Installer"):

Download the
[InvokeAI-installer-v2.3.0+a0.zip](https://github.com/invoke-ai/InvokeAI/files/10533913/InvokeAI-installer-v2.3.0%2Ba0.zip)
file, unzip it, and run the install script for your platform (preferably
in a terminal window)

OR make your own: from the cloned source, check out this branch, and:

```
cd installer
./create_installer.sh
# (do NOT tag/push when prompted! just say "no")
```

This will create the installation media:
`InvokeAI-installer-v2.3.0+a0.zip`. The installer is now
*platform-agnostic* - meaning, both Windows and *nix install resources
are packaged together.

Copy it somewhere as if it had been downloaded from the internet. Unzip
the file, enter the created `InvokeAI-Installer` directory, and run
`install.sh` or `install.bat` as applicable to your platform.

⚠️ NOTE!!! `install.sh` accepts the same arguments as are
applicable to the Python script, i.e. you can `install.sh --yes --root
....`. This is NOT yet supported by the Windows `.bat` script. Only
interactive installation is supported on Windows. (this is still a
TODO).
2023-02-02 16:08:10 -05:00
Lincoln Stein
2202288eb2 Merge branch 'main' into dev/installer 2023-02-02 15:17:40 -05:00
Lincoln Stein
fc3378bb74 Load legacy ckpt files as diffusers models (#2468)
* refactor ckpt_to_diffuser to allow converted pipeline to remain in memory

- This idea was introduced by Damian
- Note that although I attempted to use the updated HuggingFace module
  pipelines/stable_diffusion/convert_from_ckpt.py, it was unable to
  convert safetensors files for reasons I didn't dig into.
- Default is to extract EMA weights.

* add --ckpt_convert option to load legacy ckpt files as diffusers models

- not quite working - I'm getting artifacts and glitches in the
  converted diffuser models
- leave as draft for time being

* do not include safety checker in converted files

* add ability to control which vae is used

API now allows the caller to pass an external VAE model to the
checkpoint conversion process. In this way, if an external VAE is
specified in the checkpoint's config stanza, this VAE will be used
when constructing the diffusers model.

Tested with both regular and inpainting 1.X models.

Not tested with SD 2.X models!

---------

Co-authored-by: Jonathan <34005131+JPPhoto@users.noreply.github.com>
Co-authored-by: Damian Stewart <null@damianstewart.com>
2023-02-02 20:15:44 +00:00
Lincoln Stein
96228507d2 Merge branch 'main' into dev/installer 2023-02-02 14:30:35 -05:00
Lincoln Stein
1fe5ec32f5 Swap codeowners for installer (#2477)
This PR changes the codeowner for the installer directory from
@tildebyte to @ebr due to the former's time commitments.

Further reorganization of the codeowners is pending.
2023-02-02 14:27:31 -05:00
Lincoln Stein
6dee9051a1 swap codeowners for installer 2023-02-02 13:54:53 -05:00
Lincoln Stein
d58574ca46 Merge branch 'main' into dev/installer 2023-02-02 13:53:11 -05:00
Lincoln Stein
d282000c05 swap tildebyte to ebr as code owner 2023-02-02 13:52:45 -05:00
Kevin Turner
80c5322ccc fix(img2img): do not attempt to do a zero-step img2img when strength is low (#2472) 2023-02-02 10:04:09 -08:00
Kevin Turner
da181ce64e Merge branch 'main' into fix/img2img-low-strength 2023-02-02 09:40:16 -08:00
Kevin Turner
5ef66ca237 Fix typo in xformers version, 0.16 -> 0.0.16 (#2475) 2023-02-02 09:39:08 -08:00
Lincoln Stein
e99e720474 resolve conflicts with main and rebuild frontend 2023-02-02 11:00:33 -05:00
Kevin Turner
7aa331af8c Merge branch 'main' into fix/img2img-low-strength 2023-02-02 07:20:34 -08:00
noodlebox
9e943ff7dc Fix typo in xformers version, 0.16 -> 0.0.16 2023-02-02 05:26:15 -05:00
Kent Keirsey
b5040ba8d0 Build 2023-02-02 22:52:03 +13:00
Kent Keirsey
07462d1d99 Remove Inpaint Replace 2023-02-02 22:52:03 +13:00
Eugene Brodsky
d273fba42c (installer) upgrade pip in python3.9 environments 2023-02-02 01:30:47 -05:00
Eugene Brodsky
735545dca1 (installer) remove pip from bootstrap venv requirements as it was breaking bootstrapping 2023-02-02 01:18:02 -05:00
Eugene Brodsky
328f87559b (installer) remove leftover debug logs; fix typo 2023-02-02 01:03:51 -05:00
Eugene Brodsky
6f10b06a0c (installer) clarify user messaging during destination directory selection 2023-02-02 01:03:51 -05:00
Eugene Brodsky
fd60c8297d (package) provide more legacy aliases to entrypoints to minimize user surprise 2023-02-02 01:03:51 -05:00
Lincoln Stein
480064fa06 pip won't install itself without --upgrade 2023-02-02 00:48:53 -05:00
Lincoln Stein
3810d6a4ce numerous tweaks
1. only load triton on linux machines
2. require pip >= 23.0 so that editable installs can run without setup.py
3. model files default to SD-1.5, not 2.1
4. use diffusers model of inpainting rather than ckpt
5. selected a new set of initial models based on # of likes at huggingface
2023-02-02 00:28:38 -05:00
Kevin Turner
44d36a0e0b fix(img2img): do not attempt to do a zero-step img2img when strength is low 2023-02-01 18:42:54 -08:00
Lincoln Stein
3996ee843c fix bugs in launcher script installation
- launcher scripts are installed *before* the configure script runs,
  so that if something goes wrong in the configure script, the user
  can run invoke.{sh,bat} and get the option to re-run configure
- fixed typo in invoke.sh which misspelled name of invokeai-configure
2023-02-01 19:14:07 -05:00
Lincoln Stein
6d966313b9 add a --find-links argument to import custom wheels 2023-02-01 19:03:15 -05:00
Lincoln Stein
8ce9f07223 Merge branch 'main' into dev/installer 2023-02-01 17:50:22 -05:00
Lincoln Stein
11ac50a6ea install xformers and triton when CUDA torch requested 2023-02-01 17:41:38 -05:00
Matthias Wild
31146eb797 add workflow to clean caches after PR gets closed (#2450)
This helps at least a bit to get rid of all those huge caches
2023-02-01 19:06:07 +01:00
mauwii
99cd598334 add workflow to clean caches after PR gets closed 2023-02-01 18:32:29 +01:00
Lincoln Stein
5441be8169 requirements: add xformers for CUDA platforms (#2465)
[xformers
0.16](https://github.com/facebookresearch/xformers/releases/tag/v0.0.16)
was released earlier today, and is now installable from wheels on PyPI!

Fixes #1876.
2023-02-01 10:59:13 -05:00
Lincoln Stein
3e98b50b62 Merge branch 'main' into req-xformers 2023-02-01 10:29:49 -05:00
Lincoln Stein
5f16148dea Prevent actions from running on draft PRs (#2457)
Draft PRs are triggering actions on every commit (except
`test-invoke-pip.yml`).

I've added a conditional to each job to only run when the PR is not a
draft.

(maybe there is a reason we are running all applicable workflows on
draft PRs?)
2023-02-01 00:33:15 -05:00
Lincoln Stein
9628d45a92 Merge branch 'main' into build/no-actions-on-draft 2023-02-01 00:15:30 -05:00
Eugene Brodsky
6cbdd88fe2 (installer) correctly call invokeai entrypoints in .bat launch script 2023-02-01 00:08:18 -05:00
Eugene Brodsky
d423db4f82 (meta) add copyright statements for installer code 2023-01-31 23:47:36 -05:00
Eugene Brodsky
5c8c204a1b (installer) fix regression in directory selection 2023-01-31 23:47:36 -05:00
Eugene Brodsky
a03471c588 (installer) hide system and user site packages from the installer 2023-01-31 23:47:36 -05:00
Kevin Turner
6608343455 [enhancement] Print status message at startup when xformers is available (#2461) 2023-01-31 19:11:17 -08:00
Kevin Turner
abd972f099 Merge branch 'main' into feat/xformers-startup-message 2023-01-31 18:48:09 -08:00
Lincoln Stein
bd57793a65 fix img2img by working around pytorch bug (#2458)
horribly, temporarily send the vae to `.cpu()` so that good latents can
be produced

closes #2418
2023-01-31 21:46:05 -05:00
Kevin Turner
8cdc65effc Merge branch 'main' into fix_2418_simplified 2023-01-31 17:45:54 -08:00
Kevin Turner
85b553c567 requirements: add xformers for CUDA platforms
Now available from pip!
2023-01-31 16:51:43 -08:00
Matthias Wild
af74a2d1f4 fix broken Dockerfile (#2445)
also switch to `python:3.9-slim` since it has far fewer security issues
2023-02-01 01:47:25 +01:00
mauwii
6fdc9ac224 re-enable INVOKE_MODEL_RECONFIGURE 2023-02-01 01:21:07 +01:00
mauwii
8107d354d9 fix broken Dockerfile
also switch to `python:3.9-slim` since it has far fewer security issues
2023-02-01 01:21:07 +01:00
mauwii
7ca8abb206 integrate required changes
- also remove conda related things
- rename `invoke` to `invokeai`
- rename `configure_invokeai` to `invokeai-configure`
- rename venv back to common `.venv` but add `--prompt InvokeAI`
- remove outdated information
2023-02-01 01:17:24 +01:00
Lincoln Stein
28c17613c4 feat(inpaint): add solid infill for use with inpainting model (#2441)
A new infill method, **solid**: solid color, currently using middle
gray.

Fixes #2417

It seems like the runwayml inpainting model specifically expects those
masked areas to be blanked out like this.

I haven't tried the SD 2.0 inpainting model with it yet.
2023-01-31 18:27:48 -05:00
mauwii
eeb7a4c28c (ci) disable py3.9, lin-cuda-11_6 and win cuda 2023-02-01 00:24:56 +01:00
mauwii
0009d82a92 update test_path.py to also verify caution.png 2023-02-01 00:22:28 +01:00
Lincoln Stein
e6d52d7ce6 Merge branch 'main' into fix_2418_simplified 2023-01-31 18:11:56 -05:00
Lincoln Stein
8c726d3e3e Merge branch 'main' into build/no-actions-on-draft 2023-01-31 18:08:52 -05:00
Lincoln Stein
56e2d22b6e Merge branch 'main' into feat/solid-infill 2023-01-31 18:02:17 -05:00
Lincoln Stein
053d11fe30 fix(inpainting model): blank areas to be repainted in the masked image (#2447)
Otherwise the model seems too reluctant to change these areas, even
though the mask channel should allow it to.

This makes the solid infill method proposed by #2441 less necessary,
though I think there's still a place for an infill method that is faster
than patchmatch and more predictable than tiles.

Even with #2441, this PR is still useful because it influences all areas
to be painted, not just the infill area.

Fixes #2417
2023-01-31 18:01:33 -05:00
Lincoln Stein
0066187651 Merge branch 'main' into feat/solid-infill 2023-01-31 17:53:09 -05:00
Lincoln Stein
d3d24fa816 fill color is parameterized 2023-01-31 17:52:33 -05:00
Kevin Turner
4d58fed6b0 Merge branch 'main' into fix/inpainting-blank-slate 2023-01-31 11:04:56 -08:00
Kevin Turner
bde5874707 fix dimension errors when inpainting model is used with hires-fix (#2440) 2023-01-31 11:04:02 -08:00
Kevin Turner
eed802f5d9 Merge branch 'main' into fix/hires_inpaint 2023-01-31 09:34:29 -08:00
Lincoln Stein
c13e11a264 Merge branch 'dev/installer' of github.com:invoke-ai/InvokeAI into dev/installer 2023-01-31 12:26:19 -05:00
Lincoln Stein
1c377b7995 further improvements to ability to find location of data files
- implement the following pattern for finding data files under both
  regular and editable install conditions:

  import invokeai.foo.bar as bar
  path = bar.__path__[0]

- this *seems* to work reliably with Python 3.9. Testing on 3.10 needs
  to be performed.
2023-01-31 12:24:55 -05:00
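For context, a minimal sketch of the `__path__`-based lookup described in `1c377b7995`; the `invokeai.configs` sub-package and `INITIAL_MODELS.yaml` names below are illustrative assumptions, not taken from the commit:

```python
# Minimal illustration of the __path__ lookup pattern described above.
# "invokeai.configs" and "INITIAL_MODELS.yaml" are assumed names.
from pathlib import Path

import invokeai.configs as configs  # any data-bearing sub-package works

# __path__[0] points at the package's on-disk location, whether it was
# installed normally or editable (`pip install -e .`).
configs_dir = Path(configs.__path__[0])
initial_models = configs_dir / "INITIAL_MODELS.yaml"
print(initial_models, initial_models.exists())
```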
mauwii
efe8dcaae9 cleanup test_path.py, enable pytest in pipeline
temporarily enable 3.9 tests as well
2023-01-31 18:18:32 +01:00
Lincoln Stein
fc8e3dbcd3 fix crash when editing name of model
- fixes a spurious "unknown model name" error when trying to edit the
  short name of an existing model.
- relaxes naming requirements to include the ':' and '/' characters
  in model names
2023-01-31 09:59:58 -05:00
mauwii
ec1e83e912 add pytest to test path of frontend and configs 2023-01-31 09:06:06 +01:00
mauwii
ab9daf1241 remove frontend from configure_invokeai.py
since it does not get accessed there at all
2023-01-31 08:15:48 +01:00
mauwii
c061c1b1b6 fix frontend path
point to package's path instead of searching for it
2023-01-31 08:15:20 +01:00
Lincoln Stein
b9cc56593e print status message at startup when xformers is available 2023-01-30 22:01:06 -05:00
psychedelicious
6a0e1c8673 Merge branch 'main' into build/no-actions-on-draft 2023-01-31 12:00:38 +11:00
Kevin Turner
371edc993a Implement .swap() against diffusers 0.12 (#2385) 2023-01-30 15:56:24 -08:00
Lincoln Stein
d71734c90d update frontend path in lint test 2023-01-30 18:48:43 -05:00
Lincoln Stein
9ad4c03277 Various fixes
1) Downgrade numpy to avoid dependency conflict with numba
2) Move all non-ldm/invoke files into `invokeai`. This includes assets, backend, frontend, and configs.
3) Fix up the way that the backend finds the frontend and the generator finds the NSFW caution.png icon.
2023-01-30 18:42:17 -05:00
Damian Stewart
5299324321 workaround for pytorch bug, fixes #2418 2023-01-30 18:45:53 +01:00
Damian Stewart
817e36f8bf Merge branch 'diffusers_cross_attention_control_reimplementation' of github.com:damian0815/InvokeAI into diffusers_cross_attention_control_reimplementation 2023-01-30 16:23:52 +01:00
Damian Stewart
d044d4c577 rename override/restore methods to better reflect what they actually do 2023-01-30 16:23:44 +01:00
Damian Stewart
3f1120e6f2 Merge branch 'main' into diffusers_cross_attention_control_reimplementation 2023-01-30 16:17:25 +01:00
Damian Stewart
17d73d09c0 Revert "with diffusers cac, always run the original prompt on the first step"
This reverts commit 27ee939e4b.
2023-01-30 15:38:03 +01:00
Damian Stewart
478c379534 for cac make t_start=0.1 the default 2023-01-30 15:30:01 +01:00
Damian Stewart
c5c160a788 Merge branch 'diffusers_cross_attention_control_reimplementation' of github.com:damian0815/InvokeAI into diffusers_cross_attention_control_reimplementation 2023-01-30 14:51:06 +01:00
Damian Stewart
27ee939e4b with diffusers cac, always run the original prompt on the first step 2023-01-30 14:50:57 +01:00
psychedelicious
c222cf7e64 Prevents actions from running on draft PRs 2023-01-30 22:28:05 +11:00
Eugene Brodsky
b2a3b8bbf6 (installer) fix the create_installer.sh script so it instructs the user to deactivate an active venv 2023-01-30 03:42:27 -05:00
Eugene Brodsky
11cb03f7de (installer) fall back to attempted source install if wheel not found
If running `python3 installer/main.py` from the source distribution,
it would fail because it expected to find a wheel.

This PR tries to perform a source install by going one level up the directory
tree and checking for `pyproject.toml` and `ldm` directory entries to
confirm (to a degree) that this is an InvokeAI distribution.
2023-01-30 03:29:09 -05:00
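As an aside, a rough sketch of the fallback check described in `11cb03f7de`; the helper name and exact layout are assumptions, not the installer's actual code:

```python
# Illustrative sketch only; helper name and layout are assumptions.
from pathlib import Path


def looks_like_invokeai_source(installer_dir: Path) -> bool:
    """Heuristic: one level up there should be a pyproject.toml and an ldm/ dir."""
    root = installer_dir.parent
    return (root / "pyproject.toml").is_file() and (root / "ldm").is_dir()


if looks_like_invokeai_source(Path(__file__).resolve().parent):
    print("No wheel found; attempting a source install from the repo root")
```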
Eugene Brodsky
6b1dc34523 (installer) improve selection of destination directory 2023-01-30 03:15:05 -05:00
Eugene Brodsky
44786b0496 (installer) improve function naming 2023-01-29 23:39:14 -05:00
Lincoln Stein
d9ed0f6005 fix documentation of huggingface cache location (#2430)
* fix documentation of huggingface cache location

---------

Co-authored-by: Jonathan <34005131+JPPhoto@users.noreply.github.com>
2023-01-29 20:30:50 -06:00
Eugene Brodsky
2e7a002308 (installer) remove unnecessary shell options from the install wrapper script 2023-01-29 20:10:51 -05:00
Jonathan
5ce62e00c9 Merge branch 'main' into diffusers_cross_attention_control_reimplementation 2023-01-29 13:52:01 -06:00
Kevin Turner
5a8c28de97 Merge remote-tracking branch 'origin/main' into fix/hires_inpaint 2023-01-29 10:51:59 -08:00
Jonathan
07e03b31b7 Update --hires_fix (#2414)
* Update --hires_fix

Change `--hires_fix` to calculate initial width and height based on the model's resolution (if available) and with a minimum size.
2023-01-29 12:27:01 -06:00
Eugene Brodsky
5ee5c5a012 (training) correctly import TI module; fix type annotation 2023-01-28 19:09:16 -05:00
Eugene Brodsky
3075c99ed2 (ci) fix test that was failing due to CLI entrypoint change 2023-01-28 19:03:48 -05:00
Eugene Brodsky
2c0bee2a6d (config) ensure the correct 'invokeai' command is displayed to the user after configuration 2023-01-28 17:39:33 -05:00
Eugene Brodsky
8f86aa7ded (docs) update install docs to refer to the platform-agnostic installer 2023-01-28 17:39:33 -05:00
Eugene Brodsky
34e0d7aaa8 (config) rename all mentions of scripts/configure_invokeai.py to the new invokeai-configure command 2023-01-28 17:39:33 -05:00
Eugene Brodsky
abe4e1ea91 (scripts) improved script entrypoints 2023-01-28 17:39:33 -05:00
Eugene Brodsky
f1f8ce604a (installer) build .whl and distribute together with the installer; install from bundled .whl by default 2023-01-28 17:39:33 -05:00
Eugene Brodsky
47dbe7bc0d (assets) move 'caution.png' to avoid including entire 'assets' dir in the wheel
reduces wheel size to 3MB from 27MB
2023-01-28 17:39:33 -05:00
Eugene Brodsky
ebe6daac56 (installer) do not install if already in a venv 2023-01-28 17:39:33 -05:00
Eugene Brodsky
d209dab881 (installer) support both pip and source install; no longer support installing from a downloaded release .zip 2023-01-28 17:39:33 -05:00
Eugene Brodsky
2ff47cdecf (scripts) rename/reorganize CLI scripts
- add torch MPS fallback directly to CLI.py
- rename CLI scripts with `invoke-...` prefix
- delete long-deprecated scripts
- add a missing package dependency
- delete setup.py as obsolete
2023-01-28 17:39:33 -05:00
Eugene Brodsky
22c34aabfe (package) move TI scripts into a module; update packaging of 'configs' dir 2023-01-28 17:39:33 -05:00
Eugene Brodsky
b58a80109b (test) tweak pytest coverage options
- remove redundant options (unchanged from defaults)
- don't test 3rd party code
- omit fully covered files from coverage report
- gitignore junit (xml) test output directory
2023-01-28 17:39:33 -05:00
Eugene Brodsky
c5a9e70e7f (parser) fix missing argument default in parse_legacy_blend 2023-01-28 17:39:33 -05:00
Eugene Brodsky
c5914ce236 (installer) new torch index urls + support installation from PyPi 2023-01-28 17:39:33 -05:00
Eugene Brodsky
242abac12d (installer) add a --y[es[to_all]] argument for a fully hands-off install/config 2023-01-28 17:39:33 -05:00
Eugene Brodsky
4b659982b7 (installer) install.bat wrapper for the python script 2023-01-28 17:39:33 -05:00
Eugene Brodsky
71733bcfa1 (installer) copy launch/update scripts to the root dir; improve launch experience on Linux/Mac
- install.sh is now a thin wrapper around the pythonized install script
- install.bat not done yet - to follow
- user messaging is tailored to the current platform (paste shortcuts, file paths, etc)
- emit invoke.sh/invoke.bat scripts to the runtime dir
- improve launch scripts (add help option, etc)
- only emit the platform-specific scripts
2023-01-28 17:39:33 -05:00
Eugene Brodsky
d047e070b8 (config) fix config file creation in edge cases
if the config directory is missing, initialize it using the standard
process of copying it over, instead of failing to create the config file

this can happen if the user is re-running the config script in a directory which
already has the init file, but no configs dir
2023-01-28 17:39:33 -05:00
Eugene Brodsky
02c530e200 (installer) work around Windows install issues 2023-01-28 17:39:33 -05:00
Eugene Brodsky
d36bbb817c (installer) use pep517 for installing dependencies
the 'setup.py install' method is deprecated in favour of a
build-system independent format: https://peps.python.org/pep-0517/

this is needed to install dependencies that don't have a pyproject.toml
file (only setup.py) in a forward-compatible way
2023-01-28 17:39:33 -05:00
Eugene Brodsky
9997fde144 (config) moving the 'configs' dir into the 'config' module
This allows reliable distribution of the initial 'configs' directory
with the Python package, and enables the configuration script to be run
from anywhere, as long as the virtual environment is available on the sys.path
2023-01-28 17:39:33 -05:00
Eugene Brodsky
9e22ed5c12 (installer) ignore temporary venv cleanup errors on Windows
There is a race condition affecting the 'tempfile' module on Windows.
A PermissionError is raised when cleaning up the temp dir.
Python 3.10 introduced a flag to suppress this error.

Windows + Python 3.9 users will receive an unpleasant stack trace for now
2023-01-28 17:39:32 -05:00
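For reference, a minimal sketch of the workaround described in `9e22ed5c12`, assuming the cleanup happens via `tempfile.TemporaryDirectory` (Python 3.10 added the `ignore_cleanup_errors` flag):

```python
# Sketch of the workaround, not the installer's exact code.
import sys
import tempfile

if sys.version_info >= (3, 10):
    # Python 3.10+: suppress the PermissionError raised during cleanup on Windows
    tmp = tempfile.TemporaryDirectory(ignore_cleanup_errors=True)
else:
    # Python 3.9: no such flag; Windows users may still see the stack trace
    tmp = tempfile.TemporaryDirectory()

with tmp as tmpdir:
    ...  # download / unpack into tmpdir
```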
Eugene Brodsky
169c56e471 (installer) install PyTorch from correct repositories 2023-01-28 17:39:32 -05:00
Eugene Brodsky
b186965e77 (installer) ask the user for their GPU type; improve other messaging 2023-01-28 17:39:32 -05:00
Eugene Brodsky
88526b9294 (config) move configure_invokeai script to the config module for easier importing 2023-01-28 17:39:32 -05:00
Eugene Brodsky
071a438745 (installer) add graphics accelerator selection 2023-01-28 17:39:32 -05:00
Eugene Brodsky
93129fde32 (installer) run configure_invokeai from within the installer 2023-01-28 17:39:32 -05:00
Eugene Brodsky
802b95b9d9 (installer) use prompt-toolkit for directory picking instead of tkinter 2023-01-28 17:39:32 -05:00
Eugene Brodsky
c279314cf5 (installer) use plumbum for better stdout streaming 2023-01-28 17:39:32 -05:00
Eugene Brodsky
f75b194b76 (installer) PoC to install the app (source installer style) into the app venv 2023-01-28 17:39:32 -05:00
Eugene Brodsky
bf1996bbcf (installer) add venv creation for the app 2023-01-28 17:39:32 -05:00
Eugene Brodsky
d3962ab7b5 (installer) Windows fixes 2023-01-28 17:39:32 -05:00
Eugene Brodsky
2296f5449e (installer) initial work on the installer 2023-01-28 17:39:32 -05:00
Kevin Turner
b6d37a70ca fix(inpainting model): threshold mask to avoid gray blurry seam 2023-01-28 13:34:22 -08:00
Kevin Turner
71b6ddf5fb fix(inpainting model): blank areas to be repainted in the masked image
Otherwise the model seems too reluctant to change these areas, even though the mask channel should allow it to.
2023-01-28 11:10:32 -08:00
mauwii
14de7ed925 remove requirements step from install manual 2023-01-28 00:58:32 +01:00
Kevin Turner
6556b200b5 remove experimental "blur" infill
It seems counterproductive for use with the inpainting model, and not especially useful otherwise.
2023-01-27 15:25:50 -08:00
Kevin Turner
d627cd1865 feat(inpaint): add simpler infill methods for use with inpainting model 2023-01-27 14:28:16 -08:00
Kevin Turner
09b6104bfd refactor(txt2img2img): factor out tensor shape 2023-01-27 12:04:12 -08:00
Kevin Turner
1bb5b4ab32 fix dimension errors when inpainting model is used with hires-fix 2023-01-27 11:52:05 -08:00
Lincoln Stein
c18db4e47b removed defunct textual inversion script (#2433)
The original textual inversion script in scripts is now superseded. The
replacement can be found in ldm/invoke/textual_inversion.py and is a
merging of the command line and front end scripts. After running `pip
install -e .` there will be a `textual_inversion` command on your path.
You can activate the front end this way:

`textual_inversion -gui`
2023-01-27 10:44:34 -05:00
Jonathan
f9c92e3576 Merge branch 'main' into bugfix/remove-defunct-scripts 2023-01-27 08:32:15 -06:00
psychedelicious
1ceb7a60db adds double-click to reset view to 100% (#2436)
Adds double-click to reset canvas view to 100%.

- Adds hook to manage single and double clicks
- Single Click `Reset Canvas View` --> scale to fit, no change to
current behaviour
- Double Click `Reset Canvas View` --> set scale to 1
2023-01-28 00:56:24 +11:00
psychedelicious
f509650ec5 adds double-click to reset view to 100% 2023-01-27 08:30:24 -05:00
psychedelicious
0d0f35a1e2 Fix download button styling (#2435)
fixes #2383
2023-01-28 00:29:34 +11:00
psychedelicious
6dbc42fc1a fixes download button styling 2023-01-27 20:23:12 +11:00
Lincoln Stein
f6018fe5aa removed defunct textual inversion script 2023-01-26 23:35:09 -05:00
blessedcoolant
e4cd66216e Frontend Build (diffusers-mm-fixes) 2023-01-27 17:23:25 +13:00
blessedcoolant
995fbc78c8 Diffusers Model Manager Fixes 2023-01-27 17:23:25 +13:00
blessedcoolant
3083f8313d Default Seam Steps to 30
Seems to be a temporary solution for seams looking horrible with some diffusers models.
2023-01-27 17:23:25 +13:00
Lincoln Stein
c0614ac7f3 Improve configuration of initial Waifu models (#2426)
Testing suggests that the diffusers versions of Waifu-1.4 and anything-v4.0
require the `sd-vae-ft-mse` VAE to generate decent images, so the
appropriate arguments have been added to the initial model file.
2023-01-26 18:18:00 -05:00
Lincoln Stein
0186630514 Merge branch 'main' into install/better-initial-models 2023-01-26 17:42:10 -05:00
Lincoln Stein
d53df09203 [enhancement] Improve organization & behavior of model merging and textual inversion scripts (#2427)
- Model merging and textual inversion scripts have been moved into
`ldm/invoke`, which allows them to be installed properly by
pyproject.toml.
- As part of the pyproject install, the .py suffix is removed from the
command. I.e. use `invoke`, `configure_invokeai`, `merge_models` and
`textual_inversion`.
- GUI versions are activated by adding `--gui` to the command. Without
this, you get a classical argv-based command. Example: `merge_models
--gui`
- Fixed up the launcher scripts to accommodate new naming scheme.
- Keyboard behavior of the GUI front ends has been improved. You can now
use up and down arrow to move from field to field, in addition to <tab>
and ctrl-N/ctrl-P
2023-01-26 17:36:45 -05:00
Lincoln Stein
12a29bfbc0 Merge branch 'main' into install/change-script-locations 2023-01-26 17:10:33 -05:00
Lincoln Stein
f36114eb94 Fix Sliders unable to take typed input (#2407)
So far the slider component was unable to take typed input due to a
bunch of issues that were a pain to solve. This PR fixes it.

Things to test:

- Moving the slider also updates the value in the input text box.
- Input text box next to slider can be changed in two ways: If you type
a manual value, the slider will be updated when you lose focus from the
input box. If you use the stepper icons to update the values, the slider
should update immediately.
- Make sure the reset buttons next to the slider are updating correctly
and make sure this updates both the slider and the input box values.
- Brush Size slider -> make sure the hotkeys are updating the input box
too.
2023-01-26 17:10:16 -05:00
Lincoln Stein
c255481c11 Merge branch 'main' into slider-fix 2023-01-26 16:20:25 -05:00
Lincoln Stein
7f81105acf dev: update to diffusers 0.12, transformers 4.26 (#2420)
Happy New Year!
2023-01-26 16:18:37 -05:00
Lincoln Stein
c8de679dc3 Merge branch 'main' into update-diffusers 2023-01-26 15:43:41 -05:00
Lincoln Stein
85b18fe9ee Merge branch 'main' into install/better-initial-models 2023-01-26 15:42:13 -05:00
Lincoln Stein
e0d8c19da6 fix indentation problem 2023-01-26 15:39:59 -05:00
Lincoln Stein
5567808237 tweak documentation 2023-01-26 15:28:54 -05:00
Lincoln Stein
2817f8a428 update launcher shell scripts for new script names & paths 2023-01-26 15:26:38 -05:00
Lincoln Stein
8e4c044ca2 clean up tab/cursor behavior in textual inversion txt gui 2023-01-26 15:18:28 -05:00
Lincoln Stein
9dc3832b9b clean up merge_models 2023-01-26 15:10:16 -05:00
Lincoln Stein
046abb634e Remove dependency on original clipseg library for text masking (#2425)
- This replaces the original clipseg library with the transformers
version from HuggingFace.
- This should make it possible to register InvokeAI at PyPi and do a
fully automated pip-based install.
- Minor regression: it is no longer possible to specify which device the
clipseg model will be loaded into, and it will reside in CPU. However,
performance is more than acceptable.
2023-01-26 12:14:13 -05:00
Lincoln Stein
d3a469d136 fix location of textual_inversion script 2023-01-26 11:56:23 -05:00
Lincoln Stein
e79f89b619 improve initial model configuration 2023-01-26 11:53:06 -05:00
Lincoln Stein
cbd967cbc4 add documentation caveat about location of HF cached models 2023-01-26 11:48:03 -05:00
damian
e090c0dc10 try without setting every time 2023-01-26 17:46:51 +01:00
damian
c381788ab9 don't restore None 2023-01-26 17:44:27 +01:00
damian
fb312f9ed3 use the correct value - whoops 2023-01-26 17:30:29 +01:00
damian
729752620b trying out JPPhoto's patch on vast.ai 2023-01-26 17:27:33 +01:00
damian
8ed8bf52d0 use 'auto' slice size 2023-01-26 17:04:22 +01:00
Lincoln Stein
a49d546125 simplified code a bit 2023-01-26 09:46:34 -05:00
Lincoln Stein
288e31fc60 remove dependency on original clipseg library
- This replaces the original clipseg library with the transformers
  version from HuggingFace.
- This should make it possible to register InvokeAI at PyPi and do
  a fully automated pip-based install.
- Minor regression: it is no longer possible to specify which device
  the clipseg model will be loaded into, and it will reside in CPU.
  However, performance is more than acceptable.
2023-01-26 09:35:16 -05:00
Lincoln Stein
7b2c0d12a3 add missing VAEs to initial diffuser models 2023-01-26 00:25:39 -05:00
Kevin Turner
2978c3eb8d Merge branch 'main' into update-diffusers 2023-01-25 18:42:00 -08:00
Damian Stewart
5e7ed964d2 wip updating docs 2023-01-25 23:49:38 +01:00
Damian Stewart
93a24445dc Merge remote-tracking branch 'upstream/main' into diffusers_cross_attention_control_reimplementation 2023-01-25 23:05:39 +01:00
Damian Stewart
95d147c5df MPS support: negatory 2023-01-25 23:03:30 +01:00
Damian Stewart
41aed57449 wip tracking down MPS slicing support 2023-01-25 22:27:23 +01:00
Damian Stewart
34a3f4a820 cleanup 2023-01-25 21:47:17 +01:00
Damian Stewart
1f5ad1b05e sliced swap working 2023-01-25 21:38:27 +01:00
blessedcoolant
87c63f1f08 Slider Fix Build 2023-01-26 09:04:52 +13:00
blessedcoolant
5b054dd5b7 Conflict Resolved Build (slider-fix) 2023-01-26 09:04:20 +13:00
blessedcoolant
fc5c8cc800 Merge branch 'main' into slider-fix 2023-01-26 09:03:02 +13:00
blessedcoolant
eb2ca4970b Add Dutch Localization Build 2023-01-26 08:56:38 +13:00
blessedcoolant
c2b10e6461 Add Dutch Localization 2023-01-26 08:56:38 +13:00
Dennis
190d266060 Dutch localization 2023-01-26 08:56:38 +13:00
Kevin Turner
8c8e1a448d dev: update to diffusers 0.12, transformers 4.26
Happy New Year!
2023-01-25 10:51:56 -08:00
Damian Stewart
c52dd7e3f4 Merge branch 'diffusers_cross_attention_control_reimplementation' of github.com:damian0815/InvokeAI into diffusers_cross_attention_control_reimplementation 2023-01-25 14:51:15 +01:00
Damian Stewart
a4aea1540b more wip sliced attention (.swap doesn't work yet) 2023-01-25 14:51:08 +01:00
Kevin Turner
3c53b46a35 Merge branch 'main' into diffusers_cross_attention_control_reimplementation 2023-01-24 19:32:34 -08:00
blessedcoolant
65fd6cd105 Merge branch 'main' into slider-fix 2023-01-25 08:28:37 +13:00
Lincoln Stein
61403fe306 fix second conflict in CLI.py 2023-01-24 14:21:21 -05:00
Lincoln Stein
b2f288d6ec fix conflict in CLI.py 2023-01-24 14:20:40 -05:00
blessedcoolant
d1d12e4f92 Merge branch 'main' into slider-fix 2023-01-25 08:06:30 +13:00
Lincoln Stein
eaf7934d74 [Enhancements] Allow user to specify VAE with !import_model and delete underlying model with !del_model (#2369)
Fix two deficiencies in the CLI's support for model management:

1. `!import_model` did not allow the user to specify a VAE file. This is now
fixed.
2. `!del_model` did not offer the user the opportunity to delete the
underlying
       weights file or diffusers directory. This is now fixed.
2023-01-24 13:43:16 -05:00
Lincoln Stein
079ec4cb5c Merge branch 'main' into feat/import-with-vae 2023-01-24 13:16:00 -05:00
blessedcoolant
38d0b1e3df Merge branch 'main' into slider-fix 2023-01-25 07:14:26 +13:00
blessedcoolant
fc6500e819 Fix Inpaint Replace Slider 2023-01-25 07:13:01 +13:00
Lincoln Stein
f521f5feba improve UI of textual inversion frontend (#2333)
- File selection box now accepts directories that don't exist yet.
- Fixed crash when resume is selected and no files are available to resume
from.
2023-01-24 12:22:17 -05:00
Lincoln Stein
ce865a8d69 Merge branch 'main' into slider-fix 2023-01-24 12:21:39 -05:00
Lincoln Stein
00839d02ab Merge branch 'main' into lstein-improve-ti-frontend 2023-01-24 11:53:03 -05:00
Lincoln Stein
ce52d0c42b Merge branch 'main' into feat/import-with-vae 2023-01-24 11:52:40 -05:00
Lincoln Stein
f687d90bca [feat] Better status reporting when loading embeds and concepts (#2372)
This PR improves the console reporting of the process of recognizing
trigger tokens and loading their embeds.

1. Do not report "concept is not known to HuggingFace" if the trigger
term is in fact a local embedding trigger.
2. When a trigger term is first recognized during a session, report the
fact.
This should help debug embedding issues in the future.

Note that the local embeddings produced by the new InvokeAI TI training
script default to the format <trigger> with literal angle brackets. This
sets them off from the rest of the text well and will enable
autocomplete at some point in the future. However, this means that they
supersede like-named HuggingFace concepts, and may cause problems for
people uploading them to the HuggingFace repository (although that
problem already exists).
2023-01-24 09:35:53 -05:00
Lincoln Stein
7473d814f5 remove original setup.py 2023-01-24 09:11:05 -05:00
Lincoln Stein
b2c30c2093 Merge branch 'main' into bugfix/embed-loading-messages 2023-01-24 09:08:13 -05:00
Lincoln Stein
a7048eea5f Merge branch 'main' into feat/import-with-vae 2023-01-24 09:07:41 -05:00
Lincoln Stein
87c9398266 [enhancement] import .safetensors ckpt files directly (#2353)
This small fix makes it possible to import and run safetensors ckpt
files directly without doing a conversion step first.
2023-01-24 09:06:49 -05:00
Damian Stewart
63c6019f92 sliced attention processor wip (untested) 2023-01-24 14:46:32 +01:00
blessedcoolant
8eaf0d8bfe Fix Slider Build 2023-01-24 16:44:58 +13:00
blessedcoolant
5344481809 Fix Slider not being able to take typed input 2023-01-24 16:43:29 +13:00
Lincoln Stein
9f32daab2d Merge branch 'main' into lstein-import-safetensors 2023-01-23 21:58:07 -05:00
Lincoln Stein
884768c39d Make sure --free_gpu_mem still works when using CKPT-based diffuser model (#2367)
This PR attempts to fix the `--free_gpu_mem` option that was not working in
CKPT-based diffuser models after #1583.

I noticed that the memory usage after #1583 did not decrease after
generating an image when the `--free_gpu_mem` option was enabled.
It turns out that the option was not propagated into the `Generator`
instance, hence generation would always run without the memory-saving
procedure.

This PR is also related to #2326. Initially, I was trying to make
`--free_gpu_mem` work on 🤗 diffuser models as well.
In the process, I noticed that InvokeAI will raise an exception when
`--free_gpu_mem` is enabled.
I tried to quickly fix it by simply ignoring the exception and producing a
warning message to the user's console.
2023-01-23 21:48:23 -05:00
Lincoln Stein
bc2194228e stability improvements
- provide full traceback when a model fails to load
- fix VAE record for VoxelArt; otherwise load fails
2023-01-23 21:40:27 -05:00
Lincoln Stein
10c3afef17 Merge branch 'main' into bugfix/free-gpu-mem-diffuser 2023-01-23 21:15:12 -05:00
Lincoln Stein
98e9721101 correct fail-to-resume error
- applied https://github.com/huggingface/diffusers/pull/2072 to fix
  error in epoch calculation that caused script not to resume from
  latest checkpoint when asked to.
2023-01-23 21:04:07 -05:00
blessedcoolant
66babb2e81 Japanese Localization Build 2023-01-24 09:07:29 +13:00
blessedcoolant
31a967965b Add Japanese Localization 2023-01-24 09:07:29 +13:00
Katsuyuki-Karasawa
b9c9b947cd update japanese translation 2023-01-24 09:07:29 +13:00
唐澤 克幸
1eee08a070 add Japanese Translation 2023-01-24 09:07:29 +13:00
Lincoln Stein
aca1b61413 [Feature] Add interactive diffusers model merger (#2388)
This PR adds `scripts/merge_fe.py`, which will merge any 2-3 diffusers
models registered in InvokeAI's `models.yaml`, producing a new merged
model that will be registered as well.

Currently this script will only work if all models to be merged are
known by their repo_ids. Local models, including those converted from
ckpt files, will cause a crash due to a bug in the diffusers
`checkpoint_merger.py` code. I have made a PR against
huggingface/diffusers which fixes this:
https://github.com/huggingface/diffusers/pull/2060
2023-01-23 09:27:05 -05:00
Lincoln Stein
e18beaff9c Merge branch 'main' into feat/merge-script 2023-01-23 09:05:38 -05:00
Kevin Turner
d7554b01fd fix typo in prompt 2023-01-23 00:24:06 -08:00
Kevin Turner
70f8793700 Merge branch 'main' into feat/import-with-vae 2023-01-23 00:17:46 -08:00
Kevin Turner
0d4e6cbff5 Merge branch 'main' into bugfix/embed-loading-messages 2023-01-23 00:12:33 -08:00
Kevin Turner
ea61bf2c94 [bugfix] ckpt conversion script respects cache in ~/invokeai/models (#2395) 2023-01-23 00:07:23 -08:00
Lincoln Stein
7dead7696c fixed setup.py to install the new scripts 2023-01-23 00:43:15 -05:00
Lincoln Stein
ffcc5ad795 conversion script uses invokeai models cache by default 2023-01-23 00:35:16 -05:00
Lincoln Stein
48deb3e49d add model merging documentation and launcher script menu entries 2023-01-23 00:20:28 -05:00
Lincoln Stein
6c31225d19 create small module for merge importation logic 2023-01-22 18:07:53 -05:00
Damian Stewart
c0610f7cb9 pass missing value 2023-01-22 18:19:06 +01:00
Damian Stewart
313b206ff8 squash float16/float32 mismatch on linux 2023-01-22 18:13:12 +01:00
Lincoln Stein
f0fe483915 Merge branch 'main' into feat/merge-script 2023-01-21 18:42:40 -05:00
Lincoln Stein
4ee8d104f0 working, but needs diffusers PR to be accepted 2023-01-21 18:39:13 -05:00
Kevin Turner
89791d91e8 fix: use pad_token for padding (#2381)
Stable Diffusion 2 does not use eos_token for padding.

Fixes #2378
2023-01-21 13:30:03 -08:00
Kevin Turner
87f3da92e9 Merge branch 'main' into fix/sd2-padding-token 2023-01-21 13:11:02 -08:00
Lincoln Stein
f169bb0020 fix long prompt weighting bug in ckpt codepath (#2382) 2023-01-21 15:14:14 -05:00
Damian Stewart
155efadec2 Merge branch 'main' into fix/sd2-padding-token 2023-01-21 21:05:40 +01:00
Damian Stewart
bffe199ad7 SwapCrossAttnProcessor working - tested on mac CPU (MPS doesn't work) 2023-01-21 20:54:18 +01:00
Damian Stewart
0c2a511671 wip SwapCrossAttnProcessor 2023-01-21 18:07:36 +01:00
Damian Stewart
e94c8fa285 fix long prompt weighting bug in ckpt codepath 2023-01-21 12:08:21 +01:00
Lincoln Stein
b3363a934d Update index.md (#2377) 2023-01-21 00:17:23 -05:00
Lincoln Stein
599c558c87 Merge branch 'main' into patch-1 2023-01-20 23:54:40 -05:00
Kevin Turner
d35ec3398d fix: use pad_token for padding
Stable Diffusion does not use the eos_token for padding.
2023-01-20 19:25:20 -08:00
Lincoln Stein
96a900d1fe correctly import diffusers models by their local path
- Corrects a bug in which the local path was treated as a repo_id
2023-01-20 20:13:43 -05:00
Lincoln Stein
f00f7095f9 Add instructions for installing xFormers on linux (#2360)
I've written up the install procedure for xFormers on Linux systems.

I need help with the Windows install; I don't know what the build
dependencies (compiler, etc) are. This section of the docs is currently
empty.

Please see `docs/installation/070_INSTALL_XFORMERS.md`
2023-01-20 17:57:12 -05:00
mauwii
d7217e3801 disable unstable CI tests for Windows runners
therefore enable all pytorch versions to verify installation
2023-01-20 23:30:25 +01:00
mauwii
fc5fdae562 update installation instructions 2023-01-20 23:30:25 +01:00
mauwii
a491644e56 fix dependencies/requirements 2023-01-20 23:30:24 +01:00
mauwii
ec2a509e01 make images in README.md compatible to pypi
also add missing new-lines before/after headings
2023-01-20 23:30:24 +01:00
mauwii
6a3a0af676 update test-invoke-pip.yml
- remove stable-diffusion-model from matrix
- add windows-cuda-11_6 and linux-cuda-11_6
- enable linux-cpu
- disable windows-cpu
- change step order
- remove job env
- set runner.os specific env
- install editable
- cache models folder
- remove `--model` and `--root` arguments from invoke command
2023-01-20 23:30:24 +01:00
mauwii
ef4b03289a enable image generating step for windows as well
- also remove left over debug lines and development branch leftover
2023-01-20 23:30:24 +01:00
mauwii
963b666844 fix memory issue on windows runner
- use cpu version which is only 162.6 MB
- set `INVOKEAI_ROOT=C:\InvokeAI` on Windows runners
2023-01-20 23:30:24 +01:00
mauwii
5a788f8f73 fix test-invoke-pip.yml matrix 2023-01-20 23:30:24 +01:00
mauwii
5afb63e41b replace legacy setup.py with pyproject.toml
other changes which were required:
- move configure_invokeai.py into ldm.invoke
- update files which imported configure_invokeai to use new location:
    - ldm/invoke/CLI.py
    - scripts/load_models.py
    - scripts/preload_models.py
- update test-invoke-pip.yml:
    - remove pr type "converted_to_draft"
    - remove reference to dev/diffusers
    - remove no more needed requirements from matrix
    - add pytorch to matrix
    - install via `pip3 install --use-pep517 .`
    - use the created executables
        - this should also fix configure_invoke not being executed on Windows
To install use `pip install --use-pep517 -e .` where `-e` is optional
2023-01-20 23:30:24 +01:00
Lincoln Stein
279ffcfe15 Merge branch 'main' into lstein/xformers-instructions 2023-01-20 17:29:39 -05:00
Lincoln Stein
9b73292fcb add pip install documentation for xformers 2023-01-20 17:28:14 -05:00
Lincoln Stein
67d91dc550 Merge branch 'bugfix/embed-loading-messages' of github.com:invoke-ai/InvokeAI into bugfix/embed-loading-messages 2023-01-20 17:16:50 -05:00
Lincoln Stein
a1c0818a08 ignore .DS_Store files when scanning Mac embeddings 2023-01-20 17:16:39 -05:00
Lincoln Stein
2cf825b169 Merge branch 'main' into bugfix/embed-loading-messages 2023-01-20 17:14:46 -05:00
Lincoln Stein
292b0d70d8 Merge branch 'lstein-improve-ti-frontend' of github.com:invoke-ai/InvokeAI into lstein-improve-ti-frontend 2023-01-20 17:14:08 -05:00
Lincoln Stein
c3aa3d48a0 ignore .DS_Store files when scanning Mac embeddings 2023-01-20 17:13:32 -05:00
Lincoln Stein
9e3c947cd3 Merge branch 'main' into lstein-improve-ti-frontend 2023-01-20 17:01:09 -05:00
Lincoln Stein
4b8aebabfb add diffusers repo as a reference for further reading 2023-01-20 16:59:34 -05:00
Lincoln Stein
080fc4b380 add documentation and minor bug fixes
- Added new documentation for textual inversion training process
- Move `main.py` into the deprecated scripts folder
- Fix bug in `textual_inversion.py` which was causing it to not load
  the globals module correctly.
- Sort models alphabetically in console front end
- Only show diffusers models in console front end
2023-01-20 16:55:50 -05:00
Lincoln Stein
195294e74f sort models alphabetically 2023-01-20 15:17:54 -05:00
michaelk71
da81165a4b Update index.md 2023-01-20 19:03:12 +01:00
Lincoln Stein
f3ff386491 [enhancement] Reorganize form for textual inversion training (#2375)
- Add num_train_epochs
- Reorganize widgets so all sliders that control # of steps are together
2023-01-20 10:58:26 -05:00
Lincoln Stein
da524f159e Merge branch 'main' into feat/enhance-ti-training-ui 2023-01-20 10:28:27 -05:00
Lincoln Stein
2d1eeec063 Save HFToken only if it is present (#2370)
Fixes https://github.com/invoke-ai/InvokeAI/issues/2083
2023-01-19 22:16:19 -05:00
Nicholas Koh
a8bb1a1109 Save HFToken only if it is present 2023-01-19 21:47:27 -05:00
Lincoln Stein
d9fa505412 [feat] Provide option to disable xformers from command line (#2373)
Starting `invoke.py` with --no-xformers will disable
memory-efficient-attention support if xformers is installed.

For symmetry, `--xformers` will enable support, but this is already the
default if xformers is available.
2023-01-19 19:15:57 -05:00
Lincoln Stein
02ce602a38 Merge branch 'main' into feat/disable-xformers 2023-01-19 18:45:59 -05:00
Lincoln Stein
9b1843307b [enhancement] Reorganize form for textual inversion training
- Add num_train_epochs
- Reorganize widgets so all sliders that control # of steps are together
2023-01-19 18:43:12 -05:00
Lincoln Stein
f0010919f2 Merge branch 'main' into bugfix/free-gpu-mem-diffuser 2023-01-19 18:03:36 -05:00
Lincoln Stein
d113b4ad41 [bugfix] suppress extraneous warning messages generated by diffusers (#2374)
This commit suppresses a few irrelevant warning messages that the
diffusers module produces:

1. The warning that turning off the NSFW detector makes you an
irresponsible person.
2. Warnings about running fp16 models stored on the CPU (we are not running
them on the CPU, just caching them in CPU RAM)
2023-01-19 18:00:31 -05:00
Lincoln Stein
895505976e [bugfix] suppress extraneous warning messages generated by diffusers
This commit suppresses a few irrelevant warning messages that the
diffusers module produces:

1. The warning that turning off the NSFW detector makes you an
irresponsible person.
2. Warnings about running fp16 models stored on the CPU (we are not running
   them on the CPU, just caching them in CPU RAM)
2023-01-19 16:49:40 -05:00
Lincoln Stein
171f4aa71b [feat] Provide option to disable xformers from command line
Starting `invoke.py` with --no-xformers will disable
memory-efficient-attention support if xformers is installed.

--xformers will enable support, but this is already the
default.
2023-01-19 16:16:35 -05:00
Lincoln Stein
775e1a21c7 improve embed trigger token not found error
- Now indicates that the trigger is *neither* a huggingface concept,
  nor the trigger of a locally loaded embed.
2023-01-19 15:46:58 -05:00
Lincoln Stein
3c3d893b9d improve status reporting when loading local and remote embeddings
- During trigger token processing, emit better status messages indicating
  which triggers were found.
- Suppress the message "<token> is not known to HuggingFace library" when the
  token is in fact a local embed.
2023-01-19 15:43:52 -05:00
Lincoln Stein
33a5c83c74 during ckpt->diffusers tell user when custom autoencoder can't be loaded
- When a ckpt or safetensors file uses an external autoencoder and we
  don't know which diffusers model corresponds to this (if any!), then
  we fall back to using stabilityai/sd-vae-ft-mse
- This commit improves error reporting so that the user knows what is happening.
2023-01-19 12:05:49 -05:00
Lincoln Stein
7ee0edcb9e when converting a ckpt/safetensors model, preserve vae in diffusers config
- After successfully converting a ckpt file to diffusers, model_manager
  will attempt to add an equivalent 'vae' entry to the resulting
  diffusers stanza.

- This is a bit of a hack, as it relies on a hard-coded dictionary
  to map ckpt VAEs to diffusers VAEs. The correct way to do this
  would be to convert the VAE to a diffusers model and then point
  to that. But since (almost) all models are using vae-ft-mse-840000-ema-pruned,
  I did it the easy way first and will work on the better solution later.
2023-01-19 11:02:49 -05:00
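To make the hard-coded mapping idea in `7ee0edcb9e` concrete, a hypothetical sketch; the dictionary contents and helper name are illustrative, not the actual model_manager code:

```python
# Hypothetical illustration; table contents and helper name are assumed.
CKPT_VAE_TO_DIFFUSERS_VAE = {
    "vae-ft-mse-840000-ema-pruned.ckpt": "stabilityai/sd-vae-ft-mse",
}


def diffusers_vae_for(ckpt_vae_filename: str) -> str:
    # Unknown ckpt VAEs fall back to sd-vae-ft-mse, as described above.
    return CKPT_VAE_TO_DIFFUSERS_VAE.get(
        ckpt_vae_filename, "stabilityai/sd-vae-ft-mse"
    )


print(diffusers_vae_for("vae-ft-mse-840000-ema-pruned.ckpt"))
```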
Lincoln Stein
7bd2220a24 fix two bugs in model import
1. !import_model did not allow the user to specify a VAE file. This is now fixed.
2. !del_model did not offer the user the opportunity to delete the underlying
   weights file or diffusers directory. This is now fixed.
2023-01-19 01:30:58 -05:00
Lincoln Stein
284b432ffd add triton install instructions 2023-01-18 22:34:36 -05:00
Lincoln Stein
ab675af264 Merge branch 'main' into lstein-improve-ti-frontend 2023-01-18 22:22:30 -05:00
Daya Adianto
be58a6bfbc Merge branch 'main' into bugfix/free-gpu-mem-diffuser 2023-01-19 10:21:06 +07:00
Daya Adianto
5a40aadbee Ensure free_gpu_mem option is passed into the generator (#2326) 2023-01-19 09:57:03 +07:00
Lincoln Stein
e11f15cf78 Merge branch 'main' into lstein-import-safetensors 2023-01-18 17:09:48 -05:00
Lincoln Stein
ce17051b28 Store & load 🤗 models at XDG_CACHE_HOME if HF_HOME is not set (#2359)
This commit allows InvokeAI to store & load 🤗 models at a location set
by the `XDG_CACHE_HOME` environment variable if `HF_HOME` is not set.

By integrating this commit, a user who sets either the `HF_HOME` or
`XDG_CACHE_HOME` environment variable can let InvokeAI reuse the
existing cache directory used by the 🤗 library by default. I happened to
benefit from this commit because I have a Jupyter Notebook that uses a 🤗
diffusers model stored in the `XDG_CACHE_HOME` directory.

Reference:
https://huggingface.co/docs/huggingface_hub/main/en/package_reference/environment_variables#xdgcachehome
2023-01-18 17:05:06 -05:00
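A rough sketch of the cache-location precedence described in `ce17051b28`, assuming the usual 🤗 convention that `HF_HOME` defaults to `$XDG_CACHE_HOME/huggingface`; the function name is illustrative:

```python
# Rough sketch of the precedence; the function name is illustrative.
import os
from pathlib import Path


def hf_cache_home() -> Path:
    if "HF_HOME" in os.environ:
        return Path(os.environ["HF_HOME"])
    xdg = os.environ.get("XDG_CACHE_HOME", str(Path.home() / ".cache"))
    return Path(xdg) / "huggingface"


print(hf_cache_home())
```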
Lincoln Stein
a2bdc8b579 Merge branch 'lstein-import-safetensors' of github.com:invoke-ai/InvokeAI into lstein-import-safetensors 2023-01-18 12:16:06 -05:00
Lincoln Stein
1c62ae461e fix vae safetensor loading 2023-01-18 12:15:57 -05:00
Lincoln Stein
c5b802b596 Merge branch 'main' into feature/hub-in-xdg-cache-home 2023-01-18 11:53:46 -05:00
Lincoln Stein
b9ab9ffb4a Merge branch 'main' into lstein-import-safetensors 2023-01-18 10:58:38 -05:00
Lincoln Stein
f232068ab8 Update automated install doc - link to MS C libs (#2306)
Updated the link for the MS Visual C libraries - I'm not sure if MS
changed the location of the files but this new one leads right to the
file downloads.
2023-01-18 10:56:09 -05:00
Lincoln Stein
4556f29359 Merge branch 'main' into lstein/xformers-instructions 2023-01-18 09:33:17 -05:00
Lincoln Stein
c1521be445 add instructions for installing xFormers on linux 2023-01-18 09:31:19 -05:00
Daya Adianto
f3e952ecf0 Use global_cache_dir calls properly 2023-01-18 21:06:01 +07:00
Daya Adianto
aa4e8d8cf3 Migrate legacy models (pre-2.3.0) to 🤗 cache directory if exists 2023-01-18 21:02:31 +07:00
Daya Adianto
a7b2074106 Ignore free_gpu_mem when using 🤗 diffuser model (#2326) 2023-01-18 19:42:11 +07:00
Daya Adianto
2282e681f7 Store & load 🤗 models at XDG_CACHE_HOME if HF_HOME is not set
This commit allows InvokeAI to store & load 🤗 models at a location
set by `XDG_CACHE_HOME` environment variable if `HF_HOME` is not set.

Reference: https://huggingface.co/docs/huggingface_hub/main/en/package_reference/environment_variables#xdgcachehome
2023-01-18 19:32:09 +07:00
Lincoln Stein
6e2365f835 Merge branch 'main' into patch-1 2023-01-17 23:52:13 -05:00
Lincoln Stein
e4ea98c277 further improvements to initial load (#2330)
- The migration process will not crash if duplicate model files are found,
one in the legacy location and the other in the new location. The model in
the legacy location will be deleted in this case.

- Added a hint to stable-diffusion-2.1 telling people it will work best
with 768 pixel images.

- Added the anything-4.0 model.
2023-01-17 23:21:14 -05:00
Lincoln Stein
2fd5fe6c89 Merge branch 'main' into lstein-improve-migration 2023-01-17 22:55:58 -05:00
Lincoln Stein
4a9e93463d Merge branch 'lstein-import-safetensors' of github.com:invoke-ai/InvokeAI into lstein-import-safetensors 2023-01-17 22:52:50 -05:00
Lincoln Stein
0b5c0c374e load safetensors vaes 2023-01-17 22:51:57 -05:00
Lincoln Stein
5750f5dac2 Merge branch 'main' into lstein-import-safetensors 2023-01-17 21:31:56 -05:00
Kevin Turner
3fb095de88 do not use autocast for diffusers (#2349)
fixes #2345
2023-01-17 14:26:35 -08:00
Lincoln Stein
c5fecfe281 Merge branch 'main' into lstein-improve-migration 2023-01-17 17:05:12 -05:00
Kevin Turner
1fa6a3558e Merge branch 'main' into lstein-fix-autocast 2023-01-17 14:00:51 -08:00
Lincoln Stein
2ee68cecd9 tip fix (#2281)
Context: Small fix for the manual, added tab for a "!!! tip"
2023-01-17 16:25:09 -05:00
Lincoln Stein
c8d1d4d159 Merge branch 'main' into lstein-fix-autocast 2023-01-17 16:23:33 -05:00
Lincoln Stein
529b19f8f6 Merge branch 'main' into patch-1 2023-01-17 14:57:17 -05:00
Lincoln Stein
be4f44fafd [Enhancement] add --default_only arg to configure_invokeai.py, for CI use (#2355)
Added a --default_only argument that limits model downloads to the
single default model, for use in continuous integration.

New behavior

         - switch -
    --yes      --default_only           Behavior
    -----      --------------           --------

   <not set>     <not set>              interactive download

   --yes         <not set>              non-interactively download all
                                          recommended models

   --yes        --default_only          non-interactively download the
                                          default model
2023-01-17 14:56:50 -05:00
Kevin Turner
5aec48735e lint(generator): 🚮 remove unused imports 2023-01-17 11:44:45 -08:00
Kevin Turner
3c919f0337 Restore ldm/invoke/conditioning.py 2023-01-17 11:37:14 -08:00
mauwii
858ddffab6 add --default_only to run-preload-models step 2023-01-17 20:10:37 +01:00
Lincoln Stein
212fec669a add --default_only arg to configure_invokeai.py for CI use
Added a --default_only argument that limits model downloads to the single
default model, for use in continuous integration.

New behavior

         - switch -
    --yes      --default_only           Behavior
    -----      --------------           --------

   <not set>     <not set>              interactive download

   --yes         <not set>              non-interactively download all
                                          recommended models

   --yes        --default_only          non-interactively download the
                                          default model
2023-01-17 12:45:04 -05:00
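An illustrative sketch of how the switch combinations tabulated in `212fec669a` could be wired up with argparse; the real configure_invokeai.py parser has many more options, and anything here beyond `--yes` and `--default_only` is an assumption:

```python
# Illustrative argparse wiring; the real parser has many more options.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--yes", action="store_true",
                    help="answer yes to all prompts (non-interactive)")
parser.add_argument("--default_only", action="store_true",
                    help="with --yes, download only the default model")
opt = parser.parse_args()

if opt.yes and opt.default_only:
    print("non-interactively download the default model")
elif opt.yes:
    print("non-interactively download all recommended models")
else:
    print("interactive download")
```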
Lincoln Stein
fc2098834d support direct loading of .safetensors models
- Small fix to allow ckpt files with the .safetensors suffix
  to be directly loaded, rather than undergo a conversion step
  first.
2023-01-17 08:11:19 -05:00
Lincoln Stein
8a31e5c5e3 allow safetensors models to be imported 2023-01-17 00:18:09 -05:00
Lincoln Stein
bcc0110c59 Merge branch 'lstein-fix-autocast' of github.com:invoke-ai/InvokeAI into lstein-fix-autocast 2023-01-16 23:18:54 -05:00
Lincoln Stein
ce1c5e70b8 fix autocast dependency in cross_attention_control 2023-01-16 23:18:43 -05:00
Lincoln Stein
ce00c9856f fix perlin noise and txt2img2img 2023-01-16 22:50:13 -05:00
Lincoln Stein
7e8f364d8d do not use autocast for diffusers
- All tensors in diffusers code path are now set explicitly to
  float32 or float16, depending on the --precision flag.
- autocast is still used in the ckpt path, since it is being
  deprecated.
2023-01-16 19:32:06 -05:00
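A small sketch of the explicit-dtype approach described in `7e8f364d8d` for the diffusers code path; the flag plumbing and the tensor shown are simplified illustrations:

```python
# Simplified sketch of explicit dtype selection on the diffusers path.
import torch


def dtype_for(precision: str) -> torch.dtype:
    # --precision float16 -> half precision; anything else -> full precision
    return torch.float16 if precision == "float16" else torch.float32


dtype = dtype_for("float16")
latents = torch.randn(1, 4, 64, 64, dtype=dtype)  # tensors created at a fixed dtype
# No torch.autocast() context is used on this code path.
```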
Lincoln Stein
088cd2c4dd further tweaks to model management
- Work around problem with OmegaConf.update() that prevented model names
  from containing periods.
- Fix logic bug in !delete_model that didn't check for existence of model
  in config file.
2023-01-16 17:11:59 -05:00
Lincoln Stein
9460763eff Merge branch 'main' into lstein-improve-migration 2023-01-16 16:47:08 -05:00
Lincoln Stein
fe46d9d0f7 Merge branch 'main' into patch-1 2023-01-16 16:46:46 -05:00
Damian Stewart
563196bd03 pass step count and step index to diffusion step func (#2342) 2023-01-16 19:56:54 +00:00
Lincoln Stein
d2a038200c Merge branch 'main' into lstein-improve-migration 2023-01-16 14:22:13 -05:00
Lincoln Stein
d6ac0eeffd make SD-1.5 the default again 2023-01-16 14:21:34 -05:00
Lincoln Stein
3a1724652e upgrade requirements to CUDA 11.7, torch 1.13 (#2331)
* upgrade requirements to CUDA 11.7, torch 1.13

* fix ROCm version number

Co-authored-by: Lincoln Stein <lstein@gmail.com>
2023-01-16 14:19:27 -05:00
Lincoln Stein
8c073a7818 Merge branch 'main' into patch-1 2023-01-16 08:38:14 -05:00
Lincoln Stein
8c94f6a234 Merge branch 'main' into patch-1 2023-01-16 08:35:25 -05:00
Lincoln Stein
5fa8f8be43 Merge branch 'main' into lstein-improve-migration 2023-01-16 08:33:20 -05:00
Daya Adianto
5b35fa53a7 Improve readability of the manual installation documentation (#2296)
* docs: Fix links to pip and Conda installation methods

* docs: Improve installation script readability

This commit adds a space between `-m` option and the module name.

* docs: Fix alignment of steps 4 & 9 in the `pip` installation method

* docs: Rewrite step 10 of the `pip` installation method

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2023-01-15 22:37:02 +00:00
Lincoln Stein
a2ee32f57f Merge branch 'main' into lstein-improve-ti-frontend 2023-01-15 17:12:50 -05:00
Brian Racer
4486169a83 pin dnspython version (#2327)
Fixes DNS-related errors that began January 14, 2023
2023-01-15 17:08:45 -05:00
Lincoln Stein
bfeafa8d5e improve UI of textual inversion frontend
- File selection box now accepts directories that don't exist yet.
- Fixed crash when resume is selected and no files are available to resume from.
2023-01-15 17:04:14 -05:00
Lincoln Stein
f86c8b043c further improvements to initial load
- The migration process will not crash if duplicate model files are found,
  one in the legacy location and the other in the new location.
  The model in the legacy location will be deleted in this case.

- Added a hint to stable-diffusion-2.1 telling people it will work best
  with 768 pixel images.

- Added the anything-4.0 model.
2023-01-15 15:08:59 -05:00
Lincoln Stein
251a409087 adjust initial model defaults (#2322)
- Default to SD 1.5
- Add waifu diffusion 1.4
2023-01-15 15:18:41 +00:00
Kevin Turner
6fdbc1978d use 🧨diffusers model (#1583)
* initial commit of DiffusionPipeline class

* spike: proof of concept using diffusers for txt2img

* doc: type hints for Generator

* refactor(model_cache): factor out load_ckpt

* model_cache: add ability to load a diffusers model pipeline

and update associated things in Generate & Generator to not instantly fail when that happens

* model_cache: fix model default image dimensions

* txt2img: support switching diffusers schedulers

* diffusers: let the scheduler do its scaling of the initial latents

Remove IPNDM scheduler; it is not behaving.

* web server: update image_progress callback for diffusers data

* diffusers: restore prompt weighting feature

* diffusers: fix set-sampler error following model switch

* diffusers: use InvokeAIDiffuserComponent for conditioning

* cross_attention_control: stub (no-op) implementations for diffusers

* model_cache: let offload_model work with DiffusionPipeline, sorta.

* models.yaml.example: add diffusers-format model, set as default

* test-invoke-conda: use diffusers-format model
test-invoke-conda: put huggingface-token where the library can use it

* environment-mac: upgrade to diffusers 0.7 (from 0.6)

this was already done for linux; mac must have been lost in the merge.

* preload_models: explicitly load diffusers models

In non-interactive mode too, as long as you're logged in.

* fix(model_cache): don't check `model.config` in diffusers format

clean-up from recent merge.

* diffusers integration: support img2img

* dev: upgrade to diffusers 0.8 (from 0.7.1)

We get to remove some code by using methods that were factored out in the base class.

* refactor: remove backported img2img.get_timesteps

now that we can use it directly from diffusers 0.8.1

* ci: use diffusers model

* dev: upgrade to diffusers 0.9 (from 0.8.1)

* lint: correct annotations for Python 3.9.

* lint: correct AttributeError.name reference for Python 3.9.

* CI: prefer diffusers-1.4 because it no longer requires a token

The RunwayML models still do.

* build: there's yet another place to update requirements?

* configure: try to download models even without token

Models in the CompVis and stabilityai repos no longer require them. (But runwayml still does.)

* configure: add troubleshooting info for config-not-found

* fix(configure): prepend root to config path

* fix(configure): remove second `default: true` from models example

* CI: simplify test-on-push logic now that we don't need secrets

The "test on push but only in forks" logic was only necessary when tests didn't work for PRs-from-forks.

* create an embedding_manager for diffusers

* internal: avoid importing diffusers DummyObject

see https://github.com/huggingface/diffusers/issues/1479

* fix "config attributes…not expected" diffusers warnings.

* fix deprecated scheduler construction

* work around an apparent MPS torch bug that causes conditioning to have no effect

* 🚧 post-rebase repair

* preliminary support for outpainting (no masking yet)

* monkey-patch diffusers.attention and use Invoke lowvram code

* add always_use_cpu arg to bypass MPS

* add cross-attention control support to diffusers (fails on MPS)

For unknown reasons MPS produces garbage output with .swap(). Use
--always_use_cpu arg to invoke.py for now to test this code on MPS.

* diffusers support for the inpainting model

* fix debug_image to not crash with non-RGB images.

* inpainting for the normal model [WIP]

This seems to be performing well until the LAST STEP, at which point it dissolves to confetti.

* fix off-by-one bug in cross-attention-control (#1774)

prompt token sequences begin with a "beginning-of-sequence" marker <bos> and end with a repeated "end-of-sequence" marker <eos> - to make a default prompt length of <bos> + 75 prompt tokens + <eos>. the .swap() code was failing to take the column for <bos> at index 0 into account. the changes here do that, and also add extra handling for a single <eos> (which may be redundant but which is included for completeness).

based on my understanding and some assumptions about how this all works, the reason .swap() nevertheless seemed to do the right thing, to some extent, is because over multiple steps the conditioning process in Stable Diffusion operates as a feedback loop. a change to token n-1 has flow-on effects to how the [1x4x64x64] latent tensor is modified by all the tokens after it, - and as the next step is processed, all the tokens before it as well. intuitively, a token's conditioning effects "echo" throughout the whole length of the prompt. so even though the token at n-1 was being edited when what the user actually wanted was to edit the token at n, it nevertheless still had some non-negligible effect, in roughly the right direction, often enough that it seemed like it was working properly.

* refactor common CrossAttention stuff into a mixin so that the old ldm code can still work if necessary

* inpainting for the normal model. I think it works this time.

* diffusers: reset num_vectors_per_token

sync with 44a0055571

* diffusers: txt2img2img (hires_fix)

with so much slicing and dicing of pipeline methods to stitch them together

* refactor(diffusers): reduce some code duplication amongst the different tasks

* fixup! refactor(diffusers): reduce some code duplication amongst the different tasks

* diffusers: enable DPMSolver++ scheduler

* diffusers: upgrade to diffusers 0.10, add Heun scheduler

* diffusers(ModelCache): stopgap to make from_cpu compatible with diffusers

* CI: default to diffusers-1.5 now that runwayml token requirement is gone

* diffusers: update to 0.10 (and transformers to 4.25)

* diffusers: use xformers when available

diffusers no longer auto-enables this as of 0.10.2.

* diffusers: make masked img2img behave better with multi-step schedulers

re-randomizing the noise each step was confusing them.

* diffusers: work more better with more models.

fixed relative path problem with local models.

fixed models on hub not always having a `fp16` branch.

* diffusers: stopgap fix for attention_maps_callback crash after recent merge

* fixup import merge conflicts

correction for 061c5369a2

* test: add tests/inpainting inputs for masked img2img

* diffusers(AddsMaskedGuidance): partial fix for k-schedulers

Prevents them from crashing, but results are still hot garbage.

* fix --safety_checker arg parsing

and add note to diffusers loader about where safety checker gets called

* generate: fix import error

* CI: don't try to read the old init location

* diffusers: support loading an alternate VAE

* CI: remove sh-syntax if-statement so it doesn't crash powershell

* CI: fold strings in yaml because backslash is not line-continuation in powershell

* attention maps callback stuff for diffusers

* build: fix syntax error in environment-mac

* diffusers: add INITIAL_MODELS with diffusers-compatible repos

* re-enable the embedding manager; closes #1778

* Squashed commit of the following:

commit e4a956abc37fcb5cf188388b76b617bc5c8fda7d
Author: Damian Stewart <d@damianstewart.com>
Date:   Sun Dec 18 15:43:07 2022 +0100

    import new load handling from EmbeddingManager and cleanup

commit c4abe91a5ba0d415b45bf734068385668b7a66e6
Merge: 032e856e 1efc6397
Author: Damian Stewart <d@damianstewart.com>
Date:   Sun Dec 18 15:09:53 2022 +0100

    Merge branch 'feature_textual_inversion_mgr' into dev/diffusers_with_textual_inversion_manager

commit 032e856eefb3bbc39534f5daafd25764bcfcef8b
Merge: 8b4f0fe9 bc515e24
Author: Damian Stewart <d@damianstewart.com>
Date:   Sun Dec 18 15:08:01 2022 +0100

    Merge remote-tracking branch 'upstream/dev/diffusers' into dev/diffusers_with_textual_inversion_manager

commit 1efc6397fc6e61c1aff4b0258b93089d61de5955
Author: Damian Stewart <d@damianstewart.com>
Date:   Sun Dec 18 15:04:28 2022 +0100

    cleanup and add performance notes

commit e400f804ac471a0ca2ba432fd658778b20c7bdab
Author: Damian Stewart <d@damianstewart.com>
Date:   Sun Dec 18 14:45:07 2022 +0100

    fix bug and update unit tests

commit deb9ae0ae1016750e93ce8275734061f7285a231
Author: Damian Stewart <d@damianstewart.com>
Date:   Sun Dec 18 14:28:29 2022 +0100

    textual inversion manager seems to work

commit 162e02505dec777e91a983c4d0fb52e950d25ff0
Merge: cbad4583 12769b3d
Author: Damian Stewart <d@damianstewart.com>
Date:   Sun Dec 18 11:58:03 2022 +0100

    Merge branch 'main' into feature_textual_inversion_mgr

commit cbad45836c6aace6871a90f2621a953f49433131
Author: Damian Stewart <d@damianstewart.com>
Date:   Sun Dec 18 11:54:10 2022 +0100

    use position embeddings

commit 070344c69b0e0db340a183857d0a787b348681d3
Author: Damian Stewart <d@damianstewart.com>
Date:   Sun Dec 18 11:53:47 2022 +0100

    Don't crash CLI on exceptions

commit b035ac8c6772dfd9ba41b8eeb9103181cda028f8
Author: Damian Stewart <d@damianstewart.com>
Date:   Sun Dec 18 11:11:55 2022 +0100

    add missing position_embeddings

commit 12769b3d3562ef71e0f54946b532ad077e10043c
Author: Damian Stewart <d@damianstewart.com>
Date:   Fri Dec 16 13:33:25 2022 +0100

    debugging why it don't work

commit bafb7215eabe1515ca5e8388fd3bb2f3ac5362cf
Author: Damian Stewart <d@damianstewart.com>
Date:   Fri Dec 16 13:21:33 2022 +0100

    debugging why it don't work

commit 664a6e9e14
Author: Damian Stewart <d@damianstewart.com>
Date:   Fri Dec 16 12:48:38 2022 +0100

    use TextualInversionManager in place of embeddings (wip, doesn't work)

commit 8b4f0fe9d6e4e2643b36dfa27864294785d7ba4e
Author: Damian Stewart <d@damianstewart.com>
Date:   Fri Dec 16 12:48:38 2022 +0100

    use TextualInversionManager in place of embeddings (wip, doesn't work)

commit ffbe1ab11163ba712e353d89404e301d0e0c6cdf
Merge: 6e4dad60 023df37e
Author: Damian Stewart <d@damianstewart.com>
Date:   Fri Dec 16 02:37:31 2022 +0100

    Merge branch 'feature_textual_inversion_mgr' into dev/diffusers

commit 023df37eff
Author: Damian Stewart <d@damianstewart.com>
Date:   Fri Dec 16 02:36:54 2022 +0100

    cleanup

commit 05fac594ea
Author: Damian Stewart <d@damianstewart.com>
Date:   Fri Dec 16 02:07:49 2022 +0100

    tweak error checking

commit 009f32ed39
Author: damian <null@damianstewart.com>
Date:   Thu Dec 15 21:29:47 2022 +0100

    unit tests passing for embeddings with vector length >1

commit beb1b08d9a
Author: Damian Stewart <d@damianstewart.com>
Date:   Thu Dec 15 13:39:09 2022 +0100

    more explicit equality tests when overwriting

commit 44d8a5a7c8
Author: Damian Stewart <d@damianstewart.com>
Date:   Thu Dec 15 13:30:13 2022 +0100

    wip textual inversion manager (unit tests passing for 1v embedding overwriting)

commit 417c2b57d9
Author: Damian Stewart <d@damianstewart.com>
Date:   Thu Dec 15 12:30:55 2022 +0100

    wip textual inversion manager (unit tests passing for base stuff + padding)

commit 2e80872e3b
Author: Damian Stewart <d@damianstewart.com>
Date:   Thu Dec 15 10:57:57 2022 +0100

    wip new TextualInversionManager

* stop using WeightedFrozenCLIPEmbedder

* store diffusion models locally

- configure_invokeai.py reconfigured to store diffusion models rather than
  CompVis models
- the Hugging Face caching system is used, but the cache is set to ~/invokeai/models/repo_id
- models.yaml does **NOT** use path, just repo_id
- "repo_name" changed to "repo_id" to follow Hugging Face conventions
- Models are loaded with full precision pending further work.
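
For illustration only, loading by repo_id with the cache pinned under ~/invokeai/models might look like the sketch below; the repo_id and exact cache layout are assumptions, not the script's actual code.

```
from pathlib import Path

from diffusers import StableDiffusionPipeline

# hypothetical example: cache a hub model under ~/invokeai/models/<repo_id>
repo_id = "runwayml/stable-diffusion-v1-5"
cache_dir = Path.home() / "invokeai" / "models" / repo_id
pipe = StableDiffusionPipeline.from_pretrained(repo_id, cache_dir=cache_dir)
```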

* allow non-local files during development

* path takes priority over repo_id

* MVP for model_cache and configure_invokeai

- Feature complete (almost)

- configure_invokeai.py downloads both .ckpt and diffuser models,
  along with their VAEs. Both types of download are controlled by
  a unified INITIAL_MODELS.yaml file.

- model_cache can load both types of model and switches back and forth
  between them, offloading the inactive model to CPU. No memory leaks detected

TO DO:

  1. I have not yet turned on the LocalOnly flag for diffuser models, so
     the code will check the Hugging Face repo for updates before using the
     locally cached models. This will break firewalled systems. I am thinking
     of putting in a global check for internet connectivity at startup time
     and setting the LocalOnly flag based on this. It would be good to check
     for updates when there is connectivity.

  2. I have not gone completely through INITIAL_MODELS.yaml to check which
     models are available as diffusers and which are not. So models like
     PaperCut and VoxelArt may not load properly. The runway and stability
     models are checked, as well as the Trinart models.

  3. Add stanzas for SD 2.0 and 2.1 in INITIAL_MODELS.yaml

REMAINING PROBLEMS NOT DIRECTLY RELATED TO MODEL_CACHE:

  1. When loading a .ckpt file there are lots of messages like this:

     Warning! ldm.modules.attention.CrossAttention is no longer being
     maintained. Please use InvokeAICrossAttention instead.

     I'm not sure how to address this.

  2. The ckpt models ***don't actually run*** due to the lack of special-case
     support for them in the generator objects. For example, here's the hard
     crash you get when you run txt2img against the legacy waifu-diffusion-1.3
     model:
```
     >> An error occurred:
     Traceback (most recent call last):
       File "/data/lstein/InvokeAI/ldm/invoke/CLI.py", line 140, in main
         main_loop(gen, opt)
       File "/data/lstein/InvokeAI/ldm/invoke/CLI.py", line 371, in main_loop
         gen.prompt2image(
       File "/data/lstein/InvokeAI/ldm/generate.py", line 496, in prompt2image
         results = generator.generate(
       File "/data/lstein/InvokeAI/ldm/invoke/generator/base.py", line 108, in generate
         image = make_image(x_T)
       File "/data/lstein/InvokeAI/ldm/invoke/generator/txt2img.py", line 33, in make_image
         pipeline_output = pipeline.image_from_embeddings(
       File "/home/lstein/invokeai/.venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1265, in __getattr__
         raise AttributeError("'{}' object has no attribute '{}'".format(
     AttributeError: 'LatentDiffusion' object has no attribute 'image_from_embeddings'
```

  3. The inpainting diffusion model isn't working. Here's the output of "banana
     sushi" when inpainting-1.5 is loaded:

```
    Traceback (most recent call last):
      File "/data/lstein/InvokeAI/ldm/generate.py", line 496, in prompt2image
        results = generator.generate(
      File "/data/lstein/InvokeAI/ldm/invoke/generator/base.py", line 108, in generate
        image = make_image(x_T)
      File "/data/lstein/InvokeAI/ldm/invoke/generator/txt2img.py", line 33, in make_image
        pipeline_output = pipeline.image_from_embeddings(
      File "/data/lstein/InvokeAI/ldm/invoke/generator/diffusers_pipeline.py", line 301, in image_from_embeddings
        result_latents, result_attention_map_saver = self.latents_from_embeddings(
      File "/data/lstein/InvokeAI/ldm/invoke/generator/diffusers_pipeline.py", line 330, in latents_from_embeddings
        result: PipelineIntermediateState = infer_latents_from_embeddings(
      File "/data/lstein/InvokeAI/ldm/invoke/generator/diffusers_pipeline.py", line 185, in __call__
        for result in self.generator_method(*args, **kwargs):
      File "/data/lstein/InvokeAI/ldm/invoke/generator/diffusers_pipeline.py", line 367, in generate_latents_from_embeddings
        step_output = self.step(batched_t, latents, guidance_scale,
      File "/home/lstein/invokeai/.venv/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
        return func(*args, **kwargs)
      File "/data/lstein/InvokeAI/ldm/invoke/generator/diffusers_pipeline.py", line 409, in step
        step_output = self.scheduler.step(noise_pred, timestep, latents, **extra_step_kwargs)
      File "/home/lstein/invokeai/.venv/lib/python3.9/site-packages/diffusers/schedulers/scheduling_lms_discrete.py", line 223, in step
        pred_original_sample = sample - sigma * model_output
    RuntimeError: The size of tensor a (9) must match the size of tensor b (4) at non-singleton dimension 1
```

* proper support for float32/float16

- configure script now correctly detects user's preference for
  fp16/32 and downloads the correct diffuser version. If the fp16
  version is not available, it falls back to the fp32 version.

- misc code cleanup and simplification in model_cache

* add on-the-fly conversion of .ckpt to diffusers models

1. On-the-fly conversion code can be found in the file ldm/invoke/ckpt_to_diffusers.py.

2. A new !optimize command has been added to the CLI. Should be ported to Web GUI.

User experience on the CLI is this:

```
invoke> !optimize /home/lstein/invokeai/models/ldm/stable-diffusion-v1/sd-v1-4.ckpt
INFO: Converting legacy weights file /home/lstein/invokeai/models/ldm/stable-diffusion-v1/sd-v1-4.ckpt to optimized diffuser model.
      This operation will take 30-60s to complete.
Success. Optimized model is now located at /home/lstein/tmp/invokeai/models/optimized-ckpts/sd-v1-4
Writing new config file entry for sd-v1-4...

>> New configuration:
sd-v1-4:
  description: Optimized version of sd-v1-4
  format: diffusers
  path: /home/lstein/tmp/invokeai/models/optimized-ckpts/sd-v1-4

OK to import [n]? y
>> Verifying that new model loads...
>> Current VRAM usage:  2.60G
>> Offloading stable-diffusion-2.1 to CPU
>> Loading diffusers model from /home/lstein/tmp/invokeai/models/optimized-ckpts/sd-v1-4
  | Using faster float16 precision
You have disabled the safety checker for <class 'ldm.invoke.generator.diffusers_pipeline.StableDiffusionGeneratorPipeline'> by passing `safety_checker=None`. Ensure that you abide to the conditions of the Stable Diffusion \
license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend to keep the safety filter enabled in all public facing circumstances,\
 disabling it only for use-cases that involve analyzing network behavior or auditing its results. For more information, please have a look at https://github.com/huggingface/diffusers/pull/254 .
  | training width x height = (512 x 512)
>> Model loaded in 3.48s
>> Max VRAM used to load the model: 2.17G
>> Current VRAM usage:2.17G
>> Textual inversions available:
>> Setting Sampler to k_lms (LMSDiscreteScheduler)
Keep model loaded? [y]
```

* add parallel set of generator files for ckpt legacy generation

* generation using legacy ckpt models now working

* diffusers: fix missing attention_maps_callback

fix for 23eb80b404

* associate legacy CrossAttention with .ckpt models

* enable autoconvert

New --autoconvert CLI option will scan a designated directory for
new .ckpt files, convert them into diffuser models, and import
them into models.yaml.

Works like this:

   invoke.py --autoconvert /path/to/weights/directory

In ModelCache added two new methods:

  autoconvert_weights(config_path, weights_directory_path, models_directory_path)
  convert_and_import(ckpt_path, diffuser_path)
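
A rough, hypothetical sketch of the scanning loop that autoconvert_weights() implies; everything except the two method names listed above is invented for illustration.

```
from pathlib import Path

def autoconvert_weights_sketch(weights_dir: str, models_dir: str) -> None:
    """Illustration only: look for new .ckpt files and hand each one off for conversion."""
    for ckpt in Path(weights_dir).glob("*.ckpt"):
        diffuser_path = Path(models_dir) / "converted-ckpts" / ckpt.stem
        if not diffuser_path.exists():
            print(f"would convert {ckpt} -> {diffuser_path}")
            # in InvokeAI this is where convert_and_import(ckpt_path, diffuser_path) is called
```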

* diffusers: update to diffusers 0.11 (from 0.10.2)

* fix vae loading & width/height calculation

* refactor: encapsulate these conditioning data into one container
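
A minimal sketch of what such a container might look like; the class and field names here are hypothetical, the point is simply bundling values that previously travelled as separate arguments.

```
from dataclasses import dataclass

import torch

@dataclass
class ConditioningData:
    # hypothetical field names, for illustration only
    unconditioned_embeddings: torch.Tensor
    text_embeddings: torch.Tensor
    guidance_scale: float = 7.5
```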

* diffusers: fix some noise-scaling issues by pushing the noise-mixing down to the common function
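
Not the project's actual helper, but the generic diffusers pattern such a common noise-mixing function would lean on (scheduler.add_noise), sketched with stand-in tensors.

```
import torch
from diffusers import DDIMScheduler

scheduler = DDIMScheduler()                      # default 1000-step training schedule
init_latents = torch.randn(1, 4, 64, 64)         # stand-in for the VAE-encoded init image
noise = torch.randn_like(init_latents)
t_start = torch.tensor([750])                    # arbitrary img2img starting timestep
noisy_latents = scheduler.add_noise(init_latents, noise, t_start)
```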

* add support for safetensors and accelerate

* set local_files_only when internet unreachable
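
One possible shape for such a check, sketched with a plain socket probe against an assumed host; not necessarily how the app detects connectivity.

```
import socket

def internet_reachable(host: str = "huggingface.co", timeout: float = 2.0) -> bool:
    """Rough probe: can we open a TCP connection to the model hub?"""
    try:
        socket.create_connection((host, 443), timeout=timeout).close()
        return True
    except OSError:
        return False

# ...then pass local_files_only=not internet_reachable() to from_pretrained() calls
```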

* diffusers: fix error-handling path when model repo has no fp16 branch

* fix generatorinpaint error

Fixes:
  "ModuleNotFoundError: No module named 'ldm.invoke.generatorinpaint'"
  https://github.com/invoke-ai/InvokeAI/pull/1583#issuecomment-1363634318

* quench diffuser safety-checker warning

* diffusers: support stochastic DDIM eta parameter
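
A generic illustration of where the eta knob lands in the diffusers API, using stand-in tensors rather than InvokeAI's pipeline code.

```
import torch
from diffusers import DDIMScheduler

scheduler = DDIMScheduler()
scheduler.set_timesteps(30)
latents = torch.randn(1, 4, 64, 64)
noise_pred = torch.randn_like(latents)           # stand-in for the U-Net's predicted noise
t = int(scheduler.timesteps[0])
# eta > 0.0 makes DDIM stochastic; the CLI/web ddim_eta setting presumably ends up here
out = scheduler.step(noise_pred, t, latents, eta=0.3)
```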

* fix conda env creation on macos

* fix cross-attention with diffusers 0.11

* diffusers: the VAE needs to be tiling as well as the U-Net
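
Seamless tiling is commonly done by switching convolutions to circular padding; a sketch of that idea (the helper name is made up), with the commit's point being that it must cover the VAE as well as the U-Net.

```
import torch

def set_circular_padding(module: torch.nn.Module) -> None:
    """Sketch: make every Conv2d pad circularly so generated textures tile."""
    for layer in module.modules():
        if isinstance(layer, torch.nn.Conv2d):
            layer.padding_mode = "circular"

# apply to both halves of the pipeline, e.g.:
# set_circular_padding(pipe.unet)
# set_circular_padding(pipe.vae)   # without this, the decoded image still shows a seam
```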

* diffusers: comment on subfolders

* diffusers: embiggen!

* diffusers: make model_cache.list_models serializable

* diffusers(inpaint): restore scaling functionality

* fix requirements clash between numba and numpy 1.24

* diffusers: allow inpainting model to do non-inpainting tasks

* start expanding model_cache functionality

* add import_ckpt_model() and import_diffuser_model() methods to model_manager

- in addition, model_cache.py is now renamed to model_manager.py

* allow "recommended" flag to be optional in INITIAL_MODELS.yaml

* configure_invokeai now downloads VAE diffusers in advance

* rename ModelCache to ModelManager

* remove support for `repo_name` in models.yaml

* check for and refuse to load embeddings trained on incompatible models
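
A sketch of the kind of compatibility check this implies, assuming the usual textual-inversion file layouts (.pt files wrapping vectors in "string_to_param", .bin files mapping token name to tensor); the actual loader may differ.

```
import torch

def embedding_matches_model(embedding_path: str, text_encoder_dim: int) -> bool:
    """An embedding trained against a 768-dim text encoder (SD 1.x) cannot be
    loaded into a 1024-dim one (SD 2.x), and vice versa."""
    data = torch.load(embedding_path, map_location="cpu")
    params = data.get("string_to_param", data)
    vector = next(iter(params.values()))
    return vector.shape[-1] == text_encoder_dim
```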

* models.yaml.example: s/repo_name/repo_id

and remove extra INITIAL_MODELS now that the main one has diffusers models in it.

* add MVP textual inversion script

* refactor(InvokeAIDiffuserComponent): factor out _combine()

* InvokeAIDiffuserComponent: implement threshold

* InvokeAIDiffuserComponent: diagnostic logs for threshold

...this does not look right

* add a curses-based frontend to textual inversion

- not quite working yet
- requires npyscreen installed
- on windows will also have the windows-curses requirement, but not added
  to requirements yet

* add curses-based interface for textual inversion

* fix crash in convert_and_import()

- This corrects a "local variable referenced before assignment" error
  in model_manager.convert_and_import()

* potential workaround for no 'state_dict' key error

- As reported in https://github.com/huggingface/diffusers/issues/1876
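
The workaround discussed there generally amounts to falling back to the checkpoint dict itself when the "state_dict" key is absent; a hedged sketch with a hypothetical path.

```
import torch

checkpoint = torch.load("some-model.ckpt", map_location="cpu")  # hypothetical path
# some .ckpt files are saved as a bare state dict rather than {"state_dict": ...};
# falling back to the checkpoint itself sidesteps the missing-key error
state_dict = checkpoint.get("state_dict", checkpoint)
```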

* create TI output dir if needed

* Update environment-lin-cuda.yml (#2159)

Fixes line 42 to use the proper character order for the transformers requirement specifier: ~= instead of =~

* diffusers: update sampler-to-scheduler mapping

based on https://github.com/huggingface/diffusers/issues/277#issuecomment-1371428672

* improve user experience for ckpt-to-diffusers conversion

- !optimize_models command now operates on an existing ckpt file entry in models.yaml
- replaces existing entry, rather than adding a new one
- offers to delete the ckpt file after conversion

* web: adapt progress callback to deal with old generator or new diffusers pipeline

* clean-up model_manager code

- add_model() verified to work for .ckpt local paths,
  .ckpt remote URLs, diffusers local paths, and
  diffusers repo_ids

- convert_and_import() verified to work for local and
  remote .ckpt files

* handle edge cases for import_model() and convert_model()

* add support for safetensor .ckpt files

* fix name error

* code cleanup with pyflake

* improve model setting behavior

- If the user enters an invalid model name at startup time, it will not be
  loaded; the CLI warns and falls back to the default model
- CLI UI enhancement: include currently active model in the command
  line prompt.

* update test-invoke-pip.yml
- fix model cache path to point to runwayml/stable-diffusion-v1-5
- remove `skip-sd-weights` from configure_invokeai.py args

* exclude dev/diffusers from "fail for draft PRs"

* disable "fail on PR jobs"

* re-add `--skip-sd-weights` since no space

* update workflow environments
- include `INVOKE_MODEL_RECONFIGURE: '--yes'`

* clean up model load failure handling

- Allow CLI to run even when no model is defined or loadable.
- Inhibit stack trace when model load fails - only show last error
- Give user *option* to run configure_invokeai.py when no models
  successfully load.
- Restart invokeai after reconfiguration.

* further edge-case handling

1) only one model in models.yaml file, and that model is broken
2) no models in models.yaml
3) models.yaml doesn't exist at all

* fix incorrect model status listing

- "cached" was not being returned from list_models()
- normalize handling of exceptions during model loading:
   - Passing an invalid model name to generate.set_model() will return
     a KeyError
   - All other exceptions are returned as the appropriate Exception

* CI: do download weights (if not already cached)

* diffusers: fix scheduler loading in offline mode

* CI: fix model name (no longer has `diffusers-` prefix)

* Update txt2img2img.py (#2256)

* fixes to share models with HuggingFace cache system

- If the HF_HOME environment variable is defined, then all huggingface models
  are stored in that directory following the standard conventions.
- For seamless interoperability, set HF_HOME to ~/.cache/huggingface
- If HF_HOME is not defined, then models are stored in ~/invokeai/models.
  This is equivalent to setting HF_HOME to ~/invokeai/models

A future commit will add a migration mechanism so that this change doesn't
break previous installs.
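
A sketch of the precedence rule described above, not the app's exact code.

```
import os
from pathlib import Path

def models_root() -> Path:
    """HF_HOME wins when set; otherwise fall back to ~/invokeai/models."""
    hf_home = os.environ.get("HF_HOME")
    return Path(hf_home) if hf_home else Path.home() / "invokeai" / "models"
```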

* feat - make model storage compatible with hugging face caching system

This commit alters the InvokeAI model directory to be compatible with
hugging face, making it easier to share diffusers (and other models)
across different programs.

- If the HF_HOME environment variable is not set, then models are
  cached in ~/invokeai/models in a format that is identical to the
  HuggingFace cache.

- If HF_HOME is set, then models are cached wherever HF_HOME points.

- To enable sharing with other HuggingFace library clients, set
  HF_HOME to ~/.cache/huggingface to set the default cache location
  or to ~/invokeai/models to have huggingface cache inside InvokeAI.

* fix error "no attribute CkptInpaint"

* model_manager.list_models() returns entire model config stanza+status

* Initial Draft - Model Manager Diffusers

* added hash function to diffusers

* implement sha256 hashes on diffusers models
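
One plausible way to reduce a diffusers folder to a single identifier; a sketch only, the project's actual hashing scheme may differ.

```
import hashlib
from pathlib import Path

def diffusers_model_hash(model_dir: str) -> str:
    """Fold every file in the folder, in a stable order, into one SHA-256 digest."""
    digest = hashlib.sha256()
    for path in sorted(Path(model_dir).rglob("*")):
        if path.is_file():
            digest.update(path.read_bytes())
    return digest.hexdigest()
```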

* Add Model Manager Support for Diffusers

* fix various problems with model manager

- in CLI import functions, fix "not enough values to unpack" from
  _get_name_and_desc()
- fix crash when using an old-style vae: value with a new-style diffuser

* rebuild frontend

* fix dictconfig-not-serializable issue

* fix "'NoneType' object is not subscriptable" crash in model_manager

* fix "str has no attribute get" error in model_manager list_models()

* Add path and repo_id support for Diffusers Model Manager

Also fixes bugs

* Fix tooltip IT localization not working

* Add Version Number To WebUI

* Optimize Model Search

* Fix incorrect font on the Model Manager UI

* Fix image degradation on merge fixes - [Experimental]

This change should effectively fix a couple of things.

- Fix image degradation on subsequent merges of the canvas layers.
- Fix the slight transparent border that is left behind when filling the bounding box with a color.
- Fix the leftover line of color when filling a bounding box with color.

So far there are no known side effects. If you find any, please report them.

* Add local model filtering for Diffusers / Checkpoints

* Go to home on modal close for the Add Modal UI

* Styling Fixes

* Model Manager Diffusers Localization Update

* Add Safe Tensor scanning to Model Manager

* Fix model edit form dispatching string values instead of numbers.

* Resolve VAE handling / edge cases for supplied repos

* defer injecting tokens for textual inversions until they're used for the first time

* squash a console warning

* implement model migration check

* add_model() overwrites the previous config rather than merging it

* fix model config file attribute merging

* fix precision handling in textual inversion script

* allow ckpt conversion script to work with safetensors .ckpts

Applied patch here:
beb932c5d1

* fix name "args" is not defined crash in textual_inversion_training

* fix a second NameError: name 'args' is not defined crash

* fix loading of the safety checker from the global cache dir

* add installation step to textual inversion frontend

- After a successful training run, the script will copy learned_embeds.bin
  to a subfolder of the embeddings directory.
- The user is given the option to delete the logs and intermediate checkpoints
  (which together use 7-8G of space)
- If textual inversion training fails, the error is reported gracefully.

* don't crash out on incompatible embeddings

- put try: blocks around places where the system tries to load an embedding
  which is incompatible with the currently loaded model

* add support for checkpoint resuming

* textual inversion preferences are saved and restored between sessions

- Preferences are stored in a file named text-inversion-training/preferences.conf
- Currently the resume-from-checkpoint option is not working correctly. Possible
  bug in textual_inversion_training.py?

* copy learned_embeddings.bin into the right location

* add front end for diffusers model merging

- Front end doesn't do anything yet!!!!
- Made a change to model name parsing in the CLI to support merged models
  with the "+" character in their names.

* improve inpainting experience

- recommend ckpt version of inpainting-1.5 to user
- fix get_noise() bug in ckpt version of omnibus.py

* update environment*yml

* tweak instructions to install HuggingFace token

* bump version number

* enhance update scripts

- update scripts will now fetch new INITIAL_MODELS.yaml so that
  configure_invokeai.py will know about the diffusers versions.

* enhance invoke.sh/invoke.bat launchers

- added configure_invokeai.py to menu
- menu defaults to browser-based invoke

* remove conda workflow (#2321)

* fix `token_ids has shape torch.Size([79]) - expected [77]`
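
The 79-vs-77 mismatch suggests a prompt tokenized without truncation/padding to CLIP's fixed length; a generic illustration with the HF tokenizer, not the project's exact fix.

```
from transformers import CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
prompt = "a very long prompt " * 20
# CLIP expects exactly tokenizer.model_max_length (77) ids: truncate long prompts, pad short ones
ids = tokenizer(prompt, truncation=True, padding="max_length",
                max_length=tokenizer.model_max_length, return_tensors="pt").input_ids
assert ids.shape[-1] == tokenizer.model_max_length
```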

* update CHANGELOG.md with 2.3.* info

- Add information on how formats have changed and the upgrade process.
- Add short bug list.

Co-authored-by: Damian Stewart <d@damianstewart.com>
Co-authored-by: Damian Stewart <null@damianstewart.com>
Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
Co-authored-by: Wybartel-luxmc <37852506+Wybartel-luxmc@users.noreply.github.com>
Co-authored-by: mauwii <Mauwii@outlook.de>
Co-authored-by: mickr777 <115216705+mickr777@users.noreply.github.com>
Co-authored-by: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Co-authored-by: Eugene Brodsky <ebr@users.noreply.github.com>
Co-authored-by: Matthias Wild <40327258+mauwii@users.noreply.github.com>
2023-01-15 09:22:46 -05:00
Lincoln Stein
c855d2a350 Consolidate version numbers (#2201)
* update version number

* print version number at startup

* move version number into ldm/invoke/_version.py

* bump version to 2.2.6+a0

* handle whitespace better

* resolve issues raised by mauwii during PR review
2023-01-15 04:07:21 +01:00
Kent Keirsey
4dd74cdc68 update Readme (#2278)
* Update Readme & Assets

* Update Canvas Assets

* Updated Readme to correct missing refs

* Correcting refs

* Updating Canvas Preview size

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2023-01-15 01:13:11 +00:00
Lincoln Stein
746e97ea1d enhance the installer (#2299)
1. create_installers.sh now asks before tagging and committing the
   current repo
2. trailing whitespace removed from user-provided location of invokeai
   directory in install.bat
2023-01-14 19:28:14 -05:00
gogurtenjoyer
241313c4a6 Update automated install doc - link to MS C libs
Updated the link for the MS Visual C libraries - I'm not sure if MS changed the location of the files but this new one leads right to the file downloads.
2023-01-12 14:09:35 -08:00
Edward Johan
b6d1a17a1e tip fix 2023-01-09 23:53:55 +06:00
Lincoln Stein
c73434c2a3 tweak install instructions (#2227)
- Removed links from the install instructions to the installer zip files.
- Replaced "2.2.4" with "2.X.X" globally, to avoid the docs going out of
  date.
2023-01-09 00:12:41 +00:00
Matthias Wild
69b15024a9 update python requirements (#2251)
since torch versions <1.13.1 have a critical security issue
2023-01-08 07:44:03 -05:00
William Chong
26e413ae9c Require huggingface-hub version 0.11.1 (#2222)
`import login` only works in huggingface-hub >= 0.11.0

Fixes https://github.com/invoke-ai/InvokeAI/issues/2149
2023-01-04 22:21:48 +00:00
Chris Dawson
91eb84c5d9 Allow multiple CORS origins (#2031)
* Permit cmd override for CORS modification

* Enable multiple origins for CORS

* Remove CMD_OVERRIDE

* Revert executable bit change

* Defensively convert list into string

* Bad if statement

* Retry rebase

* Retry rebase

Co-authored-by: Chris Dawson <chris@vivoh.com>
2023-01-04 14:26:42 -05:00
Lincoln Stein
5d69bd408b fix facexlib weights being downloaded to .venv (#2221)
- fix problem of facexlib weights being downloaded into the .venv
  package directory when codeformer restoration is requested.
- now uses pre-downloaded weights in ~/invokeai/models/gfpgan/weights
  (which is shared with gfpgan)

Co-authored-by: Matthias Wild <40327258+mauwii@users.noreply.github.com>
2023-01-04 14:22:49 -05:00
Minjune Song
21bf512056 Local embeddings support (CLI autocomplete) (#2211)
* integrate local embeds with HF embeds

* Update concepts_lib.py

* Update concepts_lib.py

Co-authored-by: BuildTools <unconfigured@null.spigotmc.org>
Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2023-01-04 06:22:10 +00:00
Lincoln Stein
6c6e534c1a fix codeformer facexlib files being downloaded into .venv
- Fixed codeformer module so that the facexlib files are downloaded
  into their pre-stored location in models/gfpgan/weights (shared
  with the GFPGAN module)
2023-01-04 00:13:33 -05:00
Name
010378153f spelling mistake fixed
wil -> will
2023-01-04 05:48:18 +13:00
Jeremy Clark
9091b6e24a Explicitly call python found in system (#2203)
Explicitly calls the python bin found in the system instead of calling `python` which may fail on systems where python is installed as `python3`
2023-01-02 13:47:01 +00:00
Matthias Wild
64700b07a8 fixing a typo in invoke.py (#2204) 2023-01-02 02:39:43 +00:00
Matthias Wild
34f8117241 Fix patchmatch-docs (#2111)
* use `uname -m` instead of `arch`
addressing #2105

* fix install patchmatch formatting

* fix 2 broken links

* remove instruction to do develop install of patchmatch

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2023-01-01 20:52:05 +00:00
Lincoln Stein
c3f82d4481 update version number (#2199) 2023-01-01 19:26:55 +00:00
Lincoln Stein
3929bd3e13 Lstein release candidate 2.2.5 (#2137)
* installer tweaks in preparation for v2.2.5

- pin numpy to 1.23.* to avoid requirements conflict with numba
- update.sh and update.bat now accept a tag or branch string, not a URL
- update scripts download latest requirements-base before updating.

* update.bat.in debugged and working

* update pulls from "latest" now

* bump version number

* fix permissions on create_installer.sh

* give Linux user option of installing ROCm or CUDA

* rc2.2.5 (install.sh) relative path fixes (#2155)

* (installer) fix bug in resolution of relative paths in linux install script

point installer at 2.2.5-rc1

selecting ~/Data/myapps/ as the location would create a ./~/Data/myapps
instead of expanding the ~/ to the value of ${HOME}

also, squash the trailing slash in the path, if it was entered by the user

* (installer) add option to automatically start the app after install

also: when exiting, print the command to get back into the app

* remove extraneous whitespace

* model_cache applies rootdir to config path

* bring installers up to date with 2.2.5-rc2

* bump rc version

* create_installer now adds version number

* rebuild frontend

* bump rc#

* add locales to frontend dist package

- bump to patchlevel 6

* bump patchlevel

* use invoke-ai version of GFPGAN

- This version is very slightly modified to allow weights files
  to be pre-downloaded by the configure script.

* fix formatting error during startup

* bump patch level

* workaround #2 for GFPGAN facexlib() weights downloading

* bump patch

* ready for merge and release

* remove extraneous comment

* set PYTORCH_ENABLE_MPS_FALLBACK directly in invoke.py

Co-authored-by: Eugene Brodsky <ebr@users.noreply.github.com>
2023-01-01 17:54:45 +00:00
Yorzaren
caf7caddf7 Update WEBUIHOTKEYS.md - Key Display (#2190)
* Update WEBUIHOTKEYS.md

Fixed display errors so it no longer shows extra plus signs on the site

* Update WEBUIHOTKEYS.md

Corrected the keycap look to show symbols on special keys like Enter, Shift, and Ctrl.
2022-12-31 23:48:17 +01:00
blessedcoolant
9fded69f0c Frontend Lint 2022-12-31 06:22:32 +13:00
blessedcoolant
9f719883c8 WebUI 2.2.5 Bug Fix Build 2022-12-31 06:22:32 +13:00
blessedcoolant
5d4da31dcd Localization Updates 2022-12-31 06:22:32 +13:00
blessedcoolant
686640af3a Fix status localization not working when iteration count > 0 2022-12-31 06:22:32 +13:00
blessedcoolant
edc22e06c3 RU Localization for Model Manager and Tooltips
Co-Authored-By: Artur <83028930+netsvetaev@users.noreply.github.com>
2022-12-31 06:22:32 +13:00
blessedcoolant
409a46e2c4 Fix styling of slider input to accommodate 4 digit values 2022-12-31 06:22:32 +13:00
blessedcoolant
e7ee4ecac7 Fix NumberInput not respecting min and max values on stepper click 2022-12-31 06:22:32 +13:00
blessedcoolant
da6c690d7b WebUI 2.2.5 Build 2022-12-30 08:35:54 +13:00
blessedcoolant
7c4544f95e Fix Seed Shuffle layout to adjust to localization text 2022-12-30 08:35:54 +13:00
David Regla
f173e0a085 i18n: add Spanish (es) translations 2022-12-30 08:35:54 +13:00
David Regla
2a90e0c55f Remove extra trailing space in JSON file 2022-12-30 08:35:54 +13:00
Lincoln Stein
9d103ef030 attempt to address memory issues when loading ckpt models (#2128)
- A couple of users have reported that switching back and forth
  between ckpt models is causing a "GPU out of memory" crash.
  Traceback suggests there is actually a CPU RAM issue.

- This speculative test simply performs a round of garbage collection
  before the point where the crash occurs.
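
The speculative mitigation amounts to something like the sketch below; the CUDA cache flush is an extra assumption here, not stated in the commit.

```
import gc

import torch

# force a garbage-collection pass (and release cached CUDA blocks)
# before loading the next checkpoint
gc.collect()
if torch.cuda.is_available():
    torch.cuda.empty_cache()
```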
2022-12-29 09:00:50 -05:00
pejotr
4cc60669c1 [WebUI] Localize tooltips (#2136)
* [WebUI]: Localize tooltips

* fix: typo in seamCorrection translation

* [WebUI]: Localize tooltips

* fix: typo in seamCorrection translation

* Add Missing Language Placeholders for Tooltip Localization

* Fix UI displacement in RU localization for options

* Fix double options during merge.

* Fix tkinter leftover

Co-authored-by: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
2022-12-29 21:19:57 +13:00
Yorzaren
d456aea8f3 Update WEBUIHOTKEYS.md
Fixed display errors so it no longer shows extra plus signs on the site
2022-12-29 17:30:57 +13:00
Ryan Cao
4151883cb2 i18n: simplified chinese translations for model manager 2022-12-29 16:37:51 +13:00
blessedcoolant
a029d90630 Model Manager Final Build 2022-12-29 08:33:27 +13:00
blessedcoolant
211d6b3831 Improve add models guidance 2022-12-29 08:33:27 +13:00
blessedcoolant
b40faa98bd Model Manager Test Build 4 2022-12-29 08:33:27 +13:00
blessedcoolant
8d4ad0de4e Formatting Pass 2022-12-29 08:33:27 +13:00
blessedcoolant
e4b2f815e8 Improve interaction area for edit and stylize 2022-12-29 08:33:27 +13:00
blessedcoolant
0dd5804949 Normalize the config path to prevent write errors 2022-12-29 08:33:27 +13:00
blessedcoolant
53476af72e Add Italian and PT BR Localization for Model Manager 2022-12-29 08:33:27 +13:00
blessedcoolant
61ee597f4b Add messages indicating that the user should add models. 2022-12-29 08:33:27 +13:00
blessedcoolant
ad0b366e47 Add option to scan loaded folder again 2022-12-29 08:33:27 +13:00
blessedcoolant
942f029a24 Model Manager Test Build 3 2022-12-29 08:33:27 +13:00
blessedcoolant
e0d7c466cc Add Scrollbar Styling 2022-12-29 08:33:27 +13:00
blessedcoolant
16c0132a6b Model Manager Test Build 2 2022-12-29 08:33:27 +13:00
blessedcoolant
7cb2fcf8b4 Remove folder picker 2022-12-29 08:33:27 +13:00
blessedcoolant
1a65d43569 Add Icon To the tkinter folder picker 2022-12-29 08:33:27 +13:00
blessedcoolant
1313e31f62 Add Italian Localization for Model Manager 2022-12-29 08:33:27 +13:00
blessedcoolant
aa213285bb Style fixes to accommodate localization in Model Manager 2022-12-29 08:33:27 +13:00
blessedcoolant
f691353570 Add Model Manager German Localization 2022-12-29 08:33:27 +13:00
blessedcoolant
1c75010f29 Model Manager Test Build 2022-12-29 08:33:27 +13:00
blessedcoolant
eba8fb58ed Change Settings Icon in the Site Header 2022-12-29 08:33:27 +13:00
blessedcoolant
83a7e60fe5 Add Missing Localization Files for Model Manager 2022-12-29 08:33:27 +13:00
blessedcoolant
d4e86feeeb Add Simplified Chinese Localization
Co-Authored-By: Ryan Cao <70191398+ryanccn@users.noreply.github.com>
2022-12-29 08:33:27 +13:00
blessedcoolant
427614d1df Populate en-US localization configs 2022-12-29 08:33:27 +13:00
blessedcoolant
ce6fb8ea29 Model Form Styling 2022-12-29 08:33:27 +13:00
blessedcoolant
df858eb3f9 Add Edit Model Functionality 2022-12-29 08:33:27 +13:00
blessedcoolant
6523fd07ab Model Edit Initial Implementation 2022-12-29 08:33:27 +13:00
Kent Keirsey
a823e37126 Fix storehook references 2022-12-29 08:33:27 +13:00
Kent Keirsey
4eed06903c Adding model edits (unstable/WIP) 2022-12-29 08:33:27 +13:00
blessedcoolant
79d577bff9 Model Manager Frontend Rebased 2022-12-29 08:33:27 +13:00
blessedcoolant
3521557541 Model Manager Backend Implementation 2022-12-29 08:33:27 +13:00
Artur
e66b1a685c Update README.md (#2115)
Copyright (c) 2020 -> 2022

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2022-12-25 16:41:11 +00:00
tomosuto
c351aa19eb Update 020_INSTALL_MANUAL.md (#2093)
Fixed a description that was overflowing from the warning box
2022-12-25 02:02:50 +00:00
Artur
aa1f46820f Update 020_INSTALL_MANUAL.md (#2114)
«git clone» step added for pip

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2022-12-24 21:11:51 +00:00
blessedcoolant
1d34405f4f [WebUI] Localization Support (#2050)
* Initial Localization Implementation

* Fix Initial Spinner

* Language Picker Dropdown

* RU Localization Update

Co-Authored-By: Artur <83028930+netsvetaev@users.noreply.github.com>

* Fixed localization breaking themes

* useUpdateTranslation Hook

To force trigger translations for data objects

* Localize Tab Data

* Localize Prompt Input & Current Image Buttons

* Localize Gallery & Bug FIxes

Fix a bug where deleting an image from the context menu wasn't working. Removed tooltips that were broken, as they don't work in the context menu.

* Fix localization breaking in production

* Add Toast Localization Support

* Localize Unified Canvas

* Localize WIP Tabs

* Localize Hotkeys

* Localize Settings

* RU Localization Update

Co-Authored-By: Artur <83028930+netsvetaev@users.noreply.github.com>

* Add Support for Italian and Portuguese

* Localize Toasts

* Fix width of language picker items

* Localize Backend Messages

* Disable Debug Messages

* Add Support for French

* Fix missing localization for a string in the SettingsModal

* Disable French

* Styling updates to normalize text and accommodate other langs

* Add Portuguese Brazilian

* Fix Hotkey headers not being localized.

* Fix styling issue on models tag in Settings

* Fix Slider Styling to accommodate different languages

* Fix slider styling in light mode.

* Add German

* Add Italian

* Add Polish

* Update Italian

* Localized Frontend Build

* Updated RU Translations

* Fresh Build with updated RU changes

* Bug Fixes and Loc Updates

* Updated Frontend Build

* Fresh Build

Co-authored-by: Artur <83028930+netsvetaev@users.noreply.github.com>
2022-12-24 18:23:21 +00:00
Matthias Wild
f961e865f5 use uname -m instead of arch (#2110)
addressing #2105

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2022-12-24 16:55:19 +00:00
Kent Keirsey
9eba6acb7f Fix of Hires Fix on Img2Img tab (#2096)
* Fix of Hires Fix on Img2Img tab

Fixed linting issues

* Attempting to fix prettier workflow issues

* More Prettier Attempts

* Finally Fixed Prettier Issues

* Fix of Hires Fix on Img2Img tab

Fixed linting issues

* Attempting to fix prettier workflow issues

* More Prettier Attempts

* Finally Fixed Prettier Issues

* updated with useEffect

* Update to fix Prettier

* Update useEffect dependencies

* Fix dispatch dependency error from prettier

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2022-12-24 10:56:40 -05:00
Lincoln Stein
e32dd1d703 [docs] Provide an example of reading prompts from a script (#2087)
* add example of using -from_file to read from a script

Addresses #1654, #473, #566, #1008 at least partially.

* fix bug in code example

* improve docs for !fetch and !replay

* enable rendering of images in GH WebUI
also fix indentation in some bullet lists

Co-authored-by: mauwii <Mauwii@outlook.de>
2022-12-23 14:06:59 +00:00
tomosuto
bbbfea488d Update 020_INSTALL_MANUAL.md (#2092)
The file name should be configure_invokeai.py
2022-12-23 04:58:40 +00:00
Lincoln Stein
c8a9848ad6 correct a crash in img2img under particular circumstances (#2088)
When using the inpainting model, the following sequence of events
would cause a predictable crash:

1. Use unified canvas to outcrop a portion of the image.
2. Accept outcropped image and import into img2img
3. Try any img2img operation

This closes #1596.

The crash was:

```
operands could not be broadcast together with shapes (320,512) (512,576)

Traceback (most recent call last):
  File "/data/lstein/InvokeAI/backend/invoke_ai_web_server.py", line 1125, in generate_images
    self.generate.prompt2image(
  File "/data/lstein/InvokeAI/ldm/generate.py", line 492, in prompt2image
    results = generator.generate(
  File "/data/lstein/InvokeAI/ldm/invoke/generator/base.py", line 98, in generate
    image = make_image(x_T)
  File "/data/lstein/InvokeAI/ldm/invoke/generator/omnibus.py", line 138, in make_image
    return self.sample_to_image(samples)
  File "/data/lstein/InvokeAI/ldm/invoke/generator/omnibus.py", line 173, in sample_to_image
    corrected_result = super(Img2Img, self).repaste_and_color_correct(gen_result, self.pil_image, self.pil_mask, self.mask_blur_radius)
  File "/data/lstein/InvokeAI/ldm/invoke/generator/base.py", line 148, in repaste_and_color_correct
    mask_pixels = init_a_pixels * init_mask_pixels > 0
ValueError: operands could not be broadcast together with shapes (320,512) (512,576)
```

This error was caused by the image and its mask not being of identical
size due to the outcropping operation. The ultimate cause of this
error has something to do with different code paths being followed in
the `inpaint` vs the `omnibus` modules.

Since omnibus will be obsoleted by diffusers, I have chosen just to
work around the problem rather than track it down to its source. The
only ill effect is that color correction will not be applied to the
first image created by `img2img` after applying the outcrop and
immediately importing into the img2img canvas. Since the inpainting
model has less of a color drift problem than the standard model, this
is unlikely to be problematic.
2022-12-22 14:53:23 +00:00
Eugene Brodsky
e88e274bf2 Add @ebr to Contributors (#2095)
* (docs) @ebr signs Statement of Values

* (docs) add @ebr to Contributors page
2022-12-21 14:33:08 -05:00
Lincoln Stein
cca8d14c79 defer patchmatch loading (#2039)
* defer patchmatch loading

Because of the way that patchmatch was loaded early at import time, it
was impossible to turn off the attempted loading with --no-patchmatch.

In addition, the patchmatch loading messages appear early on during
initialization, interfering with the ability to print out the version
cleanly when --version is provided to the invoke script.

This commit creates a thin wrapper class for patch_match that is only
loaded when needed, solving both problems.

* create a singleton patchmatch object for use in inpainting

This creates a thin wrapper to patchmatch which loads the module
on demand, respecting the global "trypatchmatch" option.
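
A sketch of such a thin on-demand wrapper; the class name and the pypatchmatch import path are assumptions.

```
class LazyPatchMatch:
    """Nothing heavy is imported until the first inpaint() call."""

    def __init__(self, try_patchmatch: bool = True):
        self._enabled = try_patchmatch
        self._module = None

    def _load(self):
        if self._module is None and self._enabled:
            from patchmatch import patch_match   # deferred import of pypatchmatch
            self._module = patch_match
        return self._module

    def inpaint(self, image, mask):
        pm = self._load()
        if pm is None:
            raise RuntimeError("patchmatch is disabled or unavailable")
        return pm.inpaint(image, mask, patch_size=3)
```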

* address 2nd round of issues in PR 2039 comments

* Patchmatch->PatchMatch and misc cleanup
2022-12-20 15:32:35 -08:00
Kent Keirsey
464aafa862 Correct asset link (#2081)
* Correct asset link

Minor documentation fix to correct linked asset.

* fix switched graphics
also:
- add blanks before/after figure tag
  (makes the screenshot also appear in github)
- use a table in inpainting example to have the pics side by side

Co-authored-by: mauwii <Mauwii@outlook.de>
2022-12-20 17:29:54 +00:00
Lincoln Stein
6e98b5535d add --version to invoke.py arguments (#2038)
* add --version to invoke.py arguments

This commit allows invoke.py to print out its name and version
number when given the --version argument. I had to move some
status messages around in order to make the output clean.

There is still an early message about initializing patchmatch
that interferes with a clean print of the version, and in fact the
--no-patchmatch argument is not doing anything. This will be the
subject of a subsequent PR.

* export APP_ID and APP_VERSION

Needed to support the web backend.
2022-12-20 15:14:28 +00:00
Eugene Brodsky
ab2972f320 Fix the configure script crash when no TTY is allocated (e.g. a container) (#2080)
* (config) avoid failure when huggingface token is not set

it is not required for model download, and we are handling the
saving of the token during the huggingface authentication phase elsewhere.

* (config) safely print to non-tty terminals where width can not be determined
2022-12-20 03:52:58 +01:00
Matthias Wild
1ba40db361 optimize Dockerfile (#2036)
* remove build-essentials after opencv is built
also remove unnecessary python3-opencv dependency (it's already in the venv)

* use branch name as tag

* leave pip and setuptools on the preinstalled versions

* Rename Argument from WORKDIR to APPDIR

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2022-12-19 23:53:19 +01:00
Damian Stewart
f69fc68e06 Revert "Don't crash CLI on exceptions (#2066)" (#2078)
This reverts commit 147834e99c.
2022-12-20 08:56:04 +13:00
Scott Lahteine
7d8d4bcafb Global replace [ \t]+$, add "GB" (#1751)
* "GB"

* Replace [ \t]+$ global

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2022-12-19 16:36:39 +00:00
Lincoln Stein
4fd97ceddd remove redundant code line (#2068)
* remove redundant code line

install.bat was copying the requirements file into the install folder
twice, causing an error message on the second try. This fixes the
issue.

* add further improvements to installer

- Windows version will unzip to have requirements.txt already in
  the right location, to prevent problems when users try to run
  the .bat script from within a mounted read-only zip file manager.

- Do not assume that "pip" is on the path in either the .bat or shell
  versions of the update script.
2022-12-19 14:57:41 +00:00
Eugene Brodsky
ded49523cd (docs) update installer links as per some Discord reports 2022-12-18 22:32:33 -05:00
Eugene Brodsky
914e5fc4f8 (docs) update python install instructions for Ubuntu. Add documentation for installing OpenGL libraries on Linux. Fixes #1987 2022-12-18 22:32:33 -05:00
Eugene Brodsky
ab4d391a3a (docs) call out that the root volume needs at least 6GB of free space for pip cache. Fixes #2056 2022-12-18 22:32:33 -05:00
Matthias Wild
82f59829b8 set workflow PR triggers to filter PR-types (#2065)
* set workflow PR triggers to filter PR-types
- `review_requested`
- `ready_for_review`

* fail tests if draft pr

* add more types to test pr triggers

* remove unneeded condition

* readd condition

* leave PR-types default, only verify PRs to main
and fail for draft-PRs

* set types to cancel when converted to draft
2022-12-18 20:54:07 +00:00
Damian Stewart
147834e99c Don't crash CLI on exceptions (#2066) 2022-12-18 16:28:47 +01:00
Eugene Brodsky
f41da11d66 Relax Huggingface login requirement during setup (#2046)
* (config) handle huggingface token more gracefully

* (docs) document HuggingFace token requirement for Concepts

* (cli) deprecate the --(no)-interactive CLI flag

It was previously only used to skip the SD weights download, and therefore
the prompt for Huggingface token (the "interactive" part).

Now that we don't need a Huggingface token
to download the SD weights at all, we can replace this flag with
"--skip-sd-weights", to clearly describe its purpose

The `--(no)-interactive` flag still functions the same, but shows a deprecation message

* (cli) fix emergency_model_reconfigure argument parsing

* (config) fix installation issues on systems with non-UTF8 locale

Co-authored-by: Matthias Wild <40327258+mauwii@users.noreply.github.com>
2022-12-18 10:44:50 +01:00
Eugene Brodsky
5c5454e4a5 (docs) add redirects for moved pages (#2063) 2022-12-18 08:04:58 +00:00
Shapor Naghibzadeh
dedbdeeafc Update scripts/configure_invokeai.py
Co-authored-by: Eugene Brodsky <ebr@users.noreply.github.com>
2022-12-18 01:21:47 -05:00
Shapor Naghibzadeh
d1770bff37 Accept --root_dir in addition to --root in configure_ivokeai.py to be consistent with the documentation in 020_INSTALL_MANUAL. 2022-12-18 01:21:47 -05:00
JPPhoto
20652620d9 Added compiled TS changes 2022-12-17 20:56:42 -05:00
Jonathan
51613525a4 Updated to pull threshold from an existing image even if 0 (#2051)
Addresses #2049 but not other cases where the stored value is 0 (e.g. perlin noise). This should be investigated more thoroughly.
2022-12-17 19:03:09 -05:00
blessedcoolant
dc39f8d6a7 Fix broken embedding variants (#2037) 2022-12-17 03:07:05 +00:00
Lincoln Stein
f1748d7017 avoid leaking data to HuggingFace (#2021)
Before making a concept download request to HuggingFace, the concepts
library module now checks the concept name against a downloaded list
of all the concepts currently known to HuggingFace.  If the requested
concept is not on the list, then no download request is made.
2022-12-16 16:50:02 +00:00
Lincoln Stein
de7abce464 add an argument that lets user specify folders to scan for weights (#1977)
* add an argument that lets user specify folders to scan for weights

This PR adds a `--weight_folders` argument to invoke.py. Using
argparse, it adds a "weight_folders" attribute to the Args object, and
can be used like this:

```
'''test.py'''
from ldm.invoke.args import Args

args = Args().parse_args()

for folder in args.weight_folders:
    print(folder)
```

Example output:

```
python test.py --weight_folders /tmp/weights /home/fred/invokeai/weights "./my folder with spaces/weight files"
/tmp/weights
/home/fred/invokeai/weights
./my folder with spaces/weight files
```

* change --weight_folders to --weight_dirs
2022-12-16 15:14:49 +00:00
Kaspar Emanuel
2aa5bb6aad Auto-format frontend (#2009)
* Auto-format frontend

* Update lint-frontend GA workflow node and checkout

* Fix linter error in ThemeChanger

* Add a `on: pull_request` to lint-frontend workflow

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2022-12-16 13:56:39 +01:00
Matthias Wild
c0c4d7ca69 update (docker-)build scripts, .dockerignore and add patchmatch (#1970)
* update build scripts and dockerignore
updates to build and run script:
- read repository name
- include flavor in container name
- read arch via arch command
- use latest tag instead of arch
- don't bindmount `$HOME/.huggingface`
- make sure HUGGINGFACE_TOKEN is set

updates to .dockerignore
- include environment-and-requirements
- exclude binary_installer
- exclude docker-build
- exclude docs

* disable push and pr triggers of cloud image
also disable pushing.

This was decided since:
- it is not multiarch usable
- the default image is already cloud approved

* integrate patchmatch in container

* pin versions of recently introduced dependencies

* remove now unnecessary part from build.sh
move huggingface token to run script, so it can download missing models

* move GPU_FLAGS to run script
since not needed at build time

* update env.sh

- read REPOSITORY_NAME from env if available
- add comment to explain the intention of this file
- remove unnecessary exports

* get rid of repository_name_lc

* capitalize variables

* update INSTALL_DOCKER with new variables

* add comments pointing to the docs

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2022-12-16 13:53:37 +01:00
Eugene Brodsky
7d09d9da49 delete old 'server' package and the dependency_injector requirement (#2032)
fixes #1944
2022-12-16 06:28:16 -05:00
blessedcoolant
ffa54f4a35 Fix --config arg not being recognized 2022-12-16 18:29:47 +13:00
blessedcoolant
69cc0993f8 Add Embedding Parsing (#1973)
* Add Embedding Parsing

* Add Embedding Parsing

* Return token_dim in embedding_info

* fixes to handle other variants

1. Handle the case of a .bin file being mislabeled .pt (seen in the
wild at https://cyberes.github.io/stable-diffusion-textual-inversion-models/)

2. Handle the "broken" .pt files reported by https://github.com/invoke-ai/InvokeAI/issues/1829

3. When token name is not available, use the basename of the pt or bin file rather than the
   whole path.

fixes #1829

* remove whitespace

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2022-12-15 17:26:36 -05:00
mauwii
1050f2726a update links with new filenames 2022-12-16 10:26:02 +13:00
Lincoln Stein
f7170e4156 improve installation documentation
1. Added a big fat warning to the Windows installer to tell the user
   to install the Visual C++ redistributable.

2. Added a big fat warning to the automated installer doc to
   tell the user the same thing.

3. Reordered entries on the table-of-contents sidebar for installation
   to prioritize the most important docs.

4. Moved older installation documentation into deprecated folder.

5. Moved developer-specific installation documentation into the
   developers folder.
2022-12-16 10:26:02 +13:00
mauwii
bfa8fed568 update site_name 2022-12-16 10:24:03 +13:00
mauwii
2923dfaed1 update index and changelog 2022-12-16 10:24:03 +13:00
zeptofine
0932b4affa Replace latest link
The link still points to 2.2.3 .
2022-12-16 10:22:44 +13:00
Kaspar Emanuel
0b10835269 Fix initial theme setting 2022-12-16 10:20:26 +13:00
Kaspar Emanuel
6e0f3475b4 Reduce frontend eslint warnings to 0 2022-12-16 10:18:45 +13:00
Kaspar Emanuel
9b9e276491 Add lint-frontend github actions workflow 2022-12-16 10:16:01 +13:00
Kaspar Emanuel
392c0725f3 Remove circular dependencies in frontend 2022-12-16 10:16:01 +13:00
wfng92
2a2f38a016 Correct timestep for img2img initial noise addition (#1946)
* Correct timestep for img2img initial noise addition

* apply fix to inpaint and txt2img2img as well

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2022-12-15 15:59:19 -05:00
Kevin Turner
7a4e647287 build: GitHub Action to lint python files with pyflakes (#1332) 2022-12-15 19:30:58 +00:00
Lincoln Stein
b8e1151a9c change pypatchmatch only 2022-12-15 10:37:13 -05:00
Lincoln Stein
f39cb668fc update requirements and environment files
- Update pypatchmatch to 0.1.5 (see request from @Kyle0654 here
  https://discord.com/channels/1020123559063990373/1034740515209486387/1052465462757310464 )

- Removed basicsr workaround for environment files, now that we know
  the problem was caused by the Windows long path issue.
2022-12-15 10:37:13 -05:00
Lincoln Stein
6c015eedb3 better error reporting when root directory not found (#2001)
- The invoke.py script now checks that the root (runtime) directory contains
  the expected config/models.yaml file and, if it doesn't, exits with a helpful
  error message about how to set the proper root.

- Formerly the script would fail with a "bad model" message and try to redownload
  its models, which is not helpful in the case that the root is missing or
  damaged.
2022-12-15 09:34:10 -05:00
Kent Keirsey
834e56a513 update Contributors directive to merge to main 2022-12-14 23:42:53 -05:00
mauwii
652aaa809b add missing backticks and some icons for tab
also put the alf screenshot in a table to fit the other examples
2022-12-14 18:36:07 -05:00
Lincoln Stein
89880e1f72 affirm that <concepts> work with the webGUI 2022-12-14 18:36:07 -05:00
Lincoln Stein
d94f955d9d fix manual install documentation 2022-12-14 18:36:07 -05:00
Damian Stewart
64339af2dc restrict to 75 tokens and correctly handle blends 2022-12-14 16:54:27 -05:00
Chris Dawson
5d20f47993 Permit usage of GPUs in docker script (#1985)
* Add gpu support to docker

Enable GPUs within docker

* Use gpus flag

* Add GPU information to readme

* Fix env var name for GPU
2022-12-14 05:21:33 +00:00
Ivan Efimov
ccf8a46320 Fix: define path as None before usage 2022-12-13 19:46:03 -05:00
mauwii
af3d72e001 re-enable wheel install in test-invoke-pip.yml 2022-12-13 23:29:08 +01:00
Matthias Wild
1d78e1af9c add concurrency to test actions (#1975)
configured to only cancel workflows in PRs, but not on the main branch
originated in #1933, but optimized to not cancel workflows of non-PRs
2022-12-13 19:53:10 +01:00
Matthias Wild
1fd605604f remove redundant tests, only do 20 steps (#1972)
- remove tests already performed in PR
- remove tests pointing to non-existing files
- reduce steps to 20

This should decrease test time a lot and also "fix" failing mac tests.
I still recommend investigating why mac invoke takes so much longer!

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2022-12-13 19:39:29 +01:00
Ivan Efimov
f0b04c5066 Typo fix in INSTALL_AUTOMATED.md (#1968) 2022-12-13 19:13:28 +01:00
Eugene Brodsky
2836976d6d (install) fix segfault on macos when using homebrew 2022-12-13 11:39:08 -05:00
blessedcoolant
474220ce8e Fresh Frontend Build 2022-12-13 20:45:37 +13:00
blessedcoolant
4074705194 Update WEBUIHOTKEYS.md
Update hotkeys docs. They were outdated.
2022-12-13 20:45:37 +13:00
blessedcoolant
e89ff01caf Update Hotkeys Modal
Update hotkeys modal to reflect the previous changes to the scale and restore hotkeys and also improve a few other descriptions.
2022-12-13 20:45:37 +13:00
blessedcoolant
2187d0f31c Change Restore and Upscale Hotkeys
Changed the hotkeys of Restore and Upscale from R and U to Shift+R and Shift+U. Users could accidentally press R and U to trigger these functions, which can be annoying, especially since R is also the hotkey for Reset View in other tabs and can become muscle memory.
2022-12-13 20:45:37 +13:00
Lincoln Stein
1219c39d78 Lstein installer improvements (#1954)
* add logic for finding the root (runtime) directory

This commit fixes the root search logic to be as follows:

1) The `--root_dir` command line argument
2) The contents of environment variable INVOKEAI_ROOT
3) The VIRTUAL_ENV environment variable, plus '..'
4) $HOME/invokeai

(3) is the new feature. Since we are now recommending to install
InvokeAI and its dependencies into the .venv in the root directory,
this should be a reliable choice.

* make installer scripts more robust

This commits improves the installer .sh and .bat scripts in the following
ways:

1. They now handle folder/directory names containing spaces.
2. Pip will be installed into the .venv using the `ensurepip`
   module.

This addresses issues identified by @vargol in Issue #1941

* add --prefer-binary option to pip install

* fix unset variable crash

* add patch level to zip file name

* Fix crash introduced in #1948
2022-12-13 01:15:11 -05:00
blessedcoolant
bc0b0e4752 Possible fix for crash introduced in #1948 (#1963)
* Possible fix for crash introduced in #1948

* fix root dir search logic

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2022-12-13 01:14:46 -05:00
Lincoln Stein
cd3da2900d close #1956 (#1962) 2022-12-13 01:12:53 -05:00
blessedcoolant
4402ca10b2 [WebUI 2.2.5] Unified Canvas Alternate UI Beta (#1951)
* Fix Prompt Placeholder Text Color

* Display Model Desc as tooltip in SiteHeader

This'll allow the user to quickly access info like activation token for that model if they set it in the description.

* Unified Canvas UI Beta

* Initial Test Build

* Make Snap Grid Hotkey Accessible Always
2022-12-12 19:36:05 -05:00
Matthias Wild
1a1625406c Make Dockerfile cloud ready (tested on runpod) (#1950)
* Push dockerfile (#18)

* update build-container.yml

* add login step to build-container.yml

* update job name

* update matrix: add registry and platforms
also set latest only for cuda image

* quote string

* use latest for amd and cuda image

* separate images for cuda and amd

* change latest from auto to true

* configure_invoke -y instead of --interactive

* fix argument to --yes

* update matrix:
- use flavor instead of pip-requirements
- add flavor `cloud`
- add `dockerfile`

* introduce INVOKE_MODEL_RECONFIGURE

* add `--cap-add=sys_nice` to run.sh

* update Dockerfile: install wheel

* only have main branch in action again

* disable push of cloud image for now
since it still has its own workflow, but the PoC succeeded

* remove now-untrue comments at the top

* install pip, setuptools and wheel in sep. step

* add labels to the image

* remove doubled installation of wheel
2022-12-12 17:54:42 -05:00
Lincoln Stein
36e6908266 add logic for finding the root (runtime) directory (#1948)
This commit fixes the root search logic to be as follows:

1) The `--root_dir` command line argument
2) The contents of environment variable INVOKEAI_ROOT
3) The VIRTUAL_ENV environment variable, plus '..'
4) $HOME/invokeai

(3) is the new feature. Since we are now recommending to install
InvokeAI and its dependencies into the .venv in the root directory,
this should be a reliable choice.
2022-12-12 15:05:14 -05:00
Lincoln Stein
7314f1a862 add --karras_max option to invoke.py command line (#1762)
This addresses image regression image reported in #1754
2022-12-12 13:16:15 -05:00
psychedelicious
5c3cbd05f1 Improves configure_invokeai.py postscript (#1935)
The first few lines directed the user to run `python scripts/invoke.py`, which is not exactly correct anymore, and a holdover from previous versions.

Improves and clarifies the postscript messaging.
2022-12-12 13:13:46 -05:00
rmagur1203
f4e7383490 Load model in inpaint when using free_gpu_mem option (#1938)
* Load model in inpaint when using free_gpu_mem option

* Passing free_gpu_mem option to inpaint generator
2022-12-12 09:14:30 -05:00
rmagur1203
96a12099ed Fix the mistake of not importing the gc (#1939) 2022-12-12 09:14:09 -05:00
Lincoln Stein
e159bb3dce update installers for v2.2.4 tag (#1936) 2022-12-11 18:17:45 -05:00
rmagur1203
bd0c0d77d2 Reduce more memories on free_gpu_mem option (#1915)
* Enhance free_gpu_mem option
Unload cond_stage_model when the free_gpu_mem option is set

* Enhance free_gpu_mem option
Unload cond_stage_model when the free_gpu_mem option is set
2022-12-11 13:49:55 -05:00
Lincoln Stein
f745f78cb3 correct bug when trying to enhance JPG images (#1928)
This fix was authored by @mebelz and is reissued here to base it on
`main`.
2022-12-11 13:48:47 -05:00
Lincoln Stein
7efe0f3996 fix mkdocs formatting (#1927)
* fix mkdocs formatting

* update formatting, add some mkdocs specials

* fix wrong line break, use icon for tab key

Co-authored-by: mauwii <Mauwii@outlook.de>
2022-12-11 13:48:34 -05:00
Damian Stewart
9f855a358a fix for crash with inpainting model introduced by #1866 (#1922)
* fix for crash using inpainting model

* prevent crash due to invalid attention_maps_saver
2022-12-11 13:48:12 -05:00
Matthias Wild
62b80a81d3 Update dockerfile 2.2.4 (#1924)
* updated Dockerfile
- use `python:3.10-slim` as baseimage
- separate builder and runtime stages again
- get rid of unneeded packages
- pin packages for persistence
- remove outdir from entrypoint since invoke.init is available in /data
- shrank image size to <2GB
- way better security score than before

* small output update to build.sh and run.sh

* update matrix in build-container.yml

* remove branches from build-container.yml
2022-12-11 17:33:54 +01:00
blessedcoolant
14587c9a95 Fresh Frontend Build 2022-12-11 11:19:22 -05:00
blessedcoolant
fcae5defe3 Add invokeai.init to gitignore 2022-12-11 11:19:22 -05:00
Lincoln Stein
e7144055d1 make webGUI model changing work again
- Using relative root addresses was causing problems when the
  current working directory was changed after start time.
- This commit makes the root address absolute at start time, such
  that changing the working directory later on doesn't break anything.
2022-12-11 11:19:22 -05:00
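Resolving the root once, at startup, is what makes later working-directory changes harmless; a minimal sketch of the idea (the `set_root` name is illustrative):

```python
import os
from pathlib import Path

def set_root(root: str) -> Path:
    """Turn the configured root into an absolute path at start time so that
    changing the working directory later cannot break lookups."""
    return Path(root).expanduser().resolve()

root = set_root("~/invokeai")
os.chdir(Path.home())      # a later directory change...
assert root.is_absolute()  # ...no longer affects where files under the root are resolved
```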
Lincoln Stein
c857c6cc62 rebuild frontend for 2.2.4 2022-12-11 11:19:22 -05:00
Lincoln Stein
7ecb11cf86 remove sampler questions (#1903) 2022-12-11 09:07:55 -05:00
Lincoln Stein
e4b61923ae fix InvokeAI download URLs (#1910)
- This fixes the .bat and .sh file URLs for the InvokeAI source
  code.
2022-12-11 07:10:17 -05:00
psychedelicious
aa68e4e0da Adds polyfill for Array.prototype.findLast() (#1909) 2022-12-11 06:54:15 -05:00
blessedcoolant
09365d6d2e Fix GUI not working (#1916) 2022-12-11 06:53:40 -05:00
AdamOStark
b77f34998c Responsive for devices under 600px
This doesn't work for the Canvas Painting yet, but works on img2img and text2img
2022-12-11 22:10:46 +13:00
Lincoln Stein
0439b51a26 Simple Installer for Unified Directory Structure, Initial Implementation (#1819)
* partially working simple installer

* works on linux

* fix linux requirements files

* read root environment variable in right place

* fix cat invokeai.init in test workflows

* fix classical cp error in test-invoke-pip.yml

* respect --root argument now

* untested bat installers added

* windows install.bat now working

fix logic to find frontend files

* rename simple_install to "installer"

1. simple_install => 'installer'
2. source and binary install directories are removed

* enable update scripts to update requirements

- Also pin requirements to known working commits.
- This may be a breaking change; exercise with caution
- No functional testing performed yet!

* update docs and installation requirements

NOTE: This may be a breaking commit! Due to the way the installer
works, I have to push to a public branch in order to do full end-to-end
testing.

- Updated installation docs, removing binary and source installers and
  substituting the "simple" unified installer.
- Pin requirements for the "http:" downloads to known working commits.
- Removed as much as possible the invoke-ai forks of others' repos.

* fix directory path for installer

* correct requirement/environment errors

* exclude zip files in .gitignore

* possible fix for dockerbuild

* ready for torture testing

- final Windows bat file tweaks
- copy environments-and-requirements to the runtime directory so that
  the `update.sh` script can run.

  This is not ideal, since we lose control over the
  requirements. Better for the update script to pull the proper
  updated requirements script from the repository.

* allow update.sh/update.bat to install arbitrary InvokeAI versions

- Can pass the zip file path to any InvokeAI release, branch, commit or tag,
  and the installer will try to install it.
- Updated documentation
- Added Linux Python install hints.

* use binary installer's :err_exit function

* use diffusers 0.10.0

* added logic for CPPFLAGS on mac

* improve windows install documentation

- added information on a couple of gotchas I experienced during
  windows installation, including DLL loading errors experienced
  when Visual Studio C++ Redistributable was not present.

* tagged to pull from 2.2.4-rc1

- also fix error of shell window closing immediately if suitable
  python not found

Co-authored-by: mauwii <Mauwii@outlook.de>
2022-12-11 00:37:08 -05:00
blessedcoolant
ef6870c714 Fix Inpainting Model entry in models.yaml.example 2022-12-10 23:52:24 -05:00
Damian Stewart
8cbb50c204 avoid further crash under low-memory conditions 2022-12-10 15:32:11 -05:00
blessedcoolant
12a8d7fc14 Fix crash introduced in #1866 2022-12-10 15:32:11 -05:00
Matthias Wild
3d2b497eb0 Run more tests for PRs (#1895)
* run 3 tests for PR with different samplers
reduce tests for PR to do only 5 Iterations

* use correct txt file - delete unused old file
2022-12-10 20:07:14 +01:00
Damian Stewart
786b8878d6 Save and display per-token attention maps (#1866)
* attention maps saving to /tmp

* tidy up diffusers branch backporting of cross attention refactoring

* base64-encoding the attention maps image for generationResult

* cleanup/refactor conditioning.py

* attention maps and tokens being sent to web UI

* attention maps: restrict count to actual token count and improve robustness

* add argument type hint to image_to_dataURL function

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>

Co-authored-by: damian <git@damianstewart.com>
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2022-12-10 15:57:41 +01:00
Lincoln Stein
55132f6463 pin diffusers to 0.9.0 2022-12-09 09:09:22 -05:00
Matthias Wild
ed9186b099 Add windows to test workflows (#1809)
* add windows to test runners

* disable fail-fast for debugging

* re-enable login shell for conda workflow
also fix expression to exclude windows from run tests

* enable fail-fast again

* fix condition, pin runner versions

* remove feature branch from push trigger
since already being triggered now via PR

* use gfpgan from pypi for windows
curious if this would fix the installation here as well
since worked for #1802

* unpin basicsr for windows

* for curiosity enabling testing for windows as well

* disable pip cache
since windows failed with a memory error now
but was working before it had a cache

* use matrix.github-env

* set platform specific root and outdir

* disable tests for windows since memory error
I guess the Windows installation uses more space than Linux
and therefore has less swap memory available
2022-12-09 14:21:38 +01:00
wfng92
d2026d0509 Fix error when init_mask=None and invert_mask=True
In the event where no `init_mask` is given and `invert_mask` is set to True, the script will raise the following error:

```bash
AttributeError: 'NoneType' object has no attribute 'mode'
```

The new implementation will only run inversion when both variables are valid.
2022-12-08 22:37:11 -05:00
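A minimal sketch of the guard, assuming PIL masks (the helper name is illustrative):

```python
from PIL import ImageOps

def maybe_invert_mask(init_mask, invert_mask: bool):
    """Only invert when a mask was actually supplied; with init_mask=None the
    old code accessed .mode on None and raised the AttributeError shown above."""
    if invert_mask and init_mask is not None:
        return ImageOps.invert(init_mask.convert("L"))
    return init_mask
```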
Artur
0bc4ed14cd Prompt placeholder changed in PromptInput.tsx
Syntax examples were added
2022-12-08 22:35:41 -05:00
Jonathan
06369d07c0 Update CLI.py 2022-12-08 22:34:49 -05:00
Jonathan
4e61069821 Update embiggen.py 2022-12-08 22:34:49 -05:00
Daya Adianto
d7ba041007 Enable force free GPU memory in img2img 2022-12-07 19:25:21 -05:00
Sammy
3859302f1c Remove -e from "INSTALL_PATCHMATCH.md
The -e flag does NOT work in this case and results in a RemoteNotFound Error
2022-12-07 19:24:31 -05:00
Sammy
865439114b Arch Specific Patchmatch Instructions + Fixing linux conda installation 2022-12-07 19:24:31 -05:00
Lynne Whitehorn
4d76116152 Update invoke.bat.in isolate environment variables
Without locally scoped (to the script) environment variables, this script can only be run once and then you need to start a new cmd session to get a clean environment.

Surrounding the script with setlocal/endlocal achieves this.

https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/setlocal
https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/endlocal
2022-12-07 17:45:19 -05:00
spezialspezial
42f5bd4e12 Account for flat models
Merged models from auto11 merge board are flat for some reason. Current behavior of invoke is not changed by this modification.
2022-12-07 12:11:37 -05:00
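Assuming "flat" here means a checkpoint that is a bare state dict rather than the usual `{'state_dict': ...}` wrapper, accounting for both layouts is a one-liner; a hedged sketch:

```python
import torch

def load_model_weights(ckpt_path: str) -> dict:
    """Accept both wrapped checkpoints ({'state_dict': {...}}) and flat ones
    that are nothing but the state dict itself."""
    ckpt = torch.load(ckpt_path, map_location="cpu")
    return ckpt.get("state_dict", ckpt)   # fall back to the flat layout
```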
Vedant Madane
04e77f3858 Fix Broken Link To Notebook
* The link pointed to https://github.com/invoke-ai/InvokeAI/blob/main/notebooks/Stable-Diffusion-local-Windows.ipynb, which does not exist, so it has been replaced with https://github.com/invoke-ai/InvokeAI/blob/main/notebooks/Stable_Diffusion_AI_Notebook.ipynb

* Add buttons for running on Colab 

* Tried adding running InvokeAI on Binder but the error was:
ERROR: Ignored the following versions that require a different python version: 0.55.2 Requires-Python <3.5
ERROR: Could not find a version that satisfies the requirement clipseg (from invokeai) (from versions: none)
ERROR: No matching distribution found for clipseg
Removing intermediate container 25be65428187
The command '/bin/sh -c ${KERNEL_PYTHON_PREFIX}/bin/pip install --no-cache-dir .' returned a non-zero code: 1

`## Running Online On JupyterHub Binder
[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/invoke-ai/InvokeAI/main?labpath=https%3A%2F%2Fgithub.com%2Finvoke-ai%2FInvokeAI%2Fblob%2Fmain%2Fnotebooks%2FStable_Diffusion_AI_Notebook.ipynb)`

This will have to be added to get the Launch | Binder button once it runs properly.
2022-12-07 08:28:14 -05:00
Eugene Brodsky
1fc1eeec38 Fix docker push github action and expand with additional metadata (#1837)
* update docker build (cloud) action with additional metadata, new labels

* (docker) also add aarch64 cloud build and remove arch suffix

* (docker) architecture suffix is needed for now

* (docker) don't build aarch64 for now
2022-12-07 14:03:33 +01:00
Matthias Wild
556081695a disable pushing the cloud container (#1831) 2022-12-06 18:06:48 +01:00
Eugene Brodsky
ad7917c7aa Optimized Docker build with support for external working directory (#1544)
* add docker build optimized for size; do not copy models to image

useful for cloud deployments. attempts to utilize docker layer
caching as effectively as possible. also some quick tools to help with
building

* add workflow to build cloud img in ci

* push cloud image in addition to building

* (ci) also tag docker images with git SHA

* (docker) rework Makefile for easy cache population and local use

* support the new conda-less install; further optimize docker build

* (ci) clean up the build-cloud-img action

* improve the Makefile for local use

* move execution of invoke script from entrypoint to cmd, allows overriding the cmd if needed (e.g. in Runpod)

* remove unnecessary copyright statements

* (docs) add a section on running InvokeAI in the cloud using Docker

* (docker) add patchmatch to the cloud image; improve build caching; simplify Makefile

* (docker) fix pip requirements path to use binary_installer directory
2022-12-06 13:28:07 +01:00
Kent Keirsey
39cca8139f Clean up readme 2022-12-06 06:58:26 -05:00
blessedcoolant
1d1988683b Fix Embedding Dir not working 2022-12-05 22:24:31 -05:00
Lincoln Stein
44a0055571 correct regression in loading of PaperCut and VoxelArt models (#1730)
This corrects a regression in loading of these models due to
a change of the embedding_manager parameter `num_vectors_per_token`

Fixes #1718
2022-12-05 19:04:34 +01:00
Lincoln Stein
0cc01143d8 invoke script cds to its location before running (#1805) 2022-12-05 19:03:20 +01:00
spezialspezial
1c0247d58a Eventually update APP_VERSION to 2.2.3
Not sure what the procedure is for the version number. Is this supposed to match every git tag or just major versions? Same question for setup.py
2022-12-04 14:33:16 -05:00
Damian Stewart
d335f51e5f fix off-by-one bug in cross-attention-control (#1774)
prompt token sequences begin with a "beginning-of-sequence" marker <bos> and end with a repeated "end-of-sequence" marker <eos> - to make a default prompt length of <bos> + 75 prompt tokens + <eos>. the .swap() code was failing to take the column for <bos> at index 0 into account. the changes here do that, and also add extra handling for a single <eos> (which may be redundant but which is included for completeness).

based on my understanding and some assumptions about how this all works, the reason .swap() nevertheless seemed to do the right thing, to some extent, is because over multiple steps the conditioning process in Stable Diffusion operates as a feedback loop. a change to token n-1 has flow-on effects to how the [1x4x64x64] latent tensor is modified by all the tokens after it, - and as the next step is processed, all the tokens before it as well. intuitively, a token's conditioning effects "echo" throughout the whole length of the prompt. so even though the token at n-1 was being edited when what the user actually wanted was to edit the token at n, it nevertheless still had some non-negligible effect, in roughly the right direction, often enough that it seemed like it was working properly.
2022-12-04 11:41:03 +01:00
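The fix boils down to reserving column 0 for <bos> when translating a prompt-token index into a column of the conditioning sequence; a minimal illustration (the function is hypothetical, not the actual .swap() code):

```python
def prompt_index_to_column(token_index: int) -> int:
    """Map a 0-based index into the user's prompt tokens to its column in the
    conditioning tensor, where column 0 holds <bos> and the prompt starts at column 1."""
    return token_index + 1

# Editing the token the user sees at position 3 means editing column 4,
# not column 3 -- the off-by-one the commit above corrects.
assert prompt_index_to_column(3) == 4
```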
Lincoln Stein
38cd968130 stability and use improvements to binary & source installers
- Pass command-line arguments through to invoke.py via the .bat and .sh scripts.
- Remove obsolete warning message from binary install.bat
- Make sure that current working directory matches where .bat file is installed
2022-12-03 21:25:12 -05:00
tildebyte
0111304982 fix(srcinstall) shell installer: cp scripts instead of linking 2022-12-03 21:24:18 -05:00
Eugene Brodsky
c607d4fe6c (config) clarify why we're setting the env var 2022-12-03 14:33:21 -05:00
Eugene Brodsky
6d6076d3c7 (config) fix permissions on configure_invokeai.py, improve documentation in globals.py comment 2022-12-03 14:33:21 -05:00
Eugene Brodsky
485fcc7fcb (config) do not cache HF token when using the non-canonical env var
this mirrors the behaviour when using the officially supported env var
2022-12-03 14:33:21 -05:00
Eugene Brodsky
76633f500a (config) make user aware of any problems downloading models
also implement a generic way of reporting issues at the end of installation
2022-12-03 14:33:21 -05:00
Eugene Brodsky
ed6194351c (config) try to authenticate to Huggingface more eagerly, using env vars 2022-12-03 14:33:21 -05:00
Eugene Brodsky
f237744ab1 (config) fix f-string in prompt for output location 2022-12-03 14:33:21 -05:00
ofirkris
678cf8519e typo fix 2022-12-03 14:30:48 -05:00
Damian Stewart
ee9de75b8d Make install instructions discoverable in readme (#1752)
also "Macintosh" → "macOS" to improve "We Support macOS Properly And Not Halfassed Like Other OSS Projects" signalling

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2022-12-03 14:20:50 -05:00
Andy Bearman
50f3847ef8 Fix Linux source URL in installation docs 2022-12-03 14:19:58 -05:00
Lincoln Stein
8596e3586c add documentation warning about 1650/60 cards
Several users have been trying to run InvokeAI on GTX 1650 and 1660
cards. They really can't because these cards don't work with
half-precision and only have 4-6GB of memory. Added a warning to
the docs (in two places) about this problem.
2022-12-03 13:16:22 -05:00
Lincoln Stein
5ef1e0714b Merge branch 'main' of github.com:/invoke-ai/InvokeAI into main 2022-12-03 12:25:30 +00:00
Lincoln Stein
be871c3ab3 Merge branch 'ebr-gh-link-src-installer' into main 2022-12-03 12:24:03 +00:00
Lincoln Stein
dec40d9b04 Update source_installer/install.sh.in
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2022-12-03 07:20:32 -05:00
Lincoln Stein
fe5c008dd5 Update docs/installation/INSTALL_SOURCE.md
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2022-12-03 07:20:32 -05:00
Lincoln Stein
72def2ae13 documentation fixes for 2.2.3
- Add Xcode installation instructions to source installer walkthrough
- Fix link to source installer page from installer overview
- If OSX install crashes, script will tell Mac users to go to the docs
  to learn how to install Xcode
2022-12-03 07:20:32 -05:00
Eugene Brodsky
31cd76a2af (docs) install ux: link directly to release zip files
NB: if we remove the version from the zip file names, we can link
directly to assets in the latest GH release from documentation without
the need to keep the links updated
2022-12-03 00:24:49 -05:00
Eugene Brodsky
00c78263ce (docs) install ux: link main README directly to source installer 2022-12-03 00:19:45 -05:00
Lincoln Stein
5c31feb3a1 Remove reference to binary installer 2022-12-02 22:02:51 -05:00
Shawn Zhong
26f129cef8 Fix broken link 2022-12-02 22:02:30 -05:00
Lincoln Stein
292ee06751 Fix description of source code installer
The mkdocs version of INSTALL_SOURCE.md has disappeared and I am patching this in
so that users find the correct installer.
2022-12-02 17:16:29 -05:00
Lincoln Stein
c00d53fcce fix link in documentation 2022-12-02 15:50:34 -05:00
Daya Adianto
a78a8728fe Fix FlaskUI initialization 2022-12-02 15:50:14 -05:00
Kevin Turner
6b5d19347a fix(invoke.sh.in): remove additional mystery character 2022-12-02 15:43:59 -05:00
Eugene Brodsky
26671d8eed (installer) fix syntax error in invoke.sh.in 2022-12-02 15:43:59 -05:00
Lincoln Stein
b487fa4391 fix basicsr conflict on windows 2022-12-02 12:53:13 -05:00
Lincoln Stein
12b98ba4ec make invoke.sh executable 2022-12-02 12:53:13 -05:00
Lincoln Stein
fa25a64d37 remove references to binary installer from docs 2022-12-02 12:48:26 -05:00
Lincoln Stein
29540452f2 fix bad naming of invoke.sh.in 2022-12-02 11:25:37 -05:00
Lincoln Stein
c7960f930a fix regression in copy function 2022-12-02 10:53:42 -05:00
Lincoln Stein
c1c8b5026a apply current directory patch to binary installer .sh file 2022-12-02 10:53:42 -05:00
Lincoln Stein
5da42e0ad2 add back PYTORCH_ENABLE_MPS_FALLBACK 2022-12-02 10:53:42 -05:00
Lincoln Stein
34d6f35408 run .bat file in directory potentially containing spaces
- The previous fix for the "install in Windows system directory" error would fail
   if the path includes directories with spaces in them. This fixes that.

- In addition, this addresses the same issue in the source installer, although it has
	not yet been reported in the wild.
2022-12-02 10:53:42 -05:00
mauwii
401165ba35 correctly link current core team 2022-12-02 09:33:19 -05:00
mauwii
6d8057c84f fix POSTPROCESS ToC 2022-12-02 09:33:19 -05:00
mauwii
3f23dee6f4 add title 2022-12-02 09:33:19 -05:00
mauwii
8cdd961ad2 update IMG2IMG.md 2022-12-02 09:33:19 -05:00
mauwii
470b267939 update CONCEPTS.md
- use table with correct syntax for screenshots
- switch Title and first Headline to look better in ToC
2022-12-02 09:33:19 -05:00
mauwii
bf399e303c add index.md to features
to prevent the menu from being occupied by the expanded CLI ToC
Should maybe be fleshed out a bit
2022-12-02 09:33:19 -05:00
mauwii
b3d7ad7461 a lot of formatting updates to CLI.md 2022-12-02 09:33:19 -05:00
mauwii
cd66b2c76d fix links in older_docs_to_be_removed 2022-12-02 09:33:19 -05:00
psychedelicious
6b406e2b5e Adds tip for importing models on Windows 2022-12-02 09:25:36 -05:00
Lincoln Stein
6737cc1443 recompile for linux 2022-12-02 09:11:17 -05:00
Lincoln Stein
7fd0eeb9f9 update darwin requirements 2022-12-02 09:11:17 -05:00
Lincoln Stein
16e3b45fa2 update linux requirements file 2022-12-02 09:11:17 -05:00
Lincoln Stein
2f07ea03a9 binary installer fix
- bat file changes to the directory it lives in rather than the user's current directory
- restore incorrect requirements and compiled Darwin requirements file
2022-12-02 09:11:17 -05:00
Lincoln Stein
b563d75c58 restored mac requirements file 2022-12-02 09:11:17 -05:00
psychedelicious
a7b7b20d16 Updates docs release link to latest 2022-12-02 06:20:16 -05:00
Lincoln Stein
a47ef3ded9 change download links to release candidate 2022-12-01 23:24:23 -05:00
Lincoln Stein
7cb9b654f3 add compiled windows file 2022-12-01 23:07:48 -05:00
Lincoln Stein
8819e12a86 configure script changed from preload_models.py to configure_invokeai.py
This makes a cosmetic change. Instead of calling preload_models.py
(deprecated) it calls configure_invokeai.py. Currently the two do
the same thing.
2022-12-01 22:51:05 -05:00
Lincoln Stein
967eb60ea9 added the linux py3.10* file 2022-12-01 22:51:05 -05:00
psychedelicious
b1091ecda1 Fixes failed canvas generation when gallery is empty
There was some old logic from before Unified Canvas which aborted generation when there was no currentImage. 

If you have an image in the gallery, there is always a currentImage. But if gallery is empty, there is no currentImage. Generation would silently fail in this case.

We apparently never tested with an empty gallery and thus never ran into the issue. This removes this old and now-unused logic.
2022-12-01 22:29:56 -05:00
Lincoln Stein
2723dd9051 remove bad characters from end of user input
Some users were leaving whitespace at the end of their root
directories or ending them with a backslash. This caused the root
directory to become unusable. This removes whitespace and backslashes
from the end of the directory names.

Note that more needs to be done to cleanse the input, but for now
this will cover the cases we have seen so far in the wild.
2022-12-01 22:15:39 -05:00
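A minimal sketch of that cleanup (the function name is illustrative):

```python
def clean_directory_input(raw: str) -> str:
    """Drop trailing whitespace and backslashes that otherwise make the
    configured root directory unusable."""
    return raw.rstrip(" \t\\")

assert clean_directory_input("C:\\Users\\me\\invokeai\\  ") == "C:\\Users\\me\\invokeai"
```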
Lincoln Stein
8f050d992e documentation fixes for release 2022-12-01 22:02:50 -05:00
Lincoln Stein
0346095876 fix incorrect syntax for .bat 2022-12-01 22:02:27 -05:00
Lincoln Stein
f9bbc55f74 Merge branch 'source-installer-improvements' into main 2022-12-01 23:18:54 +00:00
Lincoln Stein
878a3907e9 defer loading of Hugging Face concepts until needed
Some users have been complaining that the CLI "freezes" for a while
before the invoke> prompt appears. I believe this is due to internet
delay while the concepts library names are downloaded by the autocompleter.
I have changed the logic so that the concepts are downloaded the first time
the user types a < and tabs.
2022-12-01 17:56:18 -05:00
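A minimal sketch of that deferred download, assuming some `fetch_concept_names` callable that queries the concepts library (both names are illustrative):

```python
from typing import Callable, List, Optional

class ConceptCompleter:
    """Download the Hugging Face concept names on the first '<' + tab,
    not at CLI startup, so the prompt appears without waiting on the network."""

    def __init__(self, fetch_concept_names: Callable[[], List[str]]):
        self._fetch = fetch_concept_names
        self._names: Optional[List[str]] = None   # nothing downloaded yet

    def complete(self, prefix: str) -> List[str]:
        if self._names is None:                    # first completion request triggers the fetch
            self._names = sorted(self._fetch())
        return [n for n in self._names if n.startswith(prefix)]
```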
Lincoln Stein
4cfb41d9ae configure_invokeai.py enhancement
- Adds a new option to download <a>ll the models, in addition
  to <r>ecommended and <c>ustomized.
2022-12-01 15:59:14 -05:00
Lincoln Stein
6ec64ecb3c fix commit conflict markers 2022-12-01 15:07:54 -05:00
Lincoln Stein
540315edaa rename to binary_installer in build docs 2022-12-01 14:58:07 -05:00
Lincoln Stein
cf10a1b736 Merge branch 'main' into source-installer-improvements 2022-12-01 19:45:47 +00:00
Lincoln Stein
9fb2a43780 rename "installer" to "binary_installer"
- Fix up internal names so scripts run properly
2022-12-01 19:40:47 +00:00
Lincoln Stein
1b743f7d9b source installer improvements and documentation
- Source installer provides more context for what it is doing, and
  sends user to help/troubleshooting pages when something goes wrong.

- install.sh and install.bat are renamed to install.sh.in and install.bat.in
  to discourage users from running them from within the

- Documentation updated
2022-12-01 19:40:13 +00:00
Damian Stewart
d7bf3f7d7b make .sh/.bat files inside installer/ non executable (#1664)
* make binary installer executables non-executable inside the repo

* update docs to match previous commit
2022-12-01 19:35:21 +01:00
Lincoln Stein
eba31e7caf Documentation updates for 2.2 release 2022-12-01 08:09:31 -05:00
Lincoln Stein
bde456f9fa fix startup messages and a startup crash
- make the warnings about patchmatch less redundant
- only warn about being unable to load concepts from Hugging Face
  library once
- do not crash when unable to load concepts from Hugging Face
  due to network connectivity issues
2022-12-01 07:42:31 -05:00
Lincoln Stein
9ee83380e6 fix missing history file in output directory 2022-12-01 07:39:26 -05:00
Lincoln Stein
6982e6a469 rebuilt frontend 2022-11-30 19:20:57 -05:00
Lincoln Stein
0f4d71ed63 Merge dev into main for 2.2.0 (#1642)
* Fixes inpainting + code cleanup

* Disable stage info in Inpainting Tab

* Mask Brush Preview now always at 0.5 opacity

The new mask is only visible properly at max opacity, but at max opacity the brush preview becomes fully opaque, blocking the view. So the mask brush preview now remains at 0.5 no matter what the brush opacity is.

* Remove save button from Canvas Controls (cleanup)

* Implements invert mask

* Changes "Invert Mask" to "Preserve Masked Areas"

* Fixes (?) spacebar issues

* Patches redux-persist and redux-deep-persist with debounced persists

Our app changes redux state very, very often. As our undo/redo history grows, the calls to persist state start to take in the 100ms range, due to the deep cloning of the history. This causes very noticeable performance lag.

The deep cloning is required because we need to blacklist certain items in redux from being persisted (e.g. the app's connection status).

Debouncing the whole process of persistence is a simple and effective solution. Unfortunately, `redux-persist` dropped `debounce` between v4 and v5, replacing it with `throttle`. `throttle`, instead of delaying the expensive action until a period of X ms of inactivity, simply ensures the action is executed at least every X ms. Of course, this does not fix our performance issue. 

The patch is very simple. It adds a `debounce` argument - a number of milliseconds - and debounces `redux-persist`'s `update()` method (provided by `createPersistoid`) by that many ms.

Before this, I also tried writing a custom storage adapter for `redux-persist` to debounce the calls to `localStorage.setItem()`. While this worked and was far less invasive, it doesn't actually address the issue. It turns out `setItem()` is a very fast part of the process.

We use `redux-deep-persist` to simplify the `redux-persist` configuration, which can get complicated when you need to blacklist or whitelist deeply nested state. There is also a patch here for that library because it uses the same types as `redux-persist`.

Unfortunately, the last release of `redux-persist` used a package `flat-stream` which was malicious and has been removed from npm. The latest commits to `redux-persist` (about 1 year ago) do not build; we cannot use the master branch. And between the last release and last commit, the changes have all been breaking.

Patching this last release (about 3 years old at this point) directly is far simpler than attempting to fix the upstream library's master branch or figuring out an alternative to the malicious and now non-existent dependency.

* Adds debouncing

* Fixes AttributeError: 'dict' object has no attribute 'invert_mask'

* Updates package.json to use redux-persist patches

* Attempts to fix redux-persist debounce patch

* Fixes undo/redo

* Fixes invert mask

* Debounce > 300ms

* Limits history to 256 for each of undo and redo

* Canvas styling

* Hotkeys improvement

* Add Metadata To Viewer

* Increases CFG Scale max to 200

* Fix gallery width size for Outpainting

Also fixes the canvas resizing failing on fast pushes

* Fixes disappearing canvas grid lines

* Adds staging area

* Fixes "use all" not setting variationAmount

Now sets to 0 when the image had variations.

* Builds fresh bundle

* Outpainting tab loads to empty canvas instead of upload

* Fixes wonky canvas layer ordering & compositing

* Fixes error on inpainting paste back

`TypeError: 'float' object cannot be interpreted as an integer`

* Hides staging area outline on mouseover prev/next

* Fixes inpainting not doing img2img when no mask

* Fixes bbox not resizing in outpainting if partially off screen

* Fixes crashes during iterative outpaint. Still doesn't work correctly though.

* Fix iterative outpainting by restoring original images

* Moves image uploading to HTTP

- It all seems to work fine
- A lot of cleanup is still needed
- Logging needs to be added
- May need types to be reviewed

* Fixes: outpainting temp images show in gallery

* WIP refactor to unified canvas

* Removes console.log from redux-persist patch

* Initial unification of canvas

* Removes all references to split inpainting/outpainting canvas

* Add patchmatch and infill_method parameter to prompt2image (options are 'patchmatch' or 'tile').

* Fixes app after removing in/out-painting refs

* Rebases on dev, updates new env files w/ patchmatch

* Organises features/canvas

* Fixes bounding box ending up offscreen

* Organises features/canvas

* Stops unnecessary canvas rescales on gallery state change

* Fixes 2px layout shift on toggle canvas lock

* Clips lines drawn while canvas locked

When drawing with the locked canvas, if a brush stroke gets too close to the edge of the canvas and its stroke would extend past the edge of the canvas, the edge of that stroke will be seen after unlocking the canvas.

This could cause a problem if you unlock the canvas and now have a bunch of strokes just outside the init image area, which are far back in undo history and you cannot easily erase.

With this change, lines drawn while the canvas is locked get clipped to the initial image bbox, fixing this issue.

Additionally, the merge and save to gallery functions have been updated to respect the initial image bbox so they function how you'd expect.

* Fixes reset canvas view when locked

* Fixes send to buttons

* Fixes bounding box not being rounded to 64

* Abandons "inpainting" canvas lock

* Fixes save to gallery including empty area, adds download and copy image

* Fix Current Image display background going over image bounds

* Sets status immediately when clicking Invoke

* Adds hotkeys and refactors sharing of konva instances

Adds hotkeys to canvas. As part of this change, the access to konva instance objects was refactored:

Previously closure'd refs were used to indirectly get access to the konva instances outside of react components.

Now, getter and setter functions are used to provide access directly to the konva objects.

* Updates hotkeys

* Fixes canvas showing spinner on first load

Also adds good default canvas scale and positioning when no image is on it

* Fixes possible hang on MaskCompositer

* Improves behaviour when setting init canvas image/reset view

* Resets bounding box coords/dims when no image present

* Disables canvas actions which cannot be done during processing

* Adds useToastWatcher hook

- Dispatch an `addToast` action with standard Chakra toast options object to add a toast to the toastQueue
- The hook is called in App.tsx and just useEffect's w/ toastQueue as dependency to create the toasts
- So now you can add toasts anywhere you have access to `dispatch`, which includes middleware and thunks
- Adds first usage of this for the save image buttons in canvas

* Update Hotkey Info

Add missing tooltip hotkeys and update the hotkeys modal to reflect the new hotkeys for the Unified Canvas.

* Fix theme changer not displaying current theme on page refresh

* Fix tab count in hotkeys panel

* Unify Brush and Eraser Sizes

* Fix staging area display toggle not working

* Staging Area delete button is now red

So it doesn't feel blended in with the rest of them.

* Revert "Fix theme changer not displaying current theme on page refresh"

This reverts commit 903edfb803e743500242589ff093a8a8a0912726.

* Add arguments to use SSL to webserver

* Integrates #1487 - touch events

Need to add:
- Pinch zoom
- Touch-specific handling (some things aren't quite right)

* Refactors upload-related async thunks

- Now standard thunks instead of RTK createAsyncThunk()
- Adds toasts for all canvas upload-related actions

* Reorganises app file structure

* Fixes Canvas Auto Save to Gallery

* Fixes staging area outline

* Adds staging area hotkeys, disables gallery left/right when staging

* Fixes Use All Parameters

* Fix metadata viewer image url length when viewing intermediate

* Fixes intermediate images being tiny in txt2img/img2img

* Removes stale code

* Improves canvas status text and adds option to toggle debug info

* Fixes paste image to upload

* Adds model drop-down to site header

* Adds theme changer popover

* Fix missing key on ThemeChanger map

* Fixes stage position changing on zoom

* Hotkey Cleanup

- Viewer is now Z
- Canvas Move tool is V - sync with PS
- Removed some unused hotkeys

* Fix canvas resizing when both options and gallery are unpinned

* Implements thumbnails for gallery

- Thumbnails are saved whenever an image is saved, and when gallery requests images from server
- Thumbnails saved at original image aspect ratio with width of 128px as WEBP
- If the thumbnail property of an image is unavailable for whatever reason, the image's full size URL is used instead

* Saves thumbnails to separate thumbnails directory

* Thumbnail size = 256px

* Fix Lightbox Issues

* Disables canvas image saving functions when processing

* Fix index error on going past last image in Gallery

* WIP - Lightbox Fixes

Still need to fix the images not being centered on load when the image res changes

* Fixes another similar index error, simplifies logic

* Reworks canvas toolbar

* Fixes canvas toolbar upload button

* Cleans up IAICanvasStatusText

* Improves metadata handling, fixes #1450

- Removes model list from metadata
- Adds generation's specific model to metadata
- Displays full metadata in JSON viewer

* Gracefully handles corrupted images; fixes #1486

- App does not crash if corrupted image loaded
- Error is displayed in the UI console and CLI output if an image cannot be loaded

* Adds hotkey to reset canvas interaction state

If the canvas' interaction state (e.g. isMovingBoundingBox, isDrawing, etc) get stuck somehow, user can press Escape to reset the state.

* Removes stray console.log()

* Fixes bug causing gallery to close on context menu open

* Minor bugfixes

- When doing long-running canvas image exporting actions, display indeterminate progress bar
- Fix staging area image outline not displaying after committing/discarding results

* Removes unused imports

* Fixes repo root .gitignore ignoring frontend things

* Builds fresh bundle

* Styling updates

* Removes reasonsWhyNotReady

The popover doesn't play well with the button being disabled, and I don't think it adds any value.

* Image gallery resize/style tweaks

* Styles buttons for clearing canvas history and mask

* First pass on Canvas options panel

* Fixes bug where discarding staged images results in loss of history

* Adds Save to Gallery button to staging toolbar

* Rearrange some canvas toolbar icons

Put brush stuff together and canvas movement stuff together

* Fix gallery maxwidth on unified canvas

* Update Layer hotkey display to UI

* Adds option to crop to bounding box on save

* Masking option tweaks

* Crop to Bounding Box > Save Box Region Only

* Adds clear temp folder

* Updates mask options popover behavior

* Builds fresh bundle

* Fix styling on alert modals

* Fix input checkbox styling being incorrect on light theme

* Styling fixes

* Improves gallery resize behaviour

* Cap gallery size on canvas tab so it doesn't overflow

* Fixes bug when postprocessing image with no metadata

* Adds IAIAlertDialog component

* Moves Loopback to app settings

* Fixes metadata viewer not showing metadata after refresh

Also adds Dream-style prompt to metadata

* Adds outpainting specific options

* Linting

* Fixes gallery width on lightbox, fixes gallery button expansion

* Builds fresh bundle

* Fix Lightbox images of different res not centering

* Update feature tooltip text

* Highlight mask icon when on mask layer

* Fix gallery not resizing correctly on open and close

* Add loopback to just img2img. Remove from settings.

* Fix to gallery resizing

* Removes Advanced checkbox, cleans up options panel for unified canvas

* Minor styling fixes to new options panel layout

* Styling Updates

* Adds infill method

* Tab Styling Fixes

* memoize outpainting options

* Fix unnecessary gallery re-renders

* Isolate Cursor Pos debug text on canvas to prevent rerenders

* Fixes missing postprocessed image metadata before refresh

* Builds fresh bundle

* Fix rerenders on model select

* Floating panel re-render fix

* Simplify fullscreen hotkey selector

* Add Training WIP Tab

* Adds Training icon

* Move full screen hotkey to floating to prevent tab rerenders

* Adds single-column gallery layout

* Fixes crash on cancel with intermediates enabled, fixes #1416

* Updates npm dependencies

* Fixes img2img attempting inpaint when init image has transparency

* Fixes missing threshold and perlin parameters in metadata viewer

* Renames "Threshold" > "Noise Threshold"

* Fixes postprocessing not being disabled when clicking use all

* Builds fresh bundle

* Adds color picker

* Lints & builds fresh bundle

* Fixes iterations being disabled when seed random & variations are off

* Un-floors cursor position

* Changes color picker preview to circles

* Fixes variation params not set correctly when recalled

* Fixes invoke hotkey not working in input fields

* Simplifies Accordion

Prep for adding reset buttons for each section

* Fixes mask brush preview color

* Committing color picker color changes tool to brush

* Color picker does not overwrite user-selected alpha

* Adds brush color alpha hotkey

* Lints

* Removes force_outpaint param

* Add inpaint size options to inpaint at a larger size than the actual inpaint image, then scale back down for recombination

* Bug fix for inpaint size

* Adds inpaint size (as scale bounding box) to UI

* Adds auto-scaling for inpaint size

* Improves scaled bbox display logic

* Fixes bug with clear mask and history

* Fixes shouldShowStagingImage not resetting to true on commit

* Builds fresh bundle

* Fixes canvas failing to scale on first run

* Builds fresh bundle

* Fixes unnecessary canvas scaling

* Adds gallery drag and drop to img2img/canvas

* Builds fresh bundle

* Fix desktop mode being broken with new versions of flaskwebgui

* Fixes canvas dimensions not setting on first load

* Builds fresh bundle

* stop crash on !import_models call on model inside rootdir

- addresses bug report #1546

* prevent "!switch state gets confused if model switching fails"

- If !switch were to fail on a particular model, then generate got
  confused and wouldn't try again until you switch to a different working
  model and back again.

- This commit fixes and closes #1547

* Revert "make the docstring more readable and improve the list_models logic"

This reverts commit 248068fe5d.

* fix model cache path

* also set fail-fast to its default (true)
in this way the whole action fails if one job fails
this should unblock the runners!!!

* fix output path for Archive results

* disable checks for python 3.9

* Update-requirements and test-invoke-pip workflow (#1574)

* update requirements files

* update test-invoke-pip workflow

* move requirements-mkdocs.txt to docs folder (#1575)

* move requirements-mkdocs.txt to docs folder

* update copyright

* Fixes outpainting with resized inpaint size

* Interactive configuration (#1517)

* Update scripts/configure_invokeai.py

prevent crash if output exists

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>

* implement changes requested by reviews

* default to correct root and output directory on Windows systems

- Previously the script was relying on the readline buffer editing
  feature to set up the correct default. But this feature doesn't
  exist on windows.

- This commit detects when the user types return with an empty directory
  value and replaces it with the default directory.

* improved readability of directory choices

* Update scripts/configure_invokeai.py

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>

* better error reporting at startup

- If user tries to run the script outside of the repo or runtime directory,
  a more informative message will appear explaining the problem.

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>

* Embedding merging (#1526)

* add whole <style token> to vocab for concept library embeddings

* add ability to load multiple concept .bin files

* make --log_tokenization respect custom tokens

* start working on concept downloading system

* preliminary support for dynamic loading and merging of multiple embedded models

- The embedding_manager is now enhanced with ldm.invoke.concepts_lib,
  which handles dynamic downloading and caching of embedded models from
  the Hugging Face concepts library (https://huggingface.co/sd-concepts-library)

- Downloading of an embedded model is triggered by the presence of one or more
  <concept> tags in the prompt.

- Once the embedded model is downloaded, its trigger phrase will be loaded
  into the embedding manager and the prompt's <concept> tag will be replaced
  with the <trigger_phrase> (see the sketch after this merge-commit entry).

- The downloaded model stays on disk for fast loading later.

- The CLI autocomplete will complete partial <concept> tags for you. Type a
  '<' and hit tab to get all ~700 concepts.

BUGS AND LIMITATIONS:

- MODEL NAME VS TRIGGER PHRASE

  You must use the name of the concept embed model from the SD
  library, and not the trigger phrase itself. Usually these are the
  same, but not always. For example, the model named "hoi4-leaders"
  corresponds to the trigger "<HOI4-Leader>"

  One reason for this design choice is that there is no apparent
  constraint on the uniqueness of the trigger phrases and one trigger
  phrase may map onto multiple models. So we use the model name
  instead.

  The second reason is that there is no way I know of to search
  Hugging Face for models with certain trigger phrases. So we'd have
  to download all 700 models to index the phrases.

  The problem this presents is that this may confuse users, who will
  want to reuse prompts from distributions that use the trigger phrase
  directly. Usually this will work, but not always.

- WON'T WORK ON A FIREWALLED SYSTEM

  If the host running IAI has no internet connection, it can't
  download the concept libraries. I will add a script that allows
  users to preload a list of concept models.

- BUG IN PROMPT REPLACEMENT WHEN MODEL NOT FOUND

  There's a small bug that occurs when the user provides an invalid
  model name. The <concept> gets replaced with <None> in the prompt.

* fix loading .pt embeddings; allow multi-vector embeddings; warn on dupes

* simplify replacement logic and remove cuda assumption

* download list of concepts from hugging face

* remove misleading customization of '*' placeholder

The existing code as-is did not do anything; it is unclear what it was supposed to do.

The obvious alternative -- setting 'placeholder_strings' instead of
'placeholder_tokens' to match model.params.personalization_config.params.placeholder_strings --
caused a crash. I think this is because the passed string also needed to be handed over
on init of the PersonalizedBase as the 'placeholder_token' argument.
This is weird config-dict magic and I don't want to touch it. Put a
breakpoint in personalized.py line 116 (top of PersonalizedBase.__init__) if
you want to have a crack at it yourself.

* address all the issues raised by damian0815 in review of PR #1526

* actually resize the token_embeddings

* multiple improvements to the concept loader based on code reviews

1. Activated the --embedding_directory option (alias --embedding_path)
   to load a single embedding or an entire directory of embeddings at
   startup time.

2. Can turn off automatic loading of embeddings using --no-embeddings.

3. Embedding checkpoints are scanned with the pickle scanner.

4. More informative error messages when a concept can't be loaded due
   either to a 404 not found error or a network error.

* autocomplete terms end with ">" now

* fix startup error and network unreachable

1. If the .invokeai file does not contain the --root and --outdir options,
  invoke.py will now fix it.

2. Catch and handle network problems when downloading hugging face textual
   inversion concepts.

* fix misformatted error string

Co-authored-by: Damian Stewart <d@damianstewart.com>

* model_cache.py: fix list_models

Signed-off-by: devops117 <55235206+devops117@users.noreply.github.com>

* add statement of values (#1584)

* this adds the Statement of Values

Google doc source = https://docs.google.com/document/d/1-PrUKDJcxy8OyNGc8CyiHhv2VgLvjt7LRGlEpbg1nmQ/edit?usp=sharing

* Fix heading

* Update InvokeAI_Statement_of_Values.md

* Update InvokeAI_Statement_of_Values.md

* Update InvokeAI_Statement_of_Values.md

* Update InvokeAI_Statement_of_Values.md

* Update InvokeAI_Statement_of_Values.md

* add keturn and mauwii to the team member list

* Fix punctuation

* this adds the Statement of Values

Google doc source = https://docs.google.com/document/d/1-PrUKDJcxy8OyNGc8CyiHhv2VgLvjt7LRGlEpbg1nmQ/edit?usp=sharing

* add keturn and mauwii to the team member list

* fix formatting
- make sub-bullets use * (decide whether to use - or * everywhere)
- indent sub-bullets
Sorry, I first only looked at the code version and found this only after
looking at the rendered markdown version

* use multiparagraph numbered sections

* Break up Statement Of Values as per comments on #1584

* remove duplicated word, reduce vagueness

it's important not to overstate how many artists we are consulting.

* fix typo (sorry blessedcoolant)

Co-authored-by: mauwii <Mauwii@outlook.de>
Co-authored-by: damian <git@damianstewart.com>

* update dockerfile (#1551)

* update dockerfile

* remove not existing file from .dockerignore

* remove bloat and unnecessary step
also use --no-cache-dir for pip install
image is now close to 2GB

* make Dockerfile a variable

* set base image to `ubuntu:22.10`

* add build-essential

* link outputs folder for persistence

* update tag variable

* update docs

* fix non-customizable build args, add reqs output

* !model_import autocompletes in ROOTDIR

* Adds psychedelicious to statement of values signature (#1602)

* add a --no-patchmatch option to disable patchmatch loading (#1598)

This feature was added to prevent the CI Macintosh tests from erroring
out when patchmatch is unable to retrieve its shared library from
github assets.

* Fix #1599 by relaxing the `match_trigger` regex (#1601)

* Fix #1599 by relaxing the `match_trigger` regex

Also simplify logic and reduce duplication.

* restrict trigger regex again (but not so far)

* make concepts library work with Web UI

This PR makes it possible to include a Hugging Face concepts library
<style-or-subject-trigger> in the WebUI prompt. The metadata seems
to be correctly handled.

* documentation enhancements (#1603)

- Add documentation for the Hugging Face concepts library and TI embedding.

- Fixup index.md to point to each of the feature documentation files,
  including ones that are pending.

* tweak setup and environment files for linux & pypatchmatch (#1580)

* tweak setup and environment files for linux & pypatchmatch

- Downgrade python requirements to 3.9 because 3.10 is not supported
  on Ubuntu 20.04 LTS (widely-used distro)
- Use our github pypatchmatch 0.1.3 in order to install Makefile
  where it needs to be.
- Restored "-e ." as the last install step on pip installs. Hopefully
  this will not trigger the high-CPU hang we've previously experienced.

* keep windows on basicsr 1.4.1

* keep windows on basicsr 1.4.1

* bump pypatchmatch requirement to 0.1.4

- This brings in a version of pypatchmatch that will gracefully
  handle internet connection not available at startup time.
- Also refactors and simplifies the handling of gfpgan's basicsr requirement
  across various platforms.

* revert to older version of list_models() (#1611)

This restores the correct behavior of list_models() and quenches
the bug of list_models() returning a single model entry named "name".

I have not investigated what was wrong with the new version, but I
think it may have to do with changes to the behavior in dict.update()

* Fixes for #1604 (#1605)

* Converts ESRGAN image input to RGB

- Also adds typing for image input.
- Partially resolves #1604

* ensure there are unmasked pixels before color matching

Co-authored-by: Kyle Schouviller <kyle0654@hotmail.com>

* update index.md (#1609)

- comment out non-existing link
- fix indentation
- add separator between feature categories

* Debloat-docker (#1612)

* debloat Dockerfile
- fewer options but more user-friendly
- better entrypoint to simulate CLI usage
- without a command, the container still starts the web host

* debloat build.sh

* better syntax in run.sh

* update Docker docs
- fix description of VOLUMENAME
- update run script example to reflect new entrypoint

* Test installer (#1618)

* test linux install

* try removing http from parsed requirements

* pip install confirmed working on linux

* ready for linux testing

- rebuilt py3.10-linux-x86_64-cuda-reqs.txt to include pypatchmatch
  dependency.
- point install.sh and install.bat to test-installer branch.

* Updates MPS reqs

* detect broken readline history files

* fix download.pytorch.org URL

* Test installer (Win 11) (#1620)

Co-authored-by: Cyrus Chan <cyruswkc@hku.hk>

* Test installer (MacOS 13.0.1 w/ torch==1.12.0) (#1621)

* Test installer (Win 11)

* Test installer (MacOS 13.0.1 w/ torch==1.12.0)

Co-authored-by: Cyrus Chan <cyruswkc@hku.hk>

* change sourceball to development for testing

* Test installer (MacOS 13.0.1 w/ torch==1.12.1 & torchvision==1.13.1) (#1622)

* Test installer (Win 11)

* Test installer (MacOS 13.0.1 w/ torch==1.12.0)

* Test installer (MacOS 13.0.1 w/ torch==1.12.1 & torchvision==1.13.1)

Co-authored-by: Cyrus Chan <cyruswkc@hku.hk>

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Co-authored-by: Cyrus Chan <82143712+cyruschan360@users.noreply.github.com>
Co-authored-by: Cyrus Chan <cyruswkc@hku.hk>

* 2.2 Doc Updates (#1589)

* Unified Canvas Docs & Assets

Unified Canvas draft

Advanced Tools Updates

Doc Updates (lstein feedback)

* copy edits to Unified Canvas docs

- consistent capitalisation and feature naming
- more intimate address (replace "the user" with "you") for improved User
  Engagement(tm)
- grammatical massaging and *poesie*

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
Co-authored-by: damian <git@damianstewart.com>

* include a step after config to `cat ~/.invokeai` (#1629)

* disable patchmatch in CI actions (#1626)

* disable patchmatch in CI actions

* fix indentation

* replace tab with spaces

Co-authored-by: Matthias Wild <40327258+mauwii@users.noreply.github.com>
Co-authored-by: mauwii <Mauwii@outlook.de>

* Fix installer script for macOS. (#1630)

* refer to the platform as 'osx' instead of 'mac', otherwise the
composed URL to micromamba is wrong.
* move the `-O` option to `tar` to be grouped with the other tar flags
to avoid the `-O` being interpreted as something to unarchive.

* Removes symlinked environment.yaml (#1631)

Was unintentionally added in #1621

* Fix inpainting with iterations (#1635)

* fix error when inpainting using runwayml inpainting model (#1634)

- error was "Omnibus object has no attribute pil_image"
- closes #1596

* add k_dpmpp_2_a and k_dpmpp_2 solvers options (#1389)

* add k_dpmpp_2_a and k_dpmpp_2 solvers options

* update frontend

Co-authored-by: Victor <victorca25@users.noreply.github.com>
Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>

* add .editorconfig (#1636)

* Web UI 2.2 bugfixes (#1572)

* Fixes bug preventing multiple images from being generated

* Fixes valid seam strength value range

* Update Delete Alert Text

Indicates to the user that images are not permanently deleted.

* Fixes left/right arrows not working on gallery

* Fixes initial image on load erroneously set to a user uploaded image

Should be a result gallery image.

* Lightbox Fixes

- Lightbox is now a button in the current image buttons
- Lightbox is also now available in the gallery context menu
- Lightbox zoom issues fixed
- Lightbox has a fade in animation.

* Fix image display wrapper in current preview not overflow bounds

* Revert "Fix image display wrapper in current preview not overflow bounds"

This reverts commit 5511c82714dbf1d1999d64e8bc357bafa34ddf37.

* Change Staging Area discard icon from Bin to X

* Expose Snap Threshold and Move Snap Settings to BBox Panel

* Changes img2img strength default to 0.75

* Fixes drawing triggering when mouse enters canvas w/ button down

When we only supported inpainting and no zoom, this was useful. It allowed the cursor to leave the canvas (which was easy to do given the limited canvas dimensions) without losing the "I am drawing" state.

With a zoomable canvas this is no longer as useful.

Additionally, we have more popovers and tools (like the color pickers) which result in unexpected brush strokes. This fixes that issue.

* Revert "Expose Snap Threshold and Move Snap Settings to BBox Panel"

We will handle this a bit differently - by allowing the grid origin to be moved. I will dig in at some point.

This reverts commit 33c92ecf4da724c2f17d9d91c7ea31a43a2f6deb.

* Adds Limit Strokes to Box

* Adds fill bounding box button

* Adds erase bounding box button

* Changes Staging area discard icon to match others

* Fixes right click breaking move tool

* Fixes brush preview visibility issue with "darken outside box"

* Fixes history bugs with addFillRect, addEraseRect, and other actions

* Adds missing `key`

* Fixes postprocessing being applied to canvas generations

* Fixes bbox not getting scaled in various situations

* Fixes staging area show image toggle not resetting on accept/discard

* Locks down canvas while generating/staging

* Fixes move tool breaking when canvas loses focus during move/transform

* Hides cursor when restrict strokes is on and mouse outside bbox

* Lints

* Builds fresh bundle

* Fix overlapping hotkey for Fill Bounding Box

* Build Fresh Bundle

* Fixes bug with mask and bbox overlay

* Builds fresh bundle

Co-authored-by: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>

* disable NSFW checker loading during the CI tests (#1641)

* disable NSFW checker loading during the CI tests

The NSFW filter apparently causes invoke.py to crash during CI testing,
possibly due to out of memory errors. This workaround disables NSFW
model loading.

* doc change

* fix formatting errors in yml files

* Configure the NSFW checker at install time with default on (#1624)

* configure the NSFW checker at install time with default on

1. Changes the --safety_checker argument to --nsfw_checker and
--no-nsfw_checker. The original argument is recognized for backward
compatibility.

2. The configure script asks users whether to enable the checker
(default yes). Also offers users ability to select default sampler and
number of generation steps.

3. Enables the pasting of the caution icon on blurred images when
InvokeAI is installed into the package directory.

4. Adds documentation for the NSFW checker, including caveats about
accuracy, memory requirements, and intermediate image display.

* use better fitting icon

* NSFW defaults false for testing

* set default back to nsfw active

Co-authored-by: Matthias Wild <40327258+mauwii@users.noreply.github.com>
Co-authored-by: mauwii <Mauwii@outlook.de>

Signed-off-by: devops117 <55235206+devops117@users.noreply.github.com>
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Co-authored-by: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Co-authored-by: Kyle Schouviller <kyle0654@hotmail.com>
Co-authored-by: javl <mail@jaspervanloenen.com>
Co-authored-by: Kent Keirsey <31807370+hipsterusername@users.noreply.github.com>
Co-authored-by: mauwii <Mauwii@outlook.de>
Co-authored-by: Matthias Wild <40327258+mauwii@users.noreply.github.com>
Co-authored-by: Damian Stewart <d@damianstewart.com>
Co-authored-by: DevOps117 <55235206+devops117@users.noreply.github.com>
Co-authored-by: damian <git@damianstewart.com>
Co-authored-by: Damian Stewart <null@damianstewart.com>
Co-authored-by: Cyrus Chan <82143712+cyruschan360@users.noreply.github.com>
Co-authored-by: Cyrus Chan <cyruswkc@hku.hk>
Co-authored-by: Andre LaBranche <dre@mac.com>
Co-authored-by: victorca25 <41912303+victorca25@users.noreply.github.com>
Co-authored-by: Victor <victorca25@users.noreply.github.com>
2022-11-30 16:12:23 -05:00
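The <concept> tag replacement described in the Embedding merging item above is essentially a lookup-and-substitute pass over the prompt; a minimal sketch, with a plain dict standing in for the download-and-cache step (all names here are illustrative):

```python
import re

CONCEPT_TAG = re.compile(r"<([\w-]+)>")

def substitute_concepts(prompt: str, trigger_for: dict) -> str:
    """Swap each <concept> tag for its model's trigger phrase; unknown names
    become <None>, matching the limitation noted in the commit message."""
    def _replace(match: re.Match) -> str:
        return trigger_for.get(match.group(1), "<None>")
    return CONCEPT_TAG.sub(_replace, prompt)

# substitute_concepts("a portrait, <hoi4-leaders> style", {"hoi4-leaders": "<HOI4-Leader>"})
# -> "a portrait, <HOI4-Leader> style"
```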
Lincoln Stein
8f3f64b22e prevent crash that occurs when changing models.yaml on windows systems
Windows does not support an atomic `os.rename()` operation. This
PR changes it to `os.replace()`, which does the same thing.
2022-11-25 16:59:31 -05:00
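A minimal sketch of the pattern, using a temporary file plus `os.replace()` so the swap also succeeds on Windows when the destination (e.g. models.yaml) already exists (the helper name is illustrative):

```python
import os
import tempfile

def atomic_write(path: str, text: str) -> None:
    """Write to a sibling temp file, then swap it into place. os.replace()
    overwrites an existing destination on Windows, where os.rename() raises."""
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(os.path.abspath(path)))
    try:
        with os.fdopen(fd, "w") as f:
            f.write(text)
        os.replace(tmp, path)
    finally:
        if os.path.exists(tmp):   # only left behind if something went wrong
            os.unlink(tmp)
```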
slashtechno
dba0280790 Fix Colab requirements (again) (#1505) 2022-11-24 20:41:31 -05:00
Kevin Coakley
19e2cff18c Fix micromamba tar command for macOS
Moved the -O from after the file to after the tar command for compatibility with macOS

Signed-off-by: Kevin Coakley <kcoakley@sdsc.edu>
2022-11-18 16:08:17 -05:00
slashtechno
58f65d49b6 Fixed Google Colab requirements URL
Signed-off-by: slashtechno <77907286+slashtechno@users.noreply.github.com>
2022-11-16 22:31:18 -05:00
Lawrence Norton
e5edd025d6 Fixing Dead Link
I believe this is what you meant to link?
2022-11-14 17:34:51 -05:00
Peter Lin
29e229b409 adding troubleshooting tips to the newer doc 2022-11-14 10:31:19 -05:00
Lincoln Stein
93cdb476d9 change installer download repo to main.tar.gz 2022-11-13 16:44:21 -05:00
Lincoln Stein
1305e7a56c documentation hot fixes
- changes pointers to installation instructions from README
- Adds the changelog for the 2.1.3 release
2022-11-13 10:18:28 -05:00
majick
58edf262e4 Fix CWD being removed from path again. Refixes #723 again
Weirdly, I'm not even on a Mac or using a special Python distro.  Just
plain old Debian stable and a plain old venv.
2022-11-13 09:58:42 -05:00
blessedcoolant
fd67df9447 Remove gfpgan_dir
+ Update GFPGAN Model Path Defaults
>  Update them to match the new file hierarchy
2022-11-13 00:27:56 +00:00
Lincoln Stein
45e5053d06 added assets back 2022-11-13 00:23:51 +00:00
Lincoln Stein
9c5999ede1 added caveats to use of installer script 2022-11-12 20:02:01 +00:00
Lincoln Stein
7ddf7f0b7d use invoke-ai gfpgan to store weights in right place 2022-11-12 19:31:41 +00:00
psychedelicious
b8de5244b1 Fixes issue with intermediates size
Sorry @lstein !
2022-11-12 19:03:22 +00:00
Lincoln Stein
72e011a4e4 stop crash on text mask generation 2022-11-12 18:23:16 +00:00
Lincoln Stein
98db0d746c fix crash in inpaint when no seed in original image 2022-11-12 17:57:43 +00:00
Lincoln Stein
1a8e007066 merge release-candidate-1-3-2 into main.
Squashed commit of the following:

commit 9a1fe8e7fb
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Sat Nov 12 17:07:40 2022 +0000

    swap in release URLs for installers

commit ff56f5251b
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Sat Nov 12 17:03:21 2022 +0000

    fix up bad unicode chars in invoke.py

commit ed943bd6c7
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Sat Nov 12 16:05:45 2022 +0000

    outcrop improvements, hand-added

commit 7ad2355b1d
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Sat Nov 12 15:14:33 2022 +0000

    documentation fixes

commit 66c920fc19
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Thu Nov 10 16:49:25 2022 -0500

    Revert "Resize hires as an image"

    This reverts commit d05b1b3544.

commit 3fc5cb09f8
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Sat Nov 12 12:43:17 2022 +0000

    fix incorrect link in install

commit 1345ec77ab
Author: tildebyte <337875+tildebyte@users.noreply.github.com>
Date:   Sun Nov 6 19:07:31 2022 -0500

    toil(repo): add tildebyte as owner of installer/ directory

commit b116715490
Author: Kyle Schouviller <kyle0654@hotmail.com>
Date:   Thu Nov 10 21:43:56 2022 -0800

    Fix performance issue introduced by torch cuda cache clear during generation

commit fa3670270e
Author: mauwii <Mauwii@outlook.de>
Date:   Fri Nov 11 12:42:03 2022 +0100

    small update to dockers huggingface section

commit c304250ef6
Author: mauwii <Mauwii@outlook.de>
Date:   Fri Nov 11 12:19:27 2022 +0100

    fix format and Link in INSTALL_INVOKE.md

commit 802ce5dde5
Author: mauwii <Mauwii@outlook.de>
Date:   Fri Nov 11 11:17:49 2022 +0100

    small fixes to formatting and a link in INSTALL_MANUAL

commit 311ee320ec
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Fri Nov 11 07:23:35 2022 +0000

    ignore installer intermediate files

commit e9df17b374
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Fri Nov 11 07:19:25 2022 +0000

    fix backslash-related syntax error

commit 061fb4ef00
Merge: 52be0d23 4095acd1
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Fri Nov 11 06:50:04 2022 +0000

    Merge branch 'release-candidate-2-1-3' of github.com:/invoke-ai/InvokeAI into release-candidate-2-1-3

commit 52be0d2396
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Fri Nov 11 06:49:45 2022 +0000

    add WindowsLongFileName batfile to source installer

commit 4095acd10e
Author: mauwii <Mauwii@outlook.de>
Date:   Fri Nov 11 07:05:17 2022 +0100

    Doc Updates
    A lot of re-formatting of the new Installation Docs
    also some content updates/corrections

commit 201eb22d76
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Fri Nov 11 04:41:02 2022 +0000

    prevent two models from being marked default in models.yaml
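
An illustrative check for that constraint (not the project's actual code), assuming a PyYAML-readable models.yaml in which each entry may carry `default: true`:

```
import yaml  # PyYAML, assumed available

def find_default_model(models_yaml_text: str) -> str:
    """Return the single default model, raising if more than one is marked."""
    models = yaml.safe_load(models_yaml_text) or {}
    defaults = [name for name, cfg in models.items() if (cfg or {}).get("default")]
    if len(defaults) > 1:
        raise ValueError(f"multiple default models marked: {', '.join(defaults)}")
    return defaults[0] if defaults else ""

example = """
stable-diffusion-1.5:
  default: true
stable-diffusion-1.4:
  default: false
"""
print(find_default_model(example))  # stable-diffusion-1.5
```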

commit 17ab982200
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Fri Nov 11 03:56:54 2022 +0000

    installers download branch HEAD not tag

commit a04965b0e9
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Fri Nov 11 03:48:21 2022 +0000

    improve messaging during installation process

commit 0b529f0c57
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Tue Nov 8 15:22:32 2022 +0000

    enable outcropping of random JPG/PNG images

    - Works best with runwayML inpainting model
    - Numerous code changes were required to propagate the seed to the final metadata.
      The original code was predicated on the image being generated within InvokeAI.

commit 6f9f848345
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Tue Nov 8 17:27:42 2022 +0000

    enhance outcropping with ability to direct contents of new regions

    - When outcropping an image you can now add a `--new_prompt` option, to specify
      a new prompt to be used instead of the original one used to generate the image.

    - Similarly you can provide a new seed using `--seed` (or `-S`). A seed of zero
      will pick one randomly.

    - This PR also fixes the crash that happened when trying to outcrop an image
      that does not contain InvokeAI metadata.

commit 918c1589ef
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Thu Nov 10 20:16:47 2022 +0000

    fix #1402

commit 116415b3fc
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Thu Nov 10 21:27:25 2022 +0000

    fix invoke.py crash if no models.yaml file present

    - Script will now offer the user the ability to create a
      minimal models.yaml and then gracefully exit.
    - Closes #1420

commit b4b6eabaac
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Thu Nov 10 16:49:25 2022 -0500

    Revert "Log strength with hires"

    This reverts commit 82d4904c07.

commit 4ef1f4a854
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Thu Nov 10 20:01:49 2022 +0000

    remove temporary directory from repo

commit 510fc4ebaa
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Thu Nov 10 19:59:03 2022 +0000

    remove -e from clipseg load in installer

commit a20914434b
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Thu Nov 10 19:37:07 2022 +0000

    change clipseg repo branch to avoid clipseg not found error

commit 0d134195fd
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Thu Nov 10 18:39:29 2022 +0000

    update repo URL to point to rc

commit 649d8c8573
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Thu Nov 10 18:13:28 2022 +0000

    integrate tildebyte installer

commit a358d370a0
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Thu Nov 10 17:48:14 2022 +0000

    add @tildebyte compiled pip installer

commit 94a9033c4f
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Thu Nov 10 14:52:00 2022 +0000

    ignore source installer zip files

commit 18a947c503
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Thu Nov 10 14:46:36 2022 +0000

    documentation and environment file fixes

    - Have clarified the relationship between the @tildebyte and @cmdr2 installers;
      however, the @tildebyte installer merge is still a WIP due to conflicts over
      such things as `invoke.sh`.
    - Rechristened the 1-click installer as the "source" installer. The @tildebyte
      installer will be "the" installer. (We'll see which one generates the fewest
      support requests and the least maintenance work.)
    - Sync'd `environment-mac.yml` with `development`. The former was failing with a
      taming-transformers error as per https://discord.com/channels/@me/1037201214154231899/1040060947378749460

commit a23b031895
Author: Mike DiGiovanni <vinblau@gmail.com>
Date:   Wed Nov 9 16:44:59 2022 -0500

    Fixes typos in README.md

commit 23af68c7d7
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Thu Nov 10 07:02:27 2022 -0500

    downgrade win installs to basicsr==1.4.1

commit e258beeb51
Merge: 7460c069 e481bfac
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Thu Nov 10 06:37:45 2022 -0500

    Merge branch 'release-candidate-2-1-3' of github.com:invoke-ai/InvokeAI into release-candidate-2-1-3

commit 7460c069b8
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Thu Nov 10 06:36:48 2022 -0500

    remove --prefer-binary from requirements-base.txt

    It appears that some versions of pip do not recognize this option
    when it appears in the requirements file. Did not explore this further
    but recommend --prefer-binary in the manual install instructions on
    the command line.

commit e481bfac61
Merge: 5040747c d1ab65a4
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Thu Nov 10 11:21:56 2022 +0000

    Merge branch 'release-candidate-2-1-3' of github.com:/invoke-ai/InvokeAI into release-candidate-2-1-3

commit 5040747c67
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Thu Nov 10 11:21:43 2022 +0000

    fix windows install instructions & bat file

commit d1ab65a431
Author: mauwii <Mauwii@outlook.de>
Date:   Thu Nov 10 07:18:59 2022 +0100

    update WEBUIHOTKEYS.md

commit af4ee7feb8
Author: mauwii <Mauwii@outlook.de>
Date:   Thu Nov 10 06:33:49 2022 +0100

    update INSTALL_DOCKER.md

commit 764fb29ade
Author: mauwii <Mauwii@outlook.de>
Date:   Thu Nov 10 06:30:15 2022 +0100

    fix formatting in INSTALL.md

commit 1014d3ba44
Author: mauwii <Mauwii@outlook.de>
Date:   Thu Nov 10 06:29:14 2022 +0100

    fix build.sh invokeai_conda_env_file default value

commit 40a48aca88
Author: mauwii <Mauwii@outlook.de>
Date:   Thu Nov 10 05:25:30 2022 +0100

    fix environment-mac.yml
    moved taming-transformers-rom1504 to pip dependencies

commit 92abc00f16
Author: mauwii <Mauwii@outlook.de>
Date:   Thu Nov 10 05:19:52 2022 +0100

    fix test-invoke-conda
    - copy required conda environment yaml
    - use environment.yml
    - I use cp instead of ln since it is compatible with Windows runners

commit a5719aabf8
Author: mauwii <Mauwii@outlook.de>
Date:   Thu Nov 10 04:14:35 2022 +0100

    update Dockerfile
    - link environment.yml from the new environments path
    - change default conda_env_file
    - quote all variables to avoid splitting
    - also remove paths from conda-env-files in build-container.yml

commit 44a18511fa
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Wed Nov 9 20:51:06 2022 +0000

    update paths in container build workflow

commit b850dbadaf
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Wed Nov 9 20:16:57 2022 +0000

    finished reorganization of install docs

commit 9ef8b944d5
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Wed Nov 9 18:50:58 2022 +0000

    tweaks to manual install documentation

    --prefer-binary is an iffy option in the requirements file. It isn't
    supported by some versions of pip, so I removed it from
    requirements-base.txt and inserted it into the manual install
    instructions where it seems to do what it is supposed to.

commit efc5a98488
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Wed Nov 9 18:20:03 2022 +0000

    manual installation documentation tested on Linux

commit 1417c87928
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Wed Nov 9 17:37:06 2022 +0000

    change name of requirements.txt to avoid confusion

commit 2dd6fc2b93
Merge: 22213612 71ee44a8
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Wed Nov 9 17:26:24 2022 +0000

    Merge branch 'release-candidate-2-1-3' of github.com:/invoke-ai/InvokeAI into release-candidate-2-1-3

commit 22213612a0
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Wed Nov 9 17:25:59 2022 +0000

    directory cleanup; working on install docs

commit 71ee44a827
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Wed Nov 9 02:07:13 2022 +0000

    prevent crash when switching to an invalid model

commit b17ca0a5e7
Author: damian0815 <null@damianstewart.com>
Date:   Tue Nov 8 14:28:38 2022 +0100

    don't suppress exceptions when doing cross-attention control

commit 71bbfe4a1a
Author: damian0815 <null@damianstewart.com>
Date:   Tue Nov 8 12:59:34 2022 +0100

    Fix #1362 by improving VRAM usage patterns when doing .swap()

    commit ef3f7a26e242b73c2beb0195c7fd8f654ef47f55
    Author: damian0815 <null@damianstewart.com>
    Date:   Tue Nov 8 12:18:37 2022 +0100

        remove log spam

    commit 7189d649622d4668b120b0dd278388ad672142c4
    Author: damian0815 <null@damianstewart.com>
    Date:   Tue Nov 8 12:10:28 2022 +0100

        change the way saved slicing strategy is applied

    commit 01c40f751ab72955140165c16f95ae411732265b
    Author: damian0815 <null@damianstewart.com>
    Date:   Tue Nov 8 12:04:43 2022 +0100

        fix slicing_strategy_getter callsite

    commit f8cfe25150a346958903316bc710737d99839923
    Author: damian0815 <null@damianstewart.com>
    Date:   Tue Nov 8 11:56:22 2022 +0100

        cleanup, consistent dim=0 also tested

    commit 5bf9b1e890d48e962afd4a668a219b68271e5dc1
    Author: damian0815 <null@damianstewart.com>
    Date:   Tue Nov 8 11:34:09 2022 +0100

        refactored context, tested with non-sliced cross attention control

    commit d58a46e39bf562e7459290d2444256e8c08ad0b6
    Author: damian0815 <null@damianstewart.com>
    Date:   Sun Nov 6 00:41:52 2022 +0100

        cleanup

    commit 7e2c658b4c06fe239311b65b9bb16fa3adec7fd7
    Author: damian0815 <null@damianstewart.com>
    Date:   Sat Nov 5 22:57:31 2022 +0100

        disable logs

    commit 20ee89d93841b070738b3d8a4385c93b097d92eb
    Author: damian0815 <null@damianstewart.com>
    Date:   Sat Nov 5 22:36:58 2022 +0100

        slice saved attention if necessary

    commit 0a7684a22c880ec0f48cc22bfed4526358f71546
    Author: damian0815 <null@damianstewart.com>
    Date:   Sat Nov 5 22:32:38 2022 +0100

        raise instead of asserting

    commit 7083104c7f3a0d8fd96e94a2f391de50a3c942e4
    Author: damian0815 <null@damianstewart.com>
    Date:   Sat Nov 5 22:31:00 2022 +0100

        store dim when saving slices

    commit f7c0808ed383ec1dc70645288a798ed2aa4fa85c
    Author: damian0815 <null@damianstewart.com>
    Date:   Sat Nov 5 22:27:16 2022 +0100

        don't retry on exception

    commit 749a721e939b3fe7c1741e7998dab6bd2c85a0cb
    Author: damian0815 <null@damianstewart.com>
    Date:   Sat Nov 5 22:24:50 2022 +0100

        stuff

    commit 032ab90e9533be8726301ec91b97137e2aadef9a
    Author: damian0815 <null@damianstewart.com>
    Date:   Sat Nov 5 22:20:17 2022 +0100

        more logging

    commit 3dc34b387f033482305360e605809d95a40bf6f8
    Author: damian0815 <null@damianstewart.com>
    Date:   Sat Nov 5 22:16:47 2022 +0100

        logs

    commit 901c4c1aa4b9bcef695a6551867ec8149e6e6a93
    Author: damian0815 <null@damianstewart.com>
    Date:   Sat Nov 5 22:12:39 2022 +0100

        actually set save_slicing_strategy to True

    commit f780e0a0a7c6b6a3db320891064da82589358c8a
    Author: damian0815 <null@damianstewart.com>
    Date:   Sat Nov 5 22:10:35 2022 +0100

        store slicing strategy

    commit 93bb6d566fd18c5c69ef7dacc8f74ba2cf671cb7
    Author: damian <git@damianstewart.com>
    Date:   Sat Nov 5 20:43:48 2022 +0100

        still not it

    commit 5e3a9541f8ae00bde524046963910323e20c40b7
    Author: damian <git@damianstewart.com>
    Date:   Sat Nov 5 17:20:02 2022 +0100

        wip offloading attention slices on-demand

    commit 4c2966aa856b6f3b446216da3619ae931552ef08
    Author: damian0815 <null@damianstewart.com>
    Date:   Sat Nov 5 15:47:40 2022 +0100

        pre-emptive offloading, idk if it works

    commit 572576755e9f0a878d38e8173e485126c0efbefb
    Author: root <you@example.com>
    Date:   Sat Nov 5 11:25:32 2022 +0000

        push attention slices to cpu. slow but saves memory.
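
A minimal PyTorch sketch of the offloading idea in the line above; saved attention slices are parked in CPU RAM and copied back to their device only when needed again (class and method names are illustrative):

```
import torch

class SliceCache:
    """Keep saved attention slices in CPU RAM instead of VRAM."""

    def __init__(self):
        self._slices = {}

    def save(self, key, attn_slice: torch.Tensor) -> None:
        # .to("cpu") copies the slice out of VRAM; slow, but frees GPU memory
        self._slices[key] = attn_slice.detach().to("cpu")

    def load(self, key, device) -> torch.Tensor:
        # copy the slice back onto whichever device asks for it
        return self._slices[key].to(device)

device = "cuda" if torch.cuda.is_available() else "cpu"
cache = SliceCache()
cache.save("slice_0", torch.randn(8, 64, 64, device=device))
restored = cache.load("slice_0", device)
```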

    commit b57c83a68f2ac03976ebc89ce2ff03812d6d185f
    Author: damian0815 <null@damianstewart.com>
    Date:   Sat Nov 5 12:04:22 2022 +0100

        verbose logging

    commit 3a5dae116f110a96585d9eb71d713b5ed2bc3d2b
    Author: damian0815 <null@damianstewart.com>
    Date:   Sat Nov 5 11:50:48 2022 +0100

        wip fixing mem strategy crash (for testing on runpod)

    commit 3cf237db5fae0c7b0b4cc3c47c81830bdb2ae7de
    Author: damian0815 <null@damianstewart.com>
    Date:   Fri Nov 4 09:02:40 2022 +0100

        wip, only works on cuda

commit 5702271991
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Wed Nov 9 14:09:36 2022 +0000

    speculative reorganization of the requirements & environment files

    - This is only a test!
    - The various environment*.yml and requirements*.txt files have all
      been moved into a directory named "environments-and-requirements".
    - The idea is to clean up our root directory so that the github home
      page is tidy.
    - The manual install instructions will start with the instructions to
      create a symbolic link from environment.yml to the appropriate file
      for OS and GPU.
    - The 1-click installers have been updated to accommodate this change.

commit 10781e7dc4
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Wed Nov 9 01:59:45 2022 +0000

    refactoring requirements

commit 099d1157c5
Author: mauwii <Mauwii@outlook.de>
Date:   Wed Nov 9 00:16:18 2022 +0100

    better way to check whether conda is usable

commit ab825bf7ee
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Tue Nov 8 22:05:33 2022 +0000

    add back --prefer-binaries to requirements

commit 10cfeb5ada
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Nov 8 22:27:19 2022 +0100

    add quotes to set and use `$environment_file`

commit e97515d045
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Nov 8 22:24:21 2022 +0100

    set environment file for conda update

commit 0f04bc5789
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Nov 8 22:21:25 2022 +0100

    use conda env update

commit 3f74aabecd
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Nov 8 22:20:44 2022 +0100

    use command instead of hash

commit b1a99a51b7
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Tue Nov 8 14:44:44 2022 -0500

    remove --global git config from 1-click installers

commit 8004f8a6d9
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Mon Nov 7 09:07:20 2022 -0500

    Revert "Use array slicing to calc ddim timesteps"

    This reverts commit 1f0c5b4cf1.

commit ff8ff2212a
Merge: 8e5363cd 636620b1
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Tue Nov 8 14:01:40 2022 +0000

    add initfile support from PR #1386

commit 8e5363cd83
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Tue Nov 8 13:26:18 2022 +0000

    move 'installer/' to '1-click-installer' to make room for tildebyte installer

commit 1450779146
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Tue Nov 8 12:56:36 2022 +0000

    update branch for installer to pull against

commit 8cd5d95b8a
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Tue Nov 8 05:30:20 2022 +0000

    move all models into subdirectories of ./models

    - this required an update to the invoke-ai fork of gfpgan
    - simultaneously reverted consolidation of environment and
      requirements files, as their presence in a directory
      triggered setup.py to try to install a sub-package.

commit abd6407394
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Tue Nov 8 03:52:46 2022 +0000

    leave a copy of environment-cuda.yml at top level

    - named it environment.yml
    - need to avoid a big change for users and breaking older support
      instructions.

commit 734dacfbe9
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Tue Nov 8 03:50:07 2022 +0000

    consolidate environment files

    - starting to remove unneeded entries and pins
    - no longer require -e in front of github dependencies
    - update setup.py with release number
    - update manual installation instructions

commit 636620b1d5
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Tue Nov 8 03:26:16 2022 +0000

    change initfile to ~/.invokeai

    - adjust documentation
    - also fix 'clipseg_models' to 'clipseg', which seems to be working now

commit 1fe41146f0
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Sat Nov 5 14:28:01 2022 -0400

    add support for an initialization file, invokeai.init

    - Place preferred startup command switches in a file named
      "invokeai.init". The file can consist of a single line of switches
      such as "--web --steps=28", a series of switches on each
      line, or any combination of the two.

     Example:
     ```
       --web
       --host=0.0.0.0
       --steps=28
       --grid
       -f 0.6 -C 11.0 -A k_euler_a
    ```

    - The following options, which were previously only available within
      the CLI, are now available on the command line as well:

      --steps
      --strength
      --cfg_scale
      --width
      --height
      --fit
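
A minimal sketch (not the actual implementation) of how switches from an init file like the example above could be folded into normal argument parsing; the ~/.invokeai path matches the later commit that moves the initfile there:

```
import os
import shlex
import sys

def read_init_switches(path: str = "~/.invokeai") -> list:
    """Return CLI switches from the init file, skipping blank lines and comments."""
    path = os.path.expanduser(path)
    if not os.path.exists(path):
        return []
    switches = []
    with open(path) as initfile:
        for line in initfile:
            line = line.split("#", 1)[0].strip()  # allow trailing comments
            if line:
                switches.extend(shlex.split(line))
    return switches

# init-file switches come first so explicit command-line arguments override them
argv = read_init_switches() + sys.argv[1:]
```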

commit 2ad6ef355a
Merge: 865502ee 8b47c829
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Sun Nov 6 18:08:36 2022 +0000

    update discord link

commit 865502ee4f
Author: mauwii <Mauwii@outlook.de>
Date:   Sun Nov 6 18:00:16 2022 +0100

    update changelog

commit c7984f3299
Author: mauwii <Mauwii@outlook.de>
Date:   Sun Nov 6 17:07:27 2022 +0100

    update TROUBLESHOOT.md

commit 7f150ed833
Author: mauwii <Mauwii@outlook.de>
Date:   Sun Nov 6 16:56:58 2022 +0100

    remove `:` from headlines in CONTRIBUTORS.md

commit badf4e256c
Author: mauwii <Mauwii@outlook.de>
Date:   Sun Nov 6 16:56:37 2022 +0100

    enable navigation tabs
    Since the docs are growing, this way they look cleaner

commit e64c60bbb3
Author: mauwii <Mauwii@outlook.de>
Date:   Sun Nov 6 16:18:59 2022 +0100

    remove preflight checks from assets
    seems like somebody executed tests and committed them

commit 1780618543
Author: mauwii <Mauwii@outlook.de>
Date:   Sun Nov 6 16:15:06 2022 +0100

    update INSTALLING_MODELS.md

commit f91fd27624
Author: Kyle Schouviller <kyle0654@hotmail.com>
Date:   Sat Nov 5 14:47:53 2022 -0700

    Bug fix for inpaint size

commit 09e41e8f76
Author: Kyle Schouviller <kyle0654@hotmail.com>
Date:   Sat Nov 5 14:34:52 2022 -0700

    Add inpaint size options to inpaint at a larger size than the actual inpaint image, then scale back down for recombination

commit 6eeb2107b3
Author: mauwii <Mauwii@outlook.de>
Date:   Sat Nov 5 21:01:14 2022 +0100

    remove create-caches.yml since not used anywhere

commit 17053ad8b7
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Sat Nov 5 16:01:55 2022 -0400

    fix duplicated argument introduced by conflict resolution

commit fefb4dc1f8
Merge: 762ca60a d05b1b35
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Sat Nov 5 12:47:35 2022 -0700

    Merge branch 'development' into fix_generate.py

commit d05b1b3544
Author: Craig <cwallen@users.noreply.github.com>
Date:   Sat Oct 29 20:40:30 2022 -0400

    Resize hires as an image

commit 82d4904c07
Author: Craig <cwallen@users.noreply.github.com>
Date:   Sat Oct 29 20:37:40 2022 -0400

    Log strength with hires

commit 1cdcf33cfa
Merge: 6616fa83 cbc029c6
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Sat Nov 5 09:57:38 2022 -0400

    Merge branch 'main' into development

    - this synchronizes recent document fixes by mauwii

commit 6616fa835a
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Fri Nov 4 00:47:03 2022 -0400

    fix Windows library dependency issues

    This commit addresses two bugs:

    1) invokeai.py crashes immediately with a message about an undefined
       attribute sigKILL (closes #1288). The fix is to pin torch at 1.12.1.

    2) Version 1.4.2 of basicsr fails to load properly on Windows, and is
       a requirement of realesrgan, however 1.4.1 works. Pinning basicsr
       in our requirements file resulted in a dependency conflict, so I
       ended up cloning realesrgan into the invoke-ai Git space and changing
       the requirements file there.

    If there is a more elegant solution, please advise.

commit 7b9a4564b1
Author: Matthias Wild <40327258+mauwii@users.noreply.github.com>
Date:   Sat Nov 5 14:36:45 2022 +0100

    Update-docs (#1382)

    * update IMG2IMG.md

    * update INPAINTING.md

    * update WEBUIHOTKEYS.md

    * more doc updates (mostly fix formatting):
    - OUTPAINTING.md
    - POSTPROCESS.md
    - PROMPTS.md
    - VARIATIONS.md
    - WEB.md
    - WEBUIHOTKEYS.md

commit fcdefa0620
Author: Matthias Wild <40327258+mauwii@users.noreply.github.com>
Date:   Fri Nov 4 20:47:31 2022 +0100

    Hotfix docs (#1376) (#1377)

commit ef8b3ce639
Merge: b7042095 36870a8f
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Fri Nov 4 12:08:44 2022 -0400

    Merge-main-into-development (#1373)

    To get rid of the difference between main and development.

    Otherwise it will be a pain to start fixing the documentation
    (when the state of main and development is not the same ...)

    Also this should fix the problem of all tests failing since environment
    yamls get updated.

commit 36870a8f53
Merge: 6b89adfa b7042095
Author: Matthias Wild <40327258+mauwii@users.noreply.github.com>
Date:   Fri Nov 4 16:25:00 2022 +0100

    Merge branch 'development' into merge-main-into-development

commit b70420951d
Author: damian0815 <null@damianstewart.com>
Date:   Thu Nov 3 12:39:45 2022 +0100

    fix parsing error doing eg `forest ().swap(in winter)`

commit 1f0c5b4cf1
Author: wfng92 <43742196+wfng92@users.noreply.github.com>
Date:   Thu Nov 3 17:13:52 2022 +0800

    Use array slicing to calc ddim timesteps

commit 8648da8111
Author: mauwii <Mauwii@outlook.de>
Date:   Fri Nov 4 00:06:19 2022 +0100

    update environment-linux-aarch64 to use python 3.9

commit 45b4593563
Author: mauwii <Mauwii@outlook.de>
Date:   Thu Nov 3 22:31:46 2022 +0100

    update environment-linux-aarch64.yml
    - move getpass_asterisk to pip

commit 41b04316cf
Author: mauwii <Mauwii@outlook.de>
Date:   Thu Nov 3 20:40:08 2022 +0100

    rename job, remove debug branch from triggers

commit e97c6db2a3
Author: mauwii <Mauwii@outlook.de>
Date:   Thu Nov 3 20:34:01 2022 +0100

    include build matrix to build x86_64 and aarch64

commit 896820a349
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Nov 1 05:01:15 2022 +0100

    disable caching

commit 06c8f468bf
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Nov 1 04:26:39 2022 +0100

    disable PR-Validation
    since there are no files passed from the context, this is unnecessary

commit 61920e2701
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Nov 1 04:09:39 2022 +0100

    update action to use current branch
    also update build-args of dockerfile and build.sh

commit f34ba7ca70
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Nov 1 02:30:24 2022 +0100

    remove unnecessary mkdir command again

commit c30ef0895d
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Nov 1 01:51:12 2022 +0100

    remove symlink to GFPGANv1.4
    also re-add mkdir to prevent action from failing

commit aa3a774f73
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Nov 1 01:48:59 2022 +0100

    update build-container.yml to use cachev3

commit 2c30555b84
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Nov 1 01:34:20 2022 +0100

    update Dockerfile
    - create models.yaml from models.yaml.example
    - run preload_models.py with --no-interactive

commit 743f605773
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Nov 1 01:21:15 2022 +0100

    update build.sh to download sd-v1.5 model

commit 519c661abb
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Oct 25 01:26:50 2022 +0200

    replace old-fashioned markdown templates with forms
    this will help the readability of issues a lot 🤓

commit 22c956c75f
Merge: 13696adc 0196571a
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Thu Nov 3 10:20:21 2022 -0400

    Merge branch 'development' of github.com:invoke-ai/InvokeAI into development

commit 13696adc3a
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Thu Nov 3 10:20:10 2022 -0400

    speculative change to solve windows esrgan issues

commit 0196571a12
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Wed Nov 2 22:39:35 2022 -0400

    remove merge markers from preload_models.py

commit 9666f466ab
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Wed Nov 2 18:29:34 2022 -0400

    use refined model by default

commit 240e5486c8
Merge: 8164b6b9 aa247e68
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Wed Nov 2 18:35:00 2022 -0400

    Merge branch 'spezialspezial-patch-9' into development

commit 8164b6b9cf
Merge: 4fc82d55 dd5a88dc
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Wed Nov 2 17:06:46 2022 -0400

    Merge branch 'development' of github.com:invoke-ai/InvokeAI into development

commit 4fc82d554f
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date:   Thu Nov 3 04:17:28 2022 +1300

    [WebUI] Final 2.1 Release Build

commit 96b34c0f85
Author: damian0815 <null@damianstewart.com>
Date:   Wed Nov 2 09:08:11 2022 +0100

    Final WebUI build for Release 2.1
    - squashed commit of 52 commits from PR #1327

    don't log base64 progress images

    Fresh Build For WebUI

    [WebUI] Loopback Default False

    Fixes bugs/styling

    - Fixes missing web app state on new version:
    Adds stateReconciler to redux-persist.

    When we add more values to the state and then release the updated app, they will be automatically merged in.

    Resetting the web UI will be needed far less often.
    7159ec

    - Fixes console z-index
    - Moves reset web UI button to visible area

    Decreases gallery width on inpainting

    Increases workarea split padding to 1rem

    Adds missing tooltips to site header

    Changes inpainting controls settings to hover

    Fixes hotkeys and settings buttons not working

    Improves bounding box interactions

    - Bounding box can now be moved by dragging any of its edges
    - Bounding box does not affect drawing if already drawing a stroke
    - Can lock bounding box to draw directly on the bounding box edges
    - Removes spacebar-hold behaviour due to technical issues

    Fixes silent crash when init image too large

    To send the mask to the server, the UI rendered the mask onto the init image and sent the whole image. The mask was then cropped by the server.

    If the image was too large, the app silently failed. Maybe it exceeds the websocket size limit.

    Fixed by cropping the mask in the UI layer, sending only bounding-box-sized mask image data.

    Disabled bounding box settings when locked

    Styles image uploader

    Builds fresh bundle

    Improves bounding box interaction

    Added spacebar-hold-to-transform back.

    Address bounding box feedback

    - Adds back toggle to hide bounding box
    - Box quick toggle = q, normal toggle = shift + q
    - Styles canvas alert icons

    Adds hints when unable to invoke

    - Popover on Invoke button indicates why exactly it is disabled, e.g. prompt is empty, something else is processing, etc.
    - There may be more than one reason; all are displayed.

    Fix Inpainting Alerts Styling

    Preventing unnecessary re-renders across the app

    Code Split Inpaint Options

    Isolate features to their own components so they don't re-render the other stuff each time.

    [TESTING] Remove global isReady checking

    I don't believe this is needed at all because the isReady state is constantly updated when needed and tracked in real time in the Redux store. This causes massive re-renders. @psychedelicious If this is absolutely essential for a reason that I do not see, please hit me up on Discord.

    Fresh Bundle

    Fix Bounding Box Settings re-rendering on brush stroke

    [Code Splitting] Bounding Box Options

    Isolated all bounding box components so they don't trigger unnecessary re-renders. Still need to fix the bounding box triggering re-renders on the control panel inside the canvas itself. But the options panel should be good to go with this change.

    Inpainting Controls Code Splitting and Performance

    Code-split the entirety of the inpainting controls. Created new selectors for each and every component to ensure there are no unnecessary re-renders. The app feels a lot smoother.

    Fixes rerenders on ClearBrushHistory

    Fixes crash when requesting post-generation upscale/face restoration

    - Moves the inpainting paste to before the postprocessing.

    Removes unused isReady state

    Changes Report Bug icon to a bug

    Restores shift+q bounding box shortcut

    Adds alert for bounding box size to status icons

    Adds asCheckbox to IAIIconButton

    Rough draft of this. Not happy with the styling but it's clearer than having them look just like buttons.

    Fixes crash related to old value of progress_latents in state

    Styling changes and settings modal minor refactor

    Fixes: uploaded JPG images not loading

    Reworks CurrentImageButtons.tsx

    - Change all icons to FA iconset for consistency
    - Refactors IAIIconButton, IAIButton, IAIPopover to handle ref forwarding
    - Redesigns buttons into group

    Only generate 1 iteration when seed fixed & variations disabled

    Fixes progress images select

    Fixes edge case: upload over gets stuck while alt tabbing

    - Press esc to close it now

    Fixes display progress images select typing

    Fixes current image button rerenders

    Adds min width to ImageUploader

    Makes fast-latents in progress default

    Update Icon Button Checkbox Style Styling

    Fixes next/prev image buttons

    Refactor canvas buttons + more

    Add Save Intermediates Step Count

    For accurate mode only.

    Co-Authored-By: Richard Macarthy <richardmacarthy@protonmail.com>

    Restores "initial image" text

    Address feedback

    - moves mask clear button
    - fixes intermediates
    - shrinks inpainting icons by 10%

    Fix Loopback Styling

    Adds escape hotkey to close floating panels

    Readd Hotkey for Dual Display

    Updated Current Image Button Styling

commit dd5a88dcee
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date:   Thu Nov 3 04:17:28 2022 +1300

    [WebUI] Final 2.1 Release Build

commit 95ed56bf82
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date:   Thu Nov 3 04:16:31 2022 +1300

    Updated Current Image Button Styling

commit 1ae80f5ab9
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date:   Thu Nov 3 04:07:57 2022 +1300

    Readd Hotkey for Dual Display

commit 1f0bd3ca6c
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Thu Nov 3 02:07:00 2022 +1100

    Adds escape hotkey to close floating panels

commit a1971f6830
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date:   Thu Nov 3 03:38:15 2022 +1300

    Fix Loopback Styling

commit c6118e8898
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Thu Nov 3 01:29:51 2022 +1100

    Address feedback

    - moves mask clear button
    - fixes intermediates
    - shrinks inpainting icons by 10%

commit 7ba958cf7f
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Thu Nov 3 01:10:38 2022 +1100

    Restores "initial image" text

commit 383905d5d2
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date:   Thu Nov 3 02:59:11 2022 +1300

    Add Save Intermediates Step Count

    For accurate mode only.

    Co-Authored-By: Richard Macarthy <richardmacarthy@protonmail.com>

commit 6173e3e9ca
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Thu Nov 3 00:53:53 2022 +1100

    Refactor canvas buttons + more

commit 3feb7d8922
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Thu Nov 3 00:49:23 2022 +1100

    Fixes next/prev image buttons

commit 1d9edbd0dd
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date:   Thu Nov 3 00:50:44 2022 +1300

    Update Icon Button Checkbox Style Styling

commit d439abdb89
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Wed Nov 2 21:37:24 2022 +1100

    Makes fast-latents in progress default

commit ee47ea0c89
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Wed Nov 2 21:37:09 2022 +1100

    Adds min width to ImageUploader

commit 300bb2e627
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Wed Nov 2 21:28:22 2022 +1100

    Fixes current image button rerenders

commit ccf8593501
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Wed Nov 2 21:27:43 2022 +1100

    Fixes display progress images select typing

commit 0fda612f3f
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Wed Nov 2 21:02:01 2022 +1100

    Fixes edge case: upload over gets stuck while alt tabbing

    - Press esc to close it now

commit 5afff65b71
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Wed Nov 2 20:33:19 2022 +1100

    Fixes progress images select

commit 7e55bdefce
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Wed Nov 2 20:27:47 2022 +1100

    Only generate 1 iteration when seed fixed & variations disabled

commit 620cf84d3d
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Wed Nov 2 19:51:38 2022 +1100

    Reworks CurrentImageButtons.tsx

    - Change all icons to FA iconset for consistency
    - Refactors IAIIconButton, IAIButton, IAIPopover to handle ref forwarding
    - Redesigns buttons into group

commit cfe567c62a
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Wed Nov 2 16:14:50 2022 +1100

    Fixes: uploaded JPG images not loading

commit cefe12f1df
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Wed Nov 2 15:31:18 2022 +1100

    Styling changes and settings modal minor refactor

commit 1e51c39928
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Wed Nov 2 15:27:46 2022 +1100

    Fixes crash related to old value of progress_latents in state

commit 42a02bbb80
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Wed Nov 2 13:15:06 2022 +1100

    Adds asCheckbox to IAIIconButton

    Rough draft of this. Not happy with the styling but it's clearer than having them look just like buttons.

commit f1ae6dae4c
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Wed Nov 2 13:13:56 2022 +1100

    Adds alert for bounding box size to status icons

commit 6195579910
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Wed Nov 2 11:52:19 2022 +1100

    Restores shift+q bounding box shortcut

commit 16c8b23b34
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Wed Nov 2 11:32:07 2022 +1100

    Changes Report Bug icon to a bug

commit 07ae626b22
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Wed Nov 2 11:17:16 2022 +1100

    Removes unused isReady state

commit 8d171bb044
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Wed Nov 2 11:13:26 2022 +1100

    Fixes crash when requesting post-generation upscale/face restoration

    - Moves the inpainting paste to before the postprocessing.

commit 6e33ca7e9e
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Wed Nov 2 10:59:01 2022 +1100

    Fixes rerenders on ClearBrushHistory

commit db46e12f2b
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date:   Wed Nov 2 11:36:28 2022 +1300

    Inpainting Controls Code Splitting and Performance

    Code-split the entirety of the inpainting controls. Created new selectors for each and every component to ensure there are no unnecessary re-renders. The app feels a lot smoother.

commit 868e4b2db8
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date:   Wed Nov 2 07:40:31 2022 +1300

    [Code Splitting] Bounding Box Options

    Isolated all bounding box components so they don't trigger unnecessary re-renders. Still need to fix the bounding box triggering re-renders on the control panel inside the canvas itself. But the options panel should be good to go with this change.

commit 2e562742c1
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date:   Wed Nov 2 06:40:27 2022 +1300

    Fix Bounding Box Settings re-rendering on brush stroke

commit 68e6958009
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date:   Wed Nov 2 06:28:34 2022 +1300

    Fresh Bundle

commit ea6e3a7949
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date:   Wed Nov 2 06:26:56 2022 +1300

    [TESTING] Remove global isReady checking

    I don't believe this is needed at all because the isReady state is constantly updated when needed and tracked in real time in the Redux store. This causes massive re-renders. @psychedelicious If this is absolutely essential for a reason that I do not see, please hit me up on Discord.

commit b2879ca99f
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date:   Wed Nov 2 06:08:59 2022 +1300

    Code Split Inpaint Options

    Isolate features to their own components so they don't re-render the other stuff each time.

commit 4e911566c3
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date:   Wed Nov 2 03:50:56 2022 +1300

    Preventing unnecessary re-renders across the app

commit 9bafda6a15
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date:   Wed Nov 2 03:02:35 2022 +1300

    Fix Inpainting Alerts Styling

commit 871a8a5375
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Tue Nov 1 23:52:07 2022 +1100

    Adds hints when unable to invoke

    - Popover on Invoke button indicates why exactly it is disabled, e.g. prompt is empty, something else is processing, etc.
    - There may be more than one reason; all are displayed.

commit 0eef74bc00
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Tue Nov 1 23:40:11 2022 +1100

    Address bounding box feedback

    - Adds back toggle to hide bounding box
    - Box quick toggle = q, normal toggle = shift + q
    - Styles canvas alert icons

commit 423ae32097
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Tue Nov 1 17:06:07 2022 +1100

    Improves bounding box interaction

    Added spacebar-hold-to-transform back.

commit 8282e5d045
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Tue Nov 1 16:57:07 2022 +1100

    Builds fresh bundle

commit 19305cdbdf
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Tue Nov 1 16:51:11 2022 +1100

    Styles image uploader

commit eb9028ab30
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Tue Nov 1 16:51:03 2022 +1100

    Disabled bounding box settings when locked

commit 21483f5d07
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Tue Nov 1 16:50:24 2022 +1100

    Fixes silent crash when init image too large

    To send the mask to the server, the UI rendered the mask onto the init image and sent the whole image. The mask was then cropped by the server.

    If the image was too large, the app silently failed. Maybe it exceeds the websocket size limit.

    Fixed by cropping the mask in the UI layer, sending only bounding-box-sized mask image data.
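
The actual fix lives in the TypeScript canvas code; purely as an illustration of the idea (encode only the bounding-box-sized region of the mask instead of the full canvas), here is a hypothetical Pillow sketch:

```
from io import BytesIO

from PIL import Image  # Pillow, assumed available

def mask_payload(mask, bbox) -> bytes:
    """Crop the mask to the bounding box and encode only that region."""
    x, y, w, h = bbox
    cropped = mask.crop((x, y, x + w, y + h))  # bbox-sized data, not the whole canvas
    buf = BytesIO()
    cropped.save(buf, format="PNG")
    return buf.getvalue()

full_mask = Image.new("L", (2048, 2048), 0)          # hypothetical full-canvas mask
payload = mask_payload(full_mask, (512, 512, 256, 256))
print(len(payload), "bytes instead of a full 2048x2048 image")
```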

commit 82dcbac28f
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Tue Nov 1 16:28:30 2022 +1100

    Improves bounding box interactions

    - Bounding box can now be moved by dragging any of its edges
    - Bounding box does not affect drawing if already drawing a stroke
    - Can lock bounding box to draw directly on the bounding box edges
    - Removes spacebar-hold behaviour due to technical issues

commit d43bd4625d
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Tue Nov 1 15:10:49 2022 +1100

    Fixes hotkeys and settings buttons not working

commit ea891324a2
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Tue Nov 1 11:04:02 2022 +1100

    Changes inpainting controls settings to hover

commit 8fd9ea2193
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Tue Nov 1 11:03:41 2022 +1100

    Adds missing tooltips to site header

commit fb02666856
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Tue Nov 1 11:03:25 2022 +1100

    Increases workarea split padding to 1rem

commit f6f5c2731b
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Tue Nov 1 11:03:10 2022 +1100

    Decreases gallery width on inpainting

commit b4e3f771e0
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Tue Nov 1 10:54:59 2022 +1100

    Fixes bugs/styling

    - Fixes missing web app state on new version:
    Adds stateReconciler to redux-persist.

    When we add more values to the state and then release the updated app, they will be automatically merged in.

    Resetting the web UI will be needed far less often.
    7159ec

    - Fixes console z-index
    - Moves reset web UI button to visible area

commit 99bb9491ac
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date:   Tue Nov 1 08:35:45 2022 +1300

    [WebUI] Loopback Default False

commit 0453f21127
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date:   Wed Nov 2 23:23:51 2022 +1300

    Fresh Build For WebUI

commit 9fc09aa4bd
Author: damian0815 <null@damianstewart.com>
Date:   Wed Nov 2 09:08:11 2022 +0100

    don't log base64 progress images

commit 5e87062cf8
Author: spezialspezial <75758219+spezialspezial@users.noreply.github.com>
Date:   Wed Nov 2 00:21:27 2022 +0100

    Option to directly invert the grayscale heatmap - fix

commit 3e7a459990
Author: spezialspezial <75758219+spezialspezial@users.noreply.github.com>
Date:   Tue Nov 1 21:37:33 2022 +0100

    Update txt2mask.py

commit bbf4c03e50
Author: spezialspezial <75758219+spezialspezial@users.noreply.github.com>
Date:   Tue Nov 1 21:11:19 2022 +0100

    Option to directly invert the grayscale heatmap

    Theoretically it's less work to invert the image while it's small, but I can't measure a significant difference. Still, it's a handy option to have in some cases.

commit 611a3a9753
Author: mauwii <Mauwii@outlook.de>
Date:   Wed Nov 2 02:23:09 2022 +0100

    fix name of caching step

commit 1611f0d181
Author: mauwii <Mauwii@outlook.de>
Date:   Wed Nov 2 02:18:46 2022 +0100

    readd caching of sd-models
    - this would remove the need for the secret to be available in PRs

commit 08835115e4
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Tue Nov 1 22:10:12 2022 -0400

    pin pytorch_lightning to 1.7.7, issue #1331

commit 2d84e28d32
Merge: 533fd04e ef17aae8
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Tue Nov 1 22:11:04 2022 -0400

    Merge branch 'development' of github.com:invoke-ai/InvokeAI into development

commit ef17aae8ab
Author: damian0815 <null@damianstewart.com>
Date:   Wed Nov 2 00:39:48 2022 +0100

    add damian0815 to contributors list

commit 0cc39f01a3
Author: damian0815 <null@damianstewart.com>
Date:   Wed Nov 2 01:18:50 2022 +0100

    report full size for fast latents and update conversion matrix for v1.5
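
For context, the fast-latents preview multiplies the 4-channel latent by a small 4x3 matrix to approximate RGB instead of running the full VAE decode. A sketch with illustrative matrix values (the project tunes the real values per model version):

```
import torch

# illustrative 4x3 latent-to-RGB approximation; real values are model-specific
LATENT_RGB_MATRIX = torch.tensor([
    [ 0.30,  0.19,  0.21],
    [ 0.19,  0.29,  0.17],
    [-0.16,  0.19,  0.26],
    [-0.18, -0.27, -0.47],
])

def fast_latents_to_rgb(latents: torch.Tensor) -> torch.Tensor:
    """latents: (4, H, W) float tensor -> (H, W, 3) uint8 preview."""
    rgb = torch.einsum("chw,cr->rhw", latents, LATENT_RGB_MATRIX)
    rgb = ((rgb + 1.0) / 2.0).clamp(0, 1)  # rough normalization, preview only
    return (rgb * 255).byte().permute(1, 2, 0)

preview = fast_latents_to_rgb(torch.randn(4, 64, 64))
print(preview.shape)  # torch.Size([64, 64, 3])
```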

commit 688d7258f1
Author: damian0815 <null@damianstewart.com>
Date:   Wed Nov 2 00:33:00 2022 +0100

    fix a bug that broke cross attention control index mapping

commit 4513320bf1
Author: damian0815 <null@damianstewart.com>
Date:   Wed Nov 2 00:31:58 2022 +0100

    save VRAM by not recombining tensors that have been sliced to save VRAM

commit 533fd04ef0
Merge: 6215592b dff5681c
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Tue Nov 1 17:40:36 2022 -0400

    Merge branch 'development' of github.com:invoke-ai/InvokeAI into development

commit dff5681cf0
Author: damian0815 <d@d.com>
Date:   Tue Nov 1 13:56:03 2022 +0100

    shorter strings

commit 5a2790a69b
Author: damian0815 <d@d.com>
Date:   Tue Nov 1 13:19:20 2022 +0100

    convert progress display to a drop-down

commit 7c5305ccba
Author: damian0815 <d@d.com>
Date:   Tue Nov 1 12:54:46 2022 +0100

    do not try to save base64 intermediates in gallery on cancellation

commit 4013e8ad6f
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Tue Nov 1 21:54:35 2022 +1100

    Fixes b64 image sending and displaying

commit d1dfd257f9
Author: damian <d@d.com>
Date:   Tue Nov 1 11:40:40 2022 +0100

    wip base64

commit 5322d735ee
Author: damian <d@d.com>
Date:   Tue Nov 1 11:31:42 2022 +0100

    update frontend

commit cdb107dcda
Author: damian <d@d.com>
Date:   Tue Nov 1 11:17:43 2022 +0100

    add option to show intermediate latent space

commit be1393a41c
Author: damian <d@d.com>
Date:   Tue Nov 1 10:16:55 2022 +0100

    ensure existing exception handling code also handles new exception class

commit e554c2607f
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Tue Nov 1 10:08:42 2022 +0100

    Rebuilt prompt parsing logic

    Complete re-write of the prompt parsing logic to be more readable and
    logical, and therefore also hopefully easier to debug, maintain, and
    augment.

    In the process it has also become more robust to badly-formed prompts.

    Squashed commit of the following:

    commit 8fcfa88a16e1390d41717e940d72aed64712171c
    Author: Damian at mba <damian@frey.NOSPAMco.nz>
    Date:   Sun Oct 30 17:05:57 2022 +0100

        further cleanup

    commit 1a1fd78bcfeb49d072e3e6d5808aa8df15441629
    Author: Damian at mba <damian@frey.NOSPAMco.nz>
    Date:   Sun Oct 30 16:07:57 2022 +0100

        cleanup and document

    commit 099c9659fa8b8135876f9a5a50fe80b20bc0635c
    Author: Damian at mba <damian@frey.NOSPAMco.nz>
    Date:   Sun Oct 30 15:54:58 2022 +0100

        works fully

    commit 5e6887ea8c25a1e21438ff6defb381fd027d25fd
    Author: Damian at mba <damian@frey.NOSPAMco.nz>
    Date:   Sun Oct 30 15:24:31 2022 +0100

        further...

    commit 492fda120844d9bc1ad4ec7dd408a3374762d0ff
    Author: Damian at mba <damian@frey.NOSPAMco.nz>
    Date:   Sun Oct 30 14:08:57 2022 +0100

        getting there...

    commit c6aab05a8450cc3c95c8691daf38fdc64c74f52d
    Author: Damian at mba <damian@frey.NOSPAMco.nz>
    Date:   Fri Oct 28 14:29:03 2022 +0200

        wip doesn't compile

    commit 5e533f731cfd20cd435330eeb0012e5689e87e81
    Author: Damian at mba <damian@frey.NOSPAMco.nz>
    Date:   Fri Oct 28 13:21:43 2022 +0200

        working with CrossAttentionControl but no Attention support yet

    commit 9678348773431e500e110e8aede99086bb7b5955
    Author: Damian at mba <damian@frey.NOSPAMco.nz>
    Date:   Fri Oct 28 13:04:52 2022 +0200

        wip rebuilding prompt parser

commit 6215592b12
Merge: ef24d76a 349cc254
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Tue Nov 1 17:34:55 2022 -0400

    Merge branch 'development' of github.com:invoke-ai/InvokeAI into development

commit 349cc25433
Author: damian0815 <d@d.com>
Date:   Tue Nov 1 20:08:52 2022 +0100

    fix crash (be a little less aggressive clearing out the attention slice)

commit 214d276379
Author: damian0815 <d@d.com>
Date:   Tue Nov 1 19:57:55 2022 +0100

    be more aggressive at clearing out saved_attn_slice

commit ef24d76adc
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Tue Nov 1 14:34:23 2022 -0400

    fix library problems in preload_modules

commit ab2b5a691d
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Tue Nov 1 17:22:48 2022 -0400

    fix model_cache memory management issues

commit c7de2b2801
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Nov 1 02:02:14 2022 +0100

    disable checks with sd-V1.4 model...
    ...to save some resources, since V1.5 is the default now

commit e8075658ac
Author: mauwii <Mauwii@outlook.de>
Date:   Mon Oct 31 22:20:51 2022 +0100

    update test-invoke-conda.yml
    - fix model dl path for sd-v1-4.ckpt
    - copy configs/models.yaml.example to configs/models.yaml

commit 4202dabee1
Author: mauwii <Mauwii@outlook.de>
Date:   Mon Oct 31 22:17:21 2022 +0100

    fix models example weights for sd-v1.4

commit d67db2bcf1
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date:   Tue Nov 1 08:35:45 2022 +1300

    [WebUI] Loopback Default False

commit 7159ec885f
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Mon Oct 31 11:33:05 2022 -0400

    further improvements to preload_models.py

    - Faster startup for command line switch processing
    - Specify configuration file to modify using --config option:

      ./scripts/preload_models.py --config models/my-models-file.yaml

commit b5cf734ba9
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Mon Oct 31 11:08:19 2022 -0400

    improve behavior of preload_models.py

    - NEVER overwrite user's existing models.yaml
    - Instead, merge its contents into new config file,
      and rename original to models.yaml.orig (with
      message)
    - models.yaml has been removed from repository and renamed
      models.yaml.example
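
A hedged sketch of the preserve-and-merge behavior listed above (not the actual preload_models.py code); assumes PyYAML and treats the shipped models.yaml.example as the base, with the user's existing entries layered on top:

```
import os
import yaml  # PyYAML, assumed available

def merge_models_config(example_path: str, user_path: str = "models.yaml") -> None:
    """Merge an existing user models.yaml into the new config without overwriting it."""
    with open(example_path) as f:
        merged = yaml.safe_load(f) or {}

    if os.path.exists(user_path):
        with open(user_path) as f:
            user_entries = yaml.safe_load(f) or {}
        merged.update(user_entries)                 # the user's own models win
        os.replace(user_path, user_path + ".orig")  # keep the original under a new name
        print(f"existing config preserved as {user_path}.orig")

    with open(user_path, "w") as f:
        yaml.safe_dump(merged, f)
```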

commit f7dc8eafee
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Mon Oct 31 10:47:35 2022 -0400

    restore models.yaml to virgin state

commit 762ca60a30
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Tue Oct 4 22:55:10 2022 -0400

    Update INPAINTING.md

commit e7fb9f342c
Author: Hideyuki Katsushiro <h.katsushiro@qualia.tokyo.jp>
Date:   Wed Oct 5 10:08:53 2022 +0900

    add argument --outdir
2022-11-12 17:17:07 +00:00
Kent Keirsey
8b47c82992 Update README.md 2022-11-06 09:21:05 -08:00
Kent Keirsey
eab435da27 Update index.md 2022-11-06 09:21:05 -08:00
Lincoln Stein
cbc029c6f9 fix Windows library dependency issues
This commit addresses two bugs:

1) invokeai.py crashes immediately with a message about an undefined
   attribute sigKILL (closes #1288). The fix is to pin torch at 1.12.1.

2) Version 1.4.2 of basicsr fails to load properly on Windows, and is
   a requirement of realesrgan, however 1.4.1 works. Pinning basicsr
   in our requirements file resulted in a dependency conflict, so I
   ended up cloning realesrgan into the invoke-ai Git space and changing
   the requirements file there.

If there is a more elegant solution, please advise.
2022-11-05 06:45:28 -07:00
Lincoln Stein
d318968abe remove --no_interactive from preload_scripts.py example (#1378) 2022-11-05 06:23:56 +01:00
Matthias Wild
e71655237a Hotfix docs (#1376) 2022-11-04 15:17:28 -04:00
Lincoln Stein
6b89adfa7e change "python3" to "python" in instructions 2022-11-03 19:22:05 -04:00
mauwii
8aa4a258f4 replace old-fashioned markdown templates with forms
this will help the readability of issues a lot 🤓
2022-11-03 16:28:06 -04:00
Lincoln Stein
174a9b78b0 Bring main back into a consistent state with other branches
- Due to misuse of rebase command, main was transiently
  in an inconsistent state.

- This repairs the damage, and adds a few post-release
  patches that ensure stable conda installs on Mac and Windows.
2022-11-03 15:44:06 -04:00
Lincoln Stein
90d37eac03 update requirements to address #1149 2022-10-18 16:00:59 -04:00
Lincoln Stein
230de023ff resolve doc conflicts during merge 2022-10-18 08:27:33 -04:00
mauwii
febf86dedf Merge branch 'fix-gh-actions' of github.com:mauwii/stable-diffusion into fix-gh-actions 2022-10-18 13:26:03 +02:00
mauwii
76ae17abac update cache steps
remove restore-keys, make keys unique
2022-10-18 13:25:51 +02:00
mauwii
339ff4b464 fix conda pkg cache name
also change content of the hashFiles function
2022-10-18 13:25:51 +02:00
mauwii
00c0e487dd move export behind the tests, upload with artifact
also switch to python between 3.9-3.10 and use conda-forge again
2022-10-18 13:25:50 +02:00
mauwii
5c8dfa38be readd pip dependency in environment-mac.yml 2022-10-18 13:25:50 +02:00
mauwii
acf85c66a5 add current branch to push trigger 2022-10-18 13:25:50 +02:00
mauwii
3619918954 rename step to export conda env 2022-10-18 13:25:50 +02:00
mauwii
65b14683a8 unpin conda package versions in environment.yml 2022-10-18 13:25:50 +02:00
mauwii
f4fc02a3da switch to default channel in environment-mac.yml 2022-10-18 13:25:50 +02:00
mauwii
c334170a93 pin versions only for pip packages 2022-10-18 13:25:50 +02:00
mauwii
deab6c64fc export conda env instead of only print versions 2022-10-18 13:25:50 +02:00
mauwii
e1c9503951 list conda packages after activating env
also want to show how much faster it will run now with cached pkgs
2022-10-18 13:25:50 +02:00
mauwii
9a21812bf5 revert changes to environment.yml
@tildebyte this would not have been pointed out without PR-Validation
2022-10-18 13:25:50 +02:00
mauwii
347b5ce452 fix expression 2022-10-18 13:25:50 +02:00
mauwii
b39029521b use very short validation for Pull Requests 2022-10-18 13:25:49 +02:00
mauwii
97b26f3de2 remove doubled checkpoint cache 2022-10-18 13:25:49 +02:00
mauwii
e19a7a990d unpin versions in environment
as asked by @tildebyte
2022-10-18 13:25:49 +02:00
mauwii
3e424e1046 remove pip from dependencies 2022-10-18 13:25:49 +02:00
mauwii
db20b4af9c remove pr trigger 2022-10-18 13:25:15 +02:00
Matthias Wild
44ff8f8531 squash merge update-gh-actions into fix-gh-actions
* fix mkdocs deployment

* update path to python bin

* add trigger for current branch

* change path to python_bin for mac as well

* try to use setup-python@v4 instead of setting env

* remove setup conda action

* try to use $CONDA

* remove overseen action

* change branch from master to main

* sort out if then else for faster syntax

* remove more if functions

* add updates to create-caches as well

* eliminate the rest of if functions

* try to unpin pytorch and torchvision

* restore pinned versions

* try switching from set-output to use env

* update test-invoke-conda as well

* fix env var creation

* quote variable

* add second equal to compare

* try another way to use outputs

* fix outputs

* pip install for mac before creating conda env

* fix output variable

* fix python bin path

* remove pip install for before creating conda env

* unpin streamlit version in conda mac env

* try to make git-workflows better readable

* remove 4gotten trigger

* Update-gh-actions (#6)

* fix mkdocs deployment

* update path to python bin

* add trigger for current branch

* change path to python_bin for mac as well

* try to use setup-python@v4 instead of setting env

* remove setup conda action

* try to use $CONDA

* remove overseen action

* change branch from master to main

* sort out if then else for faster syntax

* remove more if functions

* add updates to create-caches as well

* eliminate the rest of if functions

* try to unpin pytorch and torchvision

* restore pinned versions

* try switching from set-output to use env

* update test-invoke-conda as well

* fix env var creation

* quote variable

* add second equal to compare

* try another way to use outputs

* fix outputs

* pip install for mac before creating conda env

* fix output variable

* fix python bin path

* remove pip install for before creating conda env

* unpin streamlit version in conda mac env

* try to make git-workflows better readable

* use macos-latest

* try to update conda before creating mac env

* better conda update trial

* re-pin streamlit version

* re-added trigger to run workflow in current branch

* try to find out if conda mac env could be updated

* install cmake, protobuf and rust b4 conda

* add yes to conda update

* lets try anaconda3-2022.05

* try environment.yml for mac as well

* reenable conda mac env, add pip install
also fix gitignore by changing from dream to invoke

* remove
- unnecessary virtualenv creation
- conda update

change != macos back to == linux

* remove cmake from brew install since pre-installed

* disable opencv-python pip requirement

* fixed commands to find latest package versions

* update requirements for mac env

* back to the roots - only install conda env
depending on runner_os with or without extra env variables

* check out macOS in azure-pipelines
since becoming kind of tired of the GitHub Runner which is broken as ...

* let's try to setup python and update conda env

* initialize conda before using it

* add trigger in azure-pipelines.yml

* And another go for update first ....

* update azure-pipelines.yml
- add caching
- add checkpoint download
- add paths to trigger
and more

* unquote checkpoint-url

* fix checkpoint-url variable

* mkdir before downloading model

* set pr trigger to main, rename anaconda cache

* unique cacheHitVariables

* try to use macos-latest instead of macos-12

* update test-invoke-conda.yml:
- remove unnecessary echo step
- use s-weigand/setup-conda@v1
- remove conda update from install deps step since updated with action

* update test-invoke-conda.yml:
- rename conda env cache from ldm to invokeai
- reorder steps:
  1. checkout sources
  2. setup python
  3. setup conda
  4. keep order after set platform variables

* change macos back to 12 since also fails with 11

* update condition in run the tests
make difference between main or not main

* fix path to cache invokeai conda env

* fix invokeai conda env cache path

* update mkdocs-flow.yml

* change conda-channel priority

* update create-caches

* update conda env also when cache was used

* os-dependent conda env cache path

* use existing CONDA env pointing to conda root

* create CONDA_ROOT output from $CONDA

* use output variable to define test prompts

* use setup-python v4, get rid of PYTHON_BIN env

* add runner.os to result artifacts name

* update test-invoke-conda.yml:
- reuse macos-latest
- disable setup python 3.9
- setup conda with default python version
- create or update conda environment depending on cache success
- remove name parameter from conda update since name is set in env yml

* improve mkdocs-flow.yml

* disable cache-hugginface-torch
since preload_models.py downloads to more than one location

* update mkdocs-flow.yml with new name

* rename mkdocs action to mkdocs-material

* try to ignore error when creating conda env
maybe it would still be usable, let's see ;P

* remove bloat

* update environment-mac.yml
to match dependencies of invoke-ai/InvokeAI's main branch

* disable conda update, tweak prompt condition

* try to set some env vars for macOS to fix conda

* stop ignoring error, use env instead of outputs

* tweak `[[` conditions

* update python and pip dependency
makes a difference of 1 sec per iteration compared to 3.9!!!
also I see no reason why using an old pip version would be beneficial

* remove unnecessary env for macOS
everything was pre-tested on my MacBook Air 2020 with M1

* update conda env in setup step

* activate conda env after installation

* update test-invoke-conda.yml
- set conda env dependent on matrix.os
- set CONDA_ENV_NAME to prevent breaking action when renaming conda env
- fix conda env activation

* fix activate conda env

* set bash -l as default shell

* use action to activate conda env

* add conda env file to env activation

* try to replace s-weigand with conda-incubator

* remove azure-pipelines.yml
funniest part is that the macos runner is the same as the one on github!

* include environment-file in matrix
- also disable auto-activate-base and auto-update-conda
- include macos-latest and macos-12 for debugging purpose
- set miniforge-version in matrix

* fix miniforge-variant, set fail-fast to false

* add step to setup miniconda
- make default shell a matrix variable
- remove bloat

* use a mac env yml without pinned versions

* unpin nomkl, pytorch and torchvision
also removed opencv-python

* cache conda pkgs dir instead of conda env

* use python 3.10, exclude macos-12 from cache

* fix expression

* prepare for PR

* fix doubled id

* reuse pinned versions in mac conda env
- updated python pip version
- unpinned pytorch and torchvision
- removed opencv-python
- updated versions to most recent (tested locally)

* fix classical copy/paste error

* remove unused env from shell-block comment

* fix hashFiles function to determine restore-keys

* reenable caching `~.cache`, update create-caches

* unpin all versions in mac conda env file
this was the only way I got it working in the action, also works locally
tested on MacBook Air 2020 M1
remove environment-mac-unpinned.yml

* prepare merge by removing this branch from trigger

* include pull_request trigger for main and dev

* remove pull_request trigger
2022-10-18 13:25:15 +02:00
mauwii
a8b794d7e0 update precision info 2022-10-17 22:27:27 -04:00
mauwii
f868362ca8 fix prompt in README.md 2022-10-17 22:27:27 -04:00
mauwii
8858f7e97c (re-) fix a lot in mkdocs 2022-10-17 22:27:27 -04:00
Matthias Wild
2db4969e18 Merge branch 'main' into fix-gh-actions 2022-10-17 23:41:36 +02:00
mauwii
2ecc1abf21 fix links to point to invoke-ai.github.io 2022-10-17 17:40:31 -04:00
mauwii
703bc9494a Merge remote-tracking branch 'upstream/main' into fix-gh-actions-fork 2022-10-17 21:40:16 +02:00
Lincoln Stein
e5ab07091d adding license using GitHub template
Did not attempt to add additional copyright information.
2022-10-17 12:09:24 -04:00
Lincoln Stein
891678b656 remove license files temporarily 2022-10-17 12:08:09 -04:00
Lincoln Stein
39ea2a257c remove additional copyrights from license file
Trying to get GitHub to recognize our MIT license. Perhaps the additional copyrights are confusing it.
2022-10-17 12:07:00 -04:00
Lincoln Stein
2d68eae16b Second try at getting GitHub to register license 2022-10-17 12:05:42 -04:00
spezialspezial
d65948c423 Update gitignore to ignore codeformer weights at new location
Eventually making it slightly more flexible
2022-10-17 11:54:45 -04:00
majick
9910a0b004 Fix broken links to CLI.md
* Looks like there was a bad paste
2022-10-16 23:40:27 -04:00
majick
ff96358cb3 Correct typo in the subtitle of the project
* “Formally” means that there is a formality such as a rule or declaration, “formerly” refers to a prior state.  The latter is almost certainly what is meant here.
2022-10-16 23:40:27 -04:00
mauwii
edf471f655 update cache steps
remove restore-keys, make keys unique
2022-10-17 04:43:06 +02:00
mauwii
5b02c8ca4a fix conda pkg cache name
also change content of hashFile-function
2022-10-17 04:02:38 +02:00
mauwii
e7688c53b8 move export behind the tests, upload with artifact
also switch to python between 3.9-3.10 and use conda-forge again
2022-10-17 03:27:15 +02:00
mauwii
87cada42db re-add pip dependency in environment-mac.yml 2022-10-17 02:22:19 +02:00
mauwii
6fe67ee426 add current branch to push trigger 2022-10-17 02:12:46 +02:00
mauwii
5fbc81885a rename step to export conda env 2022-10-17 02:08:08 +02:00
mauwii
25ba5451f2 unpin conda package versions in environment.yml 2022-10-17 02:07:17 +02:00
mauwii
138c9cf7a8 switch to default channel in environment-mac.yml 2022-10-17 02:05:59 +02:00
mauwii
87981306a3 pin versions only for pip packages 2022-10-17 01:50:19 +02:00
mauwii
f7893b3ea9 export conda env instead of only print versions 2022-10-17 01:48:22 +02:00
mauwii
87395fe6fe list conda packages after activating env
also want to show how much faster it will run now with cached pkgs
2022-10-16 22:48:53 +02:00
mauwii
15f876c66c revert changes to environment.yml
@tildebyte this would not have been pointed out without PR-Validation
2022-10-16 22:02:58 +02:00
mauwii
522c35ac5b fix expression 2022-10-16 21:52:49 +02:00
mauwii
bb2d6d640f use very short validation for Pull Requests 2022-10-16 21:50:57 +02:00
mauwii
2412d8dec1 remove doubled checkpoint cache 2022-10-16 20:53:07 +02:00
mauwii
2ab5a43663 unpin versions in environment
as asked by @tildebyte
2022-10-16 20:48:31 +02:00
mauwii
0ec3d6c10a remove pip from dependencies 2022-10-16 20:36:33 +02:00
mauwii
d208e1b0f5 Merge branch 'fix-gh-actions' of github.com:mauwii/stable-diffusion into fix-gh-actions 2022-10-16 20:35:57 +02:00
mauwii
8a6ba6a212 remove pr trigger 2022-10-16 13:56:45 -04:00
Matthias Wild
b793d69ff3 squash merge update-gh-actions into fix-gh-actions
2022-10-16 13:56:45 -04:00
mauwii
54f55471df remove pr trigger 2022-10-16 19:34:31 +02:00
Matthias Wild
cec7fb7dc6 squash merge update-gh-actions into fix-gh-actions
2022-10-16 19:19:49 +02:00
Lincoln Stein
b0b82efffe restore inline images
<div> around the inline images works great in gh-pages, but breaks plain old markdown in GitHub code display. This removes the <div>s, causing slight degradation in quality of gh-page appearance.
2022-10-16 12:07:21 -04:00
Lincoln Stein
e599604294 restore inline images
<div> seems to be messing with the ability of the plain-old markdown processor to display inline images. Slightly degrades appearance of gh-pages.
2022-10-16 12:05:33 -04:00
Eric Wolf
57a3ea9d7b Update 'ldm' env to 'invokeai' in troubleshooting steps 2022-10-16 11:23:00 -04:00
Conor Reid
a3a50bb886 Update generate.py
Fixed spelling mistake (open source king)
2022-10-15 16:02:14 -04:00
1413 changed files with 308346 additions and 59787 deletions


@@ -1,3 +1,25 @@
# use this file as a whitelist
*
!environment*.yml
!docker-build
!invokeai
!ldm
!pyproject.toml
# ignore frontend/web but whitelist dist
invokeai/frontend/web/
!invokeai/frontend/web/dist/
# ignore invokeai/assets but whitelist invokeai/assets/web
invokeai/assets/
!invokeai/assets/web/
# Guard against pulling in any models that might exist in the directory tree
**/*.pt*
**/*.ckpt
# Byte-compiled / optimized / DLL files
**/__pycache__/
**/*.py[cod]
# Distribution / packaging
**/*.egg-info/
**/*.egg

12
.editorconfig Normal file

@@ -0,0 +1,12 @@
# All files
[*]
charset = utf-8
end_of_line = lf
indent_size = 2
indent_style = space
insert_final_newline = true
trim_trailing_whitespace = true
# Python
[*.py]
indent_size = 4

1
.git-blame-ignore-revs Normal file

@@ -0,0 +1 @@
b3dccfaeb636599c02effc377cdd8a87d658256c

2
.gitattributes vendored

@@ -1,4 +1,4 @@
# Auto normalizes line endings on commit so devs don't need to change local settings.
# Only affects text files and ignores other file types.
# Only affects text files and ignores other file types.
# For more info see: https://www.aleksandrhovhannisyan.com/blog/crlf-vs-lf-normalizing-line-endings-in-git/
* text=auto

38
.github/CODEOWNERS vendored

@@ -1,4 +1,34 @@
ldm/invoke/pngwriter.py @CapableWeb
ldm/invoke/server_legacy.py @CapableWeb
scripts/legacy_api.py @CapableWeb
tests/legacy_tests.sh @CapableWeb
# continuous integration
/.github/workflows/ @lstein @blessedcoolant
# documentation
/docs/ @lstein @blessedcoolant @hipsterusername
/mkdocs.yml @lstein @blessedcoolant
# nodes
/invokeai/app/ @Kyle0654 @blessedcoolant
# installation and configuration
/pyproject.toml @lstein @blessedcoolant
/docker/ @lstein @blessedcoolant
/scripts/ @ebr @lstein
/installer/ @lstein @ebr
/invokeai/assets @lstein @ebr
/invokeai/configs @lstein
/invokeai/version @lstein @blessedcoolant
# web ui
/invokeai/frontend @blessedcoolant @psychedelicious @lstein @maryhipp
/invokeai/backend @blessedcoolant @psychedelicious @lstein @maryhipp
# generation, model management, postprocessing
/invokeai/backend @damian0815 @lstein @blessedcoolant @jpphoto @gregghelt2 @StAlKeR7779
# front ends
/invokeai/frontend/CLI @lstein
/invokeai/frontend/install @lstein @ebr
/invokeai/frontend/merge @lstein @blessedcoolant
/invokeai/frontend/training @lstein @blessedcoolant
/invokeai/frontend/web @psychedelicious @blessedcoolant @maryhipp

112
.github/ISSUE_TEMPLATE/BUG_REPORT.yml vendored Normal file

@@ -0,0 +1,112 @@
name: 🐞 Bug Report
description: File a bug report
title: '[bug]: '
labels: ['bug']
# assignees:
# - moderator_bot
# - lstein
body:
- type: markdown
attributes:
value: |
Thanks for taking the time to fill out this Bug Report!
- type: checkboxes
attributes:
label: Is there an existing issue for this?
description: |
Please use the [search function](https://github.com/invoke-ai/InvokeAI/issues?q=is%3Aissue+is%3Aopen+label%3Abug)
first to see if an issue already exists for the bug you encountered.
options:
- label: I have searched the existing issues
required: true
- type: markdown
attributes:
value: __Describe your environment__
- type: dropdown
id: os_dropdown
attributes:
label: OS
description: Which operating system did you use when the bug occurred
multiple: false
options:
- 'Linux'
- 'Windows'
- 'macOS'
validations:
required: true
- type: dropdown
id: gpu_dropdown
attributes:
label: GPU
description: Which kind of graphics adapter is your system using
multiple: false
options:
- 'cuda'
- 'amd'
- 'mps'
- 'cpu'
validations:
required: true
- type: input
id: vram
attributes:
label: VRAM
description: Size of the VRAM if known
placeholder: 8GB
validations:
required: false
- type: input
id: version-number
attributes:
label: What version did you experience this issue on?
description: |
Please share the version of Invoke AI that you experienced the issue on. If this is not the latest version, please update first to confirm the issue still exists. If you are testing main, please include the commit hash instead.
placeholder: X.X.X
validations:
required: true
- type: textarea
id: what-happened
attributes:
label: What happened?
description: |
Briefly describe what happened, what you expected to happen and how to reproduce this bug.
placeholder: When using the web interface and right-clicking on button X, instead of the popup menu the error Y appears
validations:
required: true
- type: textarea
attributes:
label: Screenshots
description: If applicable, add screenshots to help explain your problem
placeholder: this is what the result looked like <screenshot>
validations:
required: false
- type: textarea
attributes:
label: Additional context
description: Add any other context about the problem here
placeholder: Only happens when there is full moon and Friday the 13th on Christmas Eve 🎅🏻
validations:
required: false
- type: input
id: contact
attributes:
label: Contact Details
description: __OPTIONAL__ How can we get in touch with you if we need more info (besides this issue)?
placeholder: ex. email@example.com, discordname, twitter, ...
validations:
required: false


@@ -0,0 +1,56 @@
name: Feature Request
description: Submit an idea or request a new feature
title: '[enhancement]: '
labels: ['enhancement']
# assignees:
# - lstein
# - tildebyte
body:
- type: markdown
attributes:
value: |
Thanks for taking the time to fill out this Feature request!
- type: checkboxes
attributes:
label: Is there an existing issue for this?
description: |
Please make use of the [search function](https://github.com/invoke-ai/InvokeAI/labels/enhancement)
to see if a similar issue already exists for the feature you want to request
options:
- label: I have searched the existing issues
required: true
- type: input
id: contact
attributes:
label: Contact Details
description: __OPTIONAL__ How could we get in touch with you if we need more info (besides this issue)?
placeholder: ex. email@example.com, discordname, twitter, ...
validations:
required: false
- type: textarea
id: whatisexpected
attributes:
label: What should this feature add?
description: Please try to explain the functionality this feature should add
placeholder: |
Instead of one huge textfield, it would be nice to have forms for bug-reports, feature-requests, ...
Great benefits with automatic labeling, assigning and other functionalities not available in that form
via old-fashioned markdown-templates. I would also love to see the use of a moderator bot 🤖 like
https://github.com/marketplace/actions/issue-moderator-with-commands to auto close old issues and other things
validations:
required: true
- type: textarea
attributes:
label: Alternatives
description: Describe alternatives you've considered
placeholder: A clear and concise description of any alternative solutions or features you've considered.
- type: textarea
attributes:
label: Additional Content
description: Add any other context or screenshots about the feature request here.
placeholder: This is a Mockup of the design how I imagine it <screenshot>


@@ -1,36 +0,0 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: ''
assignees: ''
---
**Describe your environment**
- GPU: [cuda/amd/mps/cpu]
- VRAM: [if known]
- CPU arch: [x86/arm]
- OS: [Linux/Windows/macOS]
- Python: [Anaconda/miniconda/miniforge/pyenv/other (explain)]
- Branch: [if `git status` says anything other than "On branch main" paste it here]
- Commit: [run `git show` and paste the line that starts with "Merge" here]
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Additional context**
Add any other context about the problem here.

14
.github/ISSUE_TEMPLATE/config.yml vendored Normal file

@@ -0,0 +1,14 @@
blank_issues_enabled: false
contact_links:
- name: Project-Documentation
url: https://invoke-ai.github.io/InvokeAI/
about: Should be your first place to go when looking for manuals/FAQs regarding our InvokeAI Toolkit
- name: Discord
url: https://discord.gg/ZmtBAhwWhy
about: Our Discord Community could maybe help you out via live-chat
- name: GitHub Community Support
url: https://github.com/orgs/community/discussions
about: Please ask and answer questions regarding the GitHub Platform here.
- name: GitHub Security Bug Bounty
url: https://bounty.github.com/
about: Please report security vulnerabilities of the GitHub Platform here.


@@ -1,20 +0,0 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: ''
assignees: ''
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.

19
.github/stale.yaml vendored Normal file

@@ -0,0 +1,19 @@
# Number of days of inactivity before an issue becomes stale
daysUntilStale: 28
# Number of days of inactivity before a stale issue is closed
daysUntilClose: 14
# Issues with these labels will never be considered stale
exemptLabels:
- pinned
- security
# Label to use when marking an issue as stale
staleLabel: stale
# Comment to post when marking an issue as stale. Set to `false` to disable
markComment: >
This issue has been automatically marked as stale because it has not had
recent activity. It will be closed if no further activity occurs. Please
update the ticket if this is still a problem on the latest release.
# Comment to post when closing a stale issue. Set to `false` to disable
closeComment: >
Due to inactivity, this issue has been automatically closed. If this is
still a problem on the latest release, please recreate the issue.


@@ -1,42 +1,114 @@
# Building the image without pushing to confirm it is still buildable
# confirming functionality would unfortunately need way more resources
name: build container image
on:
push:
branches:
- 'main'
- 'development'
pull_request:
branches:
- 'main'
- 'development'
- 'update/ci/docker/*'
- 'update/docker/*'
- 'dev/ci/docker/*'
- 'dev/docker/*'
paths:
- 'pyproject.toml'
- '.dockerignore'
- 'invokeai/**'
- 'docker/Dockerfile'
tags:
- 'v*.*.*'
workflow_dispatch:
permissions:
contents: write
packages: write
jobs:
docker:
if: github.event.pull_request.draft == false
strategy:
fail-fast: false
matrix:
flavor:
- rocm
- cuda
- cpu
include:
- flavor: rocm
pip-extra-index-url: 'https://download.pytorch.org/whl/rocm5.2'
- flavor: cuda
pip-extra-index-url: ''
- flavor: cpu
pip-extra-index-url: 'https://download.pytorch.org/whl/cpu'
runs-on: ubuntu-latest
name: ${{ matrix.flavor }}
env:
PLATFORMS: 'linux/amd64,linux/arm64'
DOCKERFILE: 'docker/Dockerfile'
steps:
- name: prepare docker-tag
env:
repository: ${{ github.repository }}
run: echo "dockertag=${repository,,}" >> $GITHUB_ENV
- name: Checkout
uses: actions/checkout@v3
- name: Docker meta
id: meta
uses: docker/metadata-action@v4
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
images: |
ghcr.io/${{ github.repository }}
${{ vars.DOCKERHUB_REPOSITORY }}
tags: |
type=ref,event=branch
type=ref,event=tag
type=pep440,pattern={{version}}
type=pep440,pattern={{major}}.{{minor}}
type=pep440,pattern={{major}}
type=sha,enable=true,prefix=sha-,format=short
flavor: |
latest=${{ matrix.flavor == 'cuda' && github.ref == 'refs/heads/main' }}
suffix=-${{ matrix.flavor }},onlatest=false
- name: Set up QEMU
uses: docker/setup-qemu-action@v2
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: Cache Docker layers
uses: actions/cache@v2
with:
path: /tmp/.buildx-cache
key: buildx-${{ hashFiles('docker-build/Dockerfile') }}
platforms: ${{ env.PLATFORMS }}
- name: Login to GitHub Container Registry
if: github.event_name != 'pull_request'
uses: docker/login-action@v2
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Login to Docker Hub
if: github.event_name != 'pull_request' && vars.DOCKERHUB_REPOSITORY != ''
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Build container
uses: docker/build-push-action@v3
id: docker_build
uses: docker/build-push-action@v4
with:
context: .
file: docker-build/Dockerfile
platforms: linux/amd64
push: false
tags: ${{ env.dockertag }}:latest
cache-from: type=local,src=/tmp/.buildx-cache
cache-to: type=local,dest=/tmp/.buildx-cache
file: ${{ env.DOCKERFILE }}
platforms: ${{ env.PLATFORMS }}
push: ${{ github.ref == 'refs/heads/main' || github.ref_type == 'tag' }}
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
build-args: PIP_EXTRA_INDEX_URL=${{ matrix.pip-extra-index-url }}
cache-from: |
type=gha,scope=${{ github.ref_name }}-${{ matrix.flavor }}
type=gha,scope=main-${{ matrix.flavor }}
cache-to: type=gha,mode=max,scope=${{ github.ref_name }}-${{ matrix.flavor }}
- name: Docker Hub Description
if: github.ref == 'refs/heads/main' || github.ref == 'refs/tags/*' && vars.DOCKERHUB_REPOSITORY != ''
uses: peter-evans/dockerhub-description@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
repository: ${{ vars.DOCKERHUB_REPOSITORY }}
short-description: ${{ github.event.repository.description }}

34
.github/workflows/clean-caches.yml vendored Normal file

@@ -0,0 +1,34 @@
name: cleanup caches by a branch
on:
pull_request:
types:
- closed
workflow_dispatch:
jobs:
cleanup:
runs-on: ubuntu-latest
steps:
- name: Check out code
uses: actions/checkout@v3
- name: Cleanup
run: |
gh extension install actions/gh-actions-cache
REPO=${{ github.repository }}
BRANCH=${{ github.ref }}
echo "Fetching list of cache key"
cacheKeysForPR=$(gh actions-cache list -R $REPO -B $BRANCH | cut -f 1 )
## Setting this to not fail the workflow while deleting cache keys.
set +e
echo "Deleting caches..."
for cacheKey in $cacheKeysForPR
do
gh actions-cache delete $cacheKey -R $REPO -B $BRANCH --confirm
done
echo "Done"
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}


@@ -0,0 +1,27 @@
name: Close inactive issues
on:
schedule:
- cron: "00 6 * * *"
env:
DAYS_BEFORE_ISSUE_STALE: 14
DAYS_BEFORE_ISSUE_CLOSE: 28
jobs:
close-issues:
runs-on: ubuntu-latest
permissions:
issues: write
pull-requests: write
steps:
- uses: actions/stale@v5
with:
days-before-issue-stale: ${{ env.DAYS_BEFORE_ISSUE_STALE }}
days-before-issue-close: ${{ env.DAYS_BEFORE_ISSUE_CLOSE }}
stale-issue-label: "Inactive Issue"
stale-issue-message: "There has been no activity in this issue for ${{ env.DAYS_BEFORE_ISSUE_STALE }} days. If this issue is still being experienced, please reply with an updated confirmation that the issue is still being experienced with the latest release."
close-issue-message: "Due to inactivity, this issue was automatically closed. If you are still experiencing the issue, please recreate the issue."
days-before-pr-stale: -1
days-before-pr-close: -1
repo-token: ${{ secrets.GITHUB_TOKEN }}
operations-per-run: 500


@@ -1,80 +0,0 @@
name: Create Caches
on: workflow_dispatch
jobs:
os_matrix:
strategy:
matrix:
os: [ubuntu-latest, macos-latest]
include:
- os: ubuntu-latest
environment-file: environment.yml
default-shell: bash -l {0}
- os: macos-latest
environment-file: environment-mac.yml
default-shell: bash -l {0}
name: Test invoke.py on ${{ matrix.os }} with conda
runs-on: ${{ matrix.os }}
defaults:
run:
shell: ${{ matrix.default-shell }}
steps:
- name: Checkout sources
uses: actions/checkout@v3
- name: setup miniconda
uses: conda-incubator/setup-miniconda@v2
with:
auto-activate-base: false
auto-update-conda: false
miniconda-version: latest
- name: set environment
run: |
[[ "$GITHUB_REF" == 'refs/heads/main' ]] \
&& echo "TEST_PROMPTS=tests/preflight_prompts.txt" >> $GITHUB_ENV \
|| echo "TEST_PROMPTS=tests/dev_prompts.txt" >> $GITHUB_ENV
echo "CONDA_ROOT=$CONDA" >> $GITHUB_ENV
echo "CONDA_ENV_NAME=invokeai" >> $GITHUB_ENV
- name: Use Cached Stable Diffusion v1.4 Model
id: cache-sd-v1-4
uses: actions/cache@v3
env:
cache-name: cache-sd-v1-4
with:
path: models/ldm/stable-diffusion-v1/model.ckpt
key: ${{ env.cache-name }}
restore-keys: ${{ env.cache-name }}
- name: Download Stable Diffusion v1.4 Model
if: ${{ steps.cache-sd-v1-4.outputs.cache-hit != 'true' }}
run: |
[[ -d models/ldm/stable-diffusion-v1 ]] \
|| mkdir -p models/ldm/stable-diffusion-v1
[[ -r models/ldm/stable-diffusion-v1/model.ckpt ]] \
|| curl \
-H "Authorization: Bearer ${{ secrets.HUGGINGFACE_TOKEN }}" \
-o models/ldm/stable-diffusion-v1/model.ckpt \
-L https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt
- name: Activate Conda Env
uses: conda-incubator/setup-miniconda@v2
with:
activate-environment: ${{ env.CONDA_ENV_NAME }}
environment-file: ${{ matrix.environment-file }}
- name: Use Cached Huggingface and Torch models
id: cache-hugginface-torch
uses: actions/cache@v3
env:
cache-name: cache-hugginface-torch
with:
path: ~/.cache
key: ${{ env.cache-name }}
restore-keys: |
${{ env.cache-name }}-${{ hashFiles('scripts/preload_models.py') }}
- name: run preload_models.py
run: python scripts/preload_models.py

37
.github/workflows/lint-frontend.yml vendored Normal file

@@ -0,0 +1,37 @@
name: Lint frontend
on:
pull_request:
paths:
- 'invokeai/frontend/web/**'
types:
- 'ready_for_review'
- 'opened'
- 'synchronize'
push:
branches:
- 'main'
paths:
- 'invokeai/frontend/web/**'
merge_group:
workflow_dispatch:
defaults:
run:
working-directory: invokeai/frontend/web
jobs:
lint-frontend:
if: github.event.pull_request.draft == false
runs-on: ubuntu-22.04
steps:
- name: Setup Node 18
uses: actions/setup-node@v3
with:
node-version: '18'
- uses: actions/checkout@v3
- run: 'yarn install --frozen-lockfile'
- run: 'yarn run lint:tsc'
- run: 'yarn run lint:madge'
- run: 'yarn run lint:eslint'
- run: 'yarn run lint:prettier'


@@ -1,28 +0,0 @@
name: Deploy
on:
push:
branches:
- main
# pull_request:
# branches:
# - main
jobs:
build:
name: Deploy docs to GitHub Pages
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Build
uses: Tiryoh/actions-mkdocs@v0
with:
mkdocs_version: 'latest' # option
requirements: '/requirements-mkdocs.txt' # option
configfile: '/mkdocs.yml' # option
- name: Deploy
uses: peaceiris/actions-gh-pages@v3
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
publish_dir: ./site


@@ -2,13 +2,19 @@ name: mkdocs-material
on:
push:
branches:
- 'main'
- 'development'
- 'release-candidate-2-1'
- 'refs/heads/v2.3'
permissions:
contents: write
jobs:
mkdocs-material:
if: github.event.pull_request.draft == false
runs-on: ubuntu-latest
env:
REPO_URL: '${{ github.server_url }}/${{ github.repository }}'
REPO_NAME: '${{ github.repository }}'
SITE_URL: 'https://${{ github.repository_owner }}.github.io/InvokeAI'
steps:
- name: checkout sources
uses: actions/checkout@v3
@@ -19,11 +25,15 @@ jobs:
uses: actions/setup-python@v4
with:
python-version: '3.10'
cache: pip
cache-dependency-path: pyproject.toml
- name: install requirements
env:
PIP_USE_PEP517: 1
run: |
python -m \
pip install -r requirements-mkdocs.txt
pip install ".[docs]"
- name: confirm buildability
run: |
@@ -33,7 +43,7 @@ jobs:
--verbose
- name: deploy to gh-pages
if: ${{ github.ref == 'refs/heads/main' }}
if: ${{ github.ref == 'refs/heads/v2.3' }}
run: |
python -m \
mkdocs gh-deploy \

20
.github/workflows/pyflakes.yml vendored Normal file

@@ -0,0 +1,20 @@
on:
pull_request:
push:
branches:
- main
- development
- 'release-candidate-*'
jobs:
pyflakes:
name: runner / pyflakes
if: github.event.pull_request.draft == false
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: pyflakes
uses: reviewdog/action-pyflakes@v1
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
reporter: github-pr-review

41
.github/workflows/pypi-release.yml vendored Normal file

@@ -0,0 +1,41 @@
name: PyPI Release
on:
push:
paths:
- 'invokeai/version/invokeai_version.py'
workflow_dispatch:
jobs:
release:
if: github.repository == 'invoke-ai/InvokeAI'
runs-on: ubuntu-22.04
env:
TWINE_USERNAME: __token__
TWINE_PASSWORD: ${{ secrets.PYPI_API_TOKEN }}
TWINE_NON_INTERACTIVE: 1
steps:
- name: checkout sources
uses: actions/checkout@v3
- name: install deps
run: pip install --upgrade build twine
- name: build package
run: python3 -m build
- name: check distribution
run: twine check dist/*
- name: check PyPI versions
if: github.ref == 'refs/heads/main' || github.ref == 'refs/heads/v2.3'
run: |
pip install --upgrade requests
python -c "\
import scripts.pypi_helper; \
EXISTS=scripts.pypi_helper.local_on_pypi(); \
print(f'PACKAGE_EXISTS={EXISTS}')" >> $GITHUB_ENV
- name: upload package
if: env.PACKAGE_EXISTS == 'False' && env.TWINE_PASSWORD != ''
run: twine upload dist/*


@@ -1,112 +0,0 @@
name: Test invoke.py
on:
push:
branches:
- 'main'
- 'development'
pull_request:
branches:
- 'main'
- 'development'
jobs:
matrix:
strategy:
fail-fast: false
matrix:
stable-diffusion-model:
# - 'https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt'
- 'https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt'
os:
- ubuntu-latest
- macOS-12
include:
- os: ubuntu-latest
environment-file: environment.yml
default-shell: bash -l {0}
- os: macOS-12
environment-file: environment-mac.yml
default-shell: bash -l {0}
# - stable-diffusion-model: https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt
# stable-diffusion-model-dl-path: models/ldm/stable-diffusion-v1/sd-v1-4.ckpt
# stable-diffusion-model-switch: stable-diffusion-1.4
- stable-diffusion-model: https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt
stable-diffusion-model-dl-path: models/ldm/stable-diffusion-v1/v1-5-pruned-emaonly.ckpt
stable-diffusion-model-switch: stable-diffusion-1.5
name: ${{ matrix.os }} with ${{ matrix.stable-diffusion-model-switch }}
runs-on: ${{ matrix.os }}
env:
CONDA_ENV_NAME: invokeai
defaults:
run:
shell: ${{ matrix.default-shell }}
steps:
- name: Checkout sources
id: checkout-sources
uses: actions/checkout@v3
- name: create models.yaml from example
run: cp configs/models.yaml.example configs/models.yaml
- name: Use cached conda packages
id: use-cached-conda-packages
uses: actions/cache@v3
with:
path: ~/conda_pkgs_dir
key: conda-pkgs-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles(matrix.environment-file) }}
- name: Activate Conda Env
id: activate-conda-env
uses: conda-incubator/setup-miniconda@v2
with:
activate-environment: ${{ env.CONDA_ENV_NAME }}
environment-file: ${{ matrix.environment-file }}
miniconda-version: latest
- name: set test prompt to main branch validation
if: ${{ github.ref == 'refs/heads/main' }}
run: echo "TEST_PROMPTS=tests/preflight_prompts.txt" >> $GITHUB_ENV
- name: set test prompt to development branch validation
if: ${{ github.ref == 'refs/heads/development' }}
run: echo "TEST_PROMPTS=tests/dev_prompts.txt" >> $GITHUB_ENV
- name: set test prompt to Pull Request validation
if: ${{ github.ref != 'refs/heads/main' && github.ref != 'refs/heads/development' }}
run: echo "TEST_PROMPTS=tests/validate_pr_prompt.txt" >> $GITHUB_ENV
- name: Download ${{ matrix.stable-diffusion-model-switch }}
id: download-stable-diffusion-model
run: |
[[ -d models/ldm/stable-diffusion-v1 ]] \
|| mkdir -p models/ldm/stable-diffusion-v1
curl \
-H "Authorization: Bearer ${{ secrets.HUGGINGFACE_TOKEN }}" \
-o ${{ matrix.stable-diffusion-model-dl-path }} \
-L ${{ matrix.stable-diffusion-model }}
- name: run preload_models.py
id: run-preload-models
run: |
python scripts/preload_models.py \
--no-interactive
- name: Run the tests
id: run-tests
run: |
time python scripts/invoke.py \
--model ${{ matrix.stable-diffusion-model-switch }} \
--from_file ${{ env.TEST_PROMPTS }}
- name: export conda env
id: export-conda-env
run: |
mkdir -p outputs/img-samples
conda env export --name ${{ env.CONDA_ENV_NAME }} > outputs/img-samples/environment-${{ runner.os }}-${{ runner.arch }}.yml
- name: Archive results
id: archive-results
uses: actions/upload-artifact@v3
with:
name: results_${{ matrix.os }}_${{ matrix.stable-diffusion-model-switch }}
path: outputs/img-samples


@@ -0,0 +1,66 @@
name: Test invoke.py pip
on:
pull_request:
paths:
- '**'
- '!pyproject.toml'
- '!invokeai/**'
- 'invokeai/frontend/web/**'
merge_group:
workflow_dispatch:
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
matrix:
if: github.event.pull_request.draft == false
strategy:
matrix:
python-version:
# - '3.9'
- '3.10'
pytorch:
# - linux-cuda-11_6
- linux-cuda-11_7
- linux-rocm-5_2
- linux-cpu
- macos-default
- windows-cpu
# - windows-cuda-11_6
# - windows-cuda-11_7
include:
# - pytorch: linux-cuda-11_6
# os: ubuntu-22.04
# extra-index-url: 'https://download.pytorch.org/whl/cu116'
# github-env: $GITHUB_ENV
- pytorch: linux-cuda-11_7
os: ubuntu-22.04
github-env: $GITHUB_ENV
- pytorch: linux-rocm-5_2
os: ubuntu-22.04
extra-index-url: 'https://download.pytorch.org/whl/rocm5.2'
github-env: $GITHUB_ENV
- pytorch: linux-cpu
os: ubuntu-22.04
extra-index-url: 'https://download.pytorch.org/whl/cpu'
github-env: $GITHUB_ENV
- pytorch: macos-default
os: macOS-12
github-env: $GITHUB_ENV
- pytorch: windows-cpu
os: windows-2022
github-env: $env:GITHUB_ENV
# - pytorch: windows-cuda-11_6
# os: windows-2022
# extra-index-url: 'https://download.pytorch.org/whl/cu116'
# github-env: $env:GITHUB_ENV
# - pytorch: windows-cuda-11_7
# os: windows-2022
# extra-index-url: 'https://download.pytorch.org/whl/cu117'
# github-env: $env:GITHUB_ENV
name: ${{ matrix.pytorch }} on ${{ matrix.python-version }}
runs-on: ${{ matrix.os }}
steps:
- run: 'echo "No build required"'

139
.github/workflows/test-invoke-pip.yml vendored Normal file

@@ -0,0 +1,139 @@
name: Test invoke.py pip
on:
push:
branches:
- 'main'
paths:
- 'pyproject.toml'
- 'invokeai/**'
- '!invokeai/frontend/web/**'
pull_request:
paths:
- 'pyproject.toml'
- 'invokeai/**'
- '!invokeai/frontend/web/**'
types:
- 'ready_for_review'
- 'opened'
- 'synchronize'
merge_group:
workflow_dispatch:
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
matrix:
if: github.event.pull_request.draft == false
strategy:
matrix:
python-version:
# - '3.9'
- '3.10'
pytorch:
# - linux-cuda-11_6
- linux-cuda-11_7
- linux-rocm-5_2
- linux-cpu
- macos-default
- windows-cpu
# - windows-cuda-11_6
# - windows-cuda-11_7
include:
# - pytorch: linux-cuda-11_6
# os: ubuntu-22.04
# extra-index-url: 'https://download.pytorch.org/whl/cu116'
# github-env: $GITHUB_ENV
- pytorch: linux-cuda-11_7
os: ubuntu-22.04
github-env: $GITHUB_ENV
- pytorch: linux-rocm-5_2
os: ubuntu-22.04
extra-index-url: 'https://download.pytorch.org/whl/rocm5.2'
github-env: $GITHUB_ENV
- pytorch: linux-cpu
os: ubuntu-22.04
extra-index-url: 'https://download.pytorch.org/whl/cpu'
github-env: $GITHUB_ENV
- pytorch: macos-default
os: macOS-12
github-env: $GITHUB_ENV
- pytorch: windows-cpu
os: windows-2022
github-env: $env:GITHUB_ENV
# - pytorch: windows-cuda-11_6
# os: windows-2022
# extra-index-url: 'https://download.pytorch.org/whl/cu116'
# github-env: $env:GITHUB_ENV
# - pytorch: windows-cuda-11_7
# os: windows-2022
# extra-index-url: 'https://download.pytorch.org/whl/cu117'
# github-env: $env:GITHUB_ENV
name: ${{ matrix.pytorch }} on ${{ matrix.python-version }}
runs-on: ${{ matrix.os }}
env:
PIP_USE_PEP517: '1'
steps:
- name: Checkout sources
id: checkout-sources
uses: actions/checkout@v3
- name: set test prompt to main branch validation
run: echo "TEST_PROMPTS=tests/validate_pr_prompt.txt" >> ${{ matrix.github-env }}
- name: setup python
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
cache: pip
cache-dependency-path: pyproject.toml
- name: install invokeai
env:
PIP_EXTRA_INDEX_URL: ${{ matrix.extra-index-url }}
run: >
pip3 install
--editable=".[test]"
- name: run pytest
id: run-pytest
run: pytest
- name: run invokeai-configure
id: run-preload-models
env:
HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGINGFACE_TOKEN }}
run: >
invokeai-configure
--yes
--default_only
--full-precision
# can't use fp16 weights without a GPU
- name: run invokeai
id: run-invokeai
env:
# Set offline mode to make sure configure preloaded successfully.
HF_HUB_OFFLINE: 1
HF_DATASETS_OFFLINE: 1
TRANSFORMERS_OFFLINE: 1
INVOKEAI_OUTDIR: ${{ github.workspace }}/results
run: >
invokeai
--no-patchmatch
--no-nsfw_checker
--precision=float32
--always_use_cpu
--use_memory_db
--outdir ${{ env.INVOKEAI_OUTDIR }}/${{ matrix.python-version }}/${{ matrix.pytorch }}
--from_file ${{ env.TEST_PROMPTS }}
- name: Archive results
id: archive-results
env:
INVOKEAI_OUTDIR: ${{ github.workspace }}/results
uses: actions/upload-artifact@v3
with:
name: results
path: ${{ env.INVOKEAI_OUTDIR }}

34
.gitignore vendored

@@ -1,11 +1,16 @@
# ignore default image save location and model symbolic link
.idea/
embeddings/
outputs/
models/ldm/stable-diffusion-v1/model.ckpt
ldm/invoke/restoration/codeformer/weights
**/restoration/codeformer/weights
# ignore user models config
configs/models.user.yaml
config/models.user.yml
invokeai.init
.version
.last_model
# ignore the Anaconda/Miniconda installer used while building Docker image
anaconda.sh
@@ -60,16 +65,20 @@ pip-delete-this-directory.txt
htmlcov/
.tox/
.nox/
.coveragerc
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
cov.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
.pytest.ini
cover/
junit/
# Translations
*.mo
@@ -184,7 +193,7 @@ src
**/__pycache__/
outputs
# Logs and associated folders
# created from generated embeddings.
logs
testtube
@@ -192,8 +201,10 @@ checkpoints
# If it's a Mac
.DS_Store
invokeai/frontend/web/dist/*
# Let the frontend manage its own gitignore
!frontend/*
!invokeai/frontend/web/*
# Scratch folder
.scratch/
@@ -201,11 +212,24 @@ checkpoints
gfpgan/
models/ldm/stable-diffusion-v1/*.sha256
# GFPGAN model files
gfpgan/
# config file (will be created by installer)
configs/models.yaml
# weights (will be created by installer)
models/ldm/stable-diffusion-v1/*.ckpt
# ignore initfile
.invokeai
# ignore environment.yml and requirements.txt
# these are links to the real files in environments-and-requirements
environment.yml
requirements.txt
# source installer files
installer/*zip
installer/install.bat
installer/install.sh
installer/update.bat
installer/update.sh

128
CODE_OF_CONDUCT.md Normal file

@@ -0,0 +1,128 @@
# Contributor Covenant Code of Conduct
## Our Pledge
We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, religion, or sexual identity
and orientation.
We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.
## Our Standards
Examples of behavior that contributes to a positive environment for our
community include:
* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes,
and learning from the experience
* Focusing on what is best not just for us as individuals, but for the
overall community
Examples of unacceptable behavior include:
* The use of sexualized language or imagery, and sexual attention or
advances of any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email
address, without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Enforcement Responsibilities
Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.
Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
decisions when appropriate.
## Scope
This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official e-mail address,
posting via an official social media account, or acting as an appointed
representative at an online or offline event.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior
may be reported to the community leaders responsible for enforcement
at https://github.com/invoke-ai/InvokeAI/issues. All complaints will
be reviewed and investigated promptly and fairly.
All community leaders are obligated to respect the privacy and security of the
reporter of any incident.
## Enforcement Guidelines
Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:
### 1. Correction
**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.
**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.
### 2. Warning
**Community Impact**: A violation through a single incident or series
of actions.
**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or
permanent ban.
### 3. Temporary Ban
**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.
**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.
### 4. Permanent Ban
**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.
**Consequence**: A permanent ban from any sort of public interaction within
the community.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 2.0, available at
https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.
Community Impact Guidelines were inspired by [Mozilla's code of conduct
enforcement ladder](https://github.com/mozilla/diversity).
[homepage]: https://www.contributor-covenant.org
For answers to common questions about this code of conduct, see the FAQ at
https://www.contributor-covenant.org/faq. Translations are available at
https://www.contributor-covenant.org/translations.


@@ -0,0 +1,84 @@
<img src="docs/assets/invoke_ai_banner.png" align="center">
Invoke-AI is a community of software developers, researchers, and user
interface experts who have come together on a voluntary basis to build
software tools which support cutting edge AI text-to-image
applications. This community is open to anyone who wishes to
contribute to the effort and has the skill and time to do so.
# Our Values
The InvokeAI team is a diverse community which includes individuals
from various parts of the world and many walks of life. Despite our
differences, we share a number of core values which we ask prospective
contributors to understand and respect. We believe:
1. That Open Source Software is a positive force in the world. We
create software that can be used, reused, and redistributed, without
restrictions, under a straightforward Open Source license (MIT). We
believe that Open Source benefits society as a whole by increasing the
availability of high quality software to all.
2. That those who create software should receive proper attribution
for their creative work. While we support the exchange and reuse of
Open Source Software, we feel strongly that the original authors of a
piece of code should receive credit for their contribution, and we
endeavor to do so whenever possible.
3. That there is moral ambiguity surrounding AI-assisted art. We are
aware of the moral and ethical issues surrounding the release of the
Stable Diffusion model and similar products. We are aware that, due to
the composition of their training sets, current AI-generated image
models are biased against certain ethnic groups, cultural concepts of
beauty, ethnic stereotypes, and gender roles.
1. We recognize the potential for harm to these groups that these biases
represent and trust that future AI models will take steps towards
reducing or eliminating the biases noted above, respect and give due
credit to the artists whose work is sourced, and call on developers
and users to favor these models over the older ones as they become
available.
4. We are deeply committed to ensuring that this technology benefits
everyone, including artists. We see AI art not as a replacement for
the artist, but rather as a tool to empower them. With that
in mind, we are constantly debating how to build systems that put
artists' needs first: tools which can be readily integrated into an
artist's existing workflows and practices, enhancing their work and
helping them to push it further. Every decision we take as a team,
which includes several artists, aims to build towards that goal.
5. That artificial intelligence can be a force for good in the world,
but must be used responsibly. Artificial intelligence technologies
have the potential to improve society, in everything from cancer care,
to customer service, to creative writing.
1. While we do not believe that software should arbitrarily limit what
users can do with it, we recognize that when used irresponsibly, AI
has the potential to do much harm. Our Discord server is actively
moderated in order to minimize the potential of harm from
user-contributed images. In addition, we ask users of our software to
refrain from using it in any way that would cause mental, emotional or
physical harm to individuals and vulnerable populations including (but
not limited to) women; minors; ethnic minorities; religious groups;
members of LGBTQIA communities; and people with disabilities or
impairments.
2. Note that some of the image generation AI models which the Invoke-AI
toolkit supports carry licensing agreements which impose restrictions
on how the model is used. We ask that our users read and agree to
these terms if they wish to make use of these models. These agreements
are distinct from the MIT license which applies to the InvokeAI
software and source code.
6. That mutual respect is key to a healthy software development
community. Members of the InvokeAI community are expected to treat
each other with respect, beneficence, and empathy. Each of us has a
different background and a unique set of skills. We strive to help
each other grow and gain new skills, and we apportion expectations in
a way that balances the members' time, skillset, and interest
area. Disputes are resolved by open and honest communication.
## Signature
This document has been collectively crafted and approved by the current InvokeAI team members, as of 28 Nov 2022: **lstein** (Lincoln Stein), **blessedcoolant**, **hipsterusername** (Kent Keirsey), **Kyle0654** (Kyle Schouviller), **damian0815**, **mauwii** (Matthias Wild), **Netsvetaev** (Artur Netsvetaev), **psychedelicious**, **tildebyte**, **keturn**, and **ebr** (Eugene Brodsky). Although individuals within the group may hold differing views on particular details and/or their implications, we are all in agreement about its fundamental statements, as well as their significance and importance to this project moving forward.

13
LICENSE

@@ -1,17 +1,6 @@
MIT License
Copyright (c) 2022 Lincoln Stein and InvokeAI Organization
This software is derived from a fork of the source code available from
https://github.com/pesser/stable-diffusion and
https://github.com/CompViz/stable-diffusion. They carry the following
copyrights:
Copyright (c) 2022 Machine Vision and Learning Group, LMU Munich
Copyright (c) 2022 Robin Rombach and Patrick Esser and contributors
Please see individual source code files for copyright and authorship
attributions.
Copyright (c) 2022 InvokeAI Team
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal

341
README.md

@@ -1,23 +1,19 @@
<div align="center">
![project logo](https://github.com/invoke-ai/InvokeAI/raw/main/docs/assets/invoke_ai_banner.png)
# InvokeAI: A Stable Diffusion Toolkit
_Formerly known as lstein/stable-diffusion_
![project logo](docs/assets/logo.png)
[![discord badge]][discord link]
[![latest release badge]][latest release link] [![github stars badge]][github stars link] [![github forks badge]][github forks link]
[![CI checks on main badge]][CI checks on main link] [![CI checks on dev badge]][CI checks on dev link] [![latest commit to dev badge]][latest commit to dev link]
[![CI checks on main badge]][CI checks on main link] [![latest commit to main badge]][latest commit to main link]
[![github open issues badge]][github open issues link] [![github open prs badge]][github open prs link]
[![github open issues badge]][github open issues link] [![github open prs badge]][github open prs link] [![translation status badge]][translation status link]
[CI checks on dev badge]: https://flat.badgen.net/github/checks/invoke-ai/InvokeAI/development?label=CI%20status%20on%20dev&cache=900&icon=github
[CI checks on dev link]: https://github.com/invoke-ai/InvokeAI/actions?query=branch%3Adevelopment
[CI checks on main badge]: https://flat.badgen.net/github/checks/invoke-ai/InvokeAI/main?label=CI%20status%20on%20main&cache=900&icon=github
[CI checks on main link]: https://github.com/invoke-ai/InvokeAI/actions/workflows/test-invoke-conda.yml
[CI checks on main link]:https://github.com/invoke-ai/InvokeAI/actions?query=branch%3Amain
[discord badge]: https://flat.badgen.net/discord/members/ZmtBAhwWhy?icon=discord
[discord link]: https://discord.gg/ZmtBAhwWhy
[github forks badge]: https://flat.badgen.net/github/forks/invoke-ai/InvokeAI?icon=github
@@ -28,178 +24,271 @@ _Formerly known as lstein/stable-diffusion_
[github open prs link]: https://github.com/invoke-ai/InvokeAI/pulls?q=is%3Apr+is%3Aopen
[github stars badge]: https://flat.badgen.net/github/stars/invoke-ai/InvokeAI?icon=github
[github stars link]: https://github.com/invoke-ai/InvokeAI/stargazers
[latest commit to dev badge]: https://flat.badgen.net/github/last-commit/invoke-ai/InvokeAI/development?icon=github&color=yellow&label=last%20dev%20commit&cache=900
[latest commit to dev link]: https://github.com/invoke-ai/InvokeAI/commits/development
[latest commit to main badge]: https://flat.badgen.net/github/last-commit/invoke-ai/InvokeAI/main?icon=github&color=yellow&label=last%20dev%20commit&cache=900
[latest commit to main link]: https://github.com/invoke-ai/InvokeAI/commits/main
[latest release badge]: https://flat.badgen.net/github/release/invoke-ai/InvokeAI/development?icon=github
[latest release link]: https://github.com/invoke-ai/InvokeAI/releases
[translation status badge]: https://hosted.weblate.org/widgets/invokeai/-/svg-badge.svg
[translation status link]: https://hosted.weblate.org/engage/invokeai/
</div>
This is a fork of
[CompVis/stable-diffusion](https://github.com/CompVis/stable-diffusion),
the open source text-to-image generator. It provides a streamlined
process with various new features and options to aid the image
generation process. It runs on Windows, Mac and Linux machines, with
GPU cards with as little as 4 GB of RAM. It provides both a polished
Web interface (see below), and an easy-to-use command-line interface.
_**Note: The UI is not fully functional on `main`. If you need a stable UI based on `main`, use the `pre-nodes` tag while we [migrate to a new backend](https://github.com/invoke-ai/InvokeAI/discussions/3246).**_
**Quick links**: [<a href="https://discord.gg/NwVCmKwY">Discord Server</a>] [<a href="https://invoke-ai.github.io/InvokeAI/">Documentation and Tutorials</a>] [<a href="https://github.com/invoke-ai/InvokeAI/">Code and Downloads</a>] [<a href="https://github.com/invoke-ai/InvokeAI/issues">Bug Reports</a>] [<a href="https://github.com/invoke-ai/InvokeAI/discussions">Discussion, Ideas & Q&A</a>]
InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry leading Web Interface, interactive Command Line Interface, and also serves as the foundation for multiple commercial products.
<div align="center"><img src="docs/assets/invoke-web-server-1.png" width=640></div>
**Quick links**: [[How to Install](https://invoke-ai.github.io/InvokeAI/#installation)] [<a href="https://discord.gg/ZmtBAhwWhy">Discord Server</a>] [<a href="https://invoke-ai.github.io/InvokeAI/">Documentation and Tutorials</a>] [<a href="https://github.com/invoke-ai/InvokeAI/">Code and Downloads</a>] [<a href="https://github.com/invoke-ai/InvokeAI/issues">Bug Reports</a>] [<a href="https://github.com/invoke-ai/InvokeAI/discussions">Discussion, Ideas & Q&A</a>]
_Note: This fork is rapidly evolving. Please use the
_Note: InvokeAI is rapidly evolving. Please use the
[Issues](https://github.com/invoke-ai/InvokeAI/issues) tab to report bugs and make feature
requests. Be sure to use the provided templates. They will help aid diagnose issues faster._
requests. Be sure to use the provided templates. They will help us diagnose issues faster._
<div align="center">
![canvas preview](https://github.com/invoke-ai/InvokeAI/raw/main/docs/assets/canvas_preview.png)
</div>
## Table of Contents
1. [Installation](#installation)
2. [Hardware Requirements](#hardware-requirements)
3. [Features](#features)
4. [Latest Changes](#latest-changes)
5. [Troubleshooting](#troubleshooting)
6. [Contributing](#contributing)
7. [Contributors](#contributors)
8. [Support](#support)
9. [Further Reading](#further-reading)
1. [Quick Start](#getting-started-with-invokeai)
2. [Installation](#detailed-installation-instructions)
3. [Hardware Requirements](#hardware-requirements)
4. [Features](#features)
5. [Latest Changes](#latest-changes)
6. [Troubleshooting](#troubleshooting)
7. [Contributing](#contributing)
8. [Contributors](#contributors)
9. [Support](#support)
10. [Further Reading](#further-reading)
### Installation
## Getting Started with InvokeAI
This fork is supported across multiple platforms. You can find individual installation instructions
below.
For full installation and upgrade instructions, please see:
[InvokeAI Installation Overview](https://invoke-ai.github.io/InvokeAI/installation/)
- #### [Linux](docs/installation/INSTALL_LINUX.md)
### Automatic Installer (suggested for 1st time users)
- #### [Windows](docs/installation/INSTALL_WINDOWS.md)
1. Go to the bottom of the [Latest Release Page](https://github.com/invoke-ai/InvokeAI/releases/latest)
- #### [Macintosh](docs/installation/INSTALL_MAC.md)
2. Download the .zip file for your OS (Windows/macOS/Linux).
### Hardware Requirements
3. Unzip the file.
#### System
4. If you are on Windows, double-click on the `install.bat` script. On
macOS, open a Terminal window, drag the file `install.sh` from Finder
into the Terminal, and press return. On Linux, run `install.sh`.
You wil need one of the following:
5. You'll be asked to confirm the location of the folder in which
to install InvokeAI and its image generation model files. Pick a
location with at least 15 GB of free memory. More if you plan on
installing lots of models.
6. Wait while the installer does its thing. After installing the software,
the installer will launch a script that lets you configure InvokeAI and
select a set of starting image generation models.
7. Find the folder that InvokeAI was installed into (it is not the
same as the unpacked zip file directory!) The default location of this
folder (if you didn't change it in step 5) is `~/invokeai` on
Linux/Mac systems, and `C:\Users\YourName\invokeai` on Windows. This directory will contain launcher scripts named `invoke.sh` and `invoke.bat`.
8. On Windows systems, double-click on the `invoke.bat` file. On
macOS, open a Terminal window, drag `invoke.sh` from the folder into
the Terminal, and press return. On Linux, run `invoke.sh`
9. Press 2 to open the "browser-based UI", press enter/return, wait a
minute or two for Stable Diffusion to start up, then open your browser
and go to http://localhost:9090.
10. Type `banana sushi` in the box on the top left and click `Invoke`
### Command-Line Installation (for users familiar with Terminals)
You must have Python 3.9 or 3.10 installed on your machine. Earlier or later versions are
not supported.
1. Open a command-line window on your machine. The PowerShell is recommended for Windows.
2. Create a directory to install InvokeAI into. You'll need at least 15 GB of free space:
```terminal
mkdir invokeai
```
3. Create a virtual environment named `.venv` inside this directory and activate it:
```terminal
cd invokeai
python -m venv .venv --prompt InvokeAI
```
4. Activate the virtual environment (do it every time you run InvokeAI)
_For Linux/Mac users:_
```sh
source .venv/bin/activate
```
_For Windows users:_
```ps
.venv\Scripts\activate
```
5. Install the InvokeAI module and its dependencies. Choose the command suited for your platform & GPU.
_For Windows/Linux with an NVIDIA GPU:_
```terminal
pip install "InvokeAI[xformers]" --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu117
```
_For Linux with an AMD GPU:_
```sh
pip install InvokeAI --use-pep517 --extra-index-url https://download.pytorch.org/whl/rocm5.4.2
```
_For non-GPU systems:_
```terminal
pip install InvokeAI --use-pep517 --extra-index-url https://download.pytorch.org/whl/cpu
```
_For Macintoshes, either Intel or M1/M2:_
```sh
pip install InvokeAI --use-pep517
```
6. Configure InvokeAI and install a starting set of image generation models (you only need to do this once):
```terminal
invokeai-configure
```
7. Launch the web server (do it every time you run InvokeAI):
```terminal
invokeai --web
```
8. Point your browser to http://localhost:9090 to bring up the web interface.
9. Type `banana sushi` in the box on the top left and click `Invoke`.
Be sure to activate the virtual environment each time before re-launching InvokeAI,
using `source .venv/bin/activate` or `.venv\Scripts\activate`.
### Detailed Installation Instructions
This fork is supported across Linux, Windows and Macintosh. Linux
users can use either an Nvidia-based card (with CUDA support) or an
AMD card (using the ROCm driver). For full installation and upgrade
instructions, please see:
[InvokeAI Installation Overview](https://invoke-ai.github.io/InvokeAI/installation/INSTALL_SOURCE/)
## Hardware Requirements
InvokeAI is supported across Linux, Windows and macOS. Linux
users can use either an Nvidia-based card (with CUDA support) or an
AMD card (using the ROCm driver).
### System
You will need one of the following:
- An NVIDIA-based graphics card with 4 GB or more VRAM memory.
- An Apple computer with an M1 chip.
- An AMD-based graphics card with 4GB or more VRAM memory. (Linux only)
#### Memory
We do not recommend the GTX 1650 or 1660 series video cards. They are
unable to run in half-precision mode and do not have sufficient VRAM
to render 512x512 images.
### Memory
- At least 12 GB Main Memory RAM.
#### Disk
### Disk
- At least 12 GB of free disk space for the machine learning model, Python, and all its dependencies.
**Note**
## Features
If you have a Nvidia 10xx series card (e.g. the 1080ti), please
run the dream script in full-precision mode as shown below.
Feature documentation can be reviewed by navigating to [the InvokeAI Documentation page](https://invoke-ai.github.io/InvokeAI/features/)
Similarly, specify full-precision mode on Apple M1 hardware.
### *Web Server & UI*
Precision is auto configured based on the device. If however you encounter
errors like 'expected type Float but found Half' or 'not implemented for Half'
you can try starting `invoke.py` with the `--precision=float32` flag:
InvokeAI offers a locally hosted Web Server & React Frontend, with an industry leading user experience. The Web-based UI allows for simple and intuitive workflows, and is responsive for use on mobile devices and tablets accessing the web server.
```bash
(ldm) ~/stable-diffusion$ python scripts/invoke.py --precision=float32
```
### *Unified Canvas*
### Features
The Unified Canvas is a fully integrated canvas implementation with support for all core generation capabilities, in/outpainting, brush tools, and more. This creative tool unlocks the capability for artists to create with AI as a creative collaborator, and can be used to augment AI-generated imagery, sketches, photography, renders, and more.
#### Major Features
### *Advanced Prompt Syntax*
- [Web Server](docs/features/WEB.md)
- [Interactive Command Line Interface](docs/features/CLI.md)
- [Image To Image](docs/features/IMG2IMG.md)
- [Inpainting Support](docs/features/INPAINTING.md)
- [Outpainting Support](docs/features/OUTPAINTING.md)
- [Upscaling, face-restoration and outpainting](docs/features/POSTPROCESS.md)
- [Seamless Tiling](docs/features/OTHER.md#seamless-tiling)
- [Google Colab](docs/features/OTHER.md#google-colab)
- [Reading Prompts From File](docs/features/PROMPTS.md#reading-prompts-from-a-file)
- [Shortcut: Reusing Seeds](docs/features/OTHER.md#shortcuts-reusing-seeds)
- [Prompt Blending](docs/features/PROMPTS.md#prompt-blending)
- [Thresholding and Perlin Noise Initialization Options](/docs/features/OTHER.md#thresholding-and-perlin-noise-initialization-options)
- [Negative/Unconditioned Prompts](docs/features/PROMPTS.md#negative-and-unconditioned-prompts)
- [Variations](docs/features/VARIATIONS.md)
- [Personalizing Text-to-Image Generation](docs/features/TEXTUAL_INVERSION.md)
- [Simplified API for text to image generation](docs/features/OTHER.md#simplified-api)
InvokeAI's advanced prompt syntax allows for token weighting, cross-attention control, and prompt blending, allowing for fine-tuned tweaking of your invocations and exploration of the latent space.
#### Other Features
### *Command Line Interface*
- [Creating Transparent Regions for Inpainting](docs/features/INPAINTING.md#creating-transparent-regions-for-inpainting)
- [Preload Models](docs/features/OTHER.md#preload-models)
For users utilizing a terminal-based environment, or who want to take advantage of CLI features, InvokeAI offers an extensive and actively supported command-line interface that provides the full suite of generation functionality available in the tool.
### Other features
- *Support for both ckpt and diffusers models*
- *SD 2.0, 2.1 support*
- *Noise Control & Thresholding*
- *Popular Sampler Support*
- *Upscaling & Face Restoration Tools*
- *Embedding Manager & Support*
- *Model Manager & Support*
### Coming Soon
- *Node-Based Architecture & UI*
- And more...
### Latest Changes
- v2.0.1 (13 October 2022)
- fix noisy images at high step count when using k* samplers
- dream.py script now calls invoke.py module directly rather than
via a new python process (which could break the environment)
For our latest changes, view our [Release
Notes](https://github.com/invoke-ai/InvokeAI/releases) and the
[CHANGELOG](docs/CHANGELOG.md).
- v2.0.0 (9 October 2022)
## Troubleshooting
- `dream.py` script renamed `invoke.py`. A `dream.py` script wrapper remains
for backward compatibility.
- Completely new WebGUI - launch with `python3 scripts/invoke.py --web`
- Support for <a href="https://github.com/invoke-ai/InvokeAI/blob/main/docs/features/INPAINTING.md">inpainting</a> and <a href="https://github.com/invoke-ai/InvokeAI/blob/main/docs/features/OUTPAINTING.md">outpainting</a>
- img2img runs on all k* samplers
- Support for <a href="https://github.com/invoke-ai/InvokeAI/blob/main/docs/features/PROMPTS.md#negative-and-unconditioned-prompts">negative prompts</a>
- Support for CodeFormer face reconstruction
- Support for Textual Inversion on Macintoshes
- Support in both WebGUI and CLI for <a href="https://github.com/invoke-ai/InvokeAI/blob/main/docs/features/POSTPROCESS.md">post-processing of previously-generated images</a>
using facial reconstruction, ESRGAN upscaling, outcropping (similar to DALL-E infinite canvas),
and "embiggen" upscaling. See the `!fix` command.
- New `--hires` option on `invoke>` line allows <a href="https://github.com/invoke-ai/InvokeAI/blob/main/docs/features/CLI.md#this-is-an-example-of-txt2img">larger images to be created without duplicating elements</a>, at the cost of some performance.
- New `--perlin` and `--threshold` options allow you to add and control variation
during image generation (see <a href="https://github.com/invoke-ai/InvokeAI/blob/main/docs/features/OTHER.md#thresholding-and-perlin-noise-initialization-options">Thresholding and Perlin Noise Initialization</a>
- Extensive metadata now written into PNG files, allowing reliable regeneration of images
and tweaking of previous settings.
- Command-line completion in `invoke.py` now works on Windows, Linux and Mac platforms.
- Improved <a href="https://github.com/invoke-ai/InvokeAI/blob/main/docs/features/CLI.md">command-line completion behavior</a>.
New commands added:
* List command-line history with `!history`
* Search command-line history with `!search`
* Clear history with `!clear`
- Deprecated `--full_precision` / `-F`. Simply omit it and `invoke.py` will auto
configure. To switch away from auto use the new flag like `--precision=float32`.
For older changelogs, please visit the **[CHANGELOG](docs/features/CHANGELOG.md)**.
### Troubleshooting
Please check out our **[Q&A](docs/help/TROUBLESHOOT.md)** to get solutions for common installation
Please check out our **[Q&A](https://invoke-ai.github.io/InvokeAI/help/TROUBLESHOOT/#faq)** to get solutions for common installation
problems and other issues.
# Contributing
## Contributing
Anyone who wishes to contribute to this project, whether documentation, features, bug fixes, code
cleanup, testing, or code reviews, is very much encouraged to do so. If you are unfamiliar with how
to contribute to GitHub projects, here is a
[Getting Started Guide](https://opensource.com/article/19/7/create-pull-request-github).
cleanup, testing, or code reviews, is very much encouraged to do so.
A full set of contribution guidelines, along with templates, are in progress, but for now the most
important thing is to **make your pull request against the "development" branch**, and not against
"main". This will help keep public breakage to a minimum and will allow you to propose more radical
changes.
To join, just raise your hand on the InvokeAI Discord server (#dev-chat) or the GitHub discussion board.
If you'd like to help with translation, please see our [translation guide](docs/other/TRANSLATION.md).
If you are unfamiliar with how
to contribute to GitHub projects, here is a
[Getting Started Guide](https://opensource.com/article/19/7/create-pull-request-github). A full set of contribution guidelines, along with templates, are in progress. You can **make your pull request against the "main" branch**.
We hope you enjoy using our software as much as we enjoy creating it,
and we hope that some of those of you who are reading this will elect
to become part of our community.
Welcome to InvokeAI!
### Contributors
This fork is a combined effort of various people from across the world.
[Check out the list of all these amazing people](docs/other/CONTRIBUTORS.md). We thank them for
[Check out the list of all these amazing people](https://invoke-ai.github.io/InvokeAI/other/CONTRIBUTORS/). We thank them for
their time, hard work and effort.
Thanks to [Weblate](https://weblate.org/) for generously providing translation services to this project.
### Support
For support, please use this repository's GitHub Issues tracking service. Feel free to send me an
email if you use and like the script.
For support, please use this repository's GitHub Issues tracking service, or join the Discord.
Original portions of the software are Copyright (c) 2020
[Lincoln D. Stein](https://github.com/lstein)
Original portions of the software are Copyright (c) 2023 by respective contributors.
### Further Reading
Please see the original README for more information on this software and underlying algorithm,
located in the file [README-CompViz.md](docs/other/README-CompViz.md).


@@ -21,7 +21,7 @@ This model card focuses on the model associated with the Stable Diffusion model,
# Uses
## Direct Use
The model is intended for research purposes only. Possible research areas and
tasks include
@@ -68,11 +68,11 @@ Using the model to generate content that is cruel to individuals is a misuse of
considerations.
### Bias
While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
Stable Diffusion v1 was trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/),
which consists of images that are primarily limited to English descriptions.
Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for.
This affects the overall output of the model, as white and western cultures are often set as the default. Further, the
ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts.
@@ -84,7 +84,7 @@ The model developers used the following dataset for training the model:
- LAION-2B (en) and subsets thereof (see next section)
**Training Procedure**
Stable Diffusion v1 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training,
- Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4
- Text prompts are encoded through a ViT-L/14 text-encoder.
@@ -108,12 +108,12 @@ filtered to images with an original size `>= 512x512`, estimated aesthetics scor
- **Batch:** 32 x 8 x 2 x 4 = 2048
- **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant
## Evaluation Results
Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0,
5.0, 6.0, 7.0, 8.0) and 50 PLMS sampling
steps show the relative improvements of the checkpoints:
![pareto](assets/v1-variants-scores.jpg)
Evaluated using 50 PLMS steps and 10000 random prompts from the COCO2017 validation set, evaluated at 512x512 resolution. Not optimized for FID scores.
## Environmental Impact

File diff suppressed because it is too large

4
coverage/.gitignore vendored Normal file

@@ -0,0 +1,4 @@
# Ignore everything in this directory
*
# Except this file
!.gitignore


@@ -1,74 +0,0 @@
FROM ubuntu AS get_miniconda
SHELL ["/bin/bash", "-c"]
# install wget
RUN apt-get update \
&& apt-get install -y \
wget \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# download and install miniconda
ARG conda_version=py39_4.12.0-Linux-x86_64
ARG conda_prefix=/opt/conda
RUN wget --progress=dot:giga -O /miniconda.sh \
https://repo.anaconda.com/miniconda/Miniconda3-${conda_version}.sh \
&& bash /miniconda.sh -b -p ${conda_prefix} \
&& rm -f /miniconda.sh
FROM ubuntu AS invokeai
# use bash
SHELL [ "/bin/bash", "-c" ]
# clean bashrc
RUN echo "" > ~/.bashrc
# Install necessary packages
RUN apt-get update \
&& apt-get install -y \
--no-install-recommends \
gcc \
git \
libgl1-mesa-glx \
libglib2.0-0 \
pip \
python3 \
python3-dev \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# clone repository and create symlinks
ARG invokeai_git=https://github.com/invoke-ai/InvokeAI.git
ARG project_name=invokeai
RUN git clone ${invokeai_git} /${project_name} \
&& mkdir /${project_name}/models/ldm/stable-diffusion-v1 \
&& ln -s /data/models/sd-v1-4.ckpt /${project_name}/models/ldm/stable-diffusion-v1/model.ckpt \
&& ln -s /data/outputs/ /${project_name}/outputs
# set workdir
WORKDIR /${project_name}
# install conda env and preload models
ARG conda_prefix=/opt/conda
ARG conda_env_file=environment.yml
COPY --from=get_miniconda ${conda_prefix} ${conda_prefix}
RUN source ${conda_prefix}/etc/profile.d/conda.sh \
&& conda init bash \
&& source ~/.bashrc \
&& conda env create \
--name ${project_name} \
--file ${conda_env_file} \
&& rm -Rf ~/.cache \
&& conda clean -afy \
&& echo "conda activate ${project_name}" >> ~/.bashrc \
&& ln -s /data/models/GFPGANv1.4.pth ./src/gfpgan/experiments/pretrained_models/GFPGANv1.4.pth \
&& conda activate ${project_name} \
&& python scripts/preload_models.py
# Copy entrypoint and set env
ENV CONDA_PREFIX=${conda_prefix}
ENV PROJECT_NAME=${project_name}
COPY docker-build/entrypoint.sh /
ENTRYPOINT [ "/entrypoint.sh" ]


@@ -1,81 +0,0 @@
#!/usr/bin/env bash
set -e
# IMPORTANT: You need to have a token on huggingface.co to be able to download the checkpoint!!!
# configure values by using env when executing build.sh
# f.e. env ARCH=aarch64 GITHUB_INVOKE_AI=https://github.com/yourname/yourfork.git ./build.sh
source ./docker-build/env.sh || echo "please run from repository root" || exit 1
invokeai_conda_version=${INVOKEAI_CONDA_VERSION:-py39_4.12.0-${platform/\//-}}
invokeai_conda_prefix=${INVOKEAI_CONDA_PREFIX:-\/opt\/conda}
invokeai_conda_env_file=${INVOKEAI_CONDA_ENV_FILE:-environment.yml}
invokeai_git=${INVOKEAI_GIT:-https://github.com/invoke-ai/InvokeAI.git}
huggingface_token=${HUGGINGFACE_TOKEN?}
# print the settings
echo "You are using these values:"
echo -e "project_name:\t\t ${project_name}"
echo -e "volumename:\t\t ${volumename}"
echo -e "arch:\t\t\t ${arch}"
echo -e "platform:\t\t ${platform}"
echo -e "invokeai_conda_version:\t ${invokeai_conda_version}"
echo -e "invokeai_conda_prefix:\t ${invokeai_conda_prefix}"
echo -e "invokeai_conda_env_file: ${invokeai_conda_env_file}"
echo -e "invokeai_git:\t\t ${invokeai_git}"
echo -e "invokeai_tag:\t\t ${invokeai_tag}\n"
_runAlpine() {
docker run \
--rm \
--interactive \
--tty \
--mount source="$volumename",target=/data \
--workdir /data \
alpine "$@"
}
_copyCheckpoints() {
echo "creating subfolders for models and outputs"
_runAlpine mkdir models
_runAlpine mkdir outputs
echo -n "downloading sd-v1-4.ckpt"
_runAlpine wget --header="Authorization: Bearer ${huggingface_token}" -O models/sd-v1-4.ckpt https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt
echo "done"
echo "downloading GFPGANv1.4.pth"
_runAlpine wget -O models/GFPGANv1.4.pth https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.4.pth
}
_checkVolumeContent() {
_runAlpine ls -lhA /data/models
}
_getModelMd5s() {
_runAlpine \
alpine sh -c "md5sum /data/models/*"
}
if [[ -n "$(docker volume ls -f name="${volumename}" -q)" ]]; then
echo "Volume already exists"
if [[ -z "$(_checkVolumeContent)" ]]; then
echo "looks empty, copying checkpoint"
_copyCheckpoints
fi
echo "Models in ${volumename}:"
_checkVolumeContent
else
echo -n "createing docker volume "
docker volume create "${volumename}"
_copyCheckpoints
fi
# Build Container
docker build \
--platform="${platform}" \
--tag "${invokeai_tag}" \
--build-arg project_name="${project_name}" \
--build-arg conda_version="${invokeai_conda_version}" \
--build-arg conda_prefix="${invokeai_conda_prefix}" \
--build-arg conda_env_file="${invokeai_conda_env_file}" \
--build-arg invokeai_git="${invokeai_git}" \
--file ./docker-build/Dockerfile \
.


@@ -1,8 +0,0 @@
#!/bin/bash
set -e
source "${CONDA_PREFIX}/etc/profile.d/conda.sh"
conda activate "${PROJECT_NAME}"
python scripts/invoke.py \
${@:---web --host=0.0.0.0}


@@ -1,13 +0,0 @@
#!/usr/bin/env bash
project_name=${PROJECT_NAME:-invokeai}
volumename=${VOLUMENAME:-${project_name}_data}
arch=${ARCH:-x86_64}
platform=${PLATFORM:-Linux/${arch}}
invokeai_tag=${INVOKEAI_TAG:-${project_name}-${arch}}
export project_name
export volumename
export arch
export platform
export invokeai_tag


@@ -1,15 +0,0 @@
#!/usr/bin/env bash
set -e
source ./docker-build/env.sh || echo "please run from repository root" || exit 1
docker run \
--interactive \
--tty \
--rm \
--platform "$platform" \
--name "$project_name" \
--hostname "$project_name" \
--mount source="$volumename",target=/data \
--publish 9090:9090 \
"$invokeai_tag" ${1:+$@}

107
docker/Dockerfile Normal file

@@ -0,0 +1,107 @@
# syntax=docker/dockerfile:1
ARG PYTHON_VERSION=3.9
##################
## base image ##
##################
FROM --platform=${TARGETPLATFORM} python:${PYTHON_VERSION}-slim AS python-base
LABEL org.opencontainers.image.authors="mauwii@outlook.de"
# Prepare apt for buildkit cache
RUN rm -f /etc/apt/apt.conf.d/docker-clean \
&& echo 'Binary::apt::APT::Keep-Downloaded-Packages "true";' >/etc/apt/apt.conf.d/keep-cache
# Install dependencies
RUN \
--mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
apt-get update \
&& apt-get install -y \
--no-install-recommends \
libgl1-mesa-glx=20.3.* \
libglib2.0-0=2.66.* \
libopencv-dev=4.5.*
# Set working directory and env
ARG APPDIR=/usr/src
ARG APPNAME=InvokeAI
WORKDIR ${APPDIR}
ENV PATH ${APPDIR}/${APPNAME}/bin:$PATH
# Keeps Python from generating .pyc files in the container
ENV PYTHONDONTWRITEBYTECODE 1
# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED 1
# Don't fall back to legacy build system
ENV PIP_USE_PEP517=1
#######################
## build pyproject ##
#######################
FROM python-base AS pyproject-builder
# Install build dependencies
RUN \
--mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
apt-get update \
&& apt-get install -y \
--no-install-recommends \
build-essential=12.9 \
gcc=4:10.2.* \
python3-dev=3.9.*
# Prepare pip for buildkit cache
ARG PIP_CACHE_DIR=/var/cache/buildkit/pip
ENV PIP_CACHE_DIR ${PIP_CACHE_DIR}
RUN mkdir -p ${PIP_CACHE_DIR}
# Create virtual environment
RUN --mount=type=cache,target=${PIP_CACHE_DIR} \
python3 -m venv "${APPNAME}" \
--upgrade-deps
# Install requirements
COPY --link pyproject.toml .
COPY --link invokeai/version/invokeai_version.py invokeai/version/__init__.py invokeai/version/
ARG PIP_EXTRA_INDEX_URL
ENV PIP_EXTRA_INDEX_URL ${PIP_EXTRA_INDEX_URL}
RUN --mount=type=cache,target=${PIP_CACHE_DIR} \
"${APPNAME}"/bin/pip install .
# Install pyproject.toml
COPY --link . .
RUN --mount=type=cache,target=${PIP_CACHE_DIR} \
"${APPNAME}/bin/pip" install .
# Build patchmatch
RUN python3 -c "from patchmatch import patch_match"
#####################
## runtime image ##
#####################
FROM python-base AS runtime
# Create a new user
ARG UNAME=appuser
RUN useradd \
--no-log-init \
-m \
-U \
"${UNAME}"
# Create volume directory
ARG VOLUME_DIR=/data
RUN mkdir -p "${VOLUME_DIR}" \
&& chown -hR "${UNAME}:${UNAME}" "${VOLUME_DIR}"
# Setup runtime environment
USER ${UNAME}:${UNAME}
COPY --chown=${UNAME}:${UNAME} --from=pyproject-builder ${APPDIR}/${APPNAME} ${APPNAME}
ENV INVOKEAI_ROOT ${VOLUME_DIR}
ENV TRANSFORMERS_CACHE ${VOLUME_DIR}/.cache
ENV INVOKE_MODEL_RECONFIGURE "--yes --default_only"
EXPOSE 9090
ENTRYPOINT [ "invokeai" ]
CMD [ "--web", "--host", "0.0.0.0", "--port", "9090" ]
VOLUME [ "${VOLUME_DIR}" ]

51
docker/build.sh Executable file

@@ -0,0 +1,51 @@
#!/usr/bin/env bash
set -e
# If you want to build a specific flavor, set the CONTAINER_FLAVOR environment variable
# e.g. CONTAINER_FLAVOR=cpu ./build.sh
# Possible Values are:
# - cpu
# - cuda
# - rocm
# Don't forget to also set it when executing run.sh
# if it is not set, the script will try to detect the flavor by itself.
#
# Doc can be found here:
# https://invoke-ai.github.io/InvokeAI/installation/040_INSTALL_DOCKER/
SCRIPTDIR=$(dirname "${BASH_SOURCE[0]}")
cd "$SCRIPTDIR" || exit 1
source ./env.sh
DOCKERFILE=${INVOKE_DOCKERFILE:-./Dockerfile}
# print the settings
echo -e "You are using these values:\n"
echo -e "Dockerfile:\t\t${DOCKERFILE}"
echo -e "index-url:\t\t${PIP_EXTRA_INDEX_URL:-none}"
echo -e "Volumename:\t\t${VOLUMENAME}"
echo -e "Platform:\t\t${PLATFORM}"
echo -e "Container Registry:\t${CONTAINER_REGISTRY}"
echo -e "Container Repository:\t${CONTAINER_REPOSITORY}"
echo -e "Container Tag:\t\t${CONTAINER_TAG}"
echo -e "Container Flavor:\t${CONTAINER_FLAVOR}"
echo -e "Container Image:\t${CONTAINER_IMAGE}\n"
# Create docker volume
if [[ -n "$(docker volume ls -f name="${VOLUMENAME}" -q)" ]]; then
echo -e "Volume already exists\n"
else
echo -n "creating docker volume "
docker volume create "${VOLUMENAME}"
fi
# Build Container
docker build \
--platform="${PLATFORM:-linux/amd64}" \
--tag="${CONTAINER_IMAGE:-invokeai}" \
${CONTAINER_FLAVOR:+--build-arg="CONTAINER_FLAVOR=${CONTAINER_FLAVOR}"} \
${PIP_EXTRA_INDEX_URL:+--build-arg="PIP_EXTRA_INDEX_URL=${PIP_EXTRA_INDEX_URL}"} \
${PIP_PACKAGE:+--build-arg="PIP_PACKAGE=${PIP_PACKAGE}"} \
--file="${DOCKERFILE}" \
..

54
docker/env.sh Normal file

@@ -0,0 +1,54 @@
#!/usr/bin/env bash
# This file is used to set environment variables for the build.sh and run.sh scripts.
# Try to detect the container flavor if no PIP_EXTRA_INDEX_URL got specified
if [[ -z "$PIP_EXTRA_INDEX_URL" ]]; then
# Activate virtual environment if not already activated and exists
if [[ -z $VIRTUAL_ENV ]]; then
[[ -e "$(dirname "${BASH_SOURCE[0]}")/../.venv/bin/activate" ]] \
&& source "$(dirname "${BASH_SOURCE[0]}")/../.venv/bin/activate" \
&& echo "Activated virtual environment: $VIRTUAL_ENV"
fi
# Decide which container flavor to build if not specified
if [[ -z "$CONTAINER_FLAVOR" ]] && python -c "import torch" &>/dev/null; then
# Check for CUDA and ROCm
CUDA_AVAILABLE=$(python -c "import torch;print(torch.cuda.is_available())")
ROCM_AVAILABLE=$(python -c "import torch;print(torch.version.hip is not None)")
if [[ "${CUDA_AVAILABLE}" == "True" ]]; then
CONTAINER_FLAVOR="cuda"
elif [[ "${ROCM_AVAILABLE}" == "True" ]]; then
CONTAINER_FLAVOR="rocm"
else
CONTAINER_FLAVOR="cpu"
fi
fi
# Set PIP_EXTRA_INDEX_URL based on container flavor
if [[ "$CONTAINER_FLAVOR" == "rocm" ]]; then
PIP_EXTRA_INDEX_URL="https://download.pytorch.org/whl/rocm"
elif [[ "$CONTAINER_FLAVOR" == "cpu" ]]; then
PIP_EXTRA_INDEX_URL="https://download.pytorch.org/whl/cpu"
# elif [[ -z "$CONTAINER_FLAVOR" || "$CONTAINER_FLAVOR" == "cuda" ]]; then
# PIP_PACKAGE=${PIP_PACKAGE-".[xformers]"}
fi
fi
# Variables shared by build.sh and run.sh
REPOSITORY_NAME="${REPOSITORY_NAME-$(basename "$(git rev-parse --show-toplevel)")}"
REPOSITORY_NAME="${REPOSITORY_NAME,,}"
VOLUMENAME="${VOLUMENAME-"${REPOSITORY_NAME}_data"}"
ARCH="${ARCH-$(uname -m)}"
PLATFORM="${PLATFORM-linux/${ARCH}}"
INVOKEAI_BRANCH="${INVOKEAI_BRANCH-$(git branch --show)}"
CONTAINER_REGISTRY="${CONTAINER_REGISTRY-"ghcr.io"}"
CONTAINER_REPOSITORY="${CONTAINER_REPOSITORY-"$(whoami)/${REPOSITORY_NAME}"}"
CONTAINER_FLAVOR="${CONTAINER_FLAVOR-cuda}"
CONTAINER_TAG="${CONTAINER_TAG-"${INVOKEAI_BRANCH##*/}-${CONTAINER_FLAVOR}"}"
CONTAINER_IMAGE="${CONTAINER_REGISTRY}/${CONTAINER_REPOSITORY}:${CONTAINER_TAG}"
CONTAINER_IMAGE="${CONTAINER_IMAGE,,}"
# enable docker buildkit
export DOCKER_BUILDKIT=1

41
docker/run.sh Executable file

@@ -0,0 +1,41 @@
#!/usr/bin/env bash
set -e
# How to use: https://invoke-ai.github.io/InvokeAI/installation/040_INSTALL_DOCKER/
SCRIPTDIR=$(dirname "${BASH_SOURCE[0]}")
cd "$SCRIPTDIR" || exit 1
source ./env.sh
# Create outputs directory if it does not exist
[[ -d ./outputs ]] || mkdir ./outputs
echo -e "You are using these values:\n"
echo -e "Volumename:\t${VOLUMENAME}"
echo -e "Invokeai_tag:\t${CONTAINER_IMAGE}"
echo -e "local Models:\t${MODELSPATH:-unset}\n"
docker run \
--interactive \
--tty \
--rm \
--platform="${PLATFORM}" \
--name="${REPOSITORY_NAME}" \
--hostname="${REPOSITORY_NAME}" \
--mount type=volume,volume-driver=local,source="${VOLUMENAME}",target=/data \
--mount type=bind,source="$(pwd)"/outputs/,target=/data/outputs/ \
${MODELSPATH:+--mount="type=bind,source=${MODELSPATH},target=/data/models"} \
${HUGGING_FACE_HUB_TOKEN:+--env="HUGGING_FACE_HUB_TOKEN=${HUGGING_FACE_HUB_TOKEN}"} \
--publish=9090:9090 \
--cap-add=sys_nice \
${GPU_FLAGS:+--gpus="${GPU_FLAGS}"} \
"${CONTAINER_IMAGE}" ${@:+$@}
echo -e "\nCleaning trash folder ..."
for f in outputs/.Trash*; do
if [ -e "$f" ]; then
rm -Rf "$f"
break
fi
done
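A hypothetical invocation of the run script above on a CUDA host, showing the optional environment variables the script consumes (`GPU_FLAGS`, `MODELSPATH`, and `HUGGING_FACE_HUB_TOKEN` are read by the script; the values shown are placeholders):

```bash
# Pass all GPUs through, bind-mount a local models folder, and forward a HF token;
# with no extra arguments the container falls back to the Dockerfile's default CMD.
GPU_FLAGS=all \
MODELSPATH="$HOME/invokeai/models" \
HUGGING_FACE_HUB_TOKEN="hf_xxxxxxxx" \
./run.sh
```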


@@ -4,66 +4,447 @@ title: Changelog
# :octicons-log-16: **Changelog**
## v2.0.1 (13 October 2022)
## v2.3.0 <small>(15 January 2023)</small>
- fix noisy images at high step count when using k* samplers
- dream.py script now calls invoke.py module directly rather than
via a new python process (which could break the environment)
**Transition to diffusers**
Version 2.3 provides support for both the traditional `.ckpt` weight
checkpoint files as well as the HuggingFace `diffusers` format. This
introduces several changes you should know about.
1. The models.yaml format has been updated. There are now two
different type of configuration stanza. The traditional ckpt
one will look like this, with a `format` of `ckpt` and a
`weights` field that points to the absolute or ROOTDIR-relative
location of the ckpt file.
```
inpainting-1.5:
description: RunwayML SD 1.5 model optimized for inpainting (4.27 GB)
repo_id: runwayml/stable-diffusion-inpainting
format: ckpt
width: 512
height: 512
weights: models/ldm/stable-diffusion-v1/sd-v1-5-inpainting.ckpt
config: configs/stable-diffusion/v1-inpainting-inference.yaml
vae: models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
```
A configuration stanza for a diffusers model hosted at HuggingFace will look like this,
with a `format` of `diffusers` and a `repo_id` that points to the
repository ID of the model on HuggingFace:
```
stable-diffusion-2.1:
description: Stable Diffusion version 2.1 diffusers model (5.21 GB)
repo_id: stabilityai/stable-diffusion-2-1
format: diffusers
```
A configuration stanza for a diffusers model stored locally should
look like this, with a `format` of `diffusers`, but a `path` field
that points at the directory that contains `model_index.json`:
```
waifu-diffusion:
description: Latest waifu diffusion 1.4
format: diffusers
path: models/diffusers/hakurei-haifu-diffusion-1.4
```
2. In order of precedence, InvokeAI will now use HF_HOME, then
XDG_CACHE_HOME, and finally default to `ROOTDIR/models` to
store HuggingFace diffusers models (see the sketch after this list).
Consequently, the format of the models directory has changed to
mimic the HuggingFace cache directory. When HF_HOME and XDG_CACHE_HOME
are not set, diffusers models are now automatically downloaded
and retrieved from the directory `ROOTDIR/models/diffusers`,
while other models are stored in the directory
`ROOTDIR/models/hub`. This organization is the same as that used
by HuggingFace for its cache management.
This allows you to share diffusers and ckpt model files easily with
other machine learning applications that use the HuggingFace
libraries. To do this, set the environment variable HF_HOME
before starting up InvokeAI to tell it what directory to
cache models in. To tell InvokeAI to use the standard HuggingFace
cache directory, you would set HF_HOME like this (Linux/Mac):
`export HF_HOME=~/.cache/huggingface`
Both HuggingFace and InvokeAI will fall back to the XDG_CACHE_HOME
environment variable if HF_HOME is not set; this path
takes precedence over `ROOTDIR/models` to allow for the same sharing
with other machine learning applications that use HuggingFace
libraries.
3. If you upgrade to InvokeAI 2.3.* from an earlier version, there
will be a one-time migration from the old models directory format
to the new one. You will see a message about this the first time
you start `invoke.py`.
4. Both the front and back ends of the model manager have been
rewritten to accommodate diffusers. You can import models using
their local file path, using their URLs, or their HuggingFace
repo_ids. On the command line, all these syntaxes work:
```
!import_model stabilityai/stable-diffusion-2-1-base
!import_model /opt/sd-models/sd-1.4.ckpt
!import_model https://huggingface.co/Fictiverse/Stable_Diffusion_PaperCut_Model/blob/main/PaperCut_v1.ckpt
```
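The cache-directory precedence described in item 2 above can be sketched as follows. This is only an illustration of the lookup order, not the actual implementation, and the fallback path names are assumptions:

```bash
# Resolve where diffusers models are cached: HF_HOME, then XDG_CACHE_HOME,
# then ROOTDIR/models (ROOTDIR being the invokeai runtime directory).
if [[ -n "${HF_HOME}" ]]; then
    models_cache="${HF_HOME}"
elif [[ -n "${XDG_CACHE_HOME}" ]]; then
    models_cache="${XDG_CACHE_HOME}/huggingface"
else
    models_cache="${INVOKEAI_ROOT:-$HOME/invokeai}/models"
fi
echo "diffusers models will be cached under: ${models_cache}"
```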
**KNOWN BUGS (15 January 2023)**
1. On CUDA systems, the 768 pixel stable-diffusion-2.0 and
stable-diffusion-2.1 models can only be run as `diffusers` models
when the `xformers` library is installed and configured. Without
`xformers`, InvokeAI returns black images.
2. Inpainting and outpainting have regressed in quality.
Both these issues are being actively worked on.
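As a possible workaround for the first issue, the `xformers` extra from the README's NVIDIA install line can be added to a CUDA installation; this mirrors the command shown earlier in this compare and is not an official fix:

```bash
# Reinstall InvokeAI with the xformers extra on a CUDA system
pip install "InvokeAI[xformers]" --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu117
```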
## v2.2.4 <small>(11 December 2022)</small>
**the `invokeai` directory**
Previously there were two directories to worry about, the directory that
contained the InvokeAI source code and the launcher scripts, and the `invokeai`
directory that contained the models files, embeddings, configuration and
outputs. With the 2.2.4 release, this dual system is done away with, and
everything, including the `invoke.bat` and `invoke.sh` launcher scripts, now
live in a directory named `invokeai`. By default this directory is located in
your home directory (e.g. `\Users\yourname` on Windows), but you can select
where it goes at install time.
After installation, you can delete the install directory (the one that the zip
file creates when it unpacks). Do **not** delete or move the `invokeai`
directory!
**Initialization file `invokeai/invokeai.init`**
You can place frequently-used startup options in this file, such as the default
number of steps or your preferred sampler. To keep everything in one place, this
file has now been moved into the `invokeai` directory and is named
`invokeai.init`.
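A hypothetical `invokeai.init` is sketched below. The flags shown are command-line switches that appear elsewhere in this compare; switches controlling the default step count or sampler would live here in the same way, and the exact set recognised depends on your version:

```bash
# invokeai.init -- frequently-used startup options, read at launch
--outdir=~/invokeai/outputs
--precision=float32
--no-nsfw_checker
```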
**To update from Version 2.2.3**
The easiest route is to download and unpack one of the 2.2.4 installer files.
When it asks you for the location of the `invokeai` runtime directory, respond
with the path to the directory that contains your 2.2.3 `invokeai`. That is, if
`invokeai` lives at `C:\Users\fred\invokeai`, then answer with `C:\Users\fred`
and answer "Y" when asked if you want to reuse the directory.
The `update.sh` (`update.bat`) script that came with the 2.2.3 source installer
does not know about the new directory layout and won't be fully functional.
**To update to 2.2.5 (and beyond) there's now an update path**
As they become available, you can update to more recent versions of InvokeAI
using an `update.sh` (`update.bat`) script located in the `invokeai` directory.
Running it without any arguments will install the most recent version of
InvokeAI. Alternatively, you can install a specific release by running the `update.sh`
script with an argument in the command shell. This syntax accepts the path to
the desired release's zip file, which you can find by clicking on the green
"Code" button on this repository's home page.
**Other 2.2.4 Improvements**
- Fix InvokeAI GUI initialization by @addianto in #1687
- fix link in documentation by @lstein in #1728
- Fix broken link by @ShawnZhong in #1736
- Remove reference to binary installer by @lstein in #1731
- documentation fixes for 2.2.3 by @lstein in #1740
- Modify installer links to point closer to the source installer by @ebr in
#1745
- add documentation warning about 1650/60 cards by @lstein in #1753
- Fix Linux source URL in installation docs by @andybearman in #1756
- Make install instructions discoverable in readme by @damian0815 in #1752
- typo fix by @ofirkris in #1755
- Non-interactive model download (support HUGGINGFACE_TOKEN) by @ebr in #1578
- fix(srcinstall): shell installer - cp scripts instead of linking by @tildebyte
in #1765
- stability and usage improvements to binary & source installers by @lstein in
#1760
- fix off-by-one bug in cross-attention-control by @damian0815 in #1774
- Eventually update APP_VERSION to 2.2.3 by @spezialspezial in #1768
- invoke script cds to its location before running by @lstein in #1805
- Make PaperCut and VoxelArt models load again by @lstein in #1730
- Fix --embedding_directory / --embedding_path not working by @blessedcoolant in
#1817
- Clean up readme by @hipsterusername in #1820
- Optimized Docker build with support for external working directory by @ebr in
#1544
- disable pushing the cloud container by @mauwii in #1831
- Fix docker push github action and expand with additional metadata by @ebr in
#1837
- Fix Broken Link To Notebook by @VedantMadane in #1821
- Account for flat models by @spezialspezial in #1766
- Update invoke.bat.in isolate environment variables by @lynnewu in #1833
- Arch Linux Specific PatchMatch Instructions & fixing conda install on linux by
@SammCheese in #1848
- Make force free GPU memory work in img2img by @addianto in #1844
- New installer by @lstein
## v2.2.3 <small>(2 December 2022)</small>
!!! Note
This point release removes references to the binary installer from the
installation guide. The binary installer is not stable at the current
time. First-time users are encouraged to use the "source" installer as
described in [Installing InvokeAI with the Source Installer](installation/deprecated_documentation/INSTALL_SOURCE.md).
With InvokeAI 2.2, this project now provides enthusiasts and professionals
with a robust workflow solution for creating AI-generated and human-facilitated
compositions. Additional enhancements have been made as well, improving safety,
ease of use, and installation.
Optimized for efficiency, InvokeAI needs only ~3.5GB of VRAM to generate a
512x768 image (and less for smaller images), and is compatible with
Windows/Linux/Mac (M1 & M2).
You can see the [release video](https://youtu.be/hIYBfDtKaus) here, which
introduces the main WebUI enhancement for version 2.2 -
[The Unified Canvas](features/UNIFIED_CANVAS.md). This new workflow is the
biggest enhancement added to the WebUI to date, and unlocks a stunning amount of
potential for users to create and iterate on their creations. The following
sections describe what's new for InvokeAI.
## v2.2.2 <small>(30 November 2022)</small>
!!! note
The binary installer is not ready for prime time. First-time users are recommended to install via the "source" installer accessible through the links at the bottom of this page.
With InvokeAI 2.2, this project now provides enthusiasts and professionals
with a robust workflow solution for creating AI-generated and human-facilitated
compositions. Additional enhancements have been made as well, improving safety,
ease of use, and installation.
Optimized for efficiency, InvokeAI needs only ~3.5GB of VRAM to generate a
512x768 image (and less for smaller images), and is compatible with
Windows/Linux/Mac (M1 & M2).
You can see the [release video](https://youtu.be/hIYBfDtKaus) here, which
introduces the main WebUI enhancement for version 2.2 -
[The Unified Canvas](https://invoke-ai.github.io/InvokeAI/features/UNIFIED_CANVAS/).
This new workflow is the biggest enhancement added to the WebUI to date, and
unlocks a stunning amount of potential for users to create and iterate on their
creations. The following sections describe what's new for InvokeAI.
## v2.2.0 <small>(2 December 2022)</small>
With InvokeAI 2.2, this project now provides enthusiasts and professionals
with a robust workflow solution for creating AI-generated and human-facilitated
compositions. Additional enhancements have been made as well, improving safety,
ease of use, and installation.
Optimized for efficiency, InvokeAI needs only ~3.5GB of VRAM to generate a
512x768 image (and less for smaller images), and is compatible with
Windows/Linux/Mac (M1 & M2).
You can see the [release video](https://youtu.be/hIYBfDtKaus) here, which
introduces the main WebUI enhancement for version 2.2 -
[The Unified Canvas](features/UNIFIED_CANVAS.md). This new workflow is the
biggest enhancement added to the WebUI to date, and unlocks a stunning amount of
potential for users to create and iterate on their creations. The following
sections describe what's new for InvokeAI.
## v2.1.3 <small>(13 November 2022)</small>
- A choice of installer scripts that automate installation and configuration.
See
[Installation](installation/index.md).
- A streamlined manual installation process that works for both Conda and
PIP-only installs. See
[Manual Installation](installation/020_INSTALL_MANUAL.md).
- The ability to save frequently-used startup options (model to load, steps,
  sampler, etc.) in a `.invokeai` file. See
  [Client](features/CLI.md).
- Support for AMD GPU cards (non-CUDA) on Linux machines.
- Multiple bugs and edge cases squashed.
## v2.1.0 <small>(2 November 2022)</small>
- update mac instructions to use invokeai for env name by @willwillems in #1030
- Update .gitignore by @blessedcoolant in #1040
- reintroduce fix for m1 from #579 missing after merge by @skurovec in #1056
- Update Stable_Diffusion_AI_Notebook.ipynb (Take 2) by @ChloeL19 in #1060
- Print out the device type which is used by @manzke in #1073
- Hires Addition by @hipsterusername in #1063
- fix for "1 leaked semaphore objects to clean up at shutdown" on M1 by
@skurovec in #1081
- Forward dream.py to invoke.py using the same interpreter, add deprecation
warning by @db3000 in #1077
- fix noisy images at high step counts by @lstein in #1086
- Generalize facetool strength argument by @db3000 in #1078
- Enable fast switching among models at the invoke> command line by @lstein in
#1066
- Fix Typo, committed changing ldm environment to invokeai by @jdries3 in #1095
- Update generate.py by @unreleased in #1109
- Update 'ldm' env to 'invokeai' in troubleshooting steps by @19wolf in #1125
- Fixed documentation typos and resolved merge conflicts by @rupeshs in #1123
- Fix broken doc links, fix malaprop in the project subtitle by @majick in #1131
- Only output facetool parameters if enhancing faces by @db3000 in #1119
- Update gitignore to ignore codeformer weights at new location by
@spezialspezial in #1136
- fix links to point to invoke-ai.github.io #1117 by @mauwii in #1143
- Rework-mkdocs by @mauwii in #1144
- add option to CLI and pngwriter that allows user to set PNG compression level
by @lstein in #1127
- Fix img2img DDIM index out of bound by @wfng92 in #1137
- Fix gh actions by @mauwii in #1128
- Add text prompt to inpaint mask support by @lstein in #1133
- Respect http[s] protocol when making socket.io middleware by @damian0815 in
#976
- WebUI: Adds Codeformer support by @psychedelicious in #1151
- Skips normalizing prompts for web UI metadata by @psychedelicious in #1165
- Add Asymmetric Tiling by @carson-katri in #1132
- Web UI: Increases max CFG Scale to 200 by @psychedelicious in #1172
- Corrects color channels in face restoration; Fixes #1167 by @psychedelicious
in #1175
- Flips channels using array slicing instead of using OpenCV by @psychedelicious
in #1178
- Fix typo in docs: s/Formally/Formerly by @noodlebox in #1176
- fix clipseg loading problems by @lstein in #1177
- Correct color channels in upscale using array slicing by @wfng92 in #1181
- Web UI: Filters existing images when adding new images; Fixes #1085 by
@psychedelicious in #1171
- fix a number of bugs in textual inversion by @lstein in #1190
- Improve !fetch, add !replay command by @ArDiouscuros in #882
- Fix generation of image with s>1000 by @holstvoogd in #951
- Web UI: Gallery improvements by @psychedelicious in #1198
- Update CLI.md by @krummrey in #1211
- outcropping improvements by @lstein in #1207
- add support for loading VAE autoencoders by @lstein in #1216
- remove duplicate fix_func for MPS by @wfng92 in #1210
- Metadata storage and retrieval fixes by @lstein in #1204
- nix: add shell.nix file by @Cloudef in #1170
- Web UI: Changes vite dist asset paths to relative by @psychedelicious in #1185
- Web UI: Removes isDisabled from PromptInput by @psychedelicious in #1187
- Allow user to generate images with initial noise as on M1 / mps system by
@ArDiouscuros in #981
- feat: adding filename format template by @plucked in #968
- Web UI: Fixes broken bundle by @psychedelicious in #1242
- Support runwayML custom inpainting model by @lstein in #1243
- Update IMG2IMG.md by @talitore in #1262
- New dockerfile - including a build- and a run- script as well as a GH-Action
by @mauwii in #1233
- cut over from karras to model noise schedule for higher steps by @lstein in
#1222
- Prompt tweaks by @lstein in #1268
- Outpainting implementation by @Kyle0654 in #1251
- fixing aspect ratio on hires by @tjennings in #1249
- Fix-build-container-action by @mauwii in #1274
- handle all unicode characters by @damian0815 in #1276
- adds models.user.yml to .gitignore by @JakeHL in #1281
- remove debug branch, set fail-fast to false by @mauwii in #1284
- Protect-secrets-on-pr by @mauwii in #1285
- Web UI: Adds initial inpainting implementation by @psychedelicious in #1225
- fix environment-mac.yml - tested on x64 and arm64 by @mauwii in #1289
- Use proper authentication to download model by @mauwii in #1287
- Prevent indexing error for mode RGB by @spezialspezial in #1294
- Integrate sd-v1-5 model into test matrix (easily expandable), remove
unecesarry caches by @mauwii in #1293
- add --no-interactive to configure_invokeai step by @mauwii in #1302
- 1-click installer and updater. Uses micromamba to install git and conda into a
contained environment (if necessary) before running the normal installation
script by @cmdr2 in #1253
- configure_invokeai.py script downloads the weight files by @lstein in #1290
## v2.0.1 <small>(13 October 2022)</small>
- fix noisy images at high step count when using k\* samplers
- dream.py script now calls invoke.py module directly rather than via a new
python process (which could break the environment)
## v2.0.0 <small>(9 October 2022)</small>
- `dream.py` script renamed `invoke.py`. A `dream.py` script wrapper remains for
backward compatibility.
- Completely new WebGUI - launch with `python3 scripts/invoke.py --web`
- Support for [inpainting](features/INPAINTING.md) and
[outpainting](features/OUTPAINTING.md)
- img2img runs on all k\* samplers
- Support for
[negative prompts](features/PROMPTS.md#negative-and-unconditioned-prompts)
- Support for CodeFormer face reconstruction
- Support for Textual Inversion on Macintoshes
- Support in both WebGUI and CLI for
[post-processing of previously-generated images](features/POSTPROCESS.md)
using facial reconstruction, ESRGAN upscaling, outcropping (similar to DALL-E
infinite canvas), and "embiggen" upscaling. See the `!fix` command.
- New `--hires` option on `invoke>` line allows
[larger images to be created without duplicating elements](features/CLI.md#this-is-an-example-of-txt2img),
at the cost of some performance.
- New `--perlin` and `--threshold` options allow you to add and control
variation during image generation (see
[Thresholding and Perlin Noise Initialization](features/OTHER.md#thresholding-and-perlin-noise-initialization-options))
- Extensive metadata now written into PNG files, allowing reliable regeneration
of images and tweaking of previous settings.
- Command-line completion in `invoke.py` now works on Windows, Linux and Mac
platforms.
- Improved [command-line completion behavior](features/CLI.md). New commands
  added:
- List command-line history with `!history`
- Search command-line history with `!search`
- Clear history with `!clear`
- Deprecated `--full_precision` / `-F`. Simply omit it and `invoke.py` will
  auto-configure. To override the automatic choice, pass the new flag, e.g.
  `--precision=float32` (see the example below).
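A brief illustrative session covering the options above (the prompt, filename,
and search term are hypothetical; the flags and `!` commands are the ones listed
in this section):
```
# launch with an explicit precision instead of the deprecated --full_precision / -F
python3 scripts/invoke.py --precision=float32

# at the invoke> prompt:
invoke> a watercolor fox -s 50 -A k_lms --hires
invoke> !history                                    # list command-line history
invoke> !search watercolor                          # search command-line history
invoke> !fix outputs/000001.1234567890.png -U 2.0   # post-process a previous image
invoke> !clear                                      # clear history
```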
## v1.14 <small>(11 September 2022)</small>
- Memory optimizations for small-RAM cards. 512x512 now possible on 4 GB GPUs.
- Full support for Apple hardware with M1 or M2 chips.
- Add "seamless mode" for circular tiling of image. Generates beautiful effects.
([prixt](https://github.com/prixt)).
- Inpainting support.
- Improved web server GUI.
- Lots of code and documentation cleanups.
## v1.13 <small>(3 September 2022)</small>
- Support image variations (see [VARIATIONS](features/VARIATIONS.md))
  ([Kevin Gibbons](https://github.com/bakkot) and many contributors and
  reviewers)
- Supports a Google Colab notebook for a standalone server running on Google
hardware [Arturo Mendivil](https://github.com/artmen1516)
- WebUI supports GFPGAN/ESRGAN facial reconstruction and upscaling
[Kevin Gibbons](https://github.com/bakkot)
- WebUI supports incremental display of in-progress images during generation
[Kevin Gibbons](https://github.com/bakkot)
- A new configuration file scheme that allows new models (including upcoming
stable-diffusion-v1.5) to be added without altering the code.
([David Wager](https://github.com/maddavid12))
- Can specify --grid on invoke.py command line as the default.
- Miscellaneous internal bug and stability fixes.
- Works on M1 Apple hardware.
- Multiple bug fixes.
---
@@ -71,49 +452,59 @@ title: Changelog
- Improved file handling, including ability to read prompts from standard input.
  (kudos to [Yunsaki](https://github.com/yunsaki))
- The web server is now integrated with the invoke.py script. Invoke by adding
--web to the invoke.py command arguments.
- Face restoration and upscaling via GFPGAN and Real-ESRGAN are now automatically
enabled if the GFPGAN directory is located as a sibling to Stable Diffusion.
VRAM requirements are modestly reduced. Thanks to both
[Blessedcoolant](https://github.com/blessedcoolant) and
[Oceanswave](https://github.com/oceanswave) for their work on this.
- You can now swap samplers on the invoke> command line.
[Blessedcoolant](https://github.com/blessedcoolant)
---
## v1.11 <small>(26 August 2022)</small>
- NEW FEATURE: Support upscaling and face enhancement using the GFPGAN module.
  (kudos to [Oceanswave](https://github.com/Oceanswave))
- You now can specify a seed of -1 to use the previous image's seed, -2 to use
the seed for the image generated before that, etc. Seed memory only extends
back to the previous command, but will work on all images generated with the
-n# switch.
- Variant generation support temporarily disabled pending more general solution.
- Created a feature branch named **yunsaki-morphing-invoke** which adds
  experimental support for iteratively modifying the prompt and its parameters.
  Please see
  [Pull Request #86](https://github.com/lstein/stable-diffusion/pull/86) for
  a synopsis of how this works. Note that when this feature is eventually added
  to the main branch, it may be modified significantly.
---
## v1.10 <small>(25 August 2022)</small>
- A barebones but fully functional interactive web server for online generation
of txt2img and img2img.
---
## v1.09 <small>(24 August 2022)</small>
- A new -v option allows you to generate multiple variants of an initial image
  in img2img mode. (kudos to [Oceanswave](https://github.com/Oceanswave).
  [See this discussion in the PR for examples and details on use](https://github.com/lstein/stable-diffusion/pull/71#issuecomment-1226700810))
- Added ability to personalize text to image generation (kudos to
[Oceanswave](https://github.com/Oceanswave) and
[nicolai256](https://github.com/nicolai256))
- Enabled all of the samplers from k_diffusion
---
## v1.08 <small>(24 August 2022)</small>
- Escape single quotes on the invoke> command before trying to parse. This
avoids parse errors.
- Removed instruction to get Python3.8 as first step in Windows install.
Anaconda3 does it for you.
- Added bounds checks for numeric arguments that could cause crashes.
@@ -123,34 +514,36 @@ title: Changelog
## v1.07 <small>(23 August 2022)</small>
- Image filenames will now never fill gaps in the sequence, but will be assigned
the next higher name in the chosen directory. This ensures that the alphabetic
and chronological sort orders are the same.
---
## v1.06 <small>(23 August 2022)</small>
- Added weighted prompt support contributed by
[xraxra](https://github.com/xraxra)
- Example of using weighted prompts to tweak a demonic figure contributed by
[bmaltais](https://github.com/bmaltais)
---
## v1.05 <small>(22 August 2022 - after the drop)</small>
- Filenames now use the following formats:
      000010.95183149.png -- Two files produced by the same command (e.g. -n2),
      000010.26742632.png -- distinguished by a different seed.
      000011.455191342.01.png -- Two files produced by the same command using
      000011.455191342.02.png -- a batch size>1 (e.g. -b2). They have the same seed.
      000011.4160627868.grid#1-4.png -- a grid of four images (-g); the whole grid
                                        can be regenerated with the indicated key
- It should no longer be possible for one image to overwrite another
- You can use the "cd" and "pwd" commands at the invoke> prompt to set and retrieve
the path of the output directory.
- You can use the "cd" and "pwd" commands at the invoke> prompt to set and
retrieve the path of the output directory.
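An illustrative exchange (the directory path is hypothetical):
```
invoke> pwd                         # print the current output directory
invoke> cd /home/me/art/outputs     # send future images to a different directory
```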
---
@@ -164,26 +557,28 @@ title: Changelog
## v1.03 <small>(22 August 2022)</small>
- The original txt2img and img2img scripts from the CompVis repository have been
  moved into a subfolder named "orig_scripts", to reduce confusion.
---
## v1.02 <small>(21 August 2022)</small>
- A copy of the prompt and all of its switches and options is now stored in the
corresponding image in a tEXt metadata field named "Dream". You can read the
prompt using scripts/images2prompt.py, or an image editor that allows you to
explore the full metadata. **Please run "conda env update" to load the k_lms
dependencies!!**
---
## v1.01 <small>(21 August 2022)</small>
- added k_lms sampling. **Please run "conda env update" to load the k_lms
  dependencies!!**
- use half precision arithmetic by default, resulting in faster execution and
  lower memory requirements. Pass argument --full_precision to invoke.py to get
  slower but more accurate image generation.
---

Binary image assets changed in this diff are not shown (24 files added, 32 files removed).


@@ -1,116 +0,0 @@
## 000001.1863159593.png
![](000001.1863159593.png)
banana sushi -s 50 -S 1863159593 -W 512 -H 512 -C 7.5 -A k_lms
## 000002.1151955949.png
![](000002.1151955949.png)
banana sushi -s 50 -S 1151955949 -W 512 -H 512 -C 7.5 -A plms
## 000003.2736230502.png
![](000003.2736230502.png)
banana sushi -s 50 -S 2736230502 -W 512 -H 512 -C 7.5 -A ddim
## 000004.42.png
![](000004.42.png)
banana sushi -s 50 -S 42 -W 512 -H 512 -C 7.5 -A k_lms
## 000005.42.png
![](000005.42.png)
banana sushi -s 50 -S 42 -W 512 -H 512 -C 7.5 -A k_lms
## 000006.478163327.png
![](000006.478163327.png)
banana sushi -s 50 -S 478163327 -W 640 -H 448 -C 7.5 -A k_lms
## 000007.2407640369.png
![](000007.2407640369.png)
banana sushi -s 50 -S 42 -W 512 -H 512 -C 7.5 -A k_lms -V 2407640369:0.1
## 000008.2772421987.png
![](000008.2772421987.png)
banana sushi -s 50 -S 42 -W 512 -H 512 -C 7.5 -A k_lms -V 2772421987:0.1
## 000009.3532317557.png
![](000009.3532317557.png)
banana sushi -s 50 -S 42 -W 512 -H 512 -C 7.5 -A k_lms -V 3532317557:0.1
## 000010.2028635318.png
![](000010.2028635318.png)
banana sushi -s 50 -S 2028635318 -W 512 -H 512 -C 7.5 -A k_lms
## 000011.1111168647.png
![](000011.1111168647.png)
pond with waterlillies -s 50 -S 1111168647 -W 512 -H 512 -C 7.5 -A k_lms
## 000012.1476370516.png
![](000012.1476370516.png)
pond with waterlillies -s 50 -S 1476370516 -W 512 -H 512 -C 7.5 -A k_lms
## 000013.4281108706.png
![](000013.4281108706.png)
banana sushi -s 50 -S 4281108706 -W 960 -H 960 -C 7.5 -A k_lms
## 000014.2396987386.png
![](000014.2396987386.png)
old sea captain with crow on shoulder -s 50 -S 2396987386 -W 512 -H 512 -C 7.5 -I docs/assets/preflight-checks/inputs/Lincoln-and-Parrot-512.png -A k_lms -f 0.75
## 000015.1252923272.png
![](000015.1252923272.png)
old sea captain with crow on shoulder -s 50 -S 1252923272 -W 512 -H 512 -C 7.5 -I docs/assets/preflight-checks/inputs/Lincoln-and-Parrot-512-transparent.png -A k_lms -f 0.75
## 000016.2633891320.png
![](000016.2633891320.png)
old sea captain with crow on shoulder -s 50 -S 2633891320 -W 512 -H 512 -C 7.5 -I docs/assets/preflight-checks/inputs/Lincoln-and-Parrot-512.png -A plms -f 0.75
## 000017.1134411920.png
![](000017.1134411920.png)
old sea captain with crow on shoulder -s 50 -S 1134411920 -W 512 -H 512 -C 7.5 -I docs/assets/preflight-checks/inputs/Lincoln-and-Parrot-512.png -A k_euler_a -f 0.75
## 000018.47.png
![](000018.47.png)
big red dog playing with cat -s 50 -S 47 -W 512 -H 512 -C 7.5 -A k_lms
## 000019.47.png
![](000019.47.png)
big red++++ dog playing with cat -s 50 -S 47 -W 512 -H 512 -C 7.5 -A k_lms
## 000020.47.png
![](000020.47.png)
big red dog playing with cat+++ -s 50 -S 47 -W 512 -H 512 -C 7.5 -A k_lms
## 000021.47.png
![](000021.47.png)
big (red dog).swap(tiger) playing with cat -s 50 -S 47 -W 512 -H 512 -C 7.5 -A k_lms
## 000022.47.png
![](000022.47.png)
dog:1,cat:2 -s 50 -S 47 -W 512 -H 512 -C 7.5 -A k_lms
## 000023.47.png
![](000023.47.png)
dog:2,cat:1 -s 50 -S 47 -W 512 -H 512 -C 7.5 -A k_lms
## 000024.1029061431.png
![](000024.1029061431.png)
medusa with cobras -s 50 -S 1029061431 -W 512 -H 512 -C 7.5 -I docs/assets/preflight-checks/inputs/curly.png -A k_lms -f 0.75 -tm hair
## 000025.1284519352.png
![](000025.1284519352.png)
bearded man -s 50 -S 1284519352 -W 512 -H 512 -C 7.5 -I docs/assets/preflight-checks/inputs/curly.png -A k_lms -f 0.75 -tm face
## curly.942491079.gfpgan.png
![](curly.942491079.gfpgan.png)
!fix ./docs/assets/preflight-checks/inputs/curly.png -s 50 -S 942491079 -W 512 -H 512 -C 7.5 -A k_lms -G 0.8 -ft gfpgan -U 2.0 0.75
## curly.942491079.outcrop.png
![](curly.942491079.outcrop.png)
!fix ./docs/assets/preflight-checks/inputs/curly.png -s 50 -S 942491079 -W 512 -H 512 -C 7.5 -A k_lms -c top 64
## curly.942491079.outpaint.png
![](curly.942491079.outpaint.png)
!fix ./docs/assets/preflight-checks/inputs/curly.png -s 50 -S 942491079 -W 512 -H 512 -C 7.5 -A k_lms -D top 64
## curly.942491079.outcrop-01.png
![](curly.942491079.outcrop-01.png)
!fix ./docs/assets/preflight-checks/inputs/curly.png -s 50 -S 942491079 -W 512 -H 512 -C 7.5 -A k_lms -c top 64


@@ -1,29 +0,0 @@
outputs/preflight/000001.1863159593.png: banana sushi -s 50 -S 1863159593 -W 512 -H 512 -C 7.5 -A k_lms
outputs/preflight/000002.1151955949.png: banana sushi -s 50 -S 1151955949 -W 512 -H 512 -C 7.5 -A plms
outputs/preflight/000003.2736230502.png: banana sushi -s 50 -S 2736230502 -W 512 -H 512 -C 7.5 -A ddim
outputs/preflight/000004.42.png: banana sushi -s 50 -S 42 -W 512 -H 512 -C 7.5 -A k_lms
outputs/preflight/000005.42.png: banana sushi -s 50 -S 42 -W 512 -H 512 -C 7.5 -A k_lms
outputs/preflight/000006.478163327.png: banana sushi -s 50 -S 478163327 -W 640 -H 448 -C 7.5 -A k_lms
outputs/preflight/000007.2407640369.png: banana sushi -s 50 -S 42 -W 512 -H 512 -C 7.5 -A k_lms -V 2407640369:0.1
outputs/preflight/000008.2772421987.png: banana sushi -s 50 -S 42 -W 512 -H 512 -C 7.5 -A k_lms -V 2772421987:0.1
outputs/preflight/000009.3532317557.png: banana sushi -s 50 -S 42 -W 512 -H 512 -C 7.5 -A k_lms -V 3532317557:0.1
outputs/preflight/000010.2028635318.png: banana sushi -s 50 -S 2028635318 -W 512 -H 512 -C 7.5 -A k_lms
outputs/preflight/000011.1111168647.png: pond with waterlillies -s 50 -S 1111168647 -W 512 -H 512 -C 7.5 -A k_lms
outputs/preflight/000012.1476370516.png: pond with waterlillies -s 50 -S 1476370516 -W 512 -H 512 -C 7.5 -A k_lms
outputs/preflight/000013.4281108706.png: banana sushi -s 50 -S 4281108706 -W 960 -H 960 -C 7.5 -A k_lms
outputs/preflight/000014.2396987386.png: old sea captain with crow on shoulder -s 50 -S 2396987386 -W 512 -H 512 -C 7.5 -I docs/assets/preflight-checks/inputs/Lincoln-and-Parrot-512.png -A k_lms -f 0.75
outputs/preflight/000015.1252923272.png: old sea captain with crow on shoulder -s 50 -S 1252923272 -W 512 -H 512 -C 7.5 -I docs/assets/preflight-checks/inputs/Lincoln-and-Parrot-512-transparent.png -A k_lms -f 0.75
outputs/preflight/000016.2633891320.png: old sea captain with crow on shoulder -s 50 -S 2633891320 -W 512 -H 512 -C 7.5 -I docs/assets/preflight-checks/inputs/Lincoln-and-Parrot-512.png -A plms -f 0.75
outputs/preflight/000017.1134411920.png: old sea captain with crow on shoulder -s 50 -S 1134411920 -W 512 -H 512 -C 7.5 -I docs/assets/preflight-checks/inputs/Lincoln-and-Parrot-512.png -A k_euler_a -f 0.75
outputs/preflight/000018.47.png: big red dog playing with cat -s 50 -S 47 -W 512 -H 512 -C 7.5 -A k_lms
outputs/preflight/000019.47.png: big red++++ dog playing with cat -s 50 -S 47 -W 512 -H 512 -C 7.5 -A k_lms
outputs/preflight/000020.47.png: big red dog playing with cat+++ -s 50 -S 47 -W 512 -H 512 -C 7.5 -A k_lms
outputs/preflight/000021.47.png: big (red dog).swap(tiger) playing with cat -s 50 -S 47 -W 512 -H 512 -C 7.5 -A k_lms
outputs/preflight/000022.47.png: dog:1,cat:2 -s 50 -S 47 -W 512 -H 512 -C 7.5 -A k_lms
outputs/preflight/000023.47.png: dog:2,cat:1 -s 50 -S 47 -W 512 -H 512 -C 7.5 -A k_lms
outputs/preflight/000024.1029061431.png: medusa with cobras -s 50 -S 1029061431 -W 512 -H 512 -C 7.5 -I docs/assets/preflight-checks/inputs/curly.png -A k_lms -f 0.75 -tm hair
outputs/preflight/000025.1284519352.png: bearded man -s 50 -S 1284519352 -W 512 -H 512 -C 7.5 -I docs/assets/preflight-checks/inputs/curly.png -A k_lms -f 0.75 -tm face
outputs/preflight/curly.942491079.gfpgan.png: !fix ./docs/assets/preflight-checks/inputs/curly.png -s 50 -S 942491079 -W 512 -H 512 -C 7.5 -A k_lms -G 0.8 -ft gfpgan -U 2.0 0.75
outputs/preflight/curly.942491079.outcrop.png: !fix ./docs/assets/preflight-checks/inputs/curly.png -s 50 -S 942491079 -W 512 -H 512 -C 7.5 -A k_lms -c top 64
outputs/preflight/curly.942491079.outpaint.png: !fix ./docs/assets/preflight-checks/inputs/curly.png -s 50 -S 942491079 -W 512 -H 512 -C 7.5 -A k_lms -D top 64
outputs/preflight/curly.942491079.outcrop-01.png: !fix ./docs/assets/preflight-checks/inputs/curly.png -s 50 -S 942491079 -W 512 -H 512 -C 7.5 -A k_lms -c top 64


@@ -1,61 +0,0 @@
# outputs/preflight/000001.1863159593.png
banana sushi -s 50 -S 1863159593 -W 512 -H 512 -C 7.5 -A k_lms
# outputs/preflight/000002.1151955949.png
banana sushi -s 50 -S 1151955949 -W 512 -H 512 -C 7.5 -A plms
# outputs/preflight/000003.2736230502.png
banana sushi -s 50 -S 2736230502 -W 512 -H 512 -C 7.5 -A ddim
# outputs/preflight/000004.42.png
banana sushi -s 50 -S 42 -W 512 -H 512 -C 7.5 -A k_lms
# outputs/preflight/000005.42.png
banana sushi -s 50 -S 42 -W 512 -H 512 -C 7.5 -A k_lms
# outputs/preflight/000006.478163327.png
banana sushi -s 50 -S 478163327 -W 640 -H 448 -C 7.5 -A k_lms
# outputs/preflight/000007.2407640369.png
banana sushi -s 50 -S 42 -W 512 -H 512 -C 7.5 -A k_lms -V 2407640369:0.1
# outputs/preflight/000007.2772421987.png
banana sushi -s 50 -S 42 -W 512 -H 512 -C 7.5 -A k_lms -V 2772421987:0.1
# outputs/preflight/000007.3532317557.png
banana sushi -s 50 -S 42 -W 512 -H 512 -C 7.5 -A k_lms -V 3532317557:0.1
# outputs/preflight/000008.2028635318.png
banana sushi -s 50 -S 2028635318 -W 512 -H 512 -C 7.5 -A k_lms
# outputs/preflight/000009.1111168647.png
pond with waterlillies -s 50 -S 1111168647 -W 512 -H 512 -C 7.5 -A k_lms
# outputs/preflight/000010.1476370516.png
pond with waterlillies -s 50 -S 1476370516 -W 512 -H 512 -C 7.5 -A k_lms --seamless
# outputs/preflight/000011.4281108706.png
banana sushi -s 50 -S 4281108706 -W 960 -H 960 -C 7.5 -A k_lms
# outputs/preflight/000012.2396987386.png
old sea captain with crow on shoulder -s 50 -S 2396987386 -W 512 -H 512 -C 7.5 -I docs/assets/preflight-checks/inputs/Lincoln-and-Parrot-512.png -A k_lms -f 0.75
# outputs/preflight/000013.1252923272.png
old sea captain with crow on shoulder -s 50 -S 1252923272 -W 512 -H 512 -C 7.5 -I docs/assets/preflight-checks/inputs/Lincoln-and-Parrot-512-transparent.png -A k_lms -f 0.75
# outputs/preflight/000014.2633891320.png
old sea captain with crow on shoulder -s 50 -S 2633891320 -W 512 -H 512 -C 7.5 -I docs/assets/preflight-checks/inputs/Lincoln-and-Parrot-512.png -A plms -f 0.75
# outputs/preflight/000015.1134411920.png
old sea captain with crow on shoulder -s 50 -S 1134411920 -W 512 -H 512 -C 7.5 -I docs/assets/preflight-checks/inputs/Lincoln-and-Parrot-512.png -A k_euler_a -f 0.75
# outputs/preflight/000016.42.png
big red dog playing with cat -s 50 -S 47 -W 512 -H 512 -C 7.5 -A k_lms
# outputs/preflight/000017.42.png
big red++++ dog playing with cat -s 50 -S 47 -W 512 -H 512 -C 7.5 -A k_lms
# outputs/preflight/000018.42.png
big red dog playing with cat+++ -s 50 -S 47 -W 512 -H 512 -C 7.5 -A k_lms
# outputs/preflight/000019.42.png
big (red dog).swap(tiger) playing with cat -s 50 -S 47 -W 512 -H 512 -C 7.5 -A k_lms
# outputs/preflight/000020.42.png
dog:1,cat:2 -s 50 -S 47 -W 512 -H 512 -C 7.5 -A k_lms
# outputs/preflight/000021.42.png
dog:2,cat:1 -s 50 -S 47 -W 512 -H 512 -C 7.5 -A k_lms
# outputs/preflight/000022.1029061431.png
medusa with cobras -s 50 -S 1029061431 -W 512 -H 512 -C 7.5 -I docs/assets/preflight-checks/inputs/curly.png -A k_lms -f 0.75 -tm hair
# outputs/preflight/000023.1284519352.png
bearded man -s 50 -S 1284519352 -W 512 -H 512 -C 7.5 -I docs/assets/preflight-checks/inputs/curly.png -A k_lms -f 0.75 -tm face
# outputs/preflight/000024.curly.hair.deselected.png
!mask -I docs/assets/preflight-checks/inputs/curly.png -tm hair
# outputs/preflight/curly.942491079.gfpgan.png
!fix ./docs/assets/preflight-checks/inputs/curly.png -U2 -G0.8
# outputs/preflight/curly.942491079.outcrop.png
!fix ./docs/assets/preflight-checks/inputs/curly.png -c top 64
# outputs/preflight/curly.942491079.outpaint.png
!fix ./docs/assets/preflight-checks/inputs/curly.png -D top 64
# outputs/preflight/curly.942491079.outcrop-01.png
!switch inpainting-1.5
!fix ./docs/assets/preflight-checks/inputs/curly.png -c top 64

Some files were not shown because too many files have changed in this diff.