Jordan
686f6ef8d6
Merge branch 'main' into add_lora_support
2023-02-21 18:35:11 -08:00
Jordan
f70b7272f3
cleanup / concept of loading through diffusers
2023-02-21 19:33:39 -07:00
blessedcoolant
9e5aa645a7
Fix crashing when using 2.1 model (#2757)
...
We now require more free memory to avoid attention slicing. 17.5% free
was not sufficient headroom in all cases, so now we require 25%.
2023-02-22 08:03:51 +13:00
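For context, a minimal sketch of the kind of free-VRAM check this threshold change implies, using `torch.cuda.mem_get_info`; the helper name and placement are assumptions, not InvokeAI's actual code:

```python
import torch

def should_slice_attention(device: torch.device, min_free_fraction: float = 0.25) -> bool:
    """Hypothetical helper: keep attention slicing enabled unless at least
    min_free_fraction of total VRAM is free (this change raised the
    threshold from 17.5% to 25%)."""
    if device.type != "cuda":
        return False
    free_bytes, total_bytes = torch.cuda.mem_get_info(device)
    return free_bytes / total_bytes < min_free_fraction
```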
Jonathan
71bbd78574
Fix crashing when using 2.1 model
...
We now require more free memory to avoid attention slicing. 17.5% free was not sufficient headroom, so now we require 25%.
2023-02-21 12:35:03 -06:00
Jonathan
3ab9d02883
Fixed embiggening crash due to clear_cuda_cache not being passed on and bad cuda stats initialization. (#2756)
2023-02-22 06:12:24 +13:00
Jordan
24d92979db
fix typo
2023-02-21 02:08:02 -07:00
Jordan
c669336d6b
Update lora_manager.py
2023-02-21 02:05:11 -07:00
Jordan
5529309e73
adjusting back to hooks, forcing to be last in execution
2023-02-21 01:34:06 -07:00
Jordan
49c0516602
change hook to override
2023-02-20 23:45:57 -07:00
Jordan
c1c62f770f
Merge branch 'main' into add_lora_support
2023-02-20 20:33:59 -08:00
Jordan
e2b6dfeeb9
Update generate.py
2023-02-20 21:33:20 -07:00
Jordan
8f527c2b2d
Merge pull request #2 from jordanramstad/prompt-fix
...
fix prompt
2023-02-20 20:11:00 -08:00
neecapp
3732af63e8
fix prompt
2023-02-20 23:06:05 -05:00
Lincoln Stein
7fadd5e5c4
performance: low-memory option for calculating guidance sequentially (#2732)
...
In theory, this reduces peak memory consumption by running the conditioned
and unconditioned predictions one after the other instead of in a single
mini-batch.
In practice, it doesn't reduce the reported "Max VRAM used for this
generation" for me, even without xformers. (But it does slow things down
by a good 18%.)
That suggests the peak memory usage is during VAE decoding, not the
diffusion UNet, but your mileage may vary. It does [improve things for gogurt's 16 GB
M1](https://github.com/invoke-ai/InvokeAI/pull/2732#issuecomment-1436187407),
so it seems worthwhile.
To try it out, use the `--sequential_guidance` option:
`2dded68267/ldm/invoke/args.py` (L487-L492)
2023-02-20 23:00:54 -05:00
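As an illustration of the trade-off `--sequential_guidance` makes, here is a hedged sketch of classifier-free guidance with an optional sequential mode; the `unet` callable and its diffusers-style `.sample` output are assumptions, not InvokeAI's actual implementation:

```python
import torch

def guided_noise_prediction(unet, latents, t, uncond_emb, cond_emb,
                            guidance_scale, sequential=False):
    """Sketch: run the unconditioned and conditioned predictions one after
    the other instead of as a single mini-batch, trading speed for lower
    peak VRAM."""
    if sequential:
        noise_uncond = unet(latents, t, encoder_hidden_states=uncond_emb).sample
        noise_cond = unet(latents, t, encoder_hidden_states=cond_emb).sample
    else:
        noise_uncond, noise_cond = unet(
            torch.cat([latents, latents]), t,
            encoder_hidden_states=torch.cat([uncond_emb, cond_emb]),
        ).sample.chunk(2)
    return noise_uncond + guidance_scale * (noise_cond - noise_uncond)
```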
Lincoln Stein
4c2a588e1f
Merge branch 'main' into perf/lowmem_sequential_guidance
2023-02-20 22:40:31 -05:00
Lincoln Stein
5f9de762ff
update installation docs for 2.3.1 installer screens (#2749)
...
This PR updates the manual page for automatic installation, and contains
screenshots of the new installer screens.
2023-02-20 22:40:02 -05:00
Jordan
de89041779
optimize functions for unloading
2023-02-20 17:02:36 -07:00
Jordan
488326dd95
Merge branch 'add_lora_support' of https://github.com/jordanramstad/InvokeAI into add_lora_support
2023-02-20 16:50:16 -07:00
Jordan
c3edede73f
add notes and adjust functions
2023-02-20 16:49:59 -07:00
Jordan
6e730bd654
Merge branch 'main' into add_lora_support
2023-02-20 15:34:52 -08:00
Jordan
884a5543c7
adjust loader to use a settings dict
2023-02-20 16:33:53 -07:00
Jordan
ac972ebbe3
update prompt setup so LoRAs can be loaded in other ways
2023-02-20 16:06:30 -07:00
Lincoln Stein
b6ed5eafd6
update installation docs for 2.3.1 installer screens
2023-02-20 17:24:52 -05:00
Jordan
3c6c18b34c
cleanup suggestions from neecap
2023-02-20 15:19:29 -07:00
blessedcoolant
694d5aa2e8
Add 'update' action to launcher script (#2636)
...
- Adds an update action to the launcher script.
- This action calls a new python script, `invokeai-update`, which prompts
the user to update to the latest release version, the main development
version, or an arbitrary git tag or branch name.
- It then uses `pip` to update to whatever tag was specified.
2023-02-21 11:17:22 +13:00
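For illustration only, a hedged sketch of the kind of `pip` invocation such an update action reduces to; the real `invokeai-update` script's prompts and package source may differ:

```python
import subprocess
import sys

def update_invokeai(tag_or_branch: str = "main") -> None:
    """Hypothetical sketch: install the requested tag or branch of InvokeAI
    with pip, roughly what the launcher's new 'update' action asks for."""
    spec = f"git+https://github.com/invoke-ai/InvokeAI.git@{tag_or_branch}"
    subprocess.run(
        [sys.executable, "-m", "pip", "install", "--upgrade", spec],
        check=True,
    )
```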
Lincoln Stein
833079140b
Merge branch 'main' into enhance/update-menu
2023-02-20 17:16:20 -05:00
Lincoln Stein
fd27948c36
Merge branch 'main' into perf/lowmem_sequential_guidance
2023-02-20 17:15:33 -05:00
Lincoln Stein
bac6b50dd1
During textual inversion training, skip over non-image files (#2747)
...
- The TI script was looping over all files in the training image
directory, regardless of whether they were image files or not. This PR
adds a check for image file extensions.
- Closes #2715
2023-02-20 16:17:32 -05:00
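A minimal sketch of the extension check described in that PR; the helper name and the exact extension set are assumptions:

```python
from pathlib import Path

IMAGE_EXTENSIONS = {".png", ".jpg", ".jpeg", ".webp", ".bmp", ".gif"}  # assumed set

def iter_training_images(image_dir: Path):
    """Yield only files that look like images, skipping everything else
    found in the training image directory."""
    for path in sorted(image_dir.iterdir()):
        if path.is_file() and path.suffix.lower() in IMAGE_EXTENSIONS:
            yield path
```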
Jordan
8f6e43d4a4
code cleanup
2023-02-20 14:06:58 -07:00
blessedcoolant
a30c91f398
Merge branch 'main' into bugfix/textual-inversion-training
2023-02-21 09:58:19 +13:00
Lincoln Stein
17294bfa55
restore ability of textual inversion manager to read .pt files (#2746)
...
- Fixes longstanding bug in the token vector size code which caused .pt
files to be assigned the wrong token vector length. These were then
tossed out during directory scanning.
2023-02-20 15:34:56 -05:00
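A hedged sketch of reading a token vector length from a .pt embedding, assuming the common `string_to_param` layout; the real token-vector-size code in InvokeAI may handle additional formats:

```python
import torch

def embedding_vector_length(pt_path: str) -> int:
    """Sketch: report the token vector length of a .pt textual-inversion
    embedding, assuming a `string_to_param` dict of token -> tensor whose
    last dimension is the embedding size."""
    data = torch.load(pt_path, map_location="cpu")
    params = data.get("string_to_param")
    if params:
        tensor = next(iter(params.values()))  # shape: (num_vectors, embed_dim)
        return tensor.shape[-1]
    raise ValueError(f"Unrecognized .pt embedding layout: {pt_path}")
```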
Jordan
404000bf93
Merge pull request #1 from neecapp/add_lora_support
...
Rewrite lora manager with hooks
2023-02-20 12:31:03 -08:00
Lincoln Stein
3fa1771cc9
Merge branch 'main' into perf/lowmem_sequential_guidance
2023-02-20 15:20:15 -05:00
Lincoln Stein
f3bd386ff0
Merge branch 'main' into bugfix/textual-inversion-training
2023-02-20 15:19:53 -05:00
Lincoln Stein
8486ce31de
Merge branch 'main' into bugfix/embedding-vector-length
2023-02-20 15:19:36 -05:00
Lincoln Stein
1d9845557f
reduced verbosity of embed loading messages
2023-02-20 15:18:55 -05:00
blessedcoolant
dc9268f772
[WebUI] Symmetry Fix (#2745)
...
Symmetry now has an on/off toggle and is not passed when disabled.
Symmetry settings have been moved to their own accordion.
2023-02-21 08:47:23 +13:00
Lincoln Stein
47ddc00c6a
in textual inversion training, skip over non-image files
...
- Closes #2715
2023-02-20 14:44:10 -05:00
Lincoln Stein
0d22fd59ed
restore ability of textual inversion manager to read .pt files
...
- Fixes longstanding bug in the token vector size code which caused
.pt files to be assigned the wrong token vector length. These
were then tossed out during directory scanning.
2023-02-20 14:34:14 -05:00
neecapp
e744774171
Rewrite lora manager with hooks
2023-02-20 13:49:16 -05:00
blessedcoolant
d5efd57c28
Merge branch 'symmetry-fix' of https://github.com/blessedcoolant/InvokeAI into symmetry-fix
2023-02-21 07:44:34 +13:00
blessedcoolant
b52a92da7e
build: symmetry-fix-2
2023-02-21 07:43:56 +13:00
blessedcoolant
b949162e7e
Revert Symmetry Big Size Input
2023-02-21 07:42:20 +13:00
blessedcoolant
5409991256
Merge branch 'main' into symmetry-fix
2023-02-21 07:29:53 +13:00
blessedcoolant
be1bcbc173
build: symmetry-fix
2023-02-21 07:28:25 +13:00
blessedcoolant
d6196e863d
Move symmetry settings to their own accordion
2023-02-21 07:25:24 +13:00
blessedcoolant
63e790b79b
fix crash in CLI when --save_intermediates called (#2744)
...
Fixes #2733
2023-02-21 07:16:45 +13:00
Lincoln Stein
cf53bba99e
Merge branch 'main' into bugfix/save-intermediates
2023-02-20 12:51:53 -05:00
Lincoln Stein
ed4c8f6a8a
fix crash in CLI when --save_intermediates called
...
Fixes #2733
2023-02-20 12:50:32 -05:00
Lincoln Stein
aab8263c31
Fix crash on calling diffusers' prepare_attention_mask (#2743)
...
Diffusers' `prepare_attention_mask` was crashing when we didn't pass in
a batch size.
2023-02-20 12:35:33 -05:00