Jordan
71972c3709
re-enable load attn procs support (no multiplier)
2023-02-23 01:44:13 -07:00
Jordan
5b4a241f5c
Merge branch 'main' into add_lora_support
2023-02-21 20:38:33 -08:00
Jordan
cd333e414b
move key converter to wrapper
2023-02-21 21:38:15 -07:00
Jordan
af3543a8c7
further cleanup and implement wrapper
2023-02-21 20:42:40 -07:00
Lincoln Stein
d41dcdfc46
move trigger_str registration into try block
2023-02-21 21:38:42 -05:00
Jordan
686f6ef8d6
Merge branch 'main' into add_lora_support
2023-02-21 18:35:11 -08:00
Jordan
f70b7272f3
cleanup / concept of loading through diffusers
2023-02-21 19:33:39 -07:00
Lincoln Stein
5e41811fb5
move trigger text munging to upper level per review
2023-02-21 17:04:42 -05:00
Jonathan
1d0ba4a1a7
Merge branch 'main' into bugfix/filename-embedding-fallback
2023-02-21 13:12:34 -06:00
Jonathan
71bbd78574
Fix crashing when using 2.1 model
We now require more free memory to avoid attention slicing. 17.5% free was not sufficient headroom, so now we require 25%.
2023-02-21 12:35:03 -06:00
blessedcoolant
d5f524a156
Merge branch 'main' into bugfix/filename-embedding-fallback
2023-02-22 06:13:41 +13:00
Jonathan
3ab9d02883
Fixed embiggening crash due to clear_cuda_cache not being passed on and bad cuda stats initialization. (#2756)
2023-02-22 06:12:24 +13:00
Jonathan
da04b11a31
Merge branch 'main' into bugfix/filename-embedding-fallback
2023-02-21 10:52:13 -06:00
Lincoln Stein
9436f2e3d1
alphabetize trigger strings
2023-02-21 06:23:34 -05:00
Jordan
24d92979db
fix typo
2023-02-21 02:08:02 -07:00
Jordan
c669336d6b
Update lora_manager.py
2023-02-21 02:05:11 -07:00
Jordan
5529309e73
adjusting back to hooks, forcing to be last in execution
2023-02-21 01:34:06 -07:00
Jordan
49c0516602
change hook to override
2023-02-20 23:45:57 -07:00
Jordan
c1c62f770f
Merge branch 'main' into add_lora_support
2023-02-20 20:33:59 -08:00
Jordan
e2b6dfeeb9
Update generate.py
2023-02-20 21:33:20 -07:00
neecapp
3732af63e8
fix prompt
2023-02-20 23:06:05 -05:00
Lincoln Stein
4c2a588e1f
Merge branch 'main' into perf/lowmem_sequential_guidance
2023-02-20 22:40:31 -05:00
Lincoln Stein
91f7abb398
replace repeated triggers with <filename>
2023-02-20 22:33:13 -05:00
Jordan
de89041779
optimize functions for unloading
2023-02-20 17:02:36 -07:00
Jordan
488326dd95
Merge branch 'add_lora_support' of https://github.com/jordanramstad/InvokeAI into add_lora_support
2023-02-20 16:50:16 -07:00
Jordan
c3edede73f
add notes and adjust functions
2023-02-20 16:49:59 -07:00
Jordan
6e730bd654
Merge branch 'main' into add_lora_support
2023-02-20 15:34:52 -08:00
Jordan
884a5543c7
adjust loader to use a settings dict
2023-02-20 16:33:53 -07:00
Jordan
ac972ebbe3
update prompt setup so LoRAs can be loaded in other ways
2023-02-20 16:06:30 -07:00
Jordan
3c6c18b34c
cleanup suggestions from neecap
2023-02-20 15:19:29 -07:00
Lincoln Stein
833079140b
Merge branch 'main' into enhance/update-menu
2023-02-20 17:16:20 -05:00
Lincoln Stein
fd27948c36
Merge branch 'main' into perf/lowmem_sequential_guidance
2023-02-20 17:15:33 -05:00
Jordan
8f6e43d4a4
code cleanup
2023-02-20 14:06:58 -07:00
blessedcoolant
a30c91f398
Merge branch 'main' into bugfix/textual-inversion-training
2023-02-21 09:58:19 +13:00
Lincoln Stein
3fa1771cc9
Merge branch 'main' into perf/lowmem_sequential_guidance
2023-02-20 15:20:15 -05:00
Lincoln Stein
1d9845557f
reduced verbosity of embed loading messages
2023-02-20 15:18:55 -05:00
Lincoln Stein
47ddc00c6a
in textual inversion training, skip over non-image files
- Closes #2715
2023-02-20 14:44:10 -05:00
Lincoln Stein
0d22fd59ed
restore ability of textual inversion manager to read .pt files
- Fixes longstanding bug in the token vector size code which caused .pt files to be assigned the wrong token vector length. These were then tossed out during directory scanning.
2023-02-20 14:34:14 -05:00
neecapp
e744774171
Rewrite lora manager with hooks
2023-02-20 13:49:16 -05:00
Lincoln Stein
cf53bba99e
Merge branch 'main' into bugfix/save-intermediates
2023-02-20 12:51:53 -05:00
Lincoln Stein
ed4c8f6a8a
fix crash in CLI when --save_intermediates called
Fixes #2733
2023-02-20 12:50:32 -05:00
Jonathan
b21bd6f428
Fix crash on calling diffusers' prepare_attention_mask
Diffusers' `prepare_attention_mask` was crashing when we didn't pass in a batch size.
2023-02-20 11:12:47 -06:00
Kevin Turner
cb6903dfd0
Merge branch 'main' into perf/lowmem_sequential_guidance
2023-02-20 08:03:11 -08:00
blessedcoolant
58e5bf5a58
Merge branch 'main' into bugfix/embedding-compatibility-test
2023-02-21 04:09:18 +13:00
blessedcoolant
cc7733af1c
Merge branch 'main' into enhance/update-menu
2023-02-21 03:54:40 +13:00
Lincoln Stein
cfd897874b
Merge branch 'main' into perf/lowmem_sequential_guidance
2023-02-20 07:42:35 -05:00
Lincoln Stein
1249147c57
Merge branch 'main' into enhance/update-menu
2023-02-20 07:38:56 -05:00
Lincoln Stein
eec5c3bbb1
Merge branch 'main' into main
2023-02-20 07:38:08 -05:00
Jonathan
ca8d9fb885
Add symmetry to generation (#2675)
Added symmetry to Invoke based on discussions with @damian0815. This can currently only be activated via the CLI with the `--h_symmetry_time_pct` and `--v_symmetry_time_pct` options. Those take values from 0.0-1.0, exclusive, indicating the percentage through generation at which symmetry is applied as a one-time operation. To have symmetry in either axis applied after the first step, use a very low value like 0.001.
2023-02-20 07:33:19 -05:00
Jordan
096e1d3a5d
start of rewrite for add / remove
2023-02-20 02:37:44 -07:00