Lincoln Stein
75dcff92f9
incorporate single-file loading
2024-06-23 13:16:29 -04:00
Lincoln Stein
aff5700cce
merge cache setting api
2024-06-23 12:43:58 -04:00
Lincoln Stein
6932f27b43
fixup code broken by merge with main
2024-06-23 12:17:16 -04:00
Lincoln Stein
0df018bd4e
resolve merge conflicts
2024-06-23 10:31:35 -04:00
Lincoln Stein
ebe373c614
Merge branch 'main' into lstein/feat/set-cache-sizes
2024-06-21 15:36:47 -04:00
Lincoln Stein
27195b1672
code cleanup after @ryand review
2024-06-21 15:36:37 -04:00
Lincoln Stein
787671c2c2
Update invokeai/app/api/routers/model_manager.py
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
2024-06-21 15:15:31 -04:00
Lincoln Stein
5c8cf991a9
remove use of original_config_file in load_single_file()
2024-06-20 22:28:22 -04:00
Lincoln Stein
b0574f85bc
Merge branch 'lstein/bugfix/sdxl-vae-conversion' into lstein/feat/load-one-file
2024-06-19 23:48:21 -04:00
Lincoln Stein
2a4254c7c3
merge with main
2024-06-19 23:48:19 -04:00
Lincoln Stein
349239e336
associate sdxl config with sdxl VAEs
2024-06-19 23:43:56 -04:00
Lincoln Stein
b03073d888
[MM] Add support for probing and loading SDXL VAE checkpoint files (#6524)
* add support for probing and loading SDXL VAE checkpoint files
* broaden regexp probe for SDXL VAEs
---------
Co-authored-by: Lincoln Stein <lstein@gmail.com>
2024-06-20 02:57:27 +00:00
Lincoln Stein
4c5bad6352
[MM] add API routes for getting & setting MM cache sizes, and retrieving MM stats
2024-06-19 21:35:50 -04:00
Lincoln Stein
74f0c317ce
Merge branch 'main' into lstein/feat/load-one-file
2024-06-19 10:26:37 -04:00
steffylo
a43d602f16
fix(queue): add clear_queue_on_startup config to clear problematic queues
2024-06-19 11:39:25 +10:00
Ryan Dick
79ceac2f82
(minor) Use SilenceWarnings as a decorator rather than a context manager to save an indentation level.
2024-06-18 15:06:22 -04:00
Ryan Dick
8e47e005a7
Tidy SilenceWarnings context manager:
- Fix type errors
- Enable SilenceWarnings to be used as both a context manager and a decorator
- Remove duplicate implementation
- Check the initial verbosity on __enter__() rather than __init__()
2024-06-18 15:06:22 -04:00
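The tidied `SilenceWarnings` described above can be sketched with the standard library's `contextlib.ContextDecorator`, which makes one class usable both as a context manager and as a decorator. This is a minimal sketch based on the commit message, not InvokeAI's actual implementation; note the filter state is captured in `__enter__()` rather than `__init__()`, as the commit specifies.

```python
import warnings
from contextlib import ContextDecorator

class SilenceWarnings(ContextDecorator):
    """Suppress warnings inside a `with` block or a decorated function."""

    def __enter__(self):
        # Capture the current warning-filter state on entry (not in
        # __init__), so changes made after construction are respected.
        self._catcher = warnings.catch_warnings()
        self._catcher.__enter__()
        warnings.simplefilter("ignore")
        return self

    def __exit__(self, *exc):
        # Restore the filters that were active before entry.
        self._catcher.__exit__(*exc)
        return False

@SilenceWarnings()            # decorator usage
def noisy() -> int:
    warnings.warn("deprecated", DeprecationWarning)
    return 42
```

The same object also works as `with SilenceWarnings(): ...`, which is why the duplicate implementation mentioned in the commit could be removed.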
Ryan Dick
d13aafb514
Tidy denoise_latents.py imports to all use absolute import paths.
2024-06-18 15:06:22 -04:00
Lincoln Stein
3a622af3b2
Merge branch 'main' into lstein/feat/load-one-file
2024-06-18 13:45:03 -04:00
Lincoln Stein
c87cad3e91
simplified config schema migration code
2024-06-18 13:43:12 -04:00
Brandon Rising
63a7e19dbf
Run ruff
2024-06-18 10:38:29 -04:00
Brandon Rising
fbc5a8ec65
Ignore validation on improperly formatted hashes (pytest)
2024-06-18 10:38:29 -04:00
Brandon Rising
8ce6e4540e
Run ruff
2024-06-18 10:38:29 -04:00
Brandon Rising
f14f377ede
Update validator list
2024-06-18 10:38:29 -04:00
Brandon Rising
1925f83f5e
Update validator list
2024-06-18 10:38:29 -04:00
Brandon Rising
3a5ad6d112
Update validator list
2024-06-18 10:38:29 -04:00
Brandon Rising
41a6bb45f3
Initial functionality
2024-06-18 10:38:29 -04:00
psychedelicious
cd70937b7f
feat(api): improved model install confirmation page styling & messaging
2024-06-17 10:51:08 +10:00
psychedelicious
f002bca2fa
feat(ui): handle new model_install_download_started event
When a model install is initiated from outside the client, we now refresh the model manager tab's model install list.
- Handle new `model_install_download_started` event
- Handle `model_install_download_complete` event (this event is not new but was never handled)
- Update optimistic updates/cache invalidation logic to efficiently update the model install list
2024-06-17 10:07:10 +10:00
psychedelicious
56771de856
feat(ui): add redux actions for model_install_download_started event
2024-06-17 09:52:46 +10:00
psychedelicious
c11478a94a
chore(ui): typegen
2024-06-17 09:51:18 +10:00
Lincoln Stein
7088d5610b
add script to sync models db with models.yaml
2024-06-16 19:50:49 -04:00
psychedelicious
fb694b3e17
feat(app): add model_install_download_started event
Previously, we used `model_install_download_progress` for both download starting and progressing, so a handler could not tell which of the two the event represented.
Add `model_install_download_started` event to explicitly represent a model download started event.
2024-06-17 09:50:25 +10:00
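With the new event in place, a client-side handler can dispatch on the event name unambiguously. The three event names below come from the commits above; the dispatch structure and the returned action strings are illustrative assumptions, not InvokeAI's actual handler.

```python
def handle_model_install_event(event: dict) -> str:
    """Map a model-install socket event to a UI action (sketch)."""
    name = event.get("event")
    if name == "model_install_download_started":
        return "add install to list"      # new event: download began
    if name == "model_install_download_progress":
        return "update progress bar"      # now unambiguously progress
    if name == "model_install_download_complete":
        return "mark install complete"
    return "ignore"
```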
psychedelicious
1bc98abc76
docs(ui): explain model install events
2024-06-17 09:33:46 +10:00
Lincoln Stein
1109708029
Merge branch 'main' into lstein/feat/load-one-file
2024-06-15 20:36:40 -04:00
Lincoln Stein
e7b7737c76
Merge branch 'lstein/feat/load-one-file' of github.com:invoke-ai/InvokeAI into lstein/feat/load-one-file
2024-06-15 19:57:44 -04:00
Lincoln Stein
17e9d4f7af
implement lightweight version-by-version config migration
2024-06-15 19:57:35 -04:00
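A lightweight version-by-version migration like the one this commit describes can be sketched as a chain of single-step functions keyed by source version. The version strings, the `schema_version` field name, and the `convert_cache` removal step (borrowed from the later "remove convert_cache setting" commit) are all assumptions for illustration.

```python
# One migration function per schema version; each advances exactly one step.
MIGRATIONS = {
    "4.0.0": lambda cfg: {**cfg, "schema_version": "4.0.1"},
    # Hypothetical step dropping a retired key, then bumping the version:
    "4.0.1": lambda cfg: {
        **{k: v for k, v in cfg.items() if k != "convert_cache"},
        "schema_version": "4.0.2",
    },
}

def migrate(cfg: dict, target: str = "4.0.2") -> dict:
    """Apply migrations one version at a time until `target` is reached."""
    while cfg["schema_version"] != target:
        cfg = MIGRATIONS[cfg["schema_version"]](cfg)
    return cfg
```

Keeping each step single-version means a new schema only needs one new entry, and configs from any older version migrate through every intermediate step.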
Lincoln Stein
1411fbbd1a
Update invokeai/backend/model_manager/load/model_loaders/stable_diffusion.py
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
2024-06-15 19:08:29 -04:00
Lincoln Stein
6b788bff51
Update invokeai/backend/model_manager/load/model_loaders/stable_diffusion.py
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
2024-06-15 19:08:15 -04:00
chainchompa
7f03b04b2f
Merge branch 'main' into chainchompa/model-install-deeplink
2024-06-14 17:16:25 -04:00
chainchompa
4029972530
formatting
2024-06-14 17:15:55 -04:00
chainchompa
328f160e88
refetch model installs when a new model install starts
2024-06-14 17:09:07 -04:00
chainchompa
aae318425d
added route for installing huggingface model from model marketplace
2024-06-14 17:08:39 -04:00
Ryan Dick
785bb1d9e4
Fix all comparisons against the DEFAULT_PRECISION constant. DEFAULT_PRECISION is a torch.dtype. Previously, it was compared to a str in a number of places where it would always resolve to False. This is a bugfix that results in a change to the default behavior. In practice, this will not change the behavior for many users, because it only causes a change in behavior if a user has configured float32 as their default precision.
2024-06-14 11:26:10 -07:00
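The bug class in the commit above is easy to reproduce: a `torch.dtype` constant compared to a string literal is never equal, so the "fp32" branch silently never fires. This sketch uses an `enum.Enum` as a stand-in for `torch.dtype` (an assumption, so the example stays dependency-free); the behavior is the same because neither type compares equal to a plain `str`.

```python
import enum

class DType(enum.Enum):       # stand-in for torch.dtype
    float16 = "float16"
    float32 = "float32"

DEFAULT_PRECISION = DType.float32

def default_dtype_buggy() -> DType:
    # Bug pattern: dtype constant vs. str is always False, so the
    # configured float32 default is silently ignored.
    return DType.float32 if DEFAULT_PRECISION == "float32" else DType.float16

def default_dtype_fixed() -> DType:
    # Fix: compare against the dtype constant itself.
    return DType.float32 if DEFAULT_PRECISION == DType.float32 else DType.float16
```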
Lincoln Stein
a3cb5da130
Improve RAM<->VRAM memory copy performance in LoRA patching and elsewhere (#6490)
* allow model patcher to optimize away the unpatching step when feasible
* remove lazy_offloading functionality
* do not save original weights if there is a CPU copy of state dict
* Update invokeai/backend/model_manager/load/load_base.py
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
* documentation fixes requested during penultimate review
* add non_blocking=True parameters to several torch.nn.Module.to() calls, for slight performance increases
* fix ruff errors
* prevent crash on non-cuda-enabled systems
---------
Co-authored-by: Lincoln Stein <lstein@gmail.com>
Co-authored-by: Kent Keirsey <31807370+hipsterusername@users.noreply.github.com>
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
2024-06-13 17:10:03 +00:00
Lincoln Stein
379d02d209
migrate config file to remove convert_cache setting
2024-06-12 17:09:12 -04:00
Lincoln Stein
7634991107
rename migration_11 before conflict merge with main
2024-06-12 16:19:41 -04:00
Lincoln Stein
acce4d393e
working, needs sql migrator update
2024-06-12 16:18:15 -04:00
blessedcoolant
568a4844f7
fix: other recursive imports
2024-06-10 04:12:20 -07:00
blessedcoolant
b1e56e2485
fix: SchedulerOutput not being imported correctly
2024-06-10 04:12:20 -07:00