github / InvokeAI
Mirror of https://github.com/invoke-ai/InvokeAI.git, synced 2026-01-25 02:57:56 -05:00
InvokeAI / invokeai / backend / model_manager / load / model_cache
at commit 14372e3818da650ca4cc968adc873d7f8b667e88
Latest commit: 21a60af881 by Lincoln Stein
when unlocking models, offload_unlocked_models should prune to vram limit only (#6450)
Co-authored-by: Lincoln Stein <lstein@gmail.com>
2024-05-29 03:01:21 +00:00
..
__init__.py              BREAKING CHANGES: invocations now require model key, not base/type/name                    2024-03-01 10:42:33 +11:00
model_cache_base.py      Optimize RAM to VRAM transfer (#6312)                                                       2024-05-24 17:06:09 +00:00
model_cache_default.py   Optimize RAM to VRAM transfer (#6312)                                                       2024-05-24 17:06:09 +00:00
model_locker.py          when unlocking models, offload_unlocked_models should prune to vram limit only (#6450)      2024-05-29 03:01:21 +00:00
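The model_locker.py commit message (#6450) describes pruning behavior: when a model is unlocked, unlocked models should be offloaded from VRAM only until usage is back under the configured limit, rather than evicting everything. Below is a minimal, hypothetical sketch of that idea; the names (ModelCacheSketch, CacheEntry, vram_limit_bytes) are assumptions for illustration and are not the actual InvokeAI model_cache or model_locker API.

```python
# Hypothetical sketch only: a tiny cache that offloads unlocked models from the
# GPU back to CPU until VRAM usage falls under a configured limit. Not the
# actual InvokeAI implementation; all names here are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Dict

import torch


@dataclass
class CacheEntry:
    model: torch.nn.Module
    size_bytes: int          # approximate footprint of the model's weights
    locks: int = 0           # >0 while an invocation is actively using the model
    device: torch.device = field(default_factory=lambda: torch.device("cpu"))


class ModelCacheSketch:
    def __init__(self, vram_limit_bytes: int, execution_device: torch.device):
        self.vram_limit_bytes = vram_limit_bytes
        self.execution_device = execution_device
        self._entries: Dict[str, CacheEntry] = {}

    def _vram_in_use(self) -> int:
        # Sum the footprints of entries currently resident on the execution device.
        return sum(
            e.size_bytes
            for e in self._entries.values()
            if e.device.type == self.execution_device.type
        )

    def offload_unlocked_models(self) -> None:
        """Move unlocked models to CPU only until VRAM usage is within the limit."""
        for entry in self._entries.values():
            if self._vram_in_use() <= self.vram_limit_bytes:
                break  # within budget: keep the remaining models warm on the GPU
            if entry.locks == 0 and entry.device.type == self.execution_device.type:
                entry.model.to("cpu")
                entry.device = torch.device("cpu")

    def unlock(self, key: str) -> None:
        """Release a lock on a model, then prune VRAM back to the configured limit."""
        entry = self._entries[key]
        entry.locks = max(0, entry.locks - 1)
        self.offload_unlocked_models()
```

The point of stopping as soon as usage drops under the limit is that models left on the GPU do not have to be copied back from RAM on the next invocation, which is the trade-off the commit title hints at.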