github/InvokeAI
Mirror of https://github.com/invoke-ai/InvokeAI.git, synced 2026-01-31 13:48:02 -05:00
Files at commit dbb5830027faeed264e1d052d93f4a405aa3efe3
InvokeAI/invokeai/backend/model_manager/load/model_cache/cached_model/
History

Latest commit: da589b3f1f by Ryan Dick (2025-01-16 17:00:33 +00:00)
Memory optimization to load state dicts one module at a time in CachedModelWithPartialLoad when we are not storing a CPU copy of the state dict (i.e. when keep_ram_copy_of_weights=False).
cached_model_only_full_load.py
    Add keep_ram_copy option to CachedModelOnlyFullLoad. (2025-01-16 15:08:23 +00:00)

cached_model_with_partial_load.py
    Memory optimization to load state dicts one module at a time in CachedModelWithPartialLoad when we are not storing a CPU copy of the state dict (i.e. when keep_ram_copy_of_weights=False). (2025-01-16 17:00:33 +00:00)
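The commit message for cached_model_with_partial_load.py describes a memory optimization: rather than materializing a model's entire state dict at once, weights are loaded one module at a time, so peak RAM is bounded by the largest single module instead of the whole model. The following is a minimal, library-free sketch of that pattern; the function and callback names are illustrative assumptions, not InvokeAI's actual API, and plain Python lists stand in for tensors.

```python
def load_state_dict_per_module(module_names, fetch_module_state, apply_state):
    """Load weights one module at a time instead of all at once.

    Illustrative sketch (not InvokeAI's real API):
      fetch_module_state(name) -> dict of weight buffers for that module only
      apply_state(name, state) -> copies the state into the live module
    Returns the peak number of weight elements held in transit at once.
    """
    peak = 0
    for name in module_names:
        state = fetch_module_state(name)  # only this module's weights in RAM
        apply_state(name, state)
        peak = max(peak, sum(len(v) for v in state.values()))
        del state  # release this module's buffers before fetching the next
    return peak


# Toy demonstration: two modules whose weights live "on disk".
weights_on_disk = {
    "encoder": {"w": [0.0] * 1000, "b": [0.0] * 10},
    "decoder": {"w": [0.0] * 2000, "b": [0.0] * 20},
}
loaded = {}
peak = load_state_dict_per_module(
    weights_on_disk.keys(),
    lambda name: weights_on_disk[name],
    lambda name, state: loaded.update({name: True}),
)
# Peak transit is one module's worth (2020 elements), not the full
# model's 3030 elements, which is the point of the optimization when
# no CPU copy of the state dict is being kept.
```

With keep_ram_copy_of_weights=True the trade-off reverses: a full CPU copy is retained, so per-module loading saves nothing, which is why the commit applies the optimization only in the False case.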