InvokeAI
Mirror of https://github.com/invoke-ai/InvokeAI.git (synced 2026-01-24 07:08:06 -05:00)
Path: InvokeAI/invokeai/backend/model_manager/load/model_cache (tree 42e052d6f2e89295b6e897777852df505e6d666c)
Latest commit: cc9d215a9b by Ryan Dick (2025-01-30 09:18:28 -05:00): Add endpoint for emptying the model cache. Also adds a threading lock to the ModelCache to make it thread-safe.
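The latest commit pairs a cache-emptying endpoint with a threading lock so concurrent requests cannot corrupt the cache. A minimal sketch of that locking pattern, using Python's stdlib `threading` (the class and method names here are illustrative, not InvokeAI's actual ModelCache API):

```python
import threading


class ThreadSafeCache:
    """Illustrative sketch: guard every cache operation with one lock so
    concurrent request handlers see a consistent cache state."""

    def __init__(self):
        self._lock = threading.Lock()
        self._models = {}

    def put(self, key, model):
        with self._lock:
            self._models[key] = model

    def get(self, key):
        with self._lock:
            return self._models.get(key)

    def empty(self):
        # Roughly what an "empty the model cache" endpoint would call.
        with self._lock:
            self._models.clear()
```

A single coarse lock like this trades some parallelism for simplicity; it is the easiest way to make an `empty()` call safe while other threads are still getting and putting models.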
cached_model/ (2025-01-16 17:00:33 +00:00)
    Memory optimization to load state dicts one module at a time in CachedModelWithPartialLoad when we are not storing a CPU copy of the state dict (i.e. when keep_ram_copy_of_weights=False).
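The cached_model commit describes loading a state dict one module at a time so that only one module's weights need to be resident during a load. A prerequisite for that is grouping flat state-dict keys by their owning module; a small sketch of that grouping step (illustrative only — the function name is hypothetical, and InvokeAI's CachedModelWithPartialLoad operates on real torch modules):

```python
from collections import defaultdict


def group_state_dict_by_module(state_dict_keys):
    """Group flat state-dict keys like 'encoder.layer0.weight' by their
    owning module path, so each module's tensors can be loaded, moved,
    and released independently of the rest of the checkpoint."""
    groups = defaultdict(list)
    for key in state_dict_keys:
        module, _, _param = key.rpartition(".")
        groups[module or "<root>"].append(key)
    return dict(groups)
```

With the keys grouped this way, a loader can process one module's group at a time instead of materializing the whole state dict, which is the memory saving the commit message refers to.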
torch_module_autocast/ (2025-01-28 14:51:35 +00:00)
    Performance optimizations for LoRAs applied on top of GGML-quantized tensors.
__init__.py (2024-12-24 14:23:18 +00:00)
    Remove ModelCacheBase.
cache_record.py (2025-01-07 00:30:58 +00:00)
    First pass at adding partial loading support to the ModelCache.
cache_stats.py (2024-12-24 14:23:18 +00:00)
    Move CacheStats to its own file.
dev_utils.py (2025-01-07 01:20:15 +00:00)
    Add a utility to help with determining the working memory required for expensive operations.
model_cache.py (2025-01-30 09:18:28 -05:00)
    Add endpoint for emptying the model cache. Also adds a threading lock to the ModelCache to make it thread-safe.
utils.py (2025-01-07 00:31:00 +00:00)
    Add get_effective_device(...) utility to aid in determining the effective device of models that are partially loaded.
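A partially loaded model has parameters split across devices, so "which device is this model on?" has no single trivial answer. One plausible rule — prefer the compute device if any parameter already lives there, else fall back to CPU — can be sketched in plain Python (this is an assumption for illustration; InvokeAI's actual get_effective_device(...) operates on torch modules and may use a different rule):

```python
def effective_device(param_devices, compute_device="cuda:0"):
    """Illustrative sketch: given the device of each parameter of a
    partially loaded model (e.g. ["cpu", "cuda:0", ...]), report the
    compute device if any parameter is already resident there,
    otherwise report CPU."""
    return compute_device if compute_device in param_devices else "cpu"
```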