Mirror of https://github.com/invoke-ai/InvokeAI.git, synced 2026-02-04 20:35:00 -05:00.
For consistency with max_cache_size, the amount of VRAM reserved for model caching is now controlled by the max_vram_cache_size configuration parameter.
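A minimal sketch of how the two parameters might sit together in InvokeAI's configuration file; the file location and the example values (in GB) are assumptions for illustration, not defaults from this change:

```yaml
# invokeai.yaml (illustrative fragment)
InvokeAI:
  Model Cache:
    max_cache_size: 6.0       # RAM (GB) for the model cache, existing parameter
    max_vram_cache_size: 0.5  # VRAM (GB) held for model caching, new parameter
```

Keeping both limits under the same naming scheme lets users tune RAM and VRAM caching with one consistent convention.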