Tell the model manager that we need some extra working memory for VAE encoding operations to prevent OOMs. See the previous commit for the investigation and determination of the magic numbers used.

This safety measure is especially relevant now that we have FLUX Kontext and may be encoding rather large reference images. Without the working-memory estimation, we can OOM while preparing for denoising. See #8405 for an example of this issue on a very low-VRAM system. It's possible to hit the same issue on any GPU, though; it's just a matter of hitting the right combination of loaded models.
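A minimal sketch of what such a working-memory estimate might look like, assuming a PyTorch input tensor. The function name and the multiplier value here are purely illustrative (the real constants come from the measurements referenced above), and the way the estimate gets handed to the model manager is only hinted at in a trailing comment, not taken from the actual API:

```python
import torch

# Hypothetical multiplier, for illustration only. The actual value would be
# one of the empirically determined "magic numbers" from the previous commit.
VAE_ENCODE_WORKING_MEMORY_MULTIPLIER = 8.0


def estimate_vae_encode_working_memory(image: torch.Tensor) -> int:
    """Estimate the extra working memory (in bytes) needed to VAE-encode `image`.

    The dominant cost is the intermediate activations inside the VAE encoder,
    which grow roughly linearly with the size of the input tensor, so we scale
    the input's byte size by an empirically chosen multiplier.
    """
    bytes_per_element = image.element_size()  # e.g. 2 for fp16, 4 for fp32
    input_bytes = image.numel() * bytes_per_element
    return int(input_bytes * VAE_ENCODE_WORKING_MEMORY_MULTIPLIER)


# The estimate would then be reported to the model manager before encoding,
# so it can free enough VRAM up front rather than OOMing mid-operation.
# Roughly (call shape is illustrative, not the real signature):
#
#   working_mem = estimate_vae_encode_working_memory(ref_image)
#   with loaded_vae.model_on_device(working_mem_bytes=working_mem) as vae:
#       latents = vae.encode(ref_image)
```

The key design point is that the estimate is computed from the actual input dimensions, so a large FLUX Kontext reference image reserves proportionally more headroom than a small one.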