InvokeAI/invokeai/backend
psychedelicious 7d86f00d82 feat(mm): implement working memory estimation for VAE encode for all models
Tell the model manager that we need some extra working memory for VAE
encoding operations to prevent OOMs.

See previous commit for investigation and determination of the magic
numbers used.

This safety measure is especially relevant now that we have FLUX Kontext
and may be encoding rather large ref images. Without the working memory
estimation we can OOM as we prepare for denoising.

See #8405 for an example of this issue on a very low VRAM system. It's
possible to hit the same issue on any GPU, though; it's just a matter of
loading the right combination of models.
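The estimate described above can be sketched roughly as follows. This is a hypothetical illustration, not InvokeAI's actual implementation: the function name, the element size, and the scaling constant are all assumptions standing in for the empirically determined magic numbers the commit refers to.

```python
def estimate_vae_encode_working_memory(
    height: int,
    width: int,
    element_size: int = 4,         # bytes per tensor element (assumed float32)
    scaling_constant: int = 2200,  # illustrative placeholder for the magic number
) -> int:
    """Estimate extra VRAM (in bytes) needed for a VAE encode.

    Scales with the input image area, so large ref images (e.g. for
    FLUX Kontext) request proportionally more working memory.
    """
    return height * width * element_size * scaling_constant


# Example: a model manager could reserve this much headroom before
# preparing for denoising, instead of assuming encode is free.
needed = estimate_vae_encode_working_memory(1024, 1024)
```

The key design point is that the reservation grows with image area rather than being a fixed constant, so small inputs don't over-reserve while large ref images still get enough headroom to avoid OOMing.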
2025-08-12 10:51:05 +10:00