Unfortunately we cannot do strict floats or ints.
The batch data models don't specify the value types; they rely on pydantic parsing instead. JSON doesn't distinguish between floats and ints, so a float like `1.0` can end up parsed as `1` in Python.
As a result, we _must_ accept mixed floats and ints for `BatchDatum.items`.
Tests and validation updated to handle this.
Maybe we should update the BatchDatum model to have a `type` field? Then we could parse as float or int, depending on the inputs...
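To illustrate the behavior, here is a minimal pydantic sketch (assuming pydantic v2; `MixedDatum` and `StrictIntDatum` are simplified stand-ins, not the real `BatchDatum` model):

```python
import json
from typing import Union

from pydantic import BaseModel

class MixedDatum(BaseModel):
    # Mirrors the relaxed validation: accept both ints and floats in one list.
    items: list[Union[int, float]]

class StrictIntDatum(BaseModel):
    items: list[int]

payload = json.loads('{"items": [1, 2.5, 3.0]}')

mixed = MixedDatum.model_validate(payload)
print(mixed.items)  # [1, 2.5, 3.0] -- mixed numeric types are preserved

# A strict int annotation can't round-trip this payload: pydantic's lax mode
# coerces 3.0 -> 3 (the "1.0 becomes 1" problem) and rejects 2.5 outright.
# StrictIntDatum.model_validate(payload)  # raises ValidationError
```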
## Summary
This PR revises the logic for calculating the model cache RAM limit. See
the code for thorough documentation of the change.
The updated logic is more conservative in the amount of RAM that it will
use. This will likely be a better default for more users. Of course,
users can still choose to set a more aggressive limit by overriding the
logic with `max_cache_ram_gb`.
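For context, a purely hypothetical sketch of what a conservative, overridable RAM-limit heuristic can look like (this is not the PR's actual logic; the function name, fraction, and headroom values are invented for illustration, and the real documentation lives in the code):

```python
import psutil  # assumed available; used only to read total system RAM

def conservative_cache_limit_gb(max_cache_ram_gb: float | None = None) -> float:
    """Illustrative only: derive a conservative model-cache RAM limit."""
    if max_cache_ram_gb is not None:
        # An explicit max_cache_ram_gb override always wins over the heuristic.
        return max_cache_ram_gb
    total_gb = psutil.virtual_memory().total / 2**30
    # Use a fraction of total RAM and keep headroom for the OS and other apps.
    return max(1.0, min(0.5 * total_gb, total_gb - 8.0))
```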
## Related Issues / Discussions
- Should help with https://github.com/invoke-ai/InvokeAI/issues/7563
## QA Instructions
Exercise all heuristics:
- [x] Heuristic 1
- [x] Heuristic 2
- [x] Heuristic 3
- [x] Heuristic 4
## Merge Plan
- [x] Merge https://github.com/invoke-ai/InvokeAI/pull/7565 first and
update the target branch
## Checklist
- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [x] _Tests added / updated (if applicable)_
- [x] _Documentation added / updated (if applicable)_
- [ ] _Updated `What's New` copy (if doing a release after this PR)_
## Summary
This PR adds a `keep_ram_copy_of_weights` config option. The default (and legacy) behavior is `true`. The tradeoffs for this setting are as follows (a rough sketch of the mechanism follows the list):
- `keep_ram_copy_of_weights: true`: Faster model switching and LoRA
patching.
- `keep_ram_copy_of_weights: false`: Lower average RAM load (may not
help significantly with peak RAM).
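A minimal sketch of the mechanism behind that tradeoff, assuming a cache that holds models as `torch.nn.Module` instances (this is not InvokeAI's actual cache code; the class and method names are invented for illustration):

```python
import torch

class WeightsCacheEntrySketch:
    """Illustrative only: why a RAM copy speeds up switching / LoRA un-patching."""

    def __init__(self, model: torch.nn.Module, keep_ram_copy_of_weights: bool):
        self.model = model
        # With the option enabled, keep a CPU copy of the original weights.
        # This costs RAM roughly equal to the model size, but...
        self.cpu_copy = (
            {k: v.detach().to("cpu", copy=True) for k, v in model.state_dict().items()}
            if keep_ram_copy_of_weights
            else None
        )

    def restore_original_weights(self) -> None:
        # ...restoring them (e.g. after in-place LoRA patching, or when moving a
        # model back off the GPU) becomes a fast in-memory copy, not a disk read.
        if self.cpu_copy is not None:
            self.model.load_state_dict(self.cpu_copy)
        else:
            # Without the RAM copy, the real system re-reads the weights from the
            # checkpoint on disk (omitted here), keeping average RAM usage lower.
            raise NotImplementedError("re-load weights from disk")
```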
## Related Issues / Discussions
- Helps with https://github.com/invoke-ai/InvokeAI/issues/7563
- The Low-VRAM docs are updated to include this feature in
https://github.com/invoke-ai/InvokeAI/pull/7566
## QA Instructions
- Test with `enable_partial_load: false` and `keep_ram_copy_of_weights: false`.
  - [x] RAM usage when model is loaded is reduced.
  - [x] Model loading / unloading works as expected.
  - [x] LoRA patching still works.
- Test with `enable_partial_load: false` and `keep_ram_copy_of_weights: true`.
  - [x] Behavior should be unchanged.
- Test with `enable_partial_load: true` and `keep_ram_copy_of_weights: false`.
  - [x] RAM usage when model is loaded is reduced.
  - [x] Model loading / unloading works as expected.
  - [x] LoRA patching still works.
- Test with `enable_partial_load: true` and `keep_ram_copy_of_weights: true`.
  - [x] Behavior should be unchanged.
- [x] Smoke test CPU-only and MPS with default configs.
## Merge Plan
- [x] Merge https://github.com/invoke-ai/InvokeAI/pull/7564 first and
change target branch.
## Checklist
- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [x] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
- [ ] _Updated `What's New` copy (if doing a release after this PR)_
## Summary
Prior to this change, there were several cases where we initialized the
weights of a FLUX model before loading its state dict (and, to make
things worse, in some cases the weights were in float32). This PR fixes
a handful of these cases. (I think I found all instances for the FLUX
family of models.)
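The general technique for avoiding the up-front initialization looks roughly like the following (a sketch only, not necessarily the exact code in this diff; `load_without_preinit` and `model_cls` are placeholder names). It avoids both the wasted initialization work and the large memory reservation that previously appeared before the state dict was loaded:

```python
import torch

def load_without_preinit(model_cls, state_dict: dict[str, torch.Tensor]):
    # Construct the model on the "meta" device: parameters are created as
    # shape/dtype metadata only, so no (float32) weight memory is allocated.
    with torch.device("meta"):
        model = model_cls()
    # assign=True (available in torch >= 2.1) swaps the meta parameters for the
    # tensors in the state dict instead of copying into pre-allocated storage.
    model.load_state_dict(state_dict, assign=True)
    return model
```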
## Related Issues / Discussions
- Helps with https://github.com/invoke-ai/InvokeAI/issues/7563
## QA Instructions
I tested that model loading still works and that there is no virtual memory reservation on model initialization for the following models:
- [x] FLUX VAE
- [x] Full T5 Encoder
- [x] Full FLUX checkpoint
- [x] GGUF FLUX checkpoint
## Merge Plan
No special instructions.
## Checklist
- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [x] _Tests added / updated (if applicable)_
- [x] _Documentation added / updated (if applicable)_
- [ ] _Updated `What's New` copy (if doing a release after this PR)_