Default Branch

33c288a97d · remove anima denoise case (#9072) · Updated 2026-04-21 20:43:50 -04:00

Branches

Each entry lists the branch's latest commit and message, the last update time, and the number of commits the branch is behind and ahead of the default branch; "included" marks a branch whose commits are all contained in the default branch.

3e0a569826 · feat(ui): editable heading and text elements · Updated 2025-01-30 19:04:24 -05:00 · 3457 behind, 58 ahead

27278d1afd · wrap error message in Text node · Updated 2025-01-24 13:27:26 -05:00 · 3457 behind, 1 ahead

67afa7e339 · Update PartialLayer to work with unquantized / GGML quantized / BnB quantized layers. · Updated 2025-01-24 10:52:57 -05:00 · 3457 behind, 17 ahead

231099f913 · Infer FLUX model params from the shape of the state dict rather than assuming that the model is either Dev or Schnell. This enables the FLEX.1-alpha model. · Updated 2025-01-20 10:43:11 -05:00 · 3484 behind, 1 ahead
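
The technique named in this commit message, reading the architecture off the checkpoint instead of hard-coding Dev/Schnell presets, might look roughly like the sketch below. The key names follow the common FLUX checkpoint layout, and `FluxParams`/`infer_flux_params` are illustrative stand-ins, not InvokeAI's actual code.

```python
# Hypothetical sketch: infer FLUX architecture params from a state dict.
from dataclasses import dataclass

import torch


@dataclass
class FluxParams:  # illustrative fields, not InvokeAI's real dataclass
    hidden_size: int
    depth: int          # number of double-stream blocks
    depth_single: int   # number of single-stream blocks
    guidance_embed: bool


def infer_flux_params(sd: dict[str, torch.Tensor]) -> FluxParams:
    # Hidden size can be read off the input projection weight.
    hidden_size = sd["img_in.weight"].shape[0]
    # Count blocks by scanning key prefixes rather than assuming a preset.
    depth = len({k.split(".")[1] for k in sd if k.startswith("double_blocks.")})
    depth_single = len({k.split(".")[1] for k in sd if k.startswith("single_blocks.")})
    # Dev-style (guidance-distilled) checkpoints carry a guidance embedder;
    # Schnell-style ones do not. A model like FLEX.1-alpha mixes traits,
    # which is why shape-based inference helps.
    guidance_embed = any(k.startswith("guidance_in.") for k in sd)
    return FluxParams(hidden_size, depth, depth_single, guidance_embed)
```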

f607ff4461 · Add mark_flaky_mps_github_action test decorator. · Updated 2025-01-17 15:58:09 -05:00 · 3499 behind, 1 ahead
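
A decorator with this name plausibly retries a test only when it runs on an MPS machine inside a GitHub Actions runner. A minimal sketch under that assumption; the retry budget and structure are invented, not the commit's actual implementation:

```python
# Hypothetical sketch of a "flaky on MPS in CI" test decorator.
import functools
import os

import torch


def mark_flaky_mps_github_action(func):
    # GITHUB_ACTIONS=true is set on GitHub-hosted runners.
    is_ci_mps = (
        os.environ.get("GITHUB_ACTIONS") == "true"
        and torch.backends.mps.is_available()
    )
    if not is_ci_mps:
        return func  # elsewhere, the test runs normally with no retries

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        last_exc = None
        for _ in range(3):  # arbitrary retry budget for illustration
            try:
                return func(*args, **kwargs)
            except Exception as exc:  # retry any failure on flaky MPS CI
                last_exc = exc
        raise last_exc

    return wrapper
```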

6df3e9f960 · refactor(ui): persistent workflow field value generators · Updated 2025-01-15 18:22:26 -05:00 · 3598 behind, 65 ahead

03157a35ec · Don't use safetensors mmap for some FLUX models to see what happens. · Updated 2025-01-14 21:23:03 -05:00 · 3620 behind, 1 ahead

c8ac19f2f7 · Drop models from the cache if they have been fully offloaded. · Updated 2025-01-14 20:44:51 -05:00 · 3620 behind, 2 ahead
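
One plausible reading of this change, sketched with hypothetical names (`is_fully_offloaded` and `drop_offloaded` are not InvokeAI's API): once none of a model's weights remain on the compute device, its cache entry buys nothing and can be evicted.

```python
# Hypothetical sketch: evict cache entries whose weights no longer occupy
# the compute device at all.
import torch


def is_fully_offloaded(model: torch.nn.Module, device: torch.device) -> bool:
    # True when no parameter lives on the target device.
    return all(p.device != device for p in model.parameters())


def drop_offloaded(cache: dict[str, torch.nn.Module], device: torch.device) -> None:
    for key in [k for k, m in cache.items() if is_fully_offloaded(m, device)]:
        # A fully offloaded model holds memory without saving a transfer,
        # so it can be dropped and reloaded from disk later if needed.
        del cache[key]
```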

020968a021 · Load state dict one module at a time in CachedModelWithPartialLoad. · Updated 2025-01-14 16:36:55 -05:00 · 3620 behind, 2 ahead
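
The point of loading per-module is that the transient memory overhead is one module's tensors rather than a whole-model copy. A minimal sketch of the idea, with an invented helper name; CachedModelWithPartialLoad's real logic may differ:

```python
# Hypothetical sketch: apply a state dict one leaf module at a time.
import torch


def load_state_dict_per_module(
    model: torch.nn.Module, sd: dict[str, torch.Tensor]
) -> None:
    for name, module in model.named_modules():
        # Visit only leaf modules; parents would re-copy children's keys.
        if len(list(module.children())) > 0:
            continue
        prefix = f"{name}." if name else ""
        # Select this module's own keys (no further dots past the prefix).
        local_sd = {
            k[len(prefix):]: v
            for k, v in sd.items()
            if k.startswith(prefix) and "." not in k[len(prefix):]
        }
        if local_sd:
            module.load_state_dict(local_sd, strict=False)
```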

109cbb8532 · Update the default Model Cache behavior to be more conservative with RAM usage. · Updated 2025-01-13 13:48:52 -05:00 · 3620 behind, 1 ahead

a5f780c796 · chore(ui): bump @invoke-ai/ui-library · Updated 2025-01-09 03:23:23 -05:00 · 3622 behind, 1 ahead

b0be545381 · Change definition of VRAM in use for the ModelCache from sum of model weights to the total torch.cuda.memory_allocated(). · Updated 2025-01-06 19:06:39 -05:00 · 3679 behind, 11 ahead
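
torch.cuda.memory_allocated() is a real PyTorch API. The two helpers below contrast the accounting strategies the commit message describes; the helper names themselves are invented for illustration.

```python
# Sketch of the two VRAM-accounting strategies being swapped.
import torch


def vram_from_weights(models: list[torch.nn.Module]) -> int:
    # Old-style accounting: sum parameter/buffer bytes of cached models.
    # Misses activations, allocator fragmentation, and tensors owned by
    # code outside the cache.
    return sum(
        t.numel() * t.element_size()
        for m in models
        for t in list(m.parameters()) + list(m.buffers())
        if t.is_cuda
    )


def vram_allocated(device: torch.device | None = None) -> int:
    # Allocator-level accounting: everything torch has handed out on the
    # device, which tracks real memory pressure much more closely.
    return torch.cuda.memory_allocated(device)
```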

cd268ff5b6 · Allow models to be locked in VRAM, even if they have been dropped from the RAM cache (related: https://github.com/invoke-ai/InvokeAI/issues/7513). · Updated 2025-01-06 11:48:05 -05:00 · 3679 behind, 1 ahead

543c152e10 · fix(api): add max len for BoardChanges · Updated 2025-01-05 18:09:01 -05:00 · 3681 behind, 1 ahead
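
InvokeAI's API schemas are Pydantic models, so a length cap of this kind typically looks like the sketch below; the field name and limit are placeholders, not the actual BoardChanges definition.

```python
# Hypothetical sketch: cap a text field's length with Pydantic.
from pydantic import BaseModel, Field


class BoardChanges(BaseModel):
    # max_length rejects oversized input at validation time instead of
    # letting it reach the database layer. 300 is a placeholder value.
    board_name: str | None = Field(default=None, max_length=300)
```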

6d7314ac0a · Consolidate the LayerPatching patching modes into a single implementation. · Updated 2024-12-24 10:57:54 -05:00 · 3721 behind, 0 ahead (included in the default branch)

510ed6ed1f · Make CachedModelWithPartialLoad work with models that have non-persistent buffers. · Updated 2024-12-23 10:46:37 -05:00 · 3786 behind, 65 ahead
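
The wrinkle this commit handles is standard PyTorch behavior: a buffer registered with persistent=False exists on the module but is omitted from state_dict(), so partial-load bookkeeping that enumerates state-dict keys alone will miss it. A minimal demonstration (the module here is a made-up example):

```python
# Non-persistent buffers: present at runtime, absent from state_dict().
import torch


class WithScratchBuffer(torch.nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.linear = torch.nn.Linear(4, 4)
        # persistent=False keeps this buffer out of the state dict.
        self.register_buffer("scratch", torch.zeros(4), persistent=False)


m = WithScratchBuffer()
assert "scratch" not in m.state_dict()
assert "scratch" in dict(m.named_buffers())
# A partial-load wrapper therefore has to move/track named_buffers()
# explicitly, not just the keys of state_dict().
```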

582d67b907 · experiment: fix types · Updated 2024-12-19 17:59:17 -05:00 · 3764 behind, 3 ahead

f01e41ceaf · First pass at dynamically calculating the working memory requirements for the VAE decoding operation. Still need to tune SD3 and FLUX. · Updated 2024-12-19 15:26:16 -05:00 · 3786 behind, 57 ahead
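
The idea being described, estimating decode working memory from the requested output size rather than a fixed budget, could be sketched as follows. The function name and overhead factor are placeholders, not the tuned values the commit message says are still pending.

```python
# Hypothetical sketch: scale the VAE-decode memory estimate with output size.
import torch


def estimate_vae_decode_working_memory(
    height: int,
    width: int,
    dtype: torch.dtype = torch.float16,
    overhead_factor: float = 8.0,  # placeholder, not a tuned constant
) -> int:
    element_size = torch.empty((), dtype=dtype).element_size()
    # Bytes of the decoded RGB image itself.
    output_bytes = height * width * 3 * element_size
    # Intermediate activations dominate the cost and grow with resolution,
    # hence a multiplicative factor rather than a flat budget.
    return int(output_bytes * overhead_factor)
```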

3ed6e65a6e · Enable LoRAPatcher.apply_smart_lora_patches(...) throughout the stack. · Updated 2024-12-12 17:41:50 -05:00 · 3848 behind, 15 ahead

5422bb74c6 · ruff · Updated 2024-12-12 17:28:44 -05:00 · 3848 behind, 5 ahead