Commit Graph

15298 Commits

Author SHA1 Message Date
Ryan Dick
b2bb359d47 Update the model loading logic for several of the large FLUX-related models to ensure that the model is initialized on the meta device prior to loading the state dict into it. This helps to keep peak memory down. 2025-01-16 02:30:28 +00:00
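Below is a minimal sketch (not InvokeAI's actual loader code) of the meta-device pattern this commit describes: the module is constructed on the `meta` device so no real weight memory is allocated, then the weights from the state dict are assigned directly into it.

```python
import torch

# Build the module on the meta device: parameters are created without
# allocating real storage. (torch.nn.Linear is just a stand-in for a
# large FLUX submodel.)
with torch.device("meta"):
    model = torch.nn.Linear(4096, 4096)

# In practice this state dict would come from a checkpoint on disk.
state_dict = {
    "weight": torch.zeros(4096, 4096),
    "bias": torch.zeros(4096),
}

# assign=True re-uses the state-dict tensors as the module's parameters
# instead of copying into them, so peak memory stays near a single copy
# of the weights.
model.load_state_dict(state_dict, assign=True)
```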
Mary Hipp
b57aa06d9e take out AbortController logic and simplify dependencies 2025-01-16 09:39:32 +11:00
Mary Hipp
f856246c36 try removing abortcontroller 2025-01-16 09:39:32 +11:00
Mary Hipp
195df2ebe6 remove logic changes, keep logging 2025-01-16 09:39:32 +11:00
Mary Hipp
7b5cef6bd7 lint fix 2025-01-16 09:39:32 +11:00
Mary Hipp
69e7ffaaf5 add logging, remove deps 2025-01-16 09:39:32 +11:00
psychedelicious
993401ad6c fix(ui): hide layer when previewing filter
Previously, when previewing a filter on a layer with some transparency, or when previewing a filter that changes the alpha, the preview was rendered on top of the layer and blended with it, which isn't right.

In this change, the layer is hidden during the preview, and when the filter finishes (having been applied or canceled - the two possible paths), the layer is shown.

Technically, we are hiding and showing the layer's object renderer's konva group, which contains the layer's "real" data.

Another small change prevents a flash of an empty layer by waiting to destroy the previous filter preview image until the new preview image is ready to display.
2025-01-16 09:27:36 +11:00
psychedelicious
8d570dcffc chore(ui): typegen 2025-01-16 09:27:36 +11:00
psychedelicious
3f70e947fd chore: ruff 2025-01-16 09:27:36 +11:00
dunkeroni
157290bef4 add: size option for image noise node and filter 2025-01-16 09:27:36 +11:00
dunkeroni
b7389da89b add: Noise filter on Canvas 2025-01-16 09:27:36 +11:00
dunkeroni
254b89b1f5 add: Blur filter option on canvas 2025-01-16 09:27:36 +11:00
dunkeroni
2b122d7882 add: image noise invocation 2025-01-16 09:27:36 +11:00
dunkeroni
ded9213eb4 trim blur splitting logic 2025-01-16 09:27:36 +11:00
dunkeroni
9d51eb49cd fix: ImageBlurInvocation handles transparency now 2025-01-16 09:27:36 +11:00
dunkeroni
0a6e22bc9e fix: ImagePasteInvocation respects transparency 2025-01-16 09:27:36 +11:00
Ryan Dick
b301785dc8 Normalize the T5 model identifiers so that a FLUX T5 or an SD3 T5 model can be used interchangeably. 2025-01-16 08:33:58 +11:00
psychedelicious
edcdff4f78 fix(ui): round rects when applying transform
Due to limited floating-point precision and konva's `scale` properties, it is possible for the relative rect of an object to have non-integer coordinates and dimensions.

When we go to rasterize and otherwise export images, the HTML canvas API truncates these numbers.

So, we can end up with situations where the relative width and height of a layer are very close to the "real" value, but slightly off.

For example, width and height might be 512px, but the relative rect is calculated to be something like 512.000000003 or 511.9999999997.

In the first case, the truncation results in 512x512 for the dimensions - which is correct. But in the second case, it results in 511x511!

One place where this causes issues is the image action `New Canvas from image -> As Raster Layer (resize)`. For certain input image sizes, this results in an incorrectly resized image. For example, a 1496x1946 input image is resized to 511x511 pixels when the bbox is 512x512.

To fix this, we can round both coords and dimensions of rects when rasterizing.

I've thought through the implications and done some testing. I believe this change will not cause any regressions and only fix edge cases. But, it's possible that something was inadvertently relying on the old behavior.
2025-01-16 01:17:30 +11:00
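The off-by-one described in the commit above is easy to see with the message's own hypothetical numbers; this is a language-agnostic illustration in Python, not the actual canvas code:

```python
w_slightly_high = 512.000000003
w_slightly_low = 511.9999999997

# Truncation (what the HTML canvas API effectively does) is inconsistent:
print(int(w_slightly_high), int(w_slightly_low))      # 512 511  <- off by one

# Rounding the rect before rasterizing gives the intended 512 in both cases:
print(round(w_slightly_high), round(w_slightly_low))  # 512 512
```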
psychedelicious
66e04ea7ab fix(ui): sticky preset image tooltip
There's a bug where preset image tooltips get stuck open in the list.

After much fiddling, debugging, and review of upstream dependencies, I have determined that this is a bug in Chakra-UI v2.

Specifically, it appears to be a race condition between the Tooltip component's internal use of the `useDisclosure` hook to manage tooltip open state and the React render cycle.

Unfortunately, Chakra v2 is no longer being updated, and it's a pain in the butt to vendor and fix that component given its dependencies. Not 100% sure I could easily fix it, anyways.

Fortunately, there is a workaround: reduce the tooltip openDelay to 0ms. I prefer the current 500ms delay, but too-quick tooltips are preferable to too-sticky tooltips...
2025-01-15 09:12:46 -05:00
Ryan Dick
497bc916cc Add unet_config to get_scheduler(...) call in TiledMultiDiffusionDenoiseLatents. 2025-01-15 08:44:08 -05:00
dunkeroni
ebe1873712 fix: only add prediction type if it exists 2025-01-15 08:44:08 -05:00
dunkeroni
59926c320c support v-prediction in denoise_latents.py 2025-01-15 08:44:08 -05:00
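A rough sketch of the idea in the three scheduler-related commits above, using hypothetical names and plain dicts rather than the actual `get_scheduler(...)` signature: the prediction type from the UNet config is only passed through when it actually exists, so v-prediction models are honored without affecting other models.

```python
def build_scheduler_config(base_scheduler_config: dict, unet_config: dict) -> dict:
    """Hypothetical helper, not InvokeAI's implementation."""
    config = dict(base_scheduler_config)
    prediction_type = unet_config.get("prediction_type")  # e.g. "v_prediction"
    if prediction_type is not None:
        # Only add the key if the model config defines it.
        config["prediction_type"] = prediction_type
    return config
```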
Mary Hipp
2d3e2f1907 use window instead of document 2025-01-14 20:01:08 -05:00
psychedelicious
d88b59c5c4 Revert "feat(ui): rearrange canvas paste back nodes to save an image step"
This reverts commit 7cdda00a54.
2025-01-10 15:59:29 +11:00
Simon Fuhrmann
1c7adb5c70 Update communityNodes.md - Fix broken image
The image under https://invoke-ai.github.io/InvokeAI/nodes/communityNodes/#stereogram-nodes is broken. Changing img src to fix.
2025-01-09 07:29:02 -05:00
psychedelicious
8da9d3bc19 chore: bump version to v5.6.0rc2 v5.6.0rc2 2025-01-09 14:12:46 +11:00
psychedelicious
d9c099bd3a docs: fix incorrect macOS launcher fix command 2025-01-09 11:26:59 +11:00
psychedelicious
a329588e5a feat: add link to low vram guide to OOM toast (local only)
Needed to do a bit of refactoring to support this. Overall, the error toast components are easier to understand now.
2025-01-09 11:20:05 +11:00
psychedelicious
e09cf64779 feat: more updates to first run view 2025-01-09 11:20:05 +11:00
psychedelicious
fc8cf224ca docs: typo 2025-01-09 11:20:05 +11:00
psychedelicious
3e1ed18a1f Update docs/features/low-vram.md
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
2025-01-09 11:20:05 +11:00
psychedelicious
9a84c85486 docs: add section about disabling the sysmem fallback 2025-01-09 11:20:05 +11:00
psychedelicious
e6deaa2d2f feat(ui): minor layout tweaks for first run screen 2025-01-09 11:20:05 +11:00
psychedelicious
5246b31347 feat(ui): add low vram link to first run page 2025-01-09 11:20:05 +11:00
psychedelicious
b15dd00840 docs: add docs for low vram mode 2025-01-09 11:20:05 +11:00
psychedelicious
8808c36028 docs: update example yaml file 2025-01-09 11:20:05 +11:00
psychedelicious
89b576f10d fix(ui): prevent canvas & main panel content from scrolling
Hopefully fixes issues where, when run via the launcher, the main panel kinda just scrolls out of bounds.
2025-01-09 09:14:22 +11:00
psychedelicious
d7893a52c3 tweak(ui): whats new copy 2025-01-08 15:26:26 +11:00
Mary Hipp
b9c45c3232 Whats new update 2025-01-08 15:26:26 +11:00
David Burnett
afc9d3b98f more ruff formatting 2025-01-07 20:18:19 -05:00
David Burnett
7ddc757bdb ruff format changes 2025-01-07 20:18:19 -05:00
David Burnett
d8da9b45cc Fix for DEIS / DPM clash 2025-01-07 20:18:19 -05:00
Ryan Dick
607d19f4dd We should not trust the value of since the model could be partially-loaded. 2025-01-07 19:22:31 -05:00
psychedelicious
32286f321c docs: note that version is not req for editable install 2025-01-07 17:17:40 -05:00
psychedelicious
03f7bdc9f9 docs: fix manual install rocm pypi indices 2025-01-07 17:17:40 -05:00
Ryan Dick
4df3d0861b Deprecate ram/vram configs for smoother migration path to dynamic limits (#7526)
## Summary

Changes:
- Deprecate `ram` and `vram` configs. If these are set in invokeai.yaml,
they will be ignored.
- Create new `max_cache_ram_gb` and `max_cache_vram_gb` configs with the
same definitions as the old configs.

The main motivation of this change is to make the migration path
smoother for users who had previously added `ram`/`vram` to their
config files. Now, these users will be automatically migrated into the
new dynamic limit behavior (which is better in most cases). These users
will have to manually re-add `max_cache_ram_gb` and `max_cache_vram_gb`
to their configs if they wish to go back to specifying manual limits.
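As a rough illustration of the precedence described here (using plain dicts and a
hypothetical helper, not InvokeAI's actual config classes):

```python
def resolve_cache_limits(invokeai_yaml: dict) -> tuple[float | None, float | None]:
    """Hypothetical sketch: legacy `ram`/`vram` keys are ignored, and manual
    limits apply only when `max_cache_ram_gb`/`max_cache_vram_gb` are set."""
    max_ram_gb = invokeai_yaml.get("max_cache_ram_gb")    # None -> dynamic limit
    max_vram_gb = invokeai_yaml.get("max_cache_vram_gb")  # None -> dynamic limit
    # The legacy `ram` / `vram` keys are intentionally not consulted.
    return max_ram_gb, max_vram_gb

# A user who previously set `ram: 12` now falls through to dynamic limits:
assert resolve_cache_limits({"ram": 12, "vram": 8}) == (None, None)
assert resolve_cache_limits({"max_cache_ram_gb": 12.0}) == (12.0, None)
```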

## Related Issues / Discussions

See the release notes for RC v5.6.0rc1 for the old migration behavior
that we are trying to improve:
https://github.com/invoke-ai/InvokeAI/releases/tag/v5.6.0rc1

## QA Instructions

- [x] Test that if `ram` or `vram` are present in a user's
`invokeai.yaml`, these values are ignored.
- [x] Test that `max_cache_ram_gb` and `max_cache_vram_gb` are applied,
if set.

## Merge Plan

- Don't forget to update the RC release notes accordingly.

## Checklist

- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [x] _Tests added / updated (if applicable)_
- [x] _Documentation added / updated (if applicable)_
- [ ] _Updated `What's New` copy (if doing a release after this PR)_
2025-01-07 17:03:11 -05:00
Ryan Dick
974b4671b1 Deprecate the ram and vram configs to make the migration to dynamic
memory limits smoother for users who had previously overridden these
values.
2025-01-07 16:45:29 +00:00
Ryan Dick
6b18f270dd Bugfix: Offload of GGML-quantized model in torch.inference_mode() cm (#7525)
## Summary

This PR contains a bugfix for an edge case with model unloading (from
VRAM to RAM). Thanks to @JPPhoto for finding it.

The bug was triggered under the following conditions:
- A GGML-quantized model is loaded in VRAM
- We run a Spandrel image-to-image invocation (which is wrapped in a
`torch.inference_mode()` context manager).
- The model cache attempts to unload the GGML-quantized model from VRAM
to RAM.
- Doing this inside of the `torch.inference_mode()` cm results in the
following error:
```
 [2025-01-07 15:48:17,744]::[InvokeAI]::ERROR --> Error while invoking session 98a07259-0c03-4111-a8d8-107041cb86f9, invocation d8daa90b-7e4c-4fc4-807c-50ba9be1a4ed (spandrel_image_to_image): Cannot set version_counter for inference tensor
[2025-01-07 15:48:17,744]::[InvokeAI]::ERROR --> Traceback (most recent call last):
  File "/home/ryan/src/InvokeAI/invokeai/app/services/session_processor/session_processor_default.py", line 129, in run_node
    output = invocation.invoke_internal(context=context, services=self._services)
  File "/home/ryan/src/InvokeAI/invokeai/app/invocations/baseinvocation.py", line 300, in invoke_internal
    output = self.invoke(context)
  File "/home/ryan/.pyenv/versions/3.10.14/envs/InvokeAI_3.10.14/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/home/ryan/src/InvokeAI/invokeai/app/invocations/spandrel_image_to_image.py", line 167, in invoke
    with context.models.load(self.image_to_image_model) as spandrel_model:
  File "/home/ryan/src/InvokeAI/invokeai/backend/model_manager/load/load_base.py", line 60, in __enter__
    self._cache.lock(self._cache_record, None)
  File "/home/ryan/src/InvokeAI/invokeai/backend/model_manager/load/model_cache/model_cache.py", line 224, in lock
    self._load_locked_model(cache_entry, working_mem_bytes)
  File "/home/ryan/src/InvokeAI/invokeai/backend/model_manager/load/model_cache/model_cache.py", line 272, in _load_locked_model
    vram_bytes_freed = self._offload_unlocked_models(model_vram_needed, working_mem_bytes)
  File "/home/ryan/src/InvokeAI/invokeai/backend/model_manager/load/model_cache/model_cache.py", line 458, in _offload_unlocked_models
    cache_entry_bytes_freed = self._move_model_to_ram(cache_entry, vram_bytes_to_free)
  File "/home/ryan/src/InvokeAI/invokeai/backend/model_manager/load/model_cache/model_cache.py", line 330, in _move_model_to_ram
    return cache_entry.cached_model.partial_unload_from_vram(
  File "/home/ryan/.pyenv/versions/3.10.14/envs/InvokeAI_3.10.14/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/home/ryan/src/InvokeAI/invokeai/backend/model_manager/load/model_cache/cached_model/cached_model_with_partial_load.py", line 182, in partial_unload_from_vram
    cur_state_dict = self._model.state_dict()
  File "/home/ryan/.pyenv/versions/3.10.14/envs/InvokeAI_3.10.14/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1939, in state_dict
    module.state_dict(destination=destination, prefix=prefix + name + '.', keep_vars=keep_vars)
  File "/home/ryan/.pyenv/versions/3.10.14/envs/InvokeAI_3.10.14/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1936, in state_dict
    self._save_to_state_dict(destination, prefix, keep_vars)
  File "/home/ryan/.pyenv/versions/3.10.14/envs/InvokeAI_3.10.14/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1843, in _save_to_state_dict
    destination[prefix + name] = param if keep_vars else param.detach()
RuntimeError: Cannot set version_counter for inference tensor
```

### Explanation

From the `torch.inference_mode()` docs:
> Code run under this mode gets better performance by disabling view
tracking and version counter bumps.

Disabling version counter bumps results in the aforementioned error when
saving `GGMLTensor`s to a state_dict.

This incompatibility between `GGMLTensor`s and `torch.inference_mode()`
is likely caused by the custom tensor type implementation. There may
very well be a way to get these to cooperate, but for now it is much
simpler to remove the `torch.inference_mode()` contexts.

Note that there are several other uses of `torch.inference_mode()` in
the Invoke codebase, but they are all tight wrappers around the
inference forward pass and do not contain the model load/unload process.
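As a minimal sketch of the change (a hypothetical wrapper, not the actual Spandrel
invocation code):

```python
import torch

def run_image_to_image(model: torch.nn.Module, image: torch.Tensor) -> torch.Tensor:
    # Previously this ran under torch.inference_mode(). Because the model cache
    # may unload a GGML-quantized model (calling state_dict()) while this
    # context is active, the weaker torch.no_grad() context is used instead.
    with torch.no_grad():
        return model(image)
```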

## Related Issues / Discussions

Original discussion:
https://discord.com/channels/1020123559063990373/1149506274971631688/1326180753159094303

## QA Instructions

Find a sequence of operations that triggers the condition. For me, this
was:
- Reserve VRAM in a separate process so that there was ~12GB left.
- Fresh start of Invoke
- Run FLUX inference with a GGML 8K model
- Run Spandrel upscaling

Tests:
- [x] Confirmed that I can reproduce the error and that it is no longer
hit after the change
- [x] Confirm that there is no speed regression from switching from
`torch.inference_mode()` to `torch.no_grad()`.
    - Before: `50.354s`, After: `51.536s`


## Checklist

- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [x] _Tests added / updated (if applicable)_
- [x] _Documentation added / updated (if applicable)_
- [ ] _Updated `What's New` copy (if doing a release after this PR)_
2025-01-07 11:31:20 -05:00
Ryan Dick
85eb4f0312 Fix an edge case with model offloading from VRAM to RAM. If a GGML-quantized model is offloaded from VRAM inside of a torch.inference_mode() context manager, this will cause the following error: 'RuntimeError: Cannot set version_counter for inference tensor'. 2025-01-07 15:59:50 +00:00
psychedelicious
67e948b50d chore: bump version to v5.6.0rc1 v5.6.0rc1 2025-01-07 19:41:56 +11:00