Commit Graph

11675 Commits

Author SHA1 Message Date
Billy
b88f4a24d0 Frontend types 2025-06-23 14:01:41 +10:00
Billy
150a876c73 Formatting 2025-06-23 13:52:19 +10:00
Billy
62c3b01e4f Merge branch 'main' into OMI 2025-06-23 13:52:07 +10:00
Billy
e1157f343b Support for Flux and SDXL 2025-06-23 13:51:16 +10:00
Kent Keirsey
6a78739076 Change save button to Invoke Blue 2025-06-20 15:07:40 +10:00
psychedelicious
0794eb43e7 fix(nodes): ensure each invocation overrides _original_model_fields with own field data 2025-06-20 15:03:55 +10:00
Billy
4ee54eac1d Another attempt 2025-06-20 14:10:06 +10:00
Billy
5851c46c81 Hard code source 2025-06-19 11:05:43 +10:00
Billy
a296559e79 Ignore 2025-06-19 11:02:18 +10:00
Billy
1fd83f5e68 Import 2025-06-19 11:01:50 +10:00
Billy
637487c573 Convert FROM OMI to diffusers 2025-06-19 11:00:27 +10:00
Billy
4e98e7d0a2 Typo: dot should be comma 2025-06-19 10:47:24 +10:00
Billy
12f65d800d Formatting 2025-06-19 09:40:58 +10:00
Billy
45d09f8f51 Use OMI conversion utils 2025-06-19 09:40:49 +10:00
Billy
2876c72fa9 Schema update 2025-06-18 10:54:01 +10:00
Billy
9b4fdb493e Loader 2025-06-18 10:53:54 +10:00
Billy
47e21d6e04 Formatting 2025-06-17 13:56:38 +10:00
Billy
84ab4a1c30 Convert from OMI to default LoRA state dict 2025-06-17 13:56:22 +10:00
Billy
85c4304efd Add OMI LoRA config 2025-06-17 13:34:03 +10:00
Billy
8f152f162b Add OMI to model format taxonomy 2025-06-17 13:33:40 +10:00
Mary Hipp
291e0736d6 fix names of unpublishable nodes 2025-06-16 12:40:54 -04:00
psychedelicious
4bfa6439d4 chore(ui): typegen 2025-06-16 19:33:19 +10:00
psychedelicious
a8d7969a1d fix(app): config docstrings 2025-06-16 19:33:19 +10:00
Heathen711
46bfa24af3 ruff format 2025-06-16 19:33:19 +10:00
Heathen711
a8cb8e128d run "make frontend-typegen" 2025-06-16 19:33:19 +10:00
Heathen711
8cef0f5bf5 Update supported CUDA slot input. 2025-06-16 19:33:19 +10:00
psychedelicious
911baeb58b chore(ui): bump version to v5.15.0 2025-06-16 19:18:25 +10:00
Kevin Turner
312960645b fix: move AI Toolkit to the bottom of the detection list
to avoid disrupting already-working LoRAs
2025-06-16 19:08:11 +10:00
Kevin Turner
a214f4fff5 fix: group aitoolkit lora layers 2025-06-16 19:08:11 +10:00
Kevin Turner
2981591c36 test: add some aitoolkit lora tests 2025-06-16 19:08:11 +10:00
Kevin Turner
b08f90c99f WIP!: …they weren't in diffusers format… 2025-06-16 19:08:11 +10:00
Kevin Turner
ab8c739cd8 fix(LoRA): add ai-toolkit to lora loader 2025-06-16 19:08:11 +10:00
Kevin Turner
5c5108c28a feat(LoRA): support AI Toolkit LoRA for FLUX [WIP] 2025-06-16 19:08:11 +10:00
j-brooke
3df7cfd605 Updated fracturedjsonjs to version 4.1.0 and included settings adjustments for more pleasing comma placement. 2025-06-14 14:59:43 +10:00
psychedelicious
1ff3d44dba fix(app): guard against possible race conditions during enqueue
In #7724 we made a number of perf optimisations related to enqueuing. One of these moved the enqueue logic, including expensive prep work and DB writes, to a separate thread.

At the same time, manual DB locking was abandoned in favor of WAL mode.

Finally, we set `check_same_thread=False` to allow multiple threads to access the connection at the same time.

I think this may be the cause of #7950:
- We start an enqueue in a thread (running in bg)
- We dequeue
- The dequeue pulls a partially written queue item from the DB, and we get the errors in the linked issue

To be honest, I don't understand enough about SQLite to confidently say that this kind of race condition is actually possible. But:
- The error started popping up around the time we made this change.
- I have reviewed the logic from enqueue to dequeue very carefully _many_ times over the past month or so, and I am confident that the error is only possible if we are unexpectedly getting `NULL` values from the DB.
- The DB schema includes `NOT NULL` constraints for the column that is apparently returning `NULL`.
- Therefore, without some kind of race condition or schema issue, the error should not be possible.
- The `enqueue_batch` call is the only place I can find where we have the possibility of a race condition due to async logic. Everywhere else, all DB interaction for the queue is synchronous, as far as I can tell.

This change retains the perf benefits by running the heavy enqueue prep logic in a separate thread, but moves back to the main thread for the DB write. It also uses an explicit transaction for the write.

Will just have to wait and see if this fixes the issue.
2025-06-13 23:51:47 +10:00
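A minimal sketch of the pattern this commit describes, assuming an asyncio app and the stdlib `sqlite3` module; the `session_queue` table, column names, and prep helper here are hypothetical illustrations, not InvokeAI's actual schema:

```python
import asyncio
import sqlite3

# Connection is created and used on the event-loop (main) thread only.
conn = sqlite3.connect("queue.db")
conn.execute("PRAGMA journal_mode=WAL")
conn.execute(
    "CREATE TABLE IF NOT EXISTS session_queue "
    "(item_id TEXT NOT NULL, payload TEXT NOT NULL)"
)

def prepare_queue_items(batch):
    # Expensive prep work (validation, graph building, serialisation);
    # deliberately touches no DB state.
    return [(item["id"], item["payload"]) for item in batch]

async def enqueue_batch(batch):
    # Heavy prep runs in a worker thread so the event loop stays responsive...
    rows = await asyncio.get_running_loop().run_in_executor(
        None, prepare_queue_items, batch
    )
    # ...but the write happens back on this thread, inside one explicit
    # transaction: a concurrent dequeue sees all of these rows or none of
    # them, never a partially written queue item.
    with conn:
        conn.executemany(
            "INSERT INTO session_queue (item_id, payload) VALUES (?, ?)", rows
        )

asyncio.run(enqueue_batch([{"id": "abc", "payload": "{}"}]))
```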
Emmanuel Ferdman
c80ad90f72 Migrate to modern logger interface
Signed-off-by: Emmanuel Ferdman <emmanuelferdman@gmail.com>
2025-06-13 13:07:09 +10:00
psychedelicious
3b4d1b8786 perf(app): gc before every queue item
This reduces peak memory usage at a negligible cost. Queue items typically take on the order of seconds, making the time cost of a GC essentially free.

Not a great idea on a hotter code path, though.
2025-06-11 12:56:16 +10:00
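A minimal sketch of the idea; `execute_session` is a hypothetical stand-in for the actual session runner:

```python
import gc

def execute_session(queue_item):
    ...  # stand-in for the real session graph execution

def run_queue_item(queue_item):
    # Collect before each queue item so memory from the previous session is
    # reclaimed before this one allocates. A queue item runs for seconds, so
    # a millisecond-scale GC pause is effectively free here; on a hotter code
    # path the same call would be a poor trade.
    gc.collect()
    return execute_session(queue_item)
```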
psychedelicious
c66201c7e1 perf(app): skip TI logic when no TIs to apply 2025-06-11 12:56:16 +10:00
psychedelicious
35c7c59455 fix(app): reduce peak memory usage
We've long suspected there is a memory leak in Invoke, but that may not be true. What looks like a memory leak may in fact be the expected behaviour for our allocation patterns.

We observe a ~20 to ~30 MB increase in memory usage per session executed. In a prolonged test, I measured the process's RSS while running 200 SDXL generations. Memory usage eventually leveled off at around 100 generations, by which point it had climbed ~900MB from its starting point.

I used tracemalloc to diff the allocations of single session executions and found that we are allocating ~20MB or so per session in `ModelPatcher.apply_ti()`.

In `ModelPatcher.apply_ti()` we add tokens to the tokenizer when handling TIs. The added tokens should be scoped to only the current invocation, but there is no simple way to remove the tokens afterwards.

As a workaround, we clone the tokenizer, add the TI tokens to the clone, and use the clone when running compel. Afterwards, the cloned tokenizer is discarded.

The tokenizer uses ~20MB of memory, and it has referrers/referents among other compel objects. This is what causes the observed per-session memory increase!

We'd expect these objects to be GC'd, but Python doesn't do this immediately. After creating the cond tensors, we move straight on to denoising, so the GC has no chance to run and free its existing memory arenas/blocks for reuse. Instead, Python has to request more memory from the OS.

We can improve the situation by immediately calling `del` on the tokenizer clone and related objects. In fact, we already had some code in the compel nodes to `del` some of these objects, but not all.

Adding the `del`s vastly improves things. We now hit peak RSS in half as many sessions (~50 or fewer), and the peak is only ~100MB above the starting value. There is still a gradual increase in memory usage until we level off.
2025-06-11 12:56:16 +10:00
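A hedged sketch of the workaround described above; the function names and `run_compel` helper are illustrative, not InvokeAI's actual API:

```python
from copy import deepcopy

def run_compel(tokenizer, text_encoder, prompt):
    ...  # stand-in for compel's conditioning creation

def build_conditioning(tokenizer, text_encoder, prompt, ti_tokens):
    # Scope the TI tokens to this invocation by mutating a clone rather than
    # the shared tokenizer, which has no simple "remove tokens" operation.
    ti_tokenizer = deepcopy(tokenizer)
    ti_tokenizer.add_tokens(ti_tokens)
    cond = run_compel(ti_tokenizer, text_encoder, prompt)
    # Drop the ~20MB clone immediately so its memory can be reused for the
    # cond tensors and denoising, instead of waiting for a GC pass that won't
    # run before Python has already asked the OS for more memory.
    del ti_tokenizer
    return cond
```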
psychedelicious
85f98ab3eb fix(app): error on upload + resize for unusual image modes 2025-06-11 11:18:08 +10:00
Mary Hipp
dac75685be disable publish and cancel buttons once publishing begins 2025-06-10 19:50:09 -04:00
psychedelicious
d7b5a8b298 fix: opencv dependency conflict (#8095)
* build: prevent `opencv-python` from being installed

Fixes this error: `AttributeError: module 'cv2.ximgproc' has no attribute 'thinning'`

`opencv-contrib-python` supersedes `opencv-python`, providing the same API + additional features. The two packages should not be installed at the same time to avoid conflicts and/or errors.

The `invisible-watermark` package requires `opencv-python`, but we require the contrib variant.

This change updates `pyproject.toml` to prevent `opencv-python` from ever being installed, using a `uv` feature called dependency overrides (see the sketch after this entry).

* feat(ui): data viewer supports disabling wrap

* feat(api): list _all_ pkgs in app deps endpoint

* chore(ui): typegen

* feat(ui): update about modal to display new full deps list

* chore: uv lock
2025-06-10 08:33:41 -04:00
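A hedged sketch of what such an override can look like in `pyproject.toml`, using uv's dependency-overrides feature; the exact entry the project uses may differ:

```toml
[tool.uv]
# Overrides replace matching requirements everywhere in the dependency tree.
# The impossible environment marker means the requirement never applies, so
# opencv-python is never installed, even though invisible-watermark declares
# a dependency on it; only opencv-contrib-python ends up in the environment.
override-dependencies = [
    "opencv-python; sys_platform == 'never'",
]
```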
Kent Keirsey
d3ecaa740f Add Precise Reference to Starter Models 2025-06-09 22:02:11 +10:00
dunkeroni
b5a6765a3d also search image creation date 2025-06-09 21:54:26 +10:00
psychedelicious
3704573ef8 chore: bump version to v5.14.0 2025-06-06 22:36:32 +10:00
Hiroto N
01fbf2ce4d translationBot(ui): update translation (Japanese)
Currently translated at 76.5% (1467 of 1917 strings)

Co-authored-by: Hiroto N <hironow365@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ja/
Translation: InvokeAI/Web UI
2025-06-06 20:56:13 +10:00
Riccardo Giovanetti
96e7003449 translationBot(ui): update translation (Italian)
Currently translated at 98.9% (1896 of 1917 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2025-06-06 20:56:13 +10:00
RyoKoba
80197b8856 translationBot(ui): update translation (Japanese)
Currently translated at 76.1% (1460 of 1917 strings)

Co-authored-by: RyoKoba <kobayashi_ryo@cyberagent.co.jp>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ja/
Translation: InvokeAI/Web UI
2025-06-06 20:52:36 +10:00
Hosted Weblate
0187bc671e translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2025-06-06 20:52:36 +10:00
psychedelicious
31584daabe feat(ui): display canvas spinner during compositing operations 2025-06-06 20:50:02 +10:00