Compare commits

...

404 Commits

Author SHA1 Message Date
psychedelicious
3e200a2ba2 chore: bump version to v5.10.0dev3 2025-04-04 16:48:12 +10:00
psychedelicious
4610b55a5d chore: update uv.lock 2025-04-04 16:46:15 +10:00
psychedelicious
b3b3dbd92d build: remove pin on spandrel dependency 2025-04-04 16:41:10 +10:00
psychedelicious
6c36b0508b build: add comment about torchsde to pyproject 2025-04-04 16:40:50 +10:00
psychedelicious
2756c539e0 build: remove pin on gguf dependency
This allows it to pull in sentencepiece on its own. In 0.10.0, it didn't have this package listed as a dependency, but in recent releases it does. So we are able to remove sentencepiece as an explicit dep.
2025-04-04 16:40:36 +10:00
psychedelicious
a34383d460 build: remove unused clip_anytorch dependency 2025-04-04 16:39:20 +10:00
psychedelicious
77f22497d2 build: remove unused pytorch-lightning dependency 2025-04-04 16:39:20 +10:00
psychedelicious
5967d4e1da build: remove unused pyreadline3 dependency 2025-04-04 16:39:20 +10:00
psychedelicious
1253ad5053 build: remove unused pyperclip dependency 2025-04-04 16:39:20 +10:00
psychedelicious
5aa08ab09b build: remove unused pympler dependency 2025-04-04 16:39:19 +10:00
psychedelicious
6ce527768b build: remove unused scikit-image dependency 2025-04-04 16:39:19 +10:00
psychedelicious
fe88012236 build: remove unused npyscreen dependency 2025-04-04 16:39:19 +10:00
psychedelicious
8609b98217 build: remove unused torchmetrics dependency 2025-04-04 16:13:45 +10:00
psychedelicious
19f0bf828c build: remove unused datasets dependency 2025-04-04 16:12:13 +10:00
psychedelicious
26cbeccfdf build: remove unused click dependency 2025-04-04 16:11:38 +10:00
psychedelicious
b5be81b97b build: remove unused omegaconf dependency 2025-04-04 16:09:53 +10:00
psychedelicious
f14d07968b build: remove unused facexlib dependency 2025-04-04 16:09:36 +10:00
psychedelicious
525a89900a build: remove unused timm dependency 2025-04-04 16:08:31 +10:00
psychedelicious
d8df31a8ac chore(ui): typegen 2025-04-04 16:03:29 +10:00
psychedelicious
380a41be34 chore: update uv.lock 2025-04-04 16:03:29 +10:00
psychedelicious
e990afbccb build: remove unused matplotlib dep 2025-04-04 16:03:29 +10:00
psychedelicious
c591478d24 tidy(nodes): remove matplotlib dependency
It was only used for a single color conversion function. Replaced with cv2 code, tested functionality to confirm it works the same.
2025-04-04 16:03:29 +10:00
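
The commit above does not name the color conversion that was swapped out; purely as an illustration (assuming an RGB-to-HSV helper, which may not be the actual function), the matplotlib-to-cv2 replacement could look like this:

```py
import cv2
import numpy as np

def rgb_to_hsv(image_rgb: np.ndarray) -> np.ndarray:
    # Hypothetical stand-in for the removed matplotlib.colors call.
    # Note: cv2's uint8 HSV ranges (H in 0-179) differ from matplotlib's 0-1
    # floats, so surrounding code may need rescaling.
    return cv2.cvtColor(image_rgb, cv2.COLOR_RGB2HSV)
```
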
psychedelicious
30def6a9bd build: move humanize to test deps 2025-04-04 16:03:29 +10:00
psychedelicious
6cf88a601d build: remove unused albumentations dependency
This is not used
2025-04-04 16:03:29 +10:00
psychedelicious
5e14545c32 tidy: delete unused file 2025-04-04 16:03:29 +10:00
psychedelicious
eefbcd2485 build: remove controlnet_aux dependency, remove pin for timm 2025-04-04 16:03:29 +10:00
psychedelicious
13cc44a22c tidy(nodes): rename controlnet_image_processors.py -> controlnet.py 2025-04-04 16:03:29 +10:00
psychedelicious
2cca339a5c tidy(nodes): remove unused old dw openpose detector class 2025-04-04 16:03:29 +10:00
psychedelicious
0a7cf6c0ec tidy(nodes): remove deprecated controlnet "processor" nodes 2025-04-04 16:03:29 +10:00
psychedelicious
06abc1d40a build: upgrade python to 3.12 in pins 2025-04-04 16:03:29 +10:00
psychedelicious
2cde86b7b8 build: update uv.lock 2025-04-04 16:03:28 +10:00
psychedelicious
0a49463c79 fix(backend): remove mps_fixes
The fixes in this module monkeypatched `torch` to resolve some issues with FP16 on macOS. These issues have long since been resolved.

Included in the now-removed fixes is `CustomSlicedAttentionProcessor`, which is intended to reduce memory requirements for MPS. This overrides `diffusers`' own `SlicedAttentionProcessor`.

Unfortunately, `attention_type: sliced` produces hot garbage with the fixes and black images without the fixes. So this class appears to now be a moot point.

Regardless, SDPA is supported on MPS and very efficient, so sliced attention is largely obsolete.
2025-04-04 16:03:28 +10:00
psychedelicious
f3402b6ce7 chore: bump version to v5.10.0dev2
Doing a dev build so I can test the launcher.
2025-04-04 16:03:28 +10:00
psychedelicious
5d3fb822c5 build: downgrade python to 3.11 in pins 2025-04-04 16:03:28 +10:00
psychedelicious
9e70d8eb6e build: restore prev setuptools config to fix wheel build 2025-04-04 16:03:28 +10:00
psychedelicious
402758d502 ci: use py3.12 to build installer 2025-04-04 16:03:28 +10:00
psychedelicious
b97cc51f23 experiment: add pins.json to repo
The launcher will query this file to get the pins needed for installation
2025-04-04 16:03:28 +10:00
psychedelicious
f6f33b5999 chore: bump version to v5.10.0dev1
Doing a dev build so I can test the launcher.
2025-04-04 16:03:28 +10:00
psychedelicious
cd873f1fe5 chore: update uv.lock for latest pydantic
Ran `uv lock --upgrade-package pydantic`
2025-04-04 16:03:28 +10:00
psychedelicious
5f3d398074 fix(ui): handle updated schema structure during invocation parsing
In https://github.com/pydantic/pydantic/pull/10029, pydantic made an improvement to its generated JSON schemas (OpenAPI schemas). The previous and new generated schemas both meet the schema spec.

When we parse the OpenAPI schema to generate node templates, we use a typeguard to narrow schema components from generic OpenAPI schema objects to node field schema objects. The narrower node field schema objects contain extra data.

For example, they contain a `field_kind` attribute that indicates whether the field is an input field or an output field. These extra attributes are not part of the OpenAPI spec (but the spec does allow for this extra data).

This typeguard relied on a pydantic implementation detail. This was changed in the linked pydantic PR, which was released in v2.9.0. With the change, our typeguard rejects input field schema objects, causing parsing to fail with errors/warnings like `Unhandled input property` in the JS console.

In the UI, this causes many fields - mostly model fields - to not show up in the workflow editor.

The fix for this is very simple - instead of relying on an implementation detail for the typeguard, we can check if the incoming schema object has any of our invoke-specific extra attributes. Specifically, we now look for the presence of the `field_kind` attribute on the incoming schema object. If it is present, we know we are dealing with an invocation input field and can parse it appropriately.
2025-04-04 16:03:28 +10:00
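
The typeguard described in the commit above lives in the TypeScript UI; the idea can be sketched language-agnostically in Python: treat a schema object as an invocation field schema only if it carries an invoke-specific extra such as `field_kind`. Names below are illustrative, not the actual UI code.

```py
from typing import Any

def is_invocation_field_schema(schema_obj: dict[str, Any]) -> bool:
    # Rely on our own invoke-specific extra attribute rather than a pydantic
    # implementation detail of how the generated schema happens to be structured.
    return "field_kind" in schema_obj

# A schema fragment as pydantic might emit it, plus the invoke-specific extras.
example = {"type": "integer", "default": 1, "field_kind": "input"}
assert is_invocation_field_schema(example)
```
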
psychedelicious
e6b366ff61 chore: typegen 2025-04-04 16:03:28 +10:00
psychedelicious
bcd50ed688 chore: remove pydantic pin 2025-04-04 16:03:27 +10:00
psychedelicious
a5966c3197 chore(ui): typegen 2025-04-04 16:03:27 +10:00
psychedelicious
f28b054872 tests: update tests/test_object_serializer_disk.py 2025-04-04 16:03:27 +10:00
psychedelicious
31681f4ad7 fix(app): add trusted classes to torch safe globals to prevent errors when loading them
In `ObjectSerializerDisk`, we use `torch.load` to load serialized objects from disk. With torch 2.6.0, torch defaults to `weights_only=True`. As a result, torch will raise when attempting to deserialize anything with an unrecognized class.

For example, our `ConditioningFieldData` class is untrusted. When we load conditioning from disk, we will get a runtime error.

Torch provides a method to add trusted classes to an allowlist. This change adds an arg to `ObjectSerializerDisk` to add a list of safe globals to the allowlist and uses it for both `ObjectSerializerDisk` instances.

Note: My first attempt inferred the class from the generic type arg that `ObjectSerializerDisk` accepts, and added that to the allowlist. Unfortunately, this doesn't work.

For example, `ConditioningFieldData` has a `conditionings` attribute that may be one of several other untrusted classes representing model-specific conditioning data. So, even if we allowlist `ConditioningFieldData`, loading will fail when torch deserializes the `conditionings` attribute.
2025-04-04 16:03:27 +10:00
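
A minimal sketch of the allowlisting approach described in the commit above, using torch's public `add_safe_globals` API. The class below is a stand-in for the real app class, and the wiring in `ObjectSerializerDisk` differs.

```py
import torch
from torch.serialization import add_safe_globals

class ConditioningFieldData:  # stand-in for the real app class
    def __init__(self, conditionings: list):
        self.conditionings = conditionings

# Every class that can appear anywhere in the pickled object graph must be
# allowlisted - nested attributes are deserialized too, not just the top level.
add_safe_globals([ConditioningFieldData])

torch.save(ConditioningFieldData(conditionings=[]), "obj.pt")
loaded = torch.load("obj.pt", weights_only=True)  # no longer raises for allowlisted classes
```
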
Eugene Brodsky
aaf042de48 resolve conflict between timm version needed by LLaVA and controlnet-aux 2025-04-04 16:03:27 +10:00
Eugene Brodsky
c28e685409 reintroduce GPU_DRIVER build arg in CI container build, as it has apparently been removed 2025-04-04 16:03:27 +10:00
Eugene Brodsky
d6ac822a1f remove obsolete dependencies that were used by the CLI 2025-04-04 16:03:27 +10:00
Eugene Brodsky
f0a4d7ac7f modify docs for python 3.12 2025-04-04 16:03:27 +10:00
Eugene Brodsky
04b0e658df update nodes schema / typegen 2025-04-04 16:03:27 +10:00
Eugene Brodsky
68845f4d85 update uv.lock 2025-04-04 16:03:27 +10:00
Eugene Brodsky
6df5614b54 refactor Dockerfile; get rid of multi-stage build; upgrade to python 3.12 2025-04-04 16:03:27 +10:00
Eugene Brodsky
0bd6f0245b use uv.lock to pin dependencies 2025-04-04 16:03:26 +10:00
Eugene Brodsky
6c9165046e upgrade pytorch and unpin some of the strict dependency pins to facilitate upgrading co-dependencies.
we will use uv.lock to ensure reproducibility
2025-04-04 16:03:26 +10:00
Chantell
2b5da91beb Update manual.md
Removed a redundant package specifier in step 6.
2025-04-04 16:52:04 +11:00
psychedelicious
74bede14be feat(ui): put all validation run data into single object 2025-04-04 11:38:04 +11:00
psychedelicious
04ea3c491a chore(ui): typegen 2025-04-04 11:38:04 +11:00
psychedelicious
38e7b23d18 feat(api): put all validation run data into single object 2025-04-04 11:38:04 +11:00
psychedelicious
c052846e05 feat(ui): ensure workflow id is passed when doing validation run 2025-04-04 11:38:04 +11:00
psychedelicious
af3a31dfec chore(ui): typegen 2025-04-04 11:38:04 +11:00
psychedelicious
571710fab6 feat(app): add optional published_workflow_id to enqueue payloads and queue item 2025-04-04 11:38:04 +11:00
psychedelicious
a175a5c252 feat(ui): add safeguard against accidentally loading non-library workflow as library workflow 2025-04-04 11:38:04 +11:00
psychedelicious
8b3c36c6fa refactor(ui): better UX for choosing output nodes 2025-04-04 11:38:04 +11:00
psychedelicious
b9ffacd4bf fix(ui): disable publish button when not ready to enqueue (i.e. invalid graph) 2025-04-04 11:38:04 +11:00
psychedelicious
ae45fc8a74 gh: update codeowners
- Add @psychedelicious as codeowner for docs
- Remove inactive contributors
2025-04-03 18:34:39 -04:00
psychedelicious
85db9c65e5 fix(ui): add missing tkey 2025-04-03 12:42:28 +11:00
psychedelicious
ddddaef7ca refactor(ui): use dedicated allowPublishWorkflows instead of disabledFeatures 2025-04-03 12:42:28 +11:00
psychedelicious
e4678201cb feat(ui): add conditionally-enabled workflow publishing ui
This is a squash of a lot of scattered commits that became very difficult to clean up and make individually. Sorry.

Besides the new UI, there are a number of notable changes:
- Publishing logic is disabled in OSS by default. To enable it, provide a `disabledFeatures` prop _without_ "publishWorkflow".
- Enqueuing a workflow is no longer handled in a redux listener. It was hard to track the state of the enqueue logic in the listener. It is now in a hook. I did not migrate the canvas and upscaling tabs - their enqueue logic is still in the listener.
- When queueing a validation run, the new `useEnqueueWorkflows()` hook will update the payload with the required data for the run.
- Some logic is added to the socket event listeners to handle workflow publish runs completing.
- The workflow library side nav has a new "published" view. It is hidden when the "publishWorkflow" feature is disabled.
- I've added `Safe` and `OrThrow` versions of some workflow hooks. These hooks typically retrieve some data from redux. For example, a node. The `Safe` hooks return the node or null if it cannot be found, while the `OrThrow` hooks return the node or raise if it cannot be found. The `OrThrow` hooks should be used within one of the gate components. These components use the `Safe` hooks and render a fallback if e.g. the node isn't found. This change is required for some of the publish flow UI.
- Add support for locking the workflow editor. When locked, you can pan and zoom but that's it. Currently, it is only locked during publish flow and if a published workflow is opened.
2025-04-03 12:42:28 +11:00
psychedelicious
d66fdfde71 chore(ui): typegen 2025-04-03 12:42:28 +11:00
psychedelicious
08ee08557b feat(app): add noop api validation run stuff to routes and methods 2025-04-03 12:42:28 +11:00
psychedelicious
496f1262c6 feat(app): truncate warnings for invalid model config in db
This message is logged _every_ time we retrieve a list of models if there is an invalid model. Previously it logged the _whole_ row which can be a lot of data. Truncate the row to 64 characters to reduce log pollution.
2025-04-03 12:42:28 +11:00
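
A tiny sketch of the truncation described in the commit above; the helper name, message, and constant are illustrative, not the exact app code.

```py
MAX_LOGGED_ROW_LENGTH = 64  # characters of the offending DB row to include

def warn_invalid_model_config(logger, row) -> None:
    # Hypothetical helper: log only a prefix of the row instead of the whole thing.
    logger.warning("Invalid model config in DB (truncated): %s", str(row)[:MAX_LOGGED_ROW_LENGTH])
```
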
psychedelicious
188d52e4a5 chore(ui): bump tsafe to latest 2025-04-03 12:42:28 +11:00
Riku
db03c196a1 translationBot(ui): update translation (German)
Currently translated at 66.8% (1230 of 1840 strings)

Co-authored-by: Riku <riku.block@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
2025-04-03 07:42:43 +11:00
Riccardo Giovanetti
6bc36b697d translationBot(ui): update translation (Italian)
Currently translated at 98.8% (1818 of 1840 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.6% (1816 of 1840 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.7% (1816 of 1839 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2025-04-03 07:42:43 +11:00
Linos
b7d71d3028 translationBot(ui): update translation (Vietnamese)
Currently translated at 100.0% (1840 of 1840 strings)

translationBot(ui): update translation (Vietnamese)

Currently translated at 100.0% (1838 of 1838 strings)

Co-authored-by: Linos <linos.coding@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/vi/
Translation: InvokeAI/Web UI
2025-04-03 07:42:43 +11:00
psychedelicious
fa1ebd9d2f fix(ui): do not switch between images when focused on a tab element
Arrow keys should only navigate between tabs, not gallery images.
2025-04-03 07:40:10 +11:00
psychedelicious
eed5d02069 fix(ui): handling for invalid edges when loading workflows
Previously, reactflow appears to have handled an edge case when using its `applyChanges` utility. If a change was provided without an item, it would skip that change. For example, an "add edge" change that somehow passed `null` as the edge, instead of a valid edge.

In our workflow loading and validation logic, invalid edges were removed from the array using `delete edges[i]`. This left "holes" in the array of edges. We then asked `reactflow` to add these edges to state. When it encountered one of the "holes", it skipped over it.

In a recent release (unsure which, somewhere between the latest v11 and ~v12.4) this seems to have changed. It no longer skips over the "holes" and instead trusts the data. This can cause a couple issues:
- Error when loading the workflow if `reactflow` attempts to do anything with the nonexistent edge.
- If somehow the workflow makes it into state with "holes" in the array of edges, all sorts of other stuff breaks when our code does anything with the nonexistent edge.

Two-part fix:
- Update the invalid edge handling to not use `delete edges[i]`. Instead, as we check each edge, we add invalid ones to a set. Then, after all the checks are finished, filter out the invalid edges. The resultant edges array has no holes.
- Simplify the logic around setting nodes and edges in redux. Previously we were using `reactflow`'s `applyChanges` utils, but this does literally nothing except take extra CPU cycles. We can simply set the loaded nodes and edges directly in redux. Perhaps we were using `applyChanges` because it addressed the "holes" issue? Not sure. But we don't need it now.

Closes #7868
2025-04-03 07:37:49 +11:00
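
The fix above is in the TypeScript workflow-loading code, but its first part - collect invalid entries in a set while checking, then filter, so the result has no holes - can be sketched generically in Python:

```py
from typing import Any, Callable

def keep_valid_edges(
    edges: list[dict[str, Any]],
    is_valid: Callable[[dict[str, Any]], bool],
) -> list[dict[str, Any]]:
    # Collect indices of invalid edges while checking, then filter afterwards.
    # Unlike the JS `delete edges[i]`, this never leaves holes in the result.
    invalid: set[int] = set()
    for i, edge in enumerate(edges):
        if not is_valid(edge):
            invalid.add(i)
    return [edge for i, edge in enumerate(edges) if i not in invalid]
```
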
psychedelicious
3650d91045 chore(ui): bump @xyflow/react to latest 2025-04-03 07:37:49 +11:00
Eugene Brodsky
6c7d08cacb Change timm and controlnet-aux pins to fix LLaVA model support (#7846)
## Summary

`timm` below 1.0.0 prevents llava models from working (broken in
transformers), but `controlnet-aux` pins `timm` to an earlier version
because otherwise it was breaking the ZoeDepth controlnet.

we don't use ZoeDepth (replaced by depthAnything), and downgrading
controlnet-aux seems to be acceptable.

more context here:

https://github.com/huggingface/controlnet_aux/issues/106
https://github.com/huggingface/controlnet_aux/pull/101


Note that this results in some warnings on startup, stemming from
controlnet-aux:

![image](https://github.com/user-attachments/assets/fa908837-6154-42a2-a93b-eb5e363f5783)

we can probably silence the warnings as a separate enhancement

## Checklist

- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
- [ ] _Updated `What's New` copy (if doing a release after this PR)_
2025-04-01 21:16:40 -04:00
Eugene Brodsky
bb1c40f222 Merge branch 'main' into pin-timm-for-llava 2025-04-01 21:10:30 -04:00
jazzhaiku
bfb117d0e0 Port LoRA to new classification API (#7849)
## Summary

- Port LoRA to new classification API
- Add 2 additional tests cases (ControlLora and Flux Diffusers LoRA)
- Moved `ModelOnDisk` to its own module

## Checklist

- [ ] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
- [ ] _Updated `What's New` copy (if doing a release after this PR)_
2025-04-01 08:05:48 +11:00
jazzhaiku
b31c1022c3 Merge branch 'main' into lora-classification 2025-04-01 07:58:36 +11:00
Mary Hipp
a5851ca31c fix from leftover testing 2025-03-31 12:45:53 -04:00
Mary Hipp
77bf5c15bb GET presigned URLs directly instead of trying to use redirects 2025-03-31 12:45:53 -04:00
Eugene Brodsky
d26b7a1a12 Merge branch 'main' into pin-timm-for-llava 2025-03-31 11:37:29 -04:00
psychedelicious
595133463e feat(nodes): add methods to invalidate invocation typeadapters 2025-03-31 19:15:59 +11:00
psychedelicious
6155f9ff9e feat(nodes): move invocation/output registration to separate class 2025-03-31 19:15:59 +11:00
psychedelicious
7be87c8048 refactor(nodes): simpler logic for baseinvocation typeadapter handling 2025-03-31 19:15:59 +11:00
jazzhaiku
9868c3bfe3 Merge branch 'main' into lora-classification 2025-03-31 16:43:26 +11:00
psychedelicious
8b299d0bac chore: prep for v5.9.1 2025-03-31 13:40:07 +11:00
psychedelicious
a44bfb4658 fix(mm): handle FLUX models w/ diff in_channels keys
Before FLUX Fill was merged, we didn't do any checks for the model variant. We always returned "normal".

To determine if a model is a FLUX Fill model, we need to check the state dict for a specific key. Initially, this logic was too strict and rejected quantized FLUX models. This issue was resolved, but it turns out there is another failure mode - some fine-tunes use a different key.

This change further reduces the strictness, handling the alternate key and also falling back to "normal" if we don't see either key. This effectively restores the previous probing behaviour for all FLUX models.

Closes #7856
Closes #7859
2025-03-31 12:32:55 +11:00
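
A hedged sketch of the probing behaviour described in the commit above. The state-dict key names are assumptions, and the real probe returns proper enum values rather than plain strings; only the "check input channels, else fall back to normal" shape of the logic is what the commit describes.

```py
# Illustrative only: key names are assumptions, not necessarily what Invoke checks.
FLUX_FILL_IN_CHANNELS = 384

def probe_flux_variant(state_dict: dict) -> str:
    for key in ("img_in.weight", "model.diffusion_model.img_in.weight"):
        weight = state_dict.get(key)
        if weight is not None and weight.ndim == 2 and weight.shape[-1] == FLUX_FILL_IN_CHANNELS:
            return "inpaint"  # FLUX Fill
    # Missing/unexpected keys, quantized weights, 64-channel models, etc. all
    # fall back to "normal" instead of raising.
    return "normal"
```
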
psychedelicious
96fb5f6881 feat(ui): disable denoising strength when selected models flux fill 2025-03-31 11:31:02 +11:00
psychedelicious
4109ea5324 fix(nodes): expanded masks not 100% transparent outside the fade out region
The polynomial fit isn't perfect and we end up with alpha values of 1 instead of 0 when applying the mask. This in turn causes issues on canvas where outputs aren't 100% transparent and individual layer bbox calculations are incorrect.
2025-03-31 11:17:00 +11:00
jazzhaiku
f6c2ee5040 Merge branch 'main' into lora-classification 2025-03-31 09:01:16 +11:00
Billy
965753bf8b Ruff formatting 2025-03-31 08:18:00 +11:00
Billy
40c53ab95c Guard 2025-03-29 09:58:02 +11:00
psychedelicious
aaa6211625 chore(backend): ruff C420 2025-03-28 18:28:32 -04:00
psychedelicious
f6d770eac9 ci: add python 3.12 to test matrix 2025-03-28 18:28:32 -04:00
psychedelicious
47cb61cd62 ci: remove python 3.10 from test matrix 2025-03-28 18:28:32 -04:00
psychedelicious
b0fdc8ae1c ci: bump linux-cpu test runner to ubuntu 24.04 2025-03-28 18:28:32 -04:00
psychedelicious
ed9b30efda ci: bump uv to 0.6.10 2025-03-28 18:28:32 -04:00
psychedelicious
168e5eeff0 ci: use uv in typegen-checks
ci: use uv in typegen-checks to generate types

experiment: simulate typegen-checks failure

Revert "experiment: simulate typegen-checks failure"

This reverts commit f53c6876fe8311de236d974194abce93ed84930c.
2025-03-28 18:28:32 -04:00
psychedelicious
7acaa86bdf ci: get ci working with uv instead of pip
Lots of squashed experimentation heh:

ci: manually specify python version in tests

ci: whoops typo in ruff cmds

ci: specify python versions for uv python install

ci: install python verbosely

ci: try forcing python preference?

ci: try forcing python preference a different way?

ci: try in a venv?

ci: it works, but try without venv

ci: oh maybe we need --preview?

ci: poking it with a stick

ci: it works, add summary to pytest output

ci: fix pytest output

experiment: simulate test failure

Revert "experiment: simulate test failure"

This reverts commit b99ca512f6e61a2a04a1c0636d44018c11019954.

ci: just use default pytest output

cI: attempt again to use uv to install python

cI: attempt again again to use uv to install python

Revert "cI: attempt again again to use uv to install python"

This reverts commit 3cba861c90738081caeeb3eca97b60656ab63929.

Revert "cI: attempt again to use uv to install python"

This reverts commit b30f2277041dc999ed514f6c594c6d6a78f5c810.
2025-03-28 18:28:32 -04:00
psychedelicious
96c0393fe7 ci: bump ruff to 0.11.2
Need to bump both CI and pyproject.toml at the same time
2025-03-28 18:28:32 -04:00
psychedelicious
403f795c5e ci: remove linux-cuda-11_7 & linux-rocm-5_2 from test matrix
We only have CPU runners, so these tests are not doing anything useful.
2025-03-28 18:28:32 -04:00
psychedelicious
c0f88a083e ci: use uv for python-tests 2025-03-28 18:28:32 -04:00
psychedelicious
542b182899 ci: use uv for python-checks 2025-03-28 18:28:32 -04:00
Mary Hipp
3f58c68c09 fix tag invalidation 2025-03-28 10:52:27 -04:00
Mary Hipp
e50c7e5947 restore multiple key 2025-03-28 10:52:27 -04:00
Mary Hipp
4a83700fe4 if clientSideUploading is enabled, handle bulk uploads using that flow 2025-03-28 10:52:27 -04:00
Eugene Brodsky
c9992914d6 Merge branch 'main' into pin-timm-for-llava 2025-03-28 09:20:30 -04:00
jazzhaiku
c25f6d1f84 Merge branch 'main' into lora-classification 2025-03-28 12:32:22 +11:00
jazzhaiku
a53e1ccf08 Small improvements (#7842)
## Summary

- Extend `ModelOnDisk` with caching, type hints, default args
- Fail early if there is an error classifying a config

## Checklist

- [ ] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
- [ ] _Updated `What's New` copy (if doing a release after this PR)_
2025-03-28 12:21:41 +11:00
jazzhaiku
1af9930951 Merge branch 'main' into small-improvements 2025-03-28 12:11:09 +11:00
Billy
c276c1cbee Comment 2025-03-28 10:57:46 +11:00
Billy
c619348f29 Extract ModelOnDisk to its own module 2025-03-28 10:35:13 +11:00
psychedelicious
c6f96613fc chore(ui): typegen 2025-03-28 08:14:06 +11:00
psychedelicious
258bf736da fix(nodes): handle zero fade size (e.g. mask blur 0)
Closes #7850
2025-03-28 08:14:06 +11:00
Billy
0d75c99476 Caching 2025-03-27 17:55:09 +11:00
Billy
323d409fb6 Make ruff happy 2025-03-27 17:47:57 +11:00
Billy
f251722f56 LoRA classification API 2025-03-27 17:47:01 +11:00
psychedelicious
7004fde41b fix(mm): vllm model calculates its own size 2025-03-27 09:36:14 +11:00
jazzhaiku
c9dc27afbb Merge branch 'main' into small-improvements 2025-03-27 08:14:48 +11:00
Billy
efd14ec0e4 Make ruff happy 2025-03-27 08:11:39 +11:00
Billy
21ee2b6251 Merge branch 'small-improvements' of github.com:invoke-ai/InvokeAI into small-improvements 2025-03-27 08:10:38 +11:00
Billy
82dd2d508f Deprecate checkpoint as file, diffusers as directory terminology 2025-03-27 08:10:12 +11:00
psychedelicious
ffb5f6c6a6 chore: bump version to v5.9.0 2025-03-27 08:08:44 +11:00
psychedelicious
5c5fff9ecb chore(ui): update whatsnew 2025-03-27 08:08:44 +11:00
psychedelicious
9ca071819b chore(nodes): remove beta/prototype flag from a lot of stable nodes 2025-03-27 08:08:44 +11:00
psychedelicious
b14d8e8192 chore(nodes): mark llava_onevision_vllm as beta 2025-03-27 08:08:44 +11:00
Eugene Brodsky
3f12a43e75 remove pin for controlnet-aux and pin timm to a version that works with llava
timm < 1.0.0 prevents llava models from working (broken in transformers), but controlnet-aux pinned it to an earlier version because otherwise it was breaking the ZoeDepth controlnet.

we don't use ZoeDepth (replaced by depthAnything), and downgrading controlnet-aux seems to be acceptable.

more context here:

https://github.com/huggingface/controlnet_aux/issues/106
https://github.com/huggingface/controlnet_aux/pull/101
2025-03-26 16:58:18 -04:00
jazzhaiku
5a59f6e3b8 Merge branch 'main' into small-improvements 2025-03-27 07:38:13 +11:00
Billy
60b5aef16a Log error -> warning 2025-03-27 06:56:22 +11:00
jazzhaiku
35222a8835 Taxonomy (#7833)
## Summary

This PR moves type definitions out of `config.py` into a new
`taxonomy.py` module.
The goal is to reduce clutter in `config.py`, and to resolve circular
import issues by isolating these types in a dedicated module with
(almost) no internal dependencies.
Because so many places import these definitions, these changes touch 73
files.

Additional changes:
- Removed star imports using "removestar" tool
- Added the commit to `.git-blame-ignore-revs` to avoid noise in git
blame history


## Checklist

- [ ] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
- [ ] _Updated `What's New` copy (if doing a release after this PR)_
2025-03-26 22:44:41 +11:00
Billy
0e8b5484d5 Error handling 2025-03-26 19:31:57 +11:00
Billy
454506c83e Type hints 2025-03-26 19:12:49 +11:00
Billy
8f6ab67376 Logs 2025-03-26 16:34:32 +11:00
Billy
5afcc7778f Redundant 2025-03-26 16:32:19 +11:00
Billy
325e07d330 Error handling 2025-03-26 16:30:45 +11:00
Billy
a016bdc159 Add todo 2025-03-26 16:17:26 +11:00
Billy
a14f0b2864 Fail early on invalid config 2025-03-26 16:10:32 +11:00
Billy
721483318a Extend ModelOnDisk 2025-03-26 16:10:00 +11:00
jazzhaiku
be04743649 Merge branch 'main' into taxonomy 2025-03-26 15:09:26 +11:00
psychedelicious
92f0c28d6c fix(ui): correctly render whitespace in strings in string generator previews
This is a visual issue - the underlying strings are not trimmed.

Closes #7830
2025-03-26 13:52:31 +11:00
Billy
a6b94e8ca4 Revert some files 2025-03-26 13:18:50 +11:00
Billy
00b11ef795 Git blame ignore revs 2025-03-26 12:56:04 +11:00
Billy
182580ff69 Imports 2025-03-26 12:55:10 +11:00
Billy
8e9d5c1187 Ruff formatting 2025-03-26 12:30:31 +11:00
Billy
99aac5870e Remove star imports 2025-03-26 12:27:00 +11:00
psychedelicious
c1b475c585 feat(ui): add getRuntimeConfig query and show it all in the about modal 2025-03-26 11:39:21 +11:00
psychedelicious
ec44e68cbf chore(ui): typegen 2025-03-26 11:39:21 +11:00
psychedelicious
73dbebbcc3 feat(api): add route to get app config and set config fields 2025-03-26 11:39:21 +11:00
psychedelicious
09f971467d feat(app): do not set port unless necessary 2025-03-26 11:39:21 +11:00
psychedelicious
2c71b0e873 fix(ui): long node titles overflow 2025-03-26 10:24:46 +11:00
Kevin Turner
92f69ac463 fix: make source location discovery more robust
The top-level `invokeai` package may have an obscured origin due to the way editable installs work, but it's much more likely that this module is from a specific file.
2025-03-26 10:12:36 +11:00
jazzhaiku
3b154df71a Import Smoke Test (#7835)
## Summary

This test imports all modules in the invokeai package and fails if there
are any exceptions.
Existing issues are excluded to avoid blocking main.

## Checklist

- [ ] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
- [ ] _Updated `What's New` copy (if doing a release after this PR)_
2025-03-26 08:40:07 +11:00
Billy
64aa965160 Set ordering 2025-03-25 19:21:14 +11:00
Billy
d715c27d07 Add more known failures 2025-03-25 17:59:28 +11:00
Billy
515084577c Test all imports work 2025-03-25 17:45:22 +11:00
psychedelicious
7596c07a64 chore: prep for v5.9.0rc2 2025-03-25 10:21:23 +11:00
Kevin Turner
98fd1d949b fix: make dev_reload work for files in nodes/ 2025-03-25 10:04:17 +11:00
Linos
6312e6aa8f translationBot(ui): update translation (Vietnamese)
Currently translated at 100.0% (1832 of 1832 strings)

Co-authored-by: Linos <linos.coding@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/vi/
Translation: InvokeAI/Web UI
2025-03-25 08:00:45 +11:00
Riccardo Giovanetti
6435f11bae translationBot(ui): update translation (Italian)
Currently translated at 98.7% (1815 of 1838 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.7% (1809 of 1832 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2025-03-25 08:00:45 +11:00
psychedelicious
1c69b9b1fa fix(ui): restore display: flex to image viewer and node editor
This was inadvertently removed in #7786 and caused some minor layout overflow.
2025-03-25 07:44:07 +11:00
psychedelicious
731970ff88 fix(ui): use expanded mask for paste-back when inpainting 2025-03-25 00:03:13 +11:00
psychedelicious
038bac1614 feat(ui): make it clearer that we are doing scale before processing in graph builders 2025-03-25 00:03:13 +11:00
jazzhaiku
ed9efe7740 Port LLaVA to new API (#7817)
## Summary

- Port LLaVA model config to new classification API
- Add 2 test cases (stripped LLaVA models variants to git-lfs)

## Checklist

- [ ] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
- [ ] _Updated `What's New` copy (if doing a release after this PR)_
2025-03-24 22:50:54 +11:00
jazzhaiku
ffa0beba7a Merge branch 'main' into llava 2025-03-24 15:17:33 +11:00
psychedelicious
75d793f1c4 fix(ui): siglip model translation key 2025-03-24 13:26:38 +11:00
psychedelicious
2b086917e0 chore(ui): lint 2025-03-24 13:24:13 +11:00
psychedelicious
a9f2738086 feat(ui): layout improvements for string field collection input 2025-03-24 13:24:13 +11:00
psychedelicious
3a56799ea5 tidy(ui): remove unused code 2025-03-24 13:24:13 +11:00
psychedelicious
3162ce94dc tidy(ui): use settings for node field settings instead of config
Non-functional naming change to clarify the logic
2025-03-24 13:24:13 +11:00
psychedelicious
c0dc6ac4e1 fix(ui): issue where string drop-down options are not removed when changing component to a different type 2025-03-24 13:24:13 +11:00
psychedelicious
fed1995525 chore(ui): lint 2025-03-24 13:24:13 +11:00
psychedelicious
5006e23456 feat(ui): added reset options button 2025-03-24 13:24:13 +11:00
psychedelicious
2f063bddda fix(ui): restore field-node overlay
Accidentally removed it
2025-03-24 13:24:13 +11:00
psychedelicious
23a26422fd feat(ui): support for custom string field dropdowns in builder 2025-03-24 13:24:13 +11:00
psychedelicious
434f195a96 feat(ui): add empty string placeholder to string fields 2025-03-24 13:24:13 +11:00
psychedelicious
6a4c2d692c chore(ui): typegen 2025-03-24 12:45:46 +11:00
psychedelicious
5127a07cf9 feat(nodes): clean up lora node names
I had named them wonkily and caused some user confusion.
2025-03-24 12:45:46 +11:00
psychedelicious
0b4c6f0ab4 fix(mm): flux model variant probing
In #7780 we added FLUX Fill support, and needed the probe to be able to distinguish between "normal" FLUX models and FLUX Fill models.

Logic was added to the probe to check a particular state dict key (input channels), which should be 384 for FLUX Fill and 64 for other FLUX models.

The new logic was stricter: instead of falling back on the "normal" variant, it raised when an unexpected value for input channels was detected.

This caused failures to probe for BNB-NF4 quantized FLUX Dev/Schnell, which apparently only have 1 input channel.

After checking a variety of FLUX models, I loosened the strictness of the variant probing logic to only special-case the new FLUX Fill model, and otherwise fall back to returning the "normal" variant. This better matches the old behaviour and fixes the import errors.

Closes #7822
2025-03-24 12:36:18 +11:00
Billy
d8450033ea Fix 2025-03-21 17:46:18 +11:00
Billy
3938736bd8 Ruff formatting 2025-03-21 17:35:12 +11:00
Billy
fb2c7b9566 Defaults 2025-03-21 17:35:04 +11:00
Billy
29449ec27d Implement new api for LLaVA 2025-03-21 17:17:56 +11:00
Billy
e38f778d28 Extend ModelOnDisk 2025-03-21 17:17:15 +11:00
Billy
f5e78436a8 Update regression test 2025-03-21 17:14:02 +11:00
Billy
6a15b5d9be Add stripped models for testing llava 2025-03-21 15:34:20 +11:00
psychedelicious
a629102c87 chore(ui): update whatsnew 2025-03-21 13:09:27 +11:00
psychedelicious
848ade8ab8 chore: prep for v5.9.0rc1 2025-03-21 13:09:27 +11:00
Hosted Weblate
2110feb01c translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2025-03-21 12:55:07 +11:00
Riku
f3e1821957 translationBot(ui): update translation (German)
Currently translated at 67.0% (1224 of 1826 strings)

Co-authored-by: Riku <riku.block@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
2025-03-21 12:55:07 +11:00
Linos
bbcf93089a translationBot(ui): update translation (Vietnamese)
Currently translated at 100.0% (1827 of 1827 strings)

translationBot(ui): update translation (Vietnamese)

Currently translated at 100.0% (1826 of 1826 strings)

translationBot(ui): update translation (Vietnamese)

Currently translated at 100.0% (1825 of 1825 strings)

Co-authored-by: Linos <linos.coding@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/vi/
Translation: InvokeAI/Web UI
2025-03-21 12:55:07 +11:00
Riccardo Giovanetti
66f41aa307 translationBot(ui): update translation (Italian)
Currently translated at 98.7% (1804 of 1827 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.7% (1803 of 1825 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2025-03-21 12:55:07 +11:00
psychedelicious
8a709766b3 feat(ui): better error for unknown fields in builder view mode 2025-03-21 12:51:12 +11:00
psychedelicious
efaa20a7a1 feat(ui): better labels for missing/unexpected fields 2025-03-21 12:51:12 +11:00
psychedelicious
3e4c808b23 refactor(ui): organise useInputFieldTemplate hooks again & add useInputFieldTemplateSafe 2025-03-21 12:51:12 +11:00
psychedelicious
00e3931af4 chore(ui): "useInputFieldLabel" -> "useInputFieldLabelSafe"
Also update docstrings
2025-03-21 12:51:12 +11:00
psychedelicious
08bea07f8b chore(ui): "useInputFieldDescription" -> "useInputFieldDescriptionSafe"
Also update docstrings
2025-03-21 12:51:12 +11:00
psychedelicious
166d2f0e39 chore(ui): "useInputFieldTemplate" -> "useInputFieldTemplateOrThrow" 2025-03-21 12:51:12 +11:00
psychedelicious
21f346717a docs(ui): add docstring to useInputFieldTemplate 2025-03-21 12:51:12 +11:00
psychedelicious
f966fb8b9c docs(ui): add docstring to useInputFieldDescription 2025-03-21 12:51:12 +11:00
psychedelicious
c2b20a5387 feat(ui): hide guidance when FLUX Fill model selected 2025-03-21 10:24:03 +11:00
psychedelicious
bed9089fe6 refactor(ui): just always set guidance to 30 when using FLUX Fill 2025-03-21 10:24:03 +11:00
psychedelicious
d34a4f765c feat(ui): better error for FLUX Fill + t2i/i2i incompatibility 2025-03-21 10:24:03 +11:00
psychedelicious
efe4708b8b feat(ui): better error message/warning for FLUX Fill w/ Control LoRA 2025-03-21 10:24:03 +11:00
psychedelicious
7cb1f61a9e feat(ui): bump FLUX guidance up to 30 if it's too low during graph building 2025-03-21 10:24:03 +11:00
psychedelicious
6e2ef34cba feat(ui): add warning for FLUX Fill + Control LoRA 2025-03-21 10:24:03 +11:00
psychedelicious
d208b99a47 feat(ui): pass the full model config throughout validation logic 2025-03-21 10:24:03 +11:00
psychedelicious
47eeafa5cb feat(ui): add selector to select the main model full config object 2025-03-21 10:24:03 +11:00
psychedelicious
0cb00fbe53 refactor(ui): use new compositing nodes for inpaint/outpaint graphs 2025-03-21 10:24:03 +11:00
psychedelicious
a7e8ed3bc2 feat(ui): add FLUX Fill graph builder util 2025-03-21 10:24:03 +11:00
psychedelicious
22eb25be48 refactor(ui): use more succinct syntax to opt out of RTKQ caching for model fetching utils 2025-03-21 10:24:03 +11:00
psychedelicious
a077f3fefc chore(ui): typegen 2025-03-21 10:24:03 +11:00
psychedelicious
c013a6e38d feat(nodes): deprecate canvas_v2_mask_and_crop 2025-03-21 10:24:03 +11:00
psychedelicious
6cfeb71bed feat(nodes): add expand_mask_with_fade to better handle canvas compositing needs
Previously we used erode/dilate and a Gaussian blur to expand and fade the edges of Canvas masks. The implementation had a number of problems:
- Erode/dilate kernel sizes were not calculated correctly, and extra iterations were run to compensate. The result was that the blur size, which should have been in pixels, was very inaccurate and unreliable.
- What we want is to add a "soft bleed" - like a drop shadow with no offset - starting from the edge of the mask, extending out by however many pixels. But Gaussian blur does not do this. The blurred area starts _inside_ the mask and extends outside it. So it kinda blurs inwards and outwards. We compensated for this by expanding the mask.
- Using a Gaussian blur can cause banding artifacts. Gaussian blur doesn't have a "size" or "radius" parameter in the sense that you think it should. It's a convolution matrix and there are _no zero values in the result_. This means that, far away from the mask, once compositing completes, we have some values that are very close to zero but not quite zero. These values are quantized by HTML Canvas, resulting in banding artifacts where you'd expect the blur to have faded to 0% alpha. At least, that is my understanding of why the banding artifacts occur.

The new node uses a better strategy to expand the mask and add the fade out effect:
- Calculate the distance from each white pixel to the nearest black pixel.
- Normalize this distance by dividing by the fade size in px, then clip the values to 0 - 1. The result represents the distance of each white pixel to its nearest black pixel as a percentage of the fade size. At this point, it is a linear distribution.
- Create a polynomial to describe the fade's intensity so that we can have a smooth transition from the masked region (black) to unmasked (white). There are some magic numbers here, determined experimentally.
- Evaluate the polynomial over the normalized distances, so we now have a matrix representing the fade intensity for every pixel
- Convert this matrix back to uint8 and apply it to the mask

This works soooo much better than the previous method. Not only does it fix the banding issues, but when we enable "output only generated regions", we get a much smaller image. Will add images to the PR to clarify.
2025-03-21 10:24:03 +11:00
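
A rough sketch of the strategy described in the commit above, assuming an OpenCV distance transform and a smoothstep curve as a stand-in for the experimentally determined polynomial used in the actual node:

```py
import cv2
import numpy as np

def expand_mask_with_fade(mask: np.ndarray, fade_size_px: int) -> np.ndarray:
    """Sketch only. `mask` is uint8: 0 = masked (black), 255 = unmasked (white)."""
    if fade_size_px <= 0:
        return mask
    # Distance (in px) from each white pixel to its nearest black pixel.
    dist = cv2.distanceTransform(mask, cv2.DIST_L2, 5)
    # Express that distance as a fraction of the fade size, clipped to 0-1.
    t = np.clip(dist / fade_size_px, 0.0, 1.0)
    # Stand-in fade curve (smoothstep); 0 at the mask edge, 1 once the fade is
    # complete, with no hard step that could cause banding.
    intensity = t * t * (3.0 - 2.0 * t)
    return (intensity * 255.0).astype(np.uint8)
```
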
psychedelicious
534f993023 feat(nodes): add apply_mask_to_image node
It simply applies the mask to an image.
2025-03-21 10:24:03 +11:00
psychedelicious
67f9b6420c fix(nodes): ensure alpha mask is opened as RGBA 2025-03-21 10:24:03 +11:00
psychedelicious
61bf065237 feat(nodes): rename "FLUX Fill" -> "FLUX Fill Conditioning" 2025-03-21 10:24:03 +11:00
psychedelicious
e78cf889ee fix(ui): clip shift-draw strokes to bbox when clip to bbox enabled
Closes #7809
2025-03-21 08:14:20 +11:00
psychedelicious
5d13f0ba15 tidy(ui): remove recommended flag from workflow (believe was for testing purposes) 2025-03-20 08:50:01 -04:00
psychedelicious
633b9afa46 fix(ui): recommended star stretches tag list layout 2025-03-20 08:50:01 -04:00
psychedelicious
f1889b259d tidy(ui): split browse workflows button into own component 2025-03-20 08:50:01 -04:00
psychedelicious
ed21d0b57e tidy(ui): remove noop useEffect 2025-03-20 08:50:01 -04:00
Mary Hipp
df90da28e1 tsc fix 2025-03-20 15:43:57 +11:00
Mary Hipp
702054aa62 make sure browse is selected 2025-03-20 15:43:57 +11:00
Mary Hipp
636ec1de6e add viewAllWorkflowsRecommended to studio init action to show library with only recommended workflows 2025-03-20 15:43:57 +11:00
Mary Hipp
063d07fd41 switch to using recommended with star instead of auto-selecting 2025-03-20 15:43:57 +11:00
Mary Hipp
c78eac624e update workflow tag/categories so that we can pass in 1+ selected tags to start with 2025-03-20 15:43:57 +11:00
Mary Hipp
05de3b7a84 workflow library UI updates: scrollbar to make it obvious it's overflowing, move deselect all tags to be next to browse button 2025-03-20 15:43:57 +11:00
Ryan Dick
9cc2232b6f Bump FluxDenoise invocation version and typegen. 2025-03-19 14:45:18 +11:00
Ryan Dick
9fdc06b447 Add FLUX Fill input validation and error/warning reporting. 2025-03-19 14:45:18 +11:00
Ryan Dick
5ea3ec5cc8 Get FLUX Fill working. Note: To use FLUX Fill, set guidance to ~30. 2025-03-19 14:45:18 +11:00
Ryan Dick
f13a07ba6a WIP on updating FluxDenoise to support FLUX Fill. 2025-03-19 14:45:18 +11:00
Ryan Dick
a913f0163d WIP - Add FluxFillInvocation 2025-03-19 14:45:18 +11:00
Ryan Dick
f7cfbd1323 Add FLUX Fill starter model. 2025-03-19 14:45:18 +11:00
Ryan Dick
2806b60701 Add logic to probe FLUX variant (NORMAL vs INPAINT). 2025-03-19 14:45:18 +11:00
psychedelicious
d8c3af624b Use git-lfs for larger assets (#7804)
## Summary

- Integrate Git LFS to our automated Python tests in CI
- Add stripped model files with git-lfs
- `README.md` instructions to install and configure git-lfs
- Unrelated change (skip hashing to make unit test run faster)

## Checklist

- [ ] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
- [ ] _Updated `What's New` copy (if doing a release after this PR)_
2025-03-19 09:53:26 +11:00
psychedelicious
feed44b68d Stripped models (#7797)
## Summary

**Problem**
We want to have automated tests for model classification/probing, but
model files are too large to include in the source.

**Proposed Solution**
Classification/probing only requires metadata (key names, tensor
shapes), not weights.
This PR introduces "stripped" models - lightweight versions that retain
only essential metadata.

- Added script to strip models
- Added stripped models to automated tests


**Model size before and after "stripping":**
```
LLaVA Onevision Qwen2 0.5b-ov-hf before: 1.8 GB, after: 11.6 MB
text_encoder before: 246.1 MB, after: 35.6 kB
llava-onevision-qwen2-7b-si-hf before: 16.1 GB, after: 11.7 MB
RealESRGAN_x2plus.pth before: 67.1 MB, after: 143.0 kB
IP Adapter SD1 before: 2.5 GB, after: 94.9 kB
Hard Edge Detection (canny) before: 722.6 MB, after: 63.6 kB
Lineart before: 722.6 MB, after: 63.6 kB
Segmentation Map before: 722.6 MB, after: 63.6 kB
EasyNegative before: 24.7 kB, after: 151 Bytes
Face Reference (IP Adapter Plus Face) before: 98.2 MB, after: 13.7 kB
Standard Reference (IP Adapter) before: 44.6 MB, after: 6.0 kB
shinkai_makoto_offset before: 151.1 MB, after: 160.0 kB
thickline_fp16 before: 151.1 MB, after: 160.0 kB
Alien Style before: 228.5 MB, after: 582.6 kB
Noodles Style before: 228.5 MB, after: 582.6 kB
Juggernaut XL v9 before: 6.9 GB, after: 3.7 MB
dreamshaper-8 before: 168.9 MB, after: 1.6 MB
```

## Checklist

- [ ] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
- [ ] _Updated `What's New` copy (if doing a release after this PR)_
2025-03-19 08:13:10 +11:00
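
The PR above adds a dedicated strip script; as an illustration of the underlying idea only (keep key names, shapes, and dtypes, drop the weights), a hedged sketch - not the actual script or the repo's stripped-model format - might collect the metadata like this:

```py
import json
import torch

def dump_state_dict_metadata(model_path: str, out_path: str) -> None:
    # Record only what classification/probing needs: key names, shapes, dtypes.
    state_dict = torch.load(model_path, map_location="cpu", weights_only=True)
    meta = {
        key: {"shape": list(t.shape), "dtype": str(t.dtype)}
        for key, t in state_dict.items()
        if isinstance(t, torch.Tensor)
    }
    with open(out_path, "w") as f:
        json.dump(meta, f, indent=2)
```
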
Billy
247f3b5d67 Merge branch 'stripped-models' into git-lfs 2025-03-19 07:53:27 +11:00
Billy
8e14f9d971 Merge branch 'main' into stripped-models 2025-03-19 07:52:56 +11:00
Billy
bdb44ee48d Merge branch 'git-lfs' of github.com:invoke-ai/InvokeAI into git-lfs 2025-03-19 07:30:34 +11:00
Billy
b57f5330c5 Pin action to commit 2025-03-19 07:28:28 +11:00
jazzhaiku
ade3c015b4 Update docs/contributing/dev-environment.md
Co-authored-by: Eugene Brodsky <ebr@users.noreply.github.com>
2025-03-19 07:23:23 +11:00
psychedelicious
7fe4d4c21a feat(app): better errors when scanning models with picklescan
Differentiate between malware detection and scan error.
2025-03-19 07:20:25 +11:00
psychedelicious
133a7fde55 Model classification api (#7742)
## Summary
The _goal_ of this PR is to make it easier to add a new config type.
The _scope_ of this PR is to integrate the API and does not include
adding new configs (outside tests) or porting existing ones.


One of the glaring issues of the existing *legacy probe* is that the
logic for each type is spread across multiple classes and intertwined
with the other configs. This means that adding a new config type (or
modifying an existing one) is complex and error prone.

This PR attempts to remedy this by providing a new API for adding
configs that:

- Is backwards compatible with the existing probe.
- Encapsulates fields and logic in a single class, keeping things
self-contained and easy to modify safely.

Below is a minimal toy example illustrating the proposed new structure:

```python
class MinimalConfigExample(ModelConfigBase):
    type: ModelType = ModelType.Main
    format: ModelFormat = ModelFormat.Checkpoint
    fun_quote: str

    @classmethod
    def matches(cls, mod: ModelOnDisk) -> bool:
        return mod.path.suffix == ".json"

    @classmethod
    def parse(cls, mod: ModelOnDisk) -> dict[str, Any]:
        with open(mod.path, "r") as f:
            contents = json.load(f)

        return {
            "fun_quote": contents["quote"],
            "base": BaseModelType.Any,
        }
```

To create a new config type, one needs to inherit from `ModelConfigBase`
and implement its interface.

The code falls back to the legacy model probe for existing models using
the old API.
This allows us to incrementally port the configs one by one.

## Checklist

- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [x] _Tests added / updated (if applicable)_
- [x] _Documentation added / updated (if applicable)_
- [ ] _Updated `What's New` copy (if doing a release after this PR)_
2025-03-18 15:25:56 +11:00
Billy
6375214878 Merge branch 'stripped-models' into git-lfs 2025-03-18 14:57:58 +11:00
Billy
b9972be7f1 Merge branch 'model-classification-api' into stripped-models 2025-03-18 14:57:23 +11:00
Billy
e61c5a3f26 Merge 2025-03-18 14:55:11 +11:00
Billy
8c633786f6 Remove accidently included files 2025-03-18 14:16:51 +11:00
Billy
8703eea49b LFS cache 2025-03-18 14:08:21 +11:00
Billy
c8888be4c3 Formatting 2025-03-18 13:10:07 +11:00
Billy
11963a65a4 CI/CD 2025-03-18 12:56:28 +11:00
Billy
ab6422fdf7 Add to README.md 2025-03-18 12:37:32 +11:00
psychedelicious
1f8632029e fix(nodes): add validator to vllm node images field to handle single image field inputs 2025-03-18 11:53:06 +11:00
Ryan Dick
88a762474d typegen 2025-03-18 11:53:06 +11:00
Ryan Dick
e6dd721e33 Add max_length=3 to the LLaVA OneVision image input field. 2025-03-18 11:53:06 +11:00
Billy
2a09604baf Formatting 2025-03-18 11:53:06 +11:00
Billy
f94f00ede0 Ruff formatting 2025-03-18 11:53:06 +11:00
Billy
37af281299 WIP - model selection for LLaVA 2025-03-18 11:53:06 +11:00
Billy
fc82775d7a WIP - model selection for LLaVA 2025-03-18 11:53:06 +11:00
Billy
9ed46f60b7 Add LLaVA OneVision to Config dropdown in UI 2025-03-18 11:53:06 +11:00
Ryan Dick
9a389e6b93 Add a LLaVA OneVision starter model. 2025-03-18 11:53:06 +11:00
Ryan Dick
2ef1ecf381 Fix copy-paste errors. 2025-03-18 11:53:06 +11:00
Ryan Dick
41de112932 Make LLaVA Onevision node work with 0 images, and other minor improvements. 2025-03-18 11:53:06 +11:00
Ryan Dick
e9714fe476 Add LLaVA Onevision model loading and inference support. 2025-03-18 11:53:06 +11:00
Ryan Dick
3f29293e39 Add LlavaOnevision model type and probing logic. 2025-03-18 11:53:06 +11:00
Billy
db1aa38e98 Warning 2025-03-18 09:55:13 +11:00
Billy
12717d4a4d Stripped model data 2025-03-18 09:51:10 +11:00
Billy
1953f3cbcd Skip hashing to make test quicker 2025-03-18 09:50:18 +11:00
Billy
3469fc9843 Ruff 2025-03-18 09:22:16 +11:00
Billy
7cdd4187a9 Update classify script 2025-03-18 09:21:38 +11:00
Billy
ad66c101d2 Remove stripped model files 2025-03-18 09:10:37 +11:00
psychedelicious
28d3356710 chore: prep for v5.8.1 2025-03-18 09:06:47 +11:00
psychedelicious
81e70fb9d2 tidy(app): errant character 2025-03-18 08:00:51 +11:00
psychedelicious
971c425734 fix(app): incorrect values inserted when retrying queue item
In #7688 we optimized queuing preparation logic. This inadvertently broke retrying queue items.

Previously, a `NamedTuple` was used to store the values to insert in the DB when enqueuing. This handy class provides an API similar to a dataclass, where you can instantiate it with kwargs in any order. The resultant tuple re-orders the kwargs to match the order in the class definition.

For example, consider this `NamedTuple`:
```py
class SessionQueueValueToInsert(NamedTuple):
    foo: str
    bar: str
```

When instantiating it, no matter the order of the kwargs, if you make a normal tuple out of it, the tuple values are in the same order as in the class definition:

```py
t1 = SessionQueueValueToInsert(foo="foo", bar="bar")
print(tuple(t1)) # -> ('foo', 'bar')

t2 = SessionQueueValueToInsert(bar="bar", foo="foo")
print(tuple(t2)) # -> ('foo', 'bar')
```

So, in the old code, when we used the `NamedTuple`, it implicitly normalized the order of the values we insert into the DB.

In the retry logic, the values of the tuple were not ordered correctly, but the use of `NamedTuple` had secretly fixed the order for us.

In the linked PR, `NamedTuple` was dropped for a normal tuple, after profiling showed `NamedTuple` to be meaningfully slower than a normal tuple.

The implicit order normalization behaviour wasn't understood, and the order wasn't fixed when changing the retry logic to use a normal tuple instead of `NamedTuple`. This results in a bug where queue items are created incorrectly in the DB; for example, the `destination` was stored in the `field_values` column.
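
For illustration, a minimal, self-contained sketch of the pitfall, using the placeholder `foo`/`bar` fields from above rather than the real queue columns:

```py
from typing import NamedTuple

class SessionQueueValueToInsert(NamedTuple):
    foo: str
    bar: str

# NamedTuple normalizes kwargs back into declaration order...
named = SessionQueueValueToInsert(bar="bar", foo="foo")
print(tuple(named))  # -> ('foo', 'bar')

# ...but a plain tuple is purely positional, so building it in the "wrong"
# order silently puts each value into the wrong DB column.
plain = ("bar", "foo")
print(plain)  # -> ('bar', 'foo')
```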

When such an incorrectly-created queue item is dequeued, it fails pydantic validation and causes what appears to be an endless loop of errors.

The only user-facing solution is to add this line to `invokeai.yaml` and restart the app:
```yaml
clear_queue_on_startup: true
```

On next startup, the queue is forcibly cleared before the error loop is triggered. The user should then remove this line so their queue is persisted across app launches as usual.

The solution is simple - fix the ordering of the tuple. I also added a type annotation and comment to the tuple type alias definition.

Note: The endless error loop, as a general problem, will take some thinking to fix. The queue service methods that cancel and fail a queue item still retrieve and parse it, and the list-queue-items methods parse the queue items too. It's a bit of a catch-22; maybe the solution is to simply delete totally borked queue items and log an error.
2025-03-18 08:00:51 +11:00
psychedelicious
b09008c530 feat(ui): add cancel and clear all as toggleable app feature 2025-03-18 06:48:10 +11:00
Billy
f9f99f873d More models 2025-03-17 04:18:44 +00:00
Billy
7f93f1b600 Dependencies 2025-03-17 12:57:13 +11:00
Billy
b1d336ce8a Ruff 2025-03-17 12:19:27 +11:00
Billy
40c7be8f5d Warning about missing test cases 2025-03-17 12:19:15 +11:00
Billy
24218b34bf Make ruff happy 2025-03-17 12:04:26 +11:00
Billy
d970c6d6d5 Use override fixture 2025-03-17 11:58:13 +11:00
Billy
e5308be0bb Use override fixture 2025-03-17 11:31:20 +11:00
Billy
7d5687e9ff Disable device meta for spandrel 2025-03-17 11:30:05 +11:00
Riccardo Giovanetti
7adac4581a translationBot(ui): update translation (Italian)
Currently translated at 98.7% (1800 of 1822 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.7% (1798 of 1820 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.7% (1796 of 1818 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2025-03-17 10:49:22 +11:00
Hosted Weblate
962db86cac translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2025-03-17 10:49:22 +11:00
psychedelicious
d65ec0e250 feat(ui): configurable form field constraints (WIP3) 2025-03-17 10:47:01 +11:00
psychedelicious
7fdde5e84a tests(ui): fix constrainNumber 2025-03-17 10:47:01 +11:00
psychedelicious
895956bcfe chore(ui): lint 2025-03-17 10:47:01 +11:00
psychedelicious
f27d26cfa2 feat(ui): configurable form field constraints (WIP2) 2025-03-17 10:47:01 +11:00
psychedelicious
965bcba6c2 feat(ui): configurable form field constraints (WIP) 2025-03-17 10:47:01 +11:00
psychedelicious
c9f2460ff2 fix(ui): generator widget should stretch to fill when added to builder 2025-03-17 10:41:59 +11:00
psychedelicious
5abbbf4b5b feat(ui): allow pasting images on workflows tab when workflows not focused 2025-03-17 10:37:27 +11:00
psychedelicious
e66688edbf feat(ui): only paste into canvas when canvas is focused 2025-03-17 10:37:27 +11:00
joshistoast
a519483f95 refactor(ui): ♻️ memoize merged styles, simplify data attribute conditional 2025-03-17 10:34:49 +11:00
joshistoast
75c91604bb fix: 🐛 export the region wrapper
am silly
2025-03-17 10:34:49 +11:00
joshistoast
53bdaba7b6 style: 🚨 linting 2025-03-17 10:34:49 +11:00
joshistoast
f3f405ca77 refactor(ui): ♻️ remove forward ref usage 2025-03-17 10:34:49 +11:00
joshistoast
dda69950a7 refactor(ui): ♻️ apply memoization, system style objects, and data attribute to region highlight wrapper 2025-03-17 10:34:49 +11:00
joshistoast
b2198b9fa7 feat: 🔧 region highlighting disabled by default
some users may not like this
2025-03-17 10:34:49 +11:00
joshistoast
02b91e8e7b feat: highlight focused regions
Adds a region wrapper with a highlight effect when that region is focused; this behavior can be toggled as a setting.
2025-03-17 10:34:49 +11:00
psychedelicious
09bf7c35eb chore(ui): typegen 2025-03-17 10:32:19 +11:00
psychedelicious
deb9a65b3d chore(ui): update whats new 2025-03-17 10:32:19 +11:00
psychedelicious
5be9a7227c chore: remove all explicit image references in default workflows 2025-03-17 10:32:19 +11:00
psychedelicious
bb9f886bd4 docs: update default workflows dev docs 2025-03-17 10:32:19 +11:00
psychedelicious
46520946f8 chore: remove all explicit model references in default workflows 2025-03-17 10:32:19 +11:00
psychedelicious
830880a6fc chore(nodes): update titles of all model-specific nodes to reference their models
Also bump versions on all of them.
2025-03-17 10:32:19 +11:00
psychedelicious
63b94a8ff3 feat(ui): add sd3.5 default workflows tag 2025-03-17 10:32:19 +11:00
psychedelicious
f12924a1e1 chore: update default workflow tags & names 2025-03-17 10:32:19 +11:00
psychedelicious
f8e51c86f5 chore: bump version to v5.8.0 2025-03-17 10:32:19 +11:00
Billy
654e992630 Accept extra args 2025-03-17 10:25:16 +11:00
Billy
21f247f499 Stripped models script 2025-03-17 09:18:58 +11:00
Billy
8bcd9fe4b7 Extend ModelOnDisk 2025-03-17 09:18:51 +11:00
psychedelicious
c84a646735 ci: pin tj-actions/changed-files
Closes #7793
2025-03-17 08:36:17 +11:00
psychedelicious
b52f8121af fix(ui): duplicate edges on reconnect
Closes #7127
2025-03-15 10:12:50 +11:00
psychedelicious
05bed3fddd fix(ui): do not mark workflow as touched when setting form field initial values 2025-03-15 10:10:21 +11:00
psychedelicious
87ea20192f chore(ui): knip 2025-03-14 20:54:58 +11:00
psychedelicious
2f9c95c462 fix(ui): return early in error-selecting hooks
Prevent an error when a node is deleted and the hook is being called
2025-03-14 20:54:58 +11:00
psychedelicious
47cadbb48e feat(ui): show field errors in tooltips 2025-03-14 20:54:58 +11:00
psychedelicious
23518b9830 feat(ui): useDebouncedAppSelector
Hook that replicates `useSelector`, but debounces calling the selector.
2025-03-14 20:54:58 +11:00
psychedelicious
94dcf391a6 tweak(ui): styling for image collection fields 2025-03-14 20:50:35 +11:00
Billy
637b93d2d8 Ruff 2025-03-14 10:18:25 +11:00
Billy
565b160060 More tests 2025-03-14 10:17:43 +11:00
psychedelicious
e7a60c01ed fix(ui): prevent vertical scrolling on row containers 2025-03-14 07:15:58 +11:00
Mary Hipp
4b54ccc29c getting started copy for workflows 2025-03-13 12:25:14 -04:00
Mary Hipp
c4183ec98c add with_hash to prevent rerenders on default 2025-03-13 10:29:22 -04:00
Mary Hipp
5a9cbe35e0 typegen fix 2025-03-13 10:29:22 -04:00
Mary Hipp
df18fe0298 make sure that recent view always sorts by opened_at even if not available as sort option in UI 2025-03-13 10:29:22 -04:00
Mary Hipp
e5591d145f allow workflow sort options to be passed in 2025-03-13 08:27:51 -04:00
psychedelicious
371c187fc3 chore: bump version to v5.8.0rc1 2025-03-13 23:00:01 +11:00
Billy
bdd0b90769 Merge branch 'model-classification-api' of github.com:invoke-ai/InvokeAI into model-classification-api 2025-03-13 13:37:15 +11:00
Billy
4377158503 Variant 2025-03-13 13:32:57 +11:00
Billy
c8c27079ed Codegen 2025-03-13 13:12:12 +11:00
Billy
d8b9a8d0dd Merge branch 'main' into model-classification-api 2025-03-13 13:03:51 +11:00
Billy
39a4608d15 Fix annotations compatability 3.11 2025-03-13 13:01:19 +11:00
jazzhaiku
cd2d5431db Merge branch 'main' into model-classification-api 2025-03-13 11:21:18 +11:00
Billy
c04cdd9779 Typegen 2025-03-13 11:00:26 +11:00
Billy
b86ac5e049 Explicit union 2025-03-13 10:28:07 +11:00
psychedelicious
e982c95687 fix(ui): respect line breaks in builder text and heading elements 2025-03-13 09:39:41 +11:00
Billy
665236bb79 Type hints 2025-03-13 09:21:58 +11:00
psychedelicious
0eeb0dd67b feat(ui): use invoke logo for thumbnail fallback for default workflows 2025-03-13 08:45:12 +11:00
psychedelicious
28c74cbe38 revert(app): remove test image from default workflow thumbnails 2025-03-13 08:45:12 +11:00
psychedelicious
7414f68acc fix(ui): save as marks workflow as not touched 2025-03-13 08:45:12 +11:00
psychedelicious
a984462b80 tweak(ui): workflow library card layout to fit 2 lines of title and 3 lines of desc 2025-03-13 08:45:12 +11:00
psychedelicious
c6c2567203 tweak(ui): workflow description shows 1 line w/ tooltip for full content 2025-03-13 08:45:12 +11:00
psychedelicious
f05c8b909f fix(ui): mark workflow touched on form builder state changes 2025-03-13 07:10:59 +11:00
psychedelicious
73330a1308 chore(ui): lint 2025-03-13 07:10:59 +11:00
psychedelicious
6f568d48ed fix(ui): studio init action workflow loading 2025-03-13 07:10:59 +11:00
psychedelicious
81a97f3796 fix(ui): load workflow from object 2025-03-13 07:10:59 +11:00
psychedelicious
3f9535d2f9 fix(ui): load workflow from graph 2025-03-13 07:10:59 +11:00
psychedelicious
83bfbdcad4 feat(ui): more workflow loading standardization
There is now a single entrypoint for loading a workflow - `useLoadWorkflowWithDialog`.

The hook handles loading workflows from various sources and returns a function that performs the load. If there are unsaved changes, the user is prompted to confirm before the workflow is loaded; otherwise it is loaded immediately. On success, error, or completion, the corresponding callback is called.

WHEW
2025-03-13 07:10:59 +11:00
psychedelicious
729428084c feat(ui): prompt when loading workflow from file if unsaved changes 2025-03-13 07:10:59 +11:00
psychedelicious
523a932ecc feat(ui): accept button on workflow load dialog is "Load" 2025-03-13 07:10:59 +11:00
psychedelicious
21be7d7157 feat(ui): allow load workflow confirm dialog to load workflows from object instead of only id 2025-03-13 07:10:59 +11:00
psychedelicious
a29fb18c0b feat(ui): standardize and clean up workflow loading hooks and logic 2025-03-13 07:10:59 +11:00
psychedelicious
aed446f013 fix(ui): make the workflow load from file menu item work the same as the button in library
Upload and save as instead of just upload as draft.
2025-03-13 07:10:59 +11:00
Mary Hipp
e81c9b0d6e add default for opened_at 2025-03-12 14:35:34 -04:00
Billy
f45400a275 Remove hash algo 2025-03-12 18:39:29 +11:00
psychedelicious
89f457c486 fix(ui): mark workflow as opened when creating a new workflow 2025-03-12 12:11:00 +11:00
psychedelicious
30ed09a36e fix(ui): default categories for oss 2025-03-12 12:11:00 +11:00
psychedelicious
3334652acc feat(db): drop the opened_at column instead of marking deprecated 2025-03-12 12:11:00 +11:00
psychedelicious
e83536f396 chore(ui): lint 2025-03-12 12:11:00 +11:00
psychedelicious
97593f95f6 feat(ui): on first load, if the selected library view has no workflows, switch to the first view that has workflows 2025-03-12 12:11:00 +11:00
psychedelicious
7f14cee17e chore(ui): typegen 2025-03-12 12:11:00 +11:00
psychedelicious
0a836d6fc1 feat(app): add method and route to get workflow library counts by category 2025-03-12 12:11:00 +11:00
psychedelicious
54e781d5bb tidy(app): remove unused method in workflow records service 2025-03-12 12:11:00 +11:00
psychedelicious
aa71d0c817 tweak(ui): 'is_recent' -> 'has_been_opened' 2025-03-12 12:11:00 +11:00
psychedelicious
07313e429d chore(ui): typegen 2025-03-12 12:11:00 +11:00
psychedelicious
bad5023238 tweak(app): 'is_recent' -> 'has_been_opened' 2025-03-12 12:11:00 +11:00
psychedelicious
73a0d2c06c fix(ui): memo WorkflowLibraryModal 2025-03-12 12:11:00 +11:00
psychedelicious
918e9c8ccc feat(app): drop and recreate index on opened_at
Not sure if this is strictly required but doing it anyways.
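
For context, a minimal sketch of what dropping and recreating such an index could look like; the table and index names are illustrative assumptions, not the actual migration code:

```py
import sqlite3

def rebuild_opened_at_index(conn: sqlite3.Connection) -> None:
    # Drop the old index if present, then recreate it against opened_at.
    # The table and index names are placeholders, not the real schema.
    conn.execute("DROP INDEX IF EXISTS idx_workflows_opened_at;")
    conn.execute("CREATE INDEX idx_workflows_opened_at ON workflows(opened_at);")
    conn.commit()
```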
2025-03-12 12:11:00 +11:00
psychedelicious
1e388e9ca4 tweak(ui): align new and upload workflow buttons 2025-03-12 12:11:00 +11:00
psychedelicious
5b84d45932 perf(ui): memoize workflow library components 2025-03-12 12:11:00 +11:00
psychedelicious
dc3f1184b2 fix(ui): other stuff borked by rebase 2025-03-12 12:11:00 +11:00
psychedelicious
87438bcad7 fix(ui): rebase broke things 2025-03-12 12:11:00 +11:00
Mary Hipp
afd894fd04 update recent workflows UI 2025-03-12 12:11:00 +11:00
Mary Hipp
df305c0b99 allow opened_at to be nullable for workflows that the user has never opened 2025-03-12 12:11:00 +11:00
psychedelicious
deecb7f3c3 feat(ui): "Reset Filters" -> "Deselect All" 2025-03-12 08:00:18 +11:00
psychedelicious
dd5f353465 revert(ui): use reverted API for workflow library 2025-03-12 08:00:18 +11:00
psychedelicious
a8759ea0a6 chore(ui): typegen 2025-03-12 08:00:18 +11:00
psychedelicious
3ff529c718 revert(app): use OR logic for workflow library filtering 2025-03-12 08:00:18 +11:00
psychedelicious
3b0fecafb0 fix(ui): URL mismatch for tag_counts_with_filter 2025-03-12 08:00:18 +11:00
psychedelicious
099011000f chore(ui): lint 2025-03-12 08:00:18 +11:00
psychedelicious
155daa3137 feat(ui): hide filters with no workflows 2025-03-12 08:00:18 +11:00
psychedelicious
c493e223cf feat(ui): "Reset Tags" -> "Reset Filters" 2025-03-12 08:00:18 +11:00
psychedelicious
124ca23f8b feat(ui): use new tag filtering for workflow library 2025-03-12 08:00:18 +11:00
psychedelicious
a8023cbcb6 chore(ui): typegen 2025-03-12 08:00:18 +11:00
psychedelicious
b733d3897e feat(app): revised workflow library filtering by tag
- Replace `get_counts` method with `get_tag_counts_with_filter` which gets the counts for a list of tags, filtering by a list of selected tags
- Update `get_many` logic to apply tag filtering with AND logic, matching the new `get_tag_counts_with_filter` method (a sketch of this AND logic follows below)
- Update workflow library router
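
A minimal sketch of the AND-logic tag counting described above, operating on plain Python objects rather than the actual SQLite-backed service (the `Workflow` shape and the in-memory approach are assumptions for illustration):

```py
from dataclasses import dataclass

@dataclass
class Workflow:
    name: str
    tags: set[str]

def get_tag_counts_with_filter(
    workflows: list[Workflow],
    tags: list[str],
    selected_tags: list[str],
) -> dict[str, int]:
    # AND logic: a workflow only matches if it has *every* selected tag.
    matching = [w for w in workflows if set(selected_tags).issubset(w.tags)]
    # For each tag of interest, count how many matching workflows carry it.
    return {tag: sum(1 for w in matching if tag in w.tags) for tag in tags}

workflows = [
    Workflow("a", {"sdxl", "upscale"}),
    Workflow("b", {"sdxl"}),
    Workflow("c", {"flux", "upscale"}),
]
print(get_tag_counts_with_filter(workflows, tags=["sdxl", "upscale"], selected_tags=["sdxl"]))
# -> {'sdxl': 2, 'upscale': 1}
```

With AND semantics, adding a selected tag can only narrow the result set, which keeps the counts consistent with the filtered list the user actually sees.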
2025-03-12 08:00:18 +11:00
psychedelicious
ef95b37ace fix(ui): workflow library infinite query providesTags 2025-03-12 08:00:18 +11:00
psychedelicious
4feff5a185 chore(ui): bump @reduxjs/toolkit from 1.6.0 to 1.6.1
This brings in some fixes for the new infinite query support.
2025-03-12 08:00:18 +11:00
psychedelicious
6c8dc32d5c docs(ui): add comments to workflow library cache invalidation 2025-03-12 08:00:18 +11:00
psychedelicious
e5da808b2f fix(ui): updating workflow content should not invalidate the infinite query cache 2025-03-12 08:00:18 +11:00
psychedelicious
7d3434da62 fix(ui): updating workflow opened at invalidates infinite query cache 2025-03-12 08:00:18 +11:00
psychedelicious
4cc70d9f16 feat(ui): add cache tags for workflow library's infinite query 2025-03-12 08:00:18 +11:00
psychedelicious
7988bc1a59 chore(ui): remove unused WorkflowsRecent RTKQ tag
This didn't actually do anything. Will be implementing the actual functionality that you'd _think_ this tag would do in a future change.
2025-03-12 08:00:18 +11:00
psychedelicious
1756d885f6 refactor(ui): split workflow library state into separate slice
Has no business being in the workflow state slice.
2025-03-12 08:00:18 +11:00
Billy
be53b89203 Remove redundant hash_algo field 2025-03-11 09:28:57 +11:00
Billy
a215eeaabf Update schema 2025-03-11 09:22:29 +11:00
Billy
d86b392bfd Remove redundant hash_algo field 2025-03-11 09:16:59 +11:00
Billy
3e9e45b177 Update comments 2025-03-11 09:04:19 +11:00
Billy
907d960745 PR suggestions 2025-03-11 08:37:43 +11:00
Billy
bfdace6437 New API for model classification 2025-03-11 08:34:34 +11:00
503 changed files with 14167 additions and 6657 deletions

View File

@@ -1,9 +1,11 @@
*
!invokeai
!pyproject.toml
!uv.lock
!docker/docker-entrypoint.sh
!LICENSE
**/dist
**/node_modules
**/__pycache__
**/*.egg-info
**/*.egg-info

View File

@@ -1,2 +1,5 @@
b3dccfaeb636599c02effc377cdd8a87d658256c
218b6d0546b990fc449c876fb99f44b50c4daa35
182580ff6970caed400be178c5b888514b75d7f2
8e9d5c1187b0d36da80571ce4c8ba9b3a37b6c46
99aac5870e1092b182e6c5f21abcaab6936a4ad1

3
.gitattributes vendored
View File

@@ -2,4 +2,5 @@
# Only affects text files and ignores other file types.
# For more info see: https://www.aleksandrhovhannisyan.com/blog/crlf-vs-lf-normalizing-line-endings-in-git/
* text=auto
docker/** text eol=lf
docker/** text eol=lf
tests/test_model_probe/stripped_models/** filter=lfs diff=lfs merge=lfs -text

8
.github/CODEOWNERS vendored
View File

@@ -2,11 +2,11 @@
/.github/workflows/ @lstein @blessedcoolant @hipsterusername @ebr @jazzhaiku
# documentation
/docs/ @lstein @blessedcoolant @hipsterusername @Millu
/mkdocs.yml @lstein @blessedcoolant @hipsterusername @Millu
/docs/ @lstein @blessedcoolant @hipsterusername @psychedelicious
/mkdocs.yml @lstein @blessedcoolant @hipsterusername @psychedelicious
# nodes
/invokeai/app/ @Kyle0654 @blessedcoolant @psychedelicious @brandonrising @hipsterusername @jazzhaiku
/invokeai/app/ @blessedcoolant @psychedelicious @brandonrising @hipsterusername @jazzhaiku
# installation and configuration
/pyproject.toml @lstein @blessedcoolant @hipsterusername
@@ -22,7 +22,7 @@
/invokeai/backend @blessedcoolant @psychedelicious @lstein @maryhipp @hipsterusername
# generation, model management, postprocessing
/invokeai/backend @damian0815 @lstein @blessedcoolant @gregghelt2 @StAlKeR7779 @brandonrising @ryanjdick @hipsterusername @jazzhaiku
/invokeai/backend @lstein @blessedcoolant @brandonrising @hipsterusername @jazzhaiku
# front ends
/invokeai/frontend/CLI @lstein @hipsterusername

View File

@@ -97,6 +97,8 @@ jobs:
context: .
file: docker/Dockerfile
platforms: ${{ env.PLATFORMS }}
build-args: |
GPU_DRIVER=${{ matrix.gpu-driver }}
push: ${{ github.ref == 'refs/heads/main' || github.ref_type == 'tag' || github.event.inputs.push-to-registry }}
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}

View File

@@ -17,7 +17,7 @@ jobs:
- name: setup python
uses: actions/setup-python@v5
with:
python-version: '3.10'
python-version: '3.12'
cache: pip
cache-dependency-path: pyproject.toml

View File

@@ -44,7 +44,12 @@ jobs:
- name: check for changed frontend files
if: ${{ inputs.always_run != true }}
id: changed-files
uses: tj-actions/changed-files@v42
# Pinned to the _hash_ for v45.0.9 to prevent supply-chain attacks.
# See:
# - CVE-2025-30066
# - https://www.stepsecurity.io/blog/harden-runner-detection-tj-actions-changed-files-action-is-compromised
# - https://github.com/tj-actions/changed-files/issues/2463
uses: tj-actions/changed-files@a284dc1814e3fd07f2e34267fc8f81227ed29fb8
with:
files_yaml: |
frontend:

View File

@@ -44,7 +44,12 @@ jobs:
- name: check for changed frontend files
if: ${{ inputs.always_run != true }}
id: changed-files
uses: tj-actions/changed-files@v42
# Pinned to the _hash_ for v45.0.9 to prevent supply-chain attacks.
# See:
# - CVE-2025-30066
# - https://www.stepsecurity.io/blog/harden-runner-detection-tj-actions-changed-files-action-is-compromised
# - https://github.com/tj-actions/changed-files/issues/2463
uses: tj-actions/changed-files@a284dc1814e3fd07f2e34267fc8f81227ed29fb8
with:
files_yaml: |
frontend:

View File

@@ -34,6 +34,9 @@ on:
jobs:
python-checks:
env:
# uv requires a venv by default - but for this, we can simply use the system python
UV_SYSTEM_PYTHON: 1
runs-on: ubuntu-latest
timeout-minutes: 5 # expected run time: <1 min
steps:
@@ -43,7 +46,12 @@ jobs:
- name: check for changed python files
if: ${{ inputs.always_run != true }}
id: changed-files
uses: tj-actions/changed-files@v42
# Pinned to the _hash_ for v45.0.9 to prevent supply-chain attacks.
# See:
# - CVE-2025-30066
# - https://www.stepsecurity.io/blog/harden-runner-detection-tj-actions-changed-files-action-is-compromised
# - https://github.com/tj-actions/changed-files/issues/2463
uses: tj-actions/changed-files@a284dc1814e3fd07f2e34267fc8f81227ed29fb8
with:
files_yaml: |
python:
@@ -52,25 +60,19 @@ jobs:
- '!invokeai/frontend/web/**'
- 'tests/**'
- name: setup python
- name: setup uv
if: ${{ steps.changed-files.outputs.python_any_changed == 'true' || inputs.always_run == true }}
uses: actions/setup-python@v5
uses: astral-sh/setup-uv@v5
with:
python-version: '3.10'
cache: pip
cache-dependency-path: pyproject.toml
- name: install ruff
if: ${{ steps.changed-files.outputs.python_any_changed == 'true' || inputs.always_run == true }}
run: pip install ruff==0.9.9
shell: bash
version: '0.6.10'
enable-cache: true
- name: ruff check
if: ${{ steps.changed-files.outputs.python_any_changed == 'true' || inputs.always_run == true }}
run: ruff check --output-format=github .
run: uv tool run ruff@0.11.2 check --output-format=github .
shell: bash
- name: ruff format
if: ${{ steps.changed-files.outputs.python_any_changed == 'true' || inputs.always_run == true }}
run: ruff format --check .
run: uv tool run ruff@0.11.2 format --check .
shell: bash

View File

@@ -39,24 +39,15 @@ jobs:
strategy:
matrix:
python-version:
- '3.10'
- '3.11'
- '3.12'
platform:
- linux-cuda-11_7
- linux-rocm-5_2
- linux-cpu
- macos-default
- windows-cpu
include:
- platform: linux-cuda-11_7
os: ubuntu-22.04
github-env: $GITHUB_ENV
- platform: linux-rocm-5_2
os: ubuntu-22.04
extra-index-url: 'https://download.pytorch.org/whl/rocm5.2'
github-env: $GITHUB_ENV
- platform: linux-cpu
os: ubuntu-22.04
os: ubuntu-24.04
extra-index-url: 'https://download.pytorch.org/whl/cpu'
github-env: $GITHUB_ENV
- platform: macos-default
@@ -70,14 +61,22 @@ jobs:
timeout-minutes: 15 # expected run time: 2-6 min, depending on platform
env:
PIP_USE_PEP517: '1'
UV_SYSTEM_PYTHON: 1
steps:
- name: checkout
uses: actions/checkout@v4
# https://github.com/nschloe/action-cached-lfs-checkout
uses: nschloe/action-cached-lfs-checkout@f46300cd8952454b9f0a21a3d133d4bd5684cfc2
- name: check for changed python files
if: ${{ inputs.always_run != true }}
id: changed-files
uses: tj-actions/changed-files@v42
# Pinned to the _hash_ for v45.0.9 to prevent supply-chain attacks.
# See:
# - CVE-2025-30066
# - https://www.stepsecurity.io/blog/harden-runner-detection-tj-actions-changed-files-action-is-compromised
# - https://github.com/tj-actions/changed-files/issues/2463
uses: tj-actions/changed-files@a284dc1814e3fd07f2e34267fc8f81227ed29fb8
with:
files_yaml: |
python:
@@ -86,20 +85,25 @@ jobs:
- '!invokeai/frontend/web/**'
- 'tests/**'
- name: setup uv
if: ${{ steps.changed-files.outputs.python_any_changed == 'true' || inputs.always_run == true }}
uses: astral-sh/setup-uv@v5
with:
version: '0.6.10'
enable-cache: true
python-version: ${{ matrix.python-version }}
- name: setup python
if: ${{ steps.changed-files.outputs.python_any_changed == 'true' || inputs.always_run == true }}
uses: actions/setup-python@v5
with:
python-version: ${{ matrix.python-version }}
cache: pip
cache-dependency-path: pyproject.toml
- name: install dependencies
if: ${{ steps.changed-files.outputs.python_any_changed == 'true' || inputs.always_run == true }}
env:
PIP_EXTRA_INDEX_URL: ${{ matrix.extra-index-url }}
run: >
pip3 install --editable=".[test]"
UV_INDEX: ${{ matrix.extra-index-url }}
run: uv pip install --editable ".[test]"
- name: run pytest
if: ${{ steps.changed-files.outputs.python_any_changed == 'true' || inputs.always_run == true }}

View File

@@ -42,24 +42,37 @@ jobs:
- name: check for changed files
if: ${{ inputs.always_run != true }}
id: changed-files
uses: tj-actions/changed-files@v42
# Pinned to the _hash_ for v45.0.9 to prevent supply-chain attacks.
# See:
# - CVE-2025-30066
# - https://www.stepsecurity.io/blog/harden-runner-detection-tj-actions-changed-files-action-is-compromised
# - https://github.com/tj-actions/changed-files/issues/2463
uses: tj-actions/changed-files@a284dc1814e3fd07f2e34267fc8f81227ed29fb8
with:
files_yaml: |
src:
- 'pyproject.toml'
- 'invokeai/**'
- name: setup uv
if: ${{ steps.changed-files.outputs.src_any_changed == 'true' || inputs.always_run == true }}
uses: astral-sh/setup-uv@v5
with:
version: '0.6.10'
enable-cache: true
python-version: '3.11'
- name: setup python
if: ${{ steps.changed-files.outputs.src_any_changed == 'true' || inputs.always_run == true }}
uses: actions/setup-python@v5
with:
python-version: '3.10'
cache: pip
cache-dependency-path: pyproject.toml
python-version: '3.11'
- name: install python dependencies
- name: install dependencies
if: ${{ steps.changed-files.outputs.src_any_changed == 'true' || inputs.always_run == true }}
run: pip3 install --use-pep517 --editable="."
env:
UV_INDEX: ${{ matrix.extra-index-url }}
run: uv pip install --editable .
- name: install frontend dependencies
if: ${{ steps.changed-files.outputs.src_any_changed == 'true' || inputs.always_run == true }}
@@ -72,7 +85,7 @@ jobs:
- name: generate schema
if: ${{ steps.changed-files.outputs.src_any_changed == 'true' || inputs.always_run == true }}
run: make frontend-typegen
run: cd invokeai/frontend/web && uv run ../../../scripts/generate_openapi_schema.py | pnpm typegen
shell: bash
- name: compare files

2
.nvmrc
View File

@@ -1 +1 @@
v22.12.0
v22.14.0

View File

@@ -1,77 +1,6 @@
# syntax=docker/dockerfile:1.4
## Builder stage
FROM library/ubuntu:24.04 AS builder
ARG DEBIAN_FRONTEND=noninteractive
RUN rm -f /etc/apt/apt.conf.d/docker-clean; echo 'Binary::apt::APT::Keep-Downloaded-Packages "true";' > /etc/apt/apt.conf.d/keep-cache
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
apt update && apt-get install -y \
build-essential \
git
# Install `uv` for package management
COPY --from=ghcr.io/astral-sh/uv:0.6.0 /uv /uvx /bin/
ENV VIRTUAL_ENV=/opt/venv
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
ENV INVOKEAI_SRC=/opt/invokeai
ENV PYTHON_VERSION=3.11
ENV UV_PYTHON=3.11
ENV UV_COMPILE_BYTECODE=1
ENV UV_LINK_MODE=copy
ENV UV_PROJECT_ENVIRONMENT="$VIRTUAL_ENV"
ENV UV_INDEX="https://download.pytorch.org/whl/cu124"
ARG GPU_DRIVER=cuda
# unused but available
ARG BUILDPLATFORM
# Switch to the `ubuntu` user to work around dependency issues with uv-installed python
RUN mkdir -p ${VIRTUAL_ENV} && \
mkdir -p ${INVOKEAI_SRC} && \
chmod -R a+w /opt && \
mkdir ~ubuntu/.cache && chown ubuntu: ~ubuntu/.cache
USER ubuntu
# Install python
RUN --mount=type=cache,target=/home/ubuntu/.cache/uv,uid=1000,gid=1000 \
uv python install ${PYTHON_VERSION}
WORKDIR ${INVOKEAI_SRC}
# Install project's dependencies as a separate layer so they aren't rebuilt every commit.
# bind-mount instead of copy to defer adding sources to the image until next layer.
#
# NOTE: there are no pytorch builds for arm64 + cuda, only cpu
# x86_64/CUDA is the default
RUN --mount=type=cache,target=/home/ubuntu/.cache/uv,uid=1000,gid=1000 \
--mount=type=bind,source=pyproject.toml,target=pyproject.toml \
--mount=type=bind,source=invokeai/version,target=invokeai/version \
if [ "$TARGETPLATFORM" = "linux/arm64" ] || [ "$GPU_DRIVER" = "cpu" ]; then \
UV_INDEX="https://download.pytorch.org/whl/cpu"; \
elif [ "$GPU_DRIVER" = "rocm" ]; then \
UV_INDEX="https://download.pytorch.org/whl/rocm6.1"; \
fi && \
uv sync --no-install-project
# Now that the bulk of the dependencies have been installed, copy in the project files that change more frequently.
COPY invokeai invokeai
COPY pyproject.toml .
RUN --mount=type=cache,target=/home/ubuntu/.cache/uv,uid=1000,gid=1000 \
--mount=type=bind,source=pyproject.toml,target=pyproject.toml \
if [ "$TARGETPLATFORM" = "linux/arm64" ] || [ "$GPU_DRIVER" = "cpu" ]; then \
UV_INDEX="https://download.pytorch.org/whl/cpu"; \
elif [ "$GPU_DRIVER" = "rocm" ]; then \
UV_INDEX="https://download.pytorch.org/whl/rocm6.1"; \
fi && \
uv sync
#### Build the Web UI ------------------------------------
#### Web UI ------------------------------------
FROM docker.io/node:22-slim AS web-builder
ENV PNPM_HOME="/pnpm"
@@ -85,69 +14,89 @@ RUN --mount=type=cache,target=/pnpm/store \
pnpm install --frozen-lockfile
RUN npx vite build
#### Runtime stage ---------------------------------------
## Backend ---------------------------------------
FROM library/ubuntu:24.04 AS runtime
FROM library/ubuntu:24.04
ARG DEBIAN_FRONTEND=noninteractive
ENV PYTHONUNBUFFERED=1
ENV PYTHONDONTWRITEBYTECODE=1
RUN rm -f /etc/apt/apt.conf.d/docker-clean; echo 'Binary::apt::APT::Keep-Downloaded-Packages "true";' > /etc/apt/apt.conf.d/keep-cache
RUN --mount=type=cache,target=/var/cache/apt \
--mount=type=cache,target=/var/lib/apt \
apt update && apt install -y --no-install-recommends \
ca-certificates \
git \
gosu \
libglib2.0-0 \
libgl1 \
libglx-mesa0 \
build-essential \
libopencv-dev \
libstdc++-10-dev
RUN apt update && apt install -y --no-install-recommends \
git \
curl \
vim \
tmux \
ncdu \
iotop \
bzip2 \
gosu \
magic-wormhole \
libglib2.0-0 \
libgl1 \
libglx-mesa0 \
build-essential \
libopencv-dev \
libstdc++-10-dev &&\
apt-get clean && apt-get autoclean
ENV \
PYTHONUNBUFFERED=1 \
PYTHONDONTWRITEBYTECODE=1 \
VIRTUAL_ENV=/opt/venv \
INVOKEAI_SRC=/opt/invokeai \
PYTHON_VERSION=3.12 \
UV_PYTHON=3.12 \
UV_COMPILE_BYTECODE=1 \
UV_MANAGED_PYTHON=1 \
UV_LINK_MODE=copy \
UV_PROJECT_ENVIRONMENT=/opt/venv \
UV_INDEX="https://download.pytorch.org/whl/cu124" \
INVOKEAI_ROOT=/invokeai \
INVOKEAI_HOST=0.0.0.0 \
INVOKEAI_PORT=9090 \
PATH="/opt/venv/bin:$PATH" \
CONTAINER_UID=${CONTAINER_UID:-1000} \
CONTAINER_GID=${CONTAINER_GID:-1000}
ENV INVOKEAI_SRC=/opt/invokeai
ENV VIRTUAL_ENV=/opt/venv
ENV UV_PROJECT_ENVIRONMENT="$VIRTUAL_ENV"
ENV PYTHON_VERSION=3.11
ENV INVOKEAI_ROOT=/invokeai
ENV INVOKEAI_HOST=0.0.0.0
ENV INVOKEAI_PORT=9090
ENV PATH="$VIRTUAL_ENV/bin:$INVOKEAI_SRC:$PATH"
ENV CONTAINER_UID=${CONTAINER_UID:-1000}
ENV CONTAINER_GID=${CONTAINER_GID:-1000}
ARG GPU_DRIVER=cuda
# Install `uv` for package management
# and install python for the ubuntu user (expected to exist on ubuntu >=24.x)
# this is too tiny to optimize with multi-stage builds, but maybe we'll come back to it
COPY --from=ghcr.io/astral-sh/uv:0.6.0 /uv /uvx /bin/
USER ubuntu
RUN uv python install ${PYTHON_VERSION}
USER root
COPY --from=ghcr.io/astral-sh/uv:0.6.9 /uv /uvx /bin/
# --link requires buldkit w/ dockerfile syntax 1.4
COPY --link --from=builder ${INVOKEAI_SRC} ${INVOKEAI_SRC}
COPY --link --from=builder ${VIRTUAL_ENV} ${VIRTUAL_ENV}
COPY --link --from=web-builder /build/dist ${INVOKEAI_SRC}/invokeai/frontend/web/dist
# Link amdgpu.ids for ROCm builds
# contributed by https://github.com/Rubonnek
RUN mkdir -p "/opt/amdgpu/share/libdrm" &&\
ln -s "/usr/share/libdrm/amdgpu.ids" "/opt/amdgpu/share/libdrm/amdgpu.ids"
# Install python & allow non-root user to use it by traversing the /root dir without read permissions
RUN --mount=type=cache,target=/root/.cache/uv \
uv python install ${PYTHON_VERSION} && \
# chmod --recursive a+rX /root/.local/share/uv/python
chmod 711 /root
WORKDIR ${INVOKEAI_SRC}
# Install project's dependencies as a separate layer so they aren't rebuilt every commit.
# bind-mount instead of copy to defer adding sources to the image until next layer.
#
# NOTE: there are no pytorch builds for arm64 + cuda, only cpu
# x86_64/CUDA is the default
RUN --mount=type=cache,target=/root/.cache/uv \
--mount=type=bind,source=pyproject.toml,target=pyproject.toml \
--mount=type=bind,source=uv.lock,target=uv.lock \
# this is just to get the package manager to recognize that the project exists, without making changes to the docker layer
--mount=type=bind,source=invokeai/version,target=invokeai/version \
if [ "$TARGETPLATFORM" = "linux/arm64" ] || [ "$GPU_DRIVER" = "cpu" ]; then UV_INDEX="https://download.pytorch.org/whl/cpu"; \
elif [ "$GPU_DRIVER" = "rocm" ]; then UV_INDEX="https://download.pytorch.org/whl/rocm6.2"; \
fi && \
uv sync --frozen
# build patchmatch
RUN cd /usr/lib/$(uname -p)-linux-gnu/pkgconfig/ && ln -sf opencv4.pc opencv.pc
RUN python -c "from patchmatch import patch_match"
# Link amdgpu.ids for ROCm builds
# contributed by https://github.com/Rubonnek
RUN mkdir -p "/opt/amdgpu/share/libdrm" &&\
ln -s "/usr/share/libdrm/amdgpu.ids" "/opt/amdgpu/share/libdrm/amdgpu.ids"
RUN mkdir -p ${INVOKEAI_ROOT} && chown -R ${CONTAINER_UID}:${CONTAINER_GID} ${INVOKEAI_ROOT}
COPY docker/docker-entrypoint.sh ./
ENTRYPOINT ["/opt/invokeai/docker-entrypoint.sh"]
CMD ["invokeai-web"]
# --link requires buldkit w/ dockerfile syntax 1.4, does not work with podman
COPY --link --from=web-builder /build/dist ${INVOKEAI_SRC}/invokeai/frontend/web/dist
# add sources last to minimize image changes on code changes
COPY invokeai ${INVOKEAI_SRC}/invokeai

View File

@@ -18,9 +18,19 @@ If you just want to use Invoke, you should use the [launcher][launcher link].
2. [Fork and clone][forking link] the [InvokeAI repo][repo link].
3. Create an directory for user data (images, models, db, etc). This is typically at `~/invokeai`, but if you already have a non-dev install, you may want to create a separate directory for the dev install.
3. This repository uses Git LFS to manage large files. To ensure all assets are downloaded:
- Install git-lfs → [Download here](https://git-lfs.com/)
- Enable automatic LFS fetching for this repository:
```shell
git config lfs.fetchinclude "*"
```
- Fetch files from LFS (only needs to be done once; subsequent `git pull` will fetch changes automatically):
```
git lfs pull
```
4. Create an directory for user data (images, models, db, etc). This is typically at `~/invokeai`, but if you already have a non-dev install, you may want to create a separate directory for the dev install.
4. Follow the [manual install][manual install link] guide, with some modifications to the install command:
5. Follow the [manual install][manual install link] guide, with some modifications to the install command:
- Use `.` instead of `invokeai` to install from the current directory. You don't need to specify the version.
@@ -31,22 +41,22 @@ If you just want to use Invoke, you should use the [launcher][launcher link].
With the modifications made, the install command should look something like this:
```sh
uv pip install -e ".[dev,test,docs,xformers]" --python 3.11 --python-preference only-managed --index=https://download.pytorch.org/whl/cu124 --reinstall
uv pip install -e ".[dev,test,docs,xformers]" --python 3.12 --python-preference only-managed --index=https://download.pytorch.org/whl/cu124 --reinstall
```
5. At this point, you should have Invoke installed, a venv set up and activated, and the server running. But you will see a warning in the terminal that no UI was found. If you go to the URL for the server, you won't get a UI.
6. At this point, you should have Invoke installed, a venv set up and activated, and the server running. But you will see a warning in the terminal that no UI was found. If you go to the URL for the server, you won't get a UI.
This is because the UI build is not distributed with the source code. You need to build it manually. End the running server instance.
If you only want to edit the docs, you can stop here and skip to the **Documentation** section below.
6. Install the frontend dev toolchain:
7. Install the frontend dev toolchain:
- [`nodejs`](https://nodejs.org/) (v20+)
- [`pnpm`](https://pnpm.io/8.x/installation) (must be v8 - not v9!)
7. Do a production build of the frontend:
8. Do a production build of the frontend:
```sh
cd <PATH_TO_INVOKEAI_REPO>/invokeai/frontend/web
@@ -54,7 +64,7 @@ If you just want to use Invoke, you should use the [launcher][launcher link].
pnpm build
```
8. Restart the server and navigate to the URL. You should get a UI. After making changes to the python code, restart the server to see those changes.
9. Restart the server and navigate to the URL. You should get a UI. After making changes to the python code, restart the server to see those changes.
## Updating the UI

View File

@@ -43,10 +43,10 @@ The following commands vary depending on the version of Invoke being installed a
3. Create a virtual environment in that directory:
```sh
uv venv --relocatable --prompt invoke --python 3.11 --python-preference only-managed .venv
uv venv --relocatable --prompt invoke --python 3.12 --python-preference only-managed .venv
```
This command creates a portable virtual environment at `.venv` complete with a portable python 3.11. It doesn't matter if your system has no python installed, or has a different version - `uv` will handle everything.
This command creates a portable virtual environment at `.venv` complete with a portable python 3.12. It doesn't matter if your system has no python installed, or has a different version - `uv` will handle everything.
4. Activate the virtual environment:
@@ -64,7 +64,7 @@ The following commands vary depending on the version of Invoke being installed a
5. Choose a version to install. Review the [GitHub releases page](https://github.com/invoke-ai/InvokeAI/releases).
6. Determine the package package specifier to use when installing. This is a performance optimization.
6. Determine the package specifier to use when installing. This is a performance optimization.
- If you have an Nvidia 20xx series GPU or older, use `invokeai[xformers]`.
- If you have an Nvidia 30xx series GPU or newer, or do not have an Nvidia GPU, use `invokeai`.
@@ -88,13 +88,13 @@ The following commands vary depending on the version of Invoke being installed a
8. Install the `invokeai` package. Substitute the package specifier and version.
```sh
uv pip install <PACKAGE_SPECIFIER>==<VERSION> --python 3.11 --python-preference only-managed --force-reinstall
uv pip install <PACKAGE_SPECIFIER>==<VERSION> --python 3.12 --python-preference only-managed --force-reinstall
```
If you determined you needed to use a `PyPI` index URL in the previous step, you'll need to add `--index=<INDEX_URL>` like this:
```sh
uv pip install <PACKAGE_SPECIFIER>==<VERSION> --python 3.11 --python-preference only-managed --index=<INDEX_URL> --force-reinstall
uv pip install <PACKAGE_SPECIFIER>==<VERSION> --python 3.12 --python-preference only-managed --index=<INDEX_URL> --force-reinstall
```
9. Deactivate and reactivate your venv so that the invokeai-specific commands become available in the environment:

View File

@@ -41,7 +41,7 @@ The requirements below are rough guidelines for best performance. GPUs with less
You don't need to do this if you are installing with the [Invoke Launcher](./quick_start.md).
Invoke requires python 3.10 or 3.11. If you don't already have one of these versions installed, we suggest installing 3.11, as it will be supported for longer.
Invoke requires python 3.10 through 3.12. If you don't already have one of these versions installed, we suggest installing 3.12, as it will be supported for longer.
Check that your system has an up-to-date Python installed by running `python3 --version` in the terminal (Linux, macOS) or cmd/powershell (Windows).
@@ -49,19 +49,19 @@ Check that your system has an up-to-date Python installed by running `python3 --
=== "Windows"
- Install python 3.11 with [an official installer].
- Install python with [an official installer].
- The installer includes an option to add python to your PATH. Be sure to enable this. If you missed it, re-run the installer, choose to modify an existing installation, and tick that checkbox.
- You may need to install [Microsoft Visual C++ Redistributable].
=== "macOS"
- Install python 3.11 with [an official installer].
- Install python with [an official installer].
- If model installs fail with a certificate error, you may need to run this command (changing the python version to match what you have installed): `/Applications/Python\ 3.10/Install\ Certificates.command`
- If you haven't already, you will need to install the XCode CLI Tools by running `xcode-select --install` in a terminal.
=== "Linux"
- Installing python varies depending on your system. On Ubuntu, you can use the [deadsnakes PPA](https://launchpad.net/~deadsnakes/+archive/ubuntu/ppa).
- Installing python varies depending on your system. We recommend [using `uv` to manage your python installation](https://docs.astral.sh/uv/concepts/python-versions/#installing-a-python-version).
- You'll need to install `libglib2.0-0` and `libgl1-mesa-glx` for OpenCV to work. For example, on a Debian system: `sudo apt update && sudo apt install -y libglib2.0-0 libgl1-mesa-glx`
## Drivers

View File

@@ -37,7 +37,13 @@ from invokeai.app.services.style_preset_records.style_preset_records_sqlite impo
from invokeai.app.services.urls.urls_default import LocalUrlService
from invokeai.app.services.workflow_records.workflow_records_sqlite import SqliteWorkflowRecordsStorage
from invokeai.app.services.workflow_thumbnails.workflow_thumbnails_disk import WorkflowThumbnailFileStorageDisk
from invokeai.backend.stable_diffusion.diffusion.conditioning_data import ConditioningFieldData
from invokeai.backend.stable_diffusion.diffusion.conditioning_data import (
BasicConditioningInfo,
ConditioningFieldData,
FLUXConditioningInfo,
SD3ConditioningInfo,
SDXLConditioningInfo,
)
from invokeai.backend.util.logging import InvokeAILogger
from invokeai.version.invokeai_version import __version__
@@ -101,10 +107,25 @@ class ApiDependencies:
images = ImageService()
invocation_cache = MemoryInvocationCache(max_cache_size=config.node_cache_size)
tensors = ObjectSerializerForwardCache(
ObjectSerializerDisk[torch.Tensor](output_folder / "tensors", ephemeral=True)
ObjectSerializerDisk[torch.Tensor](
output_folder / "tensors",
safe_globals=[torch.Tensor],
ephemeral=True,
),
max_cache_size=0,
)
conditioning = ObjectSerializerForwardCache(
ObjectSerializerDisk[ConditioningFieldData](output_folder / "conditioning", ephemeral=True)
ObjectSerializerDisk[ConditioningFieldData](
output_folder / "conditioning",
safe_globals=[
ConditioningFieldData,
BasicConditioningInfo,
SDXLConditioningInfo,
FLUXConditioningInfo,
SD3ConditioningInfo,
],
ephemeral=True,
),
)
download_queue_service = DownloadQueueService(app_config=configuration, event_bus=events)
model_images_service = ModelImageFileStorageDisk(model_images_folder / "model_images")

View File

@@ -12,6 +12,7 @@ from pydantic import BaseModel, Field
from invokeai.app.api.dependencies import ApiDependencies
from invokeai.app.invocations.upscale import ESRGAN_MODELS
from invokeai.app.services.config.config_default import InvokeAIAppConfig, get_config
from invokeai.app.services.invocation_cache.invocation_cache_common import InvocationCacheStatus
from invokeai.backend.image_util.infill_methods.patchmatch import PatchMatch
from invokeai.backend.util.logging import logging
@@ -99,7 +100,7 @@ async def get_app_deps() -> AppDependencyVersions:
@app_router.get("/config", operation_id="get_config", status_code=200, response_model=AppConfig)
async def get_config() -> AppConfig:
async def get_config_() -> AppConfig:
infill_methods = ["lama", "tile", "cv2", "color"] # TODO: add mosaic back
if PatchMatch.patchmatch_available():
infill_methods.append("patchmatch")
@@ -121,6 +122,21 @@ async def get_config() -> AppConfig:
)
class InvokeAIAppConfigWithSetFields(BaseModel):
"""InvokeAI App Config with model fields set"""
set_fields: set[str] = Field(description="The set fields")
config: InvokeAIAppConfig = Field(description="The InvokeAI App Config")
@app_router.get(
"/runtime_config", operation_id="get_runtime_config", status_code=200, response_model=InvokeAIAppConfigWithSetFields
)
async def get_runtime_config() -> InvokeAIAppConfigWithSetFields:
config = get_config()
return InvokeAIAppConfigWithSetFields(set_fields=config.model_fields_set, config=config)
@app_router.get(
"/logging",
operation_id="get_log_level",

View File

@@ -96,6 +96,22 @@ async def upload_image(
raise HTTPException(status_code=500, detail="Failed to create image")
class ImageUploadEntry(BaseModel):
image_dto: ImageDTO = Body(description="The image DTO")
presigned_url: str = Body(description="The URL to get the presigned URL for the image upload")
@images_router.post("/", operation_id="create_image_upload_entry")
async def create_image_upload_entry(
width: int = Body(description="The width of the image"),
height: int = Body(description="The height of the image"),
board_id: Optional[str] = Body(default=None, description="The board to add this image to, if any"),
) -> ImageUploadEntry:
"""Uploads an image from a URL, not implemented"""
raise HTTPException(status_code=501, detail="Not implemented")
@images_router.delete("/i/{image_name}", operation_id="delete_image")
async def delete_image(
image_name: str = Path(description="The name of the image to delete"),

View File

@@ -28,12 +28,10 @@ from invokeai.app.services.model_records import (
UnknownModelException,
)
from invokeai.app.util.suppress_output import SuppressOutput
from invokeai.backend.model_manager import BaseModelType, ModelFormat, ModelType
from invokeai.backend.model_manager.config import (
AnyModelConfig,
BaseModelType,
MainCheckpointConfig,
ModelFormat,
ModelType,
)
from invokeai.backend.model_manager.load.model_cache.cache_stats import CacheStats
from invokeai.backend.model_manager.metadata.fetch.huggingface import HuggingFaceMetadataFetch

View File

@@ -2,7 +2,7 @@ from typing import Optional
from fastapi import Body, Path, Query
from fastapi.routing import APIRouter
from pydantic import BaseModel
from pydantic import BaseModel, Field
from invokeai.app.api.dependencies import ApiDependencies
from invokeai.app.services.session_processor.session_processor_common import SessionProcessorStatus
@@ -15,6 +15,7 @@ from invokeai.app.services.session_queue.session_queue_common import (
CancelByDestinationResult,
ClearResult,
EnqueueBatchResult,
FieldIdentifier,
PruneResult,
RetryItemsResult,
SessionQueueCountsByDestination,
@@ -34,6 +35,12 @@ class SessionQueueAndProcessorStatus(BaseModel):
processor: SessionProcessorStatus
class ValidationRunData(BaseModel):
workflow_id: str = Field(description="The id of the workflow being published.")
input_fields: list[FieldIdentifier] = Body(description="The input fields for the published workflow")
output_fields: list[FieldIdentifier] = Body(description="The output fields for the published workflow")
@session_queue_router.post(
"/{queue_id}/enqueue_batch",
operation_id="enqueue_batch",
@@ -45,6 +52,10 @@ async def enqueue_batch(
queue_id: str = Path(description="The queue id to perform this operation on"),
batch: Batch = Body(description="Batch to process"),
prepend: bool = Body(default=False, description="Whether or not to prepend this batch in the queue"),
validation_run_data: Optional[ValidationRunData] = Body(
default=None,
description="The validation run data to use for this batch. This is only used if this is a validation run.",
),
) -> EnqueueBatchResult:
"""Processes a batch and enqueues the output graphs for execution."""

View File

@@ -105,6 +105,8 @@ async def list_workflows(
categories: Optional[list[WorkflowCategory]] = Query(default=None, description="The categories of workflow to get"),
tags: Optional[list[str]] = Query(default=None, description="The tags of workflow to get"),
query: Optional[str] = Query(default=None, description="The text to query by (matches name and description)"),
has_been_opened: Optional[bool] = Query(default=None, description="Whether to include/exclude recent workflows"),
is_published: Optional[bool] = Query(default=None, description="Whether to include/exclude published workflows"),
) -> PaginatedResults[WorkflowRecordListItemWithThumbnailDTO]:
"""Gets a page of workflows"""
workflows_with_thumbnails: list[WorkflowRecordListItemWithThumbnailDTO] = []
@@ -116,6 +118,8 @@ async def list_workflows(
query=query,
categories=categories,
tags=tags,
has_been_opened=has_been_opened,
is_published=is_published,
)
for workflow in workflows.items:
workflows_with_thumbnails.append(
@@ -221,14 +225,29 @@ async def get_workflow_thumbnail(
raise HTTPException(status_code=404)
@workflows_router.get("/counts", operation_id="get_counts")
async def get_counts(
tags: Optional[list[str]] = Query(default=None, description="The tags to include"),
@workflows_router.get("/counts_by_tag", operation_id="get_counts_by_tag")
async def get_counts_by_tag(
tags: list[str] = Query(description="The tags to get counts for"),
categories: Optional[list[WorkflowCategory]] = Query(default=None, description="The categories to include"),
) -> int:
"""Gets a the count of workflows that include the specified tags and categories"""
has_been_opened: Optional[bool] = Query(default=None, description="Whether to include/exclude recent workflows"),
) -> dict[str, int]:
"""Counts workflows by tag"""
return ApiDependencies.invoker.services.workflow_records.get_counts(tags=tags, categories=categories)
return ApiDependencies.invoker.services.workflow_records.counts_by_tag(
tags=tags, categories=categories, has_been_opened=has_been_opened
)
@workflows_router.get("/counts_by_category", operation_id="counts_by_category")
async def counts_by_category(
categories: list[WorkflowCategory] = Query(description="The categories to include"),
has_been_opened: Optional[bool] = Query(default=None, description="Whether to include/exclude recent workflows"),
) -> dict[str, int]:
"""Counts workflows by category"""
return ApiDependencies.invoker.services.workflow_records.counts_by_category(
categories=categories, has_been_opened=has_been_opened
)
@workflows_router.put(

View File

@@ -8,6 +8,7 @@ import sys
import warnings
from abc import ABC, abstractmethod
from enum import Enum
from functools import lru_cache
from inspect import signature
from typing import (
TYPE_CHECKING,
@@ -27,7 +28,6 @@ import semver
from pydantic import BaseModel, ConfigDict, Field, TypeAdapter, create_model
from pydantic.fields import FieldInfo
from pydantic_core import PydanticUndefined
from typing_extensions import TypeAliasType
from invokeai.app.invocations.fields import (
FieldKind,
@@ -100,37 +100,6 @@ class BaseInvocationOutput(BaseModel):
All invocation outputs must use the `@invocation_output` decorator to provide their unique type.
"""
_output_classes: ClassVar[set[BaseInvocationOutput]] = set()
_typeadapter: ClassVar[Optional[TypeAdapter[Any]]] = None
_typeadapter_needs_update: ClassVar[bool] = False
@classmethod
def register_output(cls, output: BaseInvocationOutput) -> None:
"""Registers an invocation output."""
cls._output_classes.add(output)
cls._typeadapter_needs_update = True
@classmethod
def get_outputs(cls) -> Iterable[BaseInvocationOutput]:
"""Gets all invocation outputs."""
return cls._output_classes
@classmethod
def get_typeadapter(cls) -> TypeAdapter[Any]:
"""Gets a pydantc TypeAdapter for the union of all invocation output types."""
if not cls._typeadapter or cls._typeadapter_needs_update:
AnyInvocationOutput = TypeAliasType(
"AnyInvocationOutput", Annotated[Union[tuple(cls._output_classes)], Field(discriminator="type")]
)
cls._typeadapter = TypeAdapter(AnyInvocationOutput)
cls._typeadapter_needs_update = False
return cls._typeadapter
@classmethod
def get_output_types(cls) -> Iterable[str]:
"""Gets all invocation output types."""
return (i.get_type() for i in BaseInvocationOutput.get_outputs())
@staticmethod
def json_schema_extra(schema: dict[str, Any], model_class: Type[BaseInvocationOutput]) -> None:
"""Adds various UI-facing attributes to the invocation output's OpenAPI schema."""
@@ -173,76 +142,16 @@ class BaseInvocation(ABC, BaseModel):
All invocations must use the `@invocation` decorator to provide their unique type.
"""
_invocation_classes: ClassVar[set[BaseInvocation]] = set()
_typeadapter: ClassVar[Optional[TypeAdapter[Any]]] = None
_typeadapter_needs_update: ClassVar[bool] = False
@classmethod
def get_type(cls) -> str:
"""Gets the invocation's type, as provided by the `@invocation` decorator."""
return cls.model_fields["type"].default
@classmethod
def register_invocation(cls, invocation: BaseInvocation) -> None:
"""Registers an invocation."""
cls._invocation_classes.add(invocation)
cls._typeadapter_needs_update = True
@classmethod
def get_typeadapter(cls) -> TypeAdapter[Any]:
"""Gets a pydantc TypeAdapter for the union of all invocation types."""
if not cls._typeadapter or cls._typeadapter_needs_update:
AnyInvocation = TypeAliasType(
"AnyInvocation", Annotated[Union[tuple(cls.get_invocations())], Field(discriminator="type")]
)
cls._typeadapter = TypeAdapter(AnyInvocation)
cls._typeadapter_needs_update = False
return cls._typeadapter
@classmethod
def invalidate_typeadapter(cls) -> None:
"""Invalidates the typeadapter, forcing it to be rebuilt on next access. If the invocation allowlist or
denylist is changed, this should be called to ensure the typeadapter is updated and validation respects
the updated allowlist and denylist."""
cls._typeadapter_needs_update = True
@classmethod
def get_invocations(cls) -> Iterable[BaseInvocation]:
"""Gets all invocations, respecting the allowlist and denylist."""
app_config = get_config()
allowed_invocations: set[BaseInvocation] = set()
for sc in cls._invocation_classes:
invocation_type = sc.get_type()
is_in_allowlist = (
invocation_type in app_config.allow_nodes if isinstance(app_config.allow_nodes, list) else True
)
is_in_denylist = (
invocation_type in app_config.deny_nodes if isinstance(app_config.deny_nodes, list) else False
)
if is_in_allowlist and not is_in_denylist:
allowed_invocations.add(sc)
return allowed_invocations
@classmethod
def get_invocations_map(cls) -> dict[str, BaseInvocation]:
"""Gets a map of all invocation types to their invocation classes."""
return {i.get_type(): i for i in BaseInvocation.get_invocations()}
@classmethod
def get_invocation_types(cls) -> Iterable[str]:
"""Gets all invocation types."""
return (i.get_type() for i in BaseInvocation.get_invocations())
@classmethod
def get_output_annotation(cls) -> BaseInvocationOutput:
"""Gets the invocation's output annotation (i.e. the return annotation of its `invoke()` method)."""
return signature(cls.invoke).return_annotation
@classmethod
def get_invocation_for_type(cls, invocation_type: str) -> BaseInvocation | None:
"""Gets the invocation class for a given invocation type."""
return cls.get_invocations_map().get(invocation_type)
@staticmethod
def json_schema_extra(schema: dict[str, Any], model_class: Type[BaseInvocation]) -> None:
"""Adds various UI-facing attributes to the invocation's OpenAPI schema."""
@@ -340,6 +249,105 @@ class BaseInvocation(ABC, BaseModel):
TBaseInvocation = TypeVar("TBaseInvocation", bound=BaseInvocation)
class InvocationRegistry:
_invocation_classes: ClassVar[set[type[BaseInvocation]]] = set()
_output_classes: ClassVar[set[type[BaseInvocationOutput]]] = set()
@classmethod
def register_invocation(cls, invocation: type[BaseInvocation]) -> None:
"""Registers an invocation."""
cls._invocation_classes.add(invocation)
cls.invalidate_invocation_typeadapter()
@classmethod
@lru_cache(maxsize=1)
def get_invocation_typeadapter(cls) -> TypeAdapter[Any]:
"""Gets a pydantic TypeAdapter for the union of all invocation types.
This is used to parse serialized invocations into the correct invocation class.
This method is cached to avoid rebuilding the TypeAdapter on every access. If the invocation allowlist or
denylist is changed, the cache should be cleared to ensure the TypeAdapter is updated and validation respects
the updated allowlist and denylist.
@see https://docs.pydantic.dev/latest/concepts/type_adapter/
"""
return TypeAdapter(Annotated[Union[tuple(cls.get_invocation_classes())], Field(discriminator="type")])
@classmethod
def invalidate_invocation_typeadapter(cls) -> None:
"""Invalidates the cached invocation type adapter."""
cls.get_invocation_typeadapter.cache_clear()
@classmethod
def get_invocation_classes(cls) -> Iterable[type[BaseInvocation]]:
"""Gets all invocations, respecting the allowlist and denylist."""
app_config = get_config()
allowed_invocations: set[type[BaseInvocation]] = set()
for sc in cls._invocation_classes:
invocation_type = sc.get_type()
is_in_allowlist = (
invocation_type in app_config.allow_nodes if isinstance(app_config.allow_nodes, list) else True
)
is_in_denylist = (
invocation_type in app_config.deny_nodes if isinstance(app_config.deny_nodes, list) else False
)
if is_in_allowlist and not is_in_denylist:
allowed_invocations.add(sc)
return allowed_invocations
@classmethod
def get_invocations_map(cls) -> dict[str, type[BaseInvocation]]:
"""Gets a map of all invocation types to their invocation classes."""
return {i.get_type(): i for i in cls.get_invocation_classes()}
@classmethod
def get_invocation_types(cls) -> Iterable[str]:
"""Gets all invocation types."""
return (i.get_type() for i in cls.get_invocation_classes())
@classmethod
def get_invocation_for_type(cls, invocation_type: str) -> type[BaseInvocation] | None:
"""Gets the invocation class for a given invocation type."""
return cls.get_invocations_map().get(invocation_type)
@classmethod
def register_output(cls, output: "type[TBaseInvocationOutput]") -> None:
"""Registers an invocation output."""
cls._output_classes.add(output)
cls.invalidate_output_typeadapter()
@classmethod
def get_output_classes(cls) -> Iterable[type[BaseInvocationOutput]]:
"""Gets all invocation outputs."""
return cls._output_classes
@classmethod
@lru_cache(maxsize=1)
def get_output_typeadapter(cls) -> TypeAdapter[Any]:
"""Gets a pydantic TypeAdapter for the union of all invocation output types.
This is used to parse serialized invocation outputs into the correct invocation output class.
This method is cached to avoid rebuilding the TypeAdapter on every access. If the invocation allowlist or
denylist is changed, the cache should be cleared to ensure the TypeAdapter is updated and validation respects
the updated allowlist and denylist.
@see https://docs.pydantic.dev/latest/concepts/type_adapter/
"""
return TypeAdapter(Annotated[Union[tuple(cls._output_classes)], Field(discriminator="type")])
@classmethod
def invalidate_output_typeadapter(cls) -> None:
"""Invalidates the cached invocation output type adapter."""
cls.get_output_typeadapter.cache_clear()
@classmethod
def get_output_types(cls) -> Iterable[str]:
"""Gets all invocation output types."""
return (i.get_type() for i in cls.get_output_classes())
RESERVED_NODE_ATTRIBUTE_FIELD_NAMES = {
"id",
"is_intermediate",
@@ -453,8 +461,8 @@ def invocation(
node_pack = cls.__module__.split(".")[0]
# Handle the case where an existing node is being clobbered by the one we are registering
if invocation_type in BaseInvocation.get_invocation_types():
clobbered_invocation = BaseInvocation.get_invocation_for_type(invocation_type)
if invocation_type in InvocationRegistry.get_invocation_types():
clobbered_invocation = InvocationRegistry.get_invocation_for_type(invocation_type)
# This should always be true - we just checked if the invocation type was in the set
assert clobbered_invocation is not None
@@ -539,8 +547,7 @@ def invocation(
)
cls.__doc__ = docstring
# TODO: how to type this correctly? it's typed as ModelMetaclass, a private class in pydantic
BaseInvocation.register_invocation(cls) # type: ignore
InvocationRegistry.register_invocation(cls)
return cls
@@ -565,7 +572,7 @@ def invocation_output(
if re.compile(r"^\S+$").match(output_type) is None:
raise ValueError(f'"output_type" must consist of non-whitespace characters, got "{output_type}"')
if output_type in BaseInvocationOutput.get_output_types():
if output_type in InvocationRegistry.get_output_types():
raise ValueError(f'Invocation type "{output_type}" already exists')
validate_fields(cls.model_fields, output_type)
@@ -586,7 +593,7 @@ def invocation_output(
)
cls.__doc__ = docstring
BaseInvocationOutput.register_output(cls) # type: ignore # TODO: how to type this correctly?
InvocationRegistry.register_output(cls)
return cls
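# A minimal sketch of a downstream custom node using these decorators (hypothetical example,
# reusing only helpers that appear elsewhere in this diff). Registration happens as a side
# effect of @invocation, which adds the class to InvocationRegistry and invalidates the cached
# TypeAdapter.
#
#     @invocation("image_passthrough", title="Image Passthrough", tags=["image"], category="image", version="1.0.0")
#     class ImagePassthroughInvocation(BaseInvocation):
#         """Returns the input image unchanged."""
#
#         image: ImageField = InputField(description="The image to pass through")
#
#         def invoke(self, context: InvocationContext) -> ImageOutput:
#             image_dto = context.images.save(image=context.images.get_pil(self.image.image_name))
#             return ImageOutput.build(image_dto)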

View File

@@ -40,10 +40,10 @@ from invokeai.backend.util.devices import TorchDevice
@invocation(
"compel",
title="Prompt",
title="Prompt - SD1.5",
tags=["prompt", "compel"],
category="conditioning",
version="1.2.0",
version="1.2.1",
)
class CompelInvocation(BaseInvocation):
"""Parse prompt using compel package to conditioning."""
@@ -233,10 +233,10 @@ class SDXLPromptInvocationBase:
@invocation(
"sdxl_compel_prompt",
title="SDXL Prompt",
title="Prompt - SDXL",
tags=["sdxl", "compel", "prompt"],
category="conditioning",
version="1.2.0",
version="1.2.1",
)
class SDXLCompelPromptInvocation(BaseInvocation, SDXLPromptInvocationBase):
"""Parse prompt using compel package to conditioning."""
@@ -327,10 +327,10 @@ class SDXLCompelPromptInvocation(BaseInvocation, SDXLPromptInvocationBase):
@invocation(
"sdxl_refiner_compel_prompt",
title="SDXL Refiner Prompt",
title="Prompt - SDXL Refiner",
tags=["sdxl", "compel", "prompt"],
category="conditioning",
version="1.1.1",
version="1.1.2",
)
class SDXLRefinerCompelPromptInvocation(BaseInvocation, SDXLPromptInvocationBase):
"""Parse prompt using compel package to conditioning."""
@@ -376,10 +376,10 @@ class CLIPSkipInvocationOutput(BaseInvocationOutput):
@invocation(
"clip_skip",
title="CLIP Skip",
title="Apply CLIP Skip - SD1.5, SDXL",
tags=["clipskip", "clip", "skip"],
category="conditioning",
version="1.1.0",
version="1.1.1",
)
class CLIPSkipInvocation(BaseInvocation):
"""Skip layers in clip text_encoder model."""

View File

@@ -0,0 +1,128 @@
# Invocations for ControlNet image preprocessors
# initial implementation by Gregg Helt, 2023
from typing import List, Union
from pydantic import BaseModel, Field, field_validator, model_validator
from invokeai.app.invocations.baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
Classification,
invocation,
invocation_output,
)
from invokeai.app.invocations.fields import (
FieldDescriptions,
ImageField,
InputField,
OutputField,
UIType,
)
from invokeai.app.invocations.model import ModelIdentifierField
from invokeai.app.invocations.primitives import ImageOutput
from invokeai.app.invocations.util import validate_begin_end_step, validate_weights
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.app.util.controlnet_utils import CONTROLNET_MODE_VALUES, CONTROLNET_RESIZE_VALUES, heuristic_resize
from invokeai.backend.image_util.util import np_to_pil, pil_to_np
class ControlField(BaseModel):
image: ImageField = Field(description="The control image")
control_model: ModelIdentifierField = Field(description="The ControlNet model to use")
control_weight: Union[float, List[float]] = Field(default=1, description="The weight given to the ControlNet")
begin_step_percent: float = Field(
default=0, ge=0, le=1, description="When the ControlNet is first applied (% of total steps)"
)
end_step_percent: float = Field(
default=1, ge=0, le=1, description="When the ControlNet is last applied (% of total steps)"
)
control_mode: CONTROLNET_MODE_VALUES = Field(default="balanced", description="The control mode to use")
resize_mode: CONTROLNET_RESIZE_VALUES = Field(default="just_resize", description="The resize mode to use")
@field_validator("control_weight")
@classmethod
def validate_control_weight(cls, v):
validate_weights(v)
return v
@model_validator(mode="after")
def validate_begin_end_step_percent(self):
validate_begin_end_step(self.begin_step_percent, self.end_step_percent)
return self
@invocation_output("control_output")
class ControlOutput(BaseInvocationOutput):
"""node output for ControlNet info"""
# Outputs
control: ControlField = OutputField(description=FieldDescriptions.control)
@invocation("controlnet", title="ControlNet - SD1.5, SDXL", tags=["controlnet"], category="controlnet", version="1.1.3")
class ControlNetInvocation(BaseInvocation):
"""Collects ControlNet info to pass to other nodes"""
image: ImageField = InputField(description="The control image")
control_model: ModelIdentifierField = InputField(
description=FieldDescriptions.controlnet_model, ui_type=UIType.ControlNetModel
)
control_weight: Union[float, List[float]] = InputField(
default=1.0, ge=-1, le=2, description="The weight given to the ControlNet"
)
begin_step_percent: float = InputField(
default=0, ge=0, le=1, description="When the ControlNet is first applied (% of total steps)"
)
end_step_percent: float = InputField(
default=1, ge=0, le=1, description="When the ControlNet is last applied (% of total steps)"
)
control_mode: CONTROLNET_MODE_VALUES = InputField(default="balanced", description="The control mode used")
resize_mode: CONTROLNET_RESIZE_VALUES = InputField(default="just_resize", description="The resize mode used")
@field_validator("control_weight")
@classmethod
def validate_control_weight(cls, v):
validate_weights(v)
return v
@model_validator(mode="after")
def validate_begin_end_step_percent(self) -> "ControlNetInvocation":
validate_begin_end_step(self.begin_step_percent, self.end_step_percent)
return self
def invoke(self, context: InvocationContext) -> ControlOutput:
return ControlOutput(
control=ControlField(
image=self.image,
control_model=self.control_model,
control_weight=self.control_weight,
begin_step_percent=self.begin_step_percent,
end_step_percent=self.end_step_percent,
control_mode=self.control_mode,
resize_mode=self.resize_mode,
),
)
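# Worked example for the step-range fields above (a sketch, assuming validate_begin_end_step
# requires begin_step_percent <= end_step_percent): with 30 denoising steps,
# begin_step_percent=0.0 and end_step_percent=0.5 applies the ControlNet for roughly the first
# 15 steps only, while begin_step_percent=0.6 with end_step_percent=0.4 fails validation.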
@invocation(
"heuristic_resize",
title="Heuristic Resize",
tags=["image, controlnet"],
category="image",
version="1.0.1",
classification=Classification.Prototype,
)
class HeuristicResizeInvocation(BaseInvocation):
"""Resize an image using a heuristic method. Preserves edge maps."""
image: ImageField = InputField(description="The image to resize")
width: int = InputField(default=512, ge=1, description="The width to resize to (px)")
height: int = InputField(default=512, ge=1, description="The height to resize to (px)")
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.images.get_pil(self.image.image_name, "RGB")
np_img = pil_to_np(image)
np_resized = heuristic_resize(np_img, (self.width, self.height))
resized = np_to_pil(np_resized)
image_dto = context.images.save(image=resized)
return ImageOutput.build(image_dto)

View File

@@ -1,716 +0,0 @@
# Invocations for ControlNet image preprocessors
# initial implementation by Gregg Helt, 2023
# heavily leverages controlnet_aux package: https://github.com/patrickvonplaten/controlnet_aux
from builtins import bool, float
from pathlib import Path
from typing import Dict, List, Literal, Union
import cv2
import numpy as np
from controlnet_aux import (
ContentShuffleDetector,
LeresDetector,
MediapipeFaceDetector,
MidasDetector,
MLSDdetector,
NormalBaeDetector,
PidiNetDetector,
SamDetector,
ZoeDetector,
)
from controlnet_aux.util import HWC3, ade_palette
from PIL import Image
from pydantic import BaseModel, Field, field_validator, model_validator
from transformers import pipeline
from transformers.pipelines import DepthEstimationPipeline
from invokeai.app.invocations.baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
Classification,
invocation,
invocation_output,
)
from invokeai.app.invocations.fields import (
FieldDescriptions,
ImageField,
InputField,
OutputField,
UIType,
WithBoard,
WithMetadata,
)
from invokeai.app.invocations.model import ModelIdentifierField
from invokeai.app.invocations.primitives import ImageOutput
from invokeai.app.invocations.util import validate_begin_end_step, validate_weights
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.app.util.controlnet_utils import CONTROLNET_MODE_VALUES, CONTROLNET_RESIZE_VALUES, heuristic_resize
from invokeai.backend.image_util.canny import get_canny_edges
from invokeai.backend.image_util.depth_anything.depth_anything_pipeline import DepthAnythingPipeline
from invokeai.backend.image_util.dw_openpose import DWPOSE_MODELS, DWOpenposeDetector
from invokeai.backend.image_util.hed import HEDProcessor
from invokeai.backend.image_util.lineart import LineartProcessor
from invokeai.backend.image_util.lineart_anime import LineartAnimeProcessor
from invokeai.backend.image_util.util import np_to_pil, pil_to_np
class ControlField(BaseModel):
image: ImageField = Field(description="The control image")
control_model: ModelIdentifierField = Field(description="The ControlNet model to use")
control_weight: Union[float, List[float]] = Field(default=1, description="The weight given to the ControlNet")
begin_step_percent: float = Field(
default=0, ge=0, le=1, description="When the ControlNet is first applied (% of total steps)"
)
end_step_percent: float = Field(
default=1, ge=0, le=1, description="When the ControlNet is last applied (% of total steps)"
)
control_mode: CONTROLNET_MODE_VALUES = Field(default="balanced", description="The control mode to use")
resize_mode: CONTROLNET_RESIZE_VALUES = Field(default="just_resize", description="The resize mode to use")
@field_validator("control_weight")
@classmethod
def validate_control_weight(cls, v):
validate_weights(v)
return v
@model_validator(mode="after")
def validate_begin_end_step_percent(self):
validate_begin_end_step(self.begin_step_percent, self.end_step_percent)
return self
@invocation_output("control_output")
class ControlOutput(BaseInvocationOutput):
"""node output for ControlNet info"""
# Outputs
control: ControlField = OutputField(description=FieldDescriptions.control)
@invocation("controlnet", title="ControlNet", tags=["controlnet"], category="controlnet", version="1.1.2")
class ControlNetInvocation(BaseInvocation):
"""Collects ControlNet info to pass to other nodes"""
image: ImageField = InputField(description="The control image")
control_model: ModelIdentifierField = InputField(
description=FieldDescriptions.controlnet_model, ui_type=UIType.ControlNetModel
)
control_weight: Union[float, List[float]] = InputField(
default=1.0, ge=-1, le=2, description="The weight given to the ControlNet"
)
begin_step_percent: float = InputField(
default=0, ge=0, le=1, description="When the ControlNet is first applied (% of total steps)"
)
end_step_percent: float = InputField(
default=1, ge=0, le=1, description="When the ControlNet is last applied (% of total steps)"
)
control_mode: CONTROLNET_MODE_VALUES = InputField(default="balanced", description="The control mode used")
resize_mode: CONTROLNET_RESIZE_VALUES = InputField(default="just_resize", description="The resize mode used")
@field_validator("control_weight")
@classmethod
def validate_control_weight(cls, v):
validate_weights(v)
return v
@model_validator(mode="after")
def validate_begin_end_step_percent(self) -> "ControlNetInvocation":
validate_begin_end_step(self.begin_step_percent, self.end_step_percent)
return self
def invoke(self, context: InvocationContext) -> ControlOutput:
return ControlOutput(
control=ControlField(
image=self.image,
control_model=self.control_model,
control_weight=self.control_weight,
begin_step_percent=self.begin_step_percent,
end_step_percent=self.end_step_percent,
control_mode=self.control_mode,
resize_mode=self.resize_mode,
),
)
# This invocation exists for other invocations to subclass it - do not register with @invocation!
class ImageProcessorInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Base class for invocations that preprocess images for ControlNet"""
image: ImageField = InputField(description="The image to process")
def run_processor(self, image: Image.Image) -> Image.Image:
# superclass just passes through image without processing
return image
def load_image(self, context: InvocationContext) -> Image.Image:
# allows override for any special formatting specific to the preprocessor
return context.images.get_pil(self.image.image_name, "RGB")
def invoke(self, context: InvocationContext) -> ImageOutput:
self._context = context
raw_image = self.load_image(context)
# image type should be PIL.PngImagePlugin.PngImageFile ?
processed_image = self.run_processor(raw_image)
# currently can't see processed image in node UI without a showImage node,
# so for now setting image_type to RESULT instead of INTERMEDIATE so will get saved in gallery
image_dto = context.images.save(image=processed_image)
"""Builds an ImageOutput and its ImageField"""
processed_image_field = ImageField(image_name=image_dto.image_name)
return ImageOutput(
image=processed_image_field,
# width=processed_image.width,
width=image_dto.width,
# height=processed_image.height,
height=image_dto.height,
# mode=processed_image.mode,
)
@invocation(
"canny_image_processor",
title="Canny Processor",
tags=["controlnet", "canny"],
category="controlnet",
version="1.3.3",
classification=Classification.Deprecated,
)
class CannyImageProcessorInvocation(ImageProcessorInvocation):
"""Canny edge detection for ControlNet"""
detect_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.image_res)
low_threshold: int = InputField(
default=100, ge=0, le=255, description="The low threshold of the Canny pixel gradient (0-255)"
)
high_threshold: int = InputField(
default=200, ge=0, le=255, description="The high threshold of the Canny pixel gradient (0-255)"
)
def load_image(self, context: InvocationContext) -> Image.Image:
# Keep alpha channel for Canny processing to detect edges of transparent areas
return context.images.get_pil(self.image.image_name, "RGBA")
def run_processor(self, image: Image.Image) -> Image.Image:
processed_image = get_canny_edges(
image,
self.low_threshold,
self.high_threshold,
detect_resolution=self.detect_resolution,
image_resolution=self.image_resolution,
)
return processed_image
@invocation(
"hed_image_processor",
title="HED (softedge) Processor",
tags=["controlnet", "hed", "softedge"],
category="controlnet",
version="1.2.3",
classification=Classification.Deprecated,
)
class HedImageProcessorInvocation(ImageProcessorInvocation):
"""Applies HED edge detection to image"""
detect_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.image_res)
# safe not supported in controlnet_aux v0.0.3
# safe: bool = InputField(default=False, description=FieldDescriptions.safe_mode)
scribble: bool = InputField(default=False, description=FieldDescriptions.scribble_mode)
def run_processor(self, image: Image.Image) -> Image.Image:
hed_processor = HEDProcessor()
processed_image = hed_processor.run(
image,
detect_resolution=self.detect_resolution,
image_resolution=self.image_resolution,
# safe not supported in controlnet_aux v0.0.3
# safe=self.safe,
scribble=self.scribble,
)
return processed_image
@invocation(
"lineart_image_processor",
title="Lineart Processor",
tags=["controlnet", "lineart"],
category="controlnet",
version="1.2.3",
classification=Classification.Deprecated,
)
class LineartImageProcessorInvocation(ImageProcessorInvocation):
"""Applies line art processing to image"""
detect_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.image_res)
coarse: bool = InputField(default=False, description="Whether to use coarse mode")
def run_processor(self, image: Image.Image) -> Image.Image:
lineart_processor = LineartProcessor()
processed_image = lineart_processor.run(
image, detect_resolution=self.detect_resolution, image_resolution=self.image_resolution, coarse=self.coarse
)
return processed_image
@invocation(
"lineart_anime_image_processor",
title="Lineart Anime Processor",
tags=["controlnet", "lineart", "anime"],
category="controlnet",
version="1.2.3",
classification=Classification.Deprecated,
)
class LineartAnimeImageProcessorInvocation(ImageProcessorInvocation):
"""Applies line art anime processing to image"""
detect_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.image_res)
def run_processor(self, image: Image.Image) -> Image.Image:
processor = LineartAnimeProcessor()
processed_image = processor.run(
image,
detect_resolution=self.detect_resolution,
image_resolution=self.image_resolution,
)
return processed_image
@invocation(
"midas_depth_image_processor",
title="Midas Depth Processor",
tags=["controlnet", "midas"],
category="controlnet",
version="1.2.4",
classification=Classification.Deprecated,
)
class MidasDepthImageProcessorInvocation(ImageProcessorInvocation):
"""Applies Midas depth processing to image"""
a_mult: float = InputField(default=2.0, ge=0, description="Midas parameter `a_mult` (a = a_mult * PI)")
bg_th: float = InputField(default=0.1, ge=0, description="Midas parameter `bg_th`")
detect_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.image_res)
# depth_and_normal not supported in controlnet_aux v0.0.3
# depth_and_normal: bool = InputField(default=False, description="whether to use depth and normal mode")
def run_processor(self, image: Image.Image) -> Image.Image:
# TODO: replace from_pretrained() calls with context.models.download_and_cache() (or similar)
midas_processor = MidasDetector.from_pretrained("lllyasviel/Annotators")
processed_image = midas_processor(
image,
a=np.pi * self.a_mult,
bg_th=self.bg_th,
image_resolution=self.image_resolution,
detect_resolution=self.detect_resolution,
# dept_and_normal not supported in controlnet_aux v0.0.3
# depth_and_normal=self.depth_and_normal,
)
return processed_image
@invocation(
"normalbae_image_processor",
title="Normal BAE Processor",
tags=["controlnet"],
category="controlnet",
version="1.2.3",
classification=Classification.Deprecated,
)
class NormalbaeImageProcessorInvocation(ImageProcessorInvocation):
"""Applies NormalBae processing to image"""
detect_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.image_res)
def run_processor(self, image: Image.Image) -> Image.Image:
normalbae_processor = NormalBaeDetector.from_pretrained("lllyasviel/Annotators")
processed_image = normalbae_processor(
image, detect_resolution=self.detect_resolution, image_resolution=self.image_resolution
)
return processed_image
@invocation(
"mlsd_image_processor",
title="MLSD Processor",
tags=["controlnet", "mlsd"],
category="controlnet",
version="1.2.3",
classification=Classification.Deprecated,
)
class MlsdImageProcessorInvocation(ImageProcessorInvocation):
"""Applies MLSD processing to image"""
detect_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.image_res)
thr_v: float = InputField(default=0.1, ge=0, description="MLSD parameter `thr_v`")
thr_d: float = InputField(default=0.1, ge=0, description="MLSD parameter `thr_d`")
def run_processor(self, image: Image.Image) -> Image.Image:
mlsd_processor = MLSDdetector.from_pretrained("lllyasviel/Annotators")
processed_image = mlsd_processor(
image,
detect_resolution=self.detect_resolution,
image_resolution=self.image_resolution,
thr_v=self.thr_v,
thr_d=self.thr_d,
)
return processed_image
@invocation(
"pidi_image_processor",
title="PIDI Processor",
tags=["controlnet", "pidi"],
category="controlnet",
version="1.2.3",
classification=Classification.Deprecated,
)
class PidiImageProcessorInvocation(ImageProcessorInvocation):
"""Applies PIDI processing to image"""
detect_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.image_res)
safe: bool = InputField(default=False, description=FieldDescriptions.safe_mode)
scribble: bool = InputField(default=False, description=FieldDescriptions.scribble_mode)
def run_processor(self, image: Image.Image) -> Image.Image:
pidi_processor = PidiNetDetector.from_pretrained("lllyasviel/Annotators")
processed_image = pidi_processor(
image,
detect_resolution=self.detect_resolution,
image_resolution=self.image_resolution,
safe=self.safe,
scribble=self.scribble,
)
return processed_image
@invocation(
"content_shuffle_image_processor",
title="Content Shuffle Processor",
tags=["controlnet", "contentshuffle"],
category="controlnet",
version="1.2.3",
classification=Classification.Deprecated,
)
class ContentShuffleImageProcessorInvocation(ImageProcessorInvocation):
"""Applies content shuffle processing to image"""
detect_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.image_res)
h: int = InputField(default=512, ge=0, description="Content shuffle `h` parameter")
w: int = InputField(default=512, ge=0, description="Content shuffle `w` parameter")
f: int = InputField(default=256, ge=0, description="Content shuffle `f` parameter")
def run_processor(self, image: Image.Image) -> Image.Image:
content_shuffle_processor = ContentShuffleDetector()
processed_image = content_shuffle_processor(
image,
detect_resolution=self.detect_resolution,
image_resolution=self.image_resolution,
h=self.h,
w=self.w,
f=self.f,
)
return processed_image
# should work with controlnet_aux >= 0.0.4 and timm <= 0.6.13
@invocation(
"zoe_depth_image_processor",
title="Zoe (Depth) Processor",
tags=["controlnet", "zoe", "depth"],
category="controlnet",
version="1.2.3",
classification=Classification.Deprecated,
)
class ZoeDepthImageProcessorInvocation(ImageProcessorInvocation):
"""Applies Zoe depth processing to image"""
def run_processor(self, image: Image.Image) -> Image.Image:
zoe_depth_processor = ZoeDetector.from_pretrained("lllyasviel/Annotators")
processed_image = zoe_depth_processor(image)
return processed_image
@invocation(
"mediapipe_face_processor",
title="Mediapipe Face Processor",
tags=["controlnet", "mediapipe", "face"],
category="controlnet",
version="1.2.4",
classification=Classification.Deprecated,
)
class MediapipeFaceProcessorInvocation(ImageProcessorInvocation):
"""Applies mediapipe face processing to image"""
max_faces: int = InputField(default=1, ge=1, description="Maximum number of faces to detect")
min_confidence: float = InputField(default=0.5, ge=0, le=1, description="Minimum confidence for face detection")
detect_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.image_res)
def run_processor(self, image: Image.Image) -> Image.Image:
mediapipe_face_processor = MediapipeFaceDetector()
processed_image = mediapipe_face_processor(
image,
max_faces=self.max_faces,
min_confidence=self.min_confidence,
image_resolution=self.image_resolution,
detect_resolution=self.detect_resolution,
)
return processed_image
@invocation(
"leres_image_processor",
title="Leres (Depth) Processor",
tags=["controlnet", "leres", "depth"],
category="controlnet",
version="1.2.3",
classification=Classification.Deprecated,
)
class LeresImageProcessorInvocation(ImageProcessorInvocation):
"""Applies leres processing to image"""
thr_a: float = InputField(default=0, description="Leres parameter `thr_a`")
thr_b: float = InputField(default=0, description="Leres parameter `thr_b`")
boost: bool = InputField(default=False, description="Whether to use boost mode")
detect_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.image_res)
def run_processor(self, image: Image.Image) -> Image.Image:
leres_processor = LeresDetector.from_pretrained("lllyasviel/Annotators")
processed_image = leres_processor(
image,
thr_a=self.thr_a,
thr_b=self.thr_b,
boost=self.boost,
detect_resolution=self.detect_resolution,
image_resolution=self.image_resolution,
)
return processed_image
@invocation(
"tile_image_processor",
title="Tile Resample Processor",
tags=["controlnet", "tile"],
category="controlnet",
version="1.2.3",
classification=Classification.Deprecated,
)
class TileResamplerProcessorInvocation(ImageProcessorInvocation):
"""Tile resampler processor"""
# res: int = InputField(default=512, ge=0, le=1024, description="The pixel resolution for each tile")
down_sampling_rate: float = InputField(default=1.0, ge=1.0, le=8.0, description="Down sampling rate")
# tile_resample copied from sd-webui-controlnet/scripts/processor.py
def tile_resample(
self,
np_img: np.ndarray,
res=512, # never used?
down_sampling_rate=1.0,
):
np_img = HWC3(np_img)
if down_sampling_rate < 1.1:
return np_img
H, W, C = np_img.shape
H = int(float(H) / float(down_sampling_rate))
W = int(float(W) / float(down_sampling_rate))
np_img = cv2.resize(np_img, (W, H), interpolation=cv2.INTER_AREA)
return np_img
def run_processor(self, image: Image.Image) -> Image.Image:
np_img = np.array(image, dtype=np.uint8)
processed_np_image = self.tile_resample(
np_img,
# res=self.tile_size,
down_sampling_rate=self.down_sampling_rate,
)
processed_image = Image.fromarray(processed_np_image)
return processed_image
@invocation(
"segment_anything_processor",
title="Segment Anything Processor",
tags=["controlnet", "segmentanything"],
category="controlnet",
version="1.2.4",
classification=Classification.Deprecated,
)
class SegmentAnythingProcessorInvocation(ImageProcessorInvocation):
"""Applies segment anything processing to image"""
detect_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.image_res)
def run_processor(self, image: Image.Image) -> Image.Image:
# segment_anything_processor = SamDetector.from_pretrained("ybelkada/segment-anything", subfolder="checkpoints")
segment_anything_processor = SamDetectorReproducibleColors.from_pretrained(
"ybelkada/segment-anything", subfolder="checkpoints"
)
np_img = np.array(image, dtype=np.uint8)
processed_image = segment_anything_processor(
np_img, image_resolution=self.image_resolution, detect_resolution=self.detect_resolution
)
return processed_image
class SamDetectorReproducibleColors(SamDetector):
# overriding SamDetector.show_anns() method to use reproducible colors for segmentation image
# base class show_anns() method randomizes colors,
# which seems to also lead to non-reproducible image generation
# so using ADE20k color palette instead
def show_anns(self, anns: List[Dict]):
if len(anns) == 0:
return
sorted_anns = sorted(anns, key=(lambda x: x["area"]), reverse=True)
h, w = anns[0]["segmentation"].shape
final_img = Image.fromarray(np.zeros((h, w, 3), dtype=np.uint8), mode="RGB")
palette = ade_palette()
for i, ann in enumerate(sorted_anns):
m = ann["segmentation"]
img = np.empty((m.shape[0], m.shape[1], 3), dtype=np.uint8)
# doing modulo just in case number of annotated regions exceeds number of colors in palette
ann_color = palette[i % len(palette)]
img[:, :] = ann_color
final_img.paste(Image.fromarray(img, mode="RGB"), (0, 0), Image.fromarray(np.uint8(m * 255)))
return np.array(final_img, dtype=np.uint8)
@invocation(
"color_map_image_processor",
title="Color Map Processor",
tags=["controlnet"],
category="controlnet",
version="1.2.3",
classification=Classification.Deprecated,
)
class ColorMapImageProcessorInvocation(ImageProcessorInvocation):
"""Generates a color map from the provided image"""
color_map_tile_size: int = InputField(default=64, ge=1, description=FieldDescriptions.tile_size)
def run_processor(self, image: Image.Image) -> Image.Image:
np_image = np.array(image, dtype=np.uint8)
height, width = np_image.shape[:2]
width_tile_size = min(self.color_map_tile_size, width)
height_tile_size = min(self.color_map_tile_size, height)
color_map = cv2.resize(
np_image,
(width // width_tile_size, height // height_tile_size),
interpolation=cv2.INTER_CUBIC,
)
color_map = cv2.resize(color_map, (width, height), interpolation=cv2.INTER_NEAREST)
color_map = Image.fromarray(color_map)
return color_map
DEPTH_ANYTHING_MODEL_SIZES = Literal["large", "base", "small", "small_v2"]
# DepthAnything V2 Small model is licensed under Apache 2.0 but not the base and large models.
DEPTH_ANYTHING_MODELS = {
"large": "LiheYoung/depth-anything-large-hf",
"base": "LiheYoung/depth-anything-base-hf",
"small": "LiheYoung/depth-anything-small-hf",
"small_v2": "depth-anything/Depth-Anything-V2-Small-hf",
}
@invocation(
"depth_anything_image_processor",
title="Depth Anything Processor",
tags=["controlnet", "depth", "depth anything"],
category="controlnet",
version="1.1.3",
classification=Classification.Deprecated,
)
class DepthAnythingImageProcessorInvocation(ImageProcessorInvocation):
"""Generates a depth map based on the Depth Anything algorithm"""
model_size: DEPTH_ANYTHING_MODEL_SIZES = InputField(
default="small_v2", description="The size of the depth model to use"
)
resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.image_res)
def run_processor(self, image: Image.Image) -> Image.Image:
def load_depth_anything(model_path: Path):
depth_anything_pipeline = pipeline(model=str(model_path), task="depth-estimation", local_files_only=True)
assert isinstance(depth_anything_pipeline, DepthEstimationPipeline)
return DepthAnythingPipeline(depth_anything_pipeline)
with self._context.models.load_remote_model(
source=DEPTH_ANYTHING_MODELS[self.model_size], loader=load_depth_anything
) as depth_anything_detector:
assert isinstance(depth_anything_detector, DepthAnythingPipeline)
depth_map = depth_anything_detector.generate_depth(image)
# Resizing to user target specified size
new_height = int(image.size[1] * (self.resolution / image.size[0]))
depth_map = depth_map.resize((self.resolution, new_height))
return depth_map
@invocation(
"dw_openpose_image_processor",
title="DW Openpose Image Processor",
tags=["controlnet", "dwpose", "openpose"],
category="controlnet",
version="1.1.1",
classification=Classification.Deprecated,
)
class DWOpenposeImageProcessorInvocation(ImageProcessorInvocation):
"""Generates an openpose pose from an image using DWPose"""
draw_body: bool = InputField(default=True)
draw_face: bool = InputField(default=False)
draw_hands: bool = InputField(default=False)
image_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.image_res)
def run_processor(self, image: Image.Image) -> Image.Image:
onnx_det = self._context.models.download_and_cache_model(DWPOSE_MODELS["yolox_l.onnx"])
onnx_pose = self._context.models.download_and_cache_model(DWPOSE_MODELS["dw-ll_ucoco_384.onnx"])
dw_openpose = DWOpenposeDetector(onnx_det=onnx_det, onnx_pose=onnx_pose)
processed_image = dw_openpose(
image,
draw_face=self.draw_face,
draw_hands=self.draw_hands,
draw_body=self.draw_body,
resolution=self.image_resolution,
)
return processed_image
@invocation(
"heuristic_resize",
title="Heuristic Resize",
tags=["image, controlnet"],
category="image",
version="1.0.1",
classification=Classification.Prototype,
)
class HeuristicResizeInvocation(BaseInvocation):
"""Resize an image using a heuristic method. Preserves edge maps."""
image: ImageField = InputField(description="The image to resize")
width: int = InputField(default=512, ge=1, description="The width to resize to (px)")
height: int = InputField(default=512, ge=1, description="The height to resize to (px)")
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.images.get_pil(self.image.image_name, "RGB")
np_img = pil_to_np(image)
np_resized = heuristic_resize(np_img, (self.width, self.height))
resized = np_to_pil(np_resized)
image_dto = context.images.save(image=resized)
return ImageOutput.build(image_dto)

View File

@@ -19,7 +19,8 @@ from invokeai.app.invocations.image_to_latents import ImageToLatentsInvocation
from invokeai.app.invocations.model import UNetField, VAEField
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.model_manager import LoadedModel
from invokeai.backend.model_manager.config import MainConfigBase, ModelVariantType
from invokeai.backend.model_manager.config import MainConfigBase
from invokeai.backend.model_manager.taxonomy import ModelVariantType
from invokeai.backend.stable_diffusion.diffusers_pipeline import image_resized_to_grid_as_tensor

View File

@@ -22,7 +22,7 @@ from transformers import CLIPVisionModelWithProjection
from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
from invokeai.app.invocations.constants import LATENT_SCALE_FACTOR
from invokeai.app.invocations.controlnet_image_processors import ControlField
from invokeai.app.invocations.controlnet import ControlField
from invokeai.app.invocations.fields import (
ConditioningField,
DenoiseMaskField,
@@ -39,8 +39,8 @@ from invokeai.app.invocations.t2i_adapter import T2IAdapterField
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.app.util.controlnet_utils import prepare_control_image
from invokeai.backend.ip_adapter.ip_adapter import IPAdapter
from invokeai.backend.model_manager import BaseModelType, ModelVariantType
from invokeai.backend.model_manager.config import AnyModelConfig
from invokeai.backend.model_manager.taxonomy import BaseModelType, ModelVariantType
from invokeai.backend.model_patcher import ModelPatcher
from invokeai.backend.patches.layer_patcher import LayerPatcher
from invokeai.backend.patches.model_patch_raw import ModelPatchRaw
@@ -127,10 +127,10 @@ def get_scheduler(
@invocation(
"denoise_latents",
title="Denoise Latents",
title="Denoise - SD1.5, SDXL",
tags=["latents", "denoise", "txt2img", "t2i", "t2l", "img2img", "i2i", "l2l"],
category="latents",
version="1.5.3",
version="1.5.4",
)
class DenoiseLatentsInvocation(BaseInvocation):
"""Denoises noisy latents to decodable images"""

View File

@@ -4,7 +4,7 @@ from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
from invokeai.app.invocations.fields import ImageField, InputField, WithBoard, WithMetadata
from invokeai.app.invocations.primitives import ImageOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.image_util.dw_openpose import DWOpenposeDetector2
from invokeai.backend.image_util.dw_openpose import DWOpenposeDetector
@invocation(
@@ -25,20 +25,20 @@ class DWOpenposeDetectionInvocation(BaseInvocation, WithMetadata, WithBoard):
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.images.get_pil(self.image.image_name, "RGB")
onnx_det_path = context.models.download_and_cache_model(DWOpenposeDetector2.get_model_url_det())
onnx_pose_path = context.models.download_and_cache_model(DWOpenposeDetector2.get_model_url_pose())
onnx_det_path = context.models.download_and_cache_model(DWOpenposeDetector.get_model_url_det())
onnx_pose_path = context.models.download_and_cache_model(DWOpenposeDetector.get_model_url_pose())
loaded_session_det = context.models.load_local_model(
onnx_det_path, DWOpenposeDetector2.create_onnx_inference_session
onnx_det_path, DWOpenposeDetector.create_onnx_inference_session
)
loaded_session_pose = context.models.load_local_model(
onnx_pose_path, DWOpenposeDetector2.create_onnx_inference_session
onnx_pose_path, DWOpenposeDetector.create_onnx_inference_session
)
with loaded_session_det as session_det, loaded_session_pose as session_pose:
assert isinstance(session_det, ort.InferenceSession)
assert isinstance(session_pose, ort.InferenceSession)
detector = DWOpenposeDetector2(session_det=session_det, session_pose=session_pose)
detector = DWOpenposeDetector(session_det=session_det, session_pose=session_pose)
detected_image = detector.run(
image,
draw_face=self.draw_face,

View File

@@ -59,6 +59,7 @@ class UIType(str, Enum, metaclass=MetaEnum):
ControlLoRAModel = "ControlLoRAModelField"
SigLipModel = "SigLipModelField"
FluxReduxModel = "FluxReduxModelField"
LlavaOnevisionModel = "LLaVAModelField"
# endregion
# region Misc Field Types
@@ -205,6 +206,8 @@ class FieldDescriptions:
freeu_b2 = "Scaling factor for stage 2 to amplify the contributions of backbone features."
instantx_control_mode = "The control mode for InstantX ControlNet union models. Ignored for other ControlNet models. The standard mapping is: canny (0), tile (1), depth (2), blur (3), pose (4), gray (5), low quality (6). Negative values will be treated as 'None'."
flux_redux_conditioning = "FLUX Redux conditioning tensor"
vllm_model = "The VLLM model to use"
flux_fill_conditioning = "FLUX Fill conditioning tensor"
class ImageField(BaseModel):
@@ -274,6 +277,13 @@ class FluxReduxConditioningField(BaseModel):
)
class FluxFillConditioningField(BaseModel):
"""A FLUX Fill conditioning field."""
image: ImageField = Field(description="The FLUX Fill reference image.")
mask: TensorField = Field(description="The FLUX Fill inpaint mask.")
class SD3ConditioningField(BaseModel):
"""A conditioning tensor primitive value"""

View File

@@ -1,7 +1,6 @@
from invokeai.app.invocations.baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
Classification,
invocation,
invocation_output,
)
@@ -21,11 +20,10 @@ class FluxControlLoRALoaderOutput(BaseInvocationOutput):
@invocation(
"flux_control_lora_loader",
title="Flux Control LoRA",
title="Control LoRA - FLUX",
tags=["lora", "model", "flux"],
category="model",
version="1.1.0",
classification=Classification.Prototype,
version="1.1.1",
)
class FluxControlLoRALoaderInvocation(BaseInvocation):
"""LoRA model and Image to use with FLUX transformer generation."""

View File

@@ -3,7 +3,6 @@ from pydantic import BaseModel, Field, field_validator, model_validator
from invokeai.app.invocations.baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
Classification,
invocation,
invocation_output,
)
@@ -52,7 +51,6 @@ class FluxControlNetOutput(BaseInvocationOutput):
tags=["controlnet", "flux"],
category="controlnet",
version="1.0.0",
classification=Classification.Prototype,
)
class FluxControlNetInvocation(BaseInvocation):
"""Collect FLUX ControlNet info to pass to other nodes."""

View File

@@ -10,11 +10,12 @@ from PIL import Image
from torchvision.transforms.functional import resize as tv_resize
from transformers import CLIPImageProcessor, CLIPVisionModelWithProjection
from invokeai.app.invocations.baseinvocation import BaseInvocation, Classification, invocation
from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
from invokeai.app.invocations.fields import (
DenoiseMaskField,
FieldDescriptions,
FluxConditioningField,
FluxFillConditioningField,
FluxReduxConditioningField,
ImageField,
Input,
@@ -48,7 +49,7 @@ from invokeai.backend.flux.sampling_utils import (
unpack,
)
from invokeai.backend.flux.text_conditioning import FluxReduxConditioning, FluxTextConditioning
from invokeai.backend.model_manager.config import ModelFormat
from invokeai.backend.model_manager.taxonomy import ModelFormat, ModelVariantType
from invokeai.backend.patches.layer_patcher import LayerPatcher
from invokeai.backend.patches.lora_conversions.flux_lora_constants import FLUX_LORA_TRANSFORMER_PREFIX
from invokeai.backend.patches.model_patch_raw import ModelPatchRaw
@@ -62,8 +63,7 @@ from invokeai.backend.util.devices import TorchDevice
title="FLUX Denoise",
tags=["image", "flux"],
category="image",
version="3.2.3",
classification=Classification.Prototype,
version="3.3.0",
)
class FluxDenoiseInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Run denoising process with a FLUX transformer model."""
@@ -109,6 +109,11 @@ class FluxDenoiseInvocation(BaseInvocation, WithMetadata, WithBoard):
description="FLUX Redux conditioning tensor.",
input=Input.Connection,
)
fill_conditioning: FluxFillConditioningField | None = InputField(
default=None,
description="FLUX Fill conditioning.",
input=Input.Connection,
)
cfg_scale: float | list[float] = InputField(default=1.0, description=FieldDescriptions.cfg_scale, title="CFG Scale")
cfg_scale_start_step: int = InputField(
default=0,
@@ -261,8 +266,19 @@ class FluxDenoiseInvocation(BaseInvocation, WithMetadata, WithBoard):
if is_schnell and self.control_lora:
raise ValueError("Control LoRAs cannot be used with FLUX Schnell")
# Prepare the extra image conditioning tensor if a FLUX structural control image is provided.
img_cond = self._prep_structural_control_img_cond(context)
# Prepare the extra image conditioning tensor (img_cond) for either FLUX structural control or FLUX Fill.
img_cond: torch.Tensor | None = None
is_flux_fill = transformer_config.variant == ModelVariantType.Inpaint # type: ignore
if is_flux_fill:
img_cond = self._prep_flux_fill_img_cond(
context, device=TorchDevice.choose_torch_device(), dtype=inference_dtype
)
else:
if self.fill_conditioning is not None:
raise ValueError("fill_conditioning was provided, but the model is not a FLUX Fill model.")
if self.control_lora is not None:
img_cond = self._prep_structural_control_img_cond(context)
inpaint_mask = self._prep_inpaint_mask(context, x)
@@ -271,7 +287,6 @@ class FluxDenoiseInvocation(BaseInvocation, WithMetadata, WithBoard):
# Pack all latent tensors.
init_latents = pack(init_latents) if init_latents is not None else None
inpaint_mask = pack(inpaint_mask) if inpaint_mask is not None else None
img_cond = pack(img_cond) if img_cond is not None else None
noise = pack(noise)
x = pack(x)
@@ -664,7 +679,70 @@ class FluxDenoiseInvocation(BaseInvocation, WithMetadata, WithBoard):
img_cond = einops.rearrange(img_cond, "h w c -> 1 c h w")
vae_info = context.models.load(self.controlnet_vae.vae)
return FluxVaeEncodeInvocation.vae_encode(vae_info=vae_info, image_tensor=img_cond)
img_cond = FluxVaeEncodeInvocation.vae_encode(vae_info=vae_info, image_tensor=img_cond)
return pack(img_cond)
def _prep_flux_fill_img_cond(
self, context: InvocationContext, device: torch.device, dtype: torch.dtype
) -> torch.Tensor:
"""Prepare the FLUX Fill conditioning. This method should be called iff the model is a FLUX Fill model.
This logic is based on:
https://github.com/black-forest-labs/flux/blob/716724eb276d94397be99710a0a54d352664e23b/src/flux/sampling.py#L107-L157
"""
# Validate inputs.
if self.fill_conditioning is None:
raise ValueError("A FLUX Fill model is being used without fill_conditioning.")
# TODO(ryand): We should probably rename controlnet_vae. It's used for more than just ControlNets.
if self.controlnet_vae is None:
raise ValueError("A FLUX Fill model is being used without controlnet_vae.")
if self.control_lora is not None:
raise ValueError(
"A FLUX Fill model is being used, but a control_lora was provided. Control LoRAs are not compatible with FLUX Fill models."
)
# Log input warnings related to FLUX Fill usage.
if self.denoise_mask is not None:
context.logger.warning(
"Both fill_conditioning and a denoise_mask were provided. You probably meant to use one or the other."
)
if self.guidance < 25.0:
context.logger.warning("A guidance value of ~30.0 is recommended for FLUX Fill models.")
# Load the conditioning image and resize it to the target image size.
cond_img = context.images.get_pil(self.fill_conditioning.image.image_name, mode="RGB")
cond_img = cond_img.resize((self.width, self.height), Image.Resampling.BICUBIC)
cond_img = np.array(cond_img)
cond_img = torch.from_numpy(cond_img).float() / 127.5 - 1.0
cond_img = einops.rearrange(cond_img, "h w c -> 1 c h w")
cond_img = cond_img.to(device=device, dtype=dtype)
# Load the mask and resize it to the target image size.
mask = context.tensors.load(self.fill_conditioning.mask.tensor_name)
# We expect mask to be a bool tensor with shape [1, H, W].
assert mask.dtype == torch.bool
assert mask.dim() == 3
assert mask.shape[0] == 1
mask = tv_resize(mask, size=[self.height, self.width], interpolation=tv_transforms.InterpolationMode.NEAREST)
mask = mask.to(device=device, dtype=dtype)
mask = einops.rearrange(mask, "1 h w -> 1 1 h w")
# Prepare image conditioning.
cond_img = cond_img * (1 - mask)
vae_info = context.models.load(self.controlnet_vae.vae)
cond_img = FluxVaeEncodeInvocation.vae_encode(vae_info=vae_info, image_tensor=cond_img)
cond_img = pack(cond_img)
# Prepare mask conditioning.
mask = mask[:, 0, :, :]
# Rearrange the mask into 8x8 patches (64 channels) so that its spatial dimensions match the VAE-encoded latent space.
mask = einops.rearrange(mask, "b (h ph) (w pw) -> b (ph pw) h w", ph=8, pw=8)
mask = pack(mask)
# Merge image and mask conditioning.
img_cond = torch.cat((cond_img, mask), dim=-1)
return img_cond
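# Worked shape example for the concatenation above (a sketch; assumes the 16-channel FLUX VAE
# with 8x downsampling and the standard 2x2 latent patchification inside pack(), at
# width = height = 1024):
#     cond_img: [1, 3, 1024, 1024] -> VAE encode -> [1, 16, 128, 128] -> pack -> [1, 4096, 64]
#     mask:     [1, 1024, 1024]    -> 8x8 patch rearrange -> [1, 64, 128, 128] -> pack -> [1, 4096, 256]
#     img_cond: concat on the last dim -> [1, 4096, 320], i.e. one 320-dim vector per latent token.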
def _normalize_ip_adapter_fields(self) -> list[IPAdapterField]:
if self.ip_adapter is None:

View File

@@ -0,0 +1,46 @@
from invokeai.app.invocations.baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
Classification,
invocation,
invocation_output,
)
from invokeai.app.invocations.fields import (
FieldDescriptions,
FluxFillConditioningField,
InputField,
OutputField,
TensorField,
)
from invokeai.app.invocations.primitives import ImageField
from invokeai.app.services.shared.invocation_context import InvocationContext
@invocation_output("flux_fill_output")
class FluxFillOutput(BaseInvocationOutput):
"""The conditioning output of a FLUX Fill invocation."""
fill_cond: FluxFillConditioningField = OutputField(
description=FieldDescriptions.flux_fill_conditioning, title="Conditioning"
)
@invocation(
"flux_fill",
title="FLUX Fill Conditioning",
tags=["inpaint"],
category="inpaint",
version="1.0.0",
classification=Classification.Beta,
)
class FluxFillInvocation(BaseInvocation):
"""Prepare the FLUX Fill conditioning data."""
image: ImageField = InputField(description="The FLUX Fill reference image.")
mask: TensorField = InputField(
description="The bool inpainting mask. Excluded regions should be set to "
"False, included regions should be set to True.",
)
def invoke(self, context: InvocationContext) -> FluxFillOutput:
return FluxFillOutput(fill_cond=FluxFillConditioningField(image=self.image, mask=self.mask))
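# Wiring sketch (hypothetical, outside of a real graph): the output of this node feeds the
# fill_conditioning input added to FLUX Denoise earlier in this diff.
#
#     fill = FluxFillInvocation(id="fill", image=ImageField(image_name="ref.png"), mask=TensorField(tensor_name="inpaint_mask"))
#     fill_cond = fill.invoke(context).fill_cond  # a FluxFillConditioningField(image=..., mask=...)
#     # ...pass fill_cond to FluxDenoiseInvocation.fill_conditioning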

View File

@@ -4,7 +4,7 @@ from typing import List, Literal, Union
from pydantic import field_validator, model_validator
from typing_extensions import Self
from invokeai.app.invocations.baseinvocation import BaseInvocation, Classification, invocation
from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
from invokeai.app.invocations.fields import InputField, UIType
from invokeai.app.invocations.ip_adapter import (
CLIP_VISION_MODEL_MAP,
@@ -28,7 +28,6 @@ from invokeai.backend.model_manager.config import (
tags=["ip_adapter", "control"],
category="ip_adapter",
version="1.0.0",
classification=Classification.Prototype,
)
class FluxIPAdapterInvocation(BaseInvocation):
"""Collects FLUX IP-Adapter info to pass to other nodes."""

View File

@@ -3,14 +3,13 @@ from typing import Optional
from invokeai.app.invocations.baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
Classification,
invocation,
invocation_output,
)
from invokeai.app.invocations.fields import FieldDescriptions, Input, InputField, OutputField, UIType
from invokeai.app.invocations.model import CLIPField, LoRAField, ModelIdentifierField, T5EncoderField, TransformerField
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.model_manager.config import BaseModelType
from invokeai.backend.model_manager.taxonomy import BaseModelType
@invocation_output("flux_lora_loader_output")
@@ -28,11 +27,10 @@ class FluxLoRALoaderOutput(BaseInvocationOutput):
@invocation(
"flux_lora_loader",
title="FLUX LoRA",
title="Apply LoRA - FLUX",
tags=["lora", "model", "flux"],
category="model",
version="1.2.0",
classification=Classification.Prototype,
version="1.2.1",
)
class FluxLoRALoaderInvocation(BaseInvocation):
"""Apply a LoRA model to a FLUX transformer and/or text encoder."""
@@ -107,11 +105,10 @@ class FluxLoRALoaderInvocation(BaseInvocation):
@invocation(
"flux_lora_collection_loader",
title="FLUX LoRA Collection Loader",
title="Apply LoRA Collection - FLUX",
tags=["lora", "model", "flux"],
category="model",
version="1.3.0",
classification=Classification.Prototype,
version="1.3.1",
)
class FLUXLoRACollectionLoader(BaseInvocation):
"""Applies a collection of LoRAs to a FLUX transformer."""

View File

@@ -3,7 +3,6 @@ from typing import Literal
from invokeai.app.invocations.baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
Classification,
invocation,
invocation_output,
)
@@ -17,8 +16,8 @@ from invokeai.app.util.t5_model_identifier import (
from invokeai.backend.flux.util import max_seq_lengths
from invokeai.backend.model_manager.config import (
CheckpointConfigBase,
SubModelType,
)
from invokeai.backend.model_manager.taxonomy import SubModelType
@invocation_output("flux_model_loader_output")
@@ -37,11 +36,10 @@ class FluxModelLoaderOutput(BaseInvocationOutput):
@invocation(
"flux_model_loader",
title="Flux Main Model",
title="Main Model - FLUX",
tags=["model", "flux"],
category="model",
version="1.0.5",
classification=Classification.Prototype,
version="1.0.6",
)
class FluxModelLoaderInvocation(BaseInvocation):
"""Loads a flux base model, outputting its submodels."""

View File

@@ -23,7 +23,8 @@ from invokeai.app.invocations.primitives import ImageField
from invokeai.app.services.model_records.model_records_base import ModelRecordChanges
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.flux.redux.flux_redux_model import FluxReduxModel
from invokeai.backend.model_manager.config import AnyModelConfig, BaseModelType, ModelType
from invokeai.backend.model_manager import BaseModelType, ModelType
from invokeai.backend.model_manager.config import AnyModelConfig
from invokeai.backend.model_manager.starter_models import siglip
from invokeai.backend.sig_lip.sig_lip_pipeline import SigLipPipeline
from invokeai.backend.util.devices import TorchDevice
@@ -44,7 +45,7 @@ class FluxReduxOutput(BaseInvocationOutput):
tags=["ip_adapter", "control"],
category="ip_adapter",
version="2.0.0",
classification=Classification.Prototype,
classification=Classification.Beta,
)
class FluxReduxInvocation(BaseInvocation):
"""Runs a FLUX Redux model to generate a conditioning tensor."""

View File

@@ -4,7 +4,7 @@ from typing import Iterator, Literal, Optional, Tuple
import torch
from transformers import CLIPTextModel, CLIPTokenizer, T5EncoderModel, T5Tokenizer, T5TokenizerFast
from invokeai.app.invocations.baseinvocation import BaseInvocation, Classification, invocation
from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
from invokeai.app.invocations.fields import (
FieldDescriptions,
FluxConditioningField,
@@ -17,7 +17,7 @@ from invokeai.app.invocations.model import CLIPField, T5EncoderField
from invokeai.app.invocations.primitives import FluxConditioningOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.flux.modules.conditioner import HFEncoder
from invokeai.backend.model_manager.config import ModelFormat
from invokeai.backend.model_manager import ModelFormat
from invokeai.backend.patches.layer_patcher import LayerPatcher
from invokeai.backend.patches.lora_conversions.flux_lora_constants import FLUX_LORA_CLIP_PREFIX, FLUX_LORA_T5_PREFIX
from invokeai.backend.patches.model_patch_raw import ModelPatchRaw
@@ -26,11 +26,10 @@ from invokeai.backend.stable_diffusion.diffusion.conditioning_data import Condit
@invocation(
"flux_text_encoder",
title="FLUX Text Encoding",
title="Prompt - FLUX",
tags=["prompt", "conditioning", "flux"],
category="conditioning",
version="1.1.1",
classification=Classification.Prototype,
version="1.1.2",
)
class FluxTextEncoderInvocation(BaseInvocation):
"""Encodes and preps a prompt for a flux image."""

View File

@@ -22,10 +22,10 @@ from invokeai.backend.util.devices import TorchDevice
@invocation(
"flux_vae_decode",
title="FLUX Latents to Image",
title="Latents to Image - FLUX",
tags=["latents", "image", "vae", "l2i", "flux"],
category="latents",
version="1.0.1",
version="1.0.2",
)
class FluxVaeDecodeInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Generates an image from latents."""

View File

@@ -19,10 +19,10 @@ from invokeai.backend.util.devices import TorchDevice
@invocation(
"flux_vae_encode",
title="FLUX Image to Latents",
title="Image to Latents - FLUX",
tags=["latents", "image", "vae", "i2l", "flux"],
category="latents",
version="1.0.0",
version="1.0.1",
)
class FluxVaeEncodeInvocation(BaseInvocation):
"""Encodes an image into latents."""

View File

@@ -6,7 +6,7 @@ from invokeai.app.invocations.constants import LATENT_SCALE_FACTOR
from invokeai.app.invocations.fields import FieldDescriptions, InputField, OutputField
from invokeai.app.invocations.model import UNetField
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.model_manager.config import BaseModelType
from invokeai.backend.model_manager.taxonomy import BaseModelType
@invocation_output("ideal_size_output")
@@ -19,9 +19,9 @@ class IdealSizeOutput(BaseInvocationOutput):
@invocation(
"ideal_size",
title="Ideal Size",
title="Ideal Size - SD1.5, SDXL",
tags=["latents", "math", "ideal_size"],
version="1.0.4",
version="1.0.5",
)
class IdealSizeInvocation(BaseInvocation):
"""Calculates the ideal size for generation to avoid duplication"""

View File

@@ -355,7 +355,6 @@ class ImageBlurInvocation(BaseInvocation, WithMetadata, WithBoard):
tags=["image", "unsharp_mask"],
category="image",
version="1.2.2",
classification=Classification.Beta,
)
class UnsharpMaskInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Applies an unsharp mask filter to an image"""
@@ -1051,7 +1050,7 @@ class MaskFromIDInvocation(BaseInvocation, WithMetadata, WithBoard):
tags=["image", "mask", "id"],
category="image",
version="1.0.0",
classification=Classification.Internal,
classification=Classification.Deprecated,
)
class CanvasV2MaskAndCropInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Handles Canvas V2 image output masking and cropping"""
@@ -1089,6 +1088,131 @@ class CanvasV2MaskAndCropInvocation(BaseInvocation, WithMetadata, WithBoard):
return ImageOutput.build(image_dto)
@invocation(
"expand_mask_with_fade", title="Expand Mask with Fade", tags=["image", "mask"], category="image", version="1.0.1"
)
class ExpandMaskWithFadeInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Expands a mask with a fade effect. The mask uses black to indicate areas to keep from the generated image and white for areas to discard.
The mask is thresholded to create a binary mask, and then a distance transform is applied to create a fade effect.
The fade size is specified in pixels, and the mask is expanded by that amount. The result is a mask with a smooth transition from black to white.
If the fade size is 0, the mask is returned as-is.
"""
mask: ImageField = InputField(description="The mask to expand")
threshold: int = InputField(default=0, ge=0, le=255, description="The threshold for the binary mask (0-255)")
fade_size_px: int = InputField(default=32, ge=0, description="The size of the fade in pixels")
def invoke(self, context: InvocationContext) -> ImageOutput:
pil_mask = context.images.get_pil(self.mask.image_name, mode="L")
if self.fade_size_px == 0:
# If the fade size is 0, just return the mask as-is.
image_dto = context.images.save(image=pil_mask, image_category=ImageCategory.MASK)
return ImageOutput.build(image_dto)
np_mask = numpy.array(pil_mask)
# Threshold the mask to create a binary mask - 0 for black, 255 for white
# If we don't threshold we can get some weird artifacts
np_mask = numpy.where(np_mask > self.threshold, 255, 0).astype(numpy.uint8)
# Create a mask for the black region (1 where black, 0 otherwise)
black_mask = (np_mask == 0).astype(numpy.uint8)
# Invert the black region
bg_mask = 1 - black_mask
# Create a distance transform of the inverted mask
dist = cv2.distanceTransform(bg_mask, cv2.DIST_L2, 5)
# Normalize distances so that pixels <fade_size_px become a linear gradient (0 to 1)
d_norm = numpy.clip(dist / self.fade_size_px, 0, 1)
# Control points: x values (normalized distance) and corresponding fade pct y values.
# There are some magic numbers here that are used to create a smooth transition:
# - The first point is at 0% of fade size from edge of mask (meaning the edge of the mask), and is 0% fade (black)
# - The second point is 1px from the edge of the mask and also has 0% fade, effectively expanding the mask
# by 1px. This fixes an issue where artifacts can occur at the edge of the mask
# - The third point is at 20% of the fade size from the edge of the mask and has 20% fade
# - The fourth point is at 80% of the fade size from the edge of the mask and has 90% fade
# - The last point is at 100% of the fade size from the edge of the mask and has 100% fade (white)
# x values: 0 = mask edge, 1 = fade_size_px from edge
x_control = numpy.array([0.0, 1.0 / self.fade_size_px, 0.2, 0.8, 1.0])
# y values: 0 = black, 1 = white
y_control = numpy.array([0.0, 0.0, 0.2, 0.9, 1.0])
# Fit a cubic polynomial that smoothly passes through the control points
coeffs = numpy.polyfit(x_control, y_control, 3)
poly = numpy.poly1d(coeffs)
# Evaluate the polynomial
feather = poly(d_norm)
# The polynomial fit isn't perfect. Points beyond the fade distance are likely to be slightly less than 1.0,
# even though the control points indicate that they should be exactly 1.0. This is due to the nature of the
# polynomial fit, which is a best approximation of the control points but not an exact match.
# When this occurs, the area outside the mask and fade-out will not be 100% transparent. For example, it may
# have an alpha value of 1 instead of 0. So we must force pixels at or beyond the fade distance to exactly 1.0.
# Force pixels at or beyond the fade distance to exactly 1.0
feather = numpy.where(d_norm >= 1.0, 1.0, feather)
# Clip any other values to ensure they're in the valid range [0,1]
feather = numpy.clip(feather, 0, 1)
# Build final image.
np_result = numpy.where(black_mask == 1, 0, (feather * 255).astype(numpy.uint8))
# Convert back to PIL, grayscale
pil_result = Image.fromarray(np_result.astype(numpy.uint8), mode="L")
image_dto = context.images.save(image=pil_result, image_category=ImageCategory.MASK)
return ImageOutput.build(image_dto)
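For readers tracing the math above, here is a minimal standalone sketch of the same fade curve, assuming the node's default fade_size_px of 32; the control points come from the comments above, while the sampled distances are purely illustrative:
import numpy as np

fade_size_px = 32  # the node's default; any positive value works
# Control points from the comments above: normalized distance -> fade fraction
x_control = np.array([0.0, 1.0 / fade_size_px, 0.2, 0.8, 1.0])
y_control = np.array([0.0, 0.0, 0.2, 0.9, 1.0])

coeffs = np.polyfit(x_control, y_control, 3)  # best-fit cubic through the control points
poly = np.poly1d(coeffs)

# Sample the curve at a few normalized distances (0 = mask edge, 1 = fade_size_px away)
d_norm = np.array([0.0, 0.1, 0.25, 0.5, 0.75, 1.0])
fade = np.clip(poly(d_norm), 0, 1)
fade = np.where(d_norm >= 1.0, 1.0, fade)  # force exactly 1.0 at/after the fade distance
for d, f in zip(d_norm, fade):
    print(f"distance {d:.2f} -> fade {f:.3f}")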
@invocation(
"apply_mask_to_image",
title="Apply Mask to Image",
tags=["image", "mask", "blend"],
category="image",
version="1.0.0",
)
class ApplyMaskToImageInvocation(BaseInvocation, WithMetadata, WithBoard):
"""
Extracts a region from a generated image using a mask and blends it seamlessly onto a source image.
The mask uses black to indicate areas to keep from the generated image and white for areas to discard.
"""
image: ImageField = InputField(description="The image from which to extract the masked region")
mask: ImageField = InputField(description="The mask defining the region (black=keep, white=discard)")
invert_mask: bool = InputField(
default=False,
description="Whether to invert the mask before applying it",
)
def invoke(self, context: InvocationContext) -> ImageOutput:
# Load images
image = context.images.get_pil(self.image.image_name, mode="RGBA")
mask = context.images.get_pil(self.mask.image_name, mode="L")
if self.invert_mask:
# Invert the mask if requested
mask = ImageOps.invert(mask.copy())
# Combine the mask as the alpha channel of the image
r, g, b, _ = image.split() # Split the image into RGB and alpha channels
result_image = Image.merge("RGBA", (r, g, b, mask)) # Use the mask as the new alpha channel
# Save the resulting image
image_dto = context.images.save(image=result_image)
return ImageOutput.build(image_dto)
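The node only writes the mask into the alpha channel; compositing the result back onto another image happens downstream. A minimal sketch of that downstream step with plain PIL, using hypothetical file names rather than the InvokeAI context API:
from PIL import Image

# "masked_generated.png" stands in for the RGBA output of Apply Mask to Image
cutout = Image.open("masked_generated.png").convert("RGBA")
background = Image.open("source.png").convert("RGBA")  # assumed to be the same size

# Transparent pixels in the cutout reveal the background; opaque pixels replace it.
# Depending on your mask convention (black=keep vs. white=keep), the node's
# invert_mask option may be needed before this step.
composite = Image.alpha_composite(background, cutout)
composite.save("composite.png")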
@invocation(
"img_noise",
title="Add Image Noise",
@@ -1159,7 +1283,6 @@ class ImageNoiseInvocation(BaseInvocation, WithMetadata, WithBoard):
category="image",
version="1.0.0",
tags=["image", "crop"],
classification=Classification.Beta,
)
class CropImageToBoundingBoxInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Crop an image to the given bounding box. If the bounding box is omitted, the image is cropped to the non-transparent pixels."""
@@ -1186,7 +1309,6 @@ class CropImageToBoundingBoxInvocation(BaseInvocation, WithMetadata, WithBoard):
category="image",
version="1.0.0",
tags=["image", "crop"],
classification=Classification.Beta,
)
class PasteImageIntoBoundingBoxInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Paste the source image into the target image at the given bounding box.

View File

@@ -31,10 +31,10 @@ from invokeai.backend.util.devices import TorchDevice
@invocation(
"i2l",
title="Image to Latents",
title="Image to Latents - SD1.5, SDXL",
tags=["latents", "image", "vae", "i2l"],
category="latents",
version="1.1.0",
version="1.1.1",
)
class ImageToLatentsInvocation(BaseInvocation):
"""Encodes an image into latents."""

View File

@@ -13,10 +13,8 @@ from invokeai.app.services.model_records.model_records_base import ModelRecordCh
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.model_manager.config import (
AnyModelConfig,
BaseModelType,
IPAdapterCheckpointConfig,
IPAdapterInvokeAIConfig,
ModelType,
)
from invokeai.backend.model_manager.starter_models import (
StarterModel,
@@ -24,6 +22,7 @@ from invokeai.backend.model_manager.starter_models import (
ip_adapter_sd_image_encoder,
ip_adapter_sdxl_image_encoder,
)
from invokeai.backend.model_manager.taxonomy import BaseModelType, ModelType
class IPAdapterField(BaseModel):
@@ -69,7 +68,13 @@ CLIP_VISION_MODEL_MAP: dict[Literal["ViT-L", "ViT-H", "ViT-G"], StarterModel] =
}
@invocation("ip_adapter", title="IP-Adapter", tags=["ip_adapter", "control"], category="ip_adapter", version="1.5.0")
@invocation(
"ip_adapter",
title="IP-Adapter - SD1.5, SDXL",
tags=["ip_adapter", "control"],
category="ip_adapter",
version="1.5.1",
)
class IPAdapterInvocation(BaseInvocation):
"""Collects IP-Adapter info to pass to other nodes."""

View File

@@ -31,10 +31,10 @@ from invokeai.backend.util.devices import TorchDevice
@invocation(
"l2i",
title="Latents to Image",
title="Latents to Image - SD1.5, SDXL",
tags=["latents", "image", "vae", "l2i"],
category="latents",
version="1.3.1",
version="1.3.2",
)
class LatentsToImageInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Generates an image from latents."""

View File

@@ -0,0 +1,67 @@
from typing import Any
import torch
from PIL.Image import Image
from pydantic import field_validator
from invokeai.app.invocations.baseinvocation import BaseInvocation, Classification, invocation
from invokeai.app.invocations.fields import FieldDescriptions, ImageField, InputField, UIComponent, UIType
from invokeai.app.invocations.model import ModelIdentifierField
from invokeai.app.invocations.primitives import StringOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.llava_onevision_model import LlavaOnevisionModel
from invokeai.backend.util.devices import TorchDevice
@invocation(
"llava_onevision_vllm",
title="LLaVA OneVision VLLM",
tags=["vllm"],
category="vllm",
version="1.0.0",
classification=Classification.Beta,
)
class LlavaOnevisionVllmInvocation(BaseInvocation):
"""Run a LLaVA OneVision VLLM model."""
images: list[ImageField] | ImageField | None = InputField(default=None, max_length=3, description="Input image.")
prompt: str = InputField(
default="",
description="Input text prompt.",
ui_component=UIComponent.Textarea,
)
vllm_model: ModelIdentifierField = InputField(
title="LLaVA Model Type",
description=FieldDescriptions.vllm_model,
ui_type=UIType.LlavaOnevisionModel,
)
@field_validator("images", mode="before")
def listify_images(cls, v: Any) -> list:
if v is None:
return v
if not isinstance(v, list):
return [v]
return v
def _get_images(self, context: InvocationContext) -> list[Image]:
if self.images is None:
return []
image_fields = self.images if isinstance(self.images, list) else [self.images]
return [context.images.get_pil(image_field.image_name, "RGB") for image_field in image_fields]
@torch.no_grad()
def invoke(self, context: InvocationContext) -> StringOutput:
images = self._get_images(context)
with context.models.load(self.vllm_model) as vllm_model:
assert isinstance(vllm_model, LlavaOnevisionModel)
output = vllm_model.run(
prompt=self.prompt,
images=images,
device=TorchDevice.choose_torch_device(),
dtype=TorchDevice.choose_torch_dtype(),
)
return StringOutput(value=output)

View File

@@ -4,7 +4,6 @@ from PIL import Image
from invokeai.app.invocations.baseinvocation import (
BaseInvocation,
Classification,
InvocationContext,
invocation,
)
@@ -58,7 +57,6 @@ class RectangleMaskInvocation(BaseInvocation, WithMetadata):
tags=["conditioning"],
category="conditioning",
version="1.0.0",
classification=Classification.Beta,
)
class AlphaMaskToTensorInvocation(BaseInvocation):
"""Convert a mask image to a tensor. Opaque regions are 1 and transparent regions are 0."""
@@ -67,7 +65,7 @@ class AlphaMaskToTensorInvocation(BaseInvocation):
invert: bool = InputField(default=False, description="Whether to invert the mask.")
def invoke(self, context: InvocationContext) -> MaskOutput:
image = context.images.get_pil(self.image.image_name)
image = context.images.get_pil(self.image.image_name, mode="RGBA")
mask = torch.zeros((1, image.height, image.width), dtype=torch.bool)
if self.invert:
mask[0] = torch.tensor(np.array(image)[:, :, 3] == 0, dtype=torch.bool)
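The switch to mode="RGBA" above presumably guarantees that the loaded image always has an alpha channel before it is indexed. A minimal standalone sketch of the equivalent PIL/NumPy behaviour, with a hypothetical file name and no InvokeAI context API:
import numpy as np
from PIL import Image

img = Image.open("some_image.png")   # may be RGB, L, P, ... depending on the source
rgba = img.convert("RGBA")           # adds a fully opaque alpha channel if one is missing
arr = np.array(rgba)                 # shape (H, W, 4), so arr[:, :, 3] is always valid
alpha_mask = arr[:, :, 3] > 0        # True where the pixel is at least partially opaque
print(arr.shape, alpha_mask.dtype)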
@@ -87,7 +85,6 @@ class AlphaMaskToTensorInvocation(BaseInvocation):
tags=["conditioning"],
category="conditioning",
version="1.1.0",
classification=Classification.Beta,
)
class InvertTensorMaskInvocation(BaseInvocation):
"""Inverts a tensor mask."""
@@ -234,7 +231,6 @@ WHITE = ColorField(r=255, g=255, b=255, a=255)
tags=["mask"],
category="mask",
version="1.0.0",
classification=Classification.Beta,
)
class GetMaskBoundingBoxInvocation(BaseInvocation):
"""Gets the bounding box of the given mask image."""

View File

@@ -14,7 +14,7 @@ from invokeai.app.invocations.baseinvocation import (
invocation,
invocation_output,
)
from invokeai.app.invocations.controlnet_image_processors import ControlField, ControlNetInvocation
from invokeai.app.invocations.controlnet import ControlField, ControlNetInvocation
from invokeai.app.invocations.denoise_latents import DenoiseLatentsInvocation
from invokeai.app.invocations.fields import (
FieldDescriptions,
@@ -43,7 +43,7 @@ from invokeai.app.invocations.primitives import BooleanOutput, FloatOutput, Inte
from invokeai.app.invocations.scheduler import SchedulerOutput
from invokeai.app.invocations.t2i_adapter import T2IAdapterField, T2IAdapterInvocation
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.model_manager.config import ModelType, SubModelType
from invokeai.backend.model_manager.taxonomy import ModelType, SubModelType
from invokeai.backend.stable_diffusion.schedulers.schedulers import SCHEDULER_NAME_VALUES
from invokeai.version import __version__
@@ -610,10 +610,10 @@ class LatentsMetaOutput(LatentsOutput, MetadataOutput):
@invocation(
"denoise_latents_meta",
title="Denoise Latents + metadata",
title=f"{DenoiseLatentsInvocation.UIConfig.title} + Metadata",
tags=["latents", "denoise", "txt2img", "t2i", "t2l", "img2img", "i2i", "l2l"],
category="latents",
version="1.1.0",
version="1.1.1",
)
class DenoiseLatentsMetaInvocation(DenoiseLatentsInvocation, WithMetadata):
def invoke(self, context: InvocationContext) -> LatentsMetaOutput:
@@ -675,10 +675,10 @@ class DenoiseLatentsMetaInvocation(DenoiseLatentsInvocation, WithMetadata):
@invocation(
"flux_denoise_meta",
title="Flux Denoise + metadata",
title=f"{FluxDenoiseInvocation.UIConfig.title} + Metadata",
tags=["flux", "latents", "denoise", "txt2img", "t2i", "t2l", "img2img", "i2i", "l2l"],
category="latents",
version="1.0.0",
version="1.0.1",
)
class FluxDenoiseLatentsMetaInvocation(FluxDenoiseInvocation, WithMetadata):
"""Run denoising process with a FLUX transformer model + metadata."""

View File

@@ -6,7 +6,6 @@ from pydantic import BaseModel, Field
from invokeai.app.invocations.baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
Classification,
invocation,
invocation_output,
)
@@ -15,10 +14,8 @@ from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.app.shared.models import FreeUConfig
from invokeai.backend.model_manager.config import (
AnyModelConfig,
BaseModelType,
ModelType,
SubModelType,
)
from invokeai.backend.model_manager.taxonomy import BaseModelType, ModelType, SubModelType
class ModelIdentifierField(BaseModel):
@@ -122,11 +119,10 @@ class ModelIdentifierOutput(BaseInvocationOutput):
@invocation(
"model_identifier",
title="Model identifier",
title="Any Model",
tags=["model"],
category="model",
version="1.0.0",
classification=Classification.Prototype,
version="1.0.1",
)
class ModelIdentifierInvocation(BaseInvocation):
"""Selects any model, outputting it its identifier. Be careful with this one! The identifier will be accepted as
@@ -144,10 +140,10 @@ class ModelIdentifierInvocation(BaseInvocation):
@invocation(
"main_model_loader",
title="Main Model",
title="Main Model - SD1.5",
tags=["model"],
category="model",
version="1.0.3",
version="1.0.4",
)
class MainModelLoaderInvocation(BaseInvocation):
"""Loads a main model, outputting its submodels."""
@@ -181,7 +177,7 @@ class LoRALoaderOutput(BaseInvocationOutput):
clip: Optional[CLIPField] = OutputField(default=None, description=FieldDescriptions.clip, title="CLIP")
@invocation("lora_loader", title="LoRA", tags=["model"], category="model", version="1.0.3")
@invocation("lora_loader", title="Apply LoRA - SD1.5", tags=["model"], category="model", version="1.0.4")
class LoRALoaderInvocation(BaseInvocation):
"""Apply selected lora to unet and text_encoder."""
@@ -244,7 +240,7 @@ class LoRASelectorOutput(BaseInvocationOutput):
lora: LoRAField = OutputField(description="LoRA model and weight", title="LoRA")
@invocation("lora_selector", title="LoRA Selector", tags=["model"], category="model", version="1.0.1")
@invocation("lora_selector", title="Select LoRA", tags=["model"], category="model", version="1.0.3")
class LoRASelectorInvocation(BaseInvocation):
"""Selects a LoRA model and weight."""
@@ -257,7 +253,9 @@ class LoRASelectorInvocation(BaseInvocation):
return LoRASelectorOutput(lora=LoRAField(lora=self.lora, weight=self.weight))
@invocation("lora_collection_loader", title="LoRA Collection Loader", tags=["model"], category="model", version="1.1.0")
@invocation(
"lora_collection_loader", title="Apply LoRA Collection - SD1.5", tags=["model"], category="model", version="1.1.2"
)
class LoRACollectionLoader(BaseInvocation):
"""Applies a collection of LoRAs to the provided UNet and CLIP models."""
@@ -320,10 +318,10 @@ class SDXLLoRALoaderOutput(BaseInvocationOutput):
@invocation(
"sdxl_lora_loader",
title="SDXL LoRA",
title="Apply LoRA - SDXL",
tags=["lora", "model"],
category="model",
version="1.0.3",
version="1.0.5",
)
class SDXLLoRALoaderInvocation(BaseInvocation):
"""Apply selected lora to unet and text_encoder."""
@@ -400,10 +398,10 @@ class SDXLLoRALoaderInvocation(BaseInvocation):
@invocation(
"sdxl_lora_collection_loader",
title="SDXL LoRA Collection Loader",
title="Apply LoRA Collection - SDXL",
tags=["model"],
category="model",
version="1.1.0",
version="1.1.2",
)
class SDXLLoRACollectionLoader(BaseInvocation):
"""Applies a collection of SDXL LoRAs to the provided UNet and CLIP models."""
@@ -469,7 +467,9 @@ class SDXLLoRACollectionLoader(BaseInvocation):
return output
@invocation("vae_loader", title="VAE", tags=["vae", "model"], category="model", version="1.0.3")
@invocation(
"vae_loader", title="VAE Model - SD1.5, SDXL, SD3, FLUX", tags=["vae", "model"], category="model", version="1.0.4"
)
class VAELoaderInvocation(BaseInvocation):
"""Loads a VAE model, outputting a VaeLoaderOutput"""
@@ -496,10 +496,10 @@ class SeamlessModeOutput(BaseInvocationOutput):
@invocation(
"seamless",
title="Seamless",
title="Apply Seamless - SD1.5, SDXL",
tags=["seamless", "model"],
category="model",
version="1.0.1",
version="1.0.2",
)
class SeamlessModeInvocation(BaseInvocation):
"""Applies the seamless transformation to the Model UNet and VAE."""
@@ -539,7 +539,7 @@ class SeamlessModeInvocation(BaseInvocation):
return SeamlessModeOutput(unet=unet, vae=vae)
@invocation("freeu", title="FreeU", tags=["freeu"], category="unet", version="1.0.1")
@invocation("freeu", title="Apply FreeU - SD1.5, SDXL", tags=["freeu"], category="unet", version="1.0.2")
class FreeUInvocation(BaseInvocation):
"""
Applies FreeU to the UNet. Suggested values (b1/b2/s1/s2):

View File

@@ -72,10 +72,10 @@ class NoiseOutput(BaseInvocationOutput):
@invocation(
"noise",
title="Noise",
title="Create Latent Noise",
tags=["latents", "noise"],
category="latents",
version="1.0.2",
version="1.0.3",
)
class NoiseInvocation(BaseInvocation):
"""Generates latent noise."""

View File

@@ -6,7 +6,7 @@ from diffusers.models.transformers.transformer_sd3 import SD3Transformer2DModel
from torchvision.transforms.functional import resize as tv_resize
from tqdm import tqdm
from invokeai.app.invocations.baseinvocation import BaseInvocation, Classification, invocation
from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
from invokeai.app.invocations.constants import LATENT_SCALE_FACTOR
from invokeai.app.invocations.fields import (
DenoiseMaskField,
@@ -23,7 +23,7 @@ from invokeai.app.invocations.primitives import LatentsOutput
from invokeai.app.invocations.sd3_text_encoder import SD3_T5_MAX_SEQ_LEN
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.flux.sampling_utils import clip_timestep_schedule_fractional
from invokeai.backend.model_manager.config import BaseModelType
from invokeai.backend.model_manager import BaseModelType
from invokeai.backend.sd3.extensions.inpaint_extension import InpaintExtension
from invokeai.backend.stable_diffusion.diffusers_pipeline import PipelineIntermediateState
from invokeai.backend.stable_diffusion.diffusion.conditioning_data import SD3ConditioningInfo
@@ -32,11 +32,10 @@ from invokeai.backend.util.devices import TorchDevice
@invocation(
"sd3_denoise",
title="SD3 Denoise",
title="Denoise - SD3",
tags=["image", "sd3"],
category="image",
version="1.1.0",
classification=Classification.Prototype,
version="1.1.1",
)
class SD3DenoiseInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Run denoising process with a SD3 model."""

View File

@@ -2,7 +2,7 @@ import einops
import torch
from diffusers.models.autoencoders.autoencoder_kl import AutoencoderKL
from invokeai.app.invocations.baseinvocation import BaseInvocation, Classification, invocation
from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
from invokeai.app.invocations.fields import (
FieldDescriptions,
ImageField,
@@ -21,11 +21,10 @@ from invokeai.backend.util.devices import TorchDevice
@invocation(
"sd3_i2l",
title="SD3 Image to Latents",
title="Image to Latents - SD3",
tags=["image", "latents", "vae", "i2l", "sd3"],
category="image",
version="1.0.0",
classification=Classification.Prototype,
version="1.0.1",
)
class SD3ImageToLatentsInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Generates latents from an image."""

View File

@@ -24,10 +24,10 @@ from invokeai.backend.util.devices import TorchDevice
@invocation(
"sd3_l2i",
title="SD3 Latents to Image",
title="Latents to Image - SD3",
tags=["latents", "image", "vae", "l2i", "sd3"],
category="latents",
version="1.3.1",
version="1.3.2",
)
class SD3LatentsToImageInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Generates an image from latents."""

View File

@@ -3,7 +3,6 @@ from typing import Optional
from invokeai.app.invocations.baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
Classification,
invocation,
invocation_output,
)
@@ -14,7 +13,7 @@ from invokeai.app.util.t5_model_identifier import (
preprocess_t5_encoder_model_identifier,
preprocess_t5_tokenizer_model_identifier,
)
from invokeai.backend.model_manager.config import SubModelType
from invokeai.backend.model_manager.taxonomy import SubModelType
@invocation_output("sd3_model_loader_output")
@@ -30,11 +29,10 @@ class Sd3ModelLoaderOutput(BaseInvocationOutput):
@invocation(
"sd3_model_loader",
title="SD3 Main Model",
title="Main Model - SD3",
tags=["model", "sd3"],
category="model",
version="1.0.0",
classification=Classification.Prototype,
version="1.0.1",
)
class Sd3ModelLoaderInvocation(BaseInvocation):
"""Loads a SD3 base model, outputting its submodels."""

View File

@@ -11,12 +11,12 @@ from transformers import (
T5TokenizerFast,
)
from invokeai.app.invocations.baseinvocation import BaseInvocation, Classification, invocation
from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
from invokeai.app.invocations.fields import FieldDescriptions, Input, InputField
from invokeai.app.invocations.model import CLIPField, T5EncoderField
from invokeai.app.invocations.primitives import SD3ConditioningOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.model_manager.config import ModelFormat
from invokeai.backend.model_manager.taxonomy import ModelFormat
from invokeai.backend.patches.layer_patcher import LayerPatcher
from invokeai.backend.patches.lora_conversions.flux_lora_constants import FLUX_LORA_CLIP_PREFIX
from invokeai.backend.patches.model_patch_raw import ModelPatchRaw
@@ -29,11 +29,10 @@ SD3_T5_MAX_SEQ_LEN = 256
@invocation(
"sd3_text_encoder",
title="SD3 Text Encoding",
title="Prompt - SD3",
tags=["prompt", "conditioning", "sd3"],
category="conditioning",
version="1.0.0",
classification=Classification.Prototype,
version="1.0.1",
)
class Sd3TextEncoderInvocation(BaseInvocation):
"""Encodes and preps a prompt for a SD3 image."""

View File

@@ -2,7 +2,7 @@ from invokeai.app.invocations.baseinvocation import BaseInvocation, BaseInvocati
from invokeai.app.invocations.fields import FieldDescriptions, InputField, OutputField, UIType
from invokeai.app.invocations.model import CLIPField, ModelIdentifierField, UNetField, VAEField
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.model_manager import SubModelType
from invokeai.backend.model_manager.taxonomy import SubModelType
@invocation_output("sdxl_model_loader_output")
@@ -24,7 +24,7 @@ class SDXLRefinerModelLoaderOutput(BaseInvocationOutput):
vae: VAEField = OutputField(description=FieldDescriptions.vae, title="VAE")
@invocation("sdxl_model_loader", title="SDXL Main Model", tags=["model", "sdxl"], category="model", version="1.0.3")
@invocation("sdxl_model_loader", title="Main Model - SDXL", tags=["model", "sdxl"], category="model", version="1.0.4")
class SDXLModelLoaderInvocation(BaseInvocation):
"""Loads an sdxl base model, outputting its submodels."""
@@ -58,10 +58,10 @@ class SDXLModelLoaderInvocation(BaseInvocation):
@invocation(
"sdxl_refiner_model_loader",
title="SDXL Refiner Model",
title="Refiner Model - SDXL",
tags=["model", "sdxl", "refiner"],
category="model",
version="1.0.3",
version="1.0.4",
)
class SDXLRefinerModelLoaderInvocation(BaseInvocation):
"""Loads an sdxl refiner model, outputting its submodels."""

View File

@@ -45,7 +45,11 @@ class T2IAdapterOutput(BaseInvocationOutput):
@invocation(
"t2i_adapter", title="T2I-Adapter", tags=["t2i_adapter", "control"], category="t2i_adapter", version="1.0.3"
"t2i_adapter",
title="T2I-Adapter - SD1.5, SDXL",
tags=["t2i_adapter", "control"],
category="t2i_adapter",
version="1.0.4",
)
class T2IAdapterInvocation(BaseInvocation):
"""Collects T2I-Adapter info to pass to other nodes."""

View File

@@ -7,9 +7,9 @@ from diffusers.models.unets.unet_2d_condition import UNet2DConditionModel
from diffusers.schedulers.scheduling_utils import SchedulerMixin
from pydantic import field_validator
from invokeai.app.invocations.baseinvocation import BaseInvocation, Classification, invocation
from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
from invokeai.app.invocations.constants import LATENT_SCALE_FACTOR
from invokeai.app.invocations.controlnet_image_processors import ControlField
from invokeai.app.invocations.controlnet import ControlField
from invokeai.app.invocations.denoise_latents import DenoiseLatentsInvocation, get_scheduler
from invokeai.app.invocations.fields import (
ConditioningField,
@@ -53,11 +53,10 @@ def crop_controlnet_data(control_data: ControlNetData, latent_region: TBLR) -> C
@invocation(
"tiled_multi_diffusion_denoise_latents",
title="Tiled Multi-Diffusion Denoise Latents",
title="Tiled Multi-Diffusion Denoise - SD1.5, SDXL",
tags=["upscale", "denoise"],
category="latents",
classification=Classification.Beta,
version="1.0.0",
version="1.0.1",
)
class TiledMultiDiffusionDenoiseLatents(BaseInvocation):
"""Tiled Multi-Diffusion denoising.

View File

@@ -7,7 +7,6 @@ from pydantic import BaseModel
from invokeai.app.invocations.baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
Classification,
invocation,
invocation_output,
)
@@ -40,7 +39,6 @@ class CalculateImageTilesOutput(BaseInvocationOutput):
tags=["tiles"],
category="tiles",
version="1.0.1",
classification=Classification.Beta,
)
class CalculateImageTilesInvocation(BaseInvocation):
"""Calculate the coordinates and overlaps of tiles that cover a target image shape."""
@@ -74,7 +72,6 @@ class CalculateImageTilesInvocation(BaseInvocation):
tags=["tiles"],
category="tiles",
version="1.1.1",
classification=Classification.Beta,
)
class CalculateImageTilesEvenSplitInvocation(BaseInvocation):
"""Calculate the coordinates and overlaps of tiles that cover a target image shape."""
@@ -117,7 +114,6 @@ class CalculateImageTilesEvenSplitInvocation(BaseInvocation):
tags=["tiles"],
category="tiles",
version="1.0.1",
classification=Classification.Beta,
)
class CalculateImageTilesMinimumOverlapInvocation(BaseInvocation):
"""Calculate the coordinates and overlaps of tiles that cover a target image shape."""
@@ -168,7 +164,6 @@ class TileToPropertiesOutput(BaseInvocationOutput):
tags=["tiles"],
category="tiles",
version="1.0.1",
classification=Classification.Beta,
)
class TileToPropertiesInvocation(BaseInvocation):
"""Split a Tile into its individual properties."""
@@ -201,7 +196,6 @@ class PairTileImageOutput(BaseInvocationOutput):
tags=["tiles"],
category="tiles",
version="1.0.1",
classification=Classification.Beta,
)
class PairTileImageInvocation(BaseInvocation):
"""Pair an image with its tile properties."""
@@ -230,7 +224,6 @@ BLEND_MODES = Literal["Linear", "Seam"]
tags=["tiles"],
category="tiles",
version="1.1.1",
classification=Classification.Beta,
)
class MergeTilesToImageInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Merge multiple tile images into a single image."""

View File

@@ -41,16 +41,15 @@ def run_app() -> None:
)
# Find an open port, and modify the config accordingly.
orig_config_port = app_config.port
app_config.port = find_open_port(app_config.port)
if orig_config_port != app_config.port:
first_open_port = find_open_port(app_config.port)
if app_config.port != first_open_port:
orig_config_port = app_config.port
app_config.port = first_open_port
logger.warning(f"Port {orig_config_port} is already in use. Using port {app_config.port}.")
# Miscellaneous startup tasks.
apply_monkeypatches()
register_mime_types()
if app_config.dev_reload:
enable_dev_reload()
check_cudnn(logger)
# Initialize the app and event loop.
@@ -61,6 +60,11 @@ def run_app() -> None:
# core nodes have been imported so that we can catch when a custom node clobbers a core node.
load_custom_nodes(custom_nodes_path=app_config.custom_nodes_path, logger=logger)
if app_config.dev_reload:
# load_custom_nodes seems to bypass jurrigged's import sniffer, so be sure to call it *after* they're already
# imported.
enable_dev_reload(custom_nodes_path=app_config.custom_nodes_path)
# Start the server.
config = uvicorn.Config(
app=app,

View File

@@ -44,7 +44,8 @@ if TYPE_CHECKING:
SessionQueueItem,
SessionQueueStatus,
)
from invokeai.backend.model_manager.config import AnyModelConfig, SubModelType
from invokeai.backend.model_manager import SubModelType
from invokeai.backend.model_manager.config import AnyModelConfig
class EventServiceBase:

View File

@@ -16,7 +16,8 @@ from invokeai.app.services.session_queue.session_queue_common import (
)
from invokeai.app.services.shared.graph import AnyInvocation, AnyInvocationOutput
from invokeai.app.util.misc import get_timestamp
from invokeai.backend.model_manager.config import AnyModelConfig, SubModelType
from invokeai.backend.model_manager import SubModelType
from invokeai.backend.model_manager.config import AnyModelConfig
if TYPE_CHECKING:
from invokeai.app.services.download.download_base import DownloadJob

View File

@@ -10,9 +10,9 @@ from typing_extensions import Annotated
from invokeai.app.services.download import DownloadJob, MultiFileDownloadJob
from invokeai.app.services.model_records import ModelRecordChanges
from invokeai.backend.model_manager import AnyModelConfig, ModelRepoVariant
from invokeai.backend.model_manager.config import ModelSourceType
from invokeai.backend.model_manager.config import AnyModelConfig
from invokeai.backend.model_manager.metadata import AnyModelRepoMetadata
from invokeai.backend.model_manager.taxonomy import ModelRepoVariant, ModelSourceType
class InstallStatus(str, Enum):

View File

@@ -38,9 +38,9 @@ from invokeai.backend.model_manager.config import (
AnyModelConfig,
CheckpointConfigBase,
InvalidModelConfigException,
ModelRepoVariant,
ModelSourceType,
ModelConfigBase,
)
from invokeai.backend.model_manager.legacy_probe import ModelProbe
from invokeai.backend.model_manager.metadata import (
AnyModelRepoMetadata,
HuggingFaceMetadataFetch,
@@ -49,8 +49,8 @@ from invokeai.backend.model_manager.metadata import (
RemoteModelFile,
)
from invokeai.backend.model_manager.metadata.metadata_base import HuggingFaceMetadata
from invokeai.backend.model_manager.probe import ModelProbe
from invokeai.backend.model_manager.search import ModelSearch
from invokeai.backend.model_manager.taxonomy import ModelRepoVariant, ModelSourceType
from invokeai.backend.util import InvokeAILogger
from invokeai.backend.util.catch_sigint import catch_sigint
from invokeai.backend.util.devices import TorchDevice
@@ -182,9 +182,7 @@ class ModelInstallService(ModelInstallServiceBase):
) -> str: # noqa D102
model_path = Path(model_path)
config = config or ModelRecordChanges()
info: AnyModelConfig = ModelProbe.probe(
Path(model_path), config.model_dump(), hash_algo=self._app_config.hashing_algorithm
) # type: ignore
info: AnyModelConfig = self._probe(Path(model_path), config) # type: ignore
if preferred_name := config.name:
preferred_name = Path(preferred_name).with_suffix(model_path.suffix)
@@ -644,12 +642,22 @@ class ModelInstallService(ModelInstallServiceBase):
move(old_path, new_path)
return new_path
def _probe(self, model_path: Path, config: Optional[ModelRecordChanges] = None):
config = config or ModelRecordChanges()
hash_algo = self._app_config.hashing_algorithm
fields = config.model_dump()
try:
return ModelConfigBase.classify(model_path=model_path, hash_algo=hash_algo, **fields)
except InvalidModelConfigException:
return ModelProbe.probe(model_path=model_path, fields=fields, hash_algo=hash_algo) # type: ignore
def _register(
self, model_path: Path, config: Optional[ModelRecordChanges] = None, info: Optional[AnyModelConfig] = None
) -> str:
config = config or ModelRecordChanges()
info = info or ModelProbe.probe(model_path, config.model_dump(), hash_algo=self._app_config.hashing_algorithm) # type: ignore
info = info or self._probe(model_path, config)
model_path = model_path.resolve()

View File

@@ -5,9 +5,10 @@ from abc import ABC, abstractmethod
from pathlib import Path
from typing import Callable, Optional
from invokeai.backend.model_manager import AnyModel, AnyModelConfig, SubModelType
from invokeai.backend.model_manager.config import AnyModelConfig
from invokeai.backend.model_manager.load import LoadedModel, LoadedModelWithoutConfig
from invokeai.backend.model_manager.load.model_cache.model_cache import ModelCache
from invokeai.backend.model_manager.taxonomy import AnyModel, SubModelType
class ModelLoadServiceBase(ABC):

View File

@@ -11,7 +11,7 @@ from torch import load as torch_load
from invokeai.app.services.config import InvokeAIAppConfig
from invokeai.app.services.invoker import Invoker
from invokeai.app.services.model_load.model_load_base import ModelLoadServiceBase
from invokeai.backend.model_manager import AnyModel, AnyModelConfig, SubModelType
from invokeai.backend.model_manager.config import AnyModelConfig
from invokeai.backend.model_manager.load import (
LoadedModel,
LoadedModelWithoutConfig,
@@ -20,6 +20,7 @@ from invokeai.backend.model_manager.load import (
)
from invokeai.backend.model_manager.load.model_cache.model_cache import ModelCache
from invokeai.backend.model_manager.load.model_loaders.generic_diffusers import GenericDiffusersLoader
from invokeai.backend.model_manager.taxonomy import AnyModel, SubModelType
from invokeai.backend.util.devices import TorchDevice
from invokeai.backend.util.logging import InvokeAILogger
@@ -85,8 +86,11 @@ class ModelLoadService(ModelLoadServiceBase):
def torch_load_file(checkpoint: Path) -> AnyModel:
scan_result = scan_file_path(checkpoint)
if scan_result.infected_files != 0 or scan_result.scan_err:
raise Exception("The model at {checkpoint} is potentially infected by malware. Aborting load.")
if scan_result.infected_files != 0:
raise Exception(f"The model at {checkpoint} is potentially infected by malware. Aborting load.")
if scan_result.scan_err:
raise Exception(f"Error scanning model at {checkpoint} for malware. Aborting load.")
result = torch_load(checkpoint, map_location="cpu")
return result

View File

@@ -1,16 +1,12 @@
"""Initialization file for model manager service."""
from invokeai.app.services.model_manager.model_manager_default import ModelManagerService, ModelManagerServiceBase
from invokeai.backend.model_manager import AnyModel, AnyModelConfig, BaseModelType, ModelType, SubModelType
from invokeai.backend.model_manager import AnyModelConfig
from invokeai.backend.model_manager.load import LoadedModel
__all__ = [
"ModelManagerServiceBase",
"ModelManagerService",
"AnyModel",
"AnyModelConfig",
"BaseModelType",
"ModelType",
"SubModelType",
"LoadedModel",
]

View File

@@ -14,10 +14,12 @@ from invokeai.app.services.shared.pagination import PaginatedResults
from invokeai.app.util.model_exclude_null import BaseModelExcludeNull
from invokeai.backend.model_manager.config import (
AnyModelConfig,
BaseModelType,
ClipVariantType,
ControlAdapterDefaultSettings,
MainModelDefaultSettings,
)
from invokeai.backend.model_manager.taxonomy import (
BaseModelType,
ClipVariantType,
ModelFormat,
ModelSourceType,
ModelType,

View File

@@ -60,11 +60,9 @@ from invokeai.app.services.shared.pagination import PaginatedResults
from invokeai.app.services.shared.sqlite.sqlite_database import SqliteDatabase
from invokeai.backend.model_manager.config import (
AnyModelConfig,
BaseModelType,
ModelConfigFactory,
ModelFormat,
ModelType,
)
from invokeai.backend.model_manager.taxonomy import BaseModelType, ModelFormat, ModelType
class ModelRecordServiceSQL(ModelRecordServiceBase):
@@ -304,7 +302,10 @@ class ModelRecordServiceSQL(ModelRecordServiceBase):
# We catch this error so that the app can still run if there are invalid model configs in the database.
# One reason that an invalid model config might be in the database is if someone had to rollback from a
# newer version of the app that added a new model type.
self._logger.warning(f"Found an invalid model config in the database. Ignoring this model. ({row[0]})")
row_data = f"{row[0][:64]}..." if len(row[0]) > 64 else row[0]
self._logger.warning(
f"Found an invalid model config in the database. Ignoring this model. ({row_data})"
)
else:
results.append(model_config)

View File

@@ -21,10 +21,16 @@ class ObjectSerializerDisk(ObjectSerializerBase[T]):
"""Disk-backed storage for arbitrary python objects. Serialization is handled by `torch.save` and `torch.load`.
:param output_dir: The folder where the serialized objects will be stored
:param safe_globals: A list of types to be added to the safe globals for torch serialization
:param ephemeral: If True, objects will be stored in a temporary directory inside the given output_dir and cleaned up on exit
"""
def __init__(self, output_dir: Path, ephemeral: bool = False):
def __init__(
self,
output_dir: Path,
safe_globals: list[type],
ephemeral: bool = False,
) -> None:
super().__init__()
self._ephemeral = ephemeral
self._base_output_dir = output_dir
@@ -42,6 +48,8 @@ class ObjectSerializerDisk(ObjectSerializerBase[T]):
self._output_dir = Path(self._tempdir.name) if self._tempdir else self._base_output_dir
self.__obj_class_name: Optional[str] = None
torch.serialization.add_safe_globals(safe_globals) if safe_globals else None
def load(self, name: str) -> T:
file_path = self._get_path(name)
try:
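The new safe_globals parameter is fed into torch.serialization.add_safe_globals, which in recent torch releases (where torch.load defaults to weights_only loading) allow-lists the classes the serializer needs to reconstruct. A minimal sketch of the underlying pattern, independent of InvokeAI and with a made-up payload class; assumes torch >= 2.4:
import tempfile
from dataclasses import dataclass
from pathlib import Path

import torch

@dataclass
class Payload:  # stand-in for whatever object types the serializer stores
    value: int

# Allow-list the class so weights_only deserialization will accept it
torch.serialization.add_safe_globals([Payload])

path = Path(tempfile.mkdtemp()) / "payload.pt"
torch.save(Payload(value=42), path)
loaded = torch.load(path, map_location="cpu", weights_only=True)
print(loaded)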

View File

@@ -201,6 +201,12 @@ def get_workflow(queue_item_dict: dict) -> Optional[WorkflowWithoutID]:
return None
class FieldIdentifier(BaseModel):
kind: Literal["input", "output"] = Field(description="The kind of field")
node_id: str = Field(description="The ID of the node")
field_name: str = Field(description="The name of the field")
class SessionQueueItemWithoutGraph(BaseModel):
"""Session queue item without the full graph. Used for serialization."""
@@ -237,6 +243,20 @@ class SessionQueueItemWithoutGraph(BaseModel):
retried_from_item_id: Optional[int] = Field(
default=None, description="The item_id of the queue item that this item was retried from"
)
is_api_validation_run: bool = Field(
default=False,
description="Whether this queue item is an API validation run.",
)
published_workflow_id: Optional[str] = Field(
default=None,
description="The ID of the published workflow associated with this queue item",
)
api_input_fields: Optional[list[FieldIdentifier]] = Field(
default=None, description="The fields that were used as input to the API"
)
api_output_fields: Optional[list[FieldIdentifier]] = Field(
default=None, description="The nodes that were used as output from the API"
)
@classmethod
def queue_item_dto_from_dict(cls, queue_item_dict: dict) -> "SessionQueueItemDTO":
@@ -570,7 +590,10 @@ ValueToInsertTuple: TypeAlias = tuple[
str | None, # destination (optional)
int | None, # retried_from_item_id (optional, this is always None for new items)
]
"""A type alias for the tuple of values to insert into the session queue table."""
"""A type alias for the tuple of values to insert into the session queue table.
**If you change this, be sure to update the `enqueue_batch` and `retry_items_by_id` methods in the session queue service!**
"""
def prepare_values_to_insert(

View File

@@ -27,6 +27,7 @@ from invokeai.app.services.session_queue.session_queue_common import (
SessionQueueItemDTO,
SessionQueueItemNotFoundError,
SessionQueueStatus,
ValueToInsertTuple,
calc_session_count,
prepare_values_to_insert,
)
@@ -689,7 +690,7 @@ class SqliteSessionQueue(SessionQueueBase):
"""Retries the given queue items"""
try:
cursor = self._conn.cursor()
values_to_insert: list[tuple] = []
values_to_insert: list[ValueToInsertTuple] = []
retried_item_ids: list[int] = []
for item_id in item_ids:
@@ -715,16 +716,16 @@ class SqliteSessionQueue(SessionQueueBase):
else queue_item.item_id
)
value_to_insert = (
value_to_insert: ValueToInsertTuple = (
queue_item.queue_id,
queue_item.batch_id,
queue_item.destination,
field_values_json,
queue_item.origin,
queue_item.priority,
workflow_json,
cloned_session_json,
cloned_session.id,
queue_item.batch_id,
field_values_json,
queue_item.priority,
workflow_json,
queue_item.origin,
queue_item.destination,
retried_from_item_id,
)
values_to_insert.append(value_to_insert)

View File

@@ -21,6 +21,7 @@ from invokeai.app.invocations import * # noqa: F401 F403
from invokeai.app.invocations.baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
InvocationRegistry,
invocation,
invocation_output,
)
@@ -283,7 +284,7 @@ class AnyInvocation(BaseInvocation):
@classmethod
def __get_pydantic_core_schema__(cls, source_type: Any, handler: GetCoreSchemaHandler) -> core_schema.CoreSchema:
def validate_invocation(v: Any) -> "AnyInvocation":
return BaseInvocation.get_typeadapter().validate_python(v)
return InvocationRegistry.get_invocation_typeadapter().validate_python(v)
return core_schema.no_info_plain_validator_function(validate_invocation)
@@ -294,7 +295,7 @@ class AnyInvocation(BaseInvocation):
# Nodes are too powerful, we have to make our own OpenAPI schema manually
# No but really, because the schema is dynamic depending on loaded nodes, we need to generate it manually
oneOf: list[dict[str, str]] = []
names = [i.__name__ for i in BaseInvocation.get_invocations()]
names = [i.__name__ for i in InvocationRegistry.get_invocation_classes()]
for name in sorted(names):
oneOf.append({"$ref": f"#/components/schemas/{name}"})
return {"oneOf": oneOf}
@@ -304,7 +305,7 @@ class AnyInvocationOutput(BaseInvocationOutput):
@classmethod
def __get_pydantic_core_schema__(cls, source_type: Any, handler: GetCoreSchemaHandler):
def validate_invocation_output(v: Any) -> "AnyInvocationOutput":
return BaseInvocationOutput.get_typeadapter().validate_python(v)
return InvocationRegistry.get_output_typeadapter().validate_python(v)
return core_schema.no_info_plain_validator_function(validate_invocation_output)
@@ -316,7 +317,7 @@ class AnyInvocationOutput(BaseInvocationOutput):
# No but really, because the schema is dynamic depending on loaded nodes, we need to generate it manually
oneOf: list[dict[str, str]] = []
names = [i.__name__ for i in BaseInvocationOutput.get_outputs()]
names = [i.__name__ for i in InvocationRegistry.get_output_classes()]
for name in sorted(names):
oneOf.append({"$ref": f"#/components/schemas/{name}"})
return {"oneOf": oneOf}

View File

@@ -20,14 +20,10 @@ from invokeai.app.services.session_processor.session_processor_common import Pro
from invokeai.app.services.shared.sqlite.sqlite_common import SQLiteDirection
from invokeai.app.util.step_callback import flux_step_callback, stable_diffusion_step_callback
from invokeai.backend.model_manager.config import (
AnyModel,
AnyModelConfig,
BaseModelType,
ModelFormat,
ModelType,
SubModelType,
)
from invokeai.backend.model_manager.load.load_base import LoadedModel, LoadedModelWithoutConfig
from invokeai.backend.model_manager.taxonomy import AnyModel, BaseModelType, ModelFormat, ModelType, SubModelType
from invokeai.backend.stable_diffusion.diffusers_pipeline import PipelineIntermediateState
from invokeai.backend.stable_diffusion.diffusion.conditioning_data import ConditioningFieldData

View File

@@ -20,6 +20,7 @@ from invokeai.app.services.shared.sqlite_migrator.migrations.migration_14 import
from invokeai.app.services.shared.sqlite_migrator.migrations.migration_15 import build_migration_15
from invokeai.app.services.shared.sqlite_migrator.migrations.migration_16 import build_migration_16
from invokeai.app.services.shared.sqlite_migrator.migrations.migration_17 import build_migration_17
from invokeai.app.services.shared.sqlite_migrator.migrations.migration_18 import build_migration_18
from invokeai.app.services.shared.sqlite_migrator.sqlite_migrator_impl import SqliteMigrator
@@ -57,6 +58,7 @@ def init_db(config: InvokeAIAppConfig, logger: Logger, image_files: ImageFileSto
migrator.register_migration(build_migration_15())
migrator.register_migration(build_migration_16())
migrator.register_migration(build_migration_17())
migrator.register_migration(build_migration_18())
migrator.run_migrations()
return db

View File

@@ -0,0 +1,47 @@
import sqlite3
from invokeai.app.services.shared.sqlite_migrator.sqlite_migrator_common import Migration
class Migration18Callback:
def __call__(self, cursor: sqlite3.Cursor) -> None:
self._make_workflow_opened_at_nullable(cursor)
def _make_workflow_opened_at_nullable(self, cursor: sqlite3.Cursor) -> None:
"""
Make the `opened_at` column nullable in the `workflow_library` table. This is accomplished by:
- Dropping the existing `idx_workflow_library_opened_at` index (must be done before dropping the column)
- Dropping the existing `opened_at` column
- Adding a new nullable column `opened_at` (no data migration needed, all values will be NULL)
- Adding a new `idx_workflow_library_opened_at` index on the `opened_at` column
"""
# The index must be dropped before the column it covers can be dropped; it is recreated below
cursor.execute("DROP INDEX IF EXISTS idx_workflow_library_opened_at;")
# Drop the existing non-nullable column
cursor.execute("ALTER TABLE workflow_library DROP COLUMN opened_at;")
# Add new nullable column - all values will be NULL - no migration of data needed
cursor.execute("ALTER TABLE workflow_library ADD COLUMN opened_at DATETIME;")
# Create new index on the new column
cursor.execute(
"CREATE INDEX idx_workflow_library_opened_at ON workflow_library(opened_at);",
)
def build_migration_18() -> Migration:
"""
Build the migration from database version 17 to 18.
This migration does the following:
- Make the `opened_at` column nullable in the `workflow_library` table. This is accomplished by:
- Dropping the existing `idx_workflow_library_opened_at` index (must be done before dropping the column)
- Dropping the existing `opened_at` column
- Adding a new nullable column `opened_at` (no data migration needed, all values will be NULL)
- Adding a new `idx_workflow_library_opened_at` index on the `opened_at` column
"""
migration_18 = Migration(
from_version=17,
to_version=18,
callback=Migration18Callback(),
)
return migration_18
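The drop-index-first ordering exists because SQLite refuses to drop a column that is still referenced by an index. A minimal sketch replaying the same steps against a throwaway in-memory table (the extra id column is an assumption for illustration; DROP COLUMN requires SQLite >= 3.35):
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE workflow_library (id TEXT PRIMARY KEY, opened_at DATETIME NOT NULL)")
cur.execute("CREATE INDEX idx_workflow_library_opened_at ON workflow_library(opened_at)")

# The index must go first, otherwise DROP COLUMN fails on the indexed column
cur.execute("DROP INDEX IF EXISTS idx_workflow_library_opened_at")
cur.execute("ALTER TABLE workflow_library DROP COLUMN opened_at")
cur.execute("ALTER TABLE workflow_library ADD COLUMN opened_at DATETIME")  # nullable by default
cur.execute("CREATE INDEX idx_workflow_library_opened_at ON workflow_library(opened_at)")
conn.commit()
print([row[1] for row in cur.execute("PRAGMA table_info(workflow_library)")])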

View File

@@ -1,11 +1,11 @@
{
"id": "default_686bb1d0-d086-4c70-9fa3-2f600b922023",
"name": "ESRGAN Upscaling with Canny ControlNet",
"name": "Upscaler - SD1.5, ESRGAN",
"author": "InvokeAI",
"description": "Sample workflow for using Upscaling with ControlNet with SD1.5",
"description": "Sample workflow for using ESRGAN to upscale with ControlNet with SD1.5",
"version": "2.1.0",
"contact": "invoke@invoke.ai",
"tags": "upscaling, controlnet, default",
"tags": "sd1.5, upscaling, control",
"notes": "",
"exposedFields": [
{
@@ -185,14 +185,7 @@
},
"control_model": {
"name": "control_model",
"label": "Control Model (select Canny)",
"value": {
"key": "a7b9c76f-4bc5-42aa-b918-c1c458a5bb24",
"hash": "blake3:260c7f8e10aefea9868cfc68d89970e91033bd37132b14b903e70ee05ebf530e",
"name": "sd-controlnet-canny",
"base": "sd-1",
"type": "controlnet"
}
"label": "Control Model (select Canny)"
},
"control_weight": {
"name": "control_weight",
@@ -295,14 +288,7 @@
"inputs": {
"model": {
"name": "model",
"label": "",
"value": {
"key": "5cd43ca0-dd0a-418d-9f7e-35b2b9d5e106",
"hash": "blake3:6987f323017f597213cc3264250edf57056d21a40a0a85d83a1a33a7d44dc41a",
"name": "Deliberate_v5",
"base": "sd-1",
"type": "main"
}
"label": ""
}
},
"isOpen": true,
@@ -849,4 +835,4 @@
"targetHandle": "image_resolution"
}
]
}
}

View File

@@ -1,11 +1,11 @@
{
"id": "default_cbf0e034-7b54-4b2c-b670-3b1e2e4b4a88",
"name": "FLUX Image to Image",
"name": "Image to Image - FLUX",
"author": "InvokeAI",
"description": "A simple image-to-image workflow using a FLUX dev model. ",
"version": "1.1.0",
"contact": "",
"tags": "image2image, flux, image-to-image, image to image",
"tags": "flux, image to image",
"notes": "Prerequisite model downloads: T5 Encoder, CLIP-L Encoder, and FLUX VAE. Quantized and un-quantized versions can be found in the starter models tab within your Model Manager. We recommend using FLUX dev models for image-to-image workflows. The image-to-image performance with FLUX schnell models is poor.",
"exposedFields": [
{
@@ -201,36 +201,15 @@
},
"t5_encoder_model": {
"name": "t5_encoder_model",
"label": "",
"value": {
"key": "d18d5575-96b6-4da3-b3d8-eb58308d6705",
"hash": "random:f2f9ed74acdfb4bf6fec200e780f6c25f8dd8764a35e65d425d606912fdf573a",
"name": "t5_bnb_int8_quantized_encoder",
"base": "any",
"type": "t5_encoder"
}
"label": ""
},
"clip_embed_model": {
"name": "clip_embed_model",
"label": "",
"value": {
"key": "5a19d7e5-8d98-43cd-8a81-87515e4b3b4e",
"hash": "random:4bd08514c08fb6ff04088db9aeb45def3c488e8b5fd09a35f2cc4f2dc346f99f",
"name": "clip-vit-large-patch14",
"base": "any",
"type": "clip_embed"
}
"label": ""
},
"vae_model": {
"name": "vae_model",
"label": "",
"value": {
"key": "9172beab-5c1d-43f0-b2f0-6e0b956710d9",
"hash": "random:c54dde288e5fa2e6137f1c92e9d611f598049e6f16e360207b6d96c9f5a67ba0",
"name": "FLUX.1-schnell_ae",
"base": "flux",
"type": "vae"
}
"label": ""
}
}
},

View File

@@ -1,11 +1,11 @@
{
"id": "default_dec5a2e9-f59c-40d9-8869-a056751d79b8",
"name": "Face Detailer with IP-Adapter & Canny (See Note in Details)",
"name": "Face Detailer - SD1.5",
"author": "kosmoskatten",
"description": "A workflow to add detail to and improve faces. This workflow is most effective when used with a model that creates realistic outputs. ",
"version": "2.1.0",
"contact": "invoke@invoke.ai",
"tags": "face detailer, IP-Adapter, Canny",
"tags": "sd1.5, reference image, control",
"notes": "Set this image as the blur mask: https://i.imgur.com/Gxi61zP.png",
"exposedFields": [
{
@@ -136,14 +136,7 @@
},
"control_model": {
"name": "control_model",
"label": "Control Model (select canny)",
"value": {
"key": "5bdaacf7-a7a3-4fb8-b394-cc0ffbb8941d",
"hash": "blake3:260c7f8e10aefea9868cfc68d89970e91033bd37132b14b903e70ee05ebf530e",
"name": "sd-controlnet-canny",
"base": "sd-1",
"type": "controlnet"
}
"label": "Control Model (select canny)"
},
"control_weight": {
"name": "control_weight",
@@ -197,14 +190,7 @@
},
"ip_adapter_model": {
"name": "ip_adapter_model",
"label": "IP-Adapter Model (select IP Adapter Face)",
"value": {
"key": "1cc210bb-4d0a-4312-b36c-b5d46c43768e",
"hash": "blake3:3d669dffa7471b357b4df088b99ffb6bf4d4383d5e0ef1de5ec1c89728a3d5a5",
"name": "ip_adapter_sd15",
"base": "sd-1",
"type": "ip_adapter"
}
"label": "IP-Adapter Model (select IP Adapter Face)"
},
"clip_vision_model": {
"name": "clip_vision_model",

View File

@@ -1,11 +1,11 @@
{
"id": "default_444fe292-896b-44fd-bfc6-c0b5d220fffc",
"name": "FLUX Text to Image",
"name": "Text to Image - FLUX",
"author": "InvokeAI",
"description": "A simple text-to-image workflow using FLUX dev or schnell models.",
"version": "1.1.0",
"contact": "",
"tags": "text2image, flux, text to image",
"tags": "flux, text to image",
"notes": "Prerequisite model downloads: T5 Encoder, CLIP-L Encoder, and FLUX VAE. Quantized and un-quantized versions can be found in the starter models tab within your Model Manager. We recommend 4 steps for FLUX schnell models and 30 steps for FLUX dev models.",
"exposedFields": [
{
@@ -169,36 +169,15 @@
},
"t5_encoder_model": {
"name": "t5_encoder_model",
"label": "",
"value": {
"key": "d18d5575-96b6-4da3-b3d8-eb58308d6705",
"hash": "random:f2f9ed74acdfb4bf6fec200e780f6c25f8dd8764a35e65d425d606912fdf573a",
"name": "t5_bnb_int8_quantized_encoder",
"base": "any",
"type": "t5_encoder"
}
"label": ""
},
"clip_embed_model": {
"name": "clip_embed_model",
"label": "",
"value": {
"key": "5a19d7e5-8d98-43cd-8a81-87515e4b3b4e",
"hash": "random:4bd08514c08fb6ff04088db9aeb45def3c488e8b5fd09a35f2cc4f2dc346f99f",
"name": "clip-vit-large-patch14",
"base": "any",
"type": "clip_embed"
}
"label": ""
},
"vae_model": {
"name": "vae_model",
"label": "",
"value": {
"key": "9172beab-5c1d-43f0-b2f0-6e0b956710d9",
"hash": "random:c54dde288e5fa2e6137f1c92e9d611f598049e6f16e360207b6d96c9f5a67ba0",
"name": "FLUX.1-schnell_ae",
"base": "flux",
"type": "vae"
}
"label": ""
}
}
},

View File

@@ -1,11 +1,11 @@
{
"id": "default_2d05e719-a6b9-4e64-9310-b875d3b2f9d2",
"name": "Multi ControlNet (Canny & Depth)",
"name": "Text to Image - SD1.5, Control",
"author": "InvokeAI",
"description": "A sample workflow using canny & depth ControlNets to guide the generation process. ",
"version": "2.1.0",
"contact": "invoke@invoke.ai",
"tags": "ControlNet, canny, depth",
"tags": "sd1.5, control, text to image",
"notes": "",
"exposedFields": [
{
@@ -217,14 +217,7 @@
},
"control_model": {
"name": "control_model",
"label": "Control Model (select canny)",
"value": {
"key": "5bdaacf7-a7a3-4fb8-b394-cc0ffbb8941d",
"hash": "blake3:260c7f8e10aefea9868cfc68d89970e91033bd37132b14b903e70ee05ebf530e",
"name": "sd-controlnet-canny",
"base": "sd-1",
"type": "controlnet"
}
"label": "Control Model (select canny)"
},
"control_weight": {
"name": "control_weight",
@@ -371,14 +364,7 @@
},
"control_model": {
"name": "control_model",
"label": "Control Model (select depth)",
"value": {
"key": "87e8855c-671f-4c9e-bbbb-8ed47ccb4aac",
"hash": "blake3:2550bf22a53942dfa28ab2fed9d10d80851112531f44d977168992edf9d0534c",
"name": "control_v11f1p_sd15_depth",
"base": "sd-1",
"type": "controlnet"
}
"label": "Control Model (select depth)"
},
"control_weight": {
"name": "control_weight",

View File

@@ -1,11 +1,11 @@
{
"id": "default_f96e794f-eb3e-4d01-a960-9b4e43402bcf",
"name": "MultiDiffusion SD1.5",
"name": "Upscaler - SD1.5, MultiDiffusion",
"author": "Invoke",
"description": "A workflow to upscale an input image with tiled upscaling, using SD1.5 based models.",
"version": "1.0.0",
"contact": "invoke@invoke.ai",
"tags": "tiled, upscaling, sdxl",
"tags": "sd1.5, upscaling",
"notes": "",
"exposedFields": [
{
@@ -135,14 +135,7 @@
"inputs": {
"model": {
"name": "model",
"label": "",
"value": {
"key": "e7b402e5-62e5-4acb-8c39-bee6bdb758ab",
"hash": "c8659e796168d076368256b57edbc1b48d6dafc1712f1bb37cc57c7c06889a6b",
"name": "526mix",
"base": "sd-1",
"type": "main"
}
"label": ""
}
}
},
@@ -384,21 +377,11 @@
},
"image": {
"name": "image",
"label": "Image to Upscale",
"value": {
"image_name": "ee7009f7-a35d-488b-a2a6-21237ef5ae05.png"
}
"label": "Image to Upscale"
},
"image_to_image_model": {
"name": "image_to_image_model",
"label": "",
"value": {
"key": "38bb1a29-8ede-42ba-b77f-64b3478896eb",
"hash": "blake3:e52fdbee46a484ebe9b3b20ea0aac0a35a453ab6d0d353da00acfd35ce7a91ed",
"name": "4xNomosWebPhoto_esrgan",
"base": "sdxl",
"type": "spandrel_image_to_image"
}
"label": ""
},
"tile_size": {
"name": "tile_size",
@@ -437,14 +420,7 @@
"inputs": {
"model": {
"name": "model",
"label": "ControlNet Model - Choose a Tile ControlNet",
"value": {
"key": "20645e4d-ef97-4c5a-9243-b834a3483925",
"hash": "f0812e13758f91baf4e54b7dbb707b70642937d3b2098cd2b94cc36d3eba308e",
"name": "tile",
"base": "sd-1",
"type": "controlnet"
}
"label": "ControlNet Model - Choose a Tile ControlNet"
}
}
},

View File

@@ -1,11 +1,11 @@
{
"id": "default_35658541-6d41-4a20-8ec5-4bf2561faed0",
"name": "MultiDiffusion SDXL",
"name": "Upscaler - SDXL, MultiDiffusion",
"author": "Invoke",
"description": "A workflow to upscale an input image with tiled upscaling, using SDXL based models.",
"version": "1.1.0",
"contact": "invoke@invoke.ai",
"tags": "tiled, upscaling, sdxl",
"tags": "sdxl, upscaling",
"notes": "",
"exposedFields": [
{
@@ -341,14 +341,7 @@
"inputs": {
"model": {
"name": "model",
"label": "ControlNet Model - Choose a Tile ControlNet",
"value": {
"key": "74f4651f-0ace-4b7b-b616-e98360257797",
"hash": "blake3:167a5b84583aaed3e5c8d660b45830e82e1c602743c689d3c27773c6c8b85b4a",
"name": "controlnet-tile-sdxl-1.0",
"base": "sdxl",
"type": "controlnet"
}
"label": "ControlNet Model - Choose a Tile ControlNet"
}
}
},
@@ -801,14 +794,7 @@
"inputs": {
"vae_model": {
"name": "vae_model",
"label": "",
"value": {
"key": "ff926845-090e-4d46-b81e-30289ee47474",
"hash": "9705ab1c31fa96b308734214fb7571a958621c7a9247eed82b7d277145f8d9fa",
"name": "VAEFix",
"base": "sdxl",
"type": "vae"
}
"label": ""
}
}
},
@@ -832,14 +818,7 @@
"inputs": {
"model": {
"name": "model",
"label": "SDXL Model",
"value": {
"key": "ab191f73-68d2-492c-8aec-b438a8cf0f45",
"hash": "blake3:2d50e940627e3bf555f015280ec0976d5c1fa100f7bc94e95ffbfc770e98b6fe",
"name": "CustomXLv7",
"base": "sdxl",
"type": "main"
}
"label": "SDXL Model"
}
}
},

View File

@@ -1,11 +1,11 @@
{
"id": "default_d7a1c60f-ca2f-4f90-9e33-75a826ca6d8f",
"name": "Prompt from File",
"name": "Text to Image - SD1.5, Prompt from File",
"author": "InvokeAI",
"description": "Sample workflow using Prompt from File node",
"version": "2.1.0",
"contact": "invoke@invoke.ai",
"tags": "text2image, prompt from file, default, text to image",
"tags": "sd1.5, text to image",
"notes": "",
"exposedFields": [
{
@@ -513,4 +513,4 @@
"targetHandle": "vae"
}
]
}
}

View File

@@ -10,6 +10,7 @@ _default workflows_ on app startup.
An exception will be raised during sync if this is not set correctly.
- Default workflows appear on the "Default Workflows" tab of the Workflow
Library.
- Default workflows should not reference any user-created or user-installed resources, including images and models. For example, if a default workflow references Juggernaut as an SDXL model, a user who loads that workflow will have their own copy of Juggernaut registered under a different UUID, and they may see a warning. So it's best to ship default workflows without any references to these kinds of resources (a minimal sketch of the kind of reference to avoid follows below).
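
For illustration only, the kind of install-specific model reference that a default workflow should avoid looks roughly like the Python dict below. The key and hash are placeholders: the key is a per-install UUID and the hash is computed when the model is installed, so neither will match another user's copy of the same model.

# Hypothetical example - the key/hash values are illustrative placeholders.
model_reference = {
    "key": "00000000-0000-0000-0000-000000000000",  # per-install UUID, different on every install
    "hash": "blake3:<computed-at-install-time>",    # also install-specific
    "name": "Juggernaut-XL",                        # whatever name the user installed it under
    "base": "sdxl",
    "type": "main",
}
# Instead, expose the input with a label only and leave its "value" unset, so the
# user picks a model they actually have installed.
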
After adding or updating default workflows, you **must** start the app up and
load them to ensure:

View File

@@ -1,11 +1,11 @@
{
"id": "default_dbe46d95-22aa-43fb-9c16-94400d0ce2fd",
"name": "SD3.5 Text to Image",
"name": "Text to Image - SD3.5",
"author": "InvokeAI",
"description": "Sample text to image workflow for Stable Diffusion 3.5",
"version": "1.0.0",
"contact": "invoke@invoke.ai",
"tags": "text2image, SD3.5, text to image",
"tags": "SD3.5, text to image",
"notes": "",
"exposedFields": [
{
@@ -38,14 +38,7 @@
"inputs": {
"model": {
"name": "model",
"label": "",
"value": {
"key": "f7b20be9-92a8-4cfb-bca4-6c3b5535c10b",
"hash": "placeholder",
"name": "stable-diffusion-3.5-medium",
"base": "sd-3",
"type": "main"
}
"label": ""
},
"t5_encoder_model": {
"name": "t5_encoder_model",

View File

@@ -5,7 +5,7 @@
"description": "Sample text to image workflow for Stable Diffusion 1.5/2",
"version": "2.1.0",
"contact": "invoke@invoke.ai",
"tags": "text2image, SD1.5, SD2, text to image",
"tags": "SD1.5, text to image",
"notes": "",
"exposedFields": [
{
@@ -417,4 +417,4 @@
"targetHandle": "vae"
}
]
}
}

View File

@@ -5,7 +5,7 @@
"description": "Sample text to image workflow for SDXL",
"version": "2.1.0",
"contact": "invoke@invoke.ai",
"tags": "text2image, SDXL, text to image",
"tags": "SDXL, text to image",
"notes": "",
"exposedFields": [
{
@@ -46,14 +46,7 @@
"inputs": {
"vae_model": {
"name": "vae_model",
"label": "VAE (use the FP16 model)",
"value": {
"key": "f20f9e5c-1bce-4c46-a84d-34ebfa7df069",
"hash": "blake3:9705ab1c31fa96b308734214fb7571a958621c7a9247eed82b7d277145f8d9fa",
"name": "sdxl-vae-fp16-fix",
"base": "sdxl",
"type": "vae"
}
"label": "VAE (use the FP16 model)"
}
},
"isOpen": true,
@@ -203,14 +196,7 @@
"inputs": {
"model": {
"name": "model",
"label": "",
"value": {
"key": "4a63b226-e8ff-4da4-854e-0b9f04b562ba",
"hash": "blake3:d279309ea6e5ee6e8fd52504275865cc280dac71cbf528c5b07c98b888bddaba",
"name": "dreamshaper-xl-v2-turbo",
"base": "sdxl",
"type": "main"
}
"label": ""
}
},
"isOpen": true,
@@ -715,4 +701,4 @@
"targetHandle": "style"
}
]
}
}

View File

@@ -1,11 +1,11 @@
{
"id": "default_e71d153c-2089-43c7-bd2c-f61f37d4c1c1",
"name": "Text to Image with LoRA",
"name": "Text to Image - SD1.5, LoRA",
"author": "InvokeAI",
"description": "Simple text to image workflow with a LoRA",
"version": "2.1.0",
"contact": "invoke@invoke.ai",
"tags": "text to image, lora, text to image",
"tags": "sd1.5, text to image, lora",
"notes": "",
"exposedFields": [
{

View File

@@ -1,11 +1,11 @@
{
"id": "default_43b0d7f7-6a12-4dcf-a5a4-50c940cbee29",
"name": "Tiled Upscaling (Beta)",
"name": "Upscaler - SD1.5, Tiled",
"author": "Invoke",
"description": "A workflow to upscale an input image with tiled upscaling. ",
"version": "2.1.0",
"contact": "invoke@invoke.ai",
"tags": "tiled, upscaling, sd1.5",
"tags": "sd1.5, upscaling",
"notes": "",
"exposedFields": [
{
@@ -86,14 +86,7 @@
},
"ip_adapter_model": {
"name": "ip_adapter_model",
"label": "IP-Adapter Model (select ip_adapter_sd15)",
"value": {
"key": "1cc210bb-4d0a-4312-b36c-b5d46c43768e",
"hash": "blake3:3d669dffa7471b357b4df088b99ffb6bf4d4383d5e0ef1de5ec1c89728a3d5a5",
"name": "ip_adapter_sd15",
"base": "sd-1",
"type": "ip_adapter"
}
"label": "IP-Adapter Model (select ip_adapter_sd15)"
},
"clip_vision_model": {
"name": "clip_vision_model",
@@ -201,14 +194,7 @@
},
"control_model": {
"name": "control_model",
"label": "Control Model (select contro_v11f1e_sd15_tile)",
"value": {
"key": "773843c8-db1f-4502-8f65-59782efa7960",
"hash": "blake3:f0812e13758f91baf4e54b7dbb707b70642937d3b2098cd2b94cc36d3eba308e",
"name": "control_v11f1e_sd15_tile",
"base": "sd-1",
"type": "controlnet"
}
"label": "Control Model (select control_v11f1e_sd15_tile)"
},
"control_weight": {
"name": "control_weight",
@@ -1816,4 +1802,4 @@
"targetHandle": "unet"
}
]
}
}

View File

@@ -46,17 +46,31 @@ class WorkflowRecordsStorageBase(ABC):
per_page: Optional[int],
query: Optional[str],
tags: Optional[list[str]],
has_been_opened: Optional[bool],
is_published: Optional[bool],
) -> PaginatedResults[WorkflowRecordListItemDTO]:
"""Gets many workflows."""
pass
@abstractmethod
def get_counts(
def counts_by_category(
self,
tags: Optional[list[str]],
categories: Optional[list[WorkflowCategory]],
) -> int:
"""Gets the count of workflows for the given tags and categories."""
categories: list[WorkflowCategory],
has_been_opened: Optional[bool] = None,
is_published: Optional[bool] = None,
) -> dict[str, int]:
"""Gets a dictionary of counts for each of the provided categories."""
pass
@abstractmethod
def counts_by_tag(
self,
tags: list[str],
categories: Optional[list[WorkflowCategory]] = None,
has_been_opened: Optional[bool] = None,
is_published: Optional[bool] = None,
) -> dict[str, int]:
"""Gets a dictionary of counts for each of the provided tags."""
pass
@abstractmethod
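
A minimal sketch of how a caller might use these two methods, assuming workflow_records is some concrete WorkflowRecordsStorageBase implementation. The variable name and tag strings are illustrative, and only WorkflowCategory.Default is visible in this diff, so treat the category list as an assumption.

# Sketch only: `workflow_records` is assumed to be a concrete WorkflowRecordsStorageBase.
category_counts = workflow_records.counts_by_category(
    categories=[WorkflowCategory.Default],
    has_been_opened=True,  # only count workflows that have been opened at least once
)
# e.g. {"default": 9} - keyed by category.value

tag_counts = workflow_records.counts_by_tag(
    tags=["upscaling", "text to image"],
    categories=[WorkflowCategory.Default],
)
# e.g. {"upscaling": 3, "text to image": 7} - keyed by the tag strings passed in
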

View File

@@ -1,6 +1,6 @@
import datetime
from enum import Enum
from typing import Any, Union
from typing import Any, Optional, Union
import semver
from pydantic import BaseModel, ConfigDict, Field, JsonValue, TypeAdapter, field_validator
@@ -67,6 +67,7 @@ class WorkflowWithoutID(BaseModel):
# This is typed as optional to prevent errors when pulling workflows from the DB. The frontend adds a default form if
# it is None.
form: dict[str, JsonValue] | None = Field(default=None, description="The form of the workflow.")
is_published: bool | None = Field(default=None, description="Whether the workflow is published or not.")
model_config = ConfigDict(extra="ignore")
@@ -98,7 +99,10 @@ class WorkflowRecordDTOBase(BaseModel):
name: str = Field(description="The name of the workflow.")
created_at: Union[datetime.datetime, str] = Field(description="The created timestamp of the workflow.")
updated_at: Union[datetime.datetime, str] = Field(description="The updated timestamp of the workflow.")
opened_at: Union[datetime.datetime, str] = Field(description="The opened timestamp of the workflow.")
opened_at: Optional[Union[datetime.datetime, str]] = Field(
default=None, description="The opened timestamp of the workflow."
)
is_published: bool | None = Field(default=None, description="Whether the workflow is published or not.")
class WorkflowRecordDTO(WorkflowRecordDTOBase):
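
The Optional opened_at works together with the has_been_opened filter added to the storage layer further down: a None value (stored as SQL NULL) means the workflow has never been opened. A toy standalone model showing the same pattern, not the real DTO:

import datetime
from typing import Optional, Union

from pydantic import BaseModel, Field

class ToyWorkflowRecord(BaseModel):
    # None / NULL means "never opened"; the storage layer filters on this with
    # `opened_at IS NULL` / `opened_at IS NOT NULL`.
    opened_at: Optional[Union[datetime.datetime, str]] = Field(
        default=None, description="The opened timestamp of the workflow."
    )
    is_published: bool | None = Field(default=None, description="Whether the workflow is published or not.")

record = ToyWorkflowRecord()
assert record.opened_at is None and record.is_published is None
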

View File

@@ -118,6 +118,8 @@ class SqliteWorkflowRecordsStorage(WorkflowRecordsStorageBase):
per_page: Optional[int] = None,
query: Optional[str] = None,
tags: Optional[list[str]] = None,
has_been_opened: Optional[bool] = None,
is_published: Optional[bool] = None,
) -> PaginatedResults[WorkflowRecordListItemDTO]:
# sanitize!
assert order_by in WorkflowRecordOrderBy
@@ -175,6 +177,11 @@ class SqliteWorkflowRecordsStorage(WorkflowRecordsStorageBase):
conditions.append(tags_condition)
params.extend(tags_params)
if has_been_opened:
conditions.append("opened_at IS NOT NULL")
elif has_been_opened is False:
conditions.append("opened_at IS NULL")
# Ignore whitespace in the query
stripped_query = query.strip() if query else None
if stripped_query:
@@ -230,54 +237,107 @@ class SqliteWorkflowRecordsStorage(WorkflowRecordsStorageBase):
total=total,
)
def get_counts(
def counts_by_tag(
self,
tags: Optional[list[str]],
categories: Optional[list[WorkflowCategory]],
) -> int:
tags: list[str],
categories: Optional[list[WorkflowCategory]] = None,
has_been_opened: Optional[bool] = None,
is_published: Optional[bool] = None,
) -> dict[str, int]:
if not tags:
return {}
cursor = self._conn.cursor()
result: dict[str, int] = {}
# Base conditions for categories and selected tags
base_conditions: list[str] = []
base_params: list[str | int] = []
# Start with an empty list of conditions and params
conditions: list[str] = []
params: list[str | int] = []
if tags:
# Construct a list of conditions for each tag
tags_conditions = ["tags LIKE ?" for _ in tags]
tags_conditions_joined = " OR ".join(tags_conditions)
tags_condition = f"({tags_conditions_joined})"
# And the params for the tags, case-insensitive
tags_params = [f"%{t.strip()}%" for t in tags]
conditions.append(tags_condition)
params.extend(tags_params)
# Add category conditions
if categories:
# Ensure all categories are valid (is this necessary?)
assert all(c in WorkflowCategory for c in categories)
# Construct a placeholder string for the number of categories
placeholders = ", ".join("?" for _ in categories)
base_conditions.append(f"category IN ({placeholders})")
base_params.extend([category.value for category in categories])
# Construct the condition string & params
conditions.append(f"category IN ({placeholders})")
params.extend([category.value for category in categories])
if has_been_opened:
base_conditions.append("opened_at IS NOT NULL")
elif has_been_opened is False:
base_conditions.append("opened_at IS NULL")
stmt = """--sql
SELECT COUNT(*)
FROM workflow_library
"""
# For each tag to count, run a separate query
for tag in tags:
# Start with the base conditions
conditions = base_conditions.copy()
params = base_params.copy()
if conditions:
# If there are conditions, add a WHERE clause and then join the conditions
stmt += " WHERE "
# Add this specific tag condition
conditions.append("tags LIKE ?")
params.append(f"%{tag.strip()}%")
all_conditions = " AND ".join(conditions)
stmt += all_conditions
# Construct the full query
stmt = """--sql
SELECT COUNT(*)
FROM workflow_library
"""
cursor.execute(stmt, tuple(params))
return cursor.fetchone()[0]
if conditions:
stmt += " WHERE " + " AND ".join(conditions)
cursor.execute(stmt, params)
count = cursor.fetchone()[0]
result[tag] = count
return result
def counts_by_category(
self,
categories: list[WorkflowCategory],
has_been_opened: Optional[bool] = None,
is_published: Optional[bool] = None,
) -> dict[str, int]:
cursor = self._conn.cursor()
result: dict[str, int] = {}
# Base conditions for categories
base_conditions: list[str] = []
base_params: list[str | int] = []
# Add category conditions
if categories:
assert all(c in WorkflowCategory for c in categories)
placeholders = ", ".join("?" for _ in categories)
base_conditions.append(f"category IN ({placeholders})")
base_params.extend([category.value for category in categories])
if has_been_opened:
base_conditions.append("opened_at IS NOT NULL")
elif has_been_opened is False:
base_conditions.append("opened_at IS NULL")
# For each category to count, run a separate query
for category in categories:
# Start with the base conditions
conditions = base_conditions.copy()
params = base_params.copy()
# Add this specific category condition
conditions.append("category = ?")
params.append(category.value)
# Construct the full query
stmt = """--sql
SELECT COUNT(*)
FROM workflow_library
"""
if conditions:
stmt += " WHERE " + " AND ".join(conditions)
cursor.execute(stmt, params)
count = cursor.fetchone()[0]
result[category.value] = count
return result
def update_opened_at(self, workflow_id: str) -> None:
try:
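
A self-contained sketch of the per-tag counting approach above, run against a throwaway in-memory table. The table and column names mirror the diff, but the schema is reduced to just the columns the queries touch and categories are passed as plain strings:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE workflow_library (workflow_id TEXT, category TEXT, tags TEXT, opened_at TEXT)")
conn.executemany(
    "INSERT INTO workflow_library VALUES (?, ?, ?, ?)",
    [
        ("w1", "user", "sd1.5, upscaling", "2025-04-04"),
        ("w2", "user", "sdxl, text to image", None),
        ("w3", "default", "sd1.5, text to image", None),
    ],
)

def counts_by_tag(tags, categories=None, has_been_opened=None):
    result = {}
    # Conditions shared by every per-tag query.
    base_conditions, base_params = [], []
    if categories:
        placeholders = ", ".join("?" for _ in categories)
        base_conditions.append(f"category IN ({placeholders})")
        base_params.extend(categories)
    if has_been_opened:
        base_conditions.append("opened_at IS NOT NULL")
    elif has_been_opened is False:
        base_conditions.append("opened_at IS NULL")
    # One COUNT(*) query per tag, each adding a LIKE match for that tag.
    for tag in tags:
        conditions = base_conditions + ["tags LIKE ?"]
        params = base_params + [f"%{tag.strip()}%"]
        stmt = "SELECT COUNT(*) FROM workflow_library WHERE " + " AND ".join(conditions)
        result[tag] = conn.execute(stmt, params).fetchone()[0]
    return result

print(counts_by_tag(["sd1.5", "upscaling"], categories=["user"]))  # {'sd1.5': 1, 'upscaling': 1}
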

View File

@@ -8,12 +8,12 @@ class WorkflowThumbnailServiceBase(ABC):
"""Base class for workflow thumbnail services"""
@abstractmethod
def get_path(self, workflow_id: str) -> Path:
def get_path(self, workflow_id: str, with_hash: bool = True) -> Path:
"""Gets the path to a workflow thumbnail"""
pass
@abstractmethod
def get_url(self, workflow_id: str) -> str | None:
def get_url(self, workflow_id: str, with_hash: bool = True) -> str | None:
"""Gets the URL of a workflow thumbnail"""
pass

View File

@@ -41,7 +41,7 @@ class WorkflowThumbnailFileStorageDisk(WorkflowThumbnailServiceBase):
except Exception as e:
raise WorkflowThumbnailFileSaveException from e
def get_path(self, workflow_id: str) -> Path:
def get_path(self, workflow_id: str, with_hash: bool = True) -> Path:
workflow = self._invoker.services.workflow_records.get(workflow_id).workflow
if workflow.meta.category is WorkflowCategory.Default:
default_thumbnails_dir = Path(__file__).parent / Path("default_workflow_thumbnails")
@@ -51,7 +51,7 @@ class WorkflowThumbnailFileStorageDisk(WorkflowThumbnailServiceBase):
return path
def get_url(self, workflow_id: str) -> str | None:
def get_url(self, workflow_id: str, with_hash: bool = True) -> str | None:
path = self.get_path(workflow_id)
if not self._validate_path(path):
return
@@ -59,7 +59,8 @@ class WorkflowThumbnailFileStorageDisk(WorkflowThumbnailServiceBase):
url = self._invoker.services.urls.get_workflow_thumbnail_url(workflow_id)
# The image URL never changes, so we must add random query string to it to prevent caching
url += f"?{uuid_string()}"
if with_hash:
url += f"?{uuid_string()}"
return url
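
A minimal sketch of the opt-out cache-busting behaviour, with a stand-in for the app's uuid_string helper and a made-up thumbnail URL:

import uuid

def uuid_string() -> str:
    # Stand-in for the app's own uuid_string() helper.
    return uuid.uuid4().hex

def thumbnail_url(base_url: str, with_hash: bool = True) -> str:
    url = base_url
    # The thumbnail URL itself never changes, so a throwaway query string is
    # appended to defeat browser caching, unless the caller opts out.
    if with_hash:
        url += f"?{uuid_string()}"
    return url

print(thumbnail_url("/example/thumbnail/path"))         # changes on every call
print(thumbnail_url("/example/thumbnail/path", False))  # stable URL, cacheable
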

View File

@@ -4,7 +4,10 @@ from fastapi import FastAPI
from fastapi.openapi.utils import get_openapi
from pydantic.json_schema import models_json_schema
from invokeai.app.invocations.baseinvocation import BaseInvocation, BaseInvocationOutput, UIConfigBase
from invokeai.app.invocations.baseinvocation import (
InvocationRegistry,
UIConfigBase,
)
from invokeai.app.invocations.fields import InputFieldJSONSchemaExtra, OutputFieldJSONSchemaExtra
from invokeai.app.invocations.model import ModelIdentifierField
from invokeai.app.services.events.events_common import EventBase
@@ -56,14 +59,14 @@ def get_openapi_func(
invocation_output_map_required: list[str] = []
# We need to manually add all outputs to the schema - pydantic doesn't add them because they aren't used directly.
for output in BaseInvocationOutput.get_outputs():
for output in InvocationRegistry.get_output_classes():
json_schema = output.model_json_schema(mode="serialization", ref_template="#/components/schemas/{model}")
move_defs_to_top_level(openapi_schema, json_schema)
openapi_schema["components"]["schemas"][output.__name__] = json_schema
# Technically, invocations are added to the schema by pydantic, but we still need to manually set their output
# property, so we'll just do it all manually.
for invocation in BaseInvocation.get_invocations():
for invocation in InvocationRegistry.get_invocation_classes():
json_schema = invocation.model_json_schema(
mode="serialization", ref_template="#/components/schemas/{model}"
)
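
A reduced sketch of the schema-merging step, using a toy pydantic model and a local stand-in for move_defs_to_top_level instead of the real registry and helper:

from pydantic import BaseModel

class ToyOutput(BaseModel):
    image_name: str
    width: int

def move_defs_to_top_level(openapi_schema: dict, json_schema: dict) -> None:
    # Stand-in: hoist nested $defs into the shared components section so that
    # refs like #/components/schemas/{model} resolve.
    for name, schema in json_schema.pop("$defs", {}).items():
        openapi_schema["components"]["schemas"][name] = schema

openapi_schema = {"components": {"schemas": {}}}
for output in [ToyOutput]:  # the real code iterates InvocationRegistry.get_output_classes()
    json_schema = output.model_json_schema(mode="serialization", ref_template="#/components/schemas/{model}")
    move_defs_to_top_level(openapi_schema, json_schema)
    openapi_schema["components"]["schemas"][output.__name__] = json_schema

print(sorted(openapi_schema["components"]["schemas"]))  # ['ToyOutput']
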

View File

@@ -1,6 +1,7 @@
import logging
import mimetypes
import socket
from pathlib import Path
import torch
@@ -33,7 +34,16 @@ def check_cudnn(logger: logging.Logger) -> None:
)
def enable_dev_reload() -> None:
def invokeai_source_dir() -> Path:
# `invokeai.__file__` doesn't always work for editable installs
this_module_path = Path(__file__).resolve()
# https://youtrack.jetbrains.com/issue/PY-38382/Unresolved-reference-spec-but-this-is-standard-builtin
# noinspection PyUnresolvedReferences
depth = len(__spec__.parent.split("."))
return this_module_path.parents[depth - 1]
def enable_dev_reload(custom_nodes_path=None) -> None:
"""Enable hot reloading on python file changes during development."""
from invokeai.backend.util.logging import InvokeAILogger
@@ -44,7 +54,10 @@ def enable_dev_reload() -> None:
'Can\'t start `--dev_reload` because jurigged is not found; `pip install -e ".[dev]"` to include development dependencies.'
) from e
else:
jurigged.watch(logger=InvokeAILogger.get_logger(name="jurigged").info)
paths = [str(invokeai_source_dir() / "*.py")]
if custom_nodes_path:
paths.append(str(custom_nodes_path / "*.py"))
jurigged.watch(pattern=paths, logger=InvokeAILogger.get_logger(name="jurigged").info)
def apply_monkeypatches() -> None:
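
For illustration, this is how the parents[depth - 1] arithmetic in invokeai_source_dir resolves the package root that gets watched; the module path below is a hypothetical editable-install layout, not the actual file:

from pathlib import Path

# Suppose the module lives at invokeai/app/util/devtools.py, so __spec__.parent == "invokeai.app.util".
module_file = Path("/home/user/src/InvokeAI/invokeai/app/util/devtools.py")
package = "invokeai.app.util"

depth = len(package.split("."))              # 3
source_dir = module_file.resolve().parents[depth - 1]
print(source_dir)                            # /home/user/src/InvokeAI/invokeai on this layout

# parents[0] is .../invokeai/app/util, parents[1] is .../invokeai/app, and
# parents[2] is the invokeai package root; the watch pattern becomes <that dir>/*.py.
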
@@ -52,9 +65,6 @@ def apply_monkeypatches() -> None:
import invokeai.backend.util.hotfixes # noqa: F401 (monkeypatching on import)
if torch.backends.mps.is_available():
import invokeai.backend.util.mps_fixes # noqa: F401 (monkeypatching on import)
def register_mime_types() -> None:
"""Register additional mime types for windows."""

View File

@@ -5,7 +5,7 @@ import torch
from PIL import Image
from invokeai.app.services.session_processor.session_processor_common import CanceledException
from invokeai.backend.model_manager.config import BaseModelType
from invokeai.backend.model_manager.taxonomy import BaseModelType
from invokeai.backend.stable_diffusion.diffusers_pipeline import PipelineIntermediateState
# fast latents preview matrix for sdxl

Some files were not shown because too many files have changed in this diff.