Compare commits


96 Commits

Author SHA1 Message Date
psychedelicious
9ec4d968aa chore: bump version to v5.8.0a2 2025-03-11 13:29:26 +11:00
Riccardo Giovanetti
76c09301f9 translationBot(ui): update translation (Italian)
Currently translated at 98.7% (1794 of 1816 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2025-03-11 11:33:01 +11:00
Linos
1cf8749754 translationBot(ui): update translation (Vietnamese)
Currently translated at 100.0% (1816 of 1816 strings)

translationBot(ui): update translation (Vietnamese)

Currently translated at 99.9% (1815 of 1816 strings)

Co-authored-by: Linos <linos.coding@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/vi/
Translation: InvokeAI/Web UI
2025-03-11 11:33:01 +11:00
Riku
5d6c468833 translationBot(ui): update translation (German)
Currently translated at 67.2% (1221 of 1816 strings)

Co-authored-by: Riku <riku.block@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
2025-03-11 11:33:01 +11:00
Hosted Weblate
80b3f44ae8 translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2025-03-11 11:33:01 +11:00
psychedelicious
c77c12aa1d fix(ui): missing builder translations 2025-03-11 11:28:51 +11:00
psychedelicious
731992c5ec fix(ui): restore accidentally deleted line 2025-03-11 11:17:19 +11:00
psychedelicious
c259899bf4 feat(ui): support for FLUX Redux in canvas
User facing:

When a FLUX main model is selected, users may now add Regional Reference Image layers.

When switching between FLUX Redux and FLUX IP Adapter, the settings will change to match the model type. (IP Adapter has weight and begin/end step settings; Redux does not.) The image will be retained when switching between the two.

Otherwise it works the same way as IP Adapter - both in Global and Regional Reference Image layers.

---

Internal state handling:

Slightly awkward, but it was easiest to make FLUX Redux a second type of IP Adapter in redux state.

Global and regional reference images still have a single `ipAdapter` field, but it can have a type of `ip_adapter` or `flux_redux`.

Ideally, this field would be called `config` or `settings` or something, but we are past that point. We _could_ do a migration to rename it, but I don't think it's worth the effort.
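
For illustration, a minimal sketch of the discriminated field described above - written in Python/pydantic as a stand-in for the actual TypeScript redux state; all type names and settings are illustrative, not the app's real types:

```python
from typing import Literal, Union

from pydantic import BaseModel, Field

class IPAdapterConfig(BaseModel):
    type: Literal["ip_adapter"] = "ip_adapter"
    image_name: str | None = None  # the reference image is retained when switching types
    weight: float = 1.0            # IP Adapter has weight and begin/end step...
    begin_step: float = 0.0
    end_step: float = 1.0

class FluxReduxConfig(BaseModel):
    type: Literal["flux_redux"] = "flux_redux"
    image_name: str | None = None  # ...but Redux carries only the image

class ReferenceImageLayer(BaseModel):
    # Still a single `ipAdapter` field; the `type` discriminator selects the settings shape.
    ip_adapter: Union[IPAdapterConfig, FluxReduxConfig] = Field(discriminator="type")
```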

---

Other changes:
- Updated canvas layer validators to handle FLUX Redux.
- Updated model list loading logic to un-set FLUX Redux models in Canvas if they are not in the list (e.g. if the user deletes the model in the main app).
- Updated graph builders - new `addFLUXRedux` util & updated `addRegions` util.
- Updated the `buildModelsHook` util to return a hook that accepts a filter callback. This handles a discrepancy: FLUX IP Adapter does not support regional guidance, but FLUX Redux does. The Regional Guidance settings provide a filter callback to exclude FLUX IP Adapter models from the combined list of IP Adapter and Redux models.
2025-03-11 11:17:19 +11:00
psychedelicious
f62b9ad919 chore(ui): typegen 2025-03-11 11:17:19 +11:00
psychedelicious
57533657f9 feat(nodes): remove siglip from flux_redux, dl it jit when needed if we cannot find it
This follows the same pattern as IP Adapter with its CLIP Vision model. The SigLIP model is unlikely to ever change and we don't want to force the user to select it anywhere. Hardcoding it is safe and makes the UX much nicer.

The alternative is a model dropdown that will likely only ever have one valid choice in it.
2025-03-11 11:17:19 +11:00
psychedelicious
e35537e60a fix(mm): move flux_redux starter model to the flux bundle, make siglip a dependency of it 2025-03-11 11:17:19 +11:00
psychedelicious
a89d68b93a fix(ui): hide shared on workflow library 2025-03-10 12:29:48 -04:00
psychedelicious
59a8c0d441 feat(app): less janky custom node loading
- We don't need to copy the init file. Just crawl the custom nodes dir for modules and import them all. Dunno why I didn't do this initially.
- Pass the logger in as an arg. There was a race condition where if we got the logger directly in the load_custom_nodes function, the config would not have been loaded fully yet and we'd end up with the wrong custom nodes path!
- Remove permissions-setting logic; I do not believe it is relevant for custom nodes
- Minor cleanup of the utility
2025-03-08 09:42:13 +11:00
Riku
d5d08f6569 fix(ui): add webp to supported image types in toast messages 2025-03-07 20:38:16 +11:00
psychedelicious
8a4282365e chore: bump version to v5.8.0a1 2025-03-07 12:21:46 +11:00
psychedelicious
b9c7bc8b0e chore: ruff 2025-03-07 11:45:49 +11:00
psychedelicious
0f45ee04a2 tests: fix test_extract_valid_metadata_from_image to accommodate prev commit 2025-03-07 11:45:49 +11:00
psychedelicious
839a791509 fix(api): loosen graph parsing in extract_metadata_from_image
There's a pydantic thing that causes the graphs to fail validation erroneously. Details in the comments - not a high priority to fix but we should figure it out someday.
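
A contrived example of the failure mode (not the actual InvokeAI graph schema): a node field may legitimately be unset because an incoming connection supplies it at runtime, but strict re-validation of the dumped graph JSON still demands the field:

```python
from pydantic import BaseModel, ValidationError

class ResizeNode(BaseModel):
    width: int  # required by the schema, but fed by a connection at runtime

try:
    # A round-tripped graph omits `width`, so validation fails even though
    # the graph is perfectly executable.
    ResizeNode.model_validate_json("{}")
except ValidationError as err:
    print(err)  # 1 validation error ... width: Field required
```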
2025-03-07 11:45:49 +11:00
psychedelicious
f03a2bf03f chore(ui): typegen 2025-03-07 11:45:49 +11:00
psychedelicious
4136817d30 chore(ui): typegen 2025-03-07 11:45:49 +11:00
psychedelicious
7f0452173b feat(api): use extract_metadata_from_image in upload router 2025-03-07 11:45:49 +11:00
psychedelicious
8e46b03f09 tests: add tests for extract_metadata_from_image 2025-03-07 11:45:49 +11:00
psychedelicious
9045237bfb feat(api): add util to extract metadata from image 2025-03-07 11:45:49 +11:00
psychedelicious
58959a18cb chore: ruff 2025-03-07 08:44:15 +11:00
psychedelicious
e51588197f chore(ui): lint 2025-03-07 08:44:15 +11:00
psychedelicious
c5319ac48c feat(ui): restore new workflow button 2025-03-07 08:44:15 +11:00
psychedelicious
50657650c2 feat(ui): rough out recent workflows 2025-03-07 08:44:15 +11:00
psychedelicious
f657c95e45 chore(ui): lint 2025-03-07 08:44:15 +11:00
psychedelicious
2d3a2f9842 feat(app): add update_opened_at method for workflows
This method simply sets the `opened_at` attribute to the current time.

Previously, `opened_at` was set when calling `get`, but that is not correct. We `get` workflows often, even when not opening them, so this needs to be a separate method.
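
A minimal sketch of what such a method might look like against a SQLite-backed store; the table and column names are assumptions based on the migration shown later in this diff:

```python
import sqlite3
from datetime import datetime, timezone

def update_opened_at(conn: sqlite3.Connection, workflow_id: str) -> None:
    """Set `opened_at` to now - called only when a workflow is actually opened."""
    conn.execute(
        "UPDATE workflow_library SET opened_at = ? WHERE workflow_id = ?;",
        (datetime.now(timezone.utc).isoformat(), workflow_id),
    )
    conn.commit()
```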
2025-03-07 08:44:15 +11:00
psychedelicious
008837642e feat(ui): restore upload workflow button 2025-03-07 08:44:15 +11:00
psychedelicious
1a84a2fb7e feat(ui): restore share workflow button 2025-03-07 08:44:15 +11:00
psychedelicious
b87febcf4c chore(ui): lint 2025-03-07 08:44:15 +11:00
psychedelicious
95a9bb6c7b fix(ui): missing translation 2025-03-07 08:44:15 +11:00
psychedelicious
93ec9a048f fix(ui): workflow library overflow 2025-03-07 08:44:15 +11:00
psychedelicious
ec6cea6705 feat(ui): workflow library styling 2025-03-07 08:44:15 +11:00
psychedelicious
bfbcaad8c2 tweak(ui): workflow tag names 2025-03-07 08:44:15 +11:00
psychedelicious
3694158434 feat(ui): workflow library tags 2025-03-07 08:44:15 +11:00
psychedelicious
814fb939c0 chore: update default workflow tags 2025-03-07 08:44:15 +11:00
psychedelicious
4cb73e6c19 chore(ui): typegen 2025-03-07 08:44:15 +11:00
psychedelicious
e8aed67cf1 feat(app): add workflow library get_counts method
Get the counts of workflows for the given tags and/or categories. Made a separate method because `get_many` will deserialize all matching workflows, which is unnecessary for this use case.
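
A rough sketch of why a dedicated count is cheaper: a single `COUNT(*)` query rather than deserializing every matching row as `get_many` would. The SQL below assumes the `workflow_library` table, its `category` column, and the generated `tags` column added in migration 17:

```python
import sqlite3
from typing import Optional

def get_counts(
    conn: sqlite3.Connection,
    tags: Optional[list[str]] = None,
    categories: Optional[list[str]] = None,
) -> int:
    clauses: list[str] = []
    params: list[str] = []
    if categories:
        clauses.append(f"category IN ({','.join('?' * len(categories))})")
        params.extend(categories)
    if tags:
        # `tags` is stringified JSON; match rows containing any requested tag.
        placeholders = ",".join("?" * len(tags))
        clauses.append(
            "EXISTS (SELECT 1 FROM json_each(workflow_library.tags)"
            f" WHERE json_each.value IN ({placeholders}))"
        )
        params.extend(tags)
    where = f" WHERE {' AND '.join(clauses)}" if clauses else ""
    (count,) = conn.execute(
        f"SELECT COUNT(*) FROM workflow_library{where};", params
    ).fetchone()
    return int(count)
```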
2025-03-07 08:44:15 +11:00
psychedelicious
f56dd01419 feat(ui): workflow library infinite scrolling 2025-03-07 08:44:15 +11:00
psychedelicious
ed9cd6a7a2 feat(ui): simpler workflow action buttons 2025-03-07 08:44:15 +11:00
psychedelicious
c44c28ec4c feat(ui): workflow library modal styling 2025-03-07 08:44:15 +11:00
psychedelicious
e1f7359171 feat(ui): set up RTKQ endpoint for infinite workflows list 2025-03-07 08:44:15 +11:00
psychedelicious
3e97d49a69 chore(ui): bump RTKQ to latest to get infinite query support 2025-03-07 08:44:15 +11:00
psychedelicious
c12585e52d fix(app): incorrect number of bindings for query 2025-03-07 08:44:15 +11:00
psychedelicious
b39774a57c feat(app): add searching by tags to workflow library APIs 2025-03-07 08:44:15 +11:00
psychedelicious
8988539cd5 feat(db): add generated column for tags in db migration 2025-03-07 08:44:15 +11:00
psychedelicious
88c68e8016 tidy(app): workflow records get_many 2025-03-07 08:44:15 +11:00
psychedelicious
5073c7d0a3 fix(app): ensure workflow record get_many stmt is terminated 2025-03-07 08:44:15 +11:00
psychedelicious
84e86819b8 chore(ui): lint 2025-03-07 08:44:15 +11:00
psychedelicious
440e3e01ac fix(ui): show workflow thumbnails in library 2025-03-07 08:44:15 +11:00
psychedelicious
c2302f7ab1 fix(ui): ts issues 2025-03-07 08:44:15 +11:00
Mary Hipp
2594eed1af add comments 2025-03-07 08:44:15 +11:00
Mary Hipp
e8db1c1d5a break out actions, start on marketplace categories 2025-03-07 08:44:15 +11:00
Mary Hipp
d5c5e8e8ed another new workflow library 2025-03-07 08:44:15 +11:00
Jonathan
518a7c941f Changed version of FluxDenoiseInvocation
A Redux field was added but the node version wasn't updated.
2025-03-07 07:33:31 +11:00
psychedelicious
bdafe53f2e repo: add @jazzhaiku to codeowners for CI, app and backend 2025-03-06 10:19:18 -05:00
psychedelicious
cf0cbaf0ae chore: ruff (more) 2025-03-06 10:57:54 +11:00
psychedelicious
ac6fc6eccb chore: ruff 2025-03-06 10:57:54 +11:00
psychedelicious
07d65b8fd1 refactor(ui): workflow loading, saving and saved status tracking
This big chungus reworks and simplifies much of the logic around loading and saving workflows. It also makes some minor changes to how we store the current workflow and determine whether it is a draft, user workflow or default workflow.

---

The lower-level hooks to save a workflow have been revised:
- `useSaveLibraryWorkflow`: Saves a user or project workflow that has had changes made to it.
- `useCreateNewWorkflow`: Saves a workflow as a new entity.

A new higher-level hook `useSaveOrSaveAsWorkflow` is intended to be used by components. It returns a single function that:
- Constructs the workflow payload to be sent to the server
- Checks if the workflow is an existing user workflow. If so, it immediately saves (updates) that workflow.
- If it's not an existing user workflow, it opens the save as dialog so the user can choose a name for it and create a new workflow. This occurs for both draft workflows and loaded default workflows.
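
In pseudocode terms (the real code is a React hook; every name below is an illustrative stand-in), the returned function boils down to:

```python
def build_workflow(state) -> dict:
    # Stand-in for the real builder: serialize current editor state to a payload.
    return {"name": state.name, "nodes": state.nodes}

def save_or_save_as(workflow_state, api, open_save_as_dialog) -> None:
    payload = build_workflow(workflow_state)  # built on the spot, not from a cached atom
    if workflow_state.id is not None and workflow_state.category == "user":
        api.update_workflow(workflow_state.id, payload)  # existing user workflow: update it
    else:
        open_save_as_dialog(payload)  # draft or default workflow: save-as with a chosen name
```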

---

The logic to build the current redux state into a workflow - either to be saved as JSON, to update an existing user workflow, or to save it as a new workflow - was a bit convoluted.

Changes to redux state triggered a debounced function to build the workflow, setting it in a global nanostores atom. Then, all of the functions that consumed the "built workflow" referenced this atom.

Now, this logic is strictly imperative. When a consumer wants to save a workflow, we build it on the spot. This removes a layer of indirection.

The logic is in the `useBuildWorkflowFast` hook.

---

The logic for loading a workflow is also revised. Previously, it happened in an RTK listener. You'd need to dispatch an action to load a workflow, and wouldn't know if it succeeded or not (though the listener would make a toast if the load failed).

This is now done in a callback, outside redux middleware. The callback is returned from the `useLoadWorkflow` hook.

---

Previously, we stripped the id from default workflows when loading them. Then, when saving the workflow, we built a workflow object from redux state and hit the API with it.

This has two issues:
- It relies on redux state never having an ID set when a default workflow is loaded. If we somehow ended up with a default workflow's ID in redux, saving would fail with an error or silently not work, because you cannot save a default workflow. You can only save-as it.
- We do not know the default workflow from which the current workflow was loaded. And because we don't know the default workflow, we cannot show a thumbnail image.

The responsibilities have been shifted around a bit.

Now, when we load a workflow, we load it as-is. The default workflow IDs are saved in redux state. We can render the thumbnail, and if the user goes to save the workflow, we detect that it is a default workflow and save-as it.

---

In `App.tsx`, the long list of modals are moved into their own "isolator" component to ensure any re-renders there do not affect the rest of the app.

---

The save-workflow-as modal is restructured to be a bit simpler. Still works the same. On commercial, "save to project" will be enabled by default.

---

The workflow JSON tab uses a debounced version of "buildWorkflow" to build the workflow as JSON.

---

`buildWorkflowFast` is updated to deep-copy its _whole_ output, preventing issues where field types could accidentally get mutated. I don't think this has ever happened but we may as well be safe.

---

Fixed an issue where the edit button in the workflow list didn't open the workflow in edit mode.
2025-03-06 10:57:54 +11:00
psychedelicious
3c2e6378ca chore(ui): typegen 2025-03-06 10:57:54 +11:00
psychedelicious
445f122f37 fix(api): allow deleting a workflow even if the thumbnail file doesn't exist 2025-03-06 10:57:54 +11:00
psychedelicious
8c0ee9c48f fix(app): fix import of WorkflowThumbnailServiceBase 2025-03-06 10:57:54 +11:00
psychedelicious
0eb237ac64 feat(app): make category required on workflows
It's only by misunderstanding the pydantic API that this field was typed as optional. Workflows must _always_ have a category, and indeed they do.

Fixing this allows the generated types in the frontend to be easier to work with.
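
A before/after sketch of the pydantic change (the field name is real; the enum values are illustrative):

```python
from enum import Enum
from typing import Optional

from pydantic import BaseModel

class WorkflowCategory(str, Enum):
    User = "user"
    Default = "default"

class WorkflowBefore(BaseModel):
    category: Optional[WorkflowCategory] = None  # mistakenly optional -> generated types allow null

class WorkflowAfter(BaseModel):
    category: WorkflowCategory  # required, matching reality; generated types are cleaner
```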
2025-03-06 10:57:54 +11:00
psychedelicious
9aa04f0bea feat(app): support thumbnails for default workflow images 2025-03-06 10:57:54 +11:00
psychedelicious
76e2f41ec7 feat(app): throw as early as possible when attempting to create, update or delete a default workflow 2025-03-06 10:57:54 +11:00
psychedelicious
1353c3301a typo(app): style_preset_id -> workflow_id 2025-03-06 10:57:54 +11:00
psychedelicious
bf209663ac tidy(app): make workflow thumbnails base class an ABC, move it to own file 2025-03-06 10:57:54 +11:00
psychedelicious
04b96dd7b4 feat(app): stable default workflows
There was a bit of wonk with default workflows. On every app startup, we wiped them all out and recreated them with new IDs. This is a quick-and-dirty way to ensure default workflows are always in sync.

Unfortunately, it also means default workflows are newly-created entities on every app load. Any thumbnails associated with them will be lost (because they have new IDs), and `updated_at` doesn't work.

This change makes default workflows stable entities.

The workflows we bundle in the python package in JSON format are still the source of truth for default workflows, but the startup logic that syncs them to the user DB is a bit smarter.

- All bundled workflows have an ID. It is prefixed with "default_" for clarity.
- Any default workflows in the user's DB that are not in the bundled default workflows are deleted from the DB.
- Any bundled default workflows that are not in the user's DB are added to the DB.
- If a default workflow in the user's DB does not match the content of its corresponding bundled workflow, it is updated in the DB.

The end result is that default workflows are still kept in sync for the user, but they don't change their identity.

We may now add thumbnails to default workflows, and sorting by `updated_at` is now meaningful.
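
A sketch of that sync pass, directly following the rules above (the store methods are illustrative stand-ins, not the actual service API):

```python
def sync_default_workflows(store, bundled: dict[str, str]) -> None:
    """`bundled` maps stable IDs (prefixed "default_") to workflow JSON from the package."""
    in_db = {w.workflow_id: w.workflow for w in store.get_all(category="default")}

    # Default workflows in the DB that are no longer bundled are deleted.
    for workflow_id in in_db.keys() - bundled.keys():
        store.delete(workflow_id)

    for workflow_id, workflow_json in bundled.items():
        if workflow_id not in in_db:
            # Bundled workflows missing from the DB are added.
            store.create(workflow_id, workflow_json, category="default")
        elif in_db[workflow_id] != workflow_json:
            # Content drifted: update in place, preserving the stable identity.
            store.update(workflow_id, workflow_json)
```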
2025-03-06 10:57:54 +11:00
psychedelicious
79b2c68853 fix(ui): hide workflow thumbnail for unsaved and default workflows 2025-03-06 10:41:47 +11:00
psychedelicious
aac456527e refactor(ui): make workflow thumbnail rendering more explicit 2025-03-06 10:41:47 +11:00
psychedelicious
c88b835373 fix(ui): remove unused redux action & selector 2025-03-06 10:41:47 +11:00
Mary Hipp
9da116fd3d how to only show thumbnail for saved non-default workflows 2025-03-06 10:41:47 +11:00
Mary Hipp
201d7f1fdb fix test 2025-03-06 10:41:47 +11:00
Mary Hipp
17a5b1bd28 fix test 2025-03-06 10:41:47 +11:00
Mary Hipp
a409aec00f update schema 2025-03-06 10:41:47 +11:00
Mary Hipp
b0593eda92 ruff 2025-03-06 10:41:47 +11:00
Mary Hipp
9acb24914f tsc fix 2025-03-06 10:41:47 +11:00
Mary Hipp
ab4433da2f refactor workflow thumbnails to be separate flow/endpoints 2025-03-06 10:41:47 +11:00
Mary Hipp
d4423aa16f WIP workflow thumbnails - how to add to redux state? 2025-03-06 10:41:47 +11:00
Ryan Dick
1f6430c1b0 typegen 2025-03-06 10:31:17 +11:00
Ryan Dick
8e28888bc4 Fix SigLipPipeline model size calculation. 2025-03-06 10:31:17 +11:00
Ryan Dick
b6b21dbcbf Add model selecton fields to the FluxReduxInvocation. 2025-03-06 10:31:17 +11:00
Ryan Dick
7b48ef2264 First pass at frontend integration for FLUX Redux and SigLIP model types. 2025-03-06 10:31:17 +11:00
Ryan Dick
9c542ed655 typegen 2025-03-06 10:31:17 +11:00
Ryan Dick
4c02ba908a Add support for FLUX Redux masks. 2025-03-06 10:31:17 +11:00
Ryan Dick
82293ae3b2 Add helpful error messages when FLUX Redux starter models are not installed. 2025-03-06 10:31:17 +11:00
Ryan Dick
f1fde792ee Get FLUX Redux working: model loading and inference. 2025-03-06 10:31:17 +11:00
Ryan Dick
e82393f7ed Add FLUX Redux to starter models list. 2025-03-06 10:31:17 +11:00
Ryan Dick
d5211a8088 Add FluxRedux model type and probing logic. 2025-03-06 10:31:17 +11:00
Ryan Dick
3b095b5945 Add SigLIP starter model. 2025-03-06 10:31:17 +11:00
Ryan Dick
34959ef573 Add SigLIP model type and probing. 2025-03-06 10:31:17 +11:00
jazzhaiku
7f10f8f96a Ruff upgrade (#7741)
## Summary

Upgrade ruff version to 0.9.9 and format existing code.

2025-03-06 10:06:02 +11:00
Billy
f2689598c0 Formatting 2025-03-06 09:11:00 +11:00
Billy
551c78d9f3 Update ruff version 2025-03-06 09:10:50 +11:00
168 changed files with 5197 additions and 2013 deletions

.github/CODEOWNERS

@@ -1,12 +1,12 @@
# continuous integration
/.github/workflows/ @lstein @blessedcoolant @hipsterusername @ebr
/.github/workflows/ @lstein @blessedcoolant @hipsterusername @ebr @jazzhaiku
# documentation
/docs/ @lstein @blessedcoolant @hipsterusername @Millu
/mkdocs.yml @lstein @blessedcoolant @hipsterusername @Millu
# nodes
/invokeai/app/ @Kyle0654 @blessedcoolant @psychedelicious @brandonrising @hipsterusername
/invokeai/app/ @Kyle0654 @blessedcoolant @psychedelicious @brandonrising @hipsterusername @jazzhaiku
# installation and configuration
/pyproject.toml @lstein @blessedcoolant @hipsterusername
@@ -22,7 +22,7 @@
/invokeai/backend @blessedcoolant @psychedelicious @lstein @maryhipp @hipsterusername
# generation, model management, postprocessing
/invokeai/backend @damian0815 @lstein @blessedcoolant @gregghelt2 @StAlKeR7779 @brandonrising @ryanjdick @hipsterusername
/invokeai/backend @damian0815 @lstein @blessedcoolant @gregghelt2 @StAlKeR7779 @brandonrising @ryanjdick @hipsterusername @jazzhaiku
# front ends
/invokeai/frontend/CLI @lstein @hipsterusername


@@ -62,7 +62,7 @@ jobs:
- name: install ruff
if: ${{ steps.changed-files.outputs.python_any_changed == 'true' || inputs.always_run == true }}
run: pip install ruff==0.6.0
run: pip install ruff==0.9.9
shell: bash
- name: ruff check


@@ -36,6 +36,7 @@ from invokeai.app.services.style_preset_images.style_preset_images_disk import S
from invokeai.app.services.style_preset_records.style_preset_records_sqlite import SqliteStylePresetRecordsStorage
from invokeai.app.services.urls.urls_default import LocalUrlService
from invokeai.app.services.workflow_records.workflow_records_sqlite import SqliteWorkflowRecordsStorage
from invokeai.app.services.workflow_thumbnails.workflow_thumbnails_disk import WorkflowThumbnailFileStorageDisk
from invokeai.backend.stable_diffusion.diffusion.conditioning_data import ConditioningFieldData
from invokeai.backend.util.logging import InvokeAILogger
from invokeai.version.invokeai_version import __version__
@@ -83,6 +84,7 @@ class ApiDependencies:
model_images_folder = config.models_path
style_presets_folder = config.style_presets_path
workflow_thumbnails_folder = config.workflow_thumbnails_path
db = init_db(config=config, logger=logger, image_files=image_files)
@@ -120,6 +122,7 @@ class ApiDependencies:
workflow_records = SqliteWorkflowRecordsStorage(db=db)
style_preset_records = SqliteStylePresetRecordsStorage(db=db)
style_preset_image_files = StylePresetImageFileStorageDisk(style_presets_folder / "images")
workflow_thumbnails = WorkflowThumbnailFileStorageDisk(workflow_thumbnails_folder)
services = InvocationServices(
board_image_records=board_image_records,
@@ -147,6 +150,7 @@ class ApiDependencies:
conditioning=conditioning,
style_preset_records=style_preset_records,
style_preset_image_files=style_preset_image_files,
workflow_thumbnails=workflow_thumbnails,
)
ApiDependencies.invoker = Invoker(services)


@@ -0,0 +1,124 @@
import json
import logging
from dataclasses import dataclass
from PIL import Image
from invokeai.app.services.workflow_records.workflow_records_common import WorkflowWithoutIDValidator
@dataclass
class ExtractedMetadata:
invokeai_metadata: str | None
invokeai_workflow: str | None
invokeai_graph: str | None
def extract_metadata_from_image(
pil_image: Image.Image,
invokeai_metadata_override: str | None,
invokeai_workflow_override: str | None,
invokeai_graph_override: str | None,
logger: logging.Logger,
) -> ExtractedMetadata:
"""
Extracts the "invokeai_metadata", "invokeai_workflow", and "invokeai_graph" data embedded in the PIL Image.
These items are stored as stringified JSON in the image file's metadata, so we need to do some parsing to validate
them. Once parsed, the values are returned as they came (as strings), or None if they are not present or invalid.
In some situations, we may prefer to override the values extracted from the image file with some other values.
For example, when uploading an image via API, the client can optionally provide the metadata directly in the request,
as opposed to embedding it in the image file. In this case, the client-provided metadata will be used instead of the
metadata embedded in the image file.
Args:
pil_image: The PIL Image object.
invokeai_metadata_override: The metadata override provided by the client.
invokeai_workflow_override: The workflow override provided by the client.
invokeai_graph_override: The graph override provided by the client.
logger: The logger to use for debug logging.
Returns:
ExtractedMetadata: The extracted metadata, workflow, and graph.
"""
# The fallback value for metadata is None.
stringified_metadata: str | None = None
# Use the metadata override if provided, else attempt to extract it from the image file.
metadata_raw = invokeai_metadata_override or pil_image.info.get("invokeai_metadata", None)
# If the metadata is present in the image file, we will attempt to parse it as JSON. When we create images,
# we always store metadata as a stringified JSON dict. So, we expect it to be a string here.
if isinstance(metadata_raw, str):
try:
# Must be a JSON string
metadata_parsed = json.loads(metadata_raw)
# Must be a dict
if isinstance(metadata_parsed, dict):
# Looks good, overwrite the fallback value
stringified_metadata = metadata_raw
except Exception as e:
logger.debug(f"Failed to parse metadata for uploaded image, {e}")
pass
# We expect the workflow, if embedded in the image, to be a JSON-stringified WorkflowWithoutID. We will store it
# as a string.
workflow_raw: str | None = invokeai_workflow_override or pil_image.info.get("invokeai_workflow", None)
# The fallback value for workflow is None.
stringified_workflow: str | None = None
# If the workflow is present in the image file, we will attempt to parse it as JSON. When we create images, we
# always store workflows as a stringified JSON WorkflowWithoutID. So, we expect it to be a string here.
if isinstance(workflow_raw, str):
try:
# Validate the workflow JSON before storing it
WorkflowWithoutIDValidator.validate_json(workflow_raw)
# Looks good, overwrite the fallback value
stringified_workflow = workflow_raw
except Exception:
logger.debug("Failed to parse workflow for uploaded image")
pass
# We expect the workflow, if embedded in the image, to be a JSON-stringified Graph. We will store it as a
# string.
graph_raw: str | None = invokeai_graph_override or pil_image.info.get("invokeai_graph", None)
# The fallback value for graph is None.
stringified_graph: str | None = None
# If the graph is present in the image file, we will attempt to parse it as JSON. When we create images, we
# always store graphs as a stringified JSON Graph. So, we expect it to be a string here.
if isinstance(graph_raw, str):
try:
# TODO(psyche): Due to pydantic's handling of None values, it is possible for the graph to fail validation,
# even if it is a direct dump of a valid graph. Node fields in the graph are allowed to be unset if
# they have incoming connections, but something about the ser/de process cannot adequately handle this.
#
# In lieu of fixing the graph validation, we will just do a simple check here to see if the graph is dict
# with the correct keys. This is not a perfect solution, but it should be good enough for now.
# FIX ME: Validate the graph JSON before storing it
# Graph.model_validate_json(graph_raw)
# Crappy workaround to validate JSON
graph_parsed = json.loads(graph_raw)
if not isinstance(graph_parsed, dict):
raise ValueError("Not a dict")
if not isinstance(graph_parsed.get("nodes", None), dict):
raise ValueError("'nodes' is not a dict")
if not isinstance(graph_parsed.get("edges", None), list):
raise ValueError("'edges' is not a list")
# Looks good, overwrite the fallback value
stringified_graph = graph_raw
except Exception as e:
logger.debug(f"Failed to parse graph for uploaded image, {e}")
pass
return ExtractedMetadata(
invokeai_metadata=stringified_metadata, invokeai_workflow=stringified_workflow, invokeai_graph=stringified_graph
)


@@ -6,9 +6,10 @@ from fastapi import BackgroundTasks, Body, HTTPException, Path, Query, Request,
from fastapi.responses import FileResponse
from fastapi.routing import APIRouter
from PIL import Image
from pydantic import BaseModel, Field, JsonValue
from pydantic import BaseModel, Field
from invokeai.app.api.dependencies import ApiDependencies
from invokeai.app.api.extract_metadata_from_image import extract_metadata_from_image
from invokeai.app.invocations.fields import MetadataField
from invokeai.app.services.image_records.image_records_common import (
ImageCategory,
@@ -45,18 +46,16 @@ async def upload_image(
board_id: Optional[str] = Query(default=None, description="The board to add this image to, if any"),
session_id: Optional[str] = Query(default=None, description="The session ID associated with this upload, if any"),
crop_visible: Optional[bool] = Query(default=False, description="Whether to crop the image"),
metadata: Optional[JsonValue] = Body(
default=None, description="The metadata to associate with the image", embed=True
metadata: Optional[str] = Body(
default=None,
description="The metadata to associate with the image, must be a stringified JSON dict",
embed=True,
),
) -> ImageDTO:
"""Uploads an image"""
if not file.content_type or not file.content_type.startswith("image"):
raise HTTPException(status_code=415, detail="Not an image")
_metadata = None
_workflow = None
_graph = None
contents = await file.read()
try:
pil_image = Image.open(io.BytesIO(contents))
@@ -67,30 +66,13 @@ async def upload_image(
ApiDependencies.invoker.services.logger.error(traceback.format_exc())
raise HTTPException(status_code=415, detail="Failed to read image")
# TODO: retain non-invokeai metadata on upload?
# attempt to parse metadata from image
metadata_raw = metadata if isinstance(metadata, str) else pil_image.info.get("invokeai_metadata", None)
if isinstance(metadata_raw, str):
_metadata = metadata_raw
else:
ApiDependencies.invoker.services.logger.debug("Failed to parse metadata for uploaded image")
pass
# attempt to parse workflow from image
workflow_raw = pil_image.info.get("invokeai_workflow", None)
if isinstance(workflow_raw, str):
_workflow = workflow_raw
else:
ApiDependencies.invoker.services.logger.debug("Failed to parse workflow for uploaded image")
pass
# attempt to extract graph from image
graph_raw = pil_image.info.get("invokeai_graph", None)
if isinstance(graph_raw, str):
_graph = graph_raw
else:
ApiDependencies.invoker.services.logger.debug("Failed to parse graph for uploaded image")
pass
extracted_metadata = extract_metadata_from_image(
pil_image=pil_image,
invokeai_metadata_override=metadata,
invokeai_workflow_override=None,
invokeai_graph_override=None,
logger=ApiDependencies.invoker.services.logger,
)
try:
image_dto = ApiDependencies.invoker.services.images.create(
@@ -99,9 +81,9 @@ async def upload_image(
image_category=image_category,
session_id=session_id,
board_id=board_id,
metadata=_metadata,
workflow=_workflow,
graph=_graph,
metadata=extracted_metadata.invokeai_metadata,
workflow=extracted_metadata.invokeai_workflow,
graph=extracted_metadata.invokeai_graph,
is_intermediate=is_intermediate,
)


@@ -1,6 +1,10 @@
import io
import traceback
from typing import Optional
from fastapi import APIRouter, Body, HTTPException, Path, Query
from fastapi import APIRouter, Body, File, HTTPException, Path, Query, UploadFile
from fastapi.responses import FileResponse
from PIL import Image
from invokeai.app.api.dependencies import ApiDependencies
from invokeai.app.services.shared.pagination import PaginatedResults
@@ -10,11 +14,14 @@ from invokeai.app.services.workflow_records.workflow_records_common import (
WorkflowCategory,
WorkflowNotFoundError,
WorkflowRecordDTO,
WorkflowRecordListItemDTO,
WorkflowRecordListItemWithThumbnailDTO,
WorkflowRecordOrderBy,
WorkflowRecordWithThumbnailDTO,
WorkflowWithoutID,
)
from invokeai.app.services.workflow_thumbnails.workflow_thumbnails_common import WorkflowThumbnailFileNotFoundException
IMAGE_MAX_AGE = 31536000
workflows_router = APIRouter(prefix="/v1/workflows", tags=["workflows"])
@@ -22,15 +29,17 @@ workflows_router = APIRouter(prefix="/v1/workflows", tags=["workflows"])
"/i/{workflow_id}",
operation_id="get_workflow",
responses={
200: {"model": WorkflowRecordDTO},
200: {"model": WorkflowRecordWithThumbnailDTO},
},
)
async def get_workflow(
workflow_id: str = Path(description="The workflow to get"),
) -> WorkflowRecordDTO:
) -> WorkflowRecordWithThumbnailDTO:
"""Gets a workflow"""
try:
return ApiDependencies.invoker.services.workflow_records.get(workflow_id)
thumbnail_url = ApiDependencies.invoker.services.workflow_thumbnails.get_url(workflow_id)
workflow = ApiDependencies.invoker.services.workflow_records.get(workflow_id)
return WorkflowRecordWithThumbnailDTO(thumbnail_url=thumbnail_url, **workflow.model_dump())
except WorkflowNotFoundError:
raise HTTPException(status_code=404, detail="Workflow not found")
@@ -57,6 +66,11 @@ async def delete_workflow(
workflow_id: str = Path(description="The workflow to delete"),
) -> None:
"""Deletes a workflow"""
try:
ApiDependencies.invoker.services.workflow_thumbnails.delete(workflow_id)
except WorkflowThumbnailFileNotFoundException:
# It's OK if the workflow has no thumbnail file. We can still delete the workflow.
pass
ApiDependencies.invoker.services.workflow_records.delete(workflow_id)
@@ -78,7 +92,7 @@ async def create_workflow(
"/",
operation_id="list_workflows",
responses={
200: {"model": PaginatedResults[WorkflowRecordListItemDTO]},
200: {"model": PaginatedResults[WorkflowRecordListItemWithThumbnailDTO]},
},
)
async def list_workflows(
@@ -88,10 +102,141 @@ async def list_workflows(
default=WorkflowRecordOrderBy.Name, description="The attribute to order by"
),
direction: SQLiteDirection = Query(default=SQLiteDirection.Ascending, description="The direction to order by"),
category: WorkflowCategory = Query(default=WorkflowCategory.User, description="The category of workflow to get"),
categories: Optional[list[WorkflowCategory]] = Query(default=None, description="The categories of workflow to get"),
tags: Optional[list[str]] = Query(default=None, description="The tags of workflow to get"),
query: Optional[str] = Query(default=None, description="The text to query by (matches name and description)"),
) -> PaginatedResults[WorkflowRecordListItemDTO]:
) -> PaginatedResults[WorkflowRecordListItemWithThumbnailDTO]:
"""Gets a page of workflows"""
return ApiDependencies.invoker.services.workflow_records.get_many(
order_by=order_by, direction=direction, page=page, per_page=per_page, query=query, category=category
workflows_with_thumbnails: list[WorkflowRecordListItemWithThumbnailDTO] = []
workflows = ApiDependencies.invoker.services.workflow_records.get_many(
order_by=order_by,
direction=direction,
page=page,
per_page=per_page,
query=query,
categories=categories,
tags=tags,
)
for workflow in workflows.items:
workflows_with_thumbnails.append(
WorkflowRecordListItemWithThumbnailDTO(
thumbnail_url=ApiDependencies.invoker.services.workflow_thumbnails.get_url(workflow.workflow_id),
**workflow.model_dump(),
)
)
return PaginatedResults[WorkflowRecordListItemWithThumbnailDTO](
items=workflows_with_thumbnails,
total=workflows.total,
page=workflows.page,
pages=workflows.pages,
per_page=workflows.per_page,
)
@workflows_router.put(
"/i/{workflow_id}/thumbnail",
operation_id="set_workflow_thumbnail",
responses={
200: {"model": WorkflowRecordDTO},
},
)
async def set_workflow_thumbnail(
workflow_id: str = Path(description="The workflow to update"),
image: UploadFile = File(description="The image file to upload"),
):
"""Sets a workflow's thumbnail image"""
try:
ApiDependencies.invoker.services.workflow_records.get(workflow_id)
except WorkflowNotFoundError:
raise HTTPException(status_code=404, detail="Workflow not found")
if not image.content_type or not image.content_type.startswith("image"):
raise HTTPException(status_code=415, detail="Not an image")
contents = await image.read()
try:
pil_image = Image.open(io.BytesIO(contents))
except Exception:
ApiDependencies.invoker.services.logger.error(traceback.format_exc())
raise HTTPException(status_code=415, detail="Failed to read image")
try:
ApiDependencies.invoker.services.workflow_thumbnails.save(workflow_id, pil_image)
except Exception as e:
raise HTTPException(status_code=500, detail=str(e))
@workflows_router.delete(
"/i/{workflow_id}/thumbnail",
operation_id="delete_workflow_thumbnail",
responses={
200: {"model": WorkflowRecordDTO},
},
)
async def delete_workflow_thumbnail(
workflow_id: str = Path(description="The workflow to update"),
):
"""Removes a workflow's thumbnail image"""
try:
ApiDependencies.invoker.services.workflow_records.get(workflow_id)
except WorkflowNotFoundError:
raise HTTPException(status_code=404, detail="Workflow not found")
try:
ApiDependencies.invoker.services.workflow_thumbnails.delete(workflow_id)
except ValueError as e:
raise HTTPException(status_code=500, detail=str(e))
@workflows_router.get(
"/i/{workflow_id}/thumbnail",
operation_id="get_workflow_thumbnail",
responses={
200: {
"description": "The workflow thumbnail was fetched successfully",
},
400: {"description": "Bad request"},
404: {"description": "The workflow thumbnail could not be found"},
},
status_code=200,
)
async def get_workflow_thumbnail(
workflow_id: str = Path(description="The id of the workflow thumbnail to get"),
) -> FileResponse:
"""Gets a workflow's thumbnail image"""
try:
path = ApiDependencies.invoker.services.workflow_thumbnails.get_path(workflow_id)
response = FileResponse(
path,
media_type="image/png",
filename=workflow_id + ".png",
content_disposition_type="inline",
)
response.headers["Cache-Control"] = f"max-age={IMAGE_MAX_AGE}"
return response
except Exception:
raise HTTPException(status_code=404)
@workflows_router.get("/counts", operation_id="get_counts")
async def get_counts(
tags: Optional[list[str]] = Query(default=None, description="The tags to include"),
categories: Optional[list[WorkflowCategory]] = Query(default=None, description="The categories to include"),
) -> int:
"""Gets a the count of workflows that include the specified tags and categories"""
return ApiDependencies.invoker.services.workflow_records.get_counts(tags=tags, categories=categories)
@workflows_router.put(
"/i/{workflow_id}/opened_at",
operation_id="update_opened_at",
)
async def update_opened_at(
workflow_id: str = Path(description="The workflow to update"),
) -> None:
"""Updates the opened_at field of a workflow"""
ApiDependencies.invoker.services.workflow_records.update_opened_at(workflow_id)


@@ -417,7 +417,7 @@ def validate_fields(model_fields: dict[str, FieldInfo], model_type: str) -> None
ui_type = field.json_schema_extra.get("ui_type", None)
if isinstance(ui_type, str) and ui_type.startswith("DEPRECATED_"):
logger.warn(f"\"UIType.{ui_type.split('_')[-1]}\" is deprecated, ignoring")
logger.warn(f'"UIType.{ui_type.split("_")[-1]}" is deprecated, ignoring')
field.json_schema_extra.pop("ui_type")
return None


@@ -513,7 +513,7 @@ def log_tokenization_for_text(
usedTokens += 1
if usedTokens > 0:
print(f'\n>> [TOKENLOG] Tokens {display_label or ""} ({usedTokens}):')
print(f"\n>> [TOKENLOG] Tokens {display_label or ''} ({usedTokens}):")
print(f"{tokenized}\x1b[0m")
if discarded != "":


@@ -1,64 +0,0 @@
"""
Invoke-managed custom node loader. See README.md for more information.
"""
import sys
import traceback
from importlib.util import module_from_spec, spec_from_file_location
from pathlib import Path
from invokeai.backend.util.logging import InvokeAILogger
logger = InvokeAILogger.get_logger()
loaded_packs: list[str] = []
failed_packs: list[str] = []
custom_nodes_dir = Path(__file__).parent
for d in custom_nodes_dir.iterdir():
# skip files
if not d.is_dir():
continue
# skip hidden directories
if d.name.startswith("_") or d.name.startswith("."):
continue
# skip directories without an `__init__.py`
init = d / "__init__.py"
if not init.exists():
continue
module_name = init.parent.stem
# skip if already imported
if module_name in globals():
continue
# load the module, appending a suffix to identify it as a custom node pack
spec = spec_from_file_location(module_name, init.absolute())
if spec is None or spec.loader is None:
logger.warn(f"Could not load {init}")
continue
logger.info(f"Loading node pack {module_name}")
try:
module = module_from_spec(spec)
sys.modules[spec.name] = module
spec.loader.exec_module(module)
loaded_packs.append(module_name)
except Exception:
failed_packs.append(module_name)
full_error = traceback.format_exc()
logger.error(f"Failed to load node pack {module_name} (may have partially loaded):\n{full_error}")
del init, module_name
loaded_count = len(loaded_packs)
if loaded_count > 0:
logger.info(
f"Loaded {loaded_count} node pack{'s' if loaded_count != 1 else ''} from {custom_nodes_dir}: {', '.join(loaded_packs)}"
)


@@ -57,6 +57,8 @@ class UIType(str, Enum, metaclass=MetaEnum):
CLIPGEmbedModel = "CLIPGEmbedModelField"
SpandrelImageToImageModel = "SpandrelImageToImageModelField"
ControlLoRAModel = "ControlLoRAModelField"
SigLipModel = "SigLipModelField"
FluxReduxModel = "FluxReduxModelField"
# endregion
# region Misc Field Types
@@ -152,6 +154,7 @@ class FieldDescriptions:
sdxl_refiner_model = "SDXL Refiner Main Modde (UNet, VAE, CLIP2) to load"
onnx_main_model = "ONNX Main model (UNet, VAE, CLIP) to load"
spandrel_image_to_image_model = "Image-to-Image model"
vllm_model = "VLLM model"
lora_weight = "The weight at which the LoRA is applied to each model"
compel_prompt = "Prompt to be parsed by Compel to create a conditioning tensor"
raw_prompt = "Raw prompt text (no parsing)"
@@ -201,6 +204,7 @@ class FieldDescriptions:
freeu_b1 = "Scaling factor for stage 1 to amplify the contributions of backbone features."
freeu_b2 = "Scaling factor for stage 2 to amplify the contributions of backbone features."
instantx_control_mode = "The control mode for InstantX ControlNet union models. Ignored for other ControlNet models. The standard mapping is: canny (0), tile (1), depth (2), blur (3), pose (4), gray (5), low quality (6). Negative values will be treated as 'None'."
flux_redux_conditioning = "FLUX Redux conditioning tensor"
class ImageField(BaseModel):
@@ -259,6 +263,17 @@ class FluxConditioningField(BaseModel):
)
class FluxReduxConditioningField(BaseModel):
"""A FLUX Redux conditioning tensor primitive value"""
conditioning: TensorField = Field(description="The Redux image conditioning tensor.")
mask: Optional[TensorField] = Field(
default=None,
description="The mask associated with this conditioning tensor. Excluded regions should be set to False, "
"included regions should be set to True.",
)
class SD3ConditioningField(BaseModel):
"""A conditioning tensor primitive value"""


@@ -15,6 +15,7 @@ from invokeai.app.invocations.fields import (
DenoiseMaskField,
FieldDescriptions,
FluxConditioningField,
FluxReduxConditioningField,
ImageField,
Input,
InputField,
@@ -46,7 +47,7 @@ from invokeai.backend.flux.sampling_utils import (
pack,
unpack,
)
from invokeai.backend.flux.text_conditioning import FluxTextConditioning
from invokeai.backend.flux.text_conditioning import FluxReduxConditioning, FluxTextConditioning
from invokeai.backend.model_manager.config import ModelFormat
from invokeai.backend.patches.layer_patcher import LayerPatcher
from invokeai.backend.patches.lora_conversions.flux_lora_constants import FLUX_LORA_TRANSFORMER_PREFIX
@@ -61,7 +62,7 @@ from invokeai.backend.util.devices import TorchDevice
title="FLUX Denoise",
tags=["image", "flux"],
category="image",
version="3.2.2",
version="3.2.3",
classification=Classification.Prototype,
)
class FluxDenoiseInvocation(BaseInvocation, WithMetadata, WithBoard):
@@ -103,6 +104,11 @@ class FluxDenoiseInvocation(BaseInvocation, WithMetadata, WithBoard):
description="Negative conditioning tensor. Can be None if cfg_scale is 1.0.",
input=Input.Connection,
)
redux_conditioning: FluxReduxConditioningField | list[FluxReduxConditioningField] | None = InputField(
default=None,
description="FLUX Redux conditioning tensor.",
input=Input.Connection,
)
cfg_scale: float | list[float] = InputField(default=1.0, description=FieldDescriptions.cfg_scale, title="CFG Scale")
cfg_scale_start_step: int = InputField(
default=0,
@@ -190,11 +196,23 @@ class FluxDenoiseInvocation(BaseInvocation, WithMetadata, WithBoard):
dtype=inference_dtype,
device=TorchDevice.choose_torch_device(),
)
redux_conditionings: list[FluxReduxConditioning] = self._load_redux_conditioning(
context=context,
redux_cond_field=self.redux_conditioning,
packed_height=packed_h,
packed_width=packed_w,
device=TorchDevice.choose_torch_device(),
dtype=inference_dtype,
)
pos_regional_prompting_extension = RegionalPromptingExtension.from_text_conditioning(
pos_text_conditionings, img_seq_len=packed_h * packed_w
text_conditioning=pos_text_conditionings,
redux_conditioning=redux_conditionings,
img_seq_len=packed_h * packed_w,
)
neg_regional_prompting_extension = (
RegionalPromptingExtension.from_text_conditioning(neg_text_conditionings, img_seq_len=packed_h * packed_w)
RegionalPromptingExtension.from_text_conditioning(
text_conditioning=neg_text_conditionings, redux_conditioning=[], img_seq_len=packed_h * packed_w
)
if neg_text_conditionings
else None
)
@@ -400,6 +418,42 @@ class FluxDenoiseInvocation(BaseInvocation, WithMetadata, WithBoard):
return text_conditionings
def _load_redux_conditioning(
self,
context: InvocationContext,
redux_cond_field: FluxReduxConditioningField | list[FluxReduxConditioningField] | None,
packed_height: int,
packed_width: int,
device: torch.device,
dtype: torch.dtype,
) -> list[FluxReduxConditioning]:
# Normalize to a list of FluxReduxConditioningFields.
if redux_cond_field is None:
return []
redux_cond_list = (
[redux_cond_field] if isinstance(redux_cond_field, FluxReduxConditioningField) else redux_cond_field
)
redux_conditionings: list[FluxReduxConditioning] = []
for redux_cond_field in redux_cond_list:
# Load the Redux conditioning tensor.
redux_cond_data = context.tensors.load(redux_cond_field.conditioning.tensor_name)
redux_cond_data.to(device=device, dtype=dtype)
# Load the mask, if provided.
mask: Optional[torch.Tensor] = None
if redux_cond_field.mask is not None:
mask = context.tensors.load(redux_cond_field.mask.tensor_name)
mask = mask.to(device=device)
mask = RegionalPromptingExtension.preprocess_regional_prompt_mask(
mask, packed_height, packed_width, dtype, device
)
redux_conditionings.append(FluxReduxConditioning(redux_embeddings=redux_cond_data, mask=mask))
return redux_conditionings
@classmethod
def prep_cfg_scale(
cls, cfg_scale: float | list[float], timesteps: list[float], cfg_scale_start_step: int, cfg_scale_end_step: int


@@ -0,0 +1,119 @@
from typing import Optional
import torch
from PIL import Image
from invokeai.app.invocations.baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
Classification,
invocation,
invocation_output,
)
from invokeai.app.invocations.fields import (
FieldDescriptions,
FluxReduxConditioningField,
InputField,
OutputField,
TensorField,
UIType,
)
from invokeai.app.invocations.model import ModelIdentifierField
from invokeai.app.invocations.primitives import ImageField
from invokeai.app.services.model_records.model_records_base import ModelRecordChanges
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.flux.redux.flux_redux_model import FluxReduxModel
from invokeai.backend.model_manager.config import AnyModelConfig, BaseModelType, ModelType
from invokeai.backend.model_manager.starter_models import siglip
from invokeai.backend.sig_lip.sig_lip_pipeline import SigLipPipeline
from invokeai.backend.util.devices import TorchDevice
@invocation_output("flux_redux_output")
class FluxReduxOutput(BaseInvocationOutput):
"""The conditioning output of a FLUX Redux invocation."""
redux_cond: FluxReduxConditioningField = OutputField(
description=FieldDescriptions.flux_redux_conditioning, title="Conditioning"
)
@invocation(
"flux_redux",
title="FLUX Redux",
tags=["ip_adapter", "control"],
category="ip_adapter",
version="2.0.0",
classification=Classification.Prototype,
)
class FluxReduxInvocation(BaseInvocation):
"""Runs a FLUX Redux model to generate a conditioning tensor."""
image: ImageField = InputField(description="The FLUX Redux image prompt.")
mask: Optional[TensorField] = InputField(
default=None,
description="The bool mask associated with this FLUX Redux image prompt. Excluded regions should be set to "
"False, included regions should be set to True.",
)
redux_model: ModelIdentifierField = InputField(
description="The FLUX Redux model to use.",
title="FLUX Redux Model",
ui_type=UIType.FluxReduxModel,
)
def invoke(self, context: InvocationContext) -> FluxReduxOutput:
image = context.images.get_pil(self.image.image_name, "RGB")
encoded_x = self._siglip_encode(context, image)
redux_conditioning = self._flux_redux_encode(context, encoded_x)
tensor_name = context.tensors.save(redux_conditioning)
return FluxReduxOutput(
redux_cond=FluxReduxConditioningField(conditioning=TensorField(tensor_name=tensor_name), mask=self.mask)
)
@torch.no_grad()
def _siglip_encode(self, context: InvocationContext, image: Image.Image) -> torch.Tensor:
siglip_model_config = self._get_siglip_model(context)
with context.models.load(siglip_model_config.key).model_on_device() as (_, siglip_pipeline):
assert isinstance(siglip_pipeline, SigLipPipeline)
return siglip_pipeline.encode_image(
x=image, device=TorchDevice.choose_torch_device(), dtype=TorchDevice.choose_torch_dtype()
)
@torch.no_grad()
def _flux_redux_encode(self, context: InvocationContext, encoded_x: torch.Tensor) -> torch.Tensor:
with context.models.load(self.redux_model).model_on_device() as (_, flux_redux):
assert isinstance(flux_redux, FluxReduxModel)
dtype = next(flux_redux.parameters()).dtype
encoded_x = encoded_x.to(dtype=dtype)
return flux_redux(encoded_x)
def _get_siglip_model(self, context: InvocationContext) -> AnyModelConfig:
siglip_models = context.models.search_by_attrs(name=siglip.name, base=BaseModelType.Any, type=ModelType.SigLIP)
if not len(siglip_models) > 0:
context.logger.warning(
f"The SigLIP model required by FLUX Redux ({siglip.name}) is not installed. Downloading and installing now. This may take a while."
)
# TODO(psyche): Can the probe reliably determine the type of the model? Just hardcoding it bc I don't want to experiment now
config_overrides = ModelRecordChanges(name=siglip.name, type=ModelType.SigLIP)
# Queue the job
job = context._services.model_manager.install.heuristic_import(siglip.source, config=config_overrides)
# Wait for up to 10 minutes - model is ~3.5GB
context._services.model_manager.install.wait_for_job(job, timeout=600)
siglip_models = context.models.search_by_attrs(
name=siglip.name,
base=BaseModelType.Any,
type=ModelType.SigLIP,
)
if len(siglip_models) == 0:
context.logger.error("Error while fetching SigLIP for FLUX Redux")
assert len(siglip_models) == 1
return siglip_models[0]


@@ -1,40 +1,83 @@
import logging
import shutil
import sys
import traceback
from importlib.util import module_from_spec, spec_from_file_location
from pathlib import Path
def load_custom_nodes(custom_nodes_path: Path):
def load_custom_nodes(custom_nodes_path: Path, logger: logging.Logger):
"""
Loads all custom nodes from the custom_nodes_path directory.
This function copies a custom __init__.py file to the custom_nodes_path directory, effectively turning it into a
python module.
If custom_nodes_path does not exist, it creates it.
The custom __init__.py file itself imports all the custom node packs as python modules from the custom_nodes_path
directory.
It also copies the custom_nodes/README.md file to the custom_nodes_path directory. Because this file may change,
it is _always_ copied to the custom_nodes_path directory.
Then, the custom __init__.py file is programmatically imported using importlib. As it executes, it imports all the
custom node packs as python modules.
Then, it crawls the custom_nodes_path directory and imports all top-level directories as python modules.
If the directory does not contain an __init__.py file or starts with an `_` or `.`, it is skipped.
"""
# create the custom nodes directory if it does not exist
custom_nodes_path.mkdir(parents=True, exist_ok=True)
custom_nodes_init_path = str(custom_nodes_path / "__init__.py")
custom_nodes_readme_path = str(custom_nodes_path / "README.md")
# Copy the README file to the custom nodes directory
source_custom_nodes_readme_path = Path(__file__).parent / "custom_nodes/README.md"
target_custom_nodes_readme_path = Path(custom_nodes_path) / "README.md"
# copy our custom nodes __init__.py to the custom nodes directory
shutil.copy(Path(__file__).parent / "custom_nodes/init.py", custom_nodes_init_path)
shutil.copy(Path(__file__).parent / "custom_nodes/README.md", custom_nodes_readme_path)
# copy our custom nodes README to the custom nodes directory
shutil.copy(source_custom_nodes_readme_path, target_custom_nodes_readme_path)
# set the same permissions as the destination directory, in case our source is read-only,
# so that the files are user-writable
for p in custom_nodes_path.glob("**/*"):
p.chmod(custom_nodes_path.stat().st_mode)
loaded_packs: list[str] = []
failed_packs: list[str] = []
# Import custom nodes, see https://docs.python.org/3/library/importlib.html#importing-programmatically
spec = spec_from_file_location("custom_nodes", custom_nodes_init_path)
if spec is None or spec.loader is None:
raise RuntimeError(f"Could not load custom nodes from {custom_nodes_init_path}")
module = module_from_spec(spec)
sys.modules[spec.name] = module
spec.loader.exec_module(module)
for d in custom_nodes_path.iterdir():
# skip files
if not d.is_dir():
continue
# skip hidden directories
if d.name.startswith("_") or d.name.startswith("."):
continue
# skip directories without an `__init__.py`
init = d / "__init__.py"
if not init.exists():
continue
module_name = init.parent.stem
# skip if already imported
if module_name in globals():
continue
# load the module
spec = spec_from_file_location(module_name, init.absolute())
if spec is None or spec.loader is None:
logger.warning(f"Could not load {init}")
continue
logger.info(f"Loading node pack {module_name}")
try:
module = module_from_spec(spec)
sys.modules[spec.name] = module
spec.loader.exec_module(module)
loaded_packs.append(module_name)
except Exception:
failed_packs.append(module_name)
full_error = traceback.format_exc()
logger.error(f"Failed to load node pack {module_name} (may have partially loaded):\n{full_error}")
del init, module_name
loaded_count = len(loaded_packs)
if loaded_count > 0:
logger.info(
f"Loaded {loaded_count} node pack{'s' if loaded_count != 1 else ''} from {custom_nodes_path}: {', '.join(loaded_packs)}"
)


@@ -185,9 +185,9 @@ class SegmentAnythingInvocation(BaseInvocation):
# Find the largest mask.
return [max(masks, key=lambda x: float(x.sum()))]
elif self.mask_filter == "highest_box_score":
assert (
bounding_boxes is not None
), "Bounding boxes must be provided to use the 'highest_box_score' mask filter."
assert bounding_boxes is not None, (
"Bounding boxes must be provided to use the 'highest_box_score' mask filter."
)
assert len(masks) == len(bounding_boxes)
# Find the index of the bounding box with the highest score.
# Note that we fallback to -1.0 if the score is None. This is mainly to satisfy the type checker. In most


@@ -59,7 +59,7 @@ def run_app() -> None:
# Load custom nodes. This must be done after importing the Graph class, which itself imports all modules from the
# invocations module. The ordering here is implicit, but important - we want to load custom nodes after all the
# core nodes have been imported so that we can catch when a custom node clobbers a core node.
load_custom_nodes(custom_nodes_path=app_config.custom_nodes_path)
load_custom_nodes(custom_nodes_path=app_config.custom_nodes_path, logger=logger)
# Start the server.
config = uvicorn.Config(


@@ -72,6 +72,7 @@ class InvokeAIAppConfig(BaseSettings):
outputs_dir: Path to directory for outputs.
custom_nodes_dir: Path to directory for custom nodes.
style_presets_dir: Path to directory for style presets.
workflow_thumbnails_dir: Path to directory for workflow thumbnails.
log_handlers: Log handler. Valid options are "console", "file=<path>", "syslog=path|address:host:port", "http=<url>".
log_format: Log format. Use "plain" for text-only, "color" for colorized output, "legacy" for 2.3-style logging and "syslog" for syslog-style.<br>Valid values: `plain`, `color`, `syslog`, `legacy`
log_level: Emit logging messages at this level or higher.<br>Valid values: `debug`, `info`, `warning`, `error`, `critical`
@@ -142,6 +143,7 @@ class InvokeAIAppConfig(BaseSettings):
outputs_dir: Path = Field(default=Path("outputs"), description="Path to directory for outputs.")
custom_nodes_dir: Path = Field(default=Path("nodes"), description="Path to directory for custom nodes.")
style_presets_dir: Path = Field(default=Path("style_presets"), description="Path to directory for style presets.")
workflow_thumbnails_dir: Path = Field(default=Path("workflow_thumbnails"), description="Path to directory for workflow thumbnails.")
# LOGGING
log_handlers: list[str] = Field(default=["console"], description='Log handler. Valid options are "console", "file=<path>", "syslog=path|address:host:port", "http=<url>".')
@@ -304,6 +306,11 @@ class InvokeAIAppConfig(BaseSettings):
"""Path to the style presets directory, resolved to an absolute path.."""
return self._resolve(self.style_presets_dir)
@property
def workflow_thumbnails_path(self) -> Path:
"""Path to the workflow thumbnails directory, resolved to an absolute path.."""
return self._resolve(self.workflow_thumbnails_dir)
@property
def convert_cache_path(self) -> Path:
"""Path to the converted cache models directory, resolved to an absolute path.."""
@@ -476,9 +483,9 @@ def load_and_migrate_config(config_path: Path) -> InvokeAIAppConfig:
try:
# Meta is not included in the model fields, so we need to validate it separately
config = InvokeAIAppConfig.model_validate(loaded_config_dict)
assert (
config.schema_version == CONFIG_SCHEMA_VERSION
), f"Invalid schema version, expected {CONFIG_SCHEMA_VERSION}: {config.schema_version}"
assert config.schema_version == CONFIG_SCHEMA_VERSION, (
f"Invalid schema version, expected {CONFIG_SCHEMA_VERSION}: {config.schema_version}"
)
return config
except Exception as e:
raise RuntimeError(f"Failed to load config file {config_path}: {e}") from e
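A quick sketch of the new setting (assuming a default root; relative directories resolve against it):

from invokeai.app.services.config.config_default import InvokeAIAppConfig

config = InvokeAIAppConfig()
# workflow_thumbnails_dir defaults to "workflow_thumbnails" and resolves
# against the runtime root via the property added above.
print(config.workflow_thumbnails_path)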

View File

@@ -32,6 +32,7 @@ if TYPE_CHECKING:
from invokeai.app.services.session_queue.session_queue_base import SessionQueueBase
from invokeai.app.services.urls.urls_base import UrlServiceBase
from invokeai.app.services.workflow_records.workflow_records_base import WorkflowRecordsStorageBase
from invokeai.app.services.workflow_thumbnails.workflow_thumbnails_base import WorkflowThumbnailServiceBase
from invokeai.backend.stable_diffusion.diffusion.conditioning_data import ConditioningFieldData
@@ -65,6 +66,7 @@ class InvocationServices:
conditioning: "ObjectSerializerBase[ConditioningFieldData]",
style_preset_records: "StylePresetRecordsStorageBase",
style_preset_image_files: "StylePresetImageFileStorageBase",
workflow_thumbnails: "WorkflowThumbnailServiceBase",
):
self.board_images = board_images
self.board_image_records = board_image_records
@@ -91,3 +93,4 @@ class InvocationServices:
self.conditioning = conditioning
self.style_preset_records = style_preset_records
self.style_preset_image_files = style_preset_image_files
self.workflow_thumbnails = workflow_thumbnails

View File

@@ -19,6 +19,7 @@ from invokeai.app.services.shared.sqlite_migrator.migrations.migration_13 import
from invokeai.app.services.shared.sqlite_migrator.migrations.migration_14 import build_migration_14
from invokeai.app.services.shared.sqlite_migrator.migrations.migration_15 import build_migration_15
from invokeai.app.services.shared.sqlite_migrator.migrations.migration_16 import build_migration_16
from invokeai.app.services.shared.sqlite_migrator.migrations.migration_17 import build_migration_17
from invokeai.app.services.shared.sqlite_migrator.sqlite_migrator_impl import SqliteMigrator
@@ -55,6 +56,7 @@ def init_db(config: InvokeAIAppConfig, logger: Logger, image_files: ImageFileSto
migrator.register_migration(build_migration_14())
migrator.register_migration(build_migration_15())
migrator.register_migration(build_migration_16())
migrator.register_migration(build_migration_17())
migrator.run_migrations()
return db

View File

@@ -0,0 +1,35 @@
import sqlite3
from invokeai.app.services.shared.sqlite_migrator.sqlite_migrator_common import Migration
class Migration17Callback:
def __call__(self, cursor: sqlite3.Cursor) -> None:
self._add_workflows_tags_col(cursor)
def _add_workflows_tags_col(self, cursor: sqlite3.Cursor) -> None:
"""
- Adds `tags` column to the workflow_library table. It is a generated column that extracts the tags from the
workflow JSON.
"""
cursor.execute(
"ALTER TABLE workflow_library ADD COLUMN tags TEXT GENERATED ALWAYS AS (json_extract(workflow, '$.tags')) VIRTUAL;"
)
def build_migration_17() -> Migration:
"""
Build the migration from database version 16 to 17.
This migration does the following:
- Adds `tags` column to the workflow_library table. It is a generated column that extracts the tags from the
workflow JSON.
"""
migration_17 = Migration(
from_version=16,
to_version=17,
callback=Migration17Callback(),
)
return migration_17
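A standalone sketch (in-memory database, simplified schema) of what the generated column does:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE workflow_library (workflow_id TEXT PRIMARY KEY, workflow TEXT)")
conn.execute(
    "ALTER TABLE workflow_library ADD COLUMN tags TEXT "
    "GENERATED ALWAYS AS (json_extract(workflow, '$.tags')) VIRTUAL;"
)
conn.execute(
    "INSERT INTO workflow_library VALUES (?, ?)",
    ("wf_1", '{"name": "demo", "tags": "text2image, flux"}'),
)
# The tags column tracks the JSON automatically; no write-path changes are needed.
print(conn.execute("SELECT tags FROM workflow_library").fetchone())  # ('text2image, flux',)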

View File

@@ -18,3 +18,8 @@ class UrlServiceBase(ABC):
def get_style_preset_image_url(self, style_preset_id: str) -> str:
"""Gets the URL for a style preset image"""
pass
@abstractmethod
def get_workflow_thumbnail_url(self, workflow_id: str) -> str:
"""Gets the URL for a workflow thumbnail"""
pass

View File

@@ -22,3 +22,6 @@ class LocalUrlService(UrlServiceBase):
def get_style_preset_image_url(self, style_preset_id: str) -> str:
return f"{self._base_url}/style_presets/i/{style_preset_id}/image"
def get_workflow_thumbnail_url(self, workflow_id: str) -> str:
return f"{self._base_url}/workflows/i/{workflow_id}/thumbnail"

View File

@@ -1,10 +1,11 @@
{
"id": "default_686bb1d0-d086-4c70-9fa3-2f600b922023",
"name": "ESRGAN Upscaling with Canny ControlNet",
"author": "InvokeAI",
"description": "Sample workflow for using Upscaling with ControlNet with SD1.5",
"version": "2.1.0",
"contact": "invoke@invoke.ai",
"tags": "upscale, controlnet, default",
"tags": "upscaling, controlnet, default",
"notes": "",
"exposedFields": [
{

View File

@@ -1,10 +1,11 @@
{
"id": "default_cbf0e034-7b54-4b2c-b670-3b1e2e4b4a88",
"name": "FLUX Image to Image",
"author": "InvokeAI",
"description": "A simple image-to-image workflow using a FLUX dev model. ",
"version": "1.1.0",
"contact": "",
"tags": "image2image, flux, image-to-image",
"tags": "image2image, flux, image-to-image, image to image",
"notes": "Prerequisite model downloads: T5 Encoder, CLIP-L Encoder, and FLUX VAE. Quantized and un-quantized versions can be found in the starter models tab within your Model Manager. We recommend using FLUX dev models for image-to-image workflows. The image-to-image performance with FLUX schnell models is poor.",
"exposedFields": [
{

View File

@@ -1,4 +1,5 @@
{
"id": "default_dec5a2e9-f59c-40d9-8869-a056751d79b8",
"name": "Face Detailer with IP-Adapter & Canny (See Note in Details)",
"author": "kosmoskatten",
"description": "A workflow to add detail to and improve faces. This workflow is most effective when used with a model that creates realistic outputs. ",
@@ -1445,4 +1446,4 @@
"targetHandle": "vae"
}
]
}
}

View File

@@ -1,10 +1,11 @@
{
"id": "default_444fe292-896b-44fd-bfc6-c0b5d220fffc",
"name": "FLUX Text to Image",
"author": "InvokeAI",
"description": "A simple text-to-image workflow using FLUX dev or schnell models.",
"version": "1.1.0",
"contact": "",
"tags": "text2image, flux",
"tags": "text2image, flux, text to image",
"notes": "Prerequisite model downloads: T5 Encoder, CLIP-L Encoder, and FLUX VAE. Quantized and un-quantized versions can be found in the starter models tab within your Model Manager. We recommend 4 steps for FLUX schnell models and 30 steps for FLUX dev models.",
"exposedFields": [
{

View File

@@ -1,4 +1,5 @@
{
"id": "default_2d05e719-a6b9-4e64-9310-b875d3b2f9d2",
"name": "Multi ControlNet (Canny & Depth)",
"author": "InvokeAI",
"description": "A sample workflow using canny & depth ControlNets to guide the generation process. ",
@@ -1014,4 +1015,4 @@
"targetHandle": "image_resolution"
}
]
}
}

View File

@@ -1,4 +1,5 @@
{
"id": "default_f96e794f-eb3e-4d01-a960-9b4e43402bcf",
"name": "MultiDiffusion SD1.5",
"author": "Invoke",
"description": "A workflow to upscale an input image with tiled upscaling, using SD1.5 based models.",
@@ -52,7 +53,6 @@
"version": "3.0.0",
"category": "default"
},
"id": "e5b5fb01-8906-463a-963a-402dbc42f79b",
"nodes": [
{
"id": "33fe76a0-5efd-4482-a7f0-e2abf1223dc2",
@@ -1427,4 +1427,4 @@
"targetHandle": "noise"
}
]
}
}

View File

@@ -1,4 +1,5 @@
{
"id": "default_35658541-6d41-4a20-8ec5-4bf2561faed0",
"name": "MultiDiffusion SDXL",
"author": "Invoke",
"description": "A workflow to upscale an input image with tiled upscaling, using SDXL based models.",
@@ -56,7 +57,6 @@
"version": "3.0.0",
"category": "default"
},
"id": "dd607062-9e1b-48b9-89ad-9762cdfbb8f4",
"nodes": [
{
"id": "71a116e1-c631-48b3-923d-acea4753b887",
@@ -1642,4 +1642,4 @@
"targetHandle": "noise"
}
]
}
}

View File

@@ -1,10 +1,11 @@
{
"id": "default_d7a1c60f-ca2f-4f90-9e33-75a826ca6d8f",
"name": "Prompt from File",
"author": "InvokeAI",
"description": "Sample workflow using Prompt from File node",
"version": "2.1.0",
"contact": "invoke@invoke.ai",
"tags": "text2image, prompt from file, default",
"tags": "text2image, prompt from file, default, text to image",
"notes": "",
"exposedFields": [
{

View File

@@ -3,6 +3,7 @@
Workflows placed in this directory will be synced to the `workflow_library` as
_default workflows_ on app startup.
- Default workflows must have an id that starts with "default\_". The ID must be retained when the workflow is updated. You may need to do this manually.
- Default workflows are not editable by users. If they are loaded and saved,
they will save as a copy of the default workflow.
- Default workflows must have the `meta.category` property set to `"default"`.

View File

@@ -1,382 +1,382 @@
{
"id": "default_dbe46d95-22aa-43fb-9c16-94400d0ce2fd",
"name": "SD3.5 Text to Image",
"author": "InvokeAI",
"description": "Sample text to image workflow for Stable Diffusion 3.5",
"version": "1.0.0",
"contact": "invoke@invoke.ai",
"tags": "text2image, SD3.5, default",
"tags": "text2image, SD3.5, text to image",
"notes": "",
"exposedFields": [
{
"nodeId": "3f22f668-0e02-4fde-a2bb-c339586ceb4c",
"fieldName": "model"
},
{
"nodeId": "e17d34e7-6ed1-493c-9a85-4fcd291cb084",
"fieldName": "prompt"
}
],
"meta": {
"version": "3.0.0",
"category": "default"
},
"id": "e3a51d6b-8208-4d6d-b187-fcfe8b32934c",
"nodes": [
{
"id": "3f22f668-0e02-4fde-a2bb-c339586ceb4c",
"type": "invocation",
"data": {
"id": "3f22f668-0e02-4fde-a2bb-c339586ceb4c",
"type": "sd3_model_loader",
"version": "1.0.0",
"label": "",
"notes": "",
"isOpen": true,
"isIntermediate": true,
"useCache": true,
"nodePack": "invokeai",
"inputs": {
"model": {
"name": "model",
"label": "",
"value": {
"key": "f7b20be9-92a8-4cfb-bca4-6c3b5535c10b",
"hash": "placeholder",
"name": "stable-diffusion-3.5-medium",
"base": "sd-3",
"type": "main"
}
},
"t5_encoder_model": {
"name": "t5_encoder_model",
"label": ""
},
"clip_l_model": {
"name": "clip_l_model",
"label": ""
},
"clip_g_model": {
"name": "clip_g_model",
"label": ""
},
"vae_model": {
"name": "vae_model",
"label": ""
}
}
},
"position": {
"x": -55.58689609637031,
"y": -111.53602444662268
}
},
{
"id": "f7e394ac-6394-4096-abcb-de0d346506b3",
"type": "invocation",
"data": {
"id": "f7e394ac-6394-4096-abcb-de0d346506b3",
"type": "rand_int",
"version": "1.0.1",
"label": "",
"notes": "",
"isOpen": true,
"isIntermediate": true,
"useCache": false,
"nodePack": "invokeai",
"inputs": {
"low": {
"name": "low",
"label": "",
"value": 0
},
"high": {
"name": "high",
"label": "",
"value": 2147483647
}
}
},
"position": {
"x": 470.45870147220353,
"y": 350.3141781644303
}
},
{
"id": "9eb72af0-dd9e-4ec5-ad87-d65e3c01f48b",
"type": "invocation",
"data": {
"id": "9eb72af0-dd9e-4ec5-ad87-d65e3c01f48b",
"type": "sd3_l2i",
"version": "1.3.0",
"label": "",
"notes": "",
"isOpen": true,
"isIntermediate": false,
"useCache": true,
"nodePack": "invokeai",
"inputs": {
"board": {
"name": "board",
"label": ""
},
"metadata": {
"name": "metadata",
"label": ""
},
"latents": {
"name": "latents",
"label": ""
},
"vae": {
"name": "vae",
"label": ""
}
}
},
"position": {
"x": 1192.3097009334897,
"y": -366.0994675072209
}
},
{
"id": "3b4f7f27-cfc0-4373-a009-99c5290d0cd6",
"type": "invocation",
"data": {
"id": "3b4f7f27-cfc0-4373-a009-99c5290d0cd6",
"type": "sd3_text_encoder",
"version": "1.0.0",
"label": "",
"notes": "",
"isOpen": true,
"isIntermediate": true,
"useCache": true,
"nodePack": "invokeai",
"inputs": {
"clip_l": {
"name": "clip_l",
"label": ""
},
"clip_g": {
"name": "clip_g",
"label": ""
},
"t5_encoder": {
"name": "t5_encoder",
"label": ""
},
"prompt": {
"name": "prompt",
"label": "",
"value": ""
}
}
},
"position": {
"x": 408.16054647924784,
"y": 65.06415352118786
}
},
{
"id": "e17d34e7-6ed1-493c-9a85-4fcd291cb084",
"type": "invocation",
"data": {
"id": "e17d34e7-6ed1-493c-9a85-4fcd291cb084",
"type": "sd3_text_encoder",
"version": "1.0.0",
"label": "",
"notes": "",
"isOpen": true,
"isIntermediate": true,
"useCache": true,
"nodePack": "invokeai",
"inputs": {
"clip_l": {
"name": "clip_l",
"label": ""
},
"clip_g": {
"name": "clip_g",
"label": ""
},
"t5_encoder": {
"name": "t5_encoder",
"label": ""
},
"prompt": {
"name": "prompt",
"label": "",
"value": ""
}
}
},
"position": {
"x": 378.9283412440941,
"y": -302.65777497352553
}
},
{
"id": "c7539f7b-7ac5-49b9-93eb-87ede611409f",
"type": "invocation",
"data": {
"id": "c7539f7b-7ac5-49b9-93eb-87ede611409f",
"type": "sd3_denoise",
"version": "1.0.0",
"label": "",
"notes": "",
"isOpen": true,
"isIntermediate": true,
"useCache": true,
"nodePack": "invokeai",
"inputs": {
"board": {
"name": "board",
"label": ""
},
"metadata": {
"name": "metadata",
"label": ""
},
"transformer": {
"name": "transformer",
"label": ""
},
"positive_conditioning": {
"name": "positive_conditioning",
"label": ""
},
"negative_conditioning": {
"name": "negative_conditioning",
"label": ""
},
"cfg_scale": {
"name": "cfg_scale",
"label": "",
"value": 3.5
},
"width": {
"name": "width",
"label": "",
"value": 1024
},
"height": {
"name": "height",
"label": "",
"value": 1024
},
"steps": {
"name": "steps",
"label": "",
"value": 30
},
"seed": {
"name": "seed",
"label": "",
"value": 0
}
}
},
"position": {
"x": 813.7814762740603,
"y": -142.20529727605867
}
}
],
"edges": [
{
"id": "reactflow__edge-3f22f668-0e02-4fde-a2bb-c339586ceb4cvae-9eb72af0-dd9e-4ec5-ad87-d65e3c01f48bvae",
"type": "default",
"source": "3f22f668-0e02-4fde-a2bb-c339586ceb4c",
"target": "9eb72af0-dd9e-4ec5-ad87-d65e3c01f48b",
"sourceHandle": "vae",
"targetHandle": "vae"
},
{
"id": "reactflow__edge-3f22f668-0e02-4fde-a2bb-c339586ceb4ct5_encoder-3b4f7f27-cfc0-4373-a009-99c5290d0cd6t5_encoder",
"type": "default",
"source": "3f22f668-0e02-4fde-a2bb-c339586ceb4c",
"target": "3b4f7f27-cfc0-4373-a009-99c5290d0cd6",
"sourceHandle": "t5_encoder",
"targetHandle": "t5_encoder"
},
{
"id": "reactflow__edge-3f22f668-0e02-4fde-a2bb-c339586ceb4ct5_encoder-e17d34e7-6ed1-493c-9a85-4fcd291cb084t5_encoder",
"type": "default",
"source": "3f22f668-0e02-4fde-a2bb-c339586ceb4c",
"target": "e17d34e7-6ed1-493c-9a85-4fcd291cb084",
"sourceHandle": "t5_encoder",
"targetHandle": "t5_encoder"
},
{
"id": "reactflow__edge-3f22f668-0e02-4fde-a2bb-c339586ceb4cclip_g-3b4f7f27-cfc0-4373-a009-99c5290d0cd6clip_g",
"type": "default",
"source": "3f22f668-0e02-4fde-a2bb-c339586ceb4c",
"target": "3b4f7f27-cfc0-4373-a009-99c5290d0cd6",
"sourceHandle": "clip_g",
"targetHandle": "clip_g"
},
{
"id": "reactflow__edge-3f22f668-0e02-4fde-a2bb-c339586ceb4cclip_g-e17d34e7-6ed1-493c-9a85-4fcd291cb084clip_g",
"type": "default",
"source": "3f22f668-0e02-4fde-a2bb-c339586ceb4c",
"target": "e17d34e7-6ed1-493c-9a85-4fcd291cb084",
"sourceHandle": "clip_g",
"targetHandle": "clip_g"
},
{
"id": "reactflow__edge-3f22f668-0e02-4fde-a2bb-c339586ceb4cclip_l-3b4f7f27-cfc0-4373-a009-99c5290d0cd6clip_l",
"type": "default",
"source": "3f22f668-0e02-4fde-a2bb-c339586ceb4c",
"target": "3b4f7f27-cfc0-4373-a009-99c5290d0cd6",
"sourceHandle": "clip_l",
"targetHandle": "clip_l"
},
{
"id": "reactflow__edge-3f22f668-0e02-4fde-a2bb-c339586ceb4cclip_l-e17d34e7-6ed1-493c-9a85-4fcd291cb084clip_l",
"type": "default",
"source": "3f22f668-0e02-4fde-a2bb-c339586ceb4c",
"target": "e17d34e7-6ed1-493c-9a85-4fcd291cb084",
"sourceHandle": "clip_l",
"targetHandle": "clip_l"
},
{
"id": "reactflow__edge-3f22f668-0e02-4fde-a2bb-c339586ceb4ctransformer-c7539f7b-7ac5-49b9-93eb-87ede611409ftransformer",
"type": "default",
"source": "3f22f668-0e02-4fde-a2bb-c339586ceb4c",
"target": "c7539f7b-7ac5-49b9-93eb-87ede611409f",
"sourceHandle": "transformer",
"targetHandle": "transformer"
},
{
"id": "reactflow__edge-f7e394ac-6394-4096-abcb-de0d346506b3value-c7539f7b-7ac5-49b9-93eb-87ede611409fseed",
"type": "default",
"source": "f7e394ac-6394-4096-abcb-de0d346506b3",
"target": "c7539f7b-7ac5-49b9-93eb-87ede611409f",
"sourceHandle": "value",
"targetHandle": "seed"
},
{
"id": "reactflow__edge-c7539f7b-7ac5-49b9-93eb-87ede611409flatents-9eb72af0-dd9e-4ec5-ad87-d65e3c01f48blatents",
"type": "default",
"source": "c7539f7b-7ac5-49b9-93eb-87ede611409f",
"target": "9eb72af0-dd9e-4ec5-ad87-d65e3c01f48b",
"sourceHandle": "latents",
"targetHandle": "latents"
},
{
"id": "reactflow__edge-e17d34e7-6ed1-493c-9a85-4fcd291cb084conditioning-c7539f7b-7ac5-49b9-93eb-87ede611409fpositive_conditioning",
"type": "default",
"source": "e17d34e7-6ed1-493c-9a85-4fcd291cb084",
"target": "c7539f7b-7ac5-49b9-93eb-87ede611409f",
"sourceHandle": "conditioning",
"targetHandle": "positive_conditioning"
},
{
"id": "reactflow__edge-3b4f7f27-cfc0-4373-a009-99c5290d0cd6conditioning-c7539f7b-7ac5-49b9-93eb-87ede611409fnegative_conditioning",
"type": "default",
"source": "3b4f7f27-cfc0-4373-a009-99c5290d0cd6",
"target": "c7539f7b-7ac5-49b9-93eb-87ede611409f",
"sourceHandle": "conditioning",
"targetHandle": "negative_conditioning"
}
]
}

View File

@@ -1,10 +1,11 @@
{
"id": "default_7dde3e36-d78f-4152-9eea-00ef9c8124ed",
"name": "Text to Image - SD1.5",
"author": "InvokeAI",
"description": "Sample text to image workflow for Stable Diffusion 1.5/2",
"version": "2.1.0",
"contact": "invoke@invoke.ai",
"tags": "text2image, SD1.5, SD2, default",
"tags": "text2image, SD1.5, SD2, text to image",
"notes": "",
"exposedFields": [
{

View File

@@ -1,10 +1,11 @@
{
"id": "default_5e8b008d-c697-45d0-8883-085a954c6ace",
"name": "Text to Image - SDXL",
"author": "InvokeAI",
"description": "Sample text to image workflow for SDXL",
"version": "2.1.0",
"contact": "invoke@invoke.ai",
"tags": "text2image, SDXL, default",
"tags": "text2image, SDXL, text to image",
"notes": "",
"exposedFields": [
{

View File

@@ -1,10 +1,11 @@
{
"id": "default_e71d153c-2089-43c7-bd2c-f61f37d4c1c1",
"name": "Text to Image with LoRA",
"author": "InvokeAI",
"description": "Simple text to image workflow with a LoRA",
"version": "2.1.0",
"contact": "invoke@invoke.ai",
"tags": "text to image, lora, default",
"tags": "text to image, lora, text to image",
"notes": "",
"exposedFields": [
{

View File

@@ -1,4 +1,5 @@
{
"id": "default_43b0d7f7-6a12-4dcf-a5a4-50c940cbee29",
"name": "Tiled Upscaling (Beta)",
"author": "Invoke",
"description": "A workflow to upscale an input image with tiled upscaling. ",

View File

@@ -41,10 +41,25 @@ class WorkflowRecordsStorageBase(ABC):
self,
order_by: WorkflowRecordOrderBy,
direction: SQLiteDirection,
category: WorkflowCategory,
categories: Optional[list[WorkflowCategory]],
page: int,
per_page: Optional[int],
query: Optional[str],
tags: Optional[list[str]],
) -> PaginatedResults[WorkflowRecordListItemDTO]:
"""Gets many workflows."""
pass
@abstractmethod
def get_counts(
self,
tags: Optional[list[str]],
categories: Optional[list[WorkflowCategory]],
) -> int:
"""Gets the count of workflows for the given tags and categories."""
pass
@abstractmethod
def update_opened_at(self, workflow_id: str) -> None:
"""Open a workflow."""
pass

View File

@@ -36,9 +36,7 @@ class WorkflowCategory(str, Enum, metaclass=MetaEnum):
class WorkflowMeta(BaseModel):
version: str = Field(description="The version of the workflow schema.")
category: WorkflowCategory = Field(
default=WorkflowCategory.User, description="The category of the workflow (user or default)."
)
category: WorkflowCategory = Field(description="The category of the workflow (user or default).")
@field_validator("version")
def validate_version(cls, version: str):
@@ -118,6 +116,15 @@ WorkflowRecordDTOValidator = TypeAdapter(WorkflowRecordDTO)
class WorkflowRecordListItemDTO(WorkflowRecordDTOBase):
description: str = Field(description="The description of the workflow.")
category: WorkflowCategory = Field(description="The category of the workflow.")
tags: str = Field(description="The tags of the workflow.")
WorkflowRecordListItemDTOValidator = TypeAdapter(WorkflowRecordListItemDTO)
class WorkflowRecordWithThumbnailDTO(WorkflowRecordDTO):
thumbnail_url: str | None = Field(default=None, description="The URL of the workflow thumbnail.")
class WorkflowRecordListItemWithThumbnailDTO(WorkflowRecordListItemDTO):
thumbnail_url: str | None = Field(default=None, description="The URL of the workflow thumbnail.")

View File

@@ -14,11 +14,13 @@ from invokeai.app.services.workflow_records.workflow_records_common import (
WorkflowRecordListItemDTO,
WorkflowRecordListItemDTOValidator,
WorkflowRecordOrderBy,
WorkflowValidator,
WorkflowWithoutID,
WorkflowWithoutIDValidator,
)
from invokeai.app.util.misc import uuid_string
SQL_TIME_FORMAT = "%Y-%m-%d %H:%M:%f"
class SqliteWorkflowRecordsStorage(WorkflowRecordsStorageBase):
def __init__(self, db: SqliteDatabase) -> None:
@@ -32,15 +34,6 @@ class SqliteWorkflowRecordsStorage(WorkflowRecordsStorageBase):
def get(self, workflow_id: str) -> WorkflowRecordDTO:
"""Gets a workflow by ID. Updates the opened_at column."""
cursor = self._conn.cursor()
cursor.execute(
"""--sql
UPDATE workflow_library
SET opened_at = STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')
WHERE workflow_id = ?;
""",
(workflow_id,),
)
self._conn.commit()
cursor.execute(
"""--sql
SELECT workflow_id, workflow, name, created_at, updated_at, opened_at
@@ -55,9 +48,10 @@ class SqliteWorkflowRecordsStorage(WorkflowRecordsStorageBase):
return WorkflowRecordDTO.from_dict(dict(row))
def create(self, workflow: WorkflowWithoutID) -> WorkflowRecordDTO:
if workflow.meta.category is WorkflowCategory.Default:
raise ValueError("Default workflows cannot be created via this method")
try:
# Only user workflows may be created by this method
assert workflow.meta.category is WorkflowCategory.User
workflow_with_id = Workflow(**workflow.model_dump(), id=uuid_string())
cursor = self._conn.cursor()
cursor.execute(
@@ -77,6 +71,9 @@ class SqliteWorkflowRecordsStorage(WorkflowRecordsStorageBase):
return self.get(workflow_with_id.id)
def update(self, workflow: Workflow) -> WorkflowRecordDTO:
if workflow.meta.category is WorkflowCategory.Default:
raise ValueError("Default workflows cannot be updated")
try:
cursor = self._conn.cursor()
cursor.execute(
@@ -94,6 +91,9 @@ class SqliteWorkflowRecordsStorage(WorkflowRecordsStorageBase):
return self.get(workflow.id)
def delete(self, workflow_id: str) -> None:
if self.get(workflow_id).workflow.meta.category is WorkflowCategory.Default:
raise ValueError("Default workflows cannot be deleted")
try:
cursor = self._conn.cursor()
cursor.execute(
@@ -113,45 +113,102 @@ class SqliteWorkflowRecordsStorage(WorkflowRecordsStorageBase):
self,
order_by: WorkflowRecordOrderBy,
direction: SQLiteDirection,
category: WorkflowCategory,
categories: Optional[list[WorkflowCategory]],
page: int = 0,
per_page: Optional[int] = None,
query: Optional[str] = None,
tags: Optional[list[str]] = None,
) -> PaginatedResults[WorkflowRecordListItemDTO]:
# sanitize!
assert order_by in WorkflowRecordOrderBy
assert direction in SQLiteDirection
assert category in WorkflowCategory
count_query = "SELECT COUNT(*) FROM workflow_library WHERE category = ?"
main_query = """
SELECT
workflow_id,
category,
name,
description,
created_at,
updated_at,
opened_at
FROM workflow_library
WHERE category = ?
"""
main_params: list[int | str] = [category.value]
count_params: list[int | str] = [category.value]
# We will construct the query dynamically based on the query params
# The main query to get the workflows / counts
main_query = """
SELECT
workflow_id,
category,
name,
description,
created_at,
updated_at,
opened_at,
tags
FROM workflow_library
"""
count_query = "SELECT COUNT(*) FROM workflow_library"
# Start with an empty list of conditions and params
conditions: list[str] = []
params: list[str | int] = []
if categories:
# Categories is a list of WorkflowCategory enum values, and a single string in the DB
# Ensure all categories are valid (is this necessary?)
assert all(c in WorkflowCategory for c in categories)
# Construct a placeholder string for the number of categories
placeholders = ", ".join("?" for _ in categories)
# Construct the condition string & params
category_condition = f"category IN ({placeholders})"
category_params = [category.value for category in categories]
conditions.append(category_condition)
params.extend(category_params)
if tags:
# Tags is a list of strings, and a single string in the DB
# The string in the DB has no guaranteed format
# Construct a list of conditions for each tag
tags_conditions = ["tags LIKE ?" for _ in tags]
tags_conditions_joined = " OR ".join(tags_conditions)
tags_condition = f"({tags_conditions_joined})"
# And the params for the tags, case-insensitive
tags_params = [f"%{t.strip()}%" for t in tags]
conditions.append(tags_condition)
params.extend(tags_params)
# Ignore whitespace in the query
stripped_query = query.strip() if query else None
if stripped_query:
# Construct a wildcard query for the name, description, and tags
wildcard_query = "%" + stripped_query + "%"
main_query += " AND name LIKE ? OR description LIKE ? "
count_query += " AND name LIKE ? OR description LIKE ?;"
main_params.extend([wildcard_query, wildcard_query])
count_params.extend([wildcard_query, wildcard_query])
query_condition = "(name LIKE ? OR description LIKE ? OR tags LIKE ?)"
conditions.append(query_condition)
params.extend([wildcard_query, wildcard_query, wildcard_query])
if conditions:
# If there are conditions, add a WHERE clause and then join the conditions
main_query += " WHERE "
count_query += " WHERE "
all_conditions = " AND ".join(conditions)
main_query += all_conditions
count_query += all_conditions
# After this point, the query and params differ for the main query and the count query
main_params = params.copy()
count_params = params.copy()
# Main query also gets ORDER BY and LIMIT/OFFSET
main_query += f" ORDER BY {order_by.value} {direction.value}"
if per_page:
main_query += " LIMIT ? OFFSET ?"
main_params.extend([per_page, page * per_page])
# Put a ring on it
main_query += ";"
count_query += ";"
cursor = self._conn.cursor()
cursor.execute(main_query, main_params)
rows = cursor.fetchall()
@@ -173,6 +230,71 @@ class SqliteWorkflowRecordsStorage(WorkflowRecordsStorageBase):
total=total,
)
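# Illustration only (not part of the diff): for a hypothetical call such as
# get_many(order_by=WorkflowRecordOrderBy.Name, direction=SQLiteDirection.Ascending,
#          categories=[WorkflowCategory.User], tags=["flux"], query="portrait",
#          page=0, per_page=10)
# the builder above assembles roughly:
#   SELECT workflow_id, category, name, description, created_at, updated_at, opened_at, tags
#   FROM workflow_library
#   WHERE category IN (?) AND (tags LIKE ?) AND (name LIKE ? OR description LIKE ? OR tags LIKE ?)
#   ORDER BY name ASC LIMIT ? OFFSET ?;
# with main_params = ["user", "%flux%", "%portrait%", "%portrait%", "%portrait%", 10, 0].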
def get_counts(
self,
tags: Optional[list[str]],
categories: Optional[list[WorkflowCategory]],
) -> int:
cursor = self._conn.cursor()
# Start with an empty list of conditions and params
conditions: list[str] = []
params: list[str | int] = []
if tags:
# Construct a list of conditions for each tag
tags_conditions = ["tags LIKE ?" for _ in tags]
tags_conditions_joined = " OR ".join(tags_conditions)
tags_condition = f"({tags_conditions_joined})"
# And the params for the tags, case-insensitive
tags_params = [f"%{t.strip()}%" for t in tags]
conditions.append(tags_condition)
params.extend(tags_params)
if categories:
# Ensure all categories are valid (is this necessary?)
assert all(c in WorkflowCategory for c in categories)
# Construct a placeholder string for the number of categories
placeholders = ", ".join("?" for _ in categories)
# Construct the condition string & params
conditions.append(f"category IN ({placeholders})")
params.extend([category.value for category in categories])
stmt = """--sql
SELECT COUNT(*)
FROM workflow_library
"""
if conditions:
# If there are conditions, add a WHERE clause and then join the conditions
stmt += " WHERE "
all_conditions = " AND ".join(conditions)
stmt += all_conditions
cursor.execute(stmt, tuple(params))
return cursor.fetchone()[0]
def update_opened_at(self, workflow_id: str) -> None:
try:
cursor = self._conn.cursor()
cursor.execute(
f"""--sql
UPDATE workflow_library
SET opened_at = STRFTIME('{SQL_TIME_FORMAT}', 'NOW')
WHERE workflow_id = ?;
""",
(workflow_id,),
)
self._conn.commit()
except Exception:
self._conn.rollback()
raise
def _sync_default_workflows(self) -> None:
"""Syncs default workflows to the database. Internal use only."""
@@ -187,27 +309,68 @@ class SqliteWorkflowRecordsStorage(WorkflowRecordsStorageBase):
"""
try:
workflows: list[Workflow] = []
cursor = self._conn.cursor()
workflows_from_file: list[Workflow] = []
workflows_to_update: list[Workflow] = []
workflows_to_add: list[Workflow] = []
workflows_dir = Path(__file__).parent / Path("default_workflows")
workflow_paths = workflows_dir.glob("*.json")
for path in workflow_paths:
bytes_ = path.read_bytes()
workflow_without_id = WorkflowWithoutIDValidator.validate_json(bytes_)
workflow = Workflow(**workflow_without_id.model_dump(), id=uuid_string())
workflows.append(workflow)
# Only default workflows may be managed by this method
assert all(w.meta.category is WorkflowCategory.Default for w in workflows)
cursor = self._conn.cursor()
cursor.execute(
"""--sql
DELETE FROM workflow_library
WHERE category = 'default';
"""
)
for w in workflows:
workflow_from_file = WorkflowValidator.validate_json(bytes_)
assert workflow_from_file.id.startswith("default_"), (
f'Invalid default workflow ID (must start with "default_"): {workflow_from_file.id}'
)
assert workflow_from_file.meta.category is WorkflowCategory.Default, (
f"Invalid default workflow category: {workflow_from_file.meta.category}"
)
workflows_from_file.append(workflow_from_file)
try:
workflow_from_db = self.get(workflow_from_file.id).workflow
if workflow_from_file != workflow_from_db:
self._invoker.services.logger.debug(
f"Updating library workflow {workflow_from_file.name} ({workflow_from_file.id})"
)
workflows_to_update.append(workflow_from_file)
continue
except WorkflowNotFoundError:
self._invoker.services.logger.debug(
f"Adding missing default workflow {workflow_from_file.name} ({workflow_from_file.id})"
)
workflows_to_add.append(workflow_from_file)
continue
library_workflows_from_db = self.get_many(
order_by=WorkflowRecordOrderBy.Name,
direction=SQLiteDirection.Ascending,
categories=[WorkflowCategory.Default],
).items
workflows_from_file_ids = [w.id for w in workflows_from_file]
for w in library_workflows_from_db:
if w.workflow_id not in workflows_from_file_ids:
self._invoker.services.logger.debug(
f"Deleting obsolete default workflow {w.name} ({w.workflow_id})"
)
# We cannot use the `delete` method here, as it only deletes non-default workflows
cursor.execute(
"""--sql
DELETE from workflow_library
WHERE workflow_id = ?;
""",
(w.workflow_id,),
)
for w in workflows_to_add:
# We cannot use the `create` method here, as it only creates non-default workflows
cursor.execute(
"""--sql
INSERT OR REPLACE INTO workflow_library (
INSERT INTO workflow_library (
workflow_id,
workflow
)
@@ -215,6 +378,18 @@ class SqliteWorkflowRecordsStorage(WorkflowRecordsStorageBase):
""",
(w.id, w.model_dump_json()),
)
for w in workflows_to_update:
# We cannot use the `update` method here, as it only updates non-default workflows
cursor.execute(
"""--sql
UPDATE workflow_library
SET workflow = ?
WHERE workflow_id = ?;
""",
(w.model_dump_json(), w.id),
)
self._conn.commit()
except Exception:
self._conn.rollback()

View File

@@ -0,0 +1,28 @@
from abc import ABC, abstractmethod
from pathlib import Path
from PIL import Image
class WorkflowThumbnailServiceBase(ABC):
"""Base class for workflow thumbnail services"""
@abstractmethod
def get_path(self, workflow_id: str) -> Path:
"""Gets the path to a workflow thumbnail"""
pass
@abstractmethod
def get_url(self, workflow_id: str) -> str | None:
"""Gets the URL of a workflow thumbnail"""
pass
@abstractmethod
def save(self, workflow_id: str, image: Image.Image) -> None:
"""Saves a workflow thumbnail"""
pass
@abstractmethod
def delete(self, workflow_id: str) -> None:
"""Deletes a workflow thumbnail"""
pass

View File

@@ -0,0 +1,22 @@
class WorkflowThumbnailFileNotFoundException(Exception):
"""Raised when a workflow thumbnail file is not found"""
def __init__(self, message: str = "Workflow thumbnail file not found"):
self.message = message
super().__init__(self.message)
class WorkflowThumbnailFileSaveException(Exception):
"""Raised when a workflow thumbnail file cannot be saved"""
def __init__(self, message: str = "Workflow thumbnail file cannot be saved"):
self.message = message
super().__init__(self.message)
class WorkflowThumbnailFileDeleteException(Exception):
"""Raised when a workflow thumbnail file cannot be deleted"""
def __init__(self, message: str = "Workflow thumbnail file cannot be deleted"):
self.message = message
super().__init__(self.message)

View File

@@ -0,0 +1,86 @@
from pathlib import Path
from PIL import Image
from PIL.Image import Image as PILImageType
from invokeai.app.services.invoker import Invoker
from invokeai.app.services.workflow_records.workflow_records_common import WorkflowCategory
from invokeai.app.services.workflow_thumbnails.workflow_thumbnails_base import WorkflowThumbnailServiceBase
from invokeai.app.services.workflow_thumbnails.workflow_thumbnails_common import (
WorkflowThumbnailFileDeleteException,
WorkflowThumbnailFileNotFoundException,
WorkflowThumbnailFileSaveException,
)
from invokeai.app.util.misc import uuid_string
from invokeai.app.util.thumbnails import make_thumbnail
class WorkflowThumbnailFileStorageDisk(WorkflowThumbnailServiceBase):
def __init__(self, thumbnails_path: Path):
self._workflow_thumbnail_folder = thumbnails_path
self._validate_storage_folders()
def start(self, invoker: Invoker) -> None:
self._invoker = invoker
def get(self, workflow_id: str) -> PILImageType:
try:
path = self.get_path(workflow_id)
return Image.open(path)
except FileNotFoundError as e:
raise WorkflowThumbnailFileNotFoundException from e
def save(self, workflow_id: str, image: PILImageType) -> None:
try:
self._validate_storage_folders()
image_path = self._workflow_thumbnail_folder / (workflow_id + ".webp")
thumbnail = make_thumbnail(image, 256)
thumbnail.save(image_path, format="webp")
except Exception as e:
raise WorkflowThumbnailFileSaveException from e
def get_path(self, workflow_id: str) -> Path:
workflow = self._invoker.services.workflow_records.get(workflow_id).workflow
if workflow.meta.category is WorkflowCategory.Default:
default_thumbnails_dir = Path(__file__).parent / Path("default_workflow_thumbnails")
path = default_thumbnails_dir / (workflow_id + ".png")
else:
path = self._workflow_thumbnail_folder / (workflow_id + ".webp")
return path
def get_url(self, workflow_id: str) -> str | None:
path = self.get_path(workflow_id)
if not self._validate_path(path):
return
url = self._invoker.services.urls.get_workflow_thumbnail_url(workflow_id)
# The image URL never changes, so we must add a random query string to it to prevent caching
url += f"?{uuid_string()}"
return url
def delete(self, workflow_id: str) -> None:
try:
path = self.get_path(workflow_id)
if not self._validate_path(path):
raise WorkflowThumbnailFileNotFoundException
path.unlink()
except WorkflowThumbnailFileNotFoundException as e:
raise WorkflowThumbnailFileNotFoundException from e
except Exception as e:
raise WorkflowThumbnailFileDeleteException from e
def _validate_path(self, path: Path) -> bool:
"""Validates the path given for an image."""
return path.exists()
def _validate_storage_folders(self) -> None:
"""Checks if the required folders exist and create them if they don't"""
self._workflow_thumbnail_folder.mkdir(parents=True, exist_ok=True)
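A hypothetical usage sketch (path and workflow ID are illustrative). Note that get_path() and get_url() look up the workflow via the invoker's records service, so the service must be started inside the app before they are called:

from pathlib import Path

from PIL import Image

thumbs = WorkflowThumbnailFileStorageDisk(Path("workflow_thumbnails"))
# save() downsizes the image to a 256px WEBP at workflow_thumbnails/some-workflow-id.webp
thumbs.save("some-workflow-id", Image.new("RGB", (512, 512)))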

View File

@@ -3,7 +3,11 @@ from typing import Optional
import torch
import torchvision
from invokeai.backend.flux.text_conditioning import FluxRegionalTextConditioning, FluxTextConditioning
from invokeai.backend.flux.text_conditioning import (
FluxReduxConditioning,
FluxRegionalTextConditioning,
FluxTextConditioning,
)
from invokeai.backend.stable_diffusion.diffusion.conditioning_data import Range
from invokeai.backend.util.devices import TorchDevice
from invokeai.backend.util.mask import to_standard_float_mask
@@ -32,14 +36,19 @@ class RegionalPromptingExtension:
return order[block_index % len(order)]
@classmethod
def from_text_conditioning(cls, text_conditioning: list[FluxTextConditioning], img_seq_len: int):
def from_text_conditioning(
cls,
text_conditioning: list[FluxTextConditioning],
redux_conditioning: list[FluxReduxConditioning],
img_seq_len: int,
):
"""Create a RegionalPromptingExtension from a list of text conditionings.
Args:
text_conditioning (list[FluxTextConditioning]): The text conditionings to use for regional prompting.
img_seq_len (int): The image sequence length (i.e. packed_height * packed_width).
"""
regional_text_conditioning = cls._concat_regional_text_conditioning(text_conditioning)
regional_text_conditioning = cls._concat_regional_text_conditioning(text_conditioning, redux_conditioning)
attn_mask_with_restricted_img_self_attn = cls._prepare_restricted_attn_mask(
regional_text_conditioning, img_seq_len
)
@@ -202,6 +211,7 @@ class RegionalPromptingExtension:
def _concat_regional_text_conditioning(
cls,
text_conditionings: list[FluxTextConditioning],
redux_conditionings: list[FluxReduxConditioning],
) -> FluxRegionalTextConditioning:
"""Concatenate regional text conditioning data into a single conditioning tensor (with associated masks)."""
concat_t5_embeddings: list[torch.Tensor] = []
@@ -217,18 +227,27 @@ class RegionalPromptingExtension:
global_clip_embedding = text_conditioning.clip_embeddings
break
# Handle T5 text embeddings.
cur_t5_embedding_len = 0
for text_conditioning in text_conditionings:
concat_t5_embeddings.append(text_conditioning.t5_embeddings)
concat_t5_embedding_ranges.append(
Range(start=cur_t5_embedding_len, end=cur_t5_embedding_len + text_conditioning.t5_embeddings.shape[1])
)
image_masks.append(text_conditioning.mask)
cur_t5_embedding_len += text_conditioning.t5_embeddings.shape[1]
# Handle Redux embeddings.
for redux_conditioning in redux_conditionings:
concat_t5_embeddings.append(redux_conditioning.redux_embeddings)
concat_t5_embedding_ranges.append(
Range(
start=cur_t5_embedding_len, end=cur_t5_embedding_len + redux_conditioning.redux_embeddings.shape[1]
)
)
image_masks.append(redux_conditioning.mask)
cur_t5_embedding_len += redux_conditioning.redux_embeddings.shape[1]
t5_embeddings = torch.cat(concat_t5_embeddings, dim=1)
# Initialize the txt_ids tensor.
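A minimal sketch of the range bookkeeping above, with illustrative shapes (512 T5 tokens per prompt, 729 Redux tokens from a SigLIP patch grid):

import torch

from invokeai.backend.stable_diffusion.diffusion.conditioning_data import Range

embeddings = [
    torch.randn(1, 512, 4096),  # prompt 1 (T5)
    torch.randn(1, 512, 4096),  # prompt 2 (T5)
    torch.randn(1, 729, 4096),  # Redux image embedding
]
ranges: list[Range] = []
cur = 0
for e in embeddings:
    ranges.append(Range(start=cur, end=cur + e.shape[1]))
    cur += e.shape[1]
t5_embeddings = torch.cat(embeddings, dim=1)  # shape (1, 1753, 4096)
# ranges: [0, 512), [512, 1024), [1024, 1753): one slice per conditioning's mask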

View File

@@ -0,0 +1,17 @@
import torch
# This model definition is based on:
# https://github.com/black-forest-labs/flux/blob/716724eb276d94397be99710a0a54d352664e23b/src/flux/modules/image_embedders.py#L66
class FluxReduxModel(torch.nn.Module):
def __init__(self, redux_dim: int = 1152, txt_in_features: int = 4096) -> None:
super().__init__()
self.redux_dim = redux_dim
self.redux_up = torch.nn.Linear(redux_dim, txt_in_features * 3)
self.redux_down = torch.nn.Linear(txt_in_features * 3, txt_in_features)
def forward(self, x: torch.Tensor) -> torch.Tensor:
return self.redux_down(torch.nn.functional.silu(self.redux_up(x)))
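A shape sketch of the projector using the defaults above (the 729-token input is illustrative of a SigLIP patch grid):

import torch

model = FluxReduxModel()  # 1152 -> 3 * 4096 -> 4096
siglip_features = torch.randn(1, 729, 1152)
redux_embeddings = model(siglip_features)
print(redux_embeddings.shape)  # torch.Size([1, 729, 4096])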

View File

@@ -0,0 +1,11 @@
from typing import Any, Dict
def is_state_dict_likely_flux_redux(state_dict: Dict[str, Any]) -> bool:
"""Checks if the provided state dict is likely a FLUX Redux model."""
expected_keys = {"redux_down.bias", "redux_down.weight", "redux_up.bias", "redux_up.weight"}
if set(state_dict.keys()) == expected_keys:
return True
return False
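A hypothetical probing sketch (the filename is illustrative):

from safetensors.torch import load_file

sd = load_file("flux1-redux-dev.safetensors")
if is_state_dict_likely_flux_redux(sd):
    print("Looks like a FLUX Redux checkpoint")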

View File

@@ -13,6 +13,13 @@ class FluxTextConditioning:
mask: torch.Tensor | None
@dataclass
class FluxReduxConditioning:
redux_embeddings: torch.Tensor
# If mask is None, the prompt is a global prompt.
mask: torch.Tensor | None
@dataclass
class FluxRegionalTextConditioning:
# Concatenated text embeddings.

View File

@@ -91,10 +91,10 @@ class PromptFormatter:
switches = []
switches.append(f'"{opt.prompt}"')
switches.append(f"-s{opt.steps or t2i.steps}")
switches.append(f"-W{opt.width or t2i.width}")
switches.append(f"-H{opt.height or t2i.height}")
switches.append(f"-C{opt.cfg_scale or t2i.cfg_scale}")
switches.append(f"-s{opt.steps or t2i.steps}")
switches.append(f"-W{opt.width or t2i.width}")
switches.append(f"-H{opt.height or t2i.height}")
switches.append(f"-C{opt.cfg_scale or t2i.cfg_scale}")
switches.append(f"-A{opt.sampler_name or t2i.sampler_name}")
# to do: put model name into the t2i object
# switches.append(f'--model{t2i.model_name}')
@@ -109,7 +109,7 @@ class PromptFormatter:
if opt.gfpgan_strength:
switches.append(f"-G{opt.gfpgan_strength}")
if opt.upscale:
switches.append(f'-U {" ".join([str(u) for u in opt.upscale])}')
switches.append(f"-U {' '.join([str(u) for u in opt.upscale])}")
if opt.variation_amount > 0:
switches.append(f"-v{opt.variation_amount}")
if opt.with_variations:

View File

@@ -76,6 +76,8 @@ class ModelType(str, Enum):
T2IAdapter = "t2i_adapter"
T5Encoder = "t5_encoder"
SpandrelImageToImage = "spandrel_image_to_image"
SigLIP = "siglip"
FluxRedux = "flux_redux"
class SubModelType(str, Enum):
@@ -528,6 +530,28 @@ class SpandrelImageToImageConfig(ModelConfigBase):
return Tag(f"{ModelType.SpandrelImageToImage.value}.{ModelFormat.Checkpoint.value}")
class SigLIPConfig(DiffusersConfigBase):
"""Model config for SigLIP."""
type: Literal[ModelType.SigLIP] = ModelType.SigLIP
format: Literal[ModelFormat.Diffusers] = ModelFormat.Diffusers
@staticmethod
def get_tag() -> Tag:
return Tag(f"{ModelType.SigLIP.value}.{ModelFormat.Diffusers.value}")
class FluxReduxConfig(ModelConfigBase):
"""Model config for FLUX Tools Redux model."""
type: Literal[ModelType.FluxRedux] = ModelType.FluxRedux
format: Literal[ModelFormat.Checkpoint] = ModelFormat.Checkpoint
@staticmethod
def get_tag() -> Tag:
return Tag(f"{ModelType.FluxRedux.value}.{ModelFormat.Checkpoint.value}")
def get_model_discriminator_value(v: Any) -> str:
"""
Computes the discriminator value for a model config.
@@ -575,6 +599,8 @@ AnyModelConfig = Annotated[
Annotated[CLIPEmbedDiffusersConfig, CLIPEmbedDiffusersConfig.get_tag()],
Annotated[CLIPLEmbedDiffusersConfig, CLIPLEmbedDiffusersConfig.get_tag()],
Annotated[CLIPGEmbedDiffusersConfig, CLIPGEmbedDiffusersConfig.get_tag()],
Annotated[SigLIPConfig, SigLIPConfig.get_tag()],
Annotated[FluxReduxConfig, FluxReduxConfig.get_tag()],
],
Discriminator(get_model_discriminator_value),
]

View File

@@ -70,7 +70,7 @@ def get_pretty_snapshot_diff(snapshot_1: Optional[MemorySnapshot], snapshot_2: O
def get_msg_line(prefix: str, val1: int, val2: int) -> str:
diff = val2 - val1
return f"{prefix: <30} ({(diff/GB):+5.3f}): {(val1/GB):5.3f}GB -> {(val2/GB):5.3f}GB\n"
return f"{prefix: <30} ({(diff / GB):+5.3f}): {(val1 / GB):5.3f}GB -> {(val2 / GB):5.3f}GB\n"
msg = ""

View File

@@ -192,7 +192,7 @@ class ModelCache:
self._cached_models[key] = cache_record
self._cache_stack.append(key)
self._logger.debug(
f"Added model {key} (Type: {model.__class__.__name__}, Wrap mode: {wrapped_model.__class__.__name__}, Model size: {size/MB:.2f}MB)"
f"Added model {key} (Type: {model.__class__.__name__}, Wrap mode: {wrapped_model.__class__.__name__}, Model size: {size / MB:.2f}MB)"
)
@synchronized
@@ -303,7 +303,7 @@ class ModelCache:
# 2. If the model can't fit fully into VRAM, then unload all other models and load as much of the model as
# possible.
vram_bytes_freed = self._offload_unlocked_models(model_vram_needed, working_mem_bytes)
self._logger.debug(f"Unloaded models (if necessary): vram_bytes_freed={(vram_bytes_freed/MB):.2f}MB")
self._logger.debug(f"Unloaded models (if necessary): vram_bytes_freed={(vram_bytes_freed / MB):.2f}MB")
# Check the updated vram_available after offloading.
vram_available = self._get_vram_available(working_mem_bytes)
@@ -317,7 +317,7 @@ class ModelCache:
vram_bytes_freed_from_own_model = self._move_model_to_ram(cache_entry, -vram_available)
vram_available = self._get_vram_available(working_mem_bytes)
self._logger.debug(
f"Unloaded {vram_bytes_freed_from_own_model/MB:.2f}MB from the model being locked ({cache_entry.key})."
f"Unloaded {vram_bytes_freed_from_own_model / MB:.2f}MB from the model being locked ({cache_entry.key})."
)
# Move as much of the model as possible into VRAM.
@@ -333,10 +333,12 @@ class ModelCache:
self._logger.info(
f"Loaded model '{cache_entry.key}' ({cache_entry.cached_model.model.__class__.__name__}) onto "
f"{self._execution_device.type} device in {(time.time() - start_time):.2f}s. "
f"Total model size: {model_total_bytes/MB:.2f}MB, "
f"VRAM: {model_cur_vram_bytes/MB:.2f}MB ({loaded_percent:.1%})"
f"Total model size: {model_total_bytes / MB:.2f}MB, "
f"VRAM: {model_cur_vram_bytes / MB:.2f}MB ({loaded_percent:.1%})"
)
self._logger.debug(f"Loaded model onto execution device: model_bytes_loaded={(model_bytes_loaded/MB):.2f}MB, ")
self._logger.debug(
f"Loaded model onto execution device: model_bytes_loaded={(model_bytes_loaded / MB):.2f}MB, "
)
self._logger.debug(
f"After loading: {self._get_vram_state_str(model_cur_vram_bytes, model_total_bytes, vram_available)}"
)
@@ -495,10 +497,10 @@ class ModelCache:
"""Helper function for preparing a VRAM state log string."""
model_cur_vram_bytes_percent = model_cur_vram_bytes / model_total_bytes if model_total_bytes > 0 else 0
return (
f"model_total={model_total_bytes/MB:.0f} MB, "
+ f"model_vram={model_cur_vram_bytes/MB:.0f} MB ({model_cur_vram_bytes_percent:.1%} %), "
f"model_total={model_total_bytes / MB:.0f} MB, "
+ f"model_vram={model_cur_vram_bytes / MB:.0f} MB ({model_cur_vram_bytes_percent:.1%} %), "
# + f"vram_total={int(self._max_vram_cache_size * GB)/MB:.0f} MB, "
+ f"vram_available={(vram_available/MB):.0f} MB, "
+ f"vram_available={(vram_available / MB):.0f} MB, "
)
def _offload_unlocked_models(self, vram_bytes_required: int, working_mem_bytes: Optional[int] = None) -> int:
@@ -509,7 +511,7 @@ class ModelCache:
int: The number of bytes freed based on believed model sizes. The actual change in VRAM may be different.
"""
self._logger.debug(
f"Offloading unlocked models with goal of making room for {vram_bytes_required/MB:.2f}MB of VRAM."
f"Offloading unlocked models with goal of making room for {vram_bytes_required / MB:.2f}MB of VRAM."
)
vram_bytes_freed = 0
# TODO(ryand): Give more thought to the offloading policy used here.
@@ -527,7 +529,7 @@ class ModelCache:
cache_entry_bytes_freed = self._move_model_to_ram(cache_entry, vram_bytes_to_free)
if cache_entry_bytes_freed > 0:
self._logger.debug(
f"Unloaded {cache_entry.key} from VRAM to free {(cache_entry_bytes_freed/MB):.0f} MB."
f"Unloaded {cache_entry.key} from VRAM to free {(cache_entry_bytes_freed / MB):.0f} MB."
)
vram_bytes_freed += cache_entry_bytes_freed
@@ -609,7 +611,7 @@ class ModelCache:
external references to the model, there's nothing that the cache can do about it, and those models will not be
garbage-collected.
"""
self._logger.debug(f"Making room for {bytes_needed/MB:.2f}MB of RAM.")
self._logger.debug(f"Making room for {bytes_needed / MB:.2f}MB of RAM.")
self._log_cache_state(title="Before dropping models:")
ram_bytes_available = self._get_ram_available()
@@ -625,7 +627,7 @@ class ModelCache:
if not cache_entry.is_locked:
ram_bytes_freed += cache_entry.cached_model.total_bytes()
self._logger.debug(
f"Dropping {model_key} from RAM cache to free {(cache_entry.cached_model.total_bytes()/MB):.2f}MB."
f"Dropping {model_key} from RAM cache to free {(cache_entry.cached_model.total_bytes() / MB):.2f}MB."
)
self._delete_cache_entry(cache_entry)
del cache_entry
@@ -650,7 +652,7 @@ class ModelCache:
gc.collect()
TorchDevice.empty_cache()
self._logger.debug(f"Dropped {models_cleared} models to free {ram_bytes_freed/MB:.2f}MB of RAM.")
self._logger.debug(f"Dropped {models_cleared} models to free {ram_bytes_freed / MB:.2f}MB of RAM.")
self._log_cache_state(title="After dropping models:")
def _delete_cache_entry(self, cache_entry: CacheRecord) -> None:

View File

@@ -25,6 +25,7 @@ from invokeai.backend.flux.ip_adapter.xlabs_ip_adapter_flux import (
)
from invokeai.backend.flux.model import Flux
from invokeai.backend.flux.modules.autoencoder import AutoEncoder
from invokeai.backend.flux.redux.flux_redux_model import FluxReduxModel
from invokeai.backend.flux.util import ae_params, params
from invokeai.backend.model_manager import (
AnyModel,
@@ -39,6 +40,7 @@ from invokeai.backend.model_manager.config import (
CLIPEmbedDiffusersConfig,
ControlNetCheckpointConfig,
ControlNetDiffusersConfig,
FluxReduxConfig,
IPAdapterCheckpointConfig,
MainBnbQuantized4bCheckpointConfig,
MainCheckpointConfig,
@@ -391,3 +393,25 @@ class FluxIpAdapterModel(ModelLoader):
model.load_xlabs_state_dict(sd, assign=True)
return model
@ModelLoaderRegistry.register(base=BaseModelType.Flux, type=ModelType.FluxRedux, format=ModelFormat.Checkpoint)
class FluxReduxModelLoader(ModelLoader):
"""Class to load FLUX Redux models."""
def _load_model(
self,
config: AnyModelConfig,
submodel_type: Optional[SubModelType] = None,
) -> AnyModel:
if not isinstance(config, FluxReduxConfig):
raise ValueError(f"Unexpected model config type: {type(config)}.")
sd = load_file(Path(config.path))
with accelerate.init_empty_weights():
model = FluxReduxModel()
model.load_state_dict(sd, assign=True)
model.to(dtype=torch.bfloat16)
return model

View File

@@ -0,0 +1,32 @@
from pathlib import Path
from typing import Optional
from invokeai.backend.model_manager.config import (
AnyModel,
AnyModelConfig,
BaseModelType,
ModelFormat,
ModelType,
SubModelType,
)
from invokeai.backend.model_manager.load.load_default import ModelLoader
from invokeai.backend.model_manager.load.model_loader_registry import ModelLoaderRegistry
from invokeai.backend.sig_lip.sig_lip_pipeline import SigLipPipeline
@ModelLoaderRegistry.register(base=BaseModelType.Any, type=ModelType.SigLIP, format=ModelFormat.Diffusers)
class SigLIPModelLoader(ModelLoader):
"""Class for loading SigLIP models."""
def _load_model(
self,
config: AnyModelConfig,
submodel_type: Optional[SubModelType] = None,
) -> AnyModel:
if submodel_type is not None:
raise ValueError("Unexpected submodel requested for LLaVA OneVision model.")
model_path = Path(config.path)
model = SigLipPipeline.load_from_path(model_path)
model.to(dtype=self._torch_dtype)
return model

View File

@@ -18,6 +18,7 @@ from invokeai.backend.ip_adapter.ip_adapter import IPAdapter
from invokeai.backend.model_manager.config import AnyModel
from invokeai.backend.onnx.onnx_runtime import IAIOnnxRuntimeModel
from invokeai.backend.patches.model_patch_raw import ModelPatchRaw
from invokeai.backend.sig_lip.sig_lip_pipeline import SigLipPipeline
from invokeai.backend.spandrel_image_to_image_model import SpandrelImageToImageModel
from invokeai.backend.textual_inversion import TextualInversionModelRaw
from invokeai.backend.util.calc_tensor_size import calc_tensor_size
@@ -48,6 +49,7 @@ def calc_model_size_by_data(logger: logging.Logger, model: AnyModel) -> int:
GroundingDinoPipeline,
SegmentAnythingPipeline,
DepthAnythingPipeline,
SigLipPipeline,
),
):
return model.calc_size()
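Adding `SigLipPipeline` to this isinstance tuple routes size accounting through the wrapper's `calc_size()`, which ultimately sums the bytes of the wrapped module's parameters and buffers. A rough sketch of what that kind of calculation looks like (the real `calc_module_size` helper may differ):

# Rough sketch of a module size calculation; the real calc_module_size may differ.
import torch

def module_nbytes(module: torch.nn.Module) -> int:
    total = sum(p.numel() * p.element_size() for p in module.parameters())
    total += sum(b.numel() * b.element_size() for b in module.buffers())
    return total

print(module_nbytes(torch.nn.Linear(1152, 4096)))  # weight + bias bytes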

View File

@@ -115,19 +115,19 @@ class ModelMerger(object):
base_models: Set[BaseModelType] = set()
variant = None if self._installer.app_config.precision == "float32" else "fp16"
assert (
len(model_keys) <= 2 or interp == MergeInterpolationMethod.AddDifference
), "When merging three models, only the 'add_difference' merge method is supported"
assert len(model_keys) <= 2 or interp == MergeInterpolationMethod.AddDifference, (
"When merging three models, only the 'add_difference' merge method is supported"
)
for key in model_keys:
info = store.get_model(key)
model_names.append(info.name)
assert isinstance(
info, MainDiffusersConfig
), f"{info.name} ({info.key}) is not a diffusers model. It must be optimized before merging"
assert info.variant == ModelVariantType(
"normal"
), f"{info.name} ({info.key}) is a {info.variant} model, which cannot currently be merged"
assert isinstance(info, MainDiffusersConfig), (
f"{info.name} ({info.key}) is not a diffusers model. It must be optimized before merging"
)
assert info.variant == ModelVariantType("normal"), (
f"{info.name} ({info.key}) is a {info.variant} model, which cannot currently be merged"
)
# tally base models used
base_models.add(info.base)

View File

@@ -15,6 +15,7 @@ from invokeai.backend.flux.controlnet.state_dict_utils import (
is_state_dict_xlabs_controlnet,
)
from invokeai.backend.flux.ip_adapter.state_dict_utils import is_state_dict_xlabs_ip_adapter
from invokeai.backend.flux.redux.flux_redux_state_dict_utils import is_state_dict_likely_flux_redux
from invokeai.backend.model_hash.model_hash import HASHING_ALGORITHMS, ModelHash
from invokeai.backend.model_manager.config import (
AnyModelConfig,
@@ -139,6 +140,7 @@ class ModelProbe(object):
"FluxControlNetModel": ModelType.ControlNet,
"SD3Transformer2DModel": ModelType.Main,
"CLIPTextModelWithProjection": ModelType.CLIPEmbed,
"SiglipModel": ModelType.SigLIP,
}
TYPE2VARIANT: Dict[ModelType, Callable[[str], Optional[AnyVariant]]] = {ModelType.CLIPEmbed: get_clip_variant_type}
@@ -267,6 +269,9 @@ class ModelProbe(object):
if isinstance(ckpt, dict) and is_state_dict_likely_flux_control(ckpt):
return ModelType.ControlLoRa
if isinstance(ckpt, dict) and is_state_dict_likely_flux_redux(ckpt):
return ModelType.FluxRedux
for key in [str(k) for k in ckpt.keys()]:
if key.startswith(
(
@@ -752,6 +757,16 @@ class SpandrelImageToImageCheckpointProbe(CheckpointProbeBase):
return BaseModelType.Any
class SigLIPCheckpointProbe(CheckpointProbeBase):
def get_base_type(self) -> BaseModelType:
raise NotImplementedError()
class FluxReduxCheckpointProbe(CheckpointProbeBase):
def get_base_type(self) -> BaseModelType:
return BaseModelType.Flux
########################################################
# classes for probing folders
#######################################################
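`is_state_dict_likely_flux_redux` is a heuristic over the checkpoint's keys; the diff imports it but does not show its body. A sketch of the general shape of such a check, with placeholder key names that are assumptions, not the real ones:

# Shape of a key-based state-dict heuristic; the key names below are placeholders.
from typing import Any

def is_state_dict_likely_flux_redux_sketch(sd: dict[str | int, Any]) -> bool:
    expected = {"redux_up.weight", "redux_down.weight"}  # hypothetical Redux projection keys
    return expected.issubset({str(k) for k in sd.keys()})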
@@ -1022,6 +1037,16 @@ class SpandrelImageToImageFolderProbe(FolderProbeBase):
raise NotImplementedError()
class SigLIPFolderProbe(FolderProbeBase):
def get_base_type(self) -> BaseModelType:
return BaseModelType.Any
class FluxReduxFolderProbe(FolderProbeBase):
def get_base_type(self) -> BaseModelType:
raise NotImplementedError()
class T2IAdapterFolderProbe(FolderProbeBase):
def get_base_type(self) -> BaseModelType:
config_file = self.model_path / "config.json"
@@ -1055,6 +1080,8 @@ ModelProbe.register_probe("diffusers", ModelType.CLIPEmbed, CLIPEmbedFolderProbe
ModelProbe.register_probe("diffusers", ModelType.CLIPVision, CLIPVisionFolderProbe)
ModelProbe.register_probe("diffusers", ModelType.T2IAdapter, T2IAdapterFolderProbe)
ModelProbe.register_probe("diffusers", ModelType.SpandrelImageToImage, SpandrelImageToImageFolderProbe)
ModelProbe.register_probe("diffusers", ModelType.SigLIP, SigLIPFolderProbe)
ModelProbe.register_probe("diffusers", ModelType.FluxRedux, FluxReduxFolderProbe)
ModelProbe.register_probe("checkpoint", ModelType.Main, PipelineCheckpointProbe)
ModelProbe.register_probe("checkpoint", ModelType.VAE, VaeCheckpointProbe)
@@ -1066,5 +1093,7 @@ ModelProbe.register_probe("checkpoint", ModelType.IPAdapter, IPAdapterCheckpoint
ModelProbe.register_probe("checkpoint", ModelType.CLIPVision, CLIPVisionCheckpointProbe)
ModelProbe.register_probe("checkpoint", ModelType.T2IAdapter, T2IAdapterCheckpointProbe)
ModelProbe.register_probe("checkpoint", ModelType.SpandrelImageToImage, SpandrelImageToImageCheckpointProbe)
ModelProbe.register_probe("checkpoint", ModelType.SigLIP, SigLIPCheckpointProbe)
ModelProbe.register_probe("checkpoint", ModelType.FluxRedux, FluxReduxCheckpointProbe)
ModelProbe.register_probe("onnx", ModelType.ONNX, ONNXFolderProbe)

View File

@@ -593,6 +593,26 @@ swinir = StarterModel(
# endregion
# region SigLIP
siglip = StarterModel(
name="SigLIP - google/siglip-so400m-patch14-384",
base=BaseModelType.Any,
source="google/siglip-so400m-patch14-384",
description="A SigLIP model (used by FLUX Redux).",
type=ModelType.SigLIP,
)
# endregion
# region FLUX Redux
flux_redux = StarterModel(
name="FLUX Redux",
base=BaseModelType.Flux,
source="black-forest-labs/FLUX.1-Redux-dev::flux1-redux-dev.safetensors",
description="FLUX Redux model (for image variation).",
type=ModelType.FluxRedux,
dependencies=[siglip],
)
# endregion
# List of starter models, displayed on the frontend.
# The order/sort of this list is not changed by the frontend - set it how you want it here.
@@ -661,6 +681,8 @@ STARTER_MODELS: list[StarterModel] = [
t5_base_encoder,
t5_8b_quantized_encoder,
clip_l_encoder,
siglip,
flux_redux,
]
sd1_bundle: list[StarterModel] = [
@@ -708,6 +730,7 @@ flux_bundle: list[StarterModel] = [
ip_adapter_flux,
flux_canny_control_lora,
flux_depth_control_lora,
flux_redux,
]
STARTER_BUNDLES: dict[str, list[StarterModel]] = {
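Because `flux_redux` declares `dependencies=[siglip]`, installing the Redux starter entry should also install the SigLIP encoder it needs. A sketch of how such a dependency list can be flattened into install order (an illustrative helper, not the actual installer code):

# Illustrative dependency flattening; not the actual starter-model installer code.
from dataclasses import dataclass, field

@dataclass
class StarterSketch:
    name: str
    dependencies: list["StarterSketch"] = field(default_factory=list)

def flatten(model: StarterSketch) -> list[StarterSketch]:
    out: list[StarterSketch] = []
    for dep in model.dependencies:
        out.extend(flatten(dep))  # dependencies install first
    out.append(model)
    return out

siglip_sketch = StarterSketch("SigLIP")
redux_sketch = StarterSketch("FLUX Redux", dependencies=[siglip_sketch])
assert [m.name for m in flatten(redux_sketch)] == ["SigLIP", "FLUX Redux"]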

View File

@@ -37,19 +37,21 @@ class Struct_mallinfo2(ctypes.Structure):
def __str__(self) -> str:
s = ""
s += f"{'arena': <10}= {(self.arena/2**30):15.5f} # Non-mmapped space allocated (GB) (uordblks + fordblks)\n"
s += (
f"{'arena': <10}= {(self.arena / 2**30):15.5f} # Non-mmapped space allocated (GB) (uordblks + fordblks)\n"
)
s += f"{'ordblks': <10}= {(self.ordblks): >15} # Number of free chunks\n"
s += f"{'smblks': <10}= {(self.smblks): >15} # Number of free fastbin blocks \n"
s += f"{'hblks': <10}= {(self.hblks): >15} # Number of mmapped regions \n"
s += f"{'hblkhd': <10}= {(self.hblkhd/2**30):15.5f} # Space allocated in mmapped regions (GB)\n"
s += f"{'hblkhd': <10}= {(self.hblkhd / 2**30):15.5f} # Space allocated in mmapped regions (GB)\n"
s += f"{'usmblks': <10}= {(self.usmblks): >15} # Unused\n"
s += f"{'fsmblks': <10}= {(self.fsmblks/2**30):15.5f} # Space in freed fastbin blocks (GB)\n"
s += f"{'fsmblks': <10}= {(self.fsmblks / 2**30):15.5f} # Space in freed fastbin blocks (GB)\n"
s += (
f"{'uordblks': <10}= {(self.uordblks/2**30):15.5f} # Space used by in-use allocations (non-mmapped)"
f"{'uordblks': <10}= {(self.uordblks / 2**30):15.5f} # Space used by in-use allocations (non-mmapped)"
" (GB)\n"
)
s += f"{'fordblks': <10}= {(self.fordblks/2**30):15.5f} # Space in free blocks (non-mmapped) (GB)\n"
s += f"{'keepcost': <10}= {(self.keepcost/2**30):15.5f} # Top-most, releasable space (GB)\n"
s += f"{'fordblks': <10}= {(self.fordblks / 2**30):15.5f} # Space in free blocks (non-mmapped) (GB)\n"
s += f"{'keepcost': <10}= {(self.keepcost / 2**30):15.5f} # Top-most, releasable space (GB)\n"
return s
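For context, `Struct_mallinfo2` mirrors glibc's `struct mallinfo2` (ten `size_t` fields), so the report above comes from calling `mallinfo2()` through ctypes. A minimal, self-contained usage sketch (assumes Linux with glibc 2.33 or newer, where `mallinfo2` exists):

# Minimal usage sketch; assumes Linux with glibc >= 2.33, where mallinfo2() exists.
import ctypes

class Mallinfo2(ctypes.Structure):  # mirrors glibc's struct mallinfo2: ten size_t fields
    _fields_ = [(name, ctypes.c_size_t) for name in (
        "arena", "ordblks", "smblks", "hblks", "hblkhd",
        "usmblks", "fsmblks", "uordblks", "fordblks", "keepcost",
    )]

libc = ctypes.CDLL("libc.so.6")
libc.mallinfo2.restype = Mallinfo2
info = libc.mallinfo2()
print(f"in-use (non-mmapped): {info.uordblks / 2**30:.5f} GB")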

View File

@@ -73,36 +73,36 @@ def _make_sdxl_unet_conversion_map() -> List[Tuple[str, str]]:
for j in range(2):
# loop over resnets/attentions for downblocks
hf_down_res_prefix = f"down_blocks.{i}.resnets.{j}."
sd_down_res_prefix = f"input_blocks.{3*i + j + 1}.0."
sd_down_res_prefix = f"input_blocks.{3 * i + j + 1}.0."
unet_conversion_map_layer.append((sd_down_res_prefix, hf_down_res_prefix))
if i < 3:
# no attention layers in down_blocks.3
hf_down_atn_prefix = f"down_blocks.{i}.attentions.{j}."
sd_down_atn_prefix = f"input_blocks.{3*i + j + 1}.1."
sd_down_atn_prefix = f"input_blocks.{3 * i + j + 1}.1."
unet_conversion_map_layer.append((sd_down_atn_prefix, hf_down_atn_prefix))
for j in range(3):
# loop over resnets/attentions for upblocks
hf_up_res_prefix = f"up_blocks.{i}.resnets.{j}."
sd_up_res_prefix = f"output_blocks.{3*i + j}.0."
sd_up_res_prefix = f"output_blocks.{3 * i + j}.0."
unet_conversion_map_layer.append((sd_up_res_prefix, hf_up_res_prefix))
# if i > 0: commentout for sdxl
# no attention layers in up_blocks.0
hf_up_atn_prefix = f"up_blocks.{i}.attentions.{j}."
sd_up_atn_prefix = f"output_blocks.{3*i + j}.1."
sd_up_atn_prefix = f"output_blocks.{3 * i + j}.1."
unet_conversion_map_layer.append((sd_up_atn_prefix, hf_up_atn_prefix))
if i < 3:
# no downsample in down_blocks.3
hf_downsample_prefix = f"down_blocks.{i}.downsamplers.0.conv."
sd_downsample_prefix = f"input_blocks.{3*(i+1)}.0.op."
sd_downsample_prefix = f"input_blocks.{3 * (i + 1)}.0.op."
unet_conversion_map_layer.append((sd_downsample_prefix, hf_downsample_prefix))
# no upsample in up_blocks.3
hf_upsample_prefix = f"up_blocks.{i}.upsamplers.0."
sd_upsample_prefix = f"output_blocks.{3*i + 2}.{2}." # change for sdxl
sd_upsample_prefix = f"output_blocks.{3 * i + 2}.{2}." # change for sdxl
unet_conversion_map_layer.append((sd_upsample_prefix, hf_upsample_prefix))
hf_mid_atn_prefix = "mid_block.attentions.0."
@@ -111,7 +111,7 @@ def _make_sdxl_unet_conversion_map() -> List[Tuple[str, str]]:
for j in range(2):
hf_mid_res_prefix = f"mid_block.resnets.{j}."
sd_mid_res_prefix = f"middle_block.{2*j}."
sd_mid_res_prefix = f"middle_block.{2 * j}."
unet_conversion_map_layer.append((sd_mid_res_prefix, hf_mid_res_prefix))
unet_conversion_map_resnet = [
@@ -133,13 +133,13 @@ def _make_sdxl_unet_conversion_map() -> List[Tuple[str, str]]:
unet_conversion_map.append((sd, hf))
for j in range(2):
hf_time_embed_prefix = f"time_embedding.linear_{j+1}."
sd_time_embed_prefix = f"time_embed.{j*2}."
hf_time_embed_prefix = f"time_embedding.linear_{j + 1}."
sd_time_embed_prefix = f"time_embed.{j * 2}."
unet_conversion_map.append((sd_time_embed_prefix, hf_time_embed_prefix))
for j in range(2):
hf_label_embed_prefix = f"add_embedding.linear_{j+1}."
sd_label_embed_prefix = f"label_emb.0.{j*2}."
hf_label_embed_prefix = f"add_embedding.linear_{j + 1}."
sd_label_embed_prefix = f"label_emb.0.{j * 2}."
unet_conversion_map.append((sd_label_embed_prefix, hf_label_embed_prefix))
unet_conversion_map.append(("input_blocks.0.0.", "conv_in."))
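The index arithmetic in these prefixes is easy to misread, so a couple of concrete values help: for `i = 1, j = 0`, HF `down_blocks.1.resnets.0.` pairs with SD `input_blocks.4.0.` since 3*1 + 0 + 1 = 4. A quick sanity check (not part of the conversion code):

# Quick sanity check of the prefix arithmetic above; not part of the conversion code.
i, j = 1, 0
assert f"input_blocks.{3 * i + j + 1}.0." == "input_blocks.4.0."      # down_blocks.1.resnets.0.
assert f"output_blocks.{3 * i + j}.0." == "output_blocks.3.0."        # up_blocks.1.resnets.0.
assert f"input_blocks.{3 * (i + 1)}.0.op." == "input_blocks.6.0.op."  # downsampler of down_blocks.1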

View File

@@ -0,0 +1,43 @@
from pathlib import Path
from typing import Optional
import torch
from PIL import Image
from transformers import SiglipImageProcessor, SiglipVisionModel
from invokeai.backend.raw_model import RawModel
class SigLipPipeline(RawModel):
"""A wrapper for a SigLIP model + processor."""
def __init__(
self,
siglip_processor: SiglipImageProcessor,
siglip_model: SiglipVisionModel,
):
self._siglip_processor = siglip_processor
self._siglip_model = siglip_model
@classmethod
def load_from_path(cls, path: str | Path):
siglip_model = SiglipVisionModel.from_pretrained(path, local_files_only=True)
assert isinstance(siglip_model, SiglipVisionModel)
siglip_processor = SiglipImageProcessor.from_pretrained(path, local_files_only=True)
assert isinstance(siglip_processor, SiglipImageProcessor)
return cls(siglip_processor, siglip_model)
def to(self, device: Optional[torch.device] = None, dtype: Optional[torch.dtype] = None) -> None:
self._siglip_model.to(device=device, dtype=dtype)
def encode_image(self, x: Image.Image, device: torch.device, dtype: torch.dtype) -> torch.Tensor:
imgs = self._siglip_processor.preprocess(images=[x], do_resize=True, return_tensors="pt", do_convert_rgb=True)
encoded_x = self._siglip_model(**imgs.to(device=device, dtype=dtype)).last_hidden_state
return encoded_x
def calc_size(self) -> int:
"""Get size of the model in memory in bytes."""
# HACK(ryand): Fix this issue with circular imports.
from invokeai.backend.model_manager.load.model_util import calc_module_size
return calc_module_size(self._siglip_model)
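A usage sketch for the pipeline above; the model path and image file are placeholders, and it assumes the SigLIP weights have already been downloaded locally:

# Usage sketch for SigLipPipeline; the path and image below are placeholders.
import torch
from PIL import Image

pipeline = SigLipPipeline.load_from_path("/models/siglip-so400m-patch14-384")
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
pipeline.to(device=device, dtype=torch.bfloat16)

img = Image.open("example.png")
embedding = pipeline.encode_image(img, device=device, dtype=torch.bfloat16)
print(embedding.shape)  # last_hidden_state: (1, num_patches, hidden_size)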

View File

@@ -60,14 +60,13 @@
"@fontsource-variable/inter": "^5.1.0",
"@invoke-ai/ui-library": "^0.0.46",
"@nanostores/react": "^0.7.3",
"@reduxjs/toolkit": "2.5.1",
"@reduxjs/toolkit": "2.6.0",
"@roarr/browser-log-writer": "^1.3.0",
"@xyflow/react": "^12.4.2",
"async-mutex": "^0.5.0",
"chakra-react-select": "^4.9.2",
"cmdk": "^1.0.0",
"compare-versions": "^6.1.1",
"dateformat": "^5.0.3",
"fracturedjsonjs": "^4.0.2",
"framer-motion": "^11.10.0",
"i18next": "^23.15.1",
@@ -101,7 +100,7 @@
"react-resizable-panels": "^2.1.4",
"react-textarea-autosize": "^8.5.7",
"react-use": "^17.5.1",
"react-virtuoso": "^4.10.4",
"react-virtuoso": "^4.12.5",
"redux-dynamic-middlewares": "^2.2.0",
"redux-remember": "^5.1.0",
"redux-undo": "^1.1.0",
@@ -131,7 +130,6 @@
"@storybook/react": "^8.3.4",
"@storybook/react-vite": "^8.5.5",
"@storybook/theming": "^8.3.4",
"@types/dateformat": "^5.0.2",
"@types/lodash-es": "^4.17.12",
"@types/node": "^20.16.10",
"@types/react": "^18.3.11",

View File

@@ -30,8 +30,8 @@ dependencies:
specifier: ^0.7.3
version: 0.7.3(nanostores@0.11.3)(react@18.3.1)
'@reduxjs/toolkit':
specifier: 2.5.1
version: 2.5.1(react-redux@9.1.2)(react@18.3.1)
specifier: 2.6.0
version: 2.6.0(react-redux@9.1.2)(react@18.3.1)
'@roarr/browser-log-writer':
specifier: ^1.3.0
version: 1.3.0
@@ -50,9 +50,6 @@ dependencies:
compare-versions:
specifier: ^6.1.1
version: 6.1.1
dateformat:
specifier: ^5.0.3
version: 5.0.3
fracturedjsonjs:
specifier: ^4.0.2
version: 4.0.2
@@ -153,8 +150,8 @@ dependencies:
specifier: ^17.5.1
version: 17.5.1(react-dom@18.3.1)(react@18.3.1)
react-virtuoso:
specifier: ^4.10.4
version: 4.10.4(react-dom@18.3.1)(react@18.3.1)
specifier: ^4.12.5
version: 4.12.5(react-dom@18.3.1)(react@18.3.1)
redux-dynamic-middlewares:
specifier: ^2.2.0
version: 2.2.0
@@ -226,9 +223,6 @@ devDependencies:
'@storybook/theming':
specifier: ^8.3.4
version: 8.3.4(storybook@8.3.4)
'@types/dateformat':
specifier: ^5.0.2
version: 5.0.2
'@types/lodash-es':
specifier: ^4.17.12
version: 4.17.12
@@ -2317,8 +2311,8 @@ packages:
- supports-color
dev: true
/@reduxjs/toolkit@2.5.1(react-redux@9.1.2)(react@18.3.1):
resolution: {integrity: sha512-UHhy3p0oUpdhnSxyDjaRDYaw8Xra75UiLbCiRozVPHjfDwNYkh0TsVm/1OmTW8Md+iDAJmYPWUKMvsMc2GtpNg==}
/@reduxjs/toolkit@2.6.0(react-redux@9.1.2)(react@18.3.1):
resolution: {integrity: sha512-mWJCYpewLRyTuuzRSEC/IwIBBkYg2dKtQas8mty5MaV2iXzcmicS3gW554FDeOvLnY3x13NIk8MB1e8wHO7rqQ==}
peerDependencies:
react: ^16.9.0 || ^17.0.0 || ^18 || ^19
react-redux: ^7.2.1 || ^8.1.3 || ^9.0.0
@@ -3355,10 +3349,6 @@ packages:
'@types/d3-selection': 3.0.10
dev: false
/@types/dateformat@5.0.2:
resolution: {integrity: sha512-M95hNBMa/hnwErH+a+VOD/sYgTmo15OTYTM2Hr52/e0OdOuY+Crag+kd3/ioZrhg0WGbl9Sm3hR7UU+MH6rfOw==}
dev: true
/@types/diff-match-patch@1.0.36:
resolution: {integrity: sha512-xFdR6tkm0MWvBfO8xXCSsinYxHcqkQUlcHeSpMC2ukzOb6lwQAfDmW+Qt0AvlGd8HpsS28qKsB+oPeJn9I39jg==}
dev: false
@@ -3435,7 +3425,7 @@ packages:
/@types/lodash.mergewith@4.6.7:
resolution: {integrity: sha512-3m+lkO5CLRRYU0fhGRp7zbsGi6+BZj0uTVSwvcKU+nSlhjA9/QRNfuSGnD2mX6hQA7ZbmcCkzk5h4ZYGOtk14A==}
dependencies:
'@types/lodash': 4.17.15
'@types/lodash': 4.17.16
dev: false
/@types/lodash.mergewith@4.6.9:
@@ -3447,8 +3437,8 @@ packages:
/@types/lodash@4.17.10:
resolution: {integrity: sha512-YpS0zzoduEhuOWjAotS6A5AVCva7X4lVlYLF0FYHAY9sdraBfnatttHItlWeZdGhuEkf+OzMNg2ZYAx8t+52uQ==}
/@types/lodash@4.17.15:
resolution: {integrity: sha512-w/P33JFeySuhN6JLkysYUK2gEmy9kHHFN7E8ro0tkfmlDOgxBDzWEZ/J8cWA+fHqFevpswDTFZnDx+R9lbL6xw==}
/@types/lodash@4.17.16:
resolution: {integrity: sha512-HX7Em5NYQAXKW+1T+FiuG27NGwzJfCX3s1GjOa7ujxZa52kjJLOr4FUxT+giF6Tgxv1e+/czV/iTtBw27WTU9g==}
dev: false
/@types/mdx@2.0.13:
@@ -4856,11 +4846,6 @@ packages:
'@babel/runtime': 7.25.7
dev: true
/dateformat@5.0.3:
resolution: {integrity: sha512-Kvr6HmPXUMerlLcLF+Pwq3K7apHpYmGDVqrxcDasBg86UcKeTSNWbEzU8bwdXnxnR44FtMhJAxI4Bov6Y/KUfA==}
engines: {node: '>=12.20'}
dev: false
/de-indent@1.0.2:
resolution: {integrity: sha512-e/1zu3xH5MQryN2zdVaF0OrdNLUbvWxzMbi+iNA6Bky7l1RoP8a2fIbRocyHclXt/arDrrR6lL3TqFD9pMQTsg==}
dev: true
@@ -7913,12 +7898,11 @@ packages:
tslib: 2.7.0
dev: false
/react-virtuoso@4.10.4(react-dom@18.3.1)(react@18.3.1):
resolution: {integrity: sha512-G/gprhTbK+lzMxoo/iStcZxVEGph/cIhc3WANEpt92RuMw+LiCZOmBfKoeoZOHlm/iyftTrDJhGaTCpxyucnkQ==}
engines: {node: '>=10'}
/react-virtuoso@4.12.5(react-dom@18.3.1)(react@18.3.1):
resolution: {integrity: sha512-YeCbRRsC9CLf0buD0Rct7WsDbzf+yBU1wGbo05/XjbcN2nJuhgh040m3y3+6HVogTZxEqVm45ac9Fpae4/MxRQ==}
peerDependencies:
react: '>=16 || >=17 || >= 18'
react-dom: '>=16 || >=17 || >= 18'
react: '>=16 || >=17 || >= 18 || >= 19'
react-dom: '>=16 || >=17 || >= 18 || >=19'
dependencies:
react: 18.3.1
react-dom: 18.3.1(react@18.3.1)

View File

@@ -113,7 +113,8 @@
"end": "Ende",
"layout": "Layout",
"board": "Ordner",
"combinatorial": "Kombinatorisch"
"combinatorial": "Kombinatorisch",
"saveChanges": "Änderungen speichern"
},
"gallery": {
"galleryImageSize": "Bildgröße",
@@ -761,7 +762,16 @@
"workflowDeleted": "Arbeitsablauf gelöscht",
"errorCopied": "Fehler kopiert",
"layerCopiedToClipboard": "Ebene in die Zwischenablage kopiert",
"sentToCanvas": "An Leinwand gesendet"
"sentToCanvas": "An Leinwand gesendet",
"problemDeletingWorkflow": "Problem beim Löschen des Arbeitsablaufs",
"uploadFailedInvalidUploadDesc_withCount_one": "Es darf maximal 1 PNG- oder JPEG-Bild sein.",
"uploadFailedInvalidUploadDesc_withCount_other": "Es dürfen maximal {{count}} PNG- oder JPEG-Bilder sein.",
"problemRetrievingWorkflow": "Problem beim Abrufen des Arbeitsablaufs",
"uploadFailedInvalidUploadDesc": "Müssen PNG- oder JPEG-Bilder sein.",
"pasteSuccess": "Eingefügt in {{destination}}",
"pasteFailed": "Einfügen fehlgeschlagen",
"unableToCopy": "Kopieren nicht möglich",
"unableToCopyDesc_theseSteps": "diese Schritte"
},
"accessibility": {
"uploadImage": "Bild hochladen",
@@ -1314,7 +1324,8 @@
"nodeName": "Knotenname",
"description": "Beschreibung",
"loadWorkflowDesc": "Arbeitsablauf laden?",
"loadWorkflowDesc2": "Ihr aktueller Arbeitsablauf enthält nicht gespeicherte Änderungen."
"loadWorkflowDesc2": "Ihr aktueller Arbeitsablauf enthält nicht gespeicherte Änderungen.",
"loadingTemplates": "Lade {{name}}"
},
"hrf": {
"enableHrf": "Korrektur für hohe Auflösungen",

View File

@@ -149,6 +149,7 @@
"safetensors": "Safetensors",
"save": "Save",
"saveAs": "Save As",
"saveChanges": "Save Changes",
"settingsLabel": "Settings",
"simple": "Simple",
"somethingWentWrong": "Something went wrong",
@@ -761,6 +762,7 @@
"deleteMsg2": "This WILL delete the model from disk if it is in the InvokeAI root folder. If you are using a custom location, then the model WILL NOT be deleted from disk.",
"description": "Description",
"edit": "Edit",
"fluxRedux": "FLUX Redux",
"height": "Height",
"huggingFace": "HuggingFace",
"huggingFacePlaceholder": "owner/model-name",
@@ -835,6 +837,7 @@
"settings": "Settings",
"simpleModelPlaceholder": "URL or path to a local file or diffusers folder",
"source": "Source",
"sigLip": "SigLIP",
"spandrelImageToImage": "Image to Image (Spandrel)",
"starterBundles": "Starter Bundles",
"starterBundleHelpText": "Easily install all models needed to get started with a base model, including a main model, controlnets, IP adapters, and more. Selecting a bundle will skip any models that you already have installed.",
@@ -1283,9 +1286,9 @@
"somethingWentWrong": "Something Went Wrong",
"uploadFailed": "Upload failed",
"imagesWillBeAddedTo": "Uploaded images will be added to board {{boardName}}'s assets.",
"uploadFailedInvalidUploadDesc_withCount_one": "Must be maximum of 1 PNG or JPEG image.",
"uploadFailedInvalidUploadDesc_withCount_other": "Must be maximum of {{count}} PNG or JPEG images.",
"uploadFailedInvalidUploadDesc": "Must be PNG or JPEG images.",
"uploadFailedInvalidUploadDesc_withCount_one": "Must be maximum of 1 PNG, JPEG or WEBP image.",
"uploadFailedInvalidUploadDesc_withCount_other": "Must be maximum of {{count}} PNG, JPEG or WEBP images.",
"uploadFailedInvalidUploadDesc": "Must be PNG, JPEG or WEBP images.",
"workflowLoaded": "Workflow Loaded",
"problemRetrievingWorkflow": "Problem Retrieving Workflow",
"workflowDeleted": "Workflow Deleted",
@@ -1682,12 +1685,21 @@
"created": "Created",
"descending": "Descending",
"workflows": "Workflows",
"workflowLibrary": "Library",
"workflowLibrary": "Workflow Library",
"loadMore": "Load More",
"allLoaded": "All Workflows Loaded",
"searchPlaceholder": "Search by name, description or tags",
"filterByTags": "Filter by Tags",
"yourWorkflows": "Your Workflows",
"recentlyOpened": "Recently Opened",
"private": "Private",
"shared": "Shared",
"browseWorkflows": "Browse Workflows",
"resetTags": "Reset Tags",
"opened": "Opened",
"openWorkflow": "Open Workflow",
"updated": "Updated",
"uploadWorkflow": "Load from File",
"uploadAndSaveWorkflow": "Upload to Library",
"deleteWorkflow": "Delete Workflow",
"deleteWorkflow2": "Are you sure you want to delete this workflow? This cannot be undone.",
"unnamedWorkflow": "Unnamed Workflow",
@@ -1719,6 +1731,8 @@
"copyShareLinkForWorkflow": "Copy Share Link for Workflow",
"delete": "Delete",
"openLibrary": "Open Library",
"workflowThumbnail": "Workflow Thumbnail",
"saveChanges": "Save Changes",
"builder": {
"deleteAllElements": "Delete All Form Elements",
"resetAllNodeFields": "Reset All Node Fields",
@@ -1727,6 +1741,8 @@
"row": "Row",
"column": "Column",
"container": "Container",
"containerRowLayout": "Container (row layout)",
"containerColumnLayout": "Container (column layout)",
"heading": "Heading",
"text": "Text",
"divider": "Divider",

View File

@@ -1771,7 +1771,6 @@
"projectWorkflows": "Workflows du projet",
"copyShareLink": "Copier le lien de partage",
"chooseWorkflowFromLibrary": "Choisir le Workflow dans la Bibliothèque",
"uploadAndSaveWorkflow": "Importer dans la bibliothèque",
"edit": "Modifer",
"deleteWorkflow2": "Êtes-vous sûr de vouloir supprimer ce Workflow? Cette action ne peut pas être annulé.",
"download": "Télécharger",

View File

@@ -109,7 +109,8 @@
"board": "Bacheca",
"layout": "Schema",
"row": "Riga",
"column": "Colonna"
"column": "Colonna",
"saveChanges": "Salva modifiche"
},
"gallery": {
"galleryImageSize": "Dimensione dell'immagine",
@@ -784,7 +785,7 @@
"serverError": "Errore del Server",
"connected": "Connesso al server",
"canceled": "Elaborazione annullata",
"uploadFailedInvalidUploadDesc": "Devono essere immagini PNG o JPEG.",
"uploadFailedInvalidUploadDesc": "Devono essere immagini PNG, JPEG o WEBP.",
"parameterSet": "Parametro richiamato",
"parameterNotSet": "Parametro non richiamato",
"problemCopyingImage": "Impossibile copiare l'immagine",
@@ -835,9 +836,9 @@
"linkCopied": "Collegamento copiato",
"addedToUncategorized": "Aggiunto alle risorse della bacheca $t(boards.uncategorized)",
"imagesWillBeAddedTo": "Le immagini caricate verranno aggiunte alle risorse della bacheca {{boardName}}.",
"uploadFailedInvalidUploadDesc_withCount_one": "Devi caricare al massimo 1 immagine PNG o JPEG.",
"uploadFailedInvalidUploadDesc_withCount_many": "Devi caricare al massimo {{count}} immagini PNG o JPEG.",
"uploadFailedInvalidUploadDesc_withCount_other": "Devi caricare al massimo {{count}} immagini PNG o JPEG.",
"uploadFailedInvalidUploadDesc_withCount_one": "Devi caricare al massimo 1 immagine PNG, JPEG o WEBP.",
"uploadFailedInvalidUploadDesc_withCount_many": "Devi caricare al massimo {{count}} immagini PNG, JPEG o WEBP.",
"uploadFailedInvalidUploadDesc_withCount_other": "Devi caricare al massimo {{count}} immagini PNG, JPEG o WEBP.",
"outOfMemoryErrorDescLocal": "Segui la nostra <LinkComponent>guida per bassa VRAM</LinkComponent> per ridurre gli OOM.",
"pasteFailed": "Incolla non riuscita",
"pasteSuccess": "Incollato su {{destination}}",
@@ -1704,7 +1705,7 @@
"saveWorkflow": "Salva flusso di lavoro",
"openWorkflow": "Apri flusso di lavoro",
"clearWorkflowSearchFilter": "Cancella il filtro di ricerca del flusso di lavoro",
"workflowLibrary": "Libreria",
"workflowLibrary": "Libreria flussi di lavoro",
"workflowSaved": "Flusso di lavoro salvato",
"unnamedWorkflow": "Flusso di lavoro senza nome",
"savingWorkflow": "Salvataggio del flusso di lavoro...",
@@ -1734,7 +1735,6 @@
"userWorkflows": "Flussi di lavoro utente",
"projectWorkflows": "Flussi di lavoro del progetto",
"defaultWorkflows": "Flussi di lavoro predefiniti",
"uploadAndSaveWorkflow": "Carica nella libreria",
"chooseWorkflowFromLibrary": "Scegli il flusso di lavoro dalla libreria",
"deleteWorkflow2": "Vuoi davvero eliminare questo flusso di lavoro? Questa operazione non può essere annullata.",
"edit": "Modifica",
@@ -1772,7 +1772,19 @@
"container": "Contenitore",
"text": "Testo",
"numberInput": "Ingresso numerico"
}
},
"loadMore": "Carica altro",
"searchPlaceholder": "Cerca per nome, descrizione o etichetta",
"filterByTags": "Filtra per etichetta",
"shared": "Condiviso",
"browseWorkflows": "Sfoglia i flussi di lavoro",
"resetTags": "Reimposta le etichette",
"allLoaded": "Tutti i flussi di lavoro caricati",
"saveChanges": "Salva modifiche",
"yourWorkflows": "I tuoi flussi di lavoro",
"recentlyOpened": "Aperto di recente",
"workflowThumbnail": "Miniatura del flusso di lavoro",
"private": "Privato"
},
"accordions": {
"compositing": {
@@ -2330,8 +2342,8 @@
"watchRecentReleaseVideos": "Guarda i video su questa versione",
"watchUiUpdatesOverview": "Guarda le novità dell'interfaccia",
"items": [
"Editor del flusso di lavoro: nuovo generatore di moduli trascina-e-rilascia per una creazione più facile del flusso di lavoro.",
"Altri miglioramenti: messa in coda dei lotti più rapida, migliore ampliamento, selettore colore migliorato e nodi metadati."
"Gestione della memoria: nuova impostazione per gli utenti con GPU Nvidia per ridurre l'utilizzo della VRAM.",
"Prestazioni: continui miglioramenti alle prestazioni e alla reattività complessive dell'applicazione."
]
},
"system": {

View File

@@ -1566,7 +1566,6 @@
"defaultWorkflows": "Стандартные рабочие процессы",
"deleteWorkflow2": "Вы уверены, что хотите удалить этот рабочий процесс? Это нельзя отменить.",
"chooseWorkflowFromLibrary": "Выбрать рабочий процесс из библиотеки",
"uploadAndSaveWorkflow": "Загрузить в библиотеку",
"edit": "Редактировать",
"download": "Скачать",
"copyShareLink": "Скопировать ссылку на общий доступ",

View File

@@ -235,7 +235,8 @@
"column": "Cột",
"layout": "Bố Cục",
"row": "Hàng",
"board": "Bảng"
"board": "Bảng",
"saveChanges": "Lưu Thay Đổi"
},
"prompt": {
"addPromptTrigger": "Thêm Prompt Trigger",
@@ -766,7 +767,9 @@
"urlUnauthorizedErrorMessage2": "Tìm hiểu thêm.",
"urlForbidden": "Bạn không có quyền truy cập vào model này",
"urlForbiddenErrorMessage": "Bạn có thể cần yêu cầu quyền truy cập từ trang web đang cung cấp model.",
"urlUnauthorizedErrorMessage": "Bạn có thể cần thiếp lập một token API để dùng được model này."
"urlUnauthorizedErrorMessage": "Bạn có thể cần thiếp lập một token API để dùng được model này.",
"fluxRedux": "FLUX Redux",
"sigLip": "SigLIP"
},
"metadata": {
"guidance": "Hướng Dẫn",
@@ -979,7 +982,7 @@
"unknownInput": "Đầu Vào Không Rõ: {{name}}",
"validateConnections": "Xác Thực Kết Nối Và Đồ Thị",
"workflowNotes": "Ghi Chú",
"workflowTags": "Thẻ Tên",
"workflowTags": "Nhãn",
"editMode": "Chỉnh sửa trong Trình Biên Tập Workflow",
"edit": "Chỉnh Sửa",
"executionStateInProgress": "Đang Xử Lý",
@@ -2021,7 +2024,7 @@
},
"mergingLayers": "Đang gộp layer",
"controlLayerEmptyState": "<UploadButton>Tải lên ảnh</UploadButton>, kéo thả ảnh từ <GalleryButton>thư viện</GalleryButton> vào layer này, hoặc vẽ trên canvas để bắt đầu.",
"referenceImageEmptyState": "<UploadButton>Tải lên ảnh</UploadButton> hoặc kéo thả ảnh từ <GalleryButton>thư viện</GalleryButton> vào layer này để bắt đầu.",
"referenceImageEmptyState": "<UploadButton>Tải lên hình ảnh</UploadButton>, kéo ảnh từ <GalleryButton>thư viện ảnh</GalleryButton> vào layer này, hoặc <PullBboxButton>kéo hộp giới hạn vào layer này</PullBboxButton> để bắt đầu.",
"useImage": "Dùng Hình Ảnh",
"resetCanvasLayers": "Khởi Động Lại Layer Canvas",
"asRasterLayer": "Như $t(controlLayers.rasterLayer)",
@@ -2137,7 +2140,7 @@
"toast": {
"imageUploadFailed": "Tải Lên Ảnh Thất Bại",
"layerCopiedToClipboard": "Sao Chép Layer Vào Clipboard",
"uploadFailedInvalidUploadDesc_withCount_other": "Tối đa là {{count}} ảnh PNG hoặc JPEG.",
"uploadFailedInvalidUploadDesc_withCount_other": "Tối đa là {{count}} ảnh PNG, JPEG hoặc WEBP.",
"imageCopied": "Ảnh Đã Được Sao Chép",
"sentToUpscale": "Chuyển Vào Upscale",
"unableToLoadImage": "Không Thể Tải Hình Ảnh",
@@ -2149,7 +2152,7 @@
"unableToLoadImageMetadata": "Không Thể Tải Metadata Của Ảnh",
"workflowLoaded": "Workflow Đã Tải",
"uploadFailed": "Tải Lên Thất Bại",
"uploadFailedInvalidUploadDesc": "Phải là ảnh PNG hoặc JPEG.",
"uploadFailedInvalidUploadDesc": "Phải là ảnh PNG, JPEG hoặc WEBP.",
"serverError": "Lỗi Server",
"addedToBoard": "Thêm vào tài nguyên của bảng {{name}}",
"sessionRef": "Phiên: {{sessionId}}",
@@ -2252,11 +2255,10 @@
"convertGraph": "Chuyển Đổi Đồ Thị",
"saveWorkflowToProject": "Lưu Workflow Vào Dự Án",
"workflowName": "Tên Workflow",
"workflowLibrary": "Thư Viện",
"workflowLibrary": "Thư Viện Workflow",
"opened": "Ngày Mở",
"deleteWorkflow": "Xoá Workflow",
"workflowEditorMenu": "Menu Biên Tập Workflow",
"uploadAndSaveWorkflow": "Tải Lên Thư Viện",
"openLibrary": "Mở Thư Viện",
"builder": {
"resetAllNodeFields": "Tải Lại Các Vùng Node",
@@ -2287,7 +2289,19 @@
"heading": "Đầu Dòng",
"text": "Văn Bản",
"divider": "Gạch Chia"
}
},
"yourWorkflows": "Workflow Của Bạn",
"browseWorkflows": "Khám Phá Workflow",
"workflowThumbnail": "Ảnh Minh Họa Workflow",
"saveChanges": "Lưu Thay Đổi",
"allLoaded": "Đã Tải Tất Cả Workflow",
"shared": "Nhóm",
"searchPlaceholder": "Tìm theo tên, mô tả, hoặc nhãn",
"filterByTags": "Lọc Theo Nhãn",
"recentlyOpened": "Mở Gần Đây",
"private": "Cá Nhân",
"resetTags": "Khởi Động Lại Nhãn",
"loadMore": "Tải Thêm"
},
"upscaling": {
"missingUpscaleInitialImage": "Thiếu ảnh dùng để upscale",
@@ -2322,8 +2336,8 @@
"watchRecentReleaseVideos": "Xem Video Phát Hành Mới Nhất",
"watchUiUpdatesOverview": "Xem Tổng Quan Về Những Cập Nhật Cho Giao Diện Người Dùng",
"items": [
"Trình Biên Tập Workflow: trình tạo vùng nhập dưới dạng kéo thả nhằm tạo dựng workflow dễ dàng hơn.",
"Các nâng cấp khác: Xếp hàng tạo sinh theo nhóm nhanh hơn, upscale tốt hơn, trình chọn màu được cải thiện, và node chứa metadata."
"Trình Quản Lý Bộ Nhớ: Thiết lập mới cho người dùng với GPU Nvidia để giảm lượng VRAM sử dng.",
"Hiệu suất: Các cải thiện tiếp theo nhm gói gọn hiệu suất và khả năng phản hồi của ứng dụng."
]
},
"upsell": {

View File

@@ -1629,7 +1629,6 @@
"projectWorkflows": "项目工作流程",
"copyShareLink": "复制分享链接",
"chooseWorkflowFromLibrary": "从库中选择工作流程",
"uploadAndSaveWorkflow": "上传到库",
"deleteWorkflow2": "您确定要删除此工作流程吗?此操作无法撤销。"
},
"accordions": {

View File

@@ -26,7 +26,8 @@ import { DynamicPromptsModal } from 'features/dynamicPrompts/components/DynamicP
import DeleteBoardModal from 'features/gallery/components/Boards/DeleteBoardModal';
import { ImageContextMenu } from 'features/gallery/components/ImageContextMenu/ImageContextMenu';
import { useStarterModelsToast } from 'features/modelManagerV2/hooks/useStarterModelsToast';
import { ShareWorkflowModal } from 'features/nodes/components/sidePanel/WorkflowListMenu/ShareWorkflowModal';
import { ShareWorkflowModal } from 'features/nodes/components/sidePanel/workflow/WorkflowLibrary/ShareWorkflowModal';
import { WorkflowLibraryModal } from 'features/nodes/components/sidePanel/workflow/WorkflowLibrary/WorkflowLibraryModal';
import { CancelAllExceptCurrentQueueItemConfirmationAlertDialog } from 'features/queue/components/CancelAllExceptCurrentQueueItemConfirmationAlertDialog';
import { ClearQueueConfirmationsAlertDialog } from 'features/queue/components/ClearQueueConfirmationAlertDialog';
import { useReadinessWatcher } from 'features/queue/store/readiness';
@@ -39,7 +40,9 @@ import { selectLanguage } from 'features/system/store/systemSelectors';
import { AppContent } from 'features/ui/components/AppContent';
import { DeleteWorkflowDialog } from 'features/workflowLibrary/components/DeleteLibraryWorkflowConfirmationAlertDialog';
import { LoadWorkflowConfirmationAlertDialog } from 'features/workflowLibrary/components/LoadWorkflowConfirmationAlertDialog';
import { LoadWorkflowFromGraphModal } from 'features/workflowLibrary/components/LoadWorkflowFromGraphModal/LoadWorkflowFromGraphModal';
import { NewWorkflowConfirmationAlertDialog } from 'features/workflowLibrary/components/NewWorkflowConfirmationAlertDialog';
import { SaveWorkflowAsDialog } from 'features/workflowLibrary/components/SaveWorkflowAsDialog';
import i18n from 'i18n';
import { size } from 'lodash-es';
import { memo, useCallback, useEffect } from 'react';
@@ -48,7 +51,6 @@ import { useGetOpenAPISchemaQuery } from 'services/api/endpoints/appInfo';
import { useSocketIO } from 'services/events/useSocketIO';
import AppErrorBoundaryFallback from './AppErrorBoundaryFallback';
const DEFAULT_CONFIG = {};
interface Props {
@@ -73,28 +75,7 @@ const App = ({ config = DEFAULT_CONFIG, studioInitAction }: Props) => {
{!didStudioInit && <Loading />}
</Box>
<HookIsolator config={config} studioInitAction={studioInitAction} />
<DeleteImageModal />
<ChangeBoardModal />
<DynamicPromptsModal />
<StylePresetModal />
<CancelAllExceptCurrentQueueItemConfirmationAlertDialog />
<ClearQueueConfirmationsAlertDialog />
<NewWorkflowConfirmationAlertDialog />
<LoadWorkflowConfirmationAlertDialog />
<DeleteStylePresetDialog />
<DeleteWorkflowDialog />
<ShareWorkflowModal />
<RefreshAfterResetModal />
<DeleteBoardModal />
<GlobalImageHotkeys />
<NewGallerySessionDialog />
<NewCanvasSessionDialog />
<ImageContextMenu />
<FullscreenDropzone />
<VideosModal />
<CanvasManagerProviderGate>
<CanvasPasteModal />
</CanvasManagerProviderGate>
<ModalIsolator />
</ErrorBoundary>
);
};
@@ -140,3 +121,36 @@ const HookIsolator = memo(
}
);
HookIsolator.displayName = 'HookIsolator';
const ModalIsolator = memo(() => {
return (
<>
<DeleteImageModal />
<ChangeBoardModal />
<DynamicPromptsModal />
<StylePresetModal />
<WorkflowLibraryModal />
<CancelAllExceptCurrentQueueItemConfirmationAlertDialog />
<ClearQueueConfirmationsAlertDialog />
<NewWorkflowConfirmationAlertDialog />
<LoadWorkflowConfirmationAlertDialog />
<DeleteStylePresetDialog />
<DeleteWorkflowDialog />
<ShareWorkflowModal />
<RefreshAfterResetModal />
<DeleteBoardModal />
<GlobalImageHotkeys />
<NewGallerySessionDialog />
<NewCanvasSessionDialog />
<ImageContextMenu />
<FullscreenDropzone />
<VideosModal />
<SaveWorkflowAsDialog />
<CanvasManagerProviderGate>
<CanvasPasteModal />
</CanvasManagerProviderGate>
<LoadWorkflowFromGraphModal />
</>
);
});
ModalIsolator.displayName = 'ModalIsolator';

View File

@@ -11,7 +11,7 @@ import { $imageViewer } from 'features/gallery/components/ImageViewer/useImageVi
import { sentImageToCanvas } from 'features/gallery/store/actions';
import { parseAndRecallAllMetadata } from 'features/metadata/util/handlers';
import { $hasTemplates } from 'features/nodes/store/nodesSlice';
import { $isWorkflowListMenuIsOpen } from 'features/nodes/store/workflowListMenu';
import { $isWorkflowLibraryModalOpen } from 'features/nodes/store/workflowLibraryModal';
import { $isStylePresetsMenuOpen, activeStylePresetIdChanged } from 'features/stylePresets/store/stylePresetSlice';
import { toast } from 'features/toast/toast';
import { activeTabCanvasRightPanelChanged, setActiveTab } from 'features/ui/store/uiSlice';
@@ -166,7 +166,7 @@ export const useStudioInitAction = (action?: StudioInitAction) => {
case 'viewAllWorkflows':
// Go to the workflows tab and open the workflow library modal
store.dispatch(setActiveTab('workflows'));
$isWorkflowListMenuIsOpen.set(true);
$isWorkflowLibraryModalOpen.set(true);
break;
case 'viewAllStylePresets':
// Go to the canvas tab and open the style presets menu

View File

@@ -27,7 +27,6 @@ import { addDynamicPromptsListener } from 'app/store/middleware/listenerMiddlewa
import { addSetDefaultSettingsListener } from 'app/store/middleware/listenerMiddleware/listeners/setDefaultSettings';
import { addSocketConnectedEventListener } from 'app/store/middleware/listenerMiddleware/listeners/socketConnected';
import { addUpdateAllNodesRequestedListener } from 'app/store/middleware/listenerMiddleware/listeners/updateAllNodesRequested';
import { addWorkflowLoadRequestedListener } from 'app/store/middleware/listenerMiddleware/listeners/workflowLoadRequested';
import type { AppDispatch, RootState } from 'app/store/store';
import { addArchivedOrDeletedBoardListener } from './listeners/addArchivedOrDeletedBoardListener';
@@ -89,7 +88,6 @@ addArchivedOrDeletedBoardListener(startAppListening);
addGetOpenAPISchemaListener(startAppListening);
// Workflows
addWorkflowLoadRequestedListener(startAppListening);
addUpdateAllNodesRequestedListener(startAppListening);
// Models

View File

@@ -31,6 +31,7 @@ import type { AnyModelConfig } from 'services/api/types';
import {
isCLIPEmbedModelConfig,
isControlLayerModelConfig,
isFluxReduxModelConfig,
isFluxVAEModelConfig,
isIPAdapterModelConfig,
isLoRAModelConfig,
@@ -77,6 +78,7 @@ export const addModelsLoadedListener = (startAppListening: AppStartListening) =>
handleT5EncoderModels(models, state, dispatch, log);
handleCLIPEmbedModels(models, state, dispatch, log);
handleFLUXVAEModels(models, state, dispatch, log);
handleFLUXReduxModels(models, state, dispatch, log);
},
});
};
@@ -209,6 +211,10 @@ const handleControlAdapterModels: ModelHandler = (models, state, dispatch, log)
const handleIPAdapterModels: ModelHandler = (models, state, dispatch, log) => {
const ipaModels = models.filter(isIPAdapterModelConfig);
selectCanvasSlice(state).referenceImages.entities.forEach((entity) => {
if (entity.ipAdapter.type !== 'ip_adapter') {
return;
}
const selectedIPAdapterModel = entity.ipAdapter.model;
// `null` is a valid IP adapter model - no need to do anything.
if (!selectedIPAdapterModel) {
@@ -224,6 +230,10 @@ const handleIPAdapterModels: ModelHandler = (models, state, dispatch, log) => {
selectCanvasSlice(state).regionalGuidance.entities.forEach((entity) => {
entity.referenceImages.forEach(({ id: referenceImageId, ipAdapter }) => {
if (ipAdapter.type !== 'ip_adapter') {
return;
}
const selectedIPAdapterModel = ipAdapter.model;
// `null` is a valid IP adapter model - no need to do anything.
if (!selectedIPAdapterModel) {
@@ -241,6 +251,49 @@ const handleIPAdapterModels: ModelHandler = (models, state, dispatch, log) => {
});
};
const handleFLUXReduxModels: ModelHandler = (models, state, dispatch, log) => {
const fluxReduxModels = models.filter(isFluxReduxModelConfig);
selectCanvasSlice(state).referenceImages.entities.forEach((entity) => {
if (entity.ipAdapter.type !== 'flux_redux') {
return;
}
const selectedFLUXReduxModel = entity.ipAdapter.model;
// `null` is a valid FLUX Redux model - no need to do anything.
if (!selectedFLUXReduxModel) {
return;
}
const isModelAvailable = fluxReduxModels.some((m) => m.key === selectedFLUXReduxModel.key);
if (isModelAvailable) {
return;
}
log.debug({ selectedFLUXReduxModel }, 'Selected FLUX Redux model is not available, clearing');
dispatch(referenceImageIPAdapterModelChanged({ entityIdentifier: getEntityIdentifier(entity), modelConfig: null }));
});
selectCanvasSlice(state).regionalGuidance.entities.forEach((entity) => {
entity.referenceImages.forEach(({ id: referenceImageId, ipAdapter }) => {
if (ipAdapter.type !== 'flux_redux') {
return;
}
const selectedFLUXReduxModel = ipAdapter.model;
// `null` is a valid FLUX Redux model - no need to do anything.
if (!selectedFLUXReduxModel) {
return;
}
const isModelAvailable = fluxReduxModels.some((m) => m.key === selectedFLUXReduxModel.key);
if (isModelAvailable) {
return;
}
log.debug({ selectedFLUXReduxModel }, 'Selected FLUX Redux model is not available, clearing');
dispatch(
rgIPAdapterModelChanged({ entityIdentifier: getEntityIdentifier(entity), referenceImageId, modelConfig: null })
);
});
});
};
const handlePostProcessingModel: ModelHandler = (models, state, dispatch, log) => {
const selectedPostProcessingModel = state.upscale.postProcessingModel;
const allSpandrelModels = models.filter(isSpandrelImageToImageModelConfig);

View File

@@ -9,7 +9,7 @@ import {
useAddRegionalGuidance,
useAddRegionalReferenceImage,
} from 'features/controlLayers/hooks/addLayerHooks';
import { selectIsFLUX, selectIsSD3 } from 'features/controlLayers/store/paramsSlice';
import { selectIsSD3 } from 'features/controlLayers/store/paramsSlice';
import { memo } from 'react';
import { useTranslation } from 'react-i18next';
import { PiPlusBold } from 'react-icons/pi';
@@ -22,7 +22,6 @@ export const CanvasAddEntityButtons = memo(() => {
const addControlLayer = useAddControlLayer();
const addGlobalReferenceImage = useAddGlobalReferenceImage();
const addRegionalReferenceImage = useAddRegionalReferenceImage();
const isFLUX = useAppSelector(selectIsFLUX);
const isSD3 = useAppSelector(selectIsSD3);
return (
@@ -75,7 +74,7 @@ export const CanvasAddEntityButtons = memo(() => {
justifyContent="flex-start"
leftIcon={<PiPlusBold />}
onClick={addRegionalReferenceImage}
isDisabled={isFLUX || isSD3}
isDisabled={isSD3}
>
{t('controlLayers.regionalReferenceImage')}
</Button>

View File

@@ -9,7 +9,7 @@ import {
useAddRegionalReferenceImage,
} from 'features/controlLayers/hooks/addLayerHooks';
import { useCanvasIsBusy } from 'features/controlLayers/hooks/useCanvasIsBusy';
import { selectIsFLUX, selectIsSD3 } from 'features/controlLayers/store/paramsSlice';
import { selectIsSD3 } from 'features/controlLayers/store/paramsSlice';
import { memo } from 'react';
import { useTranslation } from 'react-i18next';
import { PiPlusBold } from 'react-icons/pi';
@@ -23,7 +23,6 @@ export const EntityListGlobalActionBarAddLayerMenu = memo(() => {
const addRegionalReferenceImage = useAddRegionalReferenceImage();
const addRasterLayer = useAddRasterLayer();
const addControlLayer = useAddControlLayer();
const isFLUX = useAppSelector(selectIsFLUX);
const isSD3 = useAppSelector(selectIsSD3);
return (
@@ -52,7 +51,7 @@ export const EntityListGlobalActionBarAddLayerMenu = memo(() => {
<MenuItem icon={<PiPlusBold />} onClick={addRegionalGuidance} isDisabled={isSD3}>
{t('controlLayers.regionalGuidance')}
</MenuItem>
<MenuItem icon={<PiPlusBold />} onClick={addRegionalReferenceImage} isDisabled={isFLUX || isSD3}>
<MenuItem icon={<PiPlusBold />} onClick={addRegionalReferenceImage} isDisabled={isSD3}>
{t('controlLayers.regionalReferenceImage')}
</MenuItem>
</MenuGroup>

View File

@@ -0,0 +1,61 @@
import type { ComboboxOnChange } from '@invoke-ai/ui-library';
import { Combobox, FormControl } from '@invoke-ai/ui-library';
import { useAppSelector } from 'app/store/storeHooks';
import { selectIsFLUX } from 'features/controlLayers/store/paramsSlice';
import type { CLIPVisionModelV2 } from 'features/controlLayers/store/types';
import { isCLIPVisionModelV2 } from 'features/controlLayers/store/types';
import { memo, useCallback, useMemo } from 'react';
import { useTranslation } from 'react-i18next';
import { assert } from 'tsafe';
// at this time, ViT-L is the only supported clip model for FLUX IP adapter
const FLUX_CLIP_VISION = 'ViT-L';
const CLIP_VISION_OPTIONS = [
{ label: 'ViT-H', value: 'ViT-H' },
{ label: 'ViT-G', value: 'ViT-G' },
{ label: FLUX_CLIP_VISION, value: FLUX_CLIP_VISION },
];
type Props = {
model: CLIPVisionModelV2;
onChange: (clipVisionModel: CLIPVisionModelV2) => void;
};
export const CLIPVisionModel = memo(({ model, onChange }: Props) => {
const { t } = useTranslation();
const _onChangeCLIPVisionModel = useCallback<ComboboxOnChange>(
(v) => {
assert(isCLIPVisionModelV2(v?.value));
onChange(v.value);
},
[onChange]
);
const isFLUX = useAppSelector(selectIsFLUX);
const clipVisionOptions = useMemo(() => {
return CLIP_VISION_OPTIONS.map((option) => ({
...option,
isDisabled: isFLUX && option.value !== FLUX_CLIP_VISION,
}));
}, [isFLUX]);
const clipVisionModelValue = useMemo(() => {
return CLIP_VISION_OPTIONS.find((o) => o.value === model);
}, [model]);
return (
<FormControl width="max-content" minWidth={28}>
<Combobox
options={clipVisionOptions}
placeholder={t('common.placeholderSelectAModel')}
value={clipVisionModelValue}
onChange={_onChangeCLIPVisionModel}
/>
</FormControl>
);
});
CLIPVisionModel.displayName = 'CLIPVisionModel';

View File

@@ -1,40 +1,36 @@
import type { ComboboxOnChange } from '@invoke-ai/ui-library';
import { Combobox, Flex, FormControl, Tooltip } from '@invoke-ai/ui-library';
import { Combobox, FormControl, Tooltip } from '@invoke-ai/ui-library';
import { useAppSelector } from 'app/store/storeHooks';
import { useGroupedModelCombobox } from 'common/hooks/useGroupedModelCombobox';
import { selectBase, selectIsFLUX } from 'features/controlLayers/store/paramsSlice';
import type { CLIPVisionModelV2 } from 'features/controlLayers/store/types';
import { isCLIPVisionModelV2 } from 'features/controlLayers/store/types';
import { selectBase } from 'features/controlLayers/store/paramsSlice';
import { memo, useCallback, useMemo } from 'react';
import { useTranslation } from 'react-i18next';
import { useIPAdapterModels } from 'services/api/hooks/modelsByType';
import type { AnyModelConfig, IPAdapterModelConfig } from 'services/api/types';
import { assert } from 'tsafe';
// at this time, ViT-L is the only supported clip model for FLUX IP adapter
const FLUX_CLIP_VISION = 'ViT-L';
const CLIP_VISION_OPTIONS = [
{ label: 'ViT-H', value: 'ViT-H' },
{ label: 'ViT-G', value: 'ViT-G' },
{ label: FLUX_CLIP_VISION, value: FLUX_CLIP_VISION },
];
import { useIPAdapterOrFLUXReduxModels } from 'services/api/hooks/modelsByType';
import type { AnyModelConfig, FLUXReduxModelConfig, IPAdapterModelConfig } from 'services/api/types';
type Props = {
isRegionalGuidance: boolean;
modelKey: string | null;
onChangeModel: (modelConfig: IPAdapterModelConfig) => void;
clipVisionModel: CLIPVisionModelV2;
onChangeCLIPVisionModel: (clipVisionModel: CLIPVisionModelV2) => void;
onChangeModel: (modelConfig: IPAdapterModelConfig | FLUXReduxModelConfig) => void;
};
export const IPAdapterModel = memo(({ modelKey, onChangeModel, clipVisionModel, onChangeCLIPVisionModel }: Props) => {
export const IPAdapterModel = memo(({ isRegionalGuidance, modelKey, onChangeModel }: Props) => {
const { t } = useTranslation();
const currentBaseModel = useAppSelector(selectBase);
const [modelConfigs, { isLoading }] = useIPAdapterModels();
const filter = useCallback(
(config: IPAdapterModelConfig | FLUXReduxModelConfig) => {
// FLUX supports regional guidance for FLUX Redux models only - not IP Adapter models.
if (isRegionalGuidance && config.base === 'flux' && config.type === 'ip_adapter') {
return false;
}
return true;
},
[isRegionalGuidance]
);
const [modelConfigs, { isLoading }] = useIPAdapterOrFLUXReduxModels(filter);
const selectedModel = useMemo(() => modelConfigs.find((m) => m.key === modelKey), [modelConfigs, modelKey]);
const _onChangeModel = useCallback(
(modelConfig: IPAdapterModelConfig | null) => {
(modelConfig: IPAdapterModelConfig | FLUXReduxModelConfig | null) => {
if (!modelConfig) {
return;
}
@@ -43,21 +39,11 @@ export const IPAdapterModel = memo(({ modelKey, onChangeModel, clipVisionModel,
[onChangeModel]
);
const _onChangeCLIPVisionModel = useCallback<ComboboxOnChange>(
(v) => {
assert(isCLIPVisionModelV2(v?.value));
onChangeCLIPVisionModel(v.value);
},
[onChangeCLIPVisionModel]
);
const isFLUX = useAppSelector(selectIsFLUX);
const getIsDisabled = useCallback(
(model: AnyModelConfig): boolean => {
const isCompatible = currentBaseModel === model.base;
const hasMainModel = Boolean(currentBaseModel);
return !hasMainModel || !isCompatible;
const hasSameBase = currentBaseModel === model.base;
return !hasMainModel || !hasSameBase;
},
[currentBaseModel]
);
@@ -70,41 +56,18 @@ export const IPAdapterModel = memo(({ modelKey, onChangeModel, clipVisionModel,
isLoading,
});
const clipVisionOptions = useMemo(() => {
return CLIP_VISION_OPTIONS.map((option) => ({
...option,
isDisabled: isFLUX && option.value !== FLUX_CLIP_VISION,
}));
}, [isFLUX]);
const clipVisionModelValue = useMemo(() => {
return CLIP_VISION_OPTIONS.find((o) => o.value === clipVisionModel);
}, [clipVisionModel]);
return (
<Flex gap={2}>
<Tooltip label={selectedModel?.description}>
<FormControl isInvalid={!value || currentBaseModel !== selectedModel?.base} w="full">
<Combobox
options={options}
placeholder={t('common.placeholderSelectAModel')}
value={value}
onChange={onChange}
noOptionsMessage={noOptionsMessage}
/>
</FormControl>
</Tooltip>
{selectedModel?.format === 'checkpoint' && (
<FormControl isInvalid={!value || currentBaseModel !== selectedModel?.base} width="max-content" minWidth={28}>
<Combobox
options={clipVisionOptions}
placeholder={t('common.placeholderSelectAModel')}
value={clipVisionModelValue}
onChange={_onChangeCLIPVisionModel}
/>
</FormControl>
)}
</Flex>
<Tooltip label={selectedModel?.description}>
<FormControl isInvalid={!value || currentBaseModel !== selectedModel?.base} w="full">
<Combobox
options={options}
placeholder={t('common.placeholderSelectAModel')}
value={value}
onChange={onChange}
noOptionsMessage={noOptionsMessage}
/>
</FormControl>
</Tooltip>
);
});

View File

@@ -1,9 +1,10 @@
import { Box, Flex, IconButton } from '@invoke-ai/ui-library';
import { Flex, IconButton } from '@invoke-ai/ui-library';
import { createSelector } from '@reduxjs/toolkit';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { BeginEndStepPct } from 'features/controlLayers/components/common/BeginEndStepPct';
import { CanvasEntitySettingsWrapper } from 'features/controlLayers/components/common/CanvasEntitySettingsWrapper';
import { Weight } from 'features/controlLayers/components/common/Weight';
import { CLIPVisionModel } from 'features/controlLayers/components/IPAdapter/CLIPVisionModel';
import { IPAdapterMethod } from 'features/controlLayers/components/IPAdapter/IPAdapterMethod';
import { IPAdapterSettingsEmptyState } from 'features/controlLayers/components/IPAdapter/IPAdapterSettingsEmptyState';
import { useEntityIdentifierContext } from 'features/controlLayers/contexts/EntityIdentifierContext';
@@ -25,7 +26,7 @@ import { setGlobalReferenceImageDndTarget } from 'features/dnd/dnd';
import { memo, useCallback, useMemo } from 'react';
import { useTranslation } from 'react-i18next';
import { PiBoundingBoxBold } from 'react-icons/pi';
import type { ImageDTO, IPAdapterModelConfig } from 'services/api/types';
import type { FLUXReduxModelConfig, ImageDTO, IPAdapterModelConfig } from 'services/api/types';
import { IPAdapterImagePreview } from './IPAdapterImagePreview';
import { IPAdapterModel } from './IPAdapterModel';
@@ -65,7 +66,7 @@ const IPAdapterSettingsContent = memo(() => {
);
const onChangeModel = useCallback(
(modelConfig: IPAdapterModelConfig) => {
(modelConfig: IPAdapterModelConfig | FLUXReduxModelConfig) => {
dispatch(referenceImageIPAdapterModelChanged({ entityIdentifier, modelConfig }));
},
[dispatch, entityIdentifier]
@@ -98,14 +99,14 @@ const IPAdapterSettingsContent = memo(() => {
<CanvasEntitySettingsWrapper>
<Flex flexDir="column" gap={2} position="relative" w="full">
<Flex gap={2} alignItems="center" w="full">
<Box minW={0} w="full" transitionProperty="common" transitionDuration="0.1s">
<IPAdapterModel
modelKey={ipAdapter.model?.key ?? null}
onChangeModel={onChangeModel}
clipVisionModel={ipAdapter.clipVisionModel}
onChangeCLIPVisionModel={onChangeCLIPVisionModel}
/>
</Box>
<IPAdapterModel
isRegionalGuidance={false}
modelKey={ipAdapter.model?.key ?? null}
onChangeModel={onChangeModel}
/>
{ipAdapter.type === 'ip_adapter' && (
<CLIPVisionModel model={ipAdapter.clipVisionModel} onChange={onChangeCLIPVisionModel} />
)}
<IconButton
onClick={pullBboxIntoIPAdapter}
isDisabled={isBusy}
@@ -116,12 +117,14 @@ const IPAdapterSettingsContent = memo(() => {
/>
</Flex>
<Flex gap={2} w="full" alignItems="center">
<Flex flexDir="column" gap={2} w="full">
{!isFLUX && <IPAdapterMethod method={ipAdapter.method} onChange={onChangeIPMethod} />}
<Weight weight={ipAdapter.weight} onChange={onChangeWeight} />
<BeginEndStepPct beginEndStepPct={ipAdapter.beginEndStepPct} onChange={onChangeBeginEndStepPct} />
</Flex>
<Flex alignItems="center" justifyContent="center" h={32} w={32} aspectRatio="1/1">
{ipAdapter.type === 'ip_adapter' && (
<Flex flexDir="column" gap={2} w="full">
{!isFLUX && <IPAdapterMethod method={ipAdapter.method} onChange={onChangeIPMethod} />}
<Weight weight={ipAdapter.weight} onChange={onChangeWeight} />
<BeginEndStepPct beginEndStepPct={ipAdapter.beginEndStepPct} onChange={onChangeBeginEndStepPct} />
</Flex>
)}
<Flex alignItems="center" justifyContent="center" h={32} w={32} aspectRatio="1/1" flexGrow={1}>
<IPAdapterImagePreview
image={ipAdapter.image}
onChangeImage={onChangeImage}

View File

@@ -1,8 +1,9 @@
import { Box, Flex, IconButton, Spacer, Text } from '@invoke-ai/ui-library';
import { Flex, IconButton, Spacer, Text } from '@invoke-ai/ui-library';
import { createSelector } from '@reduxjs/toolkit';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { BeginEndStepPct } from 'features/controlLayers/components/common/BeginEndStepPct';
import { Weight } from 'features/controlLayers/components/common/Weight';
import { CLIPVisionModel } from 'features/controlLayers/components/IPAdapter/CLIPVisionModel';
import { IPAdapterImagePreview } from 'features/controlLayers/components/IPAdapter/IPAdapterImagePreview';
import { IPAdapterMethod } from 'features/controlLayers/components/IPAdapter/IPAdapterMethod';
import { IPAdapterModel } from 'features/controlLayers/components/IPAdapter/IPAdapterModel';
@@ -26,7 +27,7 @@ import { setRegionalGuidanceReferenceImageDndTarget } from 'features/dnd/dnd';
import { memo, useCallback, useMemo } from 'react';
import { useTranslation } from 'react-i18next';
import { PiBoundingBoxBold, PiXBold } from 'react-icons/pi';
import type { ImageDTO, IPAdapterModelConfig } from 'services/api/types';
import type { FLUXReduxModelConfig, ImageDTO, IPAdapterModelConfig } from 'services/api/types';
import { assert } from 'tsafe';
type Props = {
@@ -73,7 +74,7 @@ const RegionalGuidanceIPAdapterSettingsContent = memo(({ referenceImageId }: Pro
);
const onChangeModel = useCallback(
(modelConfig: IPAdapterModelConfig) => {
(modelConfig: IPAdapterModelConfig | FLUXReduxModelConfig) => {
dispatch(rgIPAdapterModelChanged({ entityIdentifier, referenceImageId, modelConfig }));
},
[dispatch, entityIdentifier, referenceImageId]
@@ -125,14 +126,14 @@ const RegionalGuidanceIPAdapterSettingsContent = memo(({ referenceImageId }: Pro
</Flex>
<Flex flexDir="column" gap={2} position="relative" w="full">
<Flex gap={2} alignItems="center" w="full">
<Box minW={0} w="full" transitionProperty="common" transitionDuration="0.1s">
<IPAdapterModel
modelKey={ipAdapter.model?.key ?? null}
onChangeModel={onChangeModel}
clipVisionModel={ipAdapter.clipVisionModel}
onChangeCLIPVisionModel={onChangeCLIPVisionModel}
/>
</Box>
<IPAdapterModel
isRegionalGuidance={true}
modelKey={ipAdapter.model?.key ?? null}
onChangeModel={onChangeModel}
/>
{ipAdapter.type === 'ip_adapter' && (
<CLIPVisionModel model={ipAdapter.clipVisionModel} onChange={onChangeCLIPVisionModel} />
)}
<IconButton
onClick={pullBboxIntoIPAdapter}
isDisabled={isBusy}
@@ -143,12 +144,14 @@ const RegionalGuidanceIPAdapterSettingsContent = memo(({ referenceImageId }: Pro
/>
</Flex>
<Flex gap={2} w="full">
<Flex flexDir="column" gap={2} w="full">
<IPAdapterMethod method={ipAdapter.method} onChange={onChangeIPMethod} />
<Weight weight={ipAdapter.weight} onChange={onChangeWeight} />
<BeginEndStepPct beginEndStepPct={ipAdapter.beginEndStepPct} onChange={onChangeBeginEndStepPct} />
</Flex>
<Flex alignItems="center" justifyContent="center" h={32} w={32} aspectRatio="1/1">
{ipAdapter.type === 'ip_adapter' && (
<Flex flexDir="column" gap={2} w="full">
<IPAdapterMethod method={ipAdapter.method} onChange={onChangeIPMethod} />
<Weight weight={ipAdapter.weight} onChange={onChangeWeight} />
<BeginEndStepPct beginEndStepPct={ipAdapter.beginEndStepPct} onChange={onChangeBeginEndStepPct} />
</Flex>
)}
<Flex alignItems="center" justifyContent="center" h={32} w={32} aspectRatio="1/1" flexGrow={1}>
<IPAdapterImagePreview
image={ipAdapter.image}
onChangeImage={onChangeImage}


@@ -38,6 +38,7 @@ import type { UndoableOptions } from 'redux-undo';
import type {
ControlLoRAModelConfig,
ControlNetModelConfig,
FLUXReduxModelConfig,
ImageDTO,
IPAdapterModelConfig,
T2IAdapterModelConfig,
@@ -76,6 +77,7 @@ import {
imageDTOToImageWithDims,
initialControlLoRA,
initialControlNet,
initialFLUXRedux,
initialIPAdapter,
initialT2IAdapter,
} from './util';
@@ -619,11 +621,16 @@ export const canvasSlice = createSlice({
if (!entity) {
return;
}
if (entity.ipAdapter.type !== 'ip_adapter') {
return;
}
entity.ipAdapter.method = method;
},
referenceImageIPAdapterModelChanged: (
state,
action: PayloadAction<EntityIdentifierPayload<{ modelConfig: IPAdapterModelConfig | null }, 'reference_image'>>
action: PayloadAction<
EntityIdentifierPayload<{ modelConfig: IPAdapterModelConfig | FLUXReduxModelConfig | null }, 'reference_image'>
>
) => {
const { entityIdentifier, modelConfig } = action.payload;
const entity = selectEntity(state, entityIdentifier);
@@ -631,12 +638,39 @@ export const canvasSlice = createSlice({
return;
}
entity.ipAdapter.model = modelConfig ? zModelIdentifierField.parse(modelConfig) : null;
// Ensure that the IP Adapter model is compatible with the CLIP Vision model
if (entity.ipAdapter.model?.base === 'flux') {
entity.ipAdapter.clipVisionModel = 'ViT-L';
} else if (entity.ipAdapter.clipVisionModel === 'ViT-L') {
// Fall back to ViT-H (ViT-G would also work)
entity.ipAdapter.clipVisionModel = 'ViT-H';
if (!entity.ipAdapter.model) {
return;
}
if (entity.ipAdapter.type === 'ip_adapter' && entity.ipAdapter.model.type === 'flux_redux') {
// Switching from ip_adapter to flux_redux
entity.ipAdapter = {
...initialFLUXRedux,
image: entity.ipAdapter.image,
model: entity.ipAdapter.model,
};
return;
}
if (entity.ipAdapter.type === 'flux_redux' && entity.ipAdapter.model.type === 'ip_adapter') {
// Switching from flux_redux to ip_adapter
entity.ipAdapter = {
...initialIPAdapter,
image: entity.ipAdapter.image,
model: entity.ipAdapter.model,
};
return;
}
if (entity.ipAdapter.type === 'ip_adapter') {
// Ensure that the IP Adapter model is compatible with the CLIP Vision model
if (entity.ipAdapter.model?.base === 'flux') {
entity.ipAdapter.clipVisionModel = 'ViT-L';
} else if (entity.ipAdapter.clipVisionModel === 'ViT-L') {
// Fall back to ViT-H (ViT-G would also work)
entity.ipAdapter.clipVisionModel = 'ViT-H';
}
}
},
referenceImageIPAdapterCLIPVisionModelChanged: (
@@ -648,6 +682,9 @@ export const canvasSlice = createSlice({
if (!entity) {
return;
}
if (entity.ipAdapter.type !== 'ip_adapter') {
return;
}
entity.ipAdapter.clipVisionModel = clipVisionModel;
},
referenceImageIPAdapterWeightChanged: (
@@ -659,6 +696,9 @@ export const canvasSlice = createSlice({
if (!entity) {
return;
}
if (entity.ipAdapter.type !== 'ip_adapter') {
return;
}
entity.ipAdapter.weight = weight;
},
referenceImageIPAdapterBeginEndStepPctChanged: (
@@ -670,6 +710,9 @@ export const canvasSlice = createSlice({
if (!entity) {
return;
}
if (entity.ipAdapter.type !== 'ip_adapter') {
return;
}
entity.ipAdapter.beginEndStepPct = beginEndStepPct;
},
//#region Regional Guidance
@@ -843,6 +886,10 @@ export const canvasSlice = createSlice({
if (!referenceImage) {
return;
}
if (referenceImage.ipAdapter.type !== 'ip_adapter') {
return;
}
referenceImage.ipAdapter.weight = weight;
},
rgIPAdapterBeginEndStepPctChanged: (
@@ -856,6 +903,10 @@ export const canvasSlice = createSlice({
if (!referenceImage) {
return;
}
if (referenceImage.ipAdapter.type !== 'ip_adapter') {
return;
}
referenceImage.ipAdapter.beginEndStepPct = beginEndStepPct;
},
rgIPAdapterMethodChanged: (
@@ -869,6 +920,10 @@ export const canvasSlice = createSlice({
if (!referenceImage) {
return;
}
if (referenceImage.ipAdapter.type !== 'ip_adapter') {
return;
}
referenceImage.ipAdapter.method = method;
},
rgIPAdapterModelChanged: (
@@ -877,7 +932,7 @@ export const canvasSlice = createSlice({
EntityIdentifierPayload<
{
referenceImageId: string;
modelConfig: IPAdapterModelConfig | null;
modelConfig: IPAdapterModelConfig | FLUXReduxModelConfig | null;
},
'regional_guidance'
>
@@ -889,12 +944,39 @@ export const canvasSlice = createSlice({
return;
}
referenceImage.ipAdapter.model = modelConfig ? zModelIdentifierField.parse(modelConfig) : null;
// Ensure that the IP Adapter model is compatible with the CLIP Vision model
if (referenceImage.ipAdapter.model?.base === 'flux') {
referenceImage.ipAdapter.clipVisionModel = 'ViT-L';
} else if (referenceImage.ipAdapter.clipVisionModel === 'ViT-L') {
// Fall back to ViT-H (ViT-G would also work)
referenceImage.ipAdapter.clipVisionModel = 'ViT-H';
if (!referenceImage.ipAdapter.model) {
return;
}
if (referenceImage.ipAdapter.type === 'ip_adapter' && referenceImage.ipAdapter.model.type === 'flux_redux') {
// Switching from ip_adapter to flux_redux
referenceImage.ipAdapter = {
...initialFLUXRedux,
image: referenceImage.ipAdapter.image,
model: referenceImage.ipAdapter.model,
};
return;
}
if (referenceImage.ipAdapter.type === 'flux_redux' && referenceImage.ipAdapter.model.type === 'ip_adapter') {
// Switching from flux_redux to ip_adapter
referenceImage.ipAdapter = {
...initialIPAdapter,
image: referenceImage.ipAdapter.image,
model: referenceImage.ipAdapter.model,
};
return;
}
if (referenceImage.ipAdapter.type === 'ip_adapter') {
// Ensure that the IP Adapter model is compatible with the CLIP Vision model
if (referenceImage.ipAdapter.model?.base === 'flux') {
referenceImage.ipAdapter.clipVisionModel = 'ViT-L';
} else if (referenceImage.ipAdapter.clipVisionModel === 'ViT-L') {
// Fall back to ViT-H (ViT-G would also work)
referenceImage.ipAdapter.clipVisionModel = 'ViT-H';
}
}
},
rgIPAdapterCLIPVisionModelChanged: (
@@ -908,6 +990,10 @@ export const canvasSlice = createSlice({
if (!referenceImage) {
return;
}
if (referenceImage.ipAdapter.type !== 'ip_adapter') {
return;
}
referenceImage.ipAdapter.clipVisionModel = clipVisionModel;
},
//#region Inpaint mask
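The reducers above all hinge on the same discriminated-union swap: when the newly selected model's type no longer matches the current config's type, the whole `ipAdapter` object is replaced with the other variant's defaults, and only the reference image is carried across. A minimal standalone sketch of that swap, with simplified stand-in types (the real reducers operate on Immer drafts inside createSlice and use zod-parsed model identifiers):

type ModelRef = { key: string; base: string; type: 'ip_adapter' | 'flux_redux' };
type IPAdapterConfig = {
  type: 'ip_adapter';
  image: string | null;
  model: ModelRef | null;
  weight: number;
  clipVisionModel: 'ViT-H' | 'ViT-L';
};
type FLUXReduxConfig = { type: 'flux_redux'; image: string | null; model: ModelRef | null };

const initialIPAdapter: IPAdapterConfig = {
  type: 'ip_adapter',
  image: null,
  model: null,
  weight: 1,
  clipVisionModel: 'ViT-H',
};
const initialFLUXRedux: FLUXReduxConfig = { type: 'flux_redux', image: null, model: null };

// Replace the config variant when it no longer matches the selected model's type,
// carrying the image across; otherwise just update the model reference.
const onModelChanged = (
  config: IPAdapterConfig | FLUXReduxConfig,
  model: ModelRef | null
): IPAdapterConfig | FLUXReduxConfig => {
  if (!model) {
    return { ...config, model: null };
  }
  if (config.type !== model.type) {
    const defaults = model.type === 'flux_redux' ? initialFLUXRedux : initialIPAdapter;
    return { ...defaults, image: config.image, model };
  }
  return { ...config, model };
};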


@@ -233,6 +233,13 @@ const zIPAdapterConfig = z.object({
});
export type IPAdapterConfig = z.infer<typeof zIPAdapterConfig>;
const zFLUXReduxConfig = z.object({
type: z.literal('flux_redux'),
image: zImageWithDims.nullable(),
model: zServerValidatedModelIdentifierField.nullable(),
});
export type FLUXReduxConfig = z.infer<typeof zFLUXReduxConfig>;
const zCanvasEntityBase = z.object({
id: zId,
name: zName,
@@ -242,10 +249,16 @@ const zCanvasEntityBase = z.object({
const zCanvasReferenceImageState = zCanvasEntityBase.extend({
type: z.literal('reference_image'),
ipAdapter: zIPAdapterConfig,
ipAdapter: z.discriminatedUnion('type', [zIPAdapterConfig, zFLUXReduxConfig]),
});
export type CanvasReferenceImageState = z.infer<typeof zCanvasReferenceImageState>;
export const isIPAdapterConfig = (config: IPAdapterConfig | FLUXReduxConfig): config is IPAdapterConfig =>
config.type === 'ip_adapter';
export const isFLUXReduxConfig = (config: IPAdapterConfig | FLUXReduxConfig): config is FLUXReduxConfig =>
config.type === 'flux_redux';
const zFillStyle = z.enum(['solid', 'grid', 'crosshatch', 'diagonal', 'horizontal', 'vertical']);
export type FillStyle = z.infer<typeof zFillStyle>;
export const isFillStyle = (v: unknown): v is FillStyle => zFillStyle.safeParse(v).success;
@@ -253,7 +266,7 @@ const zFill = z.object({ style: zFillStyle, color: zRgbColor });
const zRegionalGuidanceReferenceImageState = z.object({
id: zId,
ipAdapter: zIPAdapterConfig,
ipAdapter: z.discriminatedUnion('type', [zIPAdapterConfig, zFLUXReduxConfig]),
});
export type RegionalGuidanceReferenceImageState = z.infer<typeof zRegionalGuidanceReferenceImageState>;
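z.discriminatedUnion dispatches parsing on the literal `type` field and yields the narrowed TypeScript union automatically, which is what makes the `isIPAdapterConfig`/`isFLUXReduxConfig` guards one-liners. A small self-contained illustration (simplified schemas, not the real ones):

import { z } from 'zod';

const zIPAdapterConfig = z.object({ type: z.literal('ip_adapter'), weight: z.number() });
const zFLUXReduxConfig = z.object({ type: z.literal('flux_redux') });
const zConfig = z.discriminatedUnion('type', [zIPAdapterConfig, zFLUXReduxConfig]);
type Config = z.infer<typeof zConfig>;

// Checking the discriminant narrows the type: in the true branch, TS knows weight exists.
const getWeight = (config: Config): number | null =>
  config.type === 'ip_adapter' ? config.weight : null;

// Parsing rejects payloads whose shape does not match their declared type.
zConfig.parse({ type: 'ip_adapter', weight: 0.5 }); // ok
zConfig.parse({ type: 'flux_redux', weight: 0.5 }); // ok (unknown key is stripped)
zConfig.parse({ type: 'ip_adapter' }); // throws: weight is required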


@@ -9,6 +9,7 @@ import type {
CanvasRegionalGuidanceState,
ControlLoRAConfig,
ControlNetConfig,
FLUXReduxConfig,
ImageWithDims,
IPAdapterConfig,
RgbColor,
@@ -70,6 +71,11 @@ export const initialIPAdapter: IPAdapterConfig = {
clipVisionModel: 'ViT-H',
weight: 1,
};
export const initialFLUXRedux: FLUXReduxConfig = {
type: 'flux_redux',
image: null,
model: null,
};
export const initialT2IAdapter: T2IAdapterConfig = {
type: 't2i_adapter',
model: null,


@@ -44,33 +44,33 @@ export const getRegionalGuidanceWarnings = (
if (model.base === 'sd-3' || model.base === 'sd-2') {
// Unsupported model architecture
warnings.push(WARNINGS.UNSUPPORTED_MODEL);
} else if (model.base === 'flux') {
return warnings;
}
if (model.base === 'flux') {
// Some features are not supported for flux models
if (entity.negativePrompt !== null) {
warnings.push(WARNINGS.RG_NEGATIVE_PROMPT_NOT_SUPPORTED);
}
if (entity.referenceImages.length > 0) {
warnings.push(WARNINGS.RG_REFERENCE_IMAGES_NOT_SUPPORTED);
}
if (entity.autoNegative) {
warnings.push(WARNINGS.RG_AUTO_NEGATIVE_NOT_SUPPORTED);
}
} else {
entity.referenceImages.forEach(({ ipAdapter }) => {
if (!ipAdapter.model) {
// No model selected
warnings.push(WARNINGS.IP_ADAPTER_NO_MODEL_SELECTED);
} else if (ipAdapter.model.base !== model.base) {
// Supported model architecture but doesn't match
warnings.push(WARNINGS.IP_ADAPTER_INCOMPATIBLE_BASE_MODEL);
}
if (!ipAdapter.image) {
// No image selected
warnings.push(WARNINGS.IP_ADAPTER_NO_IMAGE_SELECTED);
}
});
}
entity.referenceImages.forEach(({ ipAdapter }) => {
if (!ipAdapter.model) {
// No model selected
warnings.push(WARNINGS.IP_ADAPTER_NO_MODEL_SELECTED);
} else if (ipAdapter.model.base !== model.base) {
// Supported model architecture but doesn't match
warnings.push(WARNINGS.IP_ADAPTER_INCOMPATIBLE_BASE_MODEL);
}
if (!ipAdapter.image) {
// No image selected
warnings.push(WARNINGS.IP_ADAPTER_NO_IMAGE_SELECTED);
}
});
}
return warnings;
@@ -82,22 +82,27 @@ export const getGlobalReferenceImageWarnings = (
): WarningTKey[] => {
const warnings: WarningTKey[] = [];
if (!entity.ipAdapter.model) {
// No model selected
warnings.push(WARNINGS.IP_ADAPTER_NO_MODEL_SELECTED);
} else if (model) {
if (model) {
if (model.base === 'sd-3' || model.base === 'sd-2') {
// Unsupported model architecture
warnings.push(WARNINGS.UNSUPPORTED_MODEL);
} else if (entity.ipAdapter.model.base !== model.base) {
return warnings;
}
const { ipAdapter } = entity;
if (!ipAdapter.model) {
// No model selected
warnings.push(WARNINGS.IP_ADAPTER_NO_MODEL_SELECTED);
} else if (ipAdapter.model.base !== model.base) {
// Supported model architecture but doesn't match
warnings.push(WARNINGS.IP_ADAPTER_INCOMPATIBLE_BASE_MODEL);
}
}
if (!entity.ipAdapter.image) {
// No image selected
warnings.push(WARNINGS.IP_ADAPTER_NO_IMAGE_SELECTED);
if (!entity.ipAdapter.image) {
// No image selected
warnings.push(WARNINGS.IP_ADAPTER_NO_IMAGE_SELECTED);
}
}
return warnings;
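The validators now short-circuit on unsupported bases with an early return, and the per-reference-image checks run for every remaining base, including flux, since FLUX Redux makes regional reference images valid there. A condensed skeleton of the new control flow (warning keys and shapes are stand-ins):

type WarningKey = 'unsupportedModel' | 'noModelSelected' | 'incompatibleBaseModel' | 'noImageSelected';

const getReferenceImageWarnings = (
  mainBase: string,
  refs: { modelBase: string | null; hasImage: boolean }[]
): WarningKey[] => {
  const warnings: WarningKey[] = [];
  if (mainBase === 'sd-3' || mainBase === 'sd-2') {
    warnings.push('unsupportedModel');
    return warnings; // early return: no further checks apply
  }
  for (const ref of refs) {
    if (!ref.modelBase) {
      warnings.push('noModelSelected');
    } else if (ref.modelBase !== mainBase) {
      warnings.push('incompatibleBaseModel');
    }
    if (!ref.hasImage) {
      warnings.push('noImageSelected');
    }
  }
  return warnings;
};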


@@ -14,10 +14,12 @@ import {
useControlLoRAModel,
useControlNetModels,
useEmbeddingModels,
useFluxReduxModels,
useIPAdapterModels,
useLoRAModels,
useMainModels,
useRefinerModels,
useSigLipModels,
useSpandrelImageToImageModels,
useT2IAdapterModels,
useT5EncoderModels,
@@ -112,6 +114,18 @@ const ModelList = () => {
[spandrelImageToImageModels, searchTerm, filteredModelType]
);
const [sigLipModels, { isLoading: isLoadingSigLipModels }] = useSigLipModels();
const filteredSigLipModels = useMemo(
() => modelsFilter(sigLipModels, searchTerm, filteredModelType),
[sigLipModels, searchTerm, filteredModelType]
);
const [fluxReduxModels, { isLoading: isLoadingFluxReduxModels }] = useFluxReduxModels();
const filteredFluxReduxModels = useMemo(
() => modelsFilter(fluxReduxModels, searchTerm, filteredModelType),
[fluxReduxModels, searchTerm, filteredModelType]
);
const totalFilteredModels = useMemo(() => {
return (
filteredMainModels.length +
@@ -124,6 +138,8 @@ const ModelList = () => {
filteredCLIPVisionModels.length +
filteredVAEModels.length +
filteredSpandrelImageToImageModels.length +
filteredSigLipModels.length +
filteredFluxReduxModels.length +
t5EncoderModels.length +
clipEmbedModels.length +
controlLoRAModels.length
@@ -139,6 +155,8 @@ const ModelList = () => {
filteredT2IAdapterModels.length,
filteredVAEModels.length,
filteredSpandrelImageToImageModels.length,
filteredSigLipModels.length,
filteredFluxReduxModels.length,
t5EncoderModels.length,
clipEmbedModels.length,
controlLoRAModels.length,
@@ -229,6 +247,16 @@ const ModelList = () => {
key="spandrel-image-to-image"
/>
)}
{/* SigLIP List */}
{isLoadingSigLipModels && <FetchingModelsLoader loadingMessage="Loading SigLIP Models..." />}
{!isLoadingSigLipModels && filteredSigLipModels.length > 0 && (
<ModelListWrapper title={t('modelManager.sigLip')} modelList={filteredSigLipModels} key="sig-lip" />
)}
{/* Flux Redux List */}
{isLoadingFluxReduxModels && <FetchingModelsLoader loadingMessage="Loading Flux Redux Models..." />}
{!isLoadingFluxReduxModels && filteredFluxReduxModels.length > 0 && (
<ModelListWrapper title={t('modelManager.fluxRedux')} modelList={filteredFluxReduxModels} key="flux-redux" />
)}
{totalFilteredModels === 0 && (
<Flex w="full" h="full" alignItems="center" justifyContent="center">
<Text>{t('modelManager.noMatchingModels')}</Text>
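Every model category in this list follows the same four-step recipe: fetch with a typed hook, memoize a filtered view against the search term and type filter, fold the filtered count into the total, and render the section only when non-empty. The recipe distilled for one category (the hook and filter below are illustrative stubs, not the real `useFluxReduxModels`/`modelsFilter`):

import { useMemo } from 'react';

type ModelConfig = { name: string; type: string };

// Stub standing in for a typed fetch hook such as useFluxReduxModels.
const useSomeModels = (): [ModelConfig[], { isLoading: boolean }] => [[], { isLoading: false }];

const useFilteredModels = (searchTerm: string, filteredModelType: string | null) => {
  const [models, { isLoading }] = useSomeModels();
  // Memoize so the filtered list only recomputes when its inputs change.
  const filtered = useMemo(
    () =>
      models.filter(
        (m) =>
          m.name.toLowerCase().includes(searchTerm.toLowerCase()) &&
          (!filteredModelType || m.type === filteredModelType)
      ),
    [models, searchTerm, filteredModelType]
  );
  return { filtered, isLoading };
};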


@@ -25,6 +25,8 @@ export const ModelTypeFilter = memo(() => {
clip_vision: 'CLIP Vision',
spandrel_image_to_image: t('modelManager.spandrelImageToImage'),
control_lora: t('modelManager.controlLora'),
siglip: t('modelManager.siglip'),
flux_redux: t('modelManager.fluxRedux'),
}),
[t]
);


@@ -4,8 +4,6 @@ import { useFocusRegion } from 'common/hooks/focus';
import { AddNodeCmdk } from 'features/nodes/components/flow/AddNodeCmdk/AddNodeCmdk';
import TopPanel from 'features/nodes/components/flow/panels/TopPanel/TopPanel';
import WorkflowEditorSettings from 'features/nodes/components/flow/panels/TopRightPanel/WorkflowEditorSettings';
import { LoadWorkflowFromGraphModal } from 'features/workflowLibrary/components/LoadWorkflowFromGraphModal/LoadWorkflowFromGraphModal';
import { SaveWorkflowAsDialog } from 'features/workflowLibrary/components/SaveWorkflowAsDialog/SaveWorkflowAsDialog';
import { memo, useRef } from 'react';
import { useTranslation } from 'react-i18next';
import { PiFlowArrowBold } from 'react-icons/pi';
@@ -40,8 +38,6 @@ const NodeEditor = () => {
<TopPanel />
<BottomLeftPanel />
<MinimapPanel />
<SaveWorkflowAsDialog />
<LoadWorkflowFromGraphModal />
</>
)}
<WorkflowEditorSettings />


@@ -20,7 +20,6 @@ import { useConnection } from 'features/nodes/hooks/useConnection';
import { useIsValidConnection } from 'features/nodes/hooks/useIsValidConnection';
import { useNodeCopyPaste } from 'features/nodes/hooks/useNodeCopyPaste';
import { useSyncExecutionState } from 'features/nodes/hooks/useNodeExecutionState';
import { useWorkflowWatcher } from 'features/nodes/hooks/useWorkflowWatcher';
import {
$addNodeCmdk,
$cursorPos,
@@ -95,7 +94,6 @@ export const Flow = memo(() => {
const isWorkflowsFocused = useIsRegionFocused('workflows');
useFocusRegion('workflows', flowWrapper);
useWorkflowWatcher();
useSyncExecutionState();
const [borderRadius] = useToken('radii', ['base']);
const flowStyles = useMemo<CSSProperties>(() => ({ borderRadius }), [borderRadius]);


@@ -44,6 +44,8 @@ import {
isFloatGeneratorFieldInputTemplate,
isFluxMainModelFieldInputInstance,
isFluxMainModelFieldInputTemplate,
isFluxReduxModelFieldInputInstance,
isFluxReduxModelFieldInputTemplate,
isFluxVAEModelFieldInputInstance,
isFluxVAEModelFieldInputTemplate,
isImageFieldCollectionInputInstance,
@@ -74,6 +76,8 @@ import {
isSDXLMainModelFieldInputTemplate,
isSDXLRefinerModelFieldInputInstance,
isSDXLRefinerModelFieldInputTemplate,
isSigLipModelFieldInputInstance,
isSigLipModelFieldInputTemplate,
isSpandrelImageToImageModelFieldInputInstance,
isSpandrelImageToImageModelFieldInputTemplate,
isStringFieldCollectionInputInstance,
@@ -102,6 +106,7 @@ import ControlLoRAModelFieldInputComponent from './inputs/ControlLoraModelFieldI
import ControlNetModelFieldInputComponent from './inputs/ControlNetModelFieldInputComponent';
import EnumFieldInputComponent from './inputs/EnumFieldInputComponent';
import FluxMainModelFieldInputComponent from './inputs/FluxMainModelFieldInputComponent';
import FluxReduxModelFieldInputComponent from './inputs/FluxReduxModelFieldInputComponent';
import FluxVAEModelFieldInputComponent from './inputs/FluxVAEModelFieldInputComponent';
import ImageFieldInputComponent from './inputs/ImageFieldInputComponent';
import IPAdapterModelFieldInputComponent from './inputs/IPAdapterModelFieldInputComponent';
@@ -111,6 +116,7 @@ import RefinerModelFieldInputComponent from './inputs/RefinerModelFieldInputComp
import SchedulerFieldInputComponent from './inputs/SchedulerFieldInputComponent';
import SD3MainModelFieldInputComponent from './inputs/SD3MainModelFieldInputComponent';
import SDXLMainModelFieldInputComponent from './inputs/SDXLMainModelFieldInputComponent';
import SigLipModelFieldInputComponent from './inputs/SigLipModelFieldInputComponent';
import SpandrelImageToImageModelFieldInputComponent from './inputs/SpandrelImageToImageModelFieldInputComponent';
import T2IAdapterModelFieldInputComponent from './inputs/T2IAdapterModelFieldInputComponent';
import T5EncoderModelFieldInputComponent from './inputs/T5EncoderModelFieldInputComponent';
@@ -339,6 +345,20 @@ export const InputFieldRenderer = memo(({ nodeId, fieldName, settings }: Props)
return <SpandrelImageToImageModelFieldInputComponent nodeId={nodeId} field={field} fieldTemplate={template} />;
}
if (isSigLipModelFieldInputTemplate(template)) {
if (!isSigLipModelFieldInputInstance(field)) {
return null;
}
return <SigLipModelFieldInputComponent nodeId={nodeId} field={field} fieldTemplate={template} />;
}
if (isFluxReduxModelFieldInputTemplate(template)) {
if (!isFluxReduxModelFieldInputInstance(field)) {
return null;
}
return <FluxReduxModelFieldInputComponent nodeId={nodeId} field={field} fieldTemplate={template} />;
}
if (isColorFieldInputTemplate(template)) {
if (!isColorFieldInputInstance(field)) {
return null;


@@ -0,0 +1,53 @@
import { Combobox, FormControl, Tooltip } from '@invoke-ai/ui-library';
import { useAppDispatch } from 'app/store/storeHooks';
import { useGroupedModelCombobox } from 'common/hooks/useGroupedModelCombobox';
import { fieldFluxReduxModelValueChanged } from 'features/nodes/store/nodesSlice';
import { NO_DRAG_CLASS, NO_WHEEL_CLASS } from 'features/nodes/types/constants';
import type { FluxReduxModelFieldInputInstance, FluxReduxModelFieldInputTemplate } from 'features/nodes/types/field';
import { memo, useCallback } from 'react';
import { useFluxReduxModels } from 'services/api/hooks/modelsByType';
import type { FLUXReduxModelConfig } from 'services/api/types';
import type { FieldComponentProps } from './types';
const FluxReduxModelFieldInputComponent = (
props: FieldComponentProps<FluxReduxModelFieldInputInstance, FluxReduxModelFieldInputTemplate>
) => {
const { nodeId, field } = props;
const dispatch = useAppDispatch();
const [modelConfigs, { isLoading }] = useFluxReduxModels();
const _onChange = useCallback(
(value: FLUXReduxModelConfig | null) => {
if (!value) {
return;
}
dispatch(
fieldFluxReduxModelValueChanged({
nodeId,
fieldName: field.name,
value,
})
);
},
[dispatch, field.name, nodeId]
);
const { options, value, onChange } = useGroupedModelCombobox({
modelConfigs,
onChange: _onChange,
selectedModel: field.value,
isLoading,
});
return (
<Tooltip label={value?.description}>
<FormControl className={`${NO_WHEEL_CLASS} ${NO_DRAG_CLASS}`} isInvalid={!value}>
<Combobox value={value} placeholder="Pick one" options={options} onChange={onChange} />
</FormControl>
</Tooltip>
);
};
export default memo(FluxReduxModelFieldInputComponent);


@@ -0,0 +1,53 @@
import { Combobox, FormControl, Tooltip } from '@invoke-ai/ui-library';
import { useAppDispatch } from 'app/store/storeHooks';
import { useGroupedModelCombobox } from 'common/hooks/useGroupedModelCombobox';
import { fieldSigLipModelValueChanged } from 'features/nodes/store/nodesSlice';
import { NO_DRAG_CLASS, NO_WHEEL_CLASS } from 'features/nodes/types/constants';
import type { SigLipModelFieldInputInstance, SigLipModelFieldInputTemplate } from 'features/nodes/types/field';
import { memo, useCallback } from 'react';
import { useSigLipModels } from 'services/api/hooks/modelsByType';
import type { SigLipModelConfig } from 'services/api/types';
import type { FieldComponentProps } from './types';
const SigLipModelFieldInputComponent = (
props: FieldComponentProps<SigLipModelFieldInputInstance, SigLipModelFieldInputTemplate>
) => {
const { nodeId, field } = props;
const dispatch = useAppDispatch();
const [modelConfigs, { isLoading }] = useSigLipModels();
const _onChange = useCallback(
(value: SigLipModelConfig | null) => {
if (!value) {
return;
}
dispatch(
fieldSigLipModelValueChanged({
nodeId,
fieldName: field.name,
value,
})
);
},
[dispatch, field.name, nodeId]
);
const { options, value, onChange } = useGroupedModelCombobox({
modelConfigs,
onChange: _onChange,
selectedModel: field.value,
isLoading,
});
return (
<Tooltip label={value?.description}>
<FormControl className={`${NO_WHEEL_CLASS} ${NO_DRAG_CLASS}`} isInvalid={!value}>
<Combobox value={value} placeholder="Pick one" options={options} onChange={onChange} />
</FormControl>
</Tooltip>
);
};
export default memo(SigLipModelFieldInputComponent);


@@ -1,31 +1,15 @@
import { IconButton } from '@invoke-ai/ui-library';
import { useAppSelector } from 'app/store/storeHooks';
import { $builtWorkflow } from 'features/nodes/hooks/useWorkflowWatcher';
import { selectWorkflowIsTouched } from 'features/nodes/store/workflowSlice';
import { useSaveWorkflowAsDialog } from 'features/workflowLibrary/components/SaveWorkflowAsDialog/useSaveWorkflowAsDialog';
import { isWorkflowWithID, useSaveLibraryWorkflow } from 'features/workflowLibrary/hooks/useSaveWorkflow';
import { memo, useCallback } from 'react';
import { useSaveOrSaveAsWorkflow } from 'features/workflowLibrary/hooks/useSaveOrSaveAsWorkflow';
import { memo } from 'react';
import { useTranslation } from 'react-i18next';
import { PiFloppyDiskBold } from 'react-icons/pi';
const SaveWorkflowButton = () => {
const { t } = useTranslation();
const isTouched = useAppSelector(selectWorkflowIsTouched);
const { onOpen } = useSaveWorkflowAsDialog();
const { saveWorkflow } = useSaveLibraryWorkflow();
const handleClickSave = useCallback(() => {
const builtWorkflow = $builtWorkflow.get();
if (!builtWorkflow) {
return;
}
if (isWorkflowWithID(builtWorkflow)) {
saveWorkflow();
} else {
onOpen();
}
}, [onOpen, saveWorkflow]);
const saveOrSaveAsWorkflow = useSaveOrSaveAsWorkflow();
return (
<IconButton
@@ -33,7 +17,7 @@ const SaveWorkflowButton = () => {
aria-label={t('workflows.saveWorkflow')}
icon={<PiFloppyDiskBold />}
isDisabled={!isTouched}
onClick={handleClickSave}
onClick={saveOrSaveAsWorkflow}
pointerEvents="auto"
/>
);


@@ -1,41 +1,19 @@
import { IconButton } from '@invoke-ai/ui-library';
import { $builtWorkflow } from 'features/nodes/hooks/useWorkflowWatcher';
import { useSaveWorkflowAsDialog } from 'features/workflowLibrary/components/SaveWorkflowAsDialog/useSaveWorkflowAsDialog';
import { isWorkflowWithID, useSaveLibraryWorkflow } from 'features/workflowLibrary/hooks/useSaveWorkflow';
import type { MouseEventHandler } from 'react';
import { memo, useCallback } from 'react';
import { useSaveOrSaveAsWorkflow } from 'features/workflowLibrary/hooks/useSaveOrSaveAsWorkflow';
import { memo } from 'react';
import { useTranslation } from 'react-i18next';
import { PiFloppyDiskBold } from 'react-icons/pi';
const SaveWorkflowButton = () => {
const { t } = useTranslation();
const { onOpen } = useSaveWorkflowAsDialog();
const { saveWorkflow } = useSaveLibraryWorkflow();
const handleClickSave = useCallback<MouseEventHandler<HTMLButtonElement>>(
(e) => {
e.stopPropagation();
const builtWorkflow = $builtWorkflow.get();
if (!builtWorkflow) {
return;
}
if (isWorkflowWithID(builtWorkflow)) {
saveWorkflow();
} else {
onOpen();
}
},
[onOpen, saveWorkflow]
);
const saveOrSaveAsWorkflow = useSaveOrSaveAsWorkflow();
return (
<IconButton
tooltip={t('workflows.saveWorkflow')}
aria-label={t('workflows.saveWorkflow')}
icon={<PiFloppyDiskBold />}
onClick={handleClickSave}
onClick={saveOrSaveAsWorkflow}
pointerEvents="auto"
variant="ghost"
size="sm"
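Both buttons previously inlined identical decide-then-dispatch logic; the diff collapses it into `useSaveOrSaveAsWorkflow`. That hook's body is not shown in this diff, but reconstructing it from the two deleted call sites suggests something close to the following (an assumption, not the actual file):

import { useCallback } from 'react';
import { $builtWorkflow } from 'features/nodes/hooks/useWorkflowWatcher';
import { useSaveWorkflowAsDialog } from 'features/workflowLibrary/components/SaveWorkflowAsDialog/useSaveWorkflowAsDialog';
import { isWorkflowWithID, useSaveLibraryWorkflow } from 'features/workflowLibrary/hooks/useSaveWorkflow';

// Assumed implementation: save in place when the built workflow already has an ID,
// otherwise open the "save as" dialog so the user can create one.
export const useSaveOrSaveAsWorkflow = () => {
  const { onOpen } = useSaveWorkflowAsDialog();
  const { saveWorkflow } = useSaveLibraryWorkflow();
  return useCallback(() => {
    const builtWorkflow = $builtWorkflow.get();
    if (!builtWorkflow) {
      return;
    }
    if (isWorkflowWithID(builtWorkflow)) {
      saveWorkflow();
    } else {
      onOpen();
    }
  }, [onOpen, saveWorkflow]);
};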


@@ -1,83 +0,0 @@
import { Button, Collapse, Flex, Icon, Spinner, Text } from '@invoke-ai/ui-library';
import { EMPTY_ARRAY } from 'app/store/constants';
import { useAppSelector } from 'app/store/storeHooks';
import { IAINoContentFallback } from 'common/components/IAIImageFallback';
import { fixTooltipCloseOnScrollStyles } from 'common/util/fixTooltipCloseOnScrollStyles';
import { useCategorySections } from 'features/nodes/hooks/useCategorySections';
import {
selectWorkflowOrderBy,
selectWorkflowOrderDirection,
selectWorkflowSearchTerm,
} from 'features/nodes/store/workflowSlice';
import type { WorkflowCategory } from 'features/nodes/types/workflow';
import { useMemo } from 'react';
import { useTranslation } from 'react-i18next';
import { PiCaretDownBold } from 'react-icons/pi';
import { useListWorkflowsQuery } from 'services/api/endpoints/workflows';
import { WorkflowListItem } from './WorkflowListItem';
export const WorkflowList = ({ category }: { category: WorkflowCategory }) => {
const searchTerm = useAppSelector(selectWorkflowSearchTerm);
const orderBy = useAppSelector(selectWorkflowOrderBy);
const direction = useAppSelector(selectWorkflowOrderDirection);
const { t } = useTranslation();
const queryArg = useMemo<Parameters<typeof useListWorkflowsQuery>[0]>(() => {
if (category !== 'default') {
return {
order_by: orderBy,
direction,
category: category,
};
}
return {
order_by: 'name' as const,
direction: 'ASC' as const,
category: category,
};
}, [category, direction, orderBy]);
const { data, isLoading } = useListWorkflowsQuery(queryArg, {
selectFromResult: ({ data, isLoading }) => {
const filteredData =
data?.items.filter((workflow) => workflow.name.toLowerCase().includes(searchTerm.toLowerCase())) || EMPTY_ARRAY;
return {
data: filteredData,
isLoading,
};
},
});
const { isOpen, onToggle } = useCategorySections(category);
return (
<Flex flexDir="column">
<Button variant="unstyled" onClick={onToggle}>
<Flex gap={2} alignItems="center">
<Icon boxSize={4} as={PiCaretDownBold} transform={isOpen ? undefined : 'rotate(-90deg)'} fill="base.500" />
<Text fontSize="sm" fontWeight="semibold" userSelect="none" color="base.500">
{t(`workflows.${category}Workflows`)}
</Text>
</Flex>
</Button>
<Collapse in={isOpen} style={fixTooltipCloseOnScrollStyles}>
{isLoading ? (
<Flex alignItems="center" justifyContent="center" p={20}>
<Spinner />
</Flex>
) : data.length ? (
data.map((workflow) => <WorkflowListItem workflow={workflow} key={workflow.workflow_id} />)
) : (
<IAINoContentFallback
fontSize="sm"
py={4}
label={searchTerm ? t('nodes.noMatchingWorkflows') : t('nodes.noWorkflows')}
icon={null}
/>
)}
</Collapse>
</Flex>
);
};


@@ -1,186 +0,0 @@
import { Badge, Flex, IconButton, Spacer, Text, Tooltip } from '@invoke-ai/ui-library';
import { useStore } from '@nanostores/react';
import { $projectUrl } from 'app/store/nanostores/projectId';
import { useAppSelector } from 'app/store/storeHooks';
import dateFormat, { masks } from 'dateformat';
import { selectWorkflowId } from 'features/nodes/store/workflowSlice';
import { useDeleteWorkflow } from 'features/workflowLibrary/components/DeleteLibraryWorkflowConfirmationAlertDialog';
import { useLoadWorkflow } from 'features/workflowLibrary/components/LoadWorkflowConfirmationAlertDialog';
import { useDownloadWorkflowById } from 'features/workflowLibrary/hooks/useDownloadWorkflowById';
import type { MouseEvent } from 'react';
import { useCallback, useMemo, useState } from 'react';
import { useTranslation } from 'react-i18next';
import { PiDownloadSimpleBold, PiPencilBold, PiShareFatBold, PiTrashBold } from 'react-icons/pi';
import type { WorkflowRecordListItemDTO } from 'services/api/types';
import { useShareWorkflow } from './ShareWorkflowModal';
import { WorkflowListItemTooltip } from './WorkflowListItemTooltip';
export const WorkflowListItem = ({ workflow }: { workflow: WorkflowRecordListItemDTO }) => {
const { t } = useTranslation();
const projectUrl = useStore($projectUrl);
const [isHovered, setIsHovered] = useState(false);
const handleMouseOver = useCallback(() => {
setIsHovered(true);
}, []);
const handleMouseOut = useCallback(() => {
setIsHovered(false);
}, []);
const workflowId = useAppSelector(selectWorkflowId);
const { downloadWorkflow, isLoading: isLoadingDownloadWorkflow } = useDownloadWorkflowById();
const shareWorkflow = useShareWorkflow();
const deleteWorkflow = useDeleteWorkflow();
const loadWorkflow = useLoadWorkflow();
const isActive = useMemo(() => {
return workflowId === workflow.workflow_id;
}, [workflowId, workflow.workflow_id]);
const handleClickLoad = useCallback(() => {
setIsHovered(false);
loadWorkflow.loadWithDialog(workflow.workflow_id, 'view');
}, [loadWorkflow, workflow.workflow_id]);
const handleClickEdit = useCallback(() => {
setIsHovered(false);
loadWorkflow.loadWithDialog(workflow.workflow_id, 'view');
}, [loadWorkflow, workflow.workflow_id]);
const handleClickDelete = useCallback(
(e: MouseEvent<HTMLButtonElement>) => {
e.stopPropagation();
setIsHovered(false);
deleteWorkflow(workflow);
},
[deleteWorkflow, workflow]
);
const handleClickShare = useCallback(
(e: MouseEvent<HTMLButtonElement>) => {
e.stopPropagation();
setIsHovered(false);
shareWorkflow(workflow);
},
[shareWorkflow, workflow]
);
const handleClickDownload = useCallback(
(e: MouseEvent<HTMLButtonElement>) => {
e.stopPropagation();
setIsHovered(false);
downloadWorkflow(workflow.workflow_id);
},
[downloadWorkflow, workflow.workflow_id]
);
return (
<Flex
gap={4}
onClick={handleClickLoad}
cursor="pointer"
_hover={{ backgroundColor: 'base.750' }}
p={2}
ps={3}
borderRadius="base"
w="full"
onMouseOver={handleMouseOver}
onMouseOut={handleMouseOut}
alignItems="center"
>
<Tooltip label={<WorkflowListItemTooltip workflow={workflow} />} closeOnScroll>
<Flex flexDir="column" gap={1}>
<Flex gap={4} alignItems="center">
<Text noOfLines={2}>{workflow.name}</Text>
{isActive && (
<Badge
color="invokeBlue.400"
borderColor="invokeBlue.700"
borderWidth={1}
bg="transparent"
flexShrink={0}
>
{t('workflows.opened')}
</Badge>
)}
</Flex>
{workflow.category !== 'default' && (
<Text fontSize="xs" variant="subtext" flexShrink={0} noOfLines={1}>
{t('common.updated')}: {dateFormat(workflow.updated_at, masks.shortDate)}{' '}
{dateFormat(workflow.updated_at, masks.shortTime)}
</Text>
)}
</Flex>
</Tooltip>
<Spacer />
<Flex alignItems="center" gap={1} opacity={isHovered ? 1 : 0}>
<Tooltip
label={t('workflows.edit')}
// This prevents an issue where the tooltip isn't closed after the modal is opened
isOpen={!isHovered ? false : undefined}
closeOnScroll
>
<IconButton
size="sm"
variant="ghost"
aria-label={t('workflows.edit')}
onClick={handleClickEdit}
icon={<PiPencilBold />}
/>
</Tooltip>
<Tooltip
label={t('workflows.download')}
// This prevents an issue where the tooltip isn't closed after the modal is opened
isOpen={!isHovered ? false : undefined}
closeOnScroll
>
<IconButton
size="sm"
variant="ghost"
aria-label={t('workflows.download')}
onClick={handleClickDownload}
icon={<PiDownloadSimpleBold />}
isLoading={isLoadingDownloadWorkflow}
/>
</Tooltip>
{!!projectUrl && workflow.workflow_id && workflow.category !== 'user' && (
<Tooltip
label={t('workflows.copyShareLink')}
// This prevents an issue where the tooltip isn't closed after the modal is opened
isOpen={!isHovered ? false : undefined}
closeOnScroll
>
<IconButton
size="sm"
variant="ghost"
aria-label={t('workflows.copyShareLink')}
onClick={handleClickShare}
icon={<PiShareFatBold />}
/>
</Tooltip>
)}
{workflow.category !== 'default' && (
<Tooltip
label={t('workflows.delete')}
// This prevents an issue where the tooltip isn't closed after the modal is opened
isOpen={!isHovered ? false : undefined}
closeOnScroll
>
<IconButton
size="sm"
variant="ghost"
aria-label={t('workflows.delete')}
onClick={handleClickDelete}
colorScheme="error"
icon={<PiTrashBold />}
/>
</Tooltip>
)}
</Flex>
</Flex>
);
};


@@ -1,26 +0,0 @@
import { Flex, Text } from '@invoke-ai/ui-library';
import dateFormat, { masks } from 'dateformat';
import { useTranslation } from 'react-i18next';
import type { WorkflowRecordListItemDTO } from 'services/api/types';
export const WorkflowListItemTooltip = ({ workflow }: { workflow: WorkflowRecordListItemDTO }) => {
const { t } = useTranslation();
return (
<Flex flexDir="column" gap={1}>
<Text>{workflow.description}</Text>
{workflow.category !== 'default' && (
<Flex flexDir="column">
<Text>
{t('workflows.opened')}: {dateFormat(workflow.opened_at, masks.shortDate)}
</Text>
<Text>
{t('common.updated')}: {dateFormat(workflow.updated_at, masks.shortDate)}
</Text>
<Text>
{t('common.created')}: {dateFormat(workflow.created_at, masks.shortDate)}
</Text>
</Flex>
)}
</Flex>
);
};


@@ -1,82 +1,27 @@
import {
Box,
Button,
Flex,
Popover,
PopoverBody,
PopoverContent,
PopoverTrigger,
Portal,
Text,
} from '@invoke-ai/ui-library';
import { useStore } from '@nanostores/react';
import { $workflowCategories } from 'app/store/nanostores/workflowCategories';
import { Button, Text } from '@invoke-ai/ui-library';
import { useAppSelector } from 'app/store/storeHooks';
import ScrollableContent from 'common/components/OverlayScrollbars/ScrollableContent';
import { useWorkflowListMenu } from 'features/nodes/store/workflowListMenu';
import { useWorkflowLibraryModal } from 'features/nodes/store/workflowLibraryModal';
import { selectWorkflowName } from 'features/nodes/store/workflowSlice';
import { NewWorkflowButton } from 'features/workflowLibrary/components/NewWorkflowButton';
import UploadWorkflowButton from 'features/workflowLibrary/components/UploadWorkflowButton';
import { useRef } from 'react';
import { useTranslation } from 'react-i18next';
import { PiFolderOpenFill } from 'react-icons/pi';
import { WorkflowList } from './WorkflowList';
import { WorkflowSearch } from './WorkflowSearch';
import { WorkflowSortControl } from './WorkflowSortControl';
export const WorkflowListMenuTrigger = () => {
const workflowListMenu = useWorkflowListMenu();
const workflowLibraryModal = useWorkflowLibraryModal();
const { t } = useTranslation();
const workflowCategories = useStore($workflowCategories);
const searchInputRef = useRef<HTMLInputElement>(null);
const workflowName = useAppSelector(selectWorkflowName);
return (
<Popover
isOpen={workflowListMenu.isOpen}
onClose={workflowListMenu.close}
onOpen={workflowListMenu.open}
isLazy
lazyBehavior="unmount"
placement="bottom-end"
initialFocusRef={searchInputRef}
>
<PopoverTrigger>
<Button variant="ghost" rightIcon={<PiFolderOpenFill />} size="sm">
<Text
display="auto"
noOfLines={1}
overflow="hidden"
textOverflow="ellipsis"
whiteSpace="nowrap"
wordBreak="break-all"
>
{workflowName || t('workflows.chooseWorkflowFromLibrary')}
</Text>
</Button>
</PopoverTrigger>
<Portal>
<PopoverContent p={4} w={512} maxW="full" minH={512} maxH="full">
<PopoverBody flex="1 1 0">
<Flex w="full" h="full" flexDir="column" gap={2}>
<Flex alignItems="center" gap={2} w="full" justifyContent="space-between">
<WorkflowSearch searchInputRef={searchInputRef} />
<WorkflowSortControl />
<UploadWorkflowButton />
<NewWorkflowButton />
</Flex>
<Box position="relative" w="full" h="full">
<ScrollableContent>
{workflowCategories.map((category) => (
<WorkflowList key={category} category={category} />
))}
</ScrollableContent>
</Box>
</Flex>
</PopoverBody>
</PopoverContent>
</Portal>
</Popover>
<Button variant="ghost" rightIcon={<PiFolderOpenFill />} size="sm" onClick={workflowLibraryModal.open}>
<Text
display="auto"
noOfLines={1}
overflow="hidden"
textOverflow="ellipsis"
whiteSpace="nowrap"
wordBreak="break-all"
>
{workflowName || t('workflows.chooseWorkflowFromLibrary')}
</Text>
</Button>
);
};
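The popover-based list is gone; the trigger now only needs an open callback from `useWorkflowLibraryModal`, whose definition is not part of this diff. A plausible minimal shape for such a hook, assuming the codebase's usual nanostores pattern (purely an illustration):

import { useStore } from '@nanostores/react';
import { atom } from 'nanostores';
import { useCallback } from 'react';

// Assumed: a boolean modal state backed by a nanostores atom.
const $isWorkflowLibraryModalOpen = atom(false);

export const useWorkflowLibraryModal = () => {
  const isOpen = useStore($isWorkflowLibraryModalOpen);
  const open = useCallback(() => $isWorkflowLibraryModalOpen.set(true), []);
  const close = useCallback(() => $isWorkflowLibraryModalOpen.set(false), []);
  return { isOpen, open, close };
};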


@@ -1,120 +0,0 @@
import {
Flex,
FormControl,
FormLabel,
IconButton,
Popover,
PopoverBody,
PopoverContent,
PopoverTrigger,
Select,
} from '@invoke-ai/ui-library';
import { useStore } from '@nanostores/react';
import { $projectId } from 'app/store/nanostores/projectId';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import {
selectWorkflowOrderBy,
selectWorkflowOrderDirection,
workflowOrderByChanged,
workflowOrderDirectionChanged,
} from 'features/nodes/store/workflowSlice';
import type { ChangeEvent } from 'react';
import { useCallback, useMemo } from 'react';
import { useTranslation } from 'react-i18next';
import { PiSortAscendingBold, PiSortDescendingBold } from 'react-icons/pi';
import { z } from 'zod';
const zOrderBy = z.enum(['opened_at', 'created_at', 'updated_at', 'name']);
type OrderBy = z.infer<typeof zOrderBy>;
const isOrderBy = (v: unknown): v is OrderBy => zOrderBy.safeParse(v).success;
const zDirection = z.enum(['ASC', 'DESC']);
type Direction = z.infer<typeof zDirection>;
const isDirection = (v: unknown): v is Direction => zDirection.safeParse(v).success;
export const WorkflowSortControl = () => {
const projectId = useStore($projectId);
const { t } = useTranslation();
const orderBy = useAppSelector(selectWorkflowOrderBy);
const direction = useAppSelector(selectWorkflowOrderDirection);
const ORDER_BY_LABELS = useMemo(
() => ({
opened_at: t('workflows.opened'),
created_at: t('workflows.created'),
updated_at: t('workflows.updated'),
name: t('workflows.name'),
}),
[t]
);
const DIRECTION_LABELS = useMemo(
() => ({
ASC: t('workflows.ascending'),
DESC: t('workflows.descending'),
}),
[t]
);
const dispatch = useAppDispatch();
const onChangeOrderBy = useCallback(
(e: ChangeEvent<HTMLSelectElement>) => {
if (!isOrderBy(e.target.value)) {
return;
}
dispatch(workflowOrderByChanged(e.target.value));
},
[dispatch]
);
const onChangeDirection = useCallback(
(e: ChangeEvent<HTMLSelectElement>) => {
if (!isDirection(e.target.value)) {
return;
}
dispatch(workflowOrderDirectionChanged(e.target.value));
},
[dispatch]
);
// In OSS, we don't have the concept of "opened_at" for workflows. This is only available in the Enterprise version.
const defaultOrderBy = projectId !== undefined ? 'opened_at' : 'created_at';
return (
<Popover placement="bottom">
<PopoverTrigger>
<IconButton
tooltip={`Sorting by ${ORDER_BY_LABELS[orderBy ?? defaultOrderBy]} ${DIRECTION_LABELS[direction]}`}
aria-label="Sort Workflow Library"
icon={direction === 'ASC' ? <PiSortAscendingBold /> : <PiSortDescendingBold />}
variant="ghost"
/>
</PopoverTrigger>
<PopoverContent>
<PopoverBody>
<Flex flexDir="column" gap={4}>
<FormControl orientation="horizontal" gap={1}>
<FormLabel>{t('common.orderBy')}</FormLabel>
<Select value={orderBy ?? defaultOrderBy} onChange={onChangeOrderBy} size="sm">
{projectId !== undefined && <option value="opened_at">{ORDER_BY_LABELS['opened_at']}</option>}
<option value="created_at">{ORDER_BY_LABELS['created_at']}</option>
<option value="updated_at">{ORDER_BY_LABELS['updated_at']}</option>
<option value="name">{ORDER_BY_LABELS['name']}</option>
</Select>
</FormControl>
<FormControl orientation="horizontal" gap={1}>
<FormLabel>{t('common.direction')}</FormLabel>
<Select value={direction} onChange={onChangeDirection} size="sm">
<option value="ASC">{DIRECTION_LABELS['ASC']}</option>
<option value="DESC">{DIRECTION_LABELS['DESC']}</option>
</Select>
</FormControl>
</Flex>
</PopoverBody>
</PopoverContent>
</Popover>
);
};


@@ -9,7 +9,7 @@ import { useZoomToNode } from 'features/nodes/hooks/useZoomToNode';
import { formElementRemoved } from 'features/nodes/store/workflowSlice';
import type { FormElement, NodeFieldElement } from 'features/nodes/types/workflow';
import { isContainerElement, isNodeFieldElement } from 'features/nodes/types/workflow';
import { startCase } from 'lodash-es';
import { camelCase } from 'lodash-es';
import type { RefObject } from 'react';
import { memo, useCallback, useMemo } from 'react';
import { useTranslation } from 'react-i18next';
@@ -103,15 +103,16 @@ const RemoveElementButton = memo(({ element }: { element: FormElement }) => {
RemoveElementButton.displayName = 'RemoveElementButton';
const Label = memo(({ element }: { element: FormElement }) => {
const { t } = useTranslation();
const label = useMemo(() => {
if (isContainerElement(element) && element.data.layout === 'column') {
return `Container (column layout)`;
return t('workflows.builder.containerColumnLayout');
}
if (isContainerElement(element) && element.data.layout === 'row') {
return `Container (row layout)`;
return t('workflows.builder.containerRowLayout');
}
return startCase(element.type);
}, [element]);
return t(`workflows.builder.${camelCase(element.type)}`);
}, [element, t]);
return (
<Text fontWeight="semibold" noOfLines={1} wordBreak="break-all" userSelect="none">
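The label change swaps display-casing for i18n lookup: element types are camel-cased into translation keys under `workflows.builder` instead of being start-cased for direct display. The key derivation in miniature (the element type strings here are illustrative):

import { camelCase } from 'lodash-es';

const keyFor = (elementType: string): string => `workflows.builder.${camelCase(elementType)}`;

keyFor('node-field'); // 'workflows.builder.nodeField', resolved via t(...)
keyFor('divider');    // 'workflows.builder.divider'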


@@ -1,6 +1,6 @@
import { Button, Flex, Image, Link, Text } from '@invoke-ai/ui-library';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { useWorkflowListMenu } from 'features/nodes/store/workflowListMenu';
import { useWorkflowLibraryModal } from 'features/nodes/store/workflowLibraryModal';
import { selectCleanEditor, workflowModeChanged } from 'features/nodes/store/workflowSlice';
import InvokeLogoSVG from 'public/assets/images/invoke-symbol-wht-lrg.svg';
import { useCallback } from 'react';
@@ -40,7 +40,7 @@ export const EmptyState = () => {
const CleanEditorContent = () => {
const { t } = useTranslation();
const dispatch = useAppDispatch();
const workflowListMenu = useWorkflowListMenu();
const workflowLibraryModal = useWorkflowLibraryModal();
const onClickNewWorkflow = useCallback(() => {
dispatch(workflowModeChanged('edit'));
@@ -52,7 +52,7 @@ const CleanEditorContent = () => {
<Button size="sm" onClick={onClickNewWorkflow}>
{t('nodes.newWorkflow')}
</Button>
<Button size="sm" colorScheme="invokeBlue" onClick={workflowListMenu.open}>
<Button size="sm" colorScheme="invokeBlue" onClick={workflowLibraryModal.open}>
{t('nodes.loadWorkflow')}
</Button>
</Flex>


@@ -1,5 +1,6 @@
import type { FormControlProps } from '@invoke-ai/ui-library';
import { Flex, FormControl, FormControlGroup, FormLabel, Input, Textarea } from '@invoke-ai/ui-library';
import { Box, Flex, FormControl, FormControlGroup, FormLabel, Image, Input, Textarea } from '@invoke-ai/ui-library';
import { skipToken } from '@reduxjs/toolkit/query';
import { createMemoizedSelector } from 'app/store/createMemoizedSelector';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import ScrollableContent from 'common/components/OverlayScrollbars/ScrollableContent';
@@ -16,11 +17,15 @@ import {
import type { ChangeEvent } from 'react';
import { memo, useCallback } from 'react';
import { useTranslation } from 'react-i18next';
import { useGetWorkflowQuery } from 'services/api/endpoints/workflows';
import { WorkflowThumbnailEditor } from './WorkflowThumbnail/WorkflowThumbnailEditor';
const selector = createMemoizedSelector(selectWorkflowSlice, (workflow) => {
const { author, name, description, tags, version, contact, notes } = workflow;
const { id, author, name, description, tags, version, contact, notes } = workflow;
return {
id,
name,
author,
description,
@@ -32,7 +37,7 @@ const selector = createMemoizedSelector(selectWorkflowSlice, (workflow) => {
});
const WorkflowGeneralTab = () => {
const { author, name, description, tags, version, contact, notes } = useAppSelector(selector);
const { id, author, name, description, tags, version, contact, notes } = useAppSelector(selector);
const dispatch = useAppDispatch();
const handleChangeName = useCallback(
@@ -89,6 +94,7 @@ const WorkflowGeneralTab = () => {
<FormLabel>{t('nodes.workflowName')}</FormLabel>
<Input variant="darkFilled" value={name} onChange={handleChangeName} />
</FormControl>
<Thumbnail id={id} />
<FormControl>
<FormLabel>{t('nodes.workflowVersion')}</FormLabel>
<Input variant="darkFilled" value={version} onChange={handleChangeVersion} />
@@ -138,3 +144,45 @@ export default memo(WorkflowGeneralTab);
const formControlProps: FormControlProps = {
flexShrink: 0,
};
const Thumbnail = ({ id }: { id?: string | null }) => {
const { t } = useTranslation();
const { data } = useGetWorkflowQuery(id ?? skipToken);
if (!data) {
return null;
}
if (data.workflow.meta.category === 'default' && data.thumbnail_url) {
// This is a default workflow and it has a thumbnail set. Users may only view the thumbnail.
return (
<FormControl>
<FormLabel>{t('workflows.workflowThumbnail')}</FormLabel>
<Box position="relative" flexShrink={0}>
<Image
src={data.thumbnail_url}
objectFit="cover"
objectPosition="50% 50%"
w={100}
h={100}
borderRadius="base"
/>
</Box>
</FormControl>
);
}
if (data.workflow.meta.category !== 'default') {
// This is a user workflow and they may edit the thumbnail.
return (
<FormControl>
<FormLabel>{t('workflows.workflowThumbnail')}</FormLabel>
<WorkflowThumbnailEditor thumbnailUrl={data.thumbnail_url} workflowId={data.workflow_id} />
</FormControl>
);
}
// This is a default workflow and it does not have a thumbnail set. Users may not edit the thumbnail.
return null;
};


@@ -1,17 +1,51 @@
import { Flex } from '@invoke-ai/ui-library';
import { useStore } from '@nanostores/react';
import { EMPTY_OBJECT } from 'app/store/constants';
import { useAppSelector } from 'app/store/storeHooks';
import DataViewer from 'features/gallery/components/ImageMetadataViewer/DataViewer';
import { $builtWorkflow } from 'features/nodes/hooks/useWorkflowWatcher';
import { memo } from 'react';
import { selectNodesSlice } from 'features/nodes/store/selectors';
import type { NodesState, WorkflowsState } from 'features/nodes/store/types';
import { selectWorkflowSlice } from 'features/nodes/store/workflowSlice';
import type { WorkflowV3 } from 'features/nodes/types/workflow';
import { buildWorkflowFast } from 'features/nodes/util/workflow/buildWorkflow';
import { debounce } from 'lodash-es';
import { atom, computed } from 'nanostores';
import { memo, useEffect } from 'react';
import { useTranslation } from 'react-i18next';
const $maybePreviewWorkflow = atom<WorkflowV3 | null>(null);
const $previewWorkflow = computed(
$maybePreviewWorkflow,
(maybePreviewWorkflow) => maybePreviewWorkflow ?? EMPTY_OBJECT
);
const debouncedBuildPreviewWorkflow = debounce(
(nodes: NodesState['nodes'], edges: NodesState['edges'], workflow: WorkflowsState) => {
$maybePreviewWorkflow.set(buildWorkflowFast({ nodes, edges, workflow }));
},
300
);
const IsolatedWorkflowBuilderWatcher = memo(() => {
const { nodes, edges } = useAppSelector(selectNodesSlice);
const workflow = useAppSelector(selectWorkflowSlice);
useEffect(() => {
debouncedBuildPreviewWorkflow(nodes, edges, workflow);
}, [edges, nodes, workflow]);
return null;
});
IsolatedWorkflowBuilderWatcher.displayName = 'IsolatedWorkflowBuilderWatcher';
const WorkflowJSONTab = () => {
const workflow = useStore($builtWorkflow);
const previewWorkflow = useStore($previewWorkflow);
const { t } = useTranslation();
return (
<Flex flexDir="column" alignItems="flex-start" gap={2} h="full">
<DataViewer data={workflow ?? {}} label={t('nodes.workflow')} bg="base.850" color="base.200" />
<DataViewer data={previewWorkflow} label={t('nodes.workflow')} bg="base.850" color="base.200" />
<IsolatedWorkflowBuilderWatcher />
</Flex>
);
};
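The JSON tab now decouples the expensive rebuild from rendering: a renderless watcher debounces `buildWorkflowFast` into a nanostores atom, and a computed store substitutes an empty object so the DataViewer never sees null. The same atom/computed/debounce shape in miniature:

import { debounce } from 'lodash-es';
import { atom, computed } from 'nanostores';

const $maybePreview = atom<{ nodeCount: number } | null>(null);
// Render-safe view: subscribers always get an object, never null.
const $preview = computed($maybePreview, (v) => v ?? {});

// Debounce the heavy rebuild; subscribers re-render only when the atom settles.
const scheduleRebuild = debounce((nodeCount: number) => {
  $maybePreview.set({ nodeCount });
}, 300);

scheduleRebuild(42); // after 300ms idle, $preview.get() -> { nodeCount: 42 }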

Some files were not shown because too many files have changed in this diff.