There's a technical challenge with outputting these values directly. `ImageField` does not store them, so the batch's `ImageField` collection does not have width and height for each image.
In order to set up the batch and pass along width and height for each image, we'd need to make a network request for each image when the user clicks Invoke. Those requests would often be cached, but this would eventually create a scaling issue and a poor user experience.
As a very simple workaround, users can connect the batch's image output to an `Image Primitive` node to access the width and height.
This change is implemented by adding some simple special handling when parsing the output fields for the `image_batch` node.
I'll keep this situation in mind when extending the batching system to other field types.
- Split up the logic that determines why the user cannot invoke, per tab.
- Fix issue where the workflows tab would show reasons related to canvas/upscale tab. The tooltip now only shows information relevant to the current tab.
- Add calculation for batch size to the queue count prediction.
- Use a constant for the enqueue mutation's fixed cache key, instead of a string. Just some typo protection.
Currently translated at 42.3% (672 of 1588 strings)
translationBot(ui): update translation (Spanish)
Currently translated at 28.0% (445 of 1588 strings)
Co-authored-by: gallegonovato <fran-carro@hotmail.es>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/es/
Translation: InvokeAI/Web UI
- Add special handling for `ImageBatchInvocation`
- Add input component for image collections, supporting multi-image upload and dnd
- Minor rework of some hooks for accessing node data
The canvas react components pass canvas entity identifiers around, then redux selectors are used to access that entity. This is good for perf - entity states may rapidly change. Passing only the identifiers allows components and other logic to have more granular state updates.
Unfortunately, this design opens the possibility for an entity identifier to point to an entity that does not exist.
To get around this, I had created a redux selector `selectEntityOrThrow` for canvas entities. As the name implies, it throws if the entity is not found.
While it prevents components/hooks from needing to deal with missing entities, it results in mysterious errors if an entity is missing. Without sourcemaps, it's very difficult to determine what component or hook couldn't find the entity.
Refactoring the app to not depend on this behaviour is tricky. We could pass the entity state around directly as a prop or via context, but as mentioned, this could cause performance issues with rapidly changing entities.
As a workaround, I've made two changes:
- `<CanvasEntityStateGate/>` is a component that takes an entity identifier, returning its children if the entity state exists, or null if not. This component wraps every usage of `selectEntityOrThrow` (a sketch follows below). Theoretically, this should prevent the entity-not-found errors.
- Add a `caller: string` arg to `selectEntityOrThrow`. This string is now added to the error message when the assertion fails, so we can more easily track the source of the errors.
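A rough sketch of the two pieces, with simplified types and a placeholder `useAppSelector` standing in for the real store hook (both are assumptions, not the actual implementation):

```tsx
import { memo, type PropsWithChildren } from 'react';

// Simplified stand-ins for the real canvas state types (assumptions).
type CanvasEntityIdentifier = { id: string };
type CanvasEntityState = { id: string; isEnabled: boolean };
type RootState = { canvas: { entities: Record<string, CanvasEntityState | undefined> } };

// Placeholder for the app's typed redux hook (assumption).
declare function useAppSelector<T>(selector: (state: RootState) => T): T;

// Throwing selector: `caller` is appended to the error so failures can be traced
// even without sourcemaps.
const selectEntityOrThrow = (state: RootState, { id }: CanvasEntityIdentifier, caller: string): CanvasEntityState => {
  const entity = state.canvas.entities[id];
  if (!entity) {
    throw new Error(`Entity ${id} not found (caller: ${caller})`);
  }
  return entity;
};

// Renders children only when the entity exists, so descendants may safely use the
// throwing selector.
const CanvasEntityStateGate = memo(({ identifier, children }: PropsWithChildren<{ identifier: CanvasEntityIdentifier }>) => {
  const entity = useAppSelector((state) => state.canvas.entities[identifier.id]);
  if (!entity) {
    return null;
  }
  return <>{children}</>;
});
CanvasEntityStateGate.displayName = 'CanvasEntityStateGate';
```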
In the future we can work out a way to not use this throwing selector and retain perf. The app has changed quite a bit since that selector was created - so we may not have to worry about perf at all.
When we added more progress events during generation, we indirectly broke the logic that controls when the progress bar throbs.
Co-authored-by: Mary Hipp Rogers <maryhipp@gmail.com>
Currently translated at 33.6% (533 of 1583 strings)
translationBot(ui): update translation (Japanese)
Currently translated at 30.3% (481 of 1583 strings)
Co-authored-by: Gohsuke Shimada <ghoskay@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ja/
Translation: InvokeAI/Web UI
Currently translated at 17.6% (278 of 1575 strings)
translationBot(ui): update translation (Spanish)
Currently translated at 17.3% (274 of 1575 strings)
Co-authored-by: gallegonovato <fran-carro@hotmail.es>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/es/
Translation: InvokeAI/Web UI
Currently translated at 100.0% (1581 of 1581 strings)
translationBot(ui): update translation (Vietnamese)
Currently translated at 100.0% (1576 of 1576 strings)
translationBot(ui): update translation (Vietnamese)
Currently translated at 100.0% (1575 of 1575 strings)
translationBot(ui): update translation (Vietnamese)
Currently translated at 85.0% (1340 of 1575 strings)
translationBot(ui): update translation (Vietnamese)
Currently translated at 78.7% (1240 of 1575 strings)
translationBot(ui): update translation (Vietnamese)
Currently translated at 73.1% (1152 of 1575 strings)
translationBot(ui): update translation (English)
Currently translated at 99.9% (1574 of 1575 strings)
translationBot(ui): update translation (Vietnamese)
Currently translated at 57.9% (913 of 1575 strings)
translationBot(ui): update translation (Vietnamese)
Currently translated at 37.0% (584 of 1575 strings)
translationBot(ui): update translation (Vietnamese)
Currently translated at 3.2% (51 of 1575 strings)
translationBot(ui): update translation (Vietnamese)
Currently translated at 3.2% (51 of 1575 strings)
Co-authored-by: Linos <tt250208@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/en/
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/vi/
Translation: InvokeAI/Web UI
Currently translated at 79.9% (1266 of 1583 strings)
translationBot(ui): update translation (Chinese (Simplified Han script))
Currently translated at 74.4% (1171 of 1573 strings)
Co-authored-by: aidawanglion <youjayjeel@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/zh_Hans/
Translation: InvokeAI/Web UI
Currently translated at 99.6% (1569 of 1575 strings)
translationBot(ui): update translation (Italian)
Currently translated at 99.4% (1567 of 1575 strings)
translationBot(ui): update translation (Italian)
Currently translated at 99.4% (1565 of 1573 strings)
Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
Turns out a gallery image's `imageDTO` object can actually be a different object by reference. I thought this was not possible thanks to how we have a quasi-normalized cache.
Need to check against image name instead of reference equality when deciding whether or not to use the single image or the gallery selection for the dnd payload.
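A minimal sketch of the check, with a simplified `ImageDTO` type as an assumption:

```ts
type ImageDTO = { image_name: string };

// Decide whether to drag the single hovered image or the whole gallery selection.
// Compare by image_name rather than object reference, because the DTO for the same
// image may be a different object instance.
const getDndPayload = (image: ImageDTO, selection: ImageDTO[]): ImageDTO[] => {
  const isImageInSelection = selection.some((i) => i.image_name === image.image_name);
  return isImageInSelection && selection.length > 1 ? selection : [image];
};
```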
Rework uploadImage and uploadImages helpers and the RTK listener, ensuring gallery view isn't changed unexpectedly and preventing extraneous toasts.
Fix staging area save to gallery button to essentially make a copy of the image, instead of changing its intermediate status.
- New name: "Output only Generated Regions"
- New default: true (this was the intention, but at some point the behaviour of the setting was inverted without the default being changed)
The styling in gallery for selected vs hovered was very similar, leading users to think that the hovered image was also selected.
Reducing the borders for hovered images to a single pixel makes it easier to distinguish between selected and hovered.
- Tweak layout/styling of alerts for consistent spacing
- Add the percentage to the message when one is available
- Only show events if the destination is canvas (so workflows events are hidden for example)
- Pass in the `UtilInterface` to the `ModelsInterface` so we can call the simple `signal_progress` method instead of the complicated `emit_invocation_progress` method.
- Only emit load events when starting to load - not after.
- Add more detail to the messages, like submodel type
## Summary
Add support for SD3 image-to-image and inpainting. Similar to FLUX, the
implementation supports fractional denoise_start/denoise_end for more
fine-grained denoise strength control, and a gradient mask adjustment
schedule for smoother inpainting seams.
## Example
Workflow
<img width="1016" alt="image"
src="https://github.com/user-attachments/assets/ee598d77-be80-4ca7-9355-c3cbefa2ef43">
Result

## QA Instructions
- [x] Regression test of text-to-image
- [x] Test image-to-image without mask
- [x] Test that adjusting denoising_start allows fine-grained control of
amount of change in image-to-image
- [x] Test inpainting with mask
- [x] Smoke test SD1, SDXL, FLUX image-to-image to make sure there was
no regression with the frontend changes.
## Merge Plan
## Checklist
- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [x] _Tests added / updated (if applicable)_
- [x] _Documentation added / updated (if applicable)_
- [ ] _Updated `What's New` copy (if doing a release after this PR)_
## Summary
The FLUX VAE, like many VAEs, is broken if run with float16 inputs, returning black images due to NaNs.
This fixes the issue by forcing the VAE to run in bfloat16 or float32, where compatible.
## Related Issues / Discussions
Fix for issue https://github.com/invoke-ai/InvokeAI/issues/7208
## QA Instructions
Tested on macOS: the VAE works with float16 set in invoke.yaml and when left at the default.
I also briefly forced it down the float32 route to check that too.
Needs testing on CUDA / ROCm
## Merge Plan
It should be a straightforward merge.
When an unsupported model architecture is selected, show that warning only, without the extra warnings (i.e. no "missing tile controlnet" warning)
Update Invoke tooltip warnings accordingly
Closes #7239, Closes #7177
- Add `withToast` flag to `uploadImage` util
- Skip the toast if this is not set
- Use the flag to disable toasts when canvas does internal image-uploading stuff that should be invisible to user
We don't need a "dnd" image system. We need a "image action" system. We need to execute specific flows with images from various "origins":
- internal dnd e.g. from gallery
- external dnd e.g. user drags an image file into the browser
- direct file upload e.g. user clicks an upload button
- some other internal app button e.g. a context menu
The actions are now generalized to better support these various use-cases.
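As a rough sketch of the idea - the names and shapes below are illustrative, not the actual API:

```ts
type ImageDTO = { image_name: string };

// Where the image came from; each origin can feed the same downstream actions.
type ImageSource =
  | { type: 'internal-dnd'; imageDTO: ImageDTO } // e.g. dragged from the gallery
  | { type: 'external-dnd'; file: File } // e.g. file dragged into the browser
  | { type: 'direct-upload'; file: File } // e.g. an upload button
  | { type: 'internal-action'; imageDTO: ImageDTO }; // e.g. a context menu item

// One action implementation handles every origin, uploading first if needed.
const newRasterLayerFromImage = async (
  source: ImageSource,
  upload: (file: File) => Promise<ImageDTO>,
  addLayer: (imageDTO: ImageDTO) => void
): Promise<void> => {
  const imageDTO =
    source.type === 'external-dnd' || source.type === 'direct-upload'
      ? await upload(source.file)
      : source.imageDTO;
  addLayer(imageDTO);
};
```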
## Summary
Nodes to support SD3.5 txt2img generations
* adds SD3.5 to starter models
* adds default workflow for SD3.5 txt2img
## Related Issues / Discussions
## QA Instructions
## Merge Plan
## Checklist
- [ ] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
In a8de6406c5 a change was made to many menus in an effort to improve performance. The menus were made to be lazy, so that they are mounted only while open.
This causes unexpected behaviour when there is some logic in the menu that may need to execute after the user selects a menu item.
In this case, when you click to load a workflow from file, the file picker opens but then the menuitem unmounts, taking the input element and all uploading logic with it. When you select a file, nothing happens because we've nuked the handlers by unmounting everything.
Easy fix - un-lazy-fy the menu.
Closes #7240
The validation on this node causes graph validation to fail. It must be validated _after_ instantiation.
Also, it was a bit too strict. The only case we explicitly do not handle is when both bboxes and points are provided. It's acceptable if neither is provided.
Closes #7248
When filtering, we use a listener to trigger processing the image whenever a filter setting changes. For example, if the user changes from canny to depth, and auto-process is enabled, we re-process the layer with new filter settings.
The filterer has a method to reset its ephemeral state. This includes the filter settings, so resetting the ephemeral state is expected to trigger processing of the filter.
When we exit filtering, we reset the ephemeral state before resetting everything else, like the listeners.
This can cause a problem when we exit filtering. The sequence:
- Start filtering a layer.
- Auto-process the filter in response to starting the filter process.
- Change the filter settings.
- Auto-process the filter in response to the changed settings.
- Apply the filter.
- Exit filtering, first by resetting the ephemeral state.
- Auto-process the filter in response to the reset settings.*
- Finish exiting, including unsubscribing from listeners.
*Whoops! That last auto-process has now borked the layer's rendering by processing a filter when we shouldn't be processing a filter.
We need to first unsubscribe from listeners, so we don't react to that change to the filter settings and erroneously process the layer.
Also, add a check to the `processImmediate` method to prevent processing if that method is accidentally called without first starting the filterer.
The same issue could affect the Segment Anything module - the same fixes are implemented there.
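A simplified sketch of the corrected teardown order and the `processImmediate` guard; the class shape and method names are approximations, not the real module:

```ts
type Unsubscribe = () => void;

class FilterModule {
  private subscriptions: Unsubscribe[] = [];
  private isFiltering = false;

  start(subscribe: (onSettingsChanged: () => void) => Unsubscribe): void {
    this.isFiltering = true;
    // React to filter settings changes only while filtering is active.
    this.subscriptions.push(subscribe(() => this.processImmediate()));
  }

  processImmediate(): void {
    // Guard: bail if the filterer was never started (or was already torn down).
    if (!this.isFiltering) {
      return;
    }
    // ...process the layer with the current filter settings...
  }

  exit(): void {
    // Unsubscribe FIRST, so resetting the ephemeral state below cannot trigger
    // a stray auto-process after the filter has been applied.
    this.subscriptions.forEach((unsubscribe) => unsubscribe());
    this.subscriptions = [];
    this.isFiltering = false;
    this.resetEphemeralState();
  }

  private resetEphemeralState(): void {
    // ...reset filter settings etc.; no longer observed, so nothing re-processes...
  }
}
```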
The root issue is the compositing cache. When we save the canvas to gallery, we need to first composite raster layers together and then upload the image.
The compositor makes extensive use of caching to reduce the number of images created and improve performance. There are two "layers" of caching:
1. Caching the composite canvas element, which is used both for uploading the canvas and for generation mode analysis.
2. Caching the uploaded composite canvas element as an image.
The combination of these caches allows for the various processes that require composite canvases to do minimal work.
But this causes a problem in this situation, because the user expects a new image to be uploaded when they click save to gallery.
For example, suppose we have already composited and uploaded the raster layer state for use in a generation. Then, we ask the compositor to save the canvas to gallery.
The compositor sees that we are requesting an image for the current canvas state, and instead of recompositing and uploading the image again, it just returns the cached image.
In this case, no image is uploaded and the button does nothing.
We need to be able to opt out of the caching at some level, for certain actions. A `forceUpload` arg is added to the compositor's high-level `getCompositeImageDTO` method to do this.
When true, we ignore the uppermost caching layer (the uploaded image layer), but still use the lower caching layer (the canvas element layer). So we don't recompute the canvas element, but we do upload it as a new image to the server.
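Roughly, the two cache layers and the `forceUpload` escape hatch behave like this (a simplified sketch; the real compositor derives its cache keys from canvas state):

```ts
type ImageDTO = { image_name: string };

class Compositor {
  private canvasCache = new Map<string, HTMLCanvasElement>();
  private imageCache = new Map<string, ImageDTO>();

  constructor(
    private composite: (cacheKey: string) => HTMLCanvasElement,
    private upload: (canvas: HTMLCanvasElement) => Promise<ImageDTO>
  ) {}

  async getCompositeImageDTO(cacheKey: string, forceUpload = false): Promise<ImageDTO> {
    // Upper cache layer: the uploaded image. Skipped when forceUpload is true,
    // so "Save to Gallery" always produces a fresh gallery image.
    if (!forceUpload) {
      const cachedImage = this.imageCache.get(cacheKey);
      if (cachedImage) {
        return cachedImage;
      }
    }

    // Lower cache layer: the composited canvas element. Always used, so the same
    // canvas state is never composited twice.
    let canvas = this.canvasCache.get(cacheKey);
    if (!canvas) {
      canvas = this.composite(cacheKey);
      this.canvasCache.set(cacheKey, canvas);
    }

    const imageDTO = await this.upload(canvas);
    this.imageCache.set(cacheKey, imageDTO);
    return imageDTO;
  }
}
```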
Previously, we cleared the canvas progress image when the canvas had no active generations. This allowed for a brief flash of canvas state between the last progress image for a given generation, and when the output image for that generation rendered. Here's the sequence:
- Progress images are received and rendered
- Generation completes - no active canvas generations
- Clear the progress image -> canvas layers visible unexpectedly, creating an awkward jarring change
- Generation output image is rendered -> output image overlaid on canvas layers
In 83538c4b2b I attempted to fix this by only clearing the progress image while we were not staging.
This isn't quite right, though. We are often staging with no active generations - for example, you have a few images completed and are waiting to choose one.
In this situation, if you cancel a pending generation, the logic to clear the progress image doesn't fire because it sees staging is in progress.
What we really need is:
- Staging area module clears the progress image once it has rendered an output image.
- Progress image module clears the progress image when a generation is canceled or failed, in which case there will be no output image.
To do this, we can add an event listener to the progress image module to listen for queue item status changes, and when we get a cancelation or failure, clear the progress image.
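A sketch of that listener - event names and payload shape are simplified assumptions:

```ts
type QueueItemStatus = 'pending' | 'in_progress' | 'completed' | 'canceled' | 'failed';
type QueueItemStatusChangedEvent = { status: QueueItemStatus };

class ProgressImageModule {
  constructor(
    subscribe: (handler: (e: QueueItemStatusChangedEvent) => void) => void,
    private clearProgressImage: () => void
  ) {
    subscribe((e) => this.onQueueItemStatusChanged(e));
  }

  private onQueueItemStatusChanged(e: QueueItemStatusChangedEvent): void {
    // Canceled/failed generations never produce an output image, so the staging
    // area will never clear the progress image for us - do it here instead.
    if (e.status === 'canceled' || e.status === 'failed') {
      this.clearProgressImage();
    }
  }
}
```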
pip's dependency resolution doesn't take into account transitive
dependencies when choosing package versions for download.
Even though `torch~=2.4.1` is required by `diffusers`, pip will
download 2.5.0 and higher, but only install 2.4.1.
Pinning torch to <2.5.0 prevents this behaviour.
## Summary
This change mimics the unet padding strategy to align T2I featuremaps
with the latents during denoising. It also slightly adjusts the crop and
scale logic so that the control will match the input image without
shifting when it needs to pad.
## Related Issues / Discussions
## QA Instructions
Image generated at 1032x1024

Image generated at 1080x1040 to prove feature alignment.

Edge artifacts on the bottom and right are a result of SDXL's unet
padding, and t2i influence will be cut off in those regions.
## Merge Plan
Contingent on #7205
Currently the Canvas UI prevents users from generating at resolutions that are not
multiples of 64 while T2I adapter layers are active. Will leave this as a draft
until fixing that.
## Checklist
- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
Previously we maintained an `isInteractable` flag, which was derived from these layer flags:
- Locked/unlocked
- Enabled/disabled
- Layer's type visible/hidden
When a layer was not interactable, we blocked all layer actions.
After comparing to the behaviour in Affinity and considering user feedback, I've loosened these restrictions while maintaining safety. First, some definitions.
There are two kinds of layer actions - mutating actions and non-mutating actions.
- Mutating actions are drawing on the layer, cropping it, filtering it, converting it, etc. Anything that changes the layer.
- Non-mutating actions are copying the layer, saving the layer to gallery, etc. Anything that _uses_ the layer.
Then, there are two broad canvas states - busy and not busy. "Busy" means the canvas is actively filtering, staging, compositing layers together, etc - something that is "single-threaded" by nature.
And here are the revised restrictions:
- When canvas is busy, you cannot initiate any layer actions.
- When the canvas is not busy and the layer is locked, you cannot initiate any mutating actions, but non-mutating actions are still allowed.
- When the canvas is not busy and the layer is not locked, you can initiate any layer action.
Besides safely giving users more freedom, it also fixes an issue where the context menu for a layer was disabled if it was not the selected layer.
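As a sketch, the revised rules reduce to a small predicate (names are illustrative):

```ts
type LayerActionKind = 'mutating' | 'non-mutating';

type LayerFlags = { isLocked: boolean };
type CanvasFlags = { isBusy: boolean };

// Revised interaction rules:
// - busy canvas: nothing is allowed
// - locked layer: only non-mutating actions (copy, save to gallery, ...)
// - otherwise: everything is allowed
const canInitiateLayerAction = (
  kind: LayerActionKind,
  layer: LayerFlags,
  canvas: CanvasFlags
): boolean => {
  if (canvas.isBusy) {
    return false;
  }
  if (layer.isLocked) {
    return kind === 'non-mutating';
  }
  return true;
};
```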
- Add method to force a rebuild of the pydantic type adapter for the union of invocations, which is used to validate graphs.
- Update the xfail'd test.
Had missed several of these, which means we were invalidating caches far too often. For example, when you changed a RG prompt, we were invalidating the cached canvas for that entity, even though changing the prompt doesn't affect the canvas at all.
Previously, merge visible deleted all other visible layers. This is not how Affinity works; I should have confirmed before making it work like this in the first place.
`CanvasCompositorModule` had a fairly inflexible API, only supporting compositing all raster layers or inpaint masks.
The API has been generalized to work with a list of canvas entities. This enables `Merge Down` and `Merge Selected` functionality (though `Merge Selected` is not part of this set of changes).
Let the parent module adopt the filtered/segmented image instead of destroying it and making the parent re-create it, which results in a brief flash of the parent layer's original objects before the new image is rendered.
We were scaling the unscaled image and mask down before doing the paste-back, but this adds an extraneous step & image output.
We can do the paste-back first, then scale to output size after. So instead of 2 resizes before the paste-back, we have 1 resize after.
The end result is the same.
- Restore dedicated `Apply` buttons
- Remove icons from the buttons, too much noise when the words are short and clear
- Update loading state to show a spinner next to the `Process` button instead of on _every_ button
A blue button is begging to be clicked, but clicking it will do nothing. Instead, we should communicate that no action is needed by disabling the button when the default settings are already in use.
Using `&&` will result in false negatives for settings where a falsy value might be valid. For example, any setting for which 0 is a valid number. To be on the safe side, just use an explicit null check on all values.
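For example, with a hypothetical settings object:

```ts
type DefaultSettings = { cfgScale?: number; steps?: number; scheduler?: string };

// Buggy: `&&` treats 0 (a valid numeric value) as "not set".
const hasAllSettingsLoose = (s: DefaultSettings) =>
  Boolean(s.cfgScale && s.steps && s.scheduler);

// Safe: explicit null/undefined checks, so falsy-but-valid values still count.
const hasAllSettings = (s: DefaultSettings) =>
  s.cfgScale != null && s.steps != null && s.scheduler != null;

console.log(hasAllSettingsLoose({ cfgScale: 0, steps: 20, scheduler: 'euler' })); // false (wrong)
console.log(hasAllSettings({ cfgScale: 0, steps: 20, scheduler: 'euler' })); // true
```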
We use an in-memory cache for PIL images to reduce I/O. If a node mutates the image in any way, the cached image object is also updated (but the on-disk image file is not).
We've lucked out that this hasn't caused major issues in the past (well, maybe it has but we didn't understand them?) mainly because of a happy accident. When you call `context.images.get_pil` in a node, if you provide an image mode (e.g. `mode="RGB"`), we call `convert` on the image. This returns a copy. The node can do whatever it wants to that copy and nothing breaks.
However, when mode is not specified, we return the image directly. This is where we get in trouble - nodes that load the image like this, and then mutate the image, update the cache. Other nodes that reference that same image will now get the mutated version of it.
The fix is super simple - we make sure to return only copies from `get_pil`.
- Use a hash of the last processed points instead of a `hasProcessed` flag to determine whether or not we should re-process a given set of points.
- Store point coords in state instead of pulling them out of the konva node positions. This makes moving a point a more explicit action in code.
- Add a `roundCoord` util to round the x and y values of a coordinate.
- Ensure we always re-process when $points changes.
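Sketches of the two small utilities described above, with assumed shapes:

```ts
type Coordinate = { x: number; y: number };

// Round a coordinate's x/y values to integers.
const roundCoord = (coord: Coordinate): Coordinate => ({
  x: Math.round(coord.x),
  y: Math.round(coord.y),
});

// Cheap stable "hash" of a set of points; compare against the hash of the last
// processed points to decide whether re-processing is needed.
const hashPoints = (points: Coordinate[]): string =>
  points.map((p) => `${p.x},${p.y}`).join(';');

let lastProcessedHash: string | null = null;

const maybeProcess = (points: Coordinate[], process: (points: Coordinate[]) => void): void => {
  const hash = hashPoints(points.map(roundCoord));
  if (hash === lastProcessedHash) {
    return; // nothing changed since the last processing run
  }
  lastProcessedHash = hash;
  process(points);
};
```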
Realized we are doing a lot of event listening even when segmenting is not occurring. I don't think this will have a meaningful performance impact, but it makes sense to remove these listeners when not in use.
Fix an issue where if the input image is transparent in a region to be masked, that transparent region ends up opaque black. Need to respect the input image transparency by applying the mask to the alpha channel only.
Each version of torch is only available for specific versions of CUDA and ROCm.
The Invoke installer and dockerfile try to install torch 2.4.1 with ROCm 5.6
support, which does not exist. As a result, the installation falls back to the
default CUDA version so AMD GPUs aren't detected. This commit fixes that by
bumping the ROCm version to 6.1, as suggested by the PyTorch documentation. [1]
The specified CUDA version of 12.4 is still correct according to [1], so it does
not need to be changed.
Closes #7006, Closes #7146
[1]: https://pytorch.org/get-started/previous-versions/#v241
## Summary
This PR adds support for the XLabs IP-Adapter
(https://huggingface.co/XLabs-AI/flux-ip-adapter) in workflows. Linear
UI integration is coming in a follow-up PR. The XLabs IP-Adapter can be
installed in the Starter Models tab.
Usage tips:
- Use a `cfg_scale` value of 2.0 to 4.0
- Start with an IP-Adapter weight of ~0.6 and adjust from there.
- Set `cfg_scale_start_step = 1`
- Set `cfg_scale_end_step` to roughly the halfway point (it's
unnecessary to apply CFG to all steps, and this will improve processing
time).
Sample workflow:
<img width="976" alt="image"
src="https://github.com/user-attachments/assets/4627b459-7e5a-4703-80e7-f7575c5fce19">
Result:

## Related Issues / Discussions
Prerequisite: https://github.com/invoke-ai/InvokeAI/pull/7152
## Remaining TODO:
- [ ] Update default workflows.
## QA Instructions
- [x] Test basic happy path
- [x] Test with multiple IP-Adapters (it runs, but results aren't great)
- [ ] ~Test with multiple images to a single IP-Adapter~ (this is not
supported for now)
- [ ] Test automatic runtime installation of CLIP-L, CLIP-H, and CLIP-G
image encoder models if they are not already installed.
- [ ] Test starter model installation of the XLabs FLUX IP-Adapter
- [ ] Test SD and SDXL IP-Adapters for regression.
- [ ] Check peak memory utilization.
## Merge Plan
- [ ] Merge #7152
- [ ] Change target branch to main
## Checklist
- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [x] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
## Summary
Add support for Classifier-Free Guidance with FLUX.
- Using CFG doubles the time for the denoising process. Running both the
positive and negative conditioning in a single batch is left for future
work, because most users are already VRAM-constrained (this would
probably be faster at the cost of higher peak VRAM).
- Negative text conditioning is optional and only required if `cfg_scale
!= 1.0`
- CFG is skipped if `cfg_scale == 1.0` (i.e. no compute overhead in this
case)
- `cfg_scale_start_step` and `cfg_scale_end_step` can be used to easily
control the range of steps that CFG is applied for.
- CFG is a prerequisite for IP-Adapter support.
## Example
Positive Caption: `Professional photography of a luxury hotel in the
Nevada desert`
CFG: 1.0

Positive Caption: `Professional photography of a luxury hotel in the
Nevada desert`
Negative Caption: `Swimming pool`
CFG: 2.0
Same seed

## QA Instructions
- [ ] Test interactions with ControlNet
- [ ] Verify that peak RAM/VRAM utilization has not increased
significantly
- [ ] Test that CFG is skipped when cfg_scale == 1.0
- [ ] Test that negative text conditioning can be omitted when cfg_scale
== 1.0
- [ ] Test that a clear error message is returned when negative text
conditioning is omitted when cfg_scale != 1.0
- [ ] Test that the negative text prompt gets applied when cfg_scale
>1.0
- [ ] Test that a collection of cfg_scale values can be provided for
per-step control.
- [ ] Test that `cfg_scale_start_step` and `cfg_scale_end_step` control
the range of steps that CFG is applied
## Checklist
- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [x] _Tests added / updated (if applicable)_
- [x] _Documentation added / updated (if applicable)_
Introduce two-stage logging configuration and overrides for enabled status, log level and log namespaces.
The first stage is in `<InvokeAIUI />`, before we set up redux (and therefore before we have access to the user's configured logging setup). In this stage, we use the overrides or default values.
The second stage is in `<App />`, after we set up redux, via `useSyncLoggingConfig`. In this stage, we use the overrides or the user's configured logging setup. This hook also handles pushing changes made by the user into localstorage.
Other changes:
- Extract logging config to util function
- Remove the `useEffect` from `SettingsModal` that was changing the logging settings
- Remove extraneous log effects from `useLogger`
- Export new `LoggingOverrides` type
While troubleshooting an issue with this middleware, I found the inclusion of the nextState and diff to be very noisy. It's now a function that accepts some options to configure the output, and returns the middleware.
We can use the drop overlay component directly for this, without needing to add it as a `noop` dnd target.
Other changes:
- The `label` prop is now used to conditionally render the label - every drop target provides its own label, so this doesn't break anything.
- Add `withBackdrop` prop to control whether we apply the dimmed drop target effect.
Instead of providing a duration to the upload action, we close the toast imperatively in the `imageUploaded` listener using a timeout: 3s after the last upload, we close the toast.
This handles the case when we are uploading multiple images and don't want the toast to close til it's all finished.
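Conceptually, the listener does something like this (the toast handle here is a stand-in, not the real toast API):

```ts
type Toast = { close: () => void };

const TOAST_LINGER_MS = 3000;

let uploadToast: Toast | null = null;
let closeTimeout: ReturnType<typeof setTimeout> | null = null;

// Called from the imageUploaded listener: keep one toast alive, and (re)schedule
// its close 3s after the most recent upload finishes.
const onImageUploaded = (showToast: () => Toast): void => {
  if (!uploadToast) {
    uploadToast = showToast();
  }
  if (closeTimeout) {
    clearTimeout(closeTimeout);
  }
  closeTimeout = setTimeout(() => {
    uploadToast?.close();
    uploadToast = null;
    closeTimeout = null;
  }, TOAST_LINGER_MS);
};
```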
Currently translated at 98.7% (1476 of 1494 strings)
translationBot(ui): update translation (Italian)
Currently translated at 98.8% (1476 of 1493 strings)
translationBot(ui): update translation (Italian)
Currently translated at 98.8% (1474 of 1491 strings)
Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
To trigger the edge case:
- Have an empty layer and non-empty layer
- Select the non-empty layer
- Refresh the page
- Select the empty layer without doing any other action
- You may be unable to draw on the layer
- Zoom in/out slightly
- You can now draw on it
The problem was that we were not syncing visibility when a layer was selected, leaving the layer hidden. This indirectly disabled interactions.
The fix is to listen for changes to the layer's selected status and sync visibility when that changes.
We were:
- Incrementing `addedControlNets` or `addedT2IAdapters`
- Attempting to add it, but maybe failing and skipping
Need to swap the order of operations to prevent misreporting of added cnet/t2i.
I don't think this would ever actually cause problems.
## Summary
Add support for FLUX ControlNet models (XLabs and InstantX).
## QA Instructions
- [x] SD1 and SDXL ControlNets, since the ModelLoaderRegistry calls were
changed.
- [x] Single Xlabs controlnet
- [x] Single InstantX union controlnet
- [x] Single InstantX controlnet
- [x] Single Shakker Labs Union controlnet
- [x] Multiple controlnets
- [x] Weight, start, end params all work as expected
- [x] Can be used with image-to-image and inpainting.
- [x] Clear error message if no VAE is passed when using InstantX
controlnet.
- [x] Install InstantX ControlNet in diffusers format from HF repo
(`InstantX/FLUX.1-dev-Controlnet-Union`)
- [x] Test all FLUX ControlNet starter models
## Merge Plan
No special instructions.
## Checklist
- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [x] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
This replicates the img2img flow:
- Reset the canvas
- Resize the bbox to the image's aspect ratio at the optimal size for the selected model
- Add the image as a raster layer
- Resize the layer to fit the bbox using the 'fill' strategy
After this completes, the user can immediately click Invoke and it will do img2img.
If an entity needs to do something after init, it can use this system. For example, if a layer should be transformed immediately after initializing, it can use an init callback.
This feature involves a certain amount of extra work to ensure stroke and fill with partial opacity render correctly together. However, none of our shapes actually use that combination of attributes, so we can disable this for a minor perf boost.
Instead of pulling the preview canvas from the konva internals, use the canvas created for bbox calculations as the preview canvas.
This doesn't change perf characteristics, because we were already creating this canvas. It just means we don't need to dip into the konva internals.
It fixes an issue where the layer preview didn't update or show when a layer is disabled or otherwise hidden.
- When resetting workflows, retain the current mode state
- Remove the useEffect that reacted to the `isCleanEditor` flag, to prevent the menu getting locked open
This could be triggered by transforming a layer, undoing, then transforming again. The simple fix is to ignore the rasterization cache for all transforms.
There's a Konva bug where `pointerenter` & `pointerleave` events aren't fired correctly on the stage.
In 87fdea4cc6 I made a change that surfaced this bug, breaking touch and Apple Pencil interactions, because the cursor position doesn't get updated.
Simple fix - ensure we update the cursor on `pointerdown` events, even though we shouldn't need to.
Will make a bug report upstream
- Set an empty title to prevent browsers from showing "Please match the requested format." when hovering the number input
- Fix issue w/ `z-index` that prevented the popover button from being clicked while the input was focused
Currently translated at 98.7% (1461 of 1479 strings)
translationBot(ui): update translation (Italian)
Currently translated at 98.7% (1460 of 1479 strings)
translationBot(ui): update translation (Italian)
Currently translated at 98.5% (1458 of 1479 strings)
translationBot(ui): update translation (Italian)
Currently translated at 98.7% (1459 of 1477 strings)
translationBot(ui): update translation (Italian)
Currently translated at 98.7% (1453 of 1471 strings)
Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
* add tooltips for images/assets tabs
* add icon by board name that can be used to activate editing
* update getting started text
---------
Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
Currently translated at 98.7% (1453 of 1471 strings)
translationBot(ui): update translation (Italian)
Currently translated at 98.7% (1453 of 1471 strings)
translationBot(ui): update translation (Italian)
Currently translated at 98.7% (1452 of 1471 strings)
Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
- Reverts the `onClick -> onPointerUp` changes, which fixed Apple Pencil interactions of buttons with tooltips but broke things in other subtle ways.
- Adds a default `openDelay` on tooltips of 500ms. This is another way to fix Apple Pencil interactions, and according to some searching online, is the best practice for tooltips anyways. The default behaviour should be for there to be a delay, and only in specific circumstances should there be no delay. So we'll see how this is received.
The color picker takes some time to sample the color from the canvas state. This could cause a race condition where the cursor position changes between when sampling starts and when it finishes, resulting in the picker showing the wrong color. Sometimes it picks up the color picker tool preview!
To resolve this, the color picker's color syncing is now throttled to once per animation frame. Besides fixing the incorrect color issue, it improves perf substantially by reducing the number of samples we take.
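A minimal sketch of the per-animation-frame throttle, assuming a stand-in sampling function:

```ts
type Coordinate = { x: number; y: number };
type RGBA = { r: number; g: number; b: number; a: number };

let pendingFrame: number | null = null;

// Coalesce rapid cursor moves into at most one sample per animation frame, and
// sample at the *latest* cursor position so the picked color matches the cursor.
const syncColorUnderCursor = (
  getCursorPos: () => Coordinate,
  sampleColorAt: (coord: Coordinate) => RGBA,
  setColor: (color: RGBA) => void
): void => {
  if (pendingFrame !== null) {
    return; // a sample is already scheduled for this frame
  }
  pendingFrame = requestAnimationFrame(() => {
    pendingFrame = null;
    setColor(sampleColorAt(getCursorPos()));
  });
};
```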
- Record both absolute and relative positions
- Use simpler method to get relative position
- Generalize getColorUnderCursor to be getColorAtCoordinate
We just changed all buttons to use `onPointerUp` events to fix Apple Pencil behaviour. This, plus the specific DOM layout of boards, resulted in the `onPointerUp` being triggered on a board before the drop triggered.
The app saw this as selecting the board, which then reset the gallery selection to the first image in the board. By the time you drop, the gallery selection had reset.
DOM layout slightly altered to work around this.
Currently translated at 45.4% (668 of 1470 strings)
translationBot(ui): update translation (French)
Currently translated at 33.1% (488 of 1470 strings)
translationBot(ui): update translation (French)
Currently translated at 32.5% (479 of 1470 strings)
translationBot(ui): update translation (French)
Currently translated at 30.7% (449 of 1458 strings)
translationBot(ui): update translation (French)
Currently translated at 30.2% (442 of 1460 strings)
Co-authored-by: Thomas Bolteau <thomas.bolteau50@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/fr/
Translation: InvokeAI/Web UI
## Summary
#6890 bumped torch, which caused an incompatibility with xformers when
installing with `pip install ".[xformers]"`. This PR bumps xformers.
## QA Instructions
I ran some smoke tests to confirm that generating with xformers still
works.
In my tests on an A100, there is a performance regression after bumping
xformers (2.7 it/s vs 3.2 it/s). I think it is ok to ignore this for
A100s, since users should be using torch-sdp, which is much faster (4.3
it/s). But, we should test for regression on older cards where xformers
is still recommended.
## Checklist
- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [x] _Tests added / updated (if applicable)_
- [x] _Documentation added / updated (if applicable)_
A new "session" just means to reset most settings to default values, excluding model.
There are a few things that need to be reset:
- Parameters state, except for models and things dependent on model selection (like VAE precision)
- Canvas state, except for the `modelBase`, which is dependent on the model selection
- Canvas staging area state
- LoRAs state
- HRF state
- Style presets state
We also select the canvas tab.
For new gallery sessions, we:
- Open the image viewer
- Set the right panel tab to `gallery`
And for new canvas sessions, we:
- Close the image viewer
- Set the right panel tab to `layers`
Currently translated at 24.1% (351 of 1452 strings)
translationBot(ui): update translation (French)
Currently translated at 17.9% (261 of 1452 strings)
translationBot(ui): update translation (French)
Currently translated at 17.8% (259 of 1452 strings)
translationBot(ui): update translation (French)
Currently translated at 17.5% (255 of 1452 strings)
translationBot(ui): update translation (French)
Currently translated at 10.3% (150 of 1452 strings)
Co-authored-by: Thomas Bolteau <thomas.bolteau50@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/fr/
Translation: InvokeAI/Web UI
Currently translated at 62.0% (901 of 1452 strings)
translationBot(ui): update translation (German)
Currently translated at 56.4% (819 of 1452 strings)
translationBot(ui): update translation (German)
Currently translated at 53.8% (782 of 1452 strings)
Co-authored-by: Ettore Atalan <atalanttore@googlemail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
Currently translated at 56.4% (819 of 1452 strings)
translationBot(ui): update translation (German)
Currently translated at 53.8% (782 of 1452 strings)
translationBot(ui): update translation (German)
Currently translated at 45.3% (658 of 1451 strings)
Co-authored-by: B N <berndnieschalk@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
## Summary
This PR adds support for FLUX LoRA models in kohya format with `lora_te1`
layers (i.e. CLIP LoRA layers). Previously, only transformer LoRA layers
were supported.
Example LoRA model in this format:
https://huggingface.co/cocktailpeanut/optimus
### Example
Prompt: `optimus is playing tennis in a tennis court`
Seed: 0
Without LoRA:

With LoRA:

## QA Instructions
I tested the following:
- [x] The optimus LoRA (with CLIP layers) can be applied.
- [x] FLUX LoRAs without CLIP layers still work
- [x] Loading the optimus LoRA, but applying it to the transformer
_only_ produces a different result. I.e. verified that patching the CLIP
layers is doing _something_. Ironically, the results seem better without
applying the CLIP layers. The CLIP layers seem to pull in more
background concepts. Regardless, it works.
- [x] The optimus LoRA can be applied via the Linear UI, and the output
matches results from manually constructing the workflow graph.
- [x] FLUX LoRAs without CLIP layers still work via the Linear UI.
## Checklist
- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [x] _Tests added / updated (if applicable)_
- [x] _Documentation added / updated (if applicable)_
This pops up every now and then and I could never figure it out. A user figured it out in #6936. The cause is appending a query string to the app URL.
For example:
```sh
http://127.0.0.1:9090/?__theme=dark
```
The query string breaks the static file serving, which prevents our translations from loading correctly. Instead of the JSON translations, FastAPI sends the index HTML page. The UI then errors when attempting to parse the translation JSON.
The query string `?__theme=dark` is used by Gradio to force dark mode. I believe the users with this issue are doing the same thing the user in #6936 did (just changing the port number on an existing bookmark), or their browser history/bookmark includes the query string.
Though this is technically a user-caused problem (we cannot prevent the user from using a malformed URL), we can work around it. When a query string is used on the root path, we can redirect the browser to the root path without the query string.
This is done via very simple middleware.
Closes #6696, Closes #6817, Closes #6828, Closes #6936, Closes #6983
`usePanel` started panels with a `minSize` and `defaultSize` of 0, which means collapsed. This causes panels to load as collapsed on the very first app load. Then, in the layout effect, we see the panel as collapsed and skip setting it to the correct size.
Reviewing the library's API, `minSize` and `defaultSize` should not be lower than 1. Thankfully, setting this to 1 also prevents the issue described above.
- `minSize` and `defaultSize` start at 1
- Return a sentinel value when converting percentages to pixels, if the panel's container has no size. When that happens, we should not update the `minSize` or `defaultSize`.
- Split the observer callback into its own function, so that the exact same logic can be used on the first run of the effect.
- Update prop names and docstrings to accurately reflect that the numerical values are in pixels
* restore send-to functionality
* lint
* feat(ui): add getImageMetadata helper
* feat(ui): updated usePreselectedImage logic
* fix(ui): race condition when creating & initializing canvas entity adapters
There was a race condition when the canvas was reset as it was initializing. This could occur when the "use preselected image" functionality was triggered.
It was possible to get an error (non-app-breaking) when attempting to initialize an entity:
1. Canvas initializes
2. Canvas starts creating and initializing all entities (this happens in `CanvasEntityRendererModule.render`)
3. Canvas is reset before that process finishes, clearing state
4. The method call from 2) attempts to initialize an entity that has been deleted from state and fails
Changes to fix this:
- Split `CanvasEntityRendererModule.render` into individual methods for each entity type, each with their own store subscription
- Do not `await` initialization after creating the entity adapter classes - let them initialize in the background
So the `render` method now completes very fast - quick enough that we don't run into this race condition.
It's possible that something will change in the future, and this race condition will come back. In that case, we could use mutexes in `CanvasEntityRendererModule` to prevent the failure condition. It's a bit more complicated to do that so I'm skipping it for now.
* feat(ui): export workflow library is open atom
* feat(ui): export image viewer atom
* tidy(ui): organise style presets menu state
* feat(ui): consolidate studio init actions
* build(ui): export type StudioInitAction
* feat(ui): add getStylePreset helper
* feat(ui): add toasts to useStudioInitAction
* tidy(ui): comment & minor cleanup for useStudioInitAction
* chore(ui): lint
* only show version when local
---------
Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Simplify the handle component and use the provided data attributes to style the handles correctly.
Fixes a styling issue where, if you hover at the T-junction between two handles, only one brightens up.
This unused logic was unnecessarily complicating the hook. It also inadvertently made the default panel size arg a percentage value even if it was actually a pixel value.
Cleaned up a couple other little bits.
Only change the selection array when its contents have changed. This prevents unnecessary re-renders.
For example, if the selection is currently `[image1]` and we set it again to `[image1]`, while the array contains the same objects, it is a new array. This will trigger unnecessary re-renders.
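In other words, something like (simplified):

```ts
type ImageDTO = { image_name: string };

// Only return a new selection array when the contents actually differ; otherwise
// keep the existing reference so selectors/components don't re-render.
const getNextSelection = (current: ImageDTO[], next: ImageDTO[]): ImageDTO[] => {
  const isSameSelection =
    current.length === next.length &&
    current.every((image, i) => image.image_name === next[i].image_name);
  return isSameSelection ? current : next;
};
```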
Selecting a board selects the image, and then we were selecting it again afterwards. So we programmatically select the newly generated image twice.
This can cause a race condition if the user changes the image selection between the two programmatic image selection actions. Their selection will be quickly overridden by the second programmatic selection action.
I broke this in dfac0292f4 due to a misunderstanding of what the upscale model actually was. I thought it was a main model but actually it's a spandrel model.
There's a situation in which the enqueue response comes after the graph actually executes. This was unexpected when I first wrote the logic. I suppose it has to do with the async endpoint handling.
- Update the canvas slice to track the current base model architecture instead of just the optimal dimension. This lets us derive both the optimal dimension _and_ grid size for the currently selected model.
- Update all bbox size utilities to use derived grid size instead of hardcoded values of 8 or 64
- Review every damned instance of the number 8 in the whole frontend and update the ones that need to use the grid size
- Update the invoke button blocking logic to check against scaled bbox size, unless scaling is disabled.
- Update the invoke button blocking to say whether it's the width or height that is invalid, and whether it's the bbox or the scaled bbox, for both FLUX and the T2I adapter constraints
- Use consistent logic for all model type handlers
- Fix bug where we could select invalid upscaling models (not sure how this hadn't caused problems...)
- Add logging for each action
- Only reset models when there is a change to be made - skip dispatching actions when there would be no change made to state
Previously the setting was `showOnlyRasterLayersWhileStaging`. This has been renamed to `isolatedStagingPreview`. Works the same.
Also added `isolatedFilteringPreview` and `isolatedTransformingPreview`. These work the same way, but they isolate the currently selected layer. There are toggles in the canvas settings popover _and_ the filter/transform popups (same setting).
We need to ensure the getQueueCountsByDestination query is sync'd, invalidating its tags as queue items complete. Unfortunately it's 2 extra network requests per queue item.
Also clean up some jank w/ the handling of accepting staging images - there was this no-op action & a listener for it... should just be a simple callback.
Both the vanilla and autoscale invocations report progress while processing each tile.
The autoscale version, which may run the spandrel model multiple times, also includes the current iteration.
Each of these was a bit off:
- The SD callback started at `-1` and ended at `i`. Combined w/ the weird math on the previous `calc_percentage` util, this caused the progress bar to never finish.
- The MultiDiffusion callback had the same problems as SD.
- The FLUX callback didn't emit a pre-denoising step 0 image. It also reported total_steps as 1 higher than the actual step count.
Each of these now emit the expected events to the frontend:
- The initial latents at 0%
- Progress at each step, ending at 100%
- Update the step callback methods in the invocation API to use the new signal_progress API
- Copy and update the `calc_percentage`, reducing special handling for step and total_steps - a followup commit will fix callers of the step callbacks
## Summary
This PR makes some improvements to the FLUX image-to-image and
inpainting behaviours.
Changes:
- Expand inpainting region at a cutoff timestep. This improves seam
coherence around inpainting regions.
- Add Trajectory Guidance to improve the ability to control how much an
image gets modified during image-to-image/inpainting (see the code for a
more technical explanation - it's well-documented).
## `trajectory_guidance_strength` Usage
- The `trajectory_guidance_strength` param has been added to the `FLUX
Denoise` invocation.
- `trajectory_guidance_strength` defaults to `0.0` and should be in the
range [0, 1].
- `trajectory_guidance_strength = 0.0` has no effect on the denoising
process.
- `trajectory_guidance_strength = 1.0` will guide strongly towards the
original image.
## FLUX image-to-image usage tips
- As always, prompt matters a lot.
- If you are trying to making minor perturbations to an image, use
vanilla image-to-image by setting the `denoising_start` param.
- If you are trying to make significant changes to an image, using
trajectory guidance will give more control than using vanilla
image-to-image. Set `denoising_start=0.0` and adjust
`trajectory_guidance_strength` to control the amount of change in the
image.
- The 'transition point' where the image changes the most as you adjust
`trajectory_guidance_strength` or `denoise_start` varies depending on
the noise. So, set a fixed noise seed, then tune those params.
## QA Instructions
- [x] Vanilla image-to-image - No change in output
- [x] Vanilla inpainting - No change in output
- [x] Vanilla outpainting - No change in output
- Trajectory Guidance image-to-image
- [x] TGS = 0.0 is identical to Vanilla case
- [x] TGS = 1.0 guides close to the original image
- Not as close as I'd like, but it's not broken.
- [x] Smooth transition as TGS varies
- [x] Smoke test: TGS with denoise_start > 0.0
- TG inpainting
- [x] TGS = 0.0 is identical to Vanilla case
- [x] TGS = 1.0 guides close to the original image
- Not as close as I'd like, but it's not broken
- [x] Smooth transition as TGS varies
- [x] Smoke test: TGS with denoise_start > 0.0
- TG outpainting
- [x] TGS = 0.0 is identical to Vanilla case
- [x] Smoke test TGS outpainting
- [x] Smoke test FLUX text-to-image
- [x] Preview images look ok for all of above.
## Known issues (will be addressed in follow-up PRs)
- The current TGS scale biases towards creating more change than desired
in the image. More tuning of the TG change schedule is required.
- TGS does not work very well for outpainting right now. This _might_ be
solvable, but more likely we'll just want to discourage it in the Linear
UI.
## Merge Plan
No special instructions.
## Checklist
- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [x] _Tests added / updated (if applicable)_
- [x] _Documentation added / updated (if applicable)_
## Summary
If a FLUX dev model is selected, show icon and popover telling user
about its license for commercial use
## Related Issues / Discussions
## QA Instructions
## Merge Plan
## Checklist
- [ ] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
## Summary
This PR attempts to fix a flaky FLUX LoRA unit test.
Example test failure:
https://github.com/invoke-ai/InvokeAI/actions/runs/10958325913/job/30428299328?pr=6898
The failure _seems_ to be caused by a numerical precision error, but I
haven't been able to reproduce it locally. I have reduced the tolerance
of the offending comparison, and am pretty confident that this will
solve the issue.
## QA Instructions
No QA necessary.
## Checklist
- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [x] _Tests added / updated (if applicable)_
- [x] _Documentation added / updated (if applicable)_
This can be used for nodes that Invoke uses internally. Internal nodes do not have API stability guarantees. For example, they may change if the needs of the linear UI change.
Two main changes:
- Add `runGraphAndReturnImageOutput` to `CanvasStateApiModule`. This method is a safe and convenient abstraction to execute a graph and retrieve the image output of one of its nodes. It supports cancellation (via an AbortSignal) and timeout.
- Update filters to build whole graphs, as opposed to nodes.
These changes allow:
- Filter execution is resilient, with all error cases handled (afaik)
- `CanvasEntityFilterer` class is much simpler
- Stuck or long-running filters may be canceled
- Filters may be arbitrarily complex - so long as there is one node that outputs an image, the filter will just work
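A hedged sketch of what such an abstraction might look like, with dependencies injected and types simplified (not the real API):

```ts
type ImageDTO = { image_name: string };
type Graph = { nodes: Record<string, unknown> };

type RunGraphDeps = {
  enqueue: (graph: Graph) => Promise<void>;
  waitForImageOutput: (nodeId: string, signal: AbortSignal) => Promise<ImageDTO>;
};

// Enqueue a graph and resolve with the image output of one of its nodes.
// Supports caller-driven cancellation and a timeout; any failure rejects, so the
// caller handles a single promise instead of juggling socket events.
const runGraphAndReturnImageOutput = async (
  graph: Graph,
  outputNodeId: string,
  deps: RunGraphDeps,
  options: { signal?: AbortSignal; timeoutMs?: number } = {}
): Promise<ImageDTO> => {
  const controller = new AbortController();
  const onAbort = () => controller.abort();
  options.signal?.addEventListener('abort', onAbort);
  const timer =
    options.timeoutMs !== undefined ? setTimeout(() => controller.abort(), options.timeoutMs) : undefined;
  try {
    await deps.enqueue(graph);
    return await deps.waitForImageOutput(outputNodeId, controller.signal);
  } finally {
    if (timer !== undefined) {
      clearTimeout(timer);
    }
    options.signal?.removeEventListener('abort', onAbort);
  }
};
```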
- Rename util to `getImageDTOSafe`
- Update API to accept the same options as RTKQ's `initiate`
- Add `getImageDTO`; while `getImageDTOSafe` returns null if the image is not found, the new util throws
- Update usage of `getImageDTOSafe`
* wip
* more updates for new user experience
* pull whats new out
* use loading state
* lint
* fix(ui): translation missing period
* feat(ui): create icon component for invoke logo
* feat(ui): tweaked invoke logo colors
---------
Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
## Summary
There was an issue w/ the calculation causing an infinite loop but the
fixed algorithm from #6887 wasn't correct bc it doesn't take into
account the grid gap correctly. This then breaks arrow key navigation.
- Restore the previous calculation
- Bail out if the gallery elements don't have any width, which causes
the infinite loop - this part was missed when copying the logic from
GalleryImageGrid
## Related Issues / Discussions
n/a
## QA Instructions
shouldn't freeze
## Merge Plan
n/a
## Checklist
- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
There was an issue w/ the calculation causing an infinite loop but the fixed algorithm wasn't correct bc it doesn't take into account the grid gap correctly. This then breaks arrow key navigation.
- Restore the previous calculation
- Bail out if the gallery elements don't have any width, which causes the infinite loop - this part was missed when copying the logic from GalleryImageGrid
- Allow `uploadImage` util to accept `metadata` to embed in the image
- Update compositor to support `metadata` field when uploading rasterized composite layer
- Add async zod refiner to `zImageWithDims` which fetches the image as part of validation
- Add `zServerValidatedModelIdentifierField`, a zod-refined version of `zModelIdentifierField` which fetches the model as part of validation
- Add `zCanvasMetadata` zod schema, which contains only canvas entities - no bbox, and no `isHidden` flags
- Renamed "Send to Canvas" -> "New Layer from Image"
- Added "New Canvas from Image"
This clarifies the purpose of the menu items and gives tablet users a way to easily add images to the canvas.
Also update the verbiage for the alerts:
- "Sending to Canvas" -> "Staging Generations on Canvas"
- "Sending to Gallery" -> "Sending Generations to Gallery"
- Add buttons to zoom in/out
- Update hotkeys for fit & 100% to match affinity (e.g. ctrl+0, ctrl+1)
- Add hotkeys for 200%, 400%, 800%
- Update tooltips
This mirrors affinity/photoshop's default `d` hotkey, which sets the fg/bg to white/black. We don't have a concept of "background color", and white is more useful for control images, so the hotkey sets the color to white.
- Rework hotkey data to include the keys for each hotkey action.
- Add wrapper for `useHotkeys` that accepts a hotkey category and id. Automatically selects the key from the hotkey data.
- Add handling for macOS (cmd vs ctrl, option vs alt).
- Redo all hotkey descriptions, deleting nonexistent ones.
- Some `esc` hotkeys that just close whatever you are currently in are omitted due to their relative simplicity and intuitiveness.
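A sketch of the reworked hotkey data and the platform-specific key resolution (the names and entries are illustrative):

```ts
type HotkeyData = {
  id: string;
  category: string;
  keys: string[]; // e.g. ['mod+z'] - 'mod' maps to cmd on macOS, ctrl elsewhere
};

const HOTKEYS: HotkeyData[] = [
  { id: 'undo', category: 'canvas', keys: ['mod+z'] },
  { id: 'fitToView', category: 'canvas', keys: ['mod+0'] },
];

const isMacOS = (): boolean => /Mac/.test(navigator.userAgent);

// Resolve platform-specific modifiers so one definition serves all platforms
// (a similar mapping would cover option vs alt for display purposes).
const resolveKeys = (keys: string[]): string[] =>
  keys.map((k) => k.replace('mod', isMacOS() ? 'meta' : 'ctrl'));

// Look up a hotkey by category + id; a wrapper around `useHotkeys` would consume this.
const getHotkeyKeys = (category: string, id: string): string[] => {
  const hotkey = HOTKEYS.find((h) => h.category === category && h.id === id);
  return hotkey ? resolveKeys(hotkey.keys) : [];
};
```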
This was caused by allowing the stage to be set to fractional coordinates. For example, the stage might be positioned at `x: 142.22255, y: 488.79`.
When positioned like this, the canvas will be slightly misaligned with its native pixel grid. The browser does its best, but this causes tiny scaling artifacts throughout the image. It's most noticeable where there is a sharp contrast.
This behaviour was introduced while troubleshooting an issue with degraded quality when saving canvas to gallery. Turned out the stage position was unrelated to that issue, but I didn't realize that the change would cause this other type of problem.
The fix is super simple - ensure we floor stage coords when setting them manually. Konva never sets the position to fractional coordinates itself. For example, while dragging the stage, Konva sets the stage coordinates itself, and they are always integers.
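A minimal sketch of the fix, assuming a small helper used wherever we set the stage position programmatically:
```ts
import type Konva from 'konva';

// Floor the stage position whenever we set it programmatically, so the
// canvas stays aligned with the pixel grid. Konva itself already uses
// integer coordinates while dragging.
const setStagePosition = (stage: Konva.Stage, position: { x: number; y: number }): void => {
  stage.position({ x: Math.floor(position.x), y: Math.floor(position.y) });
};
```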
## Summary
This PR adds support for FLUX LoRA models on both quantized and
non-quantized base models.
Supported formats:
- diffusers
- kohya
Full changelist:
- Consolidated LoRA handling code in `invokeai/backend/lora`
- Add support for FLUX kohya and FLUX diffusers LoRA model loading
- Add ability to either patch LoRAs or run as a sidecar model (the
latter enables LoRAs to be applied to a wide range of quantized models).
## QA Instructions
Note to reviewers: I tested everything in this checklist. Feel free to
re-verify any of this, but also test any LoRAs that you have. There are
many small LoRA format variations, and there's a risk of breaking one of
them with this change.
FLUX LoRA
- [x] Import / probe of kohya FLUX LoRA
(https://civitai.com/models/159333/pokemon-trainer-sprite-pixelart?modelVersionId=779247)
- [x] Import / probe of Diffusers FLUX LoRA
(https://civitai.com/models/200255/hands-xl-sd-15-flux1-dev?modelVersionId=781855)
- [x] kohya with non-quantized base model
- [x] kohya with quantized base model (should roughly match the
non-quantized case)
- [x] diffusers with non-quantized base model
- [x] diffusers with quantized base model (should roughly match the
non-quantized case)
- [x] Sidecar LoRA patching speed (<0.1secs after model is loaded)
- [x] Stacking multiple fused LoRA models (i.e. on top of a non-quantized model)
- [x] Stacking multiple sidecar LoRA models (i.e. on top of quantized
model)
Regression Tests
- [x] SD1.5 LoRA (check output, speed and memory)
- [x] SDXL LoRA (check output, speed and memory)
- [x] `USE_MODULAR_DENOISE=1` smoke test with LoRA
Test for output regression with the following LoRA formats:
- [x] LoRA
- [x] LoHA
- [x] LoKr
- [x] IA3
## Checklist
- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [x] _Tests added / updated (if applicable)_
- [x] _Documentation added / updated (if applicable)_
- Canvas manages its own progress socket event listeners and progress event data.
- Remove cancellations listener jank.
- Dip into low-level redux subscription API to watch for queue status changes, clearing the last "global" progress event when the queue has nothing in progress. Could also do this in a useEffect I guess.
- Had to shuffle some things around to prevent circular imports, so there are a lot of tiny changes here.
- Remove queue front button. Hold shift while clicking `Invoke` button to queue front.
- Restore queue menu actions w/ the reclaimed space.
- Simplify queue interaction hooks.
The lineart model often outputs a lot of almost-black noise. SD1.5 ControlNets seem to be OK with this, but SDXL ControlNets are not - they need a cleaner map. 12 was experimentally determined to be a good threshold, eliminating all the noise while keeping the actual edges. Other approaches to thresholding may be better, for example stretching the contrast or removing noise.
I tried:
- Simple thresholding (as implemented here) - works fine.
- Adaptive thresholding - doesn't work, because the thresholding is done in the context of small blocks, while we want thresholding in the context of the whole image.
- Gamma adjustment - alters the white values too much. Hard to tune.
- Contrast stretching, with and without pre-simple-thresholding - this allows us to threshold out the noise, then stretch everything above the threshold down to almost-zero. So you have a smoother gradient of lightness near zero. It works but it also stretches contrast near white down a bit, which is probably undesired.
In the end, simple thresholding works fine and is very simple.
The HTML Canvas context has an `imageSmoothingEnabled` property which defaults to `true`. This causes the browser canvas API to, well, apply image smoothing - everything gets antialiased when drawn.
This is, of course, problematic when our goal is to be pixel-perfect. When the same image is drawn multiple times, we get progressive image degradation.
In `CanvasEntityObjectRenderer.cloneObjectGroup()`, we use Konva's `Node.cache()` method to create a canvas from the entity's objects. Here, we were not setting `imageSmoothingEnabled` to false. This method is used very often by the compositor, so we end up feeding antialiased versions of the image data back into the canvas or generation backend.
Disabling smoothing here appears to fix the issue. I've also disabled image smoothing everywhere else we interact with a canvas rendering context.
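For illustration, this is the kind of change applied wherever we touch a 2D context (the helper name is hypothetical):
```ts
// Hypothetical helper: every time we draw to a 2D context, disable image
// smoothing first so repeated draws stay pixel-perfect instead of being
// progressively antialiased.
const drawWithoutSmoothing = (canvas: HTMLCanvasElement, image: CanvasImageSource): void => {
  const ctx = canvas.getContext('2d');
  if (!ctx) {
    return;
  }
  ctx.imageSmoothingEnabled = false;
  ctx.drawImage(image, 0, 0);
};
```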
The checkerboard background was rendered as a separate DOM element that stretched to fill the canvas container.
While the canvas width and height are always integers, this background element could have non-integer dimensions, depending on panel sizes. As a result, it could be slightly larger than the canvas, introducing a fine border around the canvas.
This is purely a visual issue, but it's very noticeable when you use the bbox overlay. It also can be noticed with masks that extend beyond the edge of the visible canvas.
- Refactor the checkerboard background to be rendered by the canvas instead of as a DOM element, resolving the issue.
- Add a helper method to get the scaled rect of the stage, updating a few places where we need such a rect.
- Rename `CanvasStageModule.getScaledPixels` method to `unscale`, clarifying its purpose.
Track various canvas states:
- Filtering an entity
- Transforming an entity
- Rasterizing an entity
- Compositing
- Busy (derived from all of the above)
Also track individual entity states:
- Locked
- Disabled
- All entities of its type are hidden
- Has objects
- Interactable (derived from all of the above)
These states then gate various actions. For example:
- Cannot invoke while the canvas is busy.
- Cannot transform, filter, duplicate, or delete when the canvas is busy.
Tool interaction restrictions are not yet implemented.
## Summary
This PR splits the lora.py monolith into separate files. The main
motivation for doing this in a standalone PR is to make the diffs more
interpretable in the [upcoming
changes](https://github.com/invoke-ai/InvokeAI/compare/main...ryan/flux-lora-sidecar)
to support LoRAs for FLUX.
This PR does not make any functional changes - it just moves files
around and changes import paths.
## QA Instructions
I smoke tested generation with LoRA, LoHA, LoKr, and IA3.
## Merge Plan
No special instructions. Merge on approval.
## Checklist
- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [x] _Tests added / updated (if applicable)_
- [x] _Documentation added / updated (if applicable)_
- Add backcompat for cnet model default settings
- Default filter selection based on model type
- Updated UI components to use new filter nodes
- Added handling for failed filter executions, preventing the filter from getting stuck if it fails for some reason
- New translations for all filters & fields
Use a generic to narrow the `type` field from `string` to a literal. Now you can do e.g. `adapter.type === 'control_layer_adapter'` and TS narrows the type.
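A rough illustration of the narrowing (type and field names are illustrative, not the actual adapter classes):
```ts
// The adapter type carries its `type` as a literal via a generic default.
interface Adapter<T extends string = string> {
  id: string;
  type: T;
}

// Illustrative concrete adapter types.
type ControlLayerAdapter = Adapter<'control_layer_adapter'> & { weight: number };
type IPAdapter = Adapter<'ip_adapter'> & { imageName: string | null };

const describeAdapter = (adapter: ControlLayerAdapter | IPAdapter): string => {
  if (adapter.type === 'control_layer_adapter') {
    // TS has narrowed `adapter` to ControlLayerAdapter here.
    return `control layer, weight ${adapter.weight}`;
  }
  return `ip adapter, image ${adapter.imageName ?? 'none'}`;
};
```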
Similar to the existing node, but without any resizing. The backend logic was consolidated and modified so that the model loading can be managed by the model manager.
The ONNX Runtime `InferenceSession` class was added to the `AnyModel` union to satisfy the type checker.
Similar to the existing node, but without any resizing and with a revised model loading API that uses the model manager.
All code related to the invocation now lives in the Invoke repo.
Similar to the existing node, but without any resizing and with a revised model loading API that uses the model manager.
All code related to the invocation now lives in the Invoke repo. Unfortunately, this includes a whole git repo for EfficientNet. I believe we could use the package `timm` instead of this, but it's beyond me.
Similar to the existing node, but without any resizing and with a revised model loading API that uses the model manager.
All code related to the invocation now lives in the Invoke repo.
Similar to the existing node, but without any resizing and with a revised model loading API that uses the model manager.
All code related to the invocation now lives in the Invoke repo.
So far, this includes:
- Save Canvas to Gallery
- Save Bbox to Gallery
- Send Bbox to Regional IP Adapter
- Send Bbox to Global IP Adapter
- Send Bbox to Control Layer
- Send Bbox to Raster Layer
To prevent losing all ephemeral canvas state when switching tabs, we will refrain from destroying the canvas manager instance when its tab unmounts, and use the existing canvas manager instance on mount, if there is one.
One small change required in `CanvasStageModule` - a `setContainer` method to update the konva stage DOM element.
- Add `reset` functionality
- Rename badly named `autoPreviewFilter` to `autoProcessFilter`
- Do not process filter when starting, unless `autoProcessFilter` is enabled
This includes some fixes for the composite number input component's local value handling, resolving an infinite recursion problem when an invalid value is set.
Snap can be any of off, 8px or 64px.
The snap is used when moving and transforming entities.
When transforming and locking aspect ratio, the snap is ignored entirely, because we'd change the aspect ratio if we forced the snap.
Otherwise, if we are not locking aspect ratio (e.g. the user is holding shift), we snap the transform anchors to the grid.
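A minimal sketch of the snapping rule, assuming a plain helper over a single coordinate (names are illustrative):
```ts
type SnapSetting = 'off' | 8 | 64;

// Snap a coordinate to the active grid, skipping the snap when the aspect
// ratio is locked (forcing the snap would change the aspect ratio).
const snapCoord = (value: number, snap: SnapSetting, isAspectRatioLocked: boolean): number => {
  if (snap === 'off' || isAspectRatioLocked) {
    return value;
  }
  return Math.round(value / snap) * snap;
};
```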
Realized we can use listener middleware to respond to _actions_, as opposed to using the redux store subscription to respond to _state changes_... This might simplify some things.
Using this pattern here.
Only hiccup - there's a TS issue preventing this from being added to the state api module. The `addListener` method has an overloaded type signature and TS cannot extract the overloaded arg type using `Parameters<T>`. As a result, if we try to wrap this, we end up with a broken TS signature for the wrapper method.
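A hedged sketch of the listener middleware pattern, using a stand-in action rather than the actual canvas action:
```ts
import { createAction, createListenerMiddleware } from '@reduxjs/toolkit';

const listenerMiddleware = createListenerMiddleware();

// Stand-in action - not the actual canvas action.
const entityRasterized = createAction<{ entityId: string }>('canvas/entityRasterized');

listenerMiddleware.startListening({
  actionCreator: entityRasterized,
  effect: (action) => {
    // Runs only when this action is dispatched - no store.subscribe + manual
    // state diffing required.
    console.log('rasterized', action.payload.entityId);
  },
});

// listenerMiddleware.middleware must then be added when configuring the store,
// e.g. getDefaultMiddleware().prepend(listenerMiddleware.middleware).
```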
There's a race condition where we sometimes get progress events from canceled queue items, depending on the timing of the cancellation request and last event or two from the queue item.
I can't imagine how to resolve this except by tracking all cancellations and ignoring events for cancelled items, which is implemented in this change.
- Add selectors to get the default control adapter and ip adapter with model, preferring controlnet over t2i adapter for model
- Add hooks to add each entity type, using the defaults
- Add hooks to add prompts/ip adapters to a regional guidance layer
- Use the defaults in other places where we add control layers or ip adapters (e.g. dnd-triggered entity creation)
- Each entity gets its own `CanvasEntityFilterer`
- Add auto-preview feature to filter, debounced by 1000ms leading + trailing
- Fix flash when preview updates
When resetting the canvas or staging area, we don't want to cancel generations that are going to the gallery - only those going to the canvas.
Thus the method should not cancel by origin, but instead cancel by destination.
Update the queue method and route.
Use the min of each pixel's alpha value and lightness for the output alpha. This prevents artifacts when using the transparency effect, especially with non-black pixels with low alpha.
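An illustrative sketch of the rule over RGBA pixel data - not the actual implementation, and lightness is approximated here as the max channel value:
```ts
// Apply the transparency rule in place to RGBA pixel data: output alpha is
// min(alpha, lightness), where lightness is approximated by the max channel.
const applyTransparencyEffect = (data: Uint8ClampedArray): void => {
  for (let i = 0; i < data.length; i += 4) {
    const lightness = Math.max(data[i], data[i + 1], data[i + 2]);
    data[i + 3] = Math.min(data[i + 3], lightness);
  }
};
```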
- Rely on redux + reselect more
- Remove all nanostores that simply "mirrored" redux state in favor of direct subscriptions to redux store
- Add abstractions for creating redux subs and running selectors
- Add `initialize` method to CanvasModuleBase, for post-instantiation tasks
- Reduce local caching of state in modules to a minimum
Big cleanup. Makes these classes easier to implement, lots of comments and docstrings to clarify how it all works.
- Add default implementations for `destroy`, `repr` and `getLoggingContext`
- Tidy individual module configs
- Update `CanvasManager.buildLogger` to accept a canvas module as the arg
- Add `CanvasManager.buildPath`
TBH not sure exactly why this broke. Fixed by rolling back the use of a render prop in favor of global state. Also revised the API of `useBoolean` and `buildUseBoolean`.
- Canvas generation mode is replaced with a boolean `sendToCanvas` flag. When off, images generated on the canvas go to the gallery. When on, they get added to the staging area.
- When an image result is received, if its destination is the canvas, staging is automatically started.
- Updated queue list to show the destination column.
- Added `IconSwitch` component to represent binary choices, used for the new `sendToCanvas` flag and image viewer toggle.
- Remove the queue actions menu in `QueueControls`. Move the queue count badge to the cancel button.
- Redo layout of `QueueControls` to prevent duplicate queue count badges.
- Fix issue where gallery and options panels could show thru transparent regions of queue tab.
- Disable panel hotkeys when on mm/queue tabs.
The frontend needs to know where queue items came from (i.e. which tab), and where results are going to (i.e. send images to gallery or canvas). The `origin` column is not quite enough to represent this cleanly.
A `destination` column provides the frontend what it needs to handle incoming generations.
This hook forcibly updates _all_ portals with `data-hidden=true` when the modal opens - then reverts it when the modal closes. It's intended to help screen readers. Unfortunately, this absolutely tanks performance because we have many portals. React needs to do a lot of layout calculations (not re-renders).
IMO this behaviour is a bug in chakra. The modals which generated the portals are hidden by default, so this data attr should really be set by default. Dunno why it isn't.
Previously this badge, floating over the queue menu button next to the invoke button, was rendered within the existing layout. When I initially positioned it, the app layout interfered - it would extend into an area reserved for a flex gap, which cut off the badge.
As a (bad) workaround, I had shifted the whole app down a few pixels to make room for it. What I should have done is what I've done in this commit - render the badge in a portal to take it out of the layout so we don't need that extra vertical padding.
Sleekified some styling a bit too.
The canvas size was dynamic based on the container div's size. When the div was hidden (e.g. when selecting another tab), the container's effective size is 0. This resulted in the preview image canvas being drawn at a scale of 0.
Fixed by using an absolute size for the canvas container.
- Add lock toggle
- Tweak lock and enabled styles
- Update entity list action bar w/ delete & delete all
- Move add layer menu to action bar
- Adjust opacity slider style
- Throttle pushing to history for actions of the same type, starting with 1000ms throttle.
- History has a limit of 64 items, same as workflow editor
- Add clear history button
- Fix an issue where entity transformers would reset the entity state when the entity is fully transparent, resetting the redo stack. This could happen when you undo to the starting state of a layer
I learned that the inline selector syntax recreates the selector function on every render:
```ts
const val = useAppSelector((s) => s.slice.val)
```
Not good! Better is to create a selector outside the function and use it, as sketched below. Doing that for all selectors now; most of the way through. Feels snappier.
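The preferred pattern looks roughly like this; `useAppSelector` and `RootState` are the app's typed redux hook/type, assumed here:
```ts
import { createSelector } from '@reduxjs/toolkit';

// Defined once at module scope, so the selector identity is stable.
const selectVal = (s: RootState) => s.slice.val;
// Memoize derived data so it only recomputes when the input changes.
const selectValLabel = createSelector(selectVal, (val) => `value: ${val}`);

const MyComponent = () => {
  const val = useAppSelector(selectVal);
  const label = useAppSelector(selectValLabel);
  // ...render with val / label
};
```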
Things like `$lastCursorPos` are now created within the canvas drawing classes. Consumers in react access them via `useCanvasManager`.
For example:
```tsx
const canvasManager = useCanvasManager();
const lastCursorPos = useStore(canvasManager.stateApi.$lastCursorPos);
```
Previously, canvas actions specific to an entity type only needed the id of that entity type. This allowed you to pass in the id of an entity of the wrong type.
All actions for a specific entity now take a full entity identifier, and the entity identifier type can be narrowed.
`selectEntity` and `selectEntityOrThrow` now need a full entity identifier, and narrow their return values to a specific entity type _if_ the entity identifier is narrowed.
The types for canvas entities are updated with optional type parameters for this purpose.
All reducers, actions and components have been updated.
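A rough sketch of the narrowed entity identifier idea (type names are illustrative):
```ts
type CanvasEntityType = 'raster_layer' | 'control_layer' | 'regional_guidance' | 'inpaint_mask';

// The identifier carries the entity type as a (narrowable) literal.
type CanvasEntityIdentifier<T extends CanvasEntityType = CanvasEntityType> = {
  id: string;
  type: T;
};

// An action payload that only accepts control layer identifiers - passing a
// raster layer identifier is now a type error.
type ControlLayerModelChangedPayload = {
  entityIdentifier: CanvasEntityIdentifier<'control_layer'>;
  modelKey: string;
};
```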
While we lose the benefit of the caches persisting across reloads, this is a much simpler way to handle things. If we need a persistent cache, we can explore it in the future.
- use `stable-hash` to generate stable, non-crypto hashes for cache entries, instead of using deep object comparisons (sketched below)
- use an object to store image name caches
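A hedged sketch of the caching approach (argument shapes and helper names are hypothetical):
```ts
import hash from 'stable-hash';

// Caches keyed by a stable, non-crypto hash of the query args.
const imageNameCache: Record<string, string[]> = {};

const getCachedImageNames = (arg: { board_id: string; offset: number }): string[] | undefined =>
  imageNameCache[hash(arg)];

const setCachedImageNames = (arg: { board_id: string; offset: number }, names: string[]): void => {
  imageNameCache[hash(arg)] = names;
};
```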
Sequence of events causing the race condition:
- Enqueue batch
- Invalidate `SessionQueueStatus` tag
- Request updated queue status via HTTP - batch still processing at this point
- Batch completes
- Event emitted saying so
- Optimistically update the queue status cache, it is correct
- HTTP request makes it back and overwrites the optimistic update, indicating the batch is still in progress
Fixed by not invalidating the cache.
Download events and invocation status events (including progress images) are very frequent. There's no real need for these to pass through redux. Handling them outside redux is a significant performance win - far fewer store subscription calls, far fewer trips through middleware.
All event handling is moved outside middleware. Cleanup of unused actions and listeners to follow.
- create a context for entity identifiers, massively simplifying UI for each entity in the list
- consolidate common redux actions
- remove now-unused code
The origin is an optional field indicating the queue item's origin. For example, "canvas" when the queue item originated from the canvas or "workflows" when the queue item originated from the workflows tab. If omitted, we assume the queue item originated from the API directly.
- Add migration to add the nullable column to the `session_queue` table.
- Update relevant event payloads with the new field.
- Add `cancel_by_origin` method to `session_queue` service and corresponding route. This is required for the canvas to bail out early when staging images.
- Add `origin` to both `SessionQueueItem` and `Batch` - it needs to be provided initially via the batch and then passed onto the queue item.
Instead of chaining konva `find` and `findOne` methods, all konva nodes are added to a mapping object. Finding and manipulating them is much simpler.
Done for regions and layers, wip for control adapters.
Subscribe to redux store directly, skipping all the react overhead.
With react in dev mode, a typical frame while using the brush tool on almost-empty canvas is reduced from ~7.5ms to ~3.5ms. All things considered, this still feels slow, but it's a massive improvement.
- Create separate object types for brush and eraser lines, instead of a single type that has a `tool` field (see the sketch after this list).
- Create new object type for rect shapes.
- Add logic to schemas to migrate old object types to new.
- Update renderers & reducers.
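For illustration, this is the kind of discriminated union the split produces (field names are illustrative):
```ts
type CanvasBrushLine = { type: 'brush_line'; points: number[]; strokeWidth: number; color: string };
type CanvasEraserLine = { type: 'eraser_line'; points: number[]; strokeWidth: number };
type CanvasRect = { type: 'rect'; x: number; y: number; width: number; height: number; color: string };

type CanvasObject = CanvasBrushLine | CanvasEraserLine | CanvasRect;

// Renderers/reducers can now discriminate on `type` instead of a `tool` field.
const isEraserLine = (obj: CanvasObject): obj is CanvasEraserLine => obj.type === 'eraser_line';
```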
Currently translated at 98.2% (1350 of 1374 strings)
translationBot(ui): update translation (Italian)
Currently translated at 98.2% (1350 of 1374 strings)
translationBot(ui): update translation (Italian)
Currently translated at 98.2% (1350 of 1374 strings)
translationBot(ui): update translation (Italian)
Currently translated at 98.4% (1349 of 1370 strings)
translationBot(ui): update translation (Italian)
Currently translated at 98.4% (1348 of 1369 strings)
Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
* [MM] add API routes for getting & setting MM cache sizes, and retrieving MM stats
* Update invokeai/app/api/routers/model_manager.py
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
* code cleanup after @ryand review
* Update invokeai/app/api/routers/model_manager.py
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
* fix merge conflicts; tested and working
---------
Co-authored-by: Lincoln Stein <lstein@gmail.com>
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
## Summary
This PR adds support for Image-to-Image and inpainting workflows with
the FLUX model.
Full changelog:
- Split out `FLUX VAE Encode` and `FLUX VAE Decode` nodes
- Renamed `FLUX Text-to-Image` node to `FLUX Denoise` (since it now
supports image-to-image too). This is a workflow-breaking change.
- Added support for FLUX image-to-image via the `Latents` param on the
FLUX denoising node.
- Added support for FLUX masked inpainting via the `Denoise Mask` param
on the FLUX denoising node.
- Added "Denoise Start" and "Denoise End" params to the "FLUX Denoise"
node.
- Updated the "FLUX Text to Image" default workflow.
- Added a "FLUX Image to Image" default workflow.
### Example
FLUX inpainting workflow
<img width="1282" alt="image"
src="https://github.com/user-attachments/assets/86fc1170-e620-4412-8fd8-e119f875fc2e">
Input image

Mask

Output image

### Callouts for reviewers:
- I renamed FLUXTextToImageInvocation -> FLUXDenoisingInvocation. This
is, of course, a breaking change. It feels like the right move and now
is the right time to do it. Any objection?
- I added new `FLUX VAE Encode` and `FLUX VAE Decode` nodes.
Alternatively, I could have tried to match these names to the
corresponding SD nodes (e.g. `FLUX Image to Latents`, `FLUX Latents to
Image`). Personally, I prefer the current names, but want to hear other
opinions.
### Usage notes:
- With the default dev timestep scheduler, the image structure is
largely determined in the first ~3 steps. A consequence of this is that
the denoise_start parameter provides limited 'granularity' of control.
This will likely be improved in the future as we add more scheduler
options. In the meantime, you will likely want to use small values for
`denoise_start` (e.g. 0.03) to start denoising on step ~1-4 out of ~30.
- Currently, there is no 'noise' parameter on the `FLUX Denoise` node,
so the `denoise_end` parameter has limited utility. This will be added
in the future.
## QA Instructions
Test the following workflows:
- [x] Vanilla FLUX text-to-image behaviour is unchanged
- [x] Image-to-image with FLUX dev, no mask
- [x] Image-to-image with FLUX dev, with mask
- [x] Image-to-image with FLUX schnell, no mask (smoke test, not
expected to work well)
## Merge Plan
No special instructions.
## Checklist
- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [x] _Tests added / updated (if applicable)_
- [x] _Documentation added / updated (if applicable)_
Allocates the specified amount of VRAM, or allocates enough VRAM such that you have the specified amount of VRAM free.
Useful to simulate an environment with a specific amount of VRAM.
## Summary
This PR contains several improvements to memory management for FLUX
workflows.
It is now possible to achieve better FLUX model caching performance, but
this still requires users to manually configure their `ram`/`vram`
settings. E.g. a `vram` setting of 16.0 should allow for all quantized
FLUX models to be kept in memory on the GPU.
Changes:
- Check the size of a model on disk and free the requisite space in the
model cache before loading it. (This behaviour existed previously, but
was removed in https://github.com/invoke-ai/InvokeAI/pull/6072/files.
The removal did not seem to be intentional).
- Removed the hack to free 24GB of space in the cache before loading the
FLUX model.
- Split the T5 embedding and CLIP embedding steps into separate
functions so that the two models don't both have to be held in RAM at
the same time.
- Fix a bug in `InvokeLinear8bitLt` that was causing some tensors to be
left on the GPU when the model was offloaded to the CPU. (This class is
getting very messy due to the non-standard state_dict handling in
`bnb.nn.Linear8bitLt`. )
- Tidy up some dtype handling in FluxTextToImageInvocation to avoid
situations where we hold references to two copies of the same tensor
unnecessarily.
- (minor) Misc cleanup of ModelCache: improve docs and remove unused
vars.
Future:
We should revisit our default ram/vram configs. The current defaults are
very conservative, and users could see major performance improvements
from tuning these values.
## QA Instructions
I tested the FLUX workflow with the following configurations and
verified that the cache hit rates and memory usage matched the expected
behaviour:
- `ram = 16` and `vram = 16`
- `ram = 16` and `vram = 1`
- `ram = 1` and `vram = 1`
Note that the changes in this PR are not isolated to FLUX. Since we now
check the size of models on disk, we may see slight changes in model
cache offload patterns for other models as well.
## Checklist
- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [x] _Tests added / updated (if applicable)_
- [x] _Documentation added / updated (if applicable)_
## Summary
- Add selectedStylePreset to app parameters
<!--A description of the changes in this PR. Include the kind of change
(fix, feature, docs, etc), the "why" and the "how". Screenshots or
videos are useful for frontend changes.-->
## Related Issues / Discussions
<!--WHEN APPLICABLE: List any related issues or discussions on github or
discord. If this PR closes an issue, please use the "Closes #1234"
format, so that the issue will be automatically closed when the PR
merges.-->
## QA Instructions
<!--WHEN APPLICABLE: Describe how you have tested the changes in this
PR. Provide enough detail that a reviewer can reproduce your tests.-->
## Merge Plan
<!--WHEN APPLICABLE: Large PRs, or PRs that touch sensitive things like
DB schemas, may need some care when merging. For example, a careful
rebase by the change author, timing to not interfere with a pending
release, or a message to contributors on discord after merging.-->
## Checklist
- [ ] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
These two scripts are broken and can cause data loss. Remove them.
They are not in the launcher script, but _are_ available to users in the terminal/file browser.
Hopefully, when we remove them here, `pip` will delete them on next installation of the package...
The root cause was the active style preset not being reset when it was deleted, or no longer present in the list of style presets.
- Add extra reducer to `stylePresetSlice` to reset the active preset if it is deleted or otherwise unavailable
- Update the dynamic prompts listener to trigger on delete/update/list of style presets
When invoke.sh is executed using a symlink with a working directory outside of InvokeAI's root directory, it will fail.
invoke.sh attempts to cd into the correct directory at the start of the script, but will cd into the directory of the symlink instead. This commit fixes that.
## Summary
Adds option to download all prompt templates to a CSV
## Related Issues / Discussions
<!--WHEN APPLICABLE: List any related issues or discussions on github or
discord. If this PR closes an issue, please use the "Closes #1234"
format, so that the issue will be automatically closed when the PR
merges.-->
## QA Instructions
<!--WHEN APPLICABLE: Describe how you have tested the changes in this
PR. Provide enough detail that a reviewer can reproduce your tests.-->
## Merge Plan
<!--WHEN APPLICABLE: Large PRs, or PRs that touch sensitive things like
DB schemas, may need some care when merging. For example, a careful
rebase by the change author, timing to not interfere with a pending
release, or a message to contributors on discord after merging.-->
## Checklist
- [ ] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
## Summary
added a base prop for selectedWorkflow to allow loading a workflow on
launch
<!--A description of the changes in this PR. Include the kind of change
(fix, feature, docs, etc), the "why" and the "how". Screenshots or
videos are useful for frontend changes.-->
## Related Issues / Discussions
<!--WHEN APPLICABLE: List any related issues or discussions on github or
discord. If this PR closes an issue, please use the "Closes #1234"
format, so that the issue will be automatically closed when the PR
merges.-->
## QA Instructions
can test by loading InvokeAIUI with a selectedWorkflow prop of the
workflow ID
<!--WHEN APPLICABLE: Describe how you have tested the changes in this
PR. Provide enough detail that a reviewer can reproduce your tests.-->
## Merge Plan
<!--WHEN APPLICABLE: Large PRs, or PRs that touch sensitive things like
DB schemas, may need some care when merging. For example, a careful
rebase by the change author, timing to not interfere with a pending
release, or a message to contributors on discord after merging.-->
## Checklist
- [ ] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
- Enforce name is present and not an empty string
- Provide empty string as default for positive and negative prompt
- Add `positive_prompt` as validation alias for `prompt` field
- Strip whitespace automatically
- Create `TypeAdapter` to validate the whole list in one go
Currently translated at 98.5% (1336 of 1355 strings)
translationBot(ui): update translation (Italian)
Currently translated at 98.5% (1302 of 1321 strings)
translationBot(ui): update translation (Italian)
Currently translated at 98.6% (1302 of 1320 strings)
Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
## Summary
Adds prompt templates to the UI. Demo video is attached.
* added default prompt templates to seed database on startup (these
cannot be edited or deleted by users via the UI)
* can create fresh prompt template, create from an image in gallery that
has prompt metadata, or copy an existing prompt template and modify
* if a template is active, can view what your prompt will be invoked as
by switching to "view mode"
https://github.com/user-attachments/assets/32d84e0c-b04c-48da-bae5-aa6eb685d209
## Related Issues / Discussions
<!--WHEN APPLICABLE: List any related issues or discussions on github or
discord. If this PR closes an issue, please use the "Closes #1234"
format, so that the issue will be automatically closed when the PR
merges.-->
## QA Instructions
<!--WHEN APPLICABLE: Describe how you have tested the changes in this
PR. Provide enough detail that a reviewer can reproduce your tests.-->
## Merge Plan
<!--WHEN APPLICABLE: Large PRs, or PRs that touch sensitive things like
DB schemas, may need some care when merging. For example, a careful
rebase by the change author, timing to not interfere with a pending
release, or a message to contributors on discord after merging.-->
## Checklist
- [ ] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
Around the time we (I) implemented pydantic events, I noticed a short pause between progress images every 4 or 5 steps when generating with SDXL. It didn't happen with SD1.5, but I did notice that with SD1.5, we'd get 4 or 5 progress events simultaneously. I'd expect one event every ~25ms, matching my it/s with SD1.5. Mysterious!
Digging in, I found an issue is related to our use of a synchronous queue for events. When the event queue is empty, we must call `asyncio.sleep` before checking again. We were sleeping for 100ms.
Said another way, every time we clear the event queue, we have to wait 100ms before another event can be dispatched, even if it is put on the queue immediately after we start waiting. In practice, this means our events get buffered into batches, dispatched once every 100ms.
This explains why I was getting batches of 4 or 5 SD1.5 progress events at once, but not the intermittent SDXL delay.
But this 100ms wait has another effect when the events are put on the queue in intervals that don't perfectly line up with the 100ms wait. This is most noticeable when the time between events is >100ms, and can add up to 100ms delay before the event is dispatched.
For example, say the queue is empty and we start a 100ms wait. Then, immediately after - like 0.01ms later - we push an event on to the queue. We still need to wait another 99.9ms before that event will be dispatched. That's the SDXL delay.
The easy fix is to reduce the sleep to something like 0.01 seconds, but this feels kinda dirty. Can't we just wait on the queue and dispatch every event immediately? Not with the normal synchronous queue - but we can with `asyncio.Queue`.
I switched the events queue to use `asyncio.Queue` (as seen in this commit), which lets us asynchronously wait on the queue in a loop.
Unfortunately, I ran into another issue - events now felt like their timing was inconsistent, but in a different way than with the 100ms sleep. The time between pushing events on the queue and dispatching them was not consistently ~0ms as I'd expect - it was highly variable from ~0ms up to ~100ms.
This is resolved by passing the asyncio loop directly into the events service and using its methods to create the task and interact with the queue. I don't fully understand why this resolved the issue, because either way we are interacting with the same event loop (as shown by `asyncio.get_running_loop()`). I suppose there's some scheduling magic happening.
There's a FastAPI bug that results in the OpenAPI spec outputting the same operation id for each operation when specifying multiple HTTP methods.
- Discussion: https://github.com/tiangolo/fastapi/discussions/8449
- Pending PR to fix: https://github.com/tiangolo/fastapi/pull/10694
In our case, we have a `get_image_full` endpoint that handles GET and HEAD.
This results in an invalid OpenAPI schema. A workaround is to use two route decorators for the operation handler. This works as expected - HEAD requests get the header, and GET requests get the resource. And the OpenAPI schema is valid.
- Updated the previous DepthAnything manual implementation to use the
`transformers` implementation instead. So we can get upstream features.
- Plugged in the DepthAnything models to be handled by Invoke's Model
Manager.
- `small_v2` model will use DepthAnythingV2. This has been added as a
new model option and is now also the default in the Linear UI.

# Merge
Review and merge.
Currently translated at 98.6% (1303 of 1321 strings)
translationBot(ui): update translation (Italian)
Currently translated at 98.6% (1302 of 1320 strings)
translationBot(ui): update translation (Italian)
Currently translated at 98.6% (1294 of 1312 strings)
Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
There was a problem w/ this release on windows and the builds were pulled from pypi. When installing invoke on windows, pip attempts to build from source, but most (all?) systems won't have the prerequisites for this and installs fail.
This also affects GH actions.
The simple fix is to exclude version 3.9.1 from our deps.
For more information, see https://github.com/matplotlib/matplotlib/issues/28551
## Summary
This PR enables Grounded SAM workflows
(https://arxiv.org/pdf/2401.14159) via the following:
- `GroundingDinoInvocation` for running a Grounding DINO model.
- `SegmentAnythingModelInvocation` for running a SAM model.
- `MaskTensorToImageInvocation` for convenient visualization.
Other notes:
- Uses the transformers implementation of Grounding DINO and SAM.
- The new models are treated as 'utility models' meaning that they are
not visible in the Models tab, and are downloaded automatically the
first time that they are used.
<img width="874" alt="image"
src="https://github.com/user-attachments/assets/1cbaa97d-0e27-4943-86b1-dc7327ba8675">
## Example
Input image

Prompt: "wheels", all other configs default
Result:

## Related Issues / Discussions
Thanks to @blessedcoolant for the initial draft here:
https://github.com/invoke-ai/InvokeAI/pull/6678
## QA Instructions
Manual tests:
- [ ] Test that default settings work well.
- [ ] Test with / without apply_polygon_refinement
- [ ] Test mask_filter options
- [ ] Test detection_threshold values
- [ ] Test RGB input image
- [ ] Test RGBA input image
- [ ] Test grayscale input image
- [ ] Smoke test that an empty mask is returned when 0 objects are
detected
- [ ] Test on CPU
- [ ] Test on MPS (Works on Mac OS, but had to force both models to run
on CPU instead of MPS)
Performance:
- Peak GPU memory utilization with both Grounding DINO and SAM models
loaded is ~4.5GB. (The models do not need to be loaded at the same time,
so could be offloaded by the MM if needed.)
- On an RTX4090, with the models already cached, node execution takes
~0.6 secs.
- On my CPU, with the models cached, node execution takes ~10secs.
## Merge Plan
No special instructions.
## Checklist
- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [x] _Documentation added / updated (if applicable)_
## Summary
- we want a way to load the studio while being directed to a specific
tab, introduced a destination prop to achieve that
<!--A description of the changes in this PR. Include the kind of change
(fix, feature, docs, etc), the "why" and the "how". Screenshots or
videos are useful for frontend changes.-->
## Related Issues / Discussions
<!--WHEN APPLICABLE: List any related issues or discussions on github or
discord. If this PR closes an issue, please use the "Closes #1234"
format, so that the issue will be automatically closed when the PR
merges.-->
## QA Instructions
<!--WHEN APPLICABLE: Describe how you have tested the changes in this
PR. Provide enough detail that a reviewer can reproduce your tests.-->
## Merge Plan
<!--WHEN APPLICABLE: Large PRs, or PRs that touch sensitive things like
DB schemas, may need some care when merging. For example, a careful
rebase by the change author, timing to not interfere with a pending
release, or a message to contributors on discord after merging.-->
## Checklist
- [ ] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
## Summary
Code for lora patching from #6577.
Additionally, the patching was changed so that a LoRA can patch not only `weight` but also `bias`, because some LoRAs do this.
## Related Issues / Discussions
#6606
https://invokeai.notion.site/Modular-Stable-Diffusion-Backend-Design-Document-e8952daab5d5472faecdc4a72d377b0d
## QA Instructions
Run with and without set `USE_MODULAR_DENOISE` environment.
## Merge Plan
Replace old lora patcher with new after review done.
If you think that there should be some kind of tests - feel free to add.
## Checklist
- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
## Summary
The gradient mask node outputs a mask tensor with values in the range [-1, 1], which is an unexpected range for a mask.
The denoise node handles it in a way that translates it to a [0, 2] mask, which looks even more wrong.
From discussion with @dunkeroni, I understand he thought that negative values would be treated the same as 0, so clamping the values does not change the intended node logic.
## Related Issues / Discussions
#6643
## QA Instructions
\-
## Merge Plan
\-
## Checklist
- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
## Summary
Add karras variants of `deis`, `unipc`, `kdpm2` and `kdpm_2_a`
schedulers.
Also added `dpmpp_3` schedulers, but `dpmpp_3s` is currently bugged, so only the 3m variant was added:
https://github.com/huggingface/diffusers/issues/9007
## Related Issues / Discussions
\-
## QA Instructions
\-
## Merge Plan
~@psychedelicious We need to decide what to do with schedulers order, as
it looks a bit broken:~

## Checklist
- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
## Summary
<!--A description of the changes in this PR. Include the kind of change
(fix, feature, docs, etc), the "why" and the "how". Screenshots or
videos are useful for frontend changes.-->
## Related Issues / Discussions
<!--WHEN APPLICABLE: List any related issues or discussions on github or
discord. If this PR closes an issue, please use the "Closes #1234"
format, so that the issue will be automatically closed when the PR
merges.-->
## QA Instructions
<!--WHEN APPLICABLE: Describe how you have tested the changes in this
PR. Provide enough detail that a reviewer can reproduce your tests.-->
## Merge Plan
<!--WHEN APPLICABLE: Large PRs, or PRs that touch sensitive things like
DB schemas, may need some care when merging. For example, a careful
rebase by the change author, timing to not interfere with a pending
release, or a message to contributors on discord after merging.-->
## Checklist
- [ ] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
## Summary
Code for inpainting and inpaint models handling from
https://github.com/invoke-ai/InvokeAI/pull/6577.
Separated into 2 extensions as discussed briefly before, so this awaits discussion about the implementation approach.
## Related Issues / Discussions
#6606
https://invokeai.notion.site/Modular-Stable-Diffusion-Backend-Design-Document-e8952daab5d5472faecdc4a72d377b0d
## QA Instructions
Run with and without set `USE_MODULAR_DENOISE` environment.
Try and compare outputs between backends in cases:
- Normal generation on inpaint model
- Inpainting on inpaint model
- Inpainting on normal model
## Merge Plan
Nope.
If you think that there should be some kind of tests - feel free to add.
## Checklist
- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
We were checking the selected and auto-add board ids against the query cache to see if they still exist. If not, we reset.
This only works if the query cache is updated by the time we do the check - race condition!
We already have the board id from the query args, so there's no need to check the query cache - just compare the deleted board ID directly.
Previously, this file's several listeners were all combined into a single listener, and when I adapted/split its logic up I did so a bit wonkily, introducing these problems.
The logic was incorrect in two ways:
1. We only ran the logic if we _enable_ showing archived boards. It should run when we _disable_ showing archived boards.
2. If we couldn't find the selected board in the query cache, we didn't do the reset. This is wrong - if the board isn't in the query cache, we _should_ do the reset. This inverted logic makes more sense before the fix for issue 1.
## Summary
T2I Adapter code from #6577.
## Related Issues / Discussions
#6606
https://invokeai.notion.site/Modular-Stable-Diffusion-Backend-Design-Document-e8952daab5d5472faecdc4a72d377b0d
## QA Instructions
Run with and without set `USE_MODULAR_DENOISE` environment.
## Merge Plan
Nope.
If you think that there should be some kind of tests - feel free to add.
## Checklist
- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
## Summary
Seamless code from #6577.
## Related Issues / Discussions
#6606
https://invokeai.notion.site/Modular-Stable-Diffusion-Backend-Design-Document-e8952daab5d5472faecdc4a72d377b0d
## QA Instructions
Run with and without set `USE_MODULAR_DENOISE` environment.
## Merge Plan
Nope.
If you think that there should be some kind of tests - feel free to add.
## Checklist
- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
The model edit UI's composition allows for the model edit form to be instantiated before the model's config has been received. This results in the form having no values - all the fields are blank instead of populated by the model config.
Part of the fix is to pass the model config around directly instead of relying on _all_ components to fetch the model directly.
I also fixed a crapload of performance issues related to improper use of redux selectors.
Problems this was causing:
- Deleting an edge that was a copy of another edge deletes both edges
- Deleting a node that was a copy-with-edges of another node deletes its edges and its original edges, leaving what I will call "ghost noodles" behind
Previously you could spam the next/prev buttons and really thrash the server. Throttled to 500ms, which feels like a happy medium between responsive and not-thrash-y.
- Autofocus on popover open
- Autoselect number on popover open
- Enter works to change page when input is focused
- Esc works to close popover when input is focused
It was possible to clear the search term while a debounced setSearchTerm is still pending. This resulted in the gallery getting out of sync w/ the search term.
To fix this, we need to lift the state up a bit and cancel any pending debounced setSearchTerm calls when closing the search or clearing the search term box.
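A minimal sketch of the fix, assuming a lodash-style debounced setter (names are illustrative):
```ts
import { debounce } from 'lodash-es';

const makeSearchHandlers = (setSearchTerm: (term: string) => void) => {
  const debouncedSetSearchTerm = debounce(setSearchTerm, 300);

  const onChange = (term: string) => debouncedSetSearchTerm(term);

  const onClear = () => {
    // Cancel any pending debounced update so it cannot overwrite the cleared term.
    debouncedSetSearchTerm.cancel();
    setSearchTerm('');
  };

  return { onChange, onClear };
};
```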
`spandrel_image_to_image` now just runs the model with no changes.
`spandrel_image_to_image_autoscale` runs the model repeatedly until the desired scale is reached. Previously, `spandrel_image_to_image` did this.
* [MM2] replace untyped config dict passed to install_model with typed ModelRecordChanges
- adjusted frontend to work with new schema
- used this facility to assign "starter model" names and descriptions to the installed
models.
* documentation fix
* [MM2] replace untyped config dict passed to install_model with typed ModelRecordChanges
- adjusted frontend to work with new schema
- used this facility to assign "starter model" names and descriptions to the installed
models.
* documentation fix
* remove v9 pnpm lockfile
* [MM2] replace untyped config dict passed to install_model with typed ModelRecordChanges
- adjusted frontend to work with new schema
- used this facility to assign "starter model" names and descriptions to the installed
models.
* [MM2] replace untyped config dict passed to install_model with typed ModelRecordChanges
- adjusted frontend to work with new schema
- used this facility to assign "starter model" names and descriptions to the installed
models.
* remove v9 pnpm lockfile
* regenerate schema.ts
* prettified
---------
Co-authored-by: Lincoln Stein <lstein@gmail.com>
## Summary
Update Simple Upscale Button to work with spandrel models, add
UpscaleWarning when models aren't available, clean up ESRGAN logic
<!--A description of the changes in this PR. Include the kind of change
(fix, feature, docs, etc), the "why" and the "how". Screenshots or
videos are useful for frontend changes.-->
## Related Issues / Discussions
<!--WHEN APPLICABLE: List any related issues or discussions on github or
discord. If this PR closes an issue, please use the "Closes #1234"
format, so that the issue will be automatically closed when the PR
merges.-->
## QA Instructions
<!--WHEN APPLICABLE: Describe how you have tested the changes in this
PR. Provide enough detail that a reviewer can reproduce your tests.-->
## Merge Plan
<!--WHEN APPLICABLE: Large PRs, or PRs that touch sensitive things like
DB schemas, may need some care when merging. For example, a careful
rebase by the change author, timing to not interfere with a pending
release, or a message to contributors on discord after merging.-->
## Checklist
- [ ] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
## Summary
ControlNet code from #6577.
## Related Issues / Discussions
#6606
https://invokeai.notion.site/Modular-Stable-Diffusion-Backend-Design-Document-e8952daab5d5472faecdc4a72d377b0d
## QA Instructions
Run with and without set `USE_MODULAR_DENOISE` environment.
## Merge Plan
Merge #6641 firstly, to be able see output difference properly.
If you think that there should be some kind of tests - feel free to add.
## Checklist
- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
## Summary
Rescale CFG code from #6577.
## Related Issues / Discussions
#6606
https://invokeai.notion.site/Modular-Stable-Diffusion-Backend-Design-Document-e8952daab5d5472faecdc4a72d377b0d
## QA Instructions
Run with and without set `USE_MODULAR_DENOISE` environment.
~~Note: for some reason there is slightly different output from run to run, but I am sometimes able to get the same output on main and this branch.~~
Fix presented in #6641.
## Merge Plan
~~Nope.~~ Merge #6641 firstly, to be able see output difference
properly.
If you think that there should be some kind of tests - feel free to add.
## Checklist
- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
## Summary
- Currently the total for uncategorized images is not updated when moving or deleting images; this change updates that count when those actions are made.
<!--A description of the changes in this PR. Include the kind of change
(fix, feature, docs, etc), the "why" and the "how". Screenshots or
videos are useful for frontend changes.-->
## Related Issues / Discussions
<!--WHEN APPLICABLE: List any related issues or discussions on github or
discord. If this PR closes an issue, please use the "Closes #1234"
format, so that the issue will be automatically closed when the PR
merges.-->
## QA Instructions
<!--WHEN APPLICABLE: Describe how you have tested the changes in this
PR. Provide enough detail that a reviewer can reproduce your tests.-->
## Merge Plan
<!--WHEN APPLICABLE: Large PRs, or PRs that touch sensitive things like
DB schemas, may need some care when merging. For example, a careful
rebase by the change author, timing to not interfere with a pending
release, or a message to contributors on discord after merging.-->
## Checklist
- [ ] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
## Summary
Fix function call that we forgot to update in #6606
## QA Instructions
Run a TiledMultiDiffusionDenoiseLatents invocation and make sure it
doesn't crash.
## Checklist
- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [x] _Tests added / updated (if applicable)_
- [x] _Documentation added / updated (if applicable)_
## Summary
Base code of new modular backend from #6577.
Contains normal generation and regional prompts support.
Also preview extension included to test if extensions logic works.
## Related Issues / Discussions
https://invokeai.notion.site/Modular-Stable-Diffusion-Backend-Design-Document-e8952daab5d5472faecdc4a72d377b0d
## QA Instructions
Run with and without set `USE_MODULAR_DENOISE` environment.
Currently only normal and regional conditionings supported, so just
generate some images and compare with main output.
## Merge Plan
Discuss a bit more about injection point names?
For example, if in the future the unet becomes overridable, the current `pre_unet`/`post_unet` naming assumes the override is named `unet`, which feels a bit odd.
Also, for `apply_cfg` - a future implementation could ignore or not use CFG, so in that case `combine_noise_predictions`/`combine_noise` seems more suitable.
## Checklist
- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [x] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
## Summary
This PR adds some spandrel upscale models to the starter model list.
In the future we may also want to add:
- Some DAT models
(https://drive.google.com/drive/folders/1iBdf_-LVZuz_PAbFtuxSKd_11RL1YKxM)
## QA Instructions
I installed the starter models via the model manager UI, and tested that
I could use them in a workflow.
## Merge Plan
- [ ] Merge the preceding Spandrel PRs first, then change the target
branch to `main`.
## Checklist
- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [x] _Tests added / updated (if applicable)_
- [x] _Documentation added / updated (if applicable)_
## Summary
Add tiling to the `SpandrelImageToImageInvocation` node so that it can
process large images.
Tiling enables this node to run on effectively any input image
dimension. Of course, the computation time increases quadratically with
the image dimension.
Some profiling results on an RTX4090:
- Input 1024x1024, 4x upscale, 4x UltraSharp ESRGAN: `13 secs`, `<4 GB
VRAM`
- Input 4096x4096, 4x upscale, 4x UltraSharp ESRGAN: `46 secs`, `<4 GB
VRAM`
- Input 4096x4096, 2x upscale, SwinIR: `165 secs`, `<5 GB VRAM`
A lot of the time is spent PNG encoding the final image:
- PNG encoding of a 16384x16384 image takes `83secs @
pil_compress_level=7`, `24secs @ pil_compress_level=1`
Callout: If we want to start building workflows that pass large images
between nodes, we are going to have to find a way to avoid the PNG
encode/decode roundtrip that we are currently doing. As is, we will be
incurring a huge penalty for every node that receives/produces a large
image.
## QA Instructions
- [x] Tested with tiling up to 4096x4096 -> 16384x16384.
- [x] Test on images with an alpha channel (the alpha channel is
dropped).
- [x] Test on images with odd dimension.
- [x] Test no tiling (`tile_size=0`)
## Merge Plan
- [x] Merge #6556 first, and change the target branch to `main`.
## Checklist
- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [x] _Documentation added / updated (if applicable)_
## Summary
- Add support for all
[spandrel](https://github.com/chaiNNer-org/spandrel) image-to-image
models - this is a collection of many popular super-resolution models
(e.g. ESRGAN, Real-ESRGAN, SwinIR, DAT, etc.)
Examples of supported models:
- DAT:
https://drive.google.com/drive/folders/1iBdf_-LVZuz_PAbFtuxSKd_11RL1YKxM
- SwinIR: https://github.com/JingyunLiang/SwinIR/releases
- Any ESRGAN / Real-ESRGAN model
## Related Issues
Closes #6394
## QA Instructions
- [x] Test that unsupported models still fail the probe (i.e. no false
positive spandrel models)
- [x] Test adding a few non-spandrel model types
- [x] Test adding a handful of spandrel model types: ESRGAN,
Real-ESRGAN, SwinIR, DAT
- [x] Verify model size estimation for the model cache
- [x] Test using the spandrel models in a practical image upscaling
workflow
## Merge Plan
- [x] Get approval from @brandonrising and @maryhipp before merging -
this PR has commercial implications.
- [x] Merge #6571 and change the target branch to `main`
## Checklist
- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [x] _Tests added / updated (if applicable)_
- [x] _Documentation added / updated (if applicable)_
In #6490 we enabled non-blocking torch device transfers throughout the model manager's memory management code. When using this torch feature, torch attempts to wait until the tensor transfer has completed before allowing any access to the tensor. Theoretically, that should make this a safe feature to use.
This provides a small performance improvement but causes race conditions in some situations. Specific platforms/systems are affected, and complicated data dependencies can make this unsafe.
- Intermittent black images on MPS devices - reported on discord and #6545, fixed with special handling in #6549.
- Intermittent OOMs and black images on a P4000 GPU on Windows - reported in #6613, fixed in this commit.
On my system, I haven't experienced any issues with generation, but targeted testing of non-blocking ops did expose a race condition when moving tensors from CUDA to CPU.
One workaround is to use torch streams with manual sync points. Our application logic is complicated enough that this would be a lot of work and feels ripe for edge cases and missed spots.
Much safer is to fully revert non-blocking transfers - which is what this change does.
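For illustration, a minimal standalone sketch (not InvokeAI code) of the class of race condition the targeted testing exposed when moving tensors from CUDA to CPU without a sync point:
```
import torch

src = torch.randn(4096, 4096, device="cuda")

# Non-blocking device-to-host copy: the call can return before the copy is done.
dst = src.to("cpu", non_blocking=True)

# Reading `dst` here may observe incomplete data. A manual sync point is
# required before the CPU tensor is safe to use:
torch.cuda.synchronize()
checksum = dst.sum()
```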
This issue is caused by a race condition. When a large image is served to the client, it is done using a streaming `FileResponse`. This concurrently serves the image straight from disk. The file is kept open by FastAPI until the image is fully served.
When a user deletes an image before the file is done serving, the delete fails because the file is still held by FastAPI.
To reproduce the issue:
- Create a very large image (8k reliably creates the issue).
- Create a smaller image, so that the first image in the gallery is not the large image.
- Refresh the app. The small image should be selected.
- Select the large image and immediately delete it. You have to be fast, to delete it before it finishes loading.
- In the terminal, we expect to see an error saying `Failed to delete image file`, and the image does not disappear from the UI.
- After a short wait, once the image has fully loaded, try deleting it again. We expect this to work.
The workaround is to instead serve the image from memory.
Loading the image to memory is very fast, so there is only a tiny window in which we could create the race condition, but it technically could still occur, because FastAPI is asynchronous and handles requests concurrently.
Once we load the image into memory, deletions of that image will work. Then we return a normal `Response` object with the image bytes. This is essentially what `FileResponse` does - except it uses `anyio.open_file`, which is async.
The tradeoff is that the server thread is blocked while opening the file. I think this is a fair tradeoff.
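A minimal sketch of the workaround, assuming a FastAPI router; the route path and the images directory are illustrative, not the actual endpoint:
```
from pathlib import Path

from fastapi import APIRouter, Response

router = APIRouter()
IMAGES_DIR = Path("outputs/images")  # hypothetical location of stored images

@router.get("/i/{image_name}/full")  # illustrative path, not the real route
async def get_image_full(image_name: str) -> Response:
    image_path = IMAGES_DIR / image_name
    # Read the file eagerly so the handle is closed before the response is
    # sent; this briefly blocks the server thread, as noted above.
    with open(image_path, "rb") as f:
        content = f.read()
    return Response(content=content, media_type="image/png")
```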
A future enhancement could be to implement soft deletion of images (db is already set up for this), and then clean up deleted image files on startup/shutdown. We could move back to using the async `FileResponse` for best responsiveness in the server without any risk of race conditions.
For some reason, I started getting this indefinite hang when the app checks if port 9090 is available. After some fiddling around, I found that adding a timeout resolves the issue.
I confirmed that the util still works by starting the app on 9090, then starting a second instance. The second instance correctly saw 9090 in use and moved to 9091.
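A minimal sketch of a port-availability probe with a timeout, in the spirit of the fix (the helper name and one-second timeout are illustrative):
```
import socket

def is_port_in_use(host: str, port: int, timeout: float = 1.0) -> bool:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)  # avoids the indefinite hang on connect
        return sock.connect_ex((host, port)) == 0

# Probe 9090 first; fall back to 9091 if it is already taken.
port = 9090 if not is_port_in_use("127.0.0.1", 9090) else 9091
```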
## Summary
This PR changes the handling of invalid model configs in the DB to log a
warning rather than crashing the app.
This change is being made in preparation for some upcoming new model
additions. Previously, if a user rolled back from an app version that
added a new model type, the app would not launch until the DB was fixed.
This PR changes this behaviour to allow rollbacks of this type (with
warnings).
**Keep in mind that this change is only helpful to users _rolling back
to a version that has this fix_. I.e. it offers no help in the first
version that includes it.**
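A minimal sketch of the behaviour change - not the actual implementation; the tiny `ModelConfig` stand-in and the rows are illustrative of an older schema meeting a newer model type:
```
import logging
from typing import Literal

from pydantic import BaseModel, ValidationError

logger = logging.getLogger(__name__)

class ModelConfig(BaseModel):
    """Stand-in for the real, much larger model config schema."""
    key: str
    type: Literal["main", "vae", "lora"]  # an older schema without the new type

rows = [
    {"key": "abc", "type": "main"},
    {"key": "def", "type": "spandrel_image_to_image"},  # added by a newer version
]

configs = []
for row in rows:
    try:
        configs.append(ModelConfig(**row))
    except ValidationError as e:
        # Previously this would propagate and crash the app on launch.
        logger.warning("Skipping invalid model config %s: %s", row["key"], e)
```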
## QA Instructions
1. Run the Spandrel model branch, which adds a new model type
https://github.com/invoke-ai/InvokeAI/pull/6556.
2. Add a spandrel model via the model manager.
3. Rollback to main. The app will crash on launch due to the invalid
spandrel model config.
4. Checkout this branch. The app should now run with warnings about the
invalid model config.
## Checklist
- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [x] _Documentation added / updated (if applicable)_
Currently translated at 100.0% (1282 of 1282 strings)
translationBot(ui): update translation (Russian)
Currently translated at 100.0% (1280 of 1280 strings)
translationBot(ui): update translation (Russian)
Currently translated at 100.0% (1275 of 1275 strings)
translationBot(ui): update translation (Russian)
Currently translated at 100.0% (1273 of 1273 strings)
Co-authored-by: Васянатор <ilabulanov339@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/
Translation: InvokeAI/Web UI
Currently translated at 98.2% (1260 of 1282 strings)
translationBot(ui): update translation (Italian)
Currently translated at 98.4% (1260 of 1280 strings)
translationBot(ui): update translation (Italian)
Currently translated at 98.4% (1255 of 1275 strings)
translationBot(ui): update translation (Italian)
Currently translated at 98.4% (1253 of 1273 strings)
translationBot(ui): update translation (Italian)
Currently translated at 98.4% (1245 of 1265 strings)
Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
- Refine layout
- Update colors - more minimal, fewer shaded boxes
- Add indicator for search icons showing a search term is entered
- Handle new `projectName` and `projectUrl` ui props
## Summary
Update Boards UI in the gallery and adds support for creating and
displaying private boards
<!--A description of the changes in this PR. Include the kind of change
(fix, feature, docs, etc), the "why" and the "how". Screenshots or
videos are useful for frontend changes.-->
## Related Issues / Discussions
<!--WHEN APPLICABLE: List any related issues or discussions on github or
discord. If this PR closes an issue, please use the "Closes #1234"
format, so that the issue will be automatically closed when the PR
merges.-->
## QA Instructions
Can view private boards by setting config.allowPrivateBoards to true
<!--WHEN APPLICABLE: Describe how you have tested the changes in this
PR. Provide enough detail that a reviewer can reproduce your tests.-->
## Merge Plan
<!--WHEN APPLICABLE: Large PRs, or PRs that touch sensitive things like
DB schemas, may need some care when merging. For example, a careful
rebase by the change author, timing to not interfere with a pending
release, or a message to contributors on discord after merging.-->
## Checklist
- [ ] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
## Summary
Demote error log to warning for models treated as having size 0.
## Related Issues / Discussions
Closes#6587
I looked into handling ESRGAN model sizes properly. They load a
state_dict with a bit of an unusual nested-dict structure. Rather than
figure out how to accurately calculate their size, we can just wait for
https://github.com/invoke-ai/InvokeAI/pull/6556. ESRGAN model size
handling should work properly when loaded through that pathway.
## QA Instructions
Loaded an ESRGAN model, and confirmed that the warning log is at the
warning level.
## Merge Plan
No special instructions.
## Checklist
- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [x] _Tests added / updated (if applicable)_
- [x] _Documentation added / updated (if applicable)_
This commit corrects a broken link on line 16 that was pointing to the latest release but causing a 404 error (page not found) when clicked. The issue was identified as a trailing dot at the end of the URL, which has now been removed. This ensures users can access the intended latest release page.
## Summary
This PR tweaks the wording of the PR template QA instructions with the
goals of:
1. Make it more clear that PR authors are responsible for testing their
PRs.
2. Encouraging sufficient detail in the test descriptions.
## Checklist
- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [x] _Tests added / updated (if applicable)_
- [x] _Documentation added / updated (if applicable)_
## Summary
Delete an unused duplicate libc_util.py file. The active version is at
`invokeai/backend/model_manager/libc_util.py`
## QA Instructions
I ran a smoke test to confirm that memory snapshotting still works.
## Merge Plan
- [x] Change target branch to `main` before merging.
## Checklist
- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [x] _Tests added / updated (if applicable)_
- [x] _Documentation added / updated (if applicable)_
## Summary
This PR migrates all relative imports to absolute imports, and adds a
ruff check to enforce this going forward.
The justification for this change is here:
https://github.com/invoke-ai/InvokeAI/issues/6575
## QA Instructions
Smoke test all common workflows. Most of the relative -> absolute
conversions could be completed automatically, so the risk is relatively
low.
## Merge Plan
As with any far-reaching change like this, it is likely to cause some
merge conflicts with some in-flight branches. Unfortunately, there's no
way around this, but let me know if you can think of in-flight work that
will be significantly disrupted by this.
## Checklist
- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [x] _Tests added / updated (if applicable)_ N/A
- [x] _Documentation added / updated (if applicable)_ N/A
## Summary
This PR fixes a regression that caused the following models to be
treated as having size 0 in the model cache: `(TextualInversionModelRaw,
IPAdapter, LoRAModelRaw)`.
Changes:
- Call the correct model size calculation for all supported model types.
- Log an error message if an unexpected model type is loaded, to prevent
similar regressions in the future.
## QA Instructions
I tested the following features and verified that no models fell back to
using a size of 0 unexpectedly:
- Text-to-image
- Textual Inversion
- LoRA
- IP-Adapter
- ControlNet
(All tested with both SD1.5 and SDXL.)
I compared the model cache switching behavior before and after this
change with a large number of LoRAs (10). Since LoRAs are small compared
to the main models, the changes in behaviour are minimal. Nonetheless,
it makes sense to get this in for correctness. And it might make a
difference for some usage patterns with limited RAM.
## Merge Plan
No special instructions.
## Checklist
- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [x] _Tests added / updated (if applicable)_
- [x] _Documentation added / updated (if applicable)_
- For single image deletion, select the image in the same slot as the deleted image
- For multiple image deletion, empty selection
- On list images, if no images are currently selected, select the first image
## Summary
- This PR exposes a `tile_size` field on `ImageToLatentsInvocation` and
`LatentsToImageInvocation`.
- Setting `tile_size = 0` preserves the default behaviour.
- This feature is primarily intended to support upscaling workflows that
require VAE encoding/decoding high resolution images. In the future, we
may want to expose the tile size as a global application config, but
that's a separate conversation.
- As a general rule, larger tile sizes produce better results at the
cost of higher memory usage.
### Example:
Original (5472x5472)

VAE roundtrip with 512x512 tiles (note the discoloration)

VAE roundtrip with 1024x1024 tiles (some discoloration still present,
but less severe than at 512x512)

## Related Issues / Discussions
Related: #6144
## QA Instructions
- [x] Test image generation via the Linear tab
- [x] Test VAE roundtrip with tiling disabled
- [x] Test VAE roundtrip with tiling and tile_size = 0
- [x] Test VAE roundtrip with tiling and tile_size > 0
## Merge Plan
No special instructions.
## Checklist
- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [x] _Tests added / updated (if applicable)_
- [x] _Documentation added / updated (if applicable)_
The selection logic is a bit complicated. We have image selection and pagination, both of which can be triggered using the mouse or hotkeys. We have viewer image selection and comparison image selection, which is determined by the alt key.
This change ties the room together with these behaviours:
- Changing the page using pagination buttons never changes the selection.
- Changing the selected image using arrows may change the page, if the arrow key pressed would select an image off the current page.
- `right` on the last image of the current page goes to the next page
- `down` on the last row of images goes to the next page
- `left` on the first image of the current page goes to the previous page
- `up` on the first row of images goes to the previous page
- If `alt` is held when using arrow keys, we change the page, but we only change the comparison image selection.
- When using arrow keys, if the page has changed since the last image was selected, the selection is reset to the first image on the page.
- The next/previous buttons on the image viewer do the same thing as `left` and `right` without `alt`.
- When clicking an image in the gallery:
- If no modifier keys are held, the image is exclusively selected.
- If `ctrl` or `meta` are held, the image's selection status is toggled.
- If `shift` is held, all images from the last-selected image to the image are selected. If there are no images on the current page, the selection is unchanged.
- If `alt` is held, the image is set as the compare image.
- `ctrl+a` and `meta+a` add the current page to the selection.
The logic for gallery navigation and selection is now pretty hairy. It's spread across 3 hooks, a listener, a redux slice, and several components.
When we next make changes to this part of the app, we should consider consolidating some of the related logic. Probably most of it can go into a single listener and make it much simpler to grok.
Don't like this UI (even though I suggested it). No need to prevent the user from interacting with the search term field during fetching. Let's figure out a nicer way to present this in a followup.
## Summary
Python 3.11 has a wonderfully devious breaking change where _sometimes_
enum classes that inherit from `str` or `int` do not work the same way
as they do in 3.10 when used within string formatting/interpolation.
This breaks the new gallery sort queries. The fix is to use
`order_dir.value` instead of `order_dir` in the query.
This was not an issue during development because the feature was
developed w/ python 3.10.
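A minimal repro of the behaviour change and the fix; the enum name is an illustrative stand-in for the gallery sort enum:
```
from enum import Enum

class OrderDir(str, Enum):  # stand-in for the real sort-direction enum
    ASC = "ASC"
    DESC = "DESC"

order_dir = OrderDir.DESC

# Python 3.10: f"ORDER BY created_at {order_dir}" -> "ORDER BY created_at DESC"
# Python 3.11: the same f-string renders          -> "ORDER BY created_at OrderDir.DESC"
# Using .value is correct on both versions:
query = f"SELECT image_name FROM images ORDER BY created_at {order_dir.value};"
```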
## Related Issues / Discussions
Thanks to @JPPhoto for reporting and troubleshooting:
https://discord.com/channels/1020123559063990373/1149513625321603162/1256211815982039173
## QA Instructions
JP's fancy python 3.11 system should work on this PR.
## Merge Plan
n/a
## Checklist
- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
If the currently selected or auto-add board is archived or deleted, we should reset them. There are some edge cases that weren't handled in the previous implementation.
All handling of this logic is moved to the (renamed) listener.
Before this change, if you attempted to create an image with a nonexistent board, we'd get an unhandled error when adding the image to the board. Due to the structure of the code, the record would be created but the file would not.
With this change, we now log a warning if we have a problem adding the image to the board, but the record and file are still created.
A future improvement would be to create a transaction for this part of the code, preventing some other situation that could result in only the record or only the file being saved.
* use model_class.load_singlefile() instead of converting; works, but performance is poor
* adjust the convert api - not right just yet
* working, needs sql migrator update
* rename migration_11 before conflict merge with main
* Update invokeai/backend/model_manager/load/model_loaders/stable_diffusion.py
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
* Update invokeai/backend/model_manager/load/model_loaders/stable_diffusion.py
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
* implement lightweight version-by-version config migration
* simplified config schema migration code
* associate sdxl config with sdxl VAEs
* remove use of original_config_file in load_single_file()
---------
Co-authored-by: Lincoln Stein <lstein@gmail.com>
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
## Summary
We can get black outputs when moving tensors from CPU to MPS. It appears
MPS to CPU is fine. See:
- https://github.com/pytorch/pytorch/issues/107455
-
https://discuss.pytorch.org/t/should-we-set-non-blocking-to-true/38234/28
Changes:
- Add properties for each device on `TorchDevice` as a convenience.
- Add `get_non_blocking` static method on `TorchDevice`. This utility
takes a torch device and returns the flag to be used for non_blocking
when moving a tensor to the device provided.
- Update model patching and caching APIs to use this new utility.
## Related Issues / Discussions
Fixes: #6545
## QA Instructions
For both MPS and CUDA:
- Generate at least 5 images using LoRAs
- Generate at least 5 images using IP Adapters
## Merge Plan
We have pagination merged into `main` but aren't ready for that to be
released.
Once this fix is tested and merged, we will probably want to create a
`v4.2.5post1` branch off the `v4.2.5` tag, cherry-pick the fix and do a
release from the hotfix branch.
## Checklist
- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_ @RyanJDick @lstein This
feels testable but I'm not sure how.
- [ ] _Documentation added / updated (if applicable)_
We can get black outputs when moving tensors from CPU to MPS. It appears MPS to CPU is fine. See:
- https://github.com/pytorch/pytorch/issues/107455
- https://discuss.pytorch.org/t/should-we-set-non-blocking-to-true/38234/28
Changes:
- Add properties for each device on `TorchDevice` as a convenience.
- Add `get_non_blocking` static method on `TorchDevice`. This utility takes a torch device and returns the flag to be used for non_blocking when moving a tensor to the device provided.
- Update model patching and caching APIs to use this new utility.
Fixes: #6545
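A minimal sketch of the `get_non_blocking` utility described above, assuming its semantics follow the MPS findings (never non-blocking when the destination is MPS):
```
import torch

class TorchDevice:
    @staticmethod
    def get_non_blocking(to_device: torch.device) -> bool:
        """Return the non_blocking flag to use when moving a tensor to `to_device`."""
        # CPU -> MPS non-blocking copies can produce black outputs, so only
        # allow non-blocking transfers for non-MPS destinations.
        return to_device.type != "mps"

# Usage when moving weights:
# tensor = tensor.to(device, non_blocking=TorchDevice.get_non_blocking(device))
```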
We only need to show the totals in the tooltip. Tooltips accept a component for the tooltip label. The component isn't rendered until the tooltip is triggered.
Move the board total fetching into a tooltip component for the boards. Now we only fire these requests when the user mouses over the board.
- Simplify the gallery layout
- Set an initial gallery limit to load _some_ images immediately.
- Refactor the resize observer to use the actual rendered image component to calculate the number of images per row/col. This prevents inaccuracies caused by image padding that could result in the wrong number of images.
- Debounce the limit update to not thrash the API
- Use absolute positioning trick to ensure the gallery container is always exactly the right size
- Minimum of `imagesPerRow` images loaded at all times
This is one of those unexpected CSS quirks. Flex containers need min-width or min-height for their children to not overflow. Add `minH={0}` to gallery container.
## Summary
https://github.com/invoke-ai/InvokeAI/pull/6522 introduced a change in
behavior in cases where start/end were set such that there are 0
timesteps. This PR reverts that change.
cc @StAlKeR7779
## QA Instructions
Run with euler, 5 steps, start: 0.0, end: 0.05. I ran this test before
#6522, after #6522, and on this branch. This branch restores the
behavior to pre-#6522 i.e. noise is injected even if no denoising steps
are applied.
## Checklist
- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [x] _Tests added / updated (if applicable)_
- [x] _Documentation added / updated (if applicable)_
* add support for probing and loading SDXL VAE checkpoint files
* broaden regexp probe for SDXL VAEs
---------
Co-authored-by: Lincoln Stein <lstein@gmail.com>
## Summary
No functional changes, just cleaning some things up as I touch the code.
This PR cleans up the `SilenceWarnings` context manager:
- Fix type errors
- Enable SilenceWarnings to be used as both a context manager and a
decorator
- Remove duplicate implementation
- Check the initial verbosity on `__enter__()` rather than `__init__()`
- Save an indentation level in DenoiseLatents
## QA Instructions
I generated an image to confirm that warnings are still muted.
## Merge Plan
- [x] ⚠️ Merge https://github.com/invoke-ai/InvokeAI/pull/6492 first,
then change the target branch to `main`.
## Checklist
- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [x] _Tests added / updated (if applicable)_
- [x] _Documentation added / updated (if applicable)_
- Fix type errors
- Enable SilenceWarnings to be used as both a context manager and a decorator
- Remove duplicate implementation
- Check the initial verbosity on __enter__() rather than __init__()
## Summary
added route to install huggingface models from model marketplace
<!--A description of the changes in this PR. Include the kind of change
(fix, feature, docs, etc), the "why" and the "how". Screenshots or
videos are useful for frontend changes.-->
## Related Issues / Discussions
<!--WHEN APPLICABLE: List any related issues or discussions on github or
discord. If this PR closes an issue, please use the "Closes #1234"
format, so that the issue will be automatically closed when the PR
merges.-->
## QA Instructions
test by going to
http://localhost:5173/api/v2/models/install/huggingface?source=${hfRepo}
<!--WHEN APPLICABLE: Describe how we can test the changes in this PR.-->
## Merge Plan
<!--WHEN APPLICABLE: Large PRs, or PRs that touch sensitive things like
DB schemas, may need some care when merging. For example, a careful
rebase by the change author, timing to not interfere with a pending
release, or a message to contributors on discord after merging.-->
## Checklist
- [ ] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
When a model install is initiated from outside the client, we now trigger the model manager tab's model install list to update.
- Handle new `model_install_download_started` event
- Handle `model_install_download_complete` event (this event is not new but was never handled)
- Update optimistic updates/cache invalidation logic to efficiently update the model install list
Previously, we used `model_install_download_progress` for both download starting and progressing. When handling this event, we don't know which actual thing it represents.
Add `model_install_download_started` event to explicitly represent a model download started event.
* allow model patcher to optimize away the unpatching step when feasible
* remove lazy_offloading functionality
* allow model patcher to optimize away the unpatching step when feasible
* remove lazy_offloading functionality
* do not save original weights if there is a CPU copy of state dict
* Update invokeai/backend/model_manager/load/load_base.py
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
* documentation fixes requested during penultimate review
* add non-blocking=True parameters to several torch.nn.Module.to() calls, for slight performance increases
* fix ruff errors
* prevent crash on non-cuda-enabled systems
---------
Co-authored-by: Lincoln Stein <lstein@gmail.com>
Co-authored-by: Kent Keirsey <31807370+hipsterusername@users.noreply.github.com>
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
## Summary
This adds three model manager-related methods to the InvocationContext
uniform API. They are accessible via `context.models.*`:
1. **`load_local_model(model_path: Path, loader:
Optional[Callable[[Path], AnyModel]] = None) ->
LoadedModelWithoutConfig`**
*Load the model located at the indicated path.*
This will load a local model (.safetensors, .ckpt or diffusers
directory) into the model manager RAM cache and return its
`LoadedModelWithoutConfig`. If the optional loader argument is provided,
the loader will be invoked to load the model into memory. Otherwise the
method will call `safetensors.torch.load_file()`, `torch.load()` (with a
pickle scan), or `from_pretrained()` as appropriate to the path type.
Be aware that the `LoadedModelWithoutConfig` object differs from
`LoadedModel` by having no `config` attribute.
Here is an example of usage:
```
def invoke(self, context: InvocationContext) -> ImageOutput:
model_path = Path('/opt/models/RealESRGAN_x4plus.pth')
loadnet = context.models.load_local_model(model_path)
with loadnet as loadnet_model:
upscaler = RealESRGAN(loadnet=loadnet_model,...)
```
---
2. **`load_remote_model(source: str | AnyHttpUrl, loader:
Optional[Callable[[Path], AnyModel]] = None) ->
LoadedModelWithoutConfig`**
*Load the model located at the indicated URL or repo_id.*
This is similar to `load_local_model()` but it accepts either a
HuggingFace repo_id (as a string), or a URL. The model's file(s) will be
downloaded to `models/.download_cache` and then loaded, returning a
`LoadedModelWithoutConfig`. Here is an example of usage:
```
def invoke(self, context: InvocationContext) -> ImageOutput:
model_url = 'https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth'
loadnet = context.models.load_remote_model(model_url)
with loadnet as loadnet_model:
upscaler = RealESRGAN(loadnet=loadnet_model,...)
```
---
3. **`download_and_cache_model( source: str | AnyHttpUrl, access_token:
Optional[str] = None, timeout: Optional[int] = 0) -> Path`**
Download the model file located at source to the models cache and return
its Path. This will check `models/.download_cache` for the desired model
file and download it from the indicated source if not already present.
The local Path to the downloaded file is then returned.
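Here is a usage sketch, following the same pattern as the examples above (the URL is the same Real-ESRGAN release asset):
```
def invoke(self, context: InvocationContext) -> ImageOutput:
    model_url = 'https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth'
    model_path = context.models.download_and_cache_model(model_url)
    # model_path points at the cached file under models/.download_cache
```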
---
## Other Changes
This PR performs a migration, in which it renames `models/.cache` to
`models/.convert_cache`, and migrates previously-downloaded ESRGAN,
openpose, DepthAnything and Lama inpaint models from the `models/core`
directory into `models/.download_cache`.
There are a number of legacy model files in `models/core`, such as
GFPGAN, which are no longer used. This PR deletes them and tidies up the
`models/core` directory.
## Related Issues / Discussions
I have systematically replaced all the calls to
`download_with_progress_bar()`. This function is no longer used
elsewhere and has been removed.
<!--WHEN APPLICABLE: List any related issues or discussions on github or
discord. If this PR closes an issue, please use the "Closes #1234"
format, so that the issue will be automatically closed when the PR
merges.-->
## QA Instructions
I have added unit tests for the three new calls. You may test that the
`load_and_cache_model()` call is working by running the upscaler within
the web app. On first try, you will see the model file being downloaded
into the `models/.download_cache` directory. On subsequent tries, the model will
either load from RAM (if it hasn't been displaced) or will be loaded
from the filesystem.
<!--WHEN APPLICABLE: Describe how we can test the changes in this PR.-->
## Merge Plan
Squash merge when approved.
<!--WHEN APPLICABLE: Large PRs, or PRs that touch sensitive things like
DB schemas, may need some care when merging. For example, a careful
rebase by the change author, timing to not interfere with a pending
release, or a message to contributors on discord after merging.-->
## Checklist
- [X] _The PR has a short but descriptive title, suitable for a
changelog_
- [X] _Tests added / updated (if applicable)_
- [X] _Documentation added / updated (if applicable)_
## Summary
I've started working towards a better tiled upscaling implementation. It
is going to require some refactoring of `DenoiseLatentsInvocation`. As a
first step, this PR splits up all of the invocations in latent.py into
their own files. That file had become a bit of a dumping ground - it
should be a bit more manageable to work with now.
This PR just re-organizes the code. There should be no functional
changes.
## QA Instructions
I've done some light smoke testing. I'll do some more before merging.
The main risk is that I missed a broken import, or some other copy-paste
error.
## Checklist
- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [x] _Tests added / updated (if applicable)_: N/A
- [x] _Documentation added / updated (if applicable)_: N/A
* allow model patcher to optimize away the unpatching step when feasible
* remove lazy_offloading functionality
* allow model patcher to optimize away the unpatching step when feasible
* remove lazy_offloading functionality
* do not save original weights if there is a CPU copy of state dict
* Update invokeai/backend/model_manager/load/load_base.py
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
* documentation fixes added during penultimate review
---------
Co-authored-by: Lincoln Stein <lstein@gmail.com>
Co-authored-by: Kent Keirsey <31807370+hipsterusername@users.noreply.github.com>
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
Create intermediary nanostores for values required by the event handlers. This allows the event handlers to be purely imperative, with no reactivity: instead of recreating/setting the handlers when a dependent piece of state changes, we use nanostores' imperative API to access dependent state.
For example, some handlers depend on brush size. If we used the standard declarative `useSelector` API, we'd need to recreate the event handler callback each time the brush size changed. This can be costly.
An intermediate `$brushSize` nanostore is set in a `useLayoutEffect()`, which responds to changes to the redux store. Then, in the event handler, we use the imperative API to access the brush size: `$brushSize.get()`.
This change allows the event handler logic to be shared with the pending canvas v2, and also more easily tested. It's a noticeable perf improvement, too, especially when changing brush size.
Currently translated at 98.5% (1243 of 1261 strings)
translationBot(ui): update translation (Italian)
Currently translated at 98.5% (1243 of 1261 strings)
translationBot(ui): update translation (Italian)
Currently translated at 98.5% (1225 of 1243 strings)
translationBot(ui): update translation (Italian)
Currently translated at 98.5% (1225 of 1243 strings)
Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
- Pass the seed from `latents_a` to the output latents. Fixed an issue where using `BlendLatentsInvocation` could result in different outputs during denoising even when the alpha or slerp weight was 0.
## Explanation
`LatentsField` has an optional `seed` field. During denoising, if this `seed` field is not present, we **fall back to 0 for the seed**. The seed is used during denoising in a few ways:
1. Initializing the scheduler.
The seed is used in two places in `invokeai/app/invocations/latent.py`.
The `get_scheduler()` utility function has special handling for `DPMSolverSDEScheduler`, which appears to need a seed for deterministic outputs.
`DenoiseLatentsInvocation.init_scheduler()` has special handling for schedulers that accept a generator - the generator needs to be seeded in a particular way. At the time of this commit, these are the Invoke-supported schedulers that need this seed:
- DDIMScheduler
- DDPMScheduler
- DPMSolverMultistepScheduler
- EulerAncestralDiscreteScheduler
- EulerDiscreteScheduler
- KDPM2AncestralDiscreteScheduler
- LCMScheduler
- TCDScheduler
2. Adding noise during inpainting.
If a mask is used for denoising, and we are not using an inpainting model, we add noise to the unmasked area. If, for some reason, we have a mask but no noise, the seed is used to add noise.
I wonder if we should instead assert that if a mask is provided, we also have noise.
This is done in `invokeai/backend/stable_diffusion/diffusers_pipeline.py` in `StableDiffusionGeneratorPipeline.latents_from_embeddings()`.
When we create noise to be used in denoising, we are expected to set `LatentsField.seed` to the seed used to create the noise. This introduces some awkwardness when we manipulate any "latents" that will be used for denoising. We have to pass the seed along for every operation.
If the wrong seed or no seed is passed along, we can get unexpected outputs during denoising. One notable case relates to blending latents (slerping tensors).
If we slerp two noise tensors (`LatentsField`s) _without_ passing along the seed from the source latents, when we denoise with a seed-dependent scheduler*, the schedulers use the fallback seed of 0 and we get the wrong output. This is most obvious when slerping with a weight of 0, in which case we expect the exact same output after denoising.
*It looks like only the DPMSolver* schedulers are affected, but I haven't tested all of them.
Passing the seed along in the output fixes this issue.
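A minimal sketch of the fix, assuming the rough shape of `BlendLatentsInvocation.invoke()` and the `context.tensors` / `LatentsOutput.build` helpers; the point is only that the output carries `seed` from `latents_a`:
```
def invoke(self, context: InvocationContext) -> LatentsOutput:
    latents_a = context.tensors.load(self.latents_a.latents_name)
    latents_b = context.tensors.load(self.latents_b.latents_name)

    blended = slerp(self.alpha, latents_a, latents_b)  # slerp helper assumed

    name = context.tensors.save(tensor=blended)
    # Propagate the seed from latents_a so seed-dependent schedulers do not
    # fall back to the default seed of 0 downstream.
    return LatentsOutput.build(latents_name=name, latents=blended, seed=self.latents_a.seed)
```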
This required some minor reworking of the logic to recall multiple items. I split this into a utility function that includes some special handling for concat.
Closes #6478
When the model in metadata's key no longer exists, fall back to fetching by name, base and type. This was the intention all along but the logic was never put in place.
- Any mypy issues are a misconfiguration of mypy
- Use simple conditionals instead of ternaries
- Consistent & standards-compliant docstring formatting
- Use `dict` instead of `typing.Dict`
It doesn't make sense to allow context menu here, because the context menu will technically be on a div and not an image - there won't be any image options there.
## Summary
Fix some issues with openapi schema generation. See commits for details.
## Related Issues / Discussions
https://discord.com/channels/1020123559063990373/1049495067846524939/1245141831394529352
## QA Instructions
App should work, workflows should work.
## Merge Plan
n/a
## Checklist
- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [x] _Documentation added / updated (if applicable)_
Some tech debt related to dynamic pydantic schemas for invocations became problematic. Including the invocations and results in the event schemas was breaking pydantic's handling of ref schemas. I don't really understand why - I think it's a pydantic bug in a remote edge case that we are hitting.
After many failed attempts I landed on this implementation, which is actually much tidier than what was in there before.
- Create pydantic-enabled types for `AnyInvocation` and `AnyInvocationOutput` and use these in place of the janky dynamic unions. Actually, they are kinda the same, but better encapsulated. Use these in `Graph`, `GraphExecutionState`, `InvocationEventBase` and `InvocationCompleteEvent`.
- Revise the custom openapi function to work with the new models.
- Split out the custom openapi function to a separate file. Add a `post_transform` callback so consumers can customize the output schema.
- Update makefile scripts.
## Summary
- Updated the documentation for `TextualInversionManager`
- Updated the `self.tokenizer.model_max_length` access to work with the
latest transformers version. Thanks to @skunkworxdark for looking into
this here:
https://github.com/invoke-ai/InvokeAI/issues/6445#issuecomment-2133098342
## Related Issues / Discussions
Closes #6445
## QA Instructions
I tested with `transformers==4.41.1`, and compared the results against a
recent InvokeAI version before updating transformers - no change, as
expected.
## Checklist
- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [x] _Documentation added / updated (if applicable)_
This is required to get these event fields to deserialize correctly. If omitted, pydantic uses `BaseInvocation`/`BaseInvocationOutput`, which is not correct.
This is similar to the workaround in the `Graph` and `GraphExecutionState` classes where we need to finagle pydantic with manual validation handling.
Note about the huge diff: I had a different version of pydantic installed at some point, which slightly altered a _ton_ of schema components. This typegen was done on the correct version of pydantic and un-does those alterations, in addition to the intentional changes to event models.
There's no longer any need for session-scoped events now that we have the session queue. Session started/completed/canceled map 1-to-1 to queue item status events, but queue item status events also have an event for failed state.
We can simplify queue and processor handling substantially by removing session events and instead using queue item events.
- Remove the session-scoped events entirely.
- Remove all event handling from session queue. The processor still needs to respond to some events from the queue: `QueueClearedEvent`, `BatchEnqueuedEvent` and `QueueItemStatusChangedEvent`.
- Pass an `is_canceled` callback to the invocation context instead of the cancel event
- Update processor logic to ensure the local instance of the current queue item is synced with the instance in the database. This prevents race conditions and ensures lifecycle callbacks do not receive a stale queue item.
- Update docstrings and comments
- Add `complete_queue_item` method to session queue service as an explicit way to mark a queue item as successfully completed. Previously, the queue listened for session complete events to do this.
Closes #6442
- Restore calculation of step percentage but in the backend instead of client
- Simplify signatures for denoise progress event callbacks
- Clean up `step_callback.py` (types, do not recreate constant matrix on every step, formatting)
We don't need to use the payload schema registry. All our events are dispatched as pydantic models, which are already validated on instantiation.
We do want to add all events to the OpenAPI schema, and we referred to the payload schema registry for this. To get all events, add a simple helper to EventBase. This is functionally identical to using the schema registry.
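A minimal sketch of such a helper, assuming it simply walks the subclass tree of the common event base model (names are illustrative):
```
from pydantic import BaseModel

class EventBase(BaseModel):
    """Assumed common base class for all event payload models."""

    @classmethod
    def get_events(cls) -> set[type["EventBase"]]:
        # Recursively collect every event model so they can all be added to
        # the OpenAPI schema - same effect as using the payload registry.
        events: set[type["EventBase"]] = set()
        for subclass in cls.__subclasses__():
            events.add(subclass)
            events |= subclass.get_events()
        return events
```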
The model loader emits events. During testing, it doesn't have access to a fully-mocked events service, so the test fails when attempting to call a nonexistent method. There was a check for this previously, but I accidentally removed it. Restored.
- Remove ABCs, they do not work well with pydantic
- Remove the event type classvar - unused
- Remove clever logic to require an event name - we already get validation for this during schema registration.
- Rename event bases to all end in "Base"
Our event handling and implementation have a couple of pain points:
- Adding or removing data from event payloads requires changes wherever the events are dispatched from.
- We have no type safety for events and need to rely on string matching and dict access when interacting with events.
- Frontend types for socket events must be manually typed. This has caused several bugs.
`fastapi-events` has a neat feature where you can create a pydantic model as an event payload, give it an `__event_name__` attr, and then dispatch the model directly.
This allows us to eliminate a layer of indirection and some unpleasant complexity:
- Event handler callbacks get type hints for their event payloads, and can use `isinstance` on them if needed.
- Event payload construction is now the responsibility of the event itself (a pydantic model), not the service. Every event model has a `build` class method, encapsulating this logic. The build methods are provided as few args as possible. For example, `InvocationStartedEvent.build()` gets the invocation instance and queue item, and can choose the data it wants to include in the event payload.
- Frontend event types may be autogenerated from the OpenAPI schema. We use the payload registry feature of `fastapi-events` to collect all payload models into one place, making it trivial to keep our schema and frontend types in sync.
This commit moves the backend over to this improved event handling setup.
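A minimal sketch of the pattern; the event name, fields, and `build()` arguments are illustrative, not the actual InvokeAI event models:
```
from fastapi_events.dispatcher import dispatch
from pydantic import BaseModel

class InvocationStartedEvent(BaseModel):
    """Hypothetical event model; the real ones carry more data."""

    __event_name__ = "invocation_started"

    queue_item_id: int
    invocation_id: str

    @classmethod
    def build(cls, queue_item, invocation) -> "InvocationStartedEvent":
        # The event model decides what it needs from the queue item and
        # invocation, rather than the events service assembling an untyped dict.
        return cls(queue_item_id=queue_item.item_id, invocation_id=invocation.id)

# Dispatch the model directly; handlers receive a typed instance.
# dispatch(InvocationStartedEvent.build(queue_item, invocation))
```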
* avoid copying model back from cuda to cpu
* handle models that don't have state dicts
* add assertions that models need a `device()` method
* do not rely on torch.nn.Module having the device() method
* apply all patches after model is on the execution device
* fix model patching in latents too
* log patched tokenizer
* closes #6375
---------
Co-authored-by: Lincoln Stein <lstein@gmail.com>
Show error toasts on queue item error events instead of invocation error events. This allows errors that occurred outside node execution to be surfaced to the user.
The error description component is updated to show the new error message if available. Commercial handling is retained, but local now uses the same component to display the error message itself.
I had set the cancel event at some point during troubleshooting an unrelated issue. It seemed logical that it should be set there, and didn't seem to break anything. However, this is not correct.
The cancel event should not be set in response to a queue status change event. Doing so can cause a race condition when nodes are executed very quickly.
It's possible that a previously-executed session's queue item status change event is handled after the next session starts executing. The cancel event is set, and the session runner sees it and aborts the session run early.
In hindsight, it doesn't make sense to set the cancel event here either. It should be set in response to user action, e.g. the user cancelled the session or cleared the queue (which implicitly cancels the current session). These events actually trigger the queue item status changed event, so if we set the cancel event here, we'd be setting it twice per cancellation.
There's a race condition where a canceled session may emit a progress event or two after it's been canceled, and the progress image isn't cleared out.
To resolve this, the system slice tracks canceled session ids. When a progress event comes in, we check the cancellations and skip setting the progress if canceled.
- Add handling for new error columns `error_type`, `error_message`, `error_traceback`.
- Update queue item model to include the new data. The `error_traceback` field has an alias of `error` for backwards compatibility.
- Add `fail_queue_item` method. This was previously handled by `cancel_queue_item`. Splitting this functionality makes failing a queue item a bit more explicit. We also don't need to handle multiple optional error args.
We were not handling node preparation errors as node errors before. Here's the explanation, copied from a comment that is no longer required:
---
TODO(psyche): Sessions only support errors on nodes, not on the session itself. When an error occurs outside
node execution, it bubbles up to the processor where it is treated as a queue item error.
Nodes are pydantic models. When we prepare a node in `session.next()`, we set its inputs. This can cause a
pydantic validation error. For example, consider a resize image node which has a constraint on its `width`
input field - it must be greater than zero. During preparation, if the width is set to zero, pydantic will
raise a validation error.
When this happens, it breaks the flow before `invocation` is set. We can't set an error on the invocation
because we didn't get far enough to get it - we don't know its id. Hence, we just set it as a queue item error.
---
This change wraps the node preparation step with exception handling. A new `NodeInputError` exception is raised when there is a validation error. This error has the node (in the state it was in just prior to the error) and an identifier of the input that failed.
This allows us to mark the node that failed preparation as errored, correctly making such errors _node_ errors and not _processor_ errors. It's much easier to diagnose these situations. The error messages look like this:
> Node b5ac87c6-0678-4b8c-96b9-d215aee12175 has invalid incoming input for height
Some of the exception handling logic is cleaned up.
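A minimal sketch of the idea, assuming simplified shapes for the node and the exception; the real `NodeInputError` and `session.next()` differ in detail:
```
from pydantic import ValidationError

class NodeInputError(Exception):
    """Raised when setting a node's inputs fails pydantic validation."""

    def __init__(self, node, e: ValidationError):
        self.node = node  # the node in the state it was in just before the failure
        fields = ", ".join(str(err["loc"][0]) for err in e.errors())
        super().__init__(f"Node {node.id} has invalid incoming input for {fields}")

# In session.next(), input preparation is wrapped so validation errors become
# node errors rather than processor errors:
#
# try:
#     self._prepare_inputs(node)
# except ValidationError as e:
#     raise NodeInputError(node, e) from e
```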
- Use a protocol to define callbacks; this allows them to have kwargs (see the sketch after this list)
- Shuffle the profiler around a bit
- Move `thread_limit` and `polling_interval` to `__init__`; `start` is called programmatically and will never get these args in practice
- Add `OnNodeError` and `OnNonFatalProcessorError` callbacks
- Move all session/node callbacks to `SessionRunner` - this ensures we dump perf stats before resetting them and generally makes sense to me
- Remove `complete` event from `SessionRunner`, it's essentially the same as `OnAfterRunSession`
- Remove extraneous `next_invocation` block, which would treat a processor error as a node error
- Simplify loops
- Add some callbacks for testing, to be removed before merge
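A minimal sketch of a protocol-typed callback with keyword arguments; the names echo `OnNodeError`, but the signature is illustrative:
```
from typing import Protocol

class OnNodeError(Protocol):
    def __call__(self, *, invocation, error_type: str, error_message: str) -> None: ...

def log_node_error(*, invocation, error_type: str, error_message: str) -> None:
    print(f"{invocation.id} failed: {error_type}: {error_message}")

# Structural typing: any callable matching the signature satisfies the protocol.
on_node_error: OnNodeError = log_node_error
```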
This query is only subscribed-to in the `QueueItemDetail` component - when is rendered only when the user clicks on a queue item in the queue. Invalidating this tag instead of optimistically updating it won't cause any meaningful change to network traffic.
The session is never updated in the queue after it is first enqueued. As a result, the queue detail view in the frontend never updates and the session itself doesn't show outputs, execution graph, etc.
We need a new method on the queue service to update a queue item's session, then call it before updating the queue item's status.
Queue item status may be updated via a session-type event _or_ queue-type event. Adding the updated session to all these events is hairy - it's simpler to just update the session before we do anything that could trigger a queue item status change event:
- Before calling `emit_session_complete` in the processor (handles session error, completed and cancel events and the corresponding queue events)
- Before calling `cancel_queue_item` in the processor (handles another way queue items can be canceled, outside the session execution loop)
When serializing the session, both in the new service method and the `get_queue_item` endpoint, we need to use `exclude_none=True` to prevent unexpected validation errors.
## Summary
TIL if you add `undefined` to a form data object, it gets stringified to
`'undefined'`. Whoops!
## Related Issues / Discussions
n/a
## QA Instructions
n/a
## Merge Plan
n/a
## Checklist
- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
Currently translated at 98.5% (1210 of 1228 strings)
translationBot(ui): update translation (Italian)
Currently translated at 98.5% (1206 of 1224 strings)
translationBot(ui): update translation (Italian)
Currently translated at 98.5% (1204 of 1222 strings)
Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
## Summary
Notes nodes used some overly-strict redux selectors. The selectors are
now more chill. Also fixed an issue where you couldn't edit a notes node
title.
Found another class of error related to the overly strict reducers that
caused errors when loading a workflow that had missing templates. Fixed
this with fallback wrapper component, works like an error boundary when
a template isn't found.
## Related Issues / Discussions
https://discord.com/channels/1020123559063990373/1149506274971631688/1242256425527545949
## QA Instructions
- Add a notes node to a workflow. Edit the notes title.
- Load a workflow that has nodes that aren't installed. Should get a
fallback UI for each missing node.
- Load a workflow that references a node with different inputs than are
in the template - like an old version of a node. Should get a fallback
field warning for both missing templates, or missing inputs.
## Merge Plan
n/a
## Checklist
- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
Some asserts were bubbling up in places where they shouldn't have, causing errors when a node has a field without a matching template, or vice-versa.
To resolve this without sacrificing the runtime safety provided by asserts, an `InvocationFieldCheck` component now wraps all field components. This component renders a fallback when a field doesn't exist, so the inner components can safely use the asserts.
At some point, I made a mistake and imported the wrong types to some files for the old v1 and v2 workflow schema migration data.
The relevant zod schemas and inferred types have been restored.
This change doesn't alter runtime behaviour. Only type annotations.
Replace the `isCollection` and `isCollectionOrScalar` flags with a single enum value `cardinality`. Valid values are `SINGLE`, `COLLECTION` and `SINGLE_OR_COLLECTION`.
Why:
- The two flags were mutually exclusive, but this wasn't enforced. You could create a field type that had both `isCollection` and `isCollectionOrScalar` set to true, which makes no sense.
- There was no explicit declaration for scalar/single types.
- Checking if a type had only a single flag was tedious.
Thanks to a change a couple months back in which the workflows schema was revised, field types are internal implementation details. Changes to them are non-breaking.
Canvas images are saved by uploading a blob generated from the HTML canvas element. This means the existing metadata handling, inside the graph execution engine, is not available.
To save metadata to canvas images, we need to provide it when uploading that blob.
The upload route now has a `metadata` body param. If this is provided, we use it over any metadata embedded in the image.
Depending on the user behaviour and network conditions, it's possible that we could try to load a workflow before the invocation templates are available.
Fix is simple:
- Use the RTKQ query hook for openAPI schema in App.tsx
- Disable the load workflow buttons until we have templates parsed
Remove our DIY'd reducers, consolidating all node and edge mutations to use `edgesChanged` and `nodesChanged`, which are called by reactflow. This makes the API for manipulating nodes and edges less tangly and error-prone.
We now keep track of the original field type, derived from the python type annotation, in addition to the override type provided by `ui_type`.
This makes `ui_type` work more like it sound like it should work - change the UI input component only.
Connection validation is extended to also check the original types. If there is any match between two fields' "final" or original types, we consider the connection valid. This change is backwards-compatible; there is no workflow migration needed.
When clearing the processor config, we shouldn't re-process the image. This logic wasn't handled correctly, but coincidentally the bug didn't cause a user-facing issue.
Without a config, we had a runtime error when trying to build the node for the processor graph and the listener failed.
So while we didn't re-process the image, it was because there was an error, not because the logic was correct.
Fix this by bailing if there is no image or config.
If you change the control model and the new model has the same default processor, we would still re-process the image, even if there was no need to do so.
With this change, if the image and processor config are unchanged, we bail out.
Graph, metadata and workflow all take stringified JSON only. This makes the API consistent and means we don't need to do a round-trip of pydantic parsing when handling this data.
It also prevents a failure mode where an uploaded image's metadata, workflow or graph are old and don't match the current schema.
As before, the frontend does strict validation and parsing when loading these values.
The previous super-minimal implementation had a major issue - the saved workflow didn't take into account batched field values. When generating with multiple iterations or dynamic prompts, the same workflow with the first prompt, seed, etc was stored in each image.
As a result, when the batch results in multiple queue items, only one of the images has the correct workflow - the others are mismatched.
To work around this, we can store the _graph_ in the image metadata (alongside the workflow, if generated via workflow editor). When loading a workflow from an image, we can choose to load the workflow or the graph, preferring the workflow.
Internally, we need to update images router image-saving services. The changes are minimal.
To avoid pydantic errors deserializing the graph, when we extract it from the image, we will leave it as stringified JSON and let the frontend's more sophisticated and flexible parsing handle it. The workflow is also changed to just return stringified JSON, so the API is consistent.
These simplify loading multiple LoRAs. Instead of requiring chained lora loader nodes, configure each LoRA (model & weight) with a selector, collect them, then send the collection to the collection loader to apply all of the LoRAs to the UNet/CLIP models.
The collection loaders accept a single lora or collection of loras.
This stateful class provides abstractions for building a graph. It exposes graph methods like adding and removing nodes and edges.
The methods are documented, tested, and strongly typed.
## Summary
Bump to v4.2.1
## Related Issues / Discussions
n/a
## QA Instructions
n/a
## Merge Plan
Do the release after merging.
## Checklist
- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
Currently translated at 98.5% (1192 of 1210 strings)
translationBot(ui): update translation (Italian)
Currently translated at 98.5% (1192 of 1210 strings)
translationBot(ui): update translation (Italian)
Currently translated at 98.5% (1192 of 1210 strings)
translationBot(ui): update translation (Italian)
Currently translated at 98.5% (1192 of 1210 strings)
Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
Currently translated at 100.0% (1209 of 1209 strings)
translationBot(ui): update translation (Russian)
Currently translated at 100.0% (1209 of 1209 strings)
translationBot(ui): update translation (Russian)
Currently translated at 100.0% (1188 of 1188 strings)
translationBot(ui): update translation (Russian)
Currently translated at 100.0% (1185 of 1185 strings)
Co-authored-by: Васянатор <ilabulanov339@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/
Translation: InvokeAI/Web UI
Currently translated at 97.3% (1154 of 1185 strings)
translationBot(ui): update translation (Russian)
Currently translated at 100.0% (1174 of 1174 strings)
translationBot(ui): update translation (Russian)
Currently translated at 100.0% (1173 of 1173 strings)
translationBot(ui): update translation (Russian)
Currently translated at 100.0% (1166 of 1166 strings)
translationBot(ui): update translation (Russian)
Currently translated at 100.0% (1165 of 1165 strings)
translationBot(ui): update translation (Russian)
Currently translated at 100.0% (1149 of 1149 strings)
translationBot(ui): update translation (Russian)
Currently translated at 100.0% (1147 of 1147 strings)
Co-authored-by: Васянатор <ilabulanov339@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/
Translation: InvokeAI/Web UI
Currently translated at 96.0% (1138 of 1185 strings)
translationBot(ui): update translation (Italian)
Currently translated at 98.4% (1156 of 1174 strings)
translationBot(ui): update translation (Italian)
Currently translated at 98.3% (1155 of 1174 strings)
translationBot(ui): update translation (Italian)
Currently translated at 98.4% (1129 of 1147 strings)
Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
Make the Invoke button show a loading spinner while queueing.
The queue mutations need to be awaited, or else the `isLoading` state doesn't work as expected. I feel like I should understand why, but I don't...
## Summary
Do not listen for mouse events on CA and II layers (which are not
interact-able).
## Related Issues / Discussions
Closes #6331
## QA Instructions
Move a CA or II layer above a regional guidance layer. The move tool
should now work.
## Merge Plan
n/a
## Checklist
- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
## Summary
Use translations instead of plain strings.
## Related Issues / Discussions
https://discord.com/channels/1020123559063990373/1054129386447716433/1239181243078279208
## QA Instructions
The layer select should still work.
## Merge Plan
n/a
## Checklist
- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
## Summary
The select had a default search value, which meant it only showed
"small" as an option on first load.
## Related Issues / Discussions
n/a
## QA Instructions
- Add a CA layer
- Expand advanced
- Set processor to depth anything
- Click the model size dropdown, it should show all 3 sizes
## Merge Plan
n/a
## Checklist
- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
This allows comboboxes for models to have more granular groupings. For example, Control Adapter models can be grouped by base model & model type.
Before:
- `SD-1`
- `SDXL`
After:
- `SD-1 / ControlNet`
- `SD-1 / T2I Adapter`
- `SDXL / ControlNet`
- `SDXL / T2I Adapter`
When a control adapter processor config is changed, if we were already processing an image, that batch is immediately canceled. This prevents the processed image from getting stuck in a weird state if you change or reset the processor at the right (err, wrong?) moment.
- Update internal state for control adapters to track processor batches, instead of just having a flag indicating if the image is processing. Add a slice migration to not break the user's existing app state.
- Update preprocessor listener with more sophisticated logic to handle canceling the batch and resetting the processed image when the config changes or is reset.
- Fix error handling that erroneously showed "failed to queue graph" errors when an active listener instance is canceled - we need to check the abort signal.
This is largely an internal change, and it should have been this way from the start - less tip-toeing around layer types. The user-facing change is when you click an IP Adapter layer, it is highlighted. That's it.
Turns out, it's more efficient to just use the bbox logic for empty mask calculations. We already track if the bbox needs updating, so this calculation does minimal work.
The dedicated calculation wasn't able to use the bbox tracking so it ran far more often than the bbox calculation.
Removed the "fast" bbox calculation logic, bc the new logic means we are continually updating the bbox in the background - not only when the user switches to the move tool and/or selects a layer.
The bbox calculation logic is split out from the bbox rendering logic to support this.
Result - better perf overall, with the empty mask handling retained.
Mask vector data includes additive (brush, rect) shapes and subtractive (eraser) shapes. A different composite operation is used to draw a shape, depending on whether it is additive or subtractive.
This means that a mask may have vector objects, but once rendered, is _visually_ empty (fully transparent). The only way to determine if a mask is visually empty is to render it and check every pixel.
When we generate and save layer metadata, these fully erased masks are still used. Generating with an empty mask is a no-op in the backend, so we want to avoid this and not pollute graphs/metadata.
Previously, we did that pixel-based check when calculating the bbox, which we only did when using the move tool, and only for the selected layer.
This change introduces a simpler function to check if a mask is transparent, and if so, deletes all its objects to reset it. This allows us to skip these no-op layers entirely.
This check is debounced to 300 ms, trailing edge only.
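The check itself boils down to scanning the rendered mask's alpha channel. A simplified sketch, assuming the mask layer has first been rendered to an offscreen canvas:

```ts
// Sketch: a mask is "visually empty" if every pixel's alpha channel is zero.
const isRenderedMaskTransparent = (canvas: HTMLCanvasElement): boolean => {
  const ctx = canvas.getContext('2d');
  if (!ctx) {
    return true;
  }
  const { data } = ctx.getImageData(0, 0, canvas.width, canvas.height);
  // RGBA layout: every 4th byte is alpha; any non-zero alpha means the mask
  // still has visible pixels.
  for (let i = 3; i < data.length; i += 4) {
    if (data[i] !== 0) {
      return false;
    }
  }
  return true;
};
```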
When layer metadata is stored, the layer IDs are included. When recalling the metadata, we need to assign fresh IDs, else we can end up with multiple layers with the same ID, which of course causes all sorts of issues.
- Viewer only exists on Generation tab
- Viewer defaults to open
- When clicking the Control Layers tab on the left panel, close the viewer (i.e. open the CL editor)
- Do not switch to editor when adding layers (this is handled by clicking the Control Layers tab)
- Do not open viewer when single-clicking images in gallery
- _Do_ open viewer when _double_-clicking images in gallery
- Do not change viewer state when switching between app tabs (this no longer makes sense; the viewer only exists on generation tab)
- Change the button to a drop down menu that states what you are currently doing, e.g. Viewing vs Editing
There are unresolved platform-specific issues with this component, and its utility is debatable.
Should be easy to just revert this commit to add it back in the future if desired.
There are a number of bugs with `framer-motion` that can result in sync issues with AnimatePresence and the conditionally rendered component.
You can see this if you rapidly click an accordion, occasionally it gets out of sync and is closed when it should be open.
This is a bigger problem with the viewer where the user may hold down the `z` key. It's trivial to get it to lock up.
For now, just remove the animation entirely.
Upstream issues for reference:
- https://github.com/framer/motion/issues/2023
- https://github.com/framer/motion/issues/2618
- https://github.com/framer/motion/issues/2554
- Rects snap to stage edge when within a threshold (10 screen pixels)
- When mouse leaves stage, set last mousedown pos to null, preventing nonfunctional rect outlines
Partially addresses #6306.
There's a technical challenge to fully address the issue - mouse events are not fired when the mouse is outside the stage. While we could draw the rect even if the mouse leaves, we cannot update the rect's dimensions on mouse move, or complete the drawing on mouse up.
To fully address the issue, we'd need a way to forward window events back to the stage, or at least handle window events. We can explore this later.
When invoking with control layers, we were creating and uploading the mask images on every enqueue, even when the mask didn't change. The mask image can be cached to greatly reduce the number of uploads.
With this change, we are a bit smarter about the mask images:
- Check if there is an uploaded mask image name
- If so, attempt to retrieve its DTO. Typically it will be in the RTKQ cache, so there is no network request, but it will make a network request if not cached to confirm the image actually exists on the server.
- If we don't have an uploaded mask image name, or the request fails, we go ahead and upload the generated blob
- Update the layer's state with a reference to this uploaded image for next time
- Continue as before
Any time we modify the mask (drawing/erasing, resetting the layer), we invalidate that cached image name (set it to null).
We now only upload images when we need to and generation starts faster.
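Roughly sketched, with hypothetical helper and type names (not the actual InvokeAI APIs):

```ts
// Sketch of the cached mask upload flow (hypothetical helpers and types).
type ImageDTO = { image_name: string };

declare const getImageDTO: (imageName: string) => Promise<ImageDTO | null>;
declare const uploadBlob: (blob: Blob) => Promise<ImageDTO>;

type MaskLayer = {
  uploadedMaskImageName: string | null;
  renderMaskBlob: () => Promise<Blob>;
};

export const getOrUploadMaskImage = async (layer: MaskLayer): Promise<string> => {
  // If we already uploaded this mask, confirm it still exists on the server.
  // The DTO is usually in the RTK Query cache, so this is typically free.
  if (layer.uploadedMaskImageName) {
    const dto = await getImageDTO(layer.uploadedMaskImageName);
    if (dto) {
      return dto.image_name;
    }
  }
  // Otherwise render and upload the mask, and remember its name for next time.
  const dto = await uploadBlob(await layer.renderMaskBlob());
  layer.uploadedMaskImageName = dto.image_name;
  return dto.image_name;
};
```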
- Rework styling
- Replace "CurrentImageDisplay" entirely
- Add a super short fade to reduce jarring transition
- Make the viewer a singleton component, overlaid on everything else - reduces change when switching tabs
- Works on txt2img, canvas and workflows tabs, img2img has its own side-by-side view
- In the workflow editor, the viewer is closeable only if you are in edit mode, else it's always there
- Press `i` to open
- Press `esc` to close
- Selecting an image or changing image selection opens the viewer
- When generating, if auto-switch to new image is enabled, the viewer opens when an image comes in
To support this change, I organized and restructured some tab stuff.
When recalling metadata and/or using control image dimensions, it was possible to set a width or height that was not a multiple of 8, resulting in generation failures.
Added a `clamp` option to the w/h actions to fix this. The option is used for all untrusted sources - everything except for the w/h number inputs, which clamp the values themselves.
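A sketch of that kind of clamp (the bounds below are placeholders, not the app's actual limits):

```ts
// Clamp a dimension from an untrusted source to the generation grid.
const GRID = 8;

export const clampDimension = (value: number, min = 64, max = 4096): number => {
  const clamped = Math.min(Math.max(value, min), max);
  // Round down to the nearest multiple of 8 so the backend never receives an
  // incompatible width or height.
  return Math.floor(clamped / GRID) * GRID;
};

// clampDimension(517) === 512
```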
Firefox v125.0.3 and below has a bug where `mouseenter` events are fired continually during mouse moves. The issue isn't present on FF v126.0b6 Developer Edition. It's not clear if the issue is present on FF nightly, and we're not sure if it will actually be fixed in the stable v126 release.
The control layers drawing logic relied on `mouseenter` events to create new lines, and `mousemove` to extend existing lines. On the affected version of FF, all line extensions are turned into new lines, resulting in very poor performance, noncontiguous lines, and way-too-big internal state.
To resolve this, the drawing handling was updated to not use `mouseenter` at all. As a bonus, resolving this issue has resulted in simpler logic for drawing on the canvas.
- Add set of metadata handlers for the control layers CAs
- Use these conditionally depending on the active tab - when recalling on txt2img, the CAs go to control layers, else they go to the old CA area.
These changes were left over from the previous attempt to handle control adapters in control layers with the same logic. Control Layers are now handled totally separately, so these changes may be reverted.
There were some invalid constraints with the processors - minimum of 0 for resolution or multiple of 64 for resolution.
Made the minimum 1px and removed the multiple-of constraints.
When typing in a number into the w/h number inputs, if the number is less than the step, it appears the value of 0 is used. This is unexpected; it means Chakra isn't clamping the value correctly (or maybe our wrapper isn't clamping it).
Add checks to never bail if the width or height value from the number input component is 0.
- Revise control adapter config types
- Recreate all control adapter mutations in control layers slice
- Bit of renaming along the way - typing 'RegionalGuidanceLayer' over and over again was getting tedious
Konva doesn't react to changes to window zoom/scale. If you open the tab at, say, 90%, then bump to 100%, the pixel ratio of the canvas doesn't change. This results in lower-quality renders on the canvas (generation is unaffected).
`PC_PATH_MAX` doesn't exist for (some?) external drives on macOS. We need error handling when retrieving this value.
Also added error handling for `PC_NAME_MAX` just in case. This does work for me for external drives on macOS, though.
Closes#6277
There are only a couple SDXL inpainting models, and my tests indicate they are not as good as SD1.5 inpainting, but at least we support them now.
- Add the config file. This matches what is used in A1111. The only difference from the non-inpainting SDXL config is the number of in-channels.
- Update the legacy config maps to use this config file.
Pending:
- Move model install calls into model manager and create passthrus in invocation_context.
- Consider splitting load_model_from_url() into a call to get the path and a call to load the path.
- Use our adaptation of the HWC3 function with better types
- Extract some of the util functions, name them better, add comments
- Improve type annotations
- Remove unreachable codepaths
- Shift+C: Reset selected layer mask (same as canvas)
- Shift+D: Delete selected layer (cannot be Del, that deletes an image in gallery)
- Shift+A: Add layer (cannot be Ctrl+Shift+N, that opens a new window)
- Ctrl/Cmd+Wheel: Brush size (same as canvas)
Trying a lot of different things as I iterated, so this is smooshed into one big commit... too hard to split it now.
- Iterated on IP adapter handling and UI. Unfortunately there is a bug related to undo/redo. The IP adapter state is split across the `controlAdapters` slice and the `regionalPrompts` slice, but only the `regionalPrompts` slice supports undo/redo. If you delete the IP adapter and then undo/redo to a history state where it existed, you'll get an error. The fix is likely to merge the slices... Maybe there's a workaround.
- Iterated on UI. I think the layers are OK now.
- Removed ability to disable RP globally for now. It's enabled if you have enabled RP layers.
- Many minor tweaks and fixes.
- Keep track of whether the bbox needs to be recalculated (e.g. had lines/points added)
- Keep track of whether the bbox has eraser strokes - if yes, we need to do the full pixel-perfect bbox calculation, otherwise we can use the faster getClientRect
- Use comparison rather than Math.min/max in bbox calculation (slightly faster)
- Return `null` if no pixel data at all in bbox
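For reference, the pixel-perfect scan described by the last two points looks roughly like this simplified sketch over RGBA image data:

```ts
type Bbox = { x: number; y: number; width: number; height: number };

// Simplified sketch of the pixel-perfect bbox scan.
export const getPixelBbox = (
  data: Uint8ClampedArray,
  width: number,
  height: number,
): Bbox | null => {
  let minX = width;
  let minY = height;
  let maxX = -1;
  let maxY = -1;
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      // Plain comparisons instead of Math.min/Math.max in the hot loop.
      if (data[(y * width + x) * 4 + 3] !== 0) {
        if (x < minX) minX = x;
        if (x > maxX) maxX = x;
        if (y < minY) minY = y;
        if (y > maxY) maxY = y;
      }
    }
  }
  // No opaque pixels at all -> no bbox.
  return maxX === -1 ? null : { x: minX, y: minY, width: maxX - minX + 1, height: maxY - minY + 1 };
};
```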
Adds an additional negative conditioning using the inverted mask of the positive conditioning and the positive prompt. May be useful for mutually exclusive regions.
## Summary
Until now IP Adapter had complete control on the contents of the output.
With this PR, users are now able to select "Style Only" or "Composition
Only" to draw just the style or layout of the reference image.
Based off: https://arxiv.org/abs/2404.02733
### New IP Method Option
- `Full` - Both style and layout of the reference image are used.
- `Style Only` - Only the style of the image is used
- `Composition Only` - Only the composition of the image is used.

### Example Result

### Notes
- Supports both SDXL and SD1.5
### Testing
- Just check and test if it works as expected with all IP Adapter models
- both SDXL and SD1.5
## Merge Plan
Good to merge once tested for all edge cases.
* introduce new abstraction layer for GPU devices
* add unit test for device abstraction
* fix ruff
* convert TorchDeviceSelect into a stateless class
* move logic to select context-specific execution device into context API
* add mock hardware environments to pytest
* remove dangling mocker fixture
* fix unit test for running on non-CUDA systems
* remove unimplemented get_execution_device() call
* remove autocast precision
* Multiple changes:
1. Remove TorchDeviceSelect.get_execution_device(), as well as calls to
context.models.get_execution_device().
2. Rename TorchDeviceSelect to TorchDevice
3. Added back the legacy public API defined in `invocation_api`, including
choose_precision().
4. Added a config file migration script to accommodate removal of precision=autocast.
* add deprecation warnings to choose_torch_device() and choose_precision()
* fix test crash
* remove app_config argument from choose_torch_device() and choose_torch_dtype()
---------
Co-authored-by: Lincoln Stein <lstein@gmail.com>
Currently translated at 98.4% (1122 of 1140 strings)
translationBot(ui): update translation (Italian)
Currently translated at 98.4% (1120 of 1138 strings)
translationBot(ui): update translation (Italian)
Currently translated at 98.4% (1115 of 1133 strings)
Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
When using refiner with a mask (i.e. inpainting), we don't have noise provided as an input to the node.
This situation uniquely hits a code path that wasn't reviewed when gradient denoising was implemented.
That code path does two things wrong:
- It lerp'd the input latents. This was fixed in 5a1f4cb1ce.
- It added noise to the latents an extra time. This is fixed in this change.
We don't need to add noise in `latents_from_embeddings` because we do it just a few lines later in `AddsMaskGuidance`.
- Remove the extraneous call to `add_noise`
- Make `seed` a required arg. We never call the function without seed anyways. If we refactor this in the future, it will be clearer that we need to look at how seed is handled.
- Move the call to create the noise to a deeper conditional, just before we call `AddsMaskGuidance`. The created noise tensor is now only used in that function, no need to create it every time.
Note: Whether or not having both noise and latents as inputs on the node is correct is a separate conversation. This change just fixes the issue with the current setup.
`LatentsField` objects have an optional `seed` field. This should only be populated when the latents are noise, generated from a seed.
`DenoiseLatentsInvocation` needs a seed value for scheduler initialization. It's used in a few places, and there is some logic for determining the seed to use with a series of fallbacks:
- Use the seed from the noise (a `LatentsField` object)
- Use the seed from the latents (a `LatentsField` object - normally it won't have a seed)
- Use `0` as a final fallback
In `DenoiseLatentsInvocation`, we set the seed in the `LatentsOutput`, even though the output latents are not noise.
This is normally fine, but when we use the refiner, we re-use those same latents for the refiner denoise. This causes that characteristic same-seed-fried look on the refiner pass.
Simple fix - do not set the field in the output latents.
Handful of intertwined fixes.
- Create and use helper function to reset staging area.
- Clear staging area when queue items are canceled, failed, cleared, etc. Fixes a bug where the bbox ends up offset and images are put into the wrong spot.
- Fix a number of similar bugs where canvas would "forget" it had pending generations, but they continued to generate. Canvas needs to track batches that should be displayed in it using `state.canvas.batchIds`, and this was getting cleared without actually canceling those batches.
- Disable the `discard current image` button on canvas if there is only one image. Prevents accidentally canceling all canvas batches if you spam the button.
Changed fields to go in w/h x/y order.
## Summary
The prior ordering of height, then width, and y, then x, doesn't match
up with the expected UX. This has been changed.
## Checklist
- [X] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
This is intended for debug usage, so it's hidden away in the workflow library `...` menu. Hold shift to see the button for it.
- Paste a graph (from a network request, for example) and then click the convert button to convert it to a workflow.
- Disable auto layout to stack the nodes with an offset (try it out). If you change this, you must re-convert to get the changes.
- Edit the workflow JSON if you need to tweak something before loading it.
- Allow user-defined precision on MPS.
- Use more explicit logic to handle all possible cases.
- Add comments.
- Remove the app_config args (they were effectively unused, just get the config using the singleton getter util)
This data is already in the template but it wasn't ever used.
One big place where this improves UX is the noise node. Previously, the UI let you change width and height in increments of 1, despite the template requiring a multiple of 8. It now works in multiples of 8.
Retrieving the DTO happens as part of the metadata parsing, not recall. This way, we don't show the option to recall a nonexistent image.
This matches the flow for other metadata entities like models - we don't show the model recall button if the model isn't available.
The previous algorithm errored if the image wasn't divisible by the tile size. I've reimplemented it from scratch to mitigate this issue.
The new algorithm is simpler. We create a pool of tiles, then use them to create an image composed completely of tiles. If there is any awkwardly sized space on the edge of the image, the tiles are cropped to fit.
Finally, paste the original image over the tile image.
I've added a jupyter notebook to do a smoke test of infilling methods, and 10 test images.
The other infill algorithms can be easily tested with the notebook on the same images, though I didn't set that up yet.
Tested and confirmed this gives results just as good as the earlier infill, though of course they aren't the same due to the change in the algorithm.
We have had a few bugs with v4 related to file encodings, especially on Windows.
Windows uses its own character encodings instead of `utf-8`, often `cp1252`. Some characters cannot be decoded using `utf-8`, causing `UnicodeDecodeError`.
There are a couple places where this can cause problems:
- In the installer bootstrap, we install or upgrade `pip` and decode the result, using `subprocess`.
The input to this includes the user's home dir. In #6105, the user had one of the problematic characters in their username. `subprocess` attempts and fails to decode the username, which crashes the installer.
To fix this, we need to use `locale.getpreferredencoding()` when executing the command.
- Similarly, in the model install service and config class, we attempt to load a yaml config file. If a problematic character is in the path to the file (which often includes the user's home dir), we can get the same error.
One example is #6129 in which the models.yaml migration fails.
To fix this, we need to open the file with `locale.getpreferredencoding()`.
- Remove `CUDA_AND_DML`. This was for onnx, which we have since removed.
- Remove `AUTODETECT`. This option causes problems for Windows users, as it falls back on the default PyPI index, resulting in a non-CUDA torch being installed.
- Add more explicit settings for extra index URL, based on the torch website
- Fix bug where `xformers` wasn't installed on linux and/or windows when autodetect was selected
This will be fairly common in v4 updates. The root cause is models not being added to the `models.yaml` file in v3, so we don't correctly migrate the models to the db.
The docs describe how to use `Scan Folder` to restore missing models.
Compare the installed paths to determine if the model is already installed. Fixes an issue where installed models showed up as uninstalled or vice-versa. Related to relative vs absolute path handling.
Renaming the model file to the model name introduces unnecessary constraints on model names.
For example, a model name can technically be any length, but a model _filename_ cannot be too long.
There are also constraints on valid characters for filenames which shouldn't be applied to model record names.
I believe the old behaviour is a holdover from the old system.
## Summary
This PR adds support for IP Adapter safetensor files for direct usage
inside InvokeAI.
## Testing
You can download the [Composition
Adapters](https://huggingface.co/ostris/ip-composition-adapter) which
weren't previously supported in Invoke and try them out. Every other IP
Adapter model should work too.
If you pick a Safetensor IP Adapter model, you will also need to set
ViT-H or ViT-G next to it. This is a raw implementation. Can refine it
further based on feedback.
Prompt: `Spiderman holding a bunny` -- Exact same composition as the
adapter image.

Setting to 'auto' works only for the InvokeAI config format and auto-detects the SD model, but an explicit user setting overrides the auto-detection. If 'auto' is used with checkpoint models, we raise an error. Checkpoints will always need to be set to a non-auto value.
The valid values for this parameter changed when inpainting changed to gradient denoise. The generation slice's redux migration wasn't updated, resulting in a generation error until you change the setting or reset the web UI.
- Add and use more performant `deepClone` method for deep copying throughout the UI.
Benchmarks indicate the Really Fast Deep Clone library (`rfdc`) is the best all-around way to deep-clone large objects.
This is particularly relevant in canvas. When drawing or otherwise manipulating canvas objects, we need to do a lot of deep cloning of the canvas layer state objects.
Previously, we were using lodash's `cloneDeep`.
I did some fairly realistic benchmarks with a handful of deep-cloning algorithms/libraries (including the native `structuredClone`). I used a snapshot of the canvas state as the data to be copied:
On Chromium, `rfdc` is by far the fastest, over an order of magnitude faster than `cloneDeep`.
On FF, `fastest-json-copy` and `recursiveDeepCopy` are even faster, but are rather limited in data types. `rfdc`, while only half as fast as the former 2, is still nearly an order of magnitude faster than `cloneDeep`.
On Safari, `structuredClone` is the fastest, about 2x as fast as `cloneDeep`. `rfdc` is only 30% faster than `cloneDeep`.
`rfdc`'s peak memory usage is about 10% more than `cloneDeep` on Chrome. I couldn't get memory measurements from FF and Safari, but let's just assume the memory usage is similar relative to the other algos.
Overall, `rfdc` is the best choice for a single algo for all browsers. It's definitely the best for Chromium, by far the most popular desktop browser and thus our primary target.
A future enhancement might be to detect the browser and use that to determine which algorithm to use.
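The helper itself is a thin wrapper - roughly the following sketch (the exact rfdc options may differ):

```ts
// deepClone helper built on rfdc ("really fast deep clone").
import rfdc from 'rfdc';

// Canvas state is plain, non-circular, serializable data, so the faster
// non-circular mode is fine here.
const _cloner = rfdc({ circles: false });

export const deepClone = <T>(obj: T): T => _cloner(obj);
```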
There were two ways the canvas history could grow too large (past the `MAX_HISTORY` setting):
- Sometimes, when pushing to history, we didn't `shift` an item out when we exceeded the max history size.
- If the max history size was exceeded by more than one item, we still only called `shift` once, which removes a single item.
These issues could appear after an extended canvas session, resulting in a memory leak and recurring major GCs/browser performance issues.
To fix these issues, a helper function is added for both past and future layer states, which uses slicing to ensure history never grows too large.
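The helper boils down to slicing from the end of the array. A sketch, with an illustrative cap:

```ts
// Push onto undo/redo history without ever exceeding the cap.
const MAX_HISTORY = 128; // illustrative value

export const pushWithLimit = <T>(history: T[], item: T): T[] => {
  // Slicing from the end trims any overflow, even if the array was already
  // over the limit (e.g. state persisted before this fix).
  return [...history, item].slice(-MAX_HISTORY);
};
```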
Previously, exceptions raised as custom nodes are initialized were fatal errors, causing the app to exit.
With this change, any error on import is caught and the error message printed. The app continues to start up without the node.
For example, a custom node that isn't updated for v4.0.0 may raise an error on import if it is attempting to import things that no longer exist.
Add `dump_path` arg to the converter function & save the model to disk inside the conversion function. This is the same pattern as in the other conversion functions.
Prefer an early return/continue to reduce the indentation of the processor loop. Easier to read.
There are other ways to improve its structure but at first glance, they seem to involve changing the logic in scarier ways.
This must not have been tested after the processors were unified. Needed to shift the logic around so the resume event is handled correctly. Clear and easy fix.
* pass model config to _load_model
* make conversion work again
* do not write diffusers to disk when convert_cache set to 0
* adding same model to cache twice is a no-op, not an assertion error
* fix issues identified by psychedelicious during pr review
* following conversion, avoid redundant read of cached submodels
* fix error introduced while merging
---------
Co-authored-by: Lincoln Stein <lstein@gmail.com>
We switched all model paths to be absolute in #5900. In hindsight, this is a mistake, because it makes the `models_dir` non-portable.
This change reverts to the previous model pathing:
- Invoke-managed models (in the `models_dir`) are stored with relative paths
- Non-invoke-managed models (outside the `models_dir`, i.e. in-place installed models) still have absolute paths.
## Why absolute paths make things non-portable
Let's say my `models_dir` is `/media/rhino/invokeai/models/`. In the DB, all model paths will be absolute children of this path, like this:
- `/media/rhino/invokeai/models/sd-1/main/model1.ckpt`
I want to change my `models_dir` to `/home/bat/invokeai/models/`. I update my `invokeai.yaml` file and physically move the files to that directory.
On startup, the app checks for missing models. Because all of my model paths were absolute, they now point to a nonexistent path. All models are broken.
There are a couple options to recover from this situation, neither of which are reasonable:
1. The user must manually update every model's path. Unacceptable UX.
2. On startup, we check for missing models. For each missing model, we compare its path with the last-known models dir. If there is a match, we replace that portion of the path with the new models dir. Then we re-check to see if the path exists. If it does, we update the models DB entry. Brittle and requires a new DB entry for last-known models dir.
It's better to use relative paths for Invoke-managed models.
The seamless logic errors when a second GPU is selected. I don't understand why, but a workaround is to skip the model patching when there are no seamless axes specified.
This is also just a good practice regardless - don't patch the model unless we need to. Probably a negligible perf impact.
Closes #6010
The build workflow was naming the file `InvokeAI-installer-v4.0.0rc6.zip.zip` (note the double ".zip"). This caused some confusion when creating releases on GitHub.
Name the build artifact `installer`. This results in `installer.zip`, which makes it clear the archive needs to be extracted before uploading to the GH release.
`scripts/get_external_contributions.py` gets all commits between two refs and outputs a summary.
Useful for getting all external contributions for release notes.
There's still a few references in `WEB.md` but this doc is very outdated and needs to be totally redone. It's hard to just remove the references without redoing a lot more.
Will need to follow up revising this doc.
These two changes are interrelated.
## Autoimport
The autoimport feature can be easily replicated using the scan folder tab in the model manager. Removing the implicit autoimport reduces surface area and unifies all model installation into the UI.
This functionality is removed, and the `autoimport_dir` config setting is removed.
## Startup model dir scanning
We scanned the invoke-managed models dir on startup and took certain actions:
- Register orphaned model files
- Remove model records from the db when the model path doesn't exist
### Orphaned model files
We should never have orphaned model files during normal use - we manage the models directory, and we only delete files when the user requests it.
During testing or development, when a fresh DB or memory DB is used, we could end up with orphaned models that should be registered.
Instead of always scanning for orphaned models and registering them, we now only do the scan if the new `scan_models_on_startup` config flag is set.
The description for this setting indicates it is intended for use for testing only.
### Remove records for missing model files
This functionality could unexpectedly wipe models from the db.
For example, if your models dir was on external media, and that media was inaccessible during startup, the scan would see all your models as missing and delete them from the db.
The "proactive" scan is removed. Instead, we will scan for missing models and log a warning if we find a model whose path doesn't exist. No possibility for data loss.
I had added this because I mistakenly believed the HF token was required to download HF models.
Turns out this is not the case, and the vast majority of HF models do not need the API token to download.
"Normal" models have 4 in-channels, while "Depth" models have 5 and "Inpaint" models have 9.
We need to explicitly tell diffusers the channel count when converting models.
Closes #6058
It's possible for a model's state dict to have integer keys, though we do not actually support such models.
As part of probing, we call `key.startswith(...)` on the state dict keys. This raises an `AttributeError` for integer keys.
This logic is in `invokeai/backend/model_manager/probe.py:get_model_type_from_checkpoint`
To fix this, we can cast the keys to strings first. The models w/ integer keys will still fail to be probed, but we'll get an `InvalidModelConfigException` instead of an `AttributeError`.
Closes #6044
Previously we only handled expected error types. If a different error was raised, the install job would end up in an unexpected state where it has failed and isn't doing anything, but its status is still running.
This indirectly prevents the installer threads from exiting - they are waiting for all jobs to be completed, including the failed-but-still-running job.
We need to handle any error here to prevent this.
This allows us to easily test the installer without needing the desired version to be published on PyPI:
```sh
python3 installer/lib/main.py --wheel installer/dist/InvokeAI-4.0.0rc6-py3-none-any.whl
```
A warning message and confirmation are displayed when the arg is used.
The rest of the installer is unchanged.
Updating should always be done via the installer. We initially planned to only deprecate the updater, but given the scale of changes for v4, there's no point in waiting to remove it entirely.
Loading default workflows sometimes requires we mutate the workflow object in order to change the category or ID of the workflow.
This happens in `invokeai/frontend/web/src/features/nodes/util/workflow/validateWorkflow.ts`
The data we get back from the query hooks is frozen and sealed by redux, because they are part of redux state. We need to clone the workflow before operating on it.
It's not clear how this ever worked in the past, because redux state has always been frozen and sealed.
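In practice the fix is a deep copy before any mutation. A sketch, with an illustrative workflow shape:

```ts
// Clone the frozen query result before changing its category/ID.
import rfdc from 'rfdc';

const deepClone = rfdc();

type WorkflowLike = { id: string; meta: { category: string } };

export const asDefaultWorkflow = (frozen: WorkflowLike, id: string): WorkflowLike => {
  // `frozen` comes from redux state and is frozen/sealed; mutating it throws.
  const workflow = deepClone(frozen);
  workflow.id = id;
  workflow.meta.category = 'default';
  return workflow;
};
```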
Add `extra="forbid"` to the default settings models.
Closes#6035.
Pydantic has some quirks related to unions. This affected how the union of default settings was evaluated. See https://github.com/pydantic/pydantic/issues/9095 for a detailed description of the behaviour that this change addresses.
- Enriched dependencies to not just be a string - allows reuse of a dependency as a starter model _and_ dependency of another model. For example, all the SDXL models have the fp16 VAE as a dependency, but you can also download it on its own.
- Looked at popular models on the major model sites to select the list. No SD2 models. All hosted on HF.
* Fix minor bugs involving model manager handling of model paths
- Leave models found in the `autoimport` directory there. Do not move them
into the `models` hierarchy.
- If model name, type or base is updated and model is in the `models` directory,
update its path as appropriate.
- On startup during model scanning, if a model's path is a symbolic link, then resolve
to an absolute path before deciding it is a new model that must be hashed and
registered. (This prevents needless hashing at startup time).
* fix issue with dropped suffix
---------
Co-authored-by: Lincoln Stein <lstein@gmail.com>
Currently translated at 98.2% (1102 of 1122 strings)
translationBot(ui): update translation (Italian)
Currently translated at 97.9% (1099 of 1122 strings)
translationBot(ui): update translation (Italian)
Currently translated at 97.9% (1099 of 1122 strings)
Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
- Add patched rootdir fixture to all config tests. I think this isn't strictly necessary but it does ensure that any config tests that need to write files don't accidentally write to user data locations.
- Be more careful when calling `get_config()` in the tests, by clearing the LRU cache before and after. This ensures a test doesn't reference the singleton config created by a previously run test.
- Add test for env var parsing.
- Add test for config writing in the context of `get_config()`. This is effectively a mini e2e test for the config lifecycle.
Add class `DefaultInvokeAIAppConfig`, which inherits from `InvokeAIAppConfig`. When instantiated, this class does not parse environment variables, so it outputs a "clean" default config. That's the only difference.
Then, we can use this new class in the 3 places:
- When creating the example config file (no env vars should be here)
- When migrating a v3 config (we want to instantiate the migrated config without env vars, so that when we write it out, they are not written to disk)
- When creating a fresh config file (i.e. on first run with an uninitialized root or new config file path - no env vars here!)
For SSDs, `blake3` is about 10x faster than `blake3_single` - 30 files/second vs 3 files/second.
For spinning HDDs, `blake3` is about 100x slower than `blake3_single` - 300 seconds/file vs 3 seconds/file.
For external drives, `blake3` is always worse, but the difference is highly variable. For external spinning drives, it's probably way worse than internal.
The least offensive algorithm is `blake3_single`, and it's still _much_ faster than any other algorithm.
With the change to model identifiers from v3 to v4, if a user had persisted redux state with the old format, we could get unexpected runtime errors when rehydrating state if we try to access model attributes that no longer exist.
For example, the CLIP Skip component does this:
```ts
CLIP_SKIP_MAP[model.base].maxClip
```
In v3, models had a `base_type` attribute, but it is renamed to `base` in v4. This code therefore causes a runtime error:
- `model.base` is `undefined`
- `CLIP_SKIP_MAP[undefined]` is also undefined
- `undefined.maxClip` is a runtime error!
Resolved by adding a migration for the redux slices that have model identifiers. The migration simply resets the slice or the part of the slice that is affected, when it's simple to do a partial reset.
Closes #6000
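A sketch of the migration approach (the slice shape and field names here are illustrative; each affected slice gets its own migration):

```ts
// Reset stale model identifiers when rehydrating persisted v3 state.
type ModelIdentifierV3 = { base_type: string; model_name: string };
type ModelIdentifierV4 = { base: string; name: string };

type GenerationStateV3 = { model: ModelIdentifierV3 | null; positivePrompt: string };
type GenerationStateV4 = { model: ModelIdentifierV4 | null; positivePrompt: string };

export const migrateGenerationState = (old: GenerationStateV3): GenerationStateV4 => ({
  ...old,
  // The old identifier can't be mapped reliably onto the new schema, so reset
  // it and let the user re-select the model after the update.
  model: null,
});
```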
If you switch between different branches, by the time you get back to `main`, a different version of `ruff` might be installed that has slightly different formatting rules. This leads to incorrect formatting changes.
Pinning `ruff` avoids this issue.
* add probe for SDXL controlnet models
* Update invokeai/backend/model_management/model_probe.py
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
* Update invokeai/backend/model_manager/probe.py
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
---------
Co-authored-by: Lincoln Stein <lstein@gmail.com>
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
These all support controlnet processors.
- `pil_to_cv2`
- `cv2_to_pil`
- `pil_to_np`
- `np_to_pil`
- `normalize_image_channel_count` (a readable version of `HWC3` from the controlnet repo)
- `fit_image_to_resolution` (a readable version of `resize_image` from the controlnet repo)
- `non_maximum_suppression` (a readable version of `nms` from the controlnet repo)
- `safe_step` (a readable version of `safe_step` from the controlnet repo)
Some processors, like Canny, didn't use `detect_resolution`. The resultant control images were then resized by the processors from 512x512 to the desired dimensions. The result is that the control images are the right size, but very low quality.
Using detect_resolution fixes this.
- Display a toast on UI launch if the HF token is invalid
- Show form in MM if token is invalid or unable to be verified, let user set the token via this form
This allows users to create simple "profiles" via separate `invokeai.yaml` files.
- Remove `InvokeAIAppConfig.set_root()`, it's extraneous
- Remove `InvokeAIAppConfig.merge_from_file()`, it's extraneous
- Add `--config` to the app arg parser, add `InvokeAIAppConfig._config_file`, and consume in the config singleton getter
- `InvokeAIAppConfig.init_file_path` -> `InvokeAIAppConfig.config_file_path`
The models from INITIAL_MODELS.yaml have been recreated as a structured python object. This data is served on a new route. The model sources are compared against currently-installed models to determine if they are already installed or not.
This flag acts as a proxy for the `get_config()` function to determine if the full application is running.
If it is, the config will set the root, do the HF login, etc.
If not (e.g. it's called by an external script), all that stuff will be skipped.
HF login, legacy yaml confs, and default init file are all handled during app setup.
All directories are created as they are needed by the app.
No need to check for a valid root dir - we will make it if it doesn't exist.
This provides a simple way to provide a HF token. If HF reports no valid token, one is prompted for until a valid token is provided, or the user presses Ctrl + C to cancel.
This simple package provides a cross-platform way to type a password on the CLI and have it show up as asterisks.
The fork, pending merge into the upstream package, adds support for Ctrl+C to cancel input.
Use the util function to calculate ram cache size on startup. This way, the `ram` setting will always be optimized for a system, even if they add or remove RAM. In other words, the default value is now dynamic.
- Move base of t2i and clip_vision config models to DiffusersBase, which contains
a field to record the model variant (e.g. "fp16")
- This restores the ability to load fp16 t2i and clip_vision models
- Also add defensive coding to load the vanilla model when the fp16 model
has been replaced (or more likely, user's preferences changed since installation)
Co-authored-by: Lincoln Stein <lstein@gmail.com>
When consolidating all the model queries I messed up the query tags. Fixed now, so that when a model is installed, removed, or changed, the list refreshes.
Currently translated at 52.5% (576 of 1096 strings)
translationBot(ui): update translation (Japanese)
Currently translated at 52.0% (570 of 1096 strings)
Co-authored-by: Gohsuke Shimada <ghoskay@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ja/
Translation: InvokeAI/Web UI
Currently translated at 97.8% (1510 of 1543 strings)
translationBot(ui): update translation (Italian)
Currently translated at 98.1% (1503 of 1532 strings)
translationBot(ui): update translation (Italian)
Currently translated at 98.1% (1503 of 1532 strings)
Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
In order to allow for null and undefined metadata values, this hook returned a symbol to indicate that parsing failed or was pending.
For values where the parsed value will never be null or undefined, it is useful to get the value or null (instead of a symbol).
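Roughly, the two shapes look like this (the symbol and helper names are illustrative, not the actual hook API):

```ts
// The parser signals "pending or failed" with a sentinel symbol...
export const ParseFailedOrPending = Symbol('ParseFailedOrPending');
type ParseResult<T> = T | typeof ParseFailedOrPending;

// ...and for fields that can never legitimately be null/undefined, callers can
// collapse that sentinel into a plain `T | null`.
export const toValueOrNull = <T>(result: ParseResult<T>): T | null =>
  result === ParseFailedOrPending ? null : result;
```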
When running the configurator, the `legacy_models_conf_path` was stripped when saving the config file. Then the migration logic didn't fire correctly, and the custom models.yaml paths weren't migrated into the db.
- Rework the logic to migrate this path by adding it to the config object as a normal field that is not excluded from serialization.
- Rearrange the models.yaml migration logic to remove the legacy path after migrating, then write the config file. This way, the legacy path doesn't stick around.
- Move the schema version into the config object.
- Back up the config file before attempting migration.
- Add tests to cover this edge case
Hold onto `conf_path` temporarily while migrating `invokeai.yaml` so that it gets migrated correctly as the model installer starts up. Stashed as `legacy_models_yaml_path` in the config, excluded from serialization.
We have two problems with how argparse is being utilized:
- We parse CLI args as the `api_app.py` file is read. This causes a problem with pytest, which has an incompatible set of CLI args. Some tests import the FastAPI app, which triggers the config to parse CLI args, which receives the pytest args and fails.
- We've repeatedly had problems when something that uses the config is imported before the CLI args are parsed. When this happens, the root dir may not be set correctly, so we attempt to operate on incorrect paths.
To resolve these issues, we need to lift CLI arg parsing outside of the application code, but still let the application access the CLI args. We can create an external app entrypoint to do this.
- `InvokeAIArgs` is a simple helper class that parses CLI args and stores the result.
- `run_app()` is the new entrypoint. It first parses CLI args, then runs `invoke_api` to start the app.
The `invokeai-web` project script and `invokeai-web.py` dev script now call `run_app()` instead of `invoke_api()`.
The first time `get_config()` is called to get the singleton config object, it retrieves the args from `InvokeAIArgs`, sets the root dir if provided, then merges settings in from `invokeai.yaml`.
CLI arg parsing is now safely insulated from application code, but still accessible. And we don't need to worry about import order having an impact on anything, because by the time the app is running, we have already parsed CLI args. Whew!
This fixes an issue with `test_images.py`, which tests the bulk images routers and imports the whole FastAPI app. This triggers the config logic which fails on the test runner, because it has no `invokeai.yaml`.
Also probably just good for graceful fallback.
- `write_file` requires a destination file path
- `read_config` -> `merge_from_file`, if no path is provided, reads from `self.init_file_path`
- update app, tests to use new methods
- fix configurator, was overwriting config file data unexpectedly
Tweak the name of it so that incoming configs with the old default value of 6 have the setting stripped out. The result is all configs will now have the new, much better default value of 1.
Having this all in the `get_config` function makes testing hard. Move these two functions to their own methods, and call them on app startup explicitly.
- Remove OmegaConf. It functioned as an intermediary data format, between YAML/argparse and pydantic. It's not necessary - we can parse YAML or CLI args directly with pydantic.
- Remove dynamic CLI args. Only `root` is explicitly supported. This greatly simplifies config handling. Configuration is done by editing the YAML file. Frequently-used args can be added if there is a demand.
- A separate arg parser is created to handle the slimmed-down CLI args. It's run immediately in the `invokeai-web` script to handle `--version` and `--help`. It is also used inside the singleton config getter (see below).
- Remove categories from the config. Our settings model is mostly flat. Handling categories adds complexity for both us and users - we have to handle transforming a flat config to categorized config (and vice-versa), while users have to be careful with indentation in their YAML file.
- Add a `meta` key to the config file. Currently, this holds the config schema version only. It is not a part of the config object itself.
- Remove legacy settings that are no longer referenced, or were effectively no-op settings when referenced in code.
- Implement simple migration logic for v3 configs. If migration is successful, the v3 config file is backed up to `invokeai.yaml.bak` and the new config written to `invokeai.yaml`.
- Previously, the singleton config was accessed by calling `InvokeAIAppConfig.get_config()`. This returned an instance of `InvokeAIAppConfig`, which _also_ has the `get_config` function. This created a confusing situation where you weren't sure if you needed to call `get_config` or just use the config object. This method is replaced by a standalone `get_config` function which returns a singleton config object.
- Wrap CLI arg parsing (for `root`) and loading/migrating `invokeai.yaml` into the new `get_config()` function.
- Move `generate_config_docstrings` into standalone utility function.
- Make `root` a private attr (`_root`). This reduces the temptation to directly modify and/or use this sensitive field and ensures it is neither serialized nor read from input data. Use `root_path` to access the resolved root path, or `set_root` to set the root to something.
* allow removal of models with legacy relative path addressing
* added five controlnet models for sdxl to INITIAL_MODELS
---------
Co-authored-by: Lincoln Stein <lstein@gmail.com>
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [X] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No
## Description
We've been using a forked copy of the diffusers safetensors->diffusers
model conversion code, which was hacked to read CLIP and the other
models needed for conversion from the local invokeai root models
directory. This was getting unsustainable as the code bases diverged,
and also required the installation and maintenance of the "core/convert"
directory.
This PR gets rid of the hacked conversion code and reverts to using the
native diffusers methods. Core convert models are no longer installed at
root configure time. Instead we rely on the HuggingFace hub system to
download the conversion models if and when they are needed. They are
relatively small and the initial delay seems minor.
Conversion of SD-1, SD-2 (both epsilon and v-prediction), SDXL, VAE and
ControlNet SD-1/2 models has been tested. ControlNet SDXL models are
still a WIP due to the need for some work on the prober.
The main implication of this change is that InvokeAI is no longer
internet-independent and will need an internet connection at least the
first time a safetensors file needs to be converted. However, there are
several other places where the "no internet" rule is already violated,
and I suggest that we abandon this principle.
## Related Tickets & Documents
- Related Issue #
- Closes #5964
## QA Instructions, Screenshots, Recordings
1. Remove or move `$INVOKEAI_ROOT/models/.cache`
2. Move `$INVOKEAI_ROOT/models/core/convert`
3. Try generating with an unconverted .safetensors model.
## Merge Plan
Merge when approved.
## Added/updated tests?
- [ ] Yes
- [ ] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?
- No longer install core conversion models. Use the HuggingFace cache to load
them if and when needed.
- Call directly into the diffusers library to perform conversions with only shallow
wrappers around them to massage arguments, etc.
- At root configuration time, do not create all the possible model subdirectories,
but let them be created and populated at model install time.
- Remove checks for missing core conversion files, since they are no
longer installed.
In the client, a controlnet or t2i adapter has two images:
- The source control image: the image the user selected (required)
- The processed control image: the user's image after we've processed it (optional)
The processed image is optional because a user may provide a pre-processed image.
We only actually use one of these images when building the graph, and until this change, we only stored one of them in image metadata. This created a situation where only a processed image was stored in metadata - say, a canny edge map - and the user-selected image wasn't provided.
By adding the processed image to metadata, we can recall both the control image and optional processed image.
This commit is followed by UI-facing changes to support it.
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Description
Single query, with simple wrapper hooks (type-safe). Updated everywhere
in frontend.
## QA Instructions, Screenshots, Recordings
Things that use models should work. All of this code is strictly
typechecked, so we can be confident in this change.
## Merge Plan
This PR can be merged when approved
We were passing a PIL image when we needed to pass the np image.
Closes #5956
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Description
We were passing a PIL image when we needed to pass the np image.
Closes #5956
## Related Tickets & Documents
- Related Issue #
- Closes #5956
## QA Instructions, Screenshots, Recordings
Depth anything processor should work.
## Merge Plan
This PR can be merged when approved
- This adds additional logic to the safetensors->diffusers conversion script
to check for and install missing core conversion models at runtime.
- Fixes #5934
BLAKE3 has poor performance on spinning disks when parallelized. See https://github.com/BLAKE3-team/BLAKE3/issues/31
- Replace `skip_model_hash` setting with `hashing_algorithm`. Any algorithm we support is accepted.
- Add `random` algorithm: hashes a UUID with BLAKE3 to create a random "hash". Equivalent to the previous skip functionality.
- Add `blake3_single` algorithm: hashes on a single thread using BLAKE3, fixes the aforementioned performance issue
- Update model probe to accept the algorithm to hash with as an optional arg, defaulting to `blake3`
- Update all calls of the probe to use the app's configured hashing algorithm
- Update an external script that probes models
- Update tests
- Move ModelHash into its own module to avoid circular import issues
This script removes unused translations from the `en.json` source translation file:
- Parse `en.json` to build a list of all keys, e.g. `controlnet.depthAnything`
- Check every frontend file for every key
- If the key is not found, it is removed from the translation file
- Keys with an exact match (e.g. `controlnet.depthAnything`) or a stem match (e.g. `depthAnything`) in any frontend file are treated as used and kept (see the sketch below)
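A rough sketch of the detection step (file collection and the JSON rewrite are omitted; names are illustrative):

```ts
// Return the translation keys that appear nowhere in the frontend source.
export const findUnusedKeys = (
  en: Record<string, unknown>,
  sourceFiles: string[],
): string[] => {
  // Flatten nested translation objects into dot-separated keys,
  // e.g. "controlnet.depthAnything".
  const flatten = (obj: Record<string, unknown>, prefix = ''): string[] =>
    Object.entries(obj).flatMap(([k, v]) =>
      v !== null && typeof v === 'object'
        ? flatten(v as Record<string, unknown>, `${prefix}${k}.`)
        : [`${prefix}${k}`],
    );

  return flatten(en).filter((key) => {
    const stem = key.split('.').pop() ?? key;
    // A key counts as used if any source file contains the full key or its stem.
    return !sourceFiles.some((src) => src.includes(key) || src.includes(stem));
  });
};
```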
The graph builders used awaited functions within `Array.prototype.forEach` loops. This doesn't do what you'd think. This caused graphs to be enqueued before they were fully constructed.
Changed to `for..of` loops to fix this.
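The difference, sketched with a hypothetical `addNode` helper:

```ts
// Hypothetical helper representing one async graph-building step.
declare const addNode: (id: string) => Promise<void>;

// BAD: forEach ignores the returned promises, so this resolves before any node
// has actually been added - the graph gets enqueued half-built.
export const buildGraphBroken = async (ids: string[]): Promise<void> => {
  ids.forEach(async (id) => {
    await addNode(id);
  });
};

// GOOD: for..of awaits each step, so the graph is fully built before enqueueing.
export const buildGraphFixed = async (ids: string[]): Promise<void> => {
  for (const id of ids) {
    await addNode(id);
  }
};
```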
There wasn't enough validation of control adapters during graph building. It would be possible for a graph to be built with an empty collect node, causing an error. Addressed with an extra check.
This should never happen in practice, because the invoke button should be disabled if an invalid CA is active.
## What type of PR is this? (check all applicable)
- [x] Optimization
## Description
Was merged into next but never carried over to main. So cleaning up
again.
This bypasses the `changed-files` check, and forces the checks to run. The release workflow sets this flag to ensure that the checks and tests are always run for a release.
…ention processors if no mid_block is detected
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [ ] Yes
- [x] No
## Description
L2i throws an assertion error when run with `madebyollin/taesdxl` due to
it requiring a different class in diffusers to load it. This is a small
PR to update seamless and l2i to accept AutoencoderTiny models and not
throw exceptions while processing them.
## QA Instructions, Screenshots, Recordings
<img width="445" alt="Screenshot 2024-03-12 at 12 04 29 PM"
src="https://github.com/invoke-ai/InvokeAI/assets/58442074/34a17e44-d911-4fef-8fc1-71f7b688688c">
Run an sdxl pipeline using a vae that requires AutoencoderTiny and
validate that the image successfully encodes and decodes.
## Merge Plan
This PR can be merged when approved
We were stripping the file extension from file models when moving them in `_sync_model_path`. For example, `some_model.safetensors` would be moved to `some_model`, which of course breaks things.
Instead of using the model's name as the new path, use the model's path's last segment. This is the same behaviour for directories, but for files, it retains the file extension.
- No need for it to be a pydantic model. Just a class now.
- Remove ABC, it made it hard to understand what was going on as attributes were spread across the ABC and implementation. Also, there is no other implementation.
- Add tests
- If the metadata yaml has an invalid version, exit the app. If we don't, the app will crawl the models dir and add models to the db without having first parsed `models.yaml`. This should not happen often, as the vast majority of users are on v3.0.0 models.yaml files.
- Fix off-by-one error with models count (need to pop the `__metadata__` stanza)
- After a successful migration, rename `models.yaml` to `models.yaml.bak` to prevent the migration logic from re-running on subsequent app startups.
The old logic to check if a model needed to be moved relied on the model path being a relative path. Paths are now absolute, causing this check to fail. We then assumed the paths were different and moved the model from its current location to, well, its current location.
Use more resilient method to check if a model should be moved.
mkdocs can autogenerate python class docs from its docstrings. Our config is a pydantic model.
It's tedious and error-prone to duplicate docstrings from the pydantic field descriptions to the class docstrings.
- Add helper function to generate a mkdocs-compatible docstring from the InvokeAIAppConfig class fields
Recently the schema for models was changed to a generic `ModelField`, and the UI was unable to derive the type of those fields. This didn't affect functionality, but it did break the styling of handles.
Add `ui_type` to the affected fields and update the UI to use the correct capitalizations.
A list of regex and token pairs is accepted. As a file is downloaded by the model installer, the URL is tested against the provided regex/token pairs. The token for the first matching regex is used during download, added as a bearer token.
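A minimal sketch of how this matching could work (the pattern/token pairs and helper name here are hypothetical):

```python
import re
from typing import Optional

# Hypothetical configuration: (regex, token) pairs, tried in order.
REMOTE_API_TOKENS = [
    (r"civitai\.com", "example-civitai-token"),
    (r"huggingface\.co", "example-hf-token"),
]


def bearer_token_for_url(url: str) -> Optional[str]:
    # The token for the first matching regex wins.
    for pattern, token in REMOTE_API_TOKENS:
        if re.search(pattern, url):
            return token
    return None


headers = {}
token = bearer_token_for_url("https://civitai.com/api/download/models/12345")
if token:
    headers["Authorization"] = f"Bearer {token}"
```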
Without this, the form will incorrectly compare its state to its initial default values to determine if it is dirty. Instead, it should reset its default values to the new values after successful submit.
When we change a model image, its URL remains the same. The browser will aggressively cache the image. The easiest way to fix this is to append a random query parameter to the URL whenever we build a model config in the API.
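A minimal sketch of the idea, assuming a hypothetical helper that decorates the image URL when the model config is built:

```python
from uuid import uuid4


def with_cache_buster(image_url: str) -> str:
    # Append a random query parameter so the browser treats each build of the
    # config as a fresh URL and cannot serve a stale cached image.
    separator = "&" if "?" in image_url else "?"
    return f"{image_url}{separator}{uuid4().hex}"
```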
- Move image display to left
- Move description into model header
- Move model edit & convert buttons to top right of model header
- Tweak styles for model display component
Currently translated at 98.0% (1487 of 1516 strings)
translationBot(ui): update translation (Italian)
Currently translated at 98.0% (1482 of 1512 strings)
translationBot(ui): update translation (Italian)
Currently translated at 98.0% (1475 of 1505 strings)
Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
- All models are identified by a key and optionally a submodel type via new model `ModelField`. Previously, a few model types had their own class, but not all of them. This inconsistency just added complexity without any benefit.
- Update all invocations to use the new format.
- In the node API, models are loaded by key or an instance of `ModelField` as a convenience.
- Add an enriched model schema for metadata. It includes key, hash, name, base and type.
In order for delete by match to work, we need the whole invocation output to be stringified.
For some reason, the serialization of the output was set to only include the `type` field. It should instead include the whole output.
I don't understand how this ever worked unless pydantic had different serialization behaviour in v1 (though it appears to have been the same).
Closes #5805
* move defaultModel logic to modelsLoaded and update to work for key instead of name/base/type string
* lint fix
---------
Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
- Update all queries
- Remove Advanced Add
- Removed un-editable, internal-only model attributes from model edit UI (e.g. format, repo variant, model type)
- Update model tags so the list refreshes when a model installs
- Rename some queries, components, variables, types to match backend
- Fix divide-by-zero in install queue
Rename MM routes to be consistent:
- "import" -> "install"
- "model_record" -> "model"
Comment several unused routes while I work (may end up removing them?):
- list model summary (we use the search route instead)
- add model record
- convert model
- merge models
There is a breaking change in python 3.11 related to how enums with `str` as a mixin are formatted. This appears to have not caused any grief for us until now.
Re-jigger the discriminator setup to use `.value` so everything works on both python 3.10 and 3.11.
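A small illustration of why `.value` is the safe choice here (the enum is hypothetical; the formatted output of a `str`-mixin enum differs between Python 3.10 and 3.11):

```python
from enum import Enum


class ModelType(str, Enum):
    MAIN = "main"


# The f-string / format() output of a str-mixin enum member changed between
# Python 3.10 and 3.11, so building discriminators from f"{ModelType.MAIN}"
# is version-dependent. The .value attribute is identical on both:
assert ModelType.MAIN.value == "main"
```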
- Metadata is merged with the config. We can simplify the MM substantially and remove the handling for metadata.
- Per discussion, we don't have an ETA for frontend implementation of tags, and with the realization that the tags from CivitAI are largely useless, there's no reason to keep tags in the MM right now. When we are ready to implement tags on the frontend, we can refer back to the implementation here and use it if it supports the design.
- Fix all tests.
Sometimes, diffusers model components (tokenizer, unet, etc.) have multiple weights files in the same directory.
In this situation, we assume the files are different versions of the same weights. For example, we may have multiple
formats (`.bin`, `.safetensors`) with different precisions. When downloading model files, we want to select only
the best of these files for the requested format and precision/variant.
The previous logic assumed that each model weights file would have the same base filename, but this assumption was
not always true. The logic is revised to score each file and choose the best-scoring file, resulting in only a single
file being downloaded for each submodel/subdirectory.
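A simplified sketch of the scoring idea (the weights and file names below are purely illustrative):

```python
from pathlib import Path


def score_weights_file(path: Path, want_safetensors: bool = True, want_fp16: bool = True) -> int:
    """Score one candidate file; higher is better."""
    score = 0
    if (path.suffix == ".safetensors") == want_safetensors:
        score += 2  # preferred format outweighs precision
    if ("fp16" in path.name) == want_fp16:
        score += 1
    return score


candidates = [
    Path("unet/diffusion_pytorch_model.fp16.safetensors"),
    Path("unet/diffusion_pytorch_model.bin"),
]
# Only the best-scoring file per submodel/subdirectory would be downloaded.
best = max(candidates, key=score_weights_file)
```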
* UI in MM to create trigger phrases
* add scheduler and vaePrecision to config
* UI for configuring default settings for models
* hook MM default model settings up to API
* add button to set default settings in parameters
* pull out trigger phrases
* back-end for default settings
* lint
* remove log
* ruff
* ruff format
---------
Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
- Use memory view for hashlib algorithms (closer to python 3.11's filehash API in hashlib)
- Remove `sha1_fast` (realized it doesn't even hash the whole file, it just does the first block)
- Add support for custom file filters
- Update docstrings
- Update tests
- When installing, model keys are now calculated from the model contents.
- .safetensors, .ckpt and other single file models are hashed with sha1
- The contents of diffusers directories are hashed using imohash (faster)
fixup yaml->sql db migration script to assign deterministic key
- this commit also detects and assigns the correct image encoder for
ip adapter models.
## What type of PR is this? (check all applicable)
- [x] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because
## Description
Attention map saving was a feature that existed a long time ago in
Invoke (>1 year ago). This PR strips out a bunch of dead code that still
remains from that feature and is polluting our diffusion implementation.
This change should not have any functional effect on the app.
## QA Instructions, Screenshots, Recordings
I did a quick smoke test of SD and SDXL image generation. All of the
deleted code was unused, so the risk should be relatively low.
## Merge Plan
- [x] Change target branch to `main` before merging.
## Added/updated tests?
- [ ] Yes
- [x] No: This PR just deletes a bunch of unused code.
The timeouts are at least 3x the expected time to complete the jobs.
This is particularly relevant for the `pytest` job. Occasionally, it hangs while running tests that do network things, and the job only times out after 6 hours.
- Restructure & update code check workflows
- Add release workflow to handle checks/tests, build and publish to PyPI
- Add docs/RELEASE.md explaining the workflow & process
- `create_installer.sh`: Update to work with the release workflow
- `create_installer.sh` & `tag_release.sh`: Fix the ANSI escape codes for macOS
- `tag_release.sh`: Add check for python binary name
- `tag_release.sh`: Print `git remote -v` output
- `tag_release.sh`: Fix error when deleting nonexistent tags
This ensures it matches the github workflow.
Also there's an update that stabilizes a number of formatting rules, so there will be a format commit after this.
Model metadata includes the main model, VAE and refiner model.
These used full model configs, as returned by the server, as their metadata type.
LoRA and control adapter metadata only use the metadata identifier.
This created a difference in handling. After parsing a model/vae/refiner, we have its name and can display it. But for LoRAs and control adapters, we only have the model key and must query for the full model config to get the name.
This change makes main model/vae/refiner metadata only have the model key, like LoRAs and control adapters.
The render function is now async so fetching can occur within it. All metadata fields with models now only contain the identifier, and fetch the model name to render their values.
When we retrieve a list of models, upsert that data into the `getModelConfig` and `getModelConfigByAttrs` query caches.
With this change, calls to those two queries are almost always going to be free, because their caches will already have all models in them. The exception is queries for models that no longer exist.
Add concepts for metadata handlers. Handlers include parsers, recallers and validators for different metadata types:
- Parsers parse a raw metadata object of any shape to a structured object.
- Recallers load the parsed metadata into state. Recallers are optional, as some metadata types don't need to be loaded into state.
- Validators provide an additional layer of validation before recalling the metadata. This is needed because a metadata object may be valid, but not able to be recalled due to some other requirement, like base model compatibility. Validators are optional.
Sometimes metadata is not a single object but a list of items - like LoRAs. Metadata handlers may implement an optional set of "item" handlers which operate on individual items in the list.
Parsers and validators are async to allow fetching additional data, like a model config. Recallers are synchronous.
These handlers are composed into a public API, exported as a `handlers` object. Besides the handler functions, a metadata handler set includes:
- A function to get the label of the metadata type.
- An optional function to render the value of the metadata type.
- An optional function to render the _item_ value of the metadata type.
Gets the first model that matches the given name, base and type. Raises 404 if there isn't one.
This will be used for backwards compatibility with old metadata.
This was done in the frontend before but it's something the backend should handle.
The logic compares the found model paths to the path and source of all installed models. It excludes core models.
Refactor of metadata recall handling. This is in preparation for a backwards compatibility layer for models.
- Create helpers to fetch a model outside react (e.g. not in a hook)
- Created helpers to parse model metadata
- Renamed a lot of types that were confusing and/or had naming collisions
The setup of `ModelConfigBase` means autogenerated types have critical fields flagged as nullable (like `key` and `base`). Need to manually flag them as required.
- Support extended HF repoid syntax in TUI. This allows
installation of subfolders and safetensors files, as in
`XpucT/Deliberate::Deliberate_v5.safetensors`
- Add `error` and `error_traceback` properties to the install
job objects.
- Rename the `heuristic_import` route to `heuristic_install`.
- Fix the example `config` input in the `heuristic_install` route.
Notable updates:
- Minor version of RTK includes customizable selectors for RTK Query, so we can remove the patch that was added to ensure only the LRU memoize function was used for perf reasons. Updated to use the LRU memoize function.
- Major version of react-resizable-panels. No breaking changes, works great, and you can now resize all panels when dragging at the intersection point of panels. Cool!
- Minor (?) version of nanostores. `action` API is removed, we were using it in one spot. Fixed.
- @invoke-ai/eslint-config-react has all deps bumped and now has its dependent plugins/configs listed as normal dependencies (as opposed to peer deps). This means we can remove those packages from explicit dev deps.
- Use a single listener for all of the bulk download events to keep them in one spot
- Use the bulk download item name as a toast id so we can update the existing toasts
- Update handling to work with other environments
- Move all bulk download handling from components to listener
- Deduplicate the mock invocation services. This is possible now that the import order issue is resolved.
- Merge `DummyEventService` into `TestEventService` and update all tests to use `TestEventService`.
Double underscores are used in the app, but they don't actually do or convey anything that single underscores don't already do. They're considered unpythonic except for actual dunder/magic methods.
Consolidate graph processing logic into session processor.
With graphs as the unit of work, and the session queue distributing graphs, we no longer need the invocation queue or processor.
Instead, the session processor dequeues the next session and processes it in a simple loop, greatly simplifying the app.
- Remove `graph_execution_manager` service.
- Remove `queue` (invocation queue) service.
- Remove `processor` (invocation processor) service.
- Remove queue-related logic from `Invoker`. It now only starts and stops the services, providing them with access to other services.
- Remove unused `invocation_retrieval_error` and `session_retrieval_error` events, these are no longer needed.
- Clean up stats service now that it is less coupled to the rest of the app.
- Refactor cancellation logic - cancellations now originate from session queue (i.e. HTTP cancel endpoint) and are emitted as events. Processor gets the events and sets the canceled event. Access to this event is provided to the invocation context for e.g. the step callback.
- Remove `sessions` router; it provided access to `graph_executions` but that no longer exists.
`GraphInvocation` is a node that can contain a whole graph. It is removed for a number of reasons:
1. This feature was unused (the UI doesn't support it) and there is no plan for it to be used.
The use-case it served is known in other node execution engines as "node groups" or "blocks" - a self-contained group of nodes, which has group inputs and outputs. This is a planned feature that will be handled client-side.
2. It adds substantial complexity to the graph processing logic. It's probably not enough to have a measurable performance impact but it does make it harder to work in the graph logic.
3. It allows for graphs to be recursive, and the improved invocations union handling does not play well with it. Actually, it works fine within `graph.py` but not in the tests for some reason. I do not understand why. There's probably a workaround, but I took this as encouragement to remove `GraphInvocation` from the app since we don't use it.
The change to `Graph.nodes` and `GraphExecutionState.results` validation requires some fanagling to get the OpenAPI schema generation to work. See new comments for details.
We use pydantic to validate a union of valid invocations when instantiating a graph.
Previously, we constructed the union while creating the `Graph` class. This introduces a dependency on the order of imports.
For example, consider a setup where we have 3 invocations in the app:
- Python executes the module where `FirstInvocation` is defined, registering `FirstInvocation`.
- Python executes the module where `SecondInvocation` is defined, registering `SecondInvocation`.
- Python executes the module where `Graph` is defined. A union of invocations is created and used to define the `Graph.nodes` field. The union contains `FirstInvocation` and `SecondInvocation`.
- Python executes the module where `ThirdInvocation` is defined, registering `ThirdInvocation`.
- A graph is created that includes `ThirdInvocation`. Pydantic validates the graph using the union, which does not know about `ThirdInvocation`, raising a `ValidationError` about an unknown invocation type.
This scenario has been particularly problematic in tests, where we may create invocations dynamically. The test files have to be structured in such a way that the imports happen in the right order. It's a major pain.
This PR refactors the validation of graph nodes to resolve this issue:
- `BaseInvocation` gets a new method `get_typeadapter`. This builds a pydantic `TypeAdapter` for the union of all registered invocations, caching it after the first call.
- `Graph.nodes`'s type is widened to `dict[str, BaseInvocation]`. This actually is a nice bonus, because we get better type hints whenever we reference `some_graph.nodes`.
- A "plain" field validator takes over the validation logic for `Graph.nodes`. "Plain" validators totally override pydantic's own validation logic. The validator grabs the `TypeAdapter` from `BaseInvocation`, then validates each node with it. The validation is identical to the previous implementation - we get the same errors.
`BaseInvocationOutput` gets the same treatment.
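A condensed sketch of the approach (names simplified; for instance the real code caches the adapter via a `BaseInvocation.get_typeadapter` classmethod, whereas this sketch uses a module-level cache):

```python
from typing import Any, Literal, Optional, Union

from pydantic import BaseModel, TypeAdapter, field_validator


class BaseInvocation(BaseModel):
    id: str


class AddInvocation(BaseInvocation):
    type: Literal["add"] = "add"


_adapter: Optional[TypeAdapter] = None


def get_invocation_typeadapter() -> TypeAdapter:
    """Build a TypeAdapter over all registered invocations, caching it after the first call."""
    global _adapter
    if _adapter is None:
        union = Union[tuple(BaseInvocation.__subclasses__())]  # type: ignore
        _adapter = TypeAdapter(union)
    return _adapter


class Graph(BaseModel):
    # Widened type: a plain dict of BaseInvocation, validated by the plain validator below.
    nodes: dict[str, BaseInvocation] = {}

    @field_validator("nodes", mode="plain")
    @classmethod
    def _validate_nodes(cls, v: dict[str, Any]) -> dict[str, Any]:
        # "Plain" validator: we take over validation entirely, so the union is
        # resolved at validation time rather than at class-definition time.
        adapter = get_invocation_typeadapter()
        return {node_id: adapter.validate_python(node) for node_id, node in v.items()}


# An invocation registered *after* Graph is defined is still picked up,
# because the union is built lazily on first validation.
class MultiplyInvocation(BaseInvocation):
    type: Literal["multiply"] = "multiply"


g = Graph.model_validate({"nodes": {"1": {"id": "1", "type": "multiply"}}})
```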
- Replace AnyModelLoader with ModelLoaderRegistry
- Fix type check errors in multiple files
- Remove apparently unneeded `get_model_config_enum()` method from model manager
- Remove last vestiges of old model manager
- Updated tests and documentation
resolve conflict with seamless.py
- Rename old "model_management" directory to "model_management_OLD" in order to catch
dangling references to original model manager.
- Caught and fixed most dangling references (still checking)
- Rename lora, textual_inversion and model_patcher modules
- Introduce a RawModel base class to simplify the Union returned by the
model loaders.
- Tidy up the model manager 2-related tests. Add useful fixtures, and
a finalizer to the queue and installer fixtures that will stop the
services and release threads.
- ModelMetadataStoreService is now injected into ModelRecordStoreService
(these two services are really joined at the hip, and should someday be merged)
- ModelRecordStoreService is now injected into ModelManagerService
- Reduced timeout value for the various installer and download wait*() methods
- Introduced a Mock modelmanager for testing
- Replaced a bare print() statement with `_logger` in the install helper backend.
- Removed unused code from model loader init file
- Made `locker` a private variable in the `LoadedModel` object.
- Fixed up model merge frontend (will be deprecated anyway!)
- Update most model identifiers to be `{key: string}` instead of name/base/type. Doesn't change the model select components yet.
- Update model _parameters_, stored in redux, to be `{key: string, base: BaseModel}` - we need to store the base model to be able to check model compatibility. May want to store the whole config? Not sure...
- Replace legacy model manager service with the v2 manager.
- Update invocations to use new load interface.
- Fixed many but not all type checking errors in the invocations. Most
were unrelated to model manager
- Updated routes. All the new routes live under the route tag
`model_manager_v2`. To avoid confusion with the old routes,
they have the URL prefix `/api/v2/models`. The old routes
have been de-registered.
- Added a pytest for the loader.
- Updated documentation in contributing/MODEL_MANAGER.md
- Implement new model loader and modify invocations and embeddings
- Finish implementing loaders for all models currently supported by
InvokeAI.
- Move lora, textual_inversion, and model patching support into
backend/embeddings.
- Restore support for model cache statistics collection (a little ugly,
needs work).
- Fixed up invocations that load and patch models.
- Move seamless and silencewarnings utils into better location
- Cache stat collection enabled.
- Implemented ONNX loading.
- Add ability to specify the repo version variant in installer CLI.
- If caller asks for a repo version that doesn't exist, will fall back
to empty version rather than raising an error.
Unfortunately you cannot test for both a specific type of error and match its message. Splitting the error classes makes it easier to test expected error conditions.
The changes aim to deduplicate data between workflows and node templates, decoupling workflows from internal implementation details. A good amount of data that was needlessly duplicated from the node template to the workflow is removed.
These changes substantially reduce the file size of workflows (and therefore the images with embedded workflows):
- Default T2I SD1.5 workflow JSON is reduced from 23.7kb (798 lines) to 10.9kb (407 lines).
- Default tiled upscale workflow JSON is reduced from 102.7kb (3341 lines) to 51.9kb (1774 lines).
The trade-off is that we need to reference node templates to get things like the field type and other things. In practice, this is a non-issue, because we need a node template to do anything with a node anyways.
- Field types are not included in the workflow. They are always pulled from the node templates.
The field type is now properly an internal implementation detail and we can change it as needed. Previously this would require a migration for the workflow itself. With the v3 schema, the structure of a field type is an internal implementation detail that we are free to change as we see fit.
- Workflow nodes no longer have an `outputs` property and there is no longer such a thing as a `FieldOutputInstance`. These are only on the templates.
These were never referenced at a time when we didn't also have the templates available, and there'd be no reason to do so.
- Node width and height are no longer stored in the node.
These weren't used. Also, per https://reactflow.dev/api-reference/types/node, we shouldn't be programmatically changing these properties. A future enhancement can properly add node resizing.
- `nodeTemplates` slice is merged back into `nodesSlice` as `nodes.templates`. Turns out it's just a hassle having these in separate slices.
- Workflow migration logic updated to support the new schema. V1 workflows migrate all the way to v3 now.
- Changes throughout the nodes code to accommodate the above changes.
We have two different classes named `ModelInfo` which might need to be used by API consumers. We need to export both but have to deal with this naming collision.
The `ModelInfo` I've renamed here is the one that is returned when a model is loaded. It's the object least likely to be used by API consumers.
Replace `delete_on_startup: bool` & associated logic with `ephemeral: bool` and `TemporaryDirectory`.
The temp dir is created inside of `output_dir`. For example, if `output_dir` is `invokeai/outputs/tensors/`, then the temp dir might be `invokeai/outputs/tensors/tmpvj35ht7b/`.
The temp dir is cleaned up when the service is stopped, or when it is GC'd if not properly stopped.
In the event of a catastrophic crash where the temp files are not cleaned up, the user can delete the tempdir themselves.
This situation may not occur in normal use, but if you kill the process, python cannot clean up the temp dir itself. This includes running the app in a debugger and killing the debugger process - something I do relatively often.
Tests updated.
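A minimal sketch of the approach, with hypothetical class and argument names:

```python
from pathlib import Path
from tempfile import TemporaryDirectory
from typing import Optional


class ObjectSerializerDisk:
    """Sketch: optionally store objects in a TemporaryDirectory nested inside output_dir."""

    def __init__(self, output_dir: Path, ephemeral: bool = False):
        output_dir.mkdir(parents=True, exist_ok=True)
        self._tempdir: Optional[TemporaryDirectory] = (
            TemporaryDirectory(dir=output_dir) if ephemeral else None
        )
        # e.g. invokeai/outputs/tensors/tmpvj35ht7b/ when ephemeral is True
        self.output_dir = Path(self._tempdir.name) if self._tempdir else output_dir

    def stop(self) -> None:
        # Explicit cleanup on service stop; TemporaryDirectory also cleans up
        # when it is garbage-collected if stop() is never called.
        if self._tempdir is not None:
            self._tempdir.cleanup()
```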
- The default is to not delete on startup - feels safer.
- The two services using this class _do_ delete on startup.
- The class has "ephemeral" removed from its name.
- Tests & app updated for this change.
`_delete_all` logged how many items it deleted, and had to be called _after_ service start because it needed access to the logger.
Move the logger call to the startup method and return the deleted stats from `_delete_all`. This lets `_delete_all` be called at any time.
Turns out they are just different enough in purpose that the implementations would be rather unintuitive. I've made a separate ObjectSerializer service to handle tensors and conditioning.
Refined the class a bit too.
Turns out `ItemStorageABC` was almost identical to `PickleStorageBase`. Instead of maintaining separate classes, we can use `ItemStorageABC` for both.
There's only one change needed - the `ItemStorageABC.set` method must return the newly stored item's ID. This allows us to let the service handle the responsibility of naming the item, but still create the requisite output objects during node execution.
The naming implementation is improved here. It extracts the name of the generic and appends a UUID to that string when saving items.
- New generic class `PickleStorageBase`, implements the same API as `LatentsStorageBase`, use for storing non-serializable data via pickling
- Implementation `PickleStorageTorch` uses `torch.save` and `torch.load`, same as `LatentsStorageDisk`
- Add `tensors: PickleStorageBase[torch.Tensor]` to `InvocationServices`
- Add `conditioning: PickleStorageBase[ConditioningFieldData]` to `InvocationServices`
- Remove `latents` service and all `LatentsStorage` classes
- Update `InvocationContext` and all usage of old `latents` service to use the new services/context wrapper methods
This class works the same way as `WithMetadata` - it simply adds a `board` field to the node. The context wrapper function is able to pull the board id from this. This allows image-outputting nodes to get a board field "for free", and have their outputs automatically saved to it.
This is a breaking change for node authors who may have a field called `board`, because it makes `board` a reserved field name. I'll look into how to avoid this - maybe by naming this invoke-managed field `_board` to avoid collisions?
Supporting changes:
- `WithBoard` is added to all image-outputting nodes, giving them the ability to save to board.
- Unused, duplicate `WithMetadata` and `WithWorkflow` classes are deleted from `baseinvocation.py`. The "real" versions are in `fields.py`.
- Remove `LinearUIOutputInvocation`. Now that all nodes that output images also have a `board` field by default, this node is no longer necessary. See comment here for context: https://github.com/invoke-ai/InvokeAI/pull/5491#discussion_r1480760629
- Without `LinearUIOutputInvocation`, the `ImagesInferface.update` method is no longer needed, and removed.
Note: This commit does not bump all node versions. I will ensure that is done correctly before merging the PR of which this commit is a part.
Note: A followup commit will implement the frontend changes to support this change.
- The config is already cached by the config class's `get_config()` method.
- The config mutates itself in its `root_path` property getter. Freezing the class makes any attempt to grab a path from the config error. Unfortunately this means we cannot easily freeze the class without fiddling with the inner workings of `InvokeAIAppConfig`, which is outside the scope here.
Update all invocations to use the new context. The changes are all fairly simple, but there are a lot of them.
Supporting minor changes:
- Patch bump for all nodes that use the context
- Update invocation processor to provide new context
- Minor change to `EventServiceBase` to accept a node's ID instead of the dict version of a node
- Minor change to `ModelManagerService` to support the new wrapped context
- Fanagling of imports to avoid circular dependencies
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [X] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No
## Description
Added new tooltip popovers and updated copy of existing ones
## Related Tickets & Documents
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
## Merge Plan
## Added/updated tests?
- [ ] Yes
- [ ] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?
## What type of PR is this? (check all applicable)
Release - Invoke 3.7.0
## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No
## Description
Invoke 3.7.0 Release
## QA Instructions, Screenshots, Recordings
Test Installer:
[InvokeAI-installer-v3.7.0.zip](https://github.com/invoke-ai/InvokeAI/files/14298200/InvokeAI-installer-v3.7.0.zip)
## Merge Plan
Merge once approved
## Added/updated tests?
- [ ] Yes
- [X] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?
1. Release on PyPi
2. Release on GitHub
3. Announce on Discord
With these changes, the Docker image can be built and executed
successfully on hosts with AMD devices with ROCm acceleration.
Previously, a ROCm-enabled version of torch would be installed, but
later removed during installation of InvokeAI itself. This was caused by
InvokeAI needing a newer torch version than was previously installed.
The fix consists of multiple components:
* Update the hardcoded versions of torch and torchvision to the versions
currently used in pyproject.toml, so that a new version need not be
installed during installation of InvokeAI.
* Specify --extra-index-url on installation of InvokeAI so that even if
a version mismatch occurs, the correct torch version should still be
installed. This also necessitates changing --index-url to
--extra-index-url for the Torch repo. Otherwise non-torch dependencies
would not be found.
* In run.sh, build the image for the selected service.
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No
## Description
## Related Tickets & Documents
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
## Merge Plan
## Added/updated tests?
- [ ] Yes
- [ ] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?
* new workflow tab UI - still using shared state with workflow editor tab
* polish workflow details
* remove workflow tab, add edit/view mode to workflow slice and get that working to switch between within editor tab
* UI updates for view/edit mode
* cleanup
* add warning to view mode
* lint
* start with isTouched false
* working on styling mode toggle
* more UX iteration
* lint
* cleanup
* save original field values to state, add indicator if they have been changed and give user choice to reset
* lint
* fix import and commit translation
* dont switch to view mode when loading a workflow
* warns before clearing editor
* use folder icon
* fix(ui): track do not erase value when resetting field value
- When adding an exposed field, we need to add it to originalExposedFieldValues
- When removing an exposed field, we need to remove it from originalExposedFieldValues
- add `useFieldValue` and `useOriginalFieldValue` hooks to encapsulate related logic
* feat(ui): use IconButton for workflow view/edit button
* feat(ui): change icon for new workflow
It was the same as the workflow tab icon, which was confusing because you'd think it's going to somehow take you to the tab.
* feat(ui): use render props for NewWorkflowConfirmationAlertDialog
There was a lot of potentially sensitive logic shared between the new workflow button and menu items. Also, two instances of ConfirmationAlertDialog.
Using a render prop deduplicates the logic & components
* fix(ui): do not mark workflow touched when loading workflow
This was occurring because the `nodesChanged` action is called by reactflow when loading a workflow. Specifically, it calculates and sets the node dimensions as it loads.
The existing logic set `isTouched` whenever this action was called.
The changes reactflow emits have types, and we can use the change types and data to determine if a change should result in the workflow being marked as touched.
* chore(ui): lint
* chore(ui): lint
* delete empty file
---------
Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Methods `get_node` and `complete` were typed as returning a dynamically created unions `InvocationsUnion` and `InvocationOutputsUnion`, respectively.
Static type analysers cannot work with dynamic objects, so these methods end up as effectively un-annotated, returning `Unknown`.
They now return `BaseInvocation` and `BaseInvocationOutput`, respectively, which are the superclasses of all members of each union. This gives us the best type annotation that is possible.
Note: the return types of these methods are never introspected, so it doesn't really matter what they are at runtime.
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [ X ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [ X ] No, because: It's small
## Have you updated all relevant documentation?
- [ ] Yes
- [ X ] No
## Description
This pulls out some of the updates from the WIP Seamless branch that has
yet to be completed, and hardcodes values that are exposed in that
branch. Given that seamless currently does not generate seamless
textures, and this fix results in seamless outputs, it's an improvement
even if it doesn't resolve this in a "perfect" way that exposes all
variables to the end user.
Better over perfect.

* remove thunk for receivedOpenApiSchema and use RTK query instead. add loading state for exposed fields
* clean up
* ignore any
* fix(ui): do not log on canceled openapi.json queries
- Rely on RTK Query for the `loadSchema` query by providing a custom `jsonReplacer` in our `dynamicBaseQuery`, so we don't need to manage error state.
- Detect when the query was canceled and do not log the error message in those situations.
* feat(ui): `utilitiesApi.endpoints.loadSchema` -> `appInfoApi.endpoints.getOpenAPISchema`
- Utilities is for server actions, move this to `appInfo` bc it fits better there.
- Rename to match convention for HTTP GET queries.
- Fix inverted logic in the `matchRejected` listener (typo'd this)
---------
Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
## What type of PR is this? (check all applicable)
Release Invoke 3.6.3
## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No
## Description
Invoke 3.6.3 Release
## QA Instructions, Screenshots, Recordings
Test the installer:
[InvokeAI-installer-v3.6.3.zip](https://github.com/invoke-ai/InvokeAI/files/14233359/InvokeAI-installer-v3.6.3.zip)
## Merge Plan
Merge once approved
## [optional] Are there any post deployment tasks we need to perform?
1. Release on PyPi & GitHub
2. Announce on Discord
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [x] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [x] No, because: it is text only, simple, and (hopefully) self-evident
## Have you updated all relevant documentation?
- [x] Yes - as far as I can grep.
- [ ] No
## Description
`.env.sample` was misspelled as `env.sample` in a few places.
This changes documentation only. You may need to re-build/deploy docs,
I'm not sure.
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Description
The change to memory session storage brings a subtle behaviour change.
Previously, we serialized and deserialized everything (e.g. field state,
invocation outputs, etc) constantly. This meant we were effectively
working with deep-copied objects at all times. We could mutate objects
freely without worrying about other references to the object.
With memory storage, objects are now passed around by reference, and we
cannot handle them in the same way.
This is problematic for nodes that mutate their own inputs. There are
two ways this causes a problem:
- An output is used as input for multiple nodes. If the first node
mutates the output object while `invoke`ing, the next node will get the
mutated object.
- The invocation cache stores live python objects. When a node mutates
an output pulled from the cache, the next node that uses the cached
object will get the mutated object.
The solution is to deep-copy a node's inputs as they are set,
effectively reproducing the same behaviour as we had with the SQLite
session storage. Nodes can safely mutate their inputs and those changes
never leave the node's scope.
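A toy illustration of the failure mode and the deep-copy fix (the node and field classes here are simplified stand-ins):

```python
import copy
from dataclasses import dataclass, field


@dataclass
class ClipField:
    skipped_layers: int = 0


@dataclass
class ClipSkipNode:
    clip: ClipField = field(default_factory=ClipField)

    def invoke(self) -> ClipField:
        self.clip.skipped_layers += 1  # the node mutates its own input
        return self.clip


def prepare_input(value):
    # Deep-copy at assignment time so mutations stay inside the node's scope.
    return copy.deepcopy(value)


shared = ClipField()  # one output object feeding two nodes
node_a = ClipSkipNode(clip=prepare_input(shared))
node_b = ClipSkipNode(clip=prepare_input(shared))
node_a.invoke()
assert shared.skipped_layers == 0       # the shared output is untouched
assert node_b.clip.skipped_layers == 0  # the second node sees a clean input
```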
## Related Tickets & Documents
- Closes #5665
The root issue affects CLIP Skip because that node mutates its input
`ClipField`. Specifically, it increments `self.clip.skipped_layers` and
passes `self.clip` as its output. I don't know if there are any other
nodes that do this.
## QA Instructions, Screenshots, Recordings
Two issues to reproduce.
First is the caching issue:

Note the cache is enabled. Run this simple graph a couple times, and
check the outputs of the CLIP Skip node. You'll see the `skipped_layers`
value increasing each time.
Second is the nodes-sharing-inputs issue:

Note the cache is _disabled_. Run the graph a couple times and check the
outputs of the two CLIP Skip nodes. You'll see that one has the expected
value for `skipped_layers` and the other has double that.
Now update to the PR and try again. You should see `skipped_layers` is
the right value in all cases.
## Merge Plan
This PR can be merged when approved. It needs a real review with
braintime.
The change to memory session storage brings a subtle behaviour change.
Previously, we serialized and deserialized everything (e.g. field state, invocation outputs, etc) constantly. This meant we were effectively working with deep-copied objects at all times. We could mutate objects freely without worrying about other references to the object.
With memory storage, objects are now passed around by reference, and we cannot handle them in the same way.
This is problematic for nodes that mutate their own inputs. There are two ways this causes a problem:
- An output is used as input for multiple nodes. If the first node mutates the output object while `invoke`ing, the next node will get the mutated object.
- The invocation cache stores live python objects. When a node mutates an output pulled from the cache, the next node that uses the cached object will get the mutated object.
The solution is to deep-copy a node's inputs as they are set, effectively reproducing the same behaviour as we had with the SQLite session storage. Nodes can safely mutate their inputs and those changes never leave the node's scope.
Closes #5665
Currently translated at 74.4% (1054 of 1416 strings)
translationBot(ui): update translation (German)
Currently translated at 69.6% (986 of 1416 strings)
translationBot(ui): update translation (German)
Currently translated at 68.6% (972 of 1416 strings)
Co-authored-by: B N <berndnieschalk@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
…elected
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No
## Description
Small bugfix: the installer would always print the latest stable version
as the one to be installed, even if a different one was selected. The
selected version would still be installed correctly. This PR fixes the
message.
## QA Instructions, Screenshots, Recordings
Select a pre-release version on install and observe the correct version
being printed. Compare to current behaviour to ascertain the fix.
## Merge Plan
- "This PR can be merged when approved"
## Added/updated tests?
- [ ] Yes
- [x] No
This has repeatedly shown itself useful in fixing install issues,
especially regarding pytorch CPU/GPU version, so there is little
downside to making this the default.
Performance impact of this should be negligible. Packages will
be reinstalled from pip cache if possible, and downloaded only if
necessary. Impact may be felt on slower disks.
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [X] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [X] No, because probably not needed
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No
## Description
These are another minor dep updates that I was able to test without any
regressions. This will ensure we are up-to-date again.
The fixes are very minor, probably not noticeable in InvokeAI (at least
for diffusers) but it's still good to have them.
This is also to make sure that the RC is releasing with the latest
packages to ensure extended testing.
Greetings
## Related Tickets & Documents
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
## Merge Plan
## Added/updated tests?
- [ ] Yes
- [ ] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?
## What type of PR is this? (check all applicable)
- [x] Community Node Submission
## Description
- Adds BriaAI's new 1.4 model for background removal. Far superior
results from what I've tested compared to any other BG removal so far:
https://github.com/blessedcoolant/invoke_bria_rmbg
The stats service was logging error messages when attempting to retrieve stats for a graph that it wasn't tracking. This was rather noisy.
Instead of logging these errors within the service, we now will just raise the error and let the consumer of the service decide whether or not to log. Our usage of the service at this time is to suppress errors - we don't want to log anything to the console.
Note: With the improvements in the previous two commits, we shouldn't get these errors moving forward, but I still think this change is correct.
When an invocation is canceled, we consider the graph canceled. Log the graph's stats before resetting them; there's no reason not to log these stats.
We also should stop the profiler at this point, because this graph is finished. If we don't stop it manually, it will stop itself and write the profile to disk when it is next started, but the resultant profile will include more than just its target graph.
Now we get both stats and profiles for canceled graphs.
When an invocation errors, we clear the stats for the whole graph. Later on, we check the graph for errors, see the failed invocation, and consider the graph failed. We then attempt to log the stats for the failed graph.
Except now the failed graph has no stats, and the stats call raises an error.
The user sees, in the terminal:
- An invocation error
- A stats error (scary!)
- No stats for the failed graph (uninformative!)
What the user should see:
- An invocation error
- Graph stats
The fix is simple - don't reset the graph stats when an invocation has an error.
Hardcode the options in the dropdown, don't rely on translators to fill this in.
Also, add a number of missing languages (Azerbaijani, Finnish, Hungarian, Swedish, Turkish).
Closes #5647
The alpha values in the UI are `0-1` but the backend wants `0-255`.
Previously, this was handled in `parseFieldValue` when building the graph. In a recent release, field types were refactored and broke the alpha handling.
The logic for handling alpha values is moved into `ColorFieldInputComponent`, and `parseFieldValue` now just does no value transformations.
Though it would be a minor change, I'm leaving this function in because I don't want to change the rest of the logic except when necessary.
Closes #5616
Turns out the OpenAPI schema definition for a pydantic field with a `Literal` type annotation is different depending on the number of options.
When there is a single value (e.g. `Literal["foo"]`), this results in a `const` schema object. The schema parser didn't know how to handle this, and displayed a warning in the JS console.
This situation is now handled. When a `const` schema object is encountered, we interpret that as an `EnumField` with a single option.
I think this makes sense - if you had a truly constant value, you wouldn't make it a field, so a `const` must mean a dynamically generated enum that ended up with only a single option.
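A quick illustration (hypothetical field names; with pydantic v2 a single-option `Literal` typically produces a `const` schema, while a multi-option literal produces an `enum`):

```python
from typing import Literal

from pydantic import BaseModel


class ExampleInvocation(BaseModel):
    scheduler: Literal["ddim"]           # single option
    mode: Literal["txt2img", "img2img"]  # multiple options


schema = ExampleInvocation.model_json_schema()["properties"]
# schema["scheduler"] contains a "const" entry; schema["mode"] contains an "enum" list.
# The parser now treats the "const" case as an EnumField with a single option.
print(schema)
```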
Currently translated at 40.6% (582 of 1433 strings)
translationBot(ui): update translation (Turkish)
Currently translated at 38.8% (557 of 1433 strings)
Co-authored-by: Ufuk Sarp Selçok <ilkel@live.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/tr/
Translation: InvokeAI/Web UI
- `ItemStorageMemory.get` now throws an `ItemNotFoundError` when the requested `item_id` is not found.
- Update docstrings in ABC and tests.
The new memory item storage implementation implemented the `get` method incorrectly, by returning `None` if the item didn't exist.
The ABC typed `get` as returning `T`, while the SQLite implementation typed `get` as returning `Optional[T]`. The SQLite implementation was referenced when writing the memory implementation.
This mismatched typing is a violation of the Liskov substitution principle, because the implementation's return type for `get` is wider than the abstract class's definition. Using `pyright` in strict mode catches this.
In `invocation_stats_default`, this introduced an error. The `_prune_stats` method calls `get`, expecting the method to throw if the item is not found. If the graph is no longer stored in the bounded item storage, we will call `is_complete()` on `None`, causing the error.
Note: This error condition never arose in the SQLite implementation because it parsed the item with pydantic before returning it, which would throw if the item was not found. It implicitly threw, while the memory implementation did not.
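A condensed sketch of the corrected contract (generic plumbing and class names simplified):

```python
from typing import Dict, Generic, TypeVar

T = TypeVar("T")


class ItemNotFoundError(KeyError):
    pass


class ItemStorageMemory(Generic[T]):
    def __init__(self) -> None:
        self._items: Dict[str, T] = {}

    def set(self, item_id: str, item: T) -> None:
        self._items[item_id] = item

    def get(self, item_id: str) -> T:
        # Return T, never Optional[T]; raise if missing, matching the ABC's signature
        # so callers like _prune_stats can rely on the exception instead of a None check.
        try:
            return self._items[item_id]
        except KeyError as e:
            raise ItemNotFoundError(item_id) from e
```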
The `getIntermediatesCount` query is set to `refetchOnMountOrArgChange`. The intention was that when the settings modal opens (i.e. mounts), the `getIntermediatesCount` query would be refetched. But it doesn't work - modals only mount once, there is no lazy rendering for them.
So we have to imperatively refetch, by refetching as we open the modal.
Closes #5639
* Port the command-line tools to use model_manager2
1. Reimplement the following:
- invokeai-model-install
- invokeai-merge
- invokeai-ti
To avoid breaking the original model manager, the updated tools
have been renamed invokeai-model-install2 and invokeai-merge2. The
textual inversion training script should continue to work with
existing installations. The "starter" models now live in
`invokeai/configs/INITIAL_MODELS2.yaml`.
When the full model manager 2 is in place and working, I'll rename
these files and commands.
2. Add the `merge` route to the web API. This will merge two or three models,
resulting in a new one.
- Note that because the model installer selectively installs the `fp16` variant
of models (rather than both 16- and 32-bit versions as previously),
the diffusers merge script will choke on any huggingface diffusers models
that were downloaded with the new installer. Previously-downloaded models
should continue to merge correctly. I have a PR
upstream https://github.com/huggingface/diffusers/pull/6670 to fix
this.
3. (more important!)
During implementation of the CLI tools, found and fixed a number of small
runtime bugs in the model_manager2 implementation:
- During model database migration, if a registered models file was
not found on disk, the migration would be aborted. Now the
offending model is skipped with a log warning.
- Caught and fixed a condition in which the installer would download the
entire diffusers repo when the user provided a single `.safetensors`
file URL.
- Caught and fixed a condition in which the installer would raise an
exception and stop the app when a request for an unknown model's metadata
was passed to Civitai. Now an error is logged and the installer continues.
- Replaced the LoWRA starter LoRA with FlatColor. The former has been removed
from Civitai.
* fix ruff issue
---------
Co-authored-by: Lincoln Stein <lstein@gmail.com>
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No
## Description
Seems we elected to convert checkpoints into .bin files when we set it
up. This doesn't seem to corrupt them anymore.
## Related Tickets & Documents
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
## Merge Plan
## Added/updated tests?
- [ ] Yes
- [ ] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?
Initially I wanted to show how many sessions were being deleted. In hindsight, this is not great:
- It requires extra logic in the migrator, which should be as simple as possible.
- It may be alarming to see "Clearing 224591 old sessions".
The app still reports on freed space during the DB startup logic.
* fix(ui): download image opens in new tab
In some environments, a simple `a` element cannot trigger a download of an image. Fetching the image directly can get around this and provide more reliable download functionality.
* use hook for imageUrlToBlob so token gets sent if needed
---------
Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
This substantially reduces the time spent encoding PNGs. In workflows with many image outputs, this is a drastic improvement.
For a tiled upscaling workflow going from 512x512 to a scale factor of 4, this can provide over 15% speed increase.
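A minimal sketch of the kind of change this implies, assuming the win comes from saving outputs with a lower Pillow PNG compression level; the exact level shown is illustrative:

```python
from PIL import Image

def save_output_png(image: Image.Image, path: str) -> None:
    # compress_level ranges from 0 (no compression, fastest) to 9 (smallest, slowest).
    # A low level trades slightly larger files for much faster encoding.
    image.save(path, format="PNG", compress_level=1)
```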
This allows the stats to be written to disk as JSON and analyzed.
- Add dataclasses to hold stats.
- Move stats pretty-print logic to `__str__` of the new `InvocationStatsSummary` class.
- Add `get_stats` and `dump_stats` methods to `InvocationStatsServiceBase`.
- `InvocationStatsService` now throws if stats are requested for a session it doesn't know about. This avoids needing to do a lot of messy null checks.
- Update `DefaultInvocationProcessor` to use the new stats methods and suppress the new errors.
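A rough sketch of the shape this describes, with dataclass-based summaries, a `__str__` for pretty-printing, and a JSON dump helper; the class and field names below are illustrative rather than the exact implementation:

```python
import json
from dataclasses import asdict, dataclass, field

@dataclass
class NodeStats:
    node_type: str
    calls: int = 0
    time_used_s: float = 0.0

@dataclass
class InvocationStatsSummary:
    session_id: str
    nodes: list[NodeStats] = field(default_factory=list)

    def __str__(self) -> str:
        lines = [f"Session {self.session_id}:"]
        lines += [f"  {n.node_type}: {n.calls} call(s), {n.time_used_s:.3f}s" for n in self.nodes]
        return "\n".join(lines)

    def dump(self, path: str) -> None:
        # Write the summary to disk as JSON for later analysis.
        with open(path, "w") as f:
            json.dump(asdict(self), f, indent=2)
```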
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No
## Description
Small PR to allow users to pass in a Civitai API key via config options.
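A hedged sketch of what such a config option might look like; the settings class, field name, and header helper are assumptions for illustration only, not the actual InvokeAI config:

```python
from typing import Optional

from pydantic import BaseModel

class AppConfig(BaseModel):
    """Illustrative config model; the real app config has many more fields."""

    civitai_api_key: Optional[str] = None  # e.g. set in the YAML config or an env var

def civitai_headers(config: AppConfig) -> dict[str, str]:
    # Attach the key as a bearer token only when one is configured.
    if config.civitai_api_key:
        return {"Authorization": f"Bearer {config.civitai_api_key}"}
    return {}
```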
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Merge Plan
<!--
A merge plan describes how this PR should be handled after it is
approved.
Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"
A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->
## Added/updated tests?
- [ ] Yes
- [ ] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?
* redo top panel of workflow editor
* add checkbox option to save to project, integrate save-as flow into first time saving workflow
* remove log
* remove workflowLibrary as a feature that can be disabled
* lint
* feat(ui): make SaveWorkflowAsDialog a singleton
Fixes an issue where the workflow name would erroneously be an empty string (when it should show the current workflow name).
Also makes it easier to interact with this component.
- Extract the dialog state to a hook
- Render the dialog once in `<NodeEditor />`
- Use the hook in the various buttons that should open the dialog
- Fix a few wonkily named components (pre-existing issue)
* fix(ui): when saving a never-before-saved workflow, do not append " (copy)" to the name
* fix(ui): do not obscure workflow library button with add node popover
This component is kinda janky :/ the popover content somehow renders invisibly over the button. I think it's related to the `<PopoverAnchor />`.
Need to redo this in the future, but for now, making the popover render lazily fixes this.
---------
Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Adds ctrl/meta + scroll to change brush size on canvas.
* changed hotkeys
* new hotkey as an additional
* lint fixed
* added ctrl scroll and removed hotkey
* using
* added fix
* feedback changes
* brush size change logic
* feat(ui): also check for meta key when modifying brush size
* feat(ui): add comment linking to where brush size algo was determined
---------
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [X] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No
## Description
This brings `docs/other/CONTRIBUTORS.md` into sync with collaborator
roles in Discord as of January 27, 2024.
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
## QA Instructions, Screenshots, Recordings
N/A
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Merge Plan
Merge when approved.
<!--
A merge plan describes how this PR should be handled after it is
approved.
Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"
A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->
## Added/updated tests?
- [ ] Yes
- [ ] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?
Currently translated at 10.5% (151 of 1426 strings)
translationBot(ui): update translation (Turkish)
Currently translated at 8.1% (116 of 1426 strings)
translationBot(ui): update translation (Turkish)
Currently translated at 6.6% (95 of 1426 strings)
Co-authored-by: Ufuk Sarp Selçok <ilkel@live.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/tr/
Translation: InvokeAI/Web UI
It doesn't work now that the theme is external. I'm not sure how to fix it and not sure if it really did much (I don't think I ever got autocomplete...). Maybe it can be implemented in `@invoke-ai/ui-library`.
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [X] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No
## Description
- Update docs to make link to automated installer easier to find
- Fixed issue in SDXL + refiner example workflow
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
Read over docs changes
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Merge Plan
Merge when approved
<!--
A merge plan describes how this PR should be handled after it is
approved.
Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"
A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->
## [optional] Are there any post deployment tasks we need to perform?
Deploy new docs
…y to distinguish what's being changed
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [x] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [x] Yes
- [ ] No
## Description
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Merge Plan
<!--
A merge plan describes how this PR should be handled after it is
approved.
Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"
A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->
## Added/updated tests?
- [ ] Yes
- [x] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?
* dont show duplicate toasts if workflow actions fail due to auth
* dynamic order by options based on projectId
* add endpointName to authtoast to make it unique per endpoint
* lint
* update toast logic to check based on endpoint name w type safety
* fix save as endpoint name
* lint
* fix type
---------
Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
## What type of PR is this? (check all applicable)
Invoke v3.6.2 release
## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No
## Description
Invoke v3.6.2
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
[InvokeAI-installer-v3.6.2.zip](https://github.com/invoke-ai/InvokeAI/files/14046191/InvokeAI-installer-v3.6.2.zip)
* retain id through workflow state so that we correctly update or save new
* lint
---------
Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
## What type of PR is this? (check all applicable)
Invoke 3.6.1 release
## QA Instructions, Screenshots, Recordings
[InvokeAI-installer-v3.6.1.zip](https://github.com/invoke-ai/InvokeAI/files/14041411/InvokeAI-installer-v3.6.1.zip)
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Merge Plan
This PR can be merged when approved
## [optional] Are there any post deployment tasks we need to perform?
PyPI Release & GitHub Release
## What type of PR is this? (check all applicable)
- [x] Feature
## Have you discussed this change with the InvokeAI team?
- [x] Yes
## Have you updated all relevant documentation?
- [x] No
## Description
- This adds the newly released Depth Anything to InvokeAI. A new node
`Depth Anything Processor` has been added to generate depth maps using
this new technique. https://depth-anything.github.io
- All related checkpoints will be downloaded automatically on first
boot. The `DinoV2` models will be loaded to your torch cache dir and the
checkpoints pertaining to Depth Anything will be downloaded to
`any/annotators/depth_anything`.
- Alternatively you can find the checkpoints here and download them to
that folder:
https://huggingface.co/spaces/LiheYoung/Depth-Anything/tree/main/checkpoints
- This depth map can be used with any depth ControlNet model out there
but the folks at DepthAnything have also released a custom fine-tuned
ControlNet model. From my limited testing, I still prefer the original
depth model because this one seems to be producing weird artifacts. Not
sure if that is a specific problem to Invoke or just the model itself.
I'll test more later. Place these in your controlnet folder like your
other ControlNets. You can get that here:
https://huggingface.co/spaces/LiheYoung/Depth-Anything/tree/main/checkpoints_controlnet
- Also available in the LinearUI
- DepthAnything has three models `large`, `base` and `small` -- I've
defaulted the processor to small but a user can change to the large
model if they wish to do so. Small is way faster but obviously somewhat
of a lesser quality.
- DepthAnything is now the default processor for depth controlnet
models.
## Screenshots

## Merge Plan
DO NOT MERGE YET. Test it first and I'm sure the model caching can be
done better. Coz I don't think I've done that at all. I would appreciate
if @brandonrising or @lstein or anyone can take a look at that part of
it.
* feat: ✨ "Remix Image" option on images
Adds a new "remix image" option where applicable, recalls all metadata except the seed
* refactor: 🚨 lint code
* feat: ✨ add new remix hotkey to hotkeys modal
---------
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Remove `trim()` from model identifier schema, which prevented parsed model identifiers from matching.
The root issue here is that model names are identifiers. This will be resolved in the model manager refactor.
Closes #5556
- Bump `@invoke-ai/ui` for updated styles
- Update regex to parse prompts with newlines
- Update styling of overlay button when prompt has an error
- Fix bug where loading and error state sometimes weren't cleared
We had a one-behind issue with recalling metadata items that had a model.
For example, when recalling LoRAs, we check against the current main model to decide whether or not the requested LoRA is compatible and may be recalled.
When recalling all params, we are often also recalling the main model, but the compat logic didn't compare against this new main model.
The logic is updated to check against the new main model, if one is being set.
Closes #5512
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [x] Documentation Update
- [ ] Community Node Submission
## Description
Update UI README
## Merge Plan
This PR can be merged when approved
<!--
A merge plan describes how this PR should be handled after it is
approved.
Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"
A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->
The frontend docs should just be in the frontend. This is a standard practice for monorepos with developer information for specific packages within the monorepo.
The Ideal Size node is useful for High-Res Optimization as it gives the optimum size for creating an initial generation with minimal artifacts (duplication and other strangeness) from today's models.
After inclusion, front end graph generation can be simplified by offloading calculations for HRO initial generation to this node.
The previous method wasn't totally foolproof, and locales/assets were cached.
To solve this once and for all (famous last words, I know), we can subclass `StaticFiles` and use maximally strict no-caching headers to disable caching on all static files.
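A minimal sketch of the subclass approach using Starlette's `get_response` hook; the header values shown are illustrative:

```python
from starlette.responses import Response
from starlette.staticfiles import StaticFiles

class NoCacheStaticFiles(StaticFiles):
    """Serve static files with headers that forbid all client-side caching."""

    async def get_response(self, path: str, scope) -> Response:
        response = await super().get_response(path, scope)
        response.headers["Cache-Control"] = "no-store, no-cache, must-revalidate, max-age=0"
        response.headers["Pragma"] = "no-cache"
        response.headers["Expires"] = "0"
        return response
```

It would then be mounted in place of the stock `StaticFiles`, e.g. `app.mount("/assets", NoCacheStaticFiles(directory="static"), name="assets")` (path and name here are hypothetical).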
Currently translated at 97.3% (1365 of 1402 strings)
translationBot(ui): update translation (Italian)
Currently translated at 97.3% (1365 of 1402 strings)
translationBot(ui): update translation (Italian)
Currently translated at 97.3% (1365 of 1402 strings)
translationBot(ui): update translation (Italian)
Currently translated at 97.3% (1365 of 1402 strings)
Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
* resolved conflicts
* changed logo and some design changes
* feedback changes
* resolved conflicts
* changed logo and some design changes
* feedback changes
* lint fixed
* added translations
* some requested changes done
* all feedback changes done and replace links in settingsmenu comp
* fixed the gap between deps versions & changed heights
* feat(ui): minor about modal styling
* feat(ui): tag app endpoints with FetchOnReconnect
* fix(ui): remove unused translation string
---------
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Co-authored-by: Kent Keirsey <31807370+hipsterusername@users.noreply.github.com>
Use each language's own name for its option in the language select. This falls back to the English translation if the language name isn't translated.
* add basic functionality for model metadata fetching from hf and civitai
* add storage
* start unit tests
* add unit tests and documentation
* add missing dependency for pytests
* remove redundant fetch; add modified/published dates; updated docs
* add code to select diffusers files based on the variant type
* implement Civitai installs
* make huggingface parallel downloading work
* add unit tests for model installation manager
- Fixed race condition on selection of download destination path
- Add fixtures common to several model_manager_2 unit tests
- Added dummy model files for testing diffusers and safetensors downloading/probing
- Refactored code for selecting proper variant from list of huggingface repo files
- Regrouped ordering of methods in model_install_default.py
* improve Civitai model downloading
- Provide a better error message when Civitai requires an access token (doesn't give a 403 forbidden, but redirects
to the HTML of an authorization page -- arrgh)
- Handle case of Civitai providing a primary download link plus additional links for VAEs, config files, etc
* add routes for retrieving metadata and tags
* code tidying and documentation
* fix ruff errors
* add file needed to maintain test root directory in repo for unit tests
* fix self->cls in classmethod
* add pydantic plugin for mypy
* use TestSession instead of requests.Session to prevent any internet activity
improve logging
fix error message formatting
fix logging again
fix forward vs reverse slash issue in Windows install tests
* Several fixes of problems detected during PR review:
- Implement cancel_model_install_job and get_model_install_job routes
to allow for better control of model download and install.
- Fix thread deadlock that occurred after cancelling an install.
- Remove unneeded pytest_plugins section from tests/conftest.py
- Remove unused _in_terminal_state() from model_install_default.
- Remove outdated documentation from several spots.
- Add workaround for Civitai API results which don't return correct
URL for the default model.
* fix docs and tests to match get_job_by_source() rather than get_job()
* Update invokeai/backend/model_manager/metadata/fetch/huggingface.py
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
* Call CivitaiMetadata.model_validate_json() directly
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
* Second round of revisions suggested by @ryanjdick:
- Fix type mismatch in `list_all_metadata()` route.
- Do not have a default value for the model install job id
- Remove static class variable declarations from non Pydantic classes
- Change `id` field to `model_id` for the sqlite3 `model_tags` table.
- Changed AFTER DELETE triggers to ON DELETE CASCADE for the metadata and tags tables.
- Made the `id` field of the `model_metadata` table into a primary key to achieve uniqueness.
* Code cleanup suggested in PR review:
- Narrowed the declaration of the `parts` attribute of the download progress event
- Removed auto-conversion of str to Url in Url-containing sources
- Fixed handling of `InvalidModelConfigException`
- Made unknown sources raise `NotImplementedError` rather than `Exception`
- Improved status reporting on cached HuggingFace access tokens
* Multiple fixes:
- `job.total_size` returns a valid size for locally installed models
- new route `list_models` returns a paged summary of model, name,
description, tags and other essential info
- fix a few type errors
* consolidated all invokeai root pytest fixtures into a single location
* Update invokeai/backend/model_manager/metadata/metadata_store.py
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
* Small tweaks in response to review comments:
- Remove flake8 configuration from pyproject.toml
- Use `id` rather than `modelId` for huggingface `ModelInfo` object
- Use `last_modified` rather than `LastModified` for huggingface `ModelInfo` object
- Add `sha256` field to file metadata downloaded from huggingface
- Add `Invoker` argument to the model installer `start()` and `stop()` routines
(but made it optional in order to facilitate use of the service outside the API)
- Removed redundant `PRAGMA foreign_keys` from metadata store initialization code.
* Additional tweaks and minor bug fixes
- Fix calculation of aggregate diffusers model size to only count the
size of files, not files + directories (which gives different unit test
results on different filesystems).
- Refactor _get_metadata() and _get_download_urls() to have distinct code paths
for Civitai, HuggingFace and URL sources.
- Forward the `inplace` flag from the source to the job and added unit test for this.
- Attach cached model metadata to the job rather than to the model install service.
* fix unit test that was breaking on windows due to CR/LF changing size of test json files
* fix ruff formatting
* a few last minor fixes before merging:
- Turn job `error` and `error_type` into properties derived from the exception.
- Add TODO comment about the reason for handling temporary directory destruction
manually rather than using tempfile.tmpdir().
* add unit tests for reporting HTTP download errors
---------
Co-authored-by: Lincoln Stein <lstein@gmail.com>
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Per user feedback, this is preferable to letting them expand when the window grows.
Also bumps `react-resizable-panels` now that one of my PRs is merged to fix an issue.
## What type of PR is this? (check all applicable)
Release v3.6.0
## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No
## Description
Invoke v3.6.0
## QA Instructions, Screenshots, Recordings
[InvokeAI-installer-v3.6.0.zip](https://github.com/invoke-ai/InvokeAI/files/13923761/InvokeAI-installer-v3.6.0.zip)
## [optional] Are there any post deployment tasks we need to perform?
1. Release on PyPi
2. Release on GitHub
3. Announce in #releases
* feat: allow bfloat16 to be configurable in invoke.yaml
* fix: `torch_dtype()` util
- Use `choose_precision` to get the precision string
- Do not reference the deprecated `config.full_precision` flag (why does this still exist?); if a user had this enabled, it would override their actual precision setting and potentially cause a lot of confusion.
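A minimal sketch of the resolution order described above; the internals of `choose_precision` and the exact mapping are assumptions for illustration:

```python
import torch

def choose_precision(device: torch.device) -> str:
    """Illustrative: pick a default precision string for the device."""
    return "float16" if device.type == "cuda" else "float32"

def torch_dtype(device: torch.device, precision: str = "auto") -> torch.dtype:
    """Resolve the configured precision (including bfloat16) to a torch dtype,
    without consulting the deprecated full_precision flag."""
    if precision == "auto":
        precision = choose_precision(device)
    return {
        "float32": torch.float32,
        "float16": torch.float16,
        "bfloat16": torch.bfloat16,
    }.get(precision, torch.float32)
```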
---------
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
- Add various brand images, organise images
- Create favicon for docs pages (light blue version of key logo)
- Rename app title to `Invoke - Community Edition`
Add `FetchOnReconnect` tag, tagging relevant queries with it. This tag is invalidated in the socketConnected listener, when it is determined that the queue changed.
- Add checks to the "recovery" logic for socket connect events to reduce the number of network requests.
- Remove the `isInitialized` state from `systemSlice` and make it a nanostore local to the socketConnected listener. It didn't need to be global state. It's also now more clearly named `isFirstConnection`.
- Export the queue status selector (minor improvement, memoizes it correctly).
## What type of PR is this? (check all applicable)
Release v3.6.0rc6
## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No
## Description
Release candidate 6
## QA Instructions, Screenshots, Recordings
[InvokeAI-installer-v3.6.0rc6.zip](https://github.com/invoke-ai/InvokeAI/files/13890206/InvokeAI-installer-v3.6.0rc6.zip)
## Merge Plan
Merge when approved
## [optional] Are there any post deployment tasks we need to perform?
Release on PyPi & Github
- Fixed a bug where after you load more, changing boards doesn't work. The offset and limit for the list image query had some wonky logic, now resolved.
- Addressed major lag in gallery when selecting an image.
Both issues were related to the useMultiselect and useGalleryImages hooks, which caused every image in the gallery to re-render whenever the selection changed. There's no way to memoize this away - we need to know when the selection changes. This is a longstanding issue.
The selection is only used in a callback, though - the onClick handler for an image to select it (or add it to the existing selection). We don't really need the reactivity for a callback, so we don't need to listen for changes to the selection.
The logic to handle multiple selection is moved to a new `galleryImageClicked` listener, which does all the selection right when it is needed.
The result is that gallery images no longer need to do heavy re-renders on any selection change.
Besides the multiselect click handler, there was also inefficient use of DND payloads. Previously, the `IMAGE_DTOS` type had a payload of image DTO objects. This was only used to drag gallery selection into a board. There is no need to hold onto image DTOs when we have the selection state already in redux. We were recalculating this payload for every image, on every tick.
This payload is now just the board id (the only piece of information we need for this particular DND event).
- I also removed some unused DND types while making this change.
There was a lot of convoluted, janky logic related to trying to not mount the context menu's portal until it's needed. This was in the library where the component was originally copied from.
I've removed that and resolved the jank, at the cost of there being an extra portal for each instance of the context menu. Don't think this is going to be an issue. If it is, the whole context menu could be refactored to be a singleton.
* ci: add docker build timout; log free space on runner before and after build
* docker: bump frontend builder to node=20.x; skip linting on build
* chore: gitignore .pnpm-store
* update code owners for docker and CI
---------
Co-authored-by: Millun Atluri <Millu@users.noreply.github.com>
I was troubleshooting a hotkeys issue on canvas and thought I had broken the tool logic in a past change, so I redid it, moving it to nanostores. In the end, the issue was an upstream bug with the hotkeys library, but I like having tool in nanostores so I'm leaving it.
It's ephemeral interaction state anyways, doesn't need to be in redux.
There's a challenge to accomplish this due to our slice structure - the model is stored in `generationSlice`, but `canvasSlice` also needs to have awareness of it. For example, when the model changes, the canvas slice doesn't know what the previous model was, so it doesn't know whether or not to optimize the size.
This means we need to lift the "should we optimize size" information up. To do this, the `modelChanged` action creator accepts the previous model as an optional second arg.
Now the canvas has access to both the previous model and new model selection, and can decide whether or not it should optimize its size setting in the same way that the generation slice does.
Closes #5452
For some reason `ReturnType<typeof useListImagesQuery>` isn't working correctly, and destructuring `queryResult` results in `any` when the hook is used.
I've removed the explicit return typing so that consumers of the hook get correct types.
Organise deps into ~3 categories:
- Core generation dependencies, pinned for reproducible builds.
- Core application dependencies, pinned for reproducible builds.
- Auxiliary dependencies, pinned only if necessary.
I pinned / bumped these to latest:
- `controlnet_aux`
- `fastapi`
- `fastapi-events`
- `huggingface-hub`
- `numpy`
- `python-socketio`
- `torchmetrics`
- `transformers`
- `uvicorn`
I checked the release notes for these and didn't see any breaking changes that would affect us. There is a `fastapi` breaking change in v108 related to background tasks but it doesn't affect us.
I tested on a fresh venv. The app still works and I can generate on macOS.
Hopefully, enforcing explicit pinned versions will reduce the issues where people get CPU torch.
It also means we should periodically bump versions up to ensure we don't get too far behind on our dependencies and have to do painful upgrades.
Workflow building would fail when a current image node was in the workflow due to the strict validation.
So we need to use the other workflow builder util first, which strips out extraneous data.
This bug was introduced during an attempt to optimize the workflow building logic, which was causing slowdowns on the workflow editor.
* do not show toast if 403 is triggered by lack of image access
* remove log
* lint
---------
Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
## What type of PR is this? (check all applicable)
Release - InvokeAI v3.5.0rc5
## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No
## Description
Release - InvokeAI v3.5.0rc5
## QA Instructions, Screenshots, Recordings
[InvokeAI-installer-v3.6.0rc5.zip](https://github.com/invoke-ai/InvokeAI/files/13863661/InvokeAI-installer-v3.6.0rc5.zip)
## [optional] Are there any post deployment tasks we need to perform?
Release on PyPi & GitHub
* feat(ui): get rid of convoluted socket vs appSocket redux actions
There's no need to have `socket...` and `appSocket...` actions.
I did this initially due to a misunderstanding about the sequence of handling from middleware to reducers.
* feat(ui): bump deps
Mainly bumping to get latest `redux-remember`.
A change to socket.io required a change to the types in `useSocketIO`.
* chore(ui): format
* feat(ui): add error handling to redux persistence layer
- Add an error handler to `redux-remember` config using our logger
- Add custom errors representing storage set and get failures
- Update storage driver to raise these accordingly
- wrap method to clear idbkeyval storage and tidy its logic up
* feat(ui): add debuggingLoggerMiddleware
This simply logs every action and a diff of the state change.
Due to the noise this creates, it's not added by default at all. Add it to the middlewares if you want to use it.
* feat(ui): add $socket to window if in dev mode
* fix(ui): do not enable cancel hotkeys on inputs
* fix(ui): use JSON.stringify for ROARR logger serializer
A recent change to ROARR introduced limits to the size of data that will logged. This ends up making our logs far less useful. Change the serializer back to what it was previously.
* feat(ui): change diff util, update debuggerLoggerMiddleware
The previous diff library would present deleted things as `undefined`. Unfortunately, a JSON.stringify cycle will strip those values out. The ROARR logger does this and so the diffs end up being a lot less useful, not showing removed keys.
The new diff library uses a different format for the delta that serializes nicely.
* feat(ui): add migrations to redux persistence layer
- All persisted slices must now have a slice config, consisting of their initial state and a migrate callback. The migrate callback is very simple for now, with no type safety. It adds missing properties to the state. A future enhancement might be to model each slice's state with e.g. zod and have proper validation and types.
- Persisted slices now have a `_version` property
- The migrate callback is called inside `redux-remember`'s `unserialize` handler. I couldn't figure out a good way to put this into the reducer and do logging (reducers should have no side effects). Also I ran into a weird race condition that I couldn't figure out. And finally, the typings are tricky. This works for now.
- `generationSlice` and `canvasSlice` both need migrations for the new aspect ratio setup, this has been added
- Stuff related to persistence has been moved in to `store.ts` for simplicity
* feat(ui): clean up StorageError class
* fix(ui): scale method default is now 'auto'
* feat(ui): when changing controlnet model, enable autoconfig
* fix(ui): make embedding popover immediately accessible
Prevents hotkeys from being captured when embeddings are still loading.
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Description
The new select component appears to close itself before calling the
onchange handler. This short-circuits the autoconnect logic. Tweaked so
the ordering is correct.
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Closes #5425
## QA Instructions, Screenshots, Recordings
bug should be fixed
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Merge Plan
This PR can be merged when approved
<!--
A merge plan describes how this PR should be handled after it is
approved.
Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"
A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->
The new select component appears to close itself before calling the onchange handler. This short-circuits the autoconnect logic. Tweaked so the ordering is correct.
Centralize the initial/min/max/etc values for all numerical params. We used this for some params but at some point stopped updating it.
All numerical params now use their respective configs. Far fewer hardcoded values throughout the app now.
Also updated the config types a bit to better accommodate slider vs number input constraints.
- Use the virtuoso grid item container and list containers to calculate imagesPerRow, skipping manual compensation for padding of images
- Round the imagesPerRow instead of flooring - we often will end up with values like 4.99999 due to floating point precision
- Update `getDownImage` comments & logic to be clearer
- Use variables for the ids in query selectors, preventing future typos
- Only scroll if the new selected image is different from the prev one
- Fix preexisting bug where gallery network requests were duplicated when triggering infinite scroll
- Refactor `useNextPrevImage` to not use `state => state` as an input selector - logic split up into different hooks
- Remove use instant scroll for arrow key navigation - smooth scroll is janky when you hold the arrow down and it fires rapidly
- Move gallery nav hotkeys to GalleryImageGrid component, so they work whenever the gallery is open (previously didn't work on canvas or workflow editor tabs)
- Use nanostores for gallery grid refs instead of passing context with virtuoso's context feature, making it much simpler to do the imperative gallery nav
- General gallery hook/component cleanup
## What type of PR is this? (check all applicable)
Release v3.6.0rc4
## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No
## Description
Release for v3.6.0rc4
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
[Uploading InvokeAI-installer-v3.6.0rc4.zip…](Installer Zip)
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Merge Plan
- This PR can be merged when approved
<!--
A merge plan describes how this PR should be handled after it is
approved.
Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"
A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->
## Added/updated tests?
- [ ] Yes
- [ ] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?
Release on PyPi & GitHub
Pending resolution of https://github.com/reduxjs/reselect/issues/635, we can patch `reselect` to use `lruMemoize` exclusively.
Pin RTK and react-redux versions too just to be safe.
This reduces the major GC events that were causing lag/stutters in the app, particularly in canvas and workflow editor.
## What type of PR is this? (check all applicable)
Release v3.6.0rc3
## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [ ] Yes
- [X] No
## Description
Next release candidate
## Related Tickets & Documents
N/A
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
[Uploading InvokeAI-installer-v3.6.0rc3.zip…](Installer zip)
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Merge Plan
This PR can be merged when approved
<!--
A merge plan describes how this PR should be handled after it is
approved.
Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"
A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->
## Added/updated tests?
- [ ] Yes
- [ ] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?
Release on PyPI & Github
A bug that caused panels to be collapsed on a fresh indexedDb was fixed in dd32c632cd, but this re-introduced a different bug that caused the panels to expand on window resize, if they were already collapsed.
Revert the previous change and instead add one imperative resize outside the observer, so that on startup, we set both panels to their minimum sizes.
* replace custom header with custom nav component to go below settings
* add option for custom gallery header
* add option for custom app info text on logo hover
* add data-testid for tabs
* remove descriptions
* lint
* lint
---------
Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
We are now using the lefthand vertical strip for the settings menu button. This is a good place for the status indicator.
Really, we only need to display something *if there is a problem*. If the app is processing, the progress bar indicates that.
For the case where the panels are collapsed, I'll add the floating buttons back in some form, and we'll indicate via those if the app is processing something.
Just make it like a normal button - normal and hover state, no difference when it's expanded. The icon clearly indicates this, and you see the extra components.
On one hand I like the color, but on the other it makes this divider a focus point, which doesn't really make sense to me. I tried several shades but think it adds a bit too much distraction for your eyes.
There was an extra div, needed for the fullscreen file upload dropzone, that made styling the main app containers a bit awkward.
Refactor the uploader a bit to simplify this - no longer need so many app-level wrappers. Much cleaner.
Removed logic related to aspect ratio from the components.
When the main bbox changes, if the scale method is auto, the reducers will handle the scaled bbox size appropriately.
Somehow linking up the manual mode to the aspect ratio is tricky, and instead of adding complexity for a rarely-used mode, I'm leaving manual mode as fully manual.
Cannot figure out how to allow the bbox to be transformed from all handles when the aspect ratio is locked. Only the bottom right handle works as expected.
As a workaround, when the aspect ratio is locked, you can only resize the bbox from the bottom right handle.
## What type of PR is this? (check all applicable)
Release
## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No
## Description
v3.6.0rc2 release
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
Test latest main & [Uploading
InvokeAI-installer-v3.6.0rc2.zip…](Installer zip)
## Merge Plan
PR can be merged immediately
<!--
A merge plan describes how this PR should be handled after it is
approved.
Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"
A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->
## Added/updated tests?
- [ ] Yes
- [X] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?
Publish release on PyPI and GitHub
- Do not _merge_ prompt and style prompt when concat is enabled - either use the prompt as style, or use the style directly.
- Set style prompt metadata correctly.
- Add metadata recall for style prompt.
`react-select` has some weird behaviour where if the value is `undefined`, it shows the last-selected value instead of nothing. Must fall back to `null`
Ensure workflow editor model selector component gets a value
This introduced some funky type issues related to ONNX models. ONNX doesn't work anyways (unmaintained). Instead of fixing the types to work with a non-working feature, ONNX is now removed entirely from the UI.
- Remove all refs to ONNX (and Olives)
- Fix some type issues
- Add ONNX nodes to the nodes denylist (so they are not visible in UI)
- Update VAE graph helper, which still had some ONNX logic. It's a very simple change and doesn't change any logic. Just removes some conditions that were for ONNX. I tested it and nothing broke.
- Regenerate types
- Fix prettier and eslint ignores for generated types
- Lint
* Updater suggests db backup when installing RC
* Update invokeai_update.py to be more specific
* Update invokeai_update.py
* Update invokeai_update.py
* Update invokeai_update.py
* Update invokeai_update.py
* Update docker-compose.yml to bind local data path
* Update LOCAL_DATA_PATH in .env.sample
* Add fallback to INVOKEAI_ROOT envar if LOCAL_DATA_PATH not present.
* rename LOCAL_DATA_PATH to INVOKAI_LOCAL_ROOT
* Whoops, didn't mean to include this
* Update docker/docker-compose.yml
Co-authored-by: Eugene Brodsky <ebr@users.noreply.github.com>
* [chore] rename envar
* Apply suggestions from code review
---------
Co-authored-by: Eugene Brodsky <ebr@users.noreply.github.com>
- Prompt must have an open curly brace followed by a close curly brace to enable dynamic prompts processing
- If the given prompt already has a dynamic prompt cached, do not re-process
- If processing is not needed, user may invoke immediately
- Invoke button shows loading state when dynamic prompts are processing, tooltip says generating
- Dynamic prompts preview icon in prompt box shows loading state when processing, tooltip says generating
- Support grid size of 8 on canvas
- Internal canvas math works on 8
- Update gridlines rendering to show 64 spaced lines and 32/16/8 when zoomed in
- Bbox manipulation defaults to grid of 64 - hold shift to get grid of 8
Besides being something we support internally, supporting 8 on canvas avoids a lot of hacky logic needed to work well with aspect ratios.
Canvas and non-canvas have separate width and height and need their own separate aspect ratios. In order to not duplicate a lot of aspect ratio logic, the components relating to image size have been modularized.
- Fix `weight` and `begin_step_percent`, the constraints were mixed up
- Add model validator to ensure `begin_step_percent < end_step_percent`
- Bump version
## What type of PR is this? (check all applicable)
InvokeAI 3.6.0rc1 Release
## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No
## Description
Update version & frontend build for Invoke v3.6.0rc1
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Merge Plan
<!--
A merge plan describes how this PR should be handled after it is
approved.
Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"
A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->
## Added/updated tests?
- [ ] Yes
- [ ] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?
Upload release to PyPI & create release on GitHub
- Store workflow in nanostore as singleton instead of building for each consumer
- Debounce the build (already was indirectly debounced)
- When the workflow is needed, imperatively grab it from the nanostores, instead of letting react handle it via reactivity
This drastically reduces the computation needed when moving the cursor. It also correctly separates ephemeral interaction state from redux, where it is not needed.
Also removed some unused canvas state.
This uses the previous implementation of the memoization function in reselect. It's possible for the new weakmap-based memoization to cause memory leaks in certain scenarios, so we will avoid it for now.
## What type of PR is this? (check all applicable)
Release v3.5.1
## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No
## Description
InvokeAI v3.5.1 release
## [optional] Are there any post deployment tasks we need to perform?
1. Release on PyPi
2. Create GH release
3. Announce on Discord
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [X] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [ ] Yes
- [X] No
## Description
Add Tiled Upscaling to default workflows
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Merge Plan
<!--
A merge plan describes how this PR should be handled after it is
approved.
Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"
A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->
## Added/updated tests?
- [ ] Yes
- [ ] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?
If the user specifies `torch-sdp` as the attention type in `config.yaml`, we can go ahead and use it (if available) rather than always throwing an exception.
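A minimal sketch of that behaviour with illustrative names; the only assumption is that availability can be checked via `hasattr` on `torch.nn.functional`:

```python
import torch.nn.functional as F

def resolve_attention_type(configured: str) -> str:
    """Honor an explicit 'torch-sdp' setting when the op exists,
    instead of unconditionally raising."""
    if configured == "torch-sdp":
        if hasattr(F, "scaled_dot_product_attention"):
            return "torch-sdp"
        raise RuntimeError("torch-sdp attention requires PyTorch 2.0 or later")
    return configured
```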
## What type of PR is this? (check all applicable)
- [X] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No
## Description
To release 3.5.0 successfully, a front end build needed to be in the
repo so that it would be included in the invokeai package distributed on
PyPi.
This PR removes the frontend build and updates the frontend gitignore to
not include the build.
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Added/updated tests?
- [ ] Yes
- [X] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?
N/A
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [X] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [X] No, because: it's a simple fix
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No
## Description
If there are two multi-vector TIs in a prompt, e.g. `<ti-1> <ti-2>`, where
ti-1 has vector size 16 and ti-2 has vector size 8, then the second one
uses the first one's ti_embedding.shape[0], and you get errors like
"<ti-2-!pad-8> is not found" because ti-2 only has vector size 8 but the
code is taking the wrong ti_embedding.shape[0].
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Merge Plan
<!--
A merge plan describes how this PR should be handled after it is
approved.
Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"
A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->
## Added/updated tests?
- [ ] Yes
- [ ] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?
## What type of PR is this? (check all applicable)
InvokeAI v3.5.0
## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No
## Description
3.5.0 release
## QA Instructions, Screenshots, Recordings
Test Installer:
[InvokeAI-installer-v3.5.0.zip](https://github.com/invoke-ai/InvokeAI/files/13776161/InvokeAI-installer-v3.5.0.zip)
## Added/updated tests?
- [ ] Yes
- [ ] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?
* Update front end .gitignore & remove the fe build
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [X] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No
## Description
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Merge Plan
<!--
A merge plan describes how this PR should be handled after it is
approved.
Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"
A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->
## Added/updated tests?
- [ ] Yes
- [ ] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:
## Description
For example, if PIL tries to open a *really* big image, it will raise an
exception to prevent reading a huge object into memory.
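For illustration, Pillow enforces this via `Image.MAX_IMAGE_PIXELS` and raises `DecompressionBombError`; a sketch of handling it gracefully (the function name is hypothetical, and the actual fix may handle the error differently):

```python
from typing import Optional

from PIL import Image

def load_image_safely(path: str) -> Optional[Image.Image]:
    """Return the opened image, or None if it exceeds Pillow's pixel limit,
    rather than letting the exception crash the caller."""
    try:
        return Image.open(path)
    except Image.DecompressionBombError:
        return None
```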
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
-
https://discord.com/channels/1020123559063990373/1149513695567810630/1186200089149046804
## QA Instructions, Screenshots, Recordings
This should fix the error in the discord thread
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Merge Plan
Can be merged when @Millu confirms it fixes the issue he ran into
<!--
A merge plan describes how this PR should be handled after it is
approved.
Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"
A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->
* add base definition of download manager
* basic functionality working
* add unit tests for download queue
* add documentation and FastAPI route
* fix docs
* add missing test dependency; fix import ordering
* fix file path length checking on windows
* fix ruff check error
* move release() into the __del__ method
* disable testing of stderr messages due to issues with pytest capsys fixture
* fix unsorted imports
* harmonized implementation of start() and stop() calls in download & install modules
* Update invokeai/app/services/download/download_base.py
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
* replace test datadir fixture with tmp_path
* replace DownloadJobBase->DownloadJob in download manager documentation
* make source and dest arguments to download_queue.download() an AnyHttpURL and Path respectively
* fix pydantic typecheck errors in the download unit test
* ruff formatting
* add "job cancelled" as an event rather than an exception
* fix ruff errors
* Update invokeai/app/services/download/download_default.py
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
* use threading.Event to stop service worker threads; handle unfinished job edge cases
* remove dangling STOP job definition
* fix ruff complaint
* fix ruff check again
* avoid race condition when start() and stop() are called simultaneously from different threads
* avoid race condition in stop() when a job becomes active while shutting down
---------
Co-authored-by: Lincoln Stein <lstein@gmail.com>
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Co-authored-by: Kent Keirsey <31807370+hipsterusername@users.noreply.github.com>
The graph library occasionally causes issues when the default graph changes substantially between versions and pydantic validation fails. See #5289 for an example.
We are not currently using the graph library, so we can disable it until we are ready to use it. It's possible that the workflow library will supersede it anyways.
* remove MacOS Sonoma check in devices.py
As of pytorch 2.1.0, float16 works with our MPS fixes on Sonoma, so the check is no longer needed.
* remove unused platform import
The project is no longer using yarn as a package manager and has moved
to pnpm, so I wanted to update the documentation on the contribution
page.
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [x] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:
I spoke with user: imic in the #dev-chat on discord.
## Have you updated all relevant documentation?
- [x] Yes
- [ ] No
## Merge Plan
- "This PR can be merged when approved"
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [X] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No
## Description
Added more default workflows to the workflow library
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Merge Plan
<!--
A merge plan describes how this PR should be handled after it is
approved.
Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"
A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->
## Added/updated tests?
- [ ] Yes
- [ ] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?
* add code to repopulate model config records after schema update
* reformat for ruff
* migrate model records using db cursor rather than the ModelRecordConfigService
* ruff fixes
* tweak exception reporting
* fix: build frontend in pypi-release workflow
This was missing, resulting in the 3.5.0rc1 having no frontend.
* fix: use node 18, set working directory
- Node 20 has a problem with `pnpm`; set it to Node 18
- Set the working directory for the frontend commands
* Don't copy extraneous paths into installer .zip
* feat(installer): delete frontend build after creating installer
This prevents an empty `dist/` from breaking the app on startup.
* feat: add python dist as release artifact, as input to enable publish to pypi
- The release workflow never runs automatically. It must be manually kicked off.
- The release workflow has an input. When running it from the GH actions UI, you will see a "Publish build on PyPi" prompt. If this value is "true", the workflow will upload the build to PyPi, releasing it. If this is anything else (e.g. "false", the default), the workflow will build but not upload to PyPi.
- The `dist/` folder (where the python package is built) is uploaded as a workflow artifact as a zip file. This can be downloaded and inspected. This allows "dry" runs of the workflow.
- The workflow job and some steps have been renamed to clarify what they do
* translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
* freeze yaml migration logic at upgrade to 3.5
* moved migration code to migration_3
---------
Co-authored-by: Lincoln Stein <lstein@gmail.com>
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [X] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [ ] Yes
- [X] No
## Description
Updater script pulls from PyPI instead of GitHub releases (this is why
the RC packages are having issues when updating through the launcher
script)
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Merge Plan
<!--
A merge plan describes how this PR should be handled after it is
approved.
Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"
A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->
## Added/updated tests?
- [ ] Yes
- [X] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you updated all relevant documentation?
- [x] Yes (N/A)
- [ ] No
## Description
This change enables the model probe to work with TI embeddings that have
the following state_dict structure:
```python
{
"<any_key>": torch.Tensor(...), # where the tensor has shape (N, embedding_dim)
}
```
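As a rough illustration of the acceptance criterion above, a check like the following would pass such files (the helper name and exact checks are assumptions, not the probe's actual code):

```python
import torch

def looks_like_ti_embedding(state_dict: dict) -> bool:
    # Accept a single arbitrary key whose value is a 2D tensor of shape (N, embedding_dim).
    if len(state_dict) != 1:
        return False
    tensor = next(iter(state_dict.values()))
    return isinstance(tensor, torch.Tensor) and tensor.dim() == 2
```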
## QA Instructions, Screenshots, Recordings
I can't imagine an embedding format that would previously have passed
the model probe, and would now fail after this change. That being said,
I'll exercise a bunch of existing TIs before merging.
- [x] Exercise existing TI formats
## Added/updated tests?
- [ ] Yes
- [x] No : _We could really benefit from tests for all of the supported
TI formats... but I'm not taking on that project right now._
## What type of PR is this? (check all applicable)
- [X] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No
## Description
As discussed with @psychedelicious, this PR changes the swagger label
on the model manager V2 routes to `model_manager_v2_unstable` in order
to warn community members that the API is liable to change.
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Merge Plan
<!--
A merge plan describes how this PR should be handled after it is
approved.
Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"
A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->
## Added/updated tests?
- [ ] Yes
- [ ] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [ ] Yes
- [x] No
## Description
Change CalculateImageTilesEvenSplitInvocation to have an overlap in
pixels rather than as a percentage of the tile. This makes it easier to
have predictable blending of the seams as you have a known overlap size.
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Merge Plan
<!--
A merge plan describes how this PR should be handled after it is
approved.
Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"
A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->
## Added/updated tests?
- [x] Yes
- [ ] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?
## What type of PR is this? (check all applicable)
- [x] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [x] Yes -
https://github.com/invoke-ai/InvokeAI/pull/5007#discussion_r1378792615
- [ ] No, because:
## Have you updated all relevant documentation?
- [x] Yes
- [ ] No
## Description
Simplify Docker image creation and execution to a single script that
spins up the right service in the docker compose file.
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Depends on #5007
## QA Instructions, Screenshots, Recordings
N/A
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Added/updated tests?
- [ ] Yes
- [x] No : same tests should work.
## [optional] Are there any post deployment tasks we need to perform?
Not to my knowledge.
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:
## Description
This was missing, resulting in the 3.5.0rc1 having no frontend.
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Discord installer thread:
https://discord.com/channels/1020123559063990373/1149513695567810630/1185200427717898260
- Comments from here in the release chat:
https://discord.com/channels/1020123559063990373/1020123559831539744/1185004017521279007
## QA Instructions, Screenshots, Recordings
I've run this locally and it works (I commented out the final steps of
the workflow that do PyPi stuff to ensure I didn't accidentally deploy
something).
You can run the workflow locally with https://github.com/nektos/act.
Suggest using the `gh` CLI version; it's very easy to set up if you have
the github CLI installed. Then you can run `gh act -W
.github/workflows/pypi-release.yml` to run the workflow locally in a
docker image.
I don't know if this local action runner would actually release to PyPi -
as mentioned, I commented those steps out when testing - but it does
successfully do both frontend and backend builds.
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Merge Plan
This needs @lstein 's approval.
<!--
A merge plan describes how this PR should be handled after it is
approved.
Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"
A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->
## [optional] Are there any post deployment tasks we need to perform?
Cut an RC2
The VAE decode on linear graphs was getting cached. This caused some unexpected behaviour around image outputs.
For example, say you ran the exact same graph twice. The first time, you get an image written to disk and added to gallery. The second time, the VAE decode is cached and no image file is created. But, the UI still gets the graph complete event and selects the first image in the gallery. The second run does not add an image to the gallery.
There are probably edge cases related to this - the UI does not expect this to happen. I'm not sure how to handle it any better in the UI.
The solution is to not cache VAE decode on the linear graphs, ever. If you run a graph twice in linear, you expect two images.
This simple change disables the node cache for terminal VAE decode nodes in all linear graphs, ensuring you always get images. If the graph was fully cached, all images after the first will be created very quickly, of course.
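A minimal sketch of the idea, assuming the linear graph is represented as a plain dict and that the terminal decode node is the standard `l2i` latents-to-image node (the surrounding graph-building code is not shown):

```python
# Hypothetical fragment of linear graph assembly: the terminal decode node opts out of caching.
graph["nodes"]["latents_to_image"] = {
    "id": "latents_to_image",
    "type": "l2i",
    "use_cache": False,  # always write a fresh image file, even when upstream nodes hit the cache
}
```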
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [X] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No
## Description
This adds a probe for the SDXL LoRA format found in the wild at
https://civitai.com/models/224641.
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
See discord message at:
https://discord.com/channels/1020123559063990373/1149510134058471514/1184982133912113182
## QA Instructions, Screenshots, Recordings
Try installing the SDXL LoRA at the URL given above.
## Merge Plan
This can be merged when approved.
## Added/updated tests?
- [ ] Yes
- [X] No : we do not yet have a comprehensive suite of models to test
probing on.
## [optional] Are there any post deployment tasks we need to perform?
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [X] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No
## Description
This minor change adds the ability to filter the model lists returned by
V2 of the model manager using the model file format (e.g. "checkpoint").
Just thought this would be a useful feature.
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Merge Plan
This can be merged when approved without any adverse effects.
<!--
A merge plan describes how this PR should be handled after it is
approved.
Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"
A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->
## Added/updated tests?
- [ ] Yes
- [X] No : minor feature - tested informally using the router API
## [optional] Are there any post deployment tasks we need to perform?
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [x] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [x] Yes
- [ ] No
## Description
This adds the Kapa assistant to our docs.
- "Reset Workflow Editor" -> "New Workflow"
- "New Workflow" gets nodes icon & is no longer danger coloured
- When creating a new workflow, if the current workflow has unsaved changes, you get a dialog asking for confirmation. If the current workflow is saved, it immediately creates a new workflow.
- "Download Workflow" -> "Save to File"
- "Upload Workflow" -> "Load from File"
- Moved "Load from File" up 1 in the menu
This model was a bit too strict, and raised validation errors when workflows that we expect to *not* have an ID (e.g. an embedded workflow) have one.
Now it strips unknown attributes, allowing those workflows to load.
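A minimal sketch of the loosened validation, assuming a pydantic v2 model (the class and field names here are illustrative, not the actual workflow schema):

```python
from pydantic import BaseModel, ConfigDict

class WorkflowWithoutID(BaseModel):
    # Unknown attributes (including a stray "id") are dropped instead of failing validation.
    model_config = ConfigDict(extra="ignore")

    name: str
    nodes: list = []
    edges: list = []
```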
- Handle an image file not existing despite being in the database.
- Add a simple pydantic model that tests only for the existence of a workflow's version.
- Check against this new model when migrating workflows, skipping if the workflow fails validation. If it succeeds, the frontend should be able to handle the workflow.
Currently translated at 98.1% (1340 of 1365 strings)
translationBot(ui): update translation (Russian)
Currently translated at 84.2% (1150 of 1365 strings)
translationBot(ui): update translation (Russian)
Currently translated at 83.1% (1135 of 1365 strings)
Co-authored-by: Васянатор <ilabulanov339@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/
Translation: InvokeAI/Web UI
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [X] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [X] No, because: dependency bump
## Have you updated all relevant documentation?
- [ ] Yes
- [x] No
## Description
Updating diffusers to 0.24 - fixes a few issues. Needs to be tested to
ensure things like our IP Adapter implementation don't break
## What type of PR is this? (check all applicable)
- [x] Refactor
- [x] Feature
- [ ] Bug Fix
- [x] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:
## Description
This PR enhances our SQLite database with migration logic.
### `SQLiteMigrator` class
The new `SQLiteMigrator` class handles safely running database
migrations. It is initialized in the `SqliteDatabase` class's init, and
immediately runs all database migrations.
### `Migration` class
Migrations are represented by a `Migration` class, which has 3
attributes:
- `db_version: int`: The database version this migration results in.
- `app_version: str`: The semver app version this migration is run for.
- `migrate: Callable[[sqlite3.Cursor], None]`: A function that performs
the migration. It receives a cursor _only_, but can do anything it wants
to do. A convention is established for these functions.
All schema-creating SQL now lives in a `migrate` function. We haven't
needed to make any data migrations yet, but when we do, this will also
be handled within one of these callbacks.
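A minimal sketch of that convention (a hand-rolled approximation, not the actual class definitions):

```python
import sqlite3
from dataclasses import dataclass
from typing import Callable

@dataclass
class Migration:
    db_version: int   # the database version this migration results in
    app_version: str  # the semver app version this migration is run for
    migrate: Callable[[sqlite3.Cursor], None]

def _migrate_example(cursor: sqlite3.Cursor) -> None:
    # All schema-creating SQL lives inside a migrate callback like this one.
    cursor.execute("CREATE TABLE IF NOT EXISTS example (id TEXT PRIMARY KEY);")

example_migration = Migration(db_version=1, app_version="3.5.0", migrate=_migrate_example)
```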
### Migration Flow
First, migrations are registered with `SQLiteMigrator` using its
`register_migration` method. This performs some basic checks of the
migration version.
After registering all migrations, they are run with the `run_migrations`
method. This does a few things:
- Creates a `version` table in the DB, if it doesn't already exist. This
table has `db_version INTEGER`, `app_version TEXT` and `migrated_at
DATETIME` columns.
- Sorts the migrations by their `db_version`.
- Does some checks to see if we need a migration.
- Backs up the database (if it's a file database). The migration bails
out if this fails.
- Runs each migration. If there is a problem, restore from backup.
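Putting the flow together, usage looks roughly like this (the constructor signature is an assumption):

```python
migrator = SQLiteMigrator(db)                   # assumed constructor; wraps the SqliteDatabase
migrator.register_migration(example_migration)  # basic version checks happen at registration time
migrator.run_migrations()                       # creates the version table, backs up, runs each migration
```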
### Included Migrations
Migrations are in `invokeai/app/services/shared/sqlite/migrations`.
#### `migration_1.py`
All\* schema SQL up to 3.4.0post2 is in `migration_1.py`. Running only
this migration should result in a database that is identical to the one
you get from starting up 3.4.0post2.
SQL in this migration is **idempotent** (same as it was when the SQL was
spread across the various services).
#### `migration_2.py`
Schema changes through 3.5.0 (the upcoming release) are in
`migration_2.py`.
SQL in this migration is **not idempotent**. Future migrations need not
be idempotent, as the migration logic ensures each will only be run
once.
### \*Caveat - ItemStorage
This class provides a generic document-db-like interface for storing
objects. Our `graph_executions` and `graphs` tables are created and
managed by this service. This PR does not touch this class and therefore
does not touch either of those two tables.
We can decide how to handle those tables in the future as the need
arises.
### Change to Model Manager Metadata table
I noticed that there is a `model_manager_metadata` table which included
the app version, and whose `version` property wasn't accessed outside
the service.
I believe the new `version` table fulfills the purpose of this table,
and have removed it.
@lstein Please let me know if this is not right.
## QA Instructions, Screenshots, Recordings
1. Case 1 - Upgrade
- Back up your 3.4.0post2 database
- Run this PR
- It should upgrade your database and everything should work exactly
like it did before
2. Case 2 - New Install
- Move your database out of the invoke root so that when the app starts,
it creates a new one
- Run this PR
- It should work just like a new install
3. Case 3 - With an In-Memory Database
- Enable the in-memory database (set `use_memory_db` under
`Paths` in `invokeai.yaml` to `true`)
- Run this PR
- It should work just like a new install
## Added/updated tests?
- [x] Yes: Fairly comprehensive tests are added for the
`SQLiteMigrator`.
- [ ] No : _please replace this line with details on why tests
have not been included_
- use simpler pattern for migration dependencies
- move SqliteDatabase & migration to utility method `init_db`, use this in both the app and tests, ensuring the same db schema is used in both
This fixes a problem with `Annotated` which prevented us from using pydantic's `Field` to specify a discriminator for a union. We had to use FastAPI's `Body` as a workaround.
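For illustration, with this bump a plain pydantic discriminated union works as expected (the union members below are made up):

```python
from typing import Annotated, Literal, Union
from pydantic import BaseModel, Field

class StringOutput(BaseModel):
    type: Literal["string"] = "string"
    value: str

class IntegerOutput(BaseModel):
    type: Literal["integer"] = "integer"
    value: int

# Previously this needed FastAPI's Body(..., discriminator="type") as a workaround.
AnyOutput = Annotated[Union[StringOutput, IntegerOutput], Field(discriminator="type")]
```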
* selector added
* ref and useeffect added
* scrolling done using useeffect
* fixed scroll and changed the ref name
* fixed scroll again
* created hook for scroll logic
* feat(ui): debounce metadata fetch by 300ms
This vastly reduces the network requests when using the arrow keys to quickly skim through images.
* feat(ui): extract logic to determine virtuoso scrollToIndex align
This needs to be used in `useNextPrevImage()` to ensure the scrolling puts the image at the top or bottom appropriately
* feat(ui): add debounce to image workflow hook
This was spamming network requests like the metadata query
---------
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Invocations now have a classification:
- Stable: LTS
- Beta: LTS planned, API may change
- Prototype: No LTS planned, API may change, may be removed entirely
The `@invocation` decorator has a new arg `classification`, and an enum `Classification` is added to `baseinvocation.py`.
The default is Stable; this is a non-breaking change.
The classification is presented in the node header as a hammer icon (Beta) or flask icon (prototype).
The icon has a tooltip briefly describing the classification.
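A sketch of how a node opts into a classification, assuming the decorator and enum named above (the import path and decorator arguments besides `classification` are assumptions):

```python
from invokeai.app.invocations.baseinvocation import BaseInvocation, Classification, invocation

@invocation("example_prototype", title="Example Prototype", classification=Classification.Prototype)
class ExamplePrototypeInvocation(BaseInvocation):
    """A node flagged as Prototype: no LTS planned, API may change or be removed entirely."""
```

Nodes that omit the argument keep the default Stable classification, so existing custom nodes are unaffected.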
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [X] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [ ] Yes
- [X] No
## Description
1. The new model manager sqlite3-based configuration record storage
system is automatically populated with probed values from existing
models found in the models path when `invokeai-web` starts up for the
first time. However, the user's customization of these models in
`invokeai.yaml`, including such things as the prediction type and model
description, are not automatically copied over. This PR enhances the
`invokeai-migrate-models-to-db` script so that any customized
configuration data from `invokeai.yaml` replaces the original probed
values. This script only needs to be run once, but it does not hurt to
run it additional times. In the near future, I'm going to register this
module with psychedelicious's sqlite migration system so that the update
happens automatically during database migration.
2. The SQL-based model config record system stores a JSON version of the
config, as well as several fields that are broken out into individual
columns for search/indexing purposes. This PR keeps the JSON and the
broken-out fields in sync using the `json_extract()` sqlite3 function to
populate the broken out `base`, `type`, `name`, `path` and `format`
fields in the `model_config` table.
3. Finally, this PR fixes the annoying `invokeai-web` shutdown message:
`TypeError: ModelInstallService.stop() takes 1 positional argument but 2
were given`
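As a rough illustration of the `json_extract()` approach in item 2 above (column and table names are taken from the description; the exact SQL in the PR may differ):

```python
# Keep the broken-out search/index columns in sync with the JSON config blob.
cursor.execute(
    """
    UPDATE model_config
    SET base   = json_extract(config, '$.base'),
        type   = json_extract(config, '$.type'),
        name   = json_extract(config, '$.name'),
        path   = json_extract(config, '$.path'),
        format = json_extract(config, '$.format');
    """
)
```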
## Related Tickets & Documents
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
If you've run `invokeai-web` at any time since PR #5039, your
`invokeai.db` will have a `model_config` table containing probe
information from all models in the invokeai models directory as well as
those in `autoimport` (if applicable). However, any models present in
`models.yaml` whose paths are outside these directories will not be
present. To add them, and to update the description and other values
from `models.yaml`, run the command `invokeai-migrate-models-to-db`. You
should see the missing models added to the database table with the
correct information.
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Added/updated tests?
- [X] Yes
- [ ] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [X] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No
## Description
This PR does three things:
1) It separates out the script that creates the installer zipfile
(`create_installer.sh`) from the script that tags the repository with
the current release version (now called `tag_release.sh`)
2) It adds new targets to Makefile for running the installer script and
tagging.
3) It adds a `help` target that lists the Makefile targets:
```
$ make help
Developer commands:
ruff Run ruff, fixing any safely-fixable errors and formatting
ruff-unsafe Run ruff, fixing all fixable errors and formatting
mypy Run mypy using the config in pyproject.toml to identify type mismatches and other coding errors
mypy-all Run mypy ignoring the config in pyproject.toml but still ignoring missing imports
frontend-build Build the frontend in order to run on localhost:9090
frontend-dev Run the frontend in developer mode on localhost:5173
installer-zip Build the installer .zip file for the current version
tag-release Tag the GitHub repository with the current version (use at release time only!)
```
`help` is also the default target so that the help message will print
out when only `make` is issued.
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Added/updated tests?
- [ ] Yes
- [X] No: not needed
## [optional] Are there any post deployment tasks we need to perform?
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [ ] Yes
- [x] No
## Description
Additional tile generation nodes of
CalculateImageTilesEvenSplitInvocation &
CalculateImageTilesMinimumOverlapInvocation
Additional blending method of merge_tiles_with_seam_blending
Updated Node MergeTilesToImageInvocation with seam blending
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Added/updated tests?
- [ ] Yes
- [ ] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?
Simplifies a couple things:
- Init is more straightforward
- It's clear in the migrator that the connection we are working with is related to the SqliteDatabase
- Simplify init args to path (None means use memory), logger, and verbose
- Add docstrings to SqliteDatabase (it had almost none)
- Update all usages of the class
CONTAINER_UID is used for the user ID within the container; however, I noticed the UID was hard-coded to 1000 in the Dockerfile `chown -R` command.
This leaves the default as 1000, but allows it to be overridden by setting CONTAINER_UID.
- min_overlap removed * restrictions and round_to_8
- min_overlap handles tile size > image size by clipping the num tiles to 1.
- Updated assert test on min_overlap.
On Windows, we must ensure the connection to the database is closed before exiting the tempfile context.
Also, rejiggered the thing to use the file directly.
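The general pattern, as a standalone sketch (not the actual test code):

```python
import sqlite3
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp_dir:
    db_path = Path(tmp_dir) / "test.db"
    conn = sqlite3.connect(db_path)
    try:
        conn.execute("CREATE TABLE t (id INTEGER);")
        conn.commit()
    finally:
        # On Windows the file cannot be removed while a connection holds it open,
        # so the connection must be closed before the tempfile context exits.
        conn.close()
```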
## What type of PR is this? (check all applicable)
- [X] Refactor
- [X] Feature
- [ ] Bug Fix
- [ ] Optimization
- [X] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No
## Description
This is the next phase of the model manager refactor, as discussed with
@psychedelicious and @RyanJDick. This implements the model installer,
which is responsible for managing model weights on disk and installing
new models.
Currently only installation of local files and directories is supported.
Remote installation will be implemented after the queued download
manager is reviewed and approved.
Please see the documentation located at
[docs/contributing/MODEL_MANAGER.md](8695ad6f59/docs/contributing/MODEL_MANAGER.md (model-installation))
for an explanation of how this module works.
Things that have changed relative to the current implementation:
1. Model importation runs in a background thread. Access to the
installation status is through a ModelInstallJob object returned by the
`import_model()` call. In addition, the installation process generates a
series of `model_install` events on the event bus.
2. `model_install_progress` events are documented, but not currently
issued. These will be issued when background downloading is implemented.
3. The model installer currently runs in parallel to the current model
manager. The frontend continues to use `configs/models.yaml` and ignores
what is in the `model_config` table of `invokeai.db`.
4. When the installer is initialized at app startup time, it
synchronizes its database to the contents of the InvokeAI `models`
directory. The current model manager does this as well, so you will see
two log messages indicating that this directory is being scanned.
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
You can test using the FastAPI swagger pages at
http://localhost:9090/docs. Use the calls listed under
`model_manager_v2`. Be aware that only installation of local models
(indicated by their file or directory path) is currently supported.
## Added/updated tests?
- [X] Yes -- see
`tests/app/services/model_install/test_model_install.py`
- [ ] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?
In other words, build frontend when creating installer.
Changes to `create_installer.sh`
- If `python` is not in `PATH` but `python3` is, alias them (well, via function). This is needed on some machines to run the installer without symlinking to `python3`.
- Make the messages about pushing tags clearer. The script force-pushes, so it's possible to accidentally take destructive action. I'm not sure how to otherwise prevent damage, so I just added a warning.
- Print out `pwd` when prompting about being in the `installer` dir.
- Rebuild the frontend - if there is already a frontend build, first checks if the user wants to rebuild it.
- Checks for existence of `../build` dir before deleting - if the dir doesn't exist, the script errors and exits at this point.
- Format and spell check.
Other changes:
- Ignore `dist/` folder.
- Delete `dist/`.
**Note: you may need to use `git rm --cached invokeai/app/frontend/web/dist/` if git still wants to track `dist/`.**
Calling `inspect.getmembers()` on a pydantic field results in `getattr` being called on all members of the field. Pydantic has some attrs that are marked deprecated.
In our test suite, we do not filter deprecation warnings, so this is surfaced.
Use a context manager to ignore deprecation warnings when calling the function.
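Roughly, the pattern is (a sketch, not the exact call site):

```python
import inspect
import warnings

def get_members_ignoring_deprecations(obj):
    # getattr on pydantic fields can touch attrs marked deprecated; silence those warnings locally.
    with warnings.catch_warnings():
        warnings.simplefilter("ignore", DeprecationWarning)
        return inspect.getmembers(obj)
```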
In the latest redux, unknown actions are typed as `unknown`. This forces type-safety upon us, requiring us to be more careful about the shape of actions.
In this case, we don't know if the rejection has a payload or what shape it may be in, so we need to do runtime checks. This is implemented with a simple zod schema, but probably the right way to handle this is to have consistent types in our RTK-Query error logic.
There are a few breaking changes, which I've addressed.
The vast majority of changes are related to new handling of `reselect`'s `createSelector` options.
For better or worse, we memoize just about all our selectors using lodash `isEqual` for `resultEqualityCheck`. The upgrade requires we explicitly set the `memoize` option to `lruMemoize` to continue using lodash here.
Doing that required changing our `defaultSelectorOptions`.
Instead of changing that and finding dozens of instances where we weren't using that and instead were defining selector options manually, I've created a pre-configured selector: `createMemoizedSelector`.
This is now used everywhere instead of `createSelector`.
- update all scripts
- update the frontend GH action
- remove yarn-related files
- update ignores
Yarn classic + storybook has some weird module resolution issue due to how it hoists dependencies.
See https://github.com/storybookjs/storybook/issues/22431#issuecomment-1630086092
When I did the `package.json` solution in this thread, it broke vite. Next option is to upgrade to yarn 3 or pnpm. I chose pnpm.
Using default_factory to autogenerate UUIDs doesn't make sense here, and results in awkward TypeScript types.
Remove the default factory and instead manually create a UUID for workflow id. There are only two places where this needs to happen so it's not a big change.
This addresses an edge case where:
1. the workflow references fields that are present on the workflow's nodes, but not on the invocation templates for those nodes and
2. The invocation template for that type does exist
This should be a fairly obscure edge case, but could happen if a user fiddled around with the workflow manually.
I ran into it as a result of two nodes having accidentally mixed up their invocation types, a problem introduced with a wonky merge commit.
This logic is moved into a hook.
This is needed for our context menus to close when the user clicks something in reactflow. It needed to be extended to support menus also.
Disabling these introduces an issue where, if you were on an image with a workflow/metadata, then switch to one without, you can end up on a disabled tab. This could potentially cause a runtime error.
* chore: bump pydantic to 2.5.2
This release fixes pydantic/pydantic#8175 and allows us to use `JsonValue`
* fix(ui): exclude public/en.json from prettier config
* fix(workflow_records): fix SQLite workflow insertion to ignore duplicates
* feat(backend): update workflows handling
Update workflows handling for Workflow Library.
**Updated Workflow Storage**
"Embedded Workflows" are workflows associated with images, and are now only stored in the image files. "Library Workflows" are not associated with images, and are stored only in DB.
This works out nicely. We have always saved workflows to files, but recently began saving them to the DB in addition to in image files. When that happened, we stopped reading workflows from files, so all the workflows that only existed in images were inaccessible. With this change, access to those workflows is restored, and no workflows are lost.
**Updated Workflow Handling in Nodes**
Prior to this change, workflows were embedded in images by passing the whole workflow JSON to a special workflow field on a node. In the node's `invoke()` function, the node was able to access this workflow and save it with the image. This (inaccurately) models workflows as a property of an image and is rather awkward technically.
A workflow is now a property of a batch/session queue item. It is available in the InvocationContext and therefore available to all nodes during `invoke()`.
**Database Migrations**
Added a `SQLiteMigrator` class to handle database migrations. Migrations were needed to accommodate the DB-related changes in this PR. See the code for details.
The `images`, `workflows` and `session_queue` tables required migrations for this PR, and are using the new migrator. Other tables/services are still creating tables themselves. A followup PR will adapt them to use the migrator.
**Other/Support Changes**
- Add a `has_workflow` column to `images` table to indicate that the image has an embedded workflow.
- Add handling for retrieving the workflow from an image in python. The image file must be fetched, the workflow extracted, and then sent to the client, so the browser does not need to parse the image file. With the `has_workflow` column, the UI knows if there is a workflow to be fetched, and only fetches when the user requests to load the workflow.
- Add route to get the workflow from an image
- Add CRUD service/routes for the library workflows
- `workflow_images` table and services removed (no longer needed now that embedded workflows are not in the DB)
* feat(ui): updated workflow handling (WIP)
Clientside updates for the backend workflow changes.
Includes roughed-out workflow library UI.
* feat: revert SQLiteMigrator class
Will pursue this in a separate PR.
* feat(nodes): do not overwrite custom node module names
Use a different, simpler method to detect if a node is custom.
* feat(nodes): restore WithWorkflow as no-op class
This class is deprecated and no longer needed. Set its workflow attr value to None (meaning it is now a no-op), and issue a warning when an invocation subclasses it.
* fix(nodes): fix get_workflow from queue item dict func
* feat(backend): add WorkflowRecordListItemDTO
This is the id, name, description, created at and updated at workflow columns/attrs. Used to display lists of workflows.
* chore(ui): typegen
* feat(ui): add workflow loading, deleting to workflow library UI
* feat(ui): workflow library pagination button styles
* wip
* feat: workflow library WIP
- Save to library
- Duplicate
- Filter/sort
- UI/queries
* feat: workflow library - system graphs - wip
* feat(backend): sync system workflows to db
* fix: merge conflicts
* feat: simplify default workflows
- Rename "system" -> "default"
- Simplify syncing logic
- Update UI to match
* feat(workflows): update default workflows
- Update TextToImage_SD15
- Add TextToImage_SDXL
- Add README
* feat(ui): refine workflow list UI
* fix(workflow_records): typo
* fix(tests): fix tests
* feat(ui): clean up workflow library hooks
* fix(db): fix mis-ordered db cleanup step
It was happening before pruning queue items - should happen afterwards, else you have to restart the app again to free disk space made available by the pruning.
* feat(ui): tweak reset workflow editor translations
* feat(ui): split out workflow redux state
The `nodes` slice is a rather complicated slice. Removing `workflow` makes it a bit more reasonable.
Also helps to flatten state out a bit.
* docs: update default workflows README
* fix: tidy up unused files, unrelated changes
* fix(backend): revert unrelated service organisational changes
* feat(backend): workflow_records.get_many arg "filter_text" -> "query"
* feat(ui): use custom hook in current image buttons
Already in use elsewhere, forgot to use it here.
* fix(ui): remove commented out property
* fix(ui): fix workflow loading
- Different handling for loading from library vs external
- Fix bug where only nodes and edges loaded
* fix(ui): fix save/save-as workflow naming
* fix(ui): fix circular dependency
* fix(db): fix bug with releasing without lock in db.clean()
* fix(db): remove extraneous lock
* chore: bump ruff
* fix(workflow_records): default `category` to `WorkflowCategory.User`
This allows old workflows to validate when reading them from the db or image files.
* hide workflow library buttons if feature is disabled
---------
Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
* add middleware to handle 403 errors
* remove log
* add logic to warn the user if not all requested images could be deleted
* lint
* fix copy
* feat(ui): simplify batchEnqueuedListener error toast logic
* feat(ui): use translations for error messages
* chore(ui): lint
---------
Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Description
You can only have one pre-commit setup on a repo. Removing husky so it
doesn't interfere with the python pre-commit.
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue
https://discord.com/channels/1020123559063990373/1149513625321603162/1181752622684831884
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [X] No, because: minor bug
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No
## Description
While writing regression tests for the queued downloader I discovered
that when using `InvokeAILogger.get_logger()` to fetch a
previously-created logger it resets that logger's log level to the
default specified in the global config. In other words, this didn't work
as expected:
```python
import logging
from invokeai.backend.util.logging import InvokeAILogger
logger1 = InvokeAILogger.get_logger('TestLogger')
logger1.setLevel(logging.DEBUG)
logger2 = InvokeAILogger.get_logger('TestLogger')
assert logger1.level == logging.DEBUG
assert logger2.level == logging.DEBUG
```
This PR fixes the problem and adds a corresponding pytest.
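A sketch of the underlying idea of the fix (not the actual implementation): apply the configured level only when a logger is created for the first time, so later `get_logger()` calls return the existing logger untouched.

```python
import logging

def get_logger(name: str = "InvokeAI", configured_level: int = logging.INFO) -> logging.Logger:
    is_new = name not in logging.Logger.manager.loggerDict
    logger = logging.getLogger(name)
    if is_new:
        # Apply the globally-configured level only on first creation;
        # a level set later via setLevel() is preserved on subsequent lookups.
        logger.setLevel(configured_level)
    return logger
```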
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Added/updated tests?
- [X] Yes
- [ ] No
## [optional] Are there any post deployment tasks we need to perform?
Adds logic to `DiskLatentsStorage.start()` to empty the latents folder on startup.
Adds start and stop methods to `ForwardCacheLatentsStorage`. This is required for `DiskLatentsStorage.start()` to be called, due to how this particular service breaks the direct DI pattern, wrapping the underlying storage with a cache.
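A minimal sketch of what the startup cleanup might look like (attribute and method names are assumptions):

```python
from pathlib import Path

class DiskLatentsStorage:
    def __init__(self, output_folder: str) -> None:
        self._output_folder = Path(output_folder)
        self._output_folder.mkdir(parents=True, exist_ok=True)

    def start(self, invoker) -> None:
        # Stale latents from previous sessions are never reused, so clear them on startup.
        self._delete_all_latents()

    def _delete_all_latents(self) -> None:
        for file in self._output_folder.glob("*"):
            if file.is_file():
                file.unlink(missing_ok=True)
```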
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [X] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No
## Description
This adds support for at least some of the SDXL embeddings currently
available on Civitai. The embeddings I have tested include:
- https://civitai.com/models/154898/marblingtixl?modelVersionId=173668
- https://civitai.com/models/148131?modelVersionId=167640
-
https://civitai.com/models/123485/hannah-ferguson-or-sdxl-or-comfyui-only-or-embedding?modelVersionId=134674
(said to be "comfyui only")
-
https://civitai.com/models/185938/kendall-jenner-sdxl-embedding?modelVersionId=208785
I am _not entirely sure_ that I have implemented support in the most
elegant way. The issue is that these embeddings have two weight tensors,
`clip_g` and `clip_l`, which correspond to `text_encoder` and
`text_encoder_2` in the main model. When the patcher calls the
ModelPatcher's `apply_ti()` method, I simply check the dimensions of the
incoming text encoder and choose the weights that match the dimensions
of the encoder.
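The dimension-matching idea, sketched (a simplification; the real `apply_ti()` does more than this):

```python
import torch

def select_ti_weights(state_dict: dict[str, torch.Tensor], encoder_hidden_size: int) -> torch.Tensor:
    # SDXL TI files carry both clip_l and clip_g tensors; pick the one whose
    # embedding dimension matches the text encoder currently being patched.
    for key in ("clip_l", "clip_g"):
        tensor = state_dict.get(key)
        if tensor is not None and tensor.shape[-1] == encoder_hidden_size:
            return tensor
    raise ValueError("No embedding tensor matches this text encoder's hidden size")
```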
While writing this, I also ran into a possible issue with the Compel
library's `get_pooled_embeddings()` call. It pads the input token list
to the model's max token length and then calls the TI manager to add the
additional tokens from the embedding. However, this ends up making the
input token list longer than the max length, and CLIPTextEncoder crashes
with a tensor size mismatch. I worked around this behavior by making the
TI manager's `expand_textual_inversion_token_ids_if_necessary()` method
remove the excess pads at the end of the token list.
Also note that I have made similar changes to `apply_ti()` in the
ONNXModelPatcher, but haven't tested them yet.
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #4401
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Added/updated tests?
- [ ] Yes
- [X] No : We need to create tests for model patching...
## [optional] Are there any post deployment tasks we need to perform?
IndexedDB has a much larger storage limit than LocalStorage, and is widely supported.
Implemented as a custom storage driver for `redux-remember` via `idb-keyval`. `idb-keyval` is a simple wrapper for IndexedDB that allows it to be used easily as a key-value store.
The logic to clear persisted storage has been updated throughout the app.
- Reset init image, control adapter images, and node image fields when their selected image fails to load
- Only do this if the app is connected via socket (this indicates that the image is "really" gone, and there isn't just a transient network issue)
It's possible for image parameters/nodes/states to reference a deleted image. For example, a resize image node might have an image set on it, and the workflow saved. The workflow contains a hard reference to that image.
The image is deleted and the workflow loaded again later. The deleted image is still in that workflow, but the app doesn't detect that. The result is that the workflow/graph appears to be valid, but will fail on invoke.
This creates a really confusing user experience: somebody shares a workflow with an image baked into it, another person opens it, everything *looks* ok, but the workflow fails with a mysterious error about a missing image.
The problem affects node images, control adapter images and the img2img init image. Resetting the image when it fails to load *and* socket is connected resolves this in a simple way.
The problem also affects canvas images, but we already handle that by displaying an error fallback image, so no change is made there.
Closes #5121
- Parse `anyOf` for enums (present when they are optional)
- Consolidate `FieldTypeParseError` and `UnsupportedFieldTypeError` into `FieldParseError` (there was no difference in handling and it simplifies things a bit)
label:Is there an existing issue for this problem?
description:|
Please use the [search function](https://github.com/invoke-ai/InvokeAI/issues?q=is%3Aissue+is%3Aopen+label%3Abug)
first to see if an issue already exists for the bug you encountered.
Please [search](https://github.com/invoke-ai/InvokeAI/issues) first to see if an issue already exists for the problem.
options:
- label:I have searched the existing issues
required:true
- type:dropdown
id:os_dropdown
attributes:
label:OS
description:Which operating System did you use when the bug occured
label:Operating system
description:Your computer's operating system.
multiple:false
options:
- 'Linux'
- 'Windows'
- 'macOS'
- 'other'
validations:
required:true
- type:dropdown
id:gpu_dropdown
attributes:
label:GPU
description:Which kind of Graphic-Adapter is your System using
label:GPU vendor
description:Your GPU's vendor.
multiple:false
options:
- 'cuda'
- 'amd'
- 'mps'
- 'cpu'
- 'Nvidia (CUDA)'
- 'AMD (ROCm)'
- 'Apple Silicon (MPS)'
- 'None (CPU)'
validations:
required:true
- type:input
id:gpu_model
attributes:
label:GPU model
description:Your GPU's model. If on Apple Silicon, this is your Mac's chip. Leave blank if on CPU.
placeholder:ex. RTX 2080 Ti, Mac M1 Pro
validations:
required:false
- type:input
id:vram
attributes:
label:VRAM
description:Size of the VRAM if known
label:GPU VRAM
description:Your GPU's VRAM. If on Apple Silicon, this is your Mac's unified memory. Leave blank if on CPU.
placeholder:8GB
validations:
required:false
- type:input
id:version-number
attributes:
label:What version did you experience this issue on?
label:Version number
description:|
Please share the version of Invoke AI that you experienced the issue on. If this is not the latest version, please update first to confirm the issue still exists. If you are testing main, please include the commit hash instead.
placeholder:X.X.X
The version of Invoke you have installed. If it is not the latest version, please update and try again to confirm the issue still exists. If you are testing main, please include the commit hash instead.
placeholder:ex. 3.6.1
validations:
required:true
- type:input
id:browser-version
attributes:
label:Browser
description:Your web browser and version.
placeholder:ex. Firefox 123.0b3
validations:
required:true
- type:textarea
id:python-deps
attributes:
label:Python dependencies
description:|
If the problem occurred during image generation, click the gear icon at the bottom left corner, click "About", click the copy button and then paste here.
validations:
required:false
- type:textarea
id:what-happened
attributes:
label:What happened?
label:What happened
description:|
Briefly describe what happened, what you expected to happen and how to reproduce this bug.
placeholder:When using the web interface and right-clicking on button X, error Y appears instead of the popup menu
Describe what happened. Include any relevant error messages, stack traces and screenshots here.
placeholder:I clicked button X and then Y happened.
validations:
required:true
- type:textarea
id:what-you-expected
attributes:
label:Screenshots
description:If applicable, add screenshots to help explain your problem
placeholder:this is what the result looked like <screenshot>
label:What you expected to happen
description:Describe what you expected to happen.
placeholder:I expected Z to happen.
validations:
required:true
- type:textarea
id:how-to-repro
attributes:
label:How to reproduce the problem
description:List steps to reproduce the problem.
placeholder:Start the app, generate an image with these settings, then click button X.
validations:
required:false
- type:textarea
id:additional-context
attributes:
label:Additional context
description:Add any other context about the problem here
description:Any other context that might help us to understand the problem.
placeholder:Only happens when there is full moon and Friday the 13th on Christmas Eve 🎅🏻
validations:
required:false
- type:input
id:contact
id:discord-username
attributes:
label:Contact Details
description:__OPTIONAL__ How can we get in touch with you if we need more info (besides this issue)?
## What type of PR is this? (check all applicable)
## Summary
- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
<!--A description of the changes in this PR. Include the kind of change (fix, feature, docs, etc), the "why" and the "how". Screenshots or videos are useful for frontend changes.-->
## Related Issues / Discussions
## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [ ] No, because:
<!--WHEN APPLICABLE: List any related issues or discussions on github or discord. If this PR closes an issue, please use the "Closes #1234" format, so that the issue will be automatically closed when the PR merges.-->
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No
## QA Instructions
<!--WHEN APPLICABLE: Describe how you have tested the changes in this PR. Provide enough detail that a reviewer can reproduce your tests.-->
## Description
## Merge Plan
<!--WHEN APPLICABLE: Large PRs, or PRs that touch sensitive things like DB schemas, may need some care when merging. For example, a careful rebase by the change author, timing to not interfere with a pending release, or a message to contributors on discord after merging.-->
## Related Tickets & Documents
## Checklist
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Added/updated tests?
- [ ] Yes
- [ ] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?
- [ ] _The PR has a short but descriptive title, suitable for a changelog_
# Invoke AI - Generative AI for Professional Creatives
## Professional Creative Tools for Stable Diffusion, Custom-Trained Models, and more.
To learn more about Invoke AI, get started instantly, or implement our Business solutions, visit [invoke.ai](https://invoke.ai)
# Invoke - Professional Creative AI Tools for Visual Media
#### To learn more about Invoke, or implement our Business solutions, visit [invoke.com]
[![discord badge]][discord link] [![latest release badge]][latest release link] [![github stars badge]][github stars link] [![github forks badge]][github forks link] [![CI checks on main badge]][CI checks on main link] [![latest commit to main badge]][latest commit to main link] [![github open issues badge]][github open issues link] [![github open prs badge]][github open prs link] [![translation status badge]][translation status link]
</div>
Invoke is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. Invoke offers an industry leading web-based UI, and serves as the foundation for multiple commercial products.
| **For users looking for a locally installed, self-hosted and self-managed service** | **For users or teams looking for a cloud-hosted, fully managed service** |
| --- | --- |
| - Free to use under a commercially-friendly license | - Monthly subscription fee with three different plan levels |
| - Download and install on compatible hardware | - Offers additional benefits, including multi-user support, improved model training, and more |
| - Includes all core studio features: generate, refine, iterate on images, and build workflows | - Hosted in the cloud for easy, secure model access and scalability |
| Quick Start -> [Installation and Updates][installation docs] | More Information -> [www.invoke.com/pricing](https://www.invoke.com/pricing) |
[![discord badge]][discord link]

| [Installation and Updates][installation docs] - [Documentation and Tutorials][docs home] - [Bug Reports][github issues] - [Contributing][contributing docs] |
[![CI checks on main badge]][CI checks on main link] [![latest commit to main badge]][latest commit to main link]
</div>
[![github open issues badge]][github open issues link] [![github open prs badge]][github open prs link] [![translation status badge]][translation status link]
## Quick Start
1. Download and unzip the installer from the bottom of the [latest release][latest release link].
2. Run the installer script.
- **Windows**: Double-click on the `install.bat` script.
- **macOS**: Open a Terminal window, drag the file `install.sh` from Finder into the Terminal, and press enter.
- **Linux**: Run `install.sh`.
3. When prompted, enter a location for the install and select your GPU type.
4. Once the install finishes, find the directory you selected during install. The default location is `C:\Users\Username\invokeai` for Windows or `~/invokeai` for Linux/macOS.
5. Run the launcher script (`invoke.bat` for Windows, `invoke.sh` for macOS and Linux) the same way you ran the installer script in step 2.
6. Select option 1 to start the application. Once it starts up, open your browser and go to <http://localhost:9090>.
7. Open the model manager tab to install a starter model and then you'll be ready to generate.
More details, including hardware requirements and manual install instructions, are available in the [installation documentation][installation docs].
## Docker Container
We publish official container images in Github Container Registry: https://github.com/invoke-ai/InvokeAI/pkgs/container/invokeai. Both CUDA and ROCm images are available. Check the above link for relevant tags.
> [!IMPORTANT]
> Ensure that Docker is set up to use the GPU. Refer to [NVIDIA][nvidia docker docs] or [AMD][amd docker docs] documentation.
### Generate!
Run the container, modifying the command as necessary:
```bash
docker run --runtime=nvidia --gpus=all --publish 9090:9090 ghcr.io/invoke-ai/invokeai
```
Then open `http://localhost:9090` and install some models using the Model Manager tab to begin generating.
For ROCm, add `--device /dev/kfd --device /dev/dri` to the `docker run` command.
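For reference, the full ROCm invocation (matching the command shown in the Docker guide later in this document) would look like this:

```bash
docker run --device /dev/kfd --device /dev/dri --publish 9090:9090 ghcr.io/invoke-ai/invokeai:main-rocm
```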
### Persist your data
You will likely want to persist your workspace outside of the container. Use the `--volume /home/myuser/invokeai:/invokeai` flag to mount some local directory (using its **absolute** path) to the `/invokeai` path inside the container. Your generated images and models will reside there. You can use this directory with other InvokeAI installations, or switch between runtime directories as needed.
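As a sketch, a CUDA run that persists data might look like the following; the host path is illustrative, so substitute your own absolute path:

```bash
docker run --runtime=nvidia --gpus=all \
  --publish 9090:9090 \
  --volume /home/myuser/invokeai:/invokeai \
  ghcr.io/invoke-ai/invokeai
```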
### DIY
Build your own image and customize the environment to match your needs using our `docker-compose` stack. See [README.md](./docker/README.md) in the [docker](./docker) directory.
## Troubleshooting, FAQ and Support
Please review our [FAQ][faq] for solutions to common installation problems and other issues.
For more help, please join our [Discord][discord link].
## Features
Full details on features can be found in [our documentation][features docs].
### Web Server & UI
Invoke runs a locally hosted web server & React UI with an industry-leading user experience.
### Unified Canvas
The Unified Canvas is a fully integrated canvas implementation with support for all core generation capabilities, in/out-painting, brush tools, and more. This creative tool unlocks the capability for artists to create with AI as a creative collaborator, and can be used to augment AI-generated imagery, sketches, photography, renders, and more.
### Workflows & Nodes
Invoke offers a fully featured workflow management solution, enabling users to combine the power of node-based workflows with the ease of a UI. This allows for customizable generation pipelines to be developed and shared by users looking to create specific workflows to support their production use-cases.
### Board & Gallery Management
Invoke features an organized gallery system for easily storing, accessing, and remixing your content in the Invoke workspace. Images can be dragged/dropped onto any image-based UI element in the application, and rich metadata within the image allows for easy recall of key prompts or settings used in your workflow.
### Other features
- Support for both ckpt and diffusers models
- SD1.5, SD2.0, SDXL, and FLUX support
- Upscaling Tools
- Embedding Manager & Support
- Model Manager & Support
- Workflow creation & management
- Node-Based Architecture
## Contributing
Anyone who wishes to contribute to this project - whether documentation, features, bug fixes, code cleanup, testing, or code reviews - is very much encouraged to do so.
Get started with contributing by reading our [contribution documentation][contributing docs], joining the [#dev-chat] or the GitHub discussion board.
We hope you enjoy using Invoke as much as we enjoy creating it, and we hope you will elect to become part of our community.
## Thanks
Invoke is a combined effort of [passionate and talented people from across the world][contributors]. We thank them for their time, hard work and effort.
[latest commit to main badge]: https://flat.badgen.net/github/last-commit/invoke-ai/InvokeAI/main?icon=github&color=yellow&label=last%20dev%20commit&cache=900
[latest commit to main link]: https://github.com/invoke-ai/InvokeAI/commits/main
(Replace `v3.0.0` with the current release number if this document is out of date).
The first command will install and upgrade new software to run
InvokeAI. The second will prepare the 2.3 directory for use with 3.0.
You may now launch the WebUI in the usual way, by selecting option [1]
from the launcher script
#### Migrating Images
The migration script will migrate your invokeai settings and models,
including textual inversion models, LoRAs and merges that you may have
installed previously. However it does **not** migrate the generated
images stored in your 2.3-format outputs directory. To do this, you
need to run an additional step:
1. From a working InvokeAI 3.0 root directory, start the launcher and
enter menu option [8] to open the "developer's console".
2. At the developer's console command line, type the command:
```bash
invokeai-import-images
```
3. This will lead you through the process of confirming the desired
source and destination for the imported images. The images will
appear in the gallery board of your choice, and contain the
original prompt, model name, and other parameters used to generate
the image.
(Many kudos to **techjedi** for contributing this script.)
## Hardware Requirements
InvokeAI is supported across Linux, Windows and macOS. Linux
users can use either an Nvidia-based card (with CUDA support) or an
AMD card (using the ROCm driver).
### System
You will need one of the following:
- An NVIDIA-based graphics card with 4 GB or more of VRAM. 6-8 GB of VRAM is highly recommended for rendering using the Stable Diffusion XL models.
- An Apple computer with an M1 chip.
- An AMD-based graphics card with 4 GB or more of VRAM (Linux only), 6-8 GB for XL rendering.
We do not recommend the GTX 1650 or 1660 series video cards. They are
unable to run in half-precision mode and do not have sufficient VRAM
to render 512x512 images.
**Memory** - At least 12 GB Main Memory RAM.
**Disk** - At least 12 GB of free disk space for the machine learning model, Python, and all its dependencies.
## Features
Feature documentation can be reviewed by navigating to [the InvokeAI Documentation page](https://invoke-ai.github.io/InvokeAI/features/)
### *Web Server & UI*
InvokeAI offers a locally hosted Web Server & React Frontend, with an industry leading user experience. The Web-based UI allows for simple and intuitive workflows, and is responsive for use on mobile devices and tablets accessing the web server.
### *Unified Canvas*
The Unified Canvas is a fully integrated canvas implementation with support for all core generation capabilities, in/outpainting, brush tools, and more. This creative tool unlocks the capability for artists to create with AI as a creative collaborator, and can be used to augment AI-generated imagery, sketches, photography, renders, and more.
### *Workflows & Nodes*
InvokeAI offers a fully featured workflow management solution, enabling users to combine the power of node-based workflows with the ease of a UI. This allows for customizable generation pipelines to be developed and shared by users looking to create specific workflows to support their production use-cases.
### *Board & Gallery Management*
Invoke AI provides an organized gallery system for easily storing, accessing, and remixing your content in the Invoke workspace. Images can be dragged/dropped onto any image-based UI element in the application, and rich metadata within the image allows for easy recall of key prompts or settings used in your workflow.
### Other features
- *Support for both ckpt and diffusers models*
- *SD 2.0, 2.1, XL support*
- *Upscaling Tools*
- *Embedding Manager & Support*
- *Model Manager & Support*
- *Workflow creation & management*
- *Node-Based Architecture*
### Latest Changes
For our latest changes, view our [Release
Notes](https://github.com/invoke-ai/InvokeAI/releases) and the
[CHANGELOG](docs/CHANGELOG.md).
### Troubleshooting
Please check out our **[Troubleshooting Guide](https://invoke-ai.github.io/InvokeAI/installation/010_INSTALL_AUTOMATED/#troubleshooting)** to get solutions for common installation
problems and other issues. For more help, please join our [Discord][discord link]
## Contributing
Anyone who wishes to contribute to this project, whether documentation, features, bug fixes, code
cleanup, testing, or code reviews, is very much encouraged to do so.
Get started with contributing by reading our [Contribution documentation](https://invoke-ai.github.io/InvokeAI/contributing/CONTRIBUTING/), joining the [#dev-chat](https://discord.com/channels/1020123559063990373/1049495067846524939) or the GitHub discussion board.
If you are unfamiliar with how
to contribute to GitHub projects, we have a new contributor checklist you can follow to get started contributing:
All commands are to be run from the `docker` directory: `cd docker`
First things first:
- Ensure that Docker can use your [NVIDIA][nvidia docker docs] or [AMD][amd docker docs] GPU.
- This document assumes a Linux system, but should work similarly under Windows with WSL2.
- We don't recommend running Invoke in Docker on macOS at this time. It works, but very slowly.
## Quickstart
No `docker compose`, no persistence, single command, using the official images:
**CUDA (NVIDIA GPU):**
```bash
docker run --runtime=nvidia --gpus=all --publish 9090:9090 ghcr.io/invoke-ai/invokeai
```
**ROCm (AMD GPU):**
```bash
docker run --device /dev/kfd --device /dev/dri --publish 9090:9090 ghcr.io/invoke-ai/invokeai:main-rocm
```
Open `http://localhost:9090` in your browser once the container finishes booting, install some models, and generate away!
### Data persistence
To persist your generated images and downloaded models outside of the container, add a `--volume/-v` flag to the above command, e.g.:
```bash
docker run --volume /some/local/path:/invokeai {...etc...}
```
`/some/local/path` will contain all your data.
It can *usually* be reused between different installs of Invoke. Tread with caution and read the release notes!
## Customize the container
The included `run.sh` script is a convenience wrapper around `docker compose`. It can be helpful for passing additional build arguments to `docker compose`. Alternatively, the familiar `docker compose` commands work just as well.
```bash
cd docker
cp .env.sample .env
# edit .env to your liking if you need to; it is well commented.
./run.sh
```
It will take a few minutes to build the image the first time. Once the application starts up, open `http://localhost:9090` in your browser to invoke!
> [!TIP]
>When using the `run.sh` script, the container will continue running after Ctrl+C. To shut it down, use the `docker compose down` command.
## Docker setup in detail
#### Linux
1. Ensure buildkit is enabled in the Docker daemon settings (`/etc/docker/daemon.json`)
2. Install the `docker compose` plugin using your package manager, or follow a [tutorial](https://docs.docker.com/compose/install/linux/#install-using-the-repository).
- The deprecated `docker-compose` (hyphenated) CLI probably won't work. Update to a recent version.
3. Ensure the Docker daemon is able to access the GPU.
   - You may need to install [nvidia-container-toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html)
> You'll be better off installing Invoke directly on your system, because Docker can not use the GPU on macOS.
If you are still reading:
1. Ensure Docker has at least 16GB RAM
2. Enable VirtioFS for file sharing
3. Enable `docker compose` V2 support
This is done via Docker Desktop preferences.
## Quickstart
### Configure the Invoke Environment
1. Make a copy of `env.sample` and name it `.env` (`cp env.sample .env` (Mac/Linux) or `copy example.env .env` (Windows)). Make changes as necessary. Set `INVOKEAI_ROOT` to an absolute path to:
a. the desired location of the InvokeAI runtime directory, or
b. an existing, v3.0.0 compatible runtime directory.
1. `docker compose up`
1. Make a copy of `.env.sample` and name it `.env` (`cp .env.sample .env` (Mac/Linux) or `copy example.env .env` (Windows)). Make changes as necessary. Set `INVOKEAI_ROOT` to an absolute path to the desired location of the InvokeAI runtime directory. It may be an existing directory from a previous installation (post 4.0.0).
1. Execute `run.sh`
The image will be built automatically if needed.
The runtime directory (holding models and outputs) will be created in the location specified by `INVOKEAI_ROOT`. The default location is `~/invokeai`. The runtime directory will be populated with the base configs and models necessary to start generating.
The runtime directory (holding models and outputs) will be created in the location specified by `INVOKEAI_ROOT`. The default location is `~/invokeai`. Navigate to the Model Manager tab and install some models before generating.
### Use a GPU
- WSL2 is *required* for Windows.
- only `x86_64` architecture is supported.
The Docker daemon on the system must be already set up to use the GPU. In case of Linux, this involves installing `nvidia-docker-runtime` and configuring the `nvidia` runtime as default. Steps will be different for AMD. Please see Docker documentation for the most up-to-date instructions for using your GPU with Docker.
The Docker daemon on the system must be already set up to use the GPU. In case of Linux, this involves installing `nvidia-docker-runtime` and configuring the `nvidia` runtime as default. Steps will be different for AMD. Please see Docker/NVIDIA/AMD documentation for the most up-to-date instructions for using your GPU with Docker.
To use an AMD GPU, set `GPU_DRIVER=rocm` in your `.env` file before running `./run.sh`.
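For example, a minimal `.env` fragment for an AMD setup might look like this (the root path is illustrative):

```bash
# .env
INVOKEAI_ROOT=/home/myuser/invokeai
GPU_DRIVER=rocm
```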
## Customize
Check the `.env.sample` file. It contains some environment variables for running in Docker. Copy it, name it `.env`, and fill it in with your own values. Next time you run `docker compose up`, your custom values will be used.
Check the `.env.sample` file. It contains some environment variables for running in Docker. Copy it, name it `.env`, and fill it in with your own values. Next time you run `run.sh`, your custom values will be used.
You can also set these values in `docker-compose.yml` directly, but `.env` will help avoid conflicts when code is updated.
Example (values are optional, but setting `INVOKEAI_ROOT` is highly recommended):
Values are optional, but setting `INVOKEAI_ROOT` is highly recommended. The default is `~/invokeai`. Example:
```bash
INVOKEAI_ROOT=/Volumes/WorkDrive/invokeai
CONTAINER_UID=1000
GPU_DRIVER=cuda
```
Any environment variables supported by InvokeAI can be set here - please see the [Configuration docs](https://invoke-ai.github.io/InvokeAI/features/CONFIGURATION/) for further detail.
Any environment variables supported by InvokeAI can be set here. See the [Configuration docs](https://invoke-ai.github.io/InvokeAI/features/CONFIGURATION/) for further detail.
## Even Moar Customizing!
---
See the `docker-compose.yml` file. The `command` instruction can be uncommented and used to run arbitrary startup commands. Some examples below.
### Reconfigure the runtime directory
Can be used to download additional models from the supported model list
In conjunction with `INVOKEAI_ROOT`, it can also be used to initialize a runtime directory.
This release (along with the post1 and post2 follow-on releases) expands support for additional LoRA and LyCORIS models, upgrades diffusers versions, and fixes a few bugs.
### LoRA and LyCORIS Support Improvement
A number of LoRA/LyCORIS fine-tune files (those which alter the text encoder as well as the unet model) were not having the desired effect in InvokeAI. This bug has now been fixed. Full documentation of LoRA support is available at InvokeAI LoRA Support.
Previously, InvokeAI did not distinguish between LoRA/LyCORIS models based on Stable Diffusion v1.5 vs those based on v2.0 and 2.1, leading to a crash when an incompatible model was loaded. This has now been fixed. In addition, the web pulldown menus for LoRA and Textual Inversion selection have been enhanced to show only those files that are compatible with the currently-selected Stable Diffusion model.
Support for the newer LoKR LyCORIS files has been added.
### Library Updates and Speed/Reproducibility Advancements
The major enhancement in this version is that NVIDIA users no longer need to decide between speed and reproducibility. Previously, if you activated the Xformers library, you would see improvements in speed and memory usage, but multiple images generated with the same seed and other parameters would be slightly different from each other. This is no longer the case. Relative to 2.3.5 you will see improved performance when running without Xformers, and even better performance when Xformers is activated. In both cases, images generated with the same settings will be identical.
Here are the new library versions:
| Library | Version |
| --- | --- |
| Torch | 2.0.0 |
| Diffusers | 0.16.1 |
| Xformers | 0.0.19 |
| Compel | 1.1.5 |
Other Improvements
### Performance Improvements
When a model is loaded for the first time, InvokeAI calculates its checksum for incorporation into the PNG metadata. This process could take up to a minute on network-mounted disks and WSL mounts. This release noticeably speeds up the process.
### Bug Fixes
The "import models from directory" and "import from URL" functionality in the console-based model installer has now been fixed.
When running the WebUI, we have reduced the number of times that InvokeAI reaches out to HuggingFace to fetch the list of embeddable Textual Inversion models. We have also caught and fixed a problem with the updater not correctly detecting when another instance of the updater is running.
## v2.3.4 <small>(7 April 2023)</small>
What's New in 2.3.4
This features release adds support for LoRA (Low-Rank Adaptation) and LyCORIS (Lora beYond Conventional) models, as well as some minor bug fixes.
### LoRA and LyCORIS Support
LoRA files contain fine-tuning weights that enable particular styles, subjects or concepts to be applied to generated images. LyCORIS files are an extended variant of LoRA. InvokeAI supports the most common LoRA/LyCORIS format, which ends in the suffix .safetensors. You will find numerous LoRA and LyCORIS models for download at Civitai, and a small but growing number at Hugging Face. Full documentation of LoRA support is available at InvokeAI LoRA Support. (Pre-release note: this page will only be available after release.)
To use LoRA/LyCORIS models in InvokeAI:
Download the .safetensors files of your choice and place them in /path/to/invokeai/loras. This directory was not present in earlier versions of InvokeAI but will be created for you the first time you run the command-line or web client. You can also create the directory manually.
Add withLora(lora-file,weight) to your prompts. The weight is optional and will default to 1.0. A few examples, assuming that a LoRA file named loras/sushi.safetensors is present:
family sitting at dinner table eating sushi withLora(sushi,0.9)
family sitting at dinner table eating sushi withLora(sushi, 0.75)
family sitting at dinner table eating sushi withLora(sushi)
Multiple withLora() prompt fragments are allowed. The weight can be arbitrarily large, but the useful range is roughly 0.5 to 1.0. Higher weights make the LoRA's influence stronger. Negative weights are also allowed, which can lead to some interesting effects.
Generate as you usually would! If you find that the image is too "crisp" try reducing the overall CFG value or reducing individual LoRA weights. As is the case with all fine-tunes, you'll get the best results when running the LoRA on top of the model similar to, or identical with, the one that was used during the LoRA's training. Don't try to load a SD 1.x-trained LoRA into a SD 2.x model, and vice versa. This will trigger a non-fatal error message and generation will not proceed.
You can change the location of the loras directory by passing the `--lora_directory` option to `invokeai`.
### New WebUI LoRA and Textual Inversion Buttons
This version adds two new web interface buttons for inserting LoRA and Textual Inversion triggers into the prompt as shown in the screenshot below.
Clicking on one or the other of the buttons will bring up a menu of available LoRA/LyCORIS or Textual Inversion trigger terms. Select a menu item to insert the properly-formatted withLora() or <textual-inversion> prompt fragment into the positive prompt. The number in parentheses indicates the number of trigger terms currently in the prompt. You may click the button again and deselect the LoRA or trigger to remove it from the prompt, or simply edit the prompt directly.
Currently terms are inserted into the positive prompt textbox only. However, some textual inversion embeddings are designed to be used with negative prompts. To move a textual inversion trigger into the negative prompt, simply cut and paste it.
By default the Textual Inversion menu only shows locally installed models found at startup time in /path/to/invokeai/embeddings. However, InvokeAI has the ability to dynamically download and install additional Textual Inversion embeddings from the HuggingFace Concepts Library. You may choose to display the most popular of these (with five or more likes) in the Textual Inversion menu by going to Settings and turning on "Show Textual Inversions from HF Concepts Library." When this option is activated, the locally-installed TI embeddings will be shown first, followed by uninstalled terms from Hugging Face. See The Hugging Face Concepts Library and Importing Textual Inversion files for more information.
### Minor features and fixes
This release changes model switching behavior so that the command-line and Web UIs save the last model used and restore it the next time they are launched. It also improves the behavior of the installer so that the pip utility is kept up to date.
### Known Bugs in 2.3.4
These are known bugs in the release.
The Ancestral DPMSolverMultistepScheduler (k_dpmpp_2a) sampler is not yet implemented for diffusers models and will disappear from the WebUI Sampler menu when a diffusers model is selected.
Windows Defender will sometimes raise Trojan or backdoor alerts for the codeformer.pth face restoration model, as well as the CIDAS/clipseg and runwayml/stable-diffusion-v1.5 models. These are false positives and can be safely ignored. InvokeAI performs a malware scan on all models as they are loaded. For additional security, you should use safetensors models whenever they are available.
## v2.3.3 <small>(28 March 2023)</small>
This is a bugfix and minor feature release.
### Bugfixes
Since version 2.3.2 the following bugs have been fixed:
Bugs
When using legacy checkpoints with an external VAE, the VAE file is now scanned for malware prior to loading. Previously only the main model weights file was scanned.
Textual inversion will select an appropriate batch size based on whether xformers is active, and will default to xformers enabled if the library is detected.
The batch script log file names have been fixed to be compatible with Windows.
Occasional corruption of the .next_prefix file (which stores the next output file name in sequence) on Windows systems is now detected and corrected.
Support loading of legacy config files that have no personalization (textual inversion) section.
An infinite loop when opening the developer's console from within the invoke.sh script has been corrected.
Documentation fixes, including a recipe for detecting and fixing problems with the AMD GPU ROCm driver.
Enhancements
It is now possible to load and run several community-contributed SD-2.0 based models, including the often-requested "Illuminati" model.
The "NegativePrompts" embedding file, and others like it, can now be loaded by placing it in the InvokeAI embeddings directory.
If no --model is specified at launch time, InvokeAI will remember the last model used and restore it the next time it is launched.
On Linux systems, the invoke.sh launcher now uses a prettier console-based interface. To take advantage of it, install the dialog package using your package manager (e.g. sudo apt install dialog).
When loading legacy models (safetensors/ckpt) you can specify a custom config file and/or a VAE by placing like-named files in the same directory as the model following this example:
my-favorite-model.ckpt
my-favorite-model.yaml
my-favorite-model.vae.pt # or my-favorite-model.vae.safetensors
### Known Bugs in 2.3.3
These are known bugs in the release.
The Ancestral DPMSolverMultistepScheduler (k_dpmpp_2a) sampler is not yet implemented for diffusers models and will disappear from the WebUI Sampler menu when a diffusers model is selected.
Windows Defender will sometimes raise Trojan or backdoor alerts for the codeformer.pth face restoration model, as well as the CIDAS/clipseg and runwayml/stable-diffusion-v1.5 models. These are false positives and can be safely ignored. InvokeAI performs a malware scan on all models as they are loaded. For additional security, you should use safetensors models whenever they are available.
## v2.3.2 <small>(11 March 2023)</small>
This is a bugfix and minor feature release.
### Bugfixes
Since version 2.3.1 the following bugs have been fixed:
Black images appearing for potential NSFW images when generating with legacy checkpoint models and both --no-nsfw_checker and --ckpt_convert turned on.
Black images appearing when generating from models fine-tuned on Stable-Diffusion-2-1-base. When importing V2-derived models, you may be asked to select whether the model was derived from a "base" model (512 pixels) or the 768-pixel SD-2.1 model.
The "Use All" button was not restoring the Hi-Res Fix setting on the WebUI
When using the model installer console app, models failed to import correctly when importing from directories with spaces in their names. A similar issue with the output directory was also fixed.
Crashes that occurred during model merging.
Restore previous naming of Stable Diffusion base and 768 models.
Upgraded to latest versions of diffusers, transformers, safetensors and accelerate libraries upstream. We hope that this will fix the assertion NDArray > 2**32 issue that MacOS users have had when generating images larger than 768x768 pixels. Please report back.
As part of the upgrade to diffusers, the location of the diffusers-based models has changed from models/diffusers to models/hub. When you launch InvokeAI for the first time, it will prompt you to OK a one-time move. This should be quick and harmless, but if you have modified your models/diffusers directory in some way, for example using symlinks, you may wish to cancel the migration and make appropriate adjustments.
New "Invokeai-batch" script
### Invoke AI Batch
2.3.2 introduces a new command-line only script called invokeai-batch that can be used to generate hundreds of images from prompts and settings that vary systematically. This can be used to try the same prompt across multiple combinations of models, steps, CFG settings and so forth. It also allows you to template prompts and generate a combinatorial list like:
a shack in the mountains, photograph
a shack in the mountains, watercolor
a shack in the mountains, oil painting
a chalet in the mountains, photograph
a chalet in the mountains, watercolor
a chalet in the mountains, oil painting
a shack in the desert, photograph
...
If you have a system with multiple GPUs, or a single GPU with lots of VRAM, you can parallelize generation across the combinatorial set, reducing wait times and using your system's resources efficiently (make sure you have good GPU cooling).
To try invokeai-batch out, launch the "developer's console" using the invoke launcher script, or activate the invokeai virtual environment manually. From the console, give the command invokeai-batch --help in order to learn how the script works and create your first template file for dynamic prompt generation.
### Known Bugs in 2.3.2
These are known bugs in the release.
The Ancestral DPMSolverMultistepScheduler (k_dpmpp_2a) sampler is not yet implemented for diffusers models and will disappear from the WebUI Sampler menu when a diffusers model is selected.
Windows Defender will sometimes raise a Trojan alert for the codeformer.pth face restoration model. As far as we have been able to determine, this is a false positive and can be safely whitelisted.
## v2.3.1 <small>(22 February 2023)</small>
This is primarily a bugfix release, but it does provide several new features that will improve the user experience.
### Enhanced support for model management
InvokeAI now makes it convenient to add, remove and modify models. You can individually import models that are stored on your local system, scan an entire folder and its subfolders for models and import them automatically, and even directly import models from the internet by providing their download URLs. You also have the option of designating a local folder to scan for new models each time InvokeAI is restarted.
There are three ways of accessing the model management features:
From the WebUI, click on the cube to the right of the model selection menu. This will bring up a form that allows you to import models individually from your local disk or scan a directory for models to import.
Using the Model Installer App
Choose option (5) download and install models from the invoke launcher script to start a new console-based application for model management. You can use this to select from a curated set of starter models, or import checkpoint, safetensors, and diffusers models from a local disk or the internet. The example below shows importing two checkpoint URLs from popular SD sites and a HuggingFace diffusers model using its Repository ID. It also shows how to designate a folder to be scanned at startup time for new models to import.
Command-line users can start this app using the command invokeai-model-install.
Using the Command Line Client (CLI)
The !install_model and !convert_model commands have been enhanced to allow entering of URLs and local directories to scan and import. The first command installs .ckpt and .safetensors files as-is. The second one converts them into the faster diffusers format before installation.
Internally InvokeAI is able to probe the contents of a .ckpt or .safetensors file to distinguish among v1.x, v2.x and inpainting models. This means that you do not need to include "inpaint" in your model names to use an inpainting model. Note that Stable Diffusion v2.x models will be autoconverted into a diffusers model the first time you use them.
Please see INSTALLING MODELS for more information on model management.
### An Improved Installer Experience
The installer now launches a console-based UI for setting and changing commonly-used startup options:
After selecting the desired options, the installer installs several support models needed by InvokeAI's face reconstruction and upscaling features and then launches the interface for selecting and installing models shown earlier. At any time, you can edit the startup options by launching invoke.sh/invoke.bat and entering option (6) change InvokeAI startup options
Command-line users can launch the new configure app using invokeai-configure.
This release also comes with a renewed updater. To do an update without going through a whole reinstallation, launch invoke.sh or invoke.bat and choose option (9) update InvokeAI . This will bring you to a screen that prompts you to update to the latest released version, to the most current development version, or any released or unreleased version you choose by selecting the tag or branch of the desired version.
Command-line users can run this interface by typing invokeai-configure
### Image Symmetry Options
There are now features to generate horizontal and vertical symmetry during generation. The way these work is to wait until a selected step in the generation process and then to turn on a mirror image effect. In addition to generating some cool images, you can also use this to make side-by-side comparisons of how an image will look with more or fewer steps. Access this option from the WebUI by selecting Symmetry from the image generation settings, or within the CLI by using the options --h_symmetry_time_pct and --v_symmetry_time_pct (these can be abbreviated to --h_sym and --v_sym like all other options).
### A New Unified Canvas Look
This release introduces a beta version of the WebUI Unified Canvas. To try it out, open up the settings dialogue in the WebUI (gear icon) and select Use Canvas Beta Layout:
Refresh the screen and go to the Unified Canvas (left side of screen, third icon from the top). The new layout is designed to provide more space to work in and to keep the image controls close to the image itself:
Model conversion and merging within the WebUI
The WebUI now has an intuitive interface for model merging, as well as for permanent conversion of models from legacy .ckpt/.safetensors formats into diffusers format. These options are also available directly from the invoke.sh/invoke.bat scripts.
An easier way to contribute translations to the WebUI
We have migrated our translation efforts to Weblate, a FOSS translation product. Maintaining the growing project's translations is now far simpler for the maintainers and community. Please review our brief translation guide for more information on how to contribute.
Numerous internal bugfixes and performance issues
### Bug Fixes
This release quashes multiple bugs that were reported in 2.3.0. Major internal changes include upgrading to diffusers 0.13.0, and using the compel library for prompt parsing. See Detailed Change Log for a detailed list of bugs caught and squished.
Summary of InvokeAI command line scripts (all accessible via the launcher menu)
| Command | Description |
| --- | --- |
| `invokeai` | Command line interface |
| `invokeai --web` | Web interface |
| `invokeai-model-install` | Model installer with console forms-based front end |
| `invokeai-ti --gui` | Textual inversion, with a console forms-based front end |
| `invokeai-merge --gui` | Model merging, with a console forms-based front end |
| `invokeai-configure` | Startup configuration; can also be used to reinstall support models |
| `invokeai-update` | InvokeAI software updater |
### Known Bugs in 2.3.1
These are known bugs in the release.
MacOS users generating 768x768 pixel images or greater using diffusers models may experience a hard crash with assertion `NDArray > 2**32`. This appears to be an issu...
## v2.3.0 <small>(15 January 2023)</small>
**Transition to diffusers**
Version 2.3 provides support for both the traditional `.ckpt` weight
checkpoint files as well as the HuggingFace `diffusers` format. This
introduces several changes you should know about.
1. The models.yaml format has been updated. There are now two
different types of configuration stanzas. The traditional ckpt
one will look like this, with a `format` of `ckpt` and a
`weights` field that points to the absolute or ROOTDIR-relative
location of the ckpt file.
```
inpainting-1.5:
  description: RunwayML SD 1.5 model optimized for inpainting (4.27 GB)
  format: ckpt
  weights: /path/to/sd-v1-5-inpainting.ckpt   # illustrative; absolute or ROOTDIR-relative path to the ckpt file
```
1. On CUDA systems, the 768 pixel stable-diffusion-2.0 and
stable-diffusion-2.1 models can only be run as `diffusers` models
when the `xformers` library is installed and configured. Without
`xformers`, InvokeAI returns black images.
2. Inpainting and outpainting have regressed in quality.
Both these issues are being actively worked on.
## v2.2.4 <small>(11 December 2022)</small>
**the `invokeai` directory**
Previously there were two directories to worry about, the directory that
contained the InvokeAI source code and the launcher scripts, and the `invokeai`
directory that contained the models files, embeddings, configuration and
outputs. With the 2.2.4 release, this dual system is done away with, and
everything, including the `invoke.bat` and `invoke.sh` launcher scripts, now
live in a directory named `invokeai`. By default this directory is located in
your home directory (e.g. `\Users\yourname` on Windows), but you can select
where it goes at install time.
After installation, you can delete the install directory (the one that the zip
file creates when it unpacks). Do **not** delete or move the `invokeai`
directory!
**Initialization file `invokeai/invokeai.init`**
You can place frequently-used startup options in this file, such as the default
number of steps or your preferred sampler. To keep everything in one place, this
file has now been moved into the `invokeai` directory and is named
`invokeai.init`.
**To update from Version 2.2.3**
The easiest route is to download and unpack one of the 2.2.4 installer files.
When it asks you for the location of the `invokeai` runtime directory, respond
with the path to the directory that contains your 2.2.3 `invokeai`. That is, if
`invokeai` lives at `C:\Users\fred\invokeai`, then answer with `C:\Users\fred`
and answer "Y" when asked if you want to reuse the directory.
The `update.sh` (`update.bat`) script that came with the 2.2.3 source installer
does not know about the new directory layout and won't be fully functional.
**To update to 2.2.5 (and beyond) there's now an update path**
As they become available, you can update to more recent versions of InvokeAI
using an `update.sh` (`update.bat`) script located in the `invokeai` directory.
Running it without any arguments will install the most recent version of
InvokeAI. Alternatively, you can install a specific release by running the `update.sh`
script with an argument in the command shell. This syntax accepts the path to
the desired release's zip file, which you can find by clicking on the green
"Code" button on this repository's home page.
**Other 2.2.4 Improvements**
- Fix InvokeAI GUI initialization by @addianto in #1687
- fix link in documentation by @lstein in #1728
- Fix broken link by @ShawnZhong in #1736
- Remove reference to binary installer by @lstein in #1731
- documentation fixes for 2.2.3 by @lstein in #1740
- Modify installer links to point closer to the source installer by @ebr in
#1745
- add documentation warning about 1650/60 cards by @lstein in #1753
- Fix Linux source URL in installation docs by @andybearman in #1756
- Make install instructions discoverable in readme by @damian0815 in #1752
- typo fix by @ofirkris in #1755
- Non-interactive model download (support HUGGINGFACE_TOKEN) by @ebr in #1578
- fix(srcinstall): shell installer - cp scripts instead of linking by @tildebyte
in #1765
- stability and usage improvements to binary & source installers by @lstein in
#1760
- fix off-by-one bug in cross-attention-control by @damian0815 in #1774
- Eventually update APP_VERSION to 2.2.3 by @spezialspezial in #1768
- invoke script cds to its location before running by @lstein in #1805
- Make PaperCut and VoxelArt models load again by @lstein in #1730
- Fix --embedding_directory / --embedding_path not working by @blessedcoolant in
#1817
- Clean up readme by @hipsterusername in #1820
- Optimized Docker build with support for external working directory by @ebr in
#1544
- disable pushing the cloud container by @mauwii in #1831
- Fix docker push github action and expand with additional metadata by @ebr in
#1837
- Fix Broken Link To Notebook by @VedantMadane in #1821
- Account for flat models by @spezialspezial in #1766
- Update invoke.bat.in isolate environment variables by @lynnewu in #1833
- Arch Linux Specific PatchMatch Instructions & fixing conda install on linux by
@SammCheese in #1848
- Make force free GPU memory work in img2img by @addianto in #1844
- New installer by @lstein
## v2.2.3 <small>(2 December 2022)</small>
!!! Note
This point release removes references to the binary installer from the
installation guide. The binary installer is not stable at the current
time. First time users are encouraged to use the "source" installer as
described in [Installing InvokeAI with the Source Installer](installation/deprecated_documentation/INSTALL_SOURCE.md)
With InvokeAI 2.2, this project now provides enthusiasts and professionals a
robust workflow solution for creating AI-generated and human facilitated
compositions. Additional enhancements have been made as well, improving safety,
ease of use, and installation.
Optimized for efficiency, InvokeAI needs only ~3.5GB of VRAM to generate a
512x768 image (and less for smaller images), and is compatible with
Windows/Linux/Mac (M1 & M2).
You can see the [release video](https://youtu.be/hIYBfDtKaus) here, which
introduces the main WebUI enhancement for version 2.2 -
[The Unified Canvas](features/UNIFIED_CANVAS.md). This new workflow is the
biggest enhancement added to the WebUI to date, and unlocks a stunning amount of
potential for users to create and iterate on their creations. The following
sections describe what's new for InvokeAI.
## v2.2.2 <small>(30 November 2022)</small>
!!! note
The binary installer is not ready for prime time. First time users are recommended to install via the "source" installer accessible through the links at the bottom of this page.
With InvokeAI 2.2, this project now provides enthusiasts and professionals a
robust workflow solution for creating AI-generated and human facilitated
compositions. Additional enhancements have been made as well, improving safety,
ease of use, and installation.
Optimized for efficiency, InvokeAI needs only ~3.5GB of VRAM to generate a
512x768 image (and less for smaller images), and is compatible with
Windows/Linux/Mac (M1 & M2).
You can see the [release video](https://youtu.be/hIYBfDtKaus) here, which
introduces the main WebUI enhancement for version 2.2 - [The Unified Canvas](features/UNIFIED_CANVAS.md).
The app is published twice, in different build formats.
- A [PyPI] distribution. This includes both a source distribution and built distribution (a wheel). Users install with `pip install invokeai`. The updater uses this build.
- An installer on the [InvokeAI Releases Page]. This is a zip file with install scripts and a wheel. This is only used for new installs.
## General Prep
Make a developer call-out for PRs to merge. Merge and test things out.
While the release workflow does not include end-to-end tests, it does pause before publishing so you can download and test the final build.
## Release Workflow
The `release.yml` workflow runs a number of jobs to handle code checks, tests, build and publish on PyPI.
It is triggered on **tag push**, when the tag matches `v*`. It doesn't matter if you've prepped a release branch like `release/v3.5.0` or are releasing from `main` - it works the same.
> Because commits are reference-counted, it is safe to create a release branch, tag it, let the workflow run, then delete the branch. So long as the tag exists, that commit will exist.
### Triggering the Workflow
Run `make tag-release` to tag the current commit and kick off the workflow.
The release may also be dispatched [manually].
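A minimal sketch of the typical flow, using a hypothetical `release/v3.5.0` branch as described above:

```bash
# Optional: prep a throwaway release branch; the tag keeps the commit alive after the branch is deleted
git checkout -b release/v3.5.0
make tag-release        # tags the current commit and kicks off the release workflow (triggered on tag push)
git checkout main
git branch -D release/v3.5.0
```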
### Workflow Jobs and Process
The workflow consists of a number of concurrently-run jobs, and two final publish jobs.
The publish jobs require manual approval and are only run if the other jobs succeed.
#### `check-version` Job
This job checks that the git ref matches the app version. It matches the ref against the `__version__` variable in `invokeai/version/invokeai_version.py`.
When the workflow is triggered by tag push, the ref is the tag. If the workflow is run manually, the ref is the target selected from the **Use workflow from** dropdown.
This job uses [samuelcolvin/check-python-version].
> Any valid [version specifier] works, so long as the tag matches the version. The release workflow works exactly the same for `RC`, `post`, `dev`, etc.
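As a quick local sanity check before tagging, you could compare the app version to the most recent tag (a sketch; it assumes the version module path given above and that `invokeai` is importable in your environment):

```bash
python -c "from invokeai.version.invokeai_version import __version__; print(__version__)"
git describe --tags --abbrev=0   # most recent tag reachable from the current commit
```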
#### Check and Test Jobs
- **`python-tests`**: runs `pytest` on matrix of platforms
- **`python-checks`**: runs `ruff` (format and lint)
- **`frontend-tests`**: runs `vitest`
- **`frontend-checks`**: runs `prettier` (format), `eslint` (lint), `dpdm` (circular refs), `tsc` (static type check) and `knip` (unused imports)
> **TODO** We should add `mypy` or `pyright` to the **`python-checks`** job.
> **TODO** We should add an end-to-end test job that generates an image.
#### `build-installer` Job
This sets up both python and frontend dependencies and builds the python package. Internally, this runs `installer/create_installer.sh` and uploads two artifacts:
- **`dist`**: the python distribution, to be published on PyPI
- **`InvokeAI-installer-${VERSION}.zip`**: the installer to be included in the GitHub release
#### Sanity Check & Smoke Test
At this point, the release workflow pauses as the remaining publish jobs require approval. Time to test the installer.
Because the installer pulls from PyPI, and we haven't published to PyPI yet, you will need to install from the wheel:
- Download and unzip `dist.zip` and the installer from the **Summary** tab of the workflow
- Run the installer script using the `--wheel` CLI arg, pointing at the wheel:
- Install to a temporary directory so you get the new user experience
- Download a model and generate
> The same wheel file is bundled in the installer and in the `dist` artifact, which is uploaded to PyPI. You should end up with exactly the same installation as if the installer got the wheel from PyPI.
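A sketch of the wheel-based smoke test; the directory and file names below are illustrative and will differ per release:

```bash
# Unzip the artifacts downloaded from the workflow's Summary tab
unzip dist.zip
unzip InvokeAI-installer-v5.0.0.zip
# Point the installer at the wheel; choose a temporary directory when prompted
./InvokeAI-Installer/install.sh --wheel dist/InvokeAI-5.0.0-py3-none-any.whl
```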
##### Something isn't right
If testing reveals any issues, no worries. Cancel the workflow, which will cancel the pending publish jobs (you didn't approve them prematurely, right?).
Now you can start from the top:
- Fix the issues and PR the fixes per usual
- Get the PR approved and merged per usual
- Switch to `main` and pull in the fixes
- Run `make tag-release` to move the tag to `HEAD` (which has the fixes) and kick off the release workflow again
- Re-do the sanity check
#### PyPI Publish Jobs
The publish jobs will not run if any of the previous jobs fail.
They use [GitHub environments], which are configured as [trusted publishers] on PyPI.
Both jobs require a maintainer to approve them from the workflow's **Summary** tab.
- Click the **Review deployments** button
- Select the environment (either `testpypi` or `pypi`)
- Click **Approve and deploy**
> **If the version already exists on PyPI, the publish jobs will fail.** PyPI only allows a given version to be published once - you cannot change it. If the version published on PyPI has a problem, you'll need to "fail forward" by bumping the app version and publishing a followup release.
##### Failing PyPI Publish
Check the [python infrastructure status page] for incidents.
If there are no incidents, contact @hipsterusername or @lstein, who have owner access to GH and PyPI, to see if access has expired or something like that.
#### `publish-testpypi` Job
Publishes the distribution on the [Test PyPI] index, using the `testpypi` GitHub environment.
This job is not required for the production PyPI publish, but included just in case you want to test the PyPI release.
If approved and successful, you could try out the test release like this:
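For instance, a hedged sketch using the standard Test PyPI index (the extra index lets dependencies resolve from production PyPI):

```bash
pip install --index-url https://test.pypi.org/simple/ --extra-index-url https://pypi.org/simple/ invokeai
```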
#### `publish-pypi` Job
Publishes the distribution on the production PyPI index, using the `pypi` GitHub environment.
## Publish the GitHub Release with installer
Once the release is published to PyPI, it's time to publish the GitHub release.
1. [Draft a new release] on GitHub, choosing the tag that triggered the release.
1. Write the release notes, describing important changes. The **Generate release notes** button automatically inserts the changelog and new contributors, and you can copy/paste the intro from previous releases.
1. Use `scripts/get_external_contributions.py` to get a list of external contributions to shout out in the release notes.
1. Upload the zip file created in the **`build-installer`** job into the Assets section of the release notes.
1. Check **Set as a pre-release** if it's a pre-release.
1. Check **Create a discussion for this release**.
1. Publish the release.
1. Announce the release in Discord.
> **TODO** Workflows can create a GitHub release from a template and upload release assets. One popular action to handle this is [ncipollo/release-action]. A future enhancement to the release process could set this up.
## Manual Build
The `build installer` workflow can be dispatched manually. This is useful to test the installer for a given branch or tag.
No checks are run, it just builds.
## Manual Release
The `release` workflow can be dispatched manually. You must dispatch the workflow from the right tag, else it will fail the version check.
This functionality is available as a fallback in case something goes wonky. Typically, releases should be triggered via tag push as described above.
Some model marketplaces require an API key to download models. You can provide a URL pattern and appropriate token in your `invokeai.yaml` file to provide that API key.
The pattern can be any valid regex (you may need to surround the pattern with quotes):
```yaml
remote_api_tokens:
  # Any URL containing `models.com` will automatically use `your_models_com_token`
  - url_regex: models.com
    token: your_models_com_token
  # Any URL matching this contrived regex will use `some_other_token`
  - url_regex: '^[a-z]{3}whatever.*\.com$'
    token: some_other_token
```
The provided token will be added as a `Bearer` token to the network requests to download the model files. As far as we know, this works for all model marketplaces that require authorization.
#### Model Hashing
Models are hashed during installation, providing a stable identifier for models across all platforms. Hashing is a one-time operation.
```yaml
hashing_algorithm: blake3_single  # default value
```
You might want to change this setting, depending on your system:
- `blake3_single` (default): Single-threaded - best for spinning HDDs, still OK for SSDs
- `blake3_multi`: Parallelized, memory-mapped implementation - best for SSDs, terrible for spinning disks
- `random`: Skip hashing entirely - fastest but of course no hash
During the first startup after upgrading to v4, all of your models will be hashed. This can take a few minutes.
Most common algorithms are supported, like `md5`, `sha256`, and `sha512`. These are typically much, much slower than either of the BLAKE3 variants.
#### Path Settings
These options set the paths of various directories and files used by InvokeAI. Any user-defined paths should be absolute paths.
#### Logging
Several different log handler destinations are available, and multiple destinations are supported by providing a list:
```yaml
log_handlers:
- console
- syslog=localhost
- file=/var/log/invokeai.log
```
- `console` is the default. It prints log messages to the command-line window from which InvokeAI was launched.
- `syslog` is only available on Linux and Macintosh systems. It uses the operating system's "syslog" facility to write log file entries locally or to a remote logging machine. `syslog` offers a variety of configuration options:
```yaml
syslog=/dev/log        - log to the /dev/log device
syslog=localhost       - log to the network logger running on the local machine
syslog=localhost:512   - same as above, but using a non-standard port
```
"error": "Traceback (most recent call last):\n File \"/home/lstein/Projects/InvokeAI/invokeai/app/services/download/download_default.py\", line 182, in _download_next_item\n self._do_download(job)\n File \"/home/lstein/Projects/InvokeAI/invokeai/app/services/download/download_default.py\", line 206, in _do_download\n raise HTTPError(resp.reason)\nrequests.exceptions.HTTPError: Not Found\n"
InvokeAI Nodes can be found in the `invokeai/app/invocations` directory. These can be used as examples to create your own nodes.
New nodes should be added to a subfolder in the `nodes` directory found at the root level of the InvokeAI installation location. Nodes added to this folder will be able to be used upon application startup.
Example `nodes` subfolder structure:
```
├── __init__.py  # Invoke-managed custom node loader
│
└── fancy_node.py
```
Each node folder must have an `__init__.py` file that imports its nodes. Only nodes imported in the `__init__.py` file are loaded. See the README in the nodes folder for more examples:
```py
from .cool_node import CoolInvocation
```
## Creating A New Invocation
In order to understand the process of creating a new Invocation, let us actually create one.
We use [mkdocs](https://www.mkdocs.org) for our documentation with the [material theme](https://squidfunk.github.io/mkdocs-material/). Documentation is written in markdown files under the `./docs` folder and then built into a static website for hosting with GitHub Pages at [invoke-ai.github.io/InvokeAI](https://invoke-ai.github.io/InvokeAI).
To contribute to the documentation you'll need to install the dependencies. Note
the use of `"`.
### Tests
See the [tests documentation](./TESTS.md) for information about running and writing tests.
### Reloading Changes
Experimenting with changes to the Python source code is a drag if you have to re-start the server every time your code changes. With hot reloading enabled (via the `jurigged` library), source changes are applied to the running server on the fly.
This will allow you to avoid restarting the server (and reloading models) in most cases, but there are some caveats; see
the [jurigged documentation](https://github.com/breuleux/jurigged#caveats) for details.
## Front End
<!--#TODO: get input from blessedcoolant here, for the moment inserted the frontend README via snippets extension.-->
Consult the VS Code documentation on remote development to get it set up.
Suggest using VSCode's included settings sync so that your remote dev host has all the same app settings and extensions automatically.
The UI is a fairly straightforward Typescript React app, with the Unified Canvas being more complex.
Code is located in `invokeai/frontend/web/` for review.
## Stack
State management is Redux via [Redux Toolkit](https://github.com/reduxjs/redux-toolkit). We lean heavily on RTK:
- `createAsyncThunk` for HTTP requests
- `createEntityAdapter` for fetching images and models
- `createListenerMiddleware` for workflows
The API client and associated types are generated from the OpenAPI schema. See API_CLIENT.md.
Communication with the server is a mix of HTTP and [socket.io](https://github.com/socketio/socket.io-client) (with a simple socket.io redux middleware to help).
[Chakra-UI](https://github.com/chakra-ui/chakra-ui) & [Mantine](https://github.com/mantinedev/mantine) for components and styling.
[Konva](https://github.com/konvajs/react-konva) for the canvas, but we are pushing the limits of what is feasible with it (and HTML canvas in general). We plan to rebuild it with [PixiJS](https://github.com/pixijs/pixijs) to take advantage of WebGL's improved raster handling.
[Vite](https://vitejs.dev/) for bundling.
Localisation is via [i18next](https://github.com/i18next/react-i18next), but translation happens on our [Weblate](https://hosted.weblate.org/engage/invokeai/) project. Only the English source strings should be changed on this repo.
## Contributing
Thanks for your interest in contributing to the InvokeAI Web UI!
We encourage you to ping @psychedelicious and @blessedcoolant on [Discord](https://discord.gg/ZmtBAhwWhy) if you want to contribute, just to touch base and ensure your work doesn't conflict with anything else going on. The project is very active.
### Dev Environment
**Setup**
1. Install [node](https://nodejs.org/en/download/). You can confirm node is installed with:
```bash
node --version
```
2. Install [yarn classic](https://classic.yarnpkg.com/lang/en/) and confirm it is installed by running this:
```bash
npm install --global yarn
yarn --version
```
From `invokeai/frontend/web/` run `yarn install` to get everything set up.
Start everything in dev mode:
1. Ensure your virtual environment is running
2. Start the dev server: `yarn dev`
3. Start the InvokeAI Nodes backend: `python scripts/invokeai-web.py # run from the repo root`
4. Point your browser to the dev server address e.g. [http://localhost:5173/](http://localhost:5173/)
### VSCode Remote Dev
We've noticed an intermittent issue with the VSCode Remote Dev port forwarding. If you use this feature of VSCode, you may intermittently click the Invoke button and then get nothing until the request times out. Suggest disabling the IDE's port forwarding feature and doing it manually via SSH:
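For example, a manual forward might look like this (a sketch; the user and hostname are placeholders, 9090 is Invoke's default API port, and 5173 is the Vite dev server port if you're running `yarn dev` remotely):

```bash
# Forward the InvokeAI API (9090) and the Vite dev server (5173) from the remote host
ssh -L 9090:localhost:9090 -L 5173:localhost:5173 user@remote-host
```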
If you are looking to help to with a code contribution, InvokeAI uses several different technologies under the hood: Python (Pydantic, FastAPI, diffusers) and Typescript (React, Redux Toolkit, ChakraUI, Mantine, Konva). Familiarity with StableDiffusion and image generation concepts is helpful, but not essential.
If you are looking to help with a code contribution, InvokeAI uses several different technologies under the hood: Python (Pydantic, FastAPI, diffusers) and Typescript (React, Redux Toolkit, ChakraUI, Mantine, Konva). Familiarity with StableDiffusion and image generation concepts is helpful, but not essential.
## **Get Started**
To get started, take a look at our [new contributors checklist](newContributorChecklist.md).
Once you're set up, you can review the documentation specific to your area of interest:
Documentation is an important part of any open source project. It provides a clear and concise way to communicate how the software works, how to use it, and how to troubleshoot issues. Without proper documentation, it can be difficult for users to understand the purpose and functionality of the project.
## Contributing
All documentation is maintained in our [GitHub repository](https://github.com/invoke-ai/InvokeAI). If you come across documentation that is out of date or incorrect, please submit a pull request with the necessary changes.
When updating or creating documentation, please keep in mind Invoke is a tool for everyone, not just those who have familiarity with generative art.
## Help & Questions
Please ping @hipsterusername on [Discord](https://discord.gg/ZmtBAhwWhy) if you have any questions.
If you're a new contributor to InvokeAI or Open Source Projects, this is the guide for you.
## New Contributor Checklist
- [x] Set up your local development environment & fork of InvokAI by following [the steps outlined here](../../installation/020_INSTALL_MANUAL.md#developer-install)
- [x] Set up your local tooling with [this guide](InvokeAI/contributing/LOCAL_DEVELOPMENT/#developing-invokeai-in-vscode). Feel free to skip this step if you already have tooling you're comfortable with.
- [x] Set up your local development environment & fork of InvokAI by following [the steps outlined here](../dev-environment.md)
- [x] Set up your local tooling with [this guide](../LOCAL_DEVELOPMENT.md). Feel free to skip this step if you already have tooling you're comfortable with.
- [x] Familiarize yourself with [Git](https://www.atlassian.com/git) & our project structure by reading through the [development documentation](development.md)
- [x] Join the [#dev-chat](https://discord.com/channels/1020123559063990373/1049495067846524939) channel of the Discord
- [x] Choose an issue to work on! This can be achieved by asking in the #dev-chat channel, tackling a [good first issue](https://github.com/invoke-ai/InvokeAI/contribute) or finding an item on the [roadmap](https://github.com/orgs/invoke-ai/projects/7). If nothing in any of those places catches your eye, feel free to work on something of interest to you!
- [x] Make your first Pull Request with the guide below
- [x] Happy development! Don't be afraid to ask for help - we're happy to help you contribute!
## How do I make a contribution?
Never made an open source contribution before? Wondering how contributions work in our project? Here's a quick rundown!
Before starting these steps, ensure you have your local environment [configured for development](../LOCAL_DEVELOPMENT.md).
1. Find a [good first issue](https://github.com/invoke-ai/InvokeAI/contribute) that you are interested in addressing or a feature that you would like to add. Then, reach out to our team in the [#dev-chat](https://discord.com/channels/1020123559063990373/1049495067846524939) channel of the Discord to ensure you are set up for success.
2. Fork the [InvokeAI](https://github.com/invoke-ai/InvokeAI) repository to your GitHub profile. This means that you will have a copy of the repository under **your-GitHub-username/InvokeAI**.
3. Clone the repository to your local machine using:
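A typical clone command is shown below (replace `your-GitHub-username` with your own GitHub username, per the fork created in the previous step):

```bash
git clone https://github.com/your-GitHub-username/InvokeAI.git
```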
If you're unfamiliar with using Git through the command line, [GitHub Desktop](https://desktop.github.com) is an easy-to-use alternative with a UI. You can do all the same steps listed here, but through the interface.
4. Create a new branch for your fix using:
```bash
git checkout -b branch-name-here
```
5. Make the appropriate changes for the issue you are trying to address or the feature that you want to add.
6. Add the file contents of the changed files to the "snapshot" git uses to manage the state of the project, also known as the index:
```bash
git add -A
```
7. Store the contents of the index with a descriptive message.
```bash
git commit -m "Insert a short message of the changes made here"
```
8. Push the changes to the remote repository using
```bash
git push origin branch-name-here
```
9. Submit a pull request to the **main** branch of the InvokeAI repository. If you're not sure how to, [follow this guide](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/creating-a-pull-request)
10. Title the pull request with a short description of the changes made and the issue or bug number associated with your change. For example, you can title a pull request like so: "Added more log outputting to resolve #1234".
11. In the description of the pull request, explain the changes that you made, any issues you think exist with the pull request you made, and any questions you have for the maintainer. It's OK if your pull request is not perfect (no pull request is); the reviewer will be able to help you fix any problems and improve it!
13. Make changes to the pull request if the reviewer(s) recommend them.
14. Celebrate your success after your pull request is merged!
If you’d like to learn more about contributing to Open Source projects, here is a [Getting Started Guide](https://opensource.com/article/19/7/create-pull-request-github).
## Best Practices
- Keep your pull requests small. Smaller pull requests are more likely to be accepted and merged
- Comments! Commenting your code helps reviewers easily understand your contribution
- Use Python and Typescript’s typing systems, and consider using an editor with [LSP](https://microsoft.github.io/language-server-protocol/) support to streamline development
- Make all communications public. This ensures knowledge is shared with the whole community
## **Where can I go for help?**
If you need help, you can ask questions in the [#dev-chat](https://discord.com/channels/1020123559063990373/1049495067846524939) channel of the Discord.
For frontend related work, **@psychedelicious** is the best person to reach out to.
For backend related work, please reach out to **@blessedcoolant**, **@lstein**, **@StAlKeR7779** or **@psychedelicious**.
Tutorials help new & existing users expand their abilty to use InvokeAI to the full extent of our features and services.
Tutorials help new & existing users expand their ability to use InvokeAI to the full extent of our features and services.
Currently, we have a set of tutorials available on our [YouTube channel](https://www.youtube.com/@invokeai), but as InvokeAI continues to evolve with new updates, we want to ensure that we are giving our users the resources they need to succeed.
Tutorials can be in the form of videos or article walkthroughs on a subject of your choosing.
## Contributing
Please reach out to @imic or @hipsterusername on [Discord](https://discord.gg/ZmtBAhwWhy) to help create tutorials for InvokeAI.
To make changes to Invoke's backend, frontend, or documentation, you'll need to set up a dev environment.
If you just want to use Invoke, you should use the [installer][installer link].
!!! info "Why do I need the frontend toolchain?"
The repo doesn't contain a build of the frontend. You'll be responsible for rebuilding it every time you pull in new changes, or run it in dev mode (which incurs a substantial performance penalty).
!!! warning
Invoke uses a SQLite database. When you run the application as a dev install, you accept responsibility for your database. This means making regular backups (especially before pulling) and/or fixing it yourself in the event that a PR introduces a schema change.
If you don't need to persist your db, you can use an ephemeral in-memory database by setting `use_memory_db: true` in your `invokeai.yaml` file. You'll also want to set `scan_models_on_startup: true` so that your models are registered on startup.
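For example, an ephemeral dev setup using the two settings named above might look like this (a minimal sketch of the relevant `invokeai.yaml` keys):

```yaml
# invokeai.yaml - ephemeral in-memory database for dev work
use_memory_db: true
scan_models_on_startup: true
```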
## Setup
1. Run through the [requirements][requirements link].
2. [Fork and clone][forking link] the [InvokeAI repo][repo link].
3. Create a directory for user data (images, models, db, etc). This is typically at `~/invokeai`, but if you already have a non-dev install, you may want to create a separate directory for the dev install.
4. Create a python virtual environment inside the directory you just created:
```sh
python3 -m venv .venv --prompt InvokeAI-Dev
```
5. Activate the venv (you'll need to do this every time you want to run the app):
```sh
source .venv/bin/activate
```
6. Install the repo as an [editable install][editable install link]:
Refer to the [manual installation][manual install link] instructions for help determining the correct install options. `xformers` is optional, but `dev` and `test` are not.
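A likely command is shown below (a sketch, assuming the extras named above are defined in `pyproject.toml`; the quotes matter in some shells):

```sh
pip install -e ".[dev,test]"
# or, to include the optional xformers extra:
pip install -e ".[dev,test,xformers]"
```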
Thanks for your interest in contributing to the Invoke Web UI!
Please follow these guidelines when contributing.
## Check in before investing your time
Please check in before you invest your time on anything besides a trivial fix, in case it conflicts with ongoing work or isn't aligned with the vision for the app.
If a feature request or issue doesn't already exist for the thing you want to work on, please create one.
Ping `@psychedelicious` on [discord] in the `#frontend-dev` channel or in the feature request / issue you want to work on - we're happy to chat.
## Code conventions
- This is a fairly complex app with a deep component tree. Please use memoization (`useCallback`, `useMemo`, `memo`) with enthusiasm.
- If you need to add some global, ephemeral state, please use [nanostores] if possible.
- Be careful with your redux selectors. If they need to be parameterized, consider creating them inside a `useMemo`.
- Feel free to use `lodash` (via `lodash-es`) to make the intent of your code clear.
- Please add comments describing the "why", not the "how" (unless it is really arcane).
## Commit format
Please use the [conventional commits] spec for the web UI, with a scope of "ui":
- `chore(ui): bump deps`
- `chore(ui): lint`
- `feat(ui): add some cool new feature`
- `fix(ui): fix some bug`
## Tests
We don't do any UI testing at this time, but consider adding tests for sensitive logic.
We use `vitest`, and tests should be next to the file they are testing. If the logic is in `something.ts`, the tests should be in `something.test.ts`.
In some situations, we may want to test types. For example, if you use `zod` to create a schema that should match a generated type, it's best to add a test to confirm that the types match. Use `tsafe`'s assert for this.
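For example, a type-level test might look like this (a sketch; the schema and type names are hypothetical, while `assert` and `Equals` come from `tsafe`):

```ts
import { z } from 'zod';
import { assert } from 'tsafe';
import type { Equals } from 'tsafe';

// Hypothetical generated type (in practice this would come from the OpenAPI-generated types)
type MyGeneratedType = { id: string; count: number };

// Hypothetical zod schema that should stay in sync with the generated type
const zMySchema = z.object({ id: z.string(), count: z.number() });
type MySchema = z.infer<typeof zMySchema>;

// Compile-time check: this fails to type-check if the two types drift apart
assert<Equals<MySchema, MyGeneratedType>>();
```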
## Submitting a PR
- Ensure your branch is tidy. Use an interactive rebase to clean up the commit history and reword the commit messages if they are not descriptive.
- Run `pnpm lint`. Some issues are auto-fixable with `pnpm fix`.
- Fill out the PR form when creating the PR.
- It doesn't need to be super detailed, but a screenshot or video is nice if you changed something visually.
```ts
// Outside a component, or within a callback for performance-critical logic
$myStringOption.get();
$myStringOption.set('new value');

// Inside a component
const myStringOption = useStore($myStringOption);
```
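For reference, defining such a store is typically a one-liner (a minimal sketch; `$myStringOption` matches the example above and the initial value is illustrative):

```ts
import { atom } from 'nanostores';

// `useStore` in the snippet above comes from the React binding:
// import { useStore } from '@nanostores/react';

// A simple string store; the `$` prefix is the nanostores naming convention
export const $myStringOption = atom<string>('initial value');
```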
### Where to put nanostores
- For global application state, export your stores from `invokeai/frontend/web/src/app/store/nanostores/`.
- For feature state, create a file for the stores next to the redux slice definition (e.g. `invokeai/frontend/web/src/features/myFeature/myFeatureNanostores.ts`).
- For hooks with global state, export the store from the same file the hook is in, or put it next to the hook.
### When to use nanostores
- For non-serializable data that needs to be available throughout the app, use `nanostores` instead of a global.
- For ephemeral global state (i.e. state that does not need to be persisted), use `nanostores` instead of redux.
- For performance-critical code and in callbacks, redux selectors can be problematic due to the declarative reactivity system. Consider refactoring to use `nanostores` if there's a **measurable** performance issue.
- [Building Nodes and Edges](#building-nodes-and-edges)
- [Building a Workflow](#building-a-workflow)
- [Loading a Workflow](#loading-a-workflow)
- [Workflow Migrations](#workflow-migrations)
<!-- /code_chunk_output -->
> This document describes, at a high level, the design and implementation of workflows in the InvokeAI frontend. There are a substantial number of implementation details not included, but which are hopefully clear from the code.
InvokeAI's backend uses graphs, composed of **nodes** and **edges**, to process data and generate images.
Stateless fields do not store their value in the node, so their field instances do not have a value.
"Custom" fields will always be treated as stateless fields.
##### Single and Collection Fields
Field types have a name and cardinality property which may identify it as a **SINGLE**, **COLLECTION** or **SINGLE_OR_COLLECTION** field.
- If a field is annotated in python as a singular value or class, its field type is parsed as a **SINGLE** type (e.g. `int`, `ImageField`, `str`).
- If a field is annotated in python as a list, its field type is parsed as a **COLLECTION** type (e.g. `list[int]`).
- If it is annotated as a union of a type and list, the type will be parsed as a **SINGLE_OR_COLLECTION** type (e.g. `Union[int, list[int]]`). Fields may not be unions of different types (e.g. `Union[int, list[str]]` and `Union[int, str]` are not allowed).
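For example, the three cardinalities might be annotated like this (a sketch; the class and field names are illustrative, not a real invocation):

```py
from typing import Union

from pydantic import BaseModel

from invokeai.app.invocations.primitives import ImageField


class ExampleFields(BaseModel):
    """Illustrative only - shows how each cardinality is annotated."""

    width: int  # SINGLE
    image: ImageField  # SINGLE
    images: list[ImageField]  # COLLECTION
    image_or_images: Union[ImageField, list[ImageField]]  # SINGLE_OR_COLLECTION
```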
## Implementation
Field types are represented as structured objects:
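A sketch of that shape in TypeScript, based on the name and cardinality properties described above (exact property names in the codebase may differ):

```ts
// Sketch of a field type object; see the workflows code for the authoritative definition
type FieldType = {
  name: string; // e.g. 'StringField', 'ImageField'
  cardinality: 'SINGLE' | 'COLLECTION' | 'SINGLE_OR_COLLECTION';
};
```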
There are 4 general cases for field type parsing.
When a field is annotated as a primitive value (e.g. `int`, `str`, `float`), the field type parsing is fairly straightforward. The field is represented by a simple OpenAPI **schema object**, which has a `type` property.
We create a field type name from this `type` string (e.g. `string` -> `StringField`). The cardinality is `"SINGLE"`.
##### Complex Types
We need to **dereference** the schema to pull these out. Dereferencing may require several steps, since references can point to other references.
When a field is annotated as a list of a single type, the schema object has an `items` property. It may be a schema object or reference object and must be parsed to determine the item type.
We use the item type for field type name. The cardinality is `"COLLECTION"`.
##### Single or Collection Types
When a field is annotated as a union of a type and list of that type, the schema object has an `anyOf` property, which holds a list of valid types for the union.
After verifying that the union has two members (a type and list of the same type), we use the type for field type name, with cardinality `"SINGLE_OR_COLLECTION"`.
##### Optional Fields
Migration logic is in [migrations.ts].
Invoke originated as a project built by the community, and that vision carries forward today as we aim to build the best pro-grade tools available. We work together to incorporate the latest in AI/ML research, making these tools available in over 20 languages to artists and creatives around the world as part of our fully permissive OSS project designed for individual users to self-host and use.
# Methods of Contributing to Invoke AI
We welcome contributions, whether features, bug fixes, code cleanup, testing, code reviews, documentation or translation. Please check in with us before diving in to code to ensure your work aligns with our vision.
## Development
If you’d like to help with development, please see our [development guide](contribution_guides/development.md).
**New Contributors:** If you’re unfamiliar with contributing to open source projects, take a look at our [new contributor guide](contribution_guides/newContributorChecklist.md).
## Nodes
If you’d like to add a Node, please see our [nodes contribution guide](../nodes/contributingNodes.md).
## Support and Triaging
Helping support other users in [Discord](https://discord.gg/ZmtBAhwWhy) and on GitHub are valuable forms of contribution that we greatly appreciate.
We receive many issues and requests for help from users. We're limited in bandwidth relative to the user base, so providing answers to questions or helping identify the causes of issues is very helpful. By doing this, you enable us to spend time on the highest priority work.
## Documentation
If you’d like to help with documentation, please see our [documentation guide](contribution_guides/documentation.md).
## Translation
If you'd like to help with translation, please see our [translation guide](contribution_guides/translation.md).
## Tutorials
Please reach out to @hipsterusername on [Discord](https://discord.gg/ZmtBAhwWhy) to help create tutorials for InvokeAI.
We hope you enjoy using our software as much as we enjoy creating it, and we hope that some of those of you who are reading this will elect to become part of our contributor community.
## Contributors
This project is a combined effort of dedicated people from across the world. [Check out the list of all these amazing people](https://invoke-ai.github.io/InvokeAI/other/CONTRIBUTORS/). We thank them for their time, hard work and effort.
## Code of Conduct
The InvokeAI community is a welcoming place, and we want your help in maintaining that. Please review our [Code of Conduct](https://github.com/invoke-ai/InvokeAI/blob/main/CODE_OF_CONDUCT.md) to learn more - it's essential to maintaining a respectful and inclusive environment.
This disclaimer is not a license and does not grant any rights or permissions. You must obtain necessary permissions and licenses, including from third parties, before contributing to this project.
This disclaimer is provided "as is" without warranty of any kind, whether expressed or implied, including but not limited to the warranties of merchantability, fitness for a particular purpose, or non-infringement. In no event shall the authors or copyright holders be liable for any claim, damages, or other liability, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the contribution or the use or other dealings in the contribution.
# Support
For support, please use this repository's [GitHub Issues](https://github.com/invoke-ai/InvokeAI/issues), or join the [Discord](https://discord.gg/ZmtBAhwWhy).
Original portions of the software are Copyright (c) 2023 by respective contributors.
---
Remember, your contributions help make this project great. We're excited to see what you'll bring to our community!
| `--prompt_as_dir` | `-p` | `False` | Name output directories using the prompt text. |
| `--from_file <path>` | | `None` | Read list of prompts from a file. Use `-` to read from standard input |
| `--model <modelname>` | | `stable-diffusion-1.5` | Loads the initial model specified in configs/models.yaml. |
| `--ckpt_convert ` | | `False` | If provided both .ckpt and .safetensors files will be auto-converted into diffusers format in memory |
| `--autoconvert <path>` | | `None` | On startup, scan the indicated directory for new .ckpt/.safetensor files and automatically convert and import them |
| `--precision` | | `fp16` | Provide `fp32` for full precision mode, `fp16` for half-precision. `fp32` needed for Macintoshes and some NVidia cards. |
| `--png_compression <0-9>` | `-z<0-9>` | `6` | Select level of compression for output files, from 0 (no compression) to 9 (max compression) |
| `--safety-checker` | | `False` | Activate safety checker for NSFW and other potentially disturbing imagery |
| `--height <int>` | `-H<int>` | `512` | Height of generated image | `--steps <int>` | `-s<int>` | `50` | How many steps of refinement to apply |
| `--strength <float>` | `-s<float>` | `0.75` | For img2img: how hard to try to match the prompt to the initial image. Ranges from 0.0-0.99, with higher values replacing the initial image completely. |
| `--fit` | `-F` | `False` | For img2img: scale the init image to fit into the specified -H and -W dimensions |
| `--grid` | `-g` | `False` | Save all image series as a grid rather than individually. |
| `--sampler <sampler>` | `-A<sampler>` | `k_lms` | Sampler to use. Use `-h` to get list of available samplers. |
| `--seamless` | | `False` | Create interesting effects by tiling elements of the image. |
| `--embedding_path <path>` | | `None` | Path to pre-trained embedding manager checkpoints, for custom models |
| `--gfpgan_model_path` | | `experiments/pretrained_models/GFPGANv1.4.pth` | Path to GFPGAN model file. |
| `--free_gpu_mem` | | `False` | Free GPU memory after sampling, to allow image decoding and saving in low VRAM conditions |
| `--precision` | | `auto` | Set model precision, default is selected by device. Options: auto, float32, float16, autocast |
!!! warning "These arguments are deprecated but still work"
| `--iterations <int>` | `-n<int>` | `1` | How many images to generate from this prompt |
| `--steps <int>` | `-s<int>` | `50` | How many steps of refinement to apply |
| `--cfg_scale <float>` | `-C<float>` | `7.5` | How hard to try to match the prompt to the generated image; any number greater than 1.0 works, but the useful range is roughly 5.0 to 20.0 |
| `--seed <int>` | `-S<int>` | `None` | Set the random seed for the next series of images. This can be used to recreate an image generated previously. |
| `--sampler <sampler>` | `-A<sampler>` | `k_lms` | Sampler to use. Use -h to get list of available samplers. |
| `--karras_max <int>` | | `29` | When using k\_\* samplers, set the maximum number of steps before shifting from using the Karras noise schedule (good for low step counts) to the LatentDiffusion noise schedule (good for high step counts) This value is sticky. [29] |
| `--hires_fix` | | | Larger images often have duplication artefacts. This option suppresses duplicates by generating the image at low res, and then using img2img to increase the resolution |
| `--png_compression <0-9>` | `-z<0-9>` | `6` | Select level of compression for output files, from 0 (no compression) to 9 (max compression) |
| `--grid` | `-g` | `False` | Turn on grid mode to return a single image combining all the images generated by this prompt |
| `--individual` | `-i` | `True` | Turn off grid mode (deprecated; leave off --grid instead) |
| `--outdir <path>` | `-o<path>` | `outputs/img_samples` | Temporarily change the location of these images |
| `--seamless_axes` | | `x,y` | Specify which axes to use circular convolution on. |
| `--log_tokenization` | `-t` | `False` | Display a color-coded list of the parsed tokens derived from the prompt |
| `--skip_normalization` | `-x` | `False` | Weighted subprompts will not be normalized. See [Weighted Prompts](../features/OTHER.md#weighted-prompts) |
| `--upscale <int> <float>` | `-U <int> <float>` | `-U 1 0.75` | Upscale image by magnification factor (2, 4), and set strength of upscaling (0.0-1.0). If strength not set, will default to 0.75. |
| `--facetool_strength <float>` | `-G <float> ` | `-G0` | Fix faces (defaults to using the GFPGAN algorithm); argument indicates how hard the algorithm should try (0.0-1.0) |
| `--facetool <name>` | `-ft <name>` | `-ft gfpgan` | Select face restoration algorithm to use: gfpgan, codeformer |
| `--codeformer_fidelity` | `-cf <float>` | `0.75` | Used along with CodeFormer. Takes values between 0 and 1. 0 produces high quality but low accuracy. 1 produces high accuracy but low quality |
| `--save_original` | `-save_orig` | `False` | When upscaling or fixing faces, this will cause the original image to be saved rather than replaced. |
| `--variation <float>` | `-v<float>` | `0.0` | Add a bit of noise (0.0=none, 1.0=high) to the image in order to generate a series of variations. Usually used in combination with `-S<seed>` and `-n<int>` to generate a series a riffs on a starting image. See [Variations](VARIATIONS.md). |
| `--with_variations <pattern>` | | `None` | Combine two or more variations. See [Variations](VARIATIONS.md) for now to use this. |
| `--save_intermediates <n>` | | `None` | Save the image from every nth step into an "intermediates" folder inside the output directory |
| `--h_symmetry_time_pct <float>` | | `None` | Create symmetry along the X axis at the desired percent complete of the generation process. (Must be between 0.0 and 1.0; set to a very small number like 0.0001 for just after the first step of generation.) |
| `--v_symmetry_time_pct <float>` | | `None` | Create symmetry along the Y axis at the desired percent complete of the generation process. (Must be between 0.0 and 1.0; set to a very small number like 0.0001 for just after the first step of generation.) |
!!! note
the width and height of the image must be multiples of 64. You can
provide different values, but they will be rounded down to the nearest multiple
of 64.
!!! example "This is a example of img2img"
```bash
invoke> waterfall and rainbow -I./vacation-photo.png -W640 -H480 --fit
```
This will modify the indicated vacation photograph by making it more like the
prompt. Results will vary greatly depending on what is in the image. We also ask
it to `--fit` the image into a box no bigger than 640x480. Otherwise the image size
will be identical to the provided photo and you may run out of memory if it is
large.
In addition to the command-line options recognized by txt2img, img2img accepts the following additional options:
| `--init_img <path>` | `-I<path>` | `None` | Path to the initialization image |
| `--fit` | `-F` | `False` | Scale the image to fit into the specified -H and -W dimensions |
| `--strength <float>` | `-s<float>` | `0.75` | How hard to try to match the prompt to the initial image. Ranges from 0.0-0.99, with higher values replacing the initial image completely. |
### inpainting
!!! example ""
```bash
invoke> waterfall and rainbow -I./vacation-photo.png -M./vacation-mask.png -W640 -H480 --fit
```
This will do the same thing as img2img, but image alterations will
only occur within transparent areas defined by the mask file specified
by `-M`. You may also supply just a single initial image with the areas
to overpaint made transparent, but you must be careful not to destroy
the pixels underneath when you create the transparent areas. See
[Inpainting](INPAINTING.md) for details.
inpainting accepts all the arguments used for txt2img and img2img, as well as the `-M` mask argument shown above.
InvokeAI uses [Weblate](https://weblate.org) for translation. Weblate is a FOSS project providing a scalable translation service. Weblate automates the tedious parts of managing translation of a growing project, and the service is generously provided at no cost to FOSS projects like InvokeAI.
## Contributing
If you'd like to contribute by adding or updating a translation, please visit our [Weblate project](https://hosted.weblate.org/engage/invokeai/). You'll need to sign in with your GitHub account (a number of other accounts are supported, including Google).
Once signed in, select a language and then the Web UI component. From here you can Browse and Translate strings from English to your chosen language. Zen mode offers a simpler translation experience.
Your changes will be attributed to you in the automated PR process; you don't need to do anything else.
## Help & Questions
Please check Weblate's [documentation](https://docs.weblate.org/en/latest/index.html) or ping @psychedelicious or @blessedcoolant on Discord if you have any questions.
## Thanks
Thanks to the InvokeAI community for their efforts to translate the project!
Many issues can be resolved by re-installing the application. You won't lose any data by re-installing. We suggest downloading the [latest release](https://github.com/invoke-ai/InvokeAI/releases/latest) and using it to re-install the application. Consult the [installer guide](./installation/installer.md) for more information.
When you run the installer, you'll have an option to select the version to install. If you aren't ready to upgrade, you can choose the current version to fix a broken install.
If the troubleshooting steps on this page don't get you up and running, please either [create an issue] or hop on [discord] for help.
## How to Install
You can download the latest installers [here](https://github.com/invoke-ai/InvokeAI/releases).
Note that any releases marked as _pre-release_ are in a beta state. You may experience some issues, but we appreciate your help testing those! For stable/reliable installations, please install the [latest release].
## Downloading models and using existing models
The Model Manager tab in the UI provides a few ways to install models, including using your already-downloaded models. You'll see a popup directing you there on first startup. For more information, see the [model install docs].
## Missing models after updating to v4
If you find some models are missing after updating to v4, it's likely they weren't correctly registered before the update and didn't get picked up in the migration.
You can use the `Scan Folder` tab in the Model Manager UI to fix this. The models will either be in the old, now-unused `autoimport` folder, or your `models` folder.
- Find and copy your install's old `autoimport` folder path, located inside the main install folder.
- Go to the Model Manager and click `Scan Folder`.
- Paste the path and scan.
- IMPORTANT: Uncheck `Inplace install`.
- Click `Install All` to install all found models, or just install the models you want.
Next, find and copy your install's `models` folder path (this could be your custom models folder path, or the `models` folder inside the main install folder).
Follow the same steps to scan and import the missing models.
## Slow generation
- Check the [system requirements] to ensure that your system is capable of generating images.
- Check the `ram` setting in `invokeai.yaml`. This setting tells Invoke how much of your system RAM can be used to cache models. Having this too high or too low can slow things down. That said, it's generally safest to not set this at all and instead let Invoke manage it.
- Check the `vram` setting in `invokeai.yaml`. This setting tells Invoke how much of your GPU VRAM can be used to cache models. Counter-intuitively, if this setting is too high, Invoke will need to do a lot of shuffling of models as it juggles the VRAM cache and the currently-loaded model. The default value of 0.25 generally works well for GPUs with less than 16GB of VRAM. Even on a 24GB card, the default works well. (See the example snippet after this list.)
- Check that your generations are happening on your GPU (if you have one). InvokeAI will log what is being used for generation upon startup. If your GPU isn't used, re-install to ensure the correct versions of torch get installed.
- If you are on Windows, you may have exceeded your GPU's VRAM capacity and are using slower [shared GPU memory](#shared-gpu-memory-windows). There's a guide to opt out of this behaviour in the linked FAQ entry.
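For reference, a minimal sketch of the cache settings in `invokeai.yaml` (the values are illustrative; omit them to let Invoke manage the caches):

```yaml
# Model cache sizes, in GB
ram: 12    # system RAM available for caching models
vram: 0.25 # GPU VRAM reserved for the model cache (the default)
```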
## Shared GPU Memory (Windows)
!!! tip "Nvidia GPUs with driver 536.40"
This only applies to current Nvidia cards with driver 536.40 or later, released in June 2023.
When the GPU doesn't have enough VRAM for a task, Windows is able to allocate some of its CPU RAM to the GPU. This is much slower than VRAM, but it does allow the system to generate when it otherwise might not have enough VRAM.
When shared GPU memory is used, generation slows down dramatically - but at least it doesn't crash.
If you'd like to opt out of this behavior and instead get an error when you exceed your GPU's VRAM, follow [this guide from Nvidia](https://nvidia.custhelp.com/app/answers/detail/a_id/5490).
Here's how to get the python path required in the linked guide:
- Run `invoke.bat`.
- Select option 2 for developer console.
- At least one python path will be printed. Copy the path that includes your invoke installation directory (typically the first).
## Installer cannot find python (Windows)
Ensure that you checked **Add python.exe to PATH** when installing Python. This can be found at the bottom of the Python Installer window. If you already have Python installed, you can re-run the python installer, choose the Modify option and check the box.
## Triton error on startup
This can be safely ignored. InvokeAI doesn't use Triton, but if you are on Linux and wish to dismiss the error, you can install Triton.
## Updated to 3.4.0 and xformers can’t load C++/CUDA
An issue occurred with your PyTorch update. Follow these steps to fix it:
1. Launch your invoke.bat / invoke.sh and select the option to open the developer console
- If you run into an error with `typing_extensions`, re-open the developer console and run: `pip install -U typing-extensions`
Note that v3.4.0 is an old, unsupported version. Please upgrade to the [latest release].
## Install failed and says `pip` is out of date
An out of date `pip` typically won't cause an installation to fail. The cause of the error can likely be found above the message that says `pip` is out of date.
If you saw that warning but the install went well, don't worry about it (but you can update `pip` afterwards if you'd like).
## Replicate image found online
Most example images with prompts that you'll find on the internet have been generated using different software, so you can't expect to get identical results. In order to reproduce an image, you need to replicate the exact settings and processing steps, including (but not limited to) the model, the positive and negative prompts, the seed, the sampler, the exact image size, any upscaling steps, etc.
## OSErrors on Windows while installing dependencies
During a zip file installation or an update, installation stops with an error like this:
- Paste your access token when prompted and press Enter. You won't see anything when you paste it.
- Type `n` if prompted about git credentials.
If you get an error, try the command again - maybe the token didn't paste correctly.
Once your token is set, start Invoke and try downloading the model again. The installer will automatically use the access token.
If the install still fails, you may not have access to the model.
## Stable Diffusion XL generation fails after trying to load UNet
InvokeAI is working in other respects, but when trying to generate
images with Stable Diffusion XL you get a "Server Error". The text log
in the launch window contains this log line above several more lines of
error messages:
`INFO --> Loading model:D:\LONG\PATH\TO\MODEL, type sdxl:main:unet`
This failure mode occurs when there is a network glitch during
downloading the very large SDXL model.
To address this, first go to the Model Manager and delete the
Stable-Diffusion-XL-base-1.X model. Then, click the HuggingFace tab,
paste the Repo ID stabilityai/stable-diffusion-xl-base-1.0 and install
the model.
## Package dependency conflicts during installation or update
If you have previously installed InvokeAI or another Stable Diffusion
package, the installer may occasionally pick up outdated libraries and
either the installer or `invoke` will fail with complaints about
library conflicts.
To resolve this, re-install the application as described above.
## Invalid configuration file
Everything seems to install ok, you get a `ValidationError` when starting up the app.
This is caused by an invalid setting in the `invokeai.yaml` configuration file. The error message should tell you what is wrong.
Check the [configuration docs] for more detail about the settings and how to specify them.
## `ModuleNotFoundError: No module named 'controlnet_aux'`
`controlnet_aux` is a dependency of Invoke and appears to have been packaged or distributed strangely. Sometimes, it doesn't install correctly. This is outside our control.
If you encounter this error, the solution is to remove the package from the `pip` cache and re-run the Invoke installer so a fresh, working version of `controlnet_aux` can be downloaded and installed:
- Run the Invoke launcher
- Choose the developer console option
- Run this command: `pip cache remove controlnet_aux`
- Close the terminal window
- Download and run the [installer][latest release], selecting your current install location
## Out of Memory Issues
The models are large, VRAM is expensive, and you may find yourself
faced with Out of Memory errors when generating images. Here are some
tips to reduce the problem:
!!! info "Optimizing for GPU VRAM"
=== "4GB VRAM GPU"
This should be adequate for 512x512 pixel images using Stable Diffusion 1.5
and derived models, provided that you do not use the NSFW checker. It won't be loaded unless you go into the UI settings and turn it on.
If you are on a CUDA-enabled GPU, we will automatically use xformers or torch-sdp to reduce VRAM requirements, though you can explicitly configure this. See the [configuration docs].
=== "6GB VRAM GPU"
This is a border case. Using the SD 1.5 series you should be able to
generate images up to 640x640 with the NSFW checker enabled, and up to
1024x1024 with it disabled.
If you run into persistent memory issues there are a series of
environment variables that you can set before launching InvokeAI that
alter how the PyTorch machine learning library manages memory. See
<https://pytorch.org/docs/stable/notes/cuda.html#memory-management> for
a list of these tweaks.
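For example, one commonly used tweak from that page caps the size of split memory blocks to reduce fragmentation (the value is illustrative and may or may not help on your system):

```bash
# Set before launching InvokeAI
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512
```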
=== "12GB VRAM GPU"
This should be sufficient to generate larger images up to about 1280x1280.
## Checkpoint Models Load Slowly or Use Too Much RAM
The difference between diffusers models (a folder containing multiple
subfolders) and checkpoint models (a file ending with .safetensors or
.ckpt) is that InvokeAI is able to load diffusers models into memory
incrementally, while checkpoint models must be loaded all at
once. With very large models, or systems with limited RAM, you may
experience slowdowns and other memory-related issues when loading
checkpoint models.
To solve this, go to the Model Manager tab (the cube), select the
checkpoint model that's giving you trouble, and press the "Convert"
button in the upper right of your browser window. This will convert the
checkpoint into a diffusers model, after which loading should be
faster and less memory-intensive.
## Memory Leak (Linux)
If you notice a memory leak, it could be caused by memory fragmentation as models are loaded and/or moved from CPU to GPU.
A workaround is to tune memory allocation with an environment variable:
```bash
# Force blocks >1MB to be allocated with `mmap` so that they are released to the system immediately when they are freed.
MALLOC_MMAP_THRESHOLD_=1048576
```
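For example, you can set the variable just for the InvokeAI process when launching it (here `invokeai-web` stands in for however you normally start the server): `MALLOC_MMAP_THRESHOLD_=1048576 invokeai-web`.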
!!! warning "Speed vs Memory Tradeoff"
Your generations may be slower overall when setting this environment variable.
!!! info "Possibly dependent on `libc` implementation"
It's not known if this issue occurs with other `libc` implementations such as `musl`.
If you encounter this issue and your system uses a different implementation, please try this environment variable and let us know if it fixes the issue.
<h3>Detailed Discussion</h3>
Python (and PyTorch) relies on the memory allocator from the C Standard Library (`libc`). On linux, with the GNU C Standard Library implementation (`glibc`), our memory access patterns have been observed to cause severe memory fragmentation.
This fragmentation results in large amounts of memory that has been freed but can't be released back to the OS. Loading models from disk and moving them between CPU/CUDA seem to be the operations that contribute most to the fragmentation.
This memory fragmentation issue can result in OOM crashes during frequent model switching, even if `ram` (the max RAM cache size) is set to a reasonable value (e.g. a OOM crash with `ram=16` on a system with 32GB of RAM).
This problem may also exist on other OSes, and other `libc` implementations. But, at the time of writing, it has only been investigated on linux with `glibc`.
To better understand how the `glibc` memory allocator works, see these references:
Note the differences between memory allocated as chunks in an arena vs. memory allocated with `mmap`. Under `glibc`'s default configuration, most model tensors get allocated as chunks in an arena making them vulnerable to the problem of fragmentation.
The configuration settings are divided into several distinct groups in `invokeai.yaml`:
### Web Server
| Setting | Default Value | Description |
|----------|----------------|--------------|
| `host` | `localhost` | Name or IP address of the network interface that the web server will listen on |
| `port` | `9090` | Network port number that the web server will listen on |
| `allow_origins` | `[]` | A list of host names or IP addresses that are allowed to connect to the InvokeAI API in the format `['host1','host2',...]` |
| `allow_credentials` | `true` | Require credentials for a foreign host to access the InvokeAI API (don't change this) |
| `allow_methods` | `*` | List of HTTP methods ("GET", "POST") that the web server is allowed to use when accessing the API |
| `allow_headers` | `*` | List of HTTP headers that the web server will accept when accessing the API |
The documentation for InvokeAI's API can be accessed by browsing to [http://localhost:9090/docs](http://localhost:9090/docs).
### Features
These configuration settings allow you to enable and disable various InvokeAI features:
| Setting | Default Value | Description |
|----------|----------------|--------------|
| `esrgan` | `true` | Activate the ESRGAN upscaling options|
| `internet_available` | `true` | When a resource is not available locally, try to fetch it via the internet |
| `log_tokenization` | `false` | Before each text2image generation, print a color-coded representation of the prompt to the console; this can help understand why a prompt is not working as expected |
| `patchmatch` | `true` | Activate the "patchmatch" algorithm for improved inpainting |
### Generation
These options tune InvokeAI's memory and performance characteristics.
| Setting | Default Value | Description |
|----------|----------------|--------------|
| `sequential_guidance` | `false` | Calculate guidance in serial rather than in parallel, lowering memory requirements at the cost of some performance loss |
| `attention_type` | `auto` | Select the type of attention to use. One of `auto`,`normal`,`xformers`,`sliced`, or `torch-sdp` |
| `attention_slice_size` | `auto` | When "sliced" attention is selected, set the slice size. One of `auto`, `balanced`, `max` or the integers 1-8|
| `force_tiled_decode` | `false` | Force the VAE step to decode in tiles, reducing memory consumption at the cost of performance |
### Device
These options configure the generation execution device.
| Setting | Default Value | Description |
|----------|----------------|--------------|
| `device` | `auto` | Preferred execution device. One of `auto`, `cpu`, `cuda`, `cuda:1`, `mps`. `auto` will choose the device depending on the hardware platform and the installed torch capabilities. |
| `precision` | `auto` | Floating point precision. One of `auto`, `float16` or `float32`. `float16` will consume half the memory of `float32` but produce slightly lower-quality images. The `auto` setting will guess the proper precision based on your video card and operating system |
### Paths
These options set the paths of various directories and files used by
InvokeAI. Relative paths are interpreted relative to INVOKEAI_ROOT, so
if INVOKEAI_ROOT is `/home/fred/invokeai` and the path is
`autoimport/main`, then the corresponding directory will be located at
`/home/fred/invokeai/autoimport/main`.
| Setting | Default Value | Description |
|----------|----------------|--------------|
| `autoimport_dir` | `autoimport/main` | At startup time, read and import any main model files found in this directory |
| `lora_dir` | `autoimport/lora` | At startup time, read and import any LoRA/LyCORIS models found in this directory |
| `embedding_dir` | `autoimport/embedding` | At startup time, read and import any textual inversion (embedding) models found in this directory |
| `controlnet_dir` | `autoimport/controlnet` | At startup time, read and import any ControlNet models found in this directory |
| `conf_path` | `configs/models.yaml` | Location of the `models.yaml` model configuration file |
| `models_dir` | `models` | Location of the directory containing models installed by InvokeAI's model manager |
| `legacy_conf_dir` | `configs/stable-diffusion` | Location of the directory containing the .yaml configuration files for legacy checkpoint models |
| `db_dir` | `databases` | Location of the directory containing InvokeAI's image, schema and session database |
| `outdir` | `outputs` | Location of the directory in which the gallery of generated and uploaded images will be stored |
| `use_memory_db` | `false` | Keep database information in memory rather than on disk; this will not preserve image gallery information across restarts |
Note that the autoimport directories will be searched recursively,
allowing you to organize the models into folders and subfolders in any
way you wish. In addition, while we have split up autoimport
directories by the type of model they contain, this isn't
necessary. You can combine different model types in the same folder
and InvokeAI will figure out what they are. So you can easily use just
one autoimport directory by commenting out the unneeded paths:
```
Paths:
autoimport_dir: autoimport
# lora_dir: null
# embedding_dir: null
# controlnet_dir: null
```
### Logging
These settings control the information, warning, and debugging
messages printed to the console log while InvokeAI is running:
| Setting | Default Value | Description |
|----------|----------------|--------------|
| `log_handlers` | `console` | This controls where log messages are sent, and can be a list of one or more destinations. Values include `console`, `file`, `syslog` and `http`. These are described in more detail below |
| `log_format` | `color` | This controls the formatting of the log messages. Values are `plain`, `color`, `legacy` and `syslog` |
| `log_level` | `debug` | This filters messages according to the level of severity and can be one of `debug`, `info`, `warning`, `error` and `critical`. For example, setting to `warning` will display all messages at the warning level or higher, but won't display "debug" or "info" messages |
Several different log handler destinations are available, and multiple destinations are supported by providing a list:
```
log_handlers:
- console
- syslog=localhost
- file=/var/log/invokeai.log
```
* `console` is the default. It prints log messages to the command-line window from which InvokeAI was launched.
* `syslog` is only available on Linux and Macintosh systems. It uses
the operating system's "syslog" facility to write log file entries
locally or to a remote logging machine. `syslog` offers a variety
of configuration options:
```
syslog=/dev/log        - log to the /dev/log device
syslog=localhost       - log to the network logger running on the local machine
syslog=localhost:512   - same as above, but using a non-standard port
```
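Putting these options together, a `Logging` section of `invokeai.yaml` might look like the sketch below. This assumes the group name matches the heading above, as with the `Paths` example earlier; the handler list and level are illustrative, not recommendations:
```yaml
Logging:
  log_handlers:
    - console                      # log to the terminal
    - file=/var/log/invokeai.log   # also append to a log file
  log_format: color
  log_level: info
```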
ControlNet is a powerful set of features developed by the open-source
community (notably, Stanford researcher
[**@lllyasviel**](https://github.com/lllyasviel)) that allows you to
apply a secondary neural network model to your image generation
process in Invoke.
With ControlNet, you can get more control over the output of your
image generation, providing you with a way to direct the network
towards generating images that better fit your desired style or
outcome.
ControlNet works by analyzing an input image, pre-processing that
image to identify relevant information that can be interpreted by each
specific ControlNet model, and then inserting that control information
into the generation process. This can be used to adjust the style,
composition, or other aspects of the image to better achieve a
specific result.
#### Installation
InvokeAI provides access to a series of ControlNet models that provide
different effects or styles in your generated images.
To install ControlNet Models:
1. The easiest way to install them is
to use the InvokeAI model installer application. Use the
`invoke.sh`/`invoke.bat` launcher to select item [4] and then navigate
to the CONTROLNETS section. Select the models you wish to install and
press "APPLY CHANGES". You may also enter additional HuggingFace
repo_ids in the "Additional models" textbox.
2. Using the "Add Model" function of the model manager, enter the HuggingFace Repo ID of the ControlNet. The ID is in the format "author/repoName"
_Be aware that some ControlNet models require additional code
functionality in order to work properly, so just installing a
third-party ControlNet model may not have the desired effect._ Please
read and follow the documentation for installing a third party model
not currently included among InvokeAI's default list.
Currently InvokeAI **only** supports 🤗 Diffusers-format ControlNet models. These are
folders that contain the files `config.json` and/or
`diffusion_pytorch_model.safetensors` and
`diffusion_pytorch_model.fp16.safetensors`. The name of the folder is
the name of the model.
🤗 Diffusers-format ControlNet models are available at HuggingFace
(http://huggingface.co) and accessed via their repo IDs (identifiers
in the format "author/modelname").
#### ControlNet Models
The models currently supported include:
**Canny**:
When the Canny model is used in ControlNet, Invoke will attempt to generate images that match the edges detected.
Canny edge detection works by detecting the edges in an image by looking for abrupt changes in intensity. It is known for its ability to detect edges accurately while reducing noise and false edges, and the preprocessor can identify more information by decreasing the thresholds.
**M-LSD**:
M-LSD is another edge detection algorithm used in ControlNet. It stands for Multi-Scale Line Segment Detector.
It detects straight line segments in an image by analyzing the local structure of the image at multiple scales. It can be useful for architectural imagery, or anything where straight-line structural information is needed for the resulting output.
**Lineart**:
The Lineart model in ControlNet generates line drawings from an input image. The resulting pre-processed image is a simplified version of the original, with only the outlines of objects visible. The model is known for its ability to accurately capture the contours of the objects in an input sketch.
**Lineart Anime**:
A variant of the Lineart model that generates line drawings with a distinct style inspired by anime and manga art styles.
**Depth**:
A model that generates depth maps of images, allowing you to create more realistic 3D models or to simulate depth effects in post-processing.
**Normal Map (BAE):**
A model that generates normal maps from input images, allowing for more realistic lighting effects in 3D rendering.
**Image Segmentation**:
A model that divides input images into segments or regions, each of which corresponds to a different object or part of the image. (More details coming soon)
**QR Code Monster**:
A model that helps generate creative QR codes that still scan. Can also be used to create images with text, logos or shapes within them.
**Openpose**:
The OpenPose control model allows for the identification of the general pose of a character by pre-processing an existing image with a clear human structure. With advanced options, Openpose can also detect the face or hands in the image.
**Mediapipe Face**:
The MediaPipe Face identification processor is able to clearly identify facial features in order to capture vivid expressions of human faces.
**Tile**:
The Tile model fills out details in the image to match the image, rather than the prompt. The Tile Model is a versatile tool that offers a range of functionalities. Its primary capabilities can be boiled down to two main behaviors:
- It can reinterpret specific details within an image and create fresh, new elements.
- It has the ability to disregard global instructions if there's a discrepancy between them and the local context or specific parts of the image. In such cases, it uses the local context to guide the process.
The Tile Model can be a powerful tool in your arsenal for enhancing image quality and details. If there are undesirable elements in your images, such as blurriness caused by resizing, this model can effectively eliminate these issues, resulting in cleaner, crisper images. Moreover, it can generate and add refined details to your images, improving their overall quality and appeal.
**Pix2Pix (experimental)**
With Pix2Pix, you can input an image into the controlnet, and then "instruct" the model to change it using your prompt. For example, you can say "Make it winter" to add more wintry elements to a scene.
Each of these models can be adjusted and combined with other ControlNet models to achieve different results, giving you even more control over your image generation process.
### Using ControlNet
To use ControlNet, you can simply select the desired model and adjust both the ControlNet and Pre-processor settings to achieve the desired result. You can also use multiple ControlNet models at the same time, allowing you to achieve even more complex effects or styles in your generated images.
Each ControlNet has two settings that are applied:
* Weight - Strength of the ControlNet model applied to the generation for the section, defined by Start/End.
* Start/End - 0 represents the start of the generation, 1 represents the end. The Start/End setting controls which steps of the generation process have the ControlNet applied.
Additionally, each ControlNet section can be expanded in order to manipulate settings for the image pre-processor that adjusts your uploaded image before it is used when you Invoke.
## T2I-Adapter
[T2I-Adapter](https://github.com/TencentARC/T2I-Adapter) is a tool similar to ControlNet that allows for control over the generation process by providing control information during the generation process. T2I-Adapter models tend to be smaller and more efficient than ControlNets.
#### Installation
To install T2I-Adapter Models:
1. The easiest way to install models is
to use the InvokeAI model installer application. Use the
`invoke.sh`/`invoke.bat` launcher to select item [5] and then navigate
to the T2I-Adapters section. Select the models you wish to install and
press "APPLY CHANGES". You may also enter additional HuggingFace
repo_ids in the "Additional models" textbox.
2. Using the "Add Model" function of the model manager, enter the HuggingFace Repo ID of the T2I-Adapter. The ID is in the format "author/repoName"
#### Usage
Each T2I-Adapter has two settings that are applied:
* Weight - Strength of the model applied to the generation for the section, defined by Start/End.
* Start/End - 0 represents the start of the generation, 1 represents the end. The Start/End setting controls which steps of the generation process have the T2I-Adapter applied.
Additionally, each section can be expanded with the "Show Advanced" button in order to manipulate settings for the image pre-processor that adjusts your uploaded image before it is used during the generation process.
## IP-Adapter
[IP-Adapter](https://ip-adapter.github.io) is a tool that allows for image prompt capabilities with text-to-image diffusion models. IP-Adapter works by analyzing the given image prompt to extract features, then passing those features to the UNet along with any other conditioning provided.
There are several ways to install IP-Adapter models with an existing InvokeAI installation:
1. Through the command line interface launched from the invoke.sh / invoke.bat scripts, option [4] to download models.
2. Through the Model Manager UI with models from the *Tools* section of [www.models.invoke.ai](https://www.models.invoke.ai). To do this, copy the repo ID from the desired model page, and paste it in the Add Model field of the model manager. **Note** Both the IP-Adapter and the Image Encoder must be installed for IP-Adapter to work. For example, the [SD 1.5 IP-Adapter](https://models.invoke.ai/InvokeAI/ip_adapter_plus_sd15) and [SD1.5 Image Encoder](https://models.invoke.ai/InvokeAI/ip_adapter_sd_image_encoder) must be installed to use IP-Adapter with SD1.5 based models.
3. **Advanced -- Not recommended:** Manually download the IP-Adapter and Image Encoder files. Image Encoder folders should be placed in the `models\any\clip_vision` folder. IP-Adapter model folders should be placed in the relevant `ip-adapter` folder of the relevant base-model folder in the Invoke root directory. For example, for the SDXL IP-Adapter, files should be added to the `model/sdxl/ip_adapter/` folder.
#### Using IP-Adapter
IP-Adapter can be used by navigating to the *Control Adapters* options and enabling IP-Adapter.
IP-Adapter requires an image to be used as the Image Prompt. It can also be used in conjunction with text prompts, Image-to-Image, Inpainting, Outpainting, ControlNets and LoRAs.
Each IP-Adapter has two settings that are applied to the IP-Adapter:
* Weight - Strength of the IP-Adapter model applied to the generation for the section, defined by start/end
* Start/End - 0 represents the start of the generation, 1 represents the end. The Start/end setting controls what steps during the generation process have the IP-Adapter applied.
With the advances in research, many new capabilities are available to customize the knowledge and understanding of novel concepts not originally contained in the base model.
## LoRAs
Low-Rank Adaptation (LoRA) files are models that customize the output of Stable Diffusion
image generation. Larger than embeddings, but much smaller than full
models, they augment SD with improved understanding of subjects and
artistic styles.
Unlike TI files, LoRAs do not introduce novel vocabulary into the
model's known tokens. Instead, LoRAs augment the model's weights that
are applied to generate imagery. LoRAs may be supplied with a
"trigger" word that they have been explicitly trained on, or may
simply apply their effect without being triggered.
LoRAs are typically stored in .safetensors files, which are the most
secure way to store and transmit these types of weights. You may
install any number of `.safetensors` LoRA files simply by copying them
into the `autoimport/lora` directory of the corresponding InvokeAI models
directory (usually `invokeai` in your home directory).
To use these when generating, open the LoRA menu item in the options
panel, select the LoRAs you want to apply and ensure that they have
the appropriate weight recommended by the model provider. Typically,
most LoRAs perform best at a weight of 0.75-1.
## LCM-LoRAs
Latent Consistency Models (LCMs) allow a reduced number of steps to be used to generate images with Stable Diffusion. These are created by distilling base models, creating models that only require a small number of steps to generate images. However, LCMs require that any fine-tune of a base model be distilled to be used as an LCM.
LCM-LoRAs are models that provide the benefit of LCMs but are able to be used as LoRAs and applied to any fine tune of a base model. LCM-LoRAs are created by training a small number of adapters, rather than distilling the entire fine-tuned base model. The resulting LoRA can be used the same way as a standard LoRA, but with a greatly reduced step count. This enables SDXL images to be generated up to 10x faster than without the use of LCM-LoRAs.
**Using LCM-LoRAs**
LCM-LoRAs are natively supported in InvokeAI throughout the application. To get started, install any diffusers-format LCM-LoRA using the model manager and select it in the LoRA field.
There are a number of parameter differences between LCM-LoRA and standard generation:
- When using LCM-LoRAs, the LoRA strength should be lower than if using a standard LoRA, with 0.35 recommended as a starting point.
- The LCM scheduler should be used for generation
- CFG-Scale should be reduced to ~1
- Steps should be reduced in the range of 4-8
Standard LoRAs can also be used alongside LCM-LoRAs, but will also require a lower strength, with 0.45 being recommended as a starting point.
More information can be found here: https://huggingface.co/blog/lcm_lora#fast-inference-with-sdxl-lcm-loras
InvokeAI provides the ability to merge two or three diffusers-type models into a new merged model. The
resulting model will combine characteristics of the originals, and can
be used to teach an old model new tricks.
## How to Merge Models
Model Merging can be done by navigating to the Model Manager and clicking the "Merge Models" tab. From there, you can select the models and settings you want to use to merge the models.
## Settings
* Model Selection: there are three multiple choice fields that
display all the diffusers-style models that InvokeAI knows about.
If you do not see the model you are looking for, then it is probably
a legacy checkpoint model and needs to be converted using the
"Convert" option in the Web-based Model Manager tab.
You must select at least two models to merge. The third can be left
at "None" if you desire.
* Alpha: This is the ratio to use when combining models. It ranges
from 0 to 1. The higher the value, the more weight is given to the
second and (optionally) third models. So if you have two models named "A"
and "B", an alpha value of 0.25 will give you a merged model that is
75% A and 25% B.
* Interpolation Method: This is the method used to combine
weights. The options are "weighted_sum" (the default), "sigmoid",
"inv_sigmoid" and "add_difference". Each produces slightly different
results. When three models are in use, only "add_difference" is
available.
* Save Location: The location you want the merged model to be saved in. The default is the InvokeAI root folder.
* Name for merged model: This is the name for the new model. Please
use InvokeAI conventions - only alphanumeric letters and the
characters ".+-".
* Ignore Mismatches / Force: Not all models are compatible with each other. The merge
script will check for compatibility and refuse to merge ones that
are incompatible. Set this checkbox to try merging anyway.
You may run the merge script by starting the invoke launcher
(`invoke.sh` or `invoke.bat`) and choosing the option (4) for _merge
models_. This will launch a text-based interactive user interface that
prompts you to select the models to merge, how to merge them, and the
merged model name.
Alternatively you may activate InvokeAI's virtual environment from the
command line, and call the script via `merge_models --gui` to open up
a version that has a nice graphical front end. To get the command-line-only version, omit `--gui`.
The user interface for the text-based interactive script is
straightforward. It shows you a series of setting fields. Use control-N (^N)
to move to the next field, and control-P (^P) to move to the previous
one. You can also use TAB and shift-TAB to move forward and
backward. Once you are in a multiple choice field, use the up and down
cursor arrows to move to your desired selection, and press `<SPACE>` or
`<ENTER>` to select it. Change text fields by typing in them, and adjust
scrollbars using the left and right arrow keys.
Once you are happy with your settings, press the OK button. Note that
there may be two pages of settings, depending on the height of your
screen, and the OK button may be on the second page. Advance past the
last field of the first page to get to the second page, and reverse
this to get back.
If the merge runs successfully, it will create a new diffusers model
under the selected name and register it with InvokeAI.
[{ align="right" }](https://colab.research.google.com/github/lstein/stable-diffusion/blob/main/notebooks/Stable_Diffusion_AI_Notebook.ipynb)
Open and follow instructions to use an isolated environment running Dream.
Output Example:

---
## **Invisible Watermark**
In keeping with the principles for responsible AI generation, and to
help AI researchers avoid synthetic images contaminating their
training sets, InvokeAI adds an invisible watermark to each of the
final images it generates. The watermark consists of the text
This section details how to improve faces and upscale images.
## Face Fixing
As of InvokeAI 3.0, the easiest way to improve faces created during image generation is through the Inpainting functionality of the Unified Canvas. Simply add the image containing the faces that you would like to improve to the canvas, mask the face to be improved and run the invocation. For best results, make sure to use an inpainting specific model; these are usually identified by the "-inpainting" term in the model name.
## Upscaling
Open the upscaling dialog by clicking on the "expand" icon located
The default upscaling option is Real-ESRGAN x2 Plus, which will scale your image by a factor of two. This means upscaling a 512x512 image will result in a new 1024x1024 image.
Other options are the x4 upscalers, which will scale your image by a factor of 4.
!!! note
Real-ESRGAN is memory intensive. In order to avoid crashes and memory overloads
during the Stable Diffusion process, these effects are applied after Stable Diffusion has completed
its work.
In single image generations, you will see the output right away but when you are using multiple
iterations, the images will first be generated and then upscaled after that
process is complete. While the image generation is taking place, you will still be able to preview
the base images.
## How to disable
If, for some reason, you do not wish to load the ESRGAN libraries,
you can disable them on the invoke.py command line with the `--no_esrgan` option.
|  |  |  |
Using `+` to increase apricot-ness:
| `a man picking apricots+ from a tree` | `a man picking apricots++ from a tree` | `a man picking apricots+++ from a tree` | `a man picking apricots++++ from a tree` | `a man picking apricots+++++ from a tree` |
|  |  |  |  |  |
You can also change the balance between different parts of a prompt. For
example, below is a `mountain man`:
<figure markdown>

-`("a tall thin man picking apricots", "a tall thin man picking pears").blend(1,1)`
- The existing prompt blending using `:<weight>` will continue to be supported -
`("a tall thin man picking apricots", "a tall thin man picking pears").blend(1,1)`
is equivalent to
`a tall thin man picking apricots:1 a tall thin man picking pears:1` in the
old syntax.
- Attention weights can be nested inside blends.
- Non-normalized blends are supported by passing `no_normalize` as an additional
argument to the blend weights, e.g.
`("a tall thin man picking apricots", "a tall thin man picking pears").blend(1,-1,no_normalize)`.
very fun to explore local maxima in the feature space, but also easy to
produce garbage output.
See the section below on "Prompt Blending" for more information about how this
works.
### Prompt Conjunction
Join multiple clauses together to create a conjoined prompt. Each clause will be passed to CLIP separately.
For example, the prompt:
```bash
"A mystical valley surround by towering granite cliffs, watercolor, warm"
```
Can be used with .and():
```bash
("A mystical valley", "surround by towering granite cliffs", "watercolor", "warm").and()
```
Each will give you different results - try them out and see what you prefer!
### Cross-Attention Control ('prompt2prompt')
Sometimes an image you generate is almost right, and you just want to change one
detail without affecting the rest. You could use a photo editor and inpainting
to overpaint the area, but that's a pain. Here's where `prompt2prompt` comes in
handy.
Generate an image with a given prompt, record the seed of the image, and then
use the `prompt2prompt` syntax to substitute words in the original prompt for
words in a new prompt. This works for `img2img` as well.
For example, consider the prompt `a cat.swap(dog) playing with a ball in the forest`. Normally, because of the way words interact with each other during stable diffusion image generation, these two prompts would generate different compositions:
- `a cat playing with a ball in the forest`
- `a dog playing with a ball in the forest`
| `a cat playing with a ball in the forest` | `a dog playing with a ball in the forest` |
| --- | --- |
| img | img |
- For multiple word swaps, use parentheses: `a (fluffy cat).swap(barking dog) playing with a ball in the forest`.
- To swap a comma, use quotes: `a ("fluffy, grey cat").swap("big, barking dog") playing with a ball in the forest`.
- Supports options `t_start` and `t_end` (each 0-1) loosely corresponding to [bloc97's](https://github.com/bloc97/CrossAttentionControl) `prompt_edit_tokens_start/_end` but with the math swapped to make it easier to
intuitively understand. `t_start` and `t_end` are used to control on which steps cross-attention control should run. With the default values `t_start=0` and `t_end=1`, cross-attention control is active on every step of image generation. Other values can be used to turn cross-attention control off for part of the image generation process.
- For example, for a 10-step generation with the prompt `a cat.swap(dog, t_start=0.3, t_end=1.0) playing with a ball in the forest`, the first 3 steps will be run as `a cat playing with a ball in the forest`, while the last 7 steps will run as `a dog playing with a ball in the forest`, but the pixels that represent `dog` will be locked to the pixels that would have represented `cat` if the `cat` prompt had been used instead.
- Conversely, for `a cat.swap(dog, t_start=0, t_end=0.7) playing with a ball in the forest`, the first 7 steps will run as `a dog playing with a ball in the forest` with the pixels that represent `dog` locked to the same pixels that would have represented `cat` if the `cat` prompt was being used instead. The final 3 steps will just run `a cat playing with a ball in the forest`.
> For img2img, the step sequence does not start at 0 but instead at `(1.0-strength)` - so if the img2img `strength` is `0.7`, `t_start` and `t_end` must both be greater than `0.3` (`1.0-0.7`) to have any effect.
Prompt2prompt `.swap()` is not compatible with xformers, which will be temporarily disabled when doing a `.swap()` - so you should expect to use more VRAM and run slower than with xformers enabled.
If the model you are using has parentheses () or speech marks "" as part of its
syntax, you will need to "escape" these using a backslash, so that `(my_keyword)`
becomes `\(my_keyword\)`. Otherwise, the prompt parser will attempt to interpret
the parentheses as part of the prompt syntax and it will get confused.
---
## **Prompt Blending**
You may blend together prompts to explore the AI's
latent semantic space and generate interesting (and often surprising!)
variations. The syntax is:
```bash
("prompt #1", "prompt #2").blend(0.25, 0.75)
```
This will tell the sampler to blend 25% of the concept of prompt #1 with 75%
of the concept of prompt #2. It is recommended to keep the sum of the weights to around 1.0, but interesting things might happen if you go outside of this range.
Because you are exploring the "mind" of the AI, the AI's way of mixing two
concepts may not match yours, leading to surprising effects. To illustrate, here
are three images generated using various combinations of blend weights. As
usual, unless you fix the seed, the prompts will give you different results each
time you run them.
Let's examine how this affects image generation results:
```bash
"blue sphere, red cube, hybrid"
```
This example doesn't use blending at all and represents the default way of mixing multiple concepts.
It's interesting to see how the AI expressed the concept of "cube" within the sphere. If you look closely, there is depth there, so the enclosing frame is actually a cube.
Now that's interesting. We get an image with a resemblance to a red cube, with a hint of blue shadows, which represents a melding of concepts within the AI's "latent space" of semantic representations.
Whoa...! I see blue and red, and if I squint, spheres and cubes.
## Dynamic Prompts
Dynamic Prompts are a powerful feature designed to produce a variety of prompts based on user-defined options. Using a special syntax, you can construct a prompt with multiple possibilities, and the system will automatically generate a series of permutations based on your settings. This is extremely beneficial for ideation, exploring various scenarios, or testing different concepts swiftly and efficiently.
### Structure of a Dynamic Prompt
A Dynamic Prompt comprises regular text, supplemented with alternatives enclosed within curly braces `{}` and separated by a vertical bar `|`. For example: `{option1|option2|option3}`. The system will then select one of the options to include in the final prompt. This flexible system allows for options to be placed throughout the text as needed.
Furthermore, Dynamic Prompts can designate multiple selections from a single group of options. This feature is triggered by prefixing the options with a numerical value followed by `$$`. For example, in `{2$$option1|option2|option3}`, the system will select two distinct options from the set.
### Creating Dynamic Prompts
To create a Dynamic Prompt, follow these steps:
Draft your sentence or phrase, identifying words or phrases with multiple possible options.
Encapsulate the different options within curly braces {}.
Within the braces, separate each option using a vertical bar |.
If you want to include multiple options from a single group, prefix with the desired number and $$.
For instance: A {house|apartment|lodge|cottage} in {summer|winter|autumn|spring} designed in {style1|style2|style3}.
### How Dynamic Prompts Work
Once a Dynamic Prompt is configured, the system generates an array of combinations using the options provided. Each group of options in curly braces is treated independently, with the system selecting one option from each group. For a prefixed set (e.g., 2$$), the system will select two distinct options.
For example, the following prompts could be generated from the above Dynamic Prompt:
A house in summer designed in style1, style2
A lodge in autumn designed in style3, style1
A cottage in winter designed in style2, style3
And many more!
When the `Combinatorial` setting is on, Invoke will disable the "Images" selection, and generate every combination up until the setting for Max Prompts is reached.
When the `Combinatorial` setting is off, Invoke will randomly generate combinations up until the setting for Images has been reached.
### Tips and Tricks for Using Dynamic Prompts
Below are some useful strategies for creating Dynamic Prompts:
Utilize Dynamic Prompts to generate a wide spectrum of prompts, perfect for brainstorming and exploring diverse ideas.
Ensure that the options within a group are contextually relevant to the part of the sentence where they are used. For instance, group building types together, and seasons together.
Apply the 2$$ prefix when you want to incorporate more than one option from a single group. This becomes quite handy when mixing and matching different elements.
Experiment with different quantities for the prefix. For example, 3$$ will select three distinct options.
Be aware of coherence in your prompts. Although the system can generate all possible combinations, not all may semantically make sense. Therefore, carefully choose the options for each group.
Always review and fine-tune the generated prompts as needed. While Dynamic Prompts can help you generate a multitude of combinations, the final polishing and refining remain in your hands.
## SDXL Prompting
Prompting with SDXL is slightly different than prompting with SD1.5 or SD2.1 models - SDXL expects a prompt _and_ a style.
### Prompting
<figure markdown>

</figure>
In the prompt box, enter a positive or negative prompt as you normally would.
For the style box you can enter a style that you want the image to be generated in. You can use styles from this example list, or any other style you wish: anime, photographic, digital art, comic book, fantasy art, analog film, neon punk, isometric, low poly, origami, line art, cinematic, 3d model, pixel art, etc.
### Concatenated Prompts
InvokeAI also has the option to concatenate the prompt and style inputs, by pressing the "link" button in the Positive Prompt box.
This concatenates the prompt & style inputs, and passes the joined prompt and style to the SDXL model.

|  |  |
"Outpainting" means asking the AI to expand the original image beyond its
original borders, making a bigger image that's still based on the original. For
example, extending the above image of your Grandmother in a biker's jacket to
include her wearing jeans (and while we're at it, a motorcycle!)
<figure markdown>

</figure>
When you are using the Unified Canvas, Invoke decides automatically whether to
do Inpainting, Outpainting, ImageToImage, or TextToImage by looking inside the
area enclosed by the Bounding Box. It chooses the appropriate type of generation
based on whether the Bounding Box contains empty (transparent) areas on the Base
layer, or whether it contains colored areas from previous generations (or from
painted brushstrokes) on the Base layer, and/or whether the Mask layer contains
any brushstrokes. See [Generation Methods](#generation-methods) below for more
information.
## Getting Started
To get started with the Unified Canvas, you will want to generate a new base
layer using Txt2Img or importing an initial image. We'll refer to either of
these methods as the "initial image" in the below guide.
From there, you can consider the following techniques to augment your image:
- **New Images**: Move the bounding box to an empty area of the Canvas, type in
your prompt, and Invoke, to generate a new image using the Text to Image
function.
- **Image Correction**: Use the color picker and brush tool to paint corrections
on the image, switch to the Mask layer, and brush a mask over your painted
area to use **Inpainting**. You can also use the **ImageToImage** generation
method to invoke new interpretations of the image.
- **Image Expansion**: Move the bounding box to include a portion of your
initial image, and a portion of transparent/empty pixels, then Invoke using a
prompt that describes what you'd like to see in that area. This will Outpaint
the image. You'll typically find more coherent results if you keep about
50-60% of the original image in the bounding box. Make sure that the Image To
Image Strength slider is set to a high value - you may need to set it higher
than you are used to.
- **New Content on Existing Images**: If you want to add new details or objects
into your image, use the brush tool to paint a sketch of what you'd like to
see on the image, switch to the Mask layer, and brush a mask over your painted
area to use **Inpainting**. If the masked area is small, consider using a
smaller bounding box to take advantage of Invoke's automatic Scaling features,
which can help to produce better details.
- **And more**: There are a number of creative ways to use the Canvas, and the
above are just starting points. We're excited to see what you come up with!
Invoke uses a SQLite database to store image, workflow, model, and execution data.
We take great care to ensure your data is safe, by utilizing transactions and a database migration system.
Even so, when testing a prerelease version of the app, we strongly suggest either backing up your database or using an in-memory database. This ensures any prerelease hiccups or database schema changes will not cause problems for your data.
## Database Backup
Backing up your database is very simple. Invoke's data is stored in an `$INVOKEAI_ROOT` directory - where your `invoke.sh`/`invoke.bat` and `invokeai.yaml` files live.
To back up your database, copy the `invokeai.db` file from `$INVOKEAI_ROOT/databases/invokeai.db` to somewhere safe.
If anything comes up during prerelease testing, you can simply copy your backup back into `$INVOKEAI_ROOT/databases/`.
## In-Memory Database
SQLite can run on an in-memory database. Your existing database is untouched when this mode is enabled, but your existing data won't be accessible.
This is very useful for testing, as there is no chance of a database change modifying your "physical" database.
To run Invoke with a memory database, edit your `invokeai.yaml` file and add `use_memory_db: true`:
```yaml
use_memory_db: true
```
Delete this line (or set it to `false`) to use your main database.
## Quick guided walkthrough of the Gallery Panel's features
The Gallery Panel is a fast way to review, find, and make use of images you've
generated and loaded. The Gallery is divided into Boards. The Uncategorized board is always
present but you can create your own for better organization.

### Board Display and Settings
At the very top of the Gallery Panel are the boards disclosure and settings buttons.

The disclosure button shows the name of the currently selected board and allows you to show and hide the board thumbnails (shown in the image below).

The settings button opens a list of options.

- ***Image Size*** this slider lets you control the size of the image previews (images of three different sizes).
- ***Auto-Switch to New Images*** if you turn this on, whenever a new image is generated, it will automatically be loaded into the current image panel on the Text to Image tab and into the result panel on the [Image to Image](IMG2IMG.md) tab. This will happen invisibly if you are on any other tab when the image is generated.
- ***Auto-Assign Board on Click*** whenever an image is generated or saved, it always gets put in a board. The board it gets put into is marked with AUTO (image of board marked). Turning on Auto-Assign Board on Click will make whichever board you last selected be the destination when you click Invoke. That means you can click Invoke, select a different board, and then click Invoke again and the two images will be put in two different boards. **It's the board selected when Invoke is clicked that's used, not the board that's selected when the image is finished generating.** Turning this off enables the Auto-Add Board drop-down, which lets you set one specific board to always put generated images into. This also enables and disables the Auto-add to this Board menu item described below.
- ***Always Show Image Size Badge*** this toggles whether to show image sizes for each image preview (show two images, one with sizes shown, one without)
Below these two buttons, you'll see the Search Boards text entry area. You use this to search for specific boards by the name of the board.
Next to it is the Add Board (+) button which lets you add new boards. Boards can be renamed by clicking on the name of the board under its thumbnail and typing in the new name.
### Board Thumbnail Menu
Each board has a context menu (ctrl+click / right-click).

- ***Auto-add to this Board*** if you've disabled Auto-Assign Board on Click in the board settings, you can use this option to set this board to be where new images are put.
- ***Download Board*** this will add all the images in the board into a zip file and provide a link to it in a notification (image of notification)
- ***Delete Board*** this will delete the board
> [!CAUTION]
> This will delete all the images in the board and the board itself.
### Board Contents
Every board is organized by two tabs, Images and Assets.

Images are the Invoke-generated images that are placed into the board. Assets are images that you upload into Invoke to be used as an [Image Prompt](https://support.invoke.ai/support/solutions/articles/151000159340-using-the-image-prompt-adapter-ip-adapter-) or in the [Image to Image](IMG2IMG.md) tab.
### Image Thumbnail Menu
Every image generated by Invoke has its generation information stored as text inside the image file itself. This can be read directly by selecting the image and clicking on the Info button  in any of the image result panels.
Each image also has a context menu (ctrl+click / right-click).

The options are (items marked with an * will not work with images that lack generation information):
- ***Open in New Tab*** this will open the image alone in a new browser tab, separate from the Invoke interface.
- ***Download Image*** this will trigger your browser to download the image.
- ***Load Workflow **** this will load any workflow settings into the Workflow tab and automatically open it.
- ***Remix Image **** this will load all of the image's generation information, **excluding its Seed**, into the left-hand control panel
- ***Use Prompt **** this will load only the image's text prompts into the left-hand control panel
- ***Use Seed **** this will load only the image's Seed into the left-hand control panel
- ***Use All **** this will load all of the image's generation information into the left-hand control panel
- ***Send to Image to Image*** this will put the image into the left-hand panel in the Image to Image tab and automatically open it
- ***Send to Unified Canvas*** This will **replace whatever is already present** in the Unified Canvas tab with the image and automatically open the tab
- ***Change Board*** this will open a small window that will let you move the image to a different board. This is the same as dragging the image to that board's thumbnail.
- ***Star Image*** this will add the image to the board's list of starred images that are always kept at the top of the gallery. This is the same as clicking on the star on the top right-hand side of the image that appears when you hover over the image with the mouse
- ***Delete Image*** this will delete the image from the board
> [!CAUTION]
> This will delete the image entirely from Invoke.
## Summary
This walkthrough only covers the Gallery interface and Boards. Actually generating images is handled by [Prompts](PROMPTS.md), the [Image to Image](IMG2IMG.md) tab, and the [Unified Canvas](UNIFIED_CANVAS.md).
## Acknowledgements
A huge shout-out to the core team working to make the Web GUI a reality,
including [psychedelicious](https://github.com/psychedelicious),
**Where do I get started? How can I install Invoke?**
- You can download the latest installers [here](https://github.com/invoke-ai/InvokeAI/releases) - Note that any releases marked as *pre-release* are in a beta state. You may experience some issues, but we appreciate your help testing those! For stable/reliable installations, please install the **[Latest Release](https://github.com/invoke-ai/InvokeAI/releases/latest)**
**How can I download models? Can I use models I already have downloaded?**
- Models can be downloaded through the model manager, or through option [4] in the invoke.bat/invoke.sh launcher script. To download a model through the Model Manager, use the HuggingFace Repo ID by pressing the “Copy” button next to the repository name. Alternatively, to download a model from CivitAi, use the download link in the Model Manager.
- Models that are already downloaded can be used by creating a symlink to the model location in the `autoimport` folder or by using the Model Manager's "Scan for Models" function.
**My images are taking a long time to generate. How can I speed up generation?**
- A common solution is to reduce the size of your RAM & VRAM cache to 0.25. This ensures your system has enough memory to generate images.
- Additionally, check the [hardware requirements](https://invoke-ai.github.io/InvokeAI/#hardware-requirements) to ensure that your system is capable of generating images.
- Lastly, double check your generations are happening on your GPU (if you have one). InvokeAI will log what is being used for generation upon startup.
**I’ve installed Python on Windows but the installer says it can’t find it?**
- Then ensure that you checked **'Add python.exe to PATH'** when installing Python. This can be found at the bottom of the Python Installer window. If you already have Python installed, this can be done with the modify / repair feature of the installer.
**I’ve installed everything successfully but I still get an error about Triton when starting Invoke?**
- This can be safely ignored. InvokeAI doesn't use Triton, but if you are on Linux and wish to dismiss the error, you can install Triton.
**I updated to 3.4.0 and now xFormers can’t load C++/CUDA?**
- An issue occurred with your PyTorch update. Follow these steps to fix it:
1. Launch your invoke.bat / invoke.sh and select the option to open the developer console
- If you run into an error with `typing_extensions`, re-open the developer console and run: `pip install -U typing-extensions`
**It says my pip is out of date - is that why my install isn't working?**
- An out-of-date pip won't cause an installation to fail. The cause of the error can likely be found above the message that says pip is out of date.
- If you saw that warning but the install went well, don't worry about it (but you can update pip afterwards if you'd like).
**How can I generate the exact same image that I found on the internet?**
Most example images with prompts that you'll find on the internet have been generated using different software, so you can't expect to get identical results. In order to reproduce an image, you need to replicate the exact settings and processing steps, including (but not limited to) the model, the positive and negative prompts, the seed, the sampler, the exact image size, any upscaling steps, etc.
**Where can I get more help?**
- Create an issue on [GitHub](https://github.com/invoke-ai/InvokeAI/issues) or post in the [#help channel](https://discord.com/channels/1020123559063990373/1149510134058471514) of the InvokeAI Discord
When you generate an image using text-to-image, multiple steps occur in latent space:
4. The VAE decodes the final latent image from latent space into image space.
Image-to-image is a similar process, with only step 1 being different:
1. The input image is encoded from image space into latent space by the VAE. Noise is then added to the input latent image. Denoising Strength dictates how many noise steps are added, and the amount of noise added at each step. A Denoising Strength of 0 means there are 0 steps and no noise added, resulting in an unchanged image, while a Denoising Strength of 1 results in the image being completely replaced with noise and a full set of denoising steps being performed. The process is then the same as steps 2-4 in the text-to-image process.
Furthermore, a model provides the CLIP prompt tokenizer, the VAE, and a U-Net (where noise prediction occurs given a prompt and initial noise tensor).