Compare commits

577 Commits

Author SHA1 Message Date
psychedelicious
0ea88dc170 chore: release v4.2.9.dev9 2024-08-30 22:24:08 +10:00
psychedelicious
8369826d22 fix(ui): entity opacity number input focus prevents slider from opening 2024-08-30 22:20:49 +10:00
psychedelicious
0e354f5164 feat(ui): add merge visible for raster and inpaint mask layers
I don't think it makes sense to merge control layers or regional guidance layers because they have additional state.
2024-08-30 22:20:49 +10:00
psychedelicious
41f2ee2633 fix(ui): save to gallery rect too large
Was including all layer types in the rect - only want the raster layers.
2024-08-30 22:20:49 +10:00
psychedelicious
4e74006c5f fix(ui): canvasToBlob not raising error correctly 2024-08-30 22:20:49 +10:00
psychedelicious
48edb6e023 feat(ui): add save to gallery button 2024-08-30 22:20:49 +10:00
psychedelicious
aeae6af0a1 fix(ui): fix getRectUnion util, add some tests 2024-08-30 22:20:49 +10:00
psychedelicious
ab11d9af8e fix(ui): modals not staying open
TBH not sure exactly why this broke. Fixed by rolling back the use of a render prop in favor of global state. Also revised the API of `useBoolean` and `buildUseBoolean`.
2024-08-30 22:20:49 +10:00
psychedelicious
2e84327ca4 fix(ui): correct labels for generation tab origin 2024-08-30 22:20:49 +10:00
psychedelicious
fa6842121c fix(ui): context menu doesn't work for new entities
I do not understand why this fixes the issue, doesn't seem like it should. But it does.
2024-08-30 22:20:49 +10:00
psychedelicious
c402aa397d tidy(ui): organise tool module 2024-08-30 22:20:49 +10:00
psychedelicious
a58c8adc38 fix(ui): staging hotkeys enabled at wrong times 2024-08-30 22:20:49 +10:00
psychedelicious
d43e2d690e fix(ui): incorrect batch origin preventing progress/staging 2024-08-30 22:20:49 +10:00
psychedelicious
284f768810 feat(ui): restore minimal HUD 2024-08-30 22:20:49 +10:00
psychedelicious
e933d1ae2b feat(ui): remove unused asPreview for StageComponent 2024-08-30 22:20:49 +10:00
psychedelicious
1e134de771 chore(ui): lint 2024-08-30 22:20:49 +10:00
psychedelicious
29c47c8be5 chore: release v4.2.9.dev8 2024-08-30 22:20:49 +10:00
psychedelicious
e1122c541d feat(ui): revise generation mode logic
- Canvas generation mode is replaced with a boolean `sendToCanvas` flag. When off, images generated on the canvas go to the gallery. When on, they get added to the staging area.
- When an image result is received, if its destination is the canvas, staging is automatically started.
- Updated queue list to show the destination column.
- Added `IconSwitch` component to represent binary choices, used for the new `sendToCanvas` flag and image viewer toggle.
- Remove the queue actions menu in `QueueControls`. Move the queue count badge to the cancel button.
- Redo layout of `QueueControls` to prevent duplicate queue count badges.
- Fix issue where gallery and options panels could show through transparent regions of queue tab.
- Disable panel hotkeys when on mm/queue tabs.
2024-08-30 22:20:49 +10:00
psychedelicious
2f81d1ac83 chore(ui): typegen 2024-08-30 22:20:49 +10:00
psychedelicious
56fbe751db feat(app): add destination column to session_queue
The frontend needs to know where queue items came from (i.e. which tab), and where results are going to (i.e. send images to gallery or canvas). The `origin` column is not quite enough to represent this cleanly.

A `destination` column provides the frontend what it needs to handle incoming generations.
2024-08-30 22:20:49 +10:00
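A minimal sketch of how the frontend might branch on the new `destination` field when an invocation-complete event arrives; the payload shape and handler names below are assumptions for illustration, not the actual InvokeAI types.
```ts
// Hypothetical event payload - only the idea of `destination` comes from the commit above.
type InvocationCompleteEvent = {
  queue_item: { origin: string | null; destination: string | null };
  image_name: string;
};

const onInvocationComplete = (
  event: InvocationCompleteEvent,
  handlers: { startStaging: (name: string) => void; addToGallery: (name: string) => void }
) => {
  if (event.queue_item.destination === 'canvas') {
    // Results destined for the canvas go to the staging area.
    handlers.startStaging(event.image_name);
  } else {
    // Everything else lands in the gallery.
    handlers.addToGallery(event.image_name);
  }
};
```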
psychedelicious
93f1d67fbf tidy(ui): ViewerToggleMenu -> ViewerToggle 2024-08-30 22:20:49 +10:00
psychedelicious
9467b937ff feat(ui): alt quick switches to color picker 2024-08-30 22:20:49 +10:00
psychedelicious
4242e6e6c2 feat(ui): tweak add entity button layout 2024-08-30 22:20:49 +10:00
psychedelicious
9b39452b3e feat(ui): restore context menu for entity list 2024-08-30 22:20:49 +10:00
psychedelicious
85b23784cf feat(ui): add delete button to each layer 2024-08-30 22:20:49 +10:00
psychedelicious
085cc82926 feat(ui): add + buttons to entity categories 2024-08-30 22:20:49 +10:00
psychedelicious
0098c33f81 feat(ui): tweak brush fill UI 2024-08-30 22:20:49 +10:00
psychedelicious
292e00ab68 feat(ui): do not select layer on staging accept 2024-08-30 22:20:49 +10:00
psychedelicious
6c1fb2d06e fix(ui): more fiddly queue count layout stuff 2024-08-30 22:20:49 +10:00
psychedelicious
d60605fcd8 fix(ui): floating params panel invoke button loading state 2024-08-30 22:20:49 +10:00
psychedelicious
38ed720ff2 feat(ui): move canvas undo/redo to hook 2024-08-30 22:20:49 +10:00
psychedelicious
22203b8eb0 fix(ui): queue count badge positioning 2024-08-30 22:20:49 +10:00
psychedelicious
cf5fa792a1 fix(ui): add node cmdk only enabled on workflows tab 2024-08-30 22:20:49 +10:00
psychedelicious
c636633a8e chore: release v4.2.9.dev7 2024-08-30 22:20:49 +10:00
psychedelicious
55fe1ebc53 fix(ui): pending node connection stuck 2024-08-30 22:20:49 +10:00
psychedelicious
3c2fa6b475 chore(ui): lint 2024-08-30 22:20:49 +10:00
psychedelicious
9b927de2e0 chore: release v4.2.9.dev6 2024-08-30 22:20:49 +10:00
psychedelicious
6a62854e7d feat(ui): migrate add node popover to cmdk
Put this together as a way to figure out the library before moving on to the full app cmdk. Works great.
2024-08-30 22:20:49 +10:00
psychedelicious
312093cbb0 fix(ui): schema parsing now that node_pack is guaranteed to be present 2024-08-30 22:20:49 +10:00
psychedelicious
06fe14e1fc chore(ui): typegen 2024-08-30 22:20:49 +10:00
psychedelicious
1b54e58726 fix(app): node_pack not added to openapi schema correctly 2024-08-30 22:20:49 +10:00
psychedelicious
219d7c9611 fix(ui): unnecessary z-index on invoke button 2024-08-30 22:20:49 +10:00
psychedelicious
9f742a669e feat(ui): split settings modal 2024-08-30 22:20:49 +10:00
psychedelicious
41e324fd51 perf(ui): disable useInert on modals
This hook forcibly updates _all_ portals with `data-hidden=true` when the modal opens - then reverts it when the modal closes. It's intended to help screen readers. Unfortunately, this absolutely tanks performance because we have many portals. React needs to do a lot of layout calculations (not re-renders).

IMO this behaviour is a bug in chakra. The modals which generated the portals are hidden by default, so this data attr should really be set by default. Dunno why it isn't.
2024-08-30 22:20:36 +10:00
psychedelicious
ce55a96125 feat(ui): fix queue item count badge positioning
Previously this badge, floating over the queue menu button next to the invoke button, was rendered within the existing layout. When I initially positioned it, the app layout interfered - it would extend into an area reserved for a flex gap, which cut off the badge.

As a (bad) workaround, I had shifted the whole app down a few pixels to make room for it. What I should have done is what I've done in this commit - render the badge in a portal to take it out of the layout so we don't need that extra vertical padding.

Sleekified some styling a bit too.
2024-08-30 22:20:36 +10:00
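A rough sketch of the portal approach described above, assuming Chakra UI's `Portal`; the badge markup and positioning values are illustrative, not the actual component.
```tsx
import { Badge, Portal } from '@chakra-ui/react';

// Rendering the badge in a portal takes it out of the normal flex layout,
// so the surrounding layout no longer needs extra vertical padding for it.
export const QueueCountBadge = ({ count }: { count: number }) => {
  if (count === 0) {
    return null;
  }
  return (
    <Portal>
      {/* Positioned absolutely over the queue button; values are illustrative. */}
      <Badge pos="absolute" insetInlineEnd={2} top={-1} colorScheme="yellow">
        {count}
      </Badge>
    </Portal>
  );
};
```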
psychedelicious
64e60a7fde fix(ui): transparency effect not updating 2024-08-30 22:20:36 +10:00
psychedelicious
972f03960a feat(ui): tidy canvas toolbar buttons 2024-08-30 22:20:36 +10:00
psychedelicious
5a403f087d feat(ui): revised viewer toggle @joshistoast 2024-08-30 22:20:36 +10:00
psychedelicious
fe59d7f3b0 fix(ui): opacity reset value incorrect 2024-08-30 22:20:36 +10:00
psychedelicious
b2b2b73aed revert(ui): roll back flip, doesn't work with rotate yet 2024-08-30 22:20:36 +10:00
psychedelicious
20b563c4cb fix(ui): disable opacity slider fully when no valid entity selected 2024-08-30 22:20:36 +10:00
psychedelicious
263a0ef5b4 fix(ui): layer preview image sometimes not rendering
The canvas size was dynamic based on the container div's size. When the div was hidden (e.g. when selecting another tab), the container's effective size was 0. This resulted in the preview image canvas being drawn at a scale of 0.

Fixed by using an absolute size for the canvas container.
2024-08-30 22:20:36 +10:00
psychedelicious
e8723b7cd3 feat(ui): tweak regional prompt box styles 2024-08-30 22:20:36 +10:00
psychedelicious
03e05b2068 feat(ui): tweak enabled/locked toggle styles 2024-08-30 22:20:36 +10:00
psychedelicious
6c0482a71d feat(ui): tweak filter styling 2024-08-30 22:20:36 +10:00
psychedelicious
e6153e6fa4 feat(ui): add flip & reset to transform 2024-08-30 22:20:36 +10:00
psychedelicious
6d209c6cc3 tidy(ui): use helper to sync scaled bbox size on model change 2024-08-30 22:20:36 +10:00
psychedelicious
beb4e823dc fix(ui): randomize seed toggle linked to prompt concat 2024-08-30 22:20:36 +10:00
psychedelicious
61ba4c606b chore: release v4.2.9.dev5 2024-08-30 22:20:36 +10:00
psychedelicious
af840cedf3 chore(ui): lint 2024-08-30 22:20:36 +10:00
psychedelicious
0bf0bca03f feat(ui): generalize mask fill, add to action bar 2024-08-30 22:20:36 +10:00
psychedelicious
e470eaf8f3 feat(ui): implement interaction locking on layers 2024-08-30 22:20:36 +10:00
psychedelicious
377db3f726 feat(ui): iterate on layer actions
- Add lock toggle
- Tweak lock and enabled styles
- Update entity list action bar w/ delete & delete all
- Move add layer menu to action bar
- Adjust opacity slider style
2024-08-30 22:20:36 +10:00
psychedelicious
77f020a997 feat(ui): collapsible entity groups 2024-08-30 22:20:36 +10:00
psychedelicious
34e2eda625 tidy(ui): rename some classes to be consistent 2024-08-30 22:20:36 +10:00
psychedelicious
e1d559db69 feat(ui): tuned canvas undo/redo
- Throttle pushing to history for actions of the same type, starting with 1000ms throttle.
- History has a limit of 64 items, same as workflow editor
- Add clear history button
- Fix an issue where entity transformers would reset the entity state when the entity is fully transparent, resetting the redo stack. This could happen when you undo to the starting state of a layer
2024-08-30 22:20:36 +10:00
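A rough sketch of the throttling idea from the commit above: pushes of the same action type within 1000ms replace the latest history entry instead of growing the stack, and the stack is capped at 64 items. The shapes here are assumptions, not the actual canvas slice.
```ts
type HistoryEntry<T> = { state: T; actionType: string; pushedAt: number };

const HISTORY_LIMIT = 64;
const THROTTLE_MS = 1000;

const pushHistory = <T>(
  stack: HistoryEntry<T>[],
  state: T,
  actionType: string,
  now = Date.now()
): HistoryEntry<T>[] => {
  const last = stack[stack.length - 1];
  // Same action type within the throttle window: replace the last entry.
  if (last && last.actionType === actionType && now - last.pushedAt < THROTTLE_MS) {
    return [...stack.slice(0, -1), { state, actionType, pushedAt: now }];
  }
  // Otherwise push, dropping the oldest entries beyond the limit.
  return [...stack, { state, actionType, pushedAt: now }].slice(-HISTORY_LIMIT);
};
```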
psychedelicious
23a98e2ed6 tidy(ui): move all undoable reducers back to canvas slice 2024-08-30 22:20:36 +10:00
psychedelicious
fe3b2ed357 fix(ui): dnd image count 2024-08-30 22:20:36 +10:00
psychedelicious
eedf81dcc5 fix(ui): canvas entity opacity scale 2024-08-30 22:20:36 +10:00
psychedelicious
dbef1a9e06 perf(ui): optimize all selectors 2
Mostly selector optimization. Still a few places to tidy up but I'll get to that later.
2024-08-30 22:20:36 +10:00
psychedelicious
a41406ca9a perf(ui): optimize all selectors 1
I learned that the inline selector syntax recreates the selector function on every render:

```ts
const val = useAppSelector((s) => s.slice.val)
```

Not good! Better is to create a selector outside the function and use it. Doing that for all selectors now, most of the way through now. Feels snappier.
2024-08-30 22:20:12 +10:00
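A sketch of the pattern the commit above describes: define the selector once at module scope (optionally memoized with `createSelector`) so its reference is stable across renders. The slice shape is hypothetical.
```ts
import { createSelector } from '@reduxjs/toolkit';

// Hypothetical state shape for illustration.
type RootState = { canvas: { scale: number; entities: { id: string }[] } };

// Created once at module scope - a stable function reference across renders.
const selectScale = (s: RootState) => s.canvas.scale;

// Memoized selector for derived data, recomputed only when its inputs change.
const selectEntityCount = createSelector(
  (s: RootState) => s.canvas.entities,
  (entities) => entities.length
);

// In a component: const scale = useAppSelector(selectScale);
```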
psychedelicious
f126a61f66 feat(ui): rough out undo/redo on canvas 2024-08-30 22:20:12 +10:00
psychedelicious
89c79276f3 chore: release v4.2.9.dev4
Canvas dev build.
2024-08-30 22:20:12 +10:00
psychedelicious
423e463b95 fix(ui): handle error from internal konva method
We are dipping into konva's private API for preview images and it appears to be unsafe (got an error once). Wrapped in a try/catch.
2024-08-30 22:20:12 +10:00
psychedelicious
52202e45de feat(ui): split out loras state from canvas rendering state 2024-08-30 22:20:12 +10:00
psychedelicious
100832c66d feat(ui): split out session state from canvas rendering state 2024-08-30 22:20:12 +10:00
psychedelicious
a58b91b221 feat(ui): split out settings state from canvas rendering state 2024-08-30 22:20:12 +10:00
psychedelicious
3af6d79852 feat(ui): split out tool state from canvas rendering state 2024-08-30 22:20:12 +10:00
psychedelicious
1303e18e93 feat(ui): split out params/compositing state from canvas rendering state
First step to restoring undo/redo - the undoable state must be in its own slice. So params and settings must be isolated.
2024-08-30 22:20:12 +10:00
psychedelicious
301da97670 feat(ui): add CanvasModuleBase class to standardize canvas APIs
I did this ages ago but undid it for some reason, not sure why. Caught a few issues related to subscriptions.
2024-08-30 22:20:12 +10:00
psychedelicious
17e76981bb feat(ui): move selected tool and tool buffer out of redux
This ephemeral state can live in the canvas classes.
2024-08-30 22:20:12 +10:00
psychedelicious
9c1732e2bb feat(ui): move ephemeral state into canvas classes
Things like `$lastCursorPos` are now created within the canvas drawing classes. Consumers in react access them via `useCanvasManager`.

For example:
```tsx
const canvasManager = useCanvasManager();
const lastCursorPos = useStore(canvasManager.stateApi.$lastCursorPos);
```
2024-08-30 22:20:12 +10:00
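The producer side might look roughly like this, assuming the ephemeral values are nanostores atoms owned by the canvas class; the class and field names are illustrative.
```ts
import { atom } from 'nanostores';

type Coordinate = { x: number; y: number };

// Illustrative stand-in for the canvas state API class.
class CanvasStateApi {
  // Ephemeral, render-loop state lives on the class instead of in redux.
  $lastCursorPos = atom<Coordinate | null>(null);

  onPointerMove = (pos: Coordinate) => {
    this.$lastCursorPos.set(pos);
  };
}
```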
psychedelicious
a3179e7a3f feat(ui): normalize all actions to accept an entityIdentifier
Previously, canvas actions specific to an entity type only needed the id of that entity type. This allowed you to pass in the id of an entity of the wrong type.

All actions for a specific entity now take a full entity identifier, and the entity identifier type can be narrowed.

`selectEntity` and `selectEntityOrThrow` now need a full entity identifier, and narrow their return values to a specific entity type _if_ the entity identifier is narrowed.

The types for canvas entities are updated with optional type parameters for this purpose.

All reducers, actions and components have been updated.
2024-08-30 22:20:12 +10:00
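A sketch of the narrowing the commit above describes: the entity identifier carries the entity type, so helpers can narrow their return type when the identifier is narrowed. The names and type values are illustrative.
```ts
type CanvasEntityType = 'raster_layer' | 'control_layer' | 'inpaint_mask' | 'regional_guidance';

// The identifier carries the entity type; a narrowed T narrows everything downstream.
type CanvasEntityIdentifier<T extends CanvasEntityType = CanvasEntityType> = { id: string; type: T };

type CanvasEntityState<T extends CanvasEntityType = CanvasEntityType> = { id: string; type: T; isEnabled: boolean };

const selectEntity = <T extends CanvasEntityType>(
  entities: CanvasEntityState[],
  identifier: CanvasEntityIdentifier<T>
): CanvasEntityState<T> | undefined =>
  entities.find((e) => e.id === identifier.id && e.type === identifier.type) as CanvasEntityState<T> | undefined;
```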
psychedelicious
f86b50d18a feat(ui): move events into modules who care about them 2024-08-30 22:20:12 +10:00
psychedelicious
307885f505 fix(ui): color picker resets brush opacity 2024-08-30 22:20:12 +10:00
psychedelicious
4b49c1dd6b fix(ui): scaled bbox loses sync 2024-08-30 22:20:12 +10:00
psychedelicious
f917cefa84 feat(ui): add context menu to entity list 2024-08-30 22:20:12 +10:00
psychedelicious
bea98438fc chore(ui): bump @invoke-ai/ui-library 2024-08-30 22:20:12 +10:00
psychedelicious
17d3275086 fix(ui): missing vae precision in graph builders 2024-08-30 22:20:12 +10:00
psychedelicious
059b7a0fcf chore: release v4.2.9.dev3
Instead of using dates, just going to increment.
2024-08-30 22:20:12 +10:00
psychedelicious
05d3a989f6 feat(ui): use new Result utils for enqueueing 2024-08-30 22:20:12 +10:00
psychedelicious
590ae70c12 fix(ui): graph building issue w/ controlnet 2024-08-30 22:20:12 +10:00
psychedelicious
5240ec6e6f feat(ui): add Result type & helpers
Wrappers to capture errors and turn into results:
- `withResult` wraps a sync function
- `withResultAsync` wraps an async function

Comments, tests.
2024-08-30 22:20:12 +10:00
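A minimal sketch of what such a Result type and wrappers could look like; the actual implementation in the commit may differ.
```ts
type Result<T, E = Error> = { type: 'success'; value: T } | { type: 'error'; error: E };

// Wraps a sync function, capturing any thrown error into a Result.
const withResult = <T>(fn: () => T): Result<T> => {
  try {
    return { type: 'success', value: fn() };
  } catch (error) {
    return { type: 'error', error: error instanceof Error ? error : new Error(String(error)) };
  }
};

// Async variant: awaits the function and captures rejections.
const withResultAsync = async <T>(fn: () => Promise<T>): Promise<Result<T>> => {
  try {
    return { type: 'success', value: await fn() };
  } catch (error) {
    return { type: 'error', error: error instanceof Error ? error : new Error(String(error)) };
  }
};
```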
psychedelicious
04772b642c chore: release v4.2.9.dev20240824 2024-08-30 22:20:12 +10:00
psychedelicious
65f6cb416f fix(ui): lint & fix issues with adding regional ip adapters 2024-08-30 22:20:12 +10:00
psychedelicious
24c2028739 feat(ui): add knipignore tag
I'm not ready to delete some things but still want to build the app.
2024-08-30 22:20:12 +10:00
psychedelicious
b0db9a3f56 feat(ui): duplicate entity 2024-08-30 22:20:12 +10:00
psychedelicious
3ea83574c0 feat(ui): autocomplete on getPrefixeId 2024-08-30 22:20:12 +10:00
psychedelicious
05252a9bfc feat(ui): paste canvas gens back on source in generate mode 2024-08-30 22:20:12 +10:00
psychedelicious
ce854f086e chore(ui): typegen 2024-08-30 22:20:12 +10:00
psychedelicious
ff0c16978c feat(nodes): CanvasV2MaskAndCropInvocation can paste generated image back on source
This is needed for `Generate` mode.
2024-08-30 22:20:12 +10:00
psychedelicious
41cc650031 fix(ui): extraneous entity preview updates 2024-08-30 22:20:12 +10:00
psychedelicious
c3f7554053 fix(ui): newly-added entities are selected 2024-08-30 22:20:12 +10:00
psychedelicious
3f597a1c60 feat(ui): add crosshair to color picker 2024-08-30 22:20:12 +10:00
psychedelicious
ccffdf1878 fix(ui): color picker ignores alpha 2024-08-30 22:20:12 +10:00
psychedelicious
474089e892 fix(ui): calculate renderable entities correctly in tool module 2024-08-30 22:20:12 +10:00
psychedelicious
778e8ad161 feat(ui): better color picker 2024-08-30 22:20:12 +10:00
psychedelicious
9f29892c24 feat(ui): colored mask preview image 2024-08-30 22:20:12 +10:00
psychedelicious
56fd46a069 fix(ui): new rectangles don't trigger rerender 2024-08-30 22:20:12 +10:00
psychedelicious
579e594861 chore: bump version v4.2.9.dev20240823 2024-08-30 22:20:12 +10:00
psychedelicious
af3440fbe3 feat(ui): disable most interaction while filtering 2024-08-30 22:19:54 +10:00
psychedelicious
cc101f55c4 fix(ui): filter preview offset 2024-08-30 22:19:54 +10:00
psychedelicious
ef1adf07f5 feat(ui): tweak layout of staging area toolbar 2024-08-30 22:19:54 +10:00
psychedelicious
625c05d9be chore(ui): typegen 2024-08-30 22:19:54 +10:00
psychedelicious
8ad3d8f738 tidy(app): clean up app changes for canvas v2 2024-08-30 22:19:54 +10:00
psychedelicious
4759875733 feat(ui): use singleton for clear q confirm dialog 2024-08-30 22:19:54 +10:00
psychedelicious
768e6a3c55 fix(ui): rip out broken recall logic, NO TS ERRORS 2024-08-30 22:19:54 +10:00
psychedelicious
45bd85c039 chore(ui): lint 2024-08-30 22:19:54 +10:00
psychedelicious
9f94c5a8bd fix(ui): staging area interaction scopes 2024-08-30 22:19:54 +10:00
psychedelicious
23fdd65961 fix(ui): staging area actions 2024-08-30 22:19:54 +10:00
psychedelicious
8034195c30 tidy(ui): more cleanup 2024-08-30 22:19:54 +10:00
psychedelicious
08761127c9 fix(ui): upscale tab graph 2024-08-30 22:19:54 +10:00
psychedelicious
4a10010b6c fix(ui): sdxl graph builder 2024-08-30 22:19:54 +10:00
psychedelicious
14cc5e2453 fix(ui): select next entity in the list when deleting 2024-08-30 22:19:54 +10:00
psychedelicious
3d87adea60 feat(ui): fix delete layer hotkey 2024-08-30 22:19:54 +10:00
psychedelicious
36e8232ab6 tidy(ui): "eye dropper" -> "color picker" 2024-08-30 22:19:54 +10:00
psychedelicious
72722a73be tidy(ui): regional guidance buttons 2024-08-30 22:19:54 +10:00
psychedelicious
a09aa232a9 feat(ui): update entity list menu 2024-08-30 22:19:54 +10:00
psychedelicious
7ae8b64699 feat(ui): add log debug button 2024-08-30 22:19:54 +10:00
psychedelicious
60e0d17f34 chore(ui): lint 2024-08-30 22:19:54 +10:00
psychedelicious
bf8bef2f00 chore(ui): prettier 2024-08-30 22:19:54 +10:00
psychedelicious
b586d67bac chore(ui): eslint 2024-08-30 22:19:54 +10:00
psychedelicious
31e5e5af13 tidy(ui): remove unused stuff 4 2024-08-30 22:19:35 +10:00
psychedelicious
94871e88cd tidy(ui): remove unused stuff 3 2024-08-30 22:18:50 +10:00
psychedelicious
00e56d1968 tidy(ui): remove unused pkg @chakra-ui/react-use-size 2024-08-30 22:18:50 +10:00
psychedelicious
43672a53ab feat(ui): revise graph building for control layers, fix issues w/ invocation complete events 2024-08-30 22:18:50 +10:00
psychedelicious
45097ed2a6 feat(ui): use unique id for metadata in Graph class 2024-08-30 22:18:50 +10:00
psychedelicious
871f6b9f95 tidy(ui): remove unused stuff 2 2024-08-30 22:18:50 +10:00
psychedelicious
e6476e3c75 tidy(ui): remove unused stuff 2024-08-30 22:18:50 +10:00
psychedelicious
ac9b5f246d tidy(ui): reduce use of parseify util 2024-08-30 22:18:50 +10:00
psychedelicious
8bc72a2744 feat(ui): refine canvas entity list items & menus 2024-08-30 22:18:50 +10:00
psychedelicious
f76f1d89d7 feat(ui): canvas layer preview, revised reactivity for adapters 2024-08-30 22:18:50 +10:00
psychedelicious
7b54762b5e feat(ui): add SyncableMap
Can be used with `useSyncExternalStore` to make a `Map` reactive.
2024-08-30 22:18:50 +10:00
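A rough sketch of how a `SyncableMap` can satisfy the `useSyncExternalStore` contract - subscribable, with a snapshot whose identity only changes when the map mutates. The real class likely differs in detail.
```ts
class SyncableMap<K, V> extends Map<K, V> {
  private listeners = new Set<() => void>();
  private snapshot: ReadonlyMap<K, V> | null = null;

  // Bound arrows so they can be passed straight to useSyncExternalStore.
  subscribe = (listener: () => void): (() => void) => {
    this.listeners.add(listener);
    return () => this.listeners.delete(listener);
  };

  // Stable reference between mutations, fresh reference after one.
  getSnapshot = (): ReadonlyMap<K, V> => {
    if (!this.snapshot) {
      this.snapshot = new Map(this);
    }
    return this.snapshot;
  };

  set(key: K, value: V): this {
    super.set(key, value);
    this.invalidate();
    return this;
  }

  delete(key: K): boolean {
    const deleted = super.delete(key);
    this.invalidate();
    return deleted;
  }

  private invalidate() {
    this.snapshot = null;
    // Optional chaining guards the case where the base Map constructor
    // calls set() before field initializers have run.
    this.listeners?.forEach((listener) => listener());
  }
}

// In a component: const entries = useSyncExternalStore(map.subscribe, map.getSnapshot);
```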
psychedelicious
bc6faf6a6d tidy(ui): removed unused transform methods from canvasmanager 2024-08-30 22:18:50 +10:00
psychedelicious
e7ae1ac9b2 feat(ui): transform tool ux 2024-08-30 22:18:50 +10:00
psychedelicious
dcb436adb1 feat(ui): rough out canvas mode 2024-08-30 22:18:50 +10:00
psychedelicious
80f0441905 feat(ui): add canvas autosave checkbox 2024-08-30 22:18:50 +10:00
psychedelicious
8cde803654 fix(ui): memory leak when getting image DTO
must unsubscribe!
2024-08-30 22:18:50 +10:00
psychedelicious
62445680ad feat(ui): rework settings menu 2024-08-30 22:18:50 +10:00
psychedelicious
7685e36886 feat(ui): no entities fallback buttons 2024-08-30 22:18:50 +10:00
psychedelicious
4c196844bd perf(ui): optimize gallery image delete button rendering 2024-08-30 22:18:50 +10:00
psychedelicious
b36159bda4 feat(ui): remove "solid" background option 2024-08-30 22:18:50 +10:00
psychedelicious
b02948d49a tidy(ui): organise files and classes 2024-08-30 22:18:50 +10:00
psychedelicious
f442d206be tidy(ui): abstract compositing logic to module 2024-08-30 22:18:50 +10:00
psychedelicious
21ed6bccd8 fix(ui): fix canvas cache property access 2024-08-30 22:18:50 +10:00
psychedelicious
143ce7f00b tidy(ui): clean up CanvasFilter class 2024-08-30 22:18:50 +10:00
psychedelicious
28e716139b tidy(ui): clean up a few bits and bobs 2024-08-30 22:18:50 +10:00
psychedelicious
80a7c0c521 tidy(ui): abstract canvas rendering logic to module 2024-08-30 22:18:50 +10:00
psychedelicious
255ad3d2ad tidy(ui): abstract caching logic to module 2024-08-30 22:18:50 +10:00
psychedelicious
089bc9c7d8 tidy(ui): abstract worker logic to module 2024-08-30 22:18:50 +10:00
psychedelicious
ee7dafaf57 tidy(ui): abstract stage logic into module 2024-08-30 22:18:50 +10:00
psychedelicious
516ecdb0ee feat(ui): add entity group hiding 2024-08-30 22:18:50 +10:00
psychedelicious
b77675f74d feat(ui): move all caching out of redux
While we lose the benefit of the caches persisting across reloads, this is a much simpler way to handle things. If we need a persistent cache, we can explore it in the future.
2024-08-30 22:18:50 +10:00
psychedelicious
eea5c8efad feat(ui): revised rasterization caching
- use `stable-hash` to generate stable, non-crypto hashes for cache entries, instead of using deep object comparisons
- use an object to store image name caches
2024-08-30 22:18:50 +10:00
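A sketch of the caching idea, assuming `stable-hash`'s default export (a deterministic, non-cryptographic hash of any JS value) keys the image-name cache; the cache entry shape is an assumption.
```ts
import hash from 'stable-hash';

// Hypothetical shape of whatever state produces a rasterized image.
type RasterizationArgs = { entityId: string; rect: { x: number; y: number; width: number; height: number } };

// Plain object keyed by stable hash instead of deep-comparing cache entries.
const imageNameCache: Record<string, string> = {};

const getCachedImageName = (args: RasterizationArgs): string | undefined => imageNameCache[hash(args)];

const setCachedImageName = (args: RasterizationArgs, imageName: string): void => {
  imageNameCache[hash(args)] = imageName;
};
```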
psychedelicious
09f1aac3a3 feat(ui): revise filter implementation 2024-08-30 22:18:50 +10:00
psychedelicious
dd1dcb5eba fix(ui): add button to delete inpaint mask 2024-08-30 22:18:50 +10:00
psychedelicious
757bd62ebe feat(ui): add contexts/hooks to access entity adapters directly 2024-08-30 22:18:50 +10:00
psychedelicious
5a3127949b feat(ui): add CanvasManagerProviderGate
This context waits to render its children until the canvas manager is available. Then its children have access to the manager directly via hook.
2024-08-30 22:18:50 +10:00
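A minimal sketch of the gate pattern described above, assuming the manager is published via a nanostores atom; the component, store, and hook names are illustrative.
```tsx
import { useStore } from '@nanostores/react';
import { atom } from 'nanostores';
import { createContext, memo, useContext, type PropsWithChildren } from 'react';

// Illustrative stand-ins for the real manager class and its global atom.
type CanvasManager = { stateApi: unknown };
export const $canvasManager = atom<CanvasManager | null>(null);

const CanvasManagerContext = createContext<CanvasManager | null>(null);

export const CanvasManagerProviderGate = memo(({ children }: PropsWithChildren) => {
  const canvasManager = useStore($canvasManager);
  // Render nothing until the manager exists - children can then assume it is always available.
  if (!canvasManager) {
    return null;
  }
  return <CanvasManagerContext.Provider value={canvasManager}>{children}</CanvasManagerContext.Provider>;
});
CanvasManagerProviderGate.displayName = 'CanvasManagerProviderGate';

export const useCanvasManager = (): CanvasManager => {
  const manager = useContext(CanvasManagerContext);
  if (!manager) {
    throw new Error('useCanvasManager must be used within CanvasManagerProviderGate');
  }
  return manager;
};
```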
psychedelicious
ced934c0a3 feat(ui) do not set $canvasManager until ready 2024-08-30 22:18:50 +10:00
psychedelicious
c32445084f fix(ui): inpaint mask naming 2024-08-30 22:18:50 +10:00
psychedelicious
9f1af0cdaa feat(ui): efficient canvas compositing
Also solves an issue where layers were exported at different opacities than what is visible.
2024-08-30 22:18:50 +10:00
psychedelicious
0d26cab400 feat(ui): allow multiple inpaint masks
This is easier than making it a nullable singleton
2024-08-30 22:18:50 +10:00
psychedelicious
c8de2da3fc fix(ui): missing rasterization cache invalidations 2024-08-30 22:18:50 +10:00
psychedelicious
ca089a105e feat(ui): iterate on filter UI, flow 2024-08-30 22:18:50 +10:00
psychedelicious
22000918d6 fix(ui): rehydration data loss 2024-08-30 22:18:50 +10:00
psychedelicious
6affc28da4 feat(ui): sort log namespaces 2024-08-30 22:18:50 +10:00
psychedelicious
f659995e1c fix(ui): do not merge arrays by index during rehydration 2024-08-30 22:18:50 +10:00
psychedelicious
56fb3e738f fix(ui): clone parsed data during state rehydration
Without this, the objects and arrays in `parsed` could be mutated, and the log statement would show the mutated data.
2024-08-30 22:18:50 +10:00
psychedelicious
56d450a907 fix(ui): fix logger filter
was accidentally replacing the filter instead of appending to it.
2024-08-30 22:18:50 +10:00
psychedelicious
d3cdcef36b fix(ui): race condition queue status
Sequence of events causing the race condition:
- Enqueue batch
- Invalidate `SessionQueueStatus` tag
- Request updated queue status via HTTP - batch still processing at this point
- Batch completes
- Event emitted saying so
- Optimistically update the queue status cache, it is correct
- HTTP request makes it back and overwrites the optimistic update, indicating the batch is still in progress

Fixed by not invalidating the cache.
2024-08-30 22:18:50 +10:00
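A sketch of the direction of the fix: on the socket event, patch the cached queue status in place rather than invalidating the tag, so a slow HTTP refetch can no longer overwrite newer data. The endpoint name and payload shape are assumptions, with minimal stand-in types for the store and RTK Query api slice.
```ts
type QueueStatus = { in_progress: number; pending: number; completed: number };

// Minimal stand-ins for the pieces of the store and api slice this sketch touches.
type QueueApiUtil = {
  updateQueryData: (
    endpoint: 'getQueueStatus',
    arg: undefined,
    recipe: (draft: QueueStatus) => void
  ) => unknown;
};
type AppStore = { dispatch: (action: unknown) => void };

// Patch the cache directly instead of invalidating the SessionQueueStatus tag.
const onQueueStatusEvent = (store: AppStore, util: QueueApiUtil, status: QueueStatus) => {
  store.dispatch(
    util.updateQueryData('getQueueStatus', undefined, (draft) => {
      Object.assign(draft, status);
    })
  );
};
```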
psychedelicious
19434e73b4 fix(ui): handle opacity for masks 2024-08-30 22:18:50 +10:00
psychedelicious
f7b3df9583 feat(ui): default background to checkerboard 2024-08-30 22:18:50 +10:00
psychedelicious
4da4b3bd50 feat(ui): clean up logging namespaces, allow skipping namespaces 2024-08-30 22:18:50 +10:00
psychedelicious
e83513882a chore(ui): bump ui library 2024-08-30 22:18:50 +10:00
psychedelicious
5adc784b6b fix(ui): do not allow drawing if layer disabled 2024-08-30 22:18:50 +10:00
psychedelicious
f177513523 fix(ui): stale state causing race conditions & extraneous renders 2024-08-30 22:18:50 +10:00
psychedelicious
8ebcf79b1a fix(ui): do not clear buffer when rendering "real" objects 2024-08-30 22:18:50 +10:00
psychedelicious
c7e5f24704 tidy(ui): remove "filter" from CanvasImageState 2024-08-30 22:18:50 +10:00
psychedelicious
ab3eb32ec8 feat(ui): better editable title 2024-08-30 22:18:50 +10:00
psychedelicious
d76509e5cb fix(ui): stroke eraserline 2024-08-30 22:18:50 +10:00
psychedelicious
04f56aab82 feat(ui): restore transparency effect for control layers 2024-08-30 22:18:50 +10:00
psychedelicious
c7913cbbbb feat(ui): use text cursor for entity title 2024-08-30 22:18:50 +10:00
psychedelicious
0556468518 tidy(ui): remove extraneous logging in CanvasStateApi 2024-08-30 22:18:49 +10:00
psychedelicious
1c7ef827b6 feat(ui): better buffer commit logic 2024-08-30 22:18:49 +10:00
psychedelicious
5720ed4d64 feat(ui): render buffer separately from "real" objects 2024-08-30 22:18:49 +10:00
psychedelicious
7f05af4a68 fix(ui): pixelRect should always be integer 2024-08-30 22:18:49 +10:00
psychedelicious
6db615ed5a fix(ui): only update stage attrs when stage itself is dragged 2024-08-30 22:18:49 +10:00
psychedelicious
465f020c86 feat(ui): add line simplification
This fixes some awkward issues where line segments stack up.
2024-08-30 22:18:49 +10:00
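One simple way to sketch the simplification idea from the commit above: drop points that fall within a small tolerance of the previous kept point, so rapid pointer events don't stack near-identical segments. The tolerance value is illustrative and the real implementation may use a different algorithm.
```ts
type Point = { x: number; y: number };

// Drop points closer than `tolerance` pixels to the last kept point.
const simplifyLine = (points: Point[], tolerance = 1.5): Point[] => {
  if (points.length === 0) {
    return [];
  }
  const simplified: Point[] = [points[0]];
  for (const point of points.slice(1)) {
    const last = simplified[simplified.length - 1];
    if (Math.hypot(point.x - last.x, point.y - last.y) >= tolerance) {
      simplified.push(point);
    }
  }
  return simplified;
};
```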
psychedelicious
f05b77088f fix(ui): various things listening when they need not listen 2024-08-30 22:18:49 +10:00
psychedelicious
80a5abf1ad feat(ui): layer opacity via caching 2024-08-30 22:18:49 +10:00
psychedelicious
7a6e8de60f feat(ui): reset view fits all visible objects 2024-08-30 22:18:49 +10:00
psychedelicious
8364fa74cf fix(ui): rerenders when changing canvas scale 2024-08-30 22:18:49 +10:00
psychedelicious
14f4566dd0 fix(ui): do not render rasterized layer unless renderObjects=true 2024-08-30 22:18:49 +10:00
psychedelicious
6145378923 feat(ui): revise app layout strategy, add interaction scopes for hotkeys 2024-08-30 22:18:49 +10:00
psychedelicious
68e2606427 feat(ui): tweak mask patterns 2024-08-30 22:18:49 +10:00
psychedelicious
0f3eb04d1a fix(ui): dynamic prompts recalcs when presets are loaded 2024-08-30 22:18:49 +10:00
psychedelicious
4a355323b2 fix(ui): use style preset prompts correctly 2024-08-30 22:18:49 +10:00
psychedelicious
8601fbb4ea fix(ui): discard selected staging image not all other images 2024-08-30 22:18:49 +10:00
psychedelicious
db885aa180 fix(ui): respect image size in staging preview 2024-08-30 22:18:49 +10:00
psychedelicious
c18fb980a2 tidy(ui): cleanup after events change 2024-08-30 22:18:49 +10:00
psychedelicious
b630dbdf20 feat(ui): move socket event handling out of redux
Download events and invocation status events (including progress images) are very frequent. There's no real need for these to pass through redux. Handling them outside redux is a significant performance win - far fewer store subscription calls, far fewer trips through middleware.

All event handling is moved outside middleware. Cleanup of unused actions and listeners to follow.
2024-08-30 22:18:49 +10:00
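A sketch of the approach described above: register socket.io listeners once, outside any middleware, and only touch the redux store when a state change is actually needed. The event names, payloads, and stores below are assumptions.
```ts
import { io, type Socket } from 'socket.io-client';
import { atom } from 'nanostores';

// Progress images are very frequent, so they go to a nanostores atom that only
// the components which care about them subscribe to - no redux round trip.
export const $progressImage = atom<{ dataURL: string } | null>(null);

export const setupSocket = (dispatch: (action: unknown) => void): Socket => {
  const socket = io();
  socket.on('invocation_progress', (payload: { image?: { dataURL: string } }) => {
    $progressImage.set(payload.image ?? null);
  });
  socket.on('queue_cleared', () => {
    // Only events that genuinely change app state still dispatch to redux.
    dispatch({ type: 'queue/cleared' });
  });
  return socket;
};
```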
psychedelicious
29ac1b5e01 fix(ui): rebase conflicts 2024-08-30 22:18:49 +10:00
psychedelicious
506d3b079e fix(ui): update compositing rect when fill changes 2024-08-30 22:18:49 +10:00
psychedelicious
0670e6b53a feat(ui): add canvas background style 2024-08-30 22:18:49 +10:00
psychedelicious
76124ea35b feat(ui): mask layers choose own opacity 2024-08-30 22:18:49 +10:00
psychedelicious
6eae3470cd feat(ui): mask fill patterns 2024-08-30 22:18:49 +10:00
psychedelicious
c7ba7ac876 build(ui): add vite types to tsconfig 2024-08-30 22:18:49 +10:00
psychedelicious
edc733abd9 fix(ui): do not smooth pixel data when using eyeDropper 2024-08-30 22:18:49 +10:00
psychedelicious
a56ded664e tidy(ui): tool components & translations 2024-08-30 22:18:49 +10:00
psychedelicious
31ace5fb0c feat(ui): rough out eyedropper tool
It's a bit slow because we are converting the stage to canvas on every mouse move. Also need to improve the visuals, but it works.
2024-08-30 22:18:49 +10:00
psychedelicious
11010236b3 fix(ui): ip adapters work 2024-08-30 22:18:49 +10:00
psychedelicious
5f061ac1e2 feat(ui): rename layers 2024-08-30 22:18:49 +10:00
psychedelicious
72919fa34e feat(ui): revise entity menus 2024-08-30 22:18:49 +10:00
psychedelicious
d5ca99fc3c feat(ui): split control layers from raster layers for UI and internal state, same rendering as raster layers 2024-08-30 22:18:49 +10:00
psychedelicious
e49b72ee4e feat(ui): implement cache for image rasterization, rip out some old controladapters code 2024-08-30 22:18:49 +10:00
psychedelicious
abe8db8154 feat(ui, app): use layer as control (wip) 2024-08-30 22:18:49 +10:00
psychedelicious
e0e5941384 feat(ui): add contextmenu for canvas entities 2024-08-30 22:18:49 +10:00
psychedelicious
86e1f4e8b0 feat(ui): more better logging & naming 2024-08-30 22:18:49 +10:00
psychedelicious
447d873ef0 feat(ui): better logging w/ path 2024-08-30 22:18:49 +10:00
psychedelicious
b21d613ce4 feat(ui): always show marks on canvas scale slider 2024-08-30 22:18:49 +10:00
psychedelicious
fc91adb32f fix(ui): do not import button from chakra 2024-08-30 22:18:49 +10:00
psychedelicious
71885db5fd fix(ui): scaled bbox preview 2024-08-30 22:18:49 +10:00
psychedelicious
b88d14b3df feat(ui): tidy up atoms 2024-08-30 22:18:49 +10:00
psychedelicious
d98d35a8a8 feat(ui): convert all my pubsubs to atoms
it's the same but better
2024-08-30 22:18:49 +10:00
psychedelicious
87bc0ebd73 feat(ui): add translation 2024-08-30 22:18:49 +10:00
psychedelicious
7b6ba3f690 fix(ui): give up on thumbnail loading, causes flash during transformer 2024-08-30 22:18:49 +10:00
psychedelicious
b0d8948428 fix(ui): depth anything v2 2024-08-30 22:18:49 +10:00
psychedelicious
b32d681cee tidy(ui): remove unused code, comments 2024-08-30 22:18:49 +10:00
psychedelicious
11a66d1d09 fix(ui): staging area works 2024-08-30 22:18:49 +10:00
psychedelicious
e41987f08c feat(nodes): temp disable canvas output crop 2024-08-30 22:18:49 +10:00
psychedelicious
34b57ec188 fix(ui): max scale 1 when reset view 2024-08-30 22:18:49 +10:00
psychedelicious
d74843be31 feat(ui): better scale changer component, reset view functionality 2024-08-30 22:18:49 +10:00
psychedelicious
1216c6f9c9 fix(ui): img2img 2024-08-30 22:18:49 +10:00
psychedelicious
865b6017d3 feat(ui): add manual scale controls 2024-08-30 22:18:49 +10:00
psychedelicious
922a021821 fix(ui): do not await clearBuffer 2024-08-30 22:18:49 +10:00
psychedelicious
0b5f4cac57 feat(ui): dnd image into layer 2024-08-30 22:18:49 +10:00
psychedelicious
c988c58c63 fix(ui): do not await commitBuffer 2024-08-30 22:18:49 +10:00
psychedelicious
ceb8cbf59e fix(ui): properly destroy entities in manager cleanup 2024-08-30 22:18:49 +10:00
psychedelicious
52e9f43c46 tidy(ui): clearer component names for regional guidance 2024-08-30 22:18:49 +10:00
psychedelicious
4e5e7761fc tidy(ui): clearer component names for ip adapter 2024-08-30 22:18:49 +10:00
psychedelicious
9879999a65 tidy(ui): clearer component names for inpaint mask 2024-08-30 22:18:49 +10:00
psychedelicious
bedaca70a3 tidy(ui): clearer component names for control adapters 2024-08-30 22:18:49 +10:00
psychedelicious
2dd2225d2e feat(ui): simplify canvas list item headers 2024-08-30 22:18:49 +10:00
psychedelicious
d82031eec1 fix(ui): ip adapter list item 2024-08-30 22:18:49 +10:00
psychedelicious
e5f2860b74 tidy(ui): clean up unused logic 2024-08-30 22:18:49 +10:00
psychedelicious
fa3560bb61 feat(ui): clean up state, add mutex for image loading, add thumbnail loading 2024-08-30 22:18:49 +10:00
psychedelicious
9b23f6ce30 chore(ui): add async-mutex dep 2024-08-30 22:18:49 +10:00
psychedelicious
5d6aa6cfd5 feat(ui): txt2img, img2img, inpaint & outpaint working 2024-08-30 22:18:49 +10:00
psychedelicious
7d1819335f feat(ui): no padding on transformer outlines 2024-08-30 22:18:49 +10:00
psychedelicious
539e7a3f2d feat(ui): restore object count to layer titles 2024-08-30 22:18:49 +10:00
psychedelicious
1686924ac8 tidy(ui): "useIsEntitySelected" -> "useEntityIsSelected" 2024-08-30 22:18:49 +10:00
psychedelicious
556c1dc67b tidy(ui): move transformer statics into class 2024-08-30 22:18:49 +10:00
psychedelicious
00f7093e65 tidy(ui): massive cleanup
- create a context for entity identifiers, massively simplifying UI for each entity in the list
- consolidate common redux actions
- remove now-unused code
2024-08-30 22:18:49 +10:00
psychedelicious
79eb11dce9 perf(ui): do not add duplicate points to lines 2024-08-30 22:18:49 +10:00
psychedelicious
0bf48c0d41 feat(ui): up line tension to 0.3 2024-08-30 22:18:49 +10:00
psychedelicious
3f33e5f770 perf(ui): disable stroke, perfect draw on compositing rect 2024-08-30 22:18:49 +10:00
psychedelicious
da3888ba9e tidy(ui): remove unused code, initial image 2024-08-30 22:18:49 +10:00
psychedelicious
a2f91b1055 tidy(ui): remove unused state & actions 2024-08-30 22:18:49 +10:00
psychedelicious
d26095dfa1 feat(ui): region mask rendering 2024-08-30 22:18:49 +10:00
psychedelicious
83e786bd1e feat(ui): esc cancels drawing buffer
maybe this is not wanted? we'll see
2024-08-30 22:18:49 +10:00
psychedelicious
4cae12a507 fix(ui): render transformer over objects, fix issue w/ inpaint rect color 2024-08-30 22:18:49 +10:00
psychedelicious
d8e3708e0f fix(ui): brush preview fill for inpaint/region 2024-08-30 22:18:49 +10:00
psychedelicious
f4de2fd3b1 fix(ui): no objects rendered until vis toggled 2024-08-30 22:18:49 +10:00
psychedelicious
e1cb30bbb4 feat(ui): inpaint mask transform 2024-08-30 22:18:49 +10:00
psychedelicious
97e0edc549 fix(ui): layer accidental early set isFirstRender=false 2024-08-30 22:18:49 +10:00
psychedelicious
f4e66bf14f fix(ui): inpaint mask rendering 2024-08-30 22:18:49 +10:00
psychedelicious
a6a7fe8aba feat(ui): wip inpaint mask uses new API 2024-08-30 22:18:49 +10:00
psychedelicious
a273f72560 feat(ui): move updatePosition to transformer 2024-08-30 22:18:49 +10:00
psychedelicious
b5126f45d6 feat(ui): move resetScale to transformer 2024-08-30 22:18:49 +10:00
psychedelicious
ba3bb7cbf3 tidy(ui): more imperative naming 2024-08-30 22:18:49 +10:00
psychedelicious
608279487b tidy(ui): use imperative names for setters in stateapi 2024-08-30 22:18:49 +10:00
psychedelicious
72b5374916 fix(ui): commit drawing buffer on tool change, fixing bbox not calculating 2024-08-30 22:18:49 +10:00
psychedelicious
08b03212ca fix(ui): sync transformer when requesting bbox calc 2024-08-30 22:18:49 +10:00
psychedelicious
7e341a05a1 tidy(ui): rename union CanvasEntity -> CanvasEntityState 2024-08-30 22:18:49 +10:00
psychedelicious
e665d08ee1 fix(ui): request rect calc immediately on transform, hiding rect 2024-08-30 22:18:49 +10:00
psychedelicious
ba6362dc9d feat(ui): move bbox calculation to transformer 2024-08-30 22:18:49 +10:00
psychedelicious
48f0797c43 feat(ui): use set for transformer subscriptions 2024-08-30 22:18:49 +10:00
psychedelicious
640b0c4939 tidy(ui): clean up worker tasks when complete 2024-08-30 22:18:49 +10:00
psychedelicious
287c61e277 tidy(ui): remove unused code in CanvasTool 2024-08-30 22:18:49 +10:00
psychedelicious
f7b2516109 feat(ui): use pubsub for isTransforming on manager 2024-08-30 22:18:49 +10:00
psychedelicious
b530eb49d4 docs(ui): update transformer docstrings 2024-08-30 22:18:49 +10:00
psychedelicious
fa94979ab6 feat(ui): revised event pubsub, transformer logic split out 2024-08-30 22:18:49 +10:00
psychedelicious
54f2acf5b9 feat(ui): add simple pubsub 2024-08-30 22:18:49 +10:00
psychedelicious
b6d845a4d0 feat(ui): document & clean up object renderer 2024-08-30 22:18:49 +10:00
psychedelicious
1095b7c37f feat(ui): split out object renderer 2024-08-30 22:18:49 +10:00
psychedelicious
136ffd97ca fix(ui): unable to hold shift while transforming to retain ratio 2024-08-30 22:18:49 +10:00
psychedelicious
80163d0af2 tidy(ui): rename canvas stuff 2024-08-30 22:18:49 +10:00
psychedelicious
e1c6e926e7 tidy(ui): consolidate getLoggingContext builders 2024-08-30 22:18:49 +10:00
psychedelicious
2bb74abf31 fix(ui): align all tools to 1px grid
- Offset brush tool by 0.5px when width is odd, ensuring each stroke edge is exactly on a pixel boundary
- Round the rect tool also
2024-08-30 22:18:49 +10:00
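A small sketch of the alignment rule in the commit above: round coordinates to the pixel grid, then offset by 0.5px when the stroke width is odd so stroke edges land exactly on pixel boundaries. Names are illustrative.
```ts
type Point = { x: number; y: number };

// Snap a point to the 1px grid, offsetting by 0.5 when the stroke width is odd
// so the edges of the stroke fall exactly on pixel boundaries.
const alignPointToGrid = (point: Point, strokeWidth: number): Point => {
  const offset = Math.round(strokeWidth) % 2 === 1 ? 0.5 : 0;
  return {
    x: Math.round(point.x) + offset,
    y: Math.round(point.y) + offset,
  };
};
```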
psychedelicious
0d4b91afe0 feat(ui): disable image smoothing on layers 2024-08-30 22:18:49 +10:00
psychedelicious
6c688d6878 fix(ui): round position when rasterizing layer 2024-08-30 22:18:49 +10:00
psychedelicious
243feecef9 feat(ui): continue modularizing transform 2024-08-30 22:18:49 +10:00
psychedelicious
abd22ba087 feat(ui): fix a few things that didn't unsubscribe correctly, add helper to manage subscriptions 2024-08-30 22:18:49 +10:00
psychedelicious
ab25546e97 feat(ui): merge bbox outline into transformer 2024-08-30 22:18:49 +10:00
psychedelicious
925f0fca2a fix(ui): update parent's pos not transformers 2024-08-30 22:18:49 +10:00
psychedelicious
066366d885 feat(ui): merge interaction rect into transformer class 2024-08-30 22:18:49 +10:00
psychedelicious
61d52e96b7 feat(ui): prepare staging area 2024-08-30 22:18:49 +10:00
psychedelicious
051e88ca90 feat(ui): typing for logging context 2024-08-30 22:18:49 +10:00
psychedelicious
e873b69850 feat(ui): remove inheritance of CanvasObject
JS is terrible
2024-08-30 22:18:49 +10:00
psychedelicious
661fd55556 feat(ui): split & document transformer logic, iterate on class structures 2024-08-30 22:18:49 +10:00
psychedelicious
402f5a4717 feat(ui): rotation snap to nearest 45deg when holding shift 2024-08-30 22:18:49 +10:00
psychedelicious
81bf52ef37 feat(ui): expose subscribe method for nanostores 2024-08-30 22:18:49 +10:00
psychedelicious
8ff92796df tidy(ui): remove layer scaling reducers 2024-08-30 22:18:49 +10:00
psychedelicious
68af60e12e fix(ui): pixel-perfect transforms 2024-08-30 22:18:49 +10:00
psychedelicious
cce6bf9428 fix(ui): layer visibility toggle 2024-08-30 22:18:49 +10:00
psychedelicious
078908fbea fix(nodes): fix canvas mask erode
it wasn't eroding enough and caused incorrect transparency in result images
2024-08-30 22:18:49 +10:00
psychedelicious
7275caaf5b fix(ui): do not reset layer on first render 2024-08-30 22:18:49 +10:00
psychedelicious
d9487c1df4 feat(ui): revised logging and naming setup, fix staging area 2024-08-30 22:18:49 +10:00
psychedelicious
3a9f955388 feat(ui): add repr methods to layer and object classes 2024-08-30 22:18:49 +10:00
psychedelicious
e46c7acd2e feat(ui): use nanoid(10) instead of uuidv4 for canvas
Shorter ids make it much more readable
2024-08-30 22:18:49 +10:00
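For illustration, the shorter ids might be generated roughly like this; the helper name and prefix convention are assumptions.
```ts
import { nanoid } from 'nanoid';

// 10-character ids are much easier to scan in logs than full UUIDs, while still
// being collision-safe for the number of objects a canvas session creates.
const makeCanvasId = (prefix: string): string => `${prefix}:${nanoid(10)}`;

// e.g. makeCanvasId('brush_line') -> 'brush_line:V1StGXR8_Z'
```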
psychedelicious
b771664851 build(ui): add nanoid as explicit dep 2024-08-30 22:18:49 +10:00
psychedelicious
7c21819d20 fix(ui): move CanvasImage's konva image to correct object 2024-08-30 22:18:49 +10:00
psychedelicious
a57e618d47 fix(ui): prevent flash when applying transform 2024-08-30 22:18:49 +10:00
psychedelicious
c9849a79ea build(ui): add eslint rules for async stuff 2024-08-30 22:18:49 +10:00
psychedelicious
f1643fec08 feat(ui): trying to fix flicker after transform 2024-08-30 22:18:49 +10:00
psychedelicious
951e63ca87 feat(ui): transform cleanup 2024-08-30 22:18:49 +10:00
psychedelicious
8e539c8a8c feat(ui): fix transform when rotated 2024-08-30 22:18:49 +10:00
psychedelicious
1e689a4902 fix(ui): use pixel bbox when image is in layer 2024-08-30 22:18:49 +10:00
psychedelicious
7bbd25b5ec fix(ui): transforming when axes flipped 2024-08-30 22:18:49 +10:00
psychedelicious
b1c7236117 feat(ui): hallelujah (???) 2024-08-30 22:18:49 +10:00
psychedelicious
ae3e473024 feat(ui): add debug button 2024-08-30 22:18:49 +10:00
psychedelicious
fd616f247c fix(ui): transformer padding 2024-08-30 22:18:48 +10:00
psychedelicious
45dca2c821 feat(ui): wip transform mode 2 2024-08-30 22:18:48 +10:00
psychedelicious
40dc108c84 feat(ui): wip transform mode 2024-08-30 22:18:48 +10:00
psychedelicious
a421c25952 feat(ui): wip transform mode 2024-08-30 22:18:48 +10:00
psychedelicious
562d0afdbb fix(ui): dnd to canvas broke 2024-08-30 22:18:48 +10:00
psychedelicious
2ce4698eef fix(ui): conflicts after rebasing 2024-08-30 22:18:48 +10:00
psychedelicious
cb53108041 fix(ui): imageDropped listener 2024-08-30 22:18:48 +10:00
psychedelicious
5fa65e5cc6 wip 2024-08-30 22:18:48 +10:00
psychedelicious
e8b0b6cef5 fix(ui): transform tool seems to be working 2024-08-30 22:18:48 +10:00
psychedelicious
eca2712828 fix(ui): move tool fixes, add transform tool 2024-08-30 22:18:48 +10:00
psychedelicious
2804c0aede feat(ui): move tool now only moves 2024-08-30 22:18:48 +10:00
psychedelicious
0429f0480d feat(ui): layer bbox calc in worker 2024-08-30 22:18:48 +10:00
psychedelicious
024759a0fc feat(ui): tweaked entity & group selection styles 2024-08-30 22:18:48 +10:00
psychedelicious
9a94aef2b0 feat(ui): canvas entity list headers 2024-08-30 22:18:48 +10:00
psychedelicious
e329cb45cd tidy(ui): CanvasRegion 2024-08-30 22:18:48 +10:00
psychedelicious
0dc38bd684 tidy(ui): CanvasRect 2024-08-30 22:18:48 +10:00
psychedelicious
98ebca5f8c tidy(ui): CanvasLayer 2024-08-30 22:18:48 +10:00
psychedelicious
05cb3e03cf tidy(ui): CanvasInpaintMask 2024-08-30 22:18:48 +10:00
psychedelicious
181132c149 tidy(ui): CanvasInitialImage 2024-08-30 22:18:48 +10:00
psychedelicious
a69aa00155 tidy(ui): CanvasImage 2024-08-30 22:18:48 +10:00
psychedelicious
47d415e31c tidy(ui): CanvasEraserLine 2024-08-30 22:18:48 +10:00
psychedelicious
667a156817 tidy(ui): CanvasControlAdapter 2024-08-30 22:18:48 +10:00
psychedelicious
00f39b977e tidy(ui): CanvasBrushLine 2024-08-30 22:18:48 +10:00
psychedelicious
e5776e2bd6 tidy(ui): CanvasBbox 2024-08-30 22:18:48 +10:00
psychedelicious
2b21f54897 tidy(ui): CanvasBackground 2024-08-30 22:18:48 +10:00
psychedelicious
678d12fcd5 tidy(ui): update canvas classes, organise location of konva nodes 2024-08-30 22:18:48 +10:00
psychedelicious
03f06f611e feat(ui): add names to all konva objects
Makes troubleshooting much simpler
2024-08-30 22:18:48 +10:00
psychedelicious
6571e0f814 fix(ui): do not await creating new canvas image
If you await this, it causes a race condition where multiple images are created.
2024-08-30 22:18:48 +10:00
psychedelicious
44f91026e1 feat(ui): use position and dimensions instead of separate x,y,width,height attrs 2024-08-30 22:18:48 +10:00
psychedelicious
56237328f1 fix(ui): remove weird rtkq hook wrapper
I do not understand why I did that initially but it doesn't work with TS.
2024-08-30 22:18:48 +10:00
psychedelicious
ff68901e89 feat(ui): rename types size and position to dimensions and coordinate 2024-08-30 22:18:48 +10:00
psychedelicious
e0e7adb2b2 tidy(ui): hide layer settings by default 2024-08-30 22:18:48 +10:00
psychedelicious
0923a5b128 fix(ui): layer rendering when starting as disabled 2024-08-30 22:18:48 +10:00
psychedelicious
75f8a84c79 feat(invocation): reduce canvas v2 mask & crop mask dilation 2024-08-30 22:18:48 +10:00
psychedelicious
af815cf7eb feat(ui): de-jank staging area and progress images 2024-08-30 22:18:48 +10:00
psychedelicious
ef4d6c26f6 feat(ui): update staging handling to work w/ cropped mask 2024-08-30 22:18:48 +10:00
psychedelicious
5087b306c0 chore(ui): typegen 2024-08-30 22:18:48 +10:00
psychedelicious
a5708eaefe feat(app): update CanvasV2MaskAndCropInvocation 2024-08-30 22:18:48 +10:00
psychedelicious
389bfc9e31 feat(ui): use new canvas output node 2024-08-30 22:18:48 +10:00
psychedelicious
fd269e91e0 chore(ui): typegen 2024-08-30 22:18:48 +10:00
psychedelicious
80136b0dfc feat(app): add CanvasV2MaskAndCropInvocation & CanvasV2MaskAndCropOutput
This handles some masking and cropping that the canvas needs.
2024-08-30 22:18:48 +10:00
psychedelicious
9595eff1f9 fix(ui): restore nodes output tracking 2024-08-30 22:18:48 +10:00
psychedelicious
c3c95754f7 feat(ui): rip out document size
barely knew ye
2024-08-30 22:18:48 +10:00
psychedelicious
22ab63fe8d feat(ui): convert initial image to layer when starting canvas session 2024-08-30 22:18:48 +10:00
psychedelicious
5fefcab475 fix(ui): fix layer transparency calculation 2024-08-30 22:18:48 +10:00
psychedelicious
771a05b894 fix(ui): reset initial image when resetting canvas 2024-08-30 22:18:48 +10:00
psychedelicious
e2d8aaa923 fix(ui): reset node executions states when loading workflow 2024-08-30 22:18:48 +10:00
psychedelicious
0951aecb13 fix(ui): entity display list 2024-08-30 22:18:48 +10:00
psychedelicious
b1fe6f9853 feat(ui): img2img working 2024-08-30 22:18:48 +10:00
psychedelicious
551dd393aa feat(ui): rough out img2img on canvas 2024-08-30 22:18:48 +10:00
psychedelicious
78b4562184 UNDO ME WIP 2024-08-30 22:18:48 +10:00
psychedelicious
c49b90e621 feat(ui): log invocation source id on socket event 2024-08-30 22:18:48 +10:00
psychedelicious
89e6233fbf feat(ui): restore document size overlay renderer 2024-08-30 22:18:48 +10:00
psychedelicious
3f9496c237 feat(ui): make document size a rect 2024-08-30 22:18:48 +10:00
psychedelicious
36e94af598 refactor(ui): remove modular imagesize components
This is no longer necessary with canvas v2, and it added a ton of extraneous redux actions when changing the image size. Also renamed it to document size.
2024-08-30 22:18:48 +10:00
psychedelicious
a181a684f5 feat(ui): initialState is for generation mode 2024-08-30 22:18:48 +10:00
psychedelicious
bb712b3b3f feat(ui): split out canvas entity list component 2024-08-30 22:18:48 +10:00
psychedelicious
e795de5647 feat(ui): hide bbox button when no canvas session active 2024-08-30 22:18:48 +10:00
psychedelicious
bdc428cdd8 tidy(ui): remove unused naming objects/utils
The canvas manager means we don't need to worry about konva node names as we never directly select konva nodes.
2024-08-30 22:18:48 +10:00
psychedelicious
e4376e21dd feat(ui): split up tool chooser buttons
Prep for distinct toolbars for generation vs canvas modes
2024-08-30 22:18:48 +10:00
psychedelicious
77acc7baed feat(ui): add useAssertSingleton util hook
This simple hook asserts that it is only ever called once. Particularly useful for things like hotkeys hooks.
2024-08-30 22:18:48 +10:00
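A minimal sketch of such a hook, assuming a module-level registry of keys; the actual implementation may differ.
```ts
import { useEffect } from 'react';

// Module-level registry of hook keys currently mounted.
const singletons = new Set<string>();

/**
 * Asserts that, for a given key, the calling hook/component is only mounted once
 * at a time. Useful for hooks that register global hotkeys or other app-wide effects.
 */
export const useAssertSingleton = (key: string): void => {
  useEffect(() => {
    if (singletons.has(key)) {
      throw new Error(`There should only ever be one instance of "${key}" mounted`);
    }
    singletons.add(key);
    return () => {
      singletons.delete(key);
    };
  }, [key]);
};
```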
psychedelicious
9db1556c4d feat(ui): "stagingArea" -> "session" 2024-08-30 22:18:48 +10:00
psychedelicious
65de8b329b feat(ui): add reset button to canvas 2024-08-30 22:18:48 +10:00
psychedelicious
08dae5b047 feat(ui): add snapToRect util 2024-08-30 22:18:48 +10:00
psychedelicious
8d2f056407 fix(ui): fiddle with control adapter filters
some jank still
2024-08-30 22:18:48 +10:00
psychedelicious
e66ef2e25e feat(ui): temp disable doc size overlay 2024-08-30 22:18:48 +10:00
psychedelicious
4d3ee7e082 feat(ui): no animation on layer selection
Felt sluggish
2024-08-30 22:18:48 +10:00
psychedelicious
fe48fda2f3 feat(ui): use canvas as source for control images (wip) 2024-08-30 22:18:48 +10:00
psychedelicious
0f66753aa1 fix(ui): control adapter translate & scale 2024-08-30 22:18:48 +10:00
psychedelicious
a18878474b tidy(ui): removed unused state related to non-buffered drawing 2024-08-30 22:18:48 +10:00
psychedelicious
0aa4568fd4 feat(ui): control adapter image rendering 2024-08-30 22:18:48 +10:00
psychedelicious
1de7e5760a fix(ui): do not floor bbox calc, it cuts off the last pixels 2024-08-30 22:18:48 +10:00
psychedelicious
135d6f2763 feat(ui): fix issue where creating line needs 2 points 2024-08-30 22:18:48 +10:00
psychedelicious
061767ede3 fix(ui): edge cases when holding shift and drawing lines 2024-08-30 22:18:48 +10:00
psychedelicious
7204844bcb fix(ui): set buffered rect color to full alpha 2024-08-30 22:18:48 +10:00
psychedelicious
f2279ecadd fix(ui): handle mouseup correctly 2024-08-30 22:18:48 +10:00
psychedelicious
75694869d2 feat(ui): buffered rect drawing 2024-08-30 22:18:48 +10:00
psychedelicious
d029680ac1 fix(ui): buffered drawing edge cases 2024-08-30 22:18:48 +10:00
psychedelicious
41c195d936 perf(ui): do not use stage.find 2024-08-30 22:18:48 +10:00
psychedelicious
03ea005e9c perf(ui): object groups do not listen 2024-08-30 22:18:48 +10:00
psychedelicious
6d936a7c44 perf(ui): buffered drawing (wip) 2024-08-30 22:18:48 +10:00
psychedelicious
fba17b93a6 tidy(ui): organise files 2024-08-30 22:18:48 +10:00
psychedelicious
73a7a27ea1 tidy(ui): organise files 2024-08-30 22:18:48 +10:00
psychedelicious
79287c2d16 tidy(ui): organise files 2024-08-30 22:18:48 +10:00
psychedelicious
662c5f4b77 fix(ui): background rendering 2024-08-30 22:18:48 +10:00
psychedelicious
7728ca6843 pkg(ui): remove unused deps react-konva & use-image 2024-08-30 22:18:48 +10:00
psychedelicious
9607372f89 feat(ui): organize konva state and files 2024-08-30 22:18:48 +10:00
psychedelicious
d27f948b78 fix(ui): merge conflicts in image deletion listener 2024-08-30 22:18:48 +10:00
psychedelicious
b7aab81717 fix(ui): region rendering 2024-08-30 22:18:48 +10:00
psychedelicious
2998287f61 fix(ui): inpaint mask rendering 2024-08-30 22:18:48 +10:00
psychedelicious
55d7f0ff5b fix(ui): staging area rendering 2024-08-30 22:18:48 +10:00
psychedelicious
4564f36d4a fix(ui): stale selected entity 2024-08-30 22:18:48 +10:00
psychedelicious
319de5c4e9 fix(ui): staging area image offset 2024-08-30 22:18:48 +10:00
psychedelicious
eee499faa3 feat(ui): tweak layer ui component 2024-08-30 22:18:48 +10:00
psychedelicious
63c5e42f2a fix(ui): resetting layer resets position 2024-08-30 22:18:48 +10:00
psychedelicious
bd16dc4479 feat(ui): updated layer list component styling 2024-08-30 22:18:48 +10:00
psychedelicious
49371ddec9 feat(ui): transformable layers 2024-08-30 22:18:48 +10:00
psychedelicious
6a10d31b19 feat(ui): move tool icon is pointer like in other apps 2024-08-30 22:18:48 +10:00
psychedelicious
c951e733d3 feat(ui): do not floor cursor position 2024-08-30 22:18:48 +10:00
psychedelicious
7ed24cf847 feat(ui): disable gallery hotkeys while staging 2024-08-30 22:18:48 +10:00
psychedelicious
821b7a0435 feat(ui): revised canvas progress & staging image handling 2024-08-30 22:18:48 +10:00
psychedelicious
1b0344c412 feat(ui): show queue item origin in queue list 2024-08-30 22:18:48 +10:00
psychedelicious
03ca3c4b3d chore(ui): typegen 2024-08-30 22:18:48 +10:00
psychedelicious
b939192b16 feat(app): add origin to session queue
The origin is an optional field indicating the queue item's origin. For example, "canvas" when the queue item originated from the canvas or "workflows" when the queue item originated from the workflows tab. If omitted, we assume the queue item originated from the API directly.

- Add migration to add the nullable column to the `session_queue` table.
- Update relevant event payloads with the new field.
- Add `cancel_by_origin` method to `session_queue` service and corresponding route. This is required for the canvas to bail out early when staging images.
- Add `origin` to both `SessionQueueItem` and `Batch` - it needs to be provided initially via the batch and then passed onto the queue item.
2024-08-30 22:18:48 +10:00
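On the frontend, the new field might surface roughly like this; the types, values, and cancel helper below are assumptions based on the description above.
```ts
// Where a queue item came from; null means it was enqueued via the API directly.
type QueueItemOrigin = 'canvas' | 'workflows' | null;

type Batch = { batch_id: string; origin: QueueItemOrigin };
type SessionQueueItem = { item_id: number; batch_id: string; origin: QueueItemOrigin };

// The canvas can bail out of staging early by cancelling everything it enqueued.
const cancelCanvasQueueItems = (cancelByOrigin: (origin: 'canvas' | 'workflows') => Promise<void>) =>
  cancelByOrigin('canvas');
```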
psychedelicious
7ccf559a06 fix(ui): denoise start on outpainting 2024-08-30 22:18:48 +10:00
psychedelicious
9eb091f873 feat(ui): add redux events for queue cleared & batch enqueued socket events 2024-08-30 22:18:48 +10:00
psychedelicious
3bd5521641 feat(ui): canvas staging area works 2024-08-30 22:18:48 +10:00
psychedelicious
ced748e419 feat(ui): switch to view tool when staging 2024-08-30 22:18:48 +10:00
psychedelicious
fbd137da9f tidy(ui): disable preview images on every enqueue 2024-08-30 22:18:48 +10:00
psychedelicious
03baebced6 feat(ui): rough out save staging image 2024-08-30 22:18:48 +10:00
psychedelicious
cb19c1c370 feat(ui): staging area image visibility toggle 2024-08-30 22:18:47 +10:00
psychedelicious
788bad61d0 fix(ui): batch building after removing canvas files 2024-08-30 22:18:47 +10:00
psychedelicious
8f5f9bd44e feat(ui): make Graph class's getMetadataNode public 2024-08-30 22:18:47 +10:00
psychedelicious
2873e3e084 tidy(ui): remove old canvas graphs 2024-08-30 22:18:47 +10:00
psychedelicious
b004f17ae3 fix(ui): do not select already-selected entity 2024-08-30 22:18:47 +10:00
psychedelicious
bea1e8c99b tidy(ui): naming things 2024-08-30 22:18:47 +10:00
psychedelicious
111493223f tidy(ui): file organisation 2024-08-30 22:18:47 +10:00
psychedelicious
0a5ac2baec fix(ui): reset cursor pos when fitting document 2024-08-30 22:18:47 +10:00
psychedelicious
eec3c3b884 feat(ui): staging area works more better 2024-08-30 22:18:47 +10:00
psychedelicious
07b72c3d70 feat(ui): staging area barely works 2024-08-30 22:18:47 +10:00
psychedelicious
766e8c4eb0 feat(ui): consolidate konva API 2024-08-30 22:18:47 +10:00
psychedelicious
57c257d10d feat(ui): consolidate konva API 2024-08-30 22:18:47 +10:00
psychedelicious
d497da0e61 feat(ui): staging area (rendering wip) 2024-08-30 22:18:47 +10:00
psychedelicious
62310e7929 tidy(ui): type "Dimensions" -> "Size" 2024-08-30 22:18:47 +10:00
psychedelicious
d79aa173a6 feat(ui): add updateNode to Graph 2024-08-30 22:18:47 +10:00
psychedelicious
fbfdd3e003 feat(ui): sdxl graphs 2024-08-30 22:18:47 +10:00
psychedelicious
a62b4a26ef feat(ui): sd1 outpaint graph 2024-08-30 22:18:47 +10:00
psychedelicious
817d4168c6 tests(ui): add missing tests for Graph class 2024-08-30 22:18:47 +10:00
psychedelicious
7e0a6d1538 feat(ui): add Graph.getid() util 2024-08-30 22:18:47 +10:00
psychedelicious
ebc498ad19 feat(ui): outpaint graph, organize builder a bit 2024-08-30 22:18:47 +10:00
psychedelicious
b97b8c6ce6 feat(ui): inpaint sd1 graph 2024-08-30 22:18:47 +10:00
psychedelicious
b8abff65a1 feat(ui): temp disable image caching while testing 2024-08-30 22:18:47 +10:00
psychedelicious
a953dc1dbd feat(ui): txt2img & img2img graphs 2024-08-30 22:18:47 +10:00
psychedelicious
a7c9848e99 feat(ui): minor change to canvas bbox state type 2024-08-30 22:18:47 +10:00
psychedelicious
73a1449eaf feat(ui): simplified konva node to blob/imagedata utils 2024-08-30 22:18:47 +10:00
psychedelicious
59f57ff542 feat(ui): node manager getter/setter 2024-08-30 22:18:47 +10:00
psychedelicious
e9204b87e3 feat(ui): generation mode calculation, fudged graphs 2024-08-30 22:18:47 +10:00
psychedelicious
7dd11bd60a feat(ui): add utils for getting images from canvas 2024-08-30 22:18:47 +10:00
psychedelicious
275fc2ccf9 feat(ui): even more simplified API - lean on the konva node manager to abstract imperative state API & rendering 2024-08-30 22:18:47 +10:00
psychedelicious
a2ef8d9d47 feat(ui): revised docstrings for renderers & simplified api 2024-08-30 22:18:47 +10:00
psychedelicious
196779ff19 feat(ui): inpaint mask UI components 2024-08-30 22:18:47 +10:00
psychedelicious
aee3147365 feat(ui): inpaint mask rendering (wip) 2024-08-30 22:18:47 +10:00
psychedelicious
eaca940956 fix(ui): models loaded handler 2024-08-30 22:18:47 +10:00
psychedelicious
06006733e2 feat(ui): internal state for inpaint mask 2024-08-30 22:18:47 +10:00
psychedelicious
14d0bfbef6 refactor(ui): divvy up canvas state a bit 2024-08-30 22:18:47 +10:00
psychedelicious
0c9cf73702 feat(ui): get region and base layer canvas to blob logic working 2024-08-30 22:18:47 +10:00
psychedelicious
3b864921ac refactor(ui): node manager handles more tedious annoying stuff 2024-08-30 22:18:47 +10:00
psychedelicious
f41539532f feat(ui): use node manager for addRegions 2024-08-30 22:18:47 +10:00
psychedelicious
657009c254 feat(ui): persist bbox 2024-08-30 22:18:47 +10:00
psychedelicious
c47e02c309 fix(ui): fix generation graphs 2024-08-30 22:18:47 +10:00
psychedelicious
ce8a7bc178 feat(ui): add toggle for clipToBbox 2024-08-30 22:18:47 +10:00
psychedelicious
488ca87787 feat(ui): rename konva node manager 2024-08-30 22:18:47 +10:00
psychedelicious
d965df8ca9 refactor(ui): create classes to abstract mgmt of konva nodes 2024-08-30 22:18:47 +10:00
psychedelicious
995c26751e tidy(ui): organise renderers 2024-08-30 22:18:47 +10:00
psychedelicious
dd09723a2a refactor(ui): create entity to konva node map abstraction (wip)
Instead of chaining konva `find` and `findOne` methods, all konva nodes are added to a mapping object. Finding and manipulating them is much simpler.

Done for regions and layers, wip for control adapters.
2024-08-30 22:18:47 +10:00
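The mapping-object idea can be sketched like this, avoiding repeated `find`/`findOne` traversals of the konva tree; the record shape is simplified for illustration.
```ts
import type Konva from 'konva';

// One record per canvas entity, keyed by entity id - no stage traversal needed.
type EntityKonvaNodes = { layer: Konva.Layer; objectGroup: Konva.Group };

const nodeMap = new Map<string, EntityKonvaNodes>();

const getEntityNodes = (entityId: string): EntityKonvaNodes => {
  const nodes = nodeMap.get(entityId);
  if (!nodes) {
    throw new Error(`No konva nodes registered for entity ${entityId}`);
  }
  return nodes;
};
```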
psychedelicious
5ff5af3ba2 perf(ui): fix lag w/ region rendering
Needed to memoize these selectors
2024-08-30 22:18:47 +10:00
psychedelicious
4cb85404c0 feat(ui): move canvas fill color picker to right 2024-08-30 22:18:47 +10:00
psychedelicious
50bc2f100d refactor(ui): remove unused ellipse & polygon objects 2024-08-30 22:18:47 +10:00
psychedelicious
f65ce6a019 fix(ui): incorrect rect/brush/eraser positions 2024-08-30 22:18:47 +10:00
psychedelicious
c28b635f2d refactor(ui): enable global debugging flag 2024-08-30 22:18:47 +10:00
psychedelicious
e55896240d refactor(ui): disable the preview renderer for now 2024-08-30 22:18:47 +10:00
psychedelicious
2b478ee7e1 tweak(ui): canvas editor layout 2024-08-30 22:18:47 +10:00
psychedelicious
69912a35ea perf(ui): memoize layeractionsmenu valid actions 2024-08-30 22:18:47 +10:00
psychedelicious
9f1bd98c7e refactor(ui): decouple konva renderer from react
Subscribe to redux store directly, skipping all the react overhead.

With react in dev mode, a typical frame while using the brush tool on almost-empty canvas is reduced from ~7.5ms to ~3.5ms. All things considered, this still feels slow, but it's a massive improvement.
2024-08-30 22:18:47 +10:00
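A sketch, under assumptions about the store and renderer shape, of what subscribing to the Redux store directly looks like: the renderer reads state with `getState()` and redraws only when its slice changes, with no React components in the loop.

```ts
import type { Store } from '@reduxjs/toolkit';

// select and render are stand-ins for the real selector and konva draw call.
function attachRenderer<S, T>(
  store: Store<S>,
  select: (state: S) => T,
  render: (selected: T) => void
): () => void {
  let prev = select(store.getState());
  render(prev);
  // subscribe fires on every dispatch; skip the redraw unless the selected
  // slice actually changed by reference.
  return store.subscribe(() => {
    const next = select(store.getState());
    if (next !== prev) {
      prev = next;
      render(next);
    }
  });
}
```

The returned function is the unsubscribe callback, so the renderer can be detached when the stage is torn down.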
psychedelicious
b531d6b7f0 feat(ui): clip lines to bbox 2024-08-30 22:18:47 +10:00
psychedelicious
8aa963fb81 fix(ui): document fit positioning 2024-08-30 22:18:47 +10:00
psychedelicious
b76e0ab4e4 feat(ui): document bounds overlay 2024-08-30 22:18:47 +10:00
psychedelicious
aea03b4e92 tidy(ui): background layer 2024-08-30 22:18:47 +10:00
psychedelicious
b39e95966c refactor(ui): use "entity" instead of "data" for canvas 2024-08-30 22:18:47 +10:00
psychedelicious
d53e5e0158 feat(ui): brush size border radius = 1 2024-08-30 22:18:47 +10:00
psychedelicious
0368dd651b fix(ui): canvas HUD doesn't interrupt tool 2024-08-30 22:18:47 +10:00
psychedelicious
84a4a1024e refactor(ui): split up canvas entity renderers, temp disable preview 2024-08-30 22:18:47 +10:00
psychedelicious
af4f258489 fix(ui): delete all layers button 2024-08-30 22:18:47 +10:00
psychedelicious
ddfc8785b4 fix(ui): ignore keyboard shortcuts in input/textarea elements 2024-08-30 22:18:47 +10:00
psychedelicious
d8515b6efc fix(ui): canvas entity ids getting clobbered 2024-08-30 22:18:47 +10:00
psychedelicious
6a07f007a4 fix(ui): move lora followup fixes 2024-08-30 22:18:47 +10:00
psychedelicious
7a5a0c8075 chore(ui): lint 2024-08-30 22:18:47 +10:00
psychedelicious
5ed2e9b0fc refactor(ui): move loras to canvas slice 2024-08-30 22:18:47 +10:00
psychedelicious
aeb0a45eb6 fix(ui): layer is selected when added 2024-08-30 22:18:47 +10:00
psychedelicious
21e814d766 feat(ui): r to center & fit stage on document 2024-08-30 22:18:47 +10:00
psychedelicious
cafc1839e2 feat(ui): better HUD 2024-08-30 22:18:47 +10:00
psychedelicious
e937aa831f fix(ui): always use current brush width when making straight lines 2024-08-30 22:18:47 +10:00
psychedelicious
890e6a95ed feat(ui): hold shift w/ brush to draw straight line 2024-08-30 22:18:47 +10:00
psychedelicious
a5b7274359 fix(ui): update bg on canvas resize 2024-08-30 22:18:47 +10:00
psychedelicious
172acf2cf5 refactor(ui): better hud 2024-08-30 22:18:47 +10:00
psychedelicious
b49fdf6407 refactor(ui): scaled tool preview border 2024-08-30 22:18:47 +10:00
psychedelicious
5184d05bc2 refactor(ui): port remaining canvasV1 rendering logic to V2, remove old code 2024-08-30 22:18:47 +10:00
psychedelicious
7ef4553fc9 refactor(ui): fix more types 2024-08-30 22:18:47 +10:00
psychedelicious
d6bd1e4a49 refactor(ui): metadata recall (wip)
just enough to let the app run
2024-08-30 22:18:47 +10:00
psychedelicious
29413f20a7 refactor(ui): undo/redo button temp fix 2024-08-30 22:18:47 +10:00
psychedelicious
04a44c8ea7 refactor(ui): fix renderer stuff 2024-08-30 22:18:47 +10:00
psychedelicious
426f1b6f9a refactor(ui): fix misc types 2024-08-30 22:18:47 +10:00
psychedelicious
9c7f5ed321 refactor(ui): fix gallery stuff 2024-08-30 22:18:47 +10:00
psychedelicious
4c37c7f280 refactor(ui): fix delete image stuff 2024-08-30 22:18:47 +10:00
psychedelicious
a2d13cacbf refactor(ui): fix useIsReadyToEnqueue for new adapterType field 2024-08-30 22:18:47 +10:00
psychedelicious
aa127b83a3 refactor(ui): update generation tab graphs 2024-08-30 22:18:47 +10:00
psychedelicious
e55192ae2a refactor(ui): add adapterType to ControlAdapterData 2024-08-30 22:18:47 +10:00
psychedelicious
5159fcbc33 refactor(ui): update components & logic to use new unified slice (again) 2024-08-30 22:18:47 +10:00
psychedelicious
02ad7a0f93 refactor(ui): update components & logic to use new unified slice 2024-08-30 22:18:47 +10:00
psychedelicious
bfa496e37f refactor(ui): merge compositing, params into canvasV2 slice 2024-08-30 22:18:47 +10:00
psychedelicious
fdf347af26 refactor(ui): add scaled bbox state 2024-08-30 22:18:47 +10:00
psychedelicious
0833dbb19d refactor(ui): update dnd/image upload 2024-08-30 22:18:47 +10:00
psychedelicious
1b6bf58e58 refactor(ui): update size/prompts state 2024-08-30 22:18:47 +10:00
psychedelicious
5ead7bc7b4 refactor(ui): rip out old control adapter implementation 2024-08-30 22:18:47 +10:00
psychedelicious
f326d17856 refactor(ui): canvas v2 (wip)
fix entity count select
2024-08-30 22:18:47 +10:00
psychedelicious
908aa9beea refactor(ui): canvas v2 (wip)
delete unused file
2024-08-30 22:18:47 +10:00
psychedelicious
4071e96245 refactor(ui): canvas v2 (wip)
merge all canvas state reducers into one big slice (but with the logic split across files so it's not hell)
2024-08-30 22:18:47 +10:00
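A rough sketch of the "one big slice, logic split across files" shape, assuming Redux Toolkit's `createSlice`; the state and reducer names are placeholders. In the real code each reducer map would live in its own file and be spread into the single slice.

```ts
import { createSlice, type PayloadAction } from '@reduxjs/toolkit';

interface CanvasState { layers: string[]; regions: string[] }
const initialState: CanvasState = { layers: [], regions: [] };

// Imagine these two maps living in layersReducers.ts and regionsReducers.ts.
const layersReducers = {
  layerAdded: (state: CanvasState, action: PayloadAction<string>) => {
    state.layers.push(action.payload);
  },
};
const regionsReducers = {
  regionAdded: (state: CanvasState, action: PayloadAction<string>) => {
    state.regions.push(action.payload);
  },
};

// One slice and one reducer in the store, but the case reducers stay modular.
export const canvasSlice = createSlice({
  name: 'canvas',
  initialState,
  reducers: { ...layersReducers, ...regionsReducers },
});
```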
psychedelicious
b4daf29bd8 refactor(ui): canvas v2 (wip)
Fix a few more components
2024-08-30 22:18:47 +10:00
psychedelicious
bf185339c2 refactor(ui): canvas v2 (wip)
missed a spot
2024-08-30 22:18:47 +10:00
psychedelicious
df3abc75c2 refactor(ui): canvas v2 (wip)
Redo all UI components for different canvas entity types
2024-08-30 22:18:47 +10:00
psychedelicious
28fc9a387c refactor(ui): canvas v2 (wip) 2024-08-30 22:18:47 +10:00
psychedelicious
8533f207dc refactor(ui): canvas v2 (wip) 2024-08-30 22:18:47 +10:00
psychedelicious
d135c48319 refactor(ui): canvas v2 (wip) 2024-08-30 22:18:47 +10:00
psychedelicious
ca9090d070 refactor(ui): canvas v2 (wip) 2024-08-30 22:18:47 +10:00
psychedelicious
93b185dc3b feat(ui): bbox tool 2024-08-30 22:18:47 +10:00
psychedelicious
98e5efa895 fix(ui): rect tool preview 2024-08-30 22:18:47 +10:00
psychedelicious
c6774b829d fix(ui): multiple stages 2024-08-30 22:18:47 +10:00
psychedelicious
22925f92bd feat(ui): decouple konva logic from nanostores 2024-08-30 22:18:47 +10:00
psychedelicious
302efcf6e8 feat(ui): store all stage attrs in nanostores 2024-08-30 22:18:47 +10:00
psychedelicious
76f9f90f0a feat(ui): round stage scale 2024-08-30 22:18:47 +10:00
psychedelicious
5ba338e471 chore(ui): bump konva 2024-08-30 22:18:47 +10:00
psychedelicious
01f101c6f2 feat(ui): generation bbox transformation working
whew
2024-08-30 22:18:47 +10:00
psychedelicious
5606aec78d feat(ui): wip generation bbox 2024-08-30 22:18:47 +10:00
psychedelicious
db90e1fe8b feat(ui): wip generation bbox 2024-08-30 22:18:47 +10:00
psychedelicious
ae96c479f2 feat(ui): CL zoom and pan, some rendering optimizations 2024-08-30 22:18:47 +10:00
psychedelicious
344ed2c83e Revert "feat(ui): add x,y,scaleX,scaleY,rotation to objects"
This reverts commit 53318b396c967c72326a7e4dea09667b2ab20bdd.
2024-08-30 22:18:47 +10:00
psychedelicious
1985944659 feat(ui): layers manage their own bbox 2024-08-30 22:18:47 +10:00
psychedelicious
915357a6c1 docs(ui): konva image object docstrings 2024-08-30 22:18:47 +10:00
psychedelicious
63c34e78d7 feat(ui): add x,y,scaleX,scaleY,rotation to objects 2024-08-30 22:18:47 +10:00
psychedelicious
366c460c1f fix(ui): show color picker when using rect tool 2024-08-30 22:18:47 +10:00
psychedelicious
40cab08133 feat(ui): image loading fallback for raster layers 2024-08-30 22:18:47 +10:00
psychedelicious
51de25122a feat(ui): bbox calc for raster layers 2024-08-30 22:18:47 +10:00
psychedelicious
90313091db feat(ui): do not fill brush preview when drawing 2024-08-30 22:18:47 +10:00
psychedelicious
9982219d18 fix(ui): brush spacing handling 2024-08-30 22:18:47 +10:00
psychedelicious
b3fe03b8f9 fix(ui): jank when starting a shape when not already focused on stage 2024-08-30 22:18:47 +10:00
psychedelicious
6edd15d68a feat(ui): wip raster layers
I meant to split this up into smaller commits and undo some of it, but I committed afterwards and it's tedious to undo.
2024-08-30 22:18:47 +10:00
psychedelicious
0e2b328c88 feat(ui): support image objects on raster layers
Just the UI and internal state, not rendering yet.
2024-08-30 22:18:47 +10:00
psychedelicious
25d7f9c316 tidy(ui): clean up event handlers
Separate logic for each tool in preparation for ellipse and polygon tools.
2024-08-30 22:18:47 +10:00
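A hypothetical outline of that per-tool split: each tool supplies its own pointer handlers instead of branching inside one monolithic stage handler, so an ellipse or polygon tool later becomes just another entry in the map. The types and names are assumptions.

```ts
import type Konva from 'konva';

type PointerHandlers = {
  onPointerDown: (e: Konva.KonvaEventObject<PointerEvent>) => void;
  onPointerMove: (e: Konva.KonvaEventObject<PointerEvent>) => void;
  onPointerUp: (e: Konva.KonvaEventObject<PointerEvent>) => void;
};

const brushHandlers: PointerHandlers = {
  onPointerDown: () => {
    /* start a new brush line at the pointer position */
  },
  onPointerMove: () => {
    /* append points to the in-progress line */
  },
  onPointerUp: () => {
    /* finalize the line */
  },
};

// New tools (ellipse, polygon, ...) slot in as additional entries.
const handlersByTool: Record<string, PointerHandlers> = { brush: brushHandlers };
```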
psychedelicious
3870ebdf29 feat(ui): raster layer reset, object group util 2024-08-30 22:18:47 +10:00
psychedelicious
7595d05191 feat(ui): rect shape preview now has fill 2024-08-30 22:18:47 +10:00
psychedelicious
21af727d79 feat(ui): cancel shape drawing on esc 2024-08-30 22:18:47 +10:00
psychedelicious
5691829de6 feat(ui): temp disable history on CL 2024-08-30 22:18:47 +10:00
psychedelicious
20e6a57cf1 feat(ui): raster layer logic
- Deduplicate shared logic
- Split up giant renderers file into separate cohesive files
- Tons of cleanup
- Progress on raster layer functionality
2024-08-30 22:18:47 +10:00
psychedelicious
d0c40a8b5b feat(ui): add raster layer rendering and interaction (WIP) 2024-08-30 22:18:46 +10:00
psychedelicious
f663215f25 feat(ui): scaffold out raster layers
Raster layers may have images, lines and shapes. These will replace initial image layers and provide sketching functionality like we have on canvas.
2024-08-30 22:18:46 +10:00
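Given that description, the internal state of a raster layer could look roughly like the following; the discriminants and field names are guesses, not the actual schema.

```ts
type RasterLayerObject =
  | { type: 'image'; imageName: string; x: number; y: number; width: number; height: number }
  | { type: 'brush_line'; points: number[]; strokeWidth: number; color: string }
  | { type: 'rect'; x: number; y: number; width: number; height: number; color: string };

interface RasterLayerState {
  id: string;
  isEnabled: boolean;
  x: number;
  y: number;
  objects: RasterLayerObject[];
}
```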
psychedelicious
7c5dea6d12 refactor(ui): revise types for line and rect objects
- Create separate object types for brush and eraser lines, instead of a single type that has a `tool` field.
- Create new object type for rect shapes.
- Add logic to schemas to migrate old object types to new.
- Update renderers & reducers.
2024-08-30 22:18:46 +10:00
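A sketch of the described type split and migration, using zod-style schemas as an assumption; names and fields are illustrative. The old single line type with a `tool` field parses into whichever new type matches its tool.

```ts
import { z } from 'zod';

const zBrushLine = z.object({ type: z.literal('brush_line'), points: z.array(z.number()) });
const zEraserLine = z.object({ type: z.literal('eraser_line'), points: z.array(z.number()) });
const zRect = z.object({
  type: z.literal('rect'),
  x: z.number(),
  y: z.number(),
  width: z.number(),
  height: z.number(),
});

// Old shape: one "line" object with a tool field.
const zOldLine = z.object({
  type: z.literal('line'),
  tool: z.enum(['brush', 'eraser']),
  points: z.array(z.number()),
});

// Migrate old objects to the new discriminated types as they are parsed.
const zMigratedLine = zOldLine.transform((o) =>
  o.tool === 'brush'
    ? { type: 'brush_line' as const, points: o.points }
    : { type: 'eraser_line' as const, points: o.points }
);

const zCanvasObject = z.union([zBrushLine, zEraserLine, zRect, zMigratedLine]);
```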
336 changed files with 8167 additions and 14720 deletions

View File

@@ -13,12 +13,6 @@ on:
tags:
- 'v*.*.*'
workflow_dispatch:
inputs:
push-to-registry:
description: Push the built image to the container registry
required: false
type: boolean
default: false
permissions:
contents: write
@@ -56,15 +50,16 @@ jobs:
df -h
- name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v3
- name: Docker meta
id: meta
uses: docker/metadata-action@v5
uses: docker/metadata-action@v4
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
images: |
ghcr.io/${{ github.repository }}
${{ env.DOCKERHUB_REPOSITORY }}
tags: |
type=ref,event=branch
type=ref,event=tag
@@ -77,33 +72,49 @@ jobs:
suffix=-${{ matrix.gpu-driver }},onlatest=false
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
uses: docker/setup-qemu-action@v2
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
uses: docker/setup-buildx-action@v2
with:
platforms: ${{ env.PLATFORMS }}
- name: Login to GitHub Container Registry
if: github.event_name != 'pull_request'
uses: docker/login-action@v3
uses: docker/login-action@v2
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
# - name: Login to Docker Hub
# if: github.event_name != 'pull_request' && vars.DOCKERHUB_REPOSITORY != ''
# uses: docker/login-action@v2
# with:
# username: ${{ secrets.DOCKERHUB_USERNAME }}
# password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Build container
timeout-minutes: 40
id: docker_build
uses: docker/build-push-action@v6
uses: docker/build-push-action@v4
with:
context: .
file: docker/Dockerfile
platforms: ${{ env.PLATFORMS }}
push: ${{ github.ref == 'refs/heads/main' || github.ref_type == 'tag' || github.event.inputs.push-to-registry }}
push: ${{ github.ref == 'refs/heads/main' || github.ref_type == 'tag' }}
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
cache-from: |
type=gha,scope=${{ github.ref_name }}-${{ matrix.gpu-driver }}
type=gha,scope=main-${{ matrix.gpu-driver }}
cache-to: type=gha,mode=max,scope=${{ github.ref_name }}-${{ matrix.gpu-driver }}
# - name: Docker Hub Description
# if: github.ref == 'refs/heads/main' || github.ref == 'refs/tags/*' && vars.DOCKERHUB_REPOSITORY != ''
# uses: peter-evans/dockerhub-description@v3
# with:
# username: ${{ secrets.DOCKERHUB_USERNAME }}
# password: ${{ secrets.DOCKERHUB_TOKEN }}
# repository: ${{ vars.DOCKERHUB_REPOSITORY }}
# short-description: ${{ github.event.repository.description }}

View File

@@ -196,22 +196,6 @@ tips to reduce the problem:
=== "12GB VRAM GPU"
This should be sufficient to generate larger images up to about 1280x1280.
## Checkpoint Models Load Slowly or Use Too Much RAM
The difference between diffusers models (a folder containing multiple
subfolders) and checkpoint models (a file ending with .safetensors or
.ckpt) is that InvokeAI is able to load diffusers models into memory
incrementally, while checkpoint models must be loaded all at
once. With very large models, or systems with limited RAM, you may
experience slowdowns and other memory-related issues when loading
checkpoint models.
To solve this, go to the Model Manager tab (the cube), select the
checkpoint model that's giving you trouble, and press the "Convert"
button in the upper right of your browser window. This will convert the
checkpoint into a diffusers model, after which loading should be
faster and less memory-intensive.
## Memory Leak (Linux)

View File

@@ -3,10 +3,8 @@
import io
import pathlib
import shutil
import traceback
from copy import deepcopy
from enum import Enum
from tempfile import TemporaryDirectory
from typing import List, Optional, Type
@@ -19,7 +17,6 @@ from starlette.exceptions import HTTPException
from typing_extensions import Annotated
from invokeai.app.api.dependencies import ApiDependencies
from invokeai.app.services.config import get_config
from invokeai.app.services.model_images.model_images_common import ModelImageFileNotFoundException
from invokeai.app.services.model_install.model_install_common import ModelInstallJob
from invokeai.app.services.model_records import (
@@ -34,7 +31,6 @@ from invokeai.backend.model_manager.config import (
ModelFormat,
ModelType,
)
from invokeai.backend.model_manager.load.model_cache.model_cache_base import CacheStats
from invokeai.backend.model_manager.metadata.fetch.huggingface import HuggingFaceMetadataFetch
from invokeai.backend.model_manager.metadata.metadata_base import ModelMetadataWithFiles, UnknownMetadataException
from invokeai.backend.model_manager.search import ModelSearch
@@ -54,13 +50,6 @@ class ModelsList(BaseModel):
model_config = ConfigDict(use_enum_values=True)
class CacheType(str, Enum):
"""Cache type - one of vram or ram."""
RAM = "RAM"
VRAM = "VRAM"
def add_cover_image_to_model_config(config: AnyModelConfig, dependencies: Type[ApiDependencies]) -> AnyModelConfig:
"""Add a cover image URL to a model configuration."""
cover_image = dependencies.invoker.services.model_images.get_url(config.key)
@@ -808,83 +797,3 @@ async def get_starter_models() -> list[StarterModel]:
model.dependencies = missing_deps
return starter_models
@model_manager_router.get(
"/model_cache",
operation_id="get_cache_size",
response_model=float,
summary="Get maximum size of model manager RAM or VRAM cache.",
)
async def get_cache_size(cache_type: CacheType = Query(description="The cache type", default=CacheType.RAM)) -> float:
"""Return the current RAM or VRAM cache size setting (in GB)."""
cache = ApiDependencies.invoker.services.model_manager.load.ram_cache
value = 0.0
if cache_type == CacheType.RAM:
value = cache.max_cache_size
elif cache_type == CacheType.VRAM:
value = cache.max_vram_cache_size
return value
@model_manager_router.put(
"/model_cache",
operation_id="set_cache_size",
response_model=float,
summary="Set maximum size of model manager RAM or VRAM cache, optionally writing new value out to invokeai.yaml config file.",
)
async def set_cache_size(
value: float = Query(description="The new value for the maximum cache size"),
cache_type: CacheType = Query(description="The cache type", default=CacheType.RAM),
persist: bool = Query(description="Write new value out to invokeai.yaml", default=False),
) -> float:
"""Set the current RAM or VRAM cache size setting (in GB). ."""
cache = ApiDependencies.invoker.services.model_manager.load.ram_cache
app_config = get_config()
# Record initial state.
vram_old = app_config.vram
ram_old = app_config.ram
# Prepare target state.
vram_new = vram_old
ram_new = ram_old
if cache_type == CacheType.RAM:
ram_new = value
elif cache_type == CacheType.VRAM:
vram_new = value
else:
raise ValueError(f"Unexpected {cache_type=}.")
config_path = app_config.config_file_path
new_config_path = config_path.with_suffix(".yaml.new")
try:
# Try to apply the target state.
cache.max_vram_cache_size = vram_new
cache.max_cache_size = ram_new
app_config.ram = ram_new
app_config.vram = vram_new
if persist:
app_config.write_file(new_config_path)
shutil.move(new_config_path, config_path)
except Exception as e:
# If there was a failure, restore the initial state.
cache.max_cache_size = ram_old
cache.max_vram_cache_size = vram_old
app_config.ram = ram_old
app_config.vram = vram_old
raise RuntimeError("Failed to update cache size") from e
return value
@model_manager_router.get(
"/stats",
operation_id="get_stats",
response_model=Optional[CacheStats],
summary="Get model manager RAM cache performance statistics.",
)
async def get_stats() -> Optional[CacheStats]:
"""Return performance statistics on the model manager's RAM cache. Will return null if no models have been loaded."""
return ApiDependencies.invoker.services.model_manager.load.ram_cache.stats

View File

@@ -11,7 +11,7 @@ from invokeai.app.services.session_queue.session_queue_common import (
Batch,
BatchStatus,
CancelByBatchIDsResult,
CancelByDestinationResult,
CancelByOriginResult,
ClearResult,
EnqueueBatchResult,
PruneResult,
@@ -107,18 +107,16 @@ async def cancel_by_batch_ids(
@session_queue_router.put(
"/{queue_id}/cancel_by_destination",
operation_id="cancel_by_destination",
"/{queue_id}/cancel_by_origin",
operation_id="cancel_by_origin",
responses={200: {"model": CancelByBatchIDsResult}},
)
async def cancel_by_destination(
async def cancel_by_origin(
queue_id: str = Path(description="The queue id to perform this operation on"),
destination: str = Query(description="The destination to cancel all queue items for"),
) -> CancelByDestinationResult:
origin: str = Query(description="The origin to cancel all queue items for"),
) -> CancelByOriginResult:
"""Immediately cancels all queue items with the given origin"""
return ApiDependencies.invoker.services.session_queue.cancel_by_destination(
queue_id=queue_id, destination=destination
)
return ApiDependencies.invoker.services.session_queue.cancel_by_origin(queue_id=queue_id, origin=origin)
@session_queue_router.put(

View File

@@ -19,8 +19,7 @@ from invokeai.app.invocations.model import CLIPField
from invokeai.app.invocations.primitives import ConditioningOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.app.util.ti_utils import generate_ti_list
from invokeai.backend.lora.lora_model_raw import LoRAModelRaw
from invokeai.backend.lora.lora_patcher import LoraPatcher
from invokeai.backend.lora import LoRAModelRaw
from invokeai.backend.model_patcher import ModelPatcher
from invokeai.backend.stable_diffusion.diffusion.conditioning_data import (
BasicConditioningInfo,
@@ -83,10 +82,9 @@ class CompelInvocation(BaseInvocation):
# apply all patches while the model is on the target device
text_encoder_info.model_on_device() as (cached_weights, text_encoder),
tokenizer_info as tokenizer,
LoraPatcher.apply_lora_patches(
model=text_encoder,
patches=_lora_loader(),
prefix="lora_te_",
ModelPatcher.apply_lora_text_encoder(
text_encoder,
loras=_lora_loader(),
cached_weights=cached_weights,
),
# Apply CLIP Skip after LoRA to prevent LoRA application from failing on skipped layers.
@@ -179,9 +177,9 @@ class SDXLPromptInvocationBase:
# apply all patches while the model is on the target device
text_encoder_info.model_on_device() as (cached_weights, text_encoder),
tokenizer_info as tokenizer,
LoraPatcher.apply_lora_patches(
ModelPatcher.apply_lora(
text_encoder,
patches=_lora_loader(),
loras=_lora_loader(),
prefix=lora_prefix,
cached_weights=cached_weights,
),

View File

@@ -36,8 +36,7 @@ from invokeai.app.invocations.t2i_adapter import T2IAdapterField
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.app.util.controlnet_utils import prepare_control_image
from invokeai.backend.ip_adapter.ip_adapter import IPAdapter
from invokeai.backend.lora.lora_model_raw import LoRAModelRaw
from invokeai.backend.lora.lora_patcher import LoraPatcher
from invokeai.backend.lora import LoRAModelRaw
from invokeai.backend.model_manager import BaseModelType, ModelVariantType
from invokeai.backend.model_patcher import ModelPatcher
from invokeai.backend.stable_diffusion import PipelineIntermediateState
@@ -186,7 +185,7 @@ class DenoiseLatentsInvocation(BaseInvocation):
)
denoise_mask: Optional[DenoiseMaskField] = InputField(
default=None,
description=FieldDescriptions.denoise_mask,
description=FieldDescriptions.mask,
input=Input.Connection,
ui_order=8,
)
@@ -980,10 +979,9 @@ class DenoiseLatentsInvocation(BaseInvocation):
ModelPatcher.apply_freeu(unet, self.unet.freeu_config),
SeamlessExt.static_patch_model(unet, self.unet.seamless_axes), # FIXME
# Apply the LoRA after unet has been moved to its target device for faster patching.
LoraPatcher.apply_lora_patches(
model=unet,
patches=_lora_loader(),
prefix="lora_unet_",
ModelPatcher.apply_lora_unet(
unet,
loras=_lora_loader(),
cached_weights=cached_weights,
),
):

View File

@@ -181,7 +181,7 @@ class FieldDescriptions:
)
num_1 = "The first number"
num_2 = "The second number"
denoise_mask = "A mask of the region to apply the denoising process to."
mask = "The mask to use for the operation"
board = "The board to save the image to"
image = "The image to process"
tile_size = "Tile size"

View File

@@ -1,267 +0,0 @@
from typing import Callable, Iterator, Optional, Tuple
import torch
import torchvision.transforms as tv_transforms
from torchvision.transforms.functional import resize as tv_resize
from invokeai.app.invocations.baseinvocation import BaseInvocation, Classification, invocation
from invokeai.app.invocations.fields import (
DenoiseMaskField,
FieldDescriptions,
FluxConditioningField,
Input,
InputField,
LatentsField,
WithBoard,
WithMetadata,
)
from invokeai.app.invocations.model import TransformerField
from invokeai.app.invocations.primitives import LatentsOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.flux.denoise import denoise
from invokeai.backend.flux.inpaint_extension import InpaintExtension
from invokeai.backend.flux.model import Flux
from invokeai.backend.flux.sampling_utils import (
clip_timestep_schedule,
generate_img_ids,
get_noise,
get_schedule,
pack,
unpack,
)
from invokeai.backend.lora.lora_model_raw import LoRAModelRaw
from invokeai.backend.lora.lora_patcher import LoraPatcher
from invokeai.backend.stable_diffusion.diffusers_pipeline import PipelineIntermediateState
from invokeai.backend.stable_diffusion.diffusion.conditioning_data import FLUXConditioningInfo
from invokeai.backend.util.devices import TorchDevice
@invocation(
"flux_denoise",
title="FLUX Denoise",
tags=["image", "flux"],
category="image",
version="1.0.0",
classification=Classification.Prototype,
)
class FluxDenoiseInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Run denoising process with a FLUX transformer model."""
# If latents is provided, this means we are doing image-to-image.
latents: Optional[LatentsField] = InputField(
default=None,
description=FieldDescriptions.latents,
input=Input.Connection,
)
# denoise_mask is used for image-to-image inpainting. Only the masked region is modified.
denoise_mask: Optional[DenoiseMaskField] = InputField(
default=None,
description=FieldDescriptions.denoise_mask,
input=Input.Connection,
)
denoising_start: float = InputField(
default=0.0,
ge=0,
le=1,
description=FieldDescriptions.denoising_start,
)
denoising_end: float = InputField(default=1.0, ge=0, le=1, description=FieldDescriptions.denoising_end)
transformer: TransformerField = InputField(
description=FieldDescriptions.flux_model,
input=Input.Connection,
title="Transformer",
)
positive_text_conditioning: FluxConditioningField = InputField(
description=FieldDescriptions.positive_cond, input=Input.Connection
)
width: int = InputField(default=1024, multiple_of=16, description="Width of the generated image.")
height: int = InputField(default=1024, multiple_of=16, description="Height of the generated image.")
num_steps: int = InputField(
default=4, description="Number of diffusion steps. Recommended values are schnell: 4, dev: 50."
)
guidance: float = InputField(
default=4.0,
description="The guidance strength. Higher values adhere more strictly to the prompt, and will produce less diverse images. FLUX dev only, ignored for schnell.",
)
seed: int = InputField(default=0, description="Randomness seed for reproducibility.")
@torch.no_grad()
def invoke(self, context: InvocationContext) -> LatentsOutput:
latents = self._run_diffusion(context)
latents = latents.detach().to("cpu")
name = context.tensors.save(tensor=latents)
return LatentsOutput.build(latents_name=name, latents=latents, seed=None)
def _run_diffusion(
self,
context: InvocationContext,
):
inference_dtype = torch.bfloat16
# Load the conditioning data.
cond_data = context.conditioning.load(self.positive_text_conditioning.conditioning_name)
assert len(cond_data.conditionings) == 1
flux_conditioning = cond_data.conditionings[0]
assert isinstance(flux_conditioning, FLUXConditioningInfo)
flux_conditioning = flux_conditioning.to(dtype=inference_dtype)
t5_embeddings = flux_conditioning.t5_embeds
clip_embeddings = flux_conditioning.clip_embeds
# Load the input latents, if provided.
init_latents = context.tensors.load(self.latents.latents_name) if self.latents else None
if init_latents is not None:
init_latents = init_latents.to(device=TorchDevice.choose_torch_device(), dtype=inference_dtype)
# Prepare input noise.
noise = get_noise(
num_samples=1,
height=self.height,
width=self.width,
device=TorchDevice.choose_torch_device(),
dtype=inference_dtype,
seed=self.seed,
)
transformer_info = context.models.load(self.transformer.transformer)
is_schnell = "schnell" in transformer_info.config.config_path
# Calculate the timestep schedule.
image_seq_len = noise.shape[-1] * noise.shape[-2] // 4
timesteps = get_schedule(
num_steps=self.num_steps,
image_seq_len=image_seq_len,
shift=not is_schnell,
)
# Clip the timesteps schedule based on denoising_start and denoising_end.
timesteps = clip_timestep_schedule(timesteps, self.denoising_start, self.denoising_end)
# Prepare input latent image.
if init_latents is not None:
# If init_latents is provided, we are doing image-to-image.
if is_schnell:
context.logger.warning(
"Running image-to-image with a FLUX schnell model. This is not recommended. The results are likely "
"to be poor. Consider using a FLUX dev model instead."
)
# Noise the orig_latents by the appropriate amount for the first timestep.
t_0 = timesteps[0]
x = t_0 * noise + (1.0 - t_0) * init_latents
else:
# init_latents are not provided, so we are not doing image-to-image (i.e. we are starting from pure noise).
if self.denoising_start > 1e-5:
raise ValueError("denoising_start should be 0 when initial latents are not provided.")
x = noise
# If len(timesteps) == 1, then short-circuit. We are just noising the input latents, but not taking any
# denoising steps.
if len(timesteps) <= 1:
return x
inpaint_mask = self._prep_inpaint_mask(context, x)
b, _c, h, w = x.shape
img_ids = generate_img_ids(h=h, w=w, batch_size=b, device=x.device, dtype=x.dtype)
bs, t5_seq_len, _ = t5_embeddings.shape
txt_ids = torch.zeros(bs, t5_seq_len, 3, dtype=inference_dtype, device=TorchDevice.choose_torch_device())
# Pack all latent tensors.
init_latents = pack(init_latents) if init_latents is not None else None
inpaint_mask = pack(inpaint_mask) if inpaint_mask is not None else None
noise = pack(noise)
x = pack(x)
# Now that we have 'packed' the latent tensors, verify that we calculated the image_seq_len correctly.
assert image_seq_len == x.shape[1]
# Prepare inpaint extension.
inpaint_extension: InpaintExtension | None = None
if inpaint_mask is not None:
assert init_latents is not None
inpaint_extension = InpaintExtension(
init_latents=init_latents,
inpaint_mask=inpaint_mask,
noise=noise,
)
with (
transformer_info.model_on_device() as (cached_weights, transformer),
# Apply the LoRA after transformer has been moved to its target device for faster patching.
LoraPatcher.apply_lora_patches(
model=transformer,
patches=self._lora_iterator(context),
prefix="",
cached_weights=cached_weights,
),
):
assert isinstance(transformer, Flux)
x = denoise(
model=transformer,
img=x,
img_ids=img_ids,
txt=t5_embeddings,
txt_ids=txt_ids,
vec=clip_embeddings,
timesteps=timesteps,
step_callback=self._build_step_callback(context),
guidance=self.guidance,
inpaint_extension=inpaint_extension,
)
x = unpack(x.float(), self.height, self.width)
return x
def _prep_inpaint_mask(self, context: InvocationContext, latents: torch.Tensor) -> torch.Tensor | None:
"""Prepare the inpaint mask.
- Loads the mask
- Resizes if necessary
- Casts to same device/dtype as latents
- Expands mask to the same shape as latents so that they line up after 'packing'
Args:
context (InvocationContext): The invocation context, for loading the inpaint mask.
latents (torch.Tensor): A latent image tensor. In 'unpacked' format. Used to determine the target shape,
device, and dtype for the inpaint mask.
Returns:
torch.Tensor | None: Inpaint mask.
"""
if self.denoise_mask is None:
return None
mask = context.tensors.load(self.denoise_mask.mask_name)
_, _, latent_height, latent_width = latents.shape
mask = tv_resize(
img=mask,
size=[latent_height, latent_width],
interpolation=tv_transforms.InterpolationMode.BILINEAR,
antialias=False,
)
mask = mask.to(device=latents.device, dtype=latents.dtype)
# Expand the inpaint mask to the same shape as `latents` so that when we 'pack' `mask` it lines up with
# `latents`.
return mask.expand_as(latents)
def _lora_iterator(self, context: InvocationContext) -> Iterator[Tuple[LoRAModelRaw, float]]:
for lora in self.transformer.loras:
lora_info = context.models.load(lora.lora)
assert isinstance(lora_info.model, LoRAModelRaw)
yield (lora_info.model, lora.weight)
del lora_info
def _build_step_callback(self, context: InvocationContext) -> Callable[[PipelineIntermediateState], None]:
def step_callback(state: PipelineIntermediateState) -> None:
state.latents = unpack(state.latents.float(), self.height, self.width).squeeze()
context.util.flux_step_callback(state)
return step_callback

View File

@@ -1,53 +0,0 @@
from invokeai.app.invocations.baseinvocation import BaseInvocation, BaseInvocationOutput, invocation, invocation_output
from invokeai.app.invocations.fields import FieldDescriptions, Input, InputField, OutputField, UIType
from invokeai.app.invocations.model import LoRAField, ModelIdentifierField, TransformerField
from invokeai.app.services.shared.invocation_context import InvocationContext
@invocation_output("flux_lora_loader_output")
class FluxLoRALoaderOutput(BaseInvocationOutput):
"""FLUX LoRA Loader Output"""
transformer: TransformerField = OutputField(
default=None, description=FieldDescriptions.transformer, title="FLUX Transformer"
)
@invocation(
"flux_lora_loader",
title="FLUX LoRA",
tags=["lora", "model", "flux"],
category="model",
version="1.0.0",
)
class FluxLoRALoaderInvocation(BaseInvocation):
"""Apply a LoRA model to a FLUX transformer."""
lora: ModelIdentifierField = InputField(
description=FieldDescriptions.lora_model, title="LoRA", ui_type=UIType.LoRAModel
)
weight: float = InputField(default=0.75, description=FieldDescriptions.lora_weight)
transformer: TransformerField = InputField(
description=FieldDescriptions.transformer,
input=Input.Connection,
title="FLUX Transformer",
)
def invoke(self, context: InvocationContext) -> FluxLoRALoaderOutput:
lora_key = self.lora.key
if not context.models.exists(lora_key):
raise ValueError(f"Unknown lora: {lora_key}!")
if any(lora.lora.key == lora_key for lora in self.transformer.loras):
raise Exception(f'LoRA "{lora_key}" already applied to transformer.')
transformer = self.transformer.model_copy(deep=True)
transformer.loras.append(
LoRAField(
lora=self.lora,
weight=self.weight,
)
)
return FluxLoRALoaderOutput(transformer=transformer)

View File

@@ -0,0 +1,169 @@
import torch
from einops import rearrange
from PIL import Image
from invokeai.app.invocations.baseinvocation import BaseInvocation, Classification, invocation
from invokeai.app.invocations.fields import (
FieldDescriptions,
FluxConditioningField,
Input,
InputField,
WithBoard,
WithMetadata,
)
from invokeai.app.invocations.model import TransformerField, VAEField
from invokeai.app.invocations.primitives import ImageOutput
from invokeai.app.services.session_processor.session_processor_common import CanceledException
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.flux.model import Flux
from invokeai.backend.flux.modules.autoencoder import AutoEncoder
from invokeai.backend.flux.sampling import denoise, get_noise, get_schedule, prepare_latent_img_patches, unpack
from invokeai.backend.stable_diffusion.diffusion.conditioning_data import FLUXConditioningInfo
from invokeai.backend.util.devices import TorchDevice
@invocation(
"flux_text_to_image",
title="FLUX Text to Image",
tags=["image", "flux"],
category="image",
version="1.0.0",
classification=Classification.Prototype,
)
class FluxTextToImageInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Text-to-image generation using a FLUX model."""
transformer: TransformerField = InputField(
description=FieldDescriptions.flux_model,
input=Input.Connection,
title="Transformer",
)
vae: VAEField = InputField(
description=FieldDescriptions.vae,
input=Input.Connection,
)
positive_text_conditioning: FluxConditioningField = InputField(
description=FieldDescriptions.positive_cond, input=Input.Connection
)
width: int = InputField(default=1024, multiple_of=16, description="Width of the generated image.")
height: int = InputField(default=1024, multiple_of=16, description="Height of the generated image.")
num_steps: int = InputField(
default=4, description="Number of diffusion steps. Recommended values are schnell: 4, dev: 50."
)
guidance: float = InputField(
default=4.0,
description="The guidance strength. Higher values adhere more strictly to the prompt, and will produce less diverse images. FLUX dev only, ignored for schnell.",
)
seed: int = InputField(default=0, description="Randomness seed for reproducibility.")
@torch.no_grad()
def invoke(self, context: InvocationContext) -> ImageOutput:
latents = self._run_diffusion(context)
image = self._run_vae_decoding(context, latents)
image_dto = context.images.save(image=image)
return ImageOutput.build(image_dto)
def _run_diffusion(
self,
context: InvocationContext,
):
inference_dtype = torch.bfloat16
# Load the conditioning data.
cond_data = context.conditioning.load(self.positive_text_conditioning.conditioning_name)
assert len(cond_data.conditionings) == 1
flux_conditioning = cond_data.conditionings[0]
assert isinstance(flux_conditioning, FLUXConditioningInfo)
flux_conditioning = flux_conditioning.to(dtype=inference_dtype)
t5_embeddings = flux_conditioning.t5_embeds
clip_embeddings = flux_conditioning.clip_embeds
transformer_info = context.models.load(self.transformer.transformer)
# Prepare input noise.
x = get_noise(
num_samples=1,
height=self.height,
width=self.width,
device=TorchDevice.choose_torch_device(),
dtype=inference_dtype,
seed=self.seed,
)
x, img_ids = prepare_latent_img_patches(x)
is_schnell = "schnell" in transformer_info.config.config_path
timesteps = get_schedule(
num_steps=self.num_steps,
image_seq_len=x.shape[1],
shift=not is_schnell,
)
bs, t5_seq_len, _ = t5_embeddings.shape
txt_ids = torch.zeros(bs, t5_seq_len, 3, dtype=inference_dtype, device=TorchDevice.choose_torch_device())
with transformer_info as transformer:
assert isinstance(transformer, Flux)
def step_callback() -> None:
if context.util.is_canceled():
raise CanceledException
# TODO: Make this look like the image before re-enabling
# latent_image = unpack(img.float(), self.height, self.width)
# latent_image = latent_image.squeeze() # Remove unnecessary dimensions
# flattened_tensor = latent_image.reshape(-1) # Flatten to shape [48*128*128]
# # Create a new tensor of the required shape [255, 255, 3]
# latent_image = flattened_tensor[: 255 * 255 * 3].reshape(255, 255, 3) # Reshape to RGB format
# # Convert to a NumPy array and then to a PIL Image
# image = Image.fromarray(latent_image.cpu().numpy().astype(np.uint8))
# (width, height) = image.size
# width *= 8
# height *= 8
# dataURL = image_to_dataURL(image, image_format="JPEG")
# # TODO: move this whole function to invocation context to properly reference these variables
# context._services.events.emit_invocation_denoise_progress(
# context._data.queue_item,
# context._data.invocation,
# state,
# ProgressImage(dataURL=dataURL, width=width, height=height),
# )
x = denoise(
model=transformer,
img=x,
img_ids=img_ids,
txt=t5_embeddings,
txt_ids=txt_ids,
vec=clip_embeddings,
timesteps=timesteps,
step_callback=step_callback,
guidance=self.guidance,
)
x = unpack(x.float(), self.height, self.width)
return x
def _run_vae_decoding(
self,
context: InvocationContext,
latents: torch.Tensor,
) -> Image.Image:
vae_info = context.models.load(self.vae.vae)
with vae_info as vae:
assert isinstance(vae, AutoEncoder)
latents = latents.to(dtype=TorchDevice.choose_torch_dtype())
img = vae.decode(latents)
img = img.clamp(-1, 1)
img = rearrange(img[0], "c h w -> h w c")
img_pil = Image.fromarray((127.5 * (img + 1.0)).byte().cpu().numpy())
return img_pil

View File

@@ -1,60 +0,0 @@
import torch
from einops import rearrange
from PIL import Image
from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
from invokeai.app.invocations.fields import (
FieldDescriptions,
Input,
InputField,
LatentsField,
WithBoard,
WithMetadata,
)
from invokeai.app.invocations.model import VAEField
from invokeai.app.invocations.primitives import ImageOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.flux.modules.autoencoder import AutoEncoder
from invokeai.backend.model_manager.load.load_base import LoadedModel
from invokeai.backend.util.devices import TorchDevice
@invocation(
"flux_vae_decode",
title="FLUX Latents to Image",
tags=["latents", "image", "vae", "l2i", "flux"],
category="latents",
version="1.0.0",
)
class FluxVaeDecodeInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Generates an image from latents."""
latents: LatentsField = InputField(
description=FieldDescriptions.latents,
input=Input.Connection,
)
vae: VAEField = InputField(
description=FieldDescriptions.vae,
input=Input.Connection,
)
def _vae_decode(self, vae_info: LoadedModel, latents: torch.Tensor) -> Image.Image:
with vae_info as vae:
assert isinstance(vae, AutoEncoder)
latents = latents.to(device=TorchDevice.choose_torch_device(), dtype=TorchDevice.choose_torch_dtype())
img = vae.decode(latents)
img = img.clamp(-1, 1)
img = rearrange(img[0], "c h w -> h w c") # noqa: F821
img_pil = Image.fromarray((127.5 * (img + 1.0)).byte().cpu().numpy())
return img_pil
@torch.no_grad()
def invoke(self, context: InvocationContext) -> ImageOutput:
latents = context.tensors.load(self.latents.latents_name)
vae_info = context.models.load(self.vae.vae)
image = self._vae_decode(vae_info=vae_info, latents=latents)
TorchDevice.empty_cache()
image_dto = context.images.save(image=image)
return ImageOutput.build(image_dto)

View File

@@ -1,67 +0,0 @@
import einops
import torch
from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
from invokeai.app.invocations.fields import (
FieldDescriptions,
ImageField,
Input,
InputField,
)
from invokeai.app.invocations.model import VAEField
from invokeai.app.invocations.primitives import LatentsOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.flux.modules.autoencoder import AutoEncoder
from invokeai.backend.model_manager import LoadedModel
from invokeai.backend.stable_diffusion.diffusers_pipeline import image_resized_to_grid_as_tensor
from invokeai.backend.util.devices import TorchDevice
@invocation(
"flux_vae_encode",
title="FLUX Image to Latents",
tags=["latents", "image", "vae", "i2l", "flux"],
category="latents",
version="1.0.0",
)
class FluxVaeEncodeInvocation(BaseInvocation):
"""Encodes an image into latents."""
image: ImageField = InputField(
description="The image to encode.",
)
vae: VAEField = InputField(
description=FieldDescriptions.vae,
input=Input.Connection,
)
@staticmethod
def vae_encode(vae_info: LoadedModel, image_tensor: torch.Tensor) -> torch.Tensor:
# TODO(ryand): Expose seed parameter at the invocation level.
# TODO(ryand): Write a util function for generating random tensors that is consistent across devices / dtypes.
# There's a starting point in get_noise(...), but it needs to be extracted and generalized. This function
# should be used for VAE encode sampling.
generator = torch.Generator(device=TorchDevice.choose_torch_device()).manual_seed(0)
with vae_info as vae:
assert isinstance(vae, AutoEncoder)
image_tensor = image_tensor.to(
device=TorchDevice.choose_torch_device(), dtype=TorchDevice.choose_torch_dtype()
)
latents = vae.encode(image_tensor, sample=True, generator=generator)
return latents
@torch.no_grad()
def invoke(self, context: InvocationContext) -> LatentsOutput:
image = context.images.get_pil(self.image.image_name)
vae_info = context.models.load(self.vae.vae)
image_tensor = image_resized_to_grid_as_tensor(image.convert("RGB"))
if image_tensor.dim() == 3:
image_tensor = einops.rearrange(image_tensor, "c h w -> 1 c h w")
latents = self.vae_encode(vae_info=vae_info, image_tensor=image_tensor)
latents = latents.to("cpu")
name = context.tensors.save(tensor=latents)
return LatentsOutput.build(latents_name=name, latents=latents, seed=None)

View File

@@ -126,7 +126,7 @@ class ImageMaskToTensorInvocation(BaseInvocation, WithMetadata):
title="Tensor Mask to Image",
tags=["mask"],
category="mask",
version="1.1.0",
version="1.0.0",
)
class MaskTensorToImageInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Convert a mask tensor to an image."""
@@ -135,11 +135,6 @@ class MaskTensorToImageInvocation(BaseInvocation, WithMetadata, WithBoard):
def invoke(self, context: InvocationContext) -> ImageOutput:
mask = context.tensors.load(self.mask.tensor_name)
# Squeeze the channel dimension if it exists.
if mask.dim() == 3:
mask = mask.squeeze(0)
# Ensure that the mask is binary.
if mask.dtype != torch.bool:
mask = mask > 0.5

View File

@@ -69,7 +69,6 @@ class CLIPField(BaseModel):
class TransformerField(BaseModel):
transformer: ModelIdentifierField = Field(description="Info to load Transformer submodel")
loras: List[LoRAField] = Field(description="LoRAs to apply on model loading")
class T5EncoderField(BaseModel):
@@ -203,7 +202,7 @@ class FluxModelLoaderInvocation(BaseInvocation):
assert isinstance(transformer_config, CheckpointConfigBase)
return FluxModelLoaderOutput(
transformer=TransformerField(transformer=transformer, loras=[]),
transformer=TransformerField(transformer=transformer),
clip=CLIPField(tokenizer=tokenizer, text_encoder=clip_encoder, loras=[], skipped_layers=0),
t5_encoder=T5EncoderField(tokenizer=tokenizer2, text_encoder=t5_encoder),
vae=VAEField(vae=vae),

View File

@@ -22,8 +22,8 @@ from invokeai.app.invocations.fields import (
from invokeai.app.invocations.model import UNetField
from invokeai.app.invocations.primitives import LatentsOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.lora.lora_model_raw import LoRAModelRaw
from invokeai.backend.lora.lora_patcher import LoraPatcher
from invokeai.backend.lora import LoRAModelRaw
from invokeai.backend.model_patcher import ModelPatcher
from invokeai.backend.stable_diffusion.diffusers_pipeline import ControlNetData, PipelineIntermediateState
from invokeai.backend.stable_diffusion.multi_diffusion_pipeline import (
MultiDiffusionPipeline,
@@ -204,11 +204,7 @@ class TiledMultiDiffusionDenoiseLatents(BaseInvocation):
# Load the UNet model.
unet_info = context.models.load(self.unet.unet)
with (
ExitStack() as exit_stack,
unet_info as unet,
LoraPatcher.apply_lora_patches(model=unet, patches=_lora_loader(), prefix="lora_unet_"),
):
with ExitStack() as exit_stack, unet_info as unet, ModelPatcher.apply_lora_unet(unet, _lora_loader()):
assert isinstance(unet, UNet2DConditionModel)
latents = latents.to(device=unet.device, dtype=unet.dtype)
if noise is not None:

View File

@@ -103,7 +103,7 @@ class HFModelSource(StringLikeSource):
if self.variant:
base += f":{self.variant or ''}"
if self.subfolder:
base += f"::{self.subfolder.as_posix()}"
base += f":{self.subfolder}"
return base

View File

@@ -6,7 +6,7 @@ from invokeai.app.services.session_queue.session_queue_common import (
Batch,
BatchStatus,
CancelByBatchIDsResult,
CancelByDestinationResult,
CancelByOriginResult,
CancelByQueueIDResult,
ClearResult,
EnqueueBatchResult,
@@ -97,8 +97,8 @@ class SessionQueueBase(ABC):
pass
@abstractmethod
def cancel_by_destination(self, queue_id: str, destination: str) -> CancelByDestinationResult:
"""Cancels all queue items with the given batch destination"""
def cancel_by_origin(self, queue_id: str, origin: str) -> CancelByOriginResult:
"""Cancels all queue items with the given batch origin"""
pass
@abstractmethod

View File

@@ -346,10 +346,10 @@ class CancelByBatchIDsResult(BaseModel):
canceled: int = Field(..., description="Number of queue items canceled")
class CancelByDestinationResult(CancelByBatchIDsResult):
"""Result of canceling by a destination"""
class CancelByOriginResult(BaseModel):
"""Result of canceling by list of batch ids"""
pass
canceled: int = Field(..., description="Number of queue items canceled")
class CancelByQueueIDResult(CancelByBatchIDsResult):

View File

@@ -10,7 +10,7 @@ from invokeai.app.services.session_queue.session_queue_common import (
Batch,
BatchStatus,
CancelByBatchIDsResult,
CancelByDestinationResult,
CancelByOriginResult,
CancelByQueueIDResult,
ClearResult,
EnqueueBatchResult,
@@ -426,19 +426,19 @@ class SqliteSessionQueue(SessionQueueBase):
self.__lock.release()
return CancelByBatchIDsResult(canceled=count)
def cancel_by_destination(self, queue_id: str, destination: str) -> CancelByDestinationResult:
def cancel_by_origin(self, queue_id: str, origin: str) -> CancelByOriginResult:
try:
current_queue_item = self.get_current(queue_id)
self.__lock.acquire()
where = """--sql
WHERE
queue_id == ?
AND destination == ?
AND origin == ?
AND status != 'canceled'
AND status != 'completed'
AND status != 'failed'
"""
params = (queue_id, destination)
params = (queue_id, origin)
self.__cursor.execute(
f"""--sql
SELECT COUNT(*)
@@ -457,14 +457,14 @@ class SqliteSessionQueue(SessionQueueBase):
params,
)
self.__conn.commit()
if current_queue_item is not None and current_queue_item.destination == destination:
if current_queue_item is not None and current_queue_item.origin == origin:
self._set_queue_item_status(current_queue_item.item_id, "canceled")
except Exception:
self.__conn.rollback()
raise
finally:
self.__lock.release()
return CancelByDestinationResult(canceled=count)
return CancelByOriginResult(canceled=count)
def cancel_by_queue_id(self, queue_id: str) -> CancelByQueueIDResult:
try:

View File

@@ -14,7 +14,7 @@ from invokeai.app.services.image_records.image_records_common import ImageCatego
from invokeai.app.services.images.images_common import ImageDTO
from invokeai.app.services.invocation_services import InvocationServices
from invokeai.app.services.model_records.model_records_base import UnknownModelException
from invokeai.app.util.step_callback import flux_step_callback, stable_diffusion_step_callback
from invokeai.app.util.step_callback import stable_diffusion_step_callback
from invokeai.backend.model_manager.config import (
AnyModel,
AnyModelConfig,
@@ -557,24 +557,6 @@ class UtilInterface(InvocationContextInterface):
is_canceled=self.is_canceled,
)
def flux_step_callback(self, intermediate_state: PipelineIntermediateState) -> None:
"""
The step callback emits a progress event with the current step, the total number of
steps, a preview image, and some other internal metadata.
This should be called after each denoising step.
Args:
intermediate_state: The intermediate state of the diffusion pipeline.
"""
flux_step_callback(
context_data=self._data,
intermediate_state=intermediate_state,
events=self._services.events,
is_canceled=self.is_canceled,
)
class InvocationContext:
"""Provides access to various services and data for the current invocation.

View File

@@ -1,407 +0,0 @@
{
"name": "FLUX Image to Image",
"author": "InvokeAI",
"description": "A simple image-to-image workflow using a FLUX dev model. ",
"version": "1.0.4",
"contact": "",
"tags": "image2image, flux, image-to-image",
"notes": "Prerequisite model downloads: T5 Encoder, CLIP-L Encoder, and FLUX VAE. Quantized and un-quantized versions can be found in the starter models tab within your Model Manager. We recommend using FLUX dev models for image-to-image workflows. The image-to-image performance with FLUX schnell models is poor.",
"exposedFields": [
{
"nodeId": "f8d9d7c8-9ed7-4bd7-9e42-ab0e89bfac90",
"fieldName": "model"
},
{
"nodeId": "f8d9d7c8-9ed7-4bd7-9e42-ab0e89bfac90",
"fieldName": "t5_encoder_model"
},
{
"nodeId": "f8d9d7c8-9ed7-4bd7-9e42-ab0e89bfac90",
"fieldName": "clip_embed_model"
},
{
"nodeId": "f8d9d7c8-9ed7-4bd7-9e42-ab0e89bfac90",
"fieldName": "vae_model"
},
{
"nodeId": "ace0258f-67d7-4eee-a218-6fff27065214",
"fieldName": "denoising_start"
},
{
"nodeId": "01f674f8-b3d1-4df1-acac-6cb8e0bfb63c",
"fieldName": "prompt"
},
{
"nodeId": "ace0258f-67d7-4eee-a218-6fff27065214",
"fieldName": "num_steps"
}
],
"meta": {
"version": "3.0.0",
"category": "default"
},
"nodes": [
{
"id": "2981a67c-480f-4237-9384-26b68dbf912b",
"type": "invocation",
"data": {
"id": "2981a67c-480f-4237-9384-26b68dbf912b",
"type": "flux_vae_encode",
"version": "1.0.0",
"label": "",
"notes": "",
"isOpen": true,
"isIntermediate": true,
"useCache": true,
"inputs": {
"image": {
"name": "image",
"label": "",
"value": {
"image_name": "8a5c62aa-9335-45d2-9c71-89af9fc1f8d4.png"
}
},
"vae": {
"name": "vae",
"label": ""
}
}
},
"position": {
"x": 732.7680166609682,
"y": -24.37398171806909
}
},
{
"id": "ace0258f-67d7-4eee-a218-6fff27065214",
"type": "invocation",
"data": {
"id": "ace0258f-67d7-4eee-a218-6fff27065214",
"type": "flux_denoise",
"version": "1.0.0",
"label": "",
"notes": "",
"isOpen": true,
"isIntermediate": true,
"useCache": true,
"inputs": {
"board": {
"name": "board",
"label": ""
},
"metadata": {
"name": "metadata",
"label": ""
},
"latents": {
"name": "latents",
"label": ""
},
"denoise_mask": {
"name": "denoise_mask",
"label": ""
},
"denoising_start": {
"name": "denoising_start",
"label": "",
"value": 0.04
},
"denoising_end": {
"name": "denoising_end",
"label": "",
"value": 1
},
"transformer": {
"name": "transformer",
"label": ""
},
"positive_text_conditioning": {
"name": "positive_text_conditioning",
"label": ""
},
"width": {
"name": "width",
"label": "",
"value": 1024
},
"height": {
"name": "height",
"label": "",
"value": 1024
},
"num_steps": {
"name": "num_steps",
"label": "Steps (Recommend 30 for Dev, 4 for Schnell)",
"value": 30
},
"guidance": {
"name": "guidance",
"label": "",
"value": 4
},
"seed": {
"name": "seed",
"label": "",
"value": 0
}
}
},
"position": {
"x": 1182.8836633018684,
"y": -251.38882958913183
}
},
{
"id": "7e5172eb-48c1-44db-a770-8fd83e1435d1",
"type": "invocation",
"data": {
"id": "7e5172eb-48c1-44db-a770-8fd83e1435d1",
"type": "flux_vae_decode",
"version": "1.0.0",
"label": "",
"notes": "",
"isOpen": true,
"isIntermediate": false,
"useCache": true,
"inputs": {
"board": {
"name": "board",
"label": ""
},
"metadata": {
"name": "metadata",
"label": ""
},
"latents": {
"name": "latents",
"label": ""
},
"vae": {
"name": "vae",
"label": ""
}
}
},
"position": {
"x": 1575.5797431839133,
"y": -209.00150975507415
}
},
{
"id": "f8d9d7c8-9ed7-4bd7-9e42-ab0e89bfac90",
"type": "invocation",
"data": {
"id": "f8d9d7c8-9ed7-4bd7-9e42-ab0e89bfac90",
"type": "flux_model_loader",
"version": "1.0.4",
"label": "",
"notes": "",
"isOpen": true,
"isIntermediate": true,
"useCache": false,
"inputs": {
"model": {
"name": "model",
"label": "Model (dev variant recommended for Image-to-Image)"
},
"t5_encoder_model": {
"name": "t5_encoder_model",
"label": ""
},
"clip_embed_model": {
"name": "clip_embed_model",
"label": "",
"value": {
"key": "fa23a584-b623-415d-832a-21b5098ff1a1",
"hash": "blake3:17c19f0ef941c3b7609a9c94a659ca5364de0be364a91d4179f0e39ba17c3b70",
"name": "clip-vit-large-patch14",
"base": "any",
"type": "clip_embed"
}
},
"vae_model": {
"name": "vae_model",
"label": "",
"value": {
"key": "74fc82ba-c0a8-479d-a890-2126f82da758",
"hash": "blake3:ce21cb76364aa6e2421311cf4a4b5eb052a76c4f1cd207b50703d8978198a068",
"name": "FLUX.1-schnell_ae",
"base": "flux",
"type": "vae"
}
}
}
},
"position": {
"x": 328.1809894659957,
"y": -90.2241133566946
}
},
{
"id": "01f674f8-b3d1-4df1-acac-6cb8e0bfb63c",
"type": "invocation",
"data": {
"id": "01f674f8-b3d1-4df1-acac-6cb8e0bfb63c",
"type": "flux_text_encoder",
"version": "1.0.0",
"label": "",
"notes": "",
"isOpen": true,
"isIntermediate": true,
"useCache": true,
"inputs": {
"clip": {
"name": "clip",
"label": ""
},
"t5_encoder": {
"name": "t5_encoder",
"label": ""
},
"t5_max_seq_len": {
"name": "t5_max_seq_len",
"label": "T5 Max Seq Len",
"value": 256
},
"prompt": {
"name": "prompt",
"label": "",
"value": "a cat wearing a birthday hat"
}
}
},
"position": {
"x": 745.8823365057267,
"y": -299.60249175851914
}
},
{
"id": "4754c534-a5f3-4ad0-9382-7887985e668c",
"type": "invocation",
"data": {
"id": "4754c534-a5f3-4ad0-9382-7887985e668c",
"type": "rand_int",
"version": "1.0.1",
"label": "",
"notes": "",
"isOpen": true,
"isIntermediate": true,
"useCache": false,
"inputs": {
"low": {
"name": "low",
"label": "",
"value": 0
},
"high": {
"name": "high",
"label": "",
"value": 2147483647
}
}
},
"position": {
"x": 725.834098928012,
"y": 496.2710031089931
}
}
],
"edges": [
{
"id": "reactflow__edge-2981a67c-480f-4237-9384-26b68dbf912bheight-ace0258f-67d7-4eee-a218-6fff27065214height",
"type": "default",
"source": "2981a67c-480f-4237-9384-26b68dbf912b",
"target": "ace0258f-67d7-4eee-a218-6fff27065214",
"sourceHandle": "height",
"targetHandle": "height"
},
{
"id": "reactflow__edge-2981a67c-480f-4237-9384-26b68dbf912bwidth-ace0258f-67d7-4eee-a218-6fff27065214width",
"type": "default",
"source": "2981a67c-480f-4237-9384-26b68dbf912b",
"target": "ace0258f-67d7-4eee-a218-6fff27065214",
"sourceHandle": "width",
"targetHandle": "width"
},
{
"id": "reactflow__edge-2981a67c-480f-4237-9384-26b68dbf912blatents-ace0258f-67d7-4eee-a218-6fff27065214latents",
"type": "default",
"source": "2981a67c-480f-4237-9384-26b68dbf912b",
"target": "ace0258f-67d7-4eee-a218-6fff27065214",
"sourceHandle": "latents",
"targetHandle": "latents"
},
{
"id": "reactflow__edge-f8d9d7c8-9ed7-4bd7-9e42-ab0e89bfac90vae-2981a67c-480f-4237-9384-26b68dbf912bvae",
"type": "default",
"source": "f8d9d7c8-9ed7-4bd7-9e42-ab0e89bfac90",
"target": "2981a67c-480f-4237-9384-26b68dbf912b",
"sourceHandle": "vae",
"targetHandle": "vae"
},
{
"id": "reactflow__edge-ace0258f-67d7-4eee-a218-6fff27065214latents-7e5172eb-48c1-44db-a770-8fd83e1435d1latents",
"type": "default",
"source": "ace0258f-67d7-4eee-a218-6fff27065214",
"target": "7e5172eb-48c1-44db-a770-8fd83e1435d1",
"sourceHandle": "latents",
"targetHandle": "latents"
},
{
"id": "reactflow__edge-4754c534-a5f3-4ad0-9382-7887985e668cvalue-ace0258f-67d7-4eee-a218-6fff27065214seed",
"type": "default",
"source": "4754c534-a5f3-4ad0-9382-7887985e668c",
"target": "ace0258f-67d7-4eee-a218-6fff27065214",
"sourceHandle": "value",
"targetHandle": "seed"
},
{
"id": "reactflow__edge-f8d9d7c8-9ed7-4bd7-9e42-ab0e89bfac90transformer-ace0258f-67d7-4eee-a218-6fff27065214transformer",
"type": "default",
"source": "f8d9d7c8-9ed7-4bd7-9e42-ab0e89bfac90",
"target": "ace0258f-67d7-4eee-a218-6fff27065214",
"sourceHandle": "transformer",
"targetHandle": "transformer"
},
{
"id": "reactflow__edge-01f674f8-b3d1-4df1-acac-6cb8e0bfb63cconditioning-ace0258f-67d7-4eee-a218-6fff27065214positive_text_conditioning",
"type": "default",
"source": "01f674f8-b3d1-4df1-acac-6cb8e0bfb63c",
"target": "ace0258f-67d7-4eee-a218-6fff27065214",
"sourceHandle": "conditioning",
"targetHandle": "positive_text_conditioning"
},
{
"id": "reactflow__edge-f8d9d7c8-9ed7-4bd7-9e42-ab0e89bfac90vae-7e5172eb-48c1-44db-a770-8fd83e1435d1vae",
"type": "default",
"source": "f8d9d7c8-9ed7-4bd7-9e42-ab0e89bfac90",
"target": "7e5172eb-48c1-44db-a770-8fd83e1435d1",
"sourceHandle": "vae",
"targetHandle": "vae"
},
{
"id": "reactflow__edge-f8d9d7c8-9ed7-4bd7-9e42-ab0e89bfac90max_seq_len-01f674f8-b3d1-4df1-acac-6cb8e0bfb63ct5_max_seq_len",
"type": "default",
"source": "f8d9d7c8-9ed7-4bd7-9e42-ab0e89bfac90",
"target": "01f674f8-b3d1-4df1-acac-6cb8e0bfb63c",
"sourceHandle": "max_seq_len",
"targetHandle": "t5_max_seq_len"
},
{
"id": "reactflow__edge-f8d9d7c8-9ed7-4bd7-9e42-ab0e89bfac90t5_encoder-01f674f8-b3d1-4df1-acac-6cb8e0bfb63ct5_encoder",
"type": "default",
"source": "f8d9d7c8-9ed7-4bd7-9e42-ab0e89bfac90",
"target": "01f674f8-b3d1-4df1-acac-6cb8e0bfb63c",
"sourceHandle": "t5_encoder",
"targetHandle": "t5_encoder"
},
{
"id": "reactflow__edge-f8d9d7c8-9ed7-4bd7-9e42-ab0e89bfac90clip-01f674f8-b3d1-4df1-acac-6cb8e0bfb63cclip",
"type": "default",
"source": "f8d9d7c8-9ed7-4bd7-9e42-ab0e89bfac90",
"target": "01f674f8-b3d1-4df1-acac-6cb8e0bfb63c",
"sourceHandle": "clip",
"targetHandle": "clip"
}
]
}

View File

@@ -1,7 +1,7 @@
{
"name": "FLUX Text to Image",
"author": "InvokeAI",
"description": "A simple text-to-image workflow using FLUX dev or schnell models.",
"description": "A simple text-to-image workflow using FLUX dev or schnell models. Prerequisite model downloads: T5 Encoder, CLIP-L Encoder, and FLUX VAE. Quantized and un-quantized versions can be found in the starter models tab within your Model Manager. We recommend 4 steps for FLUX schnell models and 30 steps for FLUX dev models.",
"version": "1.0.4",
"contact": "",
"tags": "text2image, flux",
@@ -11,25 +11,17 @@
"nodeId": "f8d9d7c8-9ed7-4bd7-9e42-ab0e89bfac90",
"fieldName": "model"
},
{
"nodeId": "f8d9d7c8-9ed7-4bd7-9e42-ab0e89bfac90",
"fieldName": "t5_encoder_model"
},
{
"nodeId": "f8d9d7c8-9ed7-4bd7-9e42-ab0e89bfac90",
"fieldName": "clip_embed_model"
},
{
"nodeId": "f8d9d7c8-9ed7-4bd7-9e42-ab0e89bfac90",
"fieldName": "vae_model"
},
{
"nodeId": "01f674f8-b3d1-4df1-acac-6cb8e0bfb63c",
"fieldName": "prompt"
},
{
"nodeId": "4fe24f07-f906-4f55-ab2c-9beee56ef5bd",
"nodeId": "159bdf1b-79e7-4174-b86e-d40e646964c8",
"fieldName": "num_steps"
},
{
"nodeId": "f8d9d7c8-9ed7-4bd7-9e42-ab0e89bfac90",
"fieldName": "t5_encoder_model"
}
],
"meta": {
@@ -37,121 +29,6 @@
"category": "default"
},
"nodes": [
{
"id": "4fe24f07-f906-4f55-ab2c-9beee56ef5bd",
"type": "invocation",
"data": {
"id": "4fe24f07-f906-4f55-ab2c-9beee56ef5bd",
"type": "flux_denoise",
"version": "1.0.0",
"label": "",
"notes": "",
"isOpen": true,
"isIntermediate": true,
"useCache": true,
"inputs": {
"board": {
"name": "board",
"label": ""
},
"metadata": {
"name": "metadata",
"label": ""
},
"latents": {
"name": "latents",
"label": ""
},
"denoise_mask": {
"name": "denoise_mask",
"label": ""
},
"denoising_start": {
"name": "denoising_start",
"label": "",
"value": 0
},
"denoising_end": {
"name": "denoising_end",
"label": "",
"value": 1
},
"transformer": {
"name": "transformer",
"label": ""
},
"positive_text_conditioning": {
"name": "positive_text_conditioning",
"label": ""
},
"width": {
"name": "width",
"label": "",
"value": 1024
},
"height": {
"name": "height",
"label": "",
"value": 1024
},
"num_steps": {
"name": "num_steps",
"label": "Steps (Recommend 30 for Dev, 4 for Schnell)",
"value": 30
},
"guidance": {
"name": "guidance",
"label": "",
"value": 4
},
"seed": {
"name": "seed",
"label": "",
"value": 0
}
}
},
"position": {
"x": 1186.1868226120378,
"y": -214.9459927686657
}
},
{
"id": "7e5172eb-48c1-44db-a770-8fd83e1435d1",
"type": "invocation",
"data": {
"id": "7e5172eb-48c1-44db-a770-8fd83e1435d1",
"type": "flux_vae_decode",
"version": "1.0.0",
"label": "",
"notes": "",
"isOpen": true,
"isIntermediate": false,
"useCache": true,
"inputs": {
"board": {
"name": "board",
"label": ""
},
"metadata": {
"name": "metadata",
"label": ""
},
"latents": {
"name": "latents",
"label": ""
},
"vae": {
"name": "vae",
"label": ""
}
}
},
"position": {
"x": 1575.5797431839133,
"y": -209.00150975507415
}
},
{
"id": "f8d9d7c8-9ed7-4bd7-9e42-ab0e89bfac90",
"type": "invocation",
@@ -222,8 +99,8 @@
}
},
"position": {
"x": 778.4899149328337,
"y": -100.36469216659502
"x": 824.1970602278849,
"y": 146.98251001061735
}
},
{
@@ -252,52 +129,77 @@
}
},
"position": {
"x": 800.9667463219505,
"y": 285.8297267547506
"x": 822.9899179655476,
"y": 360.9657214885052
}
},
{
"id": "159bdf1b-79e7-4174-b86e-d40e646964c8",
"type": "invocation",
"data": {
"id": "159bdf1b-79e7-4174-b86e-d40e646964c8",
"type": "flux_text_to_image",
"version": "1.0.0",
"label": "",
"notes": "",
"isOpen": true,
"isIntermediate": false,
"useCache": true,
"inputs": {
"board": {
"name": "board",
"label": ""
},
"metadata": {
"name": "metadata",
"label": ""
},
"transformer": {
"name": "transformer",
"label": ""
},
"vae": {
"name": "vae",
"label": ""
},
"positive_text_conditioning": {
"name": "positive_text_conditioning",
"label": ""
},
"width": {
"name": "width",
"label": "",
"value": 1024
},
"height": {
"name": "height",
"label": "",
"value": 1024
},
"num_steps": {
"name": "num_steps",
"label": "Steps (Recommend 30 for Dev, 4 for Schnell)",
"value": 30
},
"guidance": {
"name": "guidance",
"label": "",
"value": 4
},
"seed": {
"name": "seed",
"label": "",
"value": 0
}
}
},
"position": {
"x": 1216.3900791301849,
"y": 5.500841807102248
}
}
],
"edges": [
{
"id": "reactflow__edge-f8d9d7c8-9ed7-4bd7-9e42-ab0e89bfac90transformer-4fe24f07-f906-4f55-ab2c-9beee56ef5bdtransformer",
"type": "default",
"source": "f8d9d7c8-9ed7-4bd7-9e42-ab0e89bfac90",
"target": "4fe24f07-f906-4f55-ab2c-9beee56ef5bd",
"sourceHandle": "transformer",
"targetHandle": "transformer"
},
{
"id": "reactflow__edge-01f674f8-b3d1-4df1-acac-6cb8e0bfb63cconditioning-4fe24f07-f906-4f55-ab2c-9beee56ef5bdpositive_text_conditioning",
"type": "default",
"source": "01f674f8-b3d1-4df1-acac-6cb8e0bfb63c",
"target": "4fe24f07-f906-4f55-ab2c-9beee56ef5bd",
"sourceHandle": "conditioning",
"targetHandle": "positive_text_conditioning"
},
{
"id": "reactflow__edge-4754c534-a5f3-4ad0-9382-7887985e668cvalue-4fe24f07-f906-4f55-ab2c-9beee56ef5bdseed",
"type": "default",
"source": "4754c534-a5f3-4ad0-9382-7887985e668c",
"target": "4fe24f07-f906-4f55-ab2c-9beee56ef5bd",
"sourceHandle": "value",
"targetHandle": "seed"
},
{
"id": "reactflow__edge-4fe24f07-f906-4f55-ab2c-9beee56ef5bdlatents-7e5172eb-48c1-44db-a770-8fd83e1435d1latents",
"type": "default",
"source": "4fe24f07-f906-4f55-ab2c-9beee56ef5bd",
"target": "7e5172eb-48c1-44db-a770-8fd83e1435d1",
"sourceHandle": "latents",
"targetHandle": "latents"
},
{
"id": "reactflow__edge-f8d9d7c8-9ed7-4bd7-9e42-ab0e89bfac90vae-7e5172eb-48c1-44db-a770-8fd83e1435d1vae",
"type": "default",
"source": "f8d9d7c8-9ed7-4bd7-9e42-ab0e89bfac90",
"target": "7e5172eb-48c1-44db-a770-8fd83e1435d1",
"sourceHandle": "vae",
"targetHandle": "vae"
},
{
"id": "reactflow__edge-f8d9d7c8-9ed7-4bd7-9e42-ab0e89bfac90max_seq_len-01f674f8-b3d1-4df1-acac-6cb8e0bfb63ct5_max_seq_len",
"type": "default",
@@ -306,6 +208,14 @@
"sourceHandle": "max_seq_len",
"targetHandle": "t5_max_seq_len"
},
{
"id": "reactflow__edge-f8d9d7c8-9ed7-4bd7-9e42-ab0e89bfac90vae-159bdf1b-79e7-4174-b86e-d40e646964c8vae",
"type": "default",
"source": "f8d9d7c8-9ed7-4bd7-9e42-ab0e89bfac90",
"target": "159bdf1b-79e7-4174-b86e-d40e646964c8",
"sourceHandle": "vae",
"targetHandle": "vae"
},
{
"id": "reactflow__edge-f8d9d7c8-9ed7-4bd7-9e42-ab0e89bfac90t5_encoder-01f674f8-b3d1-4df1-acac-6cb8e0bfb63ct5_encoder",
"type": "default",
@@ -321,6 +231,30 @@
"target": "01f674f8-b3d1-4df1-acac-6cb8e0bfb63c",
"sourceHandle": "clip",
"targetHandle": "clip"
},
{
"id": "reactflow__edge-f8d9d7c8-9ed7-4bd7-9e42-ab0e89bfac90transformer-159bdf1b-79e7-4174-b86e-d40e646964c8transformer",
"type": "default",
"source": "f8d9d7c8-9ed7-4bd7-9e42-ab0e89bfac90",
"target": "159bdf1b-79e7-4174-b86e-d40e646964c8",
"sourceHandle": "transformer",
"targetHandle": "transformer"
},
{
"id": "reactflow__edge-01f674f8-b3d1-4df1-acac-6cb8e0bfb63cconditioning-159bdf1b-79e7-4174-b86e-d40e646964c8positive_text_conditioning",
"type": "default",
"source": "01f674f8-b3d1-4df1-acac-6cb8e0bfb63c",
"target": "159bdf1b-79e7-4174-b86e-d40e646964c8",
"sourceHandle": "conditioning",
"targetHandle": "positive_text_conditioning"
},
{
"id": "reactflow__edge-4754c534-a5f3-4ad0-9382-7887985e668cvalue-159bdf1b-79e7-4174-b86e-d40e646964c8seed",
"type": "default",
"source": "4754c534-a5f3-4ad0-9382-7887985e668c",
"target": "159bdf1b-79e7-4174-b86e-d40e646964c8",
"sourceHandle": "value",
"targetHandle": "seed"
}
]
}

View File

@@ -38,25 +38,6 @@ SD1_5_LATENT_RGB_FACTORS = [
[-0.1307, -0.1874, -0.7445], # L4
]
FLUX_LATENT_RGB_FACTORS = [
[-0.0412, 0.0149, 0.0521],
[0.0056, 0.0291, 0.0768],
[0.0342, -0.0681, -0.0427],
[-0.0258, 0.0092, 0.0463],
[0.0863, 0.0784, 0.0547],
[-0.0017, 0.0402, 0.0158],
[0.0501, 0.1058, 0.1152],
[-0.0209, -0.0218, -0.0329],
[-0.0314, 0.0083, 0.0896],
[0.0851, 0.0665, -0.0472],
[-0.0534, 0.0238, -0.0024],
[0.0452, -0.0026, 0.0048],
[0.0892, 0.0831, 0.0881],
[-0.1117, -0.0304, -0.0789],
[0.0027, -0.0479, -0.0043],
[-0.1146, -0.0827, -0.0598],
]
def sample_to_lowres_estimated_image(
samples: torch.Tensor, latent_rgb_factors: torch.Tensor, smooth_matrix: Optional[torch.Tensor] = None
@@ -113,32 +94,3 @@ def stable_diffusion_step_callback(
intermediate_state,
ProgressImage(dataURL=dataURL, width=width, height=height),
)
def flux_step_callback(
context_data: "InvocationContextData",
intermediate_state: PipelineIntermediateState,
events: "EventServiceBase",
is_canceled: Callable[[], bool],
) -> None:
if is_canceled():
raise CanceledException
sample = intermediate_state.latents
latent_rgb_factors = torch.tensor(FLUX_LATENT_RGB_FACTORS, dtype=sample.dtype, device=sample.device)
latent_image_perm = sample.permute(1, 2, 0).to(dtype=sample.dtype, device=sample.device)
latent_image = latent_image_perm @ latent_rgb_factors
latents_ubyte = (
((latent_image + 1) / 2).clamp(0, 1).mul(0xFF)  # change scale from -1..1 to 0..1, then to 0..255
).to(device="cpu", dtype=torch.uint8)
image = Image.fromarray(latents_ubyte.cpu().numpy())
(width, height) = image.size
width *= 8
height *= 8
dataURL = image_to_dataURL(image, image_format="JPEG")
events.emit_invocation_denoise_progress(
context_data.queue_item,
context_data.invocation,
intermediate_state,
ProgressImage(dataURL=dataURL, width=width, height=height),
)

View File

@@ -1,56 +0,0 @@
from typing import Callable
import torch
from tqdm import tqdm
from invokeai.backend.flux.inpaint_extension import InpaintExtension
from invokeai.backend.flux.model import Flux
from invokeai.backend.stable_diffusion.diffusers_pipeline import PipelineIntermediateState
def denoise(
model: Flux,
# model input
img: torch.Tensor,
img_ids: torch.Tensor,
txt: torch.Tensor,
txt_ids: torch.Tensor,
vec: torch.Tensor,
# sampling parameters
timesteps: list[float],
step_callback: Callable[[PipelineIntermediateState], None],
guidance: float,
inpaint_extension: InpaintExtension | None,
):
step = 0
# guidance_vec is ignored for schnell.
guidance_vec = torch.full((img.shape[0],), guidance, device=img.device, dtype=img.dtype)
for t_curr, t_prev in tqdm(list(zip(timesteps[:-1], timesteps[1:], strict=True))):
t_vec = torch.full((img.shape[0],), t_curr, dtype=img.dtype, device=img.device)
pred = model(
img=img,
img_ids=img_ids,
txt=txt,
txt_ids=txt_ids,
y=vec,
timesteps=t_vec,
guidance=guidance_vec,
)
preview_img = img - t_curr * pred
img = img + (t_prev - t_curr) * pred
if inpaint_extension is not None:
img = inpaint_extension.merge_intermediate_latents_with_init_latents(img, t_prev)
step_callback(
PipelineIntermediateState(
step=step,
order=1,
total_steps=len(timesteps),
timestep=int(t_curr),
latents=preview_img,
),
)
step += 1
return img

View File

@@ -1,35 +0,0 @@
import torch
class InpaintExtension:
"""A class for managing inpainting with FLUX."""
def __init__(self, init_latents: torch.Tensor, inpaint_mask: torch.Tensor, noise: torch.Tensor):
"""Initialize InpaintExtension.
Args:
init_latents (torch.Tensor): The initial latents (i.e. un-noised at timestep 0). In 'packed' format.
inpaint_mask (torch.Tensor): A mask specifying which elements to inpaint. Range [0, 1]. Values of 1 will be
re-generated. Values of 0 will remain unchanged. Values between 0 and 1 can be used to blend the
inpainted region with the background. In 'packed' format.
noise (torch.Tensor): The noise tensor used to noise the init_latents. In 'packed' format.
"""
assert init_latents.shape == inpaint_mask.shape == noise.shape
self._init_latents = init_latents
self._inpaint_mask = inpaint_mask
self._noise = noise
def merge_intermediate_latents_with_init_latents(
self, intermediate_latents: torch.Tensor, timestep: float
) -> torch.Tensor:
"""Merge the intermediate latents with the initial latents for the current timestep using the inpaint mask. I.e.
update the intermediate latents to keep the regions that are not being inpainted on the correct noise
trajectory.
This function should be called after each denoising step.
"""
# Noise the init latents for the current timestep.
noised_init_latents = self._noise * timestep + (1.0 - timestep) * self._init_latents
# Merge the intermediate latents with the noised_init_latents using the inpaint_mask.
return intermediate_latents * self._inpaint_mask + noised_init_latents * (1.0 - self._inpaint_mask)
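As a quick sketch of the merge described above, with values invented purely for illustration:
# Illustrative only: (1, 4) "packed" latents; mask is 1 where we regenerate.
init = torch.tensor([[0.0, 0.0, 0.0, 0.0]])
noise = torch.tensor([[1.0, 1.0, 1.0, 1.0]])
mask = torch.tensor([[1.0, 1.0, 0.0, 0.0]])
ext = InpaintExtension(init_latents=init, inpaint_mask=mask, noise=noise)
merged = ext.merge_intermediate_latents_with_init_latents(torch.tensor([[0.3, 0.3, 0.3, 0.3]]), timestep=0.5)
# Masked elements keep the intermediate value 0.3; unmasked ones follow the noised
# init trajectory 1.0 * 0.5 + 0.0 * 0.5 = 0.5, so merged == [[0.3, 0.3, 0.5, 0.5]].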

View File

@@ -258,17 +258,16 @@ class Decoder(nn.Module):
class DiagonalGaussian(nn.Module):
def __init__(self, chunk_dim: int = 1):
def __init__(self, sample: bool = True, chunk_dim: int = 1):
super().__init__()
self.sample = sample
self.chunk_dim = chunk_dim
def forward(self, z: Tensor, sample: bool = True, generator: torch.Generator | None = None) -> Tensor:
def forward(self, z: Tensor) -> Tensor:
mean, logvar = torch.chunk(z, 2, dim=self.chunk_dim)
if sample:
if self.sample:
std = torch.exp(0.5 * logvar)
# Unfortunately, torch.randn_like(...) does not accept a generator argument at the time of writing, so we
# have to use torch.randn(...) instead.
return mean + std * torch.randn(size=mean.size(), generator=generator, dtype=mean.dtype, device=mean.device)
return mean + std * torch.randn_like(mean)
else:
return mean
@@ -298,21 +297,8 @@ class AutoEncoder(nn.Module):
self.scale_factor = params.scale_factor
self.shift_factor = params.shift_factor
def encode(self, x: Tensor, sample: bool = True, generator: torch.Generator | None = None) -> Tensor:
"""Run VAE encoding on input tensor x.
Args:
x (Tensor): Input image tensor. Shape: (batch_size, in_channels, height, width).
sample (bool, optional): If True, sample from the encoded distribution, else, return the distribution mean.
Defaults to True.
generator (torch.Generator | None, optional): Optional random number generator for reproducibility.
Defaults to None.
Returns:
Tensor: Encoded latent tensor. Shape: (batch_size, z_channels, latent_height, latent_width).
"""
z = self.reg(self.encoder(x), sample=sample, generator=generator)
def encode(self, x: Tensor) -> Tensor:
z = self.reg(self.encoder(x))
z = self.scale_factor * (z - self.shift_factor)
return z

View File

@@ -0,0 +1,167 @@
# Initially pulled from https://github.com/black-forest-labs/flux
import math
from typing import Callable
import torch
from einops import rearrange, repeat
from torch import Tensor
from tqdm import tqdm
from invokeai.backend.flux.model import Flux
from invokeai.backend.flux.modules.conditioner import HFEncoder
def get_noise(
num_samples: int,
height: int,
width: int,
device: torch.device,
dtype: torch.dtype,
seed: int,
):
# We always generate noise on the same device and dtype then cast to ensure consistency across devices/dtypes.
rand_device = "cpu"
rand_dtype = torch.float16
return torch.randn(
num_samples,
16,
# allow for packing
2 * math.ceil(height / 16),
2 * math.ceil(width / 16),
device=rand_device,
dtype=rand_dtype,
generator=torch.Generator(device=rand_device).manual_seed(seed),
).to(device=device, dtype=dtype)
def prepare(t5: HFEncoder, clip: HFEncoder, img: Tensor, prompt: str | list[str]) -> dict[str, Tensor]:
bs, c, h, w = img.shape
if bs == 1 and not isinstance(prompt, str):
bs = len(prompt)
img = rearrange(img, "b c (h ph) (w pw) -> b (h w) (c ph pw)", ph=2, pw=2)
if img.shape[0] == 1 and bs > 1:
img = repeat(img, "1 ... -> bs ...", bs=bs)
img_ids = torch.zeros(h // 2, w // 2, 3)
img_ids[..., 1] = img_ids[..., 1] + torch.arange(h // 2)[:, None]
img_ids[..., 2] = img_ids[..., 2] + torch.arange(w // 2)[None, :]
img_ids = repeat(img_ids, "h w c -> b (h w) c", b=bs)
if isinstance(prompt, str):
prompt = [prompt]
txt = t5(prompt)
if txt.shape[0] == 1 and bs > 1:
txt = repeat(txt, "1 ... -> bs ...", bs=bs)
txt_ids = torch.zeros(bs, txt.shape[1], 3)
vec = clip(prompt)
if vec.shape[0] == 1 and bs > 1:
vec = repeat(vec, "1 ... -> bs ...", bs=bs)
return {
"img": img,
"img_ids": img_ids.to(img.device),
"txt": txt.to(img.device),
"txt_ids": txt_ids.to(img.device),
"vec": vec.to(img.device),
}
def time_shift(mu: float, sigma: float, t: Tensor):
return math.exp(mu) / (math.exp(mu) + (1 / t - 1) ** sigma)
def get_lin_function(x1: float = 256, y1: float = 0.5, x2: float = 4096, y2: float = 1.15) -> Callable[[float], float]:
m = (y2 - y1) / (x2 - x1)
b = y1 - m * x1
return lambda x: m * x + b
def get_schedule(
num_steps: int,
image_seq_len: int,
base_shift: float = 0.5,
max_shift: float = 1.15,
shift: bool = True,
) -> list[float]:
# extra step for zero
timesteps = torch.linspace(1, 0, num_steps + 1)
# shifting the schedule to favor high timesteps for higher signal images
if shift:
# estimate mu based on linear estimation between two points
mu = get_lin_function(y1=base_shift, y2=max_shift)(image_seq_len)
timesteps = time_shift(mu, 1.0, timesteps)
return timesteps.tolist()
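A rough numeric sketch of the shift above (numbers computed here for illustration, not taken from the diff): for a 1024x1024 image the packed sequence length is 64 * 64 = 4096.
# Illustrative only.
mu = get_lin_function(y1=0.5, y2=1.15)(4096)  # the line passes through (4096, 1.15), so mu == 1.15
shifted_mid = time_shift(mu, 1.0, torch.tensor(0.5))  # ~0.76: the middle of the schedule is pushed toward t = 1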
def denoise(
model: Flux,
# model input
img: Tensor,
img_ids: Tensor,
txt: Tensor,
txt_ids: Tensor,
vec: Tensor,
# sampling parameters
timesteps: list[float],
step_callback: Callable[[], None],
guidance: float = 4.0,
):
# guidance_vec is ignored for schnell.
guidance_vec = torch.full((img.shape[0],), guidance, device=img.device, dtype=img.dtype)
for t_curr, t_prev in tqdm(list(zip(timesteps[:-1], timesteps[1:], strict=True))):
t_vec = torch.full((img.shape[0],), t_curr, dtype=img.dtype, device=img.device)
pred = model(
img=img,
img_ids=img_ids,
txt=txt,
txt_ids=txt_ids,
y=vec,
timesteps=t_vec,
guidance=guidance_vec,
)
img = img + (t_prev - t_curr) * pred
step_callback()
return img
def unpack(x: Tensor, height: int, width: int) -> Tensor:
return rearrange(
x,
"b (h w) (c ph pw) -> b c (h ph) (w pw)",
h=math.ceil(height / 16),
w=math.ceil(width / 16),
ph=2,
pw=2,
)
def prepare_latent_img_patches(latent_img: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
"""Convert an input image in latent space to patches for diffusion.
This implementation was extracted from:
https://github.com/black-forest-labs/flux/blob/c00d7c60b085fce8058b9df845e036090873f2ce/src/flux/sampling.py#L32
Returns:
tuple[Tensor, Tensor]: (img, img_ids), as defined in the original flux repo.
"""
bs, c, h, w = latent_img.shape
# Pixel unshuffle with a scale of 2, and flatten the height/width dimensions to get an array of patches.
img = rearrange(latent_img, "b c (h ph) (w pw) -> b (h w) (c ph pw)", ph=2, pw=2)
if img.shape[0] == 1 and bs > 1:
img = repeat(img, "1 ... -> bs ...", bs=bs)
# Generate patch position ids.
img_ids = torch.zeros(h // 2, w // 2, 3, device=img.device, dtype=img.dtype)
img_ids[..., 1] = img_ids[..., 1] + torch.arange(h // 2, device=img.device, dtype=img.dtype)[:, None]
img_ids[..., 2] = img_ids[..., 2] + torch.arange(w // 2, device=img.device, dtype=img.dtype)[None, :]
img_ids = repeat(img_ids, "h w c -> b (h w) c", b=bs)
return img, img_ids

View File

@@ -1,135 +0,0 @@
# Initially pulled from https://github.com/black-forest-labs/flux
import math
from typing import Callable
import torch
from einops import rearrange, repeat
def get_noise(
num_samples: int,
height: int,
width: int,
device: torch.device,
dtype: torch.dtype,
seed: int,
):
# We always generate noise on the same device and dtype then cast to ensure consistency across devices/dtypes.
rand_device = "cpu"
rand_dtype = torch.float16
return torch.randn(
num_samples,
16,
# allow for packing
2 * math.ceil(height / 16),
2 * math.ceil(width / 16),
device=rand_device,
dtype=rand_dtype,
generator=torch.Generator(device=rand_device).manual_seed(seed),
).to(device=device, dtype=dtype)
def time_shift(mu: float, sigma: float, t: torch.Tensor) -> torch.Tensor:
return math.exp(mu) / (math.exp(mu) + (1 / t - 1) ** sigma)
def get_lin_function(x1: float = 256, y1: float = 0.5, x2: float = 4096, y2: float = 1.15) -> Callable[[float], float]:
m = (y2 - y1) / (x2 - x1)
b = y1 - m * x1
return lambda x: m * x + b
def get_schedule(
num_steps: int,
image_seq_len: int,
base_shift: float = 0.5,
max_shift: float = 1.15,
shift: bool = True,
) -> list[float]:
# extra step for zero
timesteps = torch.linspace(1, 0, num_steps + 1)
# shifting the schedule to favor high timesteps for higher signal images
if shift:
# estimate mu based on linear estimation between two points
mu = get_lin_function(y1=base_shift, y2=max_shift)(image_seq_len)
timesteps = time_shift(mu, 1.0, timesteps)
return timesteps.tolist()
def _find_last_index_ge_val(timesteps: list[float], val: float, eps: float = 1e-6) -> int:
"""Find the last index in timesteps that is >= val.
We use epsilon-close equality to avoid potential floating point errors.
"""
idx = len(list(filter(lambda t: t >= (val - eps), timesteps))) - 1
assert idx >= 0
return idx
def clip_timestep_schedule(timesteps: list[float], denoising_start: float, denoising_end: float) -> list[float]:
"""Clip the timestep schedule to the denoising range.
Args:
timesteps (list[float]): The original timestep schedule: [1.0, ..., 0.0].
denoising_start (float): A value in [0, 1] specifying the start of the denoising process. E.g. a value of 0.2
would mean that the denoising process start at the last timestep in the schedule >= 0.8.
denoising_end (float): A value in [0, 1] specifying the end of the denoising process. E.g. a value of 0.8 would
mean that the denoising process end at the last timestep in the schedule >= 0.2.
Returns:
list[float]: The clipped timestep schedule.
"""
assert 0.0 <= denoising_start <= 1.0
assert 0.0 <= denoising_end <= 1.0
assert denoising_start <= denoising_end
t_start_val = 1.0 - denoising_start
t_end_val = 1.0 - denoising_end
t_start_idx = _find_last_index_ge_val(timesteps, t_start_val)
t_end_idx = _find_last_index_ge_val(timesteps, t_end_val)
clipped_timesteps = timesteps[t_start_idx : t_end_idx + 1]
return clipped_timesteps
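For concreteness, a small worked example of the clipping above (schedule values invented for illustration):
# Illustrative only.
ts = [1.0, 0.75, 0.5, 0.25, 0.0]
clip_timestep_schedule(ts, denoising_start=0.25, denoising_end=1.0)
# t_start_val = 0.75 -> last index >= 0.75 is 1; t_end_val = 0.0 -> last index >= 0.0 is 4,
# so the result is [0.75, 0.5, 0.25, 0.0].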
def unpack(x: torch.Tensor, height: int, width: int) -> torch.Tensor:
"""Unpack flat array of patch embeddings to latent image."""
return rearrange(
x,
"b (h w) (c ph pw) -> b c (h ph) (w pw)",
h=math.ceil(height / 16),
w=math.ceil(width / 16),
ph=2,
pw=2,
)
def pack(x: torch.Tensor) -> torch.Tensor:
"""Pack latent image to flattented array of patch embeddings."""
# Pixel unshuffle with a scale of 2, and flatten the height/width dimensions to get an array of patches.
return rearrange(x, "b c (h ph) (w pw) -> b (h w) (c ph pw)", ph=2, pw=2)
def generate_img_ids(h: int, w: int, batch_size: int, device: torch.device, dtype: torch.dtype) -> torch.Tensor:
"""Generate tensor of image position ids.
Args:
h (int): Height of image in latent space.
w (int): Width of image in latent space.
batch_size (int): Batch size.
device (torch.device): Device.
dtype (torch.dtype): dtype.
Returns:
torch.Tensor: Image position ids.
"""
img_ids = torch.zeros(h // 2, w // 2, 3, device=device, dtype=dtype)
img_ids[..., 1] = img_ids[..., 1] + torch.arange(h // 2, device=device, dtype=dtype)[:, None]
img_ids[..., 2] = img_ids[..., 2] + torch.arange(w // 2, device=device, dtype=dtype)[None, :]
img_ids = repeat(img_ids, "h w c -> b (h w) c", b=batch_size)
return img_ids
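A shape walk-through of pack / unpack / generate_img_ids above may help; the sizes below assume a 1024x1024 image, which the FLUX VAE encodes to a 16-channel 128x128 latent (that mapping is an assumption stated here, not something this file asserts).
# Illustrative shapes only.
latent = torch.zeros(1, 16, 128, 128)
patches = pack(latent)  # (1, 4096, 64): a 64x64 grid of 2x2 patches, each flattened to 16 * 4 values
ids = generate_img_ids(h=128, w=128, batch_size=1, device=latent.device, dtype=latent.dtype)  # (1, 4096, 3)
restored = unpack(patches, height=1024, width=1024)  # back to (1, 16, 128, 128)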

672
invokeai/backend/lora.py Normal file
View File

@@ -0,0 +1,672 @@
# Copyright (c) 2024 The InvokeAI Development team
"""LoRA model support."""
import bisect
from pathlib import Path
from typing import Dict, List, Optional, Set, Tuple, Union
import torch
from safetensors.torch import load_file
from typing_extensions import Self
import invokeai.backend.util.logging as logger
from invokeai.backend.model_manager import BaseModelType
from invokeai.backend.raw_model import RawModel
class LoRALayerBase:
# rank: Optional[int]
# alpha: Optional[float]
# bias: Optional[torch.Tensor]
# layer_key: str
# @property
# def scale(self):
# return self.alpha / self.rank if (self.alpha and self.rank) else 1.0
def __init__(
self,
layer_key: str,
values: Dict[str, torch.Tensor],
):
if "alpha" in values:
self.alpha = values["alpha"].item()
else:
self.alpha = None
if "bias_indices" in values and "bias_values" in values and "bias_size" in values:
self.bias: Optional[torch.Tensor] = torch.sparse_coo_tensor(
values["bias_indices"],
values["bias_values"],
tuple(values["bias_size"]),
)
else:
self.bias = None
self.rank = None # set in layer implementation
self.layer_key = layer_key
def get_weight(self, orig_weight: torch.Tensor) -> torch.Tensor:
raise NotImplementedError()
def get_bias(self, orig_bias: torch.Tensor) -> Optional[torch.Tensor]:
return self.bias
def get_parameters(self, orig_module: torch.nn.Module) -> Dict[str, torch.Tensor]:
params = {"weight": self.get_weight(orig_module.weight)}
bias = self.get_bias(orig_module.bias)
if bias is not None:
params["bias"] = bias
return params
def calc_size(self) -> int:
model_size = 0
for val in [self.bias]:
if val is not None:
model_size += val.nelement() * val.element_size()
return model_size
def to(self, device: Optional[torch.device] = None, dtype: Optional[torch.dtype] = None) -> None:
if self.bias is not None:
self.bias = self.bias.to(device=device, dtype=dtype)
def check_keys(self, values: Dict[str, torch.Tensor], known_keys: Set[str]):
"""Log a warning if values contains unhandled keys."""
# {"alpha", "bias_indices", "bias_values", "bias_size"} are hard-coded, because they are handled by
# `LoRALayerBase`. Sub-classes should provide the known_keys that they handled.
all_known_keys = known_keys | {"alpha", "bias_indices", "bias_values", "bias_size"}
unknown_keys = set(values.keys()) - all_known_keys
if unknown_keys:
logger.warning(
f"Unexpected keys found in LoRA/LyCORIS layer, model might work incorrectly! Keys: {unknown_keys}"
)
# TODO: find and debug lora/locon with bias
class LoRALayer(LoRALayerBase):
# up: torch.Tensor
# mid: Optional[torch.Tensor]
# down: torch.Tensor
def __init__(
self,
layer_key: str,
values: Dict[str, torch.Tensor],
):
super().__init__(layer_key, values)
self.up = values["lora_up.weight"]
self.down = values["lora_down.weight"]
self.mid = values.get("lora_mid.weight", None)
self.rank = self.down.shape[0]
self.check_keys(
values,
{
"lora_up.weight",
"lora_down.weight",
"lora_mid.weight",
},
)
def get_weight(self, orig_weight: torch.Tensor) -> torch.Tensor:
if self.mid is not None:
up = self.up.reshape(self.up.shape[0], self.up.shape[1])
down = self.down.reshape(self.down.shape[0], self.down.shape[1])
weight = torch.einsum("m n w h, i m, n j -> i j w h", self.mid, up, down)
else:
weight = self.up.reshape(self.up.shape[0], -1) @ self.down.reshape(self.down.shape[0], -1)
return weight
def calc_size(self) -> int:
model_size = super().calc_size()
for val in [self.up, self.mid, self.down]:
if val is not None:
model_size += val.nelement() * val.element_size()
return model_size
def to(self, device: Optional[torch.device] = None, dtype: Optional[torch.dtype] = None) -> None:
super().to(device=device, dtype=dtype)
self.up = self.up.to(device=device, dtype=dtype)
self.down = self.down.to(device=device, dtype=dtype)
if self.mid is not None:
self.mid = self.mid.to(device=device, dtype=dtype)
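A minimal sanity check of the low-rank reconstruction above (shapes invented for illustration):
# Illustrative only: rank-4 LoRA for a 16x32 weight.
_vals = {"lora_up.weight": torch.zeros(16, 4), "lora_down.weight": torch.zeros(4, 32)}
_layer = LoRALayer("example_key", _vals)
assert _layer.rank == 4
assert _layer.get_weight(torch.zeros(16, 32)).shape == (16, 32)  # up @ down has the original weight's shape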
class LoHALayer(LoRALayerBase):
# w1_a: torch.Tensor
# w1_b: torch.Tensor
# w2_a: torch.Tensor
# w2_b: torch.Tensor
# t1: Optional[torch.Tensor] = None
# t2: Optional[torch.Tensor] = None
def __init__(self, layer_key: str, values: Dict[str, torch.Tensor]):
super().__init__(layer_key, values)
self.w1_a = values["hada_w1_a"]
self.w1_b = values["hada_w1_b"]
self.w2_a = values["hada_w2_a"]
self.w2_b = values["hada_w2_b"]
self.t1 = values.get("hada_t1", None)
self.t2 = values.get("hada_t2", None)
self.rank = self.w1_b.shape[0]
self.check_keys(
values,
{
"hada_w1_a",
"hada_w1_b",
"hada_w2_a",
"hada_w2_b",
"hada_t1",
"hada_t2",
},
)
def get_weight(self, orig_weight: torch.Tensor) -> torch.Tensor:
if self.t1 is None:
weight: torch.Tensor = (self.w1_a @ self.w1_b) * (self.w2_a @ self.w2_b)
else:
rebuild1 = torch.einsum("i j k l, j r, i p -> p r k l", self.t1, self.w1_b, self.w1_a)
rebuild2 = torch.einsum("i j k l, j r, i p -> p r k l", self.t2, self.w2_b, self.w2_a)
weight = rebuild1 * rebuild2
return weight
def calc_size(self) -> int:
model_size = super().calc_size()
for val in [self.w1_a, self.w1_b, self.w2_a, self.w2_b, self.t1, self.t2]:
if val is not None:
model_size += val.nelement() * val.element_size()
return model_size
def to(self, device: Optional[torch.device] = None, dtype: Optional[torch.dtype] = None) -> None:
super().to(device=device, dtype=dtype)
self.w1_a = self.w1_a.to(device=device, dtype=dtype)
self.w1_b = self.w1_b.to(device=device, dtype=dtype)
if self.t1 is not None:
self.t1 = self.t1.to(device=device, dtype=dtype)
self.w2_a = self.w2_a.to(device=device, dtype=dtype)
self.w2_b = self.w2_b.to(device=device, dtype=dtype)
if self.t2 is not None:
self.t2 = self.t2.to(device=device, dtype=dtype)
class LoKRLayer(LoRALayerBase):
# w1: Optional[torch.Tensor] = None
# w1_a: Optional[torch.Tensor] = None
# w1_b: Optional[torch.Tensor] = None
# w2: Optional[torch.Tensor] = None
# w2_a: Optional[torch.Tensor] = None
# w2_b: Optional[torch.Tensor] = None
# t2: Optional[torch.Tensor] = None
def __init__(
self,
layer_key: str,
values: Dict[str, torch.Tensor],
):
super().__init__(layer_key, values)
self.w1 = values.get("lokr_w1", None)
if self.w1 is None:
self.w1_a = values["lokr_w1_a"]
self.w1_b = values["lokr_w1_b"]
else:
self.w1_b = None
self.w1_a = None
self.w2 = values.get("lokr_w2", None)
if self.w2 is None:
self.w2_a = values["lokr_w2_a"]
self.w2_b = values["lokr_w2_b"]
else:
self.w2_a = None
self.w2_b = None
self.t2 = values.get("lokr_t2", None)
if self.w1_b is not None:
self.rank = self.w1_b.shape[0]
elif self.w2_b is not None:
self.rank = self.w2_b.shape[0]
else:
self.rank = None # unscaled
self.check_keys(
values,
{
"lokr_w1",
"lokr_w1_a",
"lokr_w1_b",
"lokr_w2",
"lokr_w2_a",
"lokr_w2_b",
"lokr_t2",
},
)
def get_weight(self, orig_weight: torch.Tensor) -> torch.Tensor:
w1: Optional[torch.Tensor] = self.w1
if w1 is None:
assert self.w1_a is not None
assert self.w1_b is not None
w1 = self.w1_a @ self.w1_b
w2 = self.w2
if w2 is None:
if self.t2 is None:
assert self.w2_a is not None
assert self.w2_b is not None
w2 = self.w2_a @ self.w2_b
else:
w2 = torch.einsum("i j k l, i p, j r -> p r k l", self.t2, self.w2_a, self.w2_b)
if len(w2.shape) == 4:
w1 = w1.unsqueeze(2).unsqueeze(2)
w2 = w2.contiguous()
assert w1 is not None
assert w2 is not None
weight = torch.kron(w1, w2)
return weight
def calc_size(self) -> int:
model_size = super().calc_size()
for val in [self.w1, self.w1_a, self.w1_b, self.w2, self.w2_a, self.w2_b, self.t2]:
if val is not None:
model_size += val.nelement() * val.element_size()
return model_size
def to(self, device: Optional[torch.device] = None, dtype: Optional[torch.dtype] = None) -> None:
super().to(device=device, dtype=dtype)
if self.w1 is not None:
self.w1 = self.w1.to(device=device, dtype=dtype)
else:
assert self.w1_a is not None
assert self.w1_b is not None
self.w1_a = self.w1_a.to(device=device, dtype=dtype)
self.w1_b = self.w1_b.to(device=device, dtype=dtype)
if self.w2 is not None:
self.w2 = self.w2.to(device=device, dtype=dtype)
else:
assert self.w2_a is not None
assert self.w2_b is not None
self.w2_a = self.w2_a.to(device=device, dtype=dtype)
self.w2_b = self.w2_b.to(device=device, dtype=dtype)
if self.t2 is not None:
self.t2 = self.t2.to(device=device, dtype=dtype)
class FullLayer(LoRALayerBase):
# bias handled in LoRALayerBase(calc_size, to)
# weight: torch.Tensor
# bias: Optional[torch.Tensor]
def __init__(
self,
layer_key: str,
values: Dict[str, torch.Tensor],
):
super().__init__(layer_key, values)
self.weight = values["diff"]
self.bias = values.get("diff_b", None)
self.rank = None # unscaled
self.check_keys(values, {"diff", "diff_b"})
def get_weight(self, orig_weight: torch.Tensor) -> torch.Tensor:
return self.weight
def calc_size(self) -> int:
model_size = super().calc_size()
model_size += self.weight.nelement() * self.weight.element_size()
return model_size
def to(self, device: Optional[torch.device] = None, dtype: Optional[torch.dtype] = None) -> None:
super().to(device=device, dtype=dtype)
self.weight = self.weight.to(device=device, dtype=dtype)
class IA3Layer(LoRALayerBase):
# weight: torch.Tensor
# on_input: torch.Tensor
def __init__(
self,
layer_key: str,
values: Dict[str, torch.Tensor],
):
super().__init__(layer_key, values)
self.weight = values["weight"]
self.on_input = values["on_input"]
self.rank = None # unscaled
self.check_keys(values, {"weight", "on_input"})
def get_weight(self, orig_weight: torch.Tensor) -> torch.Tensor:
weight = self.weight
if not self.on_input:
weight = weight.reshape(-1, 1)
assert orig_weight is not None
return orig_weight * weight
def calc_size(self) -> int:
model_size = super().calc_size()
model_size += self.weight.nelement() * self.weight.element_size()
model_size += self.on_input.nelement() * self.on_input.element_size()
return model_size
def to(self, device: Optional[torch.device] = None, dtype: Optional[torch.dtype] = None):
super().to(device=device, dtype=dtype)
self.weight = self.weight.to(device=device, dtype=dtype)
self.on_input = self.on_input.to(device=device, dtype=dtype)
class NormLayer(LoRALayerBase):
# bias handled in LoRALayerBase(calc_size, to)
# weight: torch.Tensor
# bias: Optional[torch.Tensor]
def __init__(
self,
layer_key: str,
values: Dict[str, torch.Tensor],
):
super().__init__(layer_key, values)
self.weight = values["w_norm"]
self.bias = values.get("b_norm", None)
self.rank = None # unscaled
self.check_keys(values, {"w_norm", "b_norm"})
def get_weight(self, orig_weight: torch.Tensor) -> torch.Tensor:
return self.weight
def calc_size(self) -> int:
model_size = super().calc_size()
model_size += self.weight.nelement() * self.weight.element_size()
return model_size
def to(self, device: Optional[torch.device] = None, dtype: Optional[torch.dtype] = None) -> None:
super().to(device=device, dtype=dtype)
self.weight = self.weight.to(device=device, dtype=dtype)
AnyLoRALayer = Union[LoRALayer, LoHALayer, LoKRLayer, FullLayer, IA3Layer, NormLayer]
class LoRAModelRaw(RawModel): # (torch.nn.Module):
_name: str
layers: Dict[str, AnyLoRALayer]
def __init__(
self,
name: str,
layers: Dict[str, AnyLoRALayer],
):
self._name = name
self.layers = layers
@property
def name(self) -> str:
return self._name
def to(self, device: Optional[torch.device] = None, dtype: Optional[torch.dtype] = None) -> None:
# TODO: try revert if exception?
for _key, layer in self.layers.items():
layer.to(device=device, dtype=dtype)
def calc_size(self) -> int:
model_size = 0
for _, layer in self.layers.items():
model_size += layer.calc_size()
return model_size
@classmethod
def _convert_sdxl_keys_to_diffusers_format(cls, state_dict: Dict[str, torch.Tensor]) -> Dict[str, torch.Tensor]:
"""Convert the keys of an SDXL LoRA state_dict to diffusers format.
The input state_dict can be in either Stability AI format or diffusers format. If the state_dict is already in
diffusers format, then this function will have no effect.
This function is adapted from:
https://github.com/bmaltais/kohya_ss/blob/2accb1305979ba62f5077a23aabac23b4c37e935/networks/lora_diffusers.py#L385-L409
Args:
state_dict (Dict[str, Tensor]): The SDXL LoRA state_dict.
Raises:
ValueError: If state_dict contains an unrecognized key, or not all keys could be converted.
Returns:
Dict[str, Tensor]: The diffusers-format state_dict.
"""
converted_count = 0 # The number of Stability AI keys converted to diffusers format.
not_converted_count = 0 # The number of keys that were not converted.
# Get a sorted list of Stability AI UNet keys so that we can efficiently search for keys with matching prefixes.
# For example, we want to efficiently find `input_blocks_4_1` in the list when searching for
# `input_blocks_4_1_proj_in`.
stability_unet_keys = list(SDXL_UNET_STABILITY_TO_DIFFUSERS_MAP)
stability_unet_keys.sort()
new_state_dict = {}
for full_key, value in state_dict.items():
if full_key.startswith("lora_unet_"):
search_key = full_key.replace("lora_unet_", "")
# Use bisect to find the key in stability_unet_keys that *may* match the search_key's prefix.
position = bisect.bisect_right(stability_unet_keys, search_key)
map_key = stability_unet_keys[position - 1]
# Now, check if the map_key *actually* matches the search_key.
if search_key.startswith(map_key):
new_key = full_key.replace(map_key, SDXL_UNET_STABILITY_TO_DIFFUSERS_MAP[map_key])
new_state_dict[new_key] = value
converted_count += 1
else:
new_state_dict[full_key] = value
not_converted_count += 1
elif full_key.startswith("lora_te1_") or full_key.startswith("lora_te2_"):
# The CLIP text encoders have the same keys in both Stability AI and diffusers formats.
new_state_dict[full_key] = value
continue
else:
raise ValueError(f"Unrecognized SDXL LoRA key prefix: '{full_key}'.")
if converted_count > 0 and not_converted_count > 0:
raise ValueError(
f"The SDXL LoRA could only be partially converted to diffusers format. converted={converted_count},"
f" not_converted={not_converted_count}"
)
return new_state_dict
@classmethod
def from_checkpoint(
cls,
file_path: Union[str, Path],
device: Optional[torch.device] = None,
dtype: Optional[torch.dtype] = None,
base_model: Optional[BaseModelType] = None,
) -> Self:
device = device or torch.device("cpu")
dtype = dtype or torch.float32
if isinstance(file_path, str):
file_path = Path(file_path)
model = cls(
name=file_path.stem,
layers={},
)
if file_path.suffix == ".safetensors":
sd = load_file(file_path.absolute().as_posix(), device="cpu")
else:
sd = torch.load(file_path, map_location="cpu")
state_dict = cls._group_state(sd)
if base_model == BaseModelType.StableDiffusionXL:
state_dict = cls._convert_sdxl_keys_to_diffusers_format(state_dict)
for layer_key, values in state_dict.items():
# Detect layers according to LyCORIS detection logic (`weight_list_det`)
# https://github.com/KohakuBlueleaf/LyCORIS/tree/8ad8000efb79e2b879054da8c9356e6143591bad/lycoris/modules
# lora and locon
if "lora_up.weight" in values:
layer: AnyLoRALayer = LoRALayer(layer_key, values)
# loha
elif "hada_w1_a" in values:
layer = LoHALayer(layer_key, values)
# lokr
elif "lokr_w1" in values or "lokr_w1_a" in values:
layer = LoKRLayer(layer_key, values)
# diff
elif "diff" in values:
layer = FullLayer(layer_key, values)
# ia3
elif "on_input" in values:
layer = IA3Layer(layer_key, values)
# norms
elif "w_norm" in values:
layer = NormLayer(layer_key, values)
else:
print(f">> Encountered unknown lora layer module in {model.name}: {layer_key} - {list(values.keys())}")
raise Exception("Unknown lora format!")
# lower memory consumption by removing already parsed layer values
state_dict[layer_key].clear()
layer.to(device=device, dtype=dtype)
model.layers[layer_key] = layer
return model
@staticmethod
def _group_state(state_dict: Dict[str, torch.Tensor]) -> Dict[str, Dict[str, torch.Tensor]]:
state_dict_groupped: Dict[str, Dict[str, torch.Tensor]] = {}
for key, value in state_dict.items():
stem, leaf = key.split(".", 1)
if stem not in state_dict_groupped:
state_dict_groupped[stem] = {}
state_dict_groupped[stem][leaf] = value
return state_dict_groupped
# code from
# https://github.com/bmaltais/kohya_ss/blob/2accb1305979ba62f5077a23aabac23b4c37e935/networks/lora_diffusers.py#L15C1-L97C32
def make_sdxl_unet_conversion_map() -> List[Tuple[str, str]]:
"""Create a dict mapping state_dict keys from Stability AI SDXL format to diffusers SDXL format."""
unet_conversion_map_layer = []
for i in range(3): # num_blocks is 3 in sdxl
# loop over downblocks/upblocks
for j in range(2):
# loop over resnets/attentions for downblocks
hf_down_res_prefix = f"down_blocks.{i}.resnets.{j}."
sd_down_res_prefix = f"input_blocks.{3*i + j + 1}.0."
unet_conversion_map_layer.append((sd_down_res_prefix, hf_down_res_prefix))
if i < 3:
# no attention layers in down_blocks.3
hf_down_atn_prefix = f"down_blocks.{i}.attentions.{j}."
sd_down_atn_prefix = f"input_blocks.{3*i + j + 1}.1."
unet_conversion_map_layer.append((sd_down_atn_prefix, hf_down_atn_prefix))
for j in range(3):
# loop over resnets/attentions for upblocks
hf_up_res_prefix = f"up_blocks.{i}.resnets.{j}."
sd_up_res_prefix = f"output_blocks.{3*i + j}.0."
unet_conversion_map_layer.append((sd_up_res_prefix, hf_up_res_prefix))
# if i > 0: commentout for sdxl
# no attention layers in up_blocks.0
hf_up_atn_prefix = f"up_blocks.{i}.attentions.{j}."
sd_up_atn_prefix = f"output_blocks.{3*i + j}.1."
unet_conversion_map_layer.append((sd_up_atn_prefix, hf_up_atn_prefix))
if i < 3:
# no downsample in down_blocks.3
hf_downsample_prefix = f"down_blocks.{i}.downsamplers.0.conv."
sd_downsample_prefix = f"input_blocks.{3*(i+1)}.0.op."
unet_conversion_map_layer.append((sd_downsample_prefix, hf_downsample_prefix))
# no upsample in up_blocks.3
hf_upsample_prefix = f"up_blocks.{i}.upsamplers.0."
sd_upsample_prefix = f"output_blocks.{3*i + 2}.{2}." # change for sdxl
unet_conversion_map_layer.append((sd_upsample_prefix, hf_upsample_prefix))
hf_mid_atn_prefix = "mid_block.attentions.0."
sd_mid_atn_prefix = "middle_block.1."
unet_conversion_map_layer.append((sd_mid_atn_prefix, hf_mid_atn_prefix))
for j in range(2):
hf_mid_res_prefix = f"mid_block.resnets.{j}."
sd_mid_res_prefix = f"middle_block.{2*j}."
unet_conversion_map_layer.append((sd_mid_res_prefix, hf_mid_res_prefix))
unet_conversion_map_resnet = [
# (stable-diffusion, HF Diffusers)
("in_layers.0.", "norm1."),
("in_layers.2.", "conv1."),
("out_layers.0.", "norm2."),
("out_layers.3.", "conv2."),
("emb_layers.1.", "time_emb_proj."),
("skip_connection.", "conv_shortcut."),
]
unet_conversion_map = []
for sd, hf in unet_conversion_map_layer:
if "resnets" in hf:
for sd_res, hf_res in unet_conversion_map_resnet:
unet_conversion_map.append((sd + sd_res, hf + hf_res))
else:
unet_conversion_map.append((sd, hf))
for j in range(2):
hf_time_embed_prefix = f"time_embedding.linear_{j+1}."
sd_time_embed_prefix = f"time_embed.{j*2}."
unet_conversion_map.append((sd_time_embed_prefix, hf_time_embed_prefix))
for j in range(2):
hf_label_embed_prefix = f"add_embedding.linear_{j+1}."
sd_label_embed_prefix = f"label_emb.0.{j*2}."
unet_conversion_map.append((sd_label_embed_prefix, hf_label_embed_prefix))
unet_conversion_map.append(("input_blocks.0.0.", "conv_in."))
unet_conversion_map.append(("out.0.", "conv_norm_out."))
unet_conversion_map.append(("out.2.", "conv_out."))
return unet_conversion_map
SDXL_UNET_STABILITY_TO_DIFFUSERS_MAP = {
sd.rstrip(".").replace(".", "_"): hf.rstrip(".").replace(".", "_") for sd, hf in make_sdxl_unet_conversion_map()
}
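To make the key mapping above concrete, a hypothetical spot check (the key is chosen for illustration and does not come from this diff):
# Illustrative only. "input_blocks.4.1." corresponds to i=1, j=0 above, i.e. down_blocks.1.attentions.0.
assert SDXL_UNET_STABILITY_TO_DIFFUSERS_MAP["input_blocks_4_1"] == "down_blocks_1_attentions_0"
# _convert_sdxl_keys_to_diffusers_format would therefore rewrite
# "lora_unet_input_blocks_4_1_proj_in" to "lora_unet_down_blocks_1_attentions_0_proj_in".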

View File

@@ -1,210 +0,0 @@
from typing import Dict
import torch
from invokeai.backend.lora.layers.any_lora_layer import AnyLoRALayer
from invokeai.backend.lora.layers.concatenated_lora_layer import ConcatenatedLoRALayer
from invokeai.backend.lora.layers.lora_layer import LoRALayer
from invokeai.backend.lora.layers.lora_layer_base import LoRALayerBase
from invokeai.backend.lora.lora_model_raw import LoRAModelRaw
def is_state_dict_likely_in_flux_diffusers_format(state_dict: Dict[str, torch.Tensor]) -> bool:
"""Checks if the provided state dict is likely in the Diffusers FLUX LoRA format.
This is intended to be a reasonably high-precision detector, but it is not guaranteed to have perfect precision. (A
perfect-precision detector would require checking all keys against a whitelist and verifying tensor shapes.)
"""
# First, check that all keys end in "lora_A.weight" or "lora_B.weight" (i.e. are in PEFT format).
all_keys_in_peft_format = all(k.endswith(("lora_A.weight", "lora_B.weight")) for k in state_dict.keys())
# Next, check that this is likely a FLUX model by spot-checking a few keys.
expected_keys = [
"transformer.single_transformer_blocks.0.attn.to_q.lora_A.weight",
"transformer.single_transformer_blocks.0.attn.to_q.lora_B.weight",
"transformer.transformer_blocks.0.attn.add_q_proj.lora_A.weight",
"transformer.transformer_blocks.0.attn.add_q_proj.lora_B.weight",
]
all_expected_keys_present = all(k in state_dict for k in expected_keys)
return all_keys_in_peft_format and all_expected_keys_present
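A quick sanity check of the detector above (the state dict below is invented for illustration; real FLUX LoRAs contain many more keys):
# Illustrative only.
_sd = {
"transformer.single_transformer_blocks.0.attn.to_q.lora_A.weight": torch.zeros(8, 64),
"transformer.single_transformer_blocks.0.attn.to_q.lora_B.weight": torch.zeros(64, 8),
"transformer.transformer_blocks.0.attn.add_q_proj.lora_A.weight": torch.zeros(8, 64),
"transformer.transformer_blocks.0.attn.add_q_proj.lora_B.weight": torch.zeros(64, 8),
}
assert is_state_dict_likely_in_flux_diffusers_format(_sd)
assert not is_state_dict_likely_in_flux_diffusers_format({"lora_unet_double_blocks_0_img_attn_proj.alpha": torch.tensor(8.0)})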
# TODO(ryand): What alpha should we use? 1.0? Rank of the LoRA?
def lora_model_from_flux_diffusers_state_dict(state_dict: Dict[str, torch.Tensor], alpha: float = 1.0) -> LoRAModelRaw: # pyright: ignore[reportRedeclaration] (state_dict is intentionally re-declared)
"""Loads a state dict in the Diffusers FLUX LoRA format into a LoRAModelRaw object.
This function is based on:
https://github.com/huggingface/diffusers/blob/55ac421f7bb12fd00ccbef727be4dc2f3f920abb/scripts/convert_flux_to_diffusers.py
"""
# Group keys by layer.
grouped_state_dict: dict[str, dict[str, torch.Tensor]] = _group_by_layer(state_dict)
# Remove the "transformer." prefix from all keys.
grouped_state_dict = {k.replace("transformer.", ""): v for k, v in grouped_state_dict.items()}
# Constants for FLUX.1
num_double_layers = 19
num_single_layers = 38
# inner_dim = 3072
# mlp_ratio = 4.0
layers: dict[str, AnyLoRALayer] = {}
def add_lora_layer_if_present(src_key: str, dst_key: str) -> None:
if src_key in grouped_state_dict:
src_layer_dict = grouped_state_dict.pop(src_key)
layers[dst_key] = LoRALayer(
values={
"lora_down.weight": src_layer_dict.pop("lora_A.weight"),
"lora_up.weight": src_layer_dict.pop("lora_B.weight"),
"alpha": torch.tensor(alpha),
},
)
assert len(src_layer_dict) == 0
def add_qkv_lora_layer_if_present(src_keys: list[str], dst_qkv_key: str) -> None:
"""Handle the Q, K, V matrices for a transformer block. We need special handling because the diffusers format
stores them in separate matrices, whereas the BFL format used internally by InvokeAI concatenates them.
"""
# We expect that either all src keys are present or none of them are. Verify this.
keys_present = [key in grouped_state_dict for key in src_keys]
assert all(keys_present) or not any(keys_present)
# If none of the keys are present, return early.
if not any(keys_present):
return
src_layer_dicts = [grouped_state_dict.pop(key) for key in src_keys]
sub_layers: list[LoRALayerBase] = []
for src_layer_dict in src_layer_dicts:
sub_layers.append(
LoRALayer(
values={
"lora_down.weight": src_layer_dict.pop("lora_A.weight"),
"lora_up.weight": src_layer_dict.pop("lora_B.weight"),
"alpha": torch.tensor(alpha),
},
)
)
assert len(src_layer_dict) == 0
layers[dst_qkv_key] = ConcatenatedLoRALayer(lora_layers=sub_layers, concat_axis=0)
# time_text_embed.timestep_embedder -> time_in.
add_lora_layer_if_present("time_text_embed.timestep_embedder.linear_1", "time_in.in_layer")
add_lora_layer_if_present("time_text_embed.timestep_embedder.linear_2", "time_in.out_layer")
# time_text_embed.text_embedder -> vector_in.
add_lora_layer_if_present("time_text_embed.text_embedder.linear_1", "vector_in.in_layer")
add_lora_layer_if_present("time_text_embed.text_embedder.linear_2", "vector_in.out_layer")
# time_text_embed.guidance_embedder -> guidance_in.
add_lora_layer_if_present("time_text_embed.guidance_embedder.linear_1", "guidance_in")
add_lora_layer_if_present("time_text_embed.guidance_embedder.linear_2", "guidance_in")
# context_embedder -> txt_in.
add_lora_layer_if_present("context_embedder", "txt_in")
# x_embedder -> img_in.
add_lora_layer_if_present("x_embedder", "img_in")
# Double transformer blocks.
for i in range(num_double_layers):
# norms.
add_lora_layer_if_present(f"transformer_blocks.{i}.norm1.linear", f"double_blocks.{i}.img_mod.lin")
add_lora_layer_if_present(f"transformer_blocks.{i}.norm1_context.linear", f"double_blocks.{i}.txt_mod.lin")
# Q, K, V
add_qkv_lora_layer_if_present(
[
f"transformer_blocks.{i}.attn.to_q",
f"transformer_blocks.{i}.attn.to_k",
f"transformer_blocks.{i}.attn.to_v",
],
f"double_blocks.{i}.img_attn.qkv",
)
add_qkv_lora_layer_if_present(
[
f"transformer_blocks.{i}.attn.add_q_proj",
f"transformer_blocks.{i}.attn.add_k_proj",
f"transformer_blocks.{i}.attn.add_v_proj",
],
f"double_blocks.{i}.txt_attn.qkv",
)
# ff img_mlp
add_lora_layer_if_present(
f"transformer_blocks.{i}.ff.net.0.proj",
f"double_blocks.{i}.img_mlp.0",
)
add_lora_layer_if_present(
f"transformer_blocks.{i}.ff.net.2",
f"double_blocks.{i}.img_mlp.2",
)
# ff txt_mlp
add_lora_layer_if_present(
f"transformer_blocks.{i}.ff_context.net.0.proj",
f"double_blocks.{i}.txt_mlp.0",
)
add_lora_layer_if_present(
f"transformer_blocks.{i}.ff_context.net.2",
f"double_blocks.{i}.txt_mlp.2",
)
# output projections.
add_lora_layer_if_present(
f"transformer_blocks.{i}.attn.to_out.0",
f"double_blocks.{i}.img_attn.proj",
)
add_lora_layer_if_present(
f"transformer_blocks.{i}.attn.to_add_out",
f"double_blocks.{i}.txt_attn.proj",
)
# Single transformer blocks.
for i in range(num_single_layers):
# norms
add_lora_layer_if_present(
f"single_transformer_blocks.{i}.norm.linear",
f"single_blocks.{i}.modulation.lin",
)
# Q, K, V, mlp
add_qkv_lora_layer_if_present(
[
f"single_transformer_blocks.{i}.attn.to_q",
f"single_transformer_blocks.{i}.attn.to_k",
f"single_transformer_blocks.{i}.attn.to_v",
f"single_transformer_blocks.{i}.proj_mlp",
],
f"single_blocks.{i}.linear1",
)
# Output projections.
add_lora_layer_if_present(
f"single_transformer_blocks.{i}.proj_out",
f"single_blocks.{i}.linear2",
)
# Final layer.
add_lora_layer_if_present("proj_out", "final_layer.linear")
# Assert that all keys were processed.
assert len(grouped_state_dict) == 0
return LoRAModelRaw(layers=layers)
def _group_by_layer(state_dict: Dict[str, torch.Tensor]) -> dict[str, dict[str, torch.Tensor]]:
"""Groups the keys in the state dict by layer."""
layer_dict: dict[str, dict[str, torch.Tensor]] = {}
for key in state_dict:
# Split the 'lora_A.weight' or 'lora_B.weight' suffix from the layer name.
parts = key.rsplit(".", maxsplit=2)
layer_name = parts[0]
key_name = ".".join(parts[1:])
if layer_name not in layer_dict:
layer_dict[layer_name] = {}
layer_dict[layer_name][key_name] = state_dict[key]
return layer_dict

View File

@@ -1,80 +0,0 @@
import re
from typing import Any, Dict, TypeVar
import torch
from invokeai.backend.lora.layers.any_lora_layer import AnyLoRALayer
from invokeai.backend.lora.layers.utils import any_lora_layer_from_state_dict
from invokeai.backend.lora.lora_model_raw import LoRAModelRaw
# A regex pattern that matches all of the keys in the Kohya FLUX LoRA format.
# Example keys:
# lora_unet_double_blocks_0_img_attn_proj.alpha
# lora_unet_double_blocks_0_img_attn_proj.lora_down.weight
# lora_unet_double_blocks_0_img_attn_proj.lora_up.weight
FLUX_KOHYA_KEY_REGEX = (
r"lora_unet_(\w+_blocks)_(\d+)_(img_attn|img_mlp|img_mod|txt_attn|txt_mlp|txt_mod|linear1|linear2|modulation)_?(.*)"
)
def is_state_dict_likely_in_flux_kohya_format(state_dict: Dict[str, Any]) -> bool:
"""Checks if the provided state dict is likely in the Kohya FLUX LoRA format.
This is intended to be a high-precision detector, but it is not guaranteed to have perfect precision. (A
perfect-precision detector would require checking all keys against a whitelist and verifying tensor shapes.)
"""
return all(re.match(FLUX_KOHYA_KEY_REGEX, k) for k in state_dict.keys())
def lora_model_from_flux_kohya_state_dict(state_dict: Dict[str, torch.Tensor]) -> LoRAModelRaw:
# Group keys by layer.
grouped_state_dict: dict[str, dict[str, torch.Tensor]] = {}
for key, value in state_dict.items():
layer_name, param_name = key.split(".", 1)
if layer_name not in grouped_state_dict:
grouped_state_dict[layer_name] = {}
grouped_state_dict[layer_name][param_name] = value
# Convert the state dict to the InvokeAI format.
grouped_state_dict = convert_flux_kohya_state_dict_to_invoke_format(grouped_state_dict)
# Create LoRA layers.
layers: dict[str, AnyLoRALayer] = {}
for layer_key, layer_state_dict in grouped_state_dict.items():
layers[layer_key] = any_lora_layer_from_state_dict(layer_state_dict)
# Create and return the LoRAModelRaw.
return LoRAModelRaw(layers=layers)
T = TypeVar("T")
def convert_flux_kohya_state_dict_to_invoke_format(state_dict: Dict[str, T]) -> Dict[str, T]:
"""Converts a state dict from the Kohya FLUX LoRA format to LoRA weight format used internally by InvokeAI.
Example key conversions:
"lora_unet_double_blocks_0_img_attn_proj" -> "double_blocks.0.img_attn.proj"
"lora_unet_double_blocks_0_img_attn_proj" -> "double_blocks.0.img_attn.proj"
"lora_unet_double_blocks_0_img_attn_proj" -> "double_blocks.0.img_attn.proj"
"lora_unet_double_blocks_0_img_attn_qkv" -> "double_blocks.0.img_attn.qkv"
"lora_unet_double_blocks_0_img_attn_qkv" -> "double_blocks.0.img.attn.qkv"
"lora_unet_double_blocks_0_img_attn_qkv" -> "double_blocks.0.img.attn.qkv"
"""
def replace_func(match: re.Match[str]) -> str:
s = f"{match.group(1)}.{match.group(2)}.{match.group(3)}"
if match.group(4):
s += f".{match.group(4)}"
return s
converted_dict: dict[str, T] = {}
for k, v in state_dict.items():
match = re.match(FLUX_KOHYA_KEY_REGEX, k)
if match:
new_key = re.sub(FLUX_KOHYA_KEY_REGEX, replace_func, k)
converted_dict[new_key] = v
else:
raise ValueError(f"Key '{k}' does not match the expected pattern for FLUX LoRA weights.")
return converted_dict
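And a purely illustrative check of the regex conversion above (the tensor value is omitted, since only the keys matter here):
# Illustrative only.
assert convert_flux_kohya_state_dict_to_invoke_format({"lora_unet_double_blocks_0_img_attn_proj": None}) == {"double_blocks.0.img_attn.proj": None}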

View File

@@ -1,29 +0,0 @@
from typing import Dict
import torch
from invokeai.backend.lora.layers.any_lora_layer import AnyLoRALayer
from invokeai.backend.lora.layers.utils import any_lora_layer_from_state_dict
from invokeai.backend.lora.lora_model_raw import LoRAModelRaw
def lora_model_from_sd_state_dict(state_dict: Dict[str, torch.Tensor]) -> LoRAModelRaw:
grouped_state_dict: dict[str, dict[str, torch.Tensor]] = _group_state(state_dict)
layers: dict[str, AnyLoRALayer] = {}
for layer_key, values in grouped_state_dict.items():
layers[layer_key] = any_lora_layer_from_state_dict(values)
return LoRAModelRaw(layers=layers)
def _group_state(state_dict: Dict[str, torch.Tensor]) -> Dict[str, Dict[str, torch.Tensor]]:
state_dict_groupped: Dict[str, Dict[str, torch.Tensor]] = {}
for key, value in state_dict.items():
stem, leaf = key.split(".", 1)
if stem not in state_dict_groupped:
state_dict_groupped[stem] = {}
state_dict_groupped[stem][leaf] = value
return state_dict_groupped

View File

@@ -1,154 +0,0 @@
import bisect
from typing import Dict, List, Tuple, TypeVar
T = TypeVar("T")
def convert_sdxl_keys_to_diffusers_format(state_dict: Dict[str, T]) -> dict[str, T]:
"""Convert the keys of an SDXL LoRA state_dict to diffusers format.
The input state_dict can be in either Stability AI format or diffusers format. If the state_dict is already in
diffusers format, then this function will have no effect.
This function is adapted from:
https://github.com/bmaltais/kohya_ss/blob/2accb1305979ba62f5077a23aabac23b4c37e935/networks/lora_diffusers.py#L385-L409
Args:
state_dict (Dict[str, Tensor]): The SDXL LoRA state_dict.
Raises:
ValueError: If state_dict contains an unrecognized key, or not all keys could be converted.
Returns:
Dict[str, Tensor]: The diffusers-format state_dict.
"""
converted_count = 0 # The number of Stability AI keys converted to diffusers format.
not_converted_count = 0 # The number of keys that were not converted.
# Get a sorted list of Stability AI UNet keys so that we can efficiently search for keys with matching prefixes.
# For example, we want to efficiently find `input_blocks_4_1` in the list when searching for
# `input_blocks_4_1_proj_in`.
stability_unet_keys = list(SDXL_UNET_STABILITY_TO_DIFFUSERS_MAP)
stability_unet_keys.sort()
new_state_dict: dict[str, T] = {}
for full_key, value in state_dict.items():
if full_key.startswith("lora_unet_"):
search_key = full_key.replace("lora_unet_", "")
# Use bisect to find the key in stability_unet_keys that *may* match the search_key's prefix.
position = bisect.bisect_right(stability_unet_keys, search_key)
map_key = stability_unet_keys[position - 1]
# Now, check if the map_key *actually* matches the search_key.
if search_key.startswith(map_key):
new_key = full_key.replace(map_key, SDXL_UNET_STABILITY_TO_DIFFUSERS_MAP[map_key])
new_state_dict[new_key] = value
converted_count += 1
else:
new_state_dict[full_key] = value
not_converted_count += 1
elif full_key.startswith("lora_te1_") or full_key.startswith("lora_te2_"):
# The CLIP text encoders have the same keys in both Stability AI and diffusers formats.
new_state_dict[full_key] = value
continue
else:
raise ValueError(f"Unrecognized SDXL LoRA key prefix: '{full_key}'.")
if converted_count > 0 and not_converted_count > 0:
raise ValueError(
f"The SDXL LoRA could only be partially converted to diffusers format. converted={converted_count},"
f" not_converted={not_converted_count}"
)
return new_state_dict
# code from
# https://github.com/bmaltais/kohya_ss/blob/2accb1305979ba62f5077a23aabac23b4c37e935/networks/lora_diffusers.py#L15C1-L97C32
def _make_sdxl_unet_conversion_map() -> List[Tuple[str, str]]:
"""Create a dict mapping state_dict keys from Stability AI SDXL format to diffusers SDXL format."""
unet_conversion_map_layer: list[tuple[str, str]] = []
for i in range(3): # num_blocks is 3 in sdxl
# loop over downblocks/upblocks
for j in range(2):
# loop over resnets/attentions for downblocks
hf_down_res_prefix = f"down_blocks.{i}.resnets.{j}."
sd_down_res_prefix = f"input_blocks.{3*i + j + 1}.0."
unet_conversion_map_layer.append((sd_down_res_prefix, hf_down_res_prefix))
if i < 3:
# no attention layers in down_blocks.3
hf_down_atn_prefix = f"down_blocks.{i}.attentions.{j}."
sd_down_atn_prefix = f"input_blocks.{3*i + j + 1}.1."
unet_conversion_map_layer.append((sd_down_atn_prefix, hf_down_atn_prefix))
for j in range(3):
# loop over resnets/attentions for upblocks
hf_up_res_prefix = f"up_blocks.{i}.resnets.{j}."
sd_up_res_prefix = f"output_blocks.{3*i + j}.0."
unet_conversion_map_layer.append((sd_up_res_prefix, hf_up_res_prefix))
# if i > 0: commentout for sdxl
# no attention layers in up_blocks.0
hf_up_atn_prefix = f"up_blocks.{i}.attentions.{j}."
sd_up_atn_prefix = f"output_blocks.{3*i + j}.1."
unet_conversion_map_layer.append((sd_up_atn_prefix, hf_up_atn_prefix))
if i < 3:
# no downsample in down_blocks.3
hf_downsample_prefix = f"down_blocks.{i}.downsamplers.0.conv."
sd_downsample_prefix = f"input_blocks.{3*(i+1)}.0.op."
unet_conversion_map_layer.append((sd_downsample_prefix, hf_downsample_prefix))
# no upsample in up_blocks.3
hf_upsample_prefix = f"up_blocks.{i}.upsamplers.0."
sd_upsample_prefix = f"output_blocks.{3*i + 2}.{2}." # change for sdxl
unet_conversion_map_layer.append((sd_upsample_prefix, hf_upsample_prefix))
hf_mid_atn_prefix = "mid_block.attentions.0."
sd_mid_atn_prefix = "middle_block.1."
unet_conversion_map_layer.append((sd_mid_atn_prefix, hf_mid_atn_prefix))
for j in range(2):
hf_mid_res_prefix = f"mid_block.resnets.{j}."
sd_mid_res_prefix = f"middle_block.{2*j}."
unet_conversion_map_layer.append((sd_mid_res_prefix, hf_mid_res_prefix))
unet_conversion_map_resnet = [
# (stable-diffusion, HF Diffusers)
("in_layers.0.", "norm1."),
("in_layers.2.", "conv1."),
("out_layers.0.", "norm2."),
("out_layers.3.", "conv2."),
("emb_layers.1.", "time_emb_proj."),
("skip_connection.", "conv_shortcut."),
]
unet_conversion_map: list[tuple[str, str]] = []
for sd, hf in unet_conversion_map_layer:
if "resnets" in hf:
for sd_res, hf_res in unet_conversion_map_resnet:
unet_conversion_map.append((sd + sd_res, hf + hf_res))
else:
unet_conversion_map.append((sd, hf))
for j in range(2):
hf_time_embed_prefix = f"time_embedding.linear_{j+1}."
sd_time_embed_prefix = f"time_embed.{j*2}."
unet_conversion_map.append((sd_time_embed_prefix, hf_time_embed_prefix))
for j in range(2):
hf_label_embed_prefix = f"add_embedding.linear_{j+1}."
sd_label_embed_prefix = f"label_emb.0.{j*2}."
unet_conversion_map.append((sd_label_embed_prefix, hf_label_embed_prefix))
unet_conversion_map.append(("input_blocks.0.0.", "conv_in."))
unet_conversion_map.append(("out.0.", "conv_norm_out."))
unet_conversion_map.append(("out.2.", "conv_out."))
return unet_conversion_map
SDXL_UNET_STABILITY_TO_DIFFUSERS_MAP = {
sd.rstrip(".").replace(".", "_"): hf.rstrip(".").replace(".", "_") for sd, hf in _make_sdxl_unet_conversion_map()
}
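
As a quick illustration (a hedged sketch; the key names and shapes are invented for the example), a Stability AI UNet prefix such as `input_blocks_4_1` is rewritten to its diffusers counterpart `down_blocks_1_attentions_0`, while text-encoder keys pass through unchanged:

import torch

from invokeai.backend.lora.conversions.sdxl_lora_conversion_utils import convert_sdxl_keys_to_diffusers_format

state_dict = {
    # Stability AI layout: down block 1, attention 0 ("input_blocks_4_1").
    "lora_unet_input_blocks_4_1_proj_in.lora_down.weight": torch.zeros(4, 640),
    # CLIP text encoder keys are identical in both layouts and are passed through.
    "lora_te1_text_model_encoder_layers_0_mlp_fc1.alpha": torch.tensor(4.0),
}
converted = convert_sdxl_keys_to_diffusers_format(state_dict)
assert "lora_unet_down_blocks_1_attentions_0_proj_in.lora_down.weight" in converted
assert "lora_te1_text_model_encoder_layers_0_mlp_fc1.alpha" in converted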

View File

@@ -1,11 +0,0 @@
from typing import Union
from invokeai.backend.lora.layers.concatenated_lora_layer import ConcatenatedLoRALayer
from invokeai.backend.lora.layers.full_layer import FullLayer
from invokeai.backend.lora.layers.ia3_layer import IA3Layer
from invokeai.backend.lora.layers.loha_layer import LoHALayer
from invokeai.backend.lora.layers.lokr_layer import LoKRLayer
from invokeai.backend.lora.layers.lora_layer import LoRALayer
from invokeai.backend.lora.layers.norm_layer import NormLayer
AnyLoRALayer = Union[LoRALayer, LoHALayer, LoKRLayer, FullLayer, IA3Layer, NormLayer, ConcatenatedLoRALayer]

View File

@@ -1,46 +0,0 @@
from typing import List, Optional
import torch
from invokeai.backend.lora.layers.lora_layer_base import LoRALayerBase
class ConcatenatedLoRALayer(LoRALayerBase):
"""A LoRA layer that is composed of multiple LoRA layers concatenated along a specified axis.
This class was created to handle a special case with FLUX LoRA models. In the BFL FLUX model format, the attention
Q, K, V matrices are concatenated along the first dimension. In the diffusers LoRA format, the Q, K, V matrices are
stored as separate tensors. This class enables diffusers LoRA layers to be used in BFL FLUX models.
"""
def __init__(self, lora_layers: List[LoRALayerBase], concat_axis: int = 0):
# Note: We pass values={} to the base class, because the values are handled by the individual LoRA layers.
super().__init__(values={})
self._lora_layers = lora_layers
self._concat_axis = concat_axis
def get_weight(self, orig_weight: torch.Tensor) -> torch.Tensor:
# TODO(ryand): Currently, we pass orig_weight=None to the sub-layers. If we want to support sub-layers that
# require this value, we will need to implement chunking of the original weight tensor here.
layer_weights = [lora_layer.get_weight(None) for lora_layer in self._lora_layers] # pyright: ignore[reportArgumentType]
return torch.cat(layer_weights, dim=self._concat_axis)
def get_bias(self, orig_bias: torch.Tensor) -> Optional[torch.Tensor]:
# TODO(ryand): Currently, we pass orig_bias=None to the sub-layers. If we want to support sub-layers that
# require this value, we will need to implement chunking of the original bias tensor here.
layer_biases = [lora_layer.get_bias(None) for lora_layer in self._lora_layers] # pyright: ignore[reportArgumentType]
layer_bias_is_none = [layer_bias is None for layer_bias in layer_biases]
if any(layer_bias_is_none):
assert all(layer_bias_is_none)
return None
# Ignore the type error, because we have just verified that all layer biases are non-None.
return torch.cat(layer_biases, dim=self._concat_axis)
def calc_size(self) -> int:
return sum(lora_layer.calc_size() for lora_layer in self._lora_layers)
def to(self, device: Optional[torch.device] = None, dtype: Optional[torch.dtype] = None) -> None:
for lora_layer in self._lora_layers:
lora_layer.to(device=device, dtype=dtype)
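
A minimal sketch of the intended use, assuming the layer classes above are importable as listed; the 64-dim features, rank 4 and random weights are placeholders. Three per-projection LoRA layers are stacked along dim 0 to match a fused QKV weight:

import torch

from invokeai.backend.lora.layers.concatenated_lora_layer import ConcatenatedLoRALayer
from invokeai.backend.lora.layers.lora_layer import LoRALayer


def make_layer(out_features: int, in_features: int, rank: int) -> LoRALayer:
    # Build a plain LoRA layer from randomly initialized up/down matrices (illustrative values).
    return LoRALayer(
        {
            "lora_up.weight": torch.randn(out_features, rank),
            "lora_down.weight": torch.randn(rank, in_features),
        }
    )


# Separate Q, K and V LoRA layers (diffusers layout), concatenated to cover a fused QKV matrix.
q, k, v = (make_layer(64, 64, 4) for _ in range(3))
qkv = ConcatenatedLoRALayer(lora_layers=[q, k, v], concat_axis=0)
print(qkv.get_weight(None).shape)  # torch.Size([192, 64])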

View File

@@ -1,36 +0,0 @@
from typing import Dict, Optional
import torch
from invokeai.backend.lora.layers.lora_layer_base import LoRALayerBase
class FullLayer(LoRALayerBase):
# bias handled in LoRALayerBase(calc_size, to)
# weight: torch.Tensor
# bias: Optional[torch.Tensor]
def __init__(
self,
values: Dict[str, torch.Tensor],
):
super().__init__(values)
self.weight = values["diff"]
self.bias = values.get("diff_b", None)
self.rank = None # unscaled
self.check_keys(values, {"diff", "diff_b"})
def get_weight(self, orig_weight: torch.Tensor) -> torch.Tensor:
return self.weight
def calc_size(self) -> int:
model_size = super().calc_size()
model_size += self.weight.nelement() * self.weight.element_size()
return model_size
def to(self, device: Optional[torch.device] = None, dtype: Optional[torch.dtype] = None) -> None:
super().to(device=device, dtype=dtype)
self.weight = self.weight.to(device=device, dtype=dtype)

View File

@@ -1,41 +0,0 @@
from typing import Dict, Optional
import torch
from invokeai.backend.lora.layers.lora_layer_base import LoRALayerBase
class IA3Layer(LoRALayerBase):
# weight: torch.Tensor
# on_input: torch.Tensor
def __init__(
self,
values: Dict[str, torch.Tensor],
):
super().__init__(values)
self.weight = values["weight"]
self.on_input = values["on_input"]
self.rank = None # unscaled
self.check_keys(values, {"weight", "on_input"})
def get_weight(self, orig_weight: torch.Tensor) -> torch.Tensor:
weight = self.weight
if not self.on_input:
weight = weight.reshape(-1, 1)
assert orig_weight is not None
return orig_weight * weight
def calc_size(self) -> int:
model_size = super().calc_size()
model_size += self.weight.nelement() * self.weight.element_size()
model_size += self.on_input.nelement() * self.on_input.element_size()
return model_size
def to(self, device: Optional[torch.device] = None, dtype: Optional[torch.dtype] = None):
super().to(device=device, dtype=dtype)
self.weight = self.weight.to(device=device, dtype=dtype)
self.on_input = self.on_input.to(device=device, dtype=dtype)

View File

@@ -1,68 +0,0 @@
from typing import Dict, Optional
import torch
from invokeai.backend.lora.layers.lora_layer_base import LoRALayerBase
class LoHALayer(LoRALayerBase):
# w1_a: torch.Tensor
# w1_b: torch.Tensor
# w2_a: torch.Tensor
# w2_b: torch.Tensor
# t1: Optional[torch.Tensor] = None
# t2: Optional[torch.Tensor] = None
def __init__(self, values: Dict[str, torch.Tensor]):
super().__init__(values)
self.w1_a = values["hada_w1_a"]
self.w1_b = values["hada_w1_b"]
self.w2_a = values["hada_w2_a"]
self.w2_b = values["hada_w2_b"]
self.t1 = values.get("hada_t1", None)
self.t2 = values.get("hada_t2", None)
self.rank = self.w1_b.shape[0]
self.check_keys(
values,
{
"hada_w1_a",
"hada_w1_b",
"hada_w2_a",
"hada_w2_b",
"hada_t1",
"hada_t2",
},
)
def get_weight(self, orig_weight: torch.Tensor) -> torch.Tensor:
if self.t1 is None:
weight: torch.Tensor = (self.w1_a @ self.w1_b) * (self.w2_a @ self.w2_b)
else:
rebuild1 = torch.einsum("i j k l, j r, i p -> p r k l", self.t1, self.w1_b, self.w1_a)
rebuild2 = torch.einsum("i j k l, j r, i p -> p r k l", self.t2, self.w2_b, self.w2_a)
weight = rebuild1 * rebuild2
return weight
def calc_size(self) -> int:
model_size = super().calc_size()
for val in [self.w1_a, self.w1_b, self.w2_a, self.w2_b, self.t1, self.t2]:
if val is not None:
model_size += val.nelement() * val.element_size()
return model_size
def to(self, device: Optional[torch.device] = None, dtype: Optional[torch.dtype] = None) -> None:
super().to(device=device, dtype=dtype)
self.w1_a = self.w1_a.to(device=device, dtype=dtype)
self.w1_b = self.w1_b.to(device=device, dtype=dtype)
if self.t1 is not None:
self.t1 = self.t1.to(device=device, dtype=dtype)
self.w2_a = self.w2_a.to(device=device, dtype=dtype)
self.w2_b = self.w2_b.to(device=device, dtype=dtype)
if self.t2 is not None:
self.t2 = self.t2.to(device=device, dtype=dtype)
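
To make the composition concrete, a small hedged example (shapes and random values are invented): without the optional Tucker tensors, the delta is the elementwise product of two low-rank reconstructions:

import torch

from invokeai.backend.lora.layers.loha_layer import LoHALayer

values = {
    "hada_w1_a": torch.randn(32, 4),
    "hada_w1_b": torch.randn(4, 16),
    "hada_w2_a": torch.randn(32, 4),
    "hada_w2_b": torch.randn(4, 16),
}
layer = LoHALayer(values)
# With hada_t1/hada_t2 absent, get_weight() returns (w1_a @ w1_b) * (w2_a @ w2_b).
expected = (values["hada_w1_a"] @ values["hada_w1_b"]) * (values["hada_w2_a"] @ values["hada_w2_b"])
assert torch.allclose(layer.get_weight(orig_weight=None), expected)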

View File

@@ -1,113 +0,0 @@
from typing import Dict, Optional
import torch
from invokeai.backend.lora.layers.lora_layer_base import LoRALayerBase
class LoKRLayer(LoRALayerBase):
# w1: Optional[torch.Tensor] = None
# w1_a: Optional[torch.Tensor] = None
# w1_b: Optional[torch.Tensor] = None
# w2: Optional[torch.Tensor] = None
# w2_a: Optional[torch.Tensor] = None
# w2_b: Optional[torch.Tensor] = None
# t2: Optional[torch.Tensor] = None
def __init__(
self,
values: Dict[str, torch.Tensor],
):
super().__init__(values)
self.w1 = values.get("lokr_w1", None)
if self.w1 is None:
self.w1_a = values["lokr_w1_a"]
self.w1_b = values["lokr_w1_b"]
else:
self.w1_b = None
self.w1_a = None
self.w2 = values.get("lokr_w2", None)
if self.w2 is None:
self.w2_a = values["lokr_w2_a"]
self.w2_b = values["lokr_w2_b"]
else:
self.w2_a = None
self.w2_b = None
self.t2 = values.get("lokr_t2", None)
if self.w1_b is not None:
self.rank = self.w1_b.shape[0]
elif self.w2_b is not None:
self.rank = self.w2_b.shape[0]
else:
self.rank = None # unscaled
self.check_keys(
values,
{
"lokr_w1",
"lokr_w1_a",
"lokr_w1_b",
"lokr_w2",
"lokr_w2_a",
"lokr_w2_b",
"lokr_t2",
},
)
def get_weight(self, orig_weight: torch.Tensor) -> torch.Tensor:
w1: Optional[torch.Tensor] = self.w1
if w1 is None:
assert self.w1_a is not None
assert self.w1_b is not None
w1 = self.w1_a @ self.w1_b
w2 = self.w2
if w2 is None:
if self.t2 is None:
assert self.w2_a is not None
assert self.w2_b is not None
w2 = self.w2_a @ self.w2_b
else:
w2 = torch.einsum("i j k l, i p, j r -> p r k l", self.t2, self.w2_a, self.w2_b)
if len(w2.shape) == 4:
w1 = w1.unsqueeze(2).unsqueeze(2)
w2 = w2.contiguous()
assert w1 is not None
assert w2 is not None
weight = torch.kron(w1, w2)
return weight
def calc_size(self) -> int:
model_size = super().calc_size()
for val in [self.w1, self.w1_a, self.w1_b, self.w2, self.w2_a, self.w2_b, self.t2]:
if val is not None:
model_size += val.nelement() * val.element_size()
return model_size
def to(self, device: Optional[torch.device] = None, dtype: Optional[torch.dtype] = None) -> None:
super().to(device=device, dtype=dtype)
if self.w1 is not None:
self.w1 = self.w1.to(device=device, dtype=dtype)
else:
assert self.w1_a is not None
assert self.w1_b is not None
self.w1_a = self.w1_a.to(device=device, dtype=dtype)
self.w1_b = self.w1_b.to(device=device, dtype=dtype)
if self.w2 is not None:
self.w2 = self.w2.to(device=device, dtype=dtype)
else:
assert self.w2_a is not None
assert self.w2_b is not None
self.w2_a = self.w2_a.to(device=device, dtype=dtype)
self.w2_b = self.w2_b.to(device=device, dtype=dtype)
if self.t2 is not None:
self.t2 = self.t2.to(device=device, dtype=dtype)
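
A short hedged example of the Kronecker factorization (sizes and random values are illustrative): here `w1` is stored directly and `w2` as a low-rank pair, and the delta is `kron(w1, w2)`:

import torch

from invokeai.backend.lora.layers.lokr_layer import LoKRLayer

values = {
    "lokr_w1": torch.randn(4, 4),
    "lokr_w2_a": torch.randn(8, 2),
    "lokr_w2_b": torch.randn(2, 16),
}
layer = LoKRLayer(values)
w2 = values["lokr_w2_a"] @ values["lokr_w2_b"]  # rank-2 reconstruction of the second factor
assert torch.allclose(layer.get_weight(orig_weight=None), torch.kron(values["lokr_w1"], w2))
print(layer.get_weight(orig_weight=None).shape)  # torch.Size([32, 64])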

View File

@@ -1,58 +0,0 @@
from typing import Dict, Optional
import torch
from invokeai.backend.lora.layers.lora_layer_base import LoRALayerBase
# TODO: find and debug lora/locon with bias
class LoRALayer(LoRALayerBase):
# up: torch.Tensor
# mid: Optional[torch.Tensor]
# down: torch.Tensor
def __init__(
self,
values: Dict[str, torch.Tensor],
):
super().__init__(values)
self.up = values["lora_up.weight"]
self.down = values["lora_down.weight"]
self.mid = values.get("lora_mid.weight", None)
self.rank = self.down.shape[0]
self.check_keys(
values,
{
"lora_up.weight",
"lora_down.weight",
"lora_mid.weight",
},
)
def get_weight(self, orig_weight: torch.Tensor) -> torch.Tensor:
if self.mid is not None:
up = self.up.reshape(self.up.shape[0], self.up.shape[1])
down = self.down.reshape(self.down.shape[0], self.down.shape[1])
weight = torch.einsum("m n w h, i m, n j -> i j w h", self.mid, up, down)
else:
weight = self.up.reshape(self.up.shape[0], -1) @ self.down.reshape(self.down.shape[0], -1)
return weight
def calc_size(self) -> int:
model_size = super().calc_size()
for val in [self.up, self.mid, self.down]:
if val is not None:
model_size += val.nelement() * val.element_size()
return model_size
def to(self, device: Optional[torch.device] = None, dtype: Optional[torch.dtype] = None) -> None:
super().to(device=device, dtype=dtype)
self.up = self.up.to(device=device, dtype=dtype)
self.down = self.down.to(device=device, dtype=dtype)
if self.mid is not None:
self.mid = self.mid.to(device=device, dtype=dtype)
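
For reference, a minimal hedged example (the 320-feature / rank-4 shapes are placeholders): the delta is the low-rank product `up @ down`; the `alpha / rank` scale is applied later by the patchers, not here:

import torch

from invokeai.backend.lora.layers.lora_layer import LoRALayer

values = {
    "lora_up.weight": torch.randn(320, 4),
    "lora_down.weight": torch.randn(4, 320),
    "alpha": torch.tensor(4.0),
}
layer = LoRALayer(values)
delta = layer.get_weight(orig_weight=None)  # up @ down, without the alpha/rank scale
print(delta.shape, layer.rank, layer.alpha)  # torch.Size([320, 320]) 4 4.0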

View File

@@ -1,71 +0,0 @@
from typing import Dict, Optional, Set
import torch
import invokeai.backend.util.logging as logger
class LoRALayerBase:
# rank: Optional[int]
# alpha: Optional[float]
# bias: Optional[torch.Tensor]
# @property
# def scale(self):
# return self.alpha / self.rank if (self.alpha and self.rank) else 1.0
def __init__(
self,
values: Dict[str, torch.Tensor],
):
if "alpha" in values:
self.alpha = values["alpha"].item()
else:
self.alpha = None
if "bias_indices" in values and "bias_values" in values and "bias_size" in values:
self.bias: Optional[torch.Tensor] = torch.sparse_coo_tensor(
values["bias_indices"],
values["bias_values"],
tuple(values["bias_size"]),
)
else:
self.bias = None
self.rank = None # set in layer implementation
def get_weight(self, orig_weight: torch.Tensor) -> torch.Tensor:
raise NotImplementedError()
def get_bias(self, orig_bias: torch.Tensor) -> Optional[torch.Tensor]:
return self.bias
def get_parameters(self, orig_module: torch.nn.Module) -> Dict[str, torch.Tensor]:
params = {"weight": self.get_weight(orig_module.weight)}
bias = self.get_bias(orig_module.bias)
if bias is not None:
params["bias"] = bias
return params
def calc_size(self) -> int:
model_size = 0
for val in [self.bias]:
if val is not None:
model_size += val.nelement() * val.element_size()
return model_size
def to(self, device: Optional[torch.device] = None, dtype: Optional[torch.dtype] = None) -> None:
if self.bias is not None:
self.bias = self.bias.to(device=device, dtype=dtype)
def check_keys(self, values: Dict[str, torch.Tensor], known_keys: Set[str]):
"""Log a warning if values contains unhandled keys."""
# {"alpha", "bias_indices", "bias_values", "bias_size"} are hard-coded, because they are handled by
# `LoRALayerBase`. Sub-classes should provide the known_keys that they handle.
all_known_keys = known_keys | {"alpha", "bias_indices", "bias_values", "bias_size"}
unknown_keys = set(values.keys()) - all_known_keys
if unknown_keys:
logger.warning(
f"Unexpected keys found in LoRA/LyCORIS layer, model might work incorrectly! Keys: {unknown_keys}"
)

View File

@@ -1,36 +0,0 @@
from typing import Dict, Optional
import torch
from invokeai.backend.lora.layers.lora_layer_base import LoRALayerBase
class NormLayer(LoRALayerBase):
# bias handled in LoRALayerBase(calc_size, to)
# weight: torch.Tensor
# bias: Optional[torch.Tensor]
def __init__(
self,
values: Dict[str, torch.Tensor],
):
super().__init__(values)
self.weight = values["w_norm"]
self.bias = values.get("b_norm", None)
self.rank = None # unscaled
self.check_keys(values, {"w_norm", "b_norm"})
def get_weight(self, orig_weight: torch.Tensor) -> torch.Tensor:
return self.weight
def calc_size(self) -> int:
model_size = super().calc_size()
model_size += self.weight.nelement() * self.weight.element_size()
return model_size
def to(self, device: Optional[torch.device] = None, dtype: Optional[torch.dtype] = None) -> None:
super().to(device=device, dtype=dtype)
self.weight = self.weight.to(device=device, dtype=dtype)

View File

@@ -1,33 +0,0 @@
from typing import Dict
import torch
from invokeai.backend.lora.layers.any_lora_layer import AnyLoRALayer
from invokeai.backend.lora.layers.full_layer import FullLayer
from invokeai.backend.lora.layers.ia3_layer import IA3Layer
from invokeai.backend.lora.layers.loha_layer import LoHALayer
from invokeai.backend.lora.layers.lokr_layer import LoKRLayer
from invokeai.backend.lora.layers.lora_layer import LoRALayer
from invokeai.backend.lora.layers.norm_layer import NormLayer
def any_lora_layer_from_state_dict(state_dict: Dict[str, torch.Tensor]) -> AnyLoRALayer:
    # Detect layers according to LyCORIS detection logic (`weight_list_det`):
    # https://github.com/KohakuBlueleaf/LyCORIS/tree/8ad8000efb79e2b879054da8c9356e6143591bad/lycoris/modules
    if "lora_up.weight" in state_dict:
        # LoRA a.k.a. LoCon
        return LoRALayer(state_dict)
    elif "hada_w1_a" in state_dict:
        return LoHALayer(state_dict)
    elif "lokr_w1" in state_dict or "lokr_w1_a" in state_dict:
        return LoKRLayer(state_dict)
    elif "diff" in state_dict:
        # Full a.k.a. Diff
        return FullLayer(state_dict)
    elif "on_input" in state_dict:
        return IA3Layer(state_dict)
    elif "w_norm" in state_dict:
        return NormLayer(state_dict)
    else:
        raise ValueError(f"Unsupported lora format: {state_dict.keys()}")

View File

@@ -1,22 +0,0 @@
# Copyright (c) 2024 The InvokeAI Development team
from typing import Dict, Optional
import torch
from invokeai.backend.lora.layers.any_lora_layer import AnyLoRALayer
from invokeai.backend.raw_model import RawModel
class LoRAModelRaw(RawModel):  # (torch.nn.Module):
    def __init__(self, layers: Dict[str, AnyLoRALayer]):
        self.layers = layers

    def to(self, device: Optional[torch.device] = None, dtype: Optional[torch.dtype] = None) -> None:
        for _key, layer in self.layers.items():
            layer.to(device=device, dtype=dtype)

    def calc_size(self) -> int:
        model_size = 0
        for _, layer in self.layers.items():
            model_size += layer.calc_size()
        return model_size

View File

@@ -1,148 +0,0 @@
from contextlib import contextmanager
from typing import Dict, Iterable, Optional, Tuple
import torch
from invokeai.backend.lora.lora_model_raw import LoRAModelRaw
from invokeai.backend.util.devices import TorchDevice
from invokeai.backend.util.original_weights_storage import OriginalWeightsStorage
class LoraPatcher:
@staticmethod
@torch.no_grad()
@contextmanager
def apply_lora_patches(
model: torch.nn.Module,
patches: Iterable[Tuple[LoRAModelRaw, float]],
prefix: str,
cached_weights: Optional[Dict[str, torch.Tensor]] = None,
):
"""Apply one or more LoRA patches to a model within a context manager.
:param model: The model to patch.
:param patches: An iterator that returns tuples of LoRA patches and associated weights. An iterator is used so
that the LoRA patches do not need to be loaded into memory all at once.
:param prefix: The keys in the patches will be filtered to only include weights with this prefix.
:param cached_weights: Read-only copy of the model's state dict in CPU, for efficient unpatching purposes.
"""
original_weights = OriginalWeightsStorage(cached_weights)
try:
for patch, patch_weight in patches:
LoraPatcher.apply_lora_patch(
model=model,
prefix=prefix,
patch=patch,
patch_weight=patch_weight,
original_weights=original_weights,
)
del patch
yield
finally:
for param_key, weight in original_weights.get_changed_weights():
model.get_parameter(param_key).copy_(weight)
@staticmethod
@torch.no_grad()
def apply_lora_patch(
model: torch.nn.Module,
prefix: str,
patch: LoRAModelRaw,
patch_weight: float,
original_weights: OriginalWeightsStorage,
):
"""
Apply a single LoRA patch to a model.
:param model: The model to patch.
:param patch: LoRA model to patch in.
:param patch_weight: LoRA patch weight.
:param prefix: A string prefix that precedes the keys used in the LoRA's weight layers.
:param original_weights: Storage for the original module weights that the LoRA patch modifies, used for unpatching.
"""
if patch_weight == 0:
return
# If the layer keys contain a dot, then they are not flattened, and can be directly used to access model
# submodules. If the layer keys do not contain a dot, then they are flattened, meaning that all '.' have been
# replaced with '_'. Non-flattened keys are preferred, because they allow submodules to be accessed directly
# without searching, but some legacy code still uses flattened keys.
layer_keys_are_flattened = "." not in next(iter(patch.layers.keys()))
prefix_len = len(prefix)
for layer_key, layer in patch.layers.items():
if not layer_key.startswith(prefix):
continue
module_key, module = LoraPatcher._get_submodule(
model, layer_key[prefix_len:], layer_key_is_flattened=layer_keys_are_flattened
)
# All of the LoRA weight calculations will be done on the same device as the module weight.
# (Performance will be best if this is a CUDA device.)
device = module.weight.device
dtype = module.weight.dtype
layer_scale = layer.alpha / layer.rank if (layer.alpha and layer.rank) else 1.0
# We intentionally move to the target device first, then cast. Experimentally, this was found to
# be significantly faster for 16-bit CPU tensors being moved to a CUDA device than doing the
# same thing in a single call to '.to(...)'.
layer.to(device=device)
layer.to(dtype=torch.float32)
# TODO(ryand): Using torch.autocast(...) over explicit casting may offer a speed benefit on CUDA
# devices here. Experimentally, it was found to be very slow on CPU. More investigation needed.
for param_name, lora_param_weight in layer.get_parameters(module).items():
param_key = module_key + "." + param_name
module_param = module.get_parameter(param_name)
# Save original weight
original_weights.save(param_key, module_param)
if module_param.shape != lora_param_weight.shape:
lora_param_weight = lora_param_weight.reshape(module_param.shape)
lora_param_weight *= patch_weight * layer_scale
module_param += lora_param_weight.to(dtype=dtype)
layer.to(device=TorchDevice.CPU_DEVICE)
@staticmethod
def _get_submodule(
model: torch.nn.Module, layer_key: str, layer_key_is_flattened: bool
) -> tuple[str, torch.nn.Module]:
"""Get the submodule corresponding to the given layer key.
:param model: The model to search.
:param layer_key: The layer key to search for.
:param layer_key_is_flattened: Whether the layer key is flattened. If flattened, then all '.' have been replaced
with '_'. Non-flattened keys are preferred, because they allow submodules to be accessed directly without
searching, but some legacy code still uses flattened keys.
:return: A tuple containing the module key and the submodule.
"""
if not layer_key_is_flattened:
return layer_key, model.get_submodule(layer_key)
# Handle flattened keys.
assert "." not in layer_key
module = model
module_key = ""
key_parts = layer_key.split("_")
submodule_name = key_parts.pop(0)
while len(key_parts) > 0:
try:
module = module.get_submodule(submodule_name)
module_key += "." + submodule_name
submodule_name = key_parts.pop(0)
except Exception:
submodule_name += "_" + key_parts.pop(0)
module = module.get_submodule(submodule_name)
module_key = (module_key + "." + submodule_name).lstrip(".")
return module_key, module
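
A minimal, hypothetical usage sketch of the patcher (the toy `proj` module, the flattened key and the random weights are all invented for illustration): weights are modified in place inside the context manager and restored from the original-weights storage on exit:

import torch

from invokeai.backend.lora.conversions.sd_lora_conversion_utils import lora_model_from_sd_state_dict
from invokeai.backend.lora.lora_patcher import LoraPatcher

unet = torch.nn.ModuleDict({"proj": torch.nn.Linear(8, 8, bias=False)})
lora = lora_model_from_sd_state_dict(
    {
        # Flattened layer key (no '.'), so the patcher resolves the submodule by searching.
        "lora_unet_proj.lora_up.weight": torch.randn(8, 2),
        "lora_unet_proj.lora_down.weight": torch.randn(2, 8),
    }
)
before = unet["proj"].weight.clone()
with LoraPatcher.apply_lora_patches(model=unet, patches=[(lora, 0.75)], prefix="lora_unet_"):
    print((unet["proj"].weight - before).abs().max())  # nonzero: the scaled LoRA delta is applied in place
assert torch.allclose(unet["proj"].weight, before)  # original weights restored on exit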

View File

@@ -66,9 +66,8 @@ class ModelLoader(ModelLoaderBase):
return (model_base / config.path).resolve()
def _load_and_cache(self, config: AnyModelConfig, submodel_type: Optional[SubModelType] = None) -> ModelLockerBase:
stats_name = ":".join([config.base, config.type, config.name, (submodel_type or "")])
try:
return self._ram_cache.get(config.key, submodel_type, stats_name=stats_name)
return self._ram_cache.get(config.key, submodel_type)
except IndexError:
pass
@@ -85,7 +84,7 @@ class ModelLoader(ModelLoaderBase):
return self._ram_cache.get(
key=config.key,
submodel_type=submodel_type,
stats_name=stats_name,
stats_name=":".join([config.base, config.type, config.name, (submodel_type or "")]),
)
def get_size_fs(

View File

@@ -128,24 +128,7 @@ class ModelCacheBase(ABC, Generic[T]):
@property
@abstractmethod
def max_cache_size(self) -> float:
"""Return the maximum size the RAM cache can grow to."""
pass
@max_cache_size.setter
@abstractmethod
def max_cache_size(self, value: float) -> None:
"""Set the cap on vram cache size."""
@property
@abstractmethod
def max_vram_cache_size(self) -> float:
"""Return the maximum size the VRAM cache can grow to."""
pass
@max_vram_cache_size.setter
@abstractmethod
def max_vram_cache_size(self, value: float) -> float:
"""Set the maximum size the VRAM cache can grow to."""
"""Return true if the cache is configured to lazily offload models in VRAM."""
pass
@abstractmethod

View File

@@ -70,7 +70,6 @@ class ModelCache(ModelCacheBase[AnyModel]):
max_vram_cache_size: float,
execution_device: torch.device = torch.device("cuda"),
storage_device: torch.device = torch.device("cpu"),
precision: torch.dtype = torch.float16,
lazy_offloading: bool = True,
log_memory_usage: bool = False,
logger: Optional[Logger] = None,
@@ -82,13 +81,11 @@ class ModelCache(ModelCacheBase[AnyModel]):
:param max_vram_cache_size: Maximum size of the execution_device cache in GBs.
:param execution_device: Torch device to load active model into [torch.device('cuda')]
:param storage_device: Torch device to save inactive model in [torch.device('cpu')]
:param precision: Precision for loaded models [torch.float16]
:param lazy_offloading: Keep model in VRAM until another model needs to be loaded
:param lazy_offloading: Keep model in VRAM until another model needs to be loaded.
:param log_memory_usage: If True, a memory snapshot will be captured before and after every model cache
operation, and the result will be logged (at debug level). There is a time cost to capturing the memory
snapshots, so it is recommended to disable this feature unless you are actively inspecting the model cache's
behaviour.
:param logger: InvokeAILogger to use (otherwise creates one)
"""
# allow lazy offloading only when vram cache enabled
self._lazy_offloading = lazy_offloading and max_vram_cache_size > 0
@@ -133,16 +130,6 @@ class ModelCache(ModelCacheBase[AnyModel]):
"""Set the cap on cache size."""
self._max_cache_size = value
@property
def max_vram_cache_size(self) -> float:
"""Return the cap on vram cache size."""
return self._max_vram_cache_size
@max_vram_cache_size.setter
def max_vram_cache_size(self, value: float) -> None:
"""Set the cap on vram cache size."""
self._max_vram_cache_size = value
@property
def stats(self) -> Optional[CacheStats]:
"""Return collected CacheStats object."""

View File

@@ -32,9 +32,6 @@ from invokeai.backend.model_manager.config import (
)
from invokeai.backend.model_manager.load.load_default import ModelLoader
from invokeai.backend.model_manager.load.model_loader_registry import ModelLoaderRegistry
from invokeai.backend.model_manager.util.model_util import (
convert_bundle_to_flux_transformer_checkpoint,
)
from invokeai.backend.util.silence_warnings import SilenceWarnings
try:
@@ -193,13 +190,6 @@ class FluxCheckpointModel(ModelLoader):
with SilenceWarnings():
model = Flux(params[config.config_path])
sd = load_file(model_path)
if "model.diffusion_model.double_blocks.0.img_attn.norm.key_norm.scale" in sd:
sd = convert_bundle_to_flux_transformer_checkpoint(sd)
new_sd_size = sum([ten.nelement() * torch.bfloat16.itemsize for ten in sd.values()])
self._ram_cache.make_room(new_sd_size)
for k in sd.keys():
# We need to cast to bfloat16 due to it being the only currently supported dtype for inference
sd[k] = sd[k].to(torch.bfloat16)
model.load_state_dict(sd, assign=True)
return model
@@ -240,7 +230,5 @@ class FluxBnbQuantizednf4bCheckpointModel(ModelLoader):
model = Flux(params[config.config_path])
model = quantize_model_nf4(model, modules_to_not_convert=set(), compute_dtype=torch.bfloat16)
sd = load_file(model_path)
if "model.diffusion_model.double_blocks.0.img_attn.norm.key_norm.scale" in sd:
sd = convert_bundle_to_flux_transformer_checkpoint(sd)
model.load_state_dict(sd, assign=True)
return model

View File

@@ -5,18 +5,8 @@ from logging import Logger
from pathlib import Path
from typing import Optional
import torch
from safetensors.torch import load_file
from invokeai.app.services.config import InvokeAIAppConfig
from invokeai.backend.lora.conversions.flux_diffusers_lora_conversion_utils import (
lora_model_from_flux_diffusers_state_dict,
)
from invokeai.backend.lora.conversions.flux_kohya_lora_conversion_utils import (
lora_model_from_flux_kohya_state_dict,
)
from invokeai.backend.lora.conversions.sd_lora_conversion_utils import lora_model_from_sd_state_dict
from invokeai.backend.lora.conversions.sdxl_lora_conversion_utils import convert_sdxl_keys_to_diffusers_format
from invokeai.backend.lora import LoRAModelRaw
from invokeai.backend.model_manager import (
AnyModel,
AnyModelConfig,
@@ -55,33 +45,14 @@ class LoRALoader(ModelLoader):
raise ValueError("There are no submodels in a LoRA model.")
model_path = Path(config.path)
assert self._model_base is not None
# Load the state dict from the model file.
if model_path.suffix == ".safetensors":
state_dict = load_file(model_path.absolute().as_posix(), device="cpu")
else:
state_dict = torch.load(model_path, map_location="cpu")
# Apply state_dict key conversions, if necessary.
if self._model_base == BaseModelType.StableDiffusionXL:
state_dict = convert_sdxl_keys_to_diffusers_format(state_dict)
model = lora_model_from_sd_state_dict(state_dict=state_dict)
elif self._model_base == BaseModelType.Flux:
if config.format == ModelFormat.Diffusers:
model = lora_model_from_flux_diffusers_state_dict(state_dict=state_dict)
elif config.format == ModelFormat.LyCORIS:
model = lora_model_from_flux_kohya_state_dict(state_dict=state_dict)
else:
raise ValueError(f"LoRA model is in unsupported FLUX format: {config.format}")
elif self._model_base in [BaseModelType.StableDiffusion1, BaseModelType.StableDiffusion2]:
# Currently, we don't apply any conversions for SD1 and SD2 LoRA models.
model = lora_model_from_sd_state_dict(state_dict=state_dict)
else:
raise ValueError(f"Unsupported LoRA base model: {self._model_base}")
model.to(dtype=self._torch_dtype)
model = LoRAModelRaw.from_checkpoint(
file_path=model_path,
dtype=self._torch_dtype,
base_model=self._model_base,
)
return model
# override
def _get_model_path(self, config: AnyModelConfig) -> Path:
# cheating a little - we remember this variable for using in the subsequent call to _load_model()
self._model_base = config.base

View File

@@ -15,7 +15,7 @@ from invokeai.backend.image_util.depth_anything.depth_anything_pipeline import D
from invokeai.backend.image_util.grounding_dino.grounding_dino_pipeline import GroundingDinoPipeline
from invokeai.backend.image_util.segment_anything.segment_anything_pipeline import SegmentAnythingPipeline
from invokeai.backend.ip_adapter.ip_adapter import IPAdapter
from invokeai.backend.lora.lora_model_raw import LoRAModelRaw
from invokeai.backend.lora import LoRAModelRaw
from invokeai.backend.model_manager.config import AnyModel
from invokeai.backend.onnx.onnx_runtime import IAIOnnxRuntimeModel
from invokeai.backend.spandrel_image_to_image_model import SpandrelImageToImageModel

View File

@@ -10,10 +10,6 @@ from picklescan.scanner import scan_file_path
import invokeai.backend.util.logging as logger
from invokeai.app.util.misc import uuid_string
from invokeai.backend.lora.conversions.flux_diffusers_lora_conversion_utils import (
is_state_dict_likely_in_flux_diffusers_format,
)
from invokeai.backend.lora.conversions.flux_kohya_lora_conversion_utils import is_state_dict_likely_in_flux_kohya_format
from invokeai.backend.model_hash.model_hash import HASHING_ALGORITHMS, ModelHash
from invokeai.backend.model_manager.config import (
AnyModelConfig,
@@ -112,8 +108,6 @@ class ModelProbe(object):
"CLIPVisionModelWithProjection": ModelType.CLIPVision,
"T2IAdapter": ModelType.T2IAdapter,
"CLIPModel": ModelType.CLIPEmbed,
"CLIPTextModel": ModelType.CLIPEmbed,
"T5EncoderModel": ModelType.T5Encoder,
}
@classmethod
@@ -230,27 +224,14 @@ class ModelProbe(object):
ckpt = ckpt.get("state_dict", ckpt)
for key in [str(k) for k in ckpt.keys()]:
if key.startswith(
(
"cond_stage_model.",
"first_stage_model.",
"model.diffusion_model.",
# FLUX models in the official BFL format contain keys with the "double_blocks." prefix.
"double_blocks.",
# Some FLUX checkpoint files contain transformer keys prefixed with "model.diffusion_model".
# This prefix is typically used to distinguish between multiple models bundled in a single file.
"model.diffusion_model.double_blocks.",
)
):
if key.startswith(("cond_stage_model.", "first_stage_model.", "model.diffusion_model.", "double_blocks.")):
# Keys starting with double_blocks are associated with Flux models
return ModelType.Main
elif key.startswith(("encoder.conv_in", "decoder.conv_in")):
return ModelType.VAE
elif key.startswith(("lora_te_", "lora_unet_")):
return ModelType.LoRA
# "lora_A.weight" and "lora_B.weight" are associated with models in PEFT format. We don't support all PEFT
# LoRA models, but as of the time of writing, we support Diffusers FLUX PEFT LoRA models.
elif key.endswith(("to_k_lora.up.weight", "to_q_lora.down.weight", "lora_A.weight", "lora_B.weight")):
elif key.endswith(("to_k_lora.up.weight", "to_q_lora.down.weight")):
return ModelType.LoRA
elif key.startswith(("controlnet", "control_model", "input_blocks")):
return ModelType.ControlNet
@@ -302,16 +283,9 @@ class ModelProbe(object):
if (folder_path / "image_encoder.txt").exists():
return ModelType.IPAdapter
config_path = None
for p in [
folder_path / "model_index.json", # pipeline
folder_path / "config.json", # most diffusers
folder_path / "text_encoder_2" / "config.json", # T5 text encoder
folder_path / "text_encoder" / "config.json", # T5 CLIP
]:
if p.exists():
config_path = p
break
i = folder_path / "model_index.json"
c = folder_path / "config.json"
config_path = i if i.exists() else c if c.exists() else None
if config_path:
with open(config_path, "r") as file:
@@ -354,10 +328,7 @@ class ModelProbe(object):
# TODO: Decide between dev/schnell
checkpoint = ModelProbe._scan_and_load_checkpoint(model_path)
state_dict = checkpoint.get("state_dict") or checkpoint
if (
"guidance_in.out_layer.weight" in state_dict
or "model.diffusion_model.guidance_in.out_layer.weight" in state_dict
):
if "guidance_in.out_layer.weight" in state_dict:
# For flux, this is a key in invokeai.backend.flux.util.params
# Due to model type and format being the descriminator for model configs this
# is used rather than attempting to support flux with separate model types and format
@@ -365,7 +336,7 @@ class ModelProbe(object):
config_file = "flux-dev"
else:
# For flux, this is a key in invokeai.backend.flux.util.params
# Due to model type and format being the discriminator for model configs this
# Due to model type and format being the descriminator for model configs this
# is used rather than attempting to support flux with separate model types and format
# If changed in the future, please fix me
config_file = "flux-schnell"
@@ -472,10 +443,7 @@ class CheckpointProbeBase(ProbeBase):
def get_format(self) -> ModelFormat:
state_dict = self.checkpoint.get("state_dict") or self.checkpoint
if (
"double_blocks.0.img_attn.proj.weight.quant_state.bitsandbytes__nf4" in state_dict
or "model.diffusion_model.double_blocks.0.img_attn.proj.weight.quant_state.bitsandbytes__nf4" in state_dict
):
if "double_blocks.0.img_attn.proj.weight.quant_state.bitsandbytes__nf4" in state_dict:
return ModelFormat.BnbQuantizednf4b
return ModelFormat("checkpoint")
@@ -502,10 +470,7 @@ class PipelineCheckpointProbe(CheckpointProbeBase):
def get_base_type(self) -> BaseModelType:
checkpoint = self.checkpoint
state_dict = self.checkpoint.get("state_dict") or checkpoint
if (
"double_blocks.0.img_attn.norm.key_norm.scale" in state_dict
or "model.diffusion_model.double_blocks.0.img_attn.norm.key_norm.scale" in state_dict
):
if "double_blocks.0.img_attn.norm.key_norm.scale" in state_dict:
return BaseModelType.Flux
key_name = "model.diffusion_model.input_blocks.2.1.transformer_blocks.0.attn2.to_k.weight"
if key_name in state_dict and state_dict[key_name].shape[-1] == 768:
@@ -560,21 +525,12 @@ class LoRACheckpointProbe(CheckpointProbeBase):
"""Class for LoRA checkpoints."""
def get_format(self) -> ModelFormat:
if is_state_dict_likely_in_flux_diffusers_format(self.checkpoint):
# TODO(ryand): This is an unusual case. In other places throughout the codebase, we treat
# ModelFormat.Diffusers as meaning that the model is in a directory. In this case, the model is a single
# file, but the weight keys are in the diffusers format.
return ModelFormat.Diffusers
return ModelFormat.LyCORIS
return ModelFormat("lycoris")
def get_base_type(self) -> BaseModelType:
if is_state_dict_likely_in_flux_kohya_format(self.checkpoint) or is_state_dict_likely_in_flux_diffusers_format(
self.checkpoint
):
return BaseModelType.Flux
checkpoint = self.checkpoint
token_vector_length = lora_token_vector_length(checkpoint)
# If we've gotten here, we assume that the model is a Stable Diffusion model.
token_vector_length = lora_token_vector_length(self.checkpoint)
if token_vector_length == 768:
return BaseModelType.StableDiffusion1
elif token_vector_length == 1024:
@@ -791,27 +747,8 @@ class TextualInversionFolderProbe(FolderProbeBase):
class T5EncoderFolderProbe(FolderProbeBase):
def get_base_type(self) -> BaseModelType:
return BaseModelType.Any
def get_format(self) -> ModelFormat:
path = self.model_path / "text_encoder_2"
if (path / "model.safetensors.index.json").exists():
return ModelFormat.T5Encoder
files = list(path.glob("*.safetensors"))
if len(files) == 0:
raise InvalidModelConfigException(f"{self.model_path.as_posix()}: no .safetensors files found")
# shortcut: look for the quantization in the name
if any(x for x in files if "llm_int8" in x.as_posix()):
return ModelFormat.BnbQuantizedLlmInt8b
# more reliable path: probe contents for a 'SCB' key
ckpt = read_checkpoint_meta(files[0], scan=True)
if any("SCB" in x for x in ckpt.keys()):
return ModelFormat.BnbQuantizedLlmInt8b
raise InvalidModelConfigException(f"{self.model_path.as_posix()}: unknown model format")
return ModelFormat.T5Encoder
class ONNXFolderProbe(PipelineFolderProbe):

View File

@@ -133,29 +133,3 @@ def lora_token_vector_length(checkpoint: Dict[str, torch.Tensor]) -> Optional[in
break
return lora_token_vector_length
def convert_bundle_to_flux_transformer_checkpoint(
transformer_state_dict: dict[str, torch.Tensor],
) -> dict[str, torch.Tensor]:
original_state_dict: dict[str, torch.Tensor] = {}
keys_to_remove: list[str] = []
for k, v in transformer_state_dict.items():
if not k.startswith("model.diffusion_model"):
keys_to_remove.append(k) # This can be removed in the future if we only want to delete transformer keys
continue
if k.endswith("scale"):
# Scale math must be done at bfloat16 due to our current flux model
# support limitations at inference time
v = v.to(dtype=torch.bfloat16)
new_key = k.replace("model.diffusion_model.", "")
original_state_dict[new_key] = v
keys_to_remove.append(k)
# Remove processed keys from the original dictionary, leaving others in case
# other model state dicts need to be pulled
for k in keys_to_remove:
del transformer_state_dict[k]
return original_state_dict
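
A hedged sketch of the helper removed in this hunk (usable at revisions that still include it; the specific keys are illustrative): keys prefixed with `model.diffusion_model.` are remapped to bare transformer keys, `*.scale` tensors are cast to bfloat16, and unrelated keys are excluded:

import torch

from invokeai.backend.model_manager.util.model_util import convert_bundle_to_flux_transformer_checkpoint

bundle = {
    "model.diffusion_model.double_blocks.0.img_attn.norm.key_norm.scale": torch.ones(128),
    "vae.decoder.conv_in.weight": torch.zeros(1),  # not a transformer key: excluded from the result
}
transformer_sd = convert_bundle_to_flux_transformer_checkpoint(bundle)
print(list(transformer_sd.keys()))  # ['double_blocks.0.img_attn.norm.key_norm.scale']
print(transformer_sd["double_blocks.0.img_attn.norm.key_norm.scale"].dtype)  # torch.bfloat16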

View File

@@ -5,18 +5,32 @@ from __future__ import annotations
import pickle
from contextlib import contextmanager
from typing import Any, Dict, Iterator, List, Optional, Tuple, Type, Union
from typing import Any, Dict, Generator, Iterator, List, Optional, Tuple, Type, Union
import numpy as np
import torch
from diffusers import UNet2DConditionModel
from diffusers import OnnxRuntimeModel, UNet2DConditionModel
from transformers import CLIPTextModel, CLIPTextModelWithProjection, CLIPTokenizer
from invokeai.app.shared.models import FreeUConfig
from invokeai.backend.lora.lora_model_raw import LoRAModelRaw
from invokeai.backend.lora import LoRAModelRaw
from invokeai.backend.model_manager import AnyModel
from invokeai.backend.model_manager.load.optimizations import skip_torch_weight_init
from invokeai.backend.onnx.onnx_runtime import IAIOnnxRuntimeModel
from invokeai.backend.stable_diffusion.extensions.lora import LoRAExt
from invokeai.backend.textual_inversion import TextualInversionManager, TextualInversionModelRaw
from invokeai.backend.util.original_weights_storage import OriginalWeightsStorage
"""
loras = [
(lora_model1, 0.7),
(lora_model2, 0.4),
]
with LoRAHelper.apply_lora_unet(unet, loras):
# unet with applied loras
# unmodified unet
"""
class ModelPatcher:
@@ -40,6 +54,95 @@ class ModelPatcher:
finally:
unet.set_attn_processor(unet_orig_processors)
@staticmethod
def _resolve_lora_key(model: torch.nn.Module, lora_key: str, prefix: str) -> Tuple[str, torch.nn.Module]:
assert "." not in lora_key
if not lora_key.startswith(prefix):
raise Exception(f"lora_key with invalid prefix: {lora_key}, {prefix}")
module = model
module_key = ""
key_parts = lora_key[len(prefix) :].split("_")
submodule_name = key_parts.pop(0)
while len(key_parts) > 0:
try:
module = module.get_submodule(submodule_name)
module_key += "." + submodule_name
submodule_name = key_parts.pop(0)
except Exception:
submodule_name += "_" + key_parts.pop(0)
module = module.get_submodule(submodule_name)
module_key = (module_key + "." + submodule_name).lstrip(".")
return (module_key, module)
@classmethod
@contextmanager
def apply_lora_unet(
cls,
unet: UNet2DConditionModel,
loras: Iterator[Tuple[LoRAModelRaw, float]],
cached_weights: Optional[Dict[str, torch.Tensor]] = None,
) -> Generator[None, None, None]:
with cls.apply_lora(
unet,
loras=loras,
prefix="lora_unet_",
cached_weights=cached_weights,
):
yield
@classmethod
@contextmanager
def apply_lora_text_encoder(
cls,
text_encoder: CLIPTextModel,
loras: Iterator[Tuple[LoRAModelRaw, float]],
cached_weights: Optional[Dict[str, torch.Tensor]] = None,
) -> Generator[None, None, None]:
with cls.apply_lora(text_encoder, loras=loras, prefix="lora_te_", cached_weights=cached_weights):
yield
@classmethod
@contextmanager
def apply_lora(
cls,
model: AnyModel,
loras: Iterator[Tuple[LoRAModelRaw, float]],
prefix: str,
cached_weights: Optional[Dict[str, torch.Tensor]] = None,
) -> Generator[None, None, None]:
"""
Apply one or more LoRAs to a model.
:param model: The model to patch.
:param loras: An iterator that returns the LoRA to patch in and its patch weight.
:param prefix: A string prefix that precedes the keys used in the LoRA's weight layers.
:param cached_weights: Read-only copy of the model's state dict in CPU, for unpatching purposes.
"""
original_weights = OriginalWeightsStorage(cached_weights)
try:
for lora_model, lora_weight in loras:
LoRAExt.patch_model(
model=model,
prefix=prefix,
lora=lora_model,
lora_weight=lora_weight,
original_weights=original_weights,
)
del lora_model
yield
finally:
with torch.no_grad():
for param_key, weight in original_weights.get_changed_weights():
model.get_parameter(param_key).copy_(weight)
@classmethod
@contextmanager
def apply_ti(
@@ -179,6 +282,26 @@ class ModelPatcher:
class ONNXModelPatcher:
@classmethod
@contextmanager
def apply_lora_unet(
cls,
unet: OnnxRuntimeModel,
loras: Iterator[Tuple[LoRAModelRaw, float]],
) -> None:
with cls.apply_lora(unet, loras, "lora_unet_"):
yield
@classmethod
@contextmanager
def apply_lora_text_encoder(
cls,
text_encoder: OnnxRuntimeModel,
loras: List[Tuple[LoRAModelRaw, float]],
) -> None:
with cls.apply_lora(text_encoder, loras, "lora_te_"):
yield
# based on
# https://github.com/ssube/onnx-web/blob/ca2e436f0623e18b4cfe8a0363fcfcf10508acf7/api/onnx_web/convert/diffusion/lora.py#L323
@classmethod

View File

@@ -1,17 +1,18 @@
from __future__ import annotations
from contextlib import contextmanager
from typing import TYPE_CHECKING
from typing import TYPE_CHECKING, Tuple
import torch
from diffusers import UNet2DConditionModel
from invokeai.backend.lora.lora_model_raw import LoRAModelRaw
from invokeai.backend.lora.lora_patcher import LoraPatcher
from invokeai.backend.stable_diffusion.extensions.base import ExtensionBase
from invokeai.backend.util.devices import TorchDevice
if TYPE_CHECKING:
from invokeai.app.invocations.model import ModelIdentifierField
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.lora import LoRAModelRaw
from invokeai.backend.util.original_weights_storage import OriginalWeightsStorage
@@ -30,14 +31,107 @@ class LoRAExt(ExtensionBase):
@contextmanager
def patch_unet(self, unet: UNet2DConditionModel, original_weights: OriginalWeightsStorage):
lora_model = self._node_context.models.load(self._model_id).model
assert isinstance(lora_model, LoRAModelRaw)
LoraPatcher.apply_lora_patch(
self.patch_model(
model=unet,
prefix="lora_unet_",
patch=lora_model,
patch_weight=self._weight,
lora=lora_model,
lora_weight=self._weight,
original_weights=original_weights,
)
del lora_model
yield
@classmethod
@torch.no_grad()
def patch_model(
cls,
model: torch.nn.Module,
prefix: str,
lora: LoRAModelRaw,
lora_weight: float,
original_weights: OriginalWeightsStorage,
):
"""
Apply a single LoRA patch to a model.
:param model: The model to patch.
:param lora: LoRA model to patch in.
:param lora_weight: LoRA patch weight.
:param prefix: A string prefix that precedes the keys used in the LoRA's weight layers.
:param original_weights: Storage for the original module weights that the LoRA patch modifies, used for unpatching.
"""
if lora_weight == 0:
return
# assert lora.device.type == "cpu"
for layer_key, layer in lora.layers.items():
if not layer_key.startswith(prefix):
continue
# TODO(ryand): A non-negligible amount of time is currently spent resolving LoRA keys. This
# should be improved in the following ways:
# 1. The key mapping could be more-efficiently pre-computed. This would save time every time a
# LoRA model is applied.
# 2. From an API perspective, there's no reason that the `ModelPatcher` should be aware of the
# intricacies of Stable Diffusion key resolution. It should just expect the input LoRA
# weights to have valid keys.
assert isinstance(model, torch.nn.Module)
module_key, module = cls._resolve_lora_key(model, layer_key, prefix)
# All of the LoRA weight calculations will be done on the same device as the module weight.
# (Performance will be best if this is a CUDA device.)
device = module.weight.device
dtype = module.weight.dtype
layer_scale = layer.alpha / layer.rank if (layer.alpha and layer.rank) else 1.0
# We intentionally move to the target device first, then cast. Experimentally, this was found to
# be significantly faster for 16-bit CPU tensors being moved to a CUDA device than doing the
# same thing in a single call to '.to(...)'.
layer.to(device=device)
layer.to(dtype=torch.float32)
# TODO(ryand): Using torch.autocast(...) over explicit casting may offer a speed benefit on CUDA
# devices here. Experimentally, it was found to be very slow on CPU. More investigation needed.
for param_name, lora_param_weight in layer.get_parameters(module).items():
param_key = module_key + "." + param_name
module_param = module.get_parameter(param_name)
# save original weight
original_weights.save(param_key, module_param)
if module_param.shape != lora_param_weight.shape:
# TODO: debug on lycoris
lora_param_weight = lora_param_weight.reshape(module_param.shape)
lora_param_weight *= lora_weight * layer_scale
module_param += lora_param_weight.to(dtype=dtype)
layer.to(device=TorchDevice.CPU_DEVICE)
@staticmethod
def _resolve_lora_key(model: torch.nn.Module, lora_key: str, prefix: str) -> Tuple[str, torch.nn.Module]:
assert "." not in lora_key
if not lora_key.startswith(prefix):
raise Exception(f"lora_key with invalid prefix: {lora_key}, {prefix}")
module = model
module_key = ""
key_parts = lora_key[len(prefix) :].split("_")
submodule_name = key_parts.pop(0)
while len(key_parts) > 0:
try:
module = module.get_submodule(submodule_name)
module_key += "." + submodule_name
submodule_name = key_parts.pop(0)
except Exception:
submodule_name += "_" + key_parts.pop(0)
module = module.get_submodule(submodule_name)
module_key = (module_key + "." + submodule_name).lstrip(".")
return (module_key, module)
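
A small hedged example of the flattened-key resolution above (the toy module tree is invented): every '.' in the original key was replaced with '_', so the resolver greedily extends the candidate submodule name until `get_submodule()` succeeds at each level:

import torch

from invokeai.backend.stable_diffusion.extensions.lora import LoRAExt

model = torch.nn.ModuleDict(
    {"down_blocks": torch.nn.ModuleList([torch.nn.ModuleDict({"proj_in": torch.nn.Linear(4, 4)})])}
)
module_key, module = LoRAExt._resolve_lora_key(model, "lora_unet_down_blocks_0_proj_in", "lora_unet_")
print(module_key)              # down_blocks.0.proj_in
print(type(module).__name__)   # Linear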

View File

@@ -16,14 +16,6 @@ module.exports = {
'no-promise-executor-return': 'error',
// https://eslint.org/docs/latest/rules/require-await
'require-await': 'error',
'no-restricted-properties': [
'error',
{
object: 'crypto',
property: 'randomUUID',
message: 'Use of crypto.randomUUID is not allowed as it is not available in all browsers.',
},
],
},
overrides: [
/**

View File

@@ -11,8 +11,6 @@ const config: KnipConfig = {
'src/features/nodes/types/v2/**',
// TODO(psyche): maybe we can clean up these utils after canvas v2 release
'src/features/controlLayers/konva/util.ts',
// TODO(psyche): restore HRF functionality?
'src/features/hrf/**',
],
ignoreBinaries: ['only-allow'],
paths: {

View File

@@ -58,7 +58,7 @@
"@dnd-kit/sortable": "^8.0.0",
"@dnd-kit/utilities": "^3.2.2",
"@fontsource-variable/inter": "^5.0.20",
"@invoke-ai/ui-library": "^0.0.33",
"@invoke-ai/ui-library": "^0.0.32",
"@nanostores/react": "^0.7.3",
"@reduxjs/toolkit": "2.2.3",
"@roarr/browser-log-writer": "^1.3.0",
@@ -92,7 +92,7 @@
"react-i18next": "^14.1.3",
"react-icons": "^5.2.1",
"react-redux": "9.1.2",
"react-resizable-panels": "^2.1.2",
"react-resizable-panels": "^2.0.23",
"react-use": "^17.5.1",
"react-virtuoso": "^4.9.0",
"reactflow": "^11.11.4",
@@ -136,7 +136,6 @@
"@vitest/coverage-v8": "^1.5.0",
"@vitest/ui": "^1.5.0",
"concurrently": "^8.2.2",
"csstype": "^3.1.3",
"dpdm": "^3.14.0",
"eslint": "^8.57.0",
"eslint-plugin-i18next": "^6.0.9",

View File

@@ -24,8 +24,8 @@ dependencies:
specifier: ^5.0.20
version: 5.0.20
'@invoke-ai/ui-library':
specifier: ^0.0.33
version: 0.0.33(@chakra-ui/form-control@2.2.0)(@chakra-ui/icon@3.2.0)(@chakra-ui/media-query@3.3.0)(@chakra-ui/menu@2.2.1)(@chakra-ui/spinner@2.1.0)(@chakra-ui/system@2.6.2)(@fontsource-variable/inter@5.0.20)(@types/react@18.3.3)(i18next@23.12.2)(react-dom@18.3.1)(react@18.3.1)
specifier: ^0.0.32
version: 0.0.32(@chakra-ui/form-control@2.2.0)(@chakra-ui/icon@3.2.0)(@chakra-ui/media-query@3.3.0)(@chakra-ui/menu@2.2.1)(@chakra-ui/spinner@2.1.0)(@chakra-ui/system@2.6.2)(@fontsource-variable/inter@5.0.20)(@types/react@18.3.3)(i18next@23.12.2)(react-dom@18.3.1)(react@18.3.1)
'@nanostores/react':
specifier: ^0.7.3
version: 0.7.3(nanostores@0.11.2)(react@18.3.1)
@@ -126,8 +126,8 @@ dependencies:
specifier: 9.1.2
version: 9.1.2(@types/react@18.3.3)(react@18.3.1)(redux@5.0.1)
react-resizable-panels:
specifier: ^2.1.2
version: 2.1.2(react-dom@18.3.1)(react@18.3.1)
specifier: ^2.0.23
version: 2.0.23(react-dom@18.3.1)(react@18.3.1)
react-use:
specifier: ^17.5.1
version: 17.5.1(react-dom@18.3.1)(react@18.3.1)
@@ -238,9 +238,6 @@ devDependencies:
concurrently:
specifier: ^8.2.2
version: 8.2.2
csstype:
specifier: ^3.1.3
version: 3.1.3
dpdm:
specifier: ^3.14.0
version: 3.14.0
@@ -2056,7 +2053,7 @@ packages:
dependencies:
'@chakra-ui/dom-utils': 2.1.0
react: 18.3.1
react-focus-lock: 2.13.2(@types/react@18.3.3)(react@18.3.1)
react-focus-lock: 2.13.0(@types/react@18.3.3)(react@18.3.1)
transitivePeerDependencies:
- '@types/react'
dev: false
@@ -2253,7 +2250,7 @@ packages:
framer-motion: 10.18.0(react-dom@18.3.1)(react@18.3.1)
react: 18.3.1
react-dom: 18.3.1(react@18.3.1)
react-remove-scroll: 2.6.0(@types/react@18.3.3)(react@18.3.1)
react-remove-scroll: 2.5.10(@types/react@18.3.3)(react@18.3.1)
transitivePeerDependencies:
- '@types/react'
dev: false
@@ -3574,8 +3571,8 @@ packages:
prettier: 3.3.3
dev: true
/@invoke-ai/ui-library@0.0.33(@chakra-ui/form-control@2.2.0)(@chakra-ui/icon@3.2.0)(@chakra-ui/media-query@3.3.0)(@chakra-ui/menu@2.2.1)(@chakra-ui/spinner@2.1.0)(@chakra-ui/system@2.6.2)(@fontsource-variable/inter@5.0.20)(@types/react@18.3.3)(i18next@23.12.2)(react-dom@18.3.1)(react@18.3.1):
resolution: {integrity: sha512-YLydTCOTUEgju4Ex6yXt/bvNBcO97y6zc1cGYjt7vtJMS8e6deA89cC5JejjbmVgntdnn49cDyeUxB8Z24gZew==}
/@invoke-ai/ui-library@0.0.32(@chakra-ui/form-control@2.2.0)(@chakra-ui/icon@3.2.0)(@chakra-ui/media-query@3.3.0)(@chakra-ui/menu@2.2.1)(@chakra-ui/spinner@2.1.0)(@chakra-ui/system@2.6.2)(@fontsource-variable/inter@5.0.20)(@types/react@18.3.3)(i18next@23.12.2)(react-dom@18.3.1)(react@18.3.1):
resolution: {integrity: sha512-JxAoblrDu/cZ4ha9KO4ry5OWvyLUE1Dj28i+ciMaDNUpC/cN+IyiTbUBoFoPaoN5JP8Zpd/MYCcmF2qsziHDzg==}
peerDependencies:
'@fontsource-variable/inter': ^5.0.16
react: ^18.2.0
@@ -10011,8 +10008,8 @@ packages:
resolution: {integrity: sha512-nsO+KSNgo1SbJqJEYRE9ERzo7YtYbou/OqjSQKxV7jcKox7+usiUVZOAC+XnDOABXggQTno0Y1CpVnuWEc1boQ==}
dev: false
/react-focus-lock@2.13.2(@types/react@18.3.3)(react@18.3.1):
resolution: {integrity: sha512-T/7bsofxYqnod2xadvuwjGKHOoL5GH7/EIPI5UyEvaU/c2CcphvGI371opFtuY/SYdbMsNiuF4HsHQ50nA/TKQ==}
/react-focus-lock@2.13.0(@types/react@18.3.3)(react@18.3.1):
resolution: {integrity: sha512-w7aIcTwZwNzUp2fYQDMICy+khFwVmKmOrLF8kNsPS+dz4Oi/oxoVJ2wCMVvX6rWGriM/+mYaTyp1MRmkcs2amw==}
peerDependencies:
'@types/react': ^16.8.0 || ^17.0.0 || ^18.0.0
react: ^16.8.0 || ^17.0.0 || ^18.0.0
@@ -10147,6 +10144,25 @@ packages:
tslib: 2.7.0
dev: false
/react-remove-scroll@2.5.10(@types/react@18.3.3)(react@18.3.1):
resolution: {integrity: sha512-m3zvBRANPBw3qxVVjEIPEQinkcwlFZ4qyomuWVpNJdv4c6MvHfXV0C3L9Jx5rr3HeBHKNRX+1jreB5QloDIJjA==}
engines: {node: '>=10'}
peerDependencies:
'@types/react': ^16.8.0 || ^17.0.0 || ^18.0.0
react: ^16.8.0 || ^17.0.0 || ^18.0.0
peerDependenciesMeta:
'@types/react':
optional: true
dependencies:
'@types/react': 18.3.3
react: 18.3.1
react-remove-scroll-bar: 2.3.6(@types/react@18.3.3)(react@18.3.1)
react-style-singleton: 2.2.1(@types/react@18.3.3)(react@18.3.1)
tslib: 2.7.0
use-callback-ref: 1.3.2(@types/react@18.3.3)(react@18.3.1)
use-sidecar: 1.1.2(@types/react@18.3.3)(react@18.3.1)
dev: false
/react-remove-scroll@2.5.5(@types/react@18.3.3)(react@18.3.1):
resolution: {integrity: sha512-ImKhrzJJsyXJfBZ4bzu8Bwpka14c/fQt0k+cyFp/PBhTfyDnU5hjOtM4AG/0AMyy8oKzOTR0lDgJIM7pYXI0kw==}
engines: {node: '>=10'}
@@ -10166,27 +10182,8 @@ packages:
use-sidecar: 1.1.2(@types/react@18.3.3)(react@18.3.1)
dev: false
/react-remove-scroll@2.6.0(@types/react@18.3.3)(react@18.3.1):
resolution: {integrity: sha512-I2U4JVEsQenxDAKaVa3VZ/JeJZe0/2DxPWL8Tj8yLKctQJQiZM52pn/GWFpSp8dftjM3pSAHVJZscAnC/y+ySQ==}
engines: {node: '>=10'}
peerDependencies:
'@types/react': ^16.8.0 || ^17.0.0 || ^18.0.0
react: ^16.8.0 || ^17.0.0 || ^18.0.0
peerDependenciesMeta:
'@types/react':
optional: true
dependencies:
'@types/react': 18.3.3
react: 18.3.1
react-remove-scroll-bar: 2.3.6(@types/react@18.3.3)(react@18.3.1)
react-style-singleton: 2.2.1(@types/react@18.3.3)(react@18.3.1)
tslib: 2.7.0
use-callback-ref: 1.3.2(@types/react@18.3.3)(react@18.3.1)
use-sidecar: 1.1.2(@types/react@18.3.3)(react@18.3.1)
dev: false
/react-resizable-panels@2.1.2(react-dom@18.3.1)(react@18.3.1):
resolution: {integrity: sha512-Ku2Bo7JvE8RpHhl4X1uhkdeT9auPBoxAOlGTqomDUUrBAX2mVGuHYZTcWvlnJSgx0QyHIxHECgGB5XVPUbUOkQ==}
/react-resizable-panels@2.0.23(react-dom@18.3.1)(react@18.3.1):
resolution: {integrity: sha512-8ZKTwTU11t/FYwiwhMdtZYYyFxic5U5ysRu2YwfkAgDbUJXFvnWSJqhnzkSlW+mnDoNAzDCrJhdOSXBPA76wug==}
peerDependencies:
react: ^16.14.0 || ^17.0.0 || ^18.0.0
react-dom: ^16.14.0 || ^17.0.0 || ^18.0.0

View File

@@ -127,14 +127,7 @@
"bulkDownloadRequestedDesc": "Dein Download wird vorbereitet. Dies kann ein paar Momente dauern.",
"bulkDownloadRequestFailed": "Problem beim Download vorbereiten",
"bulkDownloadFailed": "Download fehlgeschlagen",
"alwaysShowImageSizeBadge": "Zeige immer Bilder Größe Abzeichen",
"selectForCompare": "Zum Vergleichen auswählen",
"compareImage": "Bilder vergleichen",
"exitSearch": "Suche beenden",
"newestFirst": "Neueste zuerst",
"oldestFirst": "Älteste zuerst",
"openInViewer": "Im Viewer öffnen",
"swapImages": "Bilder tauschen"
"alwaysShowImageSizeBadge": "Zeige immer Bilder Größe Abzeichen"
},
"hotkeys": {
"keyboardShortcuts": "Tastenkürzel",
@@ -638,8 +631,7 @@
"archived": "Archiviert",
"noBoards": "Kein {boardType}} Ordner",
"hideBoards": "Ordner verstecken",
"viewBoards": "Ordner ansehen",
"deletedPrivateBoardsCannotbeRestored": "Gelöschte Boards können nicht wiederhergestellt werden. Wenn Sie „Nur Board löschen“ wählen, werden die Bilder in einen privaten, nicht kategorisierten Status für den Ersteller des Bildes versetzt."
"viewBoards": "Ordner ansehen"
},
"controlnet": {
"showAdvanced": "Zeige Erweitert",
@@ -789,9 +781,7 @@
"batchFieldValues": "Stapelverarbeitungswerte",
"batchQueued": "Stapelverarbeitung eingereiht",
"graphQueued": "Graph eingereiht",
"graphFailedToQueue": "Fehler beim Einreihen des Graphen",
"generations_one": "Generation",
"generations_other": "Generationen"
"graphFailedToQueue": "Fehler beim Einreihen des Graphen"
},
"metadata": {
"negativePrompt": "Negativ Beschreibung",
@@ -1156,10 +1146,5 @@
"noMatchingTriggers": "Keine passenden Trigger",
"addPromptTrigger": "Prompt-Trigger hinzufügen",
"compatibleEmbeddings": "Kompatible Einbettungen"
},
"ui": {
"tabs": {
"queue": "Warteschlange"
}
}
}

View File

@@ -93,7 +93,6 @@
"copy": "Copy",
"copyError": "$t(gallery.copy) Error",
"on": "On",
"off": "Off",
"or": "or",
"checkpoint": "Checkpoint",
"communityLabel": "Community",
@@ -135,7 +134,6 @@
"nodes": "Workflows",
"notInstalled": "Not $t(common.installed)",
"openInNewTab": "Open in New Tab",
"openInViewer": "Open in Viewer",
"orderBy": "Order By",
"outpaint": "outpaint",
"outputs": "Outputs",
@@ -375,7 +373,6 @@
"useCache": "Use Cache"
},
"gallery": {
"gallery": "Gallery",
"alwaysShowImageSizeBadge": "Always Show Image Size Badge",
"assets": "Assets",
"autoAssignBoardOnClick": "Auto-Assign Board on Click",
@@ -388,11 +385,11 @@
"deleteImage_one": "Delete Image",
"deleteImage_other": "Delete {{count}} Images",
"deleteImagePermanent": "Deleted images cannot be restored.",
"displayBoardSearch": "Board Search",
"displaySearch": "Image Search",
"displayBoardSearch": "Display Board Search",
"displaySearch": "Display Search",
"download": "Download",
"exitBoardSearch": "Exit Board Search",
"exitSearch": "Exit Image Search",
"exitSearch": "Exit Search",
"featuresWillReset": "If you delete this image, those features will immediately be reset.",
"galleryImageSize": "Image Size",
"gallerySettings": "Gallery Settings",
@@ -438,8 +435,7 @@
"compareHelp1": "Hold <Kbd>Alt</Kbd> while clicking a gallery image or using the arrow keys to change the compare image.",
"compareHelp2": "Press <Kbd>M</Kbd> to cycle through comparison modes.",
"compareHelp3": "Press <Kbd>C</Kbd> to swap the compared images.",
"compareHelp4": "Press <Kbd>Z</Kbd> or <Kbd>Esc</Kbd> to exit.",
"toggleMiniViewer": "Toggle Mini Viewer"
"compareHelp4": "Press <Kbd>Z</Kbd> or <Kbd>Esc</Kbd> to exit."
},
"hotkeys": {
"searchHotkeys": "Search Hotkeys",
@@ -1018,8 +1014,6 @@
"noModelForControlAdapter": "Control Adapter #{{number}} has no model selected.",
"incompatibleBaseModelForControlAdapter": "Control Adapter #{{number}} model is incompatible with main model.",
"noModelSelected": "No model selected",
"canvasManagerNotLoaded": "Canvas Manager not loaded",
"canvasBusy": "Canvas is busy",
"noPrompts": "No prompts generated",
"noNodesInGraph": "No nodes in graph",
"systemDisconnected": "System disconnected",
@@ -1051,11 +1045,12 @@
"scaledHeight": "Scaled H",
"scaledWidth": "Scaled W",
"scheduler": "Scheduler",
"seamlessXAxis": "Seamless X Axis",
"seamlessYAxis": "Seamless Y Axis",
"seamlessXAxis": "Seamless Tiling X Axis",
"seamlessYAxis": "Seamless Tiling Y Axis",
"seed": "Seed",
"imageActions": "Image Actions",
"sendToCanvas": "Send To Canvas",
"sendToImg2Img": "Send to Image to Image",
"sendToUnifiedCanvas": "Send To Unified Canvas",
"sendToUpscale": "Send To Upscale",
"showOptionsPanel": "Show Side Panel (O or T)",
"shuffle": "Shuffle Seed",
@@ -1196,8 +1191,8 @@
"problemSavingMaskDesc": "Unable to export mask",
"prunedQueue": "Pruned Queue",
"resetInitialImage": "Reset Initial Image",
"sentToCanvas": "Sent to Canvas",
"sentToUpscale": "Sent to Upscale",
"sentToImageToImage": "Sent To Image To Image",
"sentToUnifiedCanvas": "Sent to Unified Canvas",
"serverError": "Server Error",
"sessionRef": "Session: {{sessionId}}",
"setAsCanvasInitialImage": "Set as canvas initial image",
@@ -1659,9 +1654,6 @@
"storeNotInitialized": "Store is not initialized"
},
"controlLayers": {
"bookmark": "Bookmark for Quick Switch",
"fitBboxToLayers": "Fit Bbox To Layers",
"removeBookmark": "Remove Bookmark",
"saveCanvasToGallery": "Save Canvas To Gallery",
"saveBboxToGallery": "Save Bbox To Gallery",
"savedToGalleryOk": "Saved to Gallery",
@@ -1680,7 +1672,6 @@
"clearCaches": "Clear Caches",
"recalculateRects": "Recalculate Rects",
"clipToBbox": "Clip Strokes to Bbox",
"compositeMaskedRegions": "Composite Masked Regions",
"addLayer": "Add Layer",
"duplicate": "Duplicate",
"moveToFront": "Move to Front",
@@ -1716,8 +1707,6 @@
"inpaintMask": "Inpaint Mask",
"regionalGuidance": "Regional Guidance",
"ipAdapter": "IP Adapter",
"sendingToCanvas": "Sending to Canvas",
"sendingToGallery": "Sending to Gallery",
"sendToGallery": "Send To Gallery",
"sendToGalleryDesc": "Generations will be sent to the gallery.",
"sendToCanvas": "Send To Canvas",
@@ -1736,12 +1725,12 @@
"regionalGuidance_withCount_hidden": "Regional Guidance ({{count}} hidden)",
"controlLayers_withCount_hidden": "Control Layers ({{count}} hidden)",
"rasterLayers_withCount_hidden": "Raster Layers ({{count}} hidden)",
"globalIPAdapters_withCount_hidden": "Global IP Adapters ({{count}} hidden)",
"ipAdapters_withCount_hidden": "IP Adapters ({{count}} hidden)",
"inpaintMasks_withCount_hidden": "Inpaint Masks ({{count}} hidden)",
"regionalGuidance_withCount_visible": "Regional Guidance ({{count}})",
"controlLayers_withCount_visible": "Control Layers ({{count}})",
"rasterLayers_withCount_visible": "Raster Layers ({{count}})",
"globalIPAdapters_withCount_visible": "Global IP Adapters ({{count}})",
"ipAdapters_withCount_visible": "IP Adapters ({{count}})",
"inpaintMasks_withCount_visible": "Inpaint Masks ({{count}})",
"globalControlAdapter": "Global $t(controlnet.controlAdapter_one)",
"globalControlAdapterLayer": "Global $t(controlnet.controlAdapter_one) $t(unifiedCanvas.layer)",
@@ -1754,8 +1743,8 @@
"clearProcessor": "Clear Processor",
"resetProcessor": "Reset Processor to Defaults",
"noLayersAdded": "No Layers Added",
"layer_one": "Layer",
"layer_other": "Layers",
"layers_one": "Layer",
"layers_other": "Layers",
"objects_zero": "empty",
"objects_one": "{{count}} object",
"objects_other": "{{count}} objects",
@@ -1791,54 +1780,16 @@
"bbox": "Bbox",
"move": "Move",
"view": "View",
"transform": "Transform",
"colorPicker": "Color Picker"
},
"filter": {
"filter": "Filter",
"filters": "Filters",
"filterType": "Filter Type",
"autoProcess": "Auto Process",
"reset": "Reset",
"process": "Process",
"apply": "Apply",
"cancel": "Cancel",
"spandrel": {
"label": "Image-to-Image Model",
"description": "Run an image-to-image model on the selected layer.",
"paramModel": "Model",
"paramAutoScale": "Auto Scale",
"paramAutoScaleDesc": "The selected model will be run until the target scale is reached.",
"paramScale": "Target Scale"
}
},
"transform": {
"transform": "Transform",
"fitToBbox": "Fit to Bbox",
"reset": "Reset",
"preview": "Preview",
"apply": "Apply",
"cancel": "Cancel"
},
"settings": {
"snapToGrid": {
"label": "Snap to Grid",
"on": "On",
"off": "Off"
}
},
"HUD": {
"bbox": "Bbox",
"scaledBbox": "Scaled Bbox",
"autoSave": "Auto Save",
"entityStatus": {
"selectedEntity": "Selected Entity",
"selectedEntityIs": "Selected Entity is",
"isFiltering": "is filtering",
"isTransforming": "is transforming",
"isLocked": "is locked",
"isHidden": "is hidden",
"isDisabled": "is disabled",
"enabled": "Enabled"
}
}
},
"upscaling": {

View File

@@ -86,15 +86,15 @@
"loadMore": "Cargar más",
"noImagesInGallery": "No hay imágenes para mostrar",
"deleteImage_one": "Eliminar Imagen",
"deleteImage_many": "Eliminar {{count}} Imágenes",
"deleteImage_other": "Eliminar {{count}} Imágenes",
"deleteImage_many": "",
"deleteImage_other": "",
"deleteImagePermanent": "Las imágenes eliminadas no se pueden restaurar.",
"assets": "Activos",
"autoAssignBoardOnClick": "Asignación automática de tableros al hacer clic"
},
"hotkeys": {
"keyboardShortcuts": "Atajos de teclado",
"appHotkeys": "Atajos de aplicación",
"appHotkeys": "Atajos de applicación",
"generalHotkeys": "Atajos generales",
"galleryHotkeys": "Atajos de galería",
"unifiedCanvasHotkeys": "Atajos de lienzo unificado",
@@ -535,7 +535,7 @@
"bottomMessage": "Al eliminar este panel y las imágenes que contiene, se restablecerán las funciones que los estén utilizando actualmente.",
"deleteBoardAndImages": "Borrar el panel y las imágenes",
"loading": "Cargando...",
"deletedBoardsCannotbeRestored": "Los paneles eliminados no se pueden restaurar. Al Seleccionar 'Borrar Solo el Panel' transferirá las imágenes a un estado sin categorizar.",
"deletedBoardsCannotbeRestored": "Los paneles eliminados no se pueden restaurar",
"move": "Mover",
"menuItemAutoAdd": "Agregar automáticamente a este panel",
"searchBoard": "Buscando paneles…",
@@ -549,13 +549,7 @@
"imagesWithCount_other": "{{count}} imágenes",
"assetsWithCount_one": "{{count}} activo",
"assetsWithCount_many": "{{count}} activos",
"assetsWithCount_other": "{{count}} activos",
"hideBoards": "Ocultar Paneles",
"addPrivateBoard": "Agregar un tablero privado",
"addSharedBoard": "Agregar Panel Compartido",
"boards": "Paneles",
"archiveBoard": "Archivar Panel",
"archived": "Archivado"
"assetsWithCount_other": "{{count}} activos"
},
"accordions": {
"compositing": {

View File

@@ -496,9 +496,7 @@
"main": "Principali",
"noModelsInstalledDesc1": "Installa i modelli con",
"ipAdapters": "Adattatori IP",
"noMatchingModels": "Nessun modello corrispondente",
"starterModelsInModelManager": "I modelli iniziali possono essere trovati in Gestione Modelli",
"spandrelImageToImage": "Immagine a immagine (Spandrel)"
"noMatchingModels": "Nessun modello corrispondente"
},
"parameters": {
"images": "Immagini",
@@ -512,7 +510,7 @@
"perlinNoise": "Rumore Perlin",
"type": "Tipo",
"strength": "Forza",
"upscaling": "Amplia",
"upscaling": "Ampliamento",
"scale": "Scala",
"imageFit": "Adatta l'immagine iniziale alle dimensioni di output",
"scaleBeforeProcessing": "Scala prima dell'elaborazione",
@@ -595,7 +593,7 @@
"globalPositivePromptPlaceholder": "Prompt positivo globale",
"globalNegativePromptPlaceholder": "Prompt negativo globale",
"processImage": "Elabora Immagine",
"sendToUpscale": "Invia a Amplia",
"sendToUpscale": "Invia a Ampliare",
"postProcessing": "Post-elaborazione (Shift + U)"
},
"settings": {
@@ -1422,7 +1420,7 @@
"paramUpscaleMethod": {
"heading": "Metodo di ampliamento",
"paragraphs": [
"Metodo utilizzato per ampliare l'immagine per la correzione ad alta risoluzione."
"Metodo utilizzato per eseguire l'ampliamento dell'immagine per la correzione ad alta risoluzione."
]
},
"patchmatchDownScaleSize": {
@@ -1530,7 +1528,7 @@
},
"upscaleModel": {
"paragraphs": [
"Il modello di ampliamento, scala l'immagine alle dimensioni di uscita prima di aggiungere i dettagli. È possibile utilizzare qualsiasi modello di ampliamento supportato, ma alcuni sono specializzati per diversi tipi di immagini, come foto o disegni al tratto."
"Il modello di ampliamento (Upscale), scala l'immagine alle dimensioni di uscita prima di aggiungere i dettagli. È possibile utilizzare qualsiasi modello di ampliamento supportato, ma alcuni sono specializzati per diversi tipi di immagini, come foto o disegni al tratto."
],
"heading": "Modello di ampliamento"
},
@@ -1722,27 +1720,26 @@
"modelsTab": "$t(ui.tabs.models) $t(common.tab)",
"queue": "Coda",
"queueTab": "$t(ui.tabs.queue) $t(common.tab)",
"upscaling": "Amplia",
"upscaling": "Ampliamento",
"upscalingTab": "$t(ui.tabs.upscaling) $t(common.tab)"
}
},
"upscaling": {
"creativity": "Creatività",
"structure": "Struttura",
"upscaleModel": "Modello di ampliamento",
"upscaleModel": "Modello di Ampliamento",
"scale": "Scala",
"missingModelsWarning": "Visita <LinkComponent>Gestione modelli</LinkComponent> per installare i modelli richiesti:",
"mainModelDesc": "Modello principale (architettura SD1.5 o SDXL)",
"tileControlNetModelDesc": "Modello Tile ControlNet per l'architettura del modello principale scelto",
"upscaleModelDesc": "Modello per l'ampliamento (immagine a immagine)",
"upscaleModelDesc": "Modello per l'ampliamento (da immagine a immagine)",
"missingUpscaleInitialImage": "Immagine iniziale mancante per l'ampliamento",
"missingUpscaleModel": "Modello per lampliamento mancante",
"missingTileControlNetModel": "Nessun modello ControlNet Tile valido installato",
"postProcessingModel": "Modello di post-elaborazione",
"postProcessingMissingModelWarning": "Visita <LinkComponent>Gestione modelli</LinkComponent> per installare un modello di post-elaborazione (da immagine a immagine).",
"exceedsMaxSize": "Le impostazioni di ampliamento superano il limite massimo delle dimensioni",
"exceedsMaxSizeDetails": "Il limite massimo di ampliamento è {{maxUpscaleDimension}}x{{maxUpscaleDimension}} pixel. Prova un'immagine più piccola o diminuisci la scala selezionata.",
"upscale": "Amplia"
"exceedsMaxSizeDetails": "Il limite massimo di ampliamento è {{maxUpscaleDimension}}x{{maxUpscaleDimension}} pixel. Prova un'immagine più piccola o diminuisci la scala selezionata."
},
"upsell": {
"inviteTeammates": "Invita collaboratori",
@@ -1792,7 +1789,6 @@
"positivePromptColumn": "'prompt' o 'positive_prompt'",
"noTemplates": "Nessun modello",
"acceptedColumnsKeys": "Colonne/chiavi accettate:",
"templateActions": "Azioni modello",
"promptTemplateCleared": "Modello di prompt cancellato"
"templateActions": "Azioni modello"
}
}

View File

@@ -501,8 +501,7 @@
"noModelsInstalled": "Нет установленных моделей",
"noModelsInstalledDesc1": "Установите модели с помощью",
"noMatchingModels": "Нет подходящих моделей",
"ipAdapters": "IP адаптеры",
"starterModelsInModelManager": "Стартовые модели можно найти в Менеджере моделей"
"ipAdapters": "IP адаптеры"
},
"parameters": {
"images": "Изображения",
@@ -1759,8 +1758,7 @@
"postProcessingModel": "Модель постобработки",
"tileControlNetModelDesc": "Модель ControlNet для выбранной архитектуры основной модели",
"missingModelsWarning": "Зайдите в <LinkComponent>Менеджер моделей</LinkComponent> чтоб установить необходимые модели:",
"postProcessingMissingModelWarning": "Посетите <LinkComponent>Менеджер моделей</LinkComponent>, чтобы установить модель постобработки (img2img).",
"upscale": "Увеличить"
"postProcessingMissingModelWarning": "Посетите <LinkComponent>Менеджер моделей</LinkComponent>, чтобы установить модель постобработки (img2img)."
},
"stylePresets": {
"noMatchingTemplates": "Нет подходящих шаблонов",
@@ -1806,8 +1804,7 @@
"noTemplates": "Нет шаблонов",
"promptTemplatesDesc2": "Используйте строку-заполнитель <Pre>{{placeholder}}</Pre>, чтобы указать место, куда должен быть включен ваш запрос в шаблоне.",
"searchByName": "Поиск по имени",
"shared": "Общий",
"promptTemplateCleared": "Шаблон запроса создан"
"shared": "Общий"
},
"upsell": {
"inviteTeammates": "Пригласите членов команды",

View File

@@ -154,8 +154,7 @@
"displaySearch": "显示搜索",
"stretchToFit": "拉伸以适应",
"exitCompare": "退出对比",
"compareHelp1": "在点击图库中的图片或使用箭头键切换比较图片时,请按住<Kbd>Alt</Kbd> 键。",
"go": "运行"
"compareHelp1": "在点击图库中的图片或使用箭头键切换比较图片时,请按住<Kbd>Alt</Kbd> 键。"
},
"hotkeys": {
"keyboardShortcuts": "快捷键",
@@ -495,9 +494,7 @@
"huggingFacePlaceholder": "所有者或模型名称",
"huggingFaceRepoID": "HuggingFace仓库ID",
"loraTriggerPhrases": "LoRA 触发词",
"ipAdapters": "IP适配器",
"spandrelImageToImage": "图生图(Spandrel)",
"starterModelsInModelManager": "您可以在模型管理器中找到初始模型"
"ipAdapters": "IP适配器"
},
"parameters": {
"images": "图像",
@@ -698,9 +695,7 @@
"outOfMemoryErrorDesc": "您当前的生成设置已超出系统处理能力.请调整设置后再次尝试.",
"parametersSet": "参数已恢复",
"errorCopied": "错误信息已复制",
"modelImportCanceled": "模型导入已取消",
"importFailed": "导入失败",
"importSuccessful": "导入成功"
"modelImportCanceled": "模型导入已取消"
},
"unifiedCanvas": {
"layer": "图层",
@@ -1710,55 +1705,12 @@
"missingModelsWarning": "请访问<LinkComponent>模型管理器</LinkComponent> 安装所需的模型:",
"mainModelDesc": "主模型SD1.5或SDXL架构",
"exceedsMaxSize": "放大设置超出了最大尺寸限制",
"exceedsMaxSizeDetails": "最大放大限制是 {{maxUpscaleDimension}}x{{maxUpscaleDimension}} 像素.请尝试一个较小的图像或减少您的缩放选择.",
"upscale": "放大"
"exceedsMaxSizeDetails": "最大放大限制是 {{maxUpscaleDimension}}x{{maxUpscaleDimension}} 像素.请尝试一个较小的图像或减少您的缩放选择."
},
"upsell": {
"inviteTeammates": "邀请团队成员",
"professional": "专业",
"professionalUpsell": "可在 Invoke 的专业版中使用.点击此处或访问 invoke.com/pricing 了解更多详情.",
"shareAccess": "共享访问权限"
},
"stylePresets": {
"positivePrompt": "正向提示词",
"preview": "预览",
"deleteImage": "删除图像",
"deleteTemplate": "删除模版",
"deleteTemplate2": "您确定要删除这个模板吗?请注意,删除后无法恢复.",
"importTemplates": "导入提示模板支持CSV或JSON格式",
"insertPlaceholder": "插入一个占位符",
"myTemplates": "我的模版",
"name": "名称",
"type": "类型",
"unableToDeleteTemplate": "无法删除提示模板",
"updatePromptTemplate": "更新提示词模版",
"exportPromptTemplates": "导出我的提示模板为CSV格式",
"exportDownloaded": "导出已下载",
"noMatchingTemplates": "无匹配的模版",
"promptTemplatesDesc1": "提示模板可以帮助您在编写提示时添加预设的文本内容.",
"promptTemplatesDesc3": "如果您没有使用占位符,那么模板的内容将会被添加到您提示的末尾.",
"searchByName": "按名称搜索",
"shared": "已分享",
"sharedTemplates": "已分享的模版",
"templateActions": "模版操作",
"templateDeleted": "提示模版已删除",
"toggleViewMode": "切换显示模式",
"uploadImage": "上传图像",
"active": "激活",
"choosePromptTemplate": "选择提示词模板",
"clearTemplateSelection": "清除模版选择",
"copyTemplate": "拷贝模版",
"createPromptTemplate": "创建提示词模版",
"defaultTemplates": "默认模版",
"editTemplate": "编辑模版",
"exportFailed": "无法生成并下载CSV文件",
"flatten": "将选定的模板内容合并到当前提示中",
"negativePrompt": "反向提示词",
"promptTemplateCleared": "提示模板已清除",
"useForTemplate": "用于提示词模版",
"viewList": "预览模版列表",
"viewModeTooltip": "这是您的提示在当前选定的模板下的预览效果。如需编辑提示,请直接在文本框中点击进行修改.",
"noTemplates": "无模版",
"private": "私密"
}
}

View File

@@ -21,16 +21,10 @@ function ThemeLocaleProvider({ children }: ThemeLocaleProviderProps) {
direction,
shadows: {
..._theme.shadows,
selected:
'inset 0px 0px 0px 3px var(--invoke-colors-invokeBlue-500), inset 0px 0px 0px 4px var(--invoke-colors-invokeBlue-800)',
hoverSelected:
'inset 0px 0px 0px 3px var(--invoke-colors-invokeBlue-400), inset 0px 0px 0px 4px var(--invoke-colors-invokeBlue-800)',
hoverUnselected:
'inset 0px 0px 0px 2px var(--invoke-colors-invokeBlue-300), inset 0px 0px 0px 3px var(--invoke-colors-invokeBlue-800)',
selectedForCompare:
'inset 0px 0px 0px 3px var(--invoke-colors-invokeGreen-300), inset 0px 0px 0px 4px var(--invoke-colors-invokeGreen-800)',
'0px 0px 0px 1px var(--invoke-colors-base-900), 0px 0px 0px 4px var(--invoke-colors-green-400)',
hoverSelectedForCompare:
'inset 0px 0px 0px 3px var(--invoke-colors-invokeGreen-200), inset 0px 0px 0px 4px var(--invoke-colors-invokeGreen-800)',
'0px 0px 0px 1px var(--invoke-colors-base-900), 0px 0px 0px 4px var(--invoke-colors-green-300)',
},
});
}, [direction]);

View File

@@ -1,5 +1,5 @@
import type { TypedStartListening } from '@reduxjs/toolkit';
import { addListener, createListenerMiddleware } from '@reduxjs/toolkit';
import { createListenerMiddleware } from '@reduxjs/toolkit';
import { addAdHocPostProcessingRequestedListener } from 'app/store/middleware/listenerMiddleware/listeners/addAdHocPostProcessingRequestedListener';
import { addStagingListeners } from 'app/store/middleware/listenerMiddleware/listeners/addCommitStagingAreaImageListener';
import { addAnyEnqueuedListener } from 'app/store/middleware/listenerMiddleware/listeners/anyEnqueued';
@@ -9,7 +9,6 @@ import { addBatchEnqueuedListener } from 'app/store/middleware/listenerMiddlewar
import { addDeleteBoardAndImagesFulfilledListener } from 'app/store/middleware/listenerMiddleware/listeners/boardAndImagesDeleted';
import { addBoardIdSelectedListener } from 'app/store/middleware/listenerMiddleware/listeners/boardIdSelected';
import { addBulkDownloadListeners } from 'app/store/middleware/listenerMiddleware/listeners/bulkDownload';
import { addCancellationsListeners } from 'app/store/middleware/listenerMiddleware/listeners/cancellationsListeners';
import { addEnqueueRequestedLinear } from 'app/store/middleware/listenerMiddleware/listeners/enqueueRequestedLinear';
import { addEnqueueRequestedNodes } from 'app/store/middleware/listenerMiddleware/listeners/enqueueRequestedNodes';
import { addGalleryImageClickedListener } from 'app/store/middleware/listenerMiddleware/listeners/galleryImageClicked';
@@ -41,8 +40,6 @@ export type AppStartListening = TypedStartListening<RootState, AppDispatch>;
const startAppListening = listenerMiddleware.startListening as AppStartListening;
export const addAppListener = addListener.withTypes<RootState, AppDispatch>();
/**
* The RTK listener middleware is a lightweight alternative to sagas/observables.
*
@@ -122,5 +119,3 @@ addDynamicPromptsListener(startAppListening);
addSetDefaultSettingsListener(startAppListening);
// addControlAdapterPreprocessor(startAppListening);
addCancellationsListeners(startAppListening);

View File

@@ -1,34 +1,37 @@
import { isAnyOf } from '@reduxjs/toolkit';
import { logger } from 'app/logging/logger';
import type { AppStartListening } from 'app/store/middleware/listenerMiddleware';
import { canvasReset, rasterLayerAdded } from 'features/controlLayers/store/canvasSlice';
import { stagingAreaImageAccepted, stagingAreaReset } from 'features/controlLayers/store/canvasStagingAreaSlice';
import {
sessionStagingAreaImageAccepted,
sessionStagingAreaReset,
} from 'features/controlLayers/store/canvasSessionSlice';
import { rasterLayerAdded } from 'features/controlLayers/store/canvasSlice';
import { selectCanvasSlice } from 'features/controlLayers/store/selectors';
import type { CanvasRasterLayerState } from 'features/controlLayers/store/types';
import { imageDTOToImageObject } from 'features/controlLayers/store/types';
import { toast } from 'features/toast/toast';
import { t } from 'i18next';
import { queueApi } from 'services/api/endpoints/queue';
import { $lastCanvasProgressEvent } from 'services/events/setEventListeners';
import { assert } from 'tsafe';
const log = logger('canvas');
const matchCanvasOrStagingAreaRest = isAnyOf(stagingAreaReset, canvasReset);
export const addStagingListeners = (startAppListening: AppStartListening) => {
startAppListening({
matcher: matchCanvasOrStagingAreaRest,
actionCreator: sessionStagingAreaReset,
effect: async (_, { dispatch }) => {
try {
const req = dispatch(
queueApi.endpoints.cancelByBatchDestination.initiate(
{ destination: 'canvas' },
queueApi.endpoints.cancelByBatchOrigin.initiate(
{ origin: 'canvas' },
{ fixedCacheKey: 'cancelByBatchOrigin' }
)
);
const { canceled } = await req.unwrap();
req.reset();
$lastCanvasProgressEvent.set(null);
if (canceled > 0) {
log.debug(`Canceled ${canceled} canvas batches`);
toast({
@@ -49,11 +52,11 @@ export const addStagingListeners = (startAppListening: AppStartListening) => {
});
startAppListening({
actionCreator: stagingAreaImageAccepted,
actionCreator: sessionStagingAreaImageAccepted,
effect: (action, api) => {
const { index } = action.payload;
const state = api.getState();
const stagingAreaImage = state.canvasStagingArea.stagedImages[index];
const stagingAreaImage = state.canvasSession.stagedImages[index];
assert(stagingAreaImage, 'No staged image found to accept');
const { x, y } = selectCanvasSlice(state).bbox.rect;
@@ -66,7 +69,7 @@ export const addStagingListeners = (startAppListening: AppStartListening) => {
};
api.dispatch(rasterLayerAdded({ overrides, isSelected: false }));
api.dispatch(stagingAreaReset());
api.dispatch(sessionStagingAreaReset());
},
});
};

View File

@@ -1,137 +0,0 @@
import type { AppStartListening } from 'app/store/middleware/listenerMiddleware';
import { $lastCanvasProgressEvent } from 'features/controlLayers/store/canvasSlice';
import { queueApi } from 'services/api/endpoints/queue';
/**
* To prevent a race condition where a progress event arrives after a successful cancellation, we need to keep track of
* cancellations:
* - In the route handlers above, we track and update the cancellations object
* - When the user queues a batch, we should reset the cancellations, also handled in the route handlers above
* - When we get a progress event, we should check if the event is cancelled before setting the event
*
* We have a few ways that cancellations are effected, so we need to track them all:
* - by queue item id (in this case, we will compare the session_id and not the item_id)
* - by batch id
* - by destination
* - by clearing the queue
*/
type Cancellations = {
sessionIds: Set<string>;
batchIds: Set<string>;
destinations: Set<string>;
clearQueue: boolean;
};
const resetCancellations = (): void => {
cancellations.clearQueue = false;
cancellations.sessionIds.clear();
cancellations.batchIds.clear();
cancellations.destinations.clear();
};
const cancellations: Cancellations = {
sessionIds: new Set(),
batchIds: new Set(),
destinations: new Set(),
clearQueue: false,
} as Readonly<Cancellations>;
/**
* Checks if an item is cancelled, used to prevent race conditions with event handling.
*
* To use this, provide the session_id, batch_id and destination from the event payload.
*/
export const getIsCancelled = (item: {
session_id: string;
batch_id: string;
destination?: string | null;
}): boolean => {
if (cancellations.clearQueue) {
return true;
}
if (cancellations.sessionIds.has(item.session_id)) {
return true;
}
if (cancellations.batchIds.has(item.batch_id)) {
return true;
}
if (item.destination && cancellations.destinations.has(item.destination)) {
return true;
}
return false;
};
export const addCancellationsListeners = (startAppListening: AppStartListening) => {
// When we get a cancellation, we may need to clear the last progress event - the next few listeners handle those cases.
// Maybe we could use the `getIsCancelled` util here, but I think that could introduce _another_ race condition...
startAppListening({
matcher: queueApi.endpoints.enqueueBatch.matchFulfilled,
effect: () => {
resetCancellations();
},
});
startAppListening({
matcher: queueApi.endpoints.cancelByBatchDestination.matchFulfilled,
effect: (action) => {
cancellations.destinations.add(action.meta.arg.originalArgs.destination);
const event = $lastCanvasProgressEvent.get();
if (!event) {
return;
}
const { session_id, batch_id, destination } = event;
if (getIsCancelled({ session_id, batch_id, destination })) {
$lastCanvasProgressEvent.set(null);
}
},
});
startAppListening({
matcher: queueApi.endpoints.cancelQueueItem.matchFulfilled,
effect: (action) => {
cancellations.sessionIds.add(action.payload.session_id);
const event = $lastCanvasProgressEvent.get();
if (!event) {
return;
}
const { session_id, batch_id, destination } = event;
if (getIsCancelled({ session_id, batch_id, destination })) {
$lastCanvasProgressEvent.set(null);
}
},
});
startAppListening({
matcher: queueApi.endpoints.cancelByBatchIds.matchFulfilled,
effect: (action) => {
for (const batch_id of action.meta.arg.originalArgs.batch_ids) {
cancellations.batchIds.add(batch_id);
}
const event = $lastCanvasProgressEvent.get();
if (!event) {
return;
}
const { session_id, batch_id, destination } = event;
if (getIsCancelled({ session_id, batch_id, destination })) {
$lastCanvasProgressEvent.set(null);
}
},
});
startAppListening({
matcher: queueApi.endpoints.clearQueue.matchFulfilled,
effect: () => {
cancellations.clearQueue = true;
const event = $lastCanvasProgressEvent.get();
if (!event) {
return;
}
const { session_id, batch_id, destination } = event;
if (getIsCancelled({ session_id, batch_id, destination })) {
$lastCanvasProgressEvent.set(null);
}
},
});
};
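For context, the intended consumer of getIsCancelled is a progress-event handler: it checks whether the work an event belongs to was already cancelled before publishing the event, which closes the race described in the comment above. A minimal sketch, assuming only the payload fields named in that comment; handleProgressEvent and publish are illustrative stand-ins, not names from this codebase:
type ProgressEventLike = { session_id: string; batch_id: string; destination?: string | null };
const handleProgressEvent = (event: ProgressEventLike, publish: (event: ProgressEventLike) => void) => {
  if (getIsCancelled(event)) {
    // The session, batch, destination, or whole queue was already cancelled -
    // drop the late-arriving progress event instead of publishing it.
    return;
  }
  publish(event);
};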

View File

@@ -4,12 +4,8 @@ import type { AppStartListening } from 'app/store/middleware/listenerMiddleware'
import type { SerializableObject } from 'common/types';
import type { Result } from 'common/util/result';
import { isErr, withResult, withResultAsync } from 'common/util/result';
import { $canvasManager } from 'features/controlLayers/store/canvasSlice';
import {
selectIsStaging,
stagingAreaReset,
stagingAreaStartedStaging,
} from 'features/controlLayers/store/canvasStagingAreaSlice';
import { $canvasManager } from 'features/controlLayers/konva/CanvasManager';
import { sessionStagingAreaReset, sessionStartedStaging } from 'features/controlLayers/store/canvasSessionSlice';
import { prepareLinearUIBatch } from 'features/nodes/util/graph/buildLinearBatchConfig';
import { buildSD1Graph } from 'features/nodes/util/graph/generation/buildSD1Graph';
import { buildSDXLGraph } from 'features/nodes/util/graph/generation/buildSDXLGraph';
@@ -35,14 +31,14 @@ export const addEnqueueRequestedLinear = (startAppListening: AppStartListening)
let didStartStaging = false;
if (!selectIsStaging(state) && state.canvasSettings.sendToCanvas) {
dispatch(stagingAreaStartedStaging());
if (!state.canvasSession.isStaging && state.canvasSession.sendToCanvas) {
dispatch(sessionStartedStaging());
didStartStaging = true;
}
const abortStaging = () => {
if (didStartStaging && selectIsStaging(getState())) {
dispatch(stagingAreaReset());
if (didStartStaging && getState().canvasSession.isStaging) {
dispatch(sessionStagingAreaReset());
}
};
@@ -74,7 +70,7 @@ export const addEnqueueRequestedLinear = (startAppListening: AppStartListening)
const { g, noise, posCond } = buildGraphResult.value;
const destination = state.canvasSettings.sendToCanvas ? 'canvas' : 'gallery';
const destination = state.canvasSession.sendToCanvas ? 'canvas' : 'gallery';
const prepareBatchResult = withResult(() =>
prepareLinearUIBatch(state, g, prepend, noise, posCond, 'generation', destination)

View File

@@ -1,5 +1,6 @@
import { enqueueRequested } from 'app/store/actions';
import type { AppStartListening } from 'app/store/middleware/listenerMiddleware';
import { isImageViewerOpenChanged } from 'features/gallery/store/gallerySlice';
import { prepareLinearUIBatch } from 'features/nodes/util/graph/buildLinearBatchConfig';
import { buildMultidiffusionUpscaleGraph } from 'features/nodes/util/graph/buildMultidiffusionUpscaleGraph';
import { queueApi } from 'services/api/endpoints/queue';
@@ -10,6 +11,7 @@ export const addEnqueueRequestedUpscale = (startAppListening: AppStartListening)
enqueueRequested.match(action) && action.payload.tabName === 'upscaling',
effect: async (action, { getState, dispatch }) => {
const state = getState();
const { shouldShowProgressInViewer } = state.ui;
const { prepend } = action.payload;
const { g, noise, posCond } = await buildMultidiffusionUpscaleGraph(state);
@@ -23,6 +25,9 @@ export const addEnqueueRequestedUpscale = (startAppListening: AppStartListening)
);
try {
await req.unwrap();
if (shouldShowProgressInViewer) {
dispatch(isImageViewerOpenChanged(true));
}
} finally {
req.reset();
}

View File

@@ -17,7 +17,7 @@ import type { ImageDTO } from 'services/api/types';
const log = logger('gallery');
//TODO(psyche): handle image deletion (canvas staging area?)
//TODO(psyche): handle image deletion (canvas sessions?)
// Some utils to delete images from different parts of the app
const deleteNodesImages = (state: RootState, dispatch: AppDispatch, imageDTO: ImageDTO) => {

View File

@@ -1,7 +1,6 @@
import { createAction } from '@reduxjs/toolkit';
import { logger } from 'app/logging/logger';
import type { AppStartListening } from 'app/store/middleware/listenerMiddleware';
import { selectDefaultControlAdapter } from 'features/controlLayers/hooks/addLayerHooks';
import {
controlLayerAdded,
ipaImageChanged,
@@ -13,7 +12,7 @@ import type { CanvasControlLayerState, CanvasRasterLayerState } from 'features/c
import { imageDTOToImageObject } from 'features/controlLayers/store/types';
import type { TypesafeDraggableData, TypesafeDroppableData } from 'features/dnd/types';
import { isValidDrop } from 'features/dnd/util/isValidDrop';
import { imageToCompareChanged, selectionChanged } from 'features/gallery/store/gallerySlice';
import { imageToCompareChanged, isImageViewerOpenChanged, selectionChanged } from 'features/gallery/store/gallerySlice';
import { fieldImageValueChanged } from 'features/nodes/store/nodesSlice';
import { upscaleInitialImageChanged } from 'features/parameters/store/upscaleSlice';
import { imagesApi } from 'services/api/endpoints/images';
@@ -104,14 +103,11 @@ export const addImageDroppedListener = (startAppListening: AppStartListening) =>
activeData.payloadType === 'IMAGE_DTO' &&
activeData.payload.imageDTO
) {
const state = getState();
const imageObject = imageDTOToImageObject(activeData.payload.imageDTO);
const { x, y } = selectCanvasSlice(state).bbox.rect;
const defaultControlAdapter = selectDefaultControlAdapter(state);
const { x, y } = selectCanvasSlice(getState()).bbox.rect;
const overrides: Partial<CanvasControlLayerState> = {
objects: [imageObject],
position: { x, y },
controlAdapter: defaultControlAdapter,
};
dispatch(controlLayerAdded({ overrides, isSelected: true }));
return;
@@ -146,6 +142,7 @@ export const addImageDroppedListener = (startAppListening: AppStartListening) =>
) {
const { imageDTO } = activeData.payload;
dispatch(imageToCompareChanged(imageDTO));
dispatch(isImageViewerOpenChanged(true));
return;
}

View File

@@ -2,7 +2,6 @@ import { logger } from 'app/logging/logger';
import type { AppStartListening } from 'app/store/middleware/listenerMiddleware';
import { ipaImageChanged, rgIPAdapterImageChanged } from 'features/controlLayers/store/canvasSlice';
import { selectListBoardsQueryArgs } from 'features/gallery/store/gallerySelectors';
import { boardIdSelected, galleryViewChanged } from 'features/gallery/store/gallerySlice';
import { fieldImageValueChanged } from 'features/nodes/store/nodesSlice';
import { upscaleInitialImageChanged } from 'features/parameters/store/upscaleSlice';
import { toast } from 'features/toast/toast';
@@ -45,8 +44,6 @@ export const addImageUploadedFulfilledListener = (startAppListening: AppStartLis
if (!autoAddBoardId || autoAddBoardId === 'none') {
const title = postUploadAction.title || DEFAULT_UPLOADED_TOAST.title;
toast({ ...DEFAULT_UPLOADED_TOAST, title });
dispatch(boardIdSelected({ boardId: 'none' }));
dispatch(galleryViewChanged('assets'));
} else {
// Add this image to the board
dispatch(
@@ -70,8 +67,6 @@ export const addImageUploadedFulfilledListener = (startAppListening: AppStartLis
...DEFAULT_UPLOADED_TOAST,
description,
});
dispatch(boardIdSelected({ boardId: autoAddBoardId }));
dispatch(galleryViewChanged('assets'));
}
return;
}

View File

@@ -13,7 +13,7 @@ import { loraDeleted } from 'features/controlLayers/store/lorasSlice';
import { modelChanged, refinerModelChanged, vaeSelected } from 'features/controlLayers/store/paramsSlice';
import { selectCanvasSlice } from 'features/controlLayers/store/selectors';
import { getEntityIdentifier } from 'features/controlLayers/store/types';
import { calculateNewSize } from 'features/parameters/components/Bbox/calculateNewSize';
import { calculateNewSize } from 'features/parameters/components/DocumentSize/calculateNewSize';
import { postProcessingModelChanged, upscaleModelChanged } from 'features/parameters/store/upscaleSlice';
import { zParameterModel, zParameterVAEModel } from 'features/parameters/types/parameterSchemas';
import { getIsSizeOptimal, getOptimalDimension } from 'features/parameters/util/optimalDimension';

View File

@@ -6,14 +6,12 @@ import { errorHandler } from 'app/store/enhancers/reduxRemember/errors';
import type { SerializableObject } from 'common/types';
import { deepClone } from 'common/util/deepClone';
import { changeBoardModalSlice } from 'features/changeBoardModal/store/slice';
import { canvasSessionPersistConfig, canvasSessionSlice } from 'features/controlLayers/store/canvasSessionSlice';
import { canvasSettingsPersistConfig, canvasSettingsSlice } from 'features/controlLayers/store/canvasSettingsSlice';
import { canvasPersistConfig, canvasSlice, canvasUndoableConfig } from 'features/controlLayers/store/canvasSlice';
import {
canvasStagingAreaPersistConfig,
canvasStagingAreaSlice,
} from 'features/controlLayers/store/canvasStagingAreaSlice';
import { lorasPersistConfig, lorasSlice } from 'features/controlLayers/store/lorasSlice';
import { paramsPersistConfig, paramsSlice } from 'features/controlLayers/store/paramsSlice';
import { toolPersistConfig, toolSlice } from 'features/controlLayers/store/toolSlice';
import { deleteImageModalSlice } from 'features/deleteImageModal/store/slice';
import { dynamicPromptsPersistConfig, dynamicPromptsSlice } from 'features/dynamicPrompts/store/dynamicPromptsSlice';
import { galleryPersistConfig, gallerySlice } from 'features/gallery/store/gallerySlice';
@@ -65,8 +63,9 @@ const allReducers = {
[upscaleSlice.name]: upscaleSlice.reducer,
[stylePresetSlice.name]: stylePresetSlice.reducer,
[paramsSlice.name]: paramsSlice.reducer,
[toolSlice.name]: toolSlice.reducer,
[canvasSettingsSlice.name]: canvasSettingsSlice.reducer,
[canvasStagingAreaSlice.name]: canvasStagingAreaSlice.reducer,
[canvasSessionSlice.name]: canvasSessionSlice.reducer,
[lorasSlice.name]: lorasSlice.reducer,
};
@@ -110,8 +109,9 @@ const persistConfigs: { [key in keyof typeof allReducers]?: PersistConfig } = {
[upscalePersistConfig.name]: upscalePersistConfig,
[stylePresetPersistConfig.name]: stylePresetPersistConfig,
[paramsPersistConfig.name]: paramsPersistConfig,
[toolPersistConfig.name]: toolPersistConfig,
[canvasSettingsPersistConfig.name]: canvasSettingsPersistConfig,
[canvasStagingAreaPersistConfig.name]: canvasStagingAreaPersistConfig,
[canvasSessionPersistConfig.name]: canvasSessionPersistConfig,
[lorasPersistConfig.name]: lorasPersistConfig,
};

View File

@@ -6,51 +6,18 @@ import { useImageUploadButton } from 'common/hooks/useImageUploadButton';
import type { TypesafeDraggableData, TypesafeDroppableData } from 'features/dnd/types';
import ImageContextMenu from 'features/gallery/components/ImageContextMenu/ImageContextMenu';
import type { MouseEvent, ReactElement, ReactNode, SyntheticEvent } from 'react';
import { memo, useCallback, useMemo } from 'react';
import { memo, useCallback, useMemo, useState } from 'react';
import { PiImageBold, PiUploadSimpleBold } from 'react-icons/pi';
import type { ImageDTO, PostUploadAction } from 'services/api/types';
import IAIDraggable from './IAIDraggable';
import IAIDroppable from './IAIDroppable';
import SelectionOverlay from './SelectionOverlay';
const defaultUploadElement = <Icon as={PiUploadSimpleBold} boxSize={16} />;
const defaultNoContentFallback = <IAINoContentFallback icon={PiImageBold} />;
const sx: SystemStyleObject = {
'.gallery-image-container::before': {
content: '""',
display: 'inline-block',
position: 'absolute',
top: 0,
left: 0,
right: 0,
bottom: 0,
pointerEvents: 'none',
borderRadius: 'base',
},
'&[data-selected="selected"]>.gallery-image-container::before': {
boxShadow:
'inset 0px 0px 0px 3px var(--invoke-colors-invokeBlue-500), inset 0px 0px 0px 4px var(--invoke-colors-invokeBlue-800)',
},
'&[data-selected="selectedForCompare"]>.gallery-image-container::before': {
boxShadow:
'inset 0px 0px 0px 3px var(--invoke-colors-invokeGreen-300), inset 0px 0px 0px 4px var(--invoke-colors-invokeGreen-800)',
},
'&:hover>.gallery-image-container::before': {
boxShadow:
'inset 0px 0px 0px 2px var(--invoke-colors-invokeBlue-300), inset 0px 0px 0px 3px var(--invoke-colors-invokeBlue-800)',
},
'&:hover[data-selected="selected"]>.gallery-image-container::before': {
boxShadow:
'inset 0px 0px 0px 3px var(--invoke-colors-invokeBlue-400), inset 0px 0px 0px 4px var(--invoke-colors-invokeBlue-800)',
},
'&:hover[data-selected="selectedForCompare"]>.gallery-image-container::before': {
boxShadow:
'inset 0px 0px 0px 3px var(--invoke-colors-invokeGreen-200), inset 0px 0px 0px 4px var(--invoke-colors-invokeGreen-800)',
},
};
type IAIDndImageProps = FlexProps & {
imageDTO: ImageDTO | undefined;
onError?: (event: SyntheticEvent<HTMLImageElement>) => void;
@@ -108,11 +75,13 @@ const IAIDndImage = (props: IAIDndImageProps) => {
...rest
} = props;
const [isHovered, setIsHovered] = useState(false);
const handleMouseOver = useCallback(
(e: MouseEvent<HTMLDivElement>) => {
if (onMouseOver) {
onMouseOver(e);
}
setIsHovered(true);
},
[onMouseOver]
);
@@ -121,6 +90,7 @@ const IAIDndImage = (props: IAIDndImageProps) => {
if (onMouseOut) {
onMouseOut(e);
}
setIsHovered(false);
},
[onMouseOut]
);
@@ -171,13 +141,10 @@ const IAIDndImage = (props: IAIDndImageProps) => {
minH={minSize ? minSize : undefined}
userSelect="none"
cursor={isDragDisabled || !imageDTO ? 'default' : 'pointer'}
sx={withHoverOverlay ? sx : undefined}
data-selected={isSelectedForCompare ? 'selectedForCompare' : isSelected ? 'selected' : undefined}
{...rest}
>
{imageDTO && (
<Flex
className="gallery-image-container"
w="full"
h="full"
position={fitContainer ? 'absolute' : 'relative'}
@@ -200,6 +167,11 @@ const IAIDndImage = (props: IAIDndImageProps) => {
data-testid={dataTestId}
/>
{withMetadataOverlay && <ImageMetadataOverlay imageDTO={imageDTO} />}
<SelectionOverlay
isSelected={isSelected}
isSelectedForCompare={isSelectedForCompare}
isHovered={withHoverOverlay ? isHovered : false}
/>
</Flex>
)}
{!imageDTO && !isUploadDisabled && (

View File

@@ -0,0 +1,104 @@
import type { SystemStyleObject } from '@invoke-ai/ui-library';
import { Box, Flex, IconButton, Tooltip, useToken } from '@invoke-ai/ui-library';
import type { ReactElement, ReactNode } from 'react';
import { memo, useCallback, useMemo } from 'react';
type IconSwitchProps = {
isChecked: boolean;
onChange: (checked: boolean) => void;
iconChecked: ReactElement;
tooltipChecked?: ReactNode;
iconUnchecked: ReactElement;
tooltipUnchecked?: ReactNode;
ariaLabel: string;
};
const getSx = (padding: string | number): SystemStyleObject => ({
transition: 'left 0.1s ease-in-out, transform 0.1s ease-in-out',
'&[data-checked="true"]': {
left: `calc(100% - ${padding})`,
transform: 'translateX(-100%)',
},
'&[data-checked="false"]': {
left: padding,
transform: 'translateX(0)',
},
});
export const IconSwitch = memo(
({
isChecked,
onChange,
iconChecked,
tooltipChecked,
iconUnchecked,
tooltipUnchecked,
ariaLabel,
}: IconSwitchProps) => {
const onUncheck = useCallback(() => {
onChange(false);
}, [onChange]);
const onCheck = useCallback(() => {
onChange(true);
}, [onChange]);
const gap = useToken('space', 1.5);
const sx = useMemo(() => getSx(gap), [gap]);
return (
<Flex
position="relative"
bg="base.800"
borderRadius="base"
alignItems="center"
justifyContent="center"
h="full"
p={gap}
gap={gap}
>
<Box
position="absolute"
borderRadius="base"
bg="invokeBlue.400"
w={12}
top={gap}
bottom={gap}
data-checked={isChecked}
sx={sx}
/>
<Tooltip hasArrow label={tooltipUnchecked}>
<IconButton
size="sm"
fontSize={16}
icon={iconUnchecked}
onClick={onUncheck}
variant={!isChecked ? 'solid' : 'ghost'}
colorScheme={!isChecked ? 'invokeBlue' : 'base'}
aria-label={ariaLabel}
data-checked={!isChecked}
w={12}
alignSelf="stretch"
h="auto"
/>
</Tooltip>
<Tooltip hasArrow label={tooltipChecked}>
<IconButton
size="sm"
fontSize={16}
icon={iconChecked}
onClick={onCheck}
variant={isChecked ? 'solid' : 'ghost'}
colorScheme={isChecked ? 'invokeBlue' : 'base'}
aria-label={ariaLabel}
data-checked={isChecked}
w={12}
alignSelf="stretch"
h="auto"
/>
</Tooltip>
</Flex>
);
}
);
IconSwitch.displayName = 'IconSwitch';

View File

@@ -0,0 +1,46 @@
import { Box } from '@invoke-ai/ui-library';
import { memo, useMemo } from 'react';
type Props = {
isSelected: boolean;
isSelectedForCompare: boolean;
isHovered: boolean;
};
const SelectionOverlay = ({ isSelected, isSelectedForCompare, isHovered }: Props) => {
const shadow = useMemo(() => {
if (isSelectedForCompare && isHovered) {
return 'hoverSelectedForCompare';
}
if (isSelectedForCompare && !isHovered) {
return 'selectedForCompare';
}
if (isSelected && isHovered) {
return 'hoverSelected';
}
if (isSelected && !isHovered) {
return 'selected';
}
if (!isSelected && isHovered) {
return 'hoverUnselected';
}
return undefined;
}, [isHovered, isSelected, isSelectedForCompare]);
return (
<Box
className="selection-box"
position="absolute"
top={0}
insetInlineEnd={0}
bottom={0}
insetInlineStart={0}
borderRadius="base"
opacity={isSelected || isSelectedForCompare ? 1 : 0.7}
transitionProperty="common"
transitionDuration="0.1s"
pointerEvents="none"
shadow={shadow}
/>
);
};
export default memo(SelectionOverlay);

View File

@@ -1,7 +1,6 @@
import { useStore } from '@nanostores/react';
import type { WritableAtom } from 'nanostores';
import { atom } from 'nanostores';
import { useCallback, useState } from 'react';
type UseBoolean = {
isTrue: boolean;
@@ -51,24 +50,4 @@ export const buildUseBoolean = (initialValue: boolean): [() => UseBoolean, Writa
* Hook to manage a boolean state. Use this for a local boolean state.
* @param initialValue Initial value of the boolean
*/
export const useBoolean = (initialValue: boolean) => {
const [isTrue, set] = useState(initialValue);
const setTrue = useCallback(() => {
set(true);
}, [set]);
const setFalse = useCallback(() => {
set(false);
}, [set]);
const toggle = useCallback(() => {
set((val) => !val);
}, [set]);
return {
isTrue,
setTrue,
setFalse,
set,
toggle,
};
};
export const useBoolean = (initialValue: boolean) => buildUseBoolean(initialValue)[0]();
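A usage sketch for the revised hook, assuming UseBoolean still exposes setTrue, setFalse, and toggle alongside isTrue as the removed local implementation did; the component below is illustrative only (Button comes from @invoke-ai/ui-library):
import { Button } from '@invoke-ai/ui-library';
const ExamplePanelToggle = () => {
  // Local boolean state, now backed by a nanostores atom via buildUseBoolean.
  const panel = useBoolean(false);
  return <Button onClick={panel.toggle}>{panel.isTrue ? 'Hide panel' : 'Show panel'}</Button>;
};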

View File

@@ -114,13 +114,4 @@ export const useGlobalHotkeys = () => {
},
[dispatch, isModelManagerEnabled]
);
useHotkeys(
isModelManagerEnabled ? '6' : '5',
() => {
dispatch(setActiveTab('gallery'));
setScopes([]);
},
[dispatch, isModelManagerEnabled]
);
};

View File

@@ -2,7 +2,6 @@ import { useStore } from '@nanostores/react';
import { $isConnected } from 'app/hooks/useSocketIO';
import { createMemoizedSelector } from 'app/store/createMemoizedSelector';
import { useAppSelector } from 'app/store/storeHooks';
import { useCanvasManagerSafe } from 'features/controlLayers/contexts/CanvasManagerProviderGate';
import { selectParamsSlice } from 'features/controlLayers/store/paramsSlice';
import { selectCanvasSlice } from 'features/controlLayers/store/selectors';
import { selectDynamicPromptsSlice } from 'features/dynamicPrompts/store/dynamicPromptsSlice';
@@ -18,7 +17,6 @@ import { selectSystemSlice } from 'features/system/store/systemSlice';
import { selectActiveTab } from 'features/ui/store/uiSelectors';
import i18n from 'i18next';
import { forEach, upperFirst } from 'lodash-es';
import { atom } from 'nanostores';
import { useMemo } from 'react';
import { getConnectedEdges } from 'reactflow';
@@ -30,7 +28,7 @@ const LAYER_TYPE_TO_TKEY = {
control_layer: 'controlLayers.globalControlAdapter',
} as const;
const createSelector = (templates: Templates, isConnected: boolean, canvasIsBusy: boolean) =>
const createSelector = (templates: Templates, isConnected: boolean) =>
createMemoizedSelector(
[
selectSystemSlice,
@@ -119,10 +117,6 @@ const createSelector = (templates: Templates, isConnected: boolean, canvasIsBusy
reasons.push({ content: i18n.t('upscaling.missingTileControlNetModel') });
}
} else {
if (canvasIsBusy) {
reasons.push({ content: i18n.t('parameters.invoke.canvasBusy') });
}
if (dynamicPrompts.prompts.length === 0 && getShouldProcessPrompt(positivePrompt)) {
reasons.push({ content: i18n.t('parameters.invoke.noPrompts') });
}
@@ -134,7 +128,7 @@ const createSelector = (templates: Templates, isConnected: boolean, canvasIsBusy
canvas.controlLayers.entities
.filter((controlLayer) => controlLayer.isEnabled)
.forEach((controlLayer, i) => {
const layerLiteral = i18n.t('controlLayers.layer_one');
const layerLiteral = i18n.t('controlLayers.layers_one');
const layerNumber = i + 1;
const layerType = i18n.t(LAYER_TYPE_TO_TKEY['control_layer']);
const prefix = `${layerLiteral} #${layerNumber} (${layerType})`;
@@ -164,7 +158,7 @@ const createSelector = (templates: Templates, isConnected: boolean, canvasIsBusy
canvas.ipAdapters.entities
.filter((entity) => entity.isEnabled)
.forEach((entity, i) => {
const layerLiteral = i18n.t('controlLayers.layer_one');
const layerLiteral = i18n.t('controlLayers.layers_one');
const layerNumber = i + 1;
const layerType = i18n.t(LAYER_TYPE_TO_TKEY[entity.type]);
const prefix = `${layerLiteral} #${layerNumber} (${layerType})`;
@@ -192,7 +186,7 @@ const createSelector = (templates: Templates, isConnected: boolean, canvasIsBusy
canvas.regions.entities
.filter((entity) => entity.isEnabled)
.forEach((entity, i) => {
const layerLiteral = i18n.t('controlLayers.layer_one');
const layerLiteral = i18n.t('controlLayers.layers_one');
const layerNumber = i + 1;
const layerType = i18n.t(LAYER_TYPE_TO_TKEY[entity.type]);
const prefix = `${layerLiteral} #${layerNumber} (${layerType})`;
@@ -229,7 +223,7 @@ const createSelector = (templates: Templates, isConnected: boolean, canvasIsBusy
canvas.rasterLayers.entities
.filter((entity) => entity.isEnabled)
.forEach((entity, i) => {
const layerLiteral = i18n.t('controlLayers.layer_one');
const layerLiteral = i18n.t('controlLayers.layers_one');
const layerNumber = i + 1;
const layerType = i18n.t(LAYER_TYPE_TO_TKEY[entity.type]);
const prefix = `${layerLiteral} #${layerNumber} (${layerType})`;
@@ -246,17 +240,10 @@ const createSelector = (templates: Templates, isConnected: boolean, canvasIsBusy
}
);
const dummyAtom = atom(true);
export const useIsReadyToEnqueue = () => {
const templates = useStore($templates);
const isConnected = useStore($isConnected);
const canvasManager = useCanvasManagerSafe();
const canvasIsBusy = useStore(canvasManager?.$isBusy ?? dummyAtom);
const selector = useMemo(
() => createSelector(templates, isConnected, canvasIsBusy),
[templates, isConnected, canvasIsBusy]
);
const selector = useMemo(() => createSelector(templates, isConnected), [templates, isConnected]);
const value = useAppSelector(selector);
return value;
};

View File

@@ -1,9 +1,6 @@
export const roundDownToMultiple = (num: number, multiple: number): number => {
return Math.floor(num / multiple) * multiple;
};
export const roundUpToMultiple = (num: number, multiple: number): number => {
return Math.ceil(num / multiple) * multiple;
};
export const roundToMultiple = (num: number, multiple: number): number => {
return Math.round(num / multiple) * multiple;

View File

@@ -1,22 +1,34 @@
import { Button, ButtonGroup, Flex } from '@invoke-ai/ui-library';
import { useAppDispatch } from 'app/store/storeHooks';
import {
useAddControlLayer,
useAddInpaintMask,
useAddIPAdapter,
useAddRasterLayer,
useAddRegionalGuidance,
} from 'features/controlLayers/hooks/addLayerHooks';
import { memo } from 'react';
controlLayerAdded,
inpaintMaskAdded,
ipaAdded,
rasterLayerAdded,
rgAdded,
} from 'features/controlLayers/store/canvasSlice';
import { memo, useCallback } from 'react';
import { useTranslation } from 'react-i18next';
import { PiPlusBold } from 'react-icons/pi';
export const CanvasAddEntityButtons = memo(() => {
const { t } = useTranslation();
const addInpaintMask = useAddInpaintMask();
const addRegionalGuidance = useAddRegionalGuidance();
const addRasterLayer = useAddRasterLayer();
const addControlLayer = useAddControlLayer();
const addIPAdapter = useAddIPAdapter();
const dispatch = useAppDispatch();
const addInpaintMask = useCallback(() => {
dispatch(inpaintMaskAdded({ isSelected: true }));
}, [dispatch]);
const addRegionalGuidance = useCallback(() => {
dispatch(rgAdded({ isSelected: true }));
}, [dispatch]);
const addRasterLayer = useCallback(() => {
dispatch(rasterLayerAdded({ isSelected: true }));
}, [dispatch]);
const addControlLayer = useCallback(() => {
dispatch(controlLayerAdded({ isSelected: true }));
}, [dispatch]);
const addIPAdapter = useCallback(() => {
dispatch(ipaAdded({ isSelected: true }));
}, [dispatch]);
return (
<Flex flexDir="column" w="full" h="full" alignItems="center">
@@ -34,7 +46,7 @@ export const CanvasAddEntityButtons = memo(() => {
{t('controlLayers.controlLayer')}
</Button>
<Button variant="ghost" justifyContent="flex-start" leftIcon={<PiPlusBold />} onClick={addIPAdapter}>
{t('controlLayers.globalIPAdapter')}
{t('controlLayers.ipAdapter')}
</Button>
</ButtonGroup>
</Flex>

View File

@@ -1,6 +1,7 @@
import { Flex } from '@invoke-ai/ui-library';
import IAIDroppable from 'common/components/IAIDroppable';
import type { AddControlLayerFromImageDropData, AddRasterLayerFromImageDropData } from 'features/dnd/types';
import { useIsImageViewerOpen } from 'features/gallery/components/ImageViewer/useImageViewer';
import { memo } from 'react';
const addRasterLayerFromImageDropData: AddRasterLayerFromImageDropData = {
@@ -14,6 +15,12 @@ const addControlLayerFromImageDropData: AddControlLayerFromImageDropData = {
};
export const CanvasDropArea = memo(() => {
const isImageViewerOpen = useIsImageViewerOpen();
if (isImageViewerOpen) {
return null;
}
return (
<>
<Flex position="absolute" top={0} right={0} bottom="50%" left={0} gap={2} pointerEvents="none">

View File

@@ -0,0 +1,20 @@
import { Flex, Spacer } from '@invoke-ai/ui-library';
import { EntityListActionBarAddLayerButton } from 'features/controlLayers/components/CanvasEntityList/EntityListActionBarAddLayerMenuButton';
import { EntityListActionBarDeleteButton } from 'features/controlLayers/components/CanvasEntityList/EntityListActionBarDeleteButton';
import { EntityListActionBarSelectedEntityFill } from 'features/controlLayers/components/CanvasEntityList/EntityListActionBarSelectedEntityFill';
import { SelectedEntityOpacity } from 'features/controlLayers/components/CanvasEntityList/EntityListActionBarSelectedEntityOpacity';
import { memo } from 'react';
export const EntityListActionBar = memo(() => {
return (
<Flex w="full" py={1} px={1} gap={2} alignItems="center">
<SelectedEntityOpacity />
<Spacer />
<EntityListActionBarSelectedEntityFill />
<EntityListActionBarAddLayerButton />
<EntityListActionBarDeleteButton />
</Flex>
);
});
EntityListActionBar.displayName = 'EntityListActionBar';

View File

@@ -0,0 +1,28 @@
import { IconButton, Menu, MenuButton, MenuList } from '@invoke-ai/ui-library';
import { CanvasEntityListMenuItems } from 'features/controlLayers/components/CanvasEntityList/EntityListActionBarAddLayerMenuItems';
import { memo } from 'react';
import { useTranslation } from 'react-i18next';
import { PiPlusBold } from 'react-icons/pi';
export const EntityListActionBarAddLayerButton = memo(() => {
const { t } = useTranslation();
return (
<Menu>
<MenuButton
as={IconButton}
size="sm"
tooltip={t('controlLayers.addLayer')}
aria-label={t('controlLayers.addLayer')}
icon={<PiPlusBold />}
variant="ghost"
data-testid="control-layers-add-layer-menu-button"
/>
<MenuList>
<CanvasEntityListMenuItems />
</MenuList>
</Menu>
);
});
EntityListActionBarAddLayerButton.displayName = 'EntityListActionBarAddLayerButton';

View File

@@ -0,0 +1,54 @@
import { MenuItem } from '@invoke-ai/ui-library';
import { useAppDispatch } from 'app/store/storeHooks';
import {
controlLayerAdded,
inpaintMaskAdded,
ipaAdded,
rasterLayerAdded,
rgAdded,
} from 'features/controlLayers/store/canvasSlice';
import { memo, useCallback } from 'react';
import { useTranslation } from 'react-i18next';
import { PiPlusBold } from 'react-icons/pi';
export const CanvasEntityListMenuItems = memo(() => {
const { t } = useTranslation();
const dispatch = useAppDispatch();
const addInpaintMask = useCallback(() => {
dispatch(inpaintMaskAdded({ isSelected: true }));
}, [dispatch]);
const addRegionalGuidance = useCallback(() => {
dispatch(rgAdded({ isSelected: true }));
}, [dispatch]);
const addRasterLayer = useCallback(() => {
dispatch(rasterLayerAdded({ isSelected: true }));
}, [dispatch]);
const addControlLayer = useCallback(() => {
dispatch(controlLayerAdded({ isSelected: true }));
}, [dispatch]);
const addIPAdapter = useCallback(() => {
dispatch(ipaAdded({ isSelected: true }));
}, [dispatch]);
return (
<>
<MenuItem icon={<PiPlusBold />} onClick={addInpaintMask}>
{t('controlLayers.inpaintMask')}
</MenuItem>
<MenuItem icon={<PiPlusBold />} onClick={addRegionalGuidance}>
{t('controlLayers.regionalGuidance')}
</MenuItem>
<MenuItem icon={<PiPlusBold />} onClick={addRasterLayer}>
{t('controlLayers.rasterLayer')}
</MenuItem>
<MenuItem icon={<PiPlusBold />} onClick={addControlLayer}>
{t('controlLayers.controlLayer')}
</MenuItem>
<MenuItem icon={<PiPlusBold />} onClick={addIPAdapter}>
{t('controlLayers.ipAdapter')}
</MenuItem>
</>
);
});
CanvasEntityListMenuItems.displayName = 'CanvasEntityListMenu';

View File

@@ -0,0 +1,39 @@
import { IconButton, useShiftModifier } from '@invoke-ai/ui-library';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { allEntitiesDeleted, entityDeleted } from 'features/controlLayers/store/canvasSlice';
import { selectEntityCount, selectSelectedEntityIdentifier } from 'features/controlLayers/store/selectors';
import { memo, useCallback } from 'react';
import { useTranslation } from 'react-i18next';
import { PiTrashSimpleFill } from 'react-icons/pi';
export const EntityListActionBarDeleteButton = memo(() => {
const { t } = useTranslation();
const dispatch = useAppDispatch();
const selectedEntityIdentifier = useAppSelector(selectSelectedEntityIdentifier);
const entityCount = useAppSelector(selectEntityCount);
const shift = useShiftModifier();
const onClick = useCallback(() => {
if (shift) {
dispatch(allEntitiesDeleted());
return;
}
if (!selectedEntityIdentifier) {
return;
}
dispatch(entityDeleted({ entityIdentifier: selectedEntityIdentifier }));
}, [dispatch, selectedEntityIdentifier, shift]);
return (
<IconButton
onClick={onClick}
isDisabled={shift ? entityCount === 0 : !selectedEntityIdentifier}
size="sm"
variant="ghost"
aria-label={shift ? t('controlLayers.deleteAll') : t('controlLayers.deleteSelected')}
tooltip={shift ? t('controlLayers.deleteAll') : t('controlLayers.deleteSelected')}
icon={<PiTrashSimpleFill />}
/>
);
});
EntityListActionBarDeleteButton.displayName = 'EntityListActionBarDeleteButton';

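The delete button doubles as a delete-all control: holding Shift switches both the action (allEntitiesDeleted vs. entityDeleted) and the disabled check (entity count vs. current selection). A small sketch of that branch as a pure function, not part of this diff — getDeleteAction and the derived EntityIdentifier alias are hypothetical names:

import { allEntitiesDeleted, entityDeleted } from 'features/controlLayers/store/canvasSlice';

// Derive the identifier type from the action creator so no extra imports are assumed.
type EntityIdentifier = Parameters<typeof entityDeleted>[0]['entityIdentifier'];

const getDeleteAction = (shift: boolean, selected: EntityIdentifier | null) => {
  if (shift) {
    // Shift held: clear every entity, regardless of selection.
    return allEntitiesDeleted();
  }
  if (!selected) {
    // No Shift and nothing selected: the button is disabled, so nothing to dispatch.
    return null;
  }
  // Default: delete only the selected entity.
  return entityDeleted({ entityIdentifier: selected });
};

// getDeleteAction(true, null)       -> allEntitiesDeleted()
// getDeleteAction(false, null)      -> null
// getDeleteAction(false, selected)  -> entityDeleted({ entityIdentifier: selected })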

@@ -9,7 +9,7 @@ import { type FillStyle, isMaskEntityIdentifier, type RgbColor } from 'features/
import { memo, useCallback } from 'react';
import { useTranslation } from 'react-i18next';
-export const EntityListSelectedEntityActionBarFill = memo(() => {
+export const EntityListActionBarSelectedEntityFill = memo(() => {
const { t } = useTranslation();
const dispatch = useAppDispatch();
const selectedEntityIdentifier = useAppSelector(selectSelectedEntityIdentifier);
@@ -67,4 +67,4 @@ export const EntityListSelectedEntityActionBarFill = memo(() => {
);
});
-EntityListSelectedEntityActionBarFill.displayName = 'EntityListSelectedEntityActionBarFill';
+EntityListActionBarSelectedEntityFill.displayName = 'EntityListActionBarSelectedEntityFill';


@@ -22,7 +22,7 @@ import {
selectEntity,
selectSelectedEntityIdentifier,
} from 'features/controlLayers/store/selectors';
-import { isRenderableEntity } from 'features/controlLayers/store/types';
+import { isDrawableEntity } from 'features/controlLayers/store/types';
import { clamp, round } from 'lodash-es';
import type { KeyboardEvent } from 'react';
import { memo, useCallback, useEffect, useState } from 'react';
@@ -37,11 +37,11 @@ function formatPct(v: number | string) {
return `${round(Number(v), 2).toLocaleString()}%`;
}
-function mapSliderValueToRawValue(value: number) {
+function mapSliderValueToOpacity(value: number) {
return value / 100;
}
-function mapRawValueToSliderValue(opacity: number) {
+function mapOpacityToSliderValue(opacity: number) {
return opacity * 100;
}
@@ -50,14 +50,14 @@ function formatSliderValue(value: number) {
}
const marks = [
-mapRawValueToSliderValue(0),
-mapRawValueToSliderValue(0.25),
-mapRawValueToSliderValue(0.5),
-mapRawValueToSliderValue(0.75),
-mapRawValueToSliderValue(1),
+mapOpacityToSliderValue(0),
+mapOpacityToSliderValue(0.25),
+mapOpacityToSliderValue(0.5),
+mapOpacityToSliderValue(0.75),
+mapOpacityToSliderValue(1),
];
-const sliderDefaultValue = mapRawValueToSliderValue(1);
+const sliderDefaultValue = mapOpacityToSliderValue(1);
const snapCandidates = marks.slice(1, marks.length - 1);
@@ -70,14 +70,14 @@ const selectOpacity = createSelector(selectCanvasSlice, (canvas) => {
if (!selectedEntity) {
return 1; // fallback to 100% opacity
}
-if (!isRenderableEntity(selectedEntity)) {
+if (!isDrawableEntity(selectedEntity)) {
return 1; // fallback to 100% opacity
}
// Opacity is a float from 0-1, but we want to display it as a percentage
return selectedEntity.opacity;
});
-export const EntityListSelectedEntityActionBarOpacity = memo(() => {
+export const SelectedEntityOpacity = memo(() => {
const { t } = useTranslation();
const dispatch = useAppDispatch();
const selectedEntityIdentifier = useAppSelector(selectSelectedEntityIdentifier);
@@ -95,7 +95,7 @@ export const EntityListSelectedEntityActionBarOpacity = memo(() => {
if (!$shift.get()) {
snappedOpacity = snapToNearest(opacity, snapCandidates, 2);
}
-const mappedOpacity = mapSliderValueToRawValue(snappedOpacity);
+const mappedOpacity = mapSliderValueToOpacity(snappedOpacity);
dispatch(entityOpacityChanged({ entityIdentifier: selectedEntityIdentifier, opacity: mappedOpacity }));
},
@@ -193,4 +193,4 @@ export const EntityListSelectedEntityActionBarOpacity = memo(() => {
);
});
-EntityListSelectedEntityActionBarOpacity.displayName = 'EntityListSelectedEntityActionBarOpacity';
+SelectedEntityOpacity.displayName = 'SelectedEntityOpacity';

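The renamed helpers make the unit conversion explicit: the slider operates in whole-percent values (0-100) while the store keeps opacity as a 0-1 float, and unless Shift is held the slider value is snapped to the 25/50/75% marks before being converted back. A worked sketch, not part of this diff — the body of snapToNearest below is an assumption about the real util, which lives elsewhere in the codebase, and the threshold of 2 is read as 2 slider units:

// Same conversions as in the diff above.
const mapSliderValueToOpacity = (value: number) => value / 100;
const mapOpacityToSliderValue = (opacity: number) => opacity * 100;

// Assumed behaviour: snap to the closest candidate if it is within `threshold`,
// otherwise keep the value unchanged.
const snapToNearest = (value: number, candidates: number[], threshold: number): number => {
  let nearest = value;
  let minDistance = threshold;
  for (const candidate of candidates) {
    const distance = Math.abs(candidate - value);
    if (distance <= minDistance) {
      minDistance = distance;
      nearest = candidate;
    }
  }
  return nearest;
};

// Without Shift, a slider value of 24.3 snaps to the 25% mark (within 2 units)...
snapToNearest(24.3, [25, 50, 75], 2); // -> 25
// ...and the snapped slider value maps back to a 0-1 opacity for the store.
mapSliderValueToOpacity(25); // -> 0.25
mapOpacityToSliderValue(0.25); // -> 25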

@@ -1,54 +0,0 @@
import { IconButton, Menu, MenuButton, MenuItem, MenuList } from '@invoke-ai/ui-library';
import {
useAddControlLayer,
useAddInpaintMask,
useAddIPAdapter,
useAddRasterLayer,
useAddRegionalGuidance,
} from 'features/controlLayers/hooks/addLayerHooks';
import { memo } from 'react';
import { useTranslation } from 'react-i18next';
import { PiPlusBold } from 'react-icons/pi';
export const EntityListGlobalActionBarAddLayerMenu = memo(() => {
const { t } = useTranslation();
const addInpaintMask = useAddInpaintMask();
const addRegionalGuidance = useAddRegionalGuidance();
const addRasterLayer = useAddRasterLayer();
const addControlLayer = useAddControlLayer();
const addIPAdapter = useAddIPAdapter();
return (
<Menu>
<MenuButton
as={IconButton}
size="sm"
variant="link"
alignSelf="stretch"
tooltip={t('controlLayers.addLayer')}
aria-label={t('controlLayers.addLayer')}
icon={<PiPlusBold />}
data-testid="control-layers-add-layer-menu-button"
/>
<MenuList>
<MenuItem icon={<PiPlusBold />} onClick={addInpaintMask}>
{t('controlLayers.inpaintMask')}
</MenuItem>
<MenuItem icon={<PiPlusBold />} onClick={addRegionalGuidance}>
{t('controlLayers.regionalGuidance')}
</MenuItem>
<MenuItem icon={<PiPlusBold />} onClick={addRasterLayer}>
{t('controlLayers.rasterLayer')}
</MenuItem>
<MenuItem icon={<PiPlusBold />} onClick={addControlLayer}>
{t('controlLayers.controlLayer')}
</MenuItem>
<MenuItem icon={<PiPlusBold />} onClick={addIPAdapter}>
{t('controlLayers.globalIPAdapter')}
</MenuItem>
</MenuList>
</Menu>
);
});
EntityListGlobalActionBarAddLayerMenu.displayName = 'EntityListGlobalActionBarAddLayerMenu';


@@ -1,26 +0,0 @@
import { Flex, Spacer } from '@invoke-ai/ui-library';
import { EntityListGlobalActionBarAddLayerMenu } from 'features/controlLayers/components/CanvasEntityList/EntityListGlobalActionBarAddLayerMenu';
import { EntityListSelectedEntityActionBarDuplicateButton } from 'features/controlLayers/components/CanvasEntityList/EntityListSelectedEntityActionBarDuplicateButton';
import { EntityListSelectedEntityActionBarFill } from 'features/controlLayers/components/CanvasEntityList/EntityListSelectedEntityActionBarFill';
import { EntityListSelectedEntityActionBarFilterButton } from 'features/controlLayers/components/CanvasEntityList/EntityListSelectedEntityActionBarFilterButton';
import { EntityListSelectedEntityActionBarOpacity } from 'features/controlLayers/components/CanvasEntityList/EntityListSelectedEntityActionBarOpacity';
import { EntityListSelectedEntityActionBarTransformButton } from 'features/controlLayers/components/CanvasEntityList/EntityListSelectedEntityActionBarTransformButton';
import { memo } from 'react';
export const EntityListSelectedEntityActionBar = memo(() => {
return (
<Flex w="full" gap={2} alignItems="center" ps={1}>
<EntityListSelectedEntityActionBarOpacity />
<Spacer />
<EntityListSelectedEntityActionBarFill />
<Flex h="full">
<EntityListSelectedEntityActionBarFilterButton />
<EntityListSelectedEntityActionBarTransformButton />
<EntityListSelectedEntityActionBarDuplicateButton />
<EntityListGlobalActionBarAddLayerMenu />
</Flex>
</Flex>
);
});
EntityListSelectedEntityActionBar.displayName = 'EntityListSelectedEntityActionBar';


@@ -1,34 +0,0 @@
import { IconButton } from '@invoke-ai/ui-library';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { entityDuplicated } from 'features/controlLayers/store/canvasSlice';
import { selectSelectedEntityIdentifier } from 'features/controlLayers/store/selectors';
import { memo, useCallback } from 'react';
import { useTranslation } from 'react-i18next';
import { PiCopyFill } from 'react-icons/pi';
export const EntityListSelectedEntityActionBarDuplicateButton = memo(() => {
const { t } = useTranslation();
const dispatch = useAppDispatch();
const selectedEntityIdentifier = useAppSelector(selectSelectedEntityIdentifier);
const onClick = useCallback(() => {
if (!selectedEntityIdentifier) {
return;
}
dispatch(entityDuplicated({ entityIdentifier: selectedEntityIdentifier }));
}, [dispatch, selectedEntityIdentifier]);
return (
<IconButton
onClick={onClick}
isDisabled={!selectedEntityIdentifier}
size="sm"
variant="link"
alignSelf="stretch"
aria-label={t('controlLayers.duplicate')}
tooltip={t('controlLayers.duplicate')}
icon={<PiCopyFill />}
/>
);
});
EntityListSelectedEntityActionBarDuplicateButton.displayName = 'EntityListSelectedEntityActionBarDuplicateButton';


@@ -1,37 +0,0 @@
import { IconButton } from '@invoke-ai/ui-library';
import { useAppSelector } from 'app/store/storeHooks';
import { useEntityFilter } from 'features/controlLayers/hooks/useEntityFilter';
import { selectSelectedEntityIdentifier } from 'features/controlLayers/store/selectors';
import { isFilterableEntityIdentifier } from 'features/controlLayers/store/types';
import { memo } from 'react';
import { useTranslation } from 'react-i18next';
import { PiShootingStarBold } from 'react-icons/pi';
export const EntityListSelectedEntityActionBarFilterButton = memo(() => {
const { t } = useTranslation();
const selectedEntityIdentifier = useAppSelector(selectSelectedEntityIdentifier);
const filter = useEntityFilter(selectedEntityIdentifier);
if (!selectedEntityIdentifier) {
return null;
}
if (!isFilterableEntityIdentifier(selectedEntityIdentifier)) {
return null;
}
return (
<IconButton
onClick={filter.start}
isDisabled={filter.isDisabled}
size="sm"
variant="link"
alignSelf="stretch"
aria-label={t('controlLayers.filter.filter')}
tooltip={t('controlLayers.filter.filter')}
icon={<PiShootingStarBold />}
/>
);
});
EntityListSelectedEntityActionBarFilterButton.displayName = 'EntityListSelectedEntityActionBarFilterButton';


@@ -1,37 +0,0 @@
import { IconButton } from '@invoke-ai/ui-library';
import { useAppSelector } from 'app/store/storeHooks';
import { useEntityTransform } from 'features/controlLayers/hooks/useEntityTransform';
import { selectSelectedEntityIdentifier } from 'features/controlLayers/store/selectors';
import { isTransformableEntityIdentifier } from 'features/controlLayers/store/types';
import { memo } from 'react';
import { useTranslation } from 'react-i18next';
import { PiFrameCornersBold } from 'react-icons/pi';
export const EntityListSelectedEntityActionBarTransformButton = memo(() => {
const { t } = useTranslation();
const selectedEntityIdentifier = useAppSelector(selectSelectedEntityIdentifier);
const transform = useEntityTransform(selectedEntityIdentifier);
if (!selectedEntityIdentifier) {
return null;
}
if (!isTransformableEntityIdentifier(selectedEntityIdentifier)) {
return null;
}
return (
<IconButton
onClick={transform.start}
isDisabled={transform.isDisabled}
size="sm"
variant="link"
alignSelf="stretch"
aria-label={t('controlLayers.transform.transform')}
tooltip={t('controlLayers.transform.transform')}
icon={<PiFrameCornersBold />}
/>
);
});
EntityListSelectedEntityActionBarTransformButton.displayName = 'EntityListSelectedEntityActionBarTransformButton';

Some files were not shown because too many files have changed in this diff.