Compare commits


875 Commits

Author SHA1 Message Date
psychedelicious
f8dadc8c0a chore: release v5.0.0.dev13 2024-09-06 21:33:24 +10:00
psychedelicious
d421eaeff0 feat(ui): move seed out of advanced, hide HRF settings 2024-09-06 21:27:58 +10:00
psychedelicious
d434e80e99 chore(ui): lint 2024-09-06 21:27:58 +10:00
psychedelicious
918ed5c264 feat(ui): tweak padding on entity group header 2024-09-06 21:27:58 +10:00
psychedelicious
2f98c876df feat(ui): use plurals for entity group header hidden tooltip 2024-09-06 21:27:58 +10:00
psychedelicious
2b9f9f3d68 feat(ui): move delete entity button down to entity list item 2024-09-06 21:27:58 +10:00
psychedelicious
f326bfe1b0 feat(ui): add fit bbox to layers 2024-09-06 21:27:58 +10:00
psychedelicious
c57966ec38 fix(ui): tidy incorrect component name 2024-09-06 21:27:58 +10:00
psychedelicious
3c5bf48810 feat(ui): do not allow invoke while transforming or filtering 2024-09-06 21:27:58 +10:00
psychedelicious
c0db830398 feat(ui): do not allow transform, filter or merge while staging 2024-09-06 21:27:58 +10:00
psychedelicious
3e3e00b24d fix(ui): prevent stage scale/size from being invalid 2024-09-06 21:27:58 +10:00
psychedelicious
640af85c93 fix(ui): do not save filtered previews to gallery 2024-09-06 21:27:58 +10:00
psychedelicious
81060eef71 feat(ui): filter UI layout 2024-09-06 21:27:58 +10:00
psychedelicious
ea193903ee feat(ui): revised entity list action bars
- Global action bar on top
- Selected Entity action bar below
2024-09-06 21:27:58 +10:00
psychedelicious
5a251a4876 feat(ui): fit bbox to stage on canvas reset 2024-09-06 21:27:58 +10:00
psychedelicious
ce93fdd076 chore(ui): lint 2024-09-06 21:27:58 +10:00
psychedelicious
788e03fbc3 feat(ui): reworked image context menu
- Add `Open in Viewer`
- Remove `Send to Image to Image`
- Fix `Send to Canvas`
- Split out logic for composability
2024-09-06 21:27:57 +10:00
psychedelicious
8cb1584379 feat(ui): restore aspect ratio preview component 2024-09-06 21:27:57 +10:00
psychedelicious
63a2b6cdfe fix(ui): transformer rendered behind layer objects 2024-09-06 21:27:57 +10:00
psychedelicious
b193bc7b14 feat(ui): inverted shift behavior for transformer 2024-09-06 21:27:57 +10:00
psychedelicious
dce488c691 fix(ui): ignore filters when calculating bbox 2024-09-06 21:27:57 +10:00
psychedelicious
615b09d453 feat(ui): cancel by destination, not origin 2024-09-06 21:27:57 +10:00
psychedelicious
1dd521a792 chore(ui): typegen 2024-09-06 21:27:57 +10:00
psychedelicious
f6a703c367 feat(app): cancel by destination, not origin
When resetting the canvas or staging area, we don't want to cancel generations that are going to the gallery - only those going to the canvas.

Thus the method should not cancel by origin, but instead cancel by destination.

Update the queue method and route.
2024-09-06 21:27:57 +10:00
psychedelicious
5a69b62a1a fix(ui): scaled size not correctly reset when canvas reset 2024-09-06 21:27:57 +10:00
psychedelicious
c4b9ba8e12 feat(ui): use black bg when rasterizing control images 2024-09-06 21:27:57 +10:00
psychedelicious
c04790158f fix(ui): ignore Konva filters when previewing filter 2024-09-06 21:27:57 +10:00
psychedelicious
a0ed7baf73 fix(ui): filter preview accidentally committed to layer 2024-09-06 21:27:57 +10:00
psychedelicious
f96148ca52 feat(ui): improved transparency effect
Use the min of each pixel's alpha value and lightness for the output alpha. This prevents artifacts when using the transparency effect, especially with non-black pixels with low alpha.
2024-09-06 21:27:57 +10:00
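For illustration, a minimal sketch of the rule described above, assuming 8-bit RGBA `ImageData`; names are illustrative, not the actual Konva filter:

```ts
// output alpha = min(pixel alpha, pixel lightness), per pixel
const applyTransparencyEffect = (imageData: ImageData): ImageData => {
  const { data } = imageData;
  for (let i = 0; i < data.length; i += 4) {
    const r = data[i];
    const g = data[i + 1];
    const b = data[i + 2];
    const a = data[i + 3];
    // HSL lightness, scaled to 0-255 to match the alpha channel's range
    const lightness = (Math.max(r, g, b) + Math.min(r, g, b)) / 2;
    data[i + 3] = Math.min(a, lightness);
  }
  return imageData;
};
```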
psychedelicious
96786ed62b chore: release v4.2.9.dev12 2024-09-06 21:27:57 +10:00
psychedelicious
cc9cc62707 fix(ui): missing translation 2024-09-06 21:27:57 +10:00
psychedelicious
07cf5807be fix(ui): save to gallery uses auto-add board 2024-09-06 21:27:57 +10:00
psychedelicious
013414e962 fix(ui): cancel transform/filter when deleting entity 2024-09-06 21:27:57 +10:00
psychedelicious
10bba49e28 chore(ui): lint 2024-09-06 21:27:57 +10:00
psychedelicious
7888b4913a feat(ui): iterate on state flow and rendering 2
- Rely on redux + reselect more
- Remove all nanostores that simply "mirrored" redux state in favor of direct subscriptions to redux store
- Add abstractions for creating redux subs and running selectors
- Add `initialize` method to CanvasModuleBase, for post-instantiation tasks
- Reduce local caching of state in modules to a minimum
2024-09-06 21:27:57 +10:00
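As a rough idea of what such an abstraction can look like, a hypothetical helper (not the actual module API):

```ts
import type { Store } from '@reduxjs/toolkit';

// Hypothetical helper: run a selector against the store and invoke the
// callback only when the selected value's identity changes.
const createStoreSubscription = <State, T>(
  store: Store<State>,
  selector: (state: State) => T,
  onChange: (value: T, previous: T) => void
): (() => void) => {
  let previous = selector(store.getState());
  return store.subscribe(() => {
    const next = selector(store.getState());
    if (next !== previous) {
      const prev = previous;
      previous = next;
      onChange(next, prev);
    }
  });
};
```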
psychedelicious
27b00cedc5 feat(ui): iterate on state flow and rendering 2024-09-06 21:27:57 +10:00
psychedelicious
0bfdd5acb8 feat(ui): slight layout change for staging area toolbar 2024-09-06 21:27:57 +10:00
psychedelicious
6e6629ad74 feat(ui): clean up adapter API 2024-09-06 21:27:57 +10:00
psychedelicious
fe257e3e46 feat(ui): streamlined state flow 2024-09-06 21:27:57 +10:00
psychedelicious
0b1dc365d0 fix(ui): handle optimal dimension when resetting canvas 2024-09-06 21:27:57 +10:00
psychedelicious
c75a933b17 feat(ui): background and staging area modules have own store subscription and render themselves 2024-09-06 21:27:57 +10:00
psychedelicious
f1525c2a82 feat(ui): make rendering methods not need args
They should pull from the entity's state directly. This allows more freedom with updating the canvas.
2024-09-06 21:27:57 +10:00
psychedelicious
8266c10c53 feat(ui): restore size of invoke button 2024-09-06 21:27:57 +10:00
psychedelicious
34554f8555 tidy(ui): remove unnecessary awaits in rendering module 2024-09-06 21:27:57 +10:00
psychedelicious
200dbbbf55 tidy(ui): rename some classes to better represent their responsibilities 2024-09-06 21:27:57 +10:00
psychedelicious
5a30ff6045 feat(ui): abstract out CanvasEntityAdapterBase
Things were getting too complex to reason about and the classes a bit complicated. Trying to simplify...
2024-09-06 21:27:57 +10:00
psychedelicious
73e8837d45 feat(ui): revise entity rendering flow 2024-09-06 21:27:57 +10:00
psychedelicious
6ba78e7632 tidy(ui): remove unused id on konva nodes 2024-09-06 21:27:57 +10:00
psychedelicious
736fabe327 tidy(ui): remove commented code 2024-09-06 21:27:57 +10:00
psychedelicious
ad53169eba tidy(ui): remove extraneous docstrings 2024-09-06 21:27:57 +10:00
psychedelicious
0a22e6b5a1 feat(ui): clean up unused tool module state 2024-09-06 21:27:57 +10:00
psychedelicious
5eb9e2ba04 tidy(ui): disable isDebugging flag on root component 2024-09-06 21:27:57 +10:00
psychedelicious
376a626419 fix(ui): unable to drag while transforming after switching tools 2024-09-06 21:27:57 +10:00
psychedelicious
5d089ecc77 feat(ui): prevent layer interactions when transforming or filtering 2024-09-06 21:27:57 +10:00
psychedelicious
3ed85d821b feat(ui): add compositeMaskedRegions setting 2024-09-06 21:27:57 +10:00
psychedelicious
7c29caa245 tidy(ui): merge tool slice, sendToCanvas into settings slice 2024-09-06 21:27:57 +10:00
psychedelicious
6161c71aa7 build(ui): add csstype dev dependency 2024-09-06 21:27:57 +10:00
psychedelicious
f7e68ac6d6 feat(ui): clean up tool preview rendering 2024-09-06 21:27:57 +10:00
psychedelicious
5074c97683 feat(ui): tool buttons are only disabled when currently selected 2024-09-06 21:27:57 +10:00
psychedelicious
bb6b5c4941 feat(ui): better types on CanvasStateApiModule.getEntity 2024-09-06 21:27:57 +10:00
psychedelicious
ff12f16343 feat(ui): update default logging context path to be string 2024-09-06 21:27:57 +10:00
psychedelicious
397038e6e8 tidy(ui): mark canvas module attrs readonly 2024-09-06 21:27:57 +10:00
psychedelicious
fcacc0f70a chore: release v4.2.9.dev11 2024-09-06 21:27:57 +10:00
psychedelicious
3ed4cf63ae feat(ui): tidy stateApi atoms & add docstrings 2024-09-06 21:27:57 +10:00
psychedelicious
4b753941e4 feat(ui): streamline manager -> react transform interface 2024-09-06 21:27:57 +10:00
psychedelicious
dbff929be5 tidy(ui): remove unused $isProcessingTransform atom 2024-09-06 21:27:57 +10:00
psychedelicious
44be058ac7 docs(ui): docstrings for $canvasCache 2024-09-06 21:27:57 +10:00
psychedelicious
437129cf70 feat(ui): tweak bookmark verbiage 2024-09-06 21:27:57 +10:00
psychedelicious
2e4a48b71c feat(ui): move transformer state to nanostores
This provides some free reactivity for this canvas-manager-managed state.
2024-09-06 21:27:57 +10:00
psychedelicious
618473db02 fix(ui): transform should ignore konva filters (e.g. transparency effect) 2024-09-06 21:27:57 +10:00
psychedelicious
4f4eee76b9 feat(ui): add fit to bbox as transform helper 2024-09-06 21:27:57 +10:00
psychedelicious
1628eb5900 tidy(ui): transformer organisation 2024-09-06 21:27:57 +10:00
psychedelicious
92c754df79 fix(ui): disable merge visible when 1 or fewer layers of type 2024-09-06 21:27:57 +10:00
psychedelicious
921ade4b0a feat(ui): brush preview opacity at 0.5 when drawing on mask 2024-09-06 21:27:57 +10:00
psychedelicious
c38b05c002 chore(ui): lint 2024-09-06 21:27:57 +10:00
psychedelicious
277708d69e fix(ui): edge cases in quick switch, simpler logic 2024-09-06 21:27:57 +10:00
psychedelicious
02061faaf0 chore(ui): lint 2024-09-06 21:27:57 +10:00
psychedelicious
a4fb8eb171 feat(ui): add bookmark for quick switch 2024-09-06 21:27:57 +10:00
psychedelicious
d3c2432bfb fix(ui): force dims on scaled bbox when manual scaling + locked aspect ratio
Closes #5590
2024-09-06 21:27:57 +10:00
psychedelicious
0c3e2c45f1 feat(ui): "Control Layers" -> "Layers" 2024-09-06 21:27:57 +10:00
psychedelicious
d019dbdbd3 feat(ui): "IP Adapter" -> "Global IP Adapter" 2024-09-06 21:27:57 +10:00
psychedelicious
2538b348c7 tidy(ui): canvas hotkey hooks 2024-09-06 21:27:57 +10:00
psychedelicious
420178c09f feat(ui): add alt+[ and alt+] hotkeys to cycle through layers 2024-09-06 21:27:57 +10:00
psychedelicious
75ecc56ca5 feat(ui): add layer quick switch
Q toggles between the last-selected layers.
2024-09-06 21:27:57 +10:00
psychedelicious
874b96c67d feat(ui): bbox hotkey is c 2024-09-06 21:27:57 +10:00
psychedelicious
b0eddd19d4 fix(ui): select nonexistent entity 2024-09-06 21:27:57 +10:00
psychedelicious
cb9d0bce56 feat(ui): brush & eraser width ui/ux
Use same pattern as canvas scale & opacity sliders w/ scaled slider values for precision at low values.
2024-09-06 21:27:57 +10:00
psychedelicious
fd6d080fb0 tidy(ui): canvas scale & entity opacity sliders 2024-09-06 21:27:57 +10:00
psychedelicious
8f6e6960e0 feat(ui): hotkeys for brush/eraser size 2024-09-06 21:27:57 +10:00
psychedelicious
58064a1ea5 feat(ui): use default IP adapter when creating IP adapter 2024-09-06 21:27:57 +10:00
psychedelicious
0d94a89d98 tidy(ui): organise files 2024-09-06 21:27:57 +10:00
psychedelicious
eb4d447e2f feat(ui): remove object count from entity title
This was used for troubleshooting only.
2024-09-06 21:27:57 +10:00
psychedelicious
487422a53f tidy(ui): misc cleanup 2024-09-06 21:27:57 +10:00
psychedelicious
b24f8e2b26 docs(ui): docstrings for classes (wip) 2024-09-06 21:27:57 +10:00
psychedelicious
fd22ff7c9f feat(ui): revised canvas module base class
Big cleanup. Makes these classes easier to implement, lots of comments and docstrings to clarify how it all works.

- Add default implementations for `destroy`, `repr` and `getLoggingContext`
- Tidy individual module configs
- Update `CanvasManager.buildLogger` to accept a canvas module as the arg
- Add `CanvasManager.buildPath`
2024-09-06 21:27:57 +10:00
psychedelicious
9cd5107955 feat(ui): split canvas tool previews into modules 2024-09-06 21:27:57 +10:00
psychedelicious
fd7f0056e3 fix(ui): reject on dataURLToImageData 2024-09-06 21:27:57 +10:00
psychedelicious
b1673b9222 fix(ui): correctly set last cursor pos to null 2024-09-06 21:27:57 +10:00
psychedelicious
f4e8799561 chore: release v4.2.9.dev10 2024-09-06 21:27:57 +10:00
psychedelicious
6f85743996 feat(ui): remove entity list context menu (again)
stupid events
2024-09-06 21:27:57 +10:00
psychedelicious
243e7c954e fix(ui): entity groups not collapsing 2024-09-06 21:27:57 +10:00
psychedelicious
33d25c7977 chore: release v4.2.9.dev9 2024-09-06 21:27:57 +10:00
psychedelicious
d11251f380 fix(ui): entity opacity number input focus prevents slider from opening 2024-09-06 21:27:57 +10:00
psychedelicious
d5e5cae398 feat(ui): add merge visible for raster and inpaint mask layers
I don't think it makes sense to merge control layers or regional guidance layers because they have additional state.
2024-09-06 21:27:57 +10:00
psychedelicious
fa80452abf fix(ui): save to gallery rect too large
Was including all layer types in the rect - only want the raster layers.
2024-09-06 21:27:57 +10:00
psychedelicious
c9e396bee9 fix(ui): canvasToBlob not raising error correctly 2024-09-06 21:27:57 +10:00
psychedelicious
fd6de78746 feat(ui): add save to gallery button 2024-09-06 21:27:57 +10:00
psychedelicious
6096c8ed0a fix(ui): fix getRectUnion util, add some tests 2024-09-06 21:27:57 +10:00
psychedelicious
5126ec114a fix(ui): modals not staying open
TBH not sure exactly why this broke. Fixed by rolling back the use of a render prop in favor of global state. Also revised the API of `useBoolean` and `buildUseBoolean`.
2024-09-06 21:27:57 +10:00
psychedelicious
221c51b498 fix(ui): correct labels for generation tab origin 2024-09-06 21:27:57 +10:00
psychedelicious
28db39a932 fix(ui): context menu doesn't work for new entities
I do not understand why this fixes the issue, doesn't seem like it should. But it does.
2024-09-06 21:27:57 +10:00
psychedelicious
5512f80066 tidy(ui): organise tool module 2024-09-06 21:27:57 +10:00
psychedelicious
70e27d6fbb fix(ui): staging hotkeys enabled at wrong times 2024-09-06 21:27:57 +10:00
psychedelicious
e91149111e fix(ui): incorrect batch origin preventing progress/staging 2024-09-06 21:27:57 +10:00
psychedelicious
ad7919b3ec feat(ui): restore minimal HUD 2024-09-06 21:27:57 +10:00
psychedelicious
908f4117fc feat(ui): remove unused asPreview for StageComponent 2024-09-06 21:27:57 +10:00
psychedelicious
87b294708a chore(ui): lint 2024-09-06 21:27:57 +10:00
psychedelicious
a0be7c47cd chore: release v4.2.9.dev8 2024-09-06 21:27:57 +10:00
psychedelicious
87bc3f5f0a feat(ui): revise generation mode logic
- Canvas generation mode is replaced with a boolean `sendToCanvas` flag. When off, images generated on the canvas go to the gallery. When on, they get added to the staging area.
- When an image result is received, if its destination is the canvas, staging is automatically started.
- Updated queue list to show the destination column.
- Added `IconSwitch` component to represent binary choices, used for the new `sendToCanvas` flag and image viewer toggle.
- Remove the queue actions menu in `QueueControls`. Move the queue count badge to the cancel button.
- Redo layout of `QueueControls` to prevent duplicate queue count badges.
- Fix issue where gallery and options panels could show thru transparent regions of queue tab.
- Disable panel hotkeys when on mm/queue tabs.
2024-09-06 21:27:57 +10:00
psychedelicious
14dea8bad3 chore(ui): typegen 2024-09-06 21:27:57 +10:00
psychedelicious
9ced9efc64 feat(app): add destination column to session_queue
The frontend needs to know where queue items came from (i.e. which tab), and where results are going to (i.e. send images to gallery or canvas). The `origin` column is not quite enough to represent this cleanly.

A `destination` column provides the frontend what it needs to handle incoming generations.
2024-09-06 21:27:57 +10:00
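An illustrative sketch of the distinction; fields other than `origin` and `destination`, and the value sets, are assumptions about the session_queue schema:

```ts
type QueueItem = {
  item_id: number;
  origin: string | null; // where the item came from, i.e. which tab enqueued it
  destination: string | null; // where results go, e.g. 'gallery' or 'canvas'
};

// e.g. resetting the canvas should only cancel canvas-bound generations:
const shouldCancelOnCanvasReset = (item: QueueItem) => item.destination === 'canvas';
```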
psychedelicious
b0414a81ec tidy(ui): ViewerToggleMenu -> ViewerToggle 2024-09-06 21:27:57 +10:00
psychedelicious
f078e325f8 feat(ui): alt quick switches to color picker 2024-09-06 21:27:57 +10:00
psychedelicious
1a5dc202cb feat(ui): tweak add entity button layout 2024-09-06 21:27:57 +10:00
psychedelicious
79d7f023e8 feat(ui): restore context menu for entity list 2024-09-06 21:27:57 +10:00
psychedelicious
913912d96b feat(ui): add delete button to each layer 2024-09-06 21:27:57 +10:00
psychedelicious
cfa22b345a feat(ui): add + buttons to entity categories 2024-09-06 21:27:57 +10:00
psychedelicious
ce4948ec23 feat(ui): tweak brush fill UI 2024-09-06 21:27:57 +10:00
psychedelicious
86d10ca87b feat(ui): do not select layer on staging accept 2024-09-06 21:27:57 +10:00
psychedelicious
057b5ec646 fix(ui): more fiddly queue count layout stuff 2024-09-06 21:27:57 +10:00
psychedelicious
16eede4288 fix(ui): floating params panel invoke button loading state 2024-09-06 21:27:57 +10:00
psychedelicious
e3c0c638c1 feat(ui): move canvas undo/redo to hook 2024-09-06 21:27:57 +10:00
psychedelicious
e57ee8db35 fix(ui): queue count badge positioning 2024-09-06 21:27:57 +10:00
psychedelicious
cd2a80cf77 fix(ui): add node cmdk only enabled on workflows tab 2024-09-06 21:27:57 +10:00
psychedelicious
300ecd9b16 chore: release v4.2.9.dev7 2024-09-06 21:27:57 +10:00
psychedelicious
a95fbfe6c6 fix(ui): pending node connection stuck 2024-09-06 21:27:57 +10:00
psychedelicious
79168b637e chore(ui): lint 2024-09-06 21:27:57 +10:00
psychedelicious
42c7ddebaa chore: release v4.2.9.dev6 2024-09-06 21:27:56 +10:00
psychedelicious
ac349573f2 feat(ui): migrate add node popover to cmdk
Put this together as a way to figure out the library before moving on to the full app cmdk. Works great.
2024-09-06 21:27:56 +10:00
psychedelicious
28feb013d7 fix(ui): schema parsing now that node_pack is guaranteed to be present 2024-09-06 21:27:56 +10:00
psychedelicious
4129cddbeb chore(ui): typegen 2024-09-06 21:27:56 +10:00
psychedelicious
4463e6da5d fix(app): node_pack not added to openapi schema correctly 2024-09-06 21:27:56 +10:00
psychedelicious
db0b97cca1 fix(ui): unnecessary z-index on invoke button 2024-09-06 21:27:56 +10:00
psychedelicious
be58b03137 feat(ui): split settings modal 2024-09-06 21:27:56 +10:00
psychedelicious
2df0056ef1 perf(ui): disable useInert on modals
This hook forcibly updates _all_ portals with `data-hidden=true` when the modal opens - then reverts it when the modal closes. It's intended to help screen readers. Unfortunately, this absolutely tanks performance because we have many portals. React needs to do a lot of layout calculations (not re-renders).

IMO this behaviour is a bug in chakra. The modals which generated the portals are hidden by default, so this data attr should really be set by default. Dunno why it isn't.
2024-09-06 21:27:56 +10:00
psychedelicious
eeca521baf feat(ui): fix queue item count badge positioning
Previously this badge, floating over the queue menu button next to the invoke button, was rendered within the existing layout. When I initially positioned it, the app layout interfered - it would extend into an area reserved for a flex gap, which cut off the badge.

As a (bad) workaround, I had shifted the whole app down a few pixels to make room for it. What I should have done is what I've done in this commit - render the badge in a portal to take it out of the layout so we don't need that extra vertical padding.

Sleekified some styling a bit too.
2024-09-06 21:27:56 +10:00
psychedelicious
93b9792096 fix(ui): transparency effect not updating 2024-09-06 21:27:56 +10:00
psychedelicious
a8079e5854 feat(ui): tidy canvas toolbar buttons 2024-09-06 21:27:56 +10:00
psychedelicious
61f9ba5b46 feat(ui): revised viewer toggle @joshistoast 2024-09-06 21:27:56 +10:00
psychedelicious
a99ff467f6 fix(ui): opacity reset value incorrect 2024-09-06 21:27:56 +10:00
psychedelicious
5abad87ce3 revert(ui): roll back flip, doesn't work with rotate yet 2024-09-06 21:27:56 +10:00
psychedelicious
931a4ddc88 fix(ui): disable opacity slider fully when no valid entity selected 2024-09-06 21:27:56 +10:00
psychedelicious
784a615864 fix(ui): layer preview image sometimes not rendering
The canvas size was dynamic based on the container div's size. When the div was hidden (e.g. when selecting another tab), the container's effective size is 0. This resulted in the preview image canvas being drawn at a scale of 0.

Fixed by using an absolute size for the canvas container.
2024-09-06 21:27:56 +10:00
psychedelicious
697c81b74e feat(ui): tweak regional prompt box styles 2024-09-06 21:27:56 +10:00
psychedelicious
d6b8cb9e7c feat(ui): tweak enabled/locked toggle styles 2024-09-06 21:27:56 +10:00
psychedelicious
f4a031a412 feat(ui): tweak filter styling 2024-09-06 21:27:56 +10:00
psychedelicious
1749abbd97 feat(ui): add flip & reset to transform 2024-09-06 21:27:56 +10:00
psychedelicious
1497afcfda tidy(ui): use helper to sync scaled bbox size on model change 2024-09-06 21:27:56 +10:00
psychedelicious
ec9fdace22 fix(ui): randomize seed toggle linked to prompt concat 2024-09-06 21:27:56 +10:00
psychedelicious
c1a039ef91 chore: release v4.2.9.dev5 2024-09-06 21:27:56 +10:00
psychedelicious
69d1edc036 chore(ui): lint 2024-09-06 21:27:56 +10:00
psychedelicious
b69b001755 feat(ui): generalize mask fill, add to action bar 2024-09-06 21:27:56 +10:00
psychedelicious
97414f1886 feat(ui): implement interaction locking on layers 2024-09-06 21:27:56 +10:00
psychedelicious
478daf8614 feat(ui): iterate on layer actions
- Add lock toggle
- Tweak lock and enabled styles
- Update entity list action bar w/ delete & delete all
- Move add layer menu to action bar
- Adjust opacity slider style
2024-09-06 21:27:56 +10:00
psychedelicious
c74b130303 feat(ui): collapsible entity groups 2024-09-06 21:27:56 +10:00
psychedelicious
11bc318d8d tidy(ui): rename some classes to be consistent 2024-09-06 21:27:56 +10:00
psychedelicious
77125bee7e feat(ui): tuned canvas undo/redo
- Throttle pushing to history for actions of the same type, starting with 1000ms throttle.
- History has a limit of 64 items, same as workflow editor
- Add clear history button
- Fix an issue where entity transformers would reset the entity state when the entity is fully transparent, resetting the redo stack. This could happen when you undo to the starting state of a layer
2024-09-06 21:27:56 +10:00
psychedelicious
4354cd7c38 tidy(ui): move all undoable reducers back to canvas slice 2024-09-06 21:27:56 +10:00
psychedelicious
0e31ccaea0 fix(ui): dnd image count 2024-09-06 21:27:56 +10:00
psychedelicious
e32fa8bd35 fix(ui): canvas entity opacity scale 2024-09-06 21:27:56 +10:00
psychedelicious
d209652caa perf(ui): optimize all selectors 2
Mostly selector optimization. Still a few places to tidy up but I'll get to that later.
2024-09-06 21:27:56 +10:00
psychedelicious
fe672ba5e0 perf(ui): optimize all selectors 1
I learned that the inline selector syntax recreates the selector function on every render:

```ts
const val = useAppSelector((s) => s.slice.val)
```

Not good! Better is to create a selector outside the function and use it. Doing that for all selectors; most of the way through now. Feels snappier.
2024-09-06 21:27:56 +10:00
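The preferred pattern, reusing `useAppSelector` from the snippet above and assuming the app's `RootState` type:

```ts
// Module scope: the selector keeps a stable identity across renders.
const selectVal = (s: RootState) => s.slice.val;

const Component = () => {
  const val = useAppSelector(selectVal); // same function object every render
  // ...
};
```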
psychedelicious
04d41085a3 feat(ui): rough out undo/redo on canvas 2024-09-06 21:27:56 +10:00
psychedelicious
36604f752e chore: release v4.2.9.dev4
Canvas dev build.
2024-09-06 21:27:56 +10:00
psychedelicious
9056f446bb fix(ui): handle error from internal konva method
We are dipping into konva's private API for preview images and it appears to be unsafe (got an error once). Wrapped in a try/catch.
2024-09-06 21:27:56 +10:00
psychedelicious
a33d1b979d feat(ui): split out loras state from canvas rendering state 2024-09-06 21:27:56 +10:00
psychedelicious
4670f82e65 feat(ui): split out session state from canvas rendering state 2024-09-06 21:27:56 +10:00
psychedelicious
a07346b364 feat(ui): split out settings state from canvas rendering state 2024-09-06 21:27:56 +10:00
psychedelicious
2b1e930cdf feat(ui): split out tool state from canvas rendering state 2024-09-06 21:27:56 +10:00
psychedelicious
6b75ea3b01 feat(ui): split out params/compositing state from canvas rendering state
First step to restoring undo/redo - the undoable state must be in its own slice. So params and settings must be isolated.
2024-09-06 21:27:56 +10:00
psychedelicious
ff8bc93080 feat(ui): add CanvasModuleBase class to standardize canvas APIs
I did this ages ago but undid it for some reason, not sure why. Caught a few issues related to subscriptions.
2024-09-06 21:27:56 +10:00
psychedelicious
85eff566dd feat(ui): move selected tool and tool buffer out of redux
This ephemeral state can live in the canvas classes.
2024-09-06 21:27:56 +10:00
psychedelicious
9480691de5 feat(ui): move ephemeral state into canvas classes
Things like `$lastCursorPos` are now created within the canvas drawing classes. Consumers in react access them via `useCanvasManager`.

For example:
```tsx
const canvasManager = useCanvasManager();
const lastCursorPos = useStore(canvasManager.stateApi.$lastCursorPos);
```
2024-09-06 21:27:56 +10:00
psychedelicious
463f3dbb35 feat(ui): normalize all actions to accept an entityIdentifier
Previously, canvas actions specific to an entity type only needed the id of that entity type. This allowed you to pass in the id of an entity of the wrong type.

All actions for a specific entity now take a full entity identifier, and the entity identifier type can be narrowed.

`selectEntity` and `selectEntityOrThrow` now need a full entity identifier, and narrow their return values to a specific entity type _if_ the entity identifier is narrowed.

The types for canvas entities are updated with optional type parameters for this purpose.

All reducers, actions and components have been updated.
2024-09-06 21:27:56 +10:00
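A sketch of how such narrowing can work; the entity type names and fields are illustrative assumptions:

```ts
type CanvasEntityType = 'raster_layer' | 'control_layer' | 'inpaint_mask' | 'regional_guidance';

// Optional type parameter, defaulting to the full union, so identifiers
// can be narrowed to a single entity type where callers need it.
type CanvasEntityIdentifier<T extends CanvasEntityType = CanvasEntityType> = {
  id: string;
  type: T;
};

type CanvasEntityState<T extends CanvasEntityType = CanvasEntityType> = {
  id: string;
  type: T;
  isEnabled: boolean;
};

// When the identifier is narrowed, the return type narrows with it.
declare function selectEntity<T extends CanvasEntityType>(
  identifier: CanvasEntityIdentifier<T>
): CanvasEntityState<T> | undefined;

const mask = selectEntity({ id: 'abc', type: 'inpaint_mask' });
// mask: CanvasEntityState<'inpaint_mask'> | undefined
```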
psychedelicious
72f304baa5 feat(ui): move events into modules who care about them 2024-09-06 21:27:56 +10:00
psychedelicious
b4d01659e0 fix(ui): color picker resets brush opacity 2024-09-06 21:27:56 +10:00
psychedelicious
a733d72089 fix(ui): scaled bbox loses sync 2024-09-06 21:27:56 +10:00
psychedelicious
8d0a75cb5f feat(ui): add context menu to entity list 2024-09-06 21:27:56 +10:00
psychedelicious
8a56702341 chore(ui): bump @invoke-ai/ui-library 2024-09-06 21:27:56 +10:00
psychedelicious
af638cf5ce fix(ui): missing vae precision in graph builders 2024-09-06 21:27:56 +10:00
psychedelicious
839248c74c chore: release v4.2.9.dev3
Instead of using dates, just going to increment.
2024-09-06 21:27:56 +10:00
psychedelicious
4b488de10e feat(ui): use new Result utils for enqueueing 2024-09-06 21:27:56 +10:00
psychedelicious
112d6ead91 fix(ui): graph building issue w/ controlnet 2024-09-06 21:27:56 +10:00
psychedelicious
a5e2e78dee feat(ui): add Result type & helpers
Wrappers to capture errors and turn into results:
- `withResult` wraps a sync function
- `withResultAsync` wraps an async function

Comments, tests.
2024-09-06 21:27:56 +10:00
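A minimal sketch of such helpers (the real signatures may differ):

```ts
type Result<T, E = Error> =
  | { type: 'Ok'; value: T }
  | { type: 'Err'; error: E };

// `withResult` wraps a sync function.
const withResult = <T>(fn: () => T): Result<T> => {
  try {
    return { type: 'Ok', value: fn() };
  } catch (error) {
    return { type: 'Err', error: error instanceof Error ? error : new Error(String(error)) };
  }
};

// `withResultAsync` wraps an async function.
const withResultAsync = async <T>(fn: () => Promise<T>): Promise<Result<T>> => {
  try {
    return { type: 'Ok', value: await fn() };
  } catch (error) {
    return { type: 'Err', error: error instanceof Error ? error : new Error(String(error)) };
  }
};
```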
psychedelicious
b91f79b0bb chore: release v4.2.9.dev20240824 2024-09-06 21:27:56 +10:00
psychedelicious
bc75b0259b fix(ui): lint & fix issues with adding regional ip adapters 2024-09-06 21:27:56 +10:00
psychedelicious
51663c5439 feat(ui): add knipignore tag
I'm not ready to delete some things but still want to build the app.
2024-09-06 21:27:56 +10:00
psychedelicious
86a5e2538c feat(ui): duplicate entity 2024-09-06 21:27:56 +10:00
psychedelicious
ee94b0ce17 feat(ui): autocomplete on getPrefixedId 2024-09-06 21:27:56 +10:00
psychedelicious
03b3139705 feat(ui): paste canvas gens back on source in generate mode 2024-09-06 21:27:56 +10:00
psychedelicious
44b8480832 chore(ui): typegen 2024-09-06 21:27:56 +10:00
psychedelicious
e6960320dd feat(nodes): CanvasV2MaskAndCropInvocation can paste generated image back on source
This is needed for `Generate` mode.
2024-09-06 21:27:56 +10:00
psychedelicious
775353fe82 fix(ui): extraneous entity preview updates 2024-09-06 21:27:56 +10:00
psychedelicious
e013aff9fe fix(ui): newly-added entities are selected 2024-09-06 21:27:56 +10:00
psychedelicious
a22bf4c296 feat(ui): add crosshair to color picker 2024-09-06 21:27:56 +10:00
psychedelicious
12a00de2ec fix(ui): color picker ignores alpha 2024-09-06 21:27:56 +10:00
psychedelicious
6cbaf7e0ae fix(ui): calculate renderable entities correctly in tool module 2024-09-06 21:27:56 +10:00
psychedelicious
9b8d814082 feat(ui): better color picker 2024-09-06 21:27:56 +10:00
psychedelicious
20a0ed81c5 feat(ui): colored mask preview image 2024-09-06 21:27:56 +10:00
psychedelicious
a040d1d2a6 fix(ui): new rectangles don't trigger rerender 2024-09-06 21:27:56 +10:00
psychedelicious
7c68b889eb chore: bump version v4.2.9.dev20240823 2024-09-06 21:27:56 +10:00
psychedelicious
5ea3c11883 feat(ui): disable most interaction while filtering 2024-09-06 21:27:37 +10:00
psychedelicious
f355d907e6 fix(ui): filter preview offset 2024-09-06 21:27:37 +10:00
psychedelicious
da5b99c840 feat(ui): tweak layout of staging area toolbar 2024-09-06 21:27:37 +10:00
psychedelicious
4f9c4f7b40 chore(ui): typegen 2024-09-06 21:27:37 +10:00
psychedelicious
143c47c887 tidy(app): clean up app changes for canvas v2 2024-09-06 21:27:37 +10:00
psychedelicious
2ca5aeda96 feat(ui): use singleton for clear q confirm dialog 2024-09-06 21:27:37 +10:00
psychedelicious
9b1ee633e6 fix(ui): rip out broken recall logic, NO TS ERRORS 2024-09-06 21:27:37 +10:00
psychedelicious
c06998184e chore(ui): lint 2024-09-06 21:27:37 +10:00
psychedelicious
84aa2b94e5 fix(ui): staging area interaction scopes 2024-09-06 21:27:37 +10:00
psychedelicious
64f7c7bf36 fix(ui): staging area actions 2024-09-06 21:27:37 +10:00
psychedelicious
18d9263d60 tidy(ui): more cleanup 2024-09-06 21:27:37 +10:00
psychedelicious
e0e8165992 fix(ui): upscale tab graph 2024-09-06 21:27:37 +10:00
psychedelicious
e826f8a020 fix(ui): sdxl graph builder 2024-09-06 21:27:37 +10:00
psychedelicious
9486c50e67 fix(ui): select next entity in the list when deleting 2024-09-06 21:27:37 +10:00
psychedelicious
66424c3c93 feat(ui): fix delete layer hotkey 2024-09-06 21:27:37 +10:00
psychedelicious
cff871e8a6 tidy(ui): "eye dropper" -> "color picker" 2024-09-06 21:27:37 +10:00
psychedelicious
51cd435ad8 tidy(ui): regional guidance buttons 2024-09-06 21:27:37 +10:00
psychedelicious
bc8bf989f3 feat(ui): update entity list menu 2024-09-06 21:27:37 +10:00
psychedelicious
87150b7c6b feat(ui): add log debug button 2024-09-06 21:27:37 +10:00
psychedelicious
5797797904 chore(ui): lint 2024-09-06 21:27:37 +10:00
psychedelicious
ab7b9c4523 chore(ui): prettier 2024-09-06 21:27:37 +10:00
psychedelicious
68bf4459c3 chore(ui): eslint 2024-09-06 21:27:37 +10:00
psychedelicious
05052fd21c tidy(ui): remove unused stuff 4 2024-09-06 21:27:37 +10:00
psychedelicious
f4e3f87c2e tidy(ui): remove unused stuff 3 2024-09-06 21:27:37 +10:00
psychedelicious
f47e1afd57 tidy(ui): remove unused pkg @chakra-ui/react-use-size 2024-09-06 21:27:37 +10:00
psychedelicious
aa4be01bdf feat(ui): revise graph building for control layers, fix issues w/ invocation complete events 2024-09-06 21:27:37 +10:00
psychedelicious
f8d4d5e546 feat(ui): use unique id for metadata in Graph class 2024-09-06 21:27:37 +10:00
psychedelicious
f628b8ad9d tidy(ui): remove unused stuff 2 2024-09-06 21:27:37 +10:00
psychedelicious
bb4eb70a4a tidy(ui): remove unused stuff 2024-09-06 21:27:37 +10:00
psychedelicious
7b1a533bf2 tidy(ui): reduce use of parseify util 2024-09-06 21:27:37 +10:00
psychedelicious
d32526f04a feat(ui): refine canvas entity list items & menus 2024-09-06 21:27:37 +10:00
psychedelicious
76bc2cf5db feat(ui): canvas layer preview, revised reactivity for adapters 2024-09-06 21:27:37 +10:00
psychedelicious
90b3ebb27d feat(ui): add SyncableMap
Can be used with `useSyncExternalStore` to make a `Map` reactive.
2024-09-06 21:27:37 +10:00
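A sketch of the idea (the real implementation likely differs in detail):

```ts
class SyncableMap<K, V> extends Map<K, V> {
  private subscribers = new Set<() => void>();
  private snapshot: ReadonlyMap<K, V> = new Map();

  set(key: K, value: V): this {
    super.set(key, value);
    this.notify();
    return this;
  }

  delete(key: K): boolean {
    const deleted = super.delete(key);
    if (deleted) this.notify();
    return deleted;
  }

  private notify(): void {
    // Guard: Map's constructor calls set() before fields are initialized.
    if (!this.subscribers) return;
    // New identity per mutation so useSyncExternalStore sees a changed snapshot.
    this.snapshot = new Map(this);
    for (const cb of this.subscribers) cb();
  }

  // Arrow properties so these can be passed straight to useSyncExternalStore.
  subscribe = (cb: () => void): (() => void) => {
    this.subscribers.add(cb);
    return () => {
      this.subscribers.delete(cb);
    };
  };

  getSnapshot = (): ReadonlyMap<K, V> => this.snapshot;
}

// In a component:
// const entities = useSyncExternalStore(map.subscribe, map.getSnapshot);
```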
psychedelicious
4b65d9145e tidy(ui): removed unused transform methods from canvasmanager 2024-09-06 21:27:37 +10:00
psychedelicious
9badbb8dff feat(ui): transform tool ux 2024-09-06 21:27:37 +10:00
psychedelicious
70bcca23a4 feat(ui): rough out canvas mode 2024-09-06 21:27:37 +10:00
psychedelicious
5ba15d0db5 feat(ui): add canvas autosave checkbox 2024-09-06 21:27:37 +10:00
psychedelicious
fef70fdac0 fix(ui): memory leak when getting image DTO
must unsubscribe!
2024-09-06 21:27:37 +10:00
psychedelicious
62eb00aacf feat(ui): rework settings menu 2024-09-06 21:27:37 +10:00
psychedelicious
3f41ea574b feat(ui): no entities fallback buttons 2024-09-06 21:27:37 +10:00
psychedelicious
e865c1a2b5 perf(ui): optimize gallery image delete button rendering 2024-09-06 21:27:37 +10:00
psychedelicious
f4a798c64f feat(ui): remove "solid" background option 2024-09-06 21:27:37 +10:00
psychedelicious
7806604666 tidy(ui): organise files and classes 2024-09-06 21:27:37 +10:00
psychedelicious
917f8087b4 tidy(ui): abstract compositing logic to module 2024-09-06 21:27:37 +10:00
psychedelicious
b7a9a6153a fix(ui): fix canvas cache property access 2024-09-06 21:27:37 +10:00
psychedelicious
fdc5f89060 tidy(ui): clean up CanvasFilter class 2024-09-06 21:27:37 +10:00
psychedelicious
7fc0959c83 tidy(ui): clean up a few bits and bobs 2024-09-06 21:27:37 +10:00
psychedelicious
95ff64d1d6 tidy(ui): abstract canvas rendering logic to module 2024-09-06 21:27:37 +10:00
psychedelicious
67f94007d0 tidy(ui): abstract caching logic to module 2024-09-06 21:27:37 +10:00
psychedelicious
930f27d009 tidy(ui): abstract worker logic to module 2024-09-06 21:27:37 +10:00
psychedelicious
6125e0fd13 tidy(ui): abstract stage logic into module 2024-09-06 21:27:37 +10:00
psychedelicious
c8e8cf9dfb feat(ui): add entity group hiding 2024-09-06 21:27:37 +10:00
psychedelicious
ac8122a154 feat(ui): move all caching out of redux
While we lose the benefit of the caches persisting across reloads, this is a much simpler way to handle things. If we need a persistent cache, we can explore it in the future.
2024-09-06 21:27:37 +10:00
psychedelicious
aa7c909f14 feat(ui): revised rasterization caching
- use `stable-hash` to generate stable, non-crypto hashes for cache entries, instead of using deep object comparisons
- use an object to store image name caches
2024-09-06 21:27:37 +10:00
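Roughly, assuming `stable-hash`'s default export; the cache-key fields are hypothetical:

```ts
import stableHash from 'stable-hash';

// Hypothetical cache-key shape; the point is hashing instead of deep-comparing.
type RasterizeArgs = {
  entityId: string;
  rect: { x: number; y: number; width: number; height: number };
};

// Plain object keyed by stable hash, mapping args -> uploaded image name.
const imageNameCache: Record<string, string> = {};

const getCachedImageName = (args: RasterizeArgs): string | undefined =>
  imageNameCache[stableHash(args)];

const setCachedImageName = (args: RasterizeArgs, imageName: string): void => {
  imageNameCache[stableHash(args)] = imageName;
};
```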
psychedelicious
40d294b29c feat(ui): revise filter implementation 2024-09-06 21:27:36 +10:00
psychedelicious
ff1fd5b5b5 fix(ui): add button to delete inpaint mask 2024-09-06 21:27:36 +10:00
psychedelicious
6aa87d7df2 feat(ui): add contexts/hooks to access entity adapters directly 2024-09-06 21:27:36 +10:00
psychedelicious
7732eeb815 feat(ui): add CanvasManagerProviderGate
This context waits to render its children until the canvas manager is available. Then its children have access to the manager directly via hook.
2024-09-06 21:27:36 +10:00
psychedelicious
9eb37feb6c feat(ui): do not set $canvasManager until ready 2024-09-06 21:27:36 +10:00
psychedelicious
e3b38339a0 fix(ui): inpaint mask naming 2024-09-06 21:27:36 +10:00
psychedelicious
f92695e677 feat(ui): efficient canvas compositing
Also solves the issue of exporting layers at different opacities than what is visible.
2024-09-06 21:27:36 +10:00
psychedelicious
9473d54fc7 feat(ui): allow multiple inpaint masks
This is easier than making it a nullable singleton
2024-09-06 21:27:36 +10:00
psychedelicious
33783eae4f fix(ui): missing rasterization cache invalidations 2024-09-06 21:27:36 +10:00
psychedelicious
f6c3570c26 feat(ui): iterate on filter UI, flow 2024-09-06 21:27:36 +10:00
psychedelicious
96a76a763e fix(ui): rehydration data loss 2024-09-06 21:27:36 +10:00
psychedelicious
209bc925c0 feat(ui): sort log namespaces 2024-09-06 21:27:36 +10:00
psychedelicious
a77fb427d0 fix(ui): do not merge arrays by index during rehydration 2024-09-06 21:27:36 +10:00
psychedelicious
6fc358b9ea fix(ui): clone parsed data during state rehydration
Without this, the objects and arrays in `parsed` could be mutated, and the log statement would show the mutated data.
2024-09-06 21:27:36 +10:00
psychedelicious
742ac3088d fix(ui): fix logger filter
Was accidentally replacing the filter instead of appending to it.
2024-09-06 21:27:36 +10:00
psychedelicious
8b958e5280 fix(ui): race condition queue status
Sequence of events causing the race condition:
- Enqueue batch
- Invalidate `SessionQueueStatus` tag
- Request updated queue status via HTTP - batch still processing at this point
- Batch completes
- Event emitted saying so
- Optimistically update the queue status cache, it is correct
- HTTP request makes it back and overwrites the optimistic update, indicating the batch is still in progress

Fixed by not invalidating the cache.
2024-09-06 21:27:36 +10:00
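A sketch of the event-driven alternative, using RTK Query's `updateQueryData`; the endpoint name, event name, and payload shape are all assumptions, not the app's actual API:

```ts
declare const api: {
  util: {
    updateQueryData: (
      endpoint: 'getQueueStatus',
      arg: undefined,
      recipe: (draft: { pending: number; in_progress: number }) => void
    ) => unknown;
  };
};
declare const dispatch: (action: unknown) => void;
declare const socket: {
  on: (event: string, cb: (e: { pending: number; in_progress: number }) => void) => void;
};

socket.on('queue_item_status_changed', (e) => {
  // Patch the cache in place; do NOT invalidate the tag, or a slow HTTP
  // refetch can come back later and overwrite this fresher state.
  dispatch(
    api.util.updateQueryData('getQueueStatus', undefined, (draft) => {
      draft.pending = e.pending;
      draft.in_progress = e.in_progress;
    })
  );
});
```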
psychedelicious
71c727e586 fix(ui): handle opacity for masks 2024-09-06 21:27:36 +10:00
psychedelicious
c590e8bd6f feat(ui): default background to checkerboard 2024-09-06 21:27:36 +10:00
psychedelicious
71a13c3e91 feat(ui): clean up logging namespaces, allow skipping namespaces 2024-09-06 21:27:36 +10:00
psychedelicious
d7e7f049e7 chore(ui): bump ui library 2024-09-06 21:27:36 +10:00
psychedelicious
b23332e167 fix(ui): do not allow drawing if layer disabled 2024-09-06 21:27:36 +10:00
psychedelicious
21c637c365 fix(ui): stale state causing race conditions & extraneous renders 2024-09-06 21:27:36 +10:00
psychedelicious
2b7aecb346 fix(ui): do not clear buffer when rendering "real" objects 2024-09-06 21:27:36 +10:00
psychedelicious
5c669f7925 tidy(ui): remove "filter" from CanvasImageState 2024-09-06 21:27:36 +10:00
psychedelicious
748f445de5 feat(ui): better editable title 2024-09-06 21:27:36 +10:00
psychedelicious
2127ee9a09 fix(ui): stroke eraserline 2024-09-06 21:27:36 +10:00
psychedelicious
7ccb1c5938 feat(ui): restore transparency effect for control layers 2024-09-06 21:27:36 +10:00
psychedelicious
26d1a2ebfe feat(ui): use text cursor for entity title 2024-09-06 21:27:36 +10:00
psychedelicious
e0bd683d88 tidy(ui): remove extraneous logging in CanvasStateApi 2024-09-06 21:27:36 +10:00
psychedelicious
3c5332b236 feat(ui): better buffer commit logic 2024-09-06 21:27:36 +10:00
psychedelicious
d4e847bd9a feat(ui): render buffer separately from "real" objects 2024-09-06 21:27:36 +10:00
psychedelicious
cee16536cd fix(ui): pixelRect should always be integer 2024-09-06 21:27:36 +10:00
psychedelicious
811e75dc67 fix(ui): only update stage attrs when stage itself is dragged 2024-09-06 21:27:36 +10:00
psychedelicious
a4a77d70f7 feat(ui): add line simplification
This fixes some awkward issues where line segments stack up.
2024-09-06 21:27:36 +10:00
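One simple way to do this (illustrative only; not necessarily the algorithm used here) is a distance threshold that drops points too close to the last kept point:

```ts
type Point = { x: number; y: number };

// Drop points closer than `epsilon` to the previously kept point, so stacked
// segments from slow cursor movement collapse into one.
const simplifyLine = (points: Point[], epsilon = 1): Point[] => {
  if (points.length === 0) return [];
  const kept: Point[] = [points[0]];
  for (let i = 1; i < points.length; i++) {
    const last = kept[kept.length - 1];
    if (Math.hypot(points[i].x - last.x, points[i].y - last.y) >= epsilon) {
      kept.push(points[i]);
    }
  }
  // Always keep the final point so the stroke ends where the cursor did.
  const lastInput = points[points.length - 1];
  if (kept[kept.length - 1] !== lastInput) kept.push(lastInput);
  return kept;
};
```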
psychedelicious
13e2de3cb4 fix(ui): various things listening when they need not listen 2024-09-06 21:27:36 +10:00
psychedelicious
7925bbf454 feat(ui): layer opacity via caching 2024-09-06 21:27:36 +10:00
psychedelicious
39eb0a01d2 feat(ui): reset view fits all visible objects 2024-09-06 21:27:36 +10:00
psychedelicious
305e50004a fix(ui): rerenders when changing canvas scale 2024-09-06 21:27:36 +10:00
psychedelicious
9974784596 fix(ui): do not render rasterized layer unless renderObjects=true 2024-09-06 21:27:36 +10:00
psychedelicious
105c5a7fd4 feat(ui): revise app layout strategy, add interaction scopes for hotkeys 2024-09-06 21:27:36 +10:00
psychedelicious
7baad9c72e feat(ui): tweak mask patterns 2024-09-06 21:27:36 +10:00
psychedelicious
a5e8705ea3 fix(ui): dynamic prompts recalcs when presets are loaded 2024-09-06 21:27:36 +10:00
psychedelicious
7b2a5c3a30 fix(ui): use style preset prompts correctly 2024-09-06 21:27:36 +10:00
psychedelicious
3dcb33026a fix(ui): discard selected staging image not all other images 2024-09-06 21:27:36 +10:00
psychedelicious
e956ea5482 fix(ui): respect image size in staging preview 2024-09-06 21:27:36 +10:00
psychedelicious
64f50ab278 tidy(ui): cleanup after events change 2024-09-06 21:27:36 +10:00
psychedelicious
b152937f30 feat(ui): move socket event handling out of redux
Download events and invocation status events (including progress images) are very frequent. There's no real need for these to pass through redux. Handling them outside redux is a significant performance win - far fewer store subscription calls, far fewer trips through middleware.

All event handling is moved outside middleware. Cleanup of unused actions and listeners to follow.
2024-09-06 21:27:36 +10:00
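A sketch of the pattern with a nanostores atom; the event name and payload shape are assumptions:

```ts
import { atom } from 'nanostores';

declare const socket: {
  on: (event: string, cb: (e: { progress_image_url: string }) => void) => void;
};

// High-frequency events bypass redux entirely: they land in an atom, and only
// components subscribed via useStore($progressImageUrl) re-render.
export const $progressImageUrl = atom<string | null>(null);

socket.on('invocation_progress', (e) => {
  $progressImageUrl.set(e.progress_image_url);
});
```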
psychedelicious
4e6a9d990c fix(ui): rebase conflicts 2024-09-06 21:27:36 +10:00
psychedelicious
d6fb220b2c fix(ui): update compositing rect when fill changes 2024-09-06 21:27:36 +10:00
psychedelicious
13fc539f8b feat(ui): add canvas background style 2024-09-06 21:27:36 +10:00
psychedelicious
d2b5d6342c feat(ui): mask layers choose own opacity 2024-09-06 21:27:36 +10:00
psychedelicious
eb644d4e6a feat(ui): mask fill patterns 2024-09-06 21:27:36 +10:00
psychedelicious
d8a2efc691 build(ui): add vite types to tsconfig 2024-09-06 21:27:36 +10:00
psychedelicious
163687aef3 fix(ui): do not smooth pixel data when using eyeDropper 2024-09-06 21:27:36 +10:00
psychedelicious
b41f1a897d tidy(ui): tool components & translations 2024-09-06 21:27:36 +10:00
psychedelicious
006a5723da feat(ui): rough out eyedropper tool
It's a bit slow because we are converting the stage to canvas on every mouse move. Also need to improve the visuals, but it works.
2024-09-06 21:27:36 +10:00
psychedelicious
daede9c9cf fix(ui): ip adapters work 2024-09-06 21:27:36 +10:00
psychedelicious
af7d14cd59 feat(ui): rename layers 2024-09-06 21:27:36 +10:00
psychedelicious
ba14fe3600 feat(ui): revise entity menus 2024-09-06 21:27:36 +10:00
psychedelicious
af726d1f15 feat(ui): split control layers from raster layers for UI and internal state, same rendering as raster layers 2024-09-06 21:27:36 +10:00
psychedelicious
3a2efb351d feat(ui): implement cache for image rasterization, rip out some old controladapters code 2024-09-06 21:27:36 +10:00
psychedelicious
c9dc61c311 feat(ui, app): use layer as control (wip) 2024-09-06 21:27:36 +10:00
psychedelicious
500f151d96 feat(ui): add contextmenu for canvas entities 2024-09-06 21:27:36 +10:00
psychedelicious
9708fc5d6c feat(ui): more better logging & naming 2024-09-06 21:27:36 +10:00
psychedelicious
c697501285 feat(ui): better logging w/ path 2024-09-06 21:27:36 +10:00
psychedelicious
e213cfc2ba feat(ui): always show marks on canvas scale slider 2024-09-06 21:27:36 +10:00
psychedelicious
270c1304a8 fix(ui): do not import button from chakra 2024-09-06 21:27:36 +10:00
psychedelicious
b887cf4612 fix(ui): scaled bbox preview 2024-09-06 21:27:36 +10:00
psychedelicious
e0573b721e feat(ui): tidy up atoms 2024-09-06 21:27:36 +10:00
psychedelicious
22712c5dac feat(ui): convert all my pubsubs to atoms
it's the same but better
2024-09-06 21:27:36 +10:00
psychedelicious
7339b3d8cc feat(ui): add translation 2024-09-06 21:27:36 +10:00
psychedelicious
0eda34b41f fix(ui): give up on thumbnail loading, causes flash during transformer 2024-09-06 21:27:36 +10:00
psychedelicious
9d9e845198 fix(ui): depth anything v2 2024-09-06 21:27:36 +10:00
psychedelicious
6b7ead4461 tidy(ui): remove unused code, comments 2024-09-06 21:27:36 +10:00
psychedelicious
3ce3056c4a fix(ui): staging area works 2024-09-06 21:27:36 +10:00
psychedelicious
aa73cbf459 feat(nodes): temp disable canvas output crop 2024-09-06 21:27:36 +10:00
psychedelicious
fa3f109eb9 fix(ui): max scale 1 when reset view 2024-09-06 21:27:36 +10:00
psychedelicious
c22afa5725 feat(ui): better scale changer component, reset view functionality 2024-09-06 21:27:36 +10:00
psychedelicious
318086571d fix(ui): img2img 2024-09-06 21:27:36 +10:00
psychedelicious
260ef8edd5 feat(ui): add manual scale controls 2024-09-06 21:27:36 +10:00
psychedelicious
a3a933a797 fix(ui): do not await clearBuffer 2024-09-06 21:27:36 +10:00
psychedelicious
d2d298604c feat(ui): dnd image into layer 2024-09-06 21:27:36 +10:00
psychedelicious
318f2ee003 fix(ui): do not await commitBuffer 2024-09-06 21:27:36 +10:00
psychedelicious
657d59268c fix(ui): properly destroy entities in manager cleanup 2024-09-06 21:27:36 +10:00
psychedelicious
54b7931779 tidy(ui): clearer component names for regional guidance 2024-09-06 21:27:36 +10:00
psychedelicious
aadba55796 tidy(ui): clearer component names for ip adapter 2024-09-06 21:27:36 +10:00
psychedelicious
1a8d65d7a9 tidy(ui): clearer component names for inpaint mask 2024-09-06 21:27:36 +10:00
psychedelicious
36f7d0957a tidy(ui): clearer component names for control adapters 2024-09-06 21:27:36 +10:00
psychedelicious
85a33ff6aa feat(ui): simplify canvas list item headers 2024-09-06 21:27:36 +10:00
psychedelicious
ebef4feddb fix(ui): ip adapter list item 2024-09-06 21:27:36 +10:00
psychedelicious
0a34bebd9c tidy(ui): clean up unused logic 2024-09-06 21:27:36 +10:00
psychedelicious
d6acd96dec feat(ui): clean up state, add mutex for image loading, add thumbnail loading 2024-09-06 21:27:36 +10:00
psychedelicious
3ac88acf11 chore(ui): add async-mutex dep 2024-09-06 21:27:36 +10:00
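For context, `async-mutex` serializes async work via `runExclusive`; a hypothetical use for image loading:

```ts
import { Mutex } from 'async-mutex';

// Serialize loads so concurrent requests for the same image cannot race.
const imageLoadMutex = new Mutex();

const loadImage = (url: string): Promise<HTMLImageElement> =>
  imageLoadMutex.runExclusive(
    () =>
      new Promise<HTMLImageElement>((resolve, reject) => {
        const img = new Image();
        img.onload = () => resolve(img);
        img.onerror = () => reject(new Error(`Failed to load ${url}`));
        img.src = url;
      })
  );
```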
psychedelicious
2a7cffed2a feat(ui): txt2img, img2img, inpaint & outpaint working 2024-09-06 21:27:36 +10:00
psychedelicious
08c2089b3d feat(ui): no padding on transformer outlines 2024-09-06 21:27:36 +10:00
psychedelicious
87521b07ce feat(ui): restore object count to layer titles 2024-09-06 21:27:36 +10:00
psychedelicious
5de7efc1dc tidy(ui): "useIsEntitySelected" -> "useEntityIsSelected" 2024-09-06 21:27:36 +10:00
psychedelicious
d4fb80772e tidy(ui): move transformer statics into class 2024-09-06 21:27:36 +10:00
psychedelicious
97886bf62e tidy(ui): massive cleanup
- create a context for entity identifiers, massively simplifying the UI for each entity in the list
- consolidate common redux actions
- remove now-unused code
2024-09-06 21:27:36 +10:00
psychedelicious
83657e0a68 perf(ui): do not add duplicate points to lines 2024-09-06 21:27:36 +10:00
psychedelicious
adf293f6c9 feat(ui): up line tension to 0.3 2024-09-06 21:27:36 +10:00
psychedelicious
d35120feb9 perf(ui): disable stroke, perfect draw on compositing rect 2024-09-06 21:27:36 +10:00
psychedelicious
00020f49fe tidy(ui): remove unused code, initial image 2024-09-06 21:27:36 +10:00
psychedelicious
206d1b231a tidy(ui): remove unused state & actions 2024-09-06 21:27:36 +10:00
psychedelicious
008d8f491c feat(ui): region mask rendering 2024-09-06 21:27:36 +10:00
psychedelicious
ae5543b6fa feat(ui): esc cancels drawing buffer
maybe this is not wanted? we'll see
2024-09-06 21:27:36 +10:00
psychedelicious
2c2e6c5c25 fix(ui): render transformer over objects, fix issue w/ inpaint rect color 2024-09-06 21:27:36 +10:00
psychedelicious
718bed5758 fix(ui): brush preview fill for inpaint/region 2024-09-06 21:27:36 +10:00
psychedelicious
d2c524e7fd fix(ui): no objects rendered until vis toggled 2024-09-06 21:27:36 +10:00
psychedelicious
c172221038 feat(ui): inpaint mask transform 2024-09-06 21:27:36 +10:00
psychedelicious
b006edf9c4 fix(ui): layer accidental early set isFirstRender=false 2024-09-06 21:27:36 +10:00
psychedelicious
9f2dc4d6d1 fix(ui): inpaint mask rendering 2024-09-06 21:27:36 +10:00
psychedelicious
8816d6c0c5 feat(ui): wip inpaint mask uses new API 2024-09-06 21:27:36 +10:00
psychedelicious
6dddad01fc feat(ui): move updatePosition to transformer 2024-09-06 21:27:36 +10:00
psychedelicious
7b7f0a7380 feat(ui): move resetScale to transformer 2024-09-06 21:27:36 +10:00
psychedelicious
8f780e79e7 tidy(ui): more imperative naming 2024-09-06 21:27:36 +10:00
psychedelicious
5a2b00fb95 tidy(ui): use imperative names for setters in stateapi 2024-09-06 21:27:36 +10:00
psychedelicious
25158d4d92 fix(ui): commit drawing buffer on tool change, fixing bbox not calculating 2024-09-06 21:27:36 +10:00
psychedelicious
3c9e9fa746 fix(ui): sync transformer when requesting bbox calc 2024-09-06 21:27:36 +10:00
psychedelicious
e5a7e932cf tidy(ui): rename union CanvasEntity -> CanvasEntityState 2024-09-06 21:27:36 +10:00
psychedelicious
84d15b0c7f fix(ui): request rect calc immediately on transform, hiding rect 2024-09-06 21:27:36 +10:00
psychedelicious
79f216a491 feat(ui): move bbox calculation to transformer 2024-09-06 21:27:36 +10:00
psychedelicious
b86645c1a7 feat(ui): use set for transformer subscriptions 2024-09-06 21:27:36 +10:00
psychedelicious
3f359f1059 tidy(ui): clean up worker tasks when complete 2024-09-06 21:27:36 +10:00
psychedelicious
7c7117c39a tidy(ui): remove unused code in CanvasTool 2024-09-06 21:27:36 +10:00
psychedelicious
92b8c68b94 feat(ui): use pubsub for isTransforming on manager 2024-09-06 21:27:36 +10:00
psychedelicious
7b7d722bae docs(ui): update transformer docstrings 2024-09-06 21:27:36 +10:00
psychedelicious
227c3197b4 feat(ui): revised event pubsub, transformer logic split out 2024-09-06 21:27:35 +10:00
psychedelicious
cab1ba8970 feat(ui): add simple pubsub 2024-09-06 21:27:35 +10:00
psychedelicious
7395c94d00 feat(ui): document & clean up object renderer 2024-09-06 21:27:35 +10:00
psychedelicious
2a2dadb952 feat(ui): split out object renderer 2024-09-06 21:27:35 +10:00
psychedelicious
b60dcf8036 fix(ui): unable to hold shift while transforming to retain ratio 2024-09-06 21:27:35 +10:00
psychedelicious
edeb706d19 tidy(ui): rename canvas stuff 2024-09-06 21:27:35 +10:00
psychedelicious
cd76a2d217 tidy(ui): consolidate getLoggingContext builders 2024-09-06 21:27:35 +10:00
psychedelicious
2aaa6c3986 fix(ui): align all tools to 1px grid
- Offset brush tool by 0.5px when width is odd, ensuring each stroke edge is exactly on a pixel boundary
- Round the rect tool also
2024-09-06 21:27:35 +10:00
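The geometry behind the 0.5px offset, as a sketch: an odd-width stroke must be centered on a half-pixel for its edges to land on pixel boundaries, while even widths belong on whole pixels.

```ts
const alignCoordForTool = (coord: number, toolWidth: number): number => {
  const rounded = Math.round(coord);
  return toolWidth % 2 === 0 ? rounded : rounded + 0.5;
};

// e.g. a 3px brush at x=10.2 draws at x=10.5, spanning exactly pixels 9, 10, 11
```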
psychedelicious
dfdd56dbec feat(ui): disable image smoothing on layers 2024-09-06 21:27:35 +10:00
psychedelicious
f521704ade fix(ui): round position when rasterizing layer 2024-09-06 21:27:35 +10:00
psychedelicious
0e0cf9cd3e feat(ui): continue modularizing transform 2024-09-06 21:27:35 +10:00
psychedelicious
60484bd45e feat(ui): fix a few things that didn't unsubscribe correctly, add helper to manage subscriptions 2024-09-06 21:27:35 +10:00
psychedelicious
5978ba32c0 feat(ui): merge bbox outline into transformer 2024-09-06 21:27:35 +10:00
psychedelicious
45efa8f40d fix(ui): update parent's pos not transformers 2024-09-06 21:27:35 +10:00
psychedelicious
3392e1f0bf feat(ui): merge interaction rect into transformer class 2024-09-06 21:27:35 +10:00
psychedelicious
c734385f18 feat(ui): prepare staging area 2024-09-06 21:27:35 +10:00
psychedelicious
148434d833 feat(ui): typing for logging context 2024-09-06 21:27:35 +10:00
psychedelicious
d5e4a965cc feat(ui): remove inheritance of CanvasObject
JS is terrible
2024-09-06 21:27:35 +10:00
psychedelicious
c8ac4d9e1b feat(ui): split & document transformer logic, iterate on class structures 2024-09-06 21:27:35 +10:00
psychedelicious
1e17461601 feat(ui): rotation snap to nearest 45deg when holding shift 2024-09-06 21:27:35 +10:00
psychedelicious
4432244bc3 feat(ui): expose subscribe method for nanostores 2024-09-06 21:27:35 +10:00
psychedelicious
0d087f84a5 tidy(ui): remove layer scaling reducers 2024-09-06 21:27:35 +10:00
psychedelicious
286deb277b fix(ui): pixel-perfect transforms 2024-09-06 21:27:35 +10:00
psychedelicious
3f2c1139ea fix(ui): layer visibility toggle 2024-09-06 21:27:35 +10:00
psychedelicious
1f3163942a fix(nodes): fix canvas mask erode
it wasn't eroding enough and caused incorrect transparency in result images
2024-09-06 21:27:35 +10:00
psychedelicious
28affcc60a fix(ui): do not reset layer on first render 2024-09-06 21:27:35 +10:00
psychedelicious
f59c2cf7f5 feat(ui): revised logging and naming setup, fix staging area 2024-09-06 21:27:35 +10:00
psychedelicious
86ff52ec36 feat(ui): add repr methods to layer and object classes 2024-09-06 21:27:35 +10:00
psychedelicious
17e7364bdb feat(ui): use nanoid(10) instead of uuidv4 for canvas
Shorter ids make things much more readable.
2024-09-06 21:27:35 +10:00
psychedelicious
71e4b60a30 build(ui): add nanoid as explicit dep 2024-09-06 21:27:35 +10:00
psychedelicious
a3e1ff637d fix(ui): move CanvasImage's konva image to correct object 2024-09-06 21:27:35 +10:00
psychedelicious
a3d2ba1444 fix(ui): prevent flash when applying transform 2024-09-06 21:27:35 +10:00
psychedelicious
9baa594a56 build(ui): add eslint rules for async stuff 2024-09-06 21:27:35 +10:00
psychedelicious
49de11c3ae feat(ui): trying to fix flicker after transform 2024-09-06 21:27:35 +10:00
psychedelicious
690fbdc73d feat(ui): transform cleanup 2024-09-06 21:27:35 +10:00
psychedelicious
4d52824895 feat(ui): fix transform when rotated 2024-09-06 21:27:35 +10:00
psychedelicious
2e13e75fc6 fix(ui): use pixel bbox when image is in layer 2024-09-06 21:27:35 +10:00
psychedelicious
f6f6462590 fix(ui): transforming when axes flipped 2024-09-06 21:27:35 +10:00
psychedelicious
79fee16629 feat(ui): hallelujah (???) 2024-09-06 21:27:35 +10:00
psychedelicious
073f63251a feat(ui): add debug button 2024-09-06 21:27:35 +10:00
psychedelicious
20125dc04b fix(ui): transformer padding 2024-09-06 21:27:35 +10:00
psychedelicious
9f1f8d62f0 feat(ui): wip transform mode 2 2024-09-06 21:27:35 +10:00
psychedelicious
99d432785c feat(ui): wip transform mode 2024-09-06 21:27:35 +10:00
psychedelicious
54e5401a96 feat(ui): wip transform mode 2024-09-06 21:27:35 +10:00
psychedelicious
6b5d7406d6 fix(ui): dnd to canvas broke 2024-09-06 21:27:35 +10:00
psychedelicious
c6bfeba61a fix(ui): conflicts after rebasing 2024-09-06 21:27:35 +10:00
psychedelicious
af1c8cc7e0 fix(ui): imageDropped listener 2024-09-06 21:27:35 +10:00
psychedelicious
879161ed4c wip 2024-09-06 21:27:35 +10:00
psychedelicious
a15944774c fix(ui): transform tool seems to be working 2024-09-06 21:27:35 +10:00
psychedelicious
42612a4f92 fix(ui): move tool fixes, add transform tool 2024-09-06 21:27:35 +10:00
psychedelicious
5c39935e88 feat(ui): move tool now only moves 2024-09-06 21:27:35 +10:00
psychedelicious
35c941c540 feat(ui): layer bbox calc in worker 2024-09-06 21:27:35 +10:00
psychedelicious
92931b0d4d feat(ui): tweaked entity & group selection styles 2024-09-06 21:27:35 +10:00
psychedelicious
7c3d2f5578 feat(ui): canvas entity list headers 2024-09-06 21:27:35 +10:00
psychedelicious
9c809ba147 tidy(ui): CanvasRegion 2024-09-06 21:27:35 +10:00
psychedelicious
25fb1bb837 tidy(ui): CanvasRect 2024-09-06 21:27:35 +10:00
psychedelicious
2d01086a3e tidy(ui): CanvasLayer 2024-09-06 21:27:35 +10:00
psychedelicious
f8af1e9014 tidy(ui): CanvasInpaintMask 2024-09-06 21:27:35 +10:00
psychedelicious
2b34a5c646 tidy(ui): CanvasInitialImage 2024-09-06 21:27:35 +10:00
psychedelicious
22ca3db870 tidy(ui): CanvasImage 2024-09-06 21:27:35 +10:00
psychedelicious
247ca97fbd tidy(ui): CanvasEraserLine 2024-09-06 21:27:35 +10:00
psychedelicious
b346b25a7b tidy(ui): CanvasControlAdapter 2024-09-06 21:27:35 +10:00
psychedelicious
496cf3da4f tidy(ui): CanvasBrushLine 2024-09-06 21:27:35 +10:00
psychedelicious
f1b0130389 tidy(ui): CanvasBbox 2024-09-06 21:27:35 +10:00
psychedelicious
39db3be151 tidy(ui): CanvasBackground 2024-09-06 21:27:35 +10:00
psychedelicious
517ad7e77c tidy(ui): update canvas classes, organise location of konva nodes 2024-09-06 21:27:35 +10:00
psychedelicious
5fa10a3f8e feat(ui): add names to all konva objects
Makes troubleshooting much simpler
2024-09-06 21:27:35 +10:00
psychedelicious
aa3986e9f2 fix(ui): do not await creating new canvas image
If you await this, it causes a race condition where multiple images are created.
2024-09-06 21:27:35 +10:00
psychedelicious
9105c02681 feat(ui): use position and dimensions instead of separate x,y,width,height attrs 2024-09-06 21:27:35 +10:00
psychedelicious
6632727d00 fix(ui): remove weird rtkq hook wrapper
I do not understand why I did that initially but it doesn't work with TS.
2024-09-06 21:27:35 +10:00
psychedelicious
34ccd5aa86 feat(ui): rename types size and position to dimensions and coordinate 2024-09-06 21:27:35 +10:00
psychedelicious
23979bdbee tidy(ui): hide layer settings by default 2024-09-06 21:27:35 +10:00
psychedelicious
dda53292bf fix(ui): layer rendering when starting as disabled 2024-09-06 21:27:35 +10:00
psychedelicious
c98c5f13f7 feat(invocation): reduce canvas v2 mask & crop mask dilation 2024-09-06 21:27:35 +10:00
psychedelicious
2a69967863 feat(ui): de-jank staging area and progress images 2024-09-06 21:27:35 +10:00
psychedelicious
781ef806de feat(ui): update staging handling to work w/ cropped mask 2024-09-06 21:27:35 +10:00
psychedelicious
8794c51e42 chore(ui): typegen 2024-09-06 21:27:35 +10:00
psychedelicious
9e89ddf2f1 feat(app): update CanvasV2MaskAndCropInvocation 2024-09-06 21:27:35 +10:00
psychedelicious
f480a89e7a feat(ui): use new canvas output node 2024-09-06 21:27:35 +10:00
psychedelicious
df44fb9827 chore(ui): typegen 2024-09-06 21:27:35 +10:00
psychedelicious
fca9cacc4e feat(app): add CanvasV2MaskAndCropInvocation & CanvasV2MaskAndCropOutput
This handles some masking and cropping that the canvas needs.
2024-09-06 21:27:35 +10:00
psychedelicious
fa7783972b fix(ui): restore nodes output tracking 2024-09-06 21:27:35 +10:00
psychedelicious
e50b4280f2 feat(ui): rip out document size
barely knew ye
2024-09-06 21:27:35 +10:00
psychedelicious
17c9ccfdb0 feat(ui): convert initial image to layer when starting canvas session 2024-09-06 21:27:35 +10:00
psychedelicious
d35af4c048 fix(ui): fix layer transparency calculation 2024-09-06 21:27:35 +10:00
psychedelicious
5fef9bbceb fix(ui): reset initial image when resetting canvas 2024-09-06 21:27:35 +10:00
psychedelicious
507acaa7c9 fix(ui): reset node executions states when loading workflow 2024-09-06 21:27:35 +10:00
psychedelicious
28eb9b62a8 fix(ui): entity display list 2024-09-06 21:27:35 +10:00
psychedelicious
061eeb809f feat(ui): img2img working 2024-09-06 21:27:35 +10:00
psychedelicious
0db5c6ac8e feat(ui): rough out img2img on canvas 2024-09-06 21:27:35 +10:00
psychedelicious
2fd6fd4624 UNDO ME WIP 2024-09-06 21:27:35 +10:00
psychedelicious
db67ae2de4 feat(ui): log invocation source id on socket event 2024-09-06 21:27:35 +10:00
psychedelicious
fedcebbe4d feat(ui): restore document size overlay renderer 2024-09-06 21:27:35 +10:00
psychedelicious
eaaeb356d7 feat(ui): make document size a rect 2024-09-06 21:27:35 +10:00
psychedelicious
0cddbc2420 refactor(ui): remove modular imagesize components
These components are no longer necessary with canvas v2, and they added a ton of extraneous redux actions when changing the image size. Also renamed image size to document size
2024-09-06 21:27:35 +10:00
psychedelicious
1b44520ff7 feat(ui): initialState is for generation mode 2024-09-06 21:27:35 +10:00
psychedelicious
ee4dc86c15 feat(ui): split out canvas entity list component 2024-09-06 21:27:35 +10:00
psychedelicious
cbc61daa56 feat(ui): hide bbox button when no canvas session active 2024-09-06 21:27:35 +10:00
psychedelicious
f1cd6a06ec tidy(ui): remove unused naming objects/utils
The canvas manager means we don't need to worry about konva node names as we never directly select konva nodes.
2024-09-06 21:27:35 +10:00
psychedelicious
70248b9684 feat(ui): split up tool chooser buttons
Prep for distinct toolbars for generation vs canvas modes
2024-09-06 21:27:35 +10:00
psychedelicious
0a5cf8ae42 feat(ui): add useAssertSingleton util hook
This simple hook asserts that it is only ever called once. Particularly useful for things like hotkeys hooks.
2024-09-06 21:27:35 +10:00
psychedelicious
4dc5b18671 feat(ui): "stagingArea" -> "session" 2024-09-06 21:27:35 +10:00
psychedelicious
daffb950c3 feat(ui): add reset button to canvas 2024-09-06 21:27:35 +10:00
psychedelicious
1a6ebb9606 feat(ui): add snapToRect util 2024-09-06 21:27:35 +10:00
psychedelicious
e397efb622 fix(ui): fiddle with control adapter filters
some jank still
2024-09-06 21:27:35 +10:00
psychedelicious
748cb0bb8d feat(ui): temp disable doc size overlay 2024-09-06 21:27:35 +10:00
psychedelicious
233a449e53 feat(ui): no animation on layer selection
Felt sluggish
2024-09-06 21:27:35 +10:00
psychedelicious
09ad8b6238 feat(ui): use canvas as source for control images (wip) 2024-09-06 21:27:35 +10:00
psychedelicious
778f703d3d fix(ui): control adapter translate & scale 2024-09-06 21:27:35 +10:00
psychedelicious
8234e4e168 tidy(ui): removed unused state related to non-buffered drawing 2024-09-06 21:27:35 +10:00
psychedelicious
08e588a6c4 feat(ui): control adapter image rendering 2024-09-06 21:27:35 +10:00
psychedelicious
43f550e48b fix(ui): do not floor bbox calc, it cuts off the last pixels 2024-09-06 21:27:35 +10:00
psychedelicious
5663029ba6 feat(ui): fix issue where creating line needs 2 points 2024-09-06 21:27:35 +10:00
psychedelicious
0869e23dc1 fix(ui): edge cases when holding shift and drawing lines 2024-09-06 21:27:35 +10:00
psychedelicious
fc4779e095 fix(ui): set buffered rect color to full alpha 2024-09-06 21:27:35 +10:00
psychedelicious
1667603d1c fix(ui): handle mouseup correctly 2024-09-06 21:27:35 +10:00
psychedelicious
81b210cf14 feat(ui): buffered rect drawing 2024-09-06 21:27:35 +10:00
psychedelicious
ca5fde8ef5 fix(ui): buffered drawing edge cases 2024-09-06 21:27:35 +10:00
psychedelicious
fef88734d0 perf(ui): do not use stage.find 2024-09-06 21:27:35 +10:00
psychedelicious
aa1727d16f perf(ui): object groups do not listen 2024-09-06 21:27:35 +10:00
psychedelicious
7c1afb6493 perf(ui): buffered drawing (wip) 2024-09-06 21:27:35 +10:00
psychedelicious
64e7757872 tidy(ui): organise files 2024-09-06 21:27:35 +10:00
psychedelicious
3b08250331 tidy(ui): organise files 2024-09-06 21:27:35 +10:00
psychedelicious
cd02638db6 tidy(ui): organise files 2024-09-06 21:27:35 +10:00
psychedelicious
5109811182 fix(ui): background rendering 2024-09-06 21:27:35 +10:00
psychedelicious
33ba8cabd1 pkg(ui): remove unused deps react-konva & use-image 2024-09-06 21:27:35 +10:00
psychedelicious
dcac6028a2 feat(ui): organize konva state and files 2024-09-06 21:27:35 +10:00
psychedelicious
25ab435129 fix(ui): merge conflicts in image deletion listener 2024-09-06 21:27:35 +10:00
psychedelicious
cea5dc6216 fix(ui): region rendering 2024-09-06 21:27:35 +10:00
psychedelicious
bc525d29e1 fix(ui): inpaint mask rendering 2024-09-06 21:27:35 +10:00
psychedelicious
cdfe0ca150 fix(ui): staging area rendering 2024-09-06 21:27:35 +10:00
psychedelicious
bd679e018d fix(ui): stale selected entity 2024-09-06 21:27:35 +10:00
psychedelicious
a6ee18448a fix(ui): staging area image offset 2024-09-06 21:27:35 +10:00
psychedelicious
7d96b3e89e feat(ui): tweak layer ui component 2024-09-06 21:27:35 +10:00
psychedelicious
e6723f194a fix(ui): resetting layer resets position 2024-09-06 21:27:35 +10:00
psychedelicious
e55541ea87 feat(ui): updated layer list component styling 2024-09-06 21:27:35 +10:00
psychedelicious
2676ff8ee3 feat(ui): transformable layers 2024-09-06 21:27:34 +10:00
psychedelicious
f024fe4488 feat(ui): move tool icon is pointer like in other apps 2024-09-06 21:27:34 +10:00
psychedelicious
0d3dfb8d0f feat(ui): do not floor cursor position 2024-09-06 21:27:34 +10:00
psychedelicious
65e1951f5d feat(ui): disable gallery hotkeys while staging 2024-09-06 21:27:34 +10:00
psychedelicious
fd6eb91f79 feat(ui): revised canvas progress & staging image handling 2024-09-06 21:27:34 +10:00
psychedelicious
2aea0f2ac5 feat(ui): show queue item origin in queue list 2024-09-06 21:27:34 +10:00
psychedelicious
e1b5aa7011 chore(ui): typegen 2024-09-06 21:27:34 +10:00
psychedelicious
cdc4d29745 feat(app): add origin to session queue
The origin is an optional field indicating the queue item's origin. For example, "canvas" when the queue item originated from the canvas or "workflows" when the queue item originated from the workflows tab. If omitted, we assume the queue item originated from the API directly.

- Add migration to add the nullable column to the `session_queue` table.
- Update relevant event payloads with the new field.
- Add `cancel_by_origin` method to `session_queue` service and corresponding route. This is required for the canvas to bail out early when staging images.
- Add `origin` to both `SessionQueueItem` and `Batch` - it needs to be provided initially via the batch and then passed onto the queue item.
2024-09-06 21:27:34 +10:00
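[Editor's note: a minimal sketch of what the migration and the origin-based cancellation can look like, assuming a SQLite `session_queue` table with `queue_id` and `status` columns resembling the real schema — names here are illustrative, not the actual service code.]

```python
import sqlite3

def migrate_add_origin(conn: sqlite3.Connection) -> None:
    # A nullable column needs no default or backfill: existing rows read as
    # NULL, which is interpreted as "enqueued directly via the API".
    conn.execute("ALTER TABLE session_queue ADD COLUMN origin TEXT;")

def cancel_by_origin(conn: sqlite3.Connection, queue_id: str, origin: str) -> int:
    # Hypothetical sketch of the service method: cancel pending/in-progress
    # items for one origin (e.g. "canvas") without touching other queue items.
    cur = conn.execute(
        "UPDATE session_queue SET status = 'canceled' "
        "WHERE queue_id = ? AND origin = ? AND status IN ('pending', 'in_progress');",
        (queue_id, origin),
    )
    return cur.rowcount
```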
psychedelicious
5d00792e1f fix(ui): denoise start on outpainting 2024-09-06 21:27:34 +10:00
psychedelicious
1c334f3231 feat(ui): add redux events for queue cleared & batch enqueued socket events 2024-09-06 21:27:34 +10:00
psychedelicious
9ba3182d19 feat(ui): canvas staging area works 2024-09-06 21:27:34 +10:00
psychedelicious
285ad448d7 feat(ui): switch to view tool when staging 2024-09-06 21:27:34 +10:00
psychedelicious
e81fedba43 tidy(ui): disable preview images on every enqueue 2024-09-06 21:27:34 +10:00
psychedelicious
6a47f973b7 feat(ui): rough out save staging image 2024-09-06 21:27:34 +10:00
psychedelicious
7aa918cd46 feat(ui): staging area image visibility toggle 2024-09-06 21:27:34 +10:00
psychedelicious
183c9dd736 fix(ui): batch building after removing canvas files 2024-09-06 21:27:34 +10:00
psychedelicious
5621075cb7 feat(ui): make Graph class's getMetadataNode public 2024-09-06 21:27:34 +10:00
psychedelicious
b13d2087c2 tidy(ui): remove old canvas graphs 2024-09-06 21:27:34 +10:00
psychedelicious
89740af2ab fix(ui): do not select already-selected entity 2024-09-06 21:27:34 +10:00
psychedelicious
4329dfd128 tidy(ui): naming things 2024-09-06 21:27:34 +10:00
psychedelicious
bb18a82a9c tidy(ui): file organisation 2024-09-06 21:27:34 +10:00
psychedelicious
d073fe467d fix(ui): reset cursor pos when fitting document 2024-09-06 21:27:34 +10:00
psychedelicious
e37e885546 feat(ui): staging area works more better 2024-09-06 21:27:34 +10:00
psychedelicious
471ded85f7 feat(ui): staging area barely works 2024-09-06 21:27:34 +10:00
psychedelicious
e084655e69 feat(ui): consolidate konva API 2024-09-06 21:27:34 +10:00
psychedelicious
330acb55f4 feat(ui): consolidate konva API 2024-09-06 21:27:34 +10:00
psychedelicious
e1ace99e05 feat(ui): staging area (rendering wip) 2024-09-06 21:27:34 +10:00
psychedelicious
c0bfa07ea7 tidy(ui): type "Dimensions" -> "Size" 2024-09-06 21:27:34 +10:00
psychedelicious
f4fceac372 feat(ui): add updateNode to Graph 2024-09-06 21:27:34 +10:00
psychedelicious
c1f5345987 feat(ui): sdxl graphs 2024-09-06 21:27:34 +10:00
psychedelicious
05992130d0 feat(ui): sd1 outpaint graph 2024-09-06 21:27:34 +10:00
psychedelicious
9c646712e0 tests(ui): add missing tests for Graph class 2024-09-06 21:27:34 +10:00
psychedelicious
0edff49957 feat(ui): add Graph.getid() util 2024-09-06 21:27:34 +10:00
psychedelicious
0f709cb06a feat(ui): outpaint graph, organize builder a bit 2024-09-06 21:27:34 +10:00
psychedelicious
656dbbb9f1 feat(ui): inpaint sd1 graph 2024-09-06 21:27:34 +10:00
psychedelicious
9497a75c95 feat(ui): temp disable image caching while testing 2024-09-06 21:27:34 +10:00
psychedelicious
30c4ed87b5 feat(ui): txt2img & img2img graphs 2024-09-06 21:27:34 +10:00
psychedelicious
e839765ddc feat(ui): minor change to canvas bbox state type 2024-09-06 21:27:34 +10:00
psychedelicious
055737a6e8 feat(ui): simplified konva node to blob/imagedata utils 2024-09-06 21:27:34 +10:00
psychedelicious
fbc609230a feat(ui): node manager getter/setter 2024-09-06 21:27:34 +10:00
psychedelicious
46f86a54c1 feat(ui): generation mode calculation, fudged graphs 2024-09-06 21:27:34 +10:00
psychedelicious
70f5231020 feat(ui): add utils for getting images from canvas 2024-09-06 21:27:34 +10:00
psychedelicious
0ec0feed2c feat(ui): even more simplified API - lean on the konva node manager to abstract imperative state API & rendering 2024-09-06 21:27:34 +10:00
psychedelicious
60cd505ee1 feat(ui): revised docstrings for renderers & simplified api 2024-09-06 21:27:34 +10:00
psychedelicious
205a719649 feat(ui): inpaint mask UI components 2024-09-06 21:27:34 +10:00
psychedelicious
af7d222a1e feat(ui): inpaint mask rendering (wip) 2024-09-06 21:27:34 +10:00
psychedelicious
70f430f635 fix(ui): models loaded handler 2024-09-06 21:27:34 +10:00
psychedelicious
2a92a223f6 feat(ui): internal state for inpaint mask 2024-09-06 21:27:34 +10:00
psychedelicious
1ed43614c2 refactor(ui): divvy up canvas state a bit 2024-09-06 21:27:34 +10:00
psychedelicious
3a5295574f feat(ui): get region and base layer canvas to blob logic working 2024-09-06 21:27:34 +10:00
psychedelicious
475b1cb1b8 refactor(ui): node manager handles more tedious annoying stuff 2024-09-06 21:27:34 +10:00
psychedelicious
55260a886d feat(ui): use node manager for addRegions 2024-09-06 21:27:34 +10:00
psychedelicious
f77e03ceb0 feat(ui): persist bbox 2024-09-06 21:27:34 +10:00
psychedelicious
4f7bf5ad58 fix(ui): fix generation graphs 2024-09-06 21:27:34 +10:00
psychedelicious
460ea1aa07 feat(ui): add toggle for clipToBbox 2024-09-06 21:27:34 +10:00
psychedelicious
7aa5f5cb80 feat(ui): rename konva node manager 2024-09-06 21:27:34 +10:00
psychedelicious
9cc4133184 refactor(ui): create classes to abstract mgmt of konva nodes 2024-09-06 21:27:34 +10:00
psychedelicious
0d3721324d tidy(ui): organise renderers 2024-09-06 21:27:34 +10:00
psychedelicious
510249d282 refactor(ui): create entity to konva node map abstraction (wip)
Instead of chaining konva `find` and `findOne` methods, all konva nodes are added to a mapping object. Finding and manipulating them is much simpler.

Done for regions and layers, wip for control adapters.
2024-09-06 21:27:34 +10:00
psychedelicious
77d840593b perf(ui): fix lag w/ region rendering
Needed to memoize these selectors
2024-09-06 21:27:34 +10:00
psychedelicious
8c9a5c4ab5 feat(ui): move canvas fill color picker to right 2024-09-06 21:27:34 +10:00
psychedelicious
ce497fff27 refactor(ui): remove unused ellipse & polygon objects 2024-09-06 21:27:34 +10:00
psychedelicious
c3cada6bd5 fix(ui): incorrect rect/brush/eraser positions 2024-09-06 21:27:34 +10:00
psychedelicious
bbacfe403c refactor(ui): enable global debugging flag 2024-09-06 21:27:34 +10:00
psychedelicious
159031a071 refactor(ui): disable the preview renderer for now 2024-09-06 21:27:34 +10:00
psychedelicious
4067927a23 tweak(ui): canvas editor layout 2024-09-06 21:27:34 +10:00
psychedelicious
d090343083 perf(ui): memoize layeractionsmenu valid actions 2024-09-06 21:27:34 +10:00
psychedelicious
ed130366db refactor(ui): decouple konva renderer from react
Subscribe to redux store directly, skipping all the react overhead.

With react in dev mode, a typical frame while using the brush tool on almost-empty canvas is reduced from ~7.5ms to ~3.5ms. All things considered, this still feels slow, but it's a massive improvement.
2024-09-06 21:27:34 +10:00
psychedelicious
d4ae40fec4 feat(ui): clip lines to bbox 2024-09-06 21:27:34 +10:00
psychedelicious
17c864dba3 fix(ui): document fit positioning 2024-09-06 21:27:34 +10:00
psychedelicious
398d6d1efd feat(ui): document bounds overlay 2024-09-06 21:27:34 +10:00
psychedelicious
30b46b0f9b tidy(ui): background layer 2024-09-06 21:27:34 +10:00
psychedelicious
2d123fa11c refactor(ui): use "entity" instead of "data" for canvas 2024-09-06 21:27:34 +10:00
psychedelicious
352a651ac0 feat(ui): brush size border radius = 1 2024-09-06 21:27:34 +10:00
psychedelicious
26b971978d fix(ui): canvas HUD doesn't interrupt tool 2024-09-06 21:27:34 +10:00
psychedelicious
478324ea62 refactor(ui): split up canvas entity renderers, temp disable preview 2024-09-06 21:27:34 +10:00
psychedelicious
fa447b2813 fix(ui): delete all layers button 2024-09-06 21:27:34 +10:00
psychedelicious
322790bfdb fix(ui): ignore keyboard shortcuts in input/textarea elements 2024-09-06 21:27:34 +10:00
psychedelicious
9ea7c18d66 fix(ui): canvas entity ids getting clobbered 2024-09-06 21:27:34 +10:00
psychedelicious
2a0d16d57d fix(ui): move lora followup fixes 2024-09-06 21:27:34 +10:00
psychedelicious
033e2b27a9 chore(ui): lint 2024-09-06 21:27:34 +10:00
psychedelicious
c7af454bf5 refactor(ui): move loras to canvas slice 2024-09-06 21:27:34 +10:00
psychedelicious
d2be9df7a7 fix(ui): layer is selected when added 2024-09-06 21:27:34 +10:00
psychedelicious
ae81f4e20a feat(ui): r to center & fit stage on document 2024-09-06 21:27:34 +10:00
psychedelicious
261c24b704 feat(ui): better HUD 2024-09-06 21:27:34 +10:00
psychedelicious
cc1995170e fix(ui): always use current brush width when making straight lines 2024-09-06 21:27:34 +10:00
psychedelicious
86c785fded feat(ui): hold shift w/ brush to draw straight line 2024-09-06 21:27:34 +10:00
psychedelicious
31c2b1af19 fix(ui): update bg on canvas resize 2024-09-06 21:27:34 +10:00
psychedelicious
90b973f529 refactor(ui): better hud 2024-09-06 21:27:34 +10:00
psychedelicious
1164763e1a refactor(ui): scaled tool preview border 2024-09-06 21:27:34 +10:00
psychedelicious
b4319630b1 refactor(ui): port remaining canvasV1 rendering logic to V2, remove old code 2024-09-06 21:27:34 +10:00
psychedelicious
0586c6bdf2 refactor(ui): fix more types 2024-09-06 21:27:34 +10:00
psychedelicious
c2f60f33e6 refactor(ui): metadata recall (wip)
just enough to let the app run
2024-09-06 21:27:34 +10:00
psychedelicious
b118bc959d refactor(ui): undo/redo button temp fix 2024-09-06 21:27:34 +10:00
psychedelicious
6ef5c2e0c3 refactor(ui): fix renderer stuff 2024-09-06 21:27:34 +10:00
psychedelicious
728fd5b758 refactor(ui): fix misc types 2024-09-06 21:27:34 +10:00
psychedelicious
5bae455e4e refactor(ui): fix gallery stuff 2024-09-06 21:27:34 +10:00
psychedelicious
9387903491 refactor(ui): fix delete image stuff 2024-09-06 21:27:34 +10:00
psychedelicious
12b67bd556 refactor(ui): fix useIsReadyToEnqueue for new adapterType field 2024-09-06 21:27:34 +10:00
psychedelicious
9a30bcbd94 refactor(ui): update generation tab graphs 2024-09-06 21:27:34 +10:00
psychedelicious
ab0f096bf2 refactor(ui): add adapterType to ControlAdapterData 2024-09-06 21:27:34 +10:00
psychedelicious
2d8a29d2fd refactor(ui): update components & logic to use new unified slice (again) 2024-09-06 21:27:34 +10:00
psychedelicious
8d7de1543b refactor(ui): update components & logic to use new unified slice 2024-09-06 21:27:34 +10:00
psychedelicious
b1b41a9b0c refactor(ui): merge compositing, params into canvasV2 slice 2024-09-06 21:27:34 +10:00
psychedelicious
e9033040f6 refactor(ui): add scaled bbox state 2024-09-06 21:27:34 +10:00
psychedelicious
4388f00607 refactor(ui): update dnd/image upload 2024-09-06 21:27:34 +10:00
psychedelicious
9b8db06349 refactor(ui): update size/prompts state 2024-09-06 21:27:34 +10:00
psychedelicious
a4f55f6e5d refactor(ui): rip out old control adapter implementation 2024-09-06 21:27:34 +10:00
psychedelicious
d0cde66e92 refactor(ui): canvas v2 (wip)
fix entity count select
2024-09-06 21:27:34 +10:00
psychedelicious
f636c0eb88 refactor(ui): canvas v2 (wip)
delete unused file
2024-09-06 21:27:34 +10:00
psychedelicious
5760d3180e refactor(ui): canvas v2 (wip)
merge all canvas state reducers into one big slice (but with the logic split across files so it's not hell)
2024-09-06 21:27:34 +10:00
psychedelicious
8c4f98131b refactor(ui): canvas v2 (wip)
Fix a few more components
2024-09-06 21:27:34 +10:00
psychedelicious
959a433e61 refactor(ui): canvas v2 (wip)
missed a spot
2024-09-06 21:27:34 +10:00
psychedelicious
f72845a1b4 refactor(ui): canvas v2 (wip)
Redo all UI components for different canvas entity types
2024-09-06 21:27:34 +10:00
psychedelicious
254f4ba574 refactor(ui): canvas v2 (wip) 2024-09-06 21:27:34 +10:00
psychedelicious
d6f3b1b85f refactor(ui): canvas v2 (wip) 2024-09-06 21:27:34 +10:00
psychedelicious
47b5b7c4b4 refactor(ui): canvas v2 (wip) 2024-09-06 21:27:34 +10:00
psychedelicious
bcfdae62e3 refactor(ui): canvas v2 (wip) 2024-09-06 21:27:34 +10:00
psychedelicious
c60f1c0031 feat(ui): bbox tool 2024-09-06 21:27:34 +10:00
psychedelicious
d76802f563 fix(ui): rect tool preview 2024-09-06 21:27:34 +10:00
psychedelicious
7b39b31f6c fix(ui): multiple stages 2024-09-06 21:27:34 +10:00
psychedelicious
a2585a8bb1 feat(ui): decouple konva logic from nanostores 2024-09-06 21:27:34 +10:00
psychedelicious
fe0c4767c7 feat(ui): store all stage attrs in nanostores 2024-09-06 21:27:34 +10:00
psychedelicious
e0b8b82a15 feat(ui): round stage scale 2024-09-06 21:27:34 +10:00
psychedelicious
f90fa85e77 chore(ui): bump konva 2024-09-06 21:27:34 +10:00
psychedelicious
6d5b4e4471 feat(ui): generation bbox transformation working
whew
2024-09-06 21:27:34 +10:00
psychedelicious
1172a33117 feat(ui): wip generation bbox 2024-09-06 21:27:34 +10:00
psychedelicious
8530f8ddcc feat(ui): wip generation bbox 2024-09-06 21:27:34 +10:00
psychedelicious
e66e4fefed feat(ui): CL zoom and pan, some rendering optimizations 2024-09-06 21:27:34 +10:00
psychedelicious
1237e839ca Revert "feat(ui): add x,y,scaleX,scaleY,rotation to objects"
This reverts commit 53318b396c967c72326a7e4dea09667b2ab20bdd.
2024-09-06 21:27:34 +10:00
psychedelicious
8ec08063f4 feat(ui): layers manage their own bbox 2024-09-06 21:27:34 +10:00
psychedelicious
0d0004018b docs(ui): konva image object docstrings 2024-09-06 21:27:34 +10:00
psychedelicious
9124604dc4 feat(ui): add x,y,scaleX,scaleY,rotation to objects 2024-09-06 21:27:34 +10:00
psychedelicious
4f05a7b8d0 fix(ui): show color picker when using rect tool 2024-09-06 21:27:34 +10:00
psychedelicious
db766bd9ae feat(ui): image loading fallback for raster layers 2024-09-06 21:27:34 +10:00
psychedelicious
557603d2ae feat(ui): bbox calc for raster layers 2024-09-06 21:27:34 +10:00
psychedelicious
b46ca55c7d feat(ui): do not fill brush preview when drawing 2024-09-06 21:27:33 +10:00
psychedelicious
4788172206 fix(ui): brush spacing handling 2024-09-06 21:27:33 +10:00
psychedelicious
6a118f172e fix(ui): jank when starting a shape when not already focused on stage 2024-09-06 21:27:33 +10:00
psychedelicious
d5abdfa3b0 feat(ui): wip raster layers
I meant to split this up into smaller commits and undo some of it, but I committed afterwards and it's tedious to undo.
2024-09-06 21:27:33 +10:00
psychedelicious
1d24cb94b4 feat(ui): support image objects on raster layers
Just the UI and internal state, not rendering yet.
2024-09-06 21:27:33 +10:00
psychedelicious
86e7f24238 tidy(ui): clean up event handlers
Separate logic for each tool in preparation for ellipse and polygon tools.
2024-09-06 21:27:33 +10:00
psychedelicious
0356f970f3 feat(ui): raster layer reset, object group util 2024-09-06 21:27:33 +10:00
psychedelicious
d340038f48 feat(ui): rect shape preview now has fill 2024-09-06 21:27:33 +10:00
psychedelicious
1d5e8e07c6 feat(ui): cancel shape drawing on esc 2024-09-06 21:27:33 +10:00
psychedelicious
24f17479de feat(ui): temp disable history on CL 2024-09-06 21:27:33 +10:00
psychedelicious
84b8096cc9 feat(ui): raster layer logic
- Deduplicate shared logic
- Split up giant renderers file into separate cohesive files
- Tons of cleanup
- Progress on raster layer functionality
2024-09-06 21:27:33 +10:00
psychedelicious
5c38241e33 feat(ui): add raster layer rendering and interaction (WIP) 2024-09-06 21:27:33 +10:00
psychedelicious
b590c73c08 feat(ui): scaffold out raster layers
Raster layers may have images, lines and shapes. These will replace initial image layers and provide sketching functionality like we have on canvas.
2024-09-06 21:27:33 +10:00
psychedelicious
68fcf9d2df refactor(ui): revise types for line and rect objects
- Create separate object types for brush and eraser lines, instead of a single type that has a `tool` field.
- Create new object type for rect shapes.
- Add logic to schemas to migrate old object types to new.
- Update renderers & reducers.
2024-09-06 21:27:33 +10:00
Brandon Rising
bda579577c chore: 4.2.9 version bump 2024-09-05 16:17:48 -04:00
Brandon Rising
a16b555d47 Simplify flux model dtype conversion in model loader 2024-09-05 15:47:14 -04:00
Brandon Rising
6667c39c73 Remove dependency of asizeof 2024-09-05 15:47:14 -04:00
Brandon Rising
5219ac12a6 Add comment explaining the cache make room call 2024-09-05 15:47:14 -04:00
Brandon Rising
445f813fb9 Update flux transformer loader to more efficiently use and release memory during upcasting 2024-09-05 15:47:14 -04:00
Brandon Rising
87f9e59cfb Cast tensors in unquantized flux models to bfloat16 during loading 2024-09-05 15:47:14 -04:00
Phrixus2023
8b03b39aa8 translationBot(ui): update translation (Chinese (Simplified Han script))
Currently translated at 97.6% (1342 of 1374 strings)

Co-authored-by: Phrixus2023 <920414016@qq.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/zh_Hans/
Translation: InvokeAI/Web UI
2024-09-05 15:34:13 -04:00
Tobias Lechner
e59b6bb971 translationBot(ui): update translation (German)
Currently translated at 63.3% (870 of 1374 strings)

Co-authored-by: Tobias Lechner <me@tobias-lechner.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
2024-09-05 15:34:13 -04:00
Riccardo Giovanetti
24a7ed467c translationBot(ui): update translation (Italian)
Currently translated at 98.2% (1350 of 1374 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.2% (1350 of 1374 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.2% (1350 of 1374 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.4% (1349 of 1370 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.4% (1348 of 1369 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2024-09-05 15:34:13 -04:00
Васянатор
f01f1033ac translationBot(ui): update translation (Russian)
Currently translated at 100.0% (1370 of 1370 strings)

translationBot(ui): update translation (Russian)

Currently translated at 100.0% (1369 of 1369 strings)

Co-authored-by: Васянатор <ilabulanov339@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/
Translation: InvokeAI/Web UI
2024-09-05 15:34:13 -04:00
smk-e
d35f515413 translationBot(ui): update translation (Spanish)
Currently translated at 33.0% (452 of 1369 strings)

Co-authored-by: smk-e <jit-r8@outlook.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/es/
Translation: InvokeAI/Web UI
2024-09-05 15:34:13 -04:00
Brandon Rising
125b459e56 chore: 4.2.9rc2 version bump 2024-09-04 10:42:16 -04:00
Brandon Rising
33edee1ba6 Delete all flux bundle state dict keys when extracting the transformer state dict 2024-09-04 09:36:23 -04:00
Brandon Rising
d20335dabc convert_bundle_to_flux_transformer_checkpoint now removes processed keys to decrease memory usage 2024-09-04 09:36:23 -04:00
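[Editor's note: popping each key as it is converted means the source tensor and its converted copy never coexist across the whole state dict, roughly halving peak memory. A generic sketch of the pattern, not the actual converter:]

```python
import torch

def convert_state_dict(src: dict[str, torch.Tensor]) -> dict[str, torch.Tensor]:
    out: dict[str, torch.Tensor] = {}
    for key in list(src.keys()):
        tensor = src.pop(key)  # release the source entry immediately
        out[key] = tensor.to(torch.bfloat16)
    return out
```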
Brandon Rising
d10d258213 Add a comment for why we're converting scale tensors in flux models to bfloat16 2024-09-04 09:36:23 -04:00
Brandon
d57ba1ed8b Update invokeai/backend/model_manager/probe.py
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
2024-09-04 09:36:23 -04:00
Brandon Rising
2d0e34e57b Support non-quantized bundles 2024-09-04 09:36:23 -04:00
Brandon Rising
a005d06255 feat: support checkpoint bundles containing more than just the transformer 2024-09-04 09:36:23 -04:00
Eugene Brodsky
a301ef5a5a chore(ci): update github action version pins in container build workflow 2024-09-03 16:01:58 -04:00
Eugene Brodsky
9422df2737 feat(ci): enable a checkbox to push the container image when manually building via workflow dispatch 2024-09-03 16:01:58 -04:00
Lincoln Stein
6dabe4d3ca assign T5 encoder to base type "Any" 2024-09-03 15:55:51 -04:00
Lincoln Stein
00e4652d30 add more reliable fallback method for determining BnbQuantizedLlmInt8b 2024-09-03 15:55:51 -04:00
Lincoln Stein
b6434c5318 correct modelformat probe for t5 encoders 2024-09-03 15:55:51 -04:00
Lincoln Stein
3f7f9f8d61 add probes for T5_encoder and ClipTextModel 2024-09-03 15:55:51 -04:00
Brandon Rising
f3bb592544 Update latents used for preview images in flux 2024-09-03 14:04:16 -04:00
Brandon Rising
69f080fb75 Move flux step callback code into the step_callback util scripts, use other services within the invocation context 2024-09-03 14:04:16 -04:00
Brandon Rising
04272a7cc8 Initial attempt at preview images 2024-09-03 14:04:16 -04:00
Lincoln Stein
8d35af946e [MM] add API routes for getting & setting MM cache sizes (#6523)
* [MM] add API routes for getting & setting MM cache sizes, and retrieving MM stats

* Update invokeai/app/api/routers/model_manager.py

Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>

* code cleanup after @ryand review

* Update invokeai/app/api/routers/model_manager.py

Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>

* fix merge conflicts; tested and working

---------

Co-authored-by: Lincoln Stein <lstein@gmail.com>
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
2024-09-02 12:18:21 -04:00
Ryan Dick
24065ec6b6 Add FLUX image-to-image and inpainting (#6798)
## Summary

This PR adds support for Image-to-Image and inpainting workflows with
the FLUX model.

Full changelog:
- Split out `FLUX VAE Encode` and `FLUX VAE Decode` nodes
- Renamed `FLUX Text-to-Image` node to `FLUX Denoise` (since it now
supports image-to-image too). This is a workflow-breaking change.
- Added support for FLUX image-to-image via the `Latents` param on the
FLUX denoising node.
- Added support for FLUX masked inpainting via the `Denoise Mask` param
on the FLUX denoising node.
- Added "Denoise Start" and "Denoise End" params to the "FLUX Denoise"
node.
- Updated the "FLUX Text to Image" default workflow.
- Added a "FLUX Image to Image" default workflow.

### Example

FLUX inpainting workflow
<img width="1282" alt="image"
src="https://github.com/user-attachments/assets/86fc1170-e620-4412-8fd8-e119f875fc2e">

Input image

![image](https://github.com/user-attachments/assets/9c381b86-9f87-4257-bd2e-da22c56ca26c)

Mask

![image](https://github.com/user-attachments/assets/8f774c5c-2a25-45fe-9d4b-b233e3d58d2c)

Output image

![image](https://github.com/user-attachments/assets/8576a630-24ce-4a00-8052-e86bab59c855)


### Callouts for reviewers:
- I renamed FLUXTextToImageInvocation -> FLUXDenoisingInvocation. This
is, of course, a breaking change. It feels like the right move and now
is the right time to do it. Any objection?
- I added new `FLUX VAE Encode` and `FLUX VAE Decode` nodes.
Alternatively, I could have tried to match these names to the
corresponding SD nodes (e.g. `FLUX Image to Latents`, `FLUX Latents to
Image`). Personally, I prefer the current names, but want to hear other
opinions.

### Usage notes:
- With the default dev timestep scheduler, the image structure is
largely determined in the first ~3 steps. A consequence of this is that
the denoise_start parameter provides limited 'granularity' of control.
This will likely be improved in the future as we add more scheduler
options. In the meantime, you will likely want to use small values for
`denoise_start` (e.g. 0.03) to start denoising on step ~1-4 out of ~30.
- Currently, there is no 'noise' parameter on the `FLUX Denoise` node,
so the `denoise_end` parameter has limited utility. This will be added
in the future.

## QA Instructions

Test the following workflows:
- [x] Vanilla FLUX text-to-image behaviour is unchanged
- [x] Image-to-image with FLUX dev, no mask
- [x] Image-to-image with FLUX dev, with mask
- [x] Image-to-image with FLUX schnell, no mask (smoke test, not
expected to work well)

## Merge Plan

No special instructions.

## Checklist

- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [x] _Tests added / updated (if applicable)_
- [x] _Documentation added / updated (if applicable)_
2024-09-02 09:50:31 -04:00
Ryan Dick
627b0bf644 Expose all FLUX model params in the default FLUX models. 2024-09-02 09:38:17 -04:00
Ryan Dick
b43da46b82 Rename 'FLUX VAE Encode'/'FLUX VAE Decode' to 'FLUX Image to Latents'/'FLUX Latents to Image' 2024-09-02 09:38:17 -04:00
Ryan Dick
4255a01c64 Restore line that was accidentally removed during development. 2024-09-02 09:38:17 -04:00
Ryan Dick
23adbd4002 Update schema.ts. 2024-09-02 09:38:17 -04:00
Ryan Dick
fb5a24fcc6 Update default workflows for FLUX. 2024-09-02 09:38:17 -04:00
Ryan Dick
cfdd5a1900 Rename flux_text_to_image.py -> flux_denoise.py 2024-09-02 09:38:17 -04:00
Ryan Dick
2313f326df Add denoise_end param to FluxDenoiseInvocation. 2024-09-02 09:38:17 -04:00
Ryan Dick
2e092a2313 Rename FluxTextToImageInvocation -> FluxDenoiseInvocation. 2024-09-02 09:38:17 -04:00
Ryan Dick
763ef06c18 Use the existence of initial latents to decide whether we are doing image-to-image in the FLUX denoising node. Previously we were using the denoising_start value, but in some cases with an inpainting mask you may want to run image-to-image from denoising_start=0. 2024-09-02 09:38:17 -04:00
Ryan Dick
8292f6cd42 Code cleanup and documentation around FLUX inpainting. 2024-09-02 09:38:17 -04:00
Ryan Dick
278bba499e Split FLUX VAE decoding out into its own node from LatentsToImageInvocation. 2024-09-02 09:38:17 -04:00
Ryan Dick
dd99ed28e0 Split FLUX VAE encoding out into its own node from ImageToLatentsInvocation. 2024-09-02 09:38:17 -04:00
Ryan Dick
9a8aca69bf Get a rough version of FLUX inpainting working. 2024-09-02 09:38:17 -04:00
Ryan Dick
7ad62512eb Update MaskTensorToImageInvocation to support input mask tensors with or without a channel dimension. 2024-09-02 09:38:17 -04:00
Ryan Dick
bd466661ec Remove unused vae field from FLUXTextToImageInvocation. 2024-09-02 09:38:17 -04:00
Ryan Dick
7ebb509d05 Bump FLUX node versions after splitting out VAE encode/decode. 2024-09-02 09:38:17 -04:00
Ryan Dick
0aa13c046c Split VAE decoding out from the FLUXTextToImageInvocation. 2024-09-02 09:38:17 -04:00
Ryan Dick
a7a33d73f5 Get FLUX non-masked image-to-image working - still rough. 2024-09-02 09:38:17 -04:00
Ryan Dick
ffa39857d3 Add FLUX VAE decoding support to LatentsToImageInvocation. 2024-09-02 09:38:17 -04:00
Ryan Dick
e85c3bc465 Add FLUX VAE support to ImageToLatentsInvocation. 2024-09-02 09:38:17 -04:00
psychedelicious
8185ba7054 scripts: add allocate_vram script
Allocates the specified amount of VRAM, or allocates enough VRAM such that you have the specified amount of VRAM free.

Useful to simulate an environment with a specific amount of VRAM.
2024-09-02 18:18:26 +10:00
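[Editor's note: reserving VRAM from Python is essentially a one-line torch allocation held alive for the life of the process. A hedged sketch of such a script; the real script's CLI and its "leave this much free" mode are not shown here:]

```python
import torch

def allocate_vram_gb(gb: float) -> torch.Tensor:
    """Hold roughly `gb` GB of VRAM by keeping a tensor referenced (illustrative)."""
    return torch.empty(int(gb * 1024**3), dtype=torch.uint8, device="cuda")

reserved = allocate_vram_gb(4.0)  # simulate 4 GB less free VRAM
input("VRAM allocated - press Enter to release...")
```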
Lincoln Stein
d501865bec add a new FAQ for converting safetensors (#6736)
Co-authored-by: Lincoln Stein <lstein@gmail.com>
2024-08-31 18:56:08 +00:00
Brandon Rising
d62310bb5f Support HF repos with subfolders in source on windows OS 2024-08-30 19:31:42 -04:00
Brandon Rising
1835bff196 Fix source string in hugging face installs with subfolders 2024-08-30 19:31:42 -04:00
Ryan Dick
87261bdbc9 FLUX memory management improvements (#6791)
## Summary

This PR contains several improvements to memory management for FLUX
workflows.

It is now possible to achieve better FLUX model caching performance, but
this still requires users to manually configure their `ram`/`vram`
settings. E.g. a `vram` setting of 16.0 should allow for all quantized
FLUX models to be kept in memory on the GPU.

Changes:
- Check the size of a model on disk and free the requisite space in the
model cache before loading it. (This behaviour existed previously, but
was removed in https://github.com/invoke-ai/InvokeAI/pull/6072/files.
The removal did not seem to be intentional).
- Removed the hack to free 24GB of space in the cache before loading the
FLUX model.
- Split the T5 embedding and CLIP embedding steps into separate
functions so that the two models don't both have to be held in RAM at
the same time.
- Fix a bug in `InvokeLinear8bitLt` that was causing some tensors to be
left on the GPU when the model was offloaded to the CPU. (This class is
getting very messy due to the non-standard state_dict handling in
`bnb.nn.Linear8bitLt`.)
- Tidy up some dtype handling in FluxTextToImageInvocation to avoid
situations where we hold references to two copies of the same tensor
unnecessarily.
- (minor) Misc cleanup of ModelCache: improve docs and remove unused
vars.

Future:
We should revisit our default ram/vram configs. The current defaults are
very conservative, and users could see major performance improvements
from tuning these values.

## QA Instructions

I tested the FLUX workflow with the following configurations and
verified that the cache hit rates and memory usage matched the expected
behaviour:
- `ram = 16` and `vram = 16`
- `ram = 16` and `vram = 1`
- `ram = 1` and `vram = 1`

Note that the changes in this PR are not isolated to FLUX. Since we now
check the size of models on disk, we may see slight changes in model
cache offload patterns for other models as well.

## Checklist

- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [x] _Tests added / updated (if applicable)_
- [x] _Documentation added / updated (if applicable)_
2024-08-29 15:17:45 -04:00
Ryan Dick
4e4b6c6dbc Tidy variable management and dtype handling in FluxTextToImageInvocation. 2024-08-29 19:08:18 +00:00
Ryan Dick
5e8cf9fb6a Remove hack to clear cache from the FluxTextToImageInvocation. We now clear the cache based on the on-disk model size. 2024-08-29 19:08:18 +00:00
Ryan Dick
c738fe051f Split T5 encoding and CLIP encoding into separate functions to ensure that all model references are locally-scoped so that the two models don't have to be held in memory at the same time. 2024-08-29 19:08:18 +00:00
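[Editor's note: scoping each encoder inside its own function means the reference dies when the function returns, so the cache can offload one model before the next is pulled in. A sketch with a hypothetical loading API:]

```python
def encode_t5(context, prompt: str):
    # `t5` goes out of scope when this function returns, so the model
    # manager is free to offload it before CLIP is brought in.
    with context.models.load("t5_encoder") as t5:  # hypothetical API
        return t5.encode(prompt)

def encode_clip(context, prompt: str):
    with context.models.load("clip_embed") as clip:  # hypothetical API
        return clip.encode(prompt)

def encode_prompt(context, prompt: str):
    return encode_t5(context, prompt), encode_clip(context, prompt)
```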
Ryan Dick
29fe1533f2 Fix bug in InvokeLinear8bitLt that was causing old state information to persist after loading from a state dict. This manifested as state tensors being left on the GPU even when a model had been offloaded to the CPU cache. 2024-08-29 19:08:18 +00:00
Ryan Dick
77090070bd Check the size of a model on disk and make room for it in the cache before loading it. 2024-08-29 19:08:18 +00:00
Ryan Dick
6ba9b1b6b0 Tidy up GIG -> GB and remove unused GIG constant. 2024-08-29 19:08:18 +00:00
Ryan Dick
c578b8df1e Improve ModelCache docs. 2024-08-29 19:08:18 +00:00
Ryan Dick
cad9a41433 Remove unused ModelCache.exists(...) function. 2024-08-29 19:08:18 +00:00
Ryan Dick
5fefb3b0f4 Remove unused param from ModelCache. 2024-08-29 19:08:18 +00:00
Ryan Dick
5284a870b0 Remove unused constructor params from ModelCache. 2024-08-29 19:08:18 +00:00
Ryan Dick
e064377c05 Remove default model cache sizes from model_cache_default.py. These defaults were misleading, because the config defaults take precedence over them. 2024-08-29 19:08:18 +00:00
Mary Hipp
3e569c8312 feat(ui): add fields for CLIP embed models and Flux VAE models in workflows 2024-08-29 11:52:51 -04:00
maryhipp
16825ee6e9 feat(nodes): bump version of flux model node, update default workflow 2024-08-29 11:52:51 -04:00
Mary Hipp
3f5340fa53 feat(nodes): add submodels as inputs to FLUX main model node instead of hardcoded names 2024-08-29 11:52:51 -04:00
chainchompa
f2a1a39b33 Add selectedStylePreset to app parameters (#6787)
## Summary
- Add selectedStylePreset to app parameters
<!--A description of the changes in this PR. Include the kind of change
(fix, feature, docs, etc), the "why" and the "how". Screenshots or
videos are useful for frontend changes.-->

## Related Issues / Discussions

<!--WHEN APPLICABLE: List any related issues or discussions on github or
discord. If this PR closes an issue, please use the "Closes #1234"
format, so that the issue will be automatically closed when the PR
merges.-->

## QA Instructions

<!--WHEN APPLICABLE: Describe how you have tested the changes in this
PR. Provide enough detail that a reviewer can reproduce your tests.-->

## Merge Plan

<!--WHEN APPLICABLE: Large PRs, or PRs that touch sensitive things like
DB schemas, may need some care when merging. For example, a careful
rebase by the change author, timing to not interfere with a pending
release, or a message to contributors on discord after merging.-->

## Checklist

- [ ] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
2024-08-28 10:53:07 -04:00
chainchompa
326de55d3e remove api changes and only preselect style preset 2024-08-28 09:53:29 -04:00
chainchompa
b2df909570 added selectedStylePreset to preload presets when app loads 2024-08-28 09:50:44 -04:00
chainchompa
026ac36b06 Revert "added selectedStylePreset to preload presets when app loads"
This reverts commit e97fd85904.
2024-08-28 09:44:08 -04:00
chainchompa
92125e5fd2 bug fixes 2024-08-27 16:13:38 -04:00
chainchompa
c0c139da88 formatting ruff 2024-08-27 15:46:51 -04:00
chainchompa
404ad6a7fd cleanup 2024-08-27 15:42:42 -04:00
chainchompa
fc39086fb4 call stylePresetSelected 2024-08-27 15:34:31 -04:00
chainchompa
cd215700fe added route for selecting style preset 2024-08-27 15:34:07 -04:00
chainchompa
e97fd85904 added selectedStylePreset to preload presets when app loads 2024-08-27 15:33:24 -04:00
Brandon Rising
0a263fa5b1 chore: bump version to v4.2.9rc1 2024-08-27 12:09:27 -04:00
Mary Hipp
fae3836a8d fix CLIP 2024-08-27 10:29:10 -04:00
Mary Hipp
b3d2eb4178 add translations for new model types in MM, remove clip vision from filter since its not displayed in list 2024-08-27 10:29:10 -04:00
psychedelicious
576f1cbb75 build: remove broken scripts
These two scripts are broken and can cause data loss. Remove them.

They are not in the launcher script, but _are_ available to users in the terminal/file browser.

Hopefully, when we remove them here, `pip` will delete them on next installation of the package...
2024-08-27 22:01:45 +10:00
Ryan Dick
50085b40bb Update starter model size estimates. 2024-08-26 20:17:50 -04:00
Mary Hipp
cff382715a default workflow: add steps to exposed fields, add more notes 2024-08-26 20:17:50 -04:00
Brandon Rising
54d54d1bf2 Run ruff 2024-08-26 20:17:50 -04:00
Mary Hipp
e84ea68282 remove prompt 2024-08-26 20:17:50 -04:00
Mary Hipp
160dd36782 update default workflow for flux 2024-08-26 20:17:50 -04:00
Brandon Rising
65bb46bcca Rename params for flux and flux vae, add comments explaining use of the config_path in model config 2024-08-26 20:17:50 -04:00
Brandon Rising
2d185fb766 Run ruff 2024-08-26 20:17:50 -04:00
Brandon Rising
2ba9b02932 Fix type error in tsc 2024-08-26 20:17:50 -04:00
Brandon Rising
849da67cc7 Remove no longer used code in the flux denoise function 2024-08-26 20:17:50 -04:00
Brandon Rising
3ea6c9666e Remove in-progress images until we're able to make them valuable 2024-08-26 20:17:50 -04:00
Brandon Rising
cf633e4ef2 Only install starter models if not already installed 2024-08-26 20:17:50 -04:00
Ryan Dick
bbf934d980 Remove outdated TODO. 2024-08-26 20:17:50 -04:00
Ryan Dick
620f733110 ruff format 2024-08-26 20:17:50 -04:00
Ryan Dick
67928609a3 Downgrade accelerate and huggingface-hub deps to original versions. 2024-08-26 20:17:50 -04:00
Ryan Dick
5f15afb7db Remove flux repo dependency 2024-08-26 20:17:50 -04:00
Ryan Dick
635d2f480d ruff 2024-08-26 20:17:50 -04:00
Brandon Rising
70c278c810 Remove dependency on flux config files 2024-08-26 20:17:50 -04:00
Brandon Rising
56b9906e2e Setup scaffolding for in progress images and add ability to cancel the flux node 2024-08-26 20:17:50 -04:00
Ryan Dick
a808ce81fd Replace swish() with torch.nn.functional.silu(h). They are functionally equivalent, but in my test VAE decoding was ~8% faster after the change. 2024-08-26 20:17:50 -04:00
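[Editor's note: the equivalence is exact — SiLU is defined as x · sigmoid(x), i.e. swish with β = 1 — so only the implementation speed changes. A quick check:]

```python
import torch
import torch.nn.functional as F

x = torch.randn(1024)
swish = x * torch.sigmoid(x)             # hand-rolled swish
assert torch.allclose(swish, F.silu(x))  # the fused builtin used after the change
```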
Ryan Dick
83f82c5ddf Switch the CLIP-L start model to use our hosted version - which is much smaller. 2024-08-26 20:17:50 -04:00
Brandon Rising
101de8c25d Update t5 encoder formats to accurately reflect the quantization strategy and data type 2024-08-26 20:17:50 -04:00
Ryan Dick
3339a4baf0 Downgrade (revert) torch version after removing optimum-quanto, and other minor version-related fixes. 2024-08-26 20:17:50 -04:00
Ryan Dick
dff4a88baa Move quantization scripts to a scripts/ subdir. 2024-08-26 20:17:50 -04:00
Ryan Dick
a21f6c4964 Update docs for T5 quantization script. 2024-08-26 20:17:50 -04:00
Ryan Dick
97562504b7 Remove all references to optimum-quanto and downgrade diffusers. 2024-08-26 20:17:50 -04:00
Ryan Dick
75d8ac378c Update the T5 8-bit quantized starter model to use the BnB LLM.int8() variant. 2024-08-26 20:17:50 -04:00
Ryan Dick
b9dd354e2b Fixes to the T5XXL quantization script. 2024-08-26 20:17:50 -04:00
Ryan Dick
33c2fbd201 Add script for quantizing a T5 model. 2024-08-26 20:17:50 -04:00
Brandon Rising
5063be92bf Switch flux to using its own conditioning field 2024-08-26 20:17:50 -04:00
Brandon Rising
1047584b3e Only import bnb quantize file if bitsandbytes is installed 2024-08-26 20:17:50 -04:00
Brandon Rising
6764dcfdaa Load and unload clip/t5 encoders and run inference separately in text encoding 2024-08-26 20:17:50 -04:00
Brandon Rising
012864ceb1 Update macos test vm to macOS-14 2024-08-26 20:17:50 -04:00
Ryan Dick
a0bf20bcee Run FLUX VAE decoding in the user's preferred dtype rather than float32. Tested, and seems to work well at float16. 2024-08-26 20:17:50 -04:00
Ryan Dick
14ab339b33 Move prepare_latent_image_patches(...) to sampling.py with all of the related FLUX inference code. 2024-08-26 20:17:50 -04:00
Ryan Dick
25c91efbb6 Rename field positive_prompt -> prompt. 2024-08-26 20:17:50 -04:00
Ryan Dick
1c1f2c6664 Add comment about incorrect T5 Tokenizer size calculation. 2024-08-26 20:17:50 -04:00
Ryan Dick
d7c22b3bf7 Tidy is_schnell detection logic. 2024-08-26 20:17:50 -04:00
Ryan Dick
185f2a395f Make FLUX get_noise(...) consistent across devices/dtypes. 2024-08-26 20:17:50 -04:00
Ryan Dick
0c5649491e Mark FLUX nodes as prototypes. 2024-08-26 20:17:50 -04:00
Brandon Rising
94aba5892a Attribute black-forest-labs/flux for much of the flux code 2024-08-26 20:17:50 -04:00
Brandon Rising
ef093dde29 Don't install bitsandbytes on macOS 2024-08-26 20:17:50 -04:00
maryhipp
34451e5f27 added FLUX dev to starter models 2024-08-26 20:17:50 -04:00
Brandon Rising
1f9bdd1a9a Undo changes to the v2 dir of frontend types 2024-08-26 20:17:50 -04:00
Brandon Rising
c27d59baf7 Run ruff 2024-08-26 20:17:50 -04:00
Brandon Rising
f130ddec7c Remove automatic install of models during flux model loader, remove no longer used import function on context 2024-08-26 20:17:50 -04:00
Ryan Dick
a0a259eef1 Fix max_seq_len field description. 2024-08-26 20:17:50 -04:00
Ryan Dick
b66f19d4d1 Add docs to the quantization scripts. 2024-08-26 20:17:50 -04:00
Ryan Dick
4105a78b83 Update load_flux_model_bnb_llm_int8.py to work with a single-file FLUX transformer checkpoint. 2024-08-26 20:17:50 -04:00
Ryan Dick
19a68afb3a Fix bug in InvokeInt8Params that was causing it to use double the necessary VRAM. 2024-08-26 20:17:50 -04:00
maryhipp
fd68a2475b add better workflow name 2024-08-26 20:17:50 -04:00
maryhipp
28ff7ba830 add better workflow description 2024-08-26 20:17:50 -04:00
maryhipp
5d0b248fdb fix(worker) fix T5 type 2024-08-26 20:17:50 -04:00
maryhipp
01a4e0f6ef update default workflow 2024-08-26 20:17:50 -04:00
Mary Hipp
91e0731506 fix schema 2024-08-26 20:17:50 -04:00
Mary Hipp
d1f904d41f tsc and lint fix 2024-08-26 20:17:50 -04:00
Mary Hipp
269388c9f4 feat(ui): create new field for t5 encoder models in nodes 2024-08-26 20:17:50 -04:00
Mary Hipp
b8486379ce fix(ui): pass base/type when installing models, add flux formats to MM badges 2024-08-26 20:17:50 -04:00
Mary Hipp
400eb94d3b fix(ui): only exclude flux main models from linear UI dropdown, not model manager list 2024-08-26 20:17:50 -04:00
maryhipp
e210c96485 add FLUX schnell starter models and submodels as dependencies or adhoc download options 2024-08-26 20:17:50 -04:00
maryhipp
5f567f41f4 add case for clip embed models in probe 2024-08-26 20:17:50 -04:00
maryhipp
5fed573a29 update flux_model_loader node to take a T5 encoder from node field instead of hardcoded list, assume all models have been downloaded 2024-08-26 20:17:50 -04:00
Ryan Dick
cfac7c8189 Move requantize.py to the quantization/ dir. 2024-08-26 20:17:50 -04:00
Ryan Dick
1787de6836 Add docs to the requantize(...) function explaining why it was copied from optimum-quanto. 2024-08-26 20:17:50 -04:00
Ryan Dick
ac96f187bd Remove duplicate log_time(...) function. 2024-08-26 20:17:50 -04:00
Brandon Rising
72398350b4 More flux loader cleanup 2024-08-26 20:17:50 -04:00
Brandon Rising
df9445c351 Various styling and exception type updates 2024-08-26 20:17:50 -04:00
Brandon Rising
87b7a2e39b Switch inheritance class of flux model loaders 2024-08-26 20:17:50 -04:00
Brandon Rising
f7e46622a1 Update doc string for import_local_model and remove access_token since it's only usable for local file paths 2024-08-26 20:17:50 -04:00
Ryan Dick
71f18353a9 Address minor review comments. 2024-08-26 20:17:50 -04:00
Ryan Dick
4228de707b Rename t5Encoder -> t5_encoder. 2024-08-26 20:17:50 -04:00
Mary Hipp
b6a05629ef add default workflow for flux t2i 2024-08-26 20:17:50 -04:00
Mary Hipp
fbaa820643 exclude flux models from main model dropdown 2024-08-26 20:17:50 -04:00
Brandon Rising
db2a2d5e38 Some cleanup of the tags and description of flux nodes 2024-08-26 20:17:50 -04:00
Brandon Rising
8ba6e6b1f8 Add t5 encoders and clip embeds to the model manager 2024-08-26 20:17:50 -04:00
Brandon Rising
57168d719b Fix styling/lint 2024-08-26 20:17:50 -04:00
Brandon Rising
dee6d2c98e Fix support for 8b quantized t5 encoders, update exception messages in flux loaders 2024-08-26 20:17:50 -04:00
Ryan Dick
e49105ece5 Add tqdm progress bar to FLUX denoising. 2024-08-26 20:17:50 -04:00
Ryan Dick
0c5e11f521 Fix FLUX output image clamping. And a few other minor fixes to make inference work with the full bfloat16 FLUX transformer model. 2024-08-26 20:17:50 -04:00
Brandon Rising
a63f842a13 Select dev/schnell based on state dict, use the correct max seq len and shift in inference based on dev/schnell, and separate flux vae params into a separate config 2024-08-26 20:17:50 -04:00
Brandon Rising
4bd7fda694 Install sub directories with folders correctly, ensure consistent dtype of tensors in flux pipeline and vae 2024-08-26 20:17:50 -04:00
Brandon Rising
81f0886d6f Working inference node with quantized bnb nf4 checkpoint 2024-08-26 20:17:50 -04:00
Brandon Rising
2eb87f3306 Remove unused param on _run_vae_decoding in flux text to image 2024-08-26 20:17:50 -04:00
Brandon Rising
723f3ab0a9 Add nf4 bnb quantized format 2024-08-26 20:17:50 -04:00
Brandon Rising
1bd90e0fd4 Run ruff, setup initial text to image node 2024-08-26 20:17:50 -04:00
Brandon Rising
436f18ff55 Add backend functions and classes for Flux implementation, Update the way flux encoders/tokenizers are loaded for prompt encoding, Update way flux vae is loaded 2024-08-26 20:17:50 -04:00
Brandon Rising
cde9696214 Some UI cleanup, regenerate schema 2024-08-26 20:17:50 -04:00
Brandon Rising
2d9042fb93 Run Ruff 2024-08-26 20:17:50 -04:00
Brandon Rising
9ed53af520 Run Ruff 2024-08-26 20:17:50 -04:00
Brandon Rising
56fda669fd Manage quantization of models within the loader 2024-08-26 20:17:50 -04:00
Brandon Rising
1d8545a76c Remove changes to v1 workflow 2024-08-26 20:17:50 -04:00
Brandon Rising
5f59a828f9 Setup flux model loading in the UI 2024-08-26 20:17:50 -04:00
Ryan Dick
1fa6bddc89 WIP on moving from diffusers to FLUX 2024-08-26 20:17:50 -04:00
Ryan Dick
d3a5ca5247 More improvements for LLM.int8() - not fully tested. 2024-08-26 20:17:50 -04:00
Ryan Dick
f01f56a98e LLM.int8() quantization is working, but still some rough edges to solve. 2024-08-26 20:17:50 -04:00
Ryan Dick
99b0f79784 Clean up NF4 implementation. 2024-08-26 20:17:50 -04:00
Ryan Dick
e1eb104345 NF4 inference working 2024-08-26 20:17:50 -04:00
Ryan Dick
5c2f95ef50 NF4 loading working... I think. 2024-08-26 20:17:50 -04:00
Ryan Dick
b63df9bab9 wip 2024-08-26 20:17:50 -04:00
Ryan Dick
a52c899c6d Split a FluxTextEncoderInvocation out from the FluxTextToImageInvocation. This has the advantage that we benefit from automatic caching when the prompt isn't changed. 2024-08-26 20:17:50 -04:00
Ryan Dick
eeabb7ebe5 Make quantized loading fast for both T5XXL and FLUX transformer. 2024-08-26 20:17:50 -04:00
Ryan Dick
8b1cef978c Make quantized loading fast. 2024-08-26 20:17:50 -04:00
Ryan Dick
152da482cd WIP - experimentation 2024-08-26 20:17:50 -04:00
Ryan Dick
3cf0365a35 Make float16 inference work with FLUX on 24GB GPU. 2024-08-26 20:17:50 -04:00
Ryan Dick
5870742bb9 Add support for 8-bit quantization of the FLUX T5XXL text encoder. 2024-08-26 20:17:50 -04:00
Ryan Dick
01d8c62c57 Make 8-bit quantization save/reload work for the FLUX transformer. Reload is still very slow with the current optimum.quanto implementation. 2024-08-26 20:17:50 -04:00
Ryan Dick
55a242b2d6 Minor improvements to FLUX workflow. 2024-08-26 20:17:50 -04:00
Ryan Dick
45263b339f Got FLUX schnell working with 8-bit quantization. Still lots of rough edges to clean up. 2024-08-26 20:17:50 -04:00
Ryan Dick
3319491861 Use the FluxPipeline.encode_prompt() api rather than trying to run the two text encoders separately. 2024-08-26 20:17:50 -04:00
Ryan Dick
e687afac90 Add sentencepiece dependency for the T5 tokenizer. 2024-08-26 20:17:50 -04:00
Ryan Dick
b39031ea53 First draft of FluxTextToImageInvocation. 2024-08-26 20:17:50 -04:00
Ryan Dick
0b77511271 Update HF download logic to work for black-forest-labs/FLUX.1-schnell. 2024-08-26 20:17:50 -04:00
Ryan Dick
c99cd989c1 Update imports for compatibility with bumped diffusers version. 2024-08-26 20:17:50 -04:00
Ryan Dick
317fdadb21 Bump diffusers version to include FLUX support. 2024-08-26 20:17:50 -04:00
Mary Hipp
4e294f9e3e disable export button if no non-default presets 2024-08-26 09:23:15 -04:00
Jonathan
526e0f30a0 Added support for bounding boxes in the Invocation API
Adding built-in bounding boxes as a core type would help developers of nodes that include bounding box support.
2024-08-26 08:03:30 +10:00
598 changed files with 17881 additions and 8795 deletions


@@ -13,6 +13,12 @@ on:
     tags:
       - 'v*.*.*'
   workflow_dispatch:
+    inputs:
+      push-to-registry:
+        description: Push the built image to the container registry
+        required: false
+        type: boolean
+        default: false
 
 permissions:
   contents: write
@@ -50,16 +56,15 @@
           df -h
 
       - name: Checkout
-        uses: actions/checkout@v3
+        uses: actions/checkout@v4
 
       - name: Docker meta
         id: meta
-        uses: docker/metadata-action@v4
+        uses: docker/metadata-action@v5
         with:
-          github-token: ${{ secrets.GITHUB_TOKEN }}
           images: |
             ghcr.io/${{ github.repository }}
             ${{ env.DOCKERHUB_REPOSITORY }}
           tags: |
             type=ref,event=branch
             type=ref,event=tag
@@ -72,49 +77,33 @@
             suffix=-${{ matrix.gpu-driver }},onlatest=false
 
       - name: Set up QEMU
-        uses: docker/setup-qemu-action@v2
+        uses: docker/setup-qemu-action@v3
 
       - name: Set up Docker Buildx
-        uses: docker/setup-buildx-action@v2
+        uses: docker/setup-buildx-action@v3
         with:
           platforms: ${{ env.PLATFORMS }}
 
       - name: Login to GitHub Container Registry
         if: github.event_name != 'pull_request'
-        uses: docker/login-action@v2
+        uses: docker/login-action@v3
         with:
           registry: ghcr.io
           username: ${{ github.repository_owner }}
           password: ${{ secrets.GITHUB_TOKEN }}
 
-      # - name: Login to Docker Hub
-      #   if: github.event_name != 'pull_request' && vars.DOCKERHUB_REPOSITORY != ''
-      #   uses: docker/login-action@v2
-      #   with:
-      #     username: ${{ secrets.DOCKERHUB_USERNAME }}
-      #     password: ${{ secrets.DOCKERHUB_TOKEN }}
 
       - name: Build container
         timeout-minutes: 40
         id: docker_build
-        uses: docker/build-push-action@v4
+        uses: docker/build-push-action@v6
         with:
           context: .
           file: docker/Dockerfile
           platforms: ${{ env.PLATFORMS }}
-          push: ${{ github.ref == 'refs/heads/main' || github.ref_type == 'tag' }}
+          push: ${{ github.ref == 'refs/heads/main' || github.ref_type == 'tag' || github.event.inputs.push-to-registry }}
           tags: ${{ steps.meta.outputs.tags }}
           labels: ${{ steps.meta.outputs.labels }}
           cache-from: |
             type=gha,scope=${{ github.ref_name }}-${{ matrix.gpu-driver }}
             type=gha,scope=main-${{ matrix.gpu-driver }}
           cache-to: type=gha,mode=max,scope=${{ github.ref_name }}-${{ matrix.gpu-driver }}
 
-      # - name: Docker Hub Description
-      #   if: github.ref == 'refs/heads/main' || github.ref == 'refs/tags/*' && vars.DOCKERHUB_REPOSITORY != ''
-      #   uses: peter-evans/dockerhub-description@v3
-      #   with:
-      #     username: ${{ secrets.DOCKERHUB_USERNAME }}
-      #     password: ${{ secrets.DOCKERHUB_TOKEN }}
-      #     repository: ${{ vars.DOCKERHUB_REPOSITORY }}
-      #     short-description: ${{ github.event.repository.description }}


@@ -60,7 +60,7 @@
           extra-index-url: 'https://download.pytorch.org/whl/cpu'
           github-env: $GITHUB_ENV
         - platform: macos-default
-          os: macOS-12
+          os: macOS-14
           github-env: $GITHUB_ENV
         - platform: windows-cpu
           os: windows-2022


@@ -196,6 +196,22 @@ tips to reduce the problem:
 === "12GB VRAM GPU"
 
     This should be sufficient to generate larger images up to about 1280x1280.
 
+## Checkpoint Models Load Slowly or Use Too Much RAM
+
+The difference between diffusers models (a folder containing multiple
+subfolders) and checkpoint models (a file ending with .safetensors or
+.ckpt) is that InvokeAI is able to load diffusers models into memory
+incrementally, while checkpoint models must be loaded all at
+once. With very large models, or systems with limited RAM, you may
+experience slowdowns and other memory-related issues when loading
+checkpoint models.
+
+To solve this, go to the Model Manager tab (the cube), select the
+checkpoint model that's giving you trouble, and press the "Convert"
+button in the upper right of your browser window. This will convert the
+checkpoint into a diffusers model, after which loading should be
+faster and less memory-intensive.
+
 ## Memory Leak (Linux)


@@ -3,8 +3,10 @@
import io
import pathlib
import shutil
import traceback
from copy import deepcopy
from enum import Enum
from tempfile import TemporaryDirectory
from typing import List, Optional, Type
@@ -17,6 +19,7 @@ from starlette.exceptions import HTTPException
from typing_extensions import Annotated
from invokeai.app.api.dependencies import ApiDependencies
from invokeai.app.services.config import get_config
from invokeai.app.services.model_images.model_images_common import ModelImageFileNotFoundException
from invokeai.app.services.model_install.model_install_common import ModelInstallJob
from invokeai.app.services.model_records import (
@@ -31,6 +34,7 @@ from invokeai.backend.model_manager.config import (
ModelFormat,
ModelType,
)
from invokeai.backend.model_manager.load.model_cache.model_cache_base import CacheStats
from invokeai.backend.model_manager.metadata.fetch.huggingface import HuggingFaceMetadataFetch
from invokeai.backend.model_manager.metadata.metadata_base import ModelMetadataWithFiles, UnknownMetadataException
from invokeai.backend.model_manager.search import ModelSearch
@@ -50,6 +54,13 @@ class ModelsList(BaseModel):
model_config = ConfigDict(use_enum_values=True)
class CacheType(str, Enum):
"""Cache type - one of vram or ram."""
RAM = "RAM"
VRAM = "VRAM"
def add_cover_image_to_model_config(config: AnyModelConfig, dependencies: Type[ApiDependencies]) -> AnyModelConfig:
"""Add a cover image URL to a model configuration."""
cover_image = dependencies.invoker.services.model_images.get_url(config.key)
@@ -797,3 +808,83 @@ async def get_starter_models() -> list[StarterModel]:
model.dependencies = missing_deps
return starter_models
@model_manager_router.get(
"/model_cache",
operation_id="get_cache_size",
response_model=float,
summary="Get maximum size of model manager RAM or VRAM cache.",
)
async def get_cache_size(cache_type: CacheType = Query(description="The cache type", default=CacheType.RAM)) -> float:
"""Return the current RAM or VRAM cache size setting (in GB)."""
cache = ApiDependencies.invoker.services.model_manager.load.ram_cache
value = 0.0
if cache_type == CacheType.RAM:
value = cache.max_cache_size
elif cache_type == CacheType.VRAM:
value = cache.max_vram_cache_size
return value
@model_manager_router.put(
"/model_cache",
operation_id="set_cache_size",
response_model=float,
summary="Set maximum size of model manager RAM or VRAM cache, optionally writing new value out to invokeai.yaml config file.",
)
async def set_cache_size(
value: float = Query(description="The new value for the maximum cache size"),
cache_type: CacheType = Query(description="The cache type", default=CacheType.RAM),
persist: bool = Query(description="Write new value out to invokeai.yaml", default=False),
) -> float:
"""Set the current RAM or VRAM cache size setting (in GB). ."""
cache = ApiDependencies.invoker.services.model_manager.load.ram_cache
app_config = get_config()
# Record initial state.
vram_old = app_config.vram
ram_old = app_config.ram
# Prepare target state.
vram_new = vram_old
ram_new = ram_old
if cache_type == CacheType.RAM:
ram_new = value
elif cache_type == CacheType.VRAM:
vram_new = value
else:
raise ValueError(f"Unexpected {cache_type=}.")
config_path = app_config.config_file_path
new_config_path = config_path.with_suffix(".yaml.new")
try:
# Try to apply the target state.
cache.max_vram_cache_size = vram_new
cache.max_cache_size = ram_new
app_config.ram = ram_new
app_config.vram = vram_new
if persist:
app_config.write_file(new_config_path)
shutil.move(new_config_path, config_path)
except Exception as e:
# If there was a failure, restore the initial state.
cache.max_cache_size = ram_old
cache.max_vram_cache_size = vram_old
app_config.ram = ram_old
app_config.vram = vram_old
raise RuntimeError("Failed to update cache size") from e
return value
@model_manager_router.get(
"/stats",
operation_id="get_stats",
response_model=Optional[CacheStats],
summary="Get model manager RAM cache performance statistics.",
)
async def get_stats() -> Optional[CacheStats]:
"""Return performance statistics on the model manager's RAM cache. Will return null if no models have been loaded."""
return ApiDependencies.invoker.services.model_manager.load.ram_cache.stats
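For orientation, a sketch of how a client might exercise the two new cache endpoints. Only the routes, query parameters, and response types come from the diff above; the host, port, and `/api/v2/models` prefix are assumptions:

```python
# Hypothetical client for the new cache-size endpoints; assumes a local
# InvokeAI server at http://127.0.0.1:9090 and the /api/v2/models prefix.
import requests

BASE = "http://127.0.0.1:9090/api/v2/models"

# Read the current RAM cache limit (GB); response_model is a float.
ram_gb = requests.get(f"{BASE}/model_cache", params={"cache_type": "RAM"}).json()

# Raise the VRAM cache limit to 8 GB and persist it to invokeai.yaml.
new_vram_gb = requests.put(
    f"{BASE}/model_cache",
    params={"value": 8.0, "cache_type": "VRAM", "persist": True},
).json()
```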

View File

@@ -11,7 +11,7 @@ from invokeai.app.services.session_queue.session_queue_common import (
Batch,
BatchStatus,
CancelByBatchIDsResult,
CancelByOriginResult,
CancelByDestinationResult,
ClearResult,
EnqueueBatchResult,
PruneResult,
@@ -107,16 +107,18 @@ async def cancel_by_batch_ids(
@session_queue_router.put(
"/{queue_id}/cancel_by_origin",
operation_id="cancel_by_origin",
"/{queue_id}/cancel_by_destination",
operation_id="cancel_by_destination",
responses={200: {"model": CancelByBatchIDsResult}},
)
async def cancel_by_origin(
async def cancel_by_destination(
queue_id: str = Path(description="The queue id to perform this operation on"),
origin: str = Query(description="The origin to cancel all queue items for"),
) -> CancelByOriginResult:
destination: str = Query(description="The destination to cancel all queue items for"),
) -> CancelByDestinationResult:
"""Immediately cancels all queue items with the given origin"""
return ApiDependencies.invoker.services.session_queue.cancel_by_origin(queue_id=queue_id, origin=origin)
return ApiDependencies.invoker.services.session_queue.cancel_by_destination(
queue_id=queue_id, destination=destination
)
@session_queue_router.put(
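A sketch of calling the renamed route; the queue id and the `/api/v1/queue` prefix are assumptions, while the path, method, and `destination` query parameter come from the hunk above:

```python
# Hypothetical call to the renamed cancel-by-destination endpoint.
import requests

resp = requests.put(
    "http://127.0.0.1:9090/api/v1/queue/default/cancel_by_destination",
    params={"destination": "canvas"},
)
print(resp.json())  # e.g. {"canceled": 3}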

View File

@@ -20,7 +20,6 @@ from typing import (
Type,
TypeVar,
Union,
cast,
)
import semver
@@ -80,7 +79,7 @@ class UIConfigBase(BaseModel):
version: str = Field(
description='The node\'s version. Should be a valid semver string e.g. "1.0.0" or "3.8.13".',
)
node_pack: Optional[str] = Field(default=None, description="Whether or not this is a custom node")
node_pack: str = Field(description="The node pack that this node belongs to, will be 'invokeai' for built-in nodes")
classification: Classification = Field(default=Classification.Stable, description="The node's classification")
model_config = ConfigDict(
@@ -230,18 +229,16 @@ class BaseInvocation(ABC, BaseModel):
@staticmethod
def json_schema_extra(schema: dict[str, Any], model_class: Type[BaseInvocation]) -> None:
"""Adds various UI-facing attributes to the invocation's OpenAPI schema."""
uiconfig = cast(UIConfigBase | None, getattr(model_class, "UIConfig", None))
if uiconfig is not None:
if uiconfig.title is not None:
schema["title"] = uiconfig.title
if uiconfig.tags is not None:
schema["tags"] = uiconfig.tags
if uiconfig.category is not None:
schema["category"] = uiconfig.category
if uiconfig.node_pack is not None:
schema["node_pack"] = uiconfig.node_pack
schema["classification"] = uiconfig.classification
schema["version"] = uiconfig.version
if title := model_class.UIConfig.title:
schema["title"] = title
if tags := model_class.UIConfig.tags:
schema["tags"] = tags
if category := model_class.UIConfig.category:
schema["category"] = category
if node_pack := model_class.UIConfig.node_pack:
schema["node_pack"] = node_pack
schema["classification"] = model_class.UIConfig.classification
schema["version"] = model_class.UIConfig.version
if "required" not in schema or not isinstance(schema["required"], list):
schema["required"] = []
schema["class"] = "invocation"
@@ -312,7 +309,7 @@ class BaseInvocation(ABC, BaseModel):
json_schema_extra={"field_kind": FieldKind.NodeAttribute},
)
UIConfig: ClassVar[Type[UIConfigBase]]
UIConfig: ClassVar[UIConfigBase]
model_config = ConfigDict(
protected_namespaces=(),
@@ -441,30 +438,25 @@ def invocation(
validate_fields(cls.model_fields, invocation_type)
# Add OpenAPI schema extras
uiconfig_name = cls.__qualname__ + ".UIConfig"
if not hasattr(cls, "UIConfig") or cls.UIConfig.__qualname__ != uiconfig_name:
cls.UIConfig = type(uiconfig_name, (UIConfigBase,), {})
cls.UIConfig.title = title
cls.UIConfig.tags = tags
cls.UIConfig.category = category
cls.UIConfig.classification = classification
# Grab the node pack's name from the module name, if it's a custom node
is_custom_node = cls.__module__.rsplit(".", 1)[0] == "invokeai.app.invocations"
if is_custom_node:
cls.UIConfig.node_pack = cls.__module__.split(".")[0]
else:
cls.UIConfig.node_pack = None
uiconfig: dict[str, Any] = {}
uiconfig["title"] = title
uiconfig["tags"] = tags
uiconfig["category"] = category
uiconfig["classification"] = classification
# The node pack is the module name - will be "invokeai" for built-in nodes
uiconfig["node_pack"] = cls.__module__.split(".")[0]
if version is not None:
try:
semver.Version.parse(version)
except ValueError as e:
raise InvalidVersionError(f'Invalid version string for node "{invocation_type}": "{version}"') from e
cls.UIConfig.version = version
uiconfig["version"] = version
else:
logger.warn(f'No version specified for node "{invocation_type}", using "1.0.0"')
cls.UIConfig.version = "1.0.0"
uiconfig["version"] = "1.0.0"
cls.UIConfig = UIConfigBase(**uiconfig)
if use_cache is not None:
cls.model_fields["use_cache"].default = use_cache
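For reference, a minimal sketch of what the reworked decorator produces, using a hypothetical node defined in a user script. The imports and field/output classes follow the patterns visible elsewhere in this diff:

```python
# Hypothetical node illustrating the reworked decorator: UIConfig is now a
# UIConfigBase *instance*, and node_pack is always derived from the defining
# module ("invokeai" for built-in nodes, "__main__" if run as a script).
from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
from invokeai.app.invocations.fields import InputField
from invokeai.app.invocations.primitives import IntegerOutput
from invokeai.app.services.shared.invocation_context import InvocationContext

@invocation("example_add", title="Add Two Ints", tags=["math"], category="math", version="1.0.0")
class ExampleAddInvocation(BaseInvocation):
    a: int = InputField(default=0)
    b: int = InputField(default=0)

    def invoke(self, context: InvocationContext) -> IntegerOutput:
        return IntegerOutput(value=self.a + self.b)

print(ExampleAddInvocation.UIConfig.node_pack)  # module-derived, never None
```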

View File

@@ -185,7 +185,7 @@ class DenoiseLatentsInvocation(BaseInvocation):
)
denoise_mask: Optional[DenoiseMaskField] = InputField(
default=None,
description=FieldDescriptions.mask,
description=FieldDescriptions.denoise_mask,
input=Input.Connection,
ui_order=8,
)

View File

@@ -40,14 +40,18 @@ class UIType(str, Enum, metaclass=MetaEnum):
# region Model Field Types
MainModel = "MainModelField"
FluxMainModel = "FluxMainModelField"
SDXLMainModel = "SDXLMainModelField"
SDXLRefinerModel = "SDXLRefinerModelField"
ONNXModel = "ONNXModelField"
VAEModel = "VAEModelField"
FluxVAEModel = "FluxVAEModelField"
LoRAModel = "LoRAModelField"
ControlNetModel = "ControlNetModelField"
IPAdapterModel = "IPAdapterModelField"
T2IAdapterModel = "T2IAdapterModelField"
T5EncoderModel = "T5EncoderModelField"
CLIPEmbedModel = "CLIPEmbedModelField"
SpandrelImageToImageModel = "SpandrelImageToImageModelField"
# endregion
@@ -125,13 +129,17 @@ class FieldDescriptions:
negative_cond = "Negative conditioning tensor"
noise = "Noise tensor"
clip = "CLIP (tokenizer, text encoder, LoRAs) and skipped layer count"
t5_encoder = "T5 tokenizer and text encoder"
clip_embed_model = "CLIP Embed loader"
unet = "UNet (scheduler, LoRAs)"
transformer = "Transformer"
vae = "VAE"
cond = "Conditioning tensor"
controlnet_model = "ControlNet model to load"
vae_model = "VAE model to load"
lora_model = "LoRA model to load"
main_model = "Main model (UNet, VAE, CLIP) to load"
flux_model = "Flux model (Transformer) to load"
sdxl_main_model = "SDXL Main model (UNet, VAE, CLIP1, CLIP2) to load"
sdxl_refiner_model = "SDXL Refiner Main Model (UNet, VAE, CLIP2) to load"
onnx_main_model = "ONNX Main model (UNet, VAE, CLIP) to load"
@@ -173,7 +181,7 @@ class FieldDescriptions:
)
num_1 = "The first number"
num_2 = "The second number"
mask = "The mask to use for the operation"
denoise_mask = "A mask of the region to apply the denoising process to."
board = "The board to save the image to"
image = "The image to process"
tile_size = "Tile size"
@@ -231,6 +239,12 @@ class ColorField(BaseModel):
return (self.r, self.g, self.b, self.a)
class FluxConditioningField(BaseModel):
"""A conditioning tensor primitive value"""
conditioning_name: str = Field(description="The name of conditioning tensor")
class ConditioningField(BaseModel):
"""A conditioning tensor primitive value"""

View File

@@ -0,0 +1,249 @@
from typing import Callable, Optional
import torch
import torchvision.transforms as tv_transforms
from torchvision.transforms.functional import resize as tv_resize
from invokeai.app.invocations.baseinvocation import BaseInvocation, Classification, invocation
from invokeai.app.invocations.fields import (
DenoiseMaskField,
FieldDescriptions,
FluxConditioningField,
Input,
InputField,
LatentsField,
WithBoard,
WithMetadata,
)
from invokeai.app.invocations.model import TransformerField
from invokeai.app.invocations.primitives import LatentsOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.flux.denoise import denoise
from invokeai.backend.flux.inpaint_extension import InpaintExtension
from invokeai.backend.flux.model import Flux
from invokeai.backend.flux.sampling_utils import (
clip_timestep_schedule,
generate_img_ids,
get_noise,
get_schedule,
pack,
unpack,
)
from invokeai.backend.stable_diffusion.diffusers_pipeline import PipelineIntermediateState
from invokeai.backend.stable_diffusion.diffusion.conditioning_data import FLUXConditioningInfo
from invokeai.backend.util.devices import TorchDevice
@invocation(
"flux_denoise",
title="FLUX Denoise",
tags=["image", "flux"],
category="image",
version="1.0.0",
classification=Classification.Prototype,
)
class FluxDenoiseInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Run denoising process with a FLUX transformer model."""
# If latents is provided, this means we are doing image-to-image.
latents: Optional[LatentsField] = InputField(
default=None,
description=FieldDescriptions.latents,
input=Input.Connection,
)
# denoise_mask is used for image-to-image inpainting. Only the masked region is modified.
denoise_mask: Optional[DenoiseMaskField] = InputField(
default=None,
description=FieldDescriptions.denoise_mask,
input=Input.Connection,
)
denoising_start: float = InputField(
default=0.0,
ge=0,
le=1,
description=FieldDescriptions.denoising_start,
)
denoising_end: float = InputField(default=1.0, ge=0, le=1, description=FieldDescriptions.denoising_end)
transformer: TransformerField = InputField(
description=FieldDescriptions.flux_model,
input=Input.Connection,
title="Transformer",
)
positive_text_conditioning: FluxConditioningField = InputField(
description=FieldDescriptions.positive_cond, input=Input.Connection
)
width: int = InputField(default=1024, multiple_of=16, description="Width of the generated image.")
height: int = InputField(default=1024, multiple_of=16, description="Height of the generated image.")
num_steps: int = InputField(
default=4, description="Number of diffusion steps. Recommended values are schnell: 4, dev: 50."
)
guidance: float = InputField(
default=4.0,
description="The guidance strength. Higher values adhere more strictly to the prompt, and will produce less diverse images. FLUX dev only, ignored for schnell.",
)
seed: int = InputField(default=0, description="Randomness seed for reproducibility.")
@torch.no_grad()
def invoke(self, context: InvocationContext) -> LatentsOutput:
latents = self._run_diffusion(context)
latents = latents.detach().to("cpu")
name = context.tensors.save(tensor=latents)
return LatentsOutput.build(latents_name=name, latents=latents, seed=None)
def _run_diffusion(
self,
context: InvocationContext,
):
inference_dtype = torch.bfloat16
# Load the conditioning data.
cond_data = context.conditioning.load(self.positive_text_conditioning.conditioning_name)
assert len(cond_data.conditionings) == 1
flux_conditioning = cond_data.conditionings[0]
assert isinstance(flux_conditioning, FLUXConditioningInfo)
flux_conditioning = flux_conditioning.to(dtype=inference_dtype)
t5_embeddings = flux_conditioning.t5_embeds
clip_embeddings = flux_conditioning.clip_embeds
# Load the input latents, if provided.
init_latents = context.tensors.load(self.latents.latents_name) if self.latents else None
if init_latents is not None:
init_latents = init_latents.to(device=TorchDevice.choose_torch_device(), dtype=inference_dtype)
# Prepare input noise.
noise = get_noise(
num_samples=1,
height=self.height,
width=self.width,
device=TorchDevice.choose_torch_device(),
dtype=inference_dtype,
seed=self.seed,
)
transformer_info = context.models.load(self.transformer.transformer)
is_schnell = "schnell" in transformer_info.config.config_path
# Calculate the timestep schedule.
image_seq_len = noise.shape[-1] * noise.shape[-2] // 4
timesteps = get_schedule(
num_steps=self.num_steps,
image_seq_len=image_seq_len,
shift=not is_schnell,
)
# Clip the timesteps schedule based on denoising_start and denoising_end.
timesteps = clip_timestep_schedule(timesteps, self.denoising_start, self.denoising_end)
# Prepare input latent image.
if init_latents is not None:
# If init_latents is provided, we are doing image-to-image.
if is_schnell:
context.logger.warning(
"Running image-to-image with a FLUX schnell model. This is not recommended. The results are likely "
"to be poor. Consider using a FLUX dev model instead."
)
# Noise the orig_latents by the appropriate amount for the first timestep.
t_0 = timesteps[0]
x = t_0 * noise + (1.0 - t_0) * init_latents
else:
# init_latents are not provided, so we are not doing image-to-image (i.e. we are starting from pure noise).
if self.denoising_start > 1e-5:
raise ValueError("denoising_start should be 0 when initial latents are not provided.")
x = noise
# If len(timesteps) == 1, then short-circuit. We are just noising the input latents, but not taking any
# denoising steps.
if len(timesteps) <= 1:
return x
inpaint_mask = self._prep_inpaint_mask(context, x)
b, _c, h, w = x.shape
img_ids = generate_img_ids(h=h, w=w, batch_size=b, device=x.device, dtype=x.dtype)
bs, t5_seq_len, _ = t5_embeddings.shape
txt_ids = torch.zeros(bs, t5_seq_len, 3, dtype=inference_dtype, device=TorchDevice.choose_torch_device())
# Pack all latent tensors.
init_latents = pack(init_latents) if init_latents is not None else None
inpaint_mask = pack(inpaint_mask) if inpaint_mask is not None else None
noise = pack(noise)
x = pack(x)
# Now that we have 'packed' the latent tensors, verify that we calculated the image_seq_len correctly.
assert image_seq_len == x.shape[1]
# Prepare inpaint extension.
inpaint_extension: InpaintExtension | None = None
if inpaint_mask is not None:
assert init_latents is not None
inpaint_extension = InpaintExtension(
init_latents=init_latents,
inpaint_mask=inpaint_mask,
noise=noise,
)
with transformer_info as transformer:
assert isinstance(transformer, Flux)
x = denoise(
model=transformer,
img=x,
img_ids=img_ids,
txt=t5_embeddings,
txt_ids=txt_ids,
vec=clip_embeddings,
timesteps=timesteps,
step_callback=self._build_step_callback(context),
guidance=self.guidance,
inpaint_extension=inpaint_extension,
)
x = unpack(x.float(), self.height, self.width)
return x
def _prep_inpaint_mask(self, context: InvocationContext, latents: torch.Tensor) -> torch.Tensor | None:
"""Prepare the inpaint mask.
- Loads the mask
- Resizes if necessary
- Casts to same device/dtype as latents
- Expands mask to the same shape as latents so that they line up after 'packing'
Args:
context (InvocationContext): The invocation context, for loading the inpaint mask.
latents (torch.Tensor): A latent image tensor. In 'unpacked' format. Used to determine the target shape,
device, and dtype for the inpaint mask.
Returns:
torch.Tensor | None: Inpaint mask.
"""
if self.denoise_mask is None:
return None
mask = context.tensors.load(self.denoise_mask.mask_name)
_, _, latent_height, latent_width = latents.shape
mask = tv_resize(
img=mask,
size=[latent_height, latent_width],
interpolation=tv_transforms.InterpolationMode.BILINEAR,
antialias=False,
)
mask = mask.to(device=latents.device, dtype=latents.dtype)
# Expand the inpaint mask to the same shape as `latents` so that when we 'pack' `mask` it lines up with
# `latents`.
return mask.expand_as(latents)
def _build_step_callback(self, context: InvocationContext) -> Callable[[PipelineIntermediateState], None]:
def step_callback(state: PipelineIntermediateState) -> None:
state.latents = unpack(state.latents.float(), self.height, self.width).squeeze()
context.util.flux_step_callback(state)
return step_callback
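A worked check of the sequence-length arithmetic used in `_run_diffusion`, assuming the usual FLUX latent layout of `(batch, 16, height // 8, width // 8)`:

```python
# For the default 1024x1024 image, the pre-pack formula and the post-pack
# sequence length agree: pack() patchifies the latents 2x2.
height, width = 1024, 1024
latent_h, latent_w = height // 8, width // 8        # 128 x 128
image_seq_len = latent_h * latent_w // 4            # 4096, as computed above
packed_seq_len = (latent_h // 2) * (latent_w // 2)  # 4096, i.e. x.shape[1]
assert image_seq_len == packed_seq_len
```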

View File

@@ -0,0 +1,92 @@
from typing import Literal
import torch
from transformers import CLIPTextModel, CLIPTokenizer, T5EncoderModel, T5Tokenizer
from invokeai.app.invocations.baseinvocation import BaseInvocation, Classification, invocation
from invokeai.app.invocations.fields import FieldDescriptions, Input, InputField
from invokeai.app.invocations.model import CLIPField, T5EncoderField
from invokeai.app.invocations.primitives import FluxConditioningOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.flux.modules.conditioner import HFEncoder
from invokeai.backend.stable_diffusion.diffusion.conditioning_data import ConditioningFieldData, FLUXConditioningInfo
@invocation(
"flux_text_encoder",
title="FLUX Text Encoding",
tags=["prompt", "conditioning", "flux"],
category="conditioning",
version="1.0.0",
classification=Classification.Prototype,
)
class FluxTextEncoderInvocation(BaseInvocation):
"""Encodes and preps a prompt for a flux image."""
clip: CLIPField = InputField(
title="CLIP",
description=FieldDescriptions.clip,
input=Input.Connection,
)
t5_encoder: T5EncoderField = InputField(
title="T5Encoder",
description=FieldDescriptions.t5_encoder,
input=Input.Connection,
)
t5_max_seq_len: Literal[256, 512] = InputField(
description="Max sequence length for the T5 encoder. Expected to be 256 for FLUX schnell models and 512 for FLUX dev models."
)
prompt: str = InputField(description="Text prompt to encode.")
@torch.no_grad()
def invoke(self, context: InvocationContext) -> FluxConditioningOutput:
# Note: The T5 and CLIP encoding are done in separate functions to ensure that all model references are locally
# scoped. This ensures that the T5 model can be freed and gc'd before loading the CLIP model (if necessary).
t5_embeddings = self._t5_encode(context)
clip_embeddings = self._clip_encode(context)
conditioning_data = ConditioningFieldData(
conditionings=[FLUXConditioningInfo(clip_embeds=clip_embeddings, t5_embeds=t5_embeddings)]
)
conditioning_name = context.conditioning.save(conditioning_data)
return FluxConditioningOutput.build(conditioning_name)
def _t5_encode(self, context: InvocationContext) -> torch.Tensor:
t5_tokenizer_info = context.models.load(self.t5_encoder.tokenizer)
t5_text_encoder_info = context.models.load(self.t5_encoder.text_encoder)
prompt = [self.prompt]
with (
t5_text_encoder_info as t5_text_encoder,
t5_tokenizer_info as t5_tokenizer,
):
assert isinstance(t5_text_encoder, T5EncoderModel)
assert isinstance(t5_tokenizer, T5Tokenizer)
t5_encoder = HFEncoder(t5_text_encoder, t5_tokenizer, False, self.t5_max_seq_len)
prompt_embeds = t5_encoder(prompt)
assert isinstance(prompt_embeds, torch.Tensor)
return prompt_embeds
def _clip_encode(self, context: InvocationContext) -> torch.Tensor:
clip_tokenizer_info = context.models.load(self.clip.tokenizer)
clip_text_encoder_info = context.models.load(self.clip.text_encoder)
prompt = [self.prompt]
with (
clip_text_encoder_info as clip_text_encoder,
clip_tokenizer_info as clip_tokenizer,
):
assert isinstance(clip_text_encoder, CLIPTextModel)
assert isinstance(clip_tokenizer, CLIPTokenizer)
clip_encoder = HFEncoder(clip_text_encoder, clip_tokenizer, True, 77)
pooled_prompt_embeds = clip_encoder(prompt)
assert isinstance(pooled_prompt_embeds, torch.Tensor)
return pooled_prompt_embeds
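The comment in `invoke` explains why T5 and CLIP encoding live in separate methods: each model reference stays confined to its own frame so the cache can free T5 before CLIP loads. A generic illustration of that scoping pattern; the loader and encoder callables below are placeholders, not InvokeAI APIs:

```python
# Confining the model reference to one function frame lets it be released
# (and evicted from the cache) as soon as the function returns.
import gc

def scoped_encode(load_model, encode, prompt):
    with load_model() as model:  # model reference lives only in this frame
        return encode(model, prompt)

# embeds_t5 = scoped_encode(load_t5, encode_t5, "a cat")
# gc.collect()                  # the T5 model can be freed here...
# embeds_clip = scoped_encode(load_clip, encode_clip, "a cat")  # ...before CLIP loads
```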

View File

@@ -0,0 +1,60 @@
import torch
from einops import rearrange
from PIL import Image
from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
from invokeai.app.invocations.fields import (
FieldDescriptions,
Input,
InputField,
LatentsField,
WithBoard,
WithMetadata,
)
from invokeai.app.invocations.model import VAEField
from invokeai.app.invocations.primitives import ImageOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.flux.modules.autoencoder import AutoEncoder
from invokeai.backend.model_manager.load.load_base import LoadedModel
from invokeai.backend.util.devices import TorchDevice
@invocation(
"flux_vae_decode",
title="FLUX Latents to Image",
tags=["latents", "image", "vae", "l2i", "flux"],
category="latents",
version="1.0.0",
)
class FluxVaeDecodeInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Generates an image from latents."""
latents: LatentsField = InputField(
description=FieldDescriptions.latents,
input=Input.Connection,
)
vae: VAEField = InputField(
description=FieldDescriptions.vae,
input=Input.Connection,
)
def _vae_decode(self, vae_info: LoadedModel, latents: torch.Tensor) -> Image.Image:
with vae_info as vae:
assert isinstance(vae, AutoEncoder)
latents = latents.to(device=TorchDevice.choose_torch_device(), dtype=TorchDevice.choose_torch_dtype())
img = vae.decode(latents)
img = img.clamp(-1, 1)
img = rearrange(img[0], "c h w -> h w c") # noqa: F821
img_pil = Image.fromarray((127.5 * (img + 1.0)).byte().cpu().numpy())
return img_pil
@torch.no_grad()
def invoke(self, context: InvocationContext) -> ImageOutput:
latents = context.tensors.load(self.latents.latents_name)
vae_info = context.models.load(self.vae.vae)
image = self._vae_decode(vae_info=vae_info, latents=latents)
TorchDevice.empty_cache()
image_dto = context.images.save(image=image)
return ImageOutput.build(image_dto)
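A self-contained check of the tensor-to-PIL conversion used in `_vae_decode`, with a random stand-in for the decoder output: values are clamped to [-1, 1], rearranged to HWC, then mapped to [0, 255] via `127.5 * (x + 1)`:

```python
import torch
from einops import rearrange
from PIL import Image

img = torch.tanh(torch.randn(3, 64, 64))  # stand-in decoder output in [-1, 1]
img = img.clamp(-1, 1)
img = rearrange(img, "c h w -> h w c")
pil = Image.fromarray((127.5 * (img + 1.0)).byte().cpu().numpy())
assert pil.size == (64, 64)  # uint8 RGB image
```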

View File

@@ -0,0 +1,67 @@
import einops
import torch
from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
from invokeai.app.invocations.fields import (
FieldDescriptions,
ImageField,
Input,
InputField,
)
from invokeai.app.invocations.model import VAEField
from invokeai.app.invocations.primitives import LatentsOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.flux.modules.autoencoder import AutoEncoder
from invokeai.backend.model_manager import LoadedModel
from invokeai.backend.stable_diffusion.diffusers_pipeline import image_resized_to_grid_as_tensor
from invokeai.backend.util.devices import TorchDevice
@invocation(
"flux_vae_encode",
title="FLUX Image to Latents",
tags=["latents", "image", "vae", "i2l", "flux"],
category="latents",
version="1.0.0",
)
class FluxVaeEncodeInvocation(BaseInvocation):
"""Encodes an image into latents."""
image: ImageField = InputField(
description="The image to encode.",
)
vae: VAEField = InputField(
description=FieldDescriptions.vae,
input=Input.Connection,
)
@staticmethod
def vae_encode(vae_info: LoadedModel, image_tensor: torch.Tensor) -> torch.Tensor:
# TODO(ryand): Expose seed parameter at the invocation level.
# TODO(ryand): Write a util function for generating random tensors that is consistent across devices / dtypes.
# There's a starting point in get_noise(...), but it needs to be extracted and generalized. This function
# should be used for VAE encode sampling.
generator = torch.Generator(device=TorchDevice.choose_torch_device()).manual_seed(0)
with vae_info as vae:
assert isinstance(vae, AutoEncoder)
image_tensor = image_tensor.to(
device=TorchDevice.choose_torch_device(), dtype=TorchDevice.choose_torch_dtype()
)
latents = vae.encode(image_tensor, sample=True, generator=generator)
return latents
@torch.no_grad()
def invoke(self, context: InvocationContext) -> LatentsOutput:
image = context.images.get_pil(self.image.image_name)
vae_info = context.models.load(self.vae.vae)
image_tensor = image_resized_to_grid_as_tensor(image.convert("RGB"))
if image_tensor.dim() == 3:
image_tensor = einops.rearrange(image_tensor, "c h w -> 1 c h w")
latents = self.vae_encode(vae_info=vae_info, image_tensor=image_tensor)
latents = latents.to("cpu")
name = context.tensors.save(tensor=latents)
return LatentsOutput.build(latents_name=name, latents=latents, seed=None)

View File

@@ -126,7 +126,7 @@ class ImageMaskToTensorInvocation(BaseInvocation, WithMetadata):
title="Tensor Mask to Image",
tags=["mask"],
category="mask",
version="1.0.0",
version="1.1.0",
)
class MaskTensorToImageInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Convert a mask tensor to an image."""
@@ -135,6 +135,11 @@ class MaskTensorToImageInvocation(BaseInvocation, WithMetadata, WithBoard):
def invoke(self, context: InvocationContext) -> ImageOutput:
mask = context.tensors.load(self.mask.tensor_name)
# Squeeze the channel dimension if it exists.
if mask.dim() == 3:
mask = mask.squeeze(0)
# Ensure that the mask is binary.
if mask.dtype != torch.bool:
mask = mask > 0.5
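A minimal check of the new squeeze-and-binarize behavior, using a small float mask with a leading channel dimension as stand-in input:

```python
import torch

mask = torch.tensor([[[0.2, 0.8], [0.5, 1.0]]])  # shape (1, 2, 2)
if mask.dim() == 3:
    mask = mask.squeeze(0)                       # -> (2, 2)
if mask.dtype != torch.bool:
    mask = mask > 0.5                            # strictly greater: 0.5 -> False
assert mask.tolist() == [[False, True], [False, True]]
```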

View File

@@ -1,5 +1,5 @@
import copy
from typing import List, Optional
from typing import List, Literal, Optional
from pydantic import BaseModel, Field
@@ -13,7 +13,14 @@ from invokeai.app.invocations.baseinvocation import (
from invokeai.app.invocations.fields import FieldDescriptions, Input, InputField, OutputField, UIType
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.app.shared.models import FreeUConfig
from invokeai.backend.model_manager.config import AnyModelConfig, BaseModelType, ModelType, SubModelType
from invokeai.backend.flux.util import max_seq_lengths
from invokeai.backend.model_manager.config import (
AnyModelConfig,
BaseModelType,
CheckpointConfigBase,
ModelType,
SubModelType,
)
class ModelIdentifierField(BaseModel):
@@ -60,6 +67,15 @@ class CLIPField(BaseModel):
loras: List[LoRAField] = Field(description="LoRAs to apply on model loading")
class TransformerField(BaseModel):
transformer: ModelIdentifierField = Field(description="Info to load Transformer submodel")
class T5EncoderField(BaseModel):
tokenizer: ModelIdentifierField = Field(description="Info to load tokenizer submodel")
text_encoder: ModelIdentifierField = Field(description="Info to load text_encoder submodel")
class VAEField(BaseModel):
vae: ModelIdentifierField = Field(description="Info to load vae submodel")
seamless_axes: List[str] = Field(default_factory=list, description='Axes ("x" and "y") to which to apply seamless')
@@ -122,6 +138,78 @@ class ModelIdentifierInvocation(BaseInvocation):
return ModelIdentifierOutput(model=self.model)
@invocation_output("flux_model_loader_output")
class FluxModelLoaderOutput(BaseInvocationOutput):
"""Flux base model loader output"""
transformer: TransformerField = OutputField(description=FieldDescriptions.transformer, title="Transformer")
clip: CLIPField = OutputField(description=FieldDescriptions.clip, title="CLIP")
t5_encoder: T5EncoderField = OutputField(description=FieldDescriptions.t5_encoder, title="T5 Encoder")
vae: VAEField = OutputField(description=FieldDescriptions.vae, title="VAE")
max_seq_len: Literal[256, 512] = OutputField(
description="The max sequence length to used for the T5 encoder. (256 for schnell transformer, 512 for dev transformer)",
title="Max Seq Length",
)
@invocation(
"flux_model_loader",
title="Flux Main Model",
tags=["model", "flux"],
category="model",
version="1.0.4",
classification=Classification.Prototype,
)
class FluxModelLoaderInvocation(BaseInvocation):
"""Loads a flux base model, outputting its submodels."""
model: ModelIdentifierField = InputField(
description=FieldDescriptions.flux_model,
ui_type=UIType.FluxMainModel,
input=Input.Direct,
)
t5_encoder_model: ModelIdentifierField = InputField(
description=FieldDescriptions.t5_encoder, ui_type=UIType.T5EncoderModel, input=Input.Direct, title="T5 Encoder"
)
clip_embed_model: ModelIdentifierField = InputField(
description=FieldDescriptions.clip_embed_model,
ui_type=UIType.CLIPEmbedModel,
input=Input.Direct,
title="CLIP Embed",
)
vae_model: ModelIdentifierField = InputField(
description=FieldDescriptions.vae_model, ui_type=UIType.FluxVAEModel, title="VAE"
)
def invoke(self, context: InvocationContext) -> FluxModelLoaderOutput:
for key in [self.model.key, self.t5_encoder_model.key, self.clip_embed_model.key, self.vae_model.key]:
if not context.models.exists(key):
raise ValueError(f"Unknown model: {key}")
transformer = self.model.model_copy(update={"submodel_type": SubModelType.Transformer})
vae = self.vae_model.model_copy(update={"submodel_type": SubModelType.VAE})
tokenizer = self.clip_embed_model.model_copy(update={"submodel_type": SubModelType.Tokenizer})
clip_encoder = self.clip_embed_model.model_copy(update={"submodel_type": SubModelType.TextEncoder})
tokenizer2 = self.t5_encoder_model.model_copy(update={"submodel_type": SubModelType.Tokenizer2})
t5_encoder = self.t5_encoder_model.model_copy(update={"submodel_type": SubModelType.TextEncoder2})
transformer_config = context.models.get_config(transformer)
assert isinstance(transformer_config, CheckpointConfigBase)
return FluxModelLoaderOutput(
transformer=TransformerField(transformer=transformer),
clip=CLIPField(tokenizer=tokenizer, text_encoder=clip_encoder, loras=[], skipped_layers=0),
t5_encoder=T5EncoderField(tokenizer=tokenizer2, text_encoder=t5_encoder),
vae=VAEField(vae=vae),
max_seq_len=max_seq_lengths[transformer_config.config_path],
)
@invocation(
"main_model_loader",
title="Main Model",

View File

@@ -12,6 +12,7 @@ from invokeai.app.invocations.fields import (
ConditioningField,
DenoiseMaskField,
FieldDescriptions,
FluxConditioningField,
ImageField,
Input,
InputField,
@@ -414,6 +415,17 @@ class MaskOutput(BaseInvocationOutput):
height: int = OutputField(description="The height of the mask in pixels.")
@invocation_output("flux_conditioning_output")
class FluxConditioningOutput(BaseInvocationOutput):
"""Base class for nodes that output a single conditioning tensor"""
conditioning: FluxConditioningField = OutputField(description=FieldDescriptions.cond)
@classmethod
def build(cls, conditioning_name: str) -> "FluxConditioningOutput":
return cls(conditioning=FluxConditioningField(conditioning_name=conditioning_name))
@invocation_output("conditioning_output")
class ConditioningOutput(BaseInvocationOutput):
"""Base class for nodes that output a single conditioning tensor"""

View File

@@ -88,7 +88,8 @@ class QueueItemEventBase(QueueEventBase):
item_id: int = Field(description="The ID of the queue item")
batch_id: str = Field(description="The ID of the queue batch")
origin: str | None = Field(default=None, description="The origin of the batch")
origin: str | None = Field(default=None, description="The origin of the queue item")
destination: str | None = Field(default=None, description="The destination of the queue item")
class InvocationEventBase(QueueItemEventBase):
@@ -114,6 +115,7 @@ class InvocationStartedEvent(InvocationEventBase):
item_id=queue_item.item_id,
batch_id=queue_item.batch_id,
origin=queue_item.origin,
destination=queue_item.destination,
session_id=queue_item.session_id,
invocation=invocation,
invocation_source_id=queue_item.session.prepared_source_mapping[invocation.id],
@@ -148,6 +150,7 @@ class InvocationDenoiseProgressEvent(InvocationEventBase):
item_id=queue_item.item_id,
batch_id=queue_item.batch_id,
origin=queue_item.origin,
destination=queue_item.destination,
session_id=queue_item.session_id,
invocation=invocation,
invocation_source_id=queue_item.session.prepared_source_mapping[invocation.id],
@@ -186,6 +189,7 @@ class InvocationCompleteEvent(InvocationEventBase):
item_id=queue_item.item_id,
batch_id=queue_item.batch_id,
origin=queue_item.origin,
destination=queue_item.destination,
session_id=queue_item.session_id,
invocation=invocation,
invocation_source_id=queue_item.session.prepared_source_mapping[invocation.id],
@@ -219,6 +223,7 @@ class InvocationErrorEvent(InvocationEventBase):
item_id=queue_item.item_id,
batch_id=queue_item.batch_id,
origin=queue_item.origin,
destination=queue_item.destination,
session_id=queue_item.session_id,
invocation=invocation,
invocation_source_id=queue_item.session.prepared_source_mapping[invocation.id],
@@ -257,6 +262,7 @@ class QueueItemStatusChangedEvent(QueueItemEventBase):
item_id=queue_item.item_id,
batch_id=queue_item.batch_id,
origin=queue_item.origin,
destination=queue_item.destination,
session_id=queue_item.session_id,
status=queue_item.status,
error_type=queue_item.error_type,

View File

@@ -103,7 +103,7 @@ class HFModelSource(StringLikeSource):
if self.variant:
base += f":{self.variant or ''}"
if self.subfolder:
base += f":{self.subfolder}"
base += f"::{self.subfolder.as_posix()}"
return base
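A worked example of the revised source string: the subfolder is now joined with `::` and rendered via `as_posix()`. The repo id, variant, and subfolder values are made up:

```python
from pathlib import Path

repo_id, variant, subfolder = "stabilityai/sdxl-turbo", "fp16", Path("vae")
base = repo_id
if variant:
    base += f":{variant}"
if subfolder:
    base += f"::{subfolder.as_posix()}"
assert base == "stabilityai/sdxl-turbo:fp16::vae"
```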

View File

@@ -783,8 +783,9 @@ class ModelInstallService(ModelInstallServiceBase):
# So what we do is to synthesize a folder named "sdxl-turbo_vae" here.
if subfolder:
top = Path(remote_files[0].path.parts[0]) # e.g. "sdxl-turbo/"
path_to_remove = top / subfolder.parts[-1] # sdxl-turbo/vae/
path_to_add = Path(f"{top}_{subfolder}")
path_to_remove = top / subfolder # sdxl-turbo/vae/
subfolder_rename = subfolder.name.replace("/", "_").replace("\\", "_")
path_to_add = Path(f"{top}_{subfolder_rename}")
else:
path_to_remove = Path(".")
path_to_add = Path(".")
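A worked example of the synthesized-folder fix above (paths are illustrative): the full subfolder path is removed, and any separators in the name are flattened to underscores when building the replacement folder:

```python
from pathlib import Path

top = Path("sdxl-turbo")
subfolder = Path("vae")
path_to_remove = top / subfolder                               # sdxl-turbo/vae
subfolder_rename = subfolder.name.replace("/", "_").replace("\\", "_")
path_to_add = Path(f"{top}_{subfolder_rename}")                # sdxl-turbo_vae
assert (path_to_remove, path_to_add) == (Path("sdxl-turbo/vae"), Path("sdxl-turbo_vae"))
```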

View File

@@ -77,6 +77,7 @@ class ModelRecordChanges(BaseModelExcludeNull):
type: Optional[ModelType] = Field(description="Type of model", default=None)
key: Optional[str] = Field(description="Database ID for this model", default=None)
hash: Optional[str] = Field(description="hash of model file", default=None)
format: Optional[str] = Field(description="format of model file", default=None)
trigger_phrases: Optional[set[str]] = Field(description="Set of trigger phrases for this model", default=None)
default_settings: Optional[MainModelDefaultSettings | ControlAdapterDefaultSettings] = Field(
description="Default settings for this model", default=None

View File

@@ -6,7 +6,7 @@ from invokeai.app.services.session_queue.session_queue_common import (
Batch,
BatchStatus,
CancelByBatchIDsResult,
CancelByOriginResult,
CancelByDestinationResult,
CancelByQueueIDResult,
ClearResult,
EnqueueBatchResult,
@@ -97,8 +97,8 @@ class SessionQueueBase(ABC):
pass
@abstractmethod
def cancel_by_origin(self, queue_id: str, origin: str) -> CancelByOriginResult:
"""Cancels all queue items with the given batch origin"""
def cancel_by_destination(self, queue_id: str, destination: str) -> CancelByDestinationResult:
"""Cancels all queue items with the given batch destination"""
pass
@abstractmethod

View File

@@ -77,7 +77,14 @@ BatchDataCollection: TypeAlias = list[list[BatchDatum]]
class Batch(BaseModel):
batch_id: str = Field(default_factory=uuid_string, description="The ID of the batch")
origin: str | None = Field(default=None, description="The origin of this batch.")
origin: str | None = Field(
default=None,
description="The origin of this queue item. This data is used by the frontend to determine how to handle results.",
)
destination: str | None = Field(
default=None,
description="The origin of this queue item. This data is used by the frontend to determine how to handle results",
)
data: Optional[BatchDataCollection] = Field(default=None, description="The batch data collection.")
graph: Graph = Field(description="The graph to initialize the session with")
workflow: Optional[WorkflowWithoutID] = Field(
@@ -196,7 +203,14 @@ class SessionQueueItemWithoutGraph(BaseModel):
status: QUEUE_ITEM_STATUS = Field(default="pending", description="The status of this queue item")
priority: int = Field(default=0, description="The priority of this queue item")
batch_id: str = Field(description="The ID of the batch associated with this queue item")
origin: str | None = Field(default=None, description="The origin of this queue item. ")
origin: str | None = Field(
default=None,
description="The origin of this queue item. This data is used by the frontend to determine how to handle results.",
)
destination: str | None = Field(
default=None,
description="The origin of this queue item. This data is used by the frontend to determine how to handle results",
)
session_id: str = Field(
description="The ID of the session associated with this queue item. The session doesn't exist in graph_executions until the queue item is executed."
)
@@ -297,6 +311,7 @@ class BatchStatus(BaseModel):
queue_id: str = Field(..., description="The ID of the queue")
batch_id: str = Field(..., description="The ID of the batch")
origin: str | None = Field(..., description="The origin of the batch")
destination: str | None = Field(..., description="The destination of the batch")
pending: int = Field(..., description="Number of queue items with status 'pending'")
in_progress: int = Field(..., description="Number of queue items with status 'in_progress'")
completed: int = Field(..., description="Number of queue items with status 'complete'")
@@ -331,10 +346,10 @@ class CancelByBatchIDsResult(BaseModel):
canceled: int = Field(..., description="Number of queue items canceled")
class CancelByOriginResult(BaseModel):
"""Result of canceling by list of batch ids"""
class CancelByDestinationResult(CancelByBatchIDsResult):
"""Result of canceling by a destination"""
canceled: int = Field(..., description="Number of queue items canceled")
pass
class CancelByQueueIDResult(CancelByBatchIDsResult):
@@ -443,6 +458,7 @@ class SessionQueueValueToInsert(NamedTuple):
priority: int # priority
workflow: Optional[str] # workflow json
origin: str | None
destination: str | None
ValuesToInsert: TypeAlias = list[SessionQueueValueToInsert]
@@ -464,6 +480,7 @@ def prepare_values_to_insert(queue_id: str, batch: Batch, priority: int, max_new
priority, # priority
json.dumps(workflow, default=to_jsonable_python) if workflow else None, # workflow (json)
batch.origin, # origin
batch.destination, # destination
)
)
return values_to_insert

View File

@@ -10,7 +10,7 @@ from invokeai.app.services.session_queue.session_queue_common import (
Batch,
BatchStatus,
CancelByBatchIDsResult,
CancelByOriginResult,
CancelByDestinationResult,
CancelByQueueIDResult,
ClearResult,
EnqueueBatchResult,
@@ -128,8 +128,8 @@ class SqliteSessionQueue(SessionQueueBase):
self.__cursor.executemany(
"""--sql
INSERT INTO session_queue (queue_id, session, session_id, batch_id, field_values, priority, workflow, origin)
VALUES (?, ?, ?, ?, ?, ?, ?, ?)
INSERT INTO session_queue (queue_id, session, session_id, batch_id, field_values, priority, workflow, origin, destination)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)
""",
values_to_insert,
)
@@ -426,19 +426,19 @@ class SqliteSessionQueue(SessionQueueBase):
self.__lock.release()
return CancelByBatchIDsResult(canceled=count)
def cancel_by_origin(self, queue_id: str, origin: str) -> CancelByOriginResult:
def cancel_by_destination(self, queue_id: str, destination: str) -> CancelByDestinationResult:
try:
current_queue_item = self.get_current(queue_id)
self.__lock.acquire()
where = """--sql
WHERE
queue_id == ?
AND origin == ?
AND destination == ?
AND status != 'canceled'
AND status != 'completed'
AND status != 'failed'
"""
params = (queue_id, origin)
params = (queue_id, destination)
self.__cursor.execute(
f"""--sql
SELECT COUNT(*)
@@ -457,14 +457,14 @@ class SqliteSessionQueue(SessionQueueBase):
params,
)
self.__conn.commit()
if current_queue_item is not None and current_queue_item.origin == origin:
if current_queue_item is not None and current_queue_item.destination == destination:
self._set_queue_item_status(current_queue_item.item_id, "canceled")
except Exception:
self.__conn.rollback()
raise
finally:
self.__lock.release()
return CancelByOriginResult(canceled=count)
return CancelByDestinationResult(canceled=count)
def cancel_by_queue_id(self, queue_id: str) -> CancelByQueueIDResult:
try:
@@ -579,7 +579,8 @@ class SqliteSessionQueue(SessionQueueBase):
session_id,
batch_id,
queue_id,
origin
origin,
destination
FROM session_queue
WHERE queue_id = ?
"""
@@ -659,7 +660,7 @@ class SqliteSessionQueue(SessionQueueBase):
self.__lock.acquire()
self.__cursor.execute(
"""--sql
SELECT status, count(*), origin
SELECT status, count(*), origin, destination
FROM session_queue
WHERE
queue_id = ?
@@ -672,6 +673,7 @@ class SqliteSessionQueue(SessionQueueBase):
total = sum(row[1] for row in result)
counts: dict[str, int] = {row[0]: row[1] for row in result}
origin = result[0]["origin"] if result else None
destination = result[0]["destination"] if result else None
except Exception:
self.__conn.rollback()
raise
@@ -681,6 +683,7 @@ class SqliteSessionQueue(SessionQueueBase):
return BatchStatus(
batch_id=batch_id,
origin=origin,
destination=destination,
queue_id=queue_id,
pending=counts.get("pending", 0),
in_progress=counts.get("in_progress", 0),
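A standalone sketch of the destination-based cancel query, run against an in-memory table that mimics the relevant `session_queue` columns (table shape simplified):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE session_queue (queue_id TEXT, destination TEXT, status TEXT)")
conn.executemany(
    "INSERT INTO session_queue VALUES (?, ?, ?)",
    [("default", "canvas", "pending"), ("default", "gallery", "pending")],
)
# Mirror of the WHERE clause above: only non-terminal items are canceled.
conn.execute(
    """UPDATE session_queue SET status = 'canceled'
       WHERE queue_id = ? AND destination = ?
         AND status NOT IN ('canceled', 'completed', 'failed')""",
    ("default", "canvas"),
)
rows = conn.execute("SELECT destination, status FROM session_queue").fetchall()
assert rows == [("canvas", "canceled"), ("gallery", "pending")]
```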

View File

@@ -14,7 +14,7 @@ from invokeai.app.services.image_records.image_records_common import ImageCatego
from invokeai.app.services.images.images_common import ImageDTO
from invokeai.app.services.invocation_services import InvocationServices
from invokeai.app.services.model_records.model_records_base import UnknownModelException
from invokeai.app.util.step_callback import stable_diffusion_step_callback
from invokeai.app.util.step_callback import flux_step_callback, stable_diffusion_step_callback
from invokeai.backend.model_manager.config import (
AnyModel,
AnyModelConfig,
@@ -557,6 +557,24 @@ class UtilInterface(InvocationContextInterface):
is_canceled=self.is_canceled,
)
def flux_step_callback(self, intermediate_state: PipelineIntermediateState) -> None:
"""
The step callback emits a progress event with the current step, the total number of
steps, a preview image, and some other internal metadata.
This should be called after each denoising step.
Args:
intermediate_state: The intermediate state of the diffusion pipeline.
"""
flux_step_callback(
context_data=self._data,
intermediate_state=intermediate_state,
events=self._services.events,
is_canceled=self.is_canceled,
)
class InvocationContext:
"""Provides access to various services and data for the current invocation.

View File

@@ -10,9 +10,11 @@ class Migration15Callback:
def _add_origin_col(self, cursor: sqlite3.Cursor) -> None:
"""
- Adds `origin` column to the session queue table.
- Adds `destination` column to the session queue table.
"""
cursor.execute("ALTER TABLE session_queue ADD COLUMN origin TEXT;")
cursor.execute("ALTER TABLE session_queue ADD COLUMN destination TEXT;")
def build_migration_15() -> Migration:
@@ -21,6 +23,7 @@ def build_migration_15() -> Migration:
This migration does the following:
- Adds `origin` column to the session queue table.
- Adds `destination` column to the session queue table.
"""
migration_15 = Migration(
from_version=14,

View File

@@ -0,0 +1,407 @@
{
"name": "FLUX Image to Image",
"author": "InvokeAI",
"description": "A simple image-to-image workflow using a FLUX dev model. ",
"version": "1.0.4",
"contact": "",
"tags": "image2image, flux, image-to-image",
"notes": "Prerequisite model downloads: T5 Encoder, CLIP-L Encoder, and FLUX VAE. Quantized and un-quantized versions can be found in the starter models tab within your Model Manager. We recommend using FLUX dev models for image-to-image workflows. The image-to-image performance with FLUX schnell models is poor.",
"exposedFields": [
{
"nodeId": "f8d9d7c8-9ed7-4bd7-9e42-ab0e89bfac90",
"fieldName": "model"
},
{
"nodeId": "f8d9d7c8-9ed7-4bd7-9e42-ab0e89bfac90",
"fieldName": "t5_encoder_model"
},
{
"nodeId": "f8d9d7c8-9ed7-4bd7-9e42-ab0e89bfac90",
"fieldName": "clip_embed_model"
},
{
"nodeId": "f8d9d7c8-9ed7-4bd7-9e42-ab0e89bfac90",
"fieldName": "vae_model"
},
{
"nodeId": "ace0258f-67d7-4eee-a218-6fff27065214",
"fieldName": "denoising_start"
},
{
"nodeId": "01f674f8-b3d1-4df1-acac-6cb8e0bfb63c",
"fieldName": "prompt"
},
{
"nodeId": "ace0258f-67d7-4eee-a218-6fff27065214",
"fieldName": "num_steps"
}
],
"meta": {
"version": "3.0.0",
"category": "default"
},
"nodes": [
{
"id": "2981a67c-480f-4237-9384-26b68dbf912b",
"type": "invocation",
"data": {
"id": "2981a67c-480f-4237-9384-26b68dbf912b",
"type": "flux_vae_encode",
"version": "1.0.0",
"label": "",
"notes": "",
"isOpen": true,
"isIntermediate": true,
"useCache": true,
"inputs": {
"image": {
"name": "image",
"label": "",
"value": {
"image_name": "8a5c62aa-9335-45d2-9c71-89af9fc1f8d4.png"
}
},
"vae": {
"name": "vae",
"label": ""
}
}
},
"position": {
"x": 732.7680166609682,
"y": -24.37398171806909
}
},
{
"id": "ace0258f-67d7-4eee-a218-6fff27065214",
"type": "invocation",
"data": {
"id": "ace0258f-67d7-4eee-a218-6fff27065214",
"type": "flux_denoise",
"version": "1.0.0",
"label": "",
"notes": "",
"isOpen": true,
"isIntermediate": true,
"useCache": true,
"inputs": {
"board": {
"name": "board",
"label": ""
},
"metadata": {
"name": "metadata",
"label": ""
},
"latents": {
"name": "latents",
"label": ""
},
"denoise_mask": {
"name": "denoise_mask",
"label": ""
},
"denoising_start": {
"name": "denoising_start",
"label": "",
"value": 0.04
},
"denoising_end": {
"name": "denoising_end",
"label": "",
"value": 1
},
"transformer": {
"name": "transformer",
"label": ""
},
"positive_text_conditioning": {
"name": "positive_text_conditioning",
"label": ""
},
"width": {
"name": "width",
"label": "",
"value": 1024
},
"height": {
"name": "height",
"label": "",
"value": 1024
},
"num_steps": {
"name": "num_steps",
"label": "Steps (Recommend 30 for Dev, 4 for Schnell)",
"value": 30
},
"guidance": {
"name": "guidance",
"label": "",
"value": 4
},
"seed": {
"name": "seed",
"label": "",
"value": 0
}
}
},
"position": {
"x": 1182.8836633018684,
"y": -251.38882958913183
}
},
{
"id": "7e5172eb-48c1-44db-a770-8fd83e1435d1",
"type": "invocation",
"data": {
"id": "7e5172eb-48c1-44db-a770-8fd83e1435d1",
"type": "flux_vae_decode",
"version": "1.0.0",
"label": "",
"notes": "",
"isOpen": true,
"isIntermediate": false,
"useCache": true,
"inputs": {
"board": {
"name": "board",
"label": ""
},
"metadata": {
"name": "metadata",
"label": ""
},
"latents": {
"name": "latents",
"label": ""
},
"vae": {
"name": "vae",
"label": ""
}
}
},
"position": {
"x": 1575.5797431839133,
"y": -209.00150975507415
}
},
{
"id": "f8d9d7c8-9ed7-4bd7-9e42-ab0e89bfac90",
"type": "invocation",
"data": {
"id": "f8d9d7c8-9ed7-4bd7-9e42-ab0e89bfac90",
"type": "flux_model_loader",
"version": "1.0.4",
"label": "",
"notes": "",
"isOpen": true,
"isIntermediate": true,
"useCache": false,
"inputs": {
"model": {
"name": "model",
"label": "Model (dev variant recommended for Image-to-Image)"
},
"t5_encoder_model": {
"name": "t5_encoder_model",
"label": ""
},
"clip_embed_model": {
"name": "clip_embed_model",
"label": "",
"value": {
"key": "fa23a584-b623-415d-832a-21b5098ff1a1",
"hash": "blake3:17c19f0ef941c3b7609a9c94a659ca5364de0be364a91d4179f0e39ba17c3b70",
"name": "clip-vit-large-patch14",
"base": "any",
"type": "clip_embed"
}
},
"vae_model": {
"name": "vae_model",
"label": "",
"value": {
"key": "74fc82ba-c0a8-479d-a890-2126f82da758",
"hash": "blake3:ce21cb76364aa6e2421311cf4a4b5eb052a76c4f1cd207b50703d8978198a068",
"name": "FLUX.1-schnell_ae",
"base": "flux",
"type": "vae"
}
}
}
},
"position": {
"x": 328.1809894659957,
"y": -90.2241133566946
}
},
{
"id": "01f674f8-b3d1-4df1-acac-6cb8e0bfb63c",
"type": "invocation",
"data": {
"id": "01f674f8-b3d1-4df1-acac-6cb8e0bfb63c",
"type": "flux_text_encoder",
"version": "1.0.0",
"label": "",
"notes": "",
"isOpen": true,
"isIntermediate": true,
"useCache": true,
"inputs": {
"clip": {
"name": "clip",
"label": ""
},
"t5_encoder": {
"name": "t5_encoder",
"label": ""
},
"t5_max_seq_len": {
"name": "t5_max_seq_len",
"label": "T5 Max Seq Len",
"value": 256
},
"prompt": {
"name": "prompt",
"label": "",
"value": "a cat wearing a birthday hat"
}
}
},
"position": {
"x": 745.8823365057267,
"y": -299.60249175851914
}
},
{
"id": "4754c534-a5f3-4ad0-9382-7887985e668c",
"type": "invocation",
"data": {
"id": "4754c534-a5f3-4ad0-9382-7887985e668c",
"type": "rand_int",
"version": "1.0.1",
"label": "",
"notes": "",
"isOpen": true,
"isIntermediate": true,
"useCache": false,
"inputs": {
"low": {
"name": "low",
"label": "",
"value": 0
},
"high": {
"name": "high",
"label": "",
"value": 2147483647
}
}
},
"position": {
"x": 725.834098928012,
"y": 496.2710031089931
}
}
],
"edges": [
{
"id": "reactflow__edge-2981a67c-480f-4237-9384-26b68dbf912bheight-ace0258f-67d7-4eee-a218-6fff27065214height",
"type": "default",
"source": "2981a67c-480f-4237-9384-26b68dbf912b",
"target": "ace0258f-67d7-4eee-a218-6fff27065214",
"sourceHandle": "height",
"targetHandle": "height"
},
{
"id": "reactflow__edge-2981a67c-480f-4237-9384-26b68dbf912bwidth-ace0258f-67d7-4eee-a218-6fff27065214width",
"type": "default",
"source": "2981a67c-480f-4237-9384-26b68dbf912b",
"target": "ace0258f-67d7-4eee-a218-6fff27065214",
"sourceHandle": "width",
"targetHandle": "width"
},
{
"id": "reactflow__edge-2981a67c-480f-4237-9384-26b68dbf912blatents-ace0258f-67d7-4eee-a218-6fff27065214latents",
"type": "default",
"source": "2981a67c-480f-4237-9384-26b68dbf912b",
"target": "ace0258f-67d7-4eee-a218-6fff27065214",
"sourceHandle": "latents",
"targetHandle": "latents"
},
{
"id": "reactflow__edge-f8d9d7c8-9ed7-4bd7-9e42-ab0e89bfac90vae-2981a67c-480f-4237-9384-26b68dbf912bvae",
"type": "default",
"source": "f8d9d7c8-9ed7-4bd7-9e42-ab0e89bfac90",
"target": "2981a67c-480f-4237-9384-26b68dbf912b",
"sourceHandle": "vae",
"targetHandle": "vae"
},
{
"id": "reactflow__edge-ace0258f-67d7-4eee-a218-6fff27065214latents-7e5172eb-48c1-44db-a770-8fd83e1435d1latents",
"type": "default",
"source": "ace0258f-67d7-4eee-a218-6fff27065214",
"target": "7e5172eb-48c1-44db-a770-8fd83e1435d1",
"sourceHandle": "latents",
"targetHandle": "latents"
},
{
"id": "reactflow__edge-4754c534-a5f3-4ad0-9382-7887985e668cvalue-ace0258f-67d7-4eee-a218-6fff27065214seed",
"type": "default",
"source": "4754c534-a5f3-4ad0-9382-7887985e668c",
"target": "ace0258f-67d7-4eee-a218-6fff27065214",
"sourceHandle": "value",
"targetHandle": "seed"
},
{
"id": "reactflow__edge-f8d9d7c8-9ed7-4bd7-9e42-ab0e89bfac90transformer-ace0258f-67d7-4eee-a218-6fff27065214transformer",
"type": "default",
"source": "f8d9d7c8-9ed7-4bd7-9e42-ab0e89bfac90",
"target": "ace0258f-67d7-4eee-a218-6fff27065214",
"sourceHandle": "transformer",
"targetHandle": "transformer"
},
{
"id": "reactflow__edge-01f674f8-b3d1-4df1-acac-6cb8e0bfb63cconditioning-ace0258f-67d7-4eee-a218-6fff27065214positive_text_conditioning",
"type": "default",
"source": "01f674f8-b3d1-4df1-acac-6cb8e0bfb63c",
"target": "ace0258f-67d7-4eee-a218-6fff27065214",
"sourceHandle": "conditioning",
"targetHandle": "positive_text_conditioning"
},
{
"id": "reactflow__edge-f8d9d7c8-9ed7-4bd7-9e42-ab0e89bfac90vae-7e5172eb-48c1-44db-a770-8fd83e1435d1vae",
"type": "default",
"source": "f8d9d7c8-9ed7-4bd7-9e42-ab0e89bfac90",
"target": "7e5172eb-48c1-44db-a770-8fd83e1435d1",
"sourceHandle": "vae",
"targetHandle": "vae"
},
{
"id": "reactflow__edge-f8d9d7c8-9ed7-4bd7-9e42-ab0e89bfac90max_seq_len-01f674f8-b3d1-4df1-acac-6cb8e0bfb63ct5_max_seq_len",
"type": "default",
"source": "f8d9d7c8-9ed7-4bd7-9e42-ab0e89bfac90",
"target": "01f674f8-b3d1-4df1-acac-6cb8e0bfb63c",
"sourceHandle": "max_seq_len",
"targetHandle": "t5_max_seq_len"
},
{
"id": "reactflow__edge-f8d9d7c8-9ed7-4bd7-9e42-ab0e89bfac90t5_encoder-01f674f8-b3d1-4df1-acac-6cb8e0bfb63ct5_encoder",
"type": "default",
"source": "f8d9d7c8-9ed7-4bd7-9e42-ab0e89bfac90",
"target": "01f674f8-b3d1-4df1-acac-6cb8e0bfb63c",
"sourceHandle": "t5_encoder",
"targetHandle": "t5_encoder"
},
{
"id": "reactflow__edge-f8d9d7c8-9ed7-4bd7-9e42-ab0e89bfac90clip-01f674f8-b3d1-4df1-acac-6cb8e0bfb63cclip",
"type": "default",
"source": "f8d9d7c8-9ed7-4bd7-9e42-ab0e89bfac90",
"target": "01f674f8-b3d1-4df1-acac-6cb8e0bfb63c",
"sourceHandle": "clip",
"targetHandle": "clip"
}
]
}

View File

@@ -0,0 +1,326 @@
{
"name": "FLUX Text to Image",
"author": "InvokeAI",
"description": "A simple text-to-image workflow using FLUX dev or schnell models.",
"version": "1.0.4",
"contact": "",
"tags": "text2image, flux",
"notes": "Prerequisite model downloads: T5 Encoder, CLIP-L Encoder, and FLUX VAE. Quantized and un-quantized versions can be found in the starter models tab within your Model Manager. We recommend 4 steps for FLUX schnell models and 30 steps for FLUX dev models.",
"exposedFields": [
{
"nodeId": "f8d9d7c8-9ed7-4bd7-9e42-ab0e89bfac90",
"fieldName": "model"
},
{
"nodeId": "f8d9d7c8-9ed7-4bd7-9e42-ab0e89bfac90",
"fieldName": "t5_encoder_model"
},
{
"nodeId": "f8d9d7c8-9ed7-4bd7-9e42-ab0e89bfac90",
"fieldName": "clip_embed_model"
},
{
"nodeId": "f8d9d7c8-9ed7-4bd7-9e42-ab0e89bfac90",
"fieldName": "vae_model"
},
{
"nodeId": "01f674f8-b3d1-4df1-acac-6cb8e0bfb63c",
"fieldName": "prompt"
},
{
"nodeId": "4fe24f07-f906-4f55-ab2c-9beee56ef5bd",
"fieldName": "num_steps"
}
],
"meta": {
"version": "3.0.0",
"category": "default"
},
"nodes": [
{
"id": "4fe24f07-f906-4f55-ab2c-9beee56ef5bd",
"type": "invocation",
"data": {
"id": "4fe24f07-f906-4f55-ab2c-9beee56ef5bd",
"type": "flux_denoise",
"version": "1.0.0",
"label": "",
"notes": "",
"isOpen": true,
"isIntermediate": true,
"useCache": true,
"inputs": {
"board": {
"name": "board",
"label": ""
},
"metadata": {
"name": "metadata",
"label": ""
},
"latents": {
"name": "latents",
"label": ""
},
"denoise_mask": {
"name": "denoise_mask",
"label": ""
},
"denoising_start": {
"name": "denoising_start",
"label": "",
"value": 0
},
"denoising_end": {
"name": "denoising_end",
"label": "",
"value": 1
},
"transformer": {
"name": "transformer",
"label": ""
},
"positive_text_conditioning": {
"name": "positive_text_conditioning",
"label": ""
},
"width": {
"name": "width",
"label": "",
"value": 1024
},
"height": {
"name": "height",
"label": "",
"value": 1024
},
"num_steps": {
"name": "num_steps",
"label": "Steps (Recommend 30 for Dev, 4 for Schnell)",
"value": 30
},
"guidance": {
"name": "guidance",
"label": "",
"value": 4
},
"seed": {
"name": "seed",
"label": "",
"value": 0
}
}
},
"position": {
"x": 1186.1868226120378,
"y": -214.9459927686657
}
},
{
"id": "7e5172eb-48c1-44db-a770-8fd83e1435d1",
"type": "invocation",
"data": {
"id": "7e5172eb-48c1-44db-a770-8fd83e1435d1",
"type": "flux_vae_decode",
"version": "1.0.0",
"label": "",
"notes": "",
"isOpen": true,
"isIntermediate": false,
"useCache": true,
"inputs": {
"board": {
"name": "board",
"label": ""
},
"metadata": {
"name": "metadata",
"label": ""
},
"latents": {
"name": "latents",
"label": ""
},
"vae": {
"name": "vae",
"label": ""
}
}
},
"position": {
"x": 1575.5797431839133,
"y": -209.00150975507415
}
},
{
"id": "f8d9d7c8-9ed7-4bd7-9e42-ab0e89bfac90",
"type": "invocation",
"data": {
"id": "f8d9d7c8-9ed7-4bd7-9e42-ab0e89bfac90",
"type": "flux_model_loader",
"version": "1.0.4",
"label": "",
"notes": "",
"isOpen": true,
"isIntermediate": true,
"useCache": false,
"inputs": {
"model": {
"name": "model",
"label": ""
},
"t5_encoder_model": {
"name": "t5_encoder_model",
"label": ""
},
"clip_embed_model": {
"name": "clip_embed_model",
"label": ""
},
"vae_model": {
"name": "vae_model",
"label": ""
}
}
},
"position": {
"x": 381.1882713063478,
"y": -95.89663532854017
}
},
{
"id": "01f674f8-b3d1-4df1-acac-6cb8e0bfb63c",
"type": "invocation",
"data": {
"id": "01f674f8-b3d1-4df1-acac-6cb8e0bfb63c",
"type": "flux_text_encoder",
"version": "1.0.0",
"label": "",
"notes": "",
"isOpen": true,
"isIntermediate": true,
"useCache": true,
"inputs": {
"clip": {
"name": "clip",
"label": ""
},
"t5_encoder": {
"name": "t5_encoder",
"label": ""
},
"t5_max_seq_len": {
"name": "t5_max_seq_len",
"label": "T5 Max Seq Len",
"value": 256
},
"prompt": {
"name": "prompt",
"label": "",
"value": "a cat"
}
}
},
"position": {
"x": 778.4899149328337,
"y": -100.36469216659502
}
},
{
"id": "4754c534-a5f3-4ad0-9382-7887985e668c",
"type": "invocation",
"data": {
"id": "4754c534-a5f3-4ad0-9382-7887985e668c",
"type": "rand_int",
"version": "1.0.1",
"label": "",
"notes": "",
"isOpen": true,
"isIntermediate": true,
"useCache": false,
"inputs": {
"low": {
"name": "low",
"label": "",
"value": 0
},
"high": {
"name": "high",
"label": "",
"value": 2147483647
}
}
},
"position": {
"x": 800.9667463219505,
"y": 285.8297267547506
}
}
],
"edges": [
{
"id": "reactflow__edge-f8d9d7c8-9ed7-4bd7-9e42-ab0e89bfac90transformer-4fe24f07-f906-4f55-ab2c-9beee56ef5bdtransformer",
"type": "default",
"source": "f8d9d7c8-9ed7-4bd7-9e42-ab0e89bfac90",
"target": "4fe24f07-f906-4f55-ab2c-9beee56ef5bd",
"sourceHandle": "transformer",
"targetHandle": "transformer"
},
{
"id": "reactflow__edge-01f674f8-b3d1-4df1-acac-6cb8e0bfb63cconditioning-4fe24f07-f906-4f55-ab2c-9beee56ef5bdpositive_text_conditioning",
"type": "default",
"source": "01f674f8-b3d1-4df1-acac-6cb8e0bfb63c",
"target": "4fe24f07-f906-4f55-ab2c-9beee56ef5bd",
"sourceHandle": "conditioning",
"targetHandle": "positive_text_conditioning"
},
{
"id": "reactflow__edge-4754c534-a5f3-4ad0-9382-7887985e668cvalue-4fe24f07-f906-4f55-ab2c-9beee56ef5bdseed",
"type": "default",
"source": "4754c534-a5f3-4ad0-9382-7887985e668c",
"target": "4fe24f07-f906-4f55-ab2c-9beee56ef5bd",
"sourceHandle": "value",
"targetHandle": "seed"
},
{
"id": "reactflow__edge-4fe24f07-f906-4f55-ab2c-9beee56ef5bdlatents-7e5172eb-48c1-44db-a770-8fd83e1435d1latents",
"type": "default",
"source": "4fe24f07-f906-4f55-ab2c-9beee56ef5bd",
"target": "7e5172eb-48c1-44db-a770-8fd83e1435d1",
"sourceHandle": "latents",
"targetHandle": "latents"
},
{
"id": "reactflow__edge-f8d9d7c8-9ed7-4bd7-9e42-ab0e89bfac90vae-7e5172eb-48c1-44db-a770-8fd83e1435d1vae",
"type": "default",
"source": "f8d9d7c8-9ed7-4bd7-9e42-ab0e89bfac90",
"target": "7e5172eb-48c1-44db-a770-8fd83e1435d1",
"sourceHandle": "vae",
"targetHandle": "vae"
},
{
"id": "reactflow__edge-f8d9d7c8-9ed7-4bd7-9e42-ab0e89bfac90max_seq_len-01f674f8-b3d1-4df1-acac-6cb8e0bfb63ct5_max_seq_len",
"type": "default",
"source": "f8d9d7c8-9ed7-4bd7-9e42-ab0e89bfac90",
"target": "01f674f8-b3d1-4df1-acac-6cb8e0bfb63c",
"sourceHandle": "max_seq_len",
"targetHandle": "t5_max_seq_len"
},
{
"id": "reactflow__edge-f8d9d7c8-9ed7-4bd7-9e42-ab0e89bfac90t5_encoder-01f674f8-b3d1-4df1-acac-6cb8e0bfb63ct5_encoder",
"type": "default",
"source": "f8d9d7c8-9ed7-4bd7-9e42-ab0e89bfac90",
"target": "01f674f8-b3d1-4df1-acac-6cb8e0bfb63c",
"sourceHandle": "t5_encoder",
"targetHandle": "t5_encoder"
},
{
"id": "reactflow__edge-f8d9d7c8-9ed7-4bd7-9e42-ab0e89bfac90clip-01f674f8-b3d1-4df1-acac-6cb8e0bfb63cclip",
"type": "default",
"source": "f8d9d7c8-9ed7-4bd7-9e42-ab0e89bfac90",
"target": "01f674f8-b3d1-4df1-acac-6cb8e0bfb63c",
"sourceHandle": "clip",
"targetHandle": "clip"
}
]
}

View File

@@ -38,6 +38,25 @@ SD1_5_LATENT_RGB_FACTORS = [
[-0.1307, -0.1874, -0.7445], # L4
]
FLUX_LATENT_RGB_FACTORS = [
[-0.0412, 0.0149, 0.0521],
[0.0056, 0.0291, 0.0768],
[0.0342, -0.0681, -0.0427],
[-0.0258, 0.0092, 0.0463],
[0.0863, 0.0784, 0.0547],
[-0.0017, 0.0402, 0.0158],
[0.0501, 0.1058, 0.1152],
[-0.0209, -0.0218, -0.0329],
[-0.0314, 0.0083, 0.0896],
[0.0851, 0.0665, -0.0472],
[-0.0534, 0.0238, -0.0024],
[0.0452, -0.0026, 0.0048],
[0.0892, 0.0831, 0.0881],
[-0.1117, -0.0304, -0.0789],
[0.0027, -0.0479, -0.0043],
[-0.1146, -0.0827, -0.0598],
]
def sample_to_lowres_estimated_image(
samples: torch.Tensor, latent_rgb_factors: torch.Tensor, smooth_matrix: Optional[torch.Tensor] = None
@@ -94,3 +113,32 @@ def stable_diffusion_step_callback(
intermediate_state,
ProgressImage(dataURL=dataURL, width=width, height=height),
)
def flux_step_callback(
context_data: "InvocationContextData",
intermediate_state: PipelineIntermediateState,
events: "EventServiceBase",
is_canceled: Callable[[], bool],
) -> None:
if is_canceled():
raise CanceledException
sample = intermediate_state.latents
latent_rgb_factors = torch.tensor(FLUX_LATENT_RGB_FACTORS, dtype=sample.dtype, device=sample.device)
latent_image_perm = sample.permute(1, 2, 0).to(dtype=sample.dtype, device=sample.device)
latent_image = latent_image_perm @ latent_rgb_factors
latents_ubyte = (
((latent_image + 1) / 2).clamp(0, 1).mul(0xFF)  # rescale from [-1, 1] to [0, 1], then to [0, 255]
).to(device="cpu", dtype=torch.uint8)
image = Image.fromarray(latents_ubyte.cpu().numpy())
(width, height) = image.size
width *= 8
height *= 8
dataURL = image_to_dataURL(image, image_format="JPEG")
events.emit_invocation_denoise_progress(
context_data.queue_item,
context_data.invocation,
intermediate_state,
ProgressImage(dataURL=dataURL, width=width, height=height),
)

View File

@@ -0,0 +1,56 @@
from typing import Callable
import torch
from tqdm import tqdm
from invokeai.backend.flux.inpaint_extension import InpaintExtension
from invokeai.backend.flux.model import Flux
from invokeai.backend.stable_diffusion.diffusers_pipeline import PipelineIntermediateState
def denoise(
model: Flux,
# model input
img: torch.Tensor,
img_ids: torch.Tensor,
txt: torch.Tensor,
txt_ids: torch.Tensor,
vec: torch.Tensor,
# sampling parameters
timesteps: list[float],
step_callback: Callable[[PipelineIntermediateState], None],
guidance: float,
inpaint_extension: InpaintExtension | None,
):
step = 0
# guidance_vec is ignored for schnell.
guidance_vec = torch.full((img.shape[0],), guidance, device=img.device, dtype=img.dtype)
for t_curr, t_prev in tqdm(list(zip(timesteps[:-1], timesteps[1:], strict=True))):
t_vec = torch.full((img.shape[0],), t_curr, dtype=img.dtype, device=img.device)
pred = model(
img=img,
img_ids=img_ids,
txt=txt,
txt_ids=txt_ids,
y=vec,
timesteps=t_vec,
guidance=guidance_vec,
)
preview_img = img - t_curr * pred
img = img + (t_prev - t_curr) * pred
if inpaint_extension is not None:
img = inpaint_extension.merge_intermediate_latents_with_init_latents(img, t_prev)
step_callback(
PipelineIntermediateState(
step=step,
order=1,
total_steps=len(timesteps),
timestep=int(t_curr),
latents=preview_img,
),
)
step += 1
return img
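# A minimal sketch (scalar values invented for illustration) of the first-order Euler
# update performed in the loop above: the model predicts a velocity, the state is
# integrated from t_curr to t_prev, and the preview extrapolates straight to t = 0.
x, t_curr, t_prev = 1.0, 1.0, 0.75
v = 0.9                               # stand-in for the model-predicted velocity at (x, t_curr)
x_next = x + (t_prev - t_curr) * v    # 1.0 + (-0.25) * 0.9 = 0.775
x0_preview = x - t_curr * v           # 1.0 - 1.0 * 0.9 = 0.1 (what the progress preview shows)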

View File

@@ -0,0 +1,35 @@
import torch
class InpaintExtension:
"""A class for managing inpainting with FLUX."""
def __init__(self, init_latents: torch.Tensor, inpaint_mask: torch.Tensor, noise: torch.Tensor):
"""Initialize InpaintExtension.
Args:
init_latents (torch.Tensor): The initial latents (i.e. un-noised at timestep 0). In 'packed' format.
inpaint_mask (torch.Tensor): A mask specifying which elements to inpaint. Range [0, 1]. Values of 1 will be
re-generated. Values of 0 will remain unchanged. Values between 0 and 1 can be used to blend the
inpainted region with the background. In 'packed' format.
noise (torch.Tensor): The noise tensor used to noise the init_latents. In 'packed' format.
"""
assert init_latents.shape == inpaint_mask.shape == noise.shape
self._init_latents = init_latents
self._inpaint_mask = inpaint_mask
self._noise = noise
def merge_intermediate_latents_with_init_latents(
self, intermediate_latents: torch.Tensor, timestep: float
) -> torch.Tensor:
"""Merge the intermediate latents with the initial latents for the current timestep using the inpaint mask. I.e.
update the intermediate latents to keep the regions that are not being inpainted on the correct noise
trajectory.
This function should be called after each denoising step.
"""
# Noise the init latents for the current timestep.
noised_init_latents = self._noise * timestep + (1.0 - timestep) * self._init_latents
# Merge the intermediate latents with the noised_init_latents using the inpaint_mask.
return intermediate_latents * self._inpaint_mask + noised_init_latents * (1.0 - self._inpaint_mask)
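# A minimal sketch (tensor values invented for illustration) of how the mask blends the
# two trajectories: masked elements keep the denoised intermediate value, while unmasked
# elements are pinned back onto the noised-init trajectory for the current timestep.
init = torch.zeros(1, 4)                      # "clean" init latents, packed format
noise = torch.ones(1, 4)                      # noise used to noise the init latents
mask = torch.tensor([[1.0, 1.0, 0.0, 0.0]])   # 1 = regenerate, 0 = keep unchanged
ext = InpaintExtension(init_latents=init, inpaint_mask=mask, noise=noise)
intermediate = torch.full((1, 4), 0.5)        # stand-in for a post-step latent
merged = ext.merge_intermediate_latents_with_init_latents(intermediate, timestep=0.8)
# masked elements keep 0.5; unmasked elements become 1.0 * 0.8 + 0.0 * 0.2 = 0.8
# -> tensor([[0.5000, 0.5000, 0.8000, 0.8000]])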

View File

@@ -0,0 +1,32 @@
# Initially pulled from https://github.com/black-forest-labs/flux
import torch
from einops import rearrange
from torch import Tensor
def attention(q: Tensor, k: Tensor, v: Tensor, pe: Tensor) -> Tensor:
q, k = apply_rope(q, k, pe)
x = torch.nn.functional.scaled_dot_product_attention(q, k, v)
x = rearrange(x, "B H L D -> B L (H D)")
return x
def rope(pos: Tensor, dim: int, theta: int) -> Tensor:
assert dim % 2 == 0
scale = torch.arange(0, dim, 2, dtype=torch.float64, device=pos.device) / dim
omega = 1.0 / (theta**scale)
out = torch.einsum("...n,d->...nd", pos, omega)
out = torch.stack([torch.cos(out), -torch.sin(out), torch.sin(out), torch.cos(out)], dim=-1)
out = rearrange(out, "b n d (i j) -> b n d i j", i=2, j=2)
return out.float()
def apply_rope(xq: Tensor, xk: Tensor, freqs_cis: Tensor) -> tuple[Tensor, Tensor]:
xq_ = xq.float().reshape(*xq.shape[:-1], -1, 1, 2)
xk_ = xk.float().reshape(*xk.shape[:-1], -1, 1, 2)
xq_out = freqs_cis[..., 0] * xq_[..., 0] + freqs_cis[..., 1] * xq_[..., 1]
xk_out = freqs_cis[..., 0] * xk_[..., 0] + freqs_cis[..., 1] * xk_[..., 1]
return xq_out.reshape(*xq.shape).type_as(xq), xk_out.reshape(*xk.shape).type_as(xk)
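# A shape-check sketch (dimensions invented for illustration) of how these pieces fit
# together; EmbedND in modules/layers.py builds `pe` the same way, including the extra
# head axis added by unsqueeze(1).
B, H, L, D = 1, 24, 16, 128                          # batch, heads, tokens, head dim
q = torch.randn(B, H, L, D)
k = torch.randn(B, H, L, D)
v = torch.randn(B, H, L, D)
pos = torch.arange(L, dtype=torch.float64)[None, :]  # (1, L) position ids
pe = rope(pos, dim=D, theta=10_000).unsqueeze(1)     # (1, 1, L, D/2, 2, 2)
out = attention(q, k, v, pe=pe)                      # (B, L, H*D) = (1, 16, 3072)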

View File

@@ -0,0 +1,117 @@
# Initially pulled from https://github.com/black-forest-labs/flux
from dataclasses import dataclass
import torch
from torch import Tensor, nn
from invokeai.backend.flux.modules.layers import (
DoubleStreamBlock,
EmbedND,
LastLayer,
MLPEmbedder,
SingleStreamBlock,
timestep_embedding,
)
@dataclass
class FluxParams:
in_channels: int
vec_in_dim: int
context_in_dim: int
hidden_size: int
mlp_ratio: float
num_heads: int
depth: int
depth_single_blocks: int
axes_dim: list[int]
theta: int
qkv_bias: bool
guidance_embed: bool
class Flux(nn.Module):
"""
Transformer model for flow matching on sequences.
"""
def __init__(self, params: FluxParams):
super().__init__()
self.params = params
self.in_channels = params.in_channels
self.out_channels = self.in_channels
if params.hidden_size % params.num_heads != 0:
raise ValueError(f"Hidden size {params.hidden_size} must be divisible by num_heads {params.num_heads}")
pe_dim = params.hidden_size // params.num_heads
if sum(params.axes_dim) != pe_dim:
raise ValueError(f"Got {params.axes_dim} but expected positional dim {pe_dim}")
self.hidden_size = params.hidden_size
self.num_heads = params.num_heads
self.pe_embedder = EmbedND(dim=pe_dim, theta=params.theta, axes_dim=params.axes_dim)
self.img_in = nn.Linear(self.in_channels, self.hidden_size, bias=True)
self.time_in = MLPEmbedder(in_dim=256, hidden_dim=self.hidden_size)
self.vector_in = MLPEmbedder(params.vec_in_dim, self.hidden_size)
self.guidance_in = (
MLPEmbedder(in_dim=256, hidden_dim=self.hidden_size) if params.guidance_embed else nn.Identity()
)
self.txt_in = nn.Linear(params.context_in_dim, self.hidden_size)
self.double_blocks = nn.ModuleList(
[
DoubleStreamBlock(
self.hidden_size,
self.num_heads,
mlp_ratio=params.mlp_ratio,
qkv_bias=params.qkv_bias,
)
for _ in range(params.depth)
]
)
self.single_blocks = nn.ModuleList(
[
SingleStreamBlock(self.hidden_size, self.num_heads, mlp_ratio=params.mlp_ratio)
for _ in range(params.depth_single_blocks)
]
)
self.final_layer = LastLayer(self.hidden_size, 1, self.out_channels)
def forward(
self,
img: Tensor,
img_ids: Tensor,
txt: Tensor,
txt_ids: Tensor,
timesteps: Tensor,
y: Tensor,
guidance: Tensor | None = None,
) -> Tensor:
if img.ndim != 3 or txt.ndim != 3:
raise ValueError("Input img and txt tensors must have 3 dimensions.")
# project the image tokens into the transformer's hidden dimension
img = self.img_in(img)
vec = self.time_in(timestep_embedding(timesteps, 256))
if self.params.guidance_embed:
if guidance is None:
raise ValueError("Didn't get guidance strength for guidance distilled model.")
vec = vec + self.guidance_in(timestep_embedding(guidance, 256))
vec = vec + self.vector_in(y)
txt = self.txt_in(txt)
ids = torch.cat((txt_ids, img_ids), dim=1)
pe = self.pe_embedder(ids)
for block in self.double_blocks:
img, txt = block(img=img, txt=txt, vec=vec, pe=pe)
img = torch.cat((txt, img), 1)
for block in self.single_blocks:
img = block(img, vec=vec, pe=pe)
img = img[:, txt.shape[1] :, ...]
img = self.final_layer(img, vec) # (N, T, patch_size ** 2 * out_channels)
return img

View File

@@ -0,0 +1,324 @@
# Initially pulled from https://github.com/black-forest-labs/flux
from dataclasses import dataclass
import torch
from einops import rearrange
from torch import Tensor, nn
@dataclass
class AutoEncoderParams:
resolution: int
in_channels: int
ch: int
out_ch: int
ch_mult: list[int]
num_res_blocks: int
z_channels: int
scale_factor: float
shift_factor: float
class AttnBlock(nn.Module):
def __init__(self, in_channels: int):
super().__init__()
self.in_channels = in_channels
self.norm = nn.GroupNorm(num_groups=32, num_channels=in_channels, eps=1e-6, affine=True)
self.q = nn.Conv2d(in_channels, in_channels, kernel_size=1)
self.k = nn.Conv2d(in_channels, in_channels, kernel_size=1)
self.v = nn.Conv2d(in_channels, in_channels, kernel_size=1)
self.proj_out = nn.Conv2d(in_channels, in_channels, kernel_size=1)
def attention(self, h_: Tensor) -> Tensor:
h_ = self.norm(h_)
q = self.q(h_)
k = self.k(h_)
v = self.v(h_)
b, c, h, w = q.shape
q = rearrange(q, "b c h w -> b 1 (h w) c").contiguous()
k = rearrange(k, "b c h w -> b 1 (h w) c").contiguous()
v = rearrange(v, "b c h w -> b 1 (h w) c").contiguous()
h_ = nn.functional.scaled_dot_product_attention(q, k, v)
return rearrange(h_, "b 1 (h w) c -> b c h w", h=h, w=w, c=c, b=b)
def forward(self, x: Tensor) -> Tensor:
return x + self.proj_out(self.attention(x))
class ResnetBlock(nn.Module):
def __init__(self, in_channels: int, out_channels: int | None = None):
super().__init__()
self.in_channels = in_channels
out_channels = in_channels if out_channels is None else out_channels
self.out_channels = out_channels
self.norm1 = nn.GroupNorm(num_groups=32, num_channels=in_channels, eps=1e-6, affine=True)
self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=1, padding=1)
self.norm2 = nn.GroupNorm(num_groups=32, num_channels=out_channels, eps=1e-6, affine=True)
self.conv2 = nn.Conv2d(out_channels, out_channels, kernel_size=3, stride=1, padding=1)
if self.in_channels != self.out_channels:
self.nin_shortcut = nn.Conv2d(in_channels, out_channels, kernel_size=1, stride=1, padding=0)
def forward(self, x):
h = x
h = self.norm1(h)
h = torch.nn.functional.silu(h)
h = self.conv1(h)
h = self.norm2(h)
h = torch.nn.functional.silu(h)
h = self.conv2(h)
if self.in_channels != self.out_channels:
x = self.nin_shortcut(x)
return x + h
class Downsample(nn.Module):
def __init__(self, in_channels: int):
super().__init__()
# no asymmetric padding in torch conv, must do it ourselves
self.conv = nn.Conv2d(in_channels, in_channels, kernel_size=3, stride=2, padding=0)
def forward(self, x: Tensor):
pad = (0, 1, 0, 1)
x = nn.functional.pad(x, pad, mode="constant", value=0)
x = self.conv(x)
return x
class Upsample(nn.Module):
def __init__(self, in_channels: int):
super().__init__()
self.conv = nn.Conv2d(in_channels, in_channels, kernel_size=3, stride=1, padding=1)
def forward(self, x: Tensor):
x = nn.functional.interpolate(x, scale_factor=2.0, mode="nearest")
x = self.conv(x)
return x
class Encoder(nn.Module):
def __init__(
self,
resolution: int,
in_channels: int,
ch: int,
ch_mult: list[int],
num_res_blocks: int,
z_channels: int,
):
super().__init__()
self.ch = ch
self.num_resolutions = len(ch_mult)
self.num_res_blocks = num_res_blocks
self.resolution = resolution
self.in_channels = in_channels
# downsampling
self.conv_in = nn.Conv2d(in_channels, self.ch, kernel_size=3, stride=1, padding=1)
curr_res = resolution
in_ch_mult = (1,) + tuple(ch_mult)
self.in_ch_mult = in_ch_mult
self.down = nn.ModuleList()
block_in = self.ch
for i_level in range(self.num_resolutions):
block = nn.ModuleList()
attn = nn.ModuleList()
block_in = ch * in_ch_mult[i_level]
block_out = ch * ch_mult[i_level]
for _ in range(self.num_res_blocks):
block.append(ResnetBlock(in_channels=block_in, out_channels=block_out))
block_in = block_out
down = nn.Module()
down.block = block
down.attn = attn
if i_level != self.num_resolutions - 1:
down.downsample = Downsample(block_in)
curr_res = curr_res // 2
self.down.append(down)
# middle
self.mid = nn.Module()
self.mid.block_1 = ResnetBlock(in_channels=block_in, out_channels=block_in)
self.mid.attn_1 = AttnBlock(block_in)
self.mid.block_2 = ResnetBlock(in_channels=block_in, out_channels=block_in)
# end
self.norm_out = nn.GroupNorm(num_groups=32, num_channels=block_in, eps=1e-6, affine=True)
self.conv_out = nn.Conv2d(block_in, 2 * z_channels, kernel_size=3, stride=1, padding=1)
def forward(self, x: Tensor) -> Tensor:
# downsampling
hs = [self.conv_in(x)]
for i_level in range(self.num_resolutions):
for i_block in range(self.num_res_blocks):
h = self.down[i_level].block[i_block](hs[-1])
if len(self.down[i_level].attn) > 0:
h = self.down[i_level].attn[i_block](h)
hs.append(h)
if i_level != self.num_resolutions - 1:
hs.append(self.down[i_level].downsample(hs[-1]))
# middle
h = hs[-1]
h = self.mid.block_1(h)
h = self.mid.attn_1(h)
h = self.mid.block_2(h)
# end
h = self.norm_out(h)
h = torch.nn.functional.silu(h)
h = self.conv_out(h)
return h
class Decoder(nn.Module):
def __init__(
self,
ch: int,
out_ch: int,
ch_mult: list[int],
num_res_blocks: int,
in_channels: int,
resolution: int,
z_channels: int,
):
super().__init__()
self.ch = ch
self.num_resolutions = len(ch_mult)
self.num_res_blocks = num_res_blocks
self.resolution = resolution
self.in_channels = in_channels
self.ffactor = 2 ** (self.num_resolutions - 1)
# compute in_ch_mult, block_in and curr_res at lowest res
block_in = ch * ch_mult[self.num_resolutions - 1]
curr_res = resolution // 2 ** (self.num_resolutions - 1)
self.z_shape = (1, z_channels, curr_res, curr_res)
# z to block_in
self.conv_in = nn.Conv2d(z_channels, block_in, kernel_size=3, stride=1, padding=1)
# middle
self.mid = nn.Module()
self.mid.block_1 = ResnetBlock(in_channels=block_in, out_channels=block_in)
self.mid.attn_1 = AttnBlock(block_in)
self.mid.block_2 = ResnetBlock(in_channels=block_in, out_channels=block_in)
# upsampling
self.up = nn.ModuleList()
for i_level in reversed(range(self.num_resolutions)):
block = nn.ModuleList()
attn = nn.ModuleList()
block_out = ch * ch_mult[i_level]
for _ in range(self.num_res_blocks + 1):
block.append(ResnetBlock(in_channels=block_in, out_channels=block_out))
block_in = block_out
up = nn.Module()
up.block = block
up.attn = attn
if i_level != 0:
up.upsample = Upsample(block_in)
curr_res = curr_res * 2
self.up.insert(0, up) # prepend to get consistent order
# end
self.norm_out = nn.GroupNorm(num_groups=32, num_channels=block_in, eps=1e-6, affine=True)
self.conv_out = nn.Conv2d(block_in, out_ch, kernel_size=3, stride=1, padding=1)
def forward(self, z: Tensor) -> Tensor:
# z to block_in
h = self.conv_in(z)
# middle
h = self.mid.block_1(h)
h = self.mid.attn_1(h)
h = self.mid.block_2(h)
# upsampling
for i_level in reversed(range(self.num_resolutions)):
for i_block in range(self.num_res_blocks + 1):
h = self.up[i_level].block[i_block](h)
if len(self.up[i_level].attn) > 0:
h = self.up[i_level].attn[i_block](h)
if i_level != 0:
h = self.up[i_level].upsample(h)
# end
h = self.norm_out(h)
h = torch.nn.functional.silu(h)
h = self.conv_out(h)
return h
class DiagonalGaussian(nn.Module):
def __init__(self, chunk_dim: int = 1):
super().__init__()
self.chunk_dim = chunk_dim
def forward(self, z: Tensor, sample: bool = True, generator: torch.Generator | None = None) -> Tensor:
mean, logvar = torch.chunk(z, 2, dim=self.chunk_dim)
if sample:
std = torch.exp(0.5 * logvar)
# Unfortunately, torch.randn_like(...) does not accept a generator argument at the time of writing, so we
# have to use torch.randn(...) instead.
return mean + std * torch.randn(size=mean.size(), generator=generator, dtype=mean.dtype, device=mean.device)
else:
return mean
class AutoEncoder(nn.Module):
def __init__(self, params: AutoEncoderParams):
super().__init__()
self.encoder = Encoder(
resolution=params.resolution,
in_channels=params.in_channels,
ch=params.ch,
ch_mult=params.ch_mult,
num_res_blocks=params.num_res_blocks,
z_channels=params.z_channels,
)
self.decoder = Decoder(
resolution=params.resolution,
in_channels=params.in_channels,
ch=params.ch,
out_ch=params.out_ch,
ch_mult=params.ch_mult,
num_res_blocks=params.num_res_blocks,
z_channels=params.z_channels,
)
self.reg = DiagonalGaussian()
self.scale_factor = params.scale_factor
self.shift_factor = params.shift_factor
def encode(self, x: Tensor, sample: bool = True, generator: torch.Generator | None = None) -> Tensor:
"""Run VAE encoding on input tensor x.
Args:
x (Tensor): Input image tensor. Shape: (batch_size, in_channels, height, width).
sample (bool, optional): If True, sample from the encoded distribution; otherwise, return the distribution mean.
Defaults to True.
generator (torch.Generator | None, optional): Optional random number generator for reproducibility.
Defaults to None.
Returns:
Tensor: Encoded latent tensor. Shape: (batch_size, z_channels, latent_height, latent_width).
"""
z = self.reg(self.encoder(x), sample=sample, generator=generator)
z = self.scale_factor * (z - self.shift_factor)
return z
def decode(self, z: Tensor) -> Tensor:
z = z / self.scale_factor + self.shift_factor
return self.decoder(z)
def forward(self, x: Tensor) -> Tensor:
return self.decode(self.encode(x))

View File

@@ -0,0 +1,33 @@
# Initially pulled from https://github.com/black-forest-labs/flux
from torch import Tensor, nn
from transformers import PreTrainedModel, PreTrainedTokenizer
class HFEncoder(nn.Module):
def __init__(self, encoder: PreTrainedModel, tokenizer: PreTrainedTokenizer, is_clip: bool, max_length: int):
super().__init__()
self.max_length = max_length
self.is_clip = is_clip
self.output_key = "pooler_output" if self.is_clip else "last_hidden_state"
self.tokenizer = tokenizer
self.hf_module = encoder
self.hf_module = self.hf_module.eval().requires_grad_(False)
def forward(self, text: list[str]) -> Tensor:
batch_encoding = self.tokenizer(
text,
truncation=True,
max_length=self.max_length,
return_length=False,
return_overflowing_tokens=False,
padding="max_length",
return_tensors="pt",
)
outputs = self.hf_module(
input_ids=batch_encoding["input_ids"].to(self.hf_module.device),
attention_mask=None,
output_hidden_states=False,
)
return outputs[self.output_key]
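# A hedged usage sketch; the checkpoint id below is the standard CLIP-L Hugging Face
# repo, shown for illustration only (InvokeAI resolves these through its model manager):
from transformers import CLIPTextModel, CLIPTokenizer
clip_encoder = HFEncoder(
    encoder=CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14"),
    tokenizer=CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14"),
    is_clip=True,
    max_length=77,
)
vec = clip_encoder(["a cat"])   # pooled output, shape (1, 768)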

View File

@@ -0,0 +1,253 @@
# Initially pulled from https://github.com/black-forest-labs/flux
import math
from dataclasses import dataclass
import torch
from einops import rearrange
from torch import Tensor, nn
from invokeai.backend.flux.math import attention, rope
class EmbedND(nn.Module):
def __init__(self, dim: int, theta: int, axes_dim: list[int]):
super().__init__()
self.dim = dim
self.theta = theta
self.axes_dim = axes_dim
def forward(self, ids: Tensor) -> Tensor:
n_axes = ids.shape[-1]
emb = torch.cat(
[rope(ids[..., i], self.axes_dim[i], self.theta) for i in range(n_axes)],
dim=-3,
)
return emb.unsqueeze(1)
def timestep_embedding(t: Tensor, dim: int, max_period: int = 10000, time_factor: float = 1000.0) -> Tensor:
"""
Create sinusoidal timestep embeddings.
:param t: a 1-D Tensor of N indices, one per batch element.
These may be fractional.
:param dim: the dimension of the output.
:param max_period: controls the minimum frequency of the embeddings.
:return: an (N, D) Tensor of positional embeddings.
"""
t = time_factor * t
half = dim // 2
freqs = torch.exp(-math.log(max_period) * torch.arange(start=0, end=half, dtype=torch.float32) / half).to(t.device)
args = t[:, None].float() * freqs[None]
embedding = torch.cat([torch.cos(args), torch.sin(args)], dim=-1)
if dim % 2:
embedding = torch.cat([embedding, torch.zeros_like(embedding[:, :1])], dim=-1)
if torch.is_floating_point(t):
embedding = embedding.to(t)
return embedding
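# A quick shape sketch (values invented for illustration): timesteps in [0, 1] are
# multiplied by time_factor=1000 before the sinusoidal projection, mirroring the usual
# 0..1000 diffusion-timestep convention.
emb = timestep_embedding(torch.tensor([0.5, 1.0]), dim=256)
# emb.shape == (2, 256); first half cosines, second half sines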
class MLPEmbedder(nn.Module):
def __init__(self, in_dim: int, hidden_dim: int):
super().__init__()
self.in_layer = nn.Linear(in_dim, hidden_dim, bias=True)
self.silu = nn.SiLU()
self.out_layer = nn.Linear(hidden_dim, hidden_dim, bias=True)
def forward(self, x: Tensor) -> Tensor:
return self.out_layer(self.silu(self.in_layer(x)))
class RMSNorm(torch.nn.Module):
def __init__(self, dim: int):
super().__init__()
self.scale = nn.Parameter(torch.ones(dim))
def forward(self, x: Tensor):
x_dtype = x.dtype
x = x.float()
rrms = torch.rsqrt(torch.mean(x**2, dim=-1, keepdim=True) + 1e-6)
return (x * rrms).to(dtype=x_dtype) * self.scale
class QKNorm(torch.nn.Module):
def __init__(self, dim: int):
super().__init__()
self.query_norm = RMSNorm(dim)
self.key_norm = RMSNorm(dim)
def forward(self, q: Tensor, k: Tensor, v: Tensor) -> tuple[Tensor, Tensor]:
q = self.query_norm(q)
k = self.key_norm(k)
return q.to(v), k.to(v)
class SelfAttention(nn.Module):
def __init__(self, dim: int, num_heads: int = 8, qkv_bias: bool = False):
super().__init__()
self.num_heads = num_heads
head_dim = dim // num_heads
self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)
self.norm = QKNorm(head_dim)
self.proj = nn.Linear(dim, dim)
def forward(self, x: Tensor, pe: Tensor) -> Tensor:
qkv = self.qkv(x)
q, k, v = rearrange(qkv, "B L (K H D) -> K B H L D", K=3, H=self.num_heads)
q, k = self.norm(q, k, v)
x = attention(q, k, v, pe=pe)
x = self.proj(x)
return x
@dataclass
class ModulationOut:
shift: Tensor
scale: Tensor
gate: Tensor
class Modulation(nn.Module):
def __init__(self, dim: int, double: bool):
super().__init__()
self.is_double = double
self.multiplier = 6 if double else 3
self.lin = nn.Linear(dim, self.multiplier * dim, bias=True)
def forward(self, vec: Tensor) -> tuple[ModulationOut, ModulationOut | None]:
out = self.lin(nn.functional.silu(vec))[:, None, :].chunk(self.multiplier, dim=-1)
return (
ModulationOut(*out[:3]),
ModulationOut(*out[3:]) if self.is_double else None,
)
class DoubleStreamBlock(nn.Module):
def __init__(self, hidden_size: int, num_heads: int, mlp_ratio: float, qkv_bias: bool = False):
super().__init__()
mlp_hidden_dim = int(hidden_size * mlp_ratio)
self.num_heads = num_heads
self.hidden_size = hidden_size
self.img_mod = Modulation(hidden_size, double=True)
self.img_norm1 = nn.LayerNorm(hidden_size, elementwise_affine=False, eps=1e-6)
self.img_attn = SelfAttention(dim=hidden_size, num_heads=num_heads, qkv_bias=qkv_bias)
self.img_norm2 = nn.LayerNorm(hidden_size, elementwise_affine=False, eps=1e-6)
self.img_mlp = nn.Sequential(
nn.Linear(hidden_size, mlp_hidden_dim, bias=True),
nn.GELU(approximate="tanh"),
nn.Linear(mlp_hidden_dim, hidden_size, bias=True),
)
self.txt_mod = Modulation(hidden_size, double=True)
self.txt_norm1 = nn.LayerNorm(hidden_size, elementwise_affine=False, eps=1e-6)
self.txt_attn = SelfAttention(dim=hidden_size, num_heads=num_heads, qkv_bias=qkv_bias)
self.txt_norm2 = nn.LayerNorm(hidden_size, elementwise_affine=False, eps=1e-6)
self.txt_mlp = nn.Sequential(
nn.Linear(hidden_size, mlp_hidden_dim, bias=True),
nn.GELU(approximate="tanh"),
nn.Linear(mlp_hidden_dim, hidden_size, bias=True),
)
def forward(self, img: Tensor, txt: Tensor, vec: Tensor, pe: Tensor) -> tuple[Tensor, Tensor]:
img_mod1, img_mod2 = self.img_mod(vec)
txt_mod1, txt_mod2 = self.txt_mod(vec)
# prepare image for attention
img_modulated = self.img_norm1(img)
img_modulated = (1 + img_mod1.scale) * img_modulated + img_mod1.shift
img_qkv = self.img_attn.qkv(img_modulated)
img_q, img_k, img_v = rearrange(img_qkv, "B L (K H D) -> K B H L D", K=3, H=self.num_heads)
img_q, img_k = self.img_attn.norm(img_q, img_k, img_v)
# prepare txt for attention
txt_modulated = self.txt_norm1(txt)
txt_modulated = (1 + txt_mod1.scale) * txt_modulated + txt_mod1.shift
txt_qkv = self.txt_attn.qkv(txt_modulated)
txt_q, txt_k, txt_v = rearrange(txt_qkv, "B L (K H D) -> K B H L D", K=3, H=self.num_heads)
txt_q, txt_k = self.txt_attn.norm(txt_q, txt_k, txt_v)
# run actual attention
q = torch.cat((txt_q, img_q), dim=2)
k = torch.cat((txt_k, img_k), dim=2)
v = torch.cat((txt_v, img_v), dim=2)
attn = attention(q, k, v, pe=pe)
txt_attn, img_attn = attn[:, : txt.shape[1]], attn[:, txt.shape[1] :]
# calculate the img blocks
img = img + img_mod1.gate * self.img_attn.proj(img_attn)
img = img + img_mod2.gate * self.img_mlp((1 + img_mod2.scale) * self.img_norm2(img) + img_mod2.shift)
# calculate the txt blocks
txt = txt + txt_mod1.gate * self.txt_attn.proj(txt_attn)
txt = txt + txt_mod2.gate * self.txt_mlp((1 + txt_mod2.scale) * self.txt_norm2(txt) + txt_mod2.shift)
return img, txt
class SingleStreamBlock(nn.Module):
"""
A DiT block with parallel linear layers as described in
https://arxiv.org/abs/2302.05442 and adapted modulation interface.
"""
def __init__(
self,
hidden_size: int,
num_heads: int,
mlp_ratio: float = 4.0,
qk_scale: float | None = None,
):
super().__init__()
self.hidden_dim = hidden_size
self.num_heads = num_heads
head_dim = hidden_size // num_heads
self.scale = qk_scale or head_dim**-0.5
self.mlp_hidden_dim = int(hidden_size * mlp_ratio)
# qkv and mlp_in
self.linear1 = nn.Linear(hidden_size, hidden_size * 3 + self.mlp_hidden_dim)
# proj and mlp_out
self.linear2 = nn.Linear(hidden_size + self.mlp_hidden_dim, hidden_size)
self.norm = QKNorm(head_dim)
self.hidden_size = hidden_size
self.pre_norm = nn.LayerNorm(hidden_size, elementwise_affine=False, eps=1e-6)
self.mlp_act = nn.GELU(approximate="tanh")
self.modulation = Modulation(hidden_size, double=False)
def forward(self, x: Tensor, vec: Tensor, pe: Tensor) -> Tensor:
mod, _ = self.modulation(vec)
x_mod = (1 + mod.scale) * self.pre_norm(x) + mod.shift
qkv, mlp = torch.split(self.linear1(x_mod), [3 * self.hidden_size, self.mlp_hidden_dim], dim=-1)
q, k, v = rearrange(qkv, "B L (K H D) -> K B H L D", K=3, H=self.num_heads)
q, k = self.norm(q, k, v)
# compute attention
attn = attention(q, k, v, pe=pe)
# compute activation in mlp stream, cat again and run second linear layer
output = self.linear2(torch.cat((attn, self.mlp_act(mlp)), 2))
return x + mod.gate * output
class LastLayer(nn.Module):
def __init__(self, hidden_size: int, patch_size: int, out_channels: int):
super().__init__()
self.norm_final = nn.LayerNorm(hidden_size, elementwise_affine=False, eps=1e-6)
self.linear = nn.Linear(hidden_size, patch_size * patch_size * out_channels, bias=True)
self.adaLN_modulation = nn.Sequential(nn.SiLU(), nn.Linear(hidden_size, 2 * hidden_size, bias=True))
def forward(self, x: Tensor, vec: Tensor) -> Tensor:
shift, scale = self.adaLN_modulation(vec).chunk(2, dim=1)
x = (1 + scale[:, None, :]) * self.norm_final(x) + shift[:, None, :]
x = self.linear(x)
return x

View File

@@ -0,0 +1,135 @@
# Initially pulled from https://github.com/black-forest-labs/flux
import math
from typing import Callable
import torch
from einops import rearrange, repeat
def get_noise(
num_samples: int,
height: int,
width: int,
device: torch.device,
dtype: torch.dtype,
seed: int,
):
# We always generate noise on the same device and dtype then cast to ensure consistency across devices/dtypes.
rand_device = "cpu"
rand_dtype = torch.float16
return torch.randn(
num_samples,
16,
# allow for packing
2 * math.ceil(height / 16),
2 * math.ceil(width / 16),
device=rand_device,
dtype=rand_dtype,
generator=torch.Generator(device=rand_device).manual_seed(seed),
).to(device=device, dtype=dtype)
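# A shape sketch (parameters invented for illustration): a 1024x1024 image has a
# 128x128, 16-channel VAE latent (factor 8), and the extra factor of 2 here pre-empts
# the 2x2 patch packing below, hence 2 * ceil(1024 / 16) = 128 per spatial dim.
noise = get_noise(
    num_samples=1, height=1024, width=1024,
    device=torch.device("cpu"), dtype=torch.bfloat16, seed=42,
)
# noise.shape == (1, 16, 128, 128)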
def time_shift(mu: float, sigma: float, t: torch.Tensor) -> torch.Tensor:
return math.exp(mu) / (math.exp(mu) + (1 / t - 1) ** sigma)
def get_lin_function(x1: float = 256, y1: float = 0.5, x2: float = 4096, y2: float = 1.15) -> Callable[[float], float]:
m = (y2 - y1) / (x2 - x1)
b = y1 - m * x1
return lambda x: m * x + b
def get_schedule(
num_steps: int,
image_seq_len: int,
base_shift: float = 0.5,
max_shift: float = 1.15,
shift: bool = True,
) -> list[float]:
# extra step for zero
timesteps = torch.linspace(1, 0, num_steps + 1)
# shifting the schedule to favor high timesteps for higher signal images
if shift:
# estimate mu based on linear estimation between two points
mu = get_lin_function(y1=base_shift, y2=max_shift)(image_seq_len)
timesteps = time_shift(mu, 1.0, timesteps)
return timesteps.tolist()
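# A worked sketch (approximate values): for num_steps=4 and a 1024x1024 image, the
# packed sequence length is 64 * 64 = 4096, so mu = max_shift = 1.15 and the uniform
# schedule [1.0, 0.75, 0.5, 0.25, 0.0] is warped toward high timesteps:
ts = get_schedule(num_steps=4, image_seq_len=4096)
# ts ≈ [1.0, 0.905, 0.760, 0.513, 0.0]; more of the step budget is spent at the
# high-noise end of the trajectory.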
def _find_last_index_ge_val(timesteps: list[float], val: float, eps: float = 1e-6) -> int:
"""Find the last index in timesteps that is >= val.
We use epsilon-close equality to avoid potential floating point errors.
"""
idx = len(list(filter(lambda t: t >= (val - eps), timesteps))) - 1
assert idx >= 0
return idx
def clip_timestep_schedule(timesteps: list[float], denoising_start: float, denoising_end: float) -> list[float]:
"""Clip the timestep schedule to the denoising range.
Args:
timesteps (list[float]): The original timestep schedule: [1.0, ..., 0.0].
denoising_start (float): A value in [0, 1] specifying the start of the denoising process. E.g. a value of 0.2
would mean that the denoising process starts at the last timestep in the schedule >= 0.8.
denoising_end (float): A value in [0, 1] specifying the end of the denoising process. E.g. a value of 0.8 would
mean that the denoising process ends at the last timestep in the schedule >= 0.2.
Returns:
list[float]: The clipped timestep schedule.
"""
assert 0.0 <= denoising_start <= 1.0
assert 0.0 <= denoising_end <= 1.0
assert denoising_start <= denoising_end
t_start_val = 1.0 - denoising_start
t_end_val = 1.0 - denoising_end
t_start_idx = _find_last_index_ge_val(timesteps, t_start_val)
t_end_idx = _find_last_index_ge_val(timesteps, t_end_val)
clipped_timesteps = timesteps[t_start_idx : t_end_idx + 1]
return clipped_timesteps
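# A worked sketch (schedule invented for illustration), e.g. image-to-image with
# strength 0.3 on a 4-step schedule:
full = [1.0, 0.75, 0.5, 0.25, 0.0]
clipped = clip_timestep_schedule(full, denoising_start=0.3, denoising_end=1.0)
# t_start_val = 0.7 -> last timestep >= 0.7 is 0.75 (index 1)
# t_end_val   = 0.0 -> last timestep >= 0.0 is 0.0  (index 4)
# clipped == [0.75, 0.5, 0.25, 0.0]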
def unpack(x: torch.Tensor, height: int, width: int) -> torch.Tensor:
"""Unpack flat array of patch embeddings to latent image."""
return rearrange(
x,
"b (h w) (c ph pw) -> b c (h ph) (w pw)",
h=math.ceil(height / 16),
w=math.ceil(width / 16),
ph=2,
pw=2,
)
def pack(x: torch.Tensor) -> torch.Tensor:
"""Pack latent image to flattented array of patch embeddings."""
# Pixel unshuffle with a scale of 2, and flatten the height/width dimensions to get an array of patches.
return rearrange(x, "b c (h ph) (w pw) -> b (h w) (c ph pw)", ph=2, pw=2)
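# A round-trip shape sketch (sizes invented for illustration): a 1024x1024 image has a
# 128x128, 16-channel VAE latent, which packs into 64 * 64 = 4096 patch embeddings of
# dimension 16 * 2 * 2 = 64.
latent = torch.randn(1, 16, 128, 128)
packed = pack(latent)                                  # (1, 4096, 64)
assert unpack(packed, height=1024, width=1024).shape == latent.shape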
def generate_img_ids(h: int, w: int, batch_size: int, device: torch.device, dtype: torch.dtype) -> torch.Tensor:
"""Generate tensor of image position ids.
Args:
h (int): Height of image in latent space.
w (int): Width of image in latent space.
batch_size (int): Batch size.
device (torch.device): Device.
dtype (torch.dtype): dtype.
Returns:
torch.Tensor: Image position ids.
"""
img_ids = torch.zeros(h // 2, w // 2, 3, device=device, dtype=dtype)
img_ids[..., 1] = img_ids[..., 1] + torch.arange(h // 2, device=device, dtype=dtype)[:, None]
img_ids[..., 2] = img_ids[..., 2] + torch.arange(w // 2, device=device, dtype=dtype)[None, :]
img_ids = repeat(img_ids, "h w c -> b (h w) c", b=batch_size)
return img_ids

View File

@@ -0,0 +1,71 @@
# Initially pulled from https://github.com/black-forest-labs/flux
from dataclasses import dataclass
from typing import Dict, Literal
from invokeai.backend.flux.model import FluxParams
from invokeai.backend.flux.modules.autoencoder import AutoEncoderParams
@dataclass
class ModelSpec:
params: FluxParams
ae_params: AutoEncoderParams
ckpt_path: str | None
ae_path: str | None
repo_id: str | None
repo_flow: str | None
repo_ae: str | None
max_seq_lengths: Dict[str, Literal[256, 512]] = {
"flux-dev": 512,
"flux-schnell": 256,
}
ae_params = {
"flux": AutoEncoderParams(
resolution=256,
in_channels=3,
ch=128,
out_ch=3,
ch_mult=[1, 2, 4, 4],
num_res_blocks=2,
z_channels=16,
scale_factor=0.3611,
shift_factor=0.1159,
)
}
params = {
"flux-dev": FluxParams(
in_channels=64,
vec_in_dim=768,
context_in_dim=4096,
hidden_size=3072,
mlp_ratio=4.0,
num_heads=24,
depth=19,
depth_single_blocks=38,
axes_dim=[16, 56, 56],
theta=10_000,
qkv_bias=True,
guidance_embed=True,
),
"flux-schnell": FluxParams(
in_channels=64,
vec_in_dim=768,
context_in_dim=4096,
hidden_size=3072,
mlp_ratio=4.0,
num_heads=24,
depth=19,
depth_single_blocks=38,
axes_dim=[16, 56, 56],
theta=10_000,
qkv_bias=True,
guidance_embed=False,
),
}

View File

@@ -52,6 +52,7 @@ class BaseModelType(str, Enum):
StableDiffusion2 = "sd-2"
StableDiffusionXL = "sdxl"
StableDiffusionXLRefiner = "sdxl-refiner"
Flux = "flux"
# Kandinsky2_1 = "kandinsky-2.1"
@@ -66,7 +67,9 @@ class ModelType(str, Enum):
TextualInversion = "embedding"
IPAdapter = "ip_adapter"
CLIPVision = "clip_vision"
CLIPEmbed = "clip_embed"
T2IAdapter = "t2i_adapter"
T5Encoder = "t5_encoder"
SpandrelImageToImage = "spandrel_image_to_image"
@@ -74,6 +77,7 @@ class SubModelType(str, Enum):
"""Submodel type."""
UNet = "unet"
Transformer = "transformer"
TextEncoder = "text_encoder"
TextEncoder2 = "text_encoder_2"
Tokenizer = "tokenizer"
@@ -104,6 +108,9 @@ class ModelFormat(str, Enum):
EmbeddingFile = "embedding_file"
EmbeddingFolder = "embedding_folder"
InvokeAI = "invokeai"
T5Encoder = "t5_encoder"
BnbQuantizedLlmInt8b = "bnb_quantized_int8b"
BnbQuantizednf4b = "bnb_quantized_nf4b"
class SchedulerPredictionType(str, Enum):
@@ -186,7 +193,9 @@ class ModelConfigBase(BaseModel):
class CheckpointConfigBase(ModelConfigBase):
"""Model config for checkpoint-style models."""
format: Literal[ModelFormat.Checkpoint] = ModelFormat.Checkpoint
format: Literal[ModelFormat.Checkpoint, ModelFormat.BnbQuantizednf4b] = Field(
description="Format of the provided checkpoint model", default=ModelFormat.Checkpoint
)
config_path: str = Field(description="path to the checkpoint model config file")
converted_at: Optional[float] = Field(
description="When this model was last converted to diffusers", default_factory=time.time
@@ -205,6 +214,26 @@ class LoRAConfigBase(ModelConfigBase):
trigger_phrases: Optional[set[str]] = Field(description="Set of trigger phrases for this model", default=None)
class T5EncoderConfigBase(ModelConfigBase):
type: Literal[ModelType.T5Encoder] = ModelType.T5Encoder
class T5EncoderConfig(T5EncoderConfigBase):
format: Literal[ModelFormat.T5Encoder] = ModelFormat.T5Encoder
@staticmethod
def get_tag() -> Tag:
return Tag(f"{ModelType.T5Encoder.value}.{ModelFormat.T5Encoder.value}")
class T5EncoderBnbQuantizedLlmInt8bConfig(T5EncoderConfigBase):
format: Literal[ModelFormat.BnbQuantizedLlmInt8b] = ModelFormat.BnbQuantizedLlmInt8b
@staticmethod
def get_tag() -> Tag:
return Tag(f"{ModelType.T5Encoder.value}.{ModelFormat.BnbQuantizedLlmInt8b.value}")
class LoRALyCORISConfig(LoRAConfigBase):
"""Model config for LoRA/Lycoris models."""
@@ -229,7 +258,6 @@ class VAECheckpointConfig(CheckpointConfigBase):
"""Model config for standalone VAE models."""
type: Literal[ModelType.VAE] = ModelType.VAE
format: Literal[ModelFormat.Checkpoint] = ModelFormat.Checkpoint
@staticmethod
def get_tag() -> Tag:
@@ -268,7 +296,6 @@ class ControlNetCheckpointConfig(CheckpointConfigBase, ControlAdapterConfigBase)
"""Model config for ControlNet models (diffusers version)."""
type: Literal[ModelType.ControlNet] = ModelType.ControlNet
format: Literal[ModelFormat.Checkpoint] = ModelFormat.Checkpoint
@staticmethod
def get_tag() -> Tag:
@@ -317,6 +344,21 @@ class MainCheckpointConfig(CheckpointConfigBase, MainConfigBase):
return Tag(f"{ModelType.Main.value}.{ModelFormat.Checkpoint.value}")
class MainBnbQuantized4bCheckpointConfig(CheckpointConfigBase, MainConfigBase):
"""Model config for main checkpoint models."""
prediction_type: SchedulerPredictionType = SchedulerPredictionType.Epsilon
upcast_attention: bool = False
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.format = ModelFormat.BnbQuantizednf4b
@staticmethod
def get_tag() -> Tag:
return Tag(f"{ModelType.Main.value}.{ModelFormat.BnbQuantizednf4b.value}")
class MainDiffusersConfig(DiffusersConfigBase, MainConfigBase):
"""Model config for main diffusers models."""
@@ -350,6 +392,17 @@ class IPAdapterCheckpointConfig(IPAdapterBaseConfig):
return Tag(f"{ModelType.IPAdapter.value}.{ModelFormat.Checkpoint.value}")
class CLIPEmbedDiffusersConfig(DiffusersConfigBase):
"""Model config for Clip Embeddings."""
type: Literal[ModelType.CLIPEmbed] = ModelType.CLIPEmbed
format: Literal[ModelFormat.Diffusers] = ModelFormat.Diffusers
@staticmethod
def get_tag() -> Tag:
return Tag(f"{ModelType.CLIPEmbed.value}.{ModelFormat.Diffusers.value}")
class CLIPVisionDiffusersConfig(DiffusersConfigBase):
"""Model config for CLIPVision."""
@@ -408,12 +461,15 @@ AnyModelConfig = Annotated[
Union[
Annotated[MainDiffusersConfig, MainDiffusersConfig.get_tag()],
Annotated[MainCheckpointConfig, MainCheckpointConfig.get_tag()],
Annotated[MainBnbQuantized4bCheckpointConfig, MainBnbQuantized4bCheckpointConfig.get_tag()],
Annotated[VAEDiffusersConfig, VAEDiffusersConfig.get_tag()],
Annotated[VAECheckpointConfig, VAECheckpointConfig.get_tag()],
Annotated[ControlNetDiffusersConfig, ControlNetDiffusersConfig.get_tag()],
Annotated[ControlNetCheckpointConfig, ControlNetCheckpointConfig.get_tag()],
Annotated[LoRALyCORISConfig, LoRALyCORISConfig.get_tag()],
Annotated[LoRADiffusersConfig, LoRADiffusersConfig.get_tag()],
Annotated[T5EncoderConfig, T5EncoderConfig.get_tag()],
Annotated[T5EncoderBnbQuantizedLlmInt8bConfig, T5EncoderBnbQuantizedLlmInt8bConfig.get_tag()],
Annotated[TextualInversionFileConfig, TextualInversionFileConfig.get_tag()],
Annotated[TextualInversionFolderConfig, TextualInversionFolderConfig.get_tag()],
Annotated[IPAdapterInvokeAIConfig, IPAdapterInvokeAIConfig.get_tag()],
@@ -421,6 +477,7 @@ AnyModelConfig = Annotated[
Annotated[T2IAdapterConfig, T2IAdapterConfig.get_tag()],
Annotated[SpandrelImageToImageConfig, SpandrelImageToImageConfig.get_tag()],
Annotated[CLIPVisionDiffusersConfig, CLIPVisionDiffusersConfig.get_tag()],
Annotated[CLIPEmbedDiffusersConfig, CLIPEmbedDiffusersConfig.get_tag()],
],
Discriminator(get_model_discriminator_value),
]

View File

@@ -66,12 +66,14 @@ class ModelLoader(ModelLoaderBase):
return (model_base / config.path).resolve()
def _load_and_cache(self, config: AnyModelConfig, submodel_type: Optional[SubModelType] = None) -> ModelLockerBase:
stats_name = ":".join([config.base, config.type, config.name, (submodel_type or "")])
try:
return self._ram_cache.get(config.key, submodel_type)
return self._ram_cache.get(config.key, submodel_type, stats_name=stats_name)
except IndexError:
pass
config.path = str(self._get_model_path(config))
self._ram_cache.make_room(self.get_size_fs(config, Path(config.path), submodel_type))
loaded_model = self._load_model(config, submodel_type)
self._ram_cache.put(
@@ -83,7 +85,7 @@ class ModelLoader(ModelLoaderBase):
return self._ram_cache.get(
key=config.key,
submodel_type=submodel_type,
stats_name=":".join([config.base, config.type, config.name, (submodel_type or "")]),
stats_name=stats_name,
)
def get_size_fs(

View File

@@ -128,7 +128,24 @@ class ModelCacheBase(ABC, Generic[T]):
@property
@abstractmethod
def max_cache_size(self) -> float:
"""Return true if the cache is configured to lazily offload models in VRAM."""
"""Return the maximum size the RAM cache can grow to."""
pass
@max_cache_size.setter
@abstractmethod
def max_cache_size(self, value: float) -> None:
"""Set the cap on vram cache size."""
@property
@abstractmethod
def max_vram_cache_size(self) -> float:
"""Return the maximum size the VRAM cache can grow to."""
pass
@max_vram_cache_size.setter
@abstractmethod
def max_vram_cache_size(self, value: float) -> None:
"""Set the maximum size the VRAM cache can grow to."""
pass
@abstractmethod
@@ -193,15 +210,6 @@ class ModelCacheBase(ABC, Generic[T]):
"""
pass
@abstractmethod
def exists(
self,
key: str,
submodel_type: Optional[SubModelType] = None,
) -> bool:
"""Return true if the model identified by key and submodel_type is in the cache."""
pass
@abstractmethod
def cache_size(self) -> int:
"""Get the total size of the models currently cached."""

View File

@@ -1,22 +1,6 @@
# Copyright (c) 2024 Lincoln D. Stein and the InvokeAI Development team
# TODO: Add Stalker's proper name to copyright
"""
Manage a RAM cache of diffusion/transformer models for fast switching.
They are moved between GPU VRAM and CPU RAM as necessary. If the cache
grows larger than a preset maximum, then the least recently used
model will be cleared and (re)loaded from disk when next needed.
The cache returns context manager generators designed to load the
model into the GPU within the context, and unload outside the
context. Use like this:
cache = ModelCache(max_cache_size=7.5)
with cache.get_model('runwayml/stable-diffusion-1-5') as SD1,
cache.get_model('stabilityai/stable-diffusion-2') as SD2:
do_something_in_GPU(SD1,SD2)
"""
""" """
import gc
import math
@@ -40,53 +24,74 @@ from invokeai.backend.model_manager.load.model_util import calc_model_size_by_da
from invokeai.backend.util.devices import TorchDevice
from invokeai.backend.util.logging import InvokeAILogger
# Maximum size of the cache, in gigs
# Default is roughly enough to hold three fp16 diffusers models in RAM simultaneously
DEFAULT_MAX_CACHE_SIZE = 6.0
# amount of GPU memory to hold in reserve for use by generations (GB)
DEFAULT_MAX_VRAM_CACHE_SIZE = 2.75
# actual size of a gig
GIG = 1073741824
# Size of a GB in bytes.
GB = 2**30
# Size of a MB in bytes.
MB = 2**20
class ModelCache(ModelCacheBase[AnyModel]):
"""Implementation of ModelCacheBase."""
"""A cache for managing models in memory.
The cache is based on two levels of model storage:
- execution_device: The device where most models are executed (typically "cuda", "mps", or "cpu").
- storage_device: The device where models are offloaded when not in active use (typically "cpu").
The model cache is based on the following assumptions:
- storage_device_mem_size > execution_device_mem_size
- disk_to_storage_device_transfer_time >> storage_device_to_execution_device_transfer_time
A copy of all models in the cache is always kept on the storage_device. A subset of the models also have a copy on
the execution_device.
Models are moved between the storage_device and the execution_device as necessary. Cache size limits are enforced
on both the storage_device and the execution_device. The execution_device cache uses a smallest-first offload
policy. The storage_device cache uses a least-recently-used (LRU) offload policy.
Note: Neither of these offload policies has really been compared against alternatives. It's likely that different
policies would be better, although the optimal policies are likely heavily dependent on usage patterns and HW
configuration.
The cache returns context manager generators designed to load the model into the execution device (often GPU) within
the context, and unload outside the context.
Example usage:
```
cache = ModelCache(max_cache_size=7.5, max_vram_cache_size=6.0)
with cache.get_model('runwayml/stable-diffusion-1-5') as SD1:
do_something_on_gpu(SD1)
```
"""
def __init__(
self,
max_cache_size: float = DEFAULT_MAX_CACHE_SIZE,
max_vram_cache_size: float = DEFAULT_MAX_VRAM_CACHE_SIZE,
max_cache_size: float,
max_vram_cache_size: float,
execution_device: torch.device = torch.device("cuda"),
storage_device: torch.device = torch.device("cpu"),
precision: torch.dtype = torch.float16,
sequential_offload: bool = False,
lazy_offloading: bool = True,
sha_chunksize: int = 16777216,
log_memory_usage: bool = False,
logger: Optional[Logger] = None,
):
"""
Initialize the model RAM cache.
:param max_cache_size: Maximum size of the RAM cache [6.0 GB]
:param max_cache_size: Maximum size of the storage_device cache in GBs.
:param max_vram_cache_size: Maximum size of the execution_device cache in GBs.
:param execution_device: Torch device to load active model into [torch.device('cuda')]
:param storage_device: Torch device to save inactive model in [torch.device('cpu')]
:param precision: Precision for loaded models [torch.float16]
:param lazy_offloading: Keep model in VRAM until another model needs to be loaded
:param sequential_offload: Conserve VRAM by loading and unloading each stage of the pipeline sequentially
:param log_memory_usage: If True, a memory snapshot will be captured before and after every model cache
operation, and the result will be logged (at debug level). There is a time cost to capturing the memory
snapshots, so it is recommended to disable this feature unless you are actively inspecting the model cache's
behaviour.
:param logger: InvokeAILogger to use (otherwise creates one)
"""
# allow lazy offloading only when vram cache enabled
self._lazy_offloading = lazy_offloading and max_vram_cache_size > 0
self._precision: torch.dtype = precision
self._max_cache_size: float = max_cache_size
self._max_vram_cache_size: float = max_vram_cache_size
self._execution_device: torch.device = execution_device
@@ -128,6 +133,16 @@ class ModelCache(ModelCacheBase[AnyModel]):
"""Set the cap on cache size."""
self._max_cache_size = value
@property
def max_vram_cache_size(self) -> float:
"""Return the cap on vram cache size."""
return self._max_vram_cache_size
@max_vram_cache_size.setter
def max_vram_cache_size(self, value: float) -> None:
"""Set the cap on vram cache size."""
self._max_vram_cache_size = value
@property
def stats(self) -> Optional[CacheStats]:
"""Return collected CacheStats object."""
@@ -145,15 +160,6 @@ class ModelCache(ModelCacheBase[AnyModel]):
total += cache_record.size
return total
def exists(
self,
key: str,
submodel_type: Optional[SubModelType] = None,
) -> bool:
"""Return true if the model identified by key and submodel_type is in the cache."""
key = self._make_cache_key(key, submodel_type)
return key in self._cached_models
def put(
self,
key: str,
@@ -203,7 +209,7 @@ class ModelCache(ModelCacheBase[AnyModel]):
# more stats
if self.stats:
stats_name = stats_name or key
self.stats.cache_size = int(self._max_cache_size * GIG)
self.stats.cache_size = int(self._max_cache_size * GB)
self.stats.high_watermark = max(self.stats.high_watermark, self.cache_size())
self.stats.in_cache = len(self._cached_models)
self.stats.loaded_model_sizes[stats_name] = max(
@@ -231,10 +237,13 @@ class ModelCache(ModelCacheBase[AnyModel]):
return model_key
def offload_unlocked_models(self, size_required: int) -> None:
"""Move any unused models from VRAM."""
reserved = self._max_vram_cache_size * GIG
"""Offload models from the execution_device to make room for size_required.
:param size_required: The amount of space to clear in the execution_device cache, in bytes.
"""
reserved = self._max_vram_cache_size * GB
vram_in_use = torch.cuda.memory_allocated() + size_required
self.logger.debug(f"{(vram_in_use/GIG):.2f}GB VRAM needed for models; max allowed={(reserved/GIG):.2f}GB")
self.logger.debug(f"{(vram_in_use/GB):.2f}GB VRAM needed for models; max allowed={(reserved/GB):.2f}GB")
for _, cache_entry in sorted(self._cached_models.items(), key=lambda x: x[1].size):
if vram_in_use <= reserved:
break
@@ -245,7 +254,7 @@ class ModelCache(ModelCacheBase[AnyModel]):
cache_entry.loaded = False
vram_in_use = torch.cuda.memory_allocated() + size_required
self.logger.debug(
f"Removing {cache_entry.key} from VRAM to free {(cache_entry.size/GIG):.2f}GB; vram free = {(torch.cuda.memory_allocated()/GIG):.2f}GB"
f"Removing {cache_entry.key} from VRAM to free {(cache_entry.size/GB):.2f}GB; vram free = {(torch.cuda.memory_allocated()/GB):.2f}GB"
)
TorchDevice.empty_cache()
@@ -303,7 +312,7 @@ class ModelCache(ModelCacheBase[AnyModel]):
self.logger.debug(
f"Moved model '{cache_entry.key}' from {source_device} to"
f" {target_device} in {(end_model_to_time-start_model_to_time):.2f}s."
f"Estimated model size: {(cache_entry.size/GIG):.3f} GB."
f"Estimated model size: {(cache_entry.size/GB):.3f} GB."
f"{get_pretty_snapshot_diff(snapshot_before, snapshot_after)}"
)
@@ -326,14 +335,14 @@ class ModelCache(ModelCacheBase[AnyModel]):
f"Moving model '{cache_entry.key}' from {source_device} to"
f" {target_device} caused an unexpected change in VRAM usage. The model's"
" estimated size may be incorrect. Estimated model size:"
f" {(cache_entry.size/GIG):.3f} GB.\n"
f" {(cache_entry.size/GB):.3f} GB.\n"
f"{get_pretty_snapshot_diff(snapshot_before, snapshot_after)}"
)
def print_cuda_stats(self) -> None:
"""Log CUDA diagnostics."""
vram = "%4.2fG" % (torch.cuda.memory_allocated() / GIG)
ram = "%4.2fG" % (self.cache_size() / GIG)
vram = "%4.2fG" % (torch.cuda.memory_allocated() / GB)
ram = "%4.2fG" % (self.cache_size() / GB)
in_ram_models = 0
in_vram_models = 0
@@ -353,17 +362,20 @@ class ModelCache(ModelCacheBase[AnyModel]):
)
def make_room(self, size: int) -> None:
"""Make enough room in the cache to accommodate a new model of indicated size."""
# calculate how much memory this model will require
# multiplier = 2 if self.precision==torch.float32 else 1
"""Make enough room in the cache to accommodate a new model of indicated size.
Note: This function deletes all of the cache's internal references to a model in order to free it. If there are
external references to the model, there's nothing that the cache can do about it, and those models will not be
garbage-collected.
"""
bytes_needed = size
maximum_size = self.max_cache_size * GIG # stored in GB, convert to bytes
maximum_size = self.max_cache_size * GB # stored in GB, convert to bytes
current_size = self.cache_size()
if current_size + bytes_needed > maximum_size:
self.logger.debug(
f"Max cache size exceeded: {(current_size/GIG):.2f}/{self.max_cache_size:.2f} GB, need an additional"
f" {(bytes_needed/GIG):.2f} GB"
f"Max cache size exceeded: {(current_size/GB):.2f}/{self.max_cache_size:.2f} GB, need an additional"
f" {(bytes_needed/GB):.2f} GB"
)
self.logger.debug(f"Before making_room: cached_models={len(self._cached_models)}")
@@ -380,7 +392,7 @@ class ModelCache(ModelCacheBase[AnyModel]):
if not cache_entry.locked:
self.logger.debug(
f"Removing {model_key} from RAM cache to free at least {(size/GIG):.2f} GB (-{(cache_entry.size/GIG):.2f} GB)"
f"Removing {model_key} from RAM cache to free at least {(size/GB):.2f} GB (-{(cache_entry.size/GB):.2f} GB)"
)
current_size -= cache_entry.size
models_cleared += 1

View File

@@ -0,0 +1,246 @@
# Copyright (c) 2024, Brandon W. Rising and the InvokeAI Development Team
"""Class for Flux model loading in InvokeAI."""
from pathlib import Path
from typing import Optional
import accelerate
import torch
from safetensors.torch import load_file
from transformers import AutoConfig, AutoModelForTextEncoding, CLIPTextModel, CLIPTokenizer, T5EncoderModel, T5Tokenizer
from invokeai.app.services.config.config_default import get_config
from invokeai.backend.flux.model import Flux
from invokeai.backend.flux.modules.autoencoder import AutoEncoder
from invokeai.backend.flux.util import ae_params, params
from invokeai.backend.model_manager import (
AnyModel,
AnyModelConfig,
BaseModelType,
ModelFormat,
ModelType,
SubModelType,
)
from invokeai.backend.model_manager.config import (
CheckpointConfigBase,
CLIPEmbedDiffusersConfig,
MainBnbQuantized4bCheckpointConfig,
MainCheckpointConfig,
T5EncoderBnbQuantizedLlmInt8bConfig,
T5EncoderConfig,
VAECheckpointConfig,
)
from invokeai.backend.model_manager.load.load_default import ModelLoader
from invokeai.backend.model_manager.load.model_loader_registry import ModelLoaderRegistry
from invokeai.backend.model_manager.util.model_util import (
convert_bundle_to_flux_transformer_checkpoint,
)
from invokeai.backend.util.silence_warnings import SilenceWarnings
try:
from invokeai.backend.quantization.bnb_llm_int8 import quantize_model_llm_int8
from invokeai.backend.quantization.bnb_nf4 import quantize_model_nf4
bnb_available = True
except ImportError:
bnb_available = False
app_config = get_config()
@ModelLoaderRegistry.register(base=BaseModelType.Flux, type=ModelType.VAE, format=ModelFormat.Checkpoint)
class FluxVAELoader(ModelLoader):
"""Class to load VAE models."""
def _load_model(
self,
config: AnyModelConfig,
submodel_type: Optional[SubModelType] = None,
) -> AnyModel:
if not isinstance(config, VAECheckpointConfig):
raise ValueError("Only VAECheckpointConfig models are currently supported here.")
model_path = Path(config.path)
with SilenceWarnings():
model = AutoEncoder(ae_params[config.config_path])
sd = load_file(model_path)
model.load_state_dict(sd, assign=True)
model.to(dtype=self._torch_dtype)
return model
@ModelLoaderRegistry.register(base=BaseModelType.Any, type=ModelType.CLIPEmbed, format=ModelFormat.Diffusers)
class ClipCheckpointModel(ModelLoader):
"""Class to load main models."""
def _load_model(
self,
config: AnyModelConfig,
submodel_type: Optional[SubModelType] = None,
) -> AnyModel:
if not isinstance(config, CLIPEmbedDiffusersConfig):
raise ValueError("Only CLIPEmbedDiffusersConfig models are currently supported here.")
match submodel_type:
case SubModelType.Tokenizer:
return CLIPTokenizer.from_pretrained(Path(config.path) / "tokenizer")
case SubModelType.TextEncoder:
return CLIPTextModel.from_pretrained(Path(config.path) / "text_encoder")
raise ValueError(
f"Only Tokenizer and TextEncoder submodels are currently supported. Received: {submodel_type.value if submodel_type else 'None'}"
)
@ModelLoaderRegistry.register(base=BaseModelType.Any, type=ModelType.T5Encoder, format=ModelFormat.BnbQuantizedLlmInt8b)
class BnbQuantizedLlmInt8bCheckpointModel(ModelLoader):
"""Class to load main models."""
def _load_model(
self,
config: AnyModelConfig,
submodel_type: Optional[SubModelType] = None,
) -> AnyModel:
if not isinstance(config, T5EncoderBnbQuantizedLlmInt8bConfig):
raise ValueError("Only T5EncoderBnbQuantizedLlmInt8bConfig models are currently supported here.")
if not bnb_available:
raise ImportError(
"The bnb modules are not available. Please install bitsandbytes if available on your platform."
)
match submodel_type:
case SubModelType.Tokenizer2:
return T5Tokenizer.from_pretrained(Path(config.path) / "tokenizer_2", max_length=512)
case SubModelType.TextEncoder2:
te2_model_path = Path(config.path) / "text_encoder_2"
model_config = AutoConfig.from_pretrained(te2_model_path)
with accelerate.init_empty_weights():
model = AutoModelForTextEncoding.from_config(model_config)
model = quantize_model_llm_int8(model, modules_to_not_convert=set())
state_dict_path = te2_model_path / "bnb_llm_int8_model.safetensors"
state_dict = load_file(state_dict_path)
self._load_state_dict_into_t5(model, state_dict)
return model
raise ValueError(
f"Only Tokenizer and TextEncoder submodels are currently supported. Received: {submodel_type.value if submodel_type else 'None'}"
)
@classmethod
def _load_state_dict_into_t5(cls, model: T5EncoderModel, state_dict: dict[str, torch.Tensor]):
# There is a shared reference to a single weight tensor in the model.
# Both "encoder.embed_tokens.weight" and "shared.weight" refer to the same tensor, so only the latter should
# be present in the state_dict.
missing_keys, unexpected_keys = model.load_state_dict(state_dict, strict=False, assign=True)
assert len(unexpected_keys) == 0
assert set(missing_keys) == {"encoder.embed_tokens.weight"}
# Assert that the layers we expect to be shared are actually shared.
assert model.encoder.embed_tokens.weight is model.shared.weight
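# A minimal sketch (hypothetical toy module, not the real T5) of the aliasing
# handled above: in HF's T5, `encoder.embed_tokens` is literally the same
# module object as `shared`, so a checkpoint that stores only "shared.weight"
# loads cleanly with strict=False and the tie survives `assign=True`.
def _example_shared_weight_loading() -> None:
    class TinyShared(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.shared = torch.nn.Embedding(10, 4)
            self.embed_tokens = self.shared  # aliased, like T5's encoder.embed_tokens

    m = TinyShared()
    missing, unexpected = m.load_state_dict({"shared.weight": torch.zeros(10, 4)}, strict=False, assign=True)
    assert "embed_tokens.weight" in missing and not unexpected
    assert m.embed_tokens.weight is m.shared.weight  # still one tensor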
@ModelLoaderRegistry.register(base=BaseModelType.Any, type=ModelType.T5Encoder, format=ModelFormat.T5Encoder)
class T5EncoderCheckpointModel(ModelLoader):
"""Class to load main models."""
def _load_model(
self,
config: AnyModelConfig,
submodel_type: Optional[SubModelType] = None,
) -> AnyModel:
if not isinstance(config, T5EncoderConfig):
raise ValueError("Only T5EncoderConfig models are currently supported here.")
match submodel_type:
case SubModelType.Tokenizer2:
return T5Tokenizer.from_pretrained(Path(config.path) / "tokenizer_2", max_length=512)
case SubModelType.TextEncoder2:
return T5EncoderModel.from_pretrained(Path(config.path) / "text_encoder_2")
raise ValueError(
f"Only Tokenizer and TextEncoder submodels are currently supported. Received: {submodel_type.value if submodel_type else 'None'}"
)
@ModelLoaderRegistry.register(base=BaseModelType.Flux, type=ModelType.Main, format=ModelFormat.Checkpoint)
class FluxCheckpointModel(ModelLoader):
"""Class to load main models."""
def _load_model(
self,
config: AnyModelConfig,
submodel_type: Optional[SubModelType] = None,
) -> AnyModel:
if not isinstance(config, CheckpointConfigBase):
raise ValueError("Only CheckpointConfigBase models are currently supported here.")
match submodel_type:
case SubModelType.Transformer:
return self._load_from_singlefile(config)
raise ValueError(
f"Only Transformer submodels are currently supported. Received: {submodel_type.value if submodel_type else 'None'}"
)
def _load_from_singlefile(
self,
config: AnyModelConfig,
) -> AnyModel:
assert isinstance(config, MainCheckpointConfig)
model_path = Path(config.path)
with SilenceWarnings():
model = Flux(params[config.config_path])
sd = load_file(model_path)
if "model.diffusion_model.double_blocks.0.img_attn.norm.key_norm.scale" in sd:
sd = convert_bundle_to_flux_transformer_checkpoint(sd)
new_sd_size = sum([ten.nelement() * torch.bfloat16.itemsize for ten in sd.values()])
self._ram_cache.make_room(new_sd_size)
for k in sd.keys():
# We need to cast to bfloat16 due to it being the only currently supported dtype for inference
sd[k] = sd[k].to(torch.bfloat16)
model.load_state_dict(sd, assign=True)
return model
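# Sizing note for the make_room() call above (arithmetic sketch): the state dict
# is about to be cast to bfloat16, so the reservation uses bfloat16's 2-byte
# itemsize per element. For a roughly 12B-parameter FLUX transformer that is
# about 12e9 * 2 bytes ~= 24 GB of cache headroom.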
@ModelLoaderRegistry.register(base=BaseModelType.Flux, type=ModelType.Main, format=ModelFormat.BnbQuantizednf4b)
class FluxBnbQuantizednf4bCheckpointModel(ModelLoader):
"""Class to load main models."""
def _load_model(
self,
config: AnyModelConfig,
submodel_type: Optional[SubModelType] = None,
) -> AnyModel:
if not isinstance(config, CheckpointConfigBase):
raise ValueError("Only CheckpointConfigBase models are currently supported here.")
match submodel_type:
case SubModelType.Transformer:
return self._load_from_singlefile(config)
raise ValueError(
f"Only Transformer submodels are currently supported. Received: {submodel_type.value if submodel_type else 'None'}"
)
def _load_from_singlefile(
self,
config: AnyModelConfig,
) -> AnyModel:
assert isinstance(config, MainBnbQuantized4bCheckpointConfig)
if not bnb_available:
raise ImportError(
"The bnb modules are not available. Please install bitsandbytes if available on your platform."
)
model_path = Path(config.path)
with SilenceWarnings():
with accelerate.init_empty_weights():
model = Flux(params[config.config_path])
model = quantize_model_nf4(model, modules_to_not_convert=set(), compute_dtype=torch.bfloat16)
sd = load_file(model_path)
if "model.diffusion_model.double_blocks.0.img_attn.norm.key_norm.scale" in sd:
sd = convert_bundle_to_flux_transformer_checkpoint(sd)
model.load_state_dict(sd, assign=True)
return model

View File

@@ -78,7 +78,12 @@ class GenericDiffusersLoader(ModelLoader):
# TODO: Add exception handling
def _hf_definition_to_type(self, module: str, class_name: str) -> ModelMixin: # fix with correct type
if module in ["diffusers", "transformers"]:
if module in [
"diffusers",
"transformers",
"invokeai.backend.quantization.fast_quantized_transformers_model",
"invokeai.backend.quantization.fast_quantized_diffusion_model",
]:
res_type = sys.modules[module]
else:
res_type = sys.modules["diffusers"].pipelines

View File

@@ -36,8 +36,18 @@ VARIANT_TO_IN_CHANNEL_MAP = {
}
@ModelLoaderRegistry.register(base=BaseModelType.Any, type=ModelType.Main, format=ModelFormat.Diffusers)
@ModelLoaderRegistry.register(base=BaseModelType.Any, type=ModelType.Main, format=ModelFormat.Checkpoint)
@ModelLoaderRegistry.register(base=BaseModelType.StableDiffusion1, type=ModelType.Main, format=ModelFormat.Diffusers)
@ModelLoaderRegistry.register(base=BaseModelType.StableDiffusion2, type=ModelType.Main, format=ModelFormat.Diffusers)
@ModelLoaderRegistry.register(base=BaseModelType.StableDiffusionXL, type=ModelType.Main, format=ModelFormat.Diffusers)
@ModelLoaderRegistry.register(
base=BaseModelType.StableDiffusionXLRefiner, type=ModelType.Main, format=ModelFormat.Diffusers
)
@ModelLoaderRegistry.register(base=BaseModelType.StableDiffusion1, type=ModelType.Main, format=ModelFormat.Checkpoint)
@ModelLoaderRegistry.register(base=BaseModelType.StableDiffusion2, type=ModelType.Main, format=ModelFormat.Checkpoint)
@ModelLoaderRegistry.register(base=BaseModelType.StableDiffusionXL, type=ModelType.Main, format=ModelFormat.Checkpoint)
@ModelLoaderRegistry.register(
base=BaseModelType.StableDiffusionXLRefiner, type=ModelType.Main, format=ModelFormat.Checkpoint
)
class StableDiffusionDiffusersModel(GenericDiffusersLoader):
"""Class to load main models."""

View File

@@ -9,7 +9,7 @@ from typing import Optional
import torch
from diffusers.pipelines.pipeline_utils import DiffusionPipeline
from diffusers.schedulers.scheduling_utils import SchedulerMixin
from transformers import CLIPTokenizer, T5Tokenizer, T5TokenizerFast
from invokeai.backend.image_util.depth_anything.depth_anything_pipeline import DepthAnythingPipeline
from invokeai.backend.image_util.grounding_dino.grounding_dino_pipeline import GroundingDinoPipeline
@@ -50,6 +50,17 @@ def calc_model_size_by_data(logger: logging.Logger, model: AnyModel) -> int:
),
):
return model.calc_size()
elif isinstance(
model,
(
T5TokenizerFast,
T5Tokenizer,
),
):
# HACK(ryand): len(model) just returns the vocabulary size, so this is blatantly wrong. It should be small
# relative to the text encoder that it's used with, so shouldn't matter too much, but we should fix this at some
# point.
return len(model)
else:
# TODO(ryand): Promote this from a log to an exception once we are confident that we are handling all of the
# supported model types.

View File

@@ -95,6 +95,7 @@ class ModelProbe(object):
}
CLASS2TYPE = {
"FluxPipeline": ModelType.Main,
"StableDiffusionPipeline": ModelType.Main,
"StableDiffusionInpaintPipeline": ModelType.Main,
"StableDiffusionXLPipeline": ModelType.Main,
@@ -106,6 +107,9 @@ class ModelProbe(object):
"ControlNetModel": ModelType.ControlNet,
"CLIPVisionModelWithProjection": ModelType.CLIPVision,
"T2IAdapter": ModelType.T2IAdapter,
"CLIPModel": ModelType.CLIPEmbed,
"CLIPTextModel": ModelType.CLIPEmbed,
"T5EncoderModel": ModelType.T5Encoder,
}
@classmethod
@@ -161,7 +165,7 @@ class ModelProbe(object):
fields["description"] = (
fields.get("description") or f"{fields['base'].value} {model_type.value} model {fields['name']}"
)
fields["format"] = fields.get("format") or probe.get_format()
fields["format"] = ModelFormat(fields.get("format")) if "format" in fields else probe.get_format()
fields["hash"] = fields.get("hash") or ModelHash(algorithm=hash_algo).hash(model_path)
fields["default_settings"] = fields.get("default_settings")
@@ -176,10 +180,10 @@ class ModelProbe(object):
fields["repo_variant"] = fields.get("repo_variant") or probe.get_repo_variant()
# additional fields needed for main and controlnet models
if fields["type"] in [ModelType.Main, ModelType.ControlNet, ModelType.VAE] and fields["format"] in [
ModelFormat.Checkpoint,
ModelFormat.BnbQuantizednf4b,
]:
ckpt_config_path = cls._get_checkpoint_config_path(
model_path,
model_type=fields["type"],
@@ -222,7 +226,19 @@ class ModelProbe(object):
ckpt = ckpt.get("state_dict", ckpt)
for key in [str(k) for k in ckpt.keys()]:
if key.startswith(("cond_stage_model.", "first_stage_model.", "model.diffusion_model.")):
if key.startswith(
(
"cond_stage_model.",
"first_stage_model.",
"model.diffusion_model.",
# FLUX models in the official BFL format contain keys with the "double_blocks." prefix.
"double_blocks.",
# Some FLUX checkpoint files contain transformer keys prefixed with "model.diffusion_model".
# This prefix is typically used to distinguish between multiple models bundled in a single file.
"model.diffusion_model.double_blocks.",
)
):
# Keys starting with double_blocks are associated with Flux models
return ModelType.Main
elif key.startswith(("encoder.conv_in", "decoder.conv_in")):
return ModelType.VAE
@@ -280,9 +296,16 @@ class ModelProbe(object):
if (folder_path / "image_encoder.txt").exists():
return ModelType.IPAdapter
i = folder_path / "model_index.json"
c = folder_path / "config.json"
config_path = i if i.exists() else c if c.exists() else None
config_path = None
for p in [
folder_path / "model_index.json", # pipeline
folder_path / "config.json", # most diffusers
folder_path / "text_encoder_2" / "config.json", # T5 text encoder
folder_path / "text_encoder" / "config.json", # T5 CLIP
]:
if p.exists():
config_path = p
break
if config_path:
with open(config_path, "r") as file:
@@ -321,10 +344,30 @@ class ModelProbe(object):
return possible_conf.absolute()
if model_type is ModelType.Main:
if base_type == BaseModelType.Flux:
# TODO: Decide between dev/schnell
checkpoint = ModelProbe._scan_and_load_checkpoint(model_path)
state_dict = checkpoint.get("state_dict") or checkpoint
if (
"guidance_in.out_layer.weight" in state_dict
or "model.diffusion_model.guidance_in.out_layer.weight" in state_dict
):
# For flux, this is a key in invokeai.backend.flux.util.params
# Because model type and format are the discriminators for model configs, this
# is used rather than attempting to support flux with separate model types and formats.
# If changed in the future, please fix me
config_file = "flux-dev"
else:
# For flux, this is a key in invokeai.backend.flux.util.params
# Because model type and format are the discriminators for model configs, this
# is used rather than attempting to support flux with separate model types and formats.
# If changed in the future, please fix me
config_file = "flux-schnell"
else:
config_file = LEGACY_CONFIGS[base_type][variant_type]
if isinstance(config_file, dict): # need another tier for sd-2.x models
config_file = config_file[prediction_type]
config_file = f"stable-diffusion/{config_file}"
elif model_type is ModelType.ControlNet:
config_file = (
"controlnet/cldm_v15.yaml"
@@ -333,7 +376,13 @@ class ModelProbe(object):
)
elif model_type is ModelType.VAE:
config_file = (
"stable-diffusion/v1-inference.yaml"
# For flux, this is a key in invokeai.backend.flux.util.ae_params
# Because model type and format are the discriminators for model configs, this
# is used rather than attempting to support flux with separate model types and formats.
# If changed in the future, please fix me
"flux"
if base_type is BaseModelType.Flux
else "stable-diffusion/v1-inference.yaml"
if base_type is BaseModelType.StableDiffusion1
else "stable-diffusion/sd_xl_base.yaml"
if base_type is BaseModelType.StableDiffusionXL
@@ -416,11 +465,18 @@ class CheckpointProbeBase(ProbeBase):
self.checkpoint = ModelProbe._scan_and_load_checkpoint(model_path)
def get_format(self) -> ModelFormat:
state_dict = self.checkpoint.get("state_dict") or self.checkpoint
if (
"double_blocks.0.img_attn.proj.weight.quant_state.bitsandbytes__nf4" in state_dict
or "model.diffusion_model.double_blocks.0.img_attn.proj.weight.quant_state.bitsandbytes__nf4" in state_dict
):
return ModelFormat.BnbQuantizednf4b
return ModelFormat("checkpoint")
def get_variant_type(self) -> ModelVariantType:
model_type = ModelProbe.get_model_type_from_checkpoint(self.model_path, self.checkpoint)
base_type = self.get_base_type()
if model_type != ModelType.Main or base_type == BaseModelType.Flux:
return ModelVariantType.Normal
state_dict = self.checkpoint.get("state_dict") or self.checkpoint
in_channels = state_dict["model.diffusion_model.input_blocks.0.0.weight"].shape[1]
@@ -440,6 +496,11 @@ class PipelineCheckpointProbe(CheckpointProbeBase):
def get_base_type(self) -> BaseModelType:
checkpoint = self.checkpoint
state_dict = self.checkpoint.get("state_dict") or checkpoint
if (
"double_blocks.0.img_attn.norm.key_norm.scale" in state_dict
or "model.diffusion_model.double_blocks.0.img_attn.norm.key_norm.scale" in state_dict
):
return BaseModelType.Flux
key_name = "model.diffusion_model.input_blocks.2.1.transformer_blocks.0.attn2.to_k.weight"
if key_name in state_dict and state_dict[key_name].shape[-1] == 768:
return BaseModelType.StableDiffusion1
@@ -482,6 +543,7 @@ class VaeCheckpointProbe(CheckpointProbeBase):
(r"xl", BaseModelType.StableDiffusionXL),
(r"sd2", BaseModelType.StableDiffusion2),
(r"vae", BaseModelType.StableDiffusion1),
(r"FLUX.1-schnell_ae", BaseModelType.Flux),
]:
if re.search(regexp, self.model_path.name, re.IGNORECASE):
return basetype
@@ -713,6 +775,30 @@ class TextualInversionFolderProbe(FolderProbeBase):
return TextualInversionCheckpointProbe(path).get_base_type()
class T5EncoderFolderProbe(FolderProbeBase):
def get_base_type(self) -> BaseModelType:
return BaseModelType.Any
def get_format(self) -> ModelFormat:
path = self.model_path / "text_encoder_2"
if (path / "model.safetensors.index.json").exists():
return ModelFormat.T5Encoder
files = list(path.glob("*.safetensors"))
if len(files) == 0:
raise InvalidModelConfigException(f"{self.model_path.as_posix()}: no .safetensors files found")
# shortcut: look for the quantization in the name
if any(x for x in files if "llm_int8" in x.as_posix()):
return ModelFormat.BnbQuantizedLlmInt8b
# more reliable path: probe contents for a 'SCB' key
ckpt = read_checkpoint_meta(files[0], scan=True)
if any("SCB" in x for x in ckpt.keys()):
return ModelFormat.BnbQuantizedLlmInt8b
raise InvalidModelConfigException(f"{self.model_path.as_posix()}: unknown model format")
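# A small sketch (hypothetical key names) of the LLM.int8() detection above:
# bitsandbytes serializes a per-layer scale tensor under a "...SCB" key (see
# `bnb.nn.Linear8bitLt._save_to_state_dict()`), so any key containing "SCB"
# marks a pre-quantized checkpoint.
def _example_scb_detection() -> None:
    keys = [
        "encoder.block.0.layer.0.SelfAttention.q.weight",
        "encoder.block.0.layer.0.SelfAttention.q.SCB",
    ]
    assert any("SCB" in k for k in keys)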
class ONNXFolderProbe(PipelineFolderProbe):
def get_base_type(self) -> BaseModelType:
# Due to the way the installer is set up, the configuration file for safetensors
@@ -805,6 +891,11 @@ class CLIPVisionFolderProbe(FolderProbeBase):
return BaseModelType.Any
class CLIPEmbedFolderProbe(FolderProbeBase):
def get_base_type(self) -> BaseModelType:
return BaseModelType.Any
class SpandrelImageToImageFolderProbe(FolderProbeBase):
def get_base_type(self) -> BaseModelType:
raise NotImplementedError()
@@ -835,8 +926,10 @@ ModelProbe.register_probe("diffusers", ModelType.Main, PipelineFolderProbe)
ModelProbe.register_probe("diffusers", ModelType.VAE, VaeFolderProbe)
ModelProbe.register_probe("diffusers", ModelType.LoRA, LoRAFolderProbe)
ModelProbe.register_probe("diffusers", ModelType.TextualInversion, TextualInversionFolderProbe)
ModelProbe.register_probe("diffusers", ModelType.T5Encoder, T5EncoderFolderProbe)
ModelProbe.register_probe("diffusers", ModelType.ControlNet, ControlNetFolderProbe)
ModelProbe.register_probe("diffusers", ModelType.IPAdapter, IPAdapterFolderProbe)
ModelProbe.register_probe("diffusers", ModelType.CLIPEmbed, CLIPEmbedFolderProbe)
ModelProbe.register_probe("diffusers", ModelType.CLIPVision, CLIPVisionFolderProbe)
ModelProbe.register_probe("diffusers", ModelType.T2IAdapter, T2IAdapterFolderProbe)
ModelProbe.register_probe("diffusers", ModelType.SpandrelImageToImage, SpandrelImageToImageFolderProbe)

View File

@@ -2,7 +2,7 @@ from typing import Optional
from pydantic import BaseModel
from invokeai.backend.model_manager.config import BaseModelType, ModelFormat, ModelType
class StarterModelWithoutDependencies(BaseModel):
@@ -11,6 +11,7 @@ class StarterModelWithoutDependencies(BaseModel):
name: str
base: BaseModelType
type: ModelType
format: Optional[ModelFormat] = None
is_installed: bool = False
@@ -51,10 +52,76 @@ cyberrealistic_negative = StarterModel(
type=ModelType.TextualInversion,
)
t5_base_encoder = StarterModel(
name="t5_base_encoder",
base=BaseModelType.Any,
source="InvokeAI/t5-v1_1-xxl::bfloat16",
description="T5-XXL text encoder (used in FLUX pipelines). ~8GB",
type=ModelType.T5Encoder,
)
t5_8b_quantized_encoder = StarterModel(
name="t5_bnb_int8_quantized_encoder",
base=BaseModelType.Any,
source="InvokeAI/t5-v1_1-xxl::bnb_llm_int8",
description="T5-XXL text encoder with bitsandbytes LLM.int8() quantization (used in FLUX pipelines). ~5GB",
type=ModelType.T5Encoder,
format=ModelFormat.BnbQuantizedLlmInt8b,
)
clip_l_encoder = StarterModel(
name="clip-vit-large-patch14",
base=BaseModelType.Any,
source="InvokeAI/clip-vit-large-patch14-text-encoder::bfloat16",
description="CLIP-L text encoder (used in FLUX pipelines). ~250MB",
type=ModelType.CLIPEmbed,
)
flux_vae = StarterModel(
name="FLUX.1-schnell_ae",
base=BaseModelType.Flux,
source="black-forest-labs/FLUX.1-schnell::ae.safetensors",
description="FLUX VAE compatible with both schnell and dev variants.",
type=ModelType.VAE,
)
# List of starter models, displayed on the frontend.
# The order/sort of this list is not changed by the frontend - set it how you want it here.
STARTER_MODELS: list[StarterModel] = [
# region: Main
StarterModel(
name="FLUX Schnell (Quantized)",
base=BaseModelType.Flux,
source="InvokeAI/flux_schnell::transformer/bnb_nf4/flux1-schnell-bnb_nf4.safetensors",
description="FLUX schnell transformer quantized to bitsandbytes NF4 format. Total size with dependencies: ~12GB",
type=ModelType.Main,
dependencies=[t5_8b_quantized_encoder, flux_vae, clip_l_encoder],
),
StarterModel(
name="FLUX Dev (Quantized)",
base=BaseModelType.Flux,
source="InvokeAI/flux_dev::transformer/bnb_nf4/flux1-dev-bnb_nf4.safetensors",
description="FLUX dev transformer quantized to bitsandbytes NF4 format. Total size with dependencies: ~12GB",
type=ModelType.Main,
dependencies=[t5_8b_quantized_encoder, flux_vae, clip_l_encoder],
),
StarterModel(
name="FLUX Schnell",
base=BaseModelType.Flux,
source="InvokeAI/flux_schnell::transformer/base/flux1-schnell.safetensors",
description="FLUX schnell transformer in bfloat16. Total size with dependencies: ~33GB",
type=ModelType.Main,
dependencies=[t5_base_encoder, flux_vae, clip_l_encoder],
),
StarterModel(
name="FLUX Dev",
base=BaseModelType.Flux,
source="InvokeAI/flux_dev::transformer/base/flux1-dev.safetensors",
description="FLUX dev transformer in bfloat16. Total size with dependencies: ~33GB",
type=ModelType.Main,
dependencies=[t5_base_encoder, flux_vae, clip_l_encoder],
),
StarterModel(
name="CyberRealistic v4.1",
base=BaseModelType.StableDiffusion1,
@@ -125,6 +192,7 @@ STARTER_MODELS: list[StarterModel] = [
# endregion
# region VAE
sdxl_fp16_vae_fix,
flux_vae,
# endregion
# region LoRA
StarterModel(
@@ -450,6 +518,11 @@ STARTER_MODELS: list[StarterModel] = [
type=ModelType.SpandrelImageToImage,
),
# endregion
# region TextEncoders
t5_base_encoder,
t5_8b_quantized_encoder,
clip_l_encoder,
# endregion
]
assert len(STARTER_MODELS) == len({m.source for m in STARTER_MODELS}), "Duplicate starter models"

View File

@@ -133,3 +133,29 @@ def lora_token_vector_length(checkpoint: Dict[str, torch.Tensor]) -> Optional[in
break
return lora_token_vector_length
def convert_bundle_to_flux_transformer_checkpoint(
transformer_state_dict: dict[str, torch.Tensor],
) -> dict[str, torch.Tensor]:
original_state_dict: dict[str, torch.Tensor] = {}
keys_to_remove: list[str] = []
for k, v in transformer_state_dict.items():
if not k.startswith("model.diffusion_model"):
keys_to_remove.append(k) # This can be removed in the future if we only want to delete transformer keys
continue
if k.endswith("scale"):
# Scale math must be done at bfloat16 due to our current flux model
# support limitations at inference time
v = v.to(dtype=torch.bfloat16)
new_key = k.replace("model.diffusion_model.", "")
original_state_dict[new_key] = v
keys_to_remove.append(k)
# Remove processed keys from the original dictionary, leaving others in case
# other model state dicts need to be pulled
for k in keys_to_remove:
del transformer_state_dict[k]
return original_state_dict
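# An illustrative sketch (hypothetical keys) of the conversion above: bundled
# checkpoints prefix transformer keys with "model.diffusion_model.", which is
# stripped, while non-transformer keys are dropped from the result.
def _example_bundle_conversion() -> None:
    bundle = {
        "model.diffusion_model.double_blocks.0.img_attn.qkv.weight": torch.zeros(1),
        "first_stage_model.decoder.conv_in.weight": torch.zeros(1),  # not a transformer key
    }
    converted = convert_bundle_to_flux_transformer_checkpoint(bundle)
    assert "double_blocks.0.img_attn.qkv.weight" in converted
    assert "first_stage_model.decoder.conv_in.weight" not in converted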

View File

@@ -54,6 +54,7 @@ def filter_files(
"lora_weights.safetensors",
"weights.pb",
"onnx_data",
"spiece.model", # Added for `black-forest-labs/FLUX.1-schnell`.
)
):
paths.append(file)
@@ -62,13 +63,13 @@ def filter_files(
# downloading random checkpoints that might also be in the repo. However there is no guarantee
# that a checkpoint doesn't contain "model" in its name, and no guarantee that future diffusers models
# will adhere to this naming convention, so this is an area to be careful of.
elif re.search(r"model(\.[^.]+)?\.(safetensors|bin|onnx|xml|pth|pt|ckpt|msgpack)$", file.name):
elif re.search(r"model.*\.(safetensors|bin|onnx|xml|pth|pt|ckpt|msgpack)$", file.name):
paths.append(file)
# limit search to subfolder if requested
if subfolder:
subfolder = root / subfolder
paths = [x for x in paths if Path(subfolder) in x.parents]
# _filter_by_variant uniquifies the paths and returns a set
return sorted(_filter_by_variant(paths, variant))
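# Illustrative matches for the relaxed pattern above (a sketch, assuming plain
# `re` semantics): the previous pattern `model(\.[^.]+)?\.(...)$` rejected
# sharded files such as "model-00001-of-00002.safetensors", which the broader
# `model.*\.(...)$` now accepts. The subfolder filter likewise now keeps files
# in nested subdirectories via `Path(subfolder) in x.parents`.
def _example_weight_file_patterns() -> None:
    pattern = r"model.*\.(safetensors|bin|onnx|xml|pth|pt|ckpt|msgpack)$"
    assert re.search(pattern, "model-00001-of-00002.safetensors")
    assert re.search(pattern, "diffusion_pytorch_model.fp16.safetensors")
    assert not re.search(pattern, "config.json")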
@@ -97,7 +98,9 @@ def _filter_by_variant(files: List[Path], variant: ModelRepoVariant) -> Set[Path
if variant == ModelRepoVariant.Flax:
result.add(path)
elif path.suffix in [".json", ".txt"]:
# Note: '.model' was added to support:
# https://huggingface.co/black-forest-labs/FLUX.1-schnell/blob/768d12a373ed5cc9ef9a9dea7504dc09fcc14842/tokenizer_2/spiece.model
elif path.suffix in [".json", ".txt", ".model"]:
result.add(path)
elif variant in [
@@ -140,6 +143,23 @@ def _filter_by_variant(files: List[Path], variant: ModelRepoVariant) -> Set[Path
continue
for candidate_list in subfolder_weights.values():
# Check if at least one of the files has the explicit fp16 variant.
at_least_one_fp16 = False
for candidate in candidate_list:
if len(candidate.path.suffixes) == 2 and candidate.path.suffixes[0] == ".fp16":
at_least_one_fp16 = True
break
if not at_least_one_fp16:
# If none of the candidates in this candidate_list have the explicit fp16 variant label, then this
# candidate_list probably doesn't adhere to the variant naming convention that we expected. In this case,
# we'll simply keep all the candidates. An example of a model that hits this case is
# `black-forest-labs/FLUX.1-schnell` (as of commit 012d2fd).
for candidate in candidate_list:
result.add(candidate.path)
continue
# The candidate_list seems to have the expected variant naming convention. We'll select the highest scoring
# candidate.
highest_score_candidate = max(candidate_list, key=lambda candidate: candidate.score)
if highest_score_candidate:
result.add(highest_score_candidate.path)
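# A small sketch (hypothetical file names) of the fallback above: a properly
# labelled pair carries an explicit ".fp16" suffix on one candidate, while
# FLUX-style shards carry no variant label at all, so every shard is kept.
def _example_fp16_labels() -> None:
    labelled = [Path("unet/model.fp16.safetensors"), Path("unet/model.safetensors")]
    shards = [
        Path("text_encoder_2/model-00001-of-00002.safetensors"),
        Path("text_encoder_2/model-00002-of-00002.safetensors"),
    ]
    assert any(p.suffixes[:1] == [".fp16"] for p in labelled)
    assert not any(p.suffixes[:1] == [".fp16"] for p in shards)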

View File

@@ -0,0 +1,135 @@
import bitsandbytes as bnb
import torch
# This file contains utils for working with models that use bitsandbytes LLM.int8() quantization.
# The utils in this file are partially inspired by:
# https://github.com/Lightning-AI/pytorch-lightning/blob/1551a16b94f5234a4a78801098f64d0732ef5cb5/src/lightning/fabric/plugins/precision/bitsandbytes.py
# NOTE(ryand): All of the custom state_dict manipulation logic in this file is pretty hacky. This could be made much
# cleaner by re-implementing bnb.nn.Linear8bitLt with proper use of buffers and less magic. But, for now, we try to
# stick close to the bitsandbytes classes to make interoperability easier with other models that might use bitsandbytes.
class InvokeInt8Params(bnb.nn.Int8Params):
"""We override cuda() to avoid re-quantizing the weights in the following cases:
- We loaded quantized weights from a state_dict on the cpu, and then moved the model to the gpu.
- We are moving the model back-and-forth between the cpu and gpu.
"""
def cuda(self, device):
if self.has_fp16_weights:
return super().cuda(device)
elif self.CB is not None and self.SCB is not None:
self.data = self.data.cuda()
self.CB = self.data
self.SCB = self.SCB.cuda()
else:
# we store the 8-bit row-major weight
# we convert this weight to the turing/ampere weight during the first inference pass
B = self.data.contiguous().half().cuda(device)
CB, CBt, SCB, SCBt, coo_tensorB = bnb.functional.double_quant(B)
del CBt
del SCBt
self.data = CB
self.CB = CB
self.SCB = SCB
return self
class InvokeLinear8bitLt(bnb.nn.Linear8bitLt):
def _load_from_state_dict(
self,
state_dict: dict[str, torch.Tensor],
prefix: str,
local_metadata,
strict,
missing_keys,
unexpected_keys,
error_msgs,
):
weight = state_dict.pop(prefix + "weight")
bias = state_dict.pop(prefix + "bias", None)
# See `bnb.nn.Linear8bitLt._save_to_state_dict()` for the serialization logic of SCB and weight_format.
scb = state_dict.pop(prefix + "SCB", None)
# Currently, we only support weight_format=0.
weight_format = state_dict.pop(prefix + "weight_format", None)
assert weight_format == 0
# TODO(ryand): Technically, we should be using `strict`, `missing_keys`, `unexpected_keys`, and `error_msgs`
# rather than raising an exception to correctly implement this API.
assert len(state_dict) == 0
if scb is not None:
# We are loading a pre-quantized state dict.
self.weight = InvokeInt8Params(
data=weight,
requires_grad=self.weight.requires_grad,
has_fp16_weights=False,
# Note: After quantization, CB is the same as weight.
CB=weight,
SCB=scb,
)
self.bias = bias if bias is None else torch.nn.Parameter(bias)
else:
# We are loading a non-quantized state dict.
# We could simply call the `super()._load_from_state_dict()` method here, but then we wouldn't be able to
# load from a state_dict into a model on the "meta" device. Attempting to load into a model on the "meta"
# device requires setting `assign=True`, doing this with the default `super()._load_from_state_dict()`
implementation causes `Params4bit` to be replaced by a `torch.nn.Parameter`. By initializing a new
# `Params4bit` object, we work around this issue. It's a bit hacky, but it gets the job done.
self.weight = InvokeInt8Params(
data=weight,
requires_grad=self.weight.requires_grad,
has_fp16_weights=False,
CB=None,
SCB=None,
)
self.bias = bias if bias is None else torch.nn.Parameter(bias)
# Reset the state. The persisted fields are based on the initialization behaviour in
# `bnb.nn.Linear8bitLt.__init__()`.
new_state = bnb.MatmulLtState()
new_state.threshold = self.state.threshold
new_state.has_fp16_weights = False
new_state.use_pool = self.state.use_pool
self.state = new_state
def _convert_linear_layers_to_llm_8bit(
module: torch.nn.Module, ignore_modules: set[str], outlier_threshold: float, prefix: str = ""
) -> None:
"""Convert all linear layers in the module to bnb.nn.Linear8bitLt layers."""
for name, child in module.named_children():
fullname = f"{prefix}.{name}" if prefix else name
if isinstance(child, torch.nn.Linear) and not any(fullname.startswith(s) for s in ignore_modules):
has_bias = child.bias is not None
replacement = InvokeLinear8bitLt(
child.in_features,
child.out_features,
bias=has_bias,
has_fp16_weights=False,
threshold=outlier_threshold,
)
replacement.weight.data = child.weight.data
if has_bias:
replacement.bias.data = child.bias.data
replacement.requires_grad_(False)
module.__setattr__(name, replacement)
else:
_convert_linear_layers_to_llm_8bit(
child, ignore_modules, outlier_threshold=outlier_threshold, prefix=fullname
)
def quantize_model_llm_int8(model: torch.nn.Module, modules_to_not_convert: set[str], outlier_threshold: float = 6.0):
"""Apply bitsandbytes LLM.8bit() quantization to the model."""
_convert_linear_layers_to_llm_8bit(
module=model, ignore_modules=modules_to_not_convert, outlier_threshold=outlier_threshold
)
return model
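# A minimal usage sketch for quantize_model_llm_int8() (hypothetical TinyMLP;
# requires a CUDA device): linear layers are swapped for InvokeLinear8bitLt,
# and the actual int8 quantization happens when the model moves to the GPU.
def _example_quantize_llm_int8() -> None:
    class TinyMLP(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.fc = torch.nn.Linear(8, 8)

    model = quantize_model_llm_int8(TinyMLP(), modules_to_not_convert=set())
    assert isinstance(model.fc, InvokeLinear8bitLt)
    model.to("cuda")  # weights are quantized to int8 during this move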

View File

@@ -0,0 +1,156 @@
import bitsandbytes as bnb
import torch
# This file contains utils for working with models that use bitsandbytes NF4 quantization.
# The utils in this file are partially inspired by:
# https://github.com/Lightning-AI/pytorch-lightning/blob/1551a16b94f5234a4a78801098f64d0732ef5cb5/src/lightning/fabric/plugins/precision/bitsandbytes.py
# NOTE(ryand): All of the custom state_dict manipulation logic in this file is pretty hacky. This could be made much
# cleaner by re-implementing bnb.nn.LinearNF4 with proper use of buffers and less magic. But, for now, we try to stick
# close to the bitsandbytes classes to make interoperability easier with other models that might use bitsandbytes.
class InvokeLinearNF4(bnb.nn.LinearNF4):
"""A class that extends `bnb.nn.LinearNF4` to add the following functionality:
- Ability to load Linear NF4 layers from a pre-quantized state_dict.
- Ability to load Linear NF4 layers from a state_dict when the model is on the "meta" device.
"""
def _load_from_state_dict(
self,
state_dict: dict[str, torch.Tensor],
prefix: str,
local_metadata,
strict,
missing_keys,
unexpected_keys,
error_msgs,
):
"""This method is based on the logic in the bitsandbytes serialization unit tests for `Linear4bit`:
https://github.com/bitsandbytes-foundation/bitsandbytes/blob/6d714a5cce3db5bd7f577bc447becc7a92d5ccc7/tests/test_linear4bit.py#L52-L71
"""
weight = state_dict.pop(prefix + "weight")
bias = state_dict.pop(prefix + "bias", None)
# We expect the remaining keys to be quant_state keys.
quant_state_sd = state_dict
# During serialization, the quant_state is stored as subkeys of "weight." (See
# `bnb.nn.LinearNF4._save_to_state_dict()`). We validate that they at least have the correct prefix.
# TODO(ryand): Technically, we should be using `strict`, `missing_keys`, `unexpected_keys`, and `error_msgs`
# rather than raising an exception to correctly implement this API.
assert all(k.startswith(prefix + "weight.") for k in quant_state_sd.keys())
if len(quant_state_sd) > 0:
# We are loading a pre-quantized state dict.
self.weight = bnb.nn.Params4bit.from_prequantized(
data=weight, quantized_stats=quant_state_sd, device=weight.device
)
self.bias = bias if bias is None else torch.nn.Parameter(bias, requires_grad=False)
else:
# We are loading a non-quantized state dict.
# We could simply call the `super()._load_from_state_dict()` method here, but then we wouldn't be able to
# load from a state_dict into a model on the "meta" device. Attempting to load into a model on the "meta"
# device requires setting `assign=True`, doing this with the default `super()._load_from_state_dict()`
implementation causes `Params4bit` to be replaced by a `torch.nn.Parameter`. By initializing a new
# `Params4bit` object, we work around this issue. It's a bit hacky, but it gets the job done.
self.weight = bnb.nn.Params4bit(
data=weight,
requires_grad=self.weight.requires_grad,
compress_statistics=self.weight.compress_statistics,
quant_type=self.weight.quant_type,
quant_storage=self.weight.quant_storage,
module=self,
)
self.bias = bias if bias is None else torch.nn.Parameter(bias)
def _replace_param(
param: torch.nn.Parameter | bnb.nn.Params4bit,
data: torch.Tensor,
) -> torch.nn.Parameter:
"""A helper function to replace the data of a model parameter with new data in a way that allows replacing params on
the "meta" device.
Supports both `torch.nn.Parameter` and `bnb.nn.Params4bit` parameters.
"""
if param.device.type == "meta":
# Doing `param.data = data` raises a RuntimeError if param.data was on the "meta" device, so we need to
# re-create the param instead of overwriting the data.
if isinstance(param, bnb.nn.Params4bit):
return bnb.nn.Params4bit(
data,
requires_grad=data.requires_grad,
quant_state=param.quant_state,
compress_statistics=param.compress_statistics,
quant_type=param.quant_type,
)
return torch.nn.Parameter(data, requires_grad=data.requires_grad)
param.data = data
return param
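# A small sketch of the failure mode worked around above: assigning `.data` on
# a parameter that lives on the "meta" device raises a RuntimeError, so the
# parameter is re-created around the new tensor instead of mutated in place.
def _example_meta_param_replacement() -> None:
    p = torch.nn.Parameter(torch.empty(2, 2, device="meta"))
    p = _replace_param(p, torch.zeros(2, 2))  # re-creates the Parameter on cpu
    assert p.device.type == "cpu"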
def _convert_linear_layers_to_nf4(
module: torch.nn.Module,
ignore_modules: set[str],
compute_dtype: torch.dtype,
compress_statistics: bool = False,
prefix: str = "",
) -> None:
"""Convert all linear layers in the model to NF4 quantized linear layers.
Args:
module: All linear layers in this module will be converted.
ignore_modules: A set of module prefixes to ignore when converting linear layers.
compute_dtype: The dtype to use for computation in the quantized linear layers.
compress_statistics: Whether to enable nested quantization (aka double quantization) where the quantization
constants from the first quantization are quantized again.
prefix: The prefix of the current module in the model. Used to call this function recursively.
"""
for name, child in module.named_children():
fullname = f"{prefix}.{name}" if prefix else name
if isinstance(child, torch.nn.Linear) and not any(fullname.startswith(s) for s in ignore_modules):
has_bias = child.bias is not None
replacement = InvokeLinearNF4(
child.in_features,
child.out_features,
bias=has_bias,
compute_dtype=compute_dtype,
compress_statistics=compress_statistics,
)
if has_bias:
replacement.bias = _replace_param(replacement.bias, child.bias.data)
replacement.weight = _replace_param(replacement.weight, child.weight.data)
replacement.requires_grad_(False)
module.__setattr__(name, replacement)
else:
_convert_linear_layers_to_nf4(child, ignore_modules, compute_dtype=compute_dtype, prefix=fullname)
def quantize_model_nf4(model: torch.nn.Module, modules_to_not_convert: set[str], compute_dtype: torch.dtype):
"""Apply bitsandbytes nf4 quantization to the model.
You likely want to call this function inside an `accelerate.init_empty_weights()` context.
Example usage:
```
# Initialize the model from a config on the meta device.
with accelerate.init_empty_weights():
model = ModelClass.from_config(...)
# Add NF4 quantization linear layers to the model - still on the meta device.
with accelerate.init_empty_weights():
model = quantize_model_nf4(model, modules_to_not_convert=set(), compute_dtype=torch.float16)
# Load a state_dict into the model. (Could be either a prequantized or non-quantized state_dict.)
model.load_state_dict(state_dict, strict=True, assign=True)
# Move the model to the "cuda" device. If the model was non-quantized, this is where the weight quantization takes
# place.
model.to("cuda")
```
"""
_convert_linear_layers_to_nf4(module=model, ignore_modules=modules_to_not_convert, compute_dtype=compute_dtype)
return model

View File

@@ -0,0 +1,79 @@
from pathlib import Path
import accelerate
from safetensors.torch import load_file, save_file
from invokeai.backend.flux.model import Flux
from invokeai.backend.flux.util import params
from invokeai.backend.quantization.bnb_llm_int8 import quantize_model_llm_int8
from invokeai.backend.quantization.scripts.load_flux_model_bnb_nf4 import log_time
def main():
"""A script for quantizing a FLUX transformer model using the bitsandbytes LLM.int8() quantization method.
This script is primarily intended for reference. The script params (e.g. the model_path, modules_to_not_convert,
etc.) are hardcoded and would need to be modified for other use cases.
"""
# Load the FLUX transformer model onto the meta device.
model_path = Path(
"/data/invokeai/models/.download_cache/https__huggingface.co_black-forest-labs_flux.1-schnell_resolve_main_flux1-schnell.safetensors/flux1-schnell.safetensors"
)
with log_time("Intialize FLUX transformer on meta device"):
# TODO(ryand): Determine if this is a schnell model or a dev model and load the appropriate config.
p = params["flux-schnell"]
# Initialize the model on the "meta" device.
with accelerate.init_empty_weights():
model = Flux(p)
# TODO(ryand): We may want to add some modules to not quantize here (e.g. the proj_out layer). See the accelerate
# `get_keys_to_not_convert(...)` function for a heuristic to determine which modules to not quantize.
modules_to_not_convert: set[str] = set()
model_int8_path = model_path.parent / "bnb_llm_int8.safetensors"
if model_int8_path.exists():
# The quantized model already exists, load it and return it.
print(f"A pre-quantized model already exists at '{model_int8_path}'. Attempting to load it...")
# Replace the linear layers with LLM.int8() quantized linear layers (still on the meta device).
with log_time("Replace linear layers with LLM.int8() layers"), accelerate.init_empty_weights():
model = quantize_model_llm_int8(model, modules_to_not_convert=modules_to_not_convert)
with log_time("Load state dict into model"):
sd = load_file(model_int8_path)
model.load_state_dict(sd, strict=True, assign=True)
with log_time("Move model to cuda"):
model = model.to("cuda")
print(f"Successfully loaded pre-quantized model from '{model_int8_path}'.")
else:
# The quantized model does not exist, quantize the model and save it.
print(f"No pre-quantized model found at '{model_int8_path}'. Quantizing the model...")
with log_time("Replace linear layers with LLM.int8() layers"), accelerate.init_empty_weights():
model = quantize_model_llm_int8(model, modules_to_not_convert=modules_to_not_convert)
with log_time("Load state dict into model"):
state_dict = load_file(model_path)
# TODO(ryand): Cast the state_dict to the appropriate dtype?
model.load_state_dict(state_dict, strict=True, assign=True)
with log_time("Move model to cuda and quantize"):
model = model.to("cuda")
with log_time("Save quantized model"):
model_int8_path.parent.mkdir(parents=True, exist_ok=True)
save_file(model.state_dict(), model_int8_path)
print(f"Successfully quantized and saved model to '{model_int8_path}'.")
assert isinstance(model, Flux)
return model
if __name__ == "__main__":
main()

View File

@@ -0,0 +1,96 @@
import time
from contextlib import contextmanager
from pathlib import Path
import accelerate
import torch
from safetensors.torch import load_file, save_file
from invokeai.backend.flux.model import Flux
from invokeai.backend.flux.util import params
from invokeai.backend.quantization.bnb_nf4 import quantize_model_nf4
@contextmanager
def log_time(name: str):
"""Helper context manager to log the time taken by a block of code."""
start = time.time()
try:
yield None
finally:
end = time.time()
print(f"'{name}' took {end - start:.4f} secs")
def main():
"""A script for quantizing a FLUX transformer model using the bitsandbytes NF4 quantization method.
This script is primarily intended for reference. The script params (e.g. the model_path, modules_to_not_convert,
etc.) are hardcoded and would need to be modified for other use cases.
"""
model_path = Path(
"/data/invokeai/models/.download_cache/https__huggingface.co_black-forest-labs_flux.1-schnell_resolve_main_flux1-schnell.safetensors/flux1-schnell.safetensors"
)
# inference_dtype = torch.bfloat16
with log_time("Intialize FLUX transformer on meta device"):
# TODO(ryand): Determine if this is a schnell model or a dev model and load the appropriate config.
p = params["flux-schnell"]
# Initialize the model on the "meta" device.
with accelerate.init_empty_weights():
model = Flux(p)
# TODO(ryand): We may want to add some modules to not quantize here (e.g. the proj_out layer). See the accelerate
# `get_keys_to_not_convert(...)` function for a heuristic to determine which modules to not quantize.
modules_to_not_convert: set[str] = set()
model_nf4_path = model_path.parent / "bnb_nf4.safetensors"
if model_nf4_path.exists():
# The quantized model already exists, load it and return it.
print(f"A pre-quantized model already exists at '{model_nf4_path}'. Attempting to load it...")
# Replace the linear layers with NF4 quantized linear layers (still on the meta device).
with log_time("Replace linear layers with NF4 layers"), accelerate.init_empty_weights():
model = quantize_model_nf4(
model, modules_to_not_convert=modules_to_not_convert, compute_dtype=torch.bfloat16
)
with log_time("Load state dict into model"):
state_dict = load_file(model_nf4_path)
model.load_state_dict(state_dict, strict=True, assign=True)
with log_time("Move model to cuda"):
model = model.to("cuda")
print(f"Successfully loaded pre-quantized model from '{model_nf4_path}'.")
else:
# The quantized model does not exist, quantize the model and save it.
print(f"No pre-quantized model found at '{model_nf4_path}'. Quantizing the model...")
with log_time("Replace linear layers with NF4 layers"), accelerate.init_empty_weights():
model = quantize_model_nf4(
model, modules_to_not_convert=modules_to_not_convert, compute_dtype=torch.bfloat16
)
with log_time("Load state dict into model"):
state_dict = load_file(model_path)
# TODO(ryand): Cast the state_dict to the appropriate dtype?
model.load_state_dict(state_dict, strict=True, assign=True)
with log_time("Move model to cuda and quantize"):
model = model.to("cuda")
with log_time("Save quantized model"):
model_nf4_path.parent.mkdir(parents=True, exist_ok=True)
save_file(model.state_dict(), model_nf4_path)
print(f"Successfully quantized and saved model to '{model_nf4_path}'.")
assert isinstance(model, Flux)
return model
if __name__ == "__main__":
main()

View File

@@ -0,0 +1,92 @@
from pathlib import Path
import accelerate
from safetensors.torch import load_file, save_file
from transformers import AutoConfig, AutoModelForTextEncoding, T5EncoderModel
from invokeai.backend.quantization.bnb_llm_int8 import quantize_model_llm_int8
from invokeai.backend.quantization.scripts.load_flux_model_bnb_nf4 import log_time
def load_state_dict_into_t5(model: T5EncoderModel, state_dict: dict):
# There is a shared reference to a single weight tensor in the model.
# Both "encoder.embed_tokens.weight" and "shared.weight" refer to the same tensor, so only the latter should
# be present in the state_dict.
missing_keys, unexpected_keys = model.load_state_dict(state_dict, strict=False, assign=True)
assert len(unexpected_keys) == 0
assert set(missing_keys) == {"encoder.embed_tokens.weight"}
# Assert that the layers we expect to be shared are actually shared.
assert model.encoder.embed_tokens.weight is model.shared.weight
def main():
"""A script for quantizing a T5 text encoder model using the bitsandbytes LLM.int8() quantization method.
This script is primarily intended for reference. The script params (e.g. the model_path, modules_to_not_convert,
etc.) are hardcoded and would need to be modified for other use cases.
"""
model_path = Path("/data/misc/text_encoder_2")
with log_time("Intialize T5 on meta device"):
model_config = AutoConfig.from_pretrained(model_path)
with accelerate.init_empty_weights():
model = AutoModelForTextEncoding.from_config(model_config)
# TODO(ryand): We may want to add some modules to not quantize here (e.g. the proj_out layer). See the accelerate
# `get_keys_to_not_convert(...)` function for a heuristic to determine which modules to not quantize.
modules_to_not_convert: set[str] = set()
model_int8_path = model_path / "bnb_llm_int8.safetensors"
if model_int8_path.exists():
# The quantized model already exists, load it and return it.
print(f"A pre-quantized model already exists at '{model_int8_path}'. Attempting to load it...")
# Replace the linear layers with LLM.int8() quantized linear layers (still on the meta device).
with log_time("Replace linear layers with LLM.int8() layers"), accelerate.init_empty_weights():
model = quantize_model_llm_int8(model, modules_to_not_convert=modules_to_not_convert)
with log_time("Load state dict into model"):
sd = load_file(model_int8_path)
load_state_dict_into_t5(model, sd)
with log_time("Move model to cuda"):
model = model.to("cuda")
print(f"Successfully loaded pre-quantized model from '{model_int8_path}'.")
else:
# The quantized model does not exist, quantize the model and save it.
print(f"No pre-quantized model found at '{model_int8_path}'. Quantizing the model...")
with log_time("Replace linear layers with LLM.int8() layers"), accelerate.init_empty_weights():
model = quantize_model_llm_int8(model, modules_to_not_convert=modules_to_not_convert)
with log_time("Load state dict into model"):
# Load sharded state dict.
files = list(model_path.glob("*.safetensors"))
state_dict = {}
for file in files:
sd = load_file(file)
state_dict.update(sd)
load_state_dict_into_t5(model, state_dict)
with log_time("Move model to cuda and quantize"):
model = model.to("cuda")
with log_time("Save quantized model"):
model_int8_path.parent.mkdir(parents=True, exist_ok=True)
state_dict = model.state_dict()
state_dict.pop("encoder.embed_tokens.weight")
save_file(state_dict, model_int8_path)
# This handling of shared weights could also be achieved with save_model(...), but then we'd lose control
# over which keys are kept. And, the corresponding load_model(...) function does not support assign=True.
# save_model(model, model_int8_path)
print(f"Successfully quantized and saved model to '{model_int8_path}'.")
assert isinstance(model, T5EncoderModel)
return model
if __name__ == "__main__":
main()

View File

@@ -25,11 +25,6 @@ class BasicConditioningInfo:
return self
@dataclass
class ConditioningFieldData:
conditionings: List[BasicConditioningInfo]
@dataclass
class SDXLConditioningInfo(BasicConditioningInfo):
"""SDXL text conditioning information produced by Compel."""
@@ -43,6 +38,22 @@ class SDXLConditioningInfo(BasicConditioningInfo):
return super().to(device=device, dtype=dtype)
@dataclass
class FLUXConditioningInfo:
clip_embeds: torch.Tensor
t5_embeds: torch.Tensor
def to(self, device: torch.device | None = None, dtype: torch.dtype | None = None):
self.clip_embeds = self.clip_embeds.to(device=device, dtype=dtype)
self.t5_embeds = self.t5_embeds.to(device=device, dtype=dtype)
return self
@dataclass
class ConditioningFieldData:
conditionings: List[BasicConditioningInfo] | List[SDXLConditioningInfo] | List[FLUXConditioningInfo]
@dataclass
class IPAdapterConditioningInfo:
cond_image_prompt_embeds: torch.Tensor

View File

@@ -3,10 +3,9 @@ Initialization file for invokeai.backend.util
"""
from invokeai.backend.util.logging import InvokeAILogger
from invokeai.backend.util.util import Chdir, directory_size
__all__ = [
"GIG",
"directory_size",
"Chdir",
"InvokeAILogger",

View File

@@ -7,9 +7,6 @@ from pathlib import Path
from PIL import Image
def slugify(value: str, allow_unicode: bool = False) -> str:
"""

View File

@@ -11,6 +11,8 @@ const config: KnipConfig = {
'src/features/nodes/types/v2/**',
// TODO(psyche): maybe we can clean up these utils after canvas v2 release
'src/features/controlLayers/konva/util.ts',
// TODO(psyche): restore HRF functionality?
'src/features/hrf/**',
],
ignoreBinaries: ['only-allow'],
paths: {

View File

@@ -64,6 +64,7 @@
"@roarr/browser-log-writer": "^1.3.0",
"async-mutex": "^0.5.0",
"chakra-react-select": "^4.9.1",
"cmdk": "^1.0.0",
"compare-versions": "^6.1.1",
"dateformat": "^5.0.3",
"fracturedjsonjs": "^4.0.2",
@@ -92,7 +93,6 @@
"react-icons": "^5.2.1",
"react-redux": "9.1.2",
"react-resizable-panels": "^2.0.23",
"react-select": "5.8.0",
"react-use": "^17.5.1",
"react-virtuoso": "^4.9.0",
"reactflow": "^11.11.4",
@@ -136,6 +136,7 @@
"@vitest/coverage-v8": "^1.5.0",
"@vitest/ui": "^1.5.0",
"concurrently": "^8.2.2",
"csstype": "^3.1.3",
"dpdm": "^3.14.0",
"eslint": "^8.57.0",
"eslint-plugin-i18next": "^6.0.9",

View File

@@ -41,6 +41,9 @@ dependencies:
chakra-react-select:
specifier: ^4.9.1
version: 4.9.1(@chakra-ui/form-control@2.2.0)(@chakra-ui/icon@3.2.0)(@chakra-ui/layout@2.3.1)(@chakra-ui/media-query@3.3.0)(@chakra-ui/menu@2.2.1)(@chakra-ui/spinner@2.1.0)(@chakra-ui/system@2.6.2)(@emotion/react@11.13.3)(@types/react@18.3.3)(react-dom@18.3.1)(react@18.3.1)
cmdk:
specifier: ^1.0.0
version: 1.0.0(@types/react-dom@18.3.0)(@types/react@18.3.3)(react-dom@18.3.1)(react@18.3.1)
compare-versions:
specifier: ^6.1.1
version: 6.1.1
@@ -125,9 +128,6 @@ dependencies:
react-resizable-panels:
specifier: ^2.0.23
version: 2.0.23(react-dom@18.3.1)(react@18.3.1)
react-use:
specifier: ^17.5.1
version: 17.5.1(react-dom@18.3.1)(react@18.3.1)
@@ -238,6 +238,9 @@ devDependencies:
concurrently:
specifier: ^8.2.2
version: 8.2.2
csstype:
specifier: ^3.1.3
version: 3.1.3
dpdm:
specifier: ^3.14.0
version: 3.14.0
@@ -2053,7 +2056,7 @@ packages:
dependencies:
'@chakra-ui/dom-utils': 2.1.0
react: 18.3.1
react-focus-lock: 2.13.0(@types/react@18.3.3)(react@18.3.1)
transitivePeerDependencies:
- '@types/react'
dev: false
@@ -3784,6 +3787,288 @@ packages:
resolution: {integrity: sha512-P1st0aksCrn9sGZhp8GMYwBnQsbvAWsZAX44oXNNvLHGqAOcoVxmjZiohstwQ7SqKnbR47akdNi+uleWD8+g6A==}
dev: false
/@radix-ui/primitive@1.0.1:
resolution: {integrity: sha512-yQ8oGX2GVsEYMWGxcovu1uGWPCxV5BFfeeYxqPmuAzUyLT9qmaMXSAhXpb0WrspIeqYzdJpkh2vHModJPgRIaw==}
dependencies:
'@babel/runtime': 7.25.4
dev: false
/@radix-ui/react-compose-refs@1.0.1(@types/react@18.3.3)(react@18.3.1):
resolution: {integrity: sha512-fDSBgd44FKHa1FRMU59qBMPFcl2PZE+2nmqunj+BWFyYYjnhIDWL2ItDs3rrbJDQOtzt5nIebLCQc4QRfz6LJw==}
peerDependencies:
'@types/react': '*'
react: ^16.8 || ^17.0 || ^18.0
peerDependenciesMeta:
'@types/react':
optional: true
dependencies:
'@babel/runtime': 7.25.4
'@types/react': 18.3.3
react: 18.3.1
dev: false
/@radix-ui/react-context@1.0.1(@types/react@18.3.3)(react@18.3.1):
resolution: {integrity: sha512-ebbrdFoYTcuZ0v4wG5tedGnp9tzcV8awzsxYph7gXUyvnNLuTIcCk1q17JEbnVhXAKG9oX3KtchwiMIAYp9NLg==}
peerDependencies:
'@types/react': '*'
react: ^16.8 || ^17.0 || ^18.0
peerDependenciesMeta:
'@types/react':
optional: true
dependencies:
'@babel/runtime': 7.25.4
'@types/react': 18.3.3
react: 18.3.1
dev: false
/@radix-ui/react-dialog@1.0.5(@types/react-dom@18.3.0)(@types/react@18.3.3)(react-dom@18.3.1)(react@18.3.1):
resolution: {integrity: sha512-GjWJX/AUpB703eEBanuBnIWdIXg6NvJFCXcNlSZk4xdszCdhrJgBoUd1cGk67vFO+WdA2pfI/plOpqz/5GUP6Q==}
peerDependencies:
'@types/react': '*'
'@types/react-dom': '*'
react: ^16.8 || ^17.0 || ^18.0
react-dom: ^16.8 || ^17.0 || ^18.0
peerDependenciesMeta:
'@types/react':
optional: true
'@types/react-dom':
optional: true
dependencies:
'@babel/runtime': 7.25.4
'@radix-ui/primitive': 1.0.1
'@radix-ui/react-compose-refs': 1.0.1(@types/react@18.3.3)(react@18.3.1)
'@radix-ui/react-context': 1.0.1(@types/react@18.3.3)(react@18.3.1)
'@radix-ui/react-dismissable-layer': 1.0.5(@types/react-dom@18.3.0)(@types/react@18.3.3)(react-dom@18.3.1)(react@18.3.1)
'@radix-ui/react-focus-guards': 1.0.1(@types/react@18.3.3)(react@18.3.1)
'@radix-ui/react-focus-scope': 1.0.4(@types/react-dom@18.3.0)(@types/react@18.3.3)(react-dom@18.3.1)(react@18.3.1)
'@radix-ui/react-id': 1.0.1(@types/react@18.3.3)(react@18.3.1)
'@radix-ui/react-portal': 1.0.4(@types/react-dom@18.3.0)(@types/react@18.3.3)(react-dom@18.3.1)(react@18.3.1)
'@radix-ui/react-presence': 1.0.1(@types/react-dom@18.3.0)(@types/react@18.3.3)(react-dom@18.3.1)(react@18.3.1)
'@radix-ui/react-primitive': 1.0.3(@types/react-dom@18.3.0)(@types/react@18.3.3)(react-dom@18.3.1)(react@18.3.1)
'@radix-ui/react-slot': 1.0.2(@types/react@18.3.3)(react@18.3.1)
'@radix-ui/react-use-controllable-state': 1.0.1(@types/react@18.3.3)(react@18.3.1)
'@types/react': 18.3.3
'@types/react-dom': 18.3.0
aria-hidden: 1.2.4
react: 18.3.1
react-dom: 18.3.1(react@18.3.1)
react-remove-scroll: 2.5.5(@types/react@18.3.3)(react@18.3.1)
dev: false
/@radix-ui/react-dismissable-layer@1.0.5(@types/react-dom@18.3.0)(@types/react@18.3.3)(react-dom@18.3.1)(react@18.3.1):
resolution: {integrity: sha512-aJeDjQhywg9LBu2t/At58hCvr7pEm0o2Ke1x33B+MhjNmmZ17sy4KImo0KPLgsnc/zN7GPdce8Cnn0SWvwZO7g==}
peerDependencies:
'@types/react': '*'
'@types/react-dom': '*'
react: ^16.8 || ^17.0 || ^18.0
react-dom: ^16.8 || ^17.0 || ^18.0
peerDependenciesMeta:
'@types/react':
optional: true
'@types/react-dom':
optional: true
dependencies:
'@babel/runtime': 7.25.4
'@radix-ui/primitive': 1.0.1
'@radix-ui/react-compose-refs': 1.0.1(@types/react@18.3.3)(react@18.3.1)
'@radix-ui/react-primitive': 1.0.3(@types/react-dom@18.3.0)(@types/react@18.3.3)(react-dom@18.3.1)(react@18.3.1)
'@radix-ui/react-use-callback-ref': 1.0.1(@types/react@18.3.3)(react@18.3.1)
'@radix-ui/react-use-escape-keydown': 1.0.3(@types/react@18.3.3)(react@18.3.1)
'@types/react': 18.3.3
'@types/react-dom': 18.3.0
react: 18.3.1
react-dom: 18.3.1(react@18.3.1)
dev: false
/@radix-ui/react-focus-guards@1.0.1(@types/react@18.3.3)(react@18.3.1):
resolution: {integrity: sha512-Rect2dWbQ8waGzhMavsIbmSVCgYxkXLxxR3ZvCX79JOglzdEy4JXMb98lq4hPxUbLr77nP0UOGf4rcMU+s1pUA==}
peerDependencies:
'@types/react': '*'
react: ^16.8 || ^17.0 || ^18.0
peerDependenciesMeta:
'@types/react':
optional: true
dependencies:
'@babel/runtime': 7.25.4
'@types/react': 18.3.3
react: 18.3.1
dev: false
/@radix-ui/react-focus-scope@1.0.4(@types/react-dom@18.3.0)(@types/react@18.3.3)(react-dom@18.3.1)(react@18.3.1):
resolution: {integrity: sha512-sL04Mgvf+FmyvZeYfNu1EPAaaxD+aw7cYeIB9L9Fvq8+urhltTRaEo5ysKOpHuKPclsZcSUMKlN05x4u+CINpA==}
peerDependencies:
'@types/react': '*'
'@types/react-dom': '*'
react: ^16.8 || ^17.0 || ^18.0
react-dom: ^16.8 || ^17.0 || ^18.0
peerDependenciesMeta:
'@types/react':
optional: true
'@types/react-dom':
optional: true
dependencies:
'@babel/runtime': 7.25.4
'@radix-ui/react-compose-refs': 1.0.1(@types/react@18.3.3)(react@18.3.1)
'@radix-ui/react-primitive': 1.0.3(@types/react-dom@18.3.0)(@types/react@18.3.3)(react-dom@18.3.1)(react@18.3.1)
'@radix-ui/react-use-callback-ref': 1.0.1(@types/react@18.3.3)(react@18.3.1)
'@types/react': 18.3.3
'@types/react-dom': 18.3.0
react: 18.3.1
react-dom: 18.3.1(react@18.3.1)
dev: false
/@radix-ui/react-id@1.0.1(@types/react@18.3.3)(react@18.3.1):
resolution: {integrity: sha512-tI7sT/kqYp8p96yGWY1OAnLHrqDgzHefRBKQ2YAkBS5ja7QLcZ9Z/uY7bEjPUatf8RomoXM8/1sMj1IJaE5UzQ==}
peerDependencies:
'@types/react': '*'
react: ^16.8 || ^17.0 || ^18.0
peerDependenciesMeta:
'@types/react':
optional: true
dependencies:
'@babel/runtime': 7.25.4
'@radix-ui/react-use-layout-effect': 1.0.1(@types/react@18.3.3)(react@18.3.1)
'@types/react': 18.3.3
react: 18.3.1
dev: false
/@radix-ui/react-portal@1.0.4(@types/react-dom@18.3.0)(@types/react@18.3.3)(react-dom@18.3.1)(react@18.3.1):
resolution: {integrity: sha512-Qki+C/EuGUVCQTOTD5vzJzJuMUlewbzuKyUy+/iHM2uwGiru9gZeBJtHAPKAEkB5KWGi9mP/CHKcY0wt1aW45Q==}
peerDependencies:
'@types/react': '*'
'@types/react-dom': '*'
react: ^16.8 || ^17.0 || ^18.0
react-dom: ^16.8 || ^17.0 || ^18.0
peerDependenciesMeta:
'@types/react':
optional: true
'@types/react-dom':
optional: true
dependencies:
'@babel/runtime': 7.25.4
'@radix-ui/react-primitive': 1.0.3(@types/react-dom@18.3.0)(@types/react@18.3.3)(react-dom@18.3.1)(react@18.3.1)
'@types/react': 18.3.3
'@types/react-dom': 18.3.0
react: 18.3.1
react-dom: 18.3.1(react@18.3.1)
dev: false
/@radix-ui/react-presence@1.0.1(@types/react-dom@18.3.0)(@types/react@18.3.3)(react-dom@18.3.1)(react@18.3.1):
resolution: {integrity: sha512-UXLW4UAbIY5ZjcvzjfRFo5gxva8QirC9hF7wRE4U5gz+TP0DbRk+//qyuAQ1McDxBt1xNMBTaciFGvEmJvAZCg==}
peerDependencies:
'@types/react': '*'
'@types/react-dom': '*'
react: ^16.8 || ^17.0 || ^18.0
react-dom: ^16.8 || ^17.0 || ^18.0
peerDependenciesMeta:
'@types/react':
optional: true
'@types/react-dom':
optional: true
dependencies:
'@babel/runtime': 7.25.4
'@radix-ui/react-compose-refs': 1.0.1(@types/react@18.3.3)(react@18.3.1)
'@radix-ui/react-use-layout-effect': 1.0.1(@types/react@18.3.3)(react@18.3.1)
'@types/react': 18.3.3
'@types/react-dom': 18.3.0
react: 18.3.1
react-dom: 18.3.1(react@18.3.1)
dev: false
/@radix-ui/react-primitive@1.0.3(@types/react-dom@18.3.0)(@types/react@18.3.3)(react-dom@18.3.1)(react@18.3.1):
resolution: {integrity: sha512-yi58uVyoAcK/Nq1inRY56ZSjKypBNKTa/1mcL8qdl6oJeEaDbOldlzrGn7P6Q3Id5d+SYNGc5AJgc4vGhjs5+g==}
peerDependencies:
'@types/react': '*'
'@types/react-dom': '*'
react: ^16.8 || ^17.0 || ^18.0
react-dom: ^16.8 || ^17.0 || ^18.0
peerDependenciesMeta:
'@types/react':
optional: true
'@types/react-dom':
optional: true
dependencies:
'@babel/runtime': 7.25.4
'@radix-ui/react-slot': 1.0.2(@types/react@18.3.3)(react@18.3.1)
'@types/react': 18.3.3
'@types/react-dom': 18.3.0
react: 18.3.1
react-dom: 18.3.1(react@18.3.1)
dev: false
/@radix-ui/react-slot@1.0.2(@types/react@18.3.3)(react@18.3.1):
resolution: {integrity: sha512-YeTpuq4deV+6DusvVUW4ivBgnkHwECUu0BiN43L5UCDFgdhsRUWAghhTF5MbvNTPzmiFOx90asDSUjWuCNapwg==}
peerDependencies:
'@types/react': '*'
react: ^16.8 || ^17.0 || ^18.0
peerDependenciesMeta:
'@types/react':
optional: true
dependencies:
'@babel/runtime': 7.25.4
'@radix-ui/react-compose-refs': 1.0.1(@types/react@18.3.3)(react@18.3.1)
'@types/react': 18.3.3
react: 18.3.1
dev: false
/@radix-ui/react-use-callback-ref@1.0.1(@types/react@18.3.3)(react@18.3.1):
resolution: {integrity: sha512-D94LjX4Sp0xJFVaoQOd3OO9k7tpBYNOXdVhkltUbGv2Qb9OXdrg/CpsjlZv7ia14Sylv398LswWBVVu5nqKzAQ==}
peerDependencies:
'@types/react': '*'
react: ^16.8 || ^17.0 || ^18.0
peerDependenciesMeta:
'@types/react':
optional: true
dependencies:
'@babel/runtime': 7.25.4
'@types/react': 18.3.3
react: 18.3.1
dev: false
/@radix-ui/react-use-controllable-state@1.0.1(@types/react@18.3.3)(react@18.3.1):
resolution: {integrity: sha512-Svl5GY5FQeN758fWKrjM6Qb7asvXeiZltlT4U2gVfl8Gx5UAv2sMR0LWo8yhsIZh2oQ0eFdZ59aoOOMV7b47VA==}
peerDependencies:
'@types/react': '*'
react: ^16.8 || ^17.0 || ^18.0
peerDependenciesMeta:
'@types/react':
optional: true
dependencies:
'@babel/runtime': 7.25.4
'@radix-ui/react-use-callback-ref': 1.0.1(@types/react@18.3.3)(react@18.3.1)
'@types/react': 18.3.3
react: 18.3.1
dev: false
/@radix-ui/react-use-escape-keydown@1.0.3(@types/react@18.3.3)(react@18.3.1):
resolution: {integrity: sha512-vyL82j40hcFicA+M4Ex7hVkB9vHgSse1ZWomAqV2Je3RleKGO5iM8KMOEtfoSB0PnIelMd2lATjTGMYqN5ylTg==}
peerDependencies:
'@types/react': '*'
react: ^16.8 || ^17.0 || ^18.0
peerDependenciesMeta:
'@types/react':
optional: true
dependencies:
'@babel/runtime': 7.25.4
'@radix-ui/react-use-callback-ref': 1.0.1(@types/react@18.3.3)(react@18.3.1)
'@types/react': 18.3.3
react: 18.3.1
dev: false
/@radix-ui/react-use-layout-effect@1.0.1(@types/react@18.3.3)(react@18.3.1):
resolution: {integrity: sha512-v/5RegiJWYdoCvMnITBkNNx6bCj20fiaJnWtRkU18yITptraXjffz5Qbn05uOiQnOvi+dbkznkoaMltz1GnszQ==}
peerDependencies:
'@types/react': '*'
react: ^16.8 || ^17.0 || ^18.0
peerDependenciesMeta:
'@types/react':
optional: true
dependencies:
'@babel/runtime': 7.25.4
'@types/react': 18.3.3
react: 18.3.1
dev: false
/@reactflow/background@11.3.14(@types/react@18.3.3)(react-dom@18.3.1)(react@18.3.1):
resolution: {integrity: sha512-Gewd7blEVT5Lh6jqrvOgd4G6Qk17eGKQfsDXgyRSqM+CTwDqRldG2LsWN4sNeno6sbqVIC2fZ+rAUBFA9ZEUDA==}
peerDependencies:
@@ -5210,7 +5495,6 @@ packages:
resolution: {integrity: sha512-EhwApuTmMBmXuFOikhQLIBUn6uFg81SwLMOAUgodJF14SOBOCMdU04gDoYi0WOJJHD144TL32z4yDqCW3dnkQg==}
dependencies:
'@types/react': 18.3.3
dev: true
/@types/react-transition-group@4.4.10:
resolution: {integrity: sha512-hT/+s0VQs2ojCX823m60m5f0sL5idt9SO6Tj6Dg+rdphGPIeJbJ6CxvBYkgkGKrYeDjvIpKTR38UzmtHJOGW3Q==}
@@ -6268,6 +6552,21 @@ packages:
requiresBuild: true
dev: true
/cmdk@1.0.0(@types/react-dom@18.3.0)(@types/react@18.3.3)(react-dom@18.3.1)(react@18.3.1):
resolution: {integrity: sha512-gDzVf0a09TvoJ5jnuPvygTB77+XdOSwEmJ88L6XPFPlv7T3RxbP9jgenfylrAMD0+Le1aO0nVjQUzl2g+vjz5Q==}
peerDependencies:
react: ^18.0.0
react-dom: ^18.0.0
dependencies:
'@radix-ui/react-dialog': 1.0.5(@types/react-dom@18.3.0)(@types/react@18.3.3)(react-dom@18.3.1)(react@18.3.1)
'@radix-ui/react-primitive': 1.0.3(@types/react-dom@18.3.0)(@types/react@18.3.3)(react-dom@18.3.1)(react@18.3.1)
react: 18.3.1
react-dom: 18.3.1(react@18.3.1)
transitivePeerDependencies:
- '@types/react'
- '@types/react-dom'
dev: false
/color-convert@1.9.3:
resolution: {integrity: sha512-QfAUtd+vFdAtFQcC8CCyYt1fYWxSqAiK2cSD6zDB8N3cpsEBAvRxp9zOGg6G/SHHJYAT88/az/IuDGALsNVbGg==}
dependencies:
@@ -9712,8 +10011,8 @@ packages:
resolution: {integrity: sha512-nsO+KSNgo1SbJqJEYRE9ERzo7YtYbou/OqjSQKxV7jcKox7+usiUVZOAC+XnDOABXggQTno0Y1CpVnuWEc1boQ==}
dev: false
/react-focus-lock@2.12.1(@types/react@18.3.3)(react@18.3.1):
resolution: {integrity: sha512-lfp8Dve4yJagkHiFrC1bGtib3mF2ktqwPJw4/WGcgPW+pJ/AVQA5X2vI7xgp13FcxFEpYBBHpXai/N2DBNC0Jw==}
/react-focus-lock@2.13.0(@types/react@18.3.3)(react@18.3.1):
resolution: {integrity: sha512-w7aIcTwZwNzUp2fYQDMICy+khFwVmKmOrLF8kNsPS+dz4Oi/oxoVJ2wCMVvX6rWGriM/+mYaTyp1MRmkcs2amw==}
peerDependencies:
'@types/react': ^16.8.0 || ^17.0.0 || ^18.0.0
react: ^16.8.0 || ^17.0.0 || ^18.0.0
@@ -9867,6 +10166,25 @@ packages:
use-sidecar: 1.1.2(@types/react@18.3.3)(react@18.3.1)
dev: false
/react-remove-scroll@2.5.5(@types/react@18.3.3)(react@18.3.1):
resolution: {integrity: sha512-ImKhrzJJsyXJfBZ4bzu8Bwpka14c/fQt0k+cyFp/PBhTfyDnU5hjOtM4AG/0AMyy8oKzOTR0lDgJIM7pYXI0kw==}
engines: {node: '>=10'}
peerDependencies:
'@types/react': ^16.8.0 || ^17.0.0 || ^18.0.0
react: ^16.8.0 || ^17.0.0 || ^18.0.0
peerDependenciesMeta:
'@types/react':
optional: true
dependencies:
'@types/react': 18.3.3
react: 18.3.1
react-remove-scroll-bar: 2.3.6(@types/react@18.3.3)(react@18.3.1)
react-style-singleton: 2.2.1(@types/react@18.3.3)(react@18.3.1)
tslib: 2.7.0
use-callback-ref: 1.3.2(@types/react@18.3.3)(react@18.3.1)
use-sidecar: 1.1.2(@types/react@18.3.3)(react@18.3.1)
dev: false
/react-resizable-panels@2.0.23(react-dom@18.3.1)(react@18.3.1):
resolution: {integrity: sha512-8ZKTwTU11t/FYwiwhMdtZYYyFxic5U5ysRu2YwfkAgDbUJXFvnWSJqhnzkSlW+mnDoNAzDCrJhdOSXBPA76wug==}
peerDependencies:


@@ -127,7 +127,14 @@
"bulkDownloadRequestedDesc": "Dein Download wird vorbereitet. Dies kann ein paar Momente dauern.",
"bulkDownloadRequestFailed": "Problem beim Download vorbereiten",
"bulkDownloadFailed": "Download fehlgeschlagen",
"alwaysShowImageSizeBadge": "Zeige immer Bilder Größe Abzeichen"
"alwaysShowImageSizeBadge": "Zeige immer Bilder Größe Abzeichen",
"selectForCompare": "Zum Vergleichen auswählen",
"compareImage": "Bilder vergleichen",
"exitSearch": "Suche beenden",
"newestFirst": "Neueste zuerst",
"oldestFirst": "Älteste zuerst",
"openInViewer": "Im Viewer öffnen",
"swapImages": "Bilder tauschen"
},
"hotkeys": {
"keyboardShortcuts": "Tastenkürzel",
@@ -631,7 +638,8 @@
"archived": "Archiviert",
"noBoards": "Kein {boardType}} Ordner",
"hideBoards": "Ordner verstecken",
"viewBoards": "Ordner ansehen"
"viewBoards": "Ordner ansehen",
"deletedPrivateBoardsCannotbeRestored": "Gelöschte Boards können nicht wiederhergestellt werden. Wenn Sie „Nur Board löschen“ wählen, werden die Bilder in einen privaten, nicht kategorisierten Status für den Ersteller des Bildes versetzt."
},
"controlnet": {
"showAdvanced": "Zeige Erweitert",
@@ -781,7 +789,9 @@
"batchFieldValues": "Stapelverarbeitungswerte",
"batchQueued": "Stapelverarbeitung eingereiht",
"graphQueued": "Graph eingereiht",
"graphFailedToQueue": "Fehler beim Einreihen des Graphen"
"graphFailedToQueue": "Fehler beim Einreihen des Graphen",
"generations_one": "Generation",
"generations_other": "Generationen"
},
"metadata": {
"negativePrompt": "Negativ Beschreibung",
@@ -1146,5 +1156,10 @@
"noMatchingTriggers": "Keine passenden Trigger",
"addPromptTrigger": "Prompt-Trigger hinzufügen",
"compatibleEmbeddings": "Kompatible Einbettungen"
},
"ui": {
"tabs": {
"queue": "Warteschlange"
}
}
}


@@ -134,6 +134,7 @@
"nodes": "Workflows",
"notInstalled": "Not $t(common.installed)",
"openInNewTab": "Open in New Tab",
"openInViewer": "Open in Viewer",
"orderBy": "Order By",
"outpaint": "outpaint",
"outputs": "Outputs",
@@ -164,10 +165,10 @@
"alpha": "Alpha",
"selected": "Selected",
"tab": "Tab",
"viewing": "Viewing",
"viewingDesc": "Review images in a large gallery view",
"editing": "Editing",
"editingDesc": "Edit on the Control Layers canvas",
"view": "View",
"viewDesc": "Review images in a large gallery view",
"edit": "Edit",
"editDesc": "Edit on the Canvas",
"comparing": "Comparing",
"comparingDesc": "Comparing two images",
"enabled": "Enabled",
@@ -328,9 +329,13 @@
"completedIn": "Completed in",
"batch": "Batch",
"origin": "Origin",
"originCanvas": "Canvas",
"originWorkflows": "Workflows",
"originOther": "Other",
"destination": "Destination",
"upscaling": "Upscaling",
"canvas": "Canvas",
"generation": "Generation",
"workflows": "Workflows",
"other": "Other",
"gallery": "Gallery",
"batchFieldValues": "Batch Field Values",
"item": "Item",
"session": "Session",
@@ -702,6 +707,8 @@
"availableModels": "Available Models",
"baseModel": "Base Model",
"cancel": "Cancel",
"clipEmbed": "CLIP Embed",
"clipVision": "CLIP Vision",
"config": "Config",
"convert": "Convert",
"convertingModelBegin": "Converting Model. Please wait.",
@@ -789,13 +796,16 @@
"settings": "Settings",
"simpleModelPlaceholder": "URL or path to a local file or diffusers folder",
"source": "Source",
"spandrelImageToImage": "Image to Image (Spandrel)",
"starterModels": "Starter Models",
"starterModelsInModelManager": "Starter Models can be found in Model Manager",
"syncModels": "Sync Models",
"textualInversions": "Textual Inversions",
"triggerPhrases": "Trigger Phrases",
"loraTriggerPhrases": "LoRA Trigger Phrases",
"mainModelTriggerPhrases": "Main Model Trigger Phrases",
"typePhraseHere": "Type phrase here",
"t5Encoder": "T5 Encoder",
"upcastAttention": "Upcast Attention",
"uploadImage": "Upload Image",
"urlOrLocalPath": "URL or Local Path",
@@ -1005,6 +1015,8 @@
"noModelForControlAdapter": "Control Adapter #{{number}} has no model selected.",
"incompatibleBaseModelForControlAdapter": "Control Adapter #{{number}} model is incompatible with main model.",
"noModelSelected": "No model selected",
"canvasManagerNotLoaded": "Canvas Manager not loaded",
"canvasBusy": "Canvas is busy",
"noPrompts": "No prompts generated",
"noNodesInGraph": "No nodes in graph",
"systemDisconnected": "System disconnected",
@@ -1040,8 +1052,7 @@
"seamlessYAxis": "Seamless Tiling Y Axis",
"seed": "Seed",
"imageActions": "Image Actions",
"sendToImg2Img": "Send to Image to Image",
"sendToUnifiedCanvas": "Send To Unified Canvas",
"sendToCanvas": "Send To Canvas",
"sendToUpscale": "Send To Upscale",
"showOptionsPanel": "Show Side Panel (O or T)",
"shuffle": "Shuffle Seed",
@@ -1182,8 +1193,8 @@
"problemSavingMaskDesc": "Unable to export mask",
"prunedQueue": "Pruned Queue",
"resetInitialImage": "Reset Initial Image",
"sentToImageToImage": "Sent To Image To Image",
"sentToUnifiedCanvas": "Sent to Unified Canvas",
"sentToCanvas": "Sent to Canvas",
"sentToUpscale": "Sent to Upscale",
"serverError": "Server Error",
"sessionRef": "Session: {{sessionId}}",
"setAsCanvasInitialImage": "Set as canvas initial image",
@@ -1645,6 +1656,17 @@
"storeNotInitialized": "Store is not initialized"
},
"controlLayers": {
"bookmark": "Bookmark for Quick Switch",
"fitBboxToLayers": "Fit Bbox To Layers",
"removeBookmark": "Remove Bookmark",
"saveCanvasToGallery": "Save Canvas To Gallery",
"saveBboxToGallery": "Save Bbox To Gallery",
"savedToGalleryOk": "Saved to Gallery",
"savedToGalleryError": "Error saving to gallery",
"mergeVisible": "Merge Visible",
"mergeVisibleOk": "Merged visible layers",
"mergeVisibleError": "Error merging visible layers",
"clearHistory": "Clear History",
"generateMode": "Generate",
"generateModeDesc": "Create individual images. Generated images are added directly to the gallery.",
"composeMode": "Compose",
@@ -1652,10 +1674,10 @@
"autoSave": "Auto-save to Gallery",
"resetCanvas": "Reset Canvas",
"resetAll": "Reset All",
"deleteAll": "Delete All",
"clearCaches": "Clear Caches",
"recalculateRects": "Recalculate Rects",
"clipToBbox": "Clip Strokes to Bbox",
"compositeMaskedRegions": "Composite Masked Regions",
"addLayer": "Add Layer",
"duplicate": "Duplicate",
"moveToFront": "Move to Front",
@@ -1674,35 +1696,47 @@
"deletePrompt": "Delete Prompt",
"resetRegion": "Reset Region",
"debugLayers": "Debug Layers",
"showHUD": "Show HUD",
"rectangle": "Rectangle",
"maskPreviewColor": "Mask Preview Color",
"maskFill": "Mask Fill",
"addPositivePrompt": "Add $t(common.positivePrompt)",
"addNegativePrompt": "Add $t(common.negativePrompt)",
"addIPAdapter": "Add $t(common.ipAdapter)",
"addRasterLayer": "Add $t(controlLayers.rasterLayer)",
"addControlLayer": "Add $t(controlLayers.controlLayer)",
"addInpaintMask": "Add $t(controlLayers.inpaintMask)",
"addRegionalGuidance": "Add $t(controlLayers.regionalGuidance)",
"regionalGuidanceLayer": "$t(controlLayers.regionalGuidance) $t(unifiedCanvas.layer)",
"raster": "Raster",
"rasterLayer_one": "Raster Layer",
"controlLayer_one": "Control Layer",
"inpaintMask_one": "Inpaint Mask",
"regionalGuidance_one": "Regional Guidance",
"ipAdapter_one": "IP Adapter",
"rasterLayer_other": "Raster Layers",
"controlLayer_other": "Control Layers",
"inpaintMask_other": "Inpaint Masks",
"regionalGuidance_other": "Regional Guidance",
"ipAdapter_other": "IP Adapters",
"rasterLayer": "Raster Layer",
"controlLayer": "Control Layer",
"inpaintMask": "Inpaint Mask",
"regionalGuidance": "Regional Guidance",
"ipAdapter": "IP Adapter",
"sendToGallery": "Send To Gallery",
"sendToGalleryDesc": "Generations will be sent to the gallery.",
"sendToCanvas": "Send To Canvas",
"sendToCanvasDesc": "Generations will be staged onto the canvas.",
"rasterLayer_withCount_one": "$t(controlLayers.rasterLayer)",
"controlLayer_withCount_one": "$t(controlLayers.controlLayer)",
"inpaintMask_withCount_one": "$t(controlLayers.inpaintMask)",
"regionalGuidance_withCount_one": "$t(controlLayers.regionalGuidance)",
"ipAdapter_withCount_one": "$t(controlLayers.ipAdapter)",
"rasterLayer_withCount_other": "Raster Layers",
"controlLayer_withCount_other": "Control Layers",
"inpaintMask_withCount_other": "Inpaint Masks",
"regionalGuidance_withCount_other": "Regional Guidance",
"ipAdapter_withCount_other": "IP Adapters",
"opacity": "Opacity",
"regionalGuidance_withCount_hidden": "Regional Guidance ({{count}} hidden)",
"controlAdapters_withCount_hidden": "Control Adapters ({{count}} hidden)",
"controlLayers_withCount_hidden": "Control Layers ({{count}} hidden)",
"rasterLayers_withCount_hidden": "Raster Layers ({{count}} hidden)",
"ipAdapters_withCount_hidden": "IP Adapters ({{count}} hidden)",
"globalIPAdapters_withCount_hidden": "Global IP Adapters ({{count}} hidden)",
"inpaintMasks_withCount_hidden": "Inpaint Masks ({{count}} hidden)",
"regionalGuidance_withCount_visible": "Regional Guidance ({{count}})",
"controlAdapters_withCount_visible": "Control Adapters ({{count}})",
"controlLayers_withCount_visible": "Control Layers ({{count}})",
"rasterLayers_withCount_visible": "Raster Layers ({{count}})",
"ipAdapters_withCount_visible": "IP Adapters ({{count}})",
"globalIPAdapters_withCount_visible": "Global IP Adapters ({{count}})",
"inpaintMasks_withCount_visible": "Inpaint Masks ({{count}})",
"globalControlAdapter": "Global $t(controlnet.controlAdapter_one)",
"globalControlAdapterLayer": "Global $t(controlnet.controlAdapter_one) $t(unifiedCanvas.layer)",
@@ -1715,8 +1749,8 @@
"clearProcessor": "Clear Processor",
"resetProcessor": "Reset Processor to Defaults",
"noLayersAdded": "No Layers Added",
"layers_one": "Layer",
"layers_other": "Layers",
"layer_one": "Layer",
"layer_other": "Layers",
"objects_zero": "empty",
"objects_one": "{{count}} object",
"objects_other": "{{count}} objects",
@@ -1729,7 +1763,14 @@
"showingType": "Showing {{type}}",
"dynamicGrid": "Dynamic Grid",
"logDebugInfo": "Log Debug Info",
"locked": "Locked",
"unlocked": "Unlocked",
"deleteSelected": "Delete Selected",
"deleteAll": "Delete All",
"flipHorizontal": "Flip Horizontal",
"flipVertical": "Flip Vertical",
"fill": {
"fillColor": "Fill Color",
"fillStyle": "Fill Style",
"solid": "Solid",
"grid": "Grid",
@@ -1745,7 +1786,6 @@
"bbox": "Bbox",
"move": "Move",
"view": "View",
"transform": "Transform",
"colorPicker": "Color Picker"
},
"filter": {
@@ -1755,6 +1795,13 @@
"preview": "Preview",
"apply": "Apply",
"cancel": "Cancel"
},
"transform": {
"transform": "Transform",
"fitToBbox": "Fit to Bbox",
"reset": "Reset",
"apply": "Apply",
"cancel": "Cancel"
}
},
"upscaling": {


@@ -86,15 +86,15 @@
"loadMore": "Cargar más",
"noImagesInGallery": "No hay imágenes para mostrar",
"deleteImage_one": "Eliminar Imagen",
"deleteImage_many": "",
"deleteImage_other": "",
"deleteImage_many": "Eliminar {{count}} Imágenes",
"deleteImage_other": "Eliminar {{count}} Imágenes",
"deleteImagePermanent": "Las imágenes eliminadas no se pueden restaurar.",
"assets": "Activos",
"autoAssignBoardOnClick": "Asignación automática de tableros al hacer clic"
},
"hotkeys": {
"keyboardShortcuts": "Atajos de teclado",
"appHotkeys": "Atajos de applicación",
"appHotkeys": "Atajos de aplicación",
"generalHotkeys": "Atajos generales",
"galleryHotkeys": "Atajos de galería",
"unifiedCanvasHotkeys": "Atajos de lienzo unificado",
@@ -535,7 +535,7 @@
"bottomMessage": "Al eliminar este panel y las imágenes que contiene, se restablecerán las funciones que los estén utilizando actualmente.",
"deleteBoardAndImages": "Borrar el panel y las imágenes",
"loading": "Cargando...",
"deletedBoardsCannotbeRestored": "Los paneles eliminados no se pueden restaurar",
"deletedBoardsCannotbeRestored": "Los paneles eliminados no se pueden restaurar. Al Seleccionar 'Borrar Solo el Panel' transferirá las imágenes a un estado sin categorizar.",
"move": "Mover",
"menuItemAutoAdd": "Agregar automáticamente a este panel",
"searchBoard": "Buscando paneles…",
@@ -549,7 +549,13 @@
"imagesWithCount_other": "{{count}} imágenes",
"assetsWithCount_one": "{{count}} activo",
"assetsWithCount_many": "{{count}} activos",
"assetsWithCount_other": "{{count}} activos"
"assetsWithCount_other": "{{count}} activos",
"hideBoards": "Ocultar Paneles",
"addPrivateBoard": "Agregar un tablero privado",
"addSharedBoard": "Agregar Panel Compartido",
"boards": "Paneles",
"archiveBoard": "Archivar Panel",
"archived": "Archivado"
},
"accordions": {
"compositing": {


@@ -496,7 +496,9 @@
"main": "Principali",
"noModelsInstalledDesc1": "Installa i modelli con",
"ipAdapters": "Adattatori IP",
"noMatchingModels": "Nessun modello corrispondente"
"noMatchingModels": "Nessun modello corrispondente",
"starterModelsInModelManager": "I modelli iniziali possono essere trovati in Gestione Modelli",
"spandrelImageToImage": "Immagine a immagine (Spandrel)"
},
"parameters": {
"images": "Immagini",
@@ -510,7 +512,7 @@
"perlinNoise": "Rumore Perlin",
"type": "Tipo",
"strength": "Forza",
"upscaling": "Ampliamento",
"upscaling": "Amplia",
"scale": "Scala",
"imageFit": "Adatta l'immagine iniziale alle dimensioni di output",
"scaleBeforeProcessing": "Scala prima dell'elaborazione",
@@ -593,7 +595,7 @@
"globalPositivePromptPlaceholder": "Prompt positivo globale",
"globalNegativePromptPlaceholder": "Prompt negativo globale",
"processImage": "Elabora Immagine",
"sendToUpscale": "Invia a Ampliare",
"sendToUpscale": "Invia a Amplia",
"postProcessing": "Post-elaborazione (Shift + U)"
},
"settings": {
@@ -1420,7 +1422,7 @@
"paramUpscaleMethod": {
"heading": "Metodo di ampliamento",
"paragraphs": [
"Metodo utilizzato per eseguire l'ampliamento dell'immagine per la correzione ad alta risoluzione."
"Metodo utilizzato per ampliare l'immagine per la correzione ad alta risoluzione."
]
},
"patchmatchDownScaleSize": {
@@ -1528,7 +1530,7 @@
},
"upscaleModel": {
"paragraphs": [
"Il modello di ampliamento (Upscale), scala l'immagine alle dimensioni di uscita prima di aggiungere i dettagli. È possibile utilizzare qualsiasi modello di ampliamento supportato, ma alcuni sono specializzati per diversi tipi di immagini, come foto o disegni al tratto."
"Il modello di ampliamento, scala l'immagine alle dimensioni di uscita prima di aggiungere i dettagli. È possibile utilizzare qualsiasi modello di ampliamento supportato, ma alcuni sono specializzati per diversi tipi di immagini, come foto o disegni al tratto."
],
"heading": "Modello di ampliamento"
},
@@ -1720,26 +1722,27 @@
"modelsTab": "$t(ui.tabs.models) $t(common.tab)",
"queue": "Coda",
"queueTab": "$t(ui.tabs.queue) $t(common.tab)",
"upscaling": "Ampliamento",
"upscaling": "Amplia",
"upscalingTab": "$t(ui.tabs.upscaling) $t(common.tab)"
}
},
"upscaling": {
"creativity": "Creatività",
"structure": "Struttura",
"upscaleModel": "Modello di Ampliamento",
"upscaleModel": "Modello di ampliamento",
"scale": "Scala",
"missingModelsWarning": "Visita <LinkComponent>Gestione modelli</LinkComponent> per installare i modelli richiesti:",
"mainModelDesc": "Modello principale (architettura SD1.5 o SDXL)",
"tileControlNetModelDesc": "Modello Tile ControlNet per l'architettura del modello principale scelto",
"upscaleModelDesc": "Modello per l'ampliamento (da immagine a immagine)",
"upscaleModelDesc": "Modello per l'ampliamento (immagine a immagine)",
"missingUpscaleInitialImage": "Immagine iniziale mancante per l'ampliamento",
"missingUpscaleModel": "Modello per lampliamento mancante",
"missingTileControlNetModel": "Nessun modello ControlNet Tile valido installato",
"postProcessingModel": "Modello di post-elaborazione",
"postProcessingMissingModelWarning": "Visita <LinkComponent>Gestione modelli</LinkComponent> per installare un modello di post-elaborazione (da immagine a immagine).",
"exceedsMaxSize": "Le impostazioni di ampliamento superano il limite massimo delle dimensioni",
"exceedsMaxSizeDetails": "Il limite massimo di ampliamento è {{maxUpscaleDimension}}x{{maxUpscaleDimension}} pixel. Prova un'immagine più piccola o diminuisci la scala selezionata."
"exceedsMaxSizeDetails": "Il limite massimo di ampliamento è {{maxUpscaleDimension}}x{{maxUpscaleDimension}} pixel. Prova un'immagine più piccola o diminuisci la scala selezionata.",
"upscale": "Amplia"
},
"upsell": {
"inviteTeammates": "Invita collaboratori",
@@ -1789,6 +1792,7 @@
"positivePromptColumn": "'prompt' o 'positive_prompt'",
"noTemplates": "Nessun modello",
"acceptedColumnsKeys": "Colonne/chiavi accettate:",
"templateActions": "Azioni modello"
"templateActions": "Azioni modello",
"promptTemplateCleared": "Modello di prompt cancellato"
}
}


@@ -501,7 +501,8 @@
"noModelsInstalled": "Нет установленных моделей",
"noModelsInstalledDesc1": "Установите модели с помощью",
"noMatchingModels": "Нет подходящих моделей",
"ipAdapters": "IP адаптеры"
"ipAdapters": "IP адаптеры",
"starterModelsInModelManager": "Стартовые модели можно найти в Менеджере моделей"
},
"parameters": {
"images": "Изображения",
@@ -1758,7 +1759,8 @@
"postProcessingModel": "Модель постобработки",
"tileControlNetModelDesc": "Модель ControlNet для выбранной архитектуры основной модели",
"missingModelsWarning": "Зайдите в <LinkComponent>Менеджер моделей</LinkComponent> чтоб установить необходимые модели:",
"postProcessingMissingModelWarning": "Посетите <LinkComponent>Менеджер моделей</LinkComponent>, чтобы установить модель постобработки (img2img)."
"postProcessingMissingModelWarning": "Посетите <LinkComponent>Менеджер моделей</LinkComponent>, чтобы установить модель постобработки (img2img).",
"upscale": "Увеличить"
},
"stylePresets": {
"noMatchingTemplates": "Нет подходящих шаблонов",
@@ -1804,7 +1806,8 @@
"noTemplates": "Нет шаблонов",
"promptTemplatesDesc2": "Используйте строку-заполнитель <Pre>{{placeholder}}</Pre>, чтобы указать место, куда должен быть включен ваш запрос в шаблоне.",
"searchByName": "Поиск по имени",
"shared": "Общий"
"shared": "Общий",
"promptTemplateCleared": "Шаблон запроса создан"
},
"upsell": {
"inviteTeammates": "Пригласите членов команды",


@@ -154,7 +154,8 @@
"displaySearch": "显示搜索",
"stretchToFit": "拉伸以适应",
"exitCompare": "退出对比",
"compareHelp1": "在点击图库中的图片或使用箭头键切换比较图片时,请按住<Kbd>Alt</Kbd> 键。"
"compareHelp1": "在点击图库中的图片或使用箭头键切换比较图片时,请按住<Kbd>Alt</Kbd> 键。",
"go": "运行"
},
"hotkeys": {
"keyboardShortcuts": "快捷键",
@@ -494,7 +495,9 @@
"huggingFacePlaceholder": "所有者或模型名称",
"huggingFaceRepoID": "HuggingFace仓库ID",
"loraTriggerPhrases": "LoRA 触发词",
"ipAdapters": "IP适配器"
"ipAdapters": "IP适配器",
"spandrelImageToImage": "图生图(Spandrel)",
"starterModelsInModelManager": "您可以在模型管理器中找到初始模型"
},
"parameters": {
"images": "图像",
@@ -695,7 +698,9 @@
"outOfMemoryErrorDesc": "您当前的生成设置已超出系统处理能力.请调整设置后再次尝试.",
"parametersSet": "参数已恢复",
"errorCopied": "错误信息已复制",
"modelImportCanceled": "模型导入已取消"
"modelImportCanceled": "模型导入已取消",
"importFailed": "导入失败",
"importSuccessful": "导入成功"
},
"unifiedCanvas": {
"layer": "图层",
@@ -1705,12 +1710,55 @@
"missingModelsWarning": "请访问<LinkComponent>模型管理器</LinkComponent> 安装所需的模型:",
"mainModelDesc": "主模型SD1.5或SDXL架构",
"exceedsMaxSize": "放大设置超出了最大尺寸限制",
"exceedsMaxSizeDetails": "最大放大限制是 {{maxUpscaleDimension}}x{{maxUpscaleDimension}} 像素.请尝试一个较小的图像或减少您的缩放选择."
"exceedsMaxSizeDetails": "最大放大限制是 {{maxUpscaleDimension}}x{{maxUpscaleDimension}} 像素.请尝试一个较小的图像或减少您的缩放选择.",
"upscale": "放大"
},
"upsell": {
"inviteTeammates": "邀请团队成员",
"professional": "专业",
"professionalUpsell": "可在 Invoke 的专业版中使用.点击此处或访问 invoke.com/pricing 了解更多详情.",
"shareAccess": "共享访问权限"
},
"stylePresets": {
"positivePrompt": "正向提示词",
"preview": "预览",
"deleteImage": "删除图像",
"deleteTemplate": "删除模版",
"deleteTemplate2": "您确定要删除这个模板吗?请注意,删除后无法恢复.",
"importTemplates": "导入提示模板支持CSV或JSON格式",
"insertPlaceholder": "插入一个占位符",
"myTemplates": "我的模版",
"name": "名称",
"type": "类型",
"unableToDeleteTemplate": "无法删除提示模板",
"updatePromptTemplate": "更新提示词模版",
"exportPromptTemplates": "导出我的提示模板为CSV格式",
"exportDownloaded": "导出已下载",
"noMatchingTemplates": "无匹配的模版",
"promptTemplatesDesc1": "提示模板可以帮助您在编写提示时添加预设的文本内容.",
"promptTemplatesDesc3": "如果您没有使用占位符,那么模板的内容将会被添加到您提示的末尾.",
"searchByName": "按名称搜索",
"shared": "已分享",
"sharedTemplates": "已分享的模版",
"templateActions": "模版操作",
"templateDeleted": "提示模版已删除",
"toggleViewMode": "切换显示模式",
"uploadImage": "上传图像",
"active": "激活",
"choosePromptTemplate": "选择提示词模板",
"clearTemplateSelection": "清除模版选择",
"copyTemplate": "拷贝模版",
"createPromptTemplate": "创建提示词模版",
"defaultTemplates": "默认模版",
"editTemplate": "编辑模版",
"exportFailed": "无法生成并下载CSV文件",
"flatten": "将选定的模板内容合并到当前提示中",
"negativePrompt": "反向提示词",
"promptTemplateCleared": "提示模板已清除",
"useForTemplate": "用于提示词模版",
"viewList": "预览模版列表",
"viewModeTooltip": "这是您的提示在当前选定的模板下的预览效果。如需编辑提示,请直接在文本框中点击进行修改.",
"noTemplates": "无模版",
"private": "私密"
}
}


@@ -16,8 +16,11 @@ import { DynamicPromptsModal } from 'features/dynamicPrompts/components/DynamicP
import { useStarterModelsToast } from 'features/modelManagerV2/hooks/useStarterModelsToast';
import { ClearQueueConfirmationsAlertDialog } from 'features/queue/components/ClearQueueConfirmationAlertDialog';
import { StylePresetModal } from 'features/stylePresets/components/StylePresetForm/StylePresetModal';
import { activeStylePresetIdChanged } from 'features/stylePresets/store/stylePresetSlice';
import RefreshAfterResetModal from 'features/system/components/SettingsModal/RefreshAfterResetModal';
import SettingsModal from 'features/system/components/SettingsModal/SettingsModal';
import { configChanged } from 'features/system/store/configSlice';
import { languageSelector } from 'features/system/store/systemSelectors';
import { selectLanguage } from 'features/system/store/systemSelectors';
import { AppContent } from 'features/ui/components/AppContent';
import { setActiveTab } from 'features/ui/store/uiSlice';
import type { TabName } from 'features/ui/store/uiTypes';
@@ -41,11 +44,18 @@ interface Props {
action: 'sendToImg2Img' | 'sendToCanvas' | 'useAllParameters';
};
selectedWorkflowId?: string;
destination?: TabName | undefined;
selectedStylePresetId?: string;
destination?: TabName;
}
const App = ({ config = DEFAULT_CONFIG, selectedImage, selectedWorkflowId, destination }: Props) => {
const language = useAppSelector(languageSelector);
const App = ({
config = DEFAULT_CONFIG,
selectedImage,
selectedWorkflowId,
selectedStylePresetId,
destination,
}: Props) => {
const language = useAppSelector(selectLanguage);
const logger = useLogger('system');
const dispatch = useAppDispatch();
const clearStorage = useClearStorage();
@@ -83,6 +93,12 @@ const App = ({ config = DEFAULT_CONFIG, selectedImage, selectedWorkflowId, desti
}
}, [selectedWorkflowId, getAndLoadWorkflow]);
useEffect(() => {
if (selectedStylePresetId) {
dispatch(activeStylePresetIdChanged(selectedStylePresetId));
}
}, [dispatch, selectedStylePresetId]);
useEffect(() => {
if (destination) {
dispatch(setActiveTab(destination));
@@ -121,6 +137,8 @@ const App = ({ config = DEFAULT_CONFIG, selectedImage, selectedWorkflowId, desti
<StylePresetModal />
<ClearQueueConfirmationsAlertDialog />
<PreselectedImage selectedImage={selectedImage} />
<SettingsModal />
<RefreshAfterResetModal />
</ErrorBoundary>
);
};


@@ -1,5 +1,7 @@
import { Button, Flex, Heading, Image, Link, Text } from '@invoke-ai/ui-library';
import { createSelector } from '@reduxjs/toolkit';
import { useAppSelector } from 'app/store/storeHooks';
import { selectConfigSlice } from 'features/system/store/configSlice';
import { toast } from 'features/toast/toast';
import newGithubIssueUrl from 'new-github-issue-url';
import InvokeLogoYellow from 'public/assets/images/invoke-symbol-ylw-lrg.svg';
@@ -13,9 +15,11 @@ type Props = {
resetErrorBoundary: () => void;
};
const selectIsLocal = createSelector(selectConfigSlice, (config) => config.isLocal);
const AppErrorBoundaryFallback = ({ error, resetErrorBoundary }: Props) => {
const { t } = useTranslation();
const isLocal = useAppSelector((s) => s.config.isLocal);
const isLocal = useAppSelector(selectIsLocal);
const handleCopy = useCallback(() => {
const text = JSON.stringify(serializeError(error), null, 2);


@@ -45,6 +45,7 @@ interface Props extends PropsWithChildren {
action: 'sendToImg2Img' | 'sendToCanvas' | 'useAllParameters';
};
selectedWorkflowId?: string;
selectedStylePresetId?: string;
destination?: TabName;
customStarUi?: CustomStarUi;
socketOptions?: Partial<ManagerOptions & SocketOptions>;
@@ -66,6 +67,7 @@ const InvokeAIUI = ({
queueId,
selectedImage,
selectedWorkflowId,
selectedStylePresetId,
destination,
customStarUi,
socketOptions,
@@ -227,6 +229,7 @@ const InvokeAIUI = ({
config={config}
selectedImage={selectedImage}
selectedWorkflowId={selectedWorkflowId}
selectedStylePresetId={selectedStylePresetId}
destination={destination}
/>
</AppDndContext>
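
The new `selectedStylePresetId` prop threads from `InvokeAIUI` down into `App`, which dispatches `activeStylePresetIdChanged` once on mount. A minimal sketch of host-side usage, assuming the prop names in the hunk above; the concrete values are placeholders, and 'canvas' is only assumed to be a valid TabName:

// Hypothetical host render; App calls setActiveTab(destination) after mount.
<InvokeAIUI
  queueId="default"
  selectedStylePresetId="preset-1234" // placeholder ID, dispatched via activeStylePresetIdChanged
  destination="canvas" // assumed TabName value
/>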


@@ -1,5 +1,10 @@
import { createLogWriter } from '@roarr/browser-log-writer';
import { useAppSelector } from 'app/store/storeHooks';
import {
selectSystemLogIsEnabled,
selectSystemLogLevel,
selectSystemLogNamespaces,
} from 'features/system/store/systemSlice';
import { useEffect, useMemo } from 'react';
import { ROARR, Roarr } from 'roarr';
@@ -7,9 +12,9 @@ import type { LogNamespace } from './logger';
import { $logger, BASE_CONTEXT, LOG_LEVEL_MAP, logger } from './logger';
export const useLogger = (namespace: LogNamespace) => {
const logLevel = useAppSelector((s) => s.system.logLevel);
const logNamespaces = useAppSelector((s) => s.system.logNamespaces);
const logIsEnabled = useAppSelector((s) => s.system.logIsEnabled);
const logLevel = useAppSelector(selectSystemLogLevel);
const logNamespaces = useAppSelector(selectSystemLogNamespaces);
const logIsEnabled = useAppSelector(selectSystemLogIsEnabled);
// The provided Roarr browser log writer uses localStorage to config logging to console
useEffect(() => {


@@ -1,2 +1,3 @@
export const STORAGE_PREFIX = '@@invokeai-';
export const EMPTY_ARRAY = [];
export const EMPTY_OBJECT = {};
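
Module-level constants like these are typically used to keep selector fallbacks referentially stable, so that `useAppSelector`'s equality check does not see a "new" value on every store update. A minimal sketch of the pattern; the selector and the `state.gallery.selection` field are hypothetical:

// Returning EMPTY_ARRAY instead of a fresh [] yields the same reference on every
// call, so memoized consumers do not re-render when the fallback is hit.
const selectSelectedImageNames = (state: RootState) => state.gallery.selection ?? EMPTY_ARRAY;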


@@ -1,10 +1,12 @@
import { isAnyOf } from '@reduxjs/toolkit';
import { logger } from 'app/logging/logger';
import type { AppStartListening } from 'app/store/middleware/listenerMiddleware';
import {
sessionStagingAreaImageAccepted,
sessionStagingAreaReset,
} from 'features/controlLayers/store/canvasSessionSlice';
import { rasterLayerAdded } from 'features/controlLayers/store/canvasV2Slice';
import { canvasReset, rasterLayerAdded } from 'features/controlLayers/store/canvasSlice';
import { selectCanvasSlice } from 'features/controlLayers/store/selectors';
import type { CanvasRasterLayerState } from 'features/controlLayers/store/types';
import { imageDTOToImageObject } from 'features/controlLayers/store/types';
import { toast } from 'features/toast/toast';
@@ -15,14 +17,16 @@ import { assert } from 'tsafe';
const log = logger('canvas');
const matchCanvasOrStagingAreaRest = isAnyOf(sessionStagingAreaReset, canvasReset);
export const addStagingListeners = (startAppListening: AppStartListening) => {
startAppListening({
actionCreator: sessionStagingAreaReset,
matcher: matchCanvasOrStagingAreaRest,
effect: async (_, { dispatch }) => {
try {
const req = dispatch(
queueApi.endpoints.cancelByBatchOrigin.initiate(
{ origin: 'canvas' },
queueApi.endpoints.cancelByBatchDestination.initiate(
{ destination: 'canvas' },
{ fixedCacheKey: 'cancelByBatchOrigin' }
)
);
@@ -58,7 +62,7 @@ export const addStagingListeners = (startAppListening: AppStartListening) => {
const stagingAreaImage = state.canvasSession.stagedImages[index];
assert(stagingAreaImage, 'No staged image found to accept');
const { x, y } = state.canvasV2.present.bbox.rect;
const { x, y } = selectCanvasSlice(state).bbox.rect;
const { imageDTO, offsetX, offsetY } = stagingAreaImage;
const imageObject = imageDTOToImageObject(imageDTO);
@@ -67,7 +71,7 @@ export const addStagingListeners = (startAppListening: AppStartListening) => {
objects: [imageObject],
};
api.dispatch(rasterLayerAdded({ overrides, isSelected: true }));
api.dispatch(rasterLayerAdded({ overrides, isSelected: false }));
api.dispatch(sessionStagingAreaReset());
},
});


@@ -1,6 +1,8 @@
import type { AppStartListening } from 'app/store/middleware/listenerMiddleware';
import { selectCanvasSlice } from 'features/controlLayers/store/selectors';
import { getImageUsage } from 'features/deleteImageModal/store/selectors';
import { nodeEditorReset } from 'features/nodes/store/nodesSlice';
import { selectNodesSlice } from 'features/nodes/store/selectors';
import { imagesApi } from 'services/api/endpoints/images';
export const addDeleteBoardAndImagesFulfilledListener = (startAppListening: AppStartListening) => {
@@ -13,10 +15,12 @@ export const addDeleteBoardAndImagesFulfilledListener = (startAppListening: AppS
let wasNodeEditorReset = false;
const { nodes, canvasV2 } = getState();
const state = getState();
const nodes = selectNodesSlice(state);
const canvas = selectCanvasSlice(state);
deleted_images.forEach((image_name) => {
const imageUsage = getImageUsage(nodes.present, canvasV2.present, image_name);
const imageUsage = getImageUsage(nodes, canvas, image_name);
if (imageUsage.isNodesImage && !wasNodeEditorReset) {
dispatch(nodeEditorReset());


@@ -31,7 +31,7 @@ export const addEnqueueRequestedLinear = (startAppListening: AppStartListening)
let didStartStaging = false;
if (!state.canvasSession.isStaging && state.canvasSession.mode === 'compose') {
if (!state.canvasSession.isStaging && state.canvasSettings.sendToCanvas) {
dispatch(sessionStartedStaging());
didStartStaging = true;
}
@@ -70,7 +70,11 @@ export const addEnqueueRequestedLinear = (startAppListening: AppStartListening)
const { g, noise, posCond } = buildGraphResult.value;
const prepareBatchResult = withResult(() => prepareLinearUIBatch(state, g, prepend, noise, posCond));
const destination = state.canvasSettings.sendToCanvas ? 'canvas' : 'gallery';
const prepareBatchResult = withResult(() =>
prepareLinearUIBatch(state, g, prepend, noise, posCond, 'generation', destination)
);
if (isErr(prepareBatchResult)) {
log.error({ error: serializeError(prepareBatchResult.error) }, 'Failed to prepare batch');


@@ -1,5 +1,6 @@
import { enqueueRequested } from 'app/store/actions';
import type { AppStartListening } from 'app/store/middleware/listenerMiddleware';
import { selectNodesSlice } from 'features/nodes/store/selectors';
import { buildNodesGraph } from 'features/nodes/util/graph/buildNodesGraph';
import { buildWorkflowWithValidation } from 'features/nodes/util/workflow/buildWorkflow';
import { queueApi } from 'services/api/endpoints/queue';
@@ -11,12 +12,12 @@ export const addEnqueueRequestedNodes = (startAppListening: AppStartListening) =
enqueueRequested.match(action) && action.payload.tabName === 'workflows',
effect: async (action, { getState, dispatch }) => {
const state = getState();
const { nodes, edges } = state.nodes.present;
const nodes = selectNodesSlice(state);
const workflow = state.workflow;
const graph = buildNodesGraph(state.nodes.present);
const graph = buildNodesGraph(nodes);
const builtWorkflow = buildWorkflowWithValidation({
nodes,
edges,
nodes: nodes.nodes,
edges: nodes.edges,
workflow,
});
@@ -31,6 +32,7 @@ export const addEnqueueRequestedNodes = (startAppListening: AppStartListening) =
workflow: builtWorkflow,
runs: state.params.iterations,
origin: 'workflows',
destination: 'gallery',
},
prepend: action.payload.prepend,
};


@@ -16,7 +16,7 @@ export const addEnqueueRequestedUpscale = (startAppListening: AppStartListening)
const { g, noise, posCond } = await buildMultidiffusionUpscaleGraph(state);
const batchConfig = prepareLinearUIBatch(state, g, prepend, noise, posCond);
const batchConfig = prepareLinearUIBatch(state, g, prepend, noise, posCond, 'upscaling', 'gallery');
const req = dispatch(
queueApi.endpoints.enqueueBatch.initiate(batchConfig, {


@@ -1,7 +1,8 @@
import { logger } from 'app/logging/logger';
import type { AppStartListening } from 'app/store/middleware/listenerMiddleware';
import type { AppDispatch, RootState } from 'app/store/store';
import { entityDeleted, ipaImageChanged } from 'features/controlLayers/store/canvasV2Slice';
import { entityDeleted, ipaImageChanged } from 'features/controlLayers/store/canvasSlice';
import { selectCanvasSlice } from 'features/controlLayers/store/selectors';
import { getEntityIdentifier } from 'features/controlLayers/store/types';
import { imageDeletionConfirmed } from 'features/deleteImageModal/store/actions';
import { isModalOpenChanged } from 'features/deleteImageModal/store/slice';
@@ -40,7 +41,7 @@ const deleteNodesImages = (state: RootState, dispatch: AppDispatch, imageDTO: Im
};
// const deleteControlAdapterImages = (state: RootState, dispatch: AppDispatch, imageDTO: ImageDTO) => {
// state.canvasV2.present.controlAdapters.entities.forEach(({ id, imageObject, processedImageObject }) => {
// state.canvas.present.controlAdapters.entities.forEach(({ id, imageObject, processedImageObject }) => {
// if (
// imageObject?.image.image_name === imageDTO.image_name ||
// processedImageObject?.image.image_name === imageDTO.image_name
@@ -52,7 +53,7 @@ const deleteNodesImages = (state: RootState, dispatch: AppDispatch, imageDTO: Im
// };
const deleteIPAdapterImages = (state: RootState, dispatch: AppDispatch, imageDTO: ImageDTO) => {
state.canvasV2.present.ipAdapters.entities.forEach((entity) => {
selectCanvasSlice(state).ipAdapters.entities.forEach((entity) => {
if (entity.ipAdapter.image?.image_name === imageDTO.image_name) {
dispatch(ipaImageChanged({ entityIdentifier: getEntityIdentifier(entity), imageDTO: null }));
}
@@ -60,7 +61,7 @@ const deleteIPAdapterImages = (state: RootState, dispatch: AppDispatch, imageDTO
};
const deleteLayerImages = (state: RootState, dispatch: AppDispatch, imageDTO: ImageDTO) => {
state.canvasV2.present.rasterLayers.entities.forEach(({ id, objects }) => {
selectCanvasSlice(state).rasterLayers.entities.forEach(({ id, objects }) => {
let shouldDelete = false;
for (const obj of objects) {
if (obj.type === 'image' && obj.image.image_name === imageDTO.image_name) {


@@ -6,7 +6,8 @@ import {
ipaImageChanged,
rasterLayerAdded,
rgIPAdapterImageChanged,
} from 'features/controlLayers/store/canvasV2Slice';
} from 'features/controlLayers/store/canvasSlice';
import { selectCanvasSlice } from 'features/controlLayers/store/selectors';
import type { CanvasControlLayerState, CanvasRasterLayerState } from 'features/controlLayers/store/types';
import { imageDTOToImageObject } from 'features/controlLayers/store/types';
import type { TypesafeDraggableData, TypesafeDroppableData } from 'features/dnd/types';
@@ -85,7 +86,7 @@ export const addImageDroppedListener = (startAppListening: AppStartListening) =>
activeData.payload.imageDTO
) {
const imageObject = imageDTOToImageObject(activeData.payload.imageDTO);
const { x, y } = getState().canvasV2.present.bbox.rect;
const { x, y } = selectCanvasSlice(getState()).bbox.rect;
const overrides: Partial<CanvasRasterLayerState> = {
objects: [imageObject],
position: { x, y },
@@ -103,7 +104,7 @@ export const addImageDroppedListener = (startAppListening: AppStartListening) =>
activeData.payload.imageDTO
) {
const imageObject = imageDTOToImageObject(activeData.payload.imageDTO);
const { x, y } = getState().canvasV2.present.bbox.rect;
const { x, y } = selectCanvasSlice(getState()).bbox.rect;
const overrides: Partial<CanvasControlLayerState> = {
objects: [imageObject],
position: { x, y },


@@ -1,6 +1,6 @@
import { logger } from 'app/logging/logger';
import type { AppStartListening } from 'app/store/middleware/listenerMiddleware';
import { ipaImageChanged, rgIPAdapterImageChanged } from 'features/controlLayers/store/canvasV2Slice';
import { ipaImageChanged, rgIPAdapterImageChanged } from 'features/controlLayers/store/canvasSlice';
import { selectListBoardsQueryArgs } from 'features/gallery/store/gallerySelectors';
import { fieldImageValueChanged } from 'features/nodes/store/nodesSlice';
import { upscaleInitialImageChanged } from 'features/parameters/store/upscaleSlice';


@@ -46,7 +46,7 @@ export const addModelSelectedListener = (startAppListening: AppStartListening) =
}
// handle incompatible controlnets
// state.canvasV2.present.controlAdapters.entities.forEach((ca) => {
// state.canvas.present.controlAdapters.entities.forEach((ca) => {
// if (ca.model?.base !== newBaseModel) {
// modelsCleared += 1;
// if (ca.isEnabled) {


@@ -8,11 +8,12 @@ import {
controlLayerModelChanged,
ipaModelChanged,
rgIPAdapterModelChanged,
} from 'features/controlLayers/store/canvasV2Slice';
} from 'features/controlLayers/store/canvasSlice';
import { loraDeleted } from 'features/controlLayers/store/lorasSlice';
import { modelChanged, refinerModelChanged, vaeSelected } from 'features/controlLayers/store/paramsSlice';
import { selectCanvasSlice } from 'features/controlLayers/store/selectors';
import { getEntityIdentifier } from 'features/controlLayers/store/types';
import { calculateNewSize } from 'features/parameters/components/DocumentSize/calculateNewSize';
import { calculateNewSize } from 'features/parameters/components/Bbox/calculateNewSize';
import { postProcessingModelChanged, upscaleModelChanged } from 'features/parameters/store/upscaleSlice';
import { zParameterModel, zParameterVAEModel } from 'features/parameters/types/parameterSchemas';
import { getIsSizeOptimal, getOptimalDimension } from 'features/parameters/util/optimalDimension';
@@ -81,21 +82,12 @@ const handleMainModels: ModelHandler = (models, state, dispatch, log) => {
const result = zParameterModel.safeParse(defaultModelInList);
if (result.success) {
dispatch(modelChanged({ model: defaultModelInList, previousModel: currentModel }));
const { bbox } = selectCanvasSlice(state);
const optimalDimension = getOptimalDimension(defaultModelInList);
if (
getIsSizeOptimal(
state.canvasV2.present.bbox.rect.width,
state.canvasV2.present.bbox.rect.height,
optimalDimension
)
) {
if (getIsSizeOptimal(bbox.rect.width, bbox.rect.height, optimalDimension)) {
return;
}
const { width, height } = calculateNewSize(
state.canvasV2.present.bbox.aspectRatio.value,
optimalDimension * optimalDimension
);
const { width, height } = calculateNewSize(bbox.aspectRatio.value, optimalDimension * optimalDimension);
dispatch(bboxWidthChanged({ width }));
dispatch(bboxHeightChanged({ height }));
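
This hunk reads the bbox once via `selectCanvasSlice` and recomputes dimensions with `calculateNewSize`, targeting an area of `optimalDimension * optimalDimension` while keeping the current aspect ratio. A minimal sketch of that width/height math, assuming the helper solves width × height ≈ area with width / height = aspectRatio; the multiple-of-8 rounding is an assumption, not confirmed by this diff:

// Sketch only, not the repo's implementation.
const roundToMultiple = (value: number, multiple: number): number => Math.round(value / multiple) * multiple;

const calculateNewSizeSketch = (aspectRatio: number, targetArea: number) => {
  // w * h = targetArea and w / h = aspectRatio  =>  w = sqrt(area * ratio), h = sqrt(area / ratio)
  const width = roundToMultiple(Math.sqrt(targetArea * aspectRatio), 8); // rounding granularity assumed
  const height = roundToMultiple(Math.sqrt(targetArea / aspectRatio), 8);
  return { width, height };
};

// e.g. aspectRatio 1.5 with optimalDimension 1024 => targetArea 1048576 => ~1256x840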
@@ -178,7 +170,7 @@ const handleLoRAModels: ModelHandler = (models, state, dispatch, _log) => {
const handleControlAdapterModels: ModelHandler = (models, state, dispatch, _log) => {
const caModels = models.filter(isControlNetOrT2IAdapterModelConfig);
state.canvasV2.present.controlLayers.entities.forEach((entity) => {
selectCanvasSlice(state).controlLayers.entities.forEach((entity) => {
const isModelAvailable = caModels.some((m) => m.key === entity.controlAdapter.model?.key);
if (isModelAvailable) {
return;
@@ -189,7 +181,7 @@ const handleControlAdapterModels: ModelHandler = (models, state, dispatch, _log)
const handleIPAdapterModels: ModelHandler = (models, state, dispatch, _log) => {
const ipaModels = models.filter(isIPAdapterModelConfig);
state.canvasV2.present.ipAdapters.entities.forEach((entity) => {
selectCanvasSlice(state).ipAdapters.entities.forEach((entity) => {
const isModelAvailable = ipaModels.some((m) => m.key === entity.ipAdapter.model?.key);
if (isModelAvailable) {
return;
@@ -197,7 +189,7 @@ const handleIPAdapterModels: ModelHandler = (models, state, dispatch, _log) => {
dispatch(ipaModelChanged({ entityIdentifier: getEntityIdentifier(entity), modelConfig: null }));
});
state.canvasV2.present.regions.entities.forEach((entity) => {
selectCanvasSlice(state).regions.entities.forEach((entity) => {
entity.ipAdapters.forEach(({ id: ipAdapterId, model }) => {
const isModelAvailable = ipaModels.some((m) => m.key === model?.key);
if (isModelAvailable) {


@@ -1,5 +1,5 @@
import type { AppStartListening } from 'app/store/middleware/listenerMiddleware';
import { bboxHeightChanged, bboxWidthChanged } from 'features/controlLayers/store/canvasV2Slice';
import { bboxHeightChanged, bboxWidthChanged } from 'features/controlLayers/store/canvasSlice';
import {
setCfgRescaleMultiplier,
setCfgScale,


@@ -2,6 +2,7 @@ import { logger } from 'app/logging/logger';
import type { AppStartListening } from 'app/store/middleware/listenerMiddleware';
import { updateAllNodesRequested } from 'features/nodes/store/actions';
import { $templates, nodesChanged } from 'features/nodes/store/nodesSlice';
import { selectNodes } from 'features/nodes/store/selectors';
import { NodeUpdateError } from 'features/nodes/types/error';
import { isInvocationNode } from 'features/nodes/types/invocation';
import { getNeedsUpdate, updateNode } from 'features/nodes/util/node/nodeUpdate';
@@ -14,7 +15,7 @@ export const addUpdateAllNodesRequestedListener = (startAppListening: AppStartLi
startAppListening({
actionCreator: updateAllNodesRequested,
effect: (action, { dispatch, getState }) => {
const { nodes } = getState().nodes.present;
const nodes = selectNodes(getState());
const templates = $templates.get();
let unableToUpdateCount = 0;


@@ -8,10 +8,9 @@ import { deepClone } from 'common/util/deepClone';
import { changeBoardModalSlice } from 'features/changeBoardModal/store/slice';
import { canvasSessionPersistConfig, canvasSessionSlice } from 'features/controlLayers/store/canvasSessionSlice';
import { canvasSettingsPersistConfig, canvasSettingsSlice } from 'features/controlLayers/store/canvasSettingsSlice';
import { canvasV2PersistConfig, canvasV2Slice } from 'features/controlLayers/store/canvasV2Slice';
import { canvasPersistConfig, canvasSlice, canvasUndoableConfig } from 'features/controlLayers/store/canvasSlice';
import { lorasPersistConfig, lorasSlice } from 'features/controlLayers/store/lorasSlice';
import { paramsPersistConfig, paramsSlice } from 'features/controlLayers/store/paramsSlice';
import { toolPersistConfig, toolSlice } from 'features/controlLayers/store/toolSlice';
import { deleteImageModalSlice } from 'features/deleteImageModal/store/slice';
import { dynamicPromptsPersistConfig, dynamicPromptsSlice } from 'features/dynamicPrompts/store/dynamicPromptsSlice';
import { galleryPersistConfig, gallerySlice } from 'features/gallery/store/gallerySlice';
@@ -58,12 +57,11 @@ const allReducers = {
[queueSlice.name]: queueSlice.reducer,
[workflowSlice.name]: workflowSlice.reducer,
[hrfSlice.name]: hrfSlice.reducer,
[canvasV2Slice.name]: undoable(canvasV2Slice.reducer),
[canvasSlice.name]: undoable(canvasSlice.reducer, canvasUndoableConfig),
[workflowSettingsSlice.name]: workflowSettingsSlice.reducer,
[upscaleSlice.name]: upscaleSlice.reducer,
[stylePresetSlice.name]: stylePresetSlice.reducer,
[paramsSlice.name]: paramsSlice.reducer,
[toolSlice.name]: toolSlice.reducer,
[canvasSettingsSlice.name]: canvasSettingsSlice.reducer,
[canvasSessionSlice.name]: canvasSessionSlice.reducer,
[lorasSlice.name]: lorasSlice.reducer,
@@ -104,12 +102,11 @@ const persistConfigs: { [key in keyof typeof allReducers]?: PersistConfig } = {
[dynamicPromptsPersistConfig.name]: dynamicPromptsPersistConfig,
[modelManagerV2PersistConfig.name]: modelManagerV2PersistConfig,
[hrfPersistConfig.name]: hrfPersistConfig,
[canvasV2PersistConfig.name]: canvasV2PersistConfig,
[canvasPersistConfig.name]: canvasPersistConfig,
[workflowSettingsPersistConfig.name]: workflowSettingsPersistConfig,
[upscalePersistConfig.name]: upscalePersistConfig,
[stylePresetPersistConfig.name]: stylePresetPersistConfig,
[paramsPersistConfig.name]: paramsPersistConfig,
[toolPersistConfig.name]: toolPersistConfig,
[canvasSettingsPersistConfig.name]: canvasSettingsPersistConfig,
[canvasSessionPersistConfig.name]: canvasSessionPersistConfig,
[lorasPersistConfig.name]: lorasPersistConfig,
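
The canvas reducer now gets an explicit `canvasUndoableConfig` instead of a bare `undoable(...)`. The contents of that config are not shown in this hunk; a minimal sketch of what a redux-undo options object can carry, with all values below being assumptions:

import type { UndoableOptions } from 'redux-undo';

// Sketch only; the real canvasUndoableConfig may differ.
const canvasUndoableConfigSketch: UndoableOptions = {
  limit: 64, // cap the history depth (value assumed)
  undoType: 'canvas/undo', // custom action types (assumed)
  redoType: 'canvas/redo',
  groupBy: (action) => (action.type.endsWith('brushLineAdded') ? action.type : null), // coalesce rapid strokes
};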


@@ -0,0 +1,104 @@
import type { SystemStyleObject } from '@invoke-ai/ui-library';
import { Box, Flex, IconButton, Tooltip, useToken } from '@invoke-ai/ui-library';
import type { ReactElement, ReactNode } from 'react';
import { memo, useCallback, useMemo } from 'react';
type IconSwitchProps = {
isChecked: boolean;
onChange: (checked: boolean) => void;
iconChecked: ReactElement;
tooltipChecked?: ReactNode;
iconUnchecked: ReactElement;
tooltipUnchecked?: ReactNode;
ariaLabel: string;
};
const getSx = (padding: string | number): SystemStyleObject => ({
transition: 'left 0.1s ease-in-out, transform 0.1s ease-in-out',
'&[data-checked="true"]': {
left: `calc(100% - ${padding})`,
transform: 'translateX(-100%)',
},
'&[data-checked="false"]': {
left: padding,
transform: 'translateX(0)',
},
});
export const IconSwitch = memo(
({
isChecked,
onChange,
iconChecked,
tooltipChecked,
iconUnchecked,
tooltipUnchecked,
ariaLabel,
}: IconSwitchProps) => {
const onUncheck = useCallback(() => {
onChange(false);
}, [onChange]);
const onCheck = useCallback(() => {
onChange(true);
}, [onChange]);
const gap = useToken('space', 1.5);
const sx = useMemo(() => getSx(gap), [gap]);
return (
<Flex
position="relative"
bg="base.800"
borderRadius="base"
alignItems="center"
justifyContent="center"
h="full"
p={gap}
gap={gap}
>
<Box
position="absolute"
borderRadius="base"
bg="invokeBlue.400"
w={12}
top={gap}
bottom={gap}
data-checked={isChecked}
sx={sx}
/>
<Tooltip hasArrow label={tooltipUnchecked}>
<IconButton
size="sm"
fontSize={16}
icon={iconUnchecked}
onClick={onUncheck}
variant={!isChecked ? 'solid' : 'ghost'}
colorScheme={!isChecked ? 'invokeBlue' : 'base'}
aria-label={ariaLabel}
data-checked={!isChecked}
w={12}
alignSelf="stretch"
h="auto"
/>
</Tooltip>
<Tooltip hasArrow label={tooltipChecked}>
<IconButton
size="sm"
fontSize={16}
icon={iconChecked}
onClick={onCheck}
variant={isChecked ? 'solid' : 'ghost'}
colorScheme={isChecked ? 'invokeBlue' : 'base'}
aria-label={ariaLabel}
data-checked={isChecked}
w={12}
alignSelf="stretch"
h="auto"
/>
</Tooltip>
</Flex>
);
}
);
IconSwitch.displayName = 'IconSwitch';
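
A usage sketch for the new two-state switch, following the props in its type; the consumer component, state, icons, and labels below are hypothetical stand-ins, not taken from this diff:

import { useState } from 'react';
import { PiImageBold, PiPaintBrushBold } from 'react-icons/pi'; // stand-in icons (assumption)

// Hypothetical consumer: toggle between two destinations.
const SendDestinationSwitch = () => {
  const [sendToCanvas, setSendToCanvas] = useState(false);
  return (
    <IconSwitch
      isChecked={sendToCanvas}
      onChange={setSendToCanvas}
      iconUnchecked={<PiImageBold />}
      tooltipUnchecked="Send To Gallery"
      iconChecked={<PiPaintBrushBold />}
      tooltipChecked="Send To Canvas"
      ariaLabel="Toggle send destination"
    />
  );
};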


@@ -13,8 +13,9 @@ import {
Spacer,
Text,
} from '@invoke-ai/ui-library';
import { createSelector } from '@reduxjs/toolkit';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { setShouldEnableInformationalPopovers } from 'features/system/store/systemSlice';
import { selectSystemSlice, setShouldEnableInformationalPopovers } from 'features/system/store/systemSlice';
import { toast } from 'features/toast/toast';
import { merge, omit } from 'lodash-es';
import type { ReactElement } from 'react';
@@ -31,8 +32,13 @@ type Props = {
children: ReactElement;
};
const selectShouldEnableInformationalPopovers = createSelector(
selectSystemSlice,
(system) => system.shouldEnableInformationalPopovers
);
export const InformationalPopover = memo(({ feature, children, inPortal = true, ...rest }: Props) => {
const shouldEnableInformationalPopovers = useAppSelector((s) => s.system.shouldEnableInformationalPopovers);
const shouldEnableInformationalPopovers = useAppSelector(selectShouldEnableInformationalPopovers);
const data = useMemo(() => POPOVER_DATA[feature], [feature]);
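
This hunk follows a pattern repeated throughout these changes: inline `useAppSelector((s) => ...)` lambdas are replaced with named selectors built on the slice's own selector via `createSelector`. A minimal sketch of the pattern, using a hypothetical `someFlag` field:

// Before: an inline lambda, recreated on every render and never memoized.
const flagInline = useAppSelector((s) => s.system.someFlag); // 'someFlag' is hypothetical

// After: a module-level memoized selector, shared by all component instances.
const selectSomeFlag = createSelector(selectSystemSlice, (system) => system.someFlag);
// ...inside the component:
const flag = useAppSelector(selectSomeFlag);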


@@ -1,52 +1,74 @@
 import { useStore } from '@nanostores/react';
 import type { WritableAtom } from 'nanostores';
-import { useCallback, useMemo, useState } from 'react';
+import { atom } from 'nanostores';
+import { useCallback, useState } from 'react';
-export const useBoolean = (initialValue: boolean) => {
-  const [isTrue, set] = useState(initialValue);
-  const setTrue = useCallback(() => set(true), []);
-  const setFalse = useCallback(() => set(false), []);
-  const toggle = useCallback(() => set((v) => !v), []);
+type UseBoolean = {
+  isTrue: boolean;
+  setTrue: () => void;
+  setFalse: () => void;
+  set: (value: boolean) => void;
+  toggle: () => void;
+};
-  const api = useMemo(
-    () => ({
+/**
+ * Creates a hook to manage a boolean state. The boolean is stored in a nanostores atom.
+ * Returns a tuple containing the hook and the atom. Use this for global boolean state.
+ * @param initialValue Initial value of the boolean
+ */
+export const buildUseBoolean = (initialValue: boolean): [() => UseBoolean, WritableAtom<boolean>] => {
+  const $boolean = atom(initialValue);
+  const setTrue = () => {
+    $boolean.set(true);
+  };
+  const setFalse = () => {
+    $boolean.set(false);
+  };
+  const set = (value: boolean) => {
+    $boolean.set(value);
+  };
+  const toggle = () => {
+    $boolean.set(!$boolean.get());
+  };
+  const useBoolean = () => {
+    const isTrue = useStore($boolean);
+    return {
       isTrue,
-      set,
       setTrue,
       setFalse,
+      set,
       toggle,
-    }),
-    [isTrue, set, setTrue, setFalse, toggle]
-  );
+    };
+  };
-  return api;
+  return [useBoolean, $boolean] as const;
 };
-export const buildUseBoolean = ($boolean: WritableAtom<boolean>) => {
-  return () => {
-    const setTrue = useCallback(() => {
-      $boolean.set(true);
-    }, []);
-    const setFalse = useCallback(() => {
-      $boolean.set(false);
-    }, []);
-    const set = useCallback((value: boolean) => {
-      $boolean.set(value);
-    }, []);
-    const toggle = useCallback(() => {
-      $boolean.set(!$boolean.get());
-    }, []);
+/**
+ * Hook to manage a boolean state. Use this for a local boolean state.
+ * @param initialValue Initial value of the boolean
+ */
+export const useBoolean = (initialValue: boolean) => {
+  const [isTrue, set] = useState(initialValue);
-    const api = useMemo(
-      () => ({
-        setTrue,
-        setFalse,
-        set,
-        toggle,
-        $boolean,
-      }),
-      [set, setFalse, setTrue, toggle]
-    );
+  const setTrue = useCallback(() => {
+    set(true);
+  }, [set]);
+  const setFalse = useCallback(() => {
+    set(false);
+  }, [set]);
+  const toggle = useCallback(() => {
+    set((val) => !val);
+  }, [set]);
-    return api;
+  return {
+    isTrue,
+    setTrue,
+    setFalse,
+    set,
+    toggle,
+  };
 };
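
A usage sketch of the two flavors, following the docstrings above; the component and atom names are hypothetical:

// Global flavor: one atom at module scope, shared by every consumer of the hook.
const [useIsDebugPanelOpen, $isDebugPanelOpen] = buildUseBoolean(false);

const DebugPanelToggle = () => {
  const debugPanel = useIsDebugPanelOpen(); // { isTrue, setTrue, setFalse, set, toggle }
  return <button onClick={debugPanel.toggle}>{debugPanel.isTrue ? 'Hide' : 'Show'}</button>;
};

// Non-component code can read or write the atom directly:
$isDebugPanelOpen.set(true);

// Local flavor, inside a component: const dialog = useBoolean(false); dialog.setTrue(); etc.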


@@ -1,5 +1,6 @@
import { createMemoizedSelector } from 'app/store/createMemoizedSelector';
import { useAppSelector } from 'app/store/storeHooks';
import { selectAutoAddBoardId } from 'features/gallery/store/gallerySelectors';
import { toast } from 'features/toast/toast';
import { selectActiveTab } from 'features/ui/store/uiSelectors';
import { useCallback, useEffect, useState } from 'react';
@@ -26,7 +27,7 @@ const selectPostUploadAction = createMemoizedSelector(selectActiveTab, (activeTa
export const useFullscreenDropzone = () => {
const { t } = useTranslation();
const autoAddBoardId = useAppSelector((s) => s.gallery.autoAddBoardId);
const autoAddBoardId = useAppSelector(selectAutoAddBoardId);
const [isHandlingUpload, setIsHandlingUpload] = useState<boolean>(false);
const postUploadAction = useAppSelector(selectPostUploadAction);
const [uploadImage] = useUploadImageMutation();


@@ -1,7 +1,7 @@
import { useAppDispatch } from 'app/store/storeHooks';
import { addScope, removeScope, setScopes } from 'common/hooks/interactionScopes';
import { useClearQueue } from 'features/queue/components/ClearQueueConfirmationAlertDialog';
import { useCancelCurrentQueueItem } from 'features/queue/hooks/useCancelCurrentQueueItem';
import { useClearQueue } from 'features/queue/hooks/useClearQueue';
import { useQueueBack } from 'features/queue/hooks/useQueueBack';
import { useQueueFront } from 'features/queue/hooks/useQueueFront';
import { useFeatureStatus } from 'features/system/hooks/useFeatureStatus';


@@ -1,6 +1,8 @@
import type { ComboboxOnChange, ComboboxOption } from '@invoke-ai/ui-library';
import { createSelector } from '@reduxjs/toolkit';
import { useAppSelector } from 'app/store/storeHooks';
import type { GroupBase } from 'chakra-react-select';
import { selectParamsSlice } from 'features/controlLayers/store/paramsSlice';
import type { ModelIdentifierField } from 'features/nodes/types/common';
import { groupBy, reduce } from 'lodash-es';
import { useCallback, useMemo } from 'react';
@@ -28,11 +30,13 @@ const groupByBaseFunc = <T extends AnyModelConfig>(model: T) => model.base.toUpp
const groupByBaseAndTypeFunc = <T extends AnyModelConfig>(model: T) =>
`${model.base.toUpperCase()} / ${model.type.replaceAll('_', ' ').toUpperCase()}`;
const selectBaseWithSDXLFallback = createSelector(selectParamsSlice, (params) => params.model?.base ?? 'sdxl');
export const useGroupedModelCombobox = <T extends AnyModelConfig>(
arg: UseGroupedModelComboboxArg<T>
): UseGroupedModelComboboxReturn => {
const { t } = useTranslation();
const base_model = useAppSelector((s) => s.params.model?.base ?? 'sdxl');
const base = useAppSelector(selectBaseWithSDXLFallback);
const { modelConfigs, selectedModel, getIsDisabled, onChange, isLoading, groupByType = false } = arg;
const options = useMemo<GroupBase<ComboboxOption>[]>(() => {
if (!modelConfigs) {
@@ -54,9 +58,9 @@ export const useGroupedModelCombobox = <T extends AnyModelConfig>(
},
[] as GroupBase<ComboboxOption>[]
);
_options.sort((a) => (a.label?.split('/')[0]?.toLowerCase().includes(base_model) ? -1 : 1));
_options.sort((a) => (a.label?.split('/')[0]?.toLowerCase().includes(base) ? -1 : 1));
return _options;
}, [modelConfigs, groupByType, getIsDisabled, base_model]);
}, [modelConfigs, groupByType, getIsDisabled, base]);
const value = useMemo(
() =>
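The sort above uses a one-sided comparator to float groups matching the current base model to the top. A standalone sketch of the same trick, with illustrative labels:

type Group = { label?: string };

const groups: Group[] = [{ label: 'SD1 / MAIN' }, { label: 'SDXL / MAIN' }, { label: 'SDXL / LORA' }];
const base = 'sdxl';

// A comparator may ignore its second argument: returning -1 for a match and
// 1 otherwise pushes every matching group ahead of every non-matching one.
groups.sort((a) => (a.label?.split('/')[0]?.toLowerCase().includes(base) ? -1 : 1));
// => SDXL / MAIN, SDXL / LORA, SD1 / MAIN (relative order among matches may
//    vary, since this comparator is not a strict weak ordering)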

View File

@@ -1,4 +1,5 @@
import { useAppSelector } from 'app/store/storeHooks';
import { selectAutoAddBoardId } from 'features/gallery/store/gallerySelectors';
import { useCallback } from 'react';
import { useDropzone } from 'react-dropzone';
import { useUploadImageMutation } from 'services/api/endpoints/images';
@@ -29,7 +30,7 @@ type UseImageUploadButtonArgs = {
* <input {...getUploadInputProps()} /> // hidden, handles native upload functionality
*/
export const useImageUploadButton = ({ postUploadAction, isDisabled }: UseImageUploadButtonArgs) => {
const autoAddBoardId = useAppSelector((s) => s.gallery.autoAddBoardId);
const autoAddBoardId = useAppSelector(selectAutoAddBoardId);
const [uploadImage] = useUploadImageMutation();
const onDropAccepted = useCallback(
(files: File[]) => {

View File

@@ -2,11 +2,13 @@ import { useStore } from '@nanostores/react';
import { $isConnected } from 'app/hooks/useSocketIO';
import { createMemoizedSelector } from 'app/store/createMemoizedSelector';
import { useAppSelector } from 'app/store/storeHooks';
import { useCanvasManagerSafe } from 'features/controlLayers/contexts/CanvasManagerProviderGate';
import { selectParamsSlice } from 'features/controlLayers/store/paramsSlice';
import { selectCanvasV2Slice } from 'features/controlLayers/store/selectors';
import { selectCanvasSlice } from 'features/controlLayers/store/selectors';
import { selectDynamicPromptsSlice } from 'features/dynamicPrompts/store/dynamicPromptsSlice';
import { getShouldProcessPrompt } from 'features/dynamicPrompts/util/getShouldProcessPrompt';
import { $templates, selectNodesSlice } from 'features/nodes/store/nodesSlice';
import { $templates } from 'features/nodes/store/nodesSlice';
import { selectNodesSlice } from 'features/nodes/store/selectors';
import type { Templates } from 'features/nodes/store/types';
import { selectWorkflowSettingsSlice } from 'features/nodes/store/workflowSettingsSlice';
import { isInvocationNode } from 'features/nodes/types/invocation';
@@ -16,6 +18,7 @@ import { selectSystemSlice } from 'features/system/store/systemSlice';
import { selectActiveTab } from 'features/ui/store/uiSelectors';
import i18n from 'i18next';
import { forEach, upperFirst } from 'lodash-es';
import { atom } from 'nanostores';
import { useMemo } from 'react';
import { getConnectedEdges } from 'reactflow';
@@ -27,21 +30,21 @@ const LAYER_TYPE_TO_TKEY = {
control_layer: 'controlLayers.globalControlAdapter',
} as const;
const createSelector = (templates: Templates, isConnected: boolean) =>
const createSelector = (templates: Templates, isConnected: boolean, canvasIsBusy: boolean) =>
createMemoizedSelector(
[
selectSystemSlice,
selectNodesSlice,
selectWorkflowSettingsSlice,
selectDynamicPromptsSlice,
selectCanvasV2Slice,
selectCanvasSlice,
selectParamsSlice,
selectUpscaleSlice,
selectConfigSlice,
selectActiveTab,
],
(system, nodes, workflowSettings, dynamicPrompts, canvasV2, params, upscale, config, activeTabName) => {
const { bbox } = canvasV2;
(system, nodes, workflowSettings, dynamicPrompts, canvas, params, upscale, config, activeTabName) => {
const { bbox } = canvas;
const { model, positivePrompt } = params;
const reasons: { prefix?: string; content: string }[] = [];
@@ -116,6 +119,10 @@ const createSelector = (templates: Templates, isConnected: boolean) =>
reasons.push({ content: i18n.t('upscaling.missingTileControlNetModel') });
}
} else {
if (canvasIsBusy) {
reasons.push({ content: i18n.t('parameters.invoke.canvasBusy') });
}
if (dynamicPrompts.prompts.length === 0 && getShouldProcessPrompt(positivePrompt)) {
reasons.push({ content: i18n.t('parameters.invoke.noPrompts') });
}
@@ -124,10 +131,10 @@ const createSelector = (templates: Templates, isConnected: boolean) =>
reasons.push({ content: i18n.t('parameters.invoke.noModelSelected') });
}
canvasV2.controlLayers.entities
canvas.controlLayers.entities
.filter((controlLayer) => controlLayer.isEnabled)
.forEach((controlLayer, i) => {
const layerLiteral = i18n.t('controlLayers.layers_one');
const layerLiteral = i18n.t('controlLayers.layer_one');
const layerNumber = i + 1;
const layerType = i18n.t(LAYER_TYPE_TO_TKEY['control_layer']);
const prefix = `${layerLiteral} #${layerNumber} (${layerType})`;
@@ -154,10 +161,10 @@ const createSelector = (templates: Templates, isConnected: boolean) =>
}
});
canvasV2.ipAdapters.entities
canvas.ipAdapters.entities
.filter((entity) => entity.isEnabled)
.forEach((entity, i) => {
const layerLiteral = i18n.t('controlLayers.layers_one');
const layerLiteral = i18n.t('controlLayers.layer_one');
const layerNumber = i + 1;
const layerType = i18n.t(LAYER_TYPE_TO_TKEY[entity.type]);
const prefix = `${layerLiteral} #${layerNumber} (${layerType})`;
@@ -182,10 +189,10 @@ const createSelector = (templates: Templates, isConnected: boolean) =>
}
});
canvasV2.regions.entities
canvas.regions.entities
.filter((entity) => entity.isEnabled)
.forEach((entity, i) => {
const layerLiteral = i18n.t('controlLayers.layers_one');
const layerLiteral = i18n.t('controlLayers.layer_one');
const layerNumber = i + 1;
const layerType = i18n.t(LAYER_TYPE_TO_TKEY[entity.type]);
const prefix = `${layerLiteral} #${layerNumber} (${layerType})`;
@@ -219,10 +226,10 @@ const createSelector = (templates: Templates, isConnected: boolean) =>
}
});
canvasV2.rasterLayers.entities
canvas.rasterLayers.entities
.filter((entity) => entity.isEnabled)
.forEach((entity, i) => {
const layerLiteral = i18n.t('controlLayers.layers_one');
const layerLiteral = i18n.t('controlLayers.layer_one');
const layerNumber = i + 1;
const layerType = i18n.t(LAYER_TYPE_TO_TKEY[entity.type]);
const prefix = `${layerLiteral} #${layerNumber} (${layerType})`;
@@ -239,10 +246,17 @@ const createSelector = (templates: Templates, isConnected: boolean) =>
}
);
const dummyAtom = atom(true);
export const useIsReadyToEnqueue = () => {
const templates = useStore($templates);
const isConnected = useStore($isConnected);
const selector = useMemo(() => createSelector(templates, isConnected), [templates, isConnected]);
const canvasManager = useCanvasManagerSafe();
const canvasIsBusy = useStore(canvasManager?.$isBusy ?? dummyAtom);
const selector = useMemo(
() => createSelector(templates, isConnected, canvasIsBusy),
[templates, isConnected, canvasIsBusy]
);
const value = useAppSelector(selector);
return value;
};
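The dummyAtom above exists because hooks may not be called conditionally: useStore must run on every render, so when no canvas manager is mounted the hook subscribes to a constant atom instead. A minimal sketch of the pattern; the hook name and manager type here are assumptions, only $isBusy and the fallback atom come from the diff:

import { useStore } from '@nanostores/react';
import type { WritableAtom } from 'nanostores';
import { atom } from 'nanostores';

// Defaulting to true treats "no canvas yet" as busy, matching the diff.
const $alwaysBusy = atom(true);

export const useCanvasIsBusy = (manager: { $isBusy: WritableAtom<boolean> } | null): boolean =>
// Falling back to a shared constant atom keeps the hook order stable
// whether or not the manager exists.
useStore(manager?.$isBusy ?? $alwaysBusy);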

View File

@@ -1,6 +1,9 @@
export const roundDownToMultiple = (num: number, multiple: number): number => {
return Math.floor(num / multiple) * multiple;
};
export const roundUpToMultiple = (num: number, multiple: number): number => {
return Math.ceil(num / multiple) * multiple;
};
export const roundToMultiple = (num: number, multiple: number): number => {
return Math.round(num / multiple) * multiple;
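A quick check of the three helpers, using a 64-pixel grid as an example:

roundDownToMultiple(100, 64); // 64
roundUpToMultiple(100, 64);   // 128
roundToMultiple(100, 64);     // 128, since 100 / 64 is about 1.56 and rounds up to 2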

View File

@@ -1,7 +1,3 @@
export const stopPropagation = (e: React.MouseEvent) => {
e.stopPropagation();
};
export const preventDefault = (e: React.MouseEvent) => {
e.preventDefault();
};

View File

@@ -1,5 +1,6 @@
import type { ComboboxOnChange, ComboboxOption } from '@invoke-ai/ui-library';
import { Combobox, ConfirmationAlertDialog, Flex, FormControl, Text } from '@invoke-ai/ui-library';
import { createSelector } from '@reduxjs/toolkit';
import { createMemoizedSelector } from 'app/store/createMemoizedSelector';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import {
@@ -18,12 +19,17 @@ const selectImagesToChange = createMemoizedSelector(
(changeBoardModal) => changeBoardModal.imagesToChange
);
const selectIsModalOpen = createSelector(
selectChangeBoardModalSlice,
(changeBoardModal) => changeBoardModal.isModalOpen
);
const ChangeBoardModal = () => {
const dispatch = useAppDispatch();
const [selectedBoard, setSelectedBoard] = useState<string | null>();
const queryArgs = useAppSelector(selectListBoardsQueryArgs);
const { data: boards, isFetching } = useListAllBoardsQuery(queryArgs);
const isModalOpen = useAppSelector((s) => s.changeBoardModal.isModalOpen);
const isModalOpen = useAppSelector(selectIsModalOpen);
const imagesToChange = useAppSelector(selectImagesToChange);
const [addImagesToBoard] = useAddImagesToBoardMutation();
const [removeImagesFromBoard] = useRemoveImagesFromBoardMutation();
@@ -77,6 +83,7 @@ const ChangeBoardModal = () => {
acceptCallback={handleChangeBoard}
acceptButtonText={t('boards.move')}
cancelButtonText={t('boards.cancel')}
useInert={false}
>
<Flex flexDir="column" gap={4}>
<Text>

View File

@@ -6,7 +6,7 @@ import {
ipaAdded,
rasterLayerAdded,
rgAdded,
} from 'features/controlLayers/store/canvasV2Slice';
} from 'features/controlLayers/store/canvasSlice';
import { memo, useCallback } from 'react';
import { useTranslation } from 'react-i18next';
import { PiPlusBold } from 'react-icons/pi';
@@ -31,22 +31,22 @@ export const CanvasAddEntityButtons = memo(() => {
}, [dispatch]);
return (
<Flex flexDir="column" w="full" h="full" alignItems="center" justifyContent="center">
<ButtonGroup orientation="vertical" isAttached={false}>
<Flex flexDir="column" w="full" h="full" alignItems="center">
<ButtonGroup position="relative" orientation="vertical" isAttached={false} top="20%">
<Button variant="ghost" justifyContent="flex-start" leftIcon={<PiPlusBold />} onClick={addInpaintMask}>
{t('controlLayers.inpaintMask', { count: 1 })}
{t('controlLayers.inpaintMask')}
</Button>
<Button variant="ghost" justifyContent="flex-start" leftIcon={<PiPlusBold />} onClick={addRegionalGuidance}>
{t('controlLayers.regionalGuidance', { count: 1 })}
{t('controlLayers.regionalGuidance')}
</Button>
<Button variant="ghost" justifyContent="flex-start" leftIcon={<PiPlusBold />} onClick={addRasterLayer}>
{t('controlLayers.rasterLayer', { count: 1 })}
{t('controlLayers.rasterLayer')}
</Button>
<Button variant="ghost" justifyContent="flex-start" leftIcon={<PiPlusBold />} onClick={addControlLayer}>
{t('controlLayers.controlLayer', { count: 1 })}
{t('controlLayers.controlLayer')}
</Button>
<Button variant="ghost" justifyContent="flex-start" leftIcon={<PiPlusBold />} onClick={addIPAdapter}>
{t('controlLayers.ipAdapter', { count: 1 })}
{t('controlLayers.globalIPAdapter')}
</Button>
</ButtonGroup>
</Flex>
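These buttons previously passed { count: 1 } to force the singular form; related hunks above also correct layers_one to layer_one, since i18next resolves plurals by suffixing the base key. A minimal sketch of that resolution, with placeholder strings rather than the app's actual translations:

import i18n from 'i18next';

i18n.init(
{
lng: 'en',
resources: {
en: {
translation: {
layer_one: 'Layer',
layer_other: 'Layers',
},
},
},
},
() => {
i18n.t('layer', { count: 1 }); // "Layer"
i18n.t('layer', { count: 3 }); // "Layers"
i18n.t('layer_one'); // "Layer" (the suffixed key can also be addressed directly)
}
);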

View File

@@ -1,6 +1,6 @@
import { Flex } from '@invoke-ai/ui-library';
import type { Meta, StoryObj } from '@storybook/react';
import { CanvasEditor } from 'features/controlLayers/components/ControlLayersEditor';
import { CanvasEditor } from 'features/controlLayers/components/CanvasEditor';
const meta: Meta<typeof CanvasEditor> = {
title: 'Feature/ControlLayers',

View File

@@ -2,12 +2,12 @@
import { Flex } from '@invoke-ai/ui-library';
import { useScopeOnFocus } from 'common/hooks/interactionScopes';
import { CanvasDropArea } from 'features/controlLayers/components/CanvasDropArea';
import { ControlLayersToolbar } from 'features/controlLayers/components/ControlLayersToolbar';
import { Filter } from 'features/controlLayers/components/Filters/Filter';
import { StageComponent } from 'features/controlLayers/components/StageComponent';
import { StagingAreaIsStagingGate } from 'features/controlLayers/components/StagingArea/StagingAreaIsStagingGate';
import { StagingAreaToolbar } from 'features/controlLayers/components/StagingArea/StagingAreaToolbar';
import { Transform } from 'features/controlLayers/components/Transform';
import { CanvasToolbar } from 'features/controlLayers/components/Toolbar/CanvasToolbar';
import { Transform } from 'features/controlLayers/components/Transform/Transform';
import { CanvasManagerProviderGate } from 'features/controlLayers/contexts/CanvasManagerProviderGate';
import { memo, useRef } from 'react';
@@ -19,8 +19,6 @@ export const CanvasEditor = memo(() => {
<Flex
tabIndex={-1}
ref={ref}
layerStyle="first"
p={2}
borderRadius="base"
position="relative"
flexDirection="column"
@@ -30,16 +28,16 @@ export const CanvasEditor = memo(() => {
alignItems="center"
justifyContent="center"
>
<ControlLayersToolbar />
<CanvasToolbar />
<StageComponent />
<Flex position="absolute" bottom={16} gap={2} align="center" justify="center">
<Flex position="absolute" bottom={4} gap={2} align="center" justify="center">
<CanvasManagerProviderGate>
<StagingAreaIsStagingGate>
<StagingAreaToolbar />
</StagingAreaIsStagingGate>
</CanvasManagerProviderGate>
</Flex>
<Flex position="absolute" bottom={16}>
<Flex position="absolute" bottom={4}>
<CanvasManagerProviderGate>
<Filter />
<Transform />

View File

@@ -1,6 +1,5 @@
import { Flex } from '@invoke-ai/ui-library';
import ScrollableContent from 'common/components/OverlayScrollbars/ScrollableContent';
import { CanvasEntityOpacity } from 'features/controlLayers/components/common/CanvasEntityOpacity';
import { ControlLayerEntityList } from 'features/controlLayers/components/ControlLayer/ControlLayerEntityList';
import { InpaintMaskList } from 'features/controlLayers/components/InpaintMask/InpaintMaskList';
import { IPAdapterList } from 'features/controlLayers/components/IPAdapter/IPAdapterList';
@@ -11,8 +10,7 @@ import { memo } from 'react';
export const CanvasEntityList = memo(() => {
return (
<ScrollableContent>
<Flex flexDir="column" gap={4} pt={2} data-testid="control-layers-layer-list" w="full" h="full">
<CanvasEntityOpacity />
<Flex flexDir="column" gap={2} data-testid="control-layers-layer-list" w="full" h="full">
<InpaintMaskList />
<RegionalGuidanceEntityList />
<IPAdapterList />

View File

@@ -1,27 +0,0 @@
import { IconButton, Menu, MenuButton, MenuList } from '@invoke-ai/ui-library';
import { CanvasEntityListMenuItems } from 'features/controlLayers/components/CanvasEntityList/CanvasEntityListMenuItems';
import { memo } from 'react';
import { useTranslation } from 'react-i18next';
import { PiDotsThreeOutlineFill } from 'react-icons/pi';
export const CanvasEntityListMenuButton = memo(() => {
const { t } = useTranslation();
return (
<Menu>
<MenuButton
as={IconButton}
aria-label={t('accessibility.menu')}
icon={<PiDotsThreeOutlineFill />}
variant="link"
data-testid="control-layers-add-layer-menu-button"
alignSelf="stretch"
/>
<MenuList>
<CanvasEntityListMenuItems />
</MenuList>
</Menu>
);
});
CanvasEntityListMenuButton.displayName = 'CanvasEntityListMenuButton';

Some files were not shown because too many files have changed in this diff.