Compare commits

390 Commits

Author SHA1 Message Date
Brandon Rising
52568c7329 Just comment out cache emptying 2024-02-07 14:47:31 -05:00
Brandon Rising
db5f1c8623 Revert "L2I Performance updates"
This reverts commit 8f0352f3ad.
2024-02-07 14:46:53 -05:00
Brandon Rising
8f0352f3ad L2I Performance updates 2024-02-07 13:01:41 -05:00
Hosted Weblate
14472dc09d translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2024-02-05 11:16:38 +11:00
psychedelicious
e8095b73ae feat(ui): improve types for language picker
Makes it impossible to miss a language or typo.
2024-02-05 10:47:36 +11:00
psychedelicious
c979cf5ecc tidy(ui): remove language translation strings
There's no need to do things like translate Arabic into Finnish. We never use those strings. Remove these translations entirely.
2024-02-05 10:47:36 +11:00
psychedelicious
1b4dbd283e fix(ui): hardcode language picker languages
Hardcode the options in the dropdown, don't rely on translators to fill this in.

Also, add a number of missing languages (Azerbaijani, Finnish, Hungarian, Swedish, Turkish).
2024-02-05 10:47:36 +11:00
psychedelicious
fb50a221f8 fix(ui): fix color input field alpha
Closes #5647

The alpha values in the UI are `0-1` but the backend wants `0-255`.

Previously, this was handled in `parseFieldValue` when building the graph. In a recent release, field types were refactored, which broke the alpha handling.

The logic for handling alpha values is moved into `ColorFieldInputComponent`, and `parseFieldValue` now does no value transformations.

Though it would be a minor change, I'm leaving this function in because I don't want to change the rest of the logic except when necessary.
2024-02-05 09:28:20 +11:00
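
For illustration, the conversion is a simple scale-and-round; a Python sketch of the `0-1` to `0-255` mapping (the actual fix lives in the TypeScript `ColorFieldInputComponent`):

```python
def ui_to_backend_alpha(alpha: float) -> int:
    """Map a UI alpha in [0, 1] to the backend's [0, 255] range."""
    return round(max(0.0, min(1.0, alpha)) * 255)

def backend_to_ui_alpha(alpha: int) -> float:
    """Map a backend alpha in [0, 255] back to the UI's [0, 1] range."""
    return alpha / 255
```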
skunkworxdark
52e07db06b Update communityNodes.md
added Autostereogram nodes
2024-02-05 09:26:41 +11:00
psychedelicious
6643b5cec4 feat(ui): log trace when skipping reserved input field type 2024-02-05 09:24:46 +11:00
psychedelicious
e8bf9ea058 fix(ui): do not swallow errors during schema parsing
Unknown errors were swallowed during schema parsing. Now they log a warning.
2024-02-05 09:24:46 +11:00
psychedelicious
ce3d37e829 fix(ui): handle fields with single option literal
Closes #5616

Turns out the OpenAPI schema definition for a pydantic field with a `Literal` type annotation is different depending on the number of options.

When there is a single value (e.g. `Literal["foo"]`), this results in a `const` schema object. The schema parser didn't know how to handle this, and displayed a warning in the JS console.

This situation is now handled. When a `const` schema object is encountered, we interpret it as an `EnumField` with a single option.

I think this makes sense - if you had a truly constant value, you wouldn't make it a field, so a `const` must mean a dynamically generated enum that ended up with only a single option.
2024-02-05 09:15:09 +11:00
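
For context, a minimal pydantic v2 example of the two schema shapes (illustrative, not InvokeAI code):

```python
from typing import Literal

from pydantic import BaseModel

class Single(BaseModel):
    mode: Literal["foo"]

class Multi(BaseModel):
    mode: Literal["foo", "bar"]

print(Single.model_json_schema()["properties"]["mode"])
# {'const': 'foo', ...} -- a `const` schema object
print(Multi.model_json_schema()["properties"]["mode"])
# {'enum': ['foo', 'bar'], ...} -- an `enum` schema object
```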
Ufuk Sarp Selçok
8a61063e84 translationBot(ui): update translation (Turkish)
Currently translated at 57.5% (825 of 1433 strings)

Co-authored-by: Ufuk Sarp Selçok <ilkel@live.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/tr/
Translation: InvokeAI/Web UI
2024-02-05 08:31:44 +11:00
Hosted Weblate
87ff96553a translationBot(ui): update translation files
Updated by "Remove blank strings" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2024-02-05 08:31:44 +11:00
Ufuk Sarp Selçok
209bf105bc translationBot(ui): update translation (Turkish)
Currently translated at 57.3% (822 of 1433 strings)

Co-authored-by: Ufuk Sarp Selçok <ilkel@live.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/tr/
Translation: InvokeAI/Web UI
2024-02-05 08:31:44 +11:00
Hosted Weblate
804dbeba34 translationBot(ui): update translation files
Updated by "Remove blank strings" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2024-02-05 08:31:44 +11:00
Ufuk Sarp Selçok
067cd4dc2e translationBot(ui): update translation (Turkish)
Currently translated at 40.6% (582 of 1433 strings)

translationBot(ui): update translation (Turkish)

Currently translated at 38.8% (557 of 1433 strings)

Co-authored-by: Ufuk Sarp Selçok <ilkel@live.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/tr/
Translation: InvokeAI/Web UI
2024-02-05 08:31:44 +11:00
Mehrab Poladov
feb4a3f242 translationBot(ui): update translation (Azerbaijani)
Currently translated at 0.1% (1 of 1433 strings)

translationBot(ui): added translation (Azerbaijani)

Co-authored-by: Mehrab Poladov <thepoladov@protonmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/az/
Translation: InvokeAI/Web UI
2024-02-05 08:31:44 +11:00
Wubbbi
4a886c0a4a Minor dep updates 2024-02-04 13:04:36 -05:00
Lincoln Stein
8e500283b6 Fix broken import in checkpoint_convert (#5635)
* Fix broken import in checkpoint_convert

* simplify the fix

---------

Co-authored-by: Lincoln Stein <lstein@gmail.com>
2024-02-04 12:56:51 +00:00
psychedelicious
3205371654 feat(ui): better error handling for persist serialize function 2024-02-03 07:39:19 -05:00
psychedelicious
d713620d9e refactor(ui): refactor reducer list
Instead of manually naming reducers, use each slice's `name` property. Makes typos impossible.
2024-02-03 07:39:19 -05:00
psychedelicious
c1300fa8b1 refactor(ui): refactor persist config
Add more structure around persist configs to avoid bugs from typos and misplaced persist denylists.
2024-02-03 07:39:19 -05:00
psychedelicious
0976ddba23 chore(invocation-stats): improve types in _prune_stale_stats 2024-02-03 07:34:06 -05:00
psychedelicious
3ebb806410 fix(invocation-stats): use appropriate method to get the type of an invocation 2024-02-03 07:34:06 -05:00
psychedelicious
9f274c79dc chore(item-storage): improve types
Provide type args to the generics.
2024-02-03 07:34:06 -05:00
psychedelicious
88c08bbfc7 fix(item-storage-memory): throw when requested item does not exist
- `ItemStorageMemory.get` now throws an `ItemNotFoundError` when the requested `item_id` is not found.
- Update docstrings in ABC and tests.

The new memory item storage implementation implemented the `get` method incorrectly, returning `None` if the item didn't exist.

The ABC typed `get` as returning `T`, while the SQLite implementation typed `get` as returning `Optional[T]`. The SQLite implementation was referenced when writing the memory implementation.

This mismatched typing violates the Liskov substitution principle, because the signature of `get` in the memory implementation is wider than the abstract class's definition. Using `pyright` in strict mode catches this.

In `invocation_stats_default`, this introduced an error. The `_prune_stats` method calls `get`, expecting the method to throw if the item is not found. If the graph is no longer stored in the bounded item storage, we call `is_complete()` on `None`, causing the error.

Note: This error condition never arose in the SQLite implementation, because it parsed the item with pydantic before returning it, which would throw if the item was not found. It implicitly threw, while the memory implementation did not.
2024-02-03 07:34:06 -05:00
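
For illustration, a minimal sketch of the typing violation described above (simplified; not the actual InvokeAI classes):

```python
from abc import ABC, abstractmethod
from typing import Generic, Optional, TypeVar

T = TypeVar("T")

class ItemStorageABC(ABC, Generic[T]):
    @abstractmethod
    def get(self, item_id: str) -> T:
        """Gets the item, raising if it is not found."""
        ...

class MemoryStorage(ItemStorageABC[T]):
    def __init__(self) -> None:
        self._items: dict[str, T] = {}

    # Widening the return type to Optional[T] is an invalid override of
    # the ABC's `-> T`; pyright in strict mode flags this LSP violation.
    def get(self, item_id: str) -> Optional[T]:
        return self._items.get(item_id)
```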
psychedelicious
c2af124622 fix(ui): refetch intermediates count when settings modal open
The `getIntermediatesCount` query is set to `refetchOnMountOrArgChange`. The intention was that when the settings modal opens (i.e. mounts), the `getIntermediatesCount` query would be refetched. But it doesn't work - modals only mount once; there is no lazy rendering for them.

So we have to imperatively refetch, by refetching as we open the modal.

Closes #5639
2024-02-03 12:14:37 +11:00
Peanut
f972fe9836 perf: annotate 2024-02-03 10:18:26 +11:00
Peanut
dcfc883ab3 perf: remove TypeAdapter 2024-02-03 10:18:26 +11:00
Peanut
1d2bd6b8f7 perf: TypeAdapter instantiated once 2024-02-03 10:18:26 +11:00
Lincoln Stein
f2777f5096 Port the command-line tools to use model_manager2 (#5546)
* Port the command-line tools to use model_manager2

1.Reimplement the following:

  - invokeai-model-install
  - invokeai-merge
  - invokeai-ti

  To avoid breaking the original model manager, the updated tools
  have been renamed invokeai-model-install2 and invokeai-merge2. The
  textual inversion training script should continue to work with
  existing installations. The "starter" models now live in
  `invokeai/configs/INITIAL_MODELS2.yaml`.

  When the full model manager 2 is in place and working, I'll rename
  these files and commands.

2. Add the `merge` route to the web API. This will merge two or three models,
   resulting in a new one.

   - Note that because the model installer selectively installs the `fp16` variant
     of models (rather than both 16- and 32-bit versions as previously),
     the diffusers merge script will choke on any huggingface diffusers models
     that were downloaded with the new installer. Previously-downloaded models
     should continue to merge correctly. I have a PR
     upstream https://github.com/huggingface/diffusers/pull/6670 to fix
     this.

3. (more important!)
  During implementation of the CLI tools, found and fixed a number of small
  runtime bugs in the model_manager2 implementation:

  - During model database migration, if a registered models file was
    not found on disk, the migration would be aborted. Now the
    offending model is skipped with a log warning.

  - Caught and fixed a condition in which the installer would download the
    entire diffusers repo when the user provided a single `.safetensors`
    file URL.

  - Caught and fixed a condition in which the installer would raise an
    exception and stop the app when a request for an unknown model's metadata
    was passed to Civitai. Now an error is logged and the installer continues.

  - Replaced the LoWRA starter LoRA with FlatColor. The former has been removed
    from Civitai.

* fix ruff issue

---------

Co-authored-by: Lincoln Stein <lstein@gmail.com>
2024-02-02 17:18:47 +00:00
Brandon
d3320dc4ee convert checkpoints to safetensors (#5620)
## What type of PR is this? (check all applicable)

- [x] Bug Fix

## Have you discussed this change with the InvokeAI team?
- [x] Yes

## Description

It seems we elected to convert checkpoints into `.bin` files when we set this up. Converting them to `.safetensors` doesn't seem to corrupt them anymore.

2024-02-02 10:27:24 -05:00
Brandon
72db2ee352 Merge branch 'main' into sdxl-convert-safetensors 2024-02-02 10:10:49 -05:00
blessedcoolant
60c3a4ad5e chore: add Hand Refiner to communityNodes.md 2024-02-02 08:12:32 -05:00
Kent Keirsey
cf7a7928af Update mkdocs.yml 2024-02-01 20:43:49 -05:00
Wubbbi
1057314508 Fix ruff? 2024-02-01 20:40:28 -05:00
Wubbbi
73a077956b Why did my IDE change the comment? 2024-02-01 20:40:28 -05:00
Wubbbi
5e1e50bd47 Fix hopefully last import 2024-02-01 20:40:28 -05:00
Wubbbi
413fe566b8 Fix imports 2024-02-01 20:40:28 -05:00
Wubbbi
c9b5f06c42 Update diffusers + hotfix 2024-02-01 20:40:28 -05:00
Alexander Eichhorn
b53e432b0f translationBot(ui): update translation (German)
Currently translated at 60.8% (871 of 1432 strings)

Co-authored-by: Alexander Eichhorn <pfannkuchensack@einfach-doof.de>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
2024-02-02 11:16:45 +11:00
psychedelicious
88164447e9 fix(ui): hide HRF if SDXL model selected 2024-02-02 11:10:54 +11:00
psychedelicious
1ac85fd049 tidy(migrator): remove logic to check if graph_executions exists in migration 5
Initially I wanted to show how many sessions were being deleted. In hindsight, this is not great:
- It requires extra logic in the migrator, which should be as simple as possible.
- It may be alarming to see "Clearing 224591 old sessions".

The app still reports on freed space during the DB startup logic.
2024-02-02 09:20:41 +11:00
psychedelicious
ee6fc4ab1d chore(item_storage): excise SqliteItemStorage 2024-02-02 09:20:41 +11:00
psychedelicious
9f793bdae8 feat(item_storage): implement item_storage_memory with LRU eviction strategy
Implemented with OrderedDict.
2024-02-02 09:20:41 +11:00
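
A minimal sketch of an OrderedDict-backed LRU store along these lines (names are illustrative, not the actual `ItemStorageMemory` API):

```python
from collections import OrderedDict
from typing import Generic, TypeVar

T = TypeVar("T")

class LRUStore(Generic[T]):
    def __init__(self, max_size: int = 10) -> None:
        self._max_size = max_size
        self._items: OrderedDict[str, T] = OrderedDict()

    def set(self, item_id: str, item: T) -> None:
        self._items[item_id] = item
        self._items.move_to_end(item_id)  # mark as most recently used
        if len(self._items) > self._max_size:
            self._items.popitem(last=False)  # evict least recently used

    def get(self, item_id: str) -> T:
        item = self._items[item_id]  # raises KeyError if absent
        self._items.move_to_end(item_id)  # a get also refreshes recency
        return item
```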
psychedelicious
a0eecaecd0 feat(item_storage): implement item_storage_memory max_size
Implemented with unordered dict and set.
2024-02-02 09:20:41 +11:00
psychedelicious
d532073f5b fix(db): check for graph_executions table before dropping
This is needed to not fail tests; see comment in code.
2024-02-02 09:20:41 +11:00
psychedelicious
198e8c9d55 feat(db): add migration 5 to drop graph_executions table 2024-02-02 09:20:41 +11:00
psychedelicious
30367deeca feat(nodes): use memory item storage 2024-02-02 09:20:41 +11:00
psychedelicious
e73298aea2 tidy(item_storage): remove extraneous class attribute declarations 2024-02-02 09:20:41 +11:00
psychedelicious
59279851a3 tidy(item_storage): remove unused list and search methods 2024-02-02 09:20:41 +11:00
psychedelicious
2965357d99 feat(nodes): add ItemStorageMemory
The sqlite item storage class can be swapped for this to eliminate costly network calls.
2024-02-02 09:20:41 +11:00
psychedelicious
8bd32ee142 feat(nodes): add delete method to ItemStorageABC 2024-02-02 09:20:41 +11:00
psychedelicious
a4f892dcfb tidy(nodes): remove unused get_raw method on ItemStorageABC 2024-02-02 09:20:41 +11:00
psychedelicious
e675983e20 fix(ui): download image opens in new tab (#5625)
* fix(ui): download image opens in new tab

In some environments, a simple `a` element cannot trigger a download of an image. Fetching the image directly can get around this and provide more reliable download functionality.

* use hook for imageUrlToBlob so token gets sent if needed

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2024-02-01 20:25:01 +00:00
psychedelicious
e9558f97c4 perf(config): change default png_compress_level to 1
This substantially reduces the time spent encoding PNGs. In workflows with many image outputs, this is a drastic improvement.

For a tiled upscaling workflow going from 512x512 at a scale factor of 4, this can provide over a 15% speed increase.
2024-02-02 00:32:00 +11:00
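
For reference, the setting corresponds to Pillow's `compress_level` argument when saving PNGs (zlib levels 0-9; Pillow's default is 6):

```python
from PIL import Image

img = Image.new("RGB", (512, 512))
# Level 1 trades a slightly larger file for much faster encoding.
img.save("out.png", compress_level=1)
```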
psychedelicious
a1a611f8cb chore(ui): lint 2024-02-02 00:20:28 +11:00
psychedelicious
182dc859a0 chore(ui): update eslint rules
- Add `i18next/no-literal-string` (was removed from upstream config)
- Restore `path/no-relative-imports`, this was lost in the shuffle a while ago
2024-02-02 00:20:28 +11:00
psychedelicious
c0240a8568 chore(ui): bump @invoke-ai/eslint-config-react 2024-02-02 00:20:28 +11:00
Eugene Brodsky
02bcff29e8 feat: update ROCm to 5.6 everywhere 2024-02-01 00:07:16 -05:00
Eugene Brodsky
d4ed64df7d feat: add force-reinstall option to the updater 2024-02-01 00:07:16 -05:00
Eugene Brodsky
701f14c1e3 fix: add PyTorch extra-index-url to the updater command 2024-02-01 00:07:16 -05:00
Eugene Brodsky
45bf2c7da6 chore(updater): address deprecation of pkg_resources
as per module docstring:
This module is deprecated. Users are directed to importlib.resources,
importlib.metadata and packaging instead.
2024-02-01 00:07:16 -05:00
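
A minimal sketch of the substitution this implies (package name is illustrative):

```python
# Deprecated:
#   import pkg_resources
#   version = pkg_resources.get_distribution("InvokeAI").version

# Replacement:
from importlib.metadata import version

print(version("InvokeAI"))  # e.g. "3.6.2"
```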
psychedelicious
67ada70a26 docs: update link to frontend README 2024-01-31 22:34:59 -05:00
Brandon
06bcc07f65 Merge branch 'main' into sdxl-convert-safetensors 2024-01-31 17:00:19 -05:00
psychedelicious
4410ecf62c fix(stats): log errors at error level
They were erroneously at warning before.
2024-02-01 08:50:56 +11:00
psychedelicious
9f6b9d4d23 fix(stats): preserve stack when raising GESStatsNotFoundError 2024-02-01 08:50:56 +11:00
psychedelicious
b24e8dd829 feat(stats): refactor InvocationStatsService to output stats as dataclasses
This allows the stats to be written to disk as JSON and analyzed.

- Add dataclasses to hold stats.
- Move stats pretty-print logic to `__str__` of the new `InvocationStatsSummary` class.
- Add `get_stats` and `dump_stats` methods to `InvocationStatsServiceBase`.
- `InvocationStatsService` now throws if stats are requested for a session it doesn't know about. This avoids needing to do a lot of messy null checks.
- Update `DefaultInvocationProcessor` to use the new stats methods and suppress the new errors.
2024-02-01 08:50:56 +11:00
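
A rough sketch of the dataclass-based shape described above (field names are illustrative; only `InvocationStatsSummary` is named in the commit):

```python
import json
from dataclasses import asdict, dataclass

@dataclass
class NodeStatsSummary:
    invocation_type: str
    calls: int
    time_used_seconds: float

@dataclass
class InvocationStatsSummary:
    graph_execution_state_id: str
    nodes: list[NodeStatsSummary]

    def __str__(self) -> str:
        # Pretty-print logic lives on the summary, not in the service.
        lines = [f"Graph stats: {self.graph_execution_state_id}"]
        lines += [
            f"  {n.invocation_type}: {n.calls} calls, {n.time_used_seconds:.3f}s"
            for n in self.nodes
        ]
        return "\n".join(lines)

    def as_json(self) -> str:
        # Dataclasses serialize cleanly, so stats can be written to disk.
        return json.dumps(asdict(self))
```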
Mary Hipp
25291a2e01 select first image if no selectedImageName 2024-01-31 11:52:47 -05:00
Brandon
332f3930a5 Allow civit ai API Key on Imports (#5608)
## What type of PR is this? (check all applicable)

- [x] Bug Fix

## Have you discussed this change with the InvokeAI team?
- [x] Yes

## Description
Small PR to allow users to pass in a civit api key via config options

2024-01-31 10:51:33 -05:00
Brandon
ed466a99ec Merge branch 'main' into fix-civit-model-imports 2024-01-31 10:12:44 -05:00
Mary Hipp Rogers
f68f8898c0 Workflow navigation & save-as (#5607)
* redo top panel of workflow editor

* add checkbox option to save to project, integrate save-as flow into first time saving workflow

* remove log

* remove workflowLibrary as a feature that can be disabled

* lint

* feat(ui): make SaveWorkflowAsDialog a singleton

Fixes an issue where the workflow name would erroneously be an empty string (when it should show the current workflow name).

Also makes it easier to interact with this component.

- Extract the dialog state to a hook
- Render the dialog once in `<NodeEditor />`
- Use the hook in the various buttons that should open the dialog
- Fix a few wonkily named components (pre-existing issue)

* fix(ui): when saving a never-before-saved workflow, do not append " (copy)" to the name

* fix(ui): do not obscure workflow library button with add node popover

This component is kinda janky :/ the popover content somehow renders invisibly over the button. I think it's related to the `<PopoverAnchor />`.

Need to redo this in the future, but for now, making the popover render lazily fixes this.

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2024-01-31 13:32:31 +00:00
Brandon Rising
a0996b1c0a Fix ruff styling 2024-01-31 07:16:14 -06:00
Brandon Rising
522ff4a042 civit -> civitai 2024-01-31 07:16:14 -06:00
Brandon Rising
a769f93be0 Remove unnecessary change 2024-01-31 07:16:14 -06:00
Brandon Rising
2c5ef92979 Move location of config property, comment for explanation of use 2024-01-31 07:16:14 -06:00
Brandon Rising
5d773dc94c Remove debug line 2024-01-31 07:16:14 -06:00
Brandon Rising
088e3420e6 Allow passing of civit api key via config 2024-01-31 07:16:14 -06:00
Brandon Rising
14efc95707 Allow passing of a civit api key 2024-01-31 07:16:14 -06:00
psychedelicious
f48a2c5fd2 fix(ui): workflow settings styling
Got borked in the redesign.
2024-01-31 07:16:01 -06:00
Hosted Weblate
74ae4d7774 translationBot(ui): update translation files
Updated by "Remove blank strings" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2024-01-31 23:05:11 +11:00
Ufuk Sarp Selçok
191203ea0c translationBot(ui): update translation (Turkish)
Currently translated at 36.1% (516 of 1427 strings)

Co-authored-by: Ufuk Sarp Selçok <ilkel@live.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/tr/
Translation: InvokeAI/Web UI
2024-01-31 23:05:11 +11:00
Riccardo Giovanetti
6aceae5c22 translationBot(ui): update translation (Italian)
Currently translated at 97.2% (1388 of 1427 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2024-01-31 23:05:11 +11:00
Thomas Mello
8c6b3efd39 fix(ui): remove hard reset of cursor on canvas during state reset
Remove resetting the cursor when resetting state, letting event handlers take care of presentation
2024-01-31 23:03:14 +11:00
psychedelicious
4602efd598 feat: add profiler util (#5601)
* feat(config): add profiling config settings

- `profile_graphs` enables graph profiling with cProfile
- `profiles_dir` sets the output for profiles

* feat(nodes): add Profiler util

Simple wrapper around cProfile.

* feat(nodes): use Profiler in invocation processor

* scripts: add generate_profile_graphs.sh script

Helper to generate graphs for profiles.

* pkg: add snakeviz and gprof2dot to dev deps

These are useful for profiling.

* tests: add tests for profiler util

* fix(profiler): handle previous profile not stopped cleanly

* feat(profiler): add profile_prefix config setting

The prefix is used when writing profile output files. Useful to organise profiles into sessions.

* tidy(profiler): add `_` to private API

* feat(profiler): simplify API

* feat(profiler): use child logger for profiler logs

* chore(profiler): update docstrings

* feat(profiler): stop() returns output path

* chore(profiler): fix docstring

* tests(profiler): update tests

* chore: ruff
2024-01-31 10:51:57 +00:00
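
A minimal sketch of such a cProfile wrapper (an illustration of the approach, not the actual `Profiler` util):

```python
import cProfile
from pathlib import Path
from typing import Optional

class Profiler:
    def __init__(self, output_dir: Path, prefix: str = "invokeai") -> None:
        self._output_dir = output_dir
        self._prefix = prefix  # cf. the `profile_prefix` config setting
        self._profiler: Optional[cProfile.Profile] = None
        self._profile_id: Optional[str] = None

    def start(self, profile_id: str) -> None:
        if self._profiler is not None:
            self.stop()  # handle a previous profile not stopped cleanly
        self._profile_id = profile_id
        self._profiler = cProfile.Profile()
        self._profiler.enable()

    def stop(self) -> Path:
        assert self._profiler is not None, "Profiler not started"
        self._profiler.disable()
        self._output_dir.mkdir(parents=True, exist_ok=True)
        path = self._output_dir / f"{self._prefix}_{self._profile_id}.prof"
        self._profiler.dump_stats(path)
        self._profiler = None
        return path  # stop() returns the output path
```

The resulting `.prof` files can then be inspected with tools like `snakeviz` or rendered to graphs with `gprof2dot`, per the dev dependencies added above.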
Josh Corbett
f70c0936ca feat: disable/enable LoRAs with a switch (#5591)
* feat: disable/enable LoRAs with a switch

* feat:  visually display previous weight when disabled

* style: 🚨 linting

* feat:  lora badge count reflects active loras

* style: 🚨 linting

* feat:  track disabled lora on state instead of weight

* style: 🚨 linting

* feat:  it all works now

tracking isEnabled on lora state, disabled slider when disabled, removed disabled loras from graph, updated badge counting and renamed lora add function

* style: 🚨 linting

* fix: 🐛 enabledLoRAs filter nullish coalescing

* refactor: 🎨 minor changes

renamed lora toggle action, removed errant comment, removed extraneous type annotation

* style: 🚨 linting
2024-01-31 05:50:03 +00:00
Rohinish
0d4de4cc63 changed hotkeys (#5542)
Adds ctrl/meta + scroll to change brush size on canvas.

* changed hotkeys

* new hotkey as an additional

* lint fixed"

* added ctrl scroll and removed hotkey

* using

* added fix

* feedback changes

* brush size change logic

* feat(ui): also check for meta key when modifying brush size

* feat(ui): add comment linking to where brush size algo was determined

---------

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2024-01-31 15:57:16 +11:00
Wubbbi
1e855f8290 Update safetensors and transformers to their latest versions (#5562)
* Update Safetensors to the latest version

* Update Transformers while at it

* Update transformers again
2024-01-31 04:54:56 +00:00
dependabot[bot]
bb2787584d chore(deps-dev): bump vite in /invokeai/frontend/web
Bumps [vite](https://github.com/vitejs/vite/tree/HEAD/packages/vite) from 5.0.11 to 5.0.12.
- [Release notes](https://github.com/vitejs/vite/releases)
- [Changelog](https://github.com/vitejs/vite/blob/v5.0.12/packages/vite/CHANGELOG.md)
- [Commits](https://github.com/vitejs/vite/commits/v5.0.12/packages/vite)

---
updated-dependencies:
- dependency-name: vite
  dependency-type: direct:development
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-01-31 15:47:13 +11:00
Brandon Rising
a04981b418 This seems to work now 2024-01-30 21:32:08 -05:00
Thomas Mello
d7f16b7c87 fix(ui): the bottom button on floating side panel clears all queue items 2024-01-31 01:04:24 +11:00
Thomas Mello
4477e04d59 fix(ui): filter out interactive targets when pressing space on canvas tab
Improve input filtering for better accessibility
2024-01-30 09:56:21 +11:00
Thomas Mello
30e11b4b42 feat(ui): save the current staging image with shift+s 2024-01-30 09:56:21 +11:00
Thomas Mello
b93695b78f feat(ui): discard all staging images in canvas on escape 2024-01-30 09:56:21 +11:00
Thomas Mello
b01311813b fix(ui): activate move tool on pressing space
canvas element is not guaranteed to be in focus (e.g. after accepting new staging image) so we check for the active tab name instead
2024-01-30 09:56:21 +11:00
Thomas Mello
5ae80fab87 fix(ui): accept staging image hotkey callback 2024-01-30 09:56:21 +11:00
Thomas Mello
c4291f2136 fix(ui): block gallery navigation when staging images on canvas 2024-01-30 09:56:21 +11:00
Mary Hipp Rogers
287d3c2b04 add UI library to rollup config (#5598)
* try rolling up ui library

* lint

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2024-01-29 13:13:09 -05:00
Ufuk Sarp Selçok
7fde19730e translationBot(ui): update translation (Turkish)
Currently translated at 22.8% (326 of 1426 strings)

Co-authored-by: Ufuk Sarp Selçok <ilkel@live.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/tr/
Translation: InvokeAI/Web UI
2024-01-29 14:15:29 +11:00
psychedelicious
13575642d8 chore: update issue template
- Improve spelling and grammar
- Add browser, GPU model, python deps fields
- Revise other fields
2024-01-29 14:11:00 +11:00
psychedelicious
3f5370b284 feat(ui): add a copy button to the about modal
This copies the dependencies as JSON.
2024-01-28 20:50:08 -06:00
psychedelicious
d048eb5b20 docs(ui): add STATE_MGMT.md
Supersedes the mini nanostores doc.
2024-01-29 07:28:20 +11:00
psychedelicious
dd7031a472 docs(ui): update README.md
Also moved it to the frontend package root
2024-01-29 07:28:20 +11:00
Millun Atluri
4160d5ef26 update contributors list to bring into sync with discord roles (#5586)
## What type of PR is this? (check all applicable)

- [X] Documentation Update

## Have you discussed this change with the InvokeAI team?
- [X] Yes

## Have you updated all relevant documentation?
- [X] Yes

## Description

This brings `docs/other/CONTRIBUTORS.md` into sync with collaborator
roles in Discord as of January 27, 2024.

## QA Instructions, Screenshots, Recordings

N/A

## Merge Plan

Merge when approved.
2024-01-28 11:28:22 -05:00
Millun Atluri
51bdf2fd19 Merge branch 'main' into docs/update-contributors 2024-01-28 11:26:35 -05:00
Ufuk Sarp Selçok
6a44697911 translationBot(ui): update translation (Turkish)
Currently translated at 10.5% (151 of 1426 strings)

translationBot(ui): update translation (Turkish)

Currently translated at 8.1% (116 of 1426 strings)

translationBot(ui): update translation (Turkish)

Currently translated at 6.6% (95 of 1426 strings)

Co-authored-by: Ufuk Sarp Selçok <ilkel@live.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/tr/
Translation: InvokeAI/Web UI
2024-01-28 22:27:25 +11:00
Hosted Weblate
7a1d0ec228 translationBot(ui): update translation files
Updated by "Remove blank strings" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2024-01-28 22:27:25 +11:00
Riccardo Giovanetti
b5928fd411 translationBot(ui): update translation (Italian)
Currently translated at 97.2% (1387 of 1426 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2024-01-28 22:27:25 +11:00
psychedelicious
2f345d1976 chore(ui): lint 2024-01-28 19:57:53 +11:00
psychedelicious
f5d0721fa8 chore(ui): bump @invoke-ai/eslint-config-react 2024-01-28 19:57:53 +11:00
psychedelicious
c3b36cb61d chore(ui): remove chakra CLI
It doesn't work now that the theme is external. I'm not sure how to fix it and not sure if it really did much (I don't think I ever got autocomplete...). Maybe it can be implemented in `@invoke-ai/ui-library`.
2024-01-28 19:57:53 +11:00
psychedelicious
189c430e46 chore(ui): format
Lots of changes because the line length is now 120. May as well do it now.
2024-01-28 19:57:53 +11:00
psychedelicious
b922ee566a chore(ui): use new prettier config 2024-01-28 19:57:53 +11:00
psychedelicious
89da69f647 fix(ui): correct import in ReduxInit 2024-01-28 19:57:53 +11:00
psychedelicious
138caa34de chore(ui): lint 2024-01-28 19:57:53 +11:00
psychedelicious
26c3378ede chore(ui): use new eslint config, add some overrides 2024-01-28 19:57:53 +11:00
psychedelicious
aa134a2db8 chore(ui): remove postinstall script 2024-01-28 19:57:53 +11:00
Mary Hipp
d0391cb430 chore(ui): bump @invoke-ai/ui-library, add @invoke-ai/eslint-config-react & @invoke-ai/prettier-config-react 2024-01-28 19:57:53 +11:00
Millun Atluri
c955ea9de0 Update CONTRIBUTORS.md 2024-01-27 17:04:32 -05:00
Lincoln Stein
fc29a5d439 update contributors list to bring into sync with discord roles 2024-01-27 16:59:56 -05:00
Millun Atluri
7e9942dbab {fix} install docs housekeeping (#5583)
## What type of PR is this? (check all applicable)

- [X] Documentation Update

## Have you discussed this change with the InvokeAI team?
- [X] Yes

## Have you updated all relevant documentation?
- [X] Yes

## Description
- Update docs to make link to automated installer easier to find
- Fixed issue in SDXL + refiner example workflow

## QA Instructions, Screenshots, Recordings
Read over docs changes

## Merge Plan
Merge when approved

## [optional] Are there any post deployment tasks we need to perform?
Deploy new docs
2024-01-27 12:10:47 -05:00
Millun Atluri
c003967eaa Merge branch 'main' into feat/install_docs_update 2024-01-27 11:55:19 -05:00
Mary Hipp
b28fcc6be5 lint 2024-01-27 21:36:42 +11:00
Mary Hipp
418cdbabb7 add option for workflowCategories 2024-01-27 21:36:42 +11:00
Millun Atluri
18e61e92d9 {fix} install docs housekeeping 2024-01-26 21:19:48 -06:00
Mary Hipp
de20711637 add nanostore for OpenAPI schema 2024-01-27 12:43:47 +11:00
Mary Hipp
55e91b97be dep 2024-01-27 12:43:47 +11:00
Mary Hipp
f79bbd2d6e account for baseUrl 2024-01-27 12:43:47 +11:00
Brandon
e1c2c3905d Github action for ensuring PRs are labeled in a way that makes it easy to distinguish what's being changed (#5543)

## What type of PR is this? (check all applicable)

- [x] Optimization

## Have you discussed this change with the InvokeAI team?
- [x] Yes

## Have you updated all relevant documentation?
- [x] Yes

## Added/updated tests?

- [x] No
2024-01-25 20:37:39 -05:00
Brandon
03ac93bfc7 Merge branch 'main' into pr-labeler 2024-01-25 20:36:12 -05:00
Mary Hipp Rogers
89da976949 workflow library updates (#5568)
* don't show duplicate toasts if workflow actions fail due to auth

* dynamic order by options based on projectId

* add endpointName to authtoast to make it unique per endpoint

* lint

* update toast logic to check based on endpoint name w type safety

* fix save as endpoint name

* lint

* fix type

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2024-01-25 11:43:47 -05:00
Millun Atluri
57dafd294d {release} v3.6.2
## What type of PR is this? (check all applicable)

Invoke v3.6.2 release

## Have you discussed this change with the InvokeAI team?
- [X] Yes

## Have you updated all relevant documentation?
- [X] Yes

## Description
Invoke v3.6.2

## QA Instructions, Screenshots, Recordings

[InvokeAI-installer-v3.6.2.zip](https://github.com/invoke-ai/InvokeAI/files/14046191/InvokeAI-installer-v3.6.2.zip)
2024-01-24 22:05:10 -05:00
Millun Atluri
e611baa4b4 {release} v3.6.2 2024-01-24 21:40:03 -05:00
psychedelicious
fc448d5b6d feat(ui): handle proxy configs rewriting paths
We can't assume that the base URL is `host:port/` - it could be `host:port/some/path/`. Make the path handling dynamic to account for this.
2024-01-25 13:29:56 +11:00
Mary Hipp Rogers
e59954f956 fix workflow updating (#5567)
* retain id through workflow state so that we correctly update or save new

* lint

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2024-01-24 16:10:19 -05:00
Brandon
e160cbb1e9 Merge branch 'main' into pr-labeler 2024-01-24 15:44:35 -05:00
Millun Atluri
86c857b9c2 {release} v3.6.1 (#5564)
## What type of PR is this? (check all applicable)

Invoke 3.6.1 release

## QA Instructions, Screenshots, Recordings

[InvokeAI-installer-v3.6.1.zip](https://github.com/invoke-ai/InvokeAI/files/14041411/InvokeAI-installer-v3.6.1.zip)


## Merge Plan

This PR can be merged when approved

## [optional] Are there any post deployment tasks we need to perform?
PyPI Release & GitHub Release
2024-01-24 12:31:10 -05:00
Millun Atluri
0a13d7d2c7 {release} v3.6.1 2024-01-24 11:27:36 -05:00
blessedcoolant
68da5c6d22 feat: Add Depth Anything PreProcessor (#5548)
## What type of PR is this? (check all applicable)

- [x] Feature

## Have you discussed this change with the InvokeAI team?
- [x] Yes

## Have you updated all relevant documentation?
- [x] No


## Description

- This adds the newly released Depth Anything to InvokeAI. A new node
`Depth Anything Processor` has been added to generate depth maps using
this new technique. https://depth-anything.github.io

- All related checkpoints will be downloaded automatically on first
boot. The `DinoV2` models will be loaded to your torch cache dir and the
checkpoints pertaining to Depth Anything will be downloaded to
`any/annotators/depth_anything`.

- Alternatively you can find the checkpoints here and download them to
that folder:
https://huggingface.co/spaces/LiheYoung/Depth-Anything/tree/main/checkpoints

- This depth map can be used with any depth ControlNet model out there
but the folks at DepthAnything have also released a custom fine-tuned
ControlNet model. From my limited testing, I still prefer the original
depth model because this one seems to be producing weird artifacts. Not
sure if that is a problem specific to Invoke or just the model itself.
I'll test more later. Place these in your controlnet folder like your
other ControlNets. You can get that here:
https://huggingface.co/spaces/LiheYoung/Depth-Anything/tree/main/checkpoints_controlnet

- Also available in the LinearUI

- DepthAnything has three models `large`, `base` and `small` -- I've
defaulted the processor to small but a user can change to the large
model if they wish to do so. Small is way faster but obviously somewhat
of a lesser quality.

- DepthAnything is now the default processor for depth controlnet
models.

## Screenshots


![opera_o3jHnWxVRi](https://github.com/invoke-ai/InvokeAI/assets/54517381/573c66f3-1492-45b0-b6df-25756f5e1d1a)

## Merge Plan

DO NOT MERGE YET. Test it first and I'm sure the model caching can be
done better, because I don't think I've done that at all. I would appreciate
if @brandonrising or @lstein or anyone can take a look at that part of
it.
2024-01-24 19:14:34 +05:30
blessedcoolant
f82744b95e fix: linting issues 2024-01-24 18:45:54 +05:30
Kent Keirsey
5a67bc68a1 Merge branch 'main' into depth-anything 2024-01-23 22:31:19 -06:00
Josh Corbett
61cf4d4c70 feat: "Remix Image" option on images (#5553)
* feat:  "Remix Image" option on images

Adds a new "remix image" option where applicable, recalls all metadata except the seed

* refactor: 🚨 lint code

* feat:  "Remix Image" option on images

Adds a new "remix image" option where applicable, recalls all metadata except the seed

* refactor: 🚨 lint code

* feat:  add new remix hotkey to hotkeys modal

---------

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2024-01-24 00:45:30 +00:00
Wubbbi
9d20a2d5a3 Update huggingface deps to their latest versions 2024-01-24 11:14:21 +11:00
Riccardo Giovanetti
8b0ac451e3 translationBot(ui): update translation (Italian)
Currently translated at 97.3% (1378 of 1415 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2024-01-24 11:05:17 +11:00
Alexander Eichhorn
470dbe75a2 translationBot(ui): update translation (German)
Currently translated at 60.0% (850 of 1415 strings)

Co-authored-by: Alexander Eichhorn <pfannkuchensack@einfach-doof.de>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
2024-01-24 11:05:17 +11:00
maryhipp
b7d19b8130 add project as category to back-end 2024-01-24 10:59:04 +11:00
Mary Hipp
3dc13221d8 add project as a workflow category in the front-end 2024-01-24 10:59:04 +11:00
blessedcoolant
35184dbd86 fix: incorrect local file path 2024-01-24 03:37:16 +05:30
blessedcoolant
0868fc2558 Merge branch 'main' into depth-anything 2024-01-24 03:36:25 +05:30
blessedcoolant
92fb09c4df fix: Move the models to any folder to avoid boot warnings 2024-01-24 03:35:37 +05:30
psychedelicious
b4cf5496b6 fix(ui): handle model names with spaces
Remove `trim()` from model identifier schema, which prevented parsed model identifiers from matching.

The root issue here is that model names are identifiers. This will be resolved in the model manager refactor.

Closes #5556
2024-01-23 15:48:10 -06:00
psychedelicious
a0e68705dd feat(ui): improved dynamic prompts behaviour
- Bump `@invoke-ai/ui` for updated styles
- Update regex to parse prompts with newlines
- Update styling of overlay button when prompt has an error
- Fix bug where loading and error state sometimes weren't cleared
2024-01-23 15:26:12 -06:00
blessedcoolant
7cb49e65bd feat: Add Resolution to DepthAnything 2024-01-23 14:13:50 -06:00
blessedcoolant
39fedb090b feat: Make depth anything the default processor for depth controlnet 2024-01-23 14:13:50 -06:00
blessedcoolant
f36a691219 feat: Make the depth anything small model the default 2024-01-23 14:13:50 -06:00
blessedcoolant
6a2eb1d2e4 fix: Change the path of the annotator folder to annotators
Just making this change in case there are other models added to the folder in the future
2024-01-23 14:13:50 -06:00
blessedcoolant
13123daa3f feat: Add DepthAnything to Linear UI 2024-01-23 14:13:50 -06:00
blessedcoolant
c859eb865e fix: lint & other minor issues 2024-01-23 14:13:50 -06:00
blessedcoolant
8f5e2cbcc7 feat: Add Depth Anything PreProcessor 2024-01-23 14:13:50 -06:00
psychedelicious
2aed6e2dba fix(ui): duplicate "base model" in merge UI
closes #5505
2024-01-23 14:13:18 -06:00
psychedelicious
52b51a6088 fix(ui): recall/use size sets aspect ratio correctly
Added a new action that resets the aspect ratio when dispatched.

Closes #5456
2024-01-23 14:13:18 -06:00
psychedelicious
52b24e01e2 feat(ui): remove chakra as direct dependency
Moved a number of things to `@invoke-ai/ui` to support this.

Unfortunately, the bundle size has increased a bit. I will work on that later.
2024-01-23 14:13:18 -06:00
psychedelicious
1178fd8bd3 fix(ui): fix styling of some form elements 2024-01-23 14:13:18 -06:00
psychedelicious
a0187cc9df fix(ui): remove unused import in storybook 2024-01-23 14:13:18 -06:00
psychedelicious
2f656cc357 fix(ui): fix field context menu jank
Closes #5551
2024-01-23 14:13:18 -06:00
psychedelicious
71f9ac9985 feat(ui): scrollable areas support x-axis scrolling
Closes #5490
2024-01-23 14:13:18 -06:00
psychedelicious
8bbdfc45fa fix(ui): increase size of control adapters advanced toggle button 2024-01-23 14:13:18 -06:00
psychedelicious
3cbb1a7671 chore(ui): bump @invoke-ai/ui
This includes a minor enhancement, increasing the contrast on tabs.
2024-01-23 14:13:18 -06:00
psychedelicious
b74e0de74a tidy(ui): remove unused state from uiSlice 2024-01-23 14:13:18 -06:00
psychedelicious
e7e7793896 feat(ui): remember open/closed state of accordions/expanders 2024-01-23 14:13:18 -06:00
psychedelicious
504bdac14a tidy(ui): remove unused state from uiSlice 2024-01-23 14:13:18 -06:00
psychedelicious
b76d2cd716 fix(ui): handle base model compat when recalling parameters
We had a one-behind issue with recalling metadata items that had a model.

For example, when recalling LoRAs, we check against the current main model to decide whether or not the requested LoRA is compatible and may be recalled.

When recalling all params, we are often also recalling the main model, but the compat logic didn't compare against this new main model.

The logic is updated to check against the new main model, if one is being set.

Closes #5512
2024-01-23 14:13:18 -06:00
psychedelicious
022b32c724 fix(ui): reset clip skip to 0 if new model is sdxl
Clip skip wasn't actually used in SDXL graphs so enabling it didn't do anything, just a UI quirk.

Closes #5508
2024-01-23 14:13:18 -06:00
psychedelicious
653b820da1 tidy(ui): clearer variable names in modelSelected listener 2024-01-23 14:13:18 -06:00
Brandon
68232e642f Merge branch 'main' into pr-labeler 2024-01-23 09:20:43 -05:00
blessedcoolant
4ba0bf4dcf docs(ui): update README (#5473)
## What type of PR is this? (check all applicable)

- [x] Documentation Update

## Description

Update UI README

## Merge Plan

This PR can be merged when approved
2024-01-23 18:47:24 +05:30
psychedelicious
5e4daf4bc6 docs: remove separate frontend docs and link to repo
The frontend docs should just be in the frontend. This is a standard practice for monorepos with developer information for specific packages within the monorepo.
2024-01-23 18:04:41 +11:00
psychedelicious
7e0713c869 docs(ui): fix typo 2024-01-23 18:04:41 +11:00
psychedelicious
099d516ac0 docs(ui): update README 2024-01-23 18:04:41 +11:00
Brandon Rising
b94f6a4a29 Fix python label, add test label 2024-01-22 15:14:02 -05:00
Brandon Rising
4caf63d53d Added a few more labels 2024-01-22 15:08:11 -05:00
Brandon Rising
6057229ceb Github action for ensuring PRs are labeled in a way that makes it easy to distinguish what's being changed 2024-01-22 11:22:33 -05:00
JPPhoto
6a2856e46f Updated field descriptions 2024-01-23 02:26:30 +11:00
Jonathan
4dedd63b74 Update defaultNodes.md
Added ideal size node
2024-01-23 02:26:30 +11:00
Jonathan
db74837eb1 Update communityNodes.md
Removed ideal size node
2024-01-23 02:26:30 +11:00
Jonathan
892fe62264 Add Ideal Size node to core nodes
The Ideal Size node is useful for High-Res Optimization as it gives the optimum size for creating an initial generation with minimal artifacts (duplication and other strangeness) from today's models.

After inclusion, front end graph generation can be simplified by offloading calculations for HRO initial generation to this node.
2024-01-23 02:26:30 +11:00
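
As a rough illustration of the idea (not the exact Ideal Size algorithm): pick an initial-generation size whose area is near the model's native training resolution while preserving the requested aspect ratio, snapped to a multiple of 8. The 512 default below assumes an SD1.5-style model:

```python
import math

def ideal_size(width: int, height: int, native: int = 512, multiple: int = 8) -> tuple[int, int]:
    # Scale so the output area is roughly native * native.
    scale = math.sqrt((native * native) / (width * height))
    w = max(multiple, int(width * scale) // multiple * multiple)
    h = max(multiple, int(height * scale) // multiple * multiple)
    return w, h

print(ideal_size(2048, 1024))  # (720, 360)
```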
Rohinish
3c79476785 requested changes made 2024-01-22 21:06:40 +11:00
Rohinish
dad364da17 rebased and made more changes 2024-01-22 21:06:40 +11:00
Rohinish
37bc4f78d0 last 2024-01-22 21:06:40 +11:00
Rohinish
de0b43c81d more strings and translations added 2024-01-22 21:06:40 +11:00
Rohinish
ea1d2d6a4c added translation where needed 2024-01-22 21:06:40 +11:00
psychedelicious
fafe8ccc59 fix(api): typo in no_cache_staticfiles.py 2024-01-22 16:10:25 +11:00
psychedelicious
4b88cfac19 fix(api): type in no_cache_staticfiles.py 2024-01-22 16:10:25 +11:00
psychedelicious
5fa13fba36 chore: ruff 2024-01-22 16:10:25 +11:00
psychedelicious
f28f761436 fix(api): add NoCacheStaticFiles to prevent *all* caching
The previous method wasn't totally foolproof, and locales/assets were cached.

To solve this once and for all (famous last words, I know), we can subclass `StaticFiles` and use maximally strict no-caching headers to disable caching on all static files.
2024-01-22 16:10:25 +11:00
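
One way to implement the idea, sketched against Starlette's `StaticFiles` (the exact headers InvokeAI sends may differ):

```python
from typing import Any

from starlette.responses import Response
from starlette.staticfiles import StaticFiles

class NoCacheStaticFiles(StaticFiles):
    def file_response(self, *args: Any, **kwargs: Any) -> Response:
        response = super().file_response(*args, **kwargs)
        # Maximally strict no-caching headers on every static file.
        response.headers["Cache-Control"] = "no-store, no-cache, must-revalidate, max-age=0"
        response.headers["Pragma"] = "no-cache"
        response.headers["Expires"] = "0"
        return response
```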
psychedelicious
27d7889780 tidy(ui): remove commented line 2024-01-22 09:37:26 +11:00
psychedelicious
a1cf153097 fix(ui): generation accordion defaults to open 2024-01-22 09:37:26 +11:00
psychedelicious
d121eefa12 tidy(ui): organise accordions files 2024-01-22 09:37:26 +11:00
psychedelicious
c92e25a6a7 fix(ui): add tooltip to model select 2024-01-22 09:37:26 +11:00
psychedelicious
8be03dead5 chore(ui): bump @invoke-ai/ui 2024-01-22 09:37:26 +11:00
psychedelicious
1197133d06 fix(ui): fix lora name wrap 2024-01-22 09:37:26 +11:00
psychedelicious
850458a554 chore(ui): lint 2024-01-22 09:37:26 +11:00
psychedelicious
e96ad41729 feat(ui): use @invoke-ai/ui hooks for modifiers, global menu state 2024-01-22 09:37:26 +11:00
psychedelicious
53cf518390 chore(ui): bump @invoke-ai/ui 2024-01-22 09:37:26 +11:00
psychedelicious
b00ace852d fix(ui): settings modal switch widths 2024-01-22 09:37:26 +11:00
psychedelicious
be72765d02 fix(ui): bump @invoke-ai/ui, fix TS issues 2024-01-22 09:37:26 +11:00
psychedelicious
580d29257c feat(ui): add missing translations 2024-01-22 09:37:26 +11:00
psychedelicious
5d068c1da1 feat(ui): migrate to @invoke-ai/ui 2024-01-22 09:37:26 +11:00
psychedelicious
8e2ccab1f0 feat(ui): add @invoke-ai/ui 2024-01-22 09:37:26 +11:00
psychedelicious
6f478eef62 chore(ui): bump deps 2024-01-22 09:37:26 +11:00
Riccardo Giovanetti
1ff1c370df translationBot(ui): update translation (Italian)
Currently translated at 97.3% (1365 of 1402 strings)

translationBot(ui): update translation (Italian)

Currently translated at 97.3% (1365 of 1402 strings)

translationBot(ui): update translation (Italian)

Currently translated at 97.3% (1365 of 1402 strings)

translationBot(ui): update translation (Italian)

Currently translated at 97.3% (1365 of 1402 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2024-01-19 15:11:32 +11:00
psychedelicious
5ef87ef2a6 fix(ui): tidy component/props naming in ClearQueueIconButton.tsx 2024-01-19 15:03:16 +11:00
psychedelicious
d0709d4f4e fix(ui): use trash icon for clear all queue 2024-01-19 15:03:16 +11:00
psychedelicious
2a081b0a27 fix(ui): remove unnecessary fragments 2024-01-19 15:03:16 +11:00
psychedelicious
d902533387 feat(ui): use global modifier state for clear queue button mode switch 2024-01-19 15:03:16 +11:00
psychedelicious
1174713223 fix(ui): use cancel strings for cancel button 2024-01-19 15:03:16 +11:00
psychedelicious
4b1740ad19 chore(ui): format 2024-01-19 15:03:16 +11:00
Josh Corbett
e03c88ce32 feat: 🚸 shift key queue cancellations 2024-01-19 15:03:16 +11:00
psychedelicious
b917ffecbe chore(ui): format 2024-01-19 14:42:31 +11:00
Josh Corbett
2967a78c5a feat: 💄 update lots of icons 2024-01-19 14:42:31 +11:00
Neil Wang
aa25ea62a5 fix(backend) installed models being redownloaded (#5526)
* fix

* fix ruff errors

---------

Co-authored-by: Lincoln Stein <lstein@gmail.com>
2024-01-18 16:53:53 -05:00
Rohinish
1ab0e86085 feat(ui): add about modal w/ app deps (#5462)
* resolved conflicts

* changed logo and some design changes

* feedback changes

* resolved conflicts

* changed logo and some design changes

* feedback changes

* lint fixed

* added translations

* some requested changes done

* all feedback changes done and replace links in settingsmenu comp

* fixed the gap between deps versions & changed heights

* feat(ui): minor about modal styling

* feat(ui): tag app endpoints with FetchOnReconnect

* fix(ui): remove unused translation string

---------

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Co-authored-by: Kent Keirsey <31807370+hipsterusername@users.noreply.github.com>
2024-01-18 13:30:56 +00:00
Mary Hipp
c9ddbb4241 lint 2024-01-18 23:29:47 +11:00
Mary Hipp
415a1c7a4f add ids 2024-01-18 23:29:47 +11:00
Mary Hipp
84a4836ab7 change actions to be for any InvExpander or InvSingleAccordion that has id passed in 2024-01-18 23:29:47 +11:00
Mary Hipp
dbd6c9c6ed remove actions we can get from mutation data 2024-01-18 23:29:47 +11:00
Mary Hipp
4f95c077d4 lint fix 2024-01-18 23:29:47 +11:00
Mary Hipp
0a4cbc4e16 undo 2024-01-18 23:29:47 +11:00
Mary Hipp
d45b76fab4 undo 2024-01-18 23:29:47 +11:00
Mary Hipp
9722135cda add various actions for commercial purposes 2024-01-18 23:29:47 +11:00
dependabot[bot]
7366913a31 chore(deps): bump tj-actions/changed-files in /.github/workflows
Bumps [tj-actions/changed-files](https://github.com/tj-actions/changed-files) from 37 to 41.
- [Release notes](https://github.com/tj-actions/changed-files/releases)
- [Changelog](https://github.com/tj-actions/changed-files/blob/main/HISTORY.md)
- [Commits](https://github.com/tj-actions/changed-files/compare/v37...v41)

---
updated-dependencies:
- dependency-name: tj-actions/changed-files
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-01-17 00:25:21 -05:00
Lincoln Stein
bd31b5606c Retain suffix (.safetensors, .bin) when renaming a checkpoint file or LoRA
- closes #5518
2024-01-16 17:27:50 -05:00
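
A pathlib sketch of a suffix-retaining rename (illustrative, not the actual model manager code):

```python
from pathlib import Path

def rename_model_file(path: Path, new_name: str) -> Path:
    """Rename a checkpoint/LoRA file, retaining the original suffix
    (e.g. .safetensors, .bin) if the new name doesn't include it."""
    if not new_name.endswith(path.suffix):
        new_name += path.suffix
    return path.rename(path.with_name(new_name))
```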
Mary Hipp
2953dea4a0 set width and height based on defaultModel if passed in 2024-01-17 07:08:11 +11:00
psychedelicious
f3fed0b10f fix(ui): use native langs for language select
Use each language's own name for its option in the language select. This falls back to the English translation if the language name isn't translated.
2024-01-17 06:50:16 +11:00
psychedelicious
db57d426d9 fix(ui): force dark mode 2024-01-17 06:48:04 +11:00
Lincoln Stein
4536e4a8b6 Model Manager Refactor: Install remote models and store their tags and other metadata (#5361)
* add basic functionality for model metadata fetching from hf and civitai

* add storage

* start unit tests

* add unit tests and documentation

* add missing dependency for pytests

* remove redundant fetch; add modified/published dates; updated docs

* add code to select diffusers files based on the variant type

* implement Civitai installs

* make huggingface parallel downloading work

* add unit tests for model installation manager

- Fixed race condition on selection of download destination path
- Add fixtures common to several model_manager_2 unit tests
- Added dummy model files for testing diffusers and safetensors downloading/probing
- Refactored code for selecting proper variant from list of huggingface repo files
- Regrouped ordering of methods in model_install_default.py

* improve Civitai model downloading

- Provide a better error message when Civitai requires an access token (doesn't give a 403 forbidden, but redirects
  to the HTML of an authorization page -- arrgh)
- Handle case of Civitai providing a primary download link plus additional links for VAEs, config files, etc

* add routes for retrieving metadata and tags

* code tidying and documentation

* fix ruff errors

* add file needed to maintain test root directory in repo for unit tests

* fix self->cls in classmethod

* add pydantic plugin for mypy

* use TestSession instead of requests.Session to prevent any internet activity

improve logging

fix error message formatting

fix logging again

fix forward vs reverse slash issue in Windows install tests

* Several fixes of problems detected during PR review:

- Implement cancel_model_install_job and get_model_install_job routes
  to allow for better control of model download and install.
- Fix thread deadlock that occurred after cancelling an install.
- Remove unneeded pytest_plugins section from tests/conftest.py
- Remove unused _in_terminal_state() from model_install_default.
- Remove outdated documentation from several spots.
- Add workaround for Civitai API results which don't return correct
  URL for the default model.

* fix docs and tests to match get_job_by_source() rather than get_job()

* Update invokeai/backend/model_manager/metadata/fetch/huggingface.py

Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>

* Call CivitaiMetadata.model_validate_json() directly

Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>

* Second round of revisions suggested by @ryanjdick:

- Fix type mismatch in `list_all_metadata()` route.
- Do not have a default value for the model install job id
- Remove static class variable declarations from non Pydantic classes
- Change `id` field to `model_id` for the sqlite3 `model_tags` table.
- Changed AFTER DELETE triggers to ON DELETE CASCADE for the metadata and tags tables.
- Made the `id` field of the `model_metadata` table into a primary key to achieve uniqueness.

* Code cleanup suggested in PR review:

- Narrowed the declaration of the `parts` attribute of the download progress event
- Removed auto-conversion of str to Url in Url-containing sources
- Fixed handling of `InvalidModelConfigException`
- Made unknown sources raise `NotImplementedError` rather than `Exception`
- Improved status reporting on cached HuggingFace access tokens

* Multiple fixes:

- `job.total_size` returns a valid size for locally installed models
- new route `list_models` returns a paged summary of model, name,
  description, tags and other essential info
- fix a few type errors

* consolidated all invokeai root pytest fixtures into a single location

* Update invokeai/backend/model_manager/metadata/metadata_store.py

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>

* Small tweaks in response to review comments:

- Remove flake8 configuration from pyproject.toml
- Use `id` rather than `modelId` for huggingface `ModelInfo` object
- Use `last_modified` rather than `LastModified` for huggingface `ModelInfo` object
- Add `sha256` field to file metadata downloaded from huggingface
- Add `Invoker` argument to the model installer `start()` and `stop()` routines
  (but made it optional in order to facilitate use of the service outside the API)
- Removed redundant `PRAGMA foreign_keys` from metadata store initialization code.

* Additional tweaks and minor bug fixes

- Fix calculation of aggregate diffusers model size to only count the
  size of files, not files + directories (which gives different unit test
  results on different filesystems).
- Refactor _get_metadata() and _get_download_urls() to have distinct code paths
  for Civitai, HuggingFace and URL sources.
- Forward the `inplace` flag from the source to the job and added unit test for this.
- Attach cached model metadata to the job rather than to the model install service.

* fix unit test that was breaking on windows due to CR/LF changing size of test json files

* fix ruff formatting

* a few last minor fixes before merging:

- Turn job `error` and `error_type` into properties derived from the exception.
- Add TODO comment about the reason for handling temporary directory destruction
  manually rather than using tempfile.tmpdir().

* add unit tests for reporting HTTP download errors

---------

Co-authored-by: Lincoln Stein <lstein@gmail.com>
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2024-01-14 19:54:53 +00:00
psychedelicious
426a7b900f feat(ui): resize options/gallery panels to min on window resize
Per user feedback, this is preferable to letting them expand when the window grows.

Also bumps `react-resizable-panels` now that one of my PRs is merged to fix an issue.
2024-01-14 11:33:44 +11:00
mickr777
cc571d9ab2 Quick Fix for right gallery button 2024-01-14 09:49:52 +11:00
Ryan Dick
296c861e7d Handle bad id in log_stats(...). 2024-01-13 15:19:57 -05:00
Ryan Dick
aa45d21fd2 Reduce the number of graph_execution_manager.get(...) calls from the InvocationStatsService. 2024-01-13 15:19:57 -05:00
Ryan Dick
ac42513da9 Remove unused reset_all_stats(...). 2024-01-13 15:19:57 -05:00
Ryan Dick
e2387546fe Rename GIG -> GB. And move it to where it's being used. 2024-01-13 15:19:57 -05:00
Ryan Dick
c8929b35f0 Refactor the invocation stats service for better readability and to support reporting the execution wall time. 2024-01-13 15:19:57 -05:00
Millun Atluri
c000e270a0 Release/v3.6.0 (#5485)
## What type of PR is this? (check all applicable)

Release v3.6.0

## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description
Invoke v3.6.0

## QA Instructions, Screenshots, Recordings



[InvokeAI-installer-v3.6.0.zip](https://github.com/invoke-ai/InvokeAI/files/13923761/InvokeAI-installer-v3.6.0.zip)

## [optional] Are there any post deployment tasks we need to perform?
1. Release on PyPi
2. Release on GitHub
3. Announce in #releases
2024-01-12 15:26:28 -05:00
Millun Atluri
8ff28da3b4 {release} v3.6.0 2024-01-12 15:00:49 -05:00
Millun Atluri
b7b376103c Update default workflows 2024-01-12 14:59:44 -05:00
Millun Atluri
08d379bb29 Update default workflows 2024-01-12 14:58:21 -05:00
Millun Atluri
74e644c4ba Allow bfloat16 to be configurable in invoke.yaml (#5469)
* feat: allow bfloat16 to be configurable in invoke.yaml

* fix: `torch_dtype()` util

- Use `choose_precision` to get the precision string
- Do not reference the deprecated `config.full_precision` flag (why does this still exist?); if a user had this enabled it would override their actual precision setting and potentially cause a lot of confusion.

---------

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2024-01-12 18:40:37 +00:00
Riccardo Giovanetti
d4c36da3ee translationBot(ui): update translation (Italian)
Currently translated at 97.3% (1365 of 1402 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2024-01-12 22:52:38 +11:00
psychedelicious
dfe0b73890 fix(ui): fix usages of panel helpers
Upstream breaking change.
2024-01-12 09:31:07 +11:00
psychedelicious
c0c8fa9a89 fix(ui): use nodrag on invinput in workflow editor
Closes #5476
2024-01-12 09:31:07 +11:00
psychedelicious
ad7139829c fix(ui): fix canvas space hotkey
Need to do some checks to ensure we aren't taking over input elements, and are focused on the canvas.

Closes #5478
2024-01-12 09:31:07 +11:00
psychedelicious
a24e63d440 fix(ui): do not focus board search on load 2024-01-12 09:31:07 +11:00
psychedelicious
59437a02c3 feat(ui): restore resizable prompt boxes
The autosize proved to be unpopular. Changed back to resizable.
2024-01-12 09:31:07 +11:00
psychedelicious
98a44d7fa1 feat(ui): update assets
- Add various brand images, organise images
- Create favicon for docs pages (light blue version of key logo)
- Rename app title to `Invoke - Community Edition`
2024-01-12 08:02:59 +11:00
psychedelicious
07416753be feat(ui): more context in storage errors 2024-01-12 07:54:18 +11:00
Millun Atluri
630854ce26 3.6 Docs updates (#5412)
* Update UNIFIED_CANVAS.md

* Update index.md

* Update structure

* Docs updates
2024-01-11 16:52:22 +00:00
psychedelicious
b55c2b99a7 feat(ui): workflow library styling 2024-01-11 09:42:12 -05:00
psychedelicious
f81d36c95f fix(ui): do not string workflow id on rehydrate 2024-01-11 09:42:12 -05:00
psychedelicious
26b7aadd32 fix(db): fix workflows pagination math 2024-01-11 09:42:12 -05:00
Kent Keirsey
8e7e3c2b4a Initial Styling Commit 2024-01-11 09:42:12 -05:00
Lincoln Stein
f2e8b66be4 Fix "Cannot import name 'PagingArgumentParser' error when starting textual inversion
- Closes #5395
2024-01-11 13:57:06 +11:00
psychedelicious
ff09fd30dc feat(ui): if in dev mode, reset API on reconnect
This retains the current good developer experience when working on the server - the UI should fully reset when you restart the server.
2024-01-11 12:51:15 +11:00
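A minimal sketch of the dev-only reset, assuming Vite's `import.meta.env.DEV` flag and RTK Query's `resetApiState`; the `api` import path is hypothetical:

```ts
import { api } from './services/api'; // hypothetical module path

export const onSocketConnected = (dispatch: (action: unknown) => void) => {
  if (import.meta.env.DEV) {
    // Wipe every cached query so a restarted dev server starts fresh.
    dispatch(api.util.resetApiState());
  }
};
```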
psychedelicious
9fcc30c3d6 feat(ui): optimize reconnect queries
Add `FetchOnReconnect` tag, tagging relevant queries with it. This tag is invalidated in the socketConnected listener, when it is determined that the queue changed.
2024-01-11 12:51:15 +11:00
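A sketch of the tagging scheme, assuming RTK Query; the endpoint and URL below are placeholders rather than the app's real API surface:

```ts
import { createApi, fetchBaseQuery } from '@reduxjs/toolkit/query/react';

export const api = createApi({
  baseQuery: fetchBaseQuery({ baseUrl: '/api/v1' }),
  tagTypes: ['FetchOnReconnect'],
  endpoints: (build) => ({
    // Hypothetical endpoint; any query tagged this way is refetched together.
    getQueueStatus: build.query<unknown, void>({
      query: () => 'queue/status',
      providesTags: ['FetchOnReconnect'],
    }),
  }),
});

// In the socketConnected listener: if the queue changed while disconnected,
// invalidating the tag refetches every tagged query at once.
export const onReconnect = (dispatch: (action: unknown) => void) => {
  dispatch(api.util.invalidateTags(['FetchOnReconnect']));
};
```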
psychedelicious
b29a6522ef feat(ui): always check for change to queue status when reconnecting 2024-01-11 12:51:15 +11:00
psychedelicious
936d19cd60 feat(ui): improve comments on socketConnected listener 2024-01-11 12:51:15 +11:00
psychedelicious
f25b6ee5d1 chore(ui): lint 2024-01-11 12:51:15 +11:00
psychedelicious
7dea079220 fix(ui): reduce reconnect requests
- Add checks to the "recovery" logic for socket connect events to reduce the number of network requests.
- Remove the `isInitialized` state from `systemSlice` and make it a nanostore local to the socketConnected listener. It didn't need to be global state. It's also now more clearly named `isFirstConnection`.
- Export the queue status selector (minor improvement, memoizes it correctly).
2024-01-11 12:51:15 +11:00
Васянатор
7fc08962fb translationBot(ui): update translation (Russian)
Currently translated at 97.0% (1361 of 1402 strings)

Co-authored-by: Васянатор <ilabulanov339@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/
Translation: InvokeAI/Web UI
2024-01-11 12:48:23 +11:00
ItzAttila
71155d9e72 translationBot(ui): update translation (Hungarian)
Currently translated at 1.9% (28 of 1402 strings)

translationBot(ui): added translation (Hungarian)

Co-authored-by: ItzAttila <attila.gm.studio@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/hu/
Translation: InvokeAI/Web UI
2024-01-11 12:48:23 +11:00
Millun Atluri
6ccd72349d {release} v3.6.0rc6 (#5467)
## What type of PR is this? (check all applicable)

Release v3.6.0rc6

## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description
Release candidate #6

## QA Instructions, Screenshots, Recordings

[InvokeAI-installer-v3.6.0rc6.zip](https://github.com/invoke-ai/InvokeAI/files/13890206/InvokeAI-installer-v3.6.0rc6.zip)



## Merge Plan
Merge when approved

## [optional] Are there any post deployment tasks we need to perform?
Release on PyPi & Github
2024-01-10 11:19:39 -05:00
Millun Atluri
30e12376d3 {release} v3.6.0rc6 2024-01-10 10:45:33 -05:00
psychedelicious
23c8a893e1 fix(ui): fix gallery display bug, major lag
- Fixed a bug where after you load more, changing boards doesn't work. The offset and limit for the list image query had some wonky logic, now resolved.
- Addressed major lag in gallery when selecting an image.

Both issues were related to the useMultiselect and useGalleryImages hooks, which caused every image in the gallery to re-render whenever the selection changed. There's no way to memoize this away - we need to know when the selection changes. This is a longstanding issue.

The selection is only used in a callback, though - the onClick handler for an image to select it (or add it to the existing selection). We don't really need the reactivity for a callback, so we don't need to listen for changes to the selection.

The logic to handle multiple selection is moved to a new `galleryImageClicked` listener, which does all the selection right when it is needed.

The result is that gallery images no longer need to do heavy re-renders on any selection change.

Besides the multiselect click handler, there was also inefficient use of DND payloads. Previously, the `IMAGE_DTOS` type had a payload of image DTO objects. This was only used to drag gallery selection into a board. There is no need to hold onto image DTOs when we have the selection state already in redux. We were recalculating this payload for every image, on every tick.

This payload is now just the board id (the only piece of information we need for this particular DND event).

- I also removed some unused DND types while making this change.
2024-01-10 08:22:46 -05:00
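A sketch of the listener approach, assuming Redux Toolkit's listener middleware; the action and state shapes are simplified stand-ins for the real gallery slice:

```ts
import { createAction, createListenerMiddleware } from '@reduxjs/toolkit';

type ImageClickPayload = { imageName: string; shiftKey: boolean };
type GalleryState = { selection: string[] };

export const galleryImageClicked =
  createAction<ImageClickPayload>('gallery/imageClicked');

export const listenerMiddleware = createListenerMiddleware();

listenerMiddleware.startListening({
  actionCreator: galleryImageClicked,
  effect: (action, { getState, dispatch }) => {
    // The selection is read here, at click time, so individual gallery images
    // never subscribe to it and never re-render when it changes.
    const { selection } = (getState() as { gallery: GalleryState }).gallery;
    const { imageName, shiftKey } = action.payload;
    const next = shiftKey ? [...selection, imageName] : [imageName];
    dispatch({ type: 'gallery/selectionChanged', payload: next });
  },
});
```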
psychedelicious
7d93329401 feat(ui): de-jank context menu
There was a lot of convoluted, janky logic related to trying to not mount the context menu's portal until it's needed. This was in the library where the component was originally copied from.

I've removed that and resolved the jank, at the cost of there being an extra portal for each instance of the context menu. Don't think this is going to be an issue. If it is, the whole context menu could be refactored to be a singleton.
2024-01-10 08:22:46 -05:00
Eugene Brodsky
968fb655a4 Report ci disk space + minor docker fixes (#5461)
* ci: add docker build timout; log free space on runner before and after build

* docker: bump frontend builder to node=20.x; skip linting on build

* chore: gitignore .pnpm-store

* update code owners for docker and CI

---------

Co-authored-by: Millun Atluri <Millu@users.noreply.github.com>
2024-01-10 05:20:26 +00:00
psychedelicious
80ec9f4131 chore(ui): lint 2024-01-10 00:11:05 -05:00
psychedelicious
f19def5f7b feat(ui): replace aspect ratio icon
closes #5448
2024-01-10 00:11:05 -05:00
psychedelicious
9e1dd8ac9c fix(ui): reset canvas coords/dims on reset 2024-01-10 00:11:05 -05:00
psychedelicious
ebd68b7a6c feat(ui): support reset canvas view when no image on canvas 2024-01-10 00:11:05 -05:00
psychedelicious
68a231afea feat(ui): move canvas stage and base layer to nanostores 2024-01-10 00:11:05 -05:00
psychedelicious
21ab650ac0 feat(ui): move canvas tool to nanostores
I was troubleshooting a hotkeys issue on canvas and thought I had broken the tool logic in a past change, so I redid it, moving it to nanostores. In the end, the issue was an upstream bug in the hotkeys library, but I like having the tool in nanostores so I'm leaving it.

It's ephemeral interaction state anyways, doesn't need to be in redux.
2024-01-10 00:11:05 -05:00
psychedelicious
b501bd709f fix(ui): canvas bbox number input wonky
It was rounding dimensions when it shouldn't.

Closes #5453
2024-01-10 00:11:05 -05:00
psychedelicious
4082f25062 feat(ui): do not optimize size when changing between models with same base model
There's a challenge to accomplish this due to our slice structure - the model is stored in `generationSlice`, but `canvasSlice` also needs to have awareness of it. For example, when the model changes, the canvas slice doesn't know what the previous model was, so it doesn't know whether or not to optimize the size.

This means we need to lift the "should we optimize size" information up. To do this, the `modelChanged` action creator accepts the previous model as an optional second arg.

Now the canvas has access to both the previous model and new model selection, and can decide whether or not it should optimize its size setting in the same way that the generation slice does.

Closes  #5452
2024-01-10 00:11:05 -05:00
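A sketch of the two-argument action creator, assuming Redux Toolkit's `createAction` with a prepare callback; the model type is a simplified placeholder:

```ts
import { createAction } from '@reduxjs/toolkit';

type ModelIdentifier = { key: string; base: string };

export const modelChanged = createAction(
  'generation/modelChanged',
  (model: ModelIdentifier | null, previousModel?: ModelIdentifier | null) => ({
    payload: { model, previousModel },
  })
);

// In the canvas slice's handler, both models are available, so it can skip
// the size optimization when the base model hasn't actually changed.
const shouldOptimizeSize = (
  payload: ReturnType<typeof modelChanged>['payload']
): boolean => payload.model?.base !== payload.previousModel?.base;
```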
psychedelicious
63d74b4ba6 feat(ui): remove unnecessary tabChanged listener
This was needed when we didn't support SDXL on canvas.
2024-01-10 00:11:05 -05:00
psychedelicious
da5907613b fix(ui): fix typing of useGalleryImages
For some reason `ReturnType<typeof useListImagesQuery>` isn't working correctly, and destructuring `queryResult` results in `any` when the hook is used.

I've removed the explicit return typing so that consumers of the hook get correct types.
2024-01-10 00:11:05 -05:00
psychedelicious
3a9201bd31 feat: pin deps
Organise deps into ~3 categories:
- Core generation dependencies, pinned for reproducible builds.
- Core application dependencies, pinned for reproducible builds.
- Auxiliary dependencies, pinned only if necessary.

I pinned / bumped these to latest:
- `controlnet_aux`
- `fastapi`
- `fastapi-events`
- `huggingface-hub`
- `numpy`
- `python-socketio`
- `torchmetrics`
- `transformers`
- `uvicorn`

I checked the release notes for these and didn't see any breaking changes that would affect us. There is a `fastapi` breaking change in v108 related to background tasks but it doesn't affect us.

I tested on a fresh venv. The app still works and I can generate on macOS.

Hopefully, enforcing explicit pinned versions will reduce the issues where people get CPU torch.

It also means we should periodically bump versions up to ensure we don't get too far behind on our dependencies and have to do painful upgrades.
2024-01-10 00:03:29 -05:00
psychedelicious
d6e2cb7cef fix(ui): use memoized selector for workflow watcher
Minor perf improvement.
2024-01-10 15:32:16 +11:00
psychedelicious
0809e832d4 fix(ui): use less brutally strict workflow validation
Workflow building would fail when a current image node was in the workflow due to the strict validation.

So we need to use the other workflow builder util first, which strips out extraneous data.

This bug was introduced during an attempt to optimize the workflow building logic, which was causing slowdowns on the workflow editor.
2024-01-10 14:31:14 +11:00
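A sketch of the strip-then-validate order, assuming a zod-style schema; the schema shape and the stripping helper are placeholders for the real workflow utilities:

```ts
import { z } from 'zod';

const zWorkflow = z.object({
  nodes: z.array(z.object({ id: z.string(), type: z.string() })),
});

type RawWorkflow = { nodes: { id: string; type: string }[] };

// Hypothetical helper: drop transient fields the strict schema rejects,
// e.g. runtime-only data attached to a current image node.
const stripExtraneousData = (raw: RawWorkflow): RawWorkflow => ({
  nodes: raw.nodes.map(({ id, type }) => ({ id, type })),
});

// Clean first, then run the strict validation on the result.
export const buildWorkflow = (raw: RawWorkflow) =>
  zWorkflow.parse(stripExtraneousData(raw));
```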
Lincoln Stein
7269c9f02e Enable correct probing of LoRA latent-consistency/lcm-lora-sdxl (#5449)
- Closes #5435

Co-authored-by: Lincoln Stein <lstein@gmail.com>
2024-01-08 17:18:26 -05:00
Mary Hipp Rogers
d86d7e5c33 do not show toast if 403 is triggered by forbidden image (#5447)
* do not show toast if 403 is triggered by lack of image access

* remove log

* lint

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2024-01-08 12:15:46 -05:00
Millun Atluri
5d87578746 {release} v3.5.0rc5 (#5446)
## What type of PR is this? (check all applicable)

Release - InvokeAI v3.5.0rc5


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description
Release - InvokeAI v3.5.0rc5


## QA Instructions, Screenshots, Recordings

[InvokeAI-installer-v3.6.0rc5.zip](https://github.com/invoke-ai/InvokeAI/files/13863661/InvokeAI-installer-v3.6.0rc5.zip)


## [optional] Are there any post deployment tasks we need to perform?
Release on PyPI & GitHub
2024-01-09 03:40:34 +11:00
Millun Atluri
04aef021fc {release} v3.5.0rc5 2024-01-08 10:42:16 -05:00
psychedelicious
0fc08bb384 ui: redesign followups 8 (#5445)
* feat(ui): get rid of convoluted socket vs appSocket redux actions

There's no need to have `socket...` and `appSocket...` actions.

I did this initially due to a misunderstanding about the sequence of handling from middleware to reducers.

* feat(ui): bump deps

Mainly bumping to get latest `redux-remember`.

A change to socket.io required a change to the types in `useSocketIO`.

* chore(ui): format

* feat(ui): add error handling to redux persistence layer

- Add an error handler to `redux-remember` config using our logger
- Add custom errors representing storage set and get failures
- Update storage driver to raise these accordingly
- wrap method to clear idbkeyval storage and tidy its logic up

* feat(ui): add debuggingLoggerMiddleware

This simply logs every action and a diff of the state change.

Due to the noise this creates, it's not added by default at all. Add it to the middlewares if you want to use it.
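A sketch of such a middleware, assuming a generic Redux store; the naive `diff` helper is a placeholder for whichever delta library is actually used:

```ts
import type { Middleware } from '@reduxjs/toolkit';

// Placeholder diff: a real implementation would produce a structured delta.
const diff = (a: unknown, b: unknown): unknown =>
  JSON.stringify(a) === JSON.stringify(b) ? undefined : { before: a, after: b };

export const debuggingLoggerMiddleware: Middleware =
  (storeApi) => (next) => (action) => {
    const before = storeApi.getState();
    const result = next(action);
    const after = storeApi.getState();
    // Logs every action with the resulting state change - noisy, so it is
    // only added to the middleware chain by hand while debugging.
    console.debug('action', action, 'state diff', diff(before, after));
    return result;
  };
```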

* feat(ui): add $socket to window if in dev mode

* fix(ui): do not enable cancel hotkeys on inputs

* fix(ui): use JSON.stringify for ROARR logger serializer

A recent change to ROARR introduced limits to the size of data that will logged. This ends up making our logs far less useful. Change the serializer back to what it was previously.

* feat(ui): change diff util, update debuggerLoggerMiddleware

The previous diff library would present deleted things as `undefined`. Unfortunately, a JSON.stringify cycle will strip those values out. The ROARR logger does this and so the diffs end up being a lot less useful, not showing removed keys.

The new diff library uses a different format for the delta that serializes nicely.

* feat(ui): add migrations to redux persistence layer

- All persisted slices must now have a slice config, consisting of their initial state and a migrate callback. The migrate callback is very simple for now, with no type safety. It adds missing properties to the state. A future enhancement might be to model each slice's state with e.g. zod and have proper validation and types.
- Persisted slices now have a `_version` property
- The migrate callback is called inside `redux-remember`'s `unserialize` handler. I couldn't figure out a good way to put this into the reducer and do logging (reducers should have no side effects). Also I ran into a weird race condition that I couldn't figure out. And finally, the typings are tricky. This works for now.
- `generationSlice` and `canvasSlice` both need migrations for the new aspect ratio setup, this has been added
- Stuff related to persistence has been moved into `store.ts` for simplicity
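A minimal sketch of the slice-config idea, assuming the migrate callback runs in `redux-remember`'s `unserialize` hook; the state shape and names are illustrative:

```ts
type PersistConfig<T extends { _version: number }> = {
  initialState: T;
  migrate: (state: unknown) => T;
};

type CanvasState = { _version: number; scaleMethod: 'auto' | 'manual' };

const initialCanvasState: CanvasState = { _version: 1, scaleMethod: 'auto' };

export const canvasPersistConfig: PersistConfig<CanvasState> = {
  initialState: initialCanvasState,
  // No type safety yet: simply backfill missing keys from the initial state.
  migrate: (state) => ({ ...initialCanvasState, ...(state as object) }),
};
```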

* feat(ui): clean up StorageError class

* fix(ui): scale method default is now 'auto'

* feat(ui): when changing controlnet model, enable autoconfig

* fix(ui): make embedding popover immediately accessible

Prevents hotkeys from being captured when embeddings are still loading.
2024-01-08 09:11:45 -05:00
Josh Corbett
5779542084 Updated icons + Minor UI Tweaks (#5427)
* feat: 💄 updated icons + minor ui tweaks

* revert: 💄 removes ui tweaks

* revert: 💄 removed more ui tweaks

removed more ui tweaks and a commented-out icon import

* style: 🚨 satisfy the linter

---------

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2024-01-07 14:14:44 +11:00
blessedcoolant
ebda81e96e fix(ui): fix add node autoconnect (#5434)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Description

The new select component appears to close itself before calling the
onchange handler. This short-circuits the autoconnect logic. Tweaked so
the ordering is correct.

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Closes #5425

## QA Instructions, Screenshots, Recordings

bug should be fixed

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Merge Plan

This PR can be merged when approved

<!--
A merge plan describes how this PR should be handled after it is
approved.

Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"

A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->
2024-01-07 08:32:52 +05:30
psychedelicious
3fe332e85f fix(ui): fix add node autoconnect
The new select component appears to close itself before calling the onchange handler. This short-circuits the autoconnect logic. Tweaked so the ordering is correct.
2024-01-07 14:00:22 +11:00
psychedelicious
3428ea1b3c feat(ui): use config for all numerical params
Centralize the initial/min/max/etc values for all numerical params. We used this for some but at some point stopped updating it.

All numerical params now use their respective configs. Far fewer hardcoded values throughout the app now.

Also updated the config types a bit to better accommodate slider vs number input constraints.
2024-01-07 13:49:29 +11:00
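A sketch of what such a centralized config might look like; the parameter name and numbers are illustrative, not the app's actual values:

```ts
type NumericalParamConfig = {
  initial: number;
  // Slider and number input get separate bounds: the slider stops at a sane
  // maximum while the number input allows larger manually-typed values.
  sliderMin: number;
  sliderMax: number;
  numberInputMin: number;
  numberInputMax: number;
  fineStep: number;
  coarseStep: number;
};

export const paramConfigs: Record<string, NumericalParamConfig> = {
  steps: {
    initial: 30,
    sliderMin: 1,
    sliderMax: 100,
    numberInputMin: 1,
    numberInputMax: 500,
    fineStep: 1,
    coarseStep: 5,
  },
};
```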
Wubbbi
6024fc7baf Update diffusers to the latest version 2024-01-06 21:47:51 -05:00
psychedelicious
75c1c4ce5a fix(ui): fix gallery nav math
- Use the virtuoso grid item container and list containers to calculate imagesPerRow, skipping manual compensation for padding of images
- Round the imagesPerRow instead of flooring - we often will end up with values like 4.99999 due to floating point precision
- Update `getDownImage` comments & logic to be clearer
- Use variables for the ids in query selectors, preventing future typos
- Only scroll if the new selected image is different from the prev one
2024-01-06 20:52:09 -05:00
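A sketch of the row-width math, assuming the virtuoso item and list containers can be looked up by id; the ids are placeholders:

```ts
const GRID_ITEM_ID = 'gallery-grid-item';
const GRID_LIST_ID = 'gallery-grid-list';

const getImagesPerRow = (): number => {
  const item = document.getElementById(GRID_ITEM_ID);
  const list = document.getElementById(GRID_LIST_ID);
  if (!item || !list) {
    return 0;
  }
  // Round instead of flooring: widths often come out as e.g. 4.99999 due to
  // floating point precision, and flooring would undercount the row by one.
  return Math.round(list.clientWidth / item.clientWidth);
};
```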
Lincoln Stein
ffa05a0bb3 Only replace vae when it is the broken SDXL 1.0 version 2024-01-06 14:06:47 -05:00
Lincoln Stein
a20e17330b blackify 2024-01-06 14:06:47 -05:00
Lincoln Stein
4e83644433 if sdxl-vae-fp16-fix model is available then bake it in when converting ckpts 2024-01-06 14:06:47 -05:00
Surisen
604f0083f2 translationBot(ui): update translation (Chinese (Simplified))
Currently translated at 100.0% (1402 of 1402 strings)

Co-authored-by: Surisen <zhonghx0804@outlook.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/zh_Hans/
Translation: InvokeAI/Web UI
2024-01-07 01:21:04 +11:00
Васянатор
2a8a158823 translationBot(ui): update translation (Russian)
Currently translated at 96.2% (1349 of 1402 strings)

Co-authored-by: Васянатор <ilabulanov339@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/
Translation: InvokeAI/Web UI
2024-01-07 01:21:04 +11:00
psychedelicious
f8c3db72e9 feat(ui): improved arrow key navigation in gallery
- Fix preexisting bug where gallery network requests were duplicated when triggering infinite scroll
- Refactor `useNextPrevImage` to not use `state => state` as an input selector - logic split up into different hooks
- Use instant scroll for arrow key navigation instead of smooth scroll - smooth scroll is janky when you hold the arrow down and it fires rapidly
- Move gallery nav hotkeys to GalleryImageGrid component, so they work whenever the gallery is open (previously didn't work on canvas or workflow editor tabs)
- Use nanostores for gallery grid refs instead of passing context with virtuoso's context feature, making it much simpler to do the imperative gallery nav
- General gallery hook/component cleanup
2024-01-07 01:19:32 +11:00
psychedelicious
60815807f9 fix(ui): fix merge issue w/ selectors 2024-01-07 01:19:32 +11:00
Rohinish
196fb0e014 added support for bottom key 2024-01-07 01:19:32 +11:00
Rohinish
eba668956d up button support in gallery navigation 2024-01-07 01:19:32 +11:00
Millun Atluri
ee5ec023f4 {release} v3.6.0rc4 (#5424)
## What type of PR is this? (check all applicable)

Release v3.6.0rc4


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description
Release for v3.6.0rc4

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings
[Uploading InvokeAI-installer-v3.6.0rc4.zip…](Installer Zip)

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Merge Plan
- This PR can be merged when approved
<!--
A merge plan describes how this PR should be handled after it is
approved.

Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"

A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
Release on PyPi & GitHub
2024-01-06 12:02:57 +11:00
Millun Atluri
d59661e0af {release} v3.6.0rc4 2024-01-06 11:08:00 +11:00
psychedelicious
f51e8eeae1 fix(ui): better node footer spacing 2024-01-06 09:09:38 +11:00
psychedelicious
6e06935e75 fix(ui): fix favicon
It wasn't in the right place to be bundled into `assets/` by vite.

Also replaced uncategorized board's fallback image with new logo.
2024-01-06 09:09:38 +11:00
Ryan Dick
f7f697849c Skip weight initialization when resizing text encoder token embeddings to accommodate new TI embeddings. This saves time. 2024-01-05 15:16:00 -05:00
Mary Hipp Rogers
8e17e29a5c fix text color for lora card (#5417)
* use label

* lint

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2024-01-05 09:57:16 -05:00
Mary Hipp Rogers
12e9f17f7a only GET intermediates if that setting is an option (#5416)
* only GET intermediates if that setting is an option

* lint

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2024-01-05 09:40:34 -05:00
psychedelicious
cb7e56a9a3 chore(ui): lint 2024-01-05 08:34:46 -05:00
psychedelicious
1a710a4c12 fix(ui): restore prev colors for workflow editor
Brand colors are now prefixed with "invoke".
2024-01-05 08:34:46 -05:00
psychedelicious
d8d266d3be fix(ui): fix field title spacing
Closes #5405
2024-01-05 08:34:46 -05:00
psychedelicious
4716632c23 fix(ui): tsc 2024-01-06 00:03:07 +11:00
psychedelicious
3c4150d153 fix(ui): update most other selectors
Just a few stragglers left. Good enough for now.
2024-01-06 00:03:07 +11:00
psychedelicious
b71b14d582 fix(ui): update workflow selectors 2024-01-06 00:03:07 +11:00
psychedelicious
73481d4aec feat(ui): clean up canvas selectors
Do not memoize unless absolutely necessary. Minor perf improvement
2024-01-06 00:03:07 +11:00
psychedelicious
2c049a3b94 feat(ui): clean up a few selectors that do not need to be memoized 2024-01-06 00:03:07 +11:00
psychedelicious
367de44a8b fix(ui): tidy remaining selectors
These were just using overly verbose syntax - like explicitly typing `state: RootState`, which is unnecessary.
2024-01-06 00:03:07 +11:00
psychedelicious
f5f378d04b fix(ui): revert back to lrumemoize 2024-01-06 00:03:07 +11:00
psychedelicious
823edbfdef fix(ui): fix more state => state selectors 2024-01-06 00:03:07 +11:00
psychedelicious
29bbb27289 fix(ui): re-add reselect patch
Accidentally removed it last commit.
2024-01-06 00:03:07 +11:00
psychedelicious
a23502f7ff fix(ui): do not use state => state as an input selector
This is a no-no, whoops!
2024-01-06 00:03:07 +11:00
psychedelicious
ce64dbefce chore(ui): lint 2024-01-06 00:03:07 +11:00
psychedelicious
b47afdc3b5 feat(ui): patch reselect to use lruMemoize only
Pending resolution of https://github.com/reduxjs/reselect/issues/635, we can patch `reselect` to use `lruMemoize` exclusively.

Pin RTK and react-redux versions too just to be safe.

This reduces the major GC events that were causing lag/stutters in the app, particularly in canvas and workflow editor.
2024-01-06 00:03:07 +11:00
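A sketch of pinning selector memoization, assuming reselect v5's exports; the selector itself is illustrative:

```ts
import { createSelectorCreator, lruMemoize } from 'reselect';

// Every selector built with this creator uses LRU memoization instead of the
// default memoizer, avoiding the large GC events described above.
export const createLruSelector = createSelectorCreator(lruMemoize);

type RootState = { canvas: { tool: string } };

export const selectActiveTool = createLruSelector(
  [(state: RootState) => state.canvas.tool],
  (tool) => tool.toUpperCase()
);
```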
psychedelicious
cde9c3090f fix(ui): use useAppSelector instead of useSelector 2024-01-06 00:03:07 +11:00
psychedelicious
6924b04d7c feat(ui): use lruMemoize for all entity adapter selectors 2024-01-06 00:03:07 +11:00
blessedcoolant
83fbd4bdf2 ui: slightly reposition floating bars. 2024-01-05 23:59:08 +11:00
Lincoln Stein
6460dcc7e0 use torch.bfloat16 on cuda systems 2024-01-04 23:25:52 -05:00
Millun Atluri
59aa009c93 {release} v3.6.0rc3 (#5408)
## What type of PR is this? (check all applicable)

Release v3.6.0rc3


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [X] No


## Description
Next release candidate

## Related Tickets & Documents
N/A
<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings
[Uploading InvokeAI-installer-v3.6.0rc3.zip…](Installer zip)

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Merge Plan
This PR can be merged when approved
<!--
A merge plan describes how this PR should be handled after it is
approved.

Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"

A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
Release on PyPI & Github
2024-01-05 10:39:20 +11:00
Millun Atluri
59d2a012cd {release} v3.6.0rc3 2024-01-05 09:40:21 +11:00
Kent Keirsey
7e3b620830 Update README.md 2024-01-04 15:56:44 -05:00
psychedelicious
e16b55816f fix(ui): clarify comparison in usePanel 2024-01-05 07:09:37 +11:00
psychedelicious
895cb8637e fix(ui): fix panel resize bug
A bug that caused panels to be collapsed on a fresh indexedDb was fixed in dd32c632cd, but this re-introduced a different bug that caused the panels to expand on window resize if they were already collapsed.

Revert the previous change and instead add one imperative resize outside the observer, so that on startup, we set both panels to their minimum sizes.
2024-01-05 07:09:37 +11:00
Riccardo Giovanetti
fe5bceb1ed translationBot(ui): update translation (Italian)
Currently translated at 97.3% (1363 of 1400 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2024-01-05 07:04:04 +11:00
Mary Hipp
5d475a40f5 lint 2024-01-04 12:51:36 -05:00
Mary Hipp
bca7ea1674 option to override logo component 2024-01-04 12:51:36 -05:00
Mary Hipp
f27bb402fb render one or the other 2024-01-04 11:11:10 -05:00
Mary Hipp Rogers
dd32c632cd fix default panel width (#5403)
* fix default panel width

* lint

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2024-01-04 11:04:21 -05:00
Mary Hipp Rogers
9e2e740033 custom components for nav, gallery header, and app info (#5400)
* replace custom header with custom nav component to go below settings

* add option for custom gallery header

* add option for custom app info text on logo hover

* add data-testid for tabs

* remove descriptions

* lint

* lint

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2024-01-04 15:30:27 +00:00
psychedelicious
d6362ce0bd fix(ui): fix invalid nesting of button in button 2024-01-04 09:36:59 -05:00
psychedelicious
2347a00a70 fix(ui): do not show loading state on floating invoke button if disabled 2024-01-04 09:36:59 -05:00
psychedelicious
0b7dc721cf chore(ui): lint 2024-01-04 09:36:59 -05:00
psychedelicious
ac04a834ef feat(ui): tidy hotkeysmodal state 2024-01-04 09:36:59 -05:00
psychedelicious
bbca053b48 feat(ui): style settings modal 2024-01-04 09:36:59 -05:00
psychedelicious
fcf2006502 feat(ui): increase brightness of accordion title 2024-01-04 09:36:59 -05:00
psychedelicious
ac0d0019bd chore(ui): lint 2024-01-04 09:36:59 -05:00
psychedelicious
2d922a0a65 feat(ui): restore floating options and gallery buttons 2024-01-04 09:36:59 -05:00
psychedelicious
8db14911d7 feat(ui): give tooltips padding from screen edge
We can pass a popperjs modifier to the tooltip to give it this padding.
2024-01-04 09:36:59 -05:00
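A sketch of the modifier, assuming the tooltip component forwards popper.js modifiers; the padding value is illustrative:

```ts
// popper.js's preventOverflow modifier keeps the popper inside its boundary;
// the padding option adds a margin from the viewport edge.
const tooltipModifiers = [
  {
    name: 'preventOverflow',
    options: {
      padding: 10, // keep the tooltip at least 10px from the screen edge
    },
  },
];

export default tooltipModifiers;
```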
psychedelicious
01bab58b20 fix(ui): do not resize panel when window resizes if panel is collapsed 2024-01-04 09:36:59 -05:00
psychedelicious
7a57bc99cf feat(ui): statusindicator changes
We are now using the lefthand vertical strip for the settings menu button. This is a good place for the status indicator.

Really, we only need to display something *if there is a problem*. If the app is processing, the progress bar indicates that.

For the case where the panels are collapsed, I'll add the floating buttons back in some form, and we'll indicate via those if the app is processing something.
2024-01-04 09:36:59 -05:00
psychedelicious
d3b6d86e74 feat(ui): tweak badge styles 2024-01-04 09:36:59 -05:00
psychedelicious
360b6cb286 fix(ui): fix logo version tooltip 2024-01-04 09:36:59 -05:00
psychedelicious
8f9e9e639e fix(ui): fix hotkey key & untranslated string 2024-01-04 09:36:59 -05:00
psychedelicious
6930d8ba41 feat(ui): do not wrap expander content in box 2024-01-04 09:36:59 -05:00
psychedelicious
7ad74e680d feat(ui): make invexpander button styles less complex
Just make it like a normal button - normal and hover states, no difference when it's expanded. The icon clearly indicates this, and you can see the extra components.
2024-01-04 09:36:59 -05:00
psychedelicious
c56a6a4ddd feat(ui): make expander divider button, add hover, remove color
On one hand I like the color, but on the other it makes this divider a focal point, which doesn't really make sense to me. I tried several shades but think it adds a bit too much distraction for your eyes.
2024-01-04 09:36:59 -05:00
psychedelicious
afad764a00 feat(ui): make badges a bit paler
too stabby in the eye region
2024-01-04 09:36:59 -05:00
psychedelicious
49a72bd714 feat(ui): use wrench icon in settings menu for settings 2024-01-04 09:36:59 -05:00
psychedelicious
8cf14287b6 feat(ui): simplify App.tsx layout
There was an extra div, needed for the fullscreen file upload dropzone, that made styling the main app containers a bit awkward.

Refactor the uploader a bit to simplify this - no longer need so many app-level wrappers. Much cleaner.
2024-01-04 09:36:59 -05:00
blessedcoolant
0db47dd5e7 ui: Bolden text & add activation color for expanded state 2024-01-04 09:36:59 -05:00
blessedcoolant
71f6f77ae8 ui: Change background and padding of advanced settings 2024-01-04 09:36:59 -05:00
blessedcoolant
6f16229c41 fix: tone down the base color saturation by one step 2024-01-04 09:36:59 -05:00
blessedcoolant
0cc0d794d1 fix: Minor alignment issues with the queue badge 2024-01-04 09:36:59 -05:00
blessedcoolant
535639cb95 feat: Update status and progress colors to match new theme 2024-01-04 09:36:59 -05:00
blessedcoolant
2250bca8d9 feat: Remove Header
Remove header and incorporate everything else into the side bar and other areas
2024-01-04 09:36:59 -05:00
psychedelicious
4ce39a5974 fix(ui): remove unused icons 2024-01-04 13:59:25 +11:00
psychedelicious
644e9287f0 chore(ui): control adapter docstrings 2024-01-04 13:59:25 +11:00
psychedelicious
6a5e0be022 fix(ui): reduce minStepsBetweenThumbs for ca begin/end 2024-01-04 13:59:25 +11:00
psychedelicious
707f0f7091 feat(ui): update favicon to new logo 2024-01-04 13:59:25 +11:00
psychedelicious
8e709fe05a fix(ui): remove shift+enter to cancel
Whoops!
2024-01-04 13:59:25 +11:00
Hosted Weblate
154da609cb translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2024-01-04 01:19:31 +11:00
psychedelicious
21975d6268 chore(ui): lint 2024-01-03 09:09:50 -05:00
psychedelicious
31035b3e63 fix(ui): fix scaled bbox sliders
Removed logic related to aspect ratio from the components.

When the main bbox changes, if the scale method is auto, the reducers will handle the scaled bbox size appropriately.

Somehow linking up the manual mode to the aspect ratio is tricky, and instead of adding complexity for a rarely-used mode, I'm leaving manual mode as fully manual.
2024-01-03 09:09:50 -05:00
psychedelicious
6c05818887 fix(ui): workaround canvas weirdness with locked aspect ratio
Cannot figure out how to allow the bbox to be transformed from all handles when the aspect ratio is locked. Only the bottom-right handle works as expected.

As a workaround, when the aspect ratio is locked, you can only resize the bbox from the bottom right handle.
2024-01-03 09:09:50 -05:00
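A sketch of the workaround, assuming Konva's Transformer API; only the bottom-right anchor stays enabled while the ratio is locked:

```ts
import Konva from 'konva';

const getBboxTransformer = (aspectRatioLocked: boolean): Konva.Transformer =>
  new Konva.Transformer({
    keepRatio: aspectRatioLocked,
    // Only the bottom-right handle resizes predictably with a locked ratio,
    // so the other anchors are disabled in that mode.
    enabledAnchors: aspectRatioLocked
      ? ['bottom-right']
      : ['top-left', 'top-right', 'bottom-left', 'bottom-right'],
    rotateEnabled: false,
  });
```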
psychedelicious
77c5b051f0 fix(ui): clean up actionsDenylist 2024-01-03 09:09:50 -05:00
psychedelicious
4fdc4c15f9 feat(ui): add optimal size handling 2024-01-03 09:09:50 -05:00
psychedelicious
1a4be78013 fix(ui): make aspect ratio preview pixel-perfect size 2024-01-03 09:09:50 -05:00
psychedelicious
eb16ad3d6f fix(ui): remove old esc hotkey 2024-01-03 09:09:50 -05:00
psychedelicious
1fee08639d feat(ui): tweak board search UI 2024-01-03 09:09:50 -05:00
psychedelicious
7caaf40835 feat(ui): reworked hotkeys modal
- Displays all as list
- Uses chakra `Kbd` component for keys
- Provides search box
2024-01-03 09:09:50 -05:00
psychedelicious
6bfe994622 feat(ui): bump fontSize in fallback component 2024-01-03 09:09:50 -05:00
psychedelicious
8a6f03cd46 feat(ui): improved panel interactions 2024-01-03 09:09:50 -05:00
psychedelicious
4ce9f9dc36 fix(ui): scaled bounding box uses canvas aspect ratio 2024-01-03 09:09:50 -05:00
1082 changed files with 26956 additions and 28283 deletions

.github/CODEOWNERS

@@ -1,5 +1,5 @@
# continuous integration
/.github/workflows/ @lstein @blessedcoolant @hipsterusername
/.github/workflows/ @lstein @blessedcoolant @hipsterusername @ebr
# documentation
/docs/ @lstein @blessedcoolant @hipsterusername @Millu
@@ -10,7 +10,7 @@
# installation and configuration
/pyproject.toml @lstein @blessedcoolant @hipsterusername
/docker/ @lstein @blessedcoolant @hipsterusername
/docker/ @lstein @blessedcoolant @hipsterusername @ebr
/scripts/ @ebr @lstein @hipsterusername
/installer/ @lstein @ebr @hipsterusername
/invokeai/assets @lstein @ebr @hipsterusername
@@ -26,9 +26,7 @@
# front ends
/invokeai/frontend/CLI @lstein @hipsterusername
/invokeai/frontend/install @lstein @ebr @hipsterusername
/invokeai/frontend/install @lstein @ebr @hipsterusername
/invokeai/frontend/merge @lstein @blessedcoolant @hipsterusername
/invokeai/frontend/training @lstein @blessedcoolant @hipsterusername
/invokeai/frontend/web @psychedelicious @blessedcoolant @maryhipp @hipsterusername


@@ -6,10 +6,6 @@ title: '[bug]: '
labels: ['bug']
# assignees:
# - moderator_bot
# - lstein
body:
- type: markdown
attributes:
@@ -18,10 +14,9 @@ body:
- type: checkboxes
attributes:
label: Is there an existing issue for this?
label: Is there an existing issue for this problem?
description: |
Please use the [search function](https://github.com/invoke-ai/InvokeAI/issues?q=is%3Aissue+is%3Aopen+label%3Abug)
first to see if an issue already exists for the bug you encountered.
Please [search](https://github.com/invoke-ai/InvokeAI/issues) first to see if an issue already exists for the problem.
options:
- label: I have searched the existing issues
required: true
@@ -33,80 +28,119 @@ body:
- type: dropdown
id: os_dropdown
attributes:
label: OS
description: Which operating System did you use when the bug occured
label: Operating system
description: Your computer's operating system.
multiple: false
options:
- 'Linux'
- 'Windows'
- 'macOS'
- 'other'
validations:
required: true
- type: dropdown
id: gpu_dropdown
attributes:
label: GPU
description: Which kind of Graphic-Adapter is your System using
label: GPU vendor
description: Your GPU's vendor.
multiple: false
options:
- 'cuda'
- 'amd'
- 'mps'
- 'cpu'
- 'Nvidia (CUDA)'
- 'AMD (ROCm)'
- 'Apple Silicon (MPS)'
- 'None (CPU)'
validations:
required: true
- type: input
id: gpu_model
attributes:
label: GPU model
description: Your GPU's model. If on Apple Silicon, this is your Mac's chip. Leave blank if on CPU.
placeholder: ex. RTX 2080 Ti, Mac M1 Pro
validations:
required: false
- type: input
id: vram
attributes:
label: VRAM
description: Size of the VRAM if known
label: GPU VRAM
description: Your GPU's VRAM. If on Apple Silicon, this is your Mac's unified memory. Leave blank if on CPU.
placeholder: 8GB
validations:
required: false
- type: input
id: version-number
attributes:
label: What version did you experience this issue on?
label: Version number
description: |
Please share the version of Invoke AI that you experienced the issue on. If this is not the latest version, please update first to confirm the issue still exists. If you are testing main, please include the commit hash instead.
placeholder: X.X.X
The version of Invoke you have installed. If it is not the latest version, please update and try again to confirm the issue still exists. If you are testing main, please include the commit hash instead.
placeholder: ex. 3.6.1
validations:
required: true
- type: input
id: browser-version
attributes:
label: Browser
description: Your web browser and version.
placeholder: ex. Firefox 123.0b3
validations:
required: true
- type: textarea
id: python-deps
attributes:
label: Python dependencies
description: |
If the problem occurred during image generation, click the gear icon at the bottom left corner, click "About", click the copy button and then paste here.
validations:
required: false
- type: textarea
id: what-happened
attributes:
label: What happened?
label: What happened
description: |
Briefly describe what happened, what you expected to happen and how to reproduce this bug.
placeholder: When using the webinterface and right-clicking on button X instead of the popup-menu there error Y appears
Describe what happened. Include any relevant error messages, stack traces and screenshots here.
placeholder: I clicked button X and then Y happened.
validations:
required: true
- type: textarea
id: what-you-expected
attributes:
label: Screenshots
description: If applicable, add screenshots to help explain your problem
placeholder: this is what the result looked like <screenshot>
label: What you expected to happen
description: Describe what you expected to happen.
placeholder: I expected Z to happen.
validations:
required: true
- type: textarea
id: how-to-repro
attributes:
label: How to reproduce the problem
description: List steps to reproduce the problem.
placeholder: Start the app, generate an image with these settings, then click button X.
validations:
required: false
- type: textarea
id: additional-context
attributes:
label: Additional context
description: Add any other context about the problem here
description: Any other context that might help us to understand the problem.
placeholder: Only happens when there is full moon and Friday the 13th on Christmas Eve 🎅🏻
validations:
required: false
- type: input
id: contact
id: discord-username
attributes:
label: Contact Details
description: __OPTIONAL__ How can we get in touch with you if we need more info (besides this issue)?
placeholder: ex. email@example.com, discordname, twitter, ...
label: Discord username
description: If you are on the Invoke discord and would prefer to be contacted there, please provide your username.
placeholder: supercoolusername123
validations:
required: false

.github/pr_labels.yml (new file)

@@ -0,0 +1,59 @@
Root:
- changed-files:
- any-glob-to-any-file: '*'
PythonDeps:
- changed-files:
- any-glob-to-any-file: 'pyproject.toml'
Python:
- changed-files:
- all-globs-to-any-file:
- 'invokeai/**'
- '!invokeai/frontend/web/**'
PythonTests:
- changed-files:
- any-glob-to-any-file: 'tests/**'
CICD:
- changed-files:
- any-glob-to-any-file: .github/**
Docker:
- changed-files:
- any-glob-to-any-file: docker/**
Installer:
- changed-files:
- any-glob-to-any-file: installer/**
Documentation:
- changed-files:
- any-glob-to-any-file: docs/**
Invocations:
- changed-files:
- any-glob-to-any-file: 'invokeai/app/invocations/**'
Backend:
- changed-files:
- any-glob-to-any-file: 'invokeai/backend/**'
Api:
- changed-files:
- any-glob-to-any-file: 'invokeai/app/api/**'
Services:
- changed-files:
- any-glob-to-any-file: 'invokeai/app/services/**'
FrontendDeps:
- changed-files:
- any-glob-to-any-file:
- '**/*/package.json'
- '**/*/pnpm-lock.yaml'
Frontend:
- changed-files:
- any-glob-to-any-file: 'invokeai/frontend/web/**'


@@ -40,10 +40,14 @@ jobs:
- name: Free up more disk space on the runner
# https://github.com/actions/runner-images/issues/2840#issuecomment-1284059930
run: |
echo "----- Free space before cleanup"
df -h
sudo rm -rf /usr/share/dotnet
sudo rm -rf "$AGENT_TOOLSDIRECTORY"
sudo swapoff /mnt/swapfile
sudo rm -rf /mnt/swapfile
echo "----- Free space after cleanup"
df -h
- name: Checkout
uses: actions/checkout@v3
@@ -91,6 +95,7 @@ jobs:
# password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Build container
timeout-minutes: 40
id: docker_build
uses: docker/build-push-action@v4
with:

.github/workflows/label-pr.yml (new file)

@@ -0,0 +1,16 @@
name: "Pull Request Labeler"
on:
- pull_request_target
jobs:
labeler:
permissions:
contents: read
pull-requests: write
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- uses: actions/labeler@v5
with:
configuration-path: .github/pr_labels.yml


@@ -58,7 +58,7 @@ jobs:
- name: Check for changed python files
id: changed-files
uses: tj-actions/changed-files@v37
uses: tj-actions/changed-files@v41
with:
files_yaml: |
python:


@@ -1,10 +1,10 @@
<div align="center">
![project hero](https://github.com/invoke-ai/InvokeAI/assets/31807370/1a917d94-e099-4fa1-a70f-7dd8d0691018)
![project hero](https://github.com/invoke-ai/InvokeAI/assets/31807370/6e3728c7-e90e-4711-905c-3b55844ff5be)
# Invoke AI - Generative AI for Professional Creatives
## Professional Creative Tools for Stable Diffusion, Custom-Trained Models, and more.
To learn more about Invoke AI, get started instantly, or implement our Business solutions, visit [invoke.ai](https://invoke.ai)
# Invoke - Professional Creative AI Tools for Visual Media
## To learn more about Invoke, or implement our Business solutions, visit [invoke.com](https://www.invoke.com/about)
[![discord badge]][discord link]
@@ -56,7 +56,9 @@ the foundation for multiple commercial products.
<div align="center">
![canvas preview](https://github.com/invoke-ai/InvokeAI/raw/main/docs/assets/canvas_preview.png)
![Highlighted Features - Canvas and Workflows](https://github.com/invoke-ai/InvokeAI/assets/31807370/708f7a82-084f-4860-bfbe-e2588c53548d)
</div>
@@ -167,7 +169,7 @@ the command `npm install -g pnpm` if needed)
_For Linux with an AMD GPU:_
```sh
pip install InvokeAI --use-pep517 --extra-index-url https://download.pytorch.org/whl/rocm5.4.2
pip install InvokeAI --use-pep517 --extra-index-url https://download.pytorch.org/whl/rocm5.6
```
_For non-GPU systems:_


@@ -59,7 +59,7 @@ RUN --mount=type=cache,target=/root/.cache/pip \
# #### Build the Web UI ------------------------------------
FROM node:18-slim AS web-builder
FROM node:20-slim AS web-builder
ENV PNPM_HOME="/pnpm"
ENV PATH="$PNPM_HOME:$PATH"
RUN corepack enable
@@ -68,7 +68,7 @@ WORKDIR /build
COPY invokeai/frontend/web/ ./
RUN --mount=type=cache,target=/pnpm/store \
pnpm install --frozen-lockfile
RUN pnpm run build
RUN npx vite build
#### Runtime stage ---------------------------------------

(Several binary image assets changed; previews not shown.)


@@ -15,8 +15,13 @@ model. These are the:
their metadata, and `ModelRecordServiceBase` to store that
information. It is also responsible for managing the InvokeAI
`models` directory and its contents.
* _DownloadQueueServiceBase_ (**CURRENTLY UNDER DEVELOPMENT - NOT IMPLEMENTED**)
* _ModelMetadataStore_ and _ModelMetaDataFetch_ Backend modules that
are able to retrieve metadata from online model repositories,
transform them into Pydantic models, and cache them to the InvokeAI
SQL database.
* _DownloadQueueServiceBase_
A multithreaded downloader responsible
for downloading models from a remote source to disk. The download
queue has special methods for downloading repo_id folders from
@@ -30,13 +35,13 @@ model. These are the:
## Location of the Code
All four of these services can be found in
The four main services can be found in
`invokeai/app/services` in the following directories:
* `invokeai/app/services/model_records/`
* `invokeai/app/services/model_install/`
* `invokeai/app/services/downloads/`
* `invokeai/app/services/model_loader/` (**under development**)
* `invokeai/app/services/downloads/`(**under development**)
Code related to the FastAPI web API can be found in
`invokeai/app/api/routers/model_records.py`.
@@ -402,15 +407,18 @@ functionality:
the download, installation and registration process.
- Downloading a model from an arbitrary URL and installing it in
`models_dir` (_implementation pending_).
`models_dir`.
- Special handling for Civitai model URLs which allow the user to
paste in a model page's URL or download link (_implementation pending_).
paste in a model page's URL or download link
- Special handling for HuggingFace repo_ids to recursively download
the contents of the repository, paying attention to alternative
variants such as fp16. (_implementation pending_)
variants such as fp16.
- Saving tags and other metadata about the model into the invokeai database
when fetching from a repo that provides that type of information,
(currently only Civitai and HuggingFace).
### Initializing the installer
@@ -426,16 +434,24 @@ following initialization pattern:
from invokeai.app.services.config import InvokeAIAppConfig
from invokeai.app.services.model_records import ModelRecordServiceSQL
from invokeai.app.services.model_install import ModelInstallService
from invokeai.app.services.download import DownloadQueueService
from invokeai.app.services.shared.sqlite import SqliteDatabase
from invokeai.backend.util.logging import InvokeAILogger
config = InvokeAIAppConfig.get_config()
config.parse_args()
logger = InvokeAILogger.get_logger(config=config)
db = SqliteDatabase(config, logger)
record_store = ModelRecordServiceSQL(db)
queue = DownloadQueueService()
queue.start()
store = ModelRecordServiceSQL(db)
installer = ModelInstallService(config, store)
installer = ModelInstallService(app_config=config,
record_store=record_store,
download_queue=queue
)
installer.start()
```
The full form of `ModelInstallService()` takes the following
@@ -443,9 +459,12 @@ required parameters:
| **Argument** | **Type** | **Description** |
|------------------|------------------------------|------------------------------|
| `config` | InvokeAIAppConfig | InvokeAI app configuration object |
| `app_config` | InvokeAIAppConfig | InvokeAI app configuration object |
| `record_store` | ModelRecordServiceBase | Config record storage database |
| `event_bus` | EventServiceBase | Optional event bus to send download/install progress events to |
| `download_queue` | DownloadQueueServiceBase | Download queue object |
| `metadata_store` | Optional[ModelMetadataStore] | Metadata storage object |
| `session` | Optional[requests.Session] | Swap in a different Session object (usually for debugging) |
Once initialized, the installer will provide the following methods:
```
# source1 through source6 are additional ModelSource objects, defined similarly (omitted here)
source7 = URLModelSource(url='https://civitai.com/api/download/models/63006', access_token=...)  # token elided

sources = [source1, source2, source3, source4, source5, source6, source7]
for source in sources:
    install_job = installer.import_model(source)
source2job = installer.wait_for_installs(timeout=120)
for source in sources:
    job = source2job[source]
    if job.complete:
        model_config = job.config_out
        model_key = model_config.key
        print(f"{source} installed as {model_key}")
    elif job.errored:
        print(f"{source}: {job.error_type}.\nStack trace:\n{job.error}")
```
The full list of arguments to `import_model()` is as follows:
| **Argument** | **Type** | **Default** | **Description** |
|------------------|------------------------------|-------------|-------------------------------------------|
| `source` | ModelSource | None | The source of the model, Path, URL or repo_id |
| `config` | Dict[str, Any] | None | Override all or a portion of model's probed attributes |
| `access_token` | str | None | Provide authorization information needed to download |
The next few sections describe the various types of ModelSource that
can be passed to `import_model()`.
`config` can be used to override all or a portion of the configuration
attributes returned by the model prober. See the section below for
details.
`access_token` is passed to the download queue and used to access
repositories that require it.
#### LocalModelSource
This is used for a model that is located on a locally-accessible Posix
filesystem, such as a local disk or networked fileshare.
| **Argument** | **Type** | **Default** | **Description** |
|------------------|------------------------------|-------------|-------------------------------------------|
| `path` | str \| Path | None | Path to the model file or directory |
| `inplace` | bool | False | If set, the model file(s) will be left in their location; otherwise they will be copied into the InvokeAI root's `models` directory |
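For illustration, here is a minimal sketch of registering a local
model in place. It reuses the `installer` object from the
initialization example above, and assumes `LocalModelSource` is
importable from the `model_install` service package:

```
from pathlib import Path

from invokeai.app.services.model_install import LocalModelSource  # assumed import location

# Register the model where it currently sits, without copying it into models_dir.
source = LocalModelSource(path=Path('/data/models/mymodel.safetensors'), inplace=True)
install_job = installer.import_model(source)
```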
#### URLModelSource
This is used for a single-file model that is accessible via a URL. The
fields are:
| **Argument** | **Type** | **Default** | **Description** |
|------------------|------------------------------|-------------|-------------------------------------------|
| `url` | AnyHttpUrl | None | The URL for the model file. |
| `access_token` | str | None | An access token needed to gain access to this file. |
The `AnyHttpUrl` class can be imported from `pydantic.networks`.
Ordinarily, no metadata is retrieved from these sources. However,
there is special-case code in the installer that looks for HuggingFace
and Civitai URLs and fetches the model metadata from the
corresponding repo.
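A sketch of importing from a URL follows; the token value is a
hypothetical placeholder, and the import location of `URLModelSource`
is an assumption:

```
from invokeai.app.services.model_install import URLModelSource  # assumed import location

source = URLModelSource(
    url='https://civitai.com/api/download/models/63006',  # pydantic validates this into an AnyHttpUrl
    access_token='my-secret-token',  # hypothetical placeholder
)
install_job = installer.import_model(source)
```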
#### CivitaiModelSource
This is used for a model that is hosted by the Civitai web site.
| **Argument** | **Type** | **Default** | **Description** |
|------------------|------------------------------|-------------|-------------------------------------------|
| `version_id` | int | None | The ID of the particular version of the desired model. |
| `access_token` | str | None | An access token needed to gain access to a subscribers-only model. |
Civitai has two model IDs, both of which are integers. The `model_id`
corresponds to a collection of model versions that may differ in
arbitrary ways, such as derivation from different checkpoint training
steps, SFW vs NSFW generation, pruned vs non-pruned, etc. The
`version_id` points to a specific version. Please use the latter.
Some Civitai models require an access token to download. These can be
generated from the Civitai profile page of a logged-in
account. Somewhat annoyingly, if you fail to provide the access token
when downloading a model that needs it, Civitai generates a redirect
to a login page rather than a 403 Forbidden error. The installer
attempts to catch this event and issue an informative error
message. Otherwise you will get an "unrecognized model suffix" error
when the model prober tries to identify the type of the HTML login
page.
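A minimal sketch using a version ID follows; the ID and token below
are hypothetical, and the import location of `CivitaiModelSource` is
an assumption:

```
from invokeai.app.services.model_install import CivitaiModelSource  # assumed import location

# Note: pass the *version* id, not the model id.
source = CivitaiModelSource(version_id=63006, access_token='my-civitai-token')  # hypothetical values
install_job = installer.import_model(source)
```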
#### HFModelSource
HuggingFace has the most complicated `ModelSource` structure:
| **Argument** | **Type** | **Default** | **Description** |
|------------------|------------------------------|-------------|-------------------------------------------|
| `repo_id` | str | None | The ID of the desired model. |
| `variant` | ModelRepoVariant | ModelRepoVariant('fp16') | The desired variant. |
| `subfolder` | Path | None | Look for the model in a subfolder of the repo. |
| `access_token` | str | None | An access token needed to gain access to a subscribers-only model. |
The `repo_id` is the repository ID, such as `stabilityai/sdxl-turbo`.
The `variant` is one of the various diffusers formats that HuggingFace
supports and is used to pick out, from the hodgepodge of files in a
typical HuggingFace repository, the particular components needed for a
complete diffusers model. `ModelRepoVariant` is an enum that can be
imported from `invokeai.backend.model_manager` and has the following
values:
| **Name** | **String Value** |
|----------------------------|---------------------------|
| ModelRepoVariant.DEFAULT | "default" |
| ModelRepoVariant.FP16 | "fp16" |
| ModelRepoVariant.FP32 | "fp32" |
| ModelRepoVariant.ONNX | "onnx" |
| ModelRepoVariant.OPENVINO | "openvino" |
| ModelRepoVariant.FLAX | "flax" |
You can also pass the string forms to `variant` directly. Note that
InvokeAI may not be able to load and run all variants. At the current
time, specifying `ModelRepoVariant.DEFAULT` will retrieve model files
that are unqualified, e.g. `pytorch_model.safetensors` rather than
`pytorch_model.fp16.safetensors`. These are usually the 32-bit
safetensors forms of the model.
If `subfolder` is specified, then the requested model resides in a
subfolder of the main model repository. This is typically used to
fetch and install VAEs.
Some models require you to be registered with HuggingFace and logged
in. To download these files, you must provide an
`access_token`. Internally, if no access token is provided, then
`HfFolder.get_token()` will be called to fill it in with the cached
one.
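As a sketch, here is how one might fetch just the fp16 VAE from a
repo; the repo_id is illustrative, and the import location of
`HFModelSource` is an assumption:

```
from pathlib import Path

from invokeai.app.services.model_install import HFModelSource  # assumed import location
from invokeai.backend.model_manager import ModelRepoVariant

source = HFModelSource(
    repo_id='stabilityai/stable-diffusion-xl-base-1.0',  # illustrative repo_id
    variant=ModelRepoVariant.FP16,
    subfolder=Path('vae'),  # download only the VAE subfolder
)
install_job = installer.import_model(source)
```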
#### Monitoring the install job process
The `ModelInstallJob` class has the following structure:
| **Attribute** | **Type** | **Description** |
|----------------|-----------------|------------------|
| `id`           | `int`           | Integer ID for this job |
| `status`       | `InstallStatus` | An enum of [`waiting`, `downloading`, `running`, `completed`, `error` and `cancelled`] |
| `config_in` | `dict` | Overriding configuration values provided by the caller |
| `config_out` | `AnyModelConfig`| After successful completion, contains the configuration record written to the database |
| `inplace` | `boolean` | True if the caller asked to install the model in place using its local path |
When an event bus is provided to the installer, install events will be
broadcast to the InvokeAI event bus. The events will appear on the bus
as an event of type `EventServiceBase.model_event`, a timestamp and
the following event names:
##### `model_install_downloading`
For remote models only, `model_install_downloading` events will be issued at regular
intervals as the download progresses. The event's payload contains the
following keys:
| **Key** | **Type** | **Description** |
|----------------|-----------|------------------|
| `source` | str | String representation of the requested source |
| `local_path` | str | String representation of the path to the downloading model (usually a temporary directory) |
| `bytes` | int | How many bytes downloaded so far |
| `total_bytes` | int | Total size of all the files that make up the model |
| `parts` | List[Dict]| Information on the progress of the individual files that make up the model |
The `parts` field is a list of dictionaries that give information on
each of the component pieces of the download. The dictionary's keys
are `source`, `local_path`, `bytes` and `total_bytes`, and correspond
to the like-named keys in the main event.
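For illustration, a `model_install_downloading` payload might look
like the following sketch (all values are hypothetical):

```
{
    "timestamp": 1707264000.0,
    "source": "https://civitai.com/api/download/models/63006",
    "local_path": "/tmp/model-install-abc123/model.safetensors",
    "bytes": 52_428_800,           # downloaded so far
    "total_bytes": 2_132_625_612,  # total size of the model
    "parts": [
        {
            "source": "https://civitai.com/api/download/models/63006",
            "local_path": "/tmp/model-install-abc123/model.safetensors",
            "bytes": 52_428_800,
            "total_bytes": 2_132_625_612,
        },
    ],
}
```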
Note that downloading events will not be issued for local models, and
that downloading events occur *before* the running event.
##### `model_install_running`
`model_install_running` is issued when all the required downloads have completed (if applicable) and the
model probing, copying and registration process has now started.
The payload will contain the key `source`.
##### `model_install_completed`
`model_install_completed` is issued once at the end of a successful
installation. The payload will contain the keys `source`,
`total_bytes` and `key`, where `key` is the ID under which the model
has been registered.
##### `model_install_error`
`model_install_error` is emitted if the installation process fails for
some reason. The payload will contain the keys `source`, `error_type`
and `error`. `error_type` is a short message indicating the nature of
the error, and `error` is the long traceback to help debug the
problem.
##### `model_install_cancelled`
`model_install_cancelled` is issued if the model installation is
cancelled, or if one or more of its files' downloads are
cancelled. The payload will contain `source`.
##### Following the model status
You may poll the `ModelInstallJob` object returned by `import_model()`
to ascertain the state of the install. The job status can be read from
the job's `status` attribute, an `InstallStatus` enum which has the
enumerated values `WAITING`, `DOWNLOADING`, `RUNNING`, `COMPLETED`,
`ERROR` and `CANCELLED`.
For convenience, install jobs also provide the following boolean
properties: `waiting`, `downloading`, `running`, `complete`, `errored`
and `cancelled`, as well as `in_terminal_state`. The last will return
True if the job is in the complete, errored or cancelled states.
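As a minimal polling sketch, reusing the `installer` and a `source`
from the examples above:

```
import time

job = installer.import_model(source)
while not job.in_terminal_state:
    time.sleep(1)  # a real application would do useful work here

if job.complete:
    print(f'{source} installed as {job.config_out.key}')
elif job.errored:
    print(f'{source} failed: {job.error_type}\n{job.error}')
elif job.cancelled:
    print(f'{source} was cancelled')
```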
#### Model configuration and probing
The `config` argument can be used to provide overriding values for
any of the model's configuration attributes.
This is typically used to set
the model's name and description, but can also be used to overcome
cases in which automatic probing is unable to (correctly) determine
the model's attributes. The most common situation is the
`prediction_type` field for sd-2 (and rare sd-1) models. Here is an
example of how it works:
```
install_job = installer.import_model(
    source=HFModelSource(repo_id='stabilityai/stable-diffusion-2-1', variant='fp32'),
    config=dict(
        prediction_type=SchedulerPredictionType('v_prediction'),
        name='stable diffusion 2 base model',
    ),
)
```
This section describes additional methods provided by the installer class.
#### jobs = installer.wait_for_installs([timeout])
Block until all pending installs are completed or errored and then
return a list of completed jobs. The optional `timeout` argument will
cause the call to return if jobs aren't completed in the specified
time. An argument of 0 (the default) will block indefinitely.
#### jobs = installer.list_jobs()
Return a list of all active and complete `ModelInstallJobs`.
#### jobs = installer.get_job_by_source(source)
Return a list of `ModelInstallJob`s corresponding to the indicated
model source.
#### job = installer.get_job_by_id(id)

Return the `ModelInstallJob` corresponding to the indicated job id.
#### installer.cancel_job(job)
Cancel the indicated job.
#### installer.prune_jobs()
Remove jobs that are in a terminal state (i.e. complete, errored or
cancelled) from the job list returned by `list_jobs()` and
`get_job_by_source()`.
#### installer.app_config, installer.record_store, installer.event_bus
Properties that provide access to the installer's `InvokeAIAppConfig`,
`ModelRecordServiceBase` and `EventServiceBase` objects.
The installer's `start()` method is called automatically when the API
starts up. Its effect is to call `sync_to_config()` to
synchronize the model record store database with what's currently on
disk.
***
## Get on line: The Download Queue
Each job in the download queue has the following fields:

| **Field** | **Type** | **Default** | **Description** |
|----------------|-----------|---------|------------------|
| `job_started` | float | | Timestamp for when the job started running |
| `job_ended` | float | | Timestamp for when the job completed or errored out |
| `job_sequence` | int | | A counter that is incremented each time a model is dequeued |
| `error` | Exception | | A copy of the Exception that caused an error during download |
When you create a job, you can assign it a `priority`. If multiple
jobs are pending, their `priority` values determine the order in which
they are processed.
#### queue.start_all_jobs(), queue.pause_all_jobs(), queue.cancel_all_jobs()
This will start/pause/cancel all jobs that have been submitted to the
queue and have not yet reached a terminal state.
***
## This Meta be Good: Model Metadata Storage
The modules found under `invokeai.backend.model_manager.metadata`
provide a straightforward API for fetching model metadata from online
repositories. Currently two repositories are supported: HuggingFace
and Civitai. However, the modules are easily extended for additional
repos, provided that they have defined APIs for metadata access.
Metadata comprises any descriptive information that is not essential
for getting the model to run. For example "author" is metadata, while
"type", "base" and "format" are not. The latter fields are part of the
model's config, as defined in `invokeai.backend.model_manager.config`.
### Example Usage:
```
from invokeai.backend.model_manager.metadata import (
    AnyModelRepoMetadata,
    CivitaiMetadataFetch,
    CivitaiMetadata,
    ModelMetadataStore,
)
# to access the initialized sql database
from invokeai.app.api.dependencies import ApiDependencies
civitai = CivitaiMetadataFetch()
# fetch the metadata
model_metadata = civitai.from_url("https://civitai.com/models/215796")
# get some common metadata fields
author = model_metadata.author
tags = model_metadata.tags
# get some Civitai-specific fields
assert isinstance(model_metadata, CivitaiMetadata)
trained_words = model_metadata.trained_words
base_model = model_metadata.base_model_trained_on
thumbnail = model_metadata.thumbnail_url
# cache the metadata to the database using the key corresponding to
# an existing model config record in the `model_config` table
sql_cache = ModelMetadataStore(ApiDependencies.invoker.services.db)
sql_cache.add_metadata('fb237ace520b6716adc98bcb16e8462c', model_metadata)
# now we can search the database by tag, author or model name
# matches will contain a list of model keys that match the search
matches = sql_cache.search_by_tag({"tool", "turbo"})
```
### Structure of the Metadata objects
There is a short class hierarchy of Metadata objects, all of which
descend from the Pydantic `BaseModel`.
#### `ModelMetadataBase`
This is the common base class for metadata:
| **Field Name** | **Type** | **Description** |
|----------------|-----------------|------------------|
| `name` | str | Repository's name for the model |
| `author` | str | Model's author |
| `tags` | Set[str] | Model tags |
Note that the model config record also has a `name` field. It is
intended that the config record version be locally customizable, while
the metadata version is read-only. However, enforcing this is expected
to be part of the business logic.
Descendants of the base add additional fields.
#### `HuggingFaceMetadata`
This descends from `ModelMetadataBase` and adds the following fields:
| **Field Name** | **Type** | **Description** |
|----------------|-----------------|------------------|
| `type` | Literal["huggingface"] | Used for the discriminated union of metadata classes|
| `id` | str | HuggingFace repo_id |
| `tag_dict` | Dict[str, Any] | A dictionary of tag/value pairs provided in addition to `tags` |
| `last_modified`| datetime | Date of last commit of this model to the repo |
| `files` | List[Path] | List of the files in the model repo |
#### `CivitaiMetadata`
This descends from `ModelMetadataBase` and adds the following fields:
| **Field Name** | **Type** | **Description** |
|----------------|-----------------|------------------|
| `type` | Literal["civitai"] | Used for the discriminated union of metadata classes|
| `id` | int | Civitai model id |
| `version_name` | str | Name of this version of the model (distinct from model name) |
| `version_id` | int | Civitai model version id (distinct from model id) |
| `created` | datetime | Date this version of the model was created |
| `updated` | datetime | Date this version of the model was last updated |
| `published` | datetime | Date this version of the model was published to Civitai |
| `description` | str | Model description. Quite verbose and contains HTML tags |
| `version_description` | str | Model version description, usually describes changes to the model |
| `nsfw` | bool | Whether the model tends to generate NSFW content |
| `restrictions` | LicenseRestrictions | An object that describes what is and isn't allowed with this model |
| `trained_words`| Set[str] | Trigger words for this model, if any |
| `download_url` | AnyHttpUrl | URL for downloading this version of the model |
| `base_model_trained_on` | str | Name of the model that this version was trained on |
| `thumbnail_url` | AnyHttpUrl | URL to access a representative thumbnail image of the model's output |
| `weight_min` | int | For LoRA sliders, the minimum suggested weight to apply |
| `weight_max` | int | For LoRA sliders, the maximum suggested weight to apply |
Note that `weight_min` and `weight_max` are not currently populated
and take the default values of (-1.0, +2.0). The issue is that these
values aren't part of the structured data but appear in the text
description. Some regular expression or LLM coding may be able to
extract these values.
Also be aware that `base_model_trained_on` is free text and doesn't
correspond to our `ModelType` enum.
`CivitaiMetadata` also defines some convenience properties relating to
licensing restrictions: `credit_required`, `allow_commercial_use`,
`allow_derivatives` and `allow_different_license`.
#### `AnyModelRepoMetadata`
This is a discriminated Union of `CivitaiMetadata` and
`HuggingFaceMetadata`.
### Fetching Metadata from Online Repos
The `HuggingFaceMetadataFetch` and `CivitaiMetadataFetch` classes will
retrieve metadata from their corresponding repositories and return
`AnyModelRepoMetadata` objects. Their base class
`ModelMetadataFetchBase` is an abstract class that defines two
methods: `from_url()` and `from_id()`. The former accepts the type of
model URLs that the user will try to cut and paste into the model
import form. The latter accepts a string ID in the format recognized
by the repository of choice. Both methods return an
`AnyModelRepoMetadata`.
The base class also has a class method `from_json()` which will take
the JSON representation of a `ModelMetadata` object, validate it, and
return the corresponding `AnyModelRepoMetadata` object.
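As a sketch, `from_json()` can round-trip a metadata object through
its JSON representation; serialization via Pydantic v2's
`model_dump_json()` is an assumption here:

```
civitai = CivitaiMetadataFetch()
metadata = civitai.from_url("https://civitai.com/models/215796")

blob = metadata.model_dump_json()                # Pydantic v2 serialization (assumed)
restored = CivitaiMetadataFetch.from_json(blob)  # validate and reconstitute
assert restored.name == metadata.name
```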
When initializing one of the metadata fetching classes, you may
provide a `requests.Session` argument. This allows you to customize
the low-level HTTP fetch requests and is used, for instance, in the
testing suite to avoid hitting the internet.
The HuggingFace and Civitai fetcher subclasses add additional
repo-specific fetching methods:
#### HuggingFaceMetadataFetch
This overrides its base class `from_json()` method to return a
`HuggingFaceMetadata` object directly.
#### CivitaiMetadataFetch
This adds the following methods:
`from_civitai_modelid()` This takes the ID of a model, finds the
default version of the model, and then retrieves the metadata for
that version, returning a `CivitaiMetadata` object directly.
`from_civitai_versionid()` This takes the ID of a model version and
retrieves its metadata. Functionally equivalent to `from_id()`, the
only difference is that it returns a `CivitaiMetadata` object rather
than an `AnyModelRepoMetadata`.
### Metadata Storage
The `ModelMetadataStore` provides a simple facility to store model
metadata in the `invokeai.db` database. The data is stored as a JSON
blob, with a few common fields (`name`, `author`, `tags`) broken out
to be searchable.
When a metadata object is saved to the database, it is identified
using the model key, _and this key must correspond to an existing
model key in the model_config table_. There is a foreign key integrity
constraint between the `model_config.id` field and the
`model_metadata.id` field such that if you attempt to save metadata
under an unknown key, the attempt will result in an
`UnknownModelException`. Likewise, when a model is deleted from
`model_config`, the deletion of the corresponding metadata record will
be triggered.
Tags are stored in a normalized fashion in the tables `model_tags` and
`tags`. Triggers keep the tag table in sync with the `model_metadata`
table.
To create the storage object, initialize it with the InvokeAI
`SqliteDatabase` object. This is often done this way:
```
from invokeai.app.api.dependencies import ApiDependencies
metadata_store = ModelMetadataStore(ApiDependencies.invoker.services.db)
```
You can then access the storage with the following methods:
#### `add_metadata(key, metadata)`
Add the metadata using a previously-defined model key.
There is currently no `delete_metadata()` method. The metadata will
persist until the matching config is deleted from the `model_config`
table.
#### `get_metadata(key) -> AnyModelRepoMetadata`
Retrieve the metadata corresponding to the model key.
#### `update_metadata(key, new_metadata)`
Update an existing metadata record with new metadata.
#### `search_by_tag(tags: Set[str]) -> Set[str]`
Given a set of tags, find models that are tagged with them. If
multiple tags are provided then a matching model must be tagged with
*all* the tags in the set. This method returns a set of model keys and
is intended to be used in conjunction with the `ModelRecordService`:
```
model_config_store = ApiDependencies.invoker.services.model_records
matches = metadata_store.search_by_tag({'license:other'})
models = [model_config_store.get(x) for x in matches]
```
#### `search_by_name(name: str) -> Set[str]`
Find all model metadata records that have the given name and return a
set of keys to the corresponding model config objects.
#### `search_by_author(author: str) -> Set[str]`
Find all model metadata records that have the given author and return
a set of keys to the corresponding model config objects.
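These can be combined with the record store in the same way as
`search_by_tag()`. In this sketch the author and name strings are
hypothetical:

```
keys = metadata_store.search_by_author('SomeAuthor')    # hypothetical author
keys |= metadata_store.search_by_name('SomeModelName')  # hypothetical name; union of both result sets
models = [model_config_store.get(k) for k in keys]
```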
# The remainder of this documentation is provisional, pending implementation of the Load service
## Let's get loaded, the lowdown on ModelLoadService
The `ModelLoadService` is responsible for loading a named model into
memory so that it can be used for inference. Despite the fact that it
does a lot under the covers, it is very straightforward to use.
An application-wide model loader is created at API initialization time
and stored in
`ApiDependencies.invoker.services.model_loader`. However, you can
create alternative instances if you wish.
### Creating a ModelLoadService object
The class is defined in
`invokeai.app.services.model_loader_service`. It is initialized with
an InvokeAIAppConfig object, from which it gets configuration
information such as the user's desired GPU and precision, and with a
previously-created `ModelRecordServiceBase` object, from which it
loads the requested model's configuration information.
Here is a typical initialization pattern:
```
from invokeai.app.services.config import InvokeAIAppConfig
from invokeai.app.services.model_record_service import ModelRecordServiceBase
from invokeai.app.services.model_loader_service import ModelLoadService
config = InvokeAIAppConfig.get_config()
store = ModelRecordServiceBase.open(config)
loader = ModelLoadService(config, store)
```
Note that we are relying on the contents of the application
configuration to choose the implementation of
`ModelRecordServiceBase`.
### get_model(key, [submodel_type], [context]) -> ModelInfo:
**TO DO:** change to `get_model(key, context=None, **kwargs)`
The `get_model()` method, like its similarly-named cousin in
`ModelRecordService`, receives the unique key that identifies the
model. It loads the model into memory, gets the model ready for use,
and returns a `ModelInfo` object.
The optional second argument, `submodel_type`, is a `SubModelType` string
enum, such as "vae". It is mandatory when used with a main model, and
is used to select which part of the main model to load.
The optional third argument, `context`, can be provided by
an invocation to trigger model load event reporting. See below for
details.
The returned `ModelInfo` object shares some fields in common with
`ModelConfigBase`, but is otherwise a completely different beast:
| **Field Name** | **Type** | **Description** |
|----------------|-----------------|------------------|
| `key` | str | The model key derived from the ModelRecordService database |
| `name` | str | Name of this model |
| `base_model` | BaseModelType | Base model for this model |
| `type` | ModelType or SubModelType | Either the model type (non-main) or the submodel type (main models)|
| `location` | Path or str | Location of the model on the filesystem |
| `precision` | torch.dtype | The torch.precision to use for inference |
| `context` | ModelCache.ModelLocker | A context class used to lock the model in VRAM while in use |
The types for `ModelInfo` and `SubModelType` can be imported from
`invokeai.app.services.model_loader_service`.
To use the model, you use the `ModelInfo` as a context manager using
the following pattern:
```
model_info = loader.get_model('f13dd932c0c35c22dcb8d6cda4203764', SubModelType('vae'))
with model_info as vae:
    image = vae.decode(latents)[0]
```
The `vae` model will stay locked in the GPU during the period of time
it is in the context manager's scope.
`get_model()` may raise any of the following exceptions:
- `UnknownModelException` -- key not in database
- `ModelNotFoundException` -- key in database but model not found at path
- `InvalidModelException` -- the model is guilty of a variety of sins
**TO DO:** Resolve discrepancy between ModelInfo.location and
ModelConfig.path.
### Emitting model loading events
When the `context` argument is passed to `get_model()`, it will
retrieve the invocation event bus from the passed `InvocationContext`
object to emit events on the invocation bus. The two events are
"model_load_started" and "model_load_completed". Both carry the
following payload:
```
payload=dict(
    queue_id=queue_id,
    queue_item_id=queue_item_id,
    queue_batch_id=queue_batch_id,
    graph_execution_state_id=graph_execution_state_id,
    model_key=model_key,
    submodel=submodel,
    hash=model_info.hash,
    location=str(model_info.location),
    precision=str(model_info.precision),
)
```

# Contributing to the Frontend
# InvokeAI Web UI
- [InvokeAI Web UI](https://github.com/invoke-ai/InvokeAI/tree/main/invokeai/frontend/web/docs#invokeai-web-ui)
- [Stack](https://github.com/invoke-ai/InvokeAI/tree/main/invokeai/frontend/web/docs#stack)
- [Contributing](https://github.com/invoke-ai/InvokeAI/tree/main/invokeai/frontend/web/docs#contributing)
- [Dev Environment](https://github.com/invoke-ai/InvokeAI/tree/main/invokeai/frontend/web/docs#dev-environment)
- [Production builds](https://github.com/invoke-ai/InvokeAI/tree/main/invokeai/frontend/web/docs#production-builds)
The UI is a fairly straightforward TypeScript React app, with the Unified Canvas being more complex.
Code is located in `invokeai/frontend/web/` for review.
## Stack
State management is Redux via [Redux Toolkit](https://github.com/reduxjs/redux-toolkit). We lean heavily on RTK:
- `createAsyncThunk` for HTTP requests
- `createEntityAdapter` for fetching images and models
- `createListenerMiddleware` for workflows
The API client and associated types are generated from the OpenAPI schema. See API_CLIENT.md.
Communication with the server is a mix of HTTP and [socket.io](https://github.com/socketio/socket.io-client) (with a simple socket.io redux middleware to help).
[Chakra-UI](https://github.com/chakra-ui/chakra-ui) & [Mantine](https://github.com/mantinedev/mantine) for components and styling.
[Konva](https://github.com/konvajs/react-konva) for the canvas, but we are pushing the limits of what is feasible with it (and HTML canvas in general). We plan to rebuild it with [PixiJS](https://github.com/pixijs/pixijs) to take advantage of WebGL's improved raster handling.
[Vite](https://vitejs.dev/) for bundling.
Localisation is via [i18next](https://github.com/i18next/react-i18next), but translation happens on our [Weblate](https://hosted.weblate.org/engage/invokeai/) project. Only the English source strings should be changed on this repo.
## Contributing
Thanks for your interest in contributing to the InvokeAI Web UI!
We encourage you to ping @psychedelicious and @blessedcoolant on [Discord](https://discord.gg/ZmtBAhwWhy) if you want to contribute, just to touch base and ensure your work doesn't conflict with anything else going on. The project is very active.
### Dev Environment
**Setup**
1. Install [node](https://nodejs.org/en/download/). You can confirm node is installed with:
```bash
node --version
```
2. Install [pnpm](https://pnpm.io/) and confirm it is installed by running this:
```bash
npm install --global pnpm
pnpm --version
```
From `invokeai/frontend/web/` run `pnpm install` to get everything set up.
Start everything in dev mode:
1. Ensure your virtual environment is running
2. Start the dev server: `pnpm dev`
3. Start the InvokeAI Nodes backend: `python scripts/invokeai-web.py # run from the repo root`
4. Point your browser to the dev server address e.g. [http://localhost:5173/](http://localhost:5173/)
### VSCode Remote Dev
We've noticed an intermittent issue with the VSCode Remote Dev port forwarding. If you use this feature of VSCode, you may intermittently click the Invoke button and then get nothing until the request times out. Suggest disabling the IDE's port forwarding feature and doing it manually via SSH:
`ssh -L 9090:localhost:9090 -L 5173:localhost:5173 user@host`
### Production builds
For a number of technical and logistical reasons, we need to commit UI build artefacts to the repo.
If you submit a PR, there is a good chance we will ask you to include a separate commit with a build of the app.
To build for production, run `pnpm build`.

To get started, take a look at our new contributors checklist.

Once you're set up, you can review the documentation specific to your area of interest:
* #### [InvokeAI Architecture](../ARCHITECTURE.md)
* #### [Frontend Documentation](https://github.com/invoke-ai/InvokeAI/tree/main/invokeai/frontend/web)
* #### [Node Documentation](../INVOCATIONS.md)
* #### [Local Development](../LOCAL_DEVELOPMENT.md)

## :octicons-log-16: Important Changes Since Version 2.3
### Nodes
Behind the scenes, InvokeAI has been completely rewritten to support
"nodes," small unitary operations that can be combined into graphs to
form arbitrary workflows. For example, there is a prompt node that
processes the prompt string and feeds it to a text2latent node that
generates a latent image. The latents are then fed to a latent2image
node that translates the latent image into a PNG.
The WebGUI has a node editor that allows you to graphically design and
execute custom node graphs. The ability to save and load graphs is
still a work in progress, but coming soon.
### Command-Line Interface Retired
All "invokeai" command-line interfaces have been retired as of version
3.4.
To launch the Web GUI from the command-line, use the command
`invokeai-web` rather than the traditional `invokeai --web`.
### ControlNet
This version of InvokeAI features ControlNet, a system that allows you
to achieve exact poses for human and animal figures by providing a
model to follow. Full details are found in [ControlNet](features/CONTROLNET.md)
### New Schedulers
The list of schedulers has been completely revamped and brought up to date:
| **Short Name** | **Scheduler** | **Notes** |
|----------------|---------------------------------|-----------------------------|
| **ddim** | DDIMScheduler | |
| **ddpm** | DDPMScheduler | |
| **deis** | DEISMultistepScheduler | |
| **lms** | LMSDiscreteScheduler | |
| **pndm** | PNDMScheduler | |
| **heun** | HeunDiscreteScheduler | original noise schedule |
| **heun_k** | HeunDiscreteScheduler | using karras noise schedule |
| **euler** | EulerDiscreteScheduler | original noise schedule |
| **euler_k** | EulerDiscreteScheduler | using karras noise schedule |
| **kdpm_2** | KDPM2DiscreteScheduler | |
| **kdpm_2_a** | KDPM2AncestralDiscreteScheduler | |
| **dpmpp_2s** | DPMSolverSinglestepScheduler | |
| **dpmpp_2m** | DPMSolverMultistepScheduler | original noise schedule |
| **dpmpp_2m_k** | DPMSolverMultistepScheduler | using karras noise schedule |
| **unipc** | UniPCMultistepScheduler | CPU only |
| **lcm** | LCMScheduler | |
Please see [3.0.0 Release Notes](https://github.com/invoke-ai/InvokeAI/releases/tag/v3.0.0) for further details.

…currently being rendered by your browser into a merged copy of the image. This
lowers the resource requirements and should improve performance.
### Compositing / Seam Correction
When doing Inpainting or Outpainting, Invoke needs to merge the pixels generated
by Stable Diffusion into your existing image. This is achieved through
compositing - the area around the boundary between your image and the new
generation is automatically blended to produce a seamless output. In a fully
automatic process, a mask is generated to cover the boundary, and then the area
of the boundary is Inpainted.
Although the default options should work well most of the time, sometimes it can
help to alter the parameters that control the Compositing. A larger blur and
blur setting have been noted as producing consistently strong results. A
Strength of 0.7 is best for reducing hard seams.
- **Mode** - What part of the image will have the Compositing applied to it.
- **Mask edge** will apply Compositing to the edge of the masked area
- **Mask** will apply Compositing to the entire masked area
- **Unmasked** will apply Compositing to the entire image
- **Steps** - Number of generation steps that will occur during the Coherence Pass, similar to Denoising Steps. Higher step counts will generally have better results.
- **Strength** - How much noise is added for the Coherence Pass, similar to Denoising Strength. A strength of 0 will result in an unchanged image, while a strength of 1 will result in an image with a completely new area as defined by the Mode setting.
- **Blur** - Adjusts the pixel radius of the mask. A larger blur radius will cause the mask to extend past the visibly masked area, while too small of a blur radius will result in a mask that is smaller than the visibly masked area.
- **Blur Method** - The method of blur applied to the masked area.
### Infill & Scaling

```
width: 100%;
max-width: 100%;
height: 50px;
background-color: #35A4DB;
color: #fff;
font-size: 16px;
border: none;
```
<div align="center" markdown>
[![project logo](https://github.com/invoke-ai/InvokeAI/assets/31807370/6e3728c7-e90e-4711-905c-3b55844ff5be)](https://github.com/invoke-ai/InvokeAI)
[![discord badge]][discord link]
InvokeAI is supported across Windows, Mac and Linux machines, and runs on GPU
cards with as little as 4 GB of RAM.
## :octicons-gift-24: InvokeAI Features
### Installation
- [Automated Installer](installation/010_INSTALL_AUTOMATED.md)
- [Manual Installation](installation/020_INSTALL_MANUAL.md)
- [Docker Installation](installation/040_INSTALL_DOCKER.md)
### The InvokeAI Web Interface
- [WebUI overview](features/WEB.md)
- [WebUI hotkey reference guide](features/WEBUIHOTKEYS.md)
- [Guide to InvokeAI Runtime Settings](features/CONFIGURATION.md)
- [Database Maintenance and other Command Line Utilities](features/UTILITIES.md)
## :material-target: Troubleshooting
Please check out our **:material-frequently-asked-questions: FAQ**.

Then type the following commands:
=== "AMD System"
```bash
pip install torch torchvision --force-reinstall --extra-index-url https://download.pytorch.org/whl/rocm5.6
```
### Corrupted configuration file

…manager, please follow these steps:
=== "ROCm (AMD)"
```bash
pip install InvokeAI --use-pep517 --extra-index-url https://download.pytorch.org/whl/rocm5.6
```
=== "CPU (Intel Macs & non-GPU systems)"
To work with the source code for InvokeAI, you will need to install Git. If it
is not already installed on your system, please see the [Git Installation
Guide](https://github.com/git-guides/install-git)
You will also need to install the [frontend development toolchain](https://github.com/invoke-ai/InvokeAI/blob/main/invokeai/frontend/web/README.md).
If you have a "normal" installation, you should create a totally separate virtual environment for the git-based installation, else the two may interfere.
=== "ROCm (AMD)"
```bash
pip install -e . --use-pep517 --extra-index-url https://download.pytorch.org/whl/rocm5.6
```
=== "CPU (Intel Macs & non-GPU systems)"
Be sure to pass `-e` (for an editable install) and don't forget the
dot ("."). It is part of the command.
5. Install the [frontend toolchain](https://github.com/invoke-ai/InvokeAI/blob/main/invokeai/frontend/web/README.md) and do a production build of the UI as described.
6. You can now run `invokeai` and its related commands. The code will be
read from the repository, so that you can edit the .py source files

When installing torch and torchvision manually with `pip`, remember to provide
the argument `--extra-index-url
https://download.pytorch.org/whl/rocm5.6` as described in the [Manual
Installation Guide](020_INSTALL_MANUAL.md).
This will be done automatically for you if you use the installer

…either an Nvidia-based card (with CUDA support) or an AMD card (using the ROCm
driver).
## **[Automated Installer (Recommended)](010_INSTALL_AUTOMATED.md)**
✅ This is the recommended installation method for first-time users.
This is a script that will install all of InvokeAI's essential
third party libraries and InvokeAI itself.
🖥️ **Download the latest installer .zip file here** : https://github.com/invoke-ai/InvokeAI/releases/latest
- *Look for the file labelled "InvokeAI-installer-v3.X.X.zip" at the bottom of the page*
- If you experience issues, read through the full [installation instructions](010_INSTALL_AUTOMATED.md) to make sure you have met all of the installation requirements. If you need more help, join the [Discord](discord.gg/invoke-ai) or create an issue on [GitHub](https://github.com/invoke-ai/InvokeAI).
## **[Manual Installation](020_INSTALL_MANUAL.md)**
This method is recommended for experienced users and developers.

If you're not familiar with Diffusion, take a look at our Diffusion Overview.
## Features
### Workflow Library
The Workflow Library enables you to save workflows to the Invoke database, allowing you to easily create, modify and share workflows as needed.
A curated set of workflows is provided by default - these are designed to help explain important nodes' usage in the Workflow Editor.
![workflow_library](../assets/nodes/workflow_library.png)
### Linear View
The Workflow Editor allows you to create a UI for your workflow, to make it easier to iterate on your generations.
To add an input to the Linear UI, right click on the **input label** and select "Add to Linear View".
The Linear UI View will also be part of the saved workflow, allowing you to share workflows and enable others to use them, regardless of complexity.
Any node or input field can be renamed in the workflow editor.
Nodes have a "Use Cache" option in their footer. This allows for performance improvements by using the previously cached values during the workflow processing.
## Important Nodes & Concepts
There are several node grouping concepts that can be examined with a narrow focus. These (and other) groupings can be pieced together to make up functional graph setups, and are important to understanding how groups of nodes work together as part of a whole. Note that the screenshots below aren't examples of complete functioning node graphs (see Examples).
The ImageToLatents node takes in a pixel image and a VAE and outputs a latents.
It is common to want to use both the same seed (for continuity) and random seeds (for variety). To define a seed, simply enter it into the 'Seed' field on a noise node. Conversely, the RandomInt node generates a random integer between 'Low' and 'High', and can be used as input to the 'Seed' edge point on a noise node to randomize your seed.
![groupsrandseed](../assets/nodes/groupsnoise.png)
### ControlNet

To use a community workflow, download the `.json` node graph file and load it into InvokeAI.
- Community Nodes
+ [Adapters-Linked](#adapters-linked-nodes)
+ [Autostereogram](#autostereogram-nodes)
+ [Average Images](#average-images)
+ [Clean Image Artifacts After Cut](#clean-image-artifacts-after-cut)
+ [Close Color Mask](#close-color-mask)
+ [GPT2RandomPromptMaker](#gpt2randompromptmaker)
+ [Grid to Gif](#grid-to-gif)
+ [Halftone](#halftone)
+ [Hand Refiner with MeshGraphormer](#hand-refiner-with-meshgraphormer)
+ [Image and Mask Composition Pack](#image-and-mask-composition-pack)
+ [Image Dominant Color](#image-dominant-color)
+ [Image to Character Art Image Nodes](#image-to-character-art-image-nodes)
Note: These are inherited from the core nodes, so any update to the core nodes should be reflected here.
**Node Link:** https://github.com/skunkworxdark/adapters-linked-nodes
--------------------------------
### Autostereogram Nodes
**Description:** Generate autostereogram images from a depth map. This is not a very practically useful node but more a 90s nostalgic indulgence as I used to love these images as a kid.
**Node Link:** https://github.com/skunkworxdark/autostereogram_nodes
**Example Usage:**
</br>
<img src="https://github.com/skunkworxdark/autostereogram_nodes/blob/main/images/spider.png" width="200" /> -> <img src="https://github.com/skunkworxdark/autostereogram_nodes/blob/main/images/spider-depth.png" width="200" /> -> <img src="https://github.com/skunkworxdark/autostereogram_nodes/raw/main/images/spider-dots.png" width="200" /> <img src="https://github.com/skunkworxdark/autostereogram_nodes/raw/main/images/spider-pattern.png" width="200" />
--------------------------------
### Average Images
CMYK Halftone Output:
<img src="https://github.com/invoke-ai/InvokeAI/assets/34005131/c59c578f-db8e-4d66-8c66-2851752d75ea" width="300" />
--------------------------------
### Hand Refiner with MeshGraphormer
**Description**: Hand Refiner takes in your image and automatically generates a fixed depth map for the hands along with a mask of the hands region that will conveniently allow you to use them along with ControlNet to fix the wonky hands generated by Stable Diffusion
**Node Link:** https://github.com/blessedcoolant/invoke_meshgraphormer
**View**
<img src="https://raw.githubusercontent.com/blessedcoolant/invoke_meshgraphormer/main/assets/preview.jpg" />
--------------------------------
### Image and Mask Composition Pack
**Description:** This is a pack of nodes for composing masks and images, including a simple text mask creator and both image and latent offset nodes. The offsets wrap around, so these can be used in conjunction with the Seamless node to progressively generate centered on different parts of the seamless tiling.

| Node | Function |
|------|----------|
| Integer Math | Perform basic math operations on two integers |
| Convert Image Mode | Converts an image to a different mode. |
| Crop Image | Crops an image to a specified box. The box can be outside of the image. |
| Ideal Size | Calculates an ideal image size for latents for a first pass of a multi-pass upscaling to avoid duplication and other artifacts |
| Image Hue Adjustment | Adjusts the Hue of an image. |
| Inverse Lerp Image | Inverse linear interpolation of all pixels of an image |
| Image Primitive | An image primitive value |

View File

@@ -1,6 +1,6 @@
# Example Workflows
We've curated some example workflows for you to get started with Workflows in InvokeAI
We've curated some example workflows for you to get started with Workflows in InvokeAI! These can also be found in the Workflow Library, located in the Workflow Editor of Invoke.
To use them, right click on your desired workflow, follow the link to GitHub and click the "⬇" button to download the raw file. You can then use the "Load Workflow" functionality in InvokeAI to load the workflow and start generating images!

View File

@@ -13,46 +13,69 @@ We thank them for all of their time and hard work.
- [Lincoln D. Stein](mailto:lincoln.stein@gmail.com)
## **Current core team**
## **Current Core Team**
* @lstein (Lincoln Stein) - Co-maintainer
* @blessedcoolant - Co-maintainer
* @hipsterusername (Kent Keirsey) - Co-maintainer, CEO, Positive Vibes
* @psychedelicious (Spencer Mabrito) - Web Team Leader
* @Kyle0654 (Kyle Schouviller) - Node Architect and General Backend Wizard
* @damian0815 - Attention Systems and Compel Maintainer
* @ebr (Eugene Brodsky) - Cloud/DevOps/Software engineer; your friendly neighbourhood cluster-autoscaler
* @genomancer (Gregg Helt) - Controlnet support
* @StAlKeR7779 (Sergey Borisov) - Torch stack, ONNX, model management, optimization
* @chainchompa (Jennifer Player) - Web Development & Chain-Chomping
* @josh is toast (Josh Corbett) - Web Development
* @cheerio (Mary Rogers) - Lead Engineer & Web App Development
* @ebr (Eugene Brodsky) - Cloud/DevOps/Software engineer; your friendly neighbourhood cluster-autoscaler
* @sunija - Standalone version
* @genomancer (Gregg Helt) - Controlnet support
* @brandon (Brandon Rising) - Platform, Infrastructure, Backend Systems
* @ryanjdick (Ryan Dick) - Machine Learning & Training
* @millu (Millun Atluri) - Community Manager, Documentation, Node-wrangler
* @chainchompa (Jennifer Player) - Web Development & Chain-Chomping
* @JPPhoto - Core image generation nodes
* @dunkeroni - Image generation backend
* @SkunkWorxDark - Image generation backend
* @keturn (Kevin Turner) - Diffusers
* @millu (Millun Atluri) - Community Wizard, Documentation, Node-wrangler
* @glimmerleaf (Devon Hopkins) - Community Wizard
* @gogurt enjoyer - Discord moderator and end user support
* @whosawhatsis - Discord moderator and end user support
* @dwinrger - Discord moderator and end user support
* @526christian - Discord moderator and end user support
* @harvester62 - Discord moderator and end user support
## **Honored Team Alumni**
* @StAlKeR7779 (Sergey Borisov) - Torch stack, ONNX, model management, optimization
* @damian0815 - Attention Systems and Compel Maintainer
* @netsvetaev (Artur) - Localization support
* @Kyle0654 (Kyle Schouviller) - Node Architect and General Backend Wizard
* @tildebyte - Installation and configuration
* @mauwii (Matthias Wilde) - Installation, release, continuous integration
## **Full List of Contributors by Commit Name**
- 이승석
- AbdBarho
- ablattmann
- AdamOStark
- Adam Rice
- Airton Silva
- Aldo Hoeben
- Alexander Eichhorn
- Alexandre D. Roberge
- Alexandre Macabies
- Alfie John
- Andreas Rozek
- Andre LaBranche
- Andy Bearman
- Andy Luhrs
- Andy Pilate
- Anonymous
- Anthony Monthe
- Any-Winter-4079
- apolinario
- Ar7ific1al
- ArDiouscuros
- Armando C. Santisbon
- Arnold Cordewiner
- Arthur Holstvoogd
- artmen1516
- Artur
@@ -64,13 +87,16 @@ We thank them for all of their time and hard work.
- blhook
- BlueAmulet
- Bouncyknighter
- Brandon
- Brandon Rising
- Brent Ozar
- Brian Racer
- bsilvereagle
- c67e708d
- camenduru
- CapableWeb
- Carson Katri
- chainchompa
- Chloe
- Chris Dawson
- Chris Hayes
@@ -86,30 +112,45 @@ We thank them for all of their time and hard work.
- cpacker
- Cragin Godley
- creachec
- CrypticWit
- d8ahazard
- damian
- damian0815
- Damian at mba
- Damian Stewart
- Daniel Manzke
- Danny Beer
- Dan Sully
- Darren Ringer
- David Burnett
- David Ford
- David Regla
- David Sisco
- David Wager
- Daya Adianto
- db3000
- DekitaRPG
- Denis Olshin
- Dennis
- dependabot[bot]
- Dmitry Parnas
- Dobrynia100
- Dominic Letz
- DrGunnarMallon
- Drun555
- dunkeroni
- Edward Johan
- elliotsayes
- Elrik
- ElrikUnderlake
- Eric Khun
- Eric Wolf
- Eugene
- Eugene Brodsky
- ExperimentalCyborg
- Fabian Bahl
- Fabio 'MrWHO' Torchetti
- Fattire
- fattire
- Felipe Nogueira
- Félix Sanz
@@ -118,8 +159,12 @@ We thank them for all of their time and hard work.
- gabrielrotbart
- gallegonovato
- Gérald LONLAS
- Gille
- GitHub Actions Bot
- glibesyck
- gogurtenjoyer
- Gohsuke Shimada
- greatwolf
- greentext2
- Gregg Helt
- H4rk
@@ -131,6 +176,7 @@ We thank them for all of their time and hard work.
- Hosted Weblate
- Iman Karim
- ismail ihsan bülbül
- ItzAttila
- Ivan Efimov
- jakehl
- Jakub Kolčář
@@ -141,6 +187,7 @@ We thank them for all of their time and hard work.
- Jason Toffaletti
- Jaulustus
- Jeff Mahoney
- Jennifer Player
- jeremy
- Jeremy Clark
- JigenD
@@ -148,19 +195,26 @@ We thank them for all of their time and hard work.
- Johan Roxendal
- Johnathon Selstad
- Jonathan
- Jordan Hewitt
- Joseph Dries III
- Josh Corbett
- JPPhoto
- jspraul
- junzi
- Justin Wong
- Juuso V
- Kaspar Emanuel
- Katsuyuki-Karasawa
- Keerigan45
- Kent Keirsey
- Kevin Brack
- Kevin Coakley
- Kevin Gibbons
- Kevin Schaul
- Kevin Turner
- Kieran Klaassen
- krummrey
- Kyle
- Kyle Lacy
- Kyle Schouviller
- Lawrence Norton
@@ -171,10 +225,15 @@ We thank them for all of their time and hard work.
- Lynne Whitehorn
- majick
- Marco Labarile
- Marta Nahorniuk
- Martin Kristiansen
- Mary Hipp
- maryhipp
- Mary Hipp Rogers
- mastercaster
- mastercaster9000
- Matthias Wild
- mauwii
- michaelk71
- mickr777
- Mihai
@@ -182,11 +241,15 @@ We thank them for all of their time and hard work.
- Mikhail Tishin
- Millun Atluri
- Minjune Song
- Mitchell Allain
- mitien
- mofuzz
- Muhammad Usama
- Name
- _nderscore
- Neil Wang
- nekowaiz
- nemuruibai
- Netzer R
- Nicholas Koh
- Nicholas Körfer
@@ -197,9 +260,11 @@ We thank them for all of their time and hard work.
- ofirkris
- Olivier Louvignes
- owenvincent
- pand4z31
- Patrick Esser
- Patrick Tien
- Patrick von Platen
- Paul Curry
- Paul Sajna
- pejotr
- Peter Baylies
@@ -207,6 +272,7 @@ We thank them for all of their time and hard work.
- plucked
- prixt
- psychedelicious
- psychedelicious@windows
- Rainer Bernhardt
- Riccardo Giovanetti
- Rich Jones
@@ -215,16 +281,22 @@ We thank them for all of their time and hard work.
- Robert Bolender
- Robin Rombach
- Rohan Barar
- Rohinish
- rpagliuca
- rromb
- Rupesh Sreeraman
- Ryan
- Ryan Cao
- Ryan Dick
- Saifeddine
- Saifeddine ALOUI
- Sam
- SammCheese
- Sam McLeod
- Sammy
- sammyf
- Samuel Husso
- Saurav Maheshkar
- Scott Lahteine
- Sean McLellan
- Sebastian Aigner
@@ -232,16 +304,21 @@ We thank them for all of their time and hard work.
- Sergey Krashevich
- Shapor Naghibzadeh
- Shawn Zhong
- Simona Liliac
- Simon Vans-Colina
- skunkworxdark
- slashtechno
- SoheilRezaei
- Song, Pengcheng
- spezialspezial
- ssantos
- StAlKeR7779
- Stefan Tobler
- Stephan Koglin-Fischer
- SteveCaruso
- Steve Martinelli
- Steven Frank
- Surisen
- System X - Files
- Taylor Kems
- techicode
@@ -260,26 +337,34 @@ We thank them for all of their time and hard work.
- tyler
- unknown
- user1
- vedant-3010
- Vedant Madane
- veprogames
- wa.code
- wfng92
- whjms
- whosawhatsis
- Will
- William Becher
- William Chong
- Wilson E. Alvarez
- woweenie
- Wubbbi
- xra
- Yeung Yiu Hung
- ymgenesis
- Yorzaren
- Yosuke Shinya
- yun saki
- ZachNagengast
- Zadagu
- zeptofine
- Zerdoumi
- Васянатор
- 冯不游
- 唐澤 克幸
## **Original CompVis Authors**
## **Original CompVis (Stable Diffusion) Authors**
- [Robin Rombach](https://github.com/rromb)
- [Patrick von Platen](https://github.com/patrickvonplaten)

View File

@@ -0,0 +1,5 @@
:root {
--md-primary-fg-color: #35A4DB;
--md-primary-fg-color--light: #35A4DB;
--md-primary-fg-color--dark: #35A4DB;
}

File diff suppressed because it is too large

View File

@@ -241,12 +241,12 @@ class InvokeAiInstance:
pip[
"install",
"--require-virtualenv",
"numpy~=1.24.0", # choose versions that won't be uninstalled during phase 2
"numpy==1.26.3", # choose versions that won't be uninstalled during phase 2
"urllib3~=1.26.0",
"requests~=2.28.0",
"torch==2.1.2",
"torchmetrics==0.11.4",
"torchvision>=0.16.2",
"torchvision==0.16.2",
"--force-reinstall",
"--find-links" if find_links is not None else None,
find_links,
@@ -455,7 +455,7 @@ def get_torch_source() -> (Union[str, None], str):
optional_modules = "[onnx]"
if OS == "Linux":
if device == "rocm":
url = "https://download.pytorch.org/whl/rocm5.4.2"
url = "https://download.pytorch.org/whl/rocm5.6"
elif device == "cpu":
url = "https://download.pytorch.org/whl/cpu"

View File

@@ -2,7 +2,9 @@
from logging import Logger
from invokeai.app.services.item_storage.item_storage_memory import ItemStorageMemory
from invokeai.app.services.shared.sqlite.sqlite_util import init_db
from invokeai.backend.model_manager.metadata import ModelMetadataStore
from invokeai.backend.util.logging import InvokeAILogger
from invokeai.version.invokeai_version import __version__
@@ -21,7 +23,6 @@ from ..services.invocation_queue.invocation_queue_memory import MemoryInvocation
from ..services.invocation_services import InvocationServices
from ..services.invocation_stats.invocation_stats_default import InvocationStatsService
from ..services.invoker import Invoker
from ..services.item_storage.item_storage_sqlite import SqliteItemStorage
from ..services.latents_storage.latents_storage_disk import DiskLatentsStorage
from ..services.latents_storage.latents_storage_forward_cache import ForwardCacheLatentsStorage
from ..services.model_install import ModelInstallService
@@ -61,7 +62,7 @@ class ApiDependencies:
invoker: Invoker
@staticmethod
def initialize(config: InvokeAIAppConfig, event_handler_id: int, logger: Logger = logger):
def initialize(config: InvokeAIAppConfig, event_handler_id: int, logger: Logger = logger) -> None:
logger.info(f"InvokeAI version {__version__}")
logger.info(f"Root directory = {str(config.root_path)}")
logger.debug(f"Internet connectivity is {config.internet_available}")
@@ -79,7 +80,7 @@ class ApiDependencies:
board_records = SqliteBoardRecordStorage(db=db)
boards = BoardService()
events = FastAPIEventService(event_handler_id)
graph_execution_manager = SqliteItemStorage[GraphExecutionState](db=db, table_name="graph_executions")
graph_execution_manager = ItemStorageMemory[GraphExecutionState]()
image_records = SqliteImageRecordStorage(db=db)
images = ImageService()
invocation_cache = MemoryInvocationCache(max_cache_size=config.node_cache_size)
@@ -87,8 +88,13 @@ class ApiDependencies:
model_manager = ModelManagerService(config, logger)
model_record_service = ModelRecordServiceSQL(db=db)
download_queue_service = DownloadQueueService(event_bus=events)
metadata_store = ModelMetadataStore(db=db)
model_install_service = ModelInstallService(
app_config=config, record_store=model_record_service, event_bus=events
app_config=config,
record_store=model_record_service,
download_queue=download_queue_service,
metadata_store=metadata_store,
event_bus=events,
)
names = SimpleNameService()
performance_statistics = InvocationStatsService()
@@ -131,6 +137,6 @@ class ApiDependencies:
db.clean()
@staticmethod
def shutdown():
def shutdown() -> None:
if ApiDependencies.invoker:
ApiDependencies.invoker.stop()

View File

@@ -0,0 +1,28 @@
from typing import Any
from starlette.responses import Response
from starlette.staticfiles import StaticFiles
class NoCacheStaticFiles(StaticFiles):
"""
This class is used to override the default caching behavior of starlette for static files,
ensuring we *never* cache static files. It modifies the file response headers to strictly
never cache the files.
Static files include the JavaScript bundles, fonts, locales, and some images. Generated
images are not included, as they are served by a router.
"""
def __init__(self, *args: Any, **kwargs: Any):
self.cachecontrol = "max-age=0, no-cache, no-store, must-revalidate"
self.pragma = "no-cache"
self.expires = "0"
super().__init__(*args, **kwargs)
def file_response(self, *args: Any, **kwargs: Any) -> Response:
resp = super().file_response(*args, **kwargs)
resp.headers.setdefault("Cache-Control", self.cachecontrol)
resp.headers.setdefault("Pragma", self.pragma)
resp.headers.setdefault("Expires", self.expires)
return resp
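As a usage illustration, here is a minimal sketch of mounting this class in a FastAPI app. The paths are hypothetical; the actual mounts appear in the api_app changes further below.

```python
from pathlib import Path

from fastapi import FastAPI

from invokeai.app.api.no_cache_staticfiles import NoCacheStaticFiles

app = FastAPI()

# Hypothetical build location; the real app resolves this from the web package.
web_root_path = Path("/opt/invokeai/web")

# Serve the built UI with caching disabled so browsers always pick up new bundles.
app.mount("/", NoCacheStaticFiles(directory=web_root_path / "dist", html=True), name="ui")
```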

View File

@@ -1,10 +1,10 @@
# Copyright (c) 2023 Lincoln D. Stein
"""FastAPI route for model configuration records."""
import pathlib
from hashlib import sha1
from random import randbytes
from typing import Any, Dict, List, Optional
from typing import Any, Dict, List, Optional, Set
from fastapi import Body, Path, Query, Response
from fastapi.routing import APIRouter
@@ -16,13 +16,19 @@ from invokeai.app.services.model_install import ModelInstallJob, ModelSource
from invokeai.app.services.model_records import (
DuplicateModelException,
InvalidModelException,
ModelRecordOrderBy,
ModelSummary,
UnknownModelException,
)
from invokeai.app.services.shared.pagination import PaginatedResults
from invokeai.backend.model_manager.config import (
AnyModelConfig,
BaseModelType,
ModelFormat,
ModelType,
)
from invokeai.backend.model_manager.merge import MergeInterpolationMethod, ModelMerger
from invokeai.backend.model_manager.metadata import AnyModelRepoMetadata
from ..dependencies import ApiDependencies
@@ -32,11 +38,20 @@ model_records_router = APIRouter(prefix="/v1/model/record", tags=["model_manager
class ModelsList(BaseModel):
"""Return list of configs."""
models: list[AnyModelConfig]
models: List[AnyModelConfig]
model_config = ConfigDict(use_enum_values=True)
class ModelTagSet(BaseModel):
"""Return tags for a set of models."""
key: str
name: str
author: str
tags: Set[str]
@model_records_router.get(
"/",
operation_id="list_model_records",
@@ -45,7 +60,7 @@ async def list_model_records(
base_models: Optional[List[BaseModelType]] = Query(default=None, description="Base models to include"),
model_type: Optional[ModelType] = Query(default=None, description="The type of model to get"),
model_name: Optional[str] = Query(default=None, description="Exact match on the name of the model"),
model_format: Optional[str] = Query(
model_format: Optional[ModelFormat] = Query(
default=None, description="Exact match on the format of the model (e.g. 'diffusers')"
),
) -> ModelsList:
@@ -86,6 +101,59 @@ async def get_model_record(
raise HTTPException(status_code=404, detail=str(e))
@model_records_router.get("/meta", operation_id="list_model_summary")
async def list_model_summary(
page: int = Query(default=0, description="The page to get"),
per_page: int = Query(default=10, description="The number of models per page"),
order_by: ModelRecordOrderBy = Query(default=ModelRecordOrderBy.Default, description="The attribute to order by"),
) -> PaginatedResults[ModelSummary]:
"""Gets a page of model summary data."""
return ApiDependencies.invoker.services.model_records.list_models(page=page, per_page=per_page, order_by=order_by)
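For illustration, a client could page through these summaries as sketched below; the host, port, and `/api` prefix are assumptions about how the router is mounted.

```python
import requests

BASE = "http://127.0.0.1:9090/api/v1/model/record"  # assumed local server and mount prefix

# "name" assumes the ModelRecordOrderBy enum serializes to lowercase strings.
resp = requests.get(f"{BASE}/meta", params={"page": 0, "per_page": 10, "order_by": "name"})
resp.raise_for_status()
page = resp.json()  # PaginatedResults: expected to carry the items plus paging info
for summary in page["items"]:
    print(summary)
```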
@model_records_router.get(
"/meta/i/{key}",
operation_id="get_model_metadata",
responses={
200: {"description": "Success"},
400: {"description": "Bad request"},
404: {"description": "No metadata available"},
},
)
async def get_model_metadata(
key: str = Path(description="Key of the model repo metadata to fetch."),
) -> Optional[AnyModelRepoMetadata]:
"""Get a model metadata object."""
record_store = ApiDependencies.invoker.services.model_records
result = record_store.get_metadata(key)
if not result:
raise HTTPException(status_code=404, detail="No metadata for a model with this key")
return result
@model_records_router.get(
"/tags",
operation_id="list_tags",
)
async def list_tags() -> Set[str]:
"""Get a unique set of all the model tags."""
record_store = ApiDependencies.invoker.services.model_records
return record_store.list_tags()
@model_records_router.get(
"/tags/search",
operation_id="search_by_metadata_tags",
)
async def search_by_metadata_tags(
tags: Set[str] = Query(default=None, description="Tags to search for"),
) -> ModelsList:
"""Get a list of models."""
record_store = ApiDependencies.invoker.services.model_records
results = record_store.search_by_metadata_tag(tags)
return ModelsList(models=results)
@model_records_router.patch(
"/i/{key}",
operation_id="update_model_record",
@@ -159,9 +227,7 @@ async def del_model_record(
async def add_model_record(
config: Annotated[AnyModelConfig, Body(description="Model config", discriminator="type")],
) -> AnyModelConfig:
"""
Add a model using the configuration information appropriate for its type.
"""
"""Add a model using the configuration information appropriate for its type."""
logger = ApiDependencies.invoker.services.logger
record_store = ApiDependencies.invoker.services.model_records
if config.key == "<NOKEY>":
@@ -243,7 +309,7 @@ async def import_model(
Installation occurs in the background. Either use list_model_install_jobs()
to poll for completion, or listen on the event bus for the following events:
"model_install_started"
"model_install_running"
"model_install_completed"
"model_install_error"
@@ -279,16 +345,46 @@ async def import_model(
operation_id="list_model_install_jobs",
)
async def list_model_install_jobs() -> List[ModelInstallJob]:
"""
Return list of model install jobs.
If the optional 'source' argument is provided, then the list will be filtered
for partial string matches against the install source.
"""
"""Return list of model install jobs."""
jobs: List[ModelInstallJob] = ApiDependencies.invoker.services.model_install.list_jobs()
return jobs
@model_records_router.get(
"/import/{id}",
operation_id="get_model_install_job",
responses={
200: {"description": "Success"},
404: {"description": "No such job"},
},
)
async def get_model_install_job(id: int = Path(description="Model install id")) -> ModelInstallJob:
"""Return model install job corresponding to the given source."""
try:
return ApiDependencies.invoker.services.model_install.get_job_by_id(id)
except ValueError as e:
raise HTTPException(status_code=404, detail=str(e))
@model_records_router.delete(
"/import/{id}",
operation_id="cancel_model_install_job",
responses={
201: {"description": "The job was cancelled successfully"},
415: {"description": "No such job"},
},
status_code=201,
)
async def cancel_model_install_job(id: int = Path(description="Model install job ID")) -> None:
"""Cancel the model install job(s) corresponding to the given job ID."""
installer = ApiDependencies.invoker.services.model_install
try:
job = installer.get_job_by_id(id)
except ValueError as e:
raise HTTPException(status_code=415, detail=str(e))
installer.cancel_job(job)
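A hedged sketch of driving the new job endpoints from a client; the route prefix and the status strings are assumptions.

```python
import requests

BASE = "http://127.0.0.1:9090/api/v1/model/record"  # assumed local server and mount prefix

# List install jobs and cancel anything still in flight, then prune finished jobs.
for job in requests.get(f"{BASE}/import").json():
    if job["status"] in ("waiting", "downloading", "running"):  # assumed status values
        requests.delete(f"{BASE}/import/{job['id']}")

requests.patch(f"{BASE}/import")  # prune_model_install_jobs
```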
@model_records_router.patch(
"/import",
operation_id="prune_model_install_jobs",
@@ -298,9 +394,7 @@ async def list_model_install_jobs() -> List[ModelInstallJob]:
},
)
async def prune_model_install_jobs() -> Response:
"""
Prune all completed and errored jobs from the install job list.
"""
"""Prune all completed and errored jobs from the install job list."""
ApiDependencies.invoker.services.model_install.prune_jobs()
return Response(status_code=204)
@@ -315,8 +409,64 @@ async def prune_model_install_jobs() -> Response:
)
async def sync_models_to_config() -> Response:
"""
Traverse the models and autoimport directories. Model files without a corresponding
Traverse the models and autoimport directories.
Model files without a corresponding
record in the database are added. Orphan records without a models file are deleted.
"""
ApiDependencies.invoker.services.model_install.sync_to_config()
return Response(status_code=204)
@model_records_router.put(
"/merge",
operation_id="merge",
)
async def merge(
keys: List[str] = Body(description="Keys for two to three models to merge", min_length=2, max_length=3),
merged_model_name: Optional[str] = Body(description="Name of destination model", default=None),
alpha: float = Body(description="Alpha weighting strength to apply to 2d and 3d models", default=0.5),
force: bool = Body(
description="Force merging of models created with different versions of diffusers",
default=False,
),
interp: Optional[MergeInterpolationMethod] = Body(description="Interpolation method", default=None),
merge_dest_directory: Optional[str] = Body(
description="Save the merged model to the designated directory (with 'merged_model_name' appended)",
default=None,
),
) -> AnyModelConfig:
"""
Merge diffusers models.
keys: List of 2-3 model keys to merge together. All models must use the same base type.
merged_model_name: Name for the merged model [Concat model names]
alpha: Alpha value (0.0-1.0). Higher values give more weight to the second model [0.5]
force: If true, force the merge even if the models were generated by different versions of the diffusers library [False]
interp: Interpolation method. One of "weighted_sum", "sigmoid", "inv_sigmoid" or "add_difference" [weighted_sum]
merge_dest_directory: Specify a directory to store the merged model in [models directory]
"""
print(f"here i am, keys={keys}")
logger = ApiDependencies.invoker.services.logger
try:
logger.info(f"Merging models: {keys} into {merge_dest_directory or '<MODELS>'}/{merged_model_name}")
dest = pathlib.Path(merge_dest_directory) if merge_dest_directory else None
installer = ApiDependencies.invoker.services.model_install
merger = ModelMerger(installer)
model_names = [installer.record_store.get_model(x).name for x in keys]
response = merger.merge_diffusion_models_and_save(
model_keys=keys,
merged_model_name=merged_model_name or "+".join(model_names),
alpha=alpha,
interp=interp,
force=force,
merge_dest_directory=dest,
)
except UnknownModelException:
raise HTTPException(
status_code=404,
detail=f"One or more of the models '{keys}' not found",
)
except ValueError as e:
raise HTTPException(status_code=400, detail=str(e))
return response
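To show the request shape, here is a hedged client-side sketch of calling the merge route. The model keys are hypothetical; FastAPI embeds the individual `Body` parameters as top-level JSON keys.

```python
import requests

BASE = "http://127.0.0.1:9090/api/v1/model/record"  # assumed local server and mount prefix

payload = {
    "keys": ["aaaa1111", "bbbb2222"],  # hypothetical keys of two models with the same base
    "merged_model_name": "my-merged-model",
    "alpha": 0.5,                      # weight given to the second model
    "interp": "weighted_sum",
    "force": False,
}
resp = requests.put(f"{BASE}/merge", json=payload)
resp.raise_for_status()
print(resp.json())  # the AnyModelConfig record of the merged model
```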

View File

@@ -3,6 +3,7 @@
# values from the command line or config file.
import sys
from invokeai.app.api.no_cache_staticfiles import NoCacheStaticFiles
from invokeai.version.invokeai_version import __version__
from .services.config import InvokeAIAppConfig
@@ -27,8 +28,7 @@ if True: # hack to make flake8 happy with imports coming after setting up the c
from fastapi.middleware.gzip import GZipMiddleware
from fastapi.openapi.docs import get_redoc_html, get_swagger_ui_html
from fastapi.openapi.utils import get_openapi
from fastapi.responses import FileResponse, HTMLResponse
from fastapi.staticfiles import StaticFiles
from fastapi.responses import HTMLResponse
from fastapi_events.handlers.local import local_handler
from fastapi_events.middleware import EventHandlerASGIMiddleware
from pydantic.json_schema import models_json_schema
@@ -76,7 +76,7 @@ mimetypes.add_type("text/css", ".css")
# Create the app
# TODO: create this all in a method so configuration/etc. can be passed in?
app = FastAPI(title="Invoke AI", docs_url=None, redoc_url=None, separate_input_output_schemas=False)
app = FastAPI(title="Invoke - Community Edition", docs_url=None, redoc_url=None, separate_input_output_schemas=False)
# Add event handler
event_handler_id: int = id(app)
@@ -205,8 +205,8 @@ app.openapi = custom_openapi # type: ignore [method-assign] # this is a valid a
def overridden_swagger() -> HTMLResponse:
return get_swagger_ui_html(
openapi_url=app.openapi_url, # type: ignore [arg-type] # this is always a string
title=app.title,
swagger_favicon_url="/static/docs/favicon.ico",
title=f"{app.title} - Swagger UI",
swagger_favicon_url="static/docs/invoke-favicon-docs.svg",
)
@@ -214,26 +214,20 @@ def overridden_swagger() -> HTMLResponse:
def overridden_redoc() -> HTMLResponse:
return get_redoc_html(
openapi_url=app.openapi_url, # type: ignore [arg-type] # this is always a string
title=app.title,
redoc_favicon_url="/static/docs/favicon.ico",
title=f"{app.title} - Redoc",
redoc_favicon_url="static/docs/invoke-favicon-docs.svg",
)
web_root_path = Path(list(web_dir.__path__)[0])
# Only serve the UI if it has a build
if (web_root_path / "dist").exists():
# Cannot add headers to StaticFiles, so we must serve index.html with a custom route
# Add cache-control: no-store header to prevent caching of index.html, which leads to broken UIs at release
@app.get("/", include_in_schema=False, name="ui_root")
def get_index() -> FileResponse:
return FileResponse(Path(web_root_path, "dist/index.html"), headers={"Cache-Control": "no-store"})
# # Must mount *after* the other routes else it borks em
app.mount("/assets", StaticFiles(directory=Path(web_root_path, "dist/assets/")), name="assets")
app.mount("/locales", StaticFiles(directory=Path(web_root_path, "dist/locales/")), name="locales")
app.mount("/static", StaticFiles(directory=Path(web_root_path, "static/")), name="static") # docs favicon is in here
try:
app.mount("/", NoCacheStaticFiles(directory=Path(web_root_path, "dist"), html=True), name="ui")
except RuntimeError:
logger.warn(f"No UI found at {web_root_path}/dist, skipping UI mount")
app.mount(
"/static", NoCacheStaticFiles(directory=Path(web_root_path, "static/")), name="static"
) # docs favicon is in here
def invoke_api() -> None:

View File

@@ -30,6 +30,7 @@ from invokeai.app.invocations.primitives import ImageField, ImageOutput
from invokeai.app.invocations.util import validate_begin_end_step, validate_weights
from invokeai.app.services.image_records.image_records_common import ImageCategory, ResourceOrigin
from invokeai.app.shared.fields import FieldDescriptions
from invokeai.backend.image_util.depth_anything import DepthAnythingDetector
from ...backend.model_management import BaseModelType
from .baseinvocation import (
@@ -602,3 +603,33 @@ class ColorMapImageProcessorInvocation(ImageProcessorInvocation):
color_map = cv2.resize(color_map, (width, height), interpolation=cv2.INTER_NEAREST)
color_map = Image.fromarray(color_map)
return color_map
DEPTH_ANYTHING_MODEL_SIZES = Literal["large", "base", "small"]
@invocation(
"depth_anything_image_processor",
title="Depth Anything Processor",
tags=["controlnet", "depth", "depth anything"],
category="controlnet",
version="1.0.0",
)
class DepthAnythingImageProcessorInvocation(ImageProcessorInvocation):
"""Generates a depth map based on the Depth Anything algorithm"""
model_size: DEPTH_ANYTHING_MODEL_SIZES = InputField(
default="small", description="The size of the depth model to use"
)
resolution: int = InputField(default=512, ge=64, multiple_of=64, description=FieldDescriptions.image_res)
offload: bool = InputField(default=False)
def run_processor(self, image):
depth_anything_detector = DepthAnythingDetector()
depth_anything_detector.load_model(model_size=self.model_size)
if image.mode == "RGBA":
image = image.convert("RGB")
processed_image = depth_anything_detector(image=image, resolution=self.resolution, offload=self.offload)
return processed_image
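Outside the node graph, the same detector could be exercised directly, as in this sketch; the input filename is hypothetical, and the calls mirror `run_processor` above.

```python
from PIL import Image

from invokeai.backend.image_util.depth_anything import DepthAnythingDetector

detector = DepthAnythingDetector()
detector.load_model(model_size="small")  # one of "large", "base", "small"

image = Image.open("input.png")  # hypothetical input
if image.mode == "RGBA":
    image = image.convert("RGB")

depth_map = detector(image=image, resolution=512, offload=False)
depth_map.save("depth.png")  # assumes the detector returns a PIL image, as the node implies
```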

View File

@@ -1,5 +1,6 @@
# Copyright (c) 2023 Kyle Schouviller (https://github.com/kyle0654)
import math
from contextlib import ExitStack
from functools import singledispatchmethod
from typing import List, Literal, Optional, Union
@@ -859,9 +860,9 @@ class LatentsToImageInvocation(BaseInvocation, WithMetadata):
vae.disable_tiling()
# clear memory as vae decode can request a lot
torch.cuda.empty_cache()
if choose_torch_device() == torch.device("mps"):
mps.empty_cache()
# torch.cuda.empty_cache()
# if choose_torch_device() == torch.device("mps"):
# mps.empty_cache()
with torch.inference_mode():
# copied from diffusers pipeline
@@ -873,9 +874,9 @@ class LatentsToImageInvocation(BaseInvocation, WithMetadata):
image = VaeImageProcessor.numpy_to_pil(np_image)[0]
torch.cuda.empty_cache()
if choose_torch_device() == torch.device("mps"):
mps.empty_cache()
# torch.cuda.empty_cache()
# if choose_torch_device() == torch.device("mps"):
# mps.empty_cache()
image_dto = context.services.images.create(
image=image,
@@ -1228,3 +1229,57 @@ class CropLatentsCoreInvocation(BaseInvocation):
context.services.latents.save(name, cropped_latents)
return build_latents_output(latents_name=name, latents=cropped_latents)
@invocation_output("ideal_size_output")
class IdealSizeOutput(BaseInvocationOutput):
"""Base class for invocations that output an image"""
width: int = OutputField(description="The ideal width of the image (in pixels)")
height: int = OutputField(description="The ideal height of the image (in pixels)")
@invocation(
"ideal_size",
title="Ideal Size",
tags=["latents", "math", "ideal_size"],
version="1.0.2",
)
class IdealSizeInvocation(BaseInvocation):
"""Calculates the ideal size for generation to avoid duplication"""
width: int = InputField(default=1024, description="Final image width")
height: int = InputField(default=576, description="Final image height")
unet: UNetField = InputField(default=None, description=FieldDescriptions.unet)
multiplier: float = InputField(
default=1.0,
description="Amount to multiply the model's dimensions by when calculating the ideal size (may result in initial generation artifacts if too large)",
)
def trim_to_multiple_of(self, *args, multiple_of=LATENT_SCALE_FACTOR):
return tuple((x - x % multiple_of) for x in args)
def invoke(self, context: InvocationContext) -> IdealSizeOutput:
aspect = self.width / self.height
dimension = 512
if self.unet.unet.base_model == BaseModelType.StableDiffusion2:
dimension = 768
elif self.unet.unet.base_model == BaseModelType.StableDiffusionXL:
dimension = 1024
dimension = dimension * self.multiplier
min_dimension = math.floor(dimension * 0.5)
model_area = dimension * dimension # hardcoded for now since all models are trained on square images
if aspect > 1.0:
init_height = max(min_dimension, math.sqrt(model_area / aspect))
init_width = init_height * aspect
else:
init_width = max(min_dimension, math.sqrt(model_area * aspect))
init_height = init_width / aspect
scaled_width, scaled_height = self.trim_to_multiple_of(
math.floor(init_width),
math.floor(init_height),
)
return IdealSizeOutput(width=scaled_width, height=scaled_height)
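To make the arithmetic concrete, here is a standalone sketch of the same calculation, assuming `LATENT_SCALE_FACTOR` is 8; for a 1536x864 SDXL target it yields 1360x768.

```python
import math

LATENT_SCALE_FACTOR = 8  # assumed value of the constant referenced above


def ideal_size(width: int, height: int, base_dimension: int, multiplier: float = 1.0) -> tuple[int, int]:
    """Replicates the IdealSizeInvocation math for illustration."""
    aspect = width / height
    dimension = base_dimension * multiplier
    min_dimension = math.floor(dimension * 0.5)
    model_area = dimension * dimension  # square training resolution assumed
    if aspect > 1.0:
        init_height = max(min_dimension, math.sqrt(model_area / aspect))
        init_width = init_height * aspect
    else:
        init_width = max(min_dimension, math.sqrt(model_area * aspect))
        init_height = init_width / aspect

    def trim(x: float) -> int:
        v = math.floor(x)
        return v - v % LATENT_SCALE_FACTOR

    return trim(init_width), trim(init_height)


print(ideal_size(1536, 864, 1024))  # SDXL base dimension -> (1360, 768)
```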

View File

@@ -1,5 +1,7 @@
"""Init file for InvokeAI configure package."""
from invokeai.app.services.config.config_common import PagingArgumentParser
from .config_default import InvokeAIAppConfig, get_invokeai_config
__all__ = ["InvokeAIAppConfig", "get_invokeai_config"]
__all__ = ["InvokeAIAppConfig", "get_invokeai_config", "PagingArgumentParser"]

View File

@@ -173,10 +173,10 @@ from __future__ import annotations
import os
from pathlib import Path
from typing import Any, ClassVar, Dict, List, Literal, Optional, Union, get_type_hints
from typing import Any, ClassVar, Dict, List, Literal, Optional, Union
from omegaconf import DictConfig, OmegaConf
from pydantic import Field, TypeAdapter
from pydantic import Field
from pydantic.config import JsonDict
from pydantic_settings import SettingsConfigDict
@@ -209,7 +209,7 @@ class InvokeAIAppConfig(InvokeAISettings):
"""Configuration object for InvokeAI App."""
singleton_config: ClassVar[Optional[InvokeAIAppConfig]] = None
singleton_init: ClassVar[Optional[Dict]] = None
singleton_init: ClassVar[Optional[Dict[str, Any]]] = None
# fmt: off
type: Literal["InvokeAI"] = "InvokeAI"
@@ -251,7 +251,11 @@ class InvokeAIAppConfig(InvokeAISettings):
log_level : Literal["debug", "info", "warning", "error", "critical"] = Field(default="info", description="Emit logging messages at this level or higher", json_schema_extra=Categories.Logging)
log_sql : bool = Field(default=False, description="Log SQL queries", json_schema_extra=Categories.Logging)
# Development
dev_reload : bool = Field(default=False, description="Automatically reload when Python sources are changed.", json_schema_extra=Categories.Development)
profile_graphs : bool = Field(default=False, description="Enable graph profiling", json_schema_extra=Categories.Development)
profile_prefix : Optional[str] = Field(default=None, description="An optional prefix for profile output files.", json_schema_extra=Categories.Development)
profiles_dir : Path = Field(default=Path('profiles'), description="Directory for graph profiles", json_schema_extra=Categories.Development)
version : bool = Field(default=False, description="Show InvokeAI version and exit", json_schema_extra=Categories.Other)
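A hedged sketch of turning on the new profiling fields programmatically; the attribute names come from the diff above, while setting them directly on the singleton is an assumption about typical usage.

```python
from pathlib import Path

from invokeai.app.services.config import InvokeAIAppConfig

config = InvokeAIAppConfig.get_config()
config.profile_graphs = True            # profile every session's graph execution
config.profile_prefix = "perf"          # optional prefix for profile output files
config.profiles_dir = Path("profiles")  # relative paths resolve against the runtime root

print(config.profiles_path)  # resolved output directory for profile dumps
```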
@@ -263,14 +267,14 @@ class InvokeAIAppConfig(InvokeAISettings):
# DEVICE
device : Literal["auto", "cpu", "cuda", "cuda:1", "mps"] = Field(default="auto", description="Generation device", json_schema_extra=Categories.Device)
precision : Literal["auto", "float16", "float32", "autocast"] = Field(default="auto", description="Floating point precision", json_schema_extra=Categories.Device)
precision : Literal["auto", "float16", "bfloat16", "float32", "autocast"] = Field(default="auto", description="Floating point precision", json_schema_extra=Categories.Device)
# GENERATION
sequential_guidance : bool = Field(default=False, description="Whether to calculate guidance in serial instead of in parallel, lowering memory requirements", json_schema_extra=Categories.Generation)
attention_type : Literal["auto", "normal", "xformers", "sliced", "torch-sdp"] = Field(default="auto", description="Attention type", json_schema_extra=Categories.Generation)
attention_slice_size: Literal["auto", "balanced", "max", 1, 2, 3, 4, 5, 6, 7, 8] = Field(default="auto", description='Slice size, valid when attention_type=="sliced"', json_schema_extra=Categories.Generation)
force_tiled_decode : bool = Field(default=False, description="Whether to enable tiled VAE decode (reduces memory consumption with some performance penalty)", json_schema_extra=Categories.Generation)
png_compress_level : int = Field(default=6, description="The compress_level setting of PIL.Image.save(), used for PNG encoding. All settings are lossless. 0 = fastest, largest filesize, 9 = slowest, smallest filesize", json_schema_extra=Categories.Generation)
png_compress_level : int = Field(default=1, description="The compress_level setting of PIL.Image.save(), used for PNG encoding. All settings are lossless. 0 = fastest, largest filesize, 9 = slowest, smallest filesize", json_schema_extra=Categories.Generation)
# QUEUE
max_queue_size : int = Field(default=10000, gt=0, description="Maximum number of items in the session queue", json_schema_extra=Categories.Queue)
@@ -280,6 +284,9 @@ class InvokeAIAppConfig(InvokeAISettings):
deny_nodes : Optional[List[str]] = Field(default=None, description="List of nodes to deny. Omit to deny none.", json_schema_extra=Categories.Nodes)
node_cache_size : int = Field(default=512, description="How many cached nodes to keep in memory", json_schema_extra=Categories.Nodes)
# MODEL IMPORT
civitai_api_key : Optional[str] = Field(default=os.environ.get("CIVITAI_API_KEY"), description="API key for CivitAI", json_schema_extra=Categories.Other)
# DEPRECATED FIELDS - STILL HERE IN ORDER TO OBTAIN VALUES FROM PRE-3.1 CONFIG FILES
always_use_cpu : bool = Field(default=False, description="If true, use the CPU for rendering even if a GPU is available.", json_schema_extra=Categories.MemoryPerformance)
max_cache_size : Optional[float] = Field(default=None, gt=0, description="Maximum memory amount used by model cache for rapid switching", json_schema_extra=Categories.MemoryPerformance)
@@ -289,6 +296,7 @@ class InvokeAIAppConfig(InvokeAISettings):
lora_dir : Optional[Path] = Field(default=None, description='Path to a directory of LoRA/LyCORIS models to be imported on startup.', json_schema_extra=Categories.Paths)
embedding_dir : Optional[Path] = Field(default=None, description='Path to a directory of Textual Inversion embeddings to be imported on startup.', json_schema_extra=Categories.Paths)
controlnet_dir : Optional[Path] = Field(default=None, description='Path to a directory of ControlNet embeddings to be imported on startup.', json_schema_extra=Categories.Paths)
# this is not referred to in the source code and can be removed entirely
#free_gpu_mem : Optional[bool] = Field(default=None, description="If true, purge model from GPU after each generation.", json_schema_extra=Categories.MemoryPerformance)
@@ -301,8 +309,8 @@ class InvokeAIAppConfig(InvokeAISettings):
self,
argv: Optional[list[str]] = None,
conf: Optional[DictConfig] = None,
clobber=False,
):
clobber: Optional[bool] = False,
) -> None:
"""
Update settings with contents of init file, environment, and command-line settings.
@@ -328,16 +336,12 @@ class InvokeAIAppConfig(InvokeAISettings):
super().parse_args(argv)
if self.singleton_init and not clobber:
hints = get_type_hints(self.__class__)
for k in self.singleton_init:
setattr(
self,
k,
TypeAdapter(hints[k]).validate_python(self.singleton_init[k]),
)
# When setting values in this way, set validate_assignment to true if you want to validate the value.
for k, v in self.singleton_init.items():
setattr(self, k, v)
@classmethod
def get_config(cls, **kwargs: Dict[str, Any]) -> InvokeAIAppConfig:
def get_config(cls, **kwargs: Any) -> InvokeAIAppConfig:
"""Return a singleton InvokeAIAppConfig configuration object."""
if (
cls.singleton_config is None
@@ -449,13 +453,18 @@ class InvokeAIAppConfig(InvokeAISettings):
disabled_in_config = not self.xformers_enabled
return disabled_in_config and self.attention_type != "xformers"
@property
def profiles_path(self) -> Path:
"""Path to the graph profiles directory."""
return self._resolve(self.profiles_dir)
@staticmethod
def find_root() -> Path:
"""Choose the runtime root directory when not specified on command line or init file."""
return _find_root()
def get_invokeai_config(**kwargs) -> InvokeAIAppConfig:
def get_invokeai_config(**kwargs: Any) -> InvokeAIAppConfig:
"""Legacy function which returns InvokeAIAppConfig.get_config()."""
return InvokeAIAppConfig.get_config(**kwargs)

View File

@@ -34,6 +34,7 @@ class ServiceInactiveException(Exception):
DownloadEventHandler = Callable[["DownloadJob"], None]
DownloadExceptionHandler = Callable[["DownloadJob", Optional[Exception]], None]
@total_ordering
@@ -55,6 +56,7 @@ class DownloadJob(BaseModel):
job_ended: Optional[str] = Field(
default=None, description="Timestamp for when the download job ende1d (completed or errored)"
)
content_type: Optional[str] = Field(default=None, description="Content type of downloaded file")
bytes: int = Field(default=0, description="Bytes downloaded so far")
total_bytes: int = Field(default=0, description="Total file size (bytes)")
@@ -70,7 +72,11 @@ class DownloadJob(BaseModel):
_on_progress: Optional[DownloadEventHandler] = PrivateAttr(default=None)
_on_complete: Optional[DownloadEventHandler] = PrivateAttr(default=None)
_on_cancelled: Optional[DownloadEventHandler] = PrivateAttr(default=None)
_on_error: Optional[DownloadEventHandler] = PrivateAttr(default=None)
_on_error: Optional[DownloadExceptionHandler] = PrivateAttr(default=None)
def __hash__(self) -> int:
"""Return hash of the string representation of this object, for indexing."""
return hash(str(self))
def __le__(self, other: "DownloadJob") -> bool:
"""Return True if this job's priority is less than another's."""
@@ -87,6 +93,26 @@ class DownloadJob(BaseModel):
"""Call to cancel the job."""
return self._cancelled
@property
def complete(self) -> bool:
"""Return true if job completed without errors."""
return self.status == DownloadJobStatus.COMPLETED
@property
def running(self) -> bool:
"""Return true if the job is running."""
return self.status == DownloadJobStatus.RUNNING
@property
def errored(self) -> bool:
"""Return true if the job is errored."""
return self.status == DownloadJobStatus.ERROR
@property
def in_terminal_state(self) -> bool:
"""Return true if job has finished, one way or another."""
return self.status not in [DownloadJobStatus.WAITING, DownloadJobStatus.RUNNING]
@property
def on_start(self) -> Optional[DownloadEventHandler]:
"""Return the on_start event handler."""
@@ -103,7 +129,7 @@ class DownloadJob(BaseModel):
return self._on_complete
@property
def on_error(self) -> Optional[DownloadEventHandler]:
def on_error(self) -> Optional[DownloadExceptionHandler]:
"""Return the on_error event handler."""
return self._on_error
@@ -118,7 +144,7 @@ class DownloadJob(BaseModel):
on_progress: Optional[DownloadEventHandler] = None,
on_complete: Optional[DownloadEventHandler] = None,
on_cancelled: Optional[DownloadEventHandler] = None,
on_error: Optional[DownloadEventHandler] = None,
on_error: Optional[DownloadExceptionHandler] = None,
) -> None:
"""Set the callbacks for download events."""
self._on_start = on_start
@@ -150,10 +176,10 @@ class DownloadQueueServiceBase(ABC):
on_progress: Optional[DownloadEventHandler] = None,
on_complete: Optional[DownloadEventHandler] = None,
on_cancelled: Optional[DownloadEventHandler] = None,
on_error: Optional[DownloadEventHandler] = None,
on_error: Optional[DownloadExceptionHandler] = None,
) -> DownloadJob:
"""
Create a download job.
Create and enqueue download job.
:param source: Source of the download as a URL.
:param dest: Path to download to. See below.
@@ -175,6 +201,25 @@ class DownloadQueueServiceBase(ABC):
"""
pass
@abstractmethod
def submit_download_job(
self,
job: DownloadJob,
on_start: Optional[DownloadEventHandler] = None,
on_progress: Optional[DownloadEventHandler] = None,
on_complete: Optional[DownloadEventHandler] = None,
on_cancelled: Optional[DownloadEventHandler] = None,
on_error: Optional[DownloadExceptionHandler] = None,
) -> None:
"""
Enqueue a download job.
:param job: The DownloadJob
:param on_start, on_progress, on_complete, on_error: Callbacks for the indicated
events.
"""
pass
@abstractmethod
def list_jobs(self) -> List[DownloadJob]:
"""
@@ -197,21 +242,21 @@ class DownloadQueueServiceBase(ABC):
pass
@abstractmethod
def cancel_all_jobs(self):
def cancel_all_jobs(self) -> None:
"""Cancel all active and enquedjobs."""
pass
@abstractmethod
def prune_jobs(self):
def prune_jobs(self) -> None:
"""Prune completed and errored queue items from the job list."""
pass
@abstractmethod
def cancel_job(self, job: DownloadJob):
def cancel_job(self, job: DownloadJob) -> None:
"""Cancel the job, clearing partial downloads and putting it into ERROR state."""
pass
@abstractmethod
def join(self):
def join(self) -> None:
"""Wait until all jobs are off the queue."""
pass

View File

@@ -5,10 +5,9 @@ import os
import re
import threading
import traceback
from logging import Logger
from pathlib import Path
from queue import Empty, PriorityQueue
from typing import Any, Dict, List, Optional, Set
from typing import Any, Dict, List, Optional
import requests
from pydantic.networks import AnyHttpUrl
@@ -21,6 +20,7 @@ from invokeai.backend.util.logging import InvokeAILogger
from .download_base import (
DownloadEventHandler,
DownloadExceptionHandler,
DownloadJob,
DownloadJobCancelledException,
DownloadJobStatus,
@@ -36,18 +36,6 @@ DOWNLOAD_CHUNK_SIZE = 100000
class DownloadQueueService(DownloadQueueServiceBase):
"""Class for queued download of models."""
_jobs: Dict[int, DownloadJob]
_max_parallel_dl: int = 5
_worker_pool: Set[threading.Thread]
_queue: PriorityQueue[DownloadJob]
_stop_event: threading.Event
_lock: threading.Lock
_logger: Logger
_events: Optional[EventServiceBase] = None
_next_job_id: int = 0
_accept_download_requests: bool = False
_requests: requests.sessions.Session
def __init__(
self,
max_parallel_dl: int = 5,
@@ -99,6 +87,33 @@ class DownloadQueueService(DownloadQueueServiceBase):
self._stop_event.set()
self._worker_pool.clear()
def submit_download_job(
self,
job: DownloadJob,
on_start: Optional[DownloadEventHandler] = None,
on_progress: Optional[DownloadEventHandler] = None,
on_complete: Optional[DownloadEventHandler] = None,
on_cancelled: Optional[DownloadEventHandler] = None,
on_error: Optional[DownloadExceptionHandler] = None,
) -> None:
"""Enqueue a download job."""
if not self._accept_download_requests:
raise ServiceInactiveException(
"The download service is not currently accepting requests. Please call start() to initialize the service."
)
with self._lock:
job.id = self._next_job_id
self._next_job_id += 1
job.set_callbacks(
on_start=on_start,
on_progress=on_progress,
on_complete=on_complete,
on_cancelled=on_cancelled,
on_error=on_error,
)
self._jobs[job.id] = job
self._queue.put(job)
def download(
self,
source: AnyHttpUrl,
@@ -109,32 +124,27 @@ class DownloadQueueService(DownloadQueueServiceBase):
on_progress: Optional[DownloadEventHandler] = None,
on_complete: Optional[DownloadEventHandler] = None,
on_cancelled: Optional[DownloadEventHandler] = None,
on_error: Optional[DownloadEventHandler] = None,
on_error: Optional[DownloadExceptionHandler] = None,
) -> DownloadJob:
"""Create a download job and return its ID."""
"""Create and enqueue a download job and return it."""
if not self._accept_download_requests:
raise ServiceInactiveException(
"The download service is not currently accepting requests. Please call start() to initialize the service."
)
with self._lock:
id = self._next_job_id
self._next_job_id += 1
job = DownloadJob(
id=id,
source=source,
dest=dest,
priority=priority,
access_token=access_token,
)
job.set_callbacks(
on_start=on_start,
on_progress=on_progress,
on_complete=on_complete,
on_cancelled=on_cancelled,
on_error=on_error,
)
self._jobs[id] = job
self._queue.put(job)
job = DownloadJob(
source=source,
dest=dest,
priority=priority,
access_token=access_token,
)
self.submit_download_job(
job,
on_start=on_start,
on_progress=on_progress,
on_complete=on_complete,
on_cancelled=on_cancelled,
on_error=on_error,
)
return job
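A hedged end-to-end sketch of using the queue; the import path and the URL are assumptions, and `start()` is inferred from the "Please call start()" guard above.

```python
from pathlib import Path

from invokeai.app.services.download import DownloadQueueService  # assumed module path

queue = DownloadQueueService(max_parallel_dl=2)
queue.start()  # spin up the worker threads

job = queue.download(
    source="https://example.com/model.safetensors",  # hypothetical URL
    dest=Path("/tmp/models"),
    on_complete=lambda j: print(f"saved to {j.download_path}"),
    on_error=lambda j, exc: print(f"failed: {exc}"),
)

queue.join()         # block until the queue drains
print(job.complete)  # True if the job finished without errors
```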
def join(self) -> None:
@@ -150,7 +160,7 @@ class DownloadQueueService(DownloadQueueServiceBase):
with self._lock:
to_delete = set()
for job_id, job in self._jobs.items():
if self._in_terminal_state(job):
if job.in_terminal_state:
to_delete.add(job_id)
for job_id in to_delete:
del self._jobs[job_id]
@@ -172,19 +182,12 @@ class DownloadQueueService(DownloadQueueServiceBase):
with self._lock:
job.cancel()
def cancel_all_jobs(self, preserve_partial: bool = False) -> None:
def cancel_all_jobs(self) -> None:
"""Cancel all jobs (those not in enqueued, running or paused state)."""
for job in self._jobs.values():
if not self._in_terminal_state(job):
if not job.in_terminal_state:
self.cancel_job(job)
def _in_terminal_state(self, job: DownloadJob) -> bool:
return job.status in [
DownloadJobStatus.COMPLETED,
DownloadJobStatus.CANCELLED,
DownloadJobStatus.ERROR,
]
def _start_workers(self, max_workers: int) -> None:
"""Start the requested number of worker threads."""
self._stop_event.clear()
@@ -205,7 +208,6 @@ class DownloadQueueService(DownloadQueueServiceBase):
job = self._queue.get(timeout=1)
except Empty:
continue
try:
job.job_started = get_iso_timestamp()
self._do_download(job)
@@ -214,7 +216,7 @@ class DownloadQueueService(DownloadQueueServiceBase):
except (OSError, HTTPError) as excp:
job.error_type = excp.__class__.__name__ + f"({str(excp)})"
job.error = traceback.format_exc()
self._signal_job_error(job)
self._signal_job_error(job, excp)
except DownloadJobCancelledException:
self._signal_job_cancelled(job)
self._cleanup_cancelled_job(job)
@@ -235,6 +237,8 @@ class DownloadQueueService(DownloadQueueServiceBase):
resp = self._requests.get(str(url), headers=header, stream=True)
if not resp.ok:
raise HTTPError(resp.reason)
job.content_type = resp.headers.get("Content-Type")
content_length = int(resp.headers.get("content-length", 0))
job.total_bytes = content_length
@@ -296,6 +300,7 @@ class DownloadQueueService(DownloadQueueServiceBase):
self._signal_job_progress(job)
# if we get here we are done and can rename the file to the original dest
self._logger.debug(f"{job.source}: saved to {job.download_path} (bytes={job.bytes})")
in_progress_path.rename(job.download_path)
def _validate_filename(self, directory: str, filename: str) -> bool:
@@ -322,7 +327,9 @@ class DownloadQueueService(DownloadQueueServiceBase):
try:
job.on_start(job)
except Exception as e:
self._logger.error(e)
self._logger.error(
f"An error occurred while processing the on_start callback: {traceback.format_exception(e)}"
)
if self._event_bus:
assert job.download_path
self._event_bus.emit_download_started(str(job.source), job.download_path.as_posix())
@@ -332,7 +339,9 @@ class DownloadQueueService(DownloadQueueServiceBase):
try:
job.on_progress(job)
except Exception as e:
self._logger.error(e)
self._logger.error(
f"An error occurred while processing the on_progress callback: {traceback.format_exception(e)}"
)
if self._event_bus:
assert job.download_path
self._event_bus.emit_download_progress(
@@ -348,7 +357,9 @@ class DownloadQueueService(DownloadQueueServiceBase):
try:
job.on_complete(job)
except Exception as e:
self._logger.error(e)
self._logger.error(
f"An error occurred while processing the on_complete callback: {traceback.format_exception(e)}"
)
if self._event_bus:
assert job.download_path
self._event_bus.emit_download_complete(
@@ -356,29 +367,36 @@ class DownloadQueueService(DownloadQueueServiceBase):
)
def _signal_job_cancelled(self, job: DownloadJob) -> None:
if job.status not in [DownloadJobStatus.RUNNING, DownloadJobStatus.WAITING]:
return
job.status = DownloadJobStatus.CANCELLED
if job.on_cancelled:
try:
job.on_cancelled(job)
except Exception as e:
self._logger.error(e)
self._logger.error(
f"An error occurred while processing the on_cancelled callback: {traceback.format_exception(e)}"
)
if self._event_bus:
self._event_bus.emit_download_cancelled(str(job.source))
def _signal_job_error(self, job: DownloadJob) -> None:
def _signal_job_error(self, job: DownloadJob, excp: Optional[Exception] = None) -> None:
job.status = DownloadJobStatus.ERROR
self._logger.error(f"{str(job.source)}: {traceback.format_exception(excp)}")
if job.on_error:
try:
job.on_error(job)
job.on_error(job, excp)
except Exception as e:
self._logger.error(e)
self._logger.error(
f"An error occurred while processing the on_error callback: {traceback.format_exception(e)}"
)
if self._event_bus:
assert job.error_type
assert job.error
self._event_bus.emit_download_error(str(job.source), error_type=job.error_type, error=job.error)
def _cleanup_cancelled_job(self, job: DownloadJob) -> None:
self._logger.warning(f"Cleaning up leftover files from cancelled download job {job.download_path}")
self._logger.debug(f"Cleaning up leftover files from cancelled download job {job.download_path}")
try:
if job.download_path:
partial_file = self._in_progress_path(job.download_path)

View File

@@ -1,7 +1,7 @@
# Copyright (c) 2022 Kyle Schouviller (https://github.com/kyle0654)
from typing import Any, Optional
from typing import Any, Dict, List, Optional, Union
from invokeai.app.services.invocation_processor.invocation_processor_common import ProgressImage
from invokeai.app.services.session_queue.session_queue_common import (
@@ -404,53 +404,72 @@ class EventServiceBase:
},
)
def emit_model_install_started(self, source: str) -> None:
def emit_model_install_downloading(
self,
source: str,
local_path: str,
bytes: int,
total_bytes: int,
parts: List[Dict[str, Union[str, int]]],
) -> None:
"""
Emitted when an install job is started.
Emit at intervals while the install job is in progress (remote models only).
:param source: Source of the model
:param local_path: Where model is downloading to
:param parts: Progress of downloading URLs that comprise the model, if any.
:param bytes: Number of bytes downloaded so far.
:param total_bytes: Total size of download, including all files.
This emits a Dict with keys "source", "local_path", "bytes", "total_bytes" and "parts".
"""
self.__emit_model_event(
event_name="model_install_downloading",
payload={
"source": source,
"local_path": local_path,
"bytes": bytes,
"total_bytes": total_bytes,
"parts": parts,
},
)
def emit_model_install_running(self, source: str) -> None:
"""
Emit once when an install job becomes active.
:param source: Source of the model; local path, repo_id or url
"""
self.__emit_model_event(
event_name="model_install_started",
event_name="model_install_running",
payload={"source": source},
)
def emit_model_install_completed(self, source: str, key: str) -> None:
def emit_model_install_completed(self, source: str, key: str, total_bytes: Optional[int] = None) -> None:
"""
Emitted when an install job is completed successfully.
Emit when an install job is completed successfully.
:param source: Source of the model; local path, repo_id or url
:param key: Model config record key
:param total_bytes: Size of the model (may be None for installation of a local path)
"""
self.__emit_model_event(
event_name="model_install_completed",
payload={
"source": source,
"total_bytes": total_bytes,
"key": key,
},
)
def emit_model_install_progress(
self,
source: str,
current_bytes: int,
total_bytes: int,
) -> None:
def emit_model_install_cancelled(self, source: str) -> None:
"""
Emitted while the install job is in progress.
(Downloaded models only)
Emit when an install job is cancelled.
:param source: Source of the model
:param current_bytes: Number of bytes downloaded so far
:param total_bytes: Total bytes to download
:param source: Source of the model; local path, repo_id or url
"""
self.__emit_model_event(
event_name="model_install_progress",
payload={
"source": source,
"current_bytes": int,
"total_bytes": int,
},
event_name="model_install_cancelled",
payload={"source": source},
)
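For reference, a sketch of the payload shapes these emitters produce; the keys come from the code above, while the values (and the structure of each `parts` entry) are illustrative.

```python
# "model_install_downloading" -- emitted at intervals while files transfer
downloading_payload = {
    "source": "https://example.com/model.safetensors",
    "local_path": "/data/models/model.safetensors",
    "bytes": 1_048_576,
    "total_bytes": 7_340_032,
    "parts": [{"url": "https://example.com/model.safetensors", "bytes": 1_048_576}],
}

# "model_install_completed" -- emitted once on success
completed_payload = {
    "source": "https://example.com/model.safetensors",
    "total_bytes": 7_340_032,
    "key": "0123abcd",  # model config record key
}
```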
def emit_model_install_error(
@@ -460,10 +479,11 @@ class EventServiceBase:
error: str,
) -> None:
"""
Emitted when an install job encounters an exception.
Emit when an install job encounters an exception.
:param source: Source of the model
:param exception: The exception that raised the error
:param error_type: The name of the exception
:param error: A text description of the exception
"""
self.__emit_model_event(
event_name="model_install_error",

View File

@@ -1,11 +1,16 @@
import time
import traceback
from contextlib import suppress
from threading import BoundedSemaphore, Event, Thread
from typing import Optional
import invokeai.backend.util.logging as logger
from invokeai.app.invocations.baseinvocation import InvocationContext
from invokeai.app.services.invocation_queue.invocation_queue_common import InvocationQueueItem
from invokeai.app.services.invocation_stats.invocation_stats_common import (
GESStatsNotFoundError,
)
from invokeai.app.util.profiler import Profiler
from ..invoker import Invoker
from .invocation_processor_base import InvocationProcessorABC
@@ -18,7 +23,7 @@ class DefaultInvocationProcessor(InvocationProcessorABC):
__invoker: Invoker
__threadLimit: BoundedSemaphore
def start(self, invoker) -> None:
def start(self, invoker: Invoker) -> None:
# if we do want multithreading at some point, we could make this configurable
self.__threadLimit = BoundedSemaphore(1)
self.__invoker = invoker
@@ -39,6 +44,16 @@ class DefaultInvocationProcessor(InvocationProcessorABC):
self.__threadLimit.acquire()
queue_item: Optional[InvocationQueueItem] = None
profiler = (
Profiler(
logger=self.__invoker.services.logger,
output_dir=self.__invoker.services.configuration.profiles_path,
prefix=self.__invoker.services.configuration.profile_prefix,
)
if self.__invoker.services.configuration.profile_graphs
else None
)
while not stop_event.is_set():
try:
queue_item = self.__invoker.services.queue.get()
@@ -49,6 +64,10 @@ class DefaultInvocationProcessor(InvocationProcessorABC):
# do not hammer the queue
time.sleep(0.5)
continue
if profiler and profiler.profile_id != queue_item.graph_execution_state_id:
profiler.start(profile_id=queue_item.graph_execution_state_id)
try:
graph_execution_state = self.__invoker.services.graph_execution_manager.get(
queue_item.graph_execution_state_id
@@ -132,13 +151,13 @@ class DefaultInvocationProcessor(InvocationProcessorABC):
source_node_id=source_node_id,
result=outputs.model_dump(),
)
self.__invoker.services.performance_statistics.log_stats()
except KeyboardInterrupt:
pass
except CanceledException:
self.__invoker.services.performance_statistics.reset_stats(graph_execution_state.id)
with suppress(GESStatsNotFoundError):
self.__invoker.services.performance_statistics.reset_stats(graph_execution_state.id)
pass
except Exception as e:
@@ -163,7 +182,8 @@ class DefaultInvocationProcessor(InvocationProcessorABC):
error_type=e.__class__.__name__,
error=error,
)
self.__invoker.services.performance_statistics.reset_stats(graph_execution_state.id)
with suppress(GESStatsNotFoundError):
self.__invoker.services.performance_statistics.reset_stats(graph_execution_state.id)
pass
# Check queue to see if this is canceled, and skip if so
@@ -195,12 +215,21 @@ class DefaultInvocationProcessor(InvocationProcessorABC):
error=traceback.format_exc(),
)
elif is_complete:
self.__invoker.services.events.emit_graph_execution_complete(
queue_batch_id=queue_item.session_queue_batch_id,
queue_item_id=queue_item.session_queue_item_id,
queue_id=queue_item.session_queue_id,
graph_execution_state_id=graph_execution_state.id,
)
with suppress(GESStatsNotFoundError):
self.__invoker.services.performance_statistics.log_stats(graph_execution_state.id)
self.__invoker.services.events.emit_graph_execution_complete(
queue_batch_id=queue_item.session_queue_batch_id,
queue_item_id=queue_item.session_queue_item_id,
queue_id=queue_item.session_queue_id,
graph_execution_state_id=graph_execution_state.id,
)
if profiler:
profile_path = profiler.stop()
stats_path = profile_path.with_suffix(".json")
self.__invoker.services.performance_statistics.dump_stats(
graph_execution_state_id=graph_execution_state.id, output_path=stats_path
)
self.__invoker.services.performance_statistics.reset_stats(graph_execution_state.id)
except KeyboardInterrupt:
pass # Log something? KeyboardInterrupt is probably not going to be seen by the processor
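The Profiler wired into the processor above follows a simple contract: start() keys a profile to a graph execution id, and stop() returns the path of the written profile, from which the processor derives the JSON stats path via with_suffix(".json"). A minimal sketch of that contract built on cProfile; the real class lives in invokeai.app.util.profiler, and everything below beyond the start/stop signatures is an assumption:

import cProfile
from pathlib import Path
from typing import Optional

class ProfilerSketch:
    """Hypothetical stand-in for invokeai.app.util.profiler.Profiler (logger omitted)."""

    def __init__(self, output_dir: Path, prefix: Optional[str] = None) -> None:
        self.profile_id: Optional[str] = None
        self._output_dir = output_dir
        self._prefix = prefix
        self._profiler: Optional[cProfile.Profile] = None

    def start(self, profile_id: str) -> None:
        # One profile per session; starting again replaces the previous one,
        # which matches the processor's profile_id check above.
        self.profile_id = profile_id
        self._profiler = cProfile.Profile()
        self._profiler.enable()

    def stop(self) -> Path:
        # Stop profiling and write a .prof file named after the session.
        assert self._profiler is not None and self.profile_id is not None
        self._profiler.disable()
        stem = f"{self._prefix}{self.profile_id}" if self._prefix else self.profile_id
        self._output_dir.mkdir(parents=True, exist_ok=True)
        path = self._output_dir / f"{stem}.prof"
        self._profiler.dump_stats(path)
        return path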

View File

@@ -30,23 +30,15 @@ writes to the system log is stored in InvocationServices.performance_statistics.
from abc import ABC, abstractmethod
from contextlib import AbstractContextManager
from typing import Dict
from pathlib import Path
from invokeai.app.invocations.baseinvocation import BaseInvocation
from invokeai.backend.model_management.model_cache import CacheStats
from .invocation_stats_common import NodeLog
from invokeai.app.services.invocation_stats.invocation_stats_common import InvocationStatsSummary
class InvocationStatsServiceBase(ABC):
"Abstract base class for recording node memory/time performance statistics"
# {graph_id => NodeLog}
_stats: Dict[str, NodeLog]
_cache_stats: Dict[str, CacheStats]
ram_used: float
ram_changed: float
@abstractmethod
def __init__(self):
"""
@@ -71,51 +63,36 @@ class InvocationStatsServiceBase(ABC):
@abstractmethod
def reset_stats(self, graph_execution_state_id: str):
"""
Reset all statistics for the indicated graph
:param graph_execution_state_id
Reset all statistics for the indicated graph.
:param graph_execution_state_id: The id of the session whose stats to reset.
:raises GESStatsNotFoundError: if the graph isn't tracked in the stats.
"""
pass
@abstractmethod
def reset_all_stats(self):
"""Zero all statistics"""
pass
@abstractmethod
def update_invocation_stats(
self,
graph_id: str,
invocation_type: str,
time_used: float,
vram_used: float,
):
"""
Add timing information on execution of a node. Usually
used internally.
:param graph_id: ID of the graph that is currently executing
:param invocation_type: String literal type of the node
:param time_used: Time used by node's execution (sec)
:param vram_used: Maximum VRAM used during execution (GB)
"""
pass
@abstractmethod
def log_stats(self):
def log_stats(self, graph_execution_state_id: str):
"""
Write out the accumulated statistics to the log or somewhere else.
:param graph_execution_state_id: The id of the session whose stats to log.
:raises GESStatsNotFoundError: if the graph isn't tracked in the stats.
"""
pass
@abstractmethod
def update_mem_stats(
self,
ram_used: float,
ram_changed: float,
):
def get_stats(self, graph_execution_state_id: str) -> InvocationStatsSummary:
"""
Update the collector with RAM memory usage info.
:param ram_used: How much RAM is currently in use.
:param ram_changed: How much RAM changed since last generation.
Gets the accumulated statistics for the indicated graph.
:param graph_execution_state_id: The id of the session whose stats to get.
:raises GESStatsNotFoundError: if the graph isn't tracked in the stats.
"""
pass
@abstractmethod
def dump_stats(self, graph_execution_state_id: str, output_path: Path) -> None:
"""
Write out the accumulated statistics to the indicated path as JSON.
:param graph_execution_state_id: The id of the session whose stats to dump.
:param output_path: The file to write the stats to.
:raises GESStatsNotFoundError: if the graph isn't tracked in the stats.
"""
pass
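Taken together, the reworked interface implies a fixed end-of-session sequence: log and/or dump the stats, then reset them, treating GESStatsNotFoundError as a benign signal that the graph was never tracked. A sketch of that sequence against any implementation of this base class; the helper name is illustrative:

from contextlib import suppress
from pathlib import Path

from invokeai.app.services.invocation_stats.invocation_stats_base import InvocationStatsServiceBase
from invokeai.app.services.invocation_stats.invocation_stats_common import GESStatsNotFoundError

def finalize_session_stats(stats: InvocationStatsServiceBase, session_id: str, json_path: Path) -> None:
    # A session with no executed nodes may never be tracked, so the
    # not-found case is suppressed rather than treated as fatal.
    with suppress(GESStatsNotFoundError):
        stats.log_stats(session_id)
        stats.dump_stats(session_id, output_path=json_path)
        stats.reset_stats(session_id)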

View File

@@ -1,25 +1,183 @@
from dataclasses import dataclass, field
from typing import Dict
from collections import defaultdict
from dataclasses import asdict, dataclass
from typing import Any, Optional
# size of GIG in bytes
GIG = 1073741824
class GESStatsNotFoundError(Exception):
"""Raised when execution stats are not found for a given Graph Execution State."""
@dataclass
class NodeStats:
"""Class for tracking execution stats of an invocation node"""
class NodeExecutionStatsSummary:
"""The stats for a specific type of node."""
calls: int = 0
time_used: float = 0.0 # seconds
max_vram: float = 0.0 # GB
cache_hits: int = 0
cache_misses: int = 0
cache_high_watermark: int = 0
node_type: str
num_calls: int
time_used_seconds: float
peak_vram_gb: float
@dataclass
class NodeLog:
"""Class for tracking node usage"""
class ModelCacheStatsSummary:
"""The stats for the model cache."""
# {node_type => NodeStats}
nodes: Dict[str, NodeStats] = field(default_factory=dict)
high_water_mark_gb: float
cache_size_gb: float
total_usage_gb: float
cache_hits: int
cache_misses: int
models_cached: int
models_cleared: int
@dataclass
class GraphExecutionStatsSummary:
"""The stats for the graph execution state."""
graph_execution_state_id: str
execution_time_seconds: float
# `wall_time_seconds`, `ram_usage_gb` and `ram_change_gb` are derived from the node execution stats.
# In some situations, there are no node stats, so these values are optional.
wall_time_seconds: Optional[float]
ram_usage_gb: Optional[float]
ram_change_gb: Optional[float]
@dataclass
class InvocationStatsSummary:
"""
The accumulated stats for a graph execution.
Its `__str__` method returns a human-readable stats summary.
"""
vram_usage_gb: Optional[float]
graph_stats: GraphExecutionStatsSummary
model_cache_stats: ModelCacheStatsSummary
node_stats: list[NodeExecutionStatsSummary]
def __str__(self) -> str:
_str = ""
_str = f"Graph stats: {self.graph_stats.graph_execution_state_id}\n"
_str += f"{'Node':>30} {'Calls':>7} {'Seconds':>9} {'VRAM Used':>10}\n"
for summary in self.node_stats:
_str += f"{summary.node_type:>30} {summary.num_calls:>7} {summary.time_used_seconds:>8.3f}s {summary.peak_vram_gb:>9.3f}G\n"
_str += f"TOTAL GRAPH EXECUTION TIME: {self.graph_stats.execution_time_seconds:7.3f}s\n"
if self.graph_stats.wall_time_seconds is not None:
_str += f"TOTAL GRAPH WALL TIME: {self.graph_stats.wall_time_seconds:7.3f}s\n"
if self.graph_stats.ram_usage_gb is not None and self.graph_stats.ram_change_gb is not None:
_str += f"RAM used by InvokeAI process: {self.graph_stats.ram_usage_gb:4.2f}G ({self.graph_stats.ram_change_gb:+5.3f}G)\n"
_str += f"RAM used to load models: {self.model_cache_stats.total_usage_gb:4.2f}G\n"
if self.vram_usage_gb:
_str += f"VRAM in use: {self.vram_usage_gb:4.3f}G\n"
_str += "RAM cache statistics:\n"
_str += f" Model cache hits: {self.model_cache_stats.cache_hits}\n"
_str += f" Model cache misses: {self.model_cache_stats.cache_misses}\n"
_str += f" Models cached: {self.model_cache_stats.models_cached}\n"
_str += f" Models cleared from cache: {self.model_cache_stats.models_cleared}\n"
_str += f" Cache high water mark: {self.model_cache_stats.high_water_mark_gb:4.2f}/{self.model_cache_stats.cache_size_gb:4.2f}G\n"
return _str
def as_dict(self) -> dict[str, Any]:
"""Returns the stats as a dictionary."""
return asdict(self)
@dataclass
class NodeExecutionStats:
"""Class for tracking execution stats of an invocation node."""
invocation_type: str
start_time: float # Seconds since the epoch.
end_time: float # Seconds since the epoch.
start_ram_gb: float # GB
end_ram_gb: float # GB
peak_vram_gb: float # GB
def total_time(self) -> float:
return self.end_time - self.start_time
class GraphExecutionStats:
"""Class for tracking execution stats of a graph."""
def __init__(self):
self._node_stats_list: list[NodeExecutionStats] = []
def add_node_execution_stats(self, node_stats: NodeExecutionStats):
self._node_stats_list.append(node_stats)
def get_total_run_time(self) -> float:
"""Get the total time spent executing nodes in the graph."""
total = 0.0
for node_stats in self._node_stats_list:
total += node_stats.total_time()
return total
def get_first_node_stats(self) -> NodeExecutionStats | None:
"""Get the stats of the first node in the graph (by start_time)."""
first_node = None
for node_stats in self._node_stats_list:
if first_node is None or node_stats.start_time < first_node.start_time:
first_node = node_stats
assert first_node is not None
return first_node
def get_last_node_stats(self) -> NodeExecutionStats | None:
"""Get the stats of the last node in the graph (by end_time)."""
last_node = None
for node_stats in self._node_stats_list:
if last_node is None or node_stats.end_time > last_node.end_time:
last_node = node_stats
return last_node
def get_graph_stats_summary(self, graph_execution_state_id: str) -> GraphExecutionStatsSummary:
"""Get a summary of the graph stats."""
first_node = self.get_first_node_stats()
last_node = self.get_last_node_stats()
wall_time_seconds: Optional[float] = None
ram_usage_gb: Optional[float] = None
ram_change_gb: Optional[float] = None
if last_node and first_node:
wall_time_seconds = last_node.end_time - first_node.start_time
ram_usage_gb = last_node.end_ram_gb
ram_change_gb = last_node.end_ram_gb - first_node.start_ram_gb
return GraphExecutionStatsSummary(
graph_execution_state_id=graph_execution_state_id,
execution_time_seconds=self.get_total_run_time(),
wall_time_seconds=wall_time_seconds,
ram_usage_gb=ram_usage_gb,
ram_change_gb=ram_change_gb,
)
def get_node_stats_summaries(self) -> list[NodeExecutionStatsSummary]:
"""Get a summary of the node stats."""
summaries: list[NodeExecutionStatsSummary] = []
node_stats_by_type: dict[str, list[NodeExecutionStats]] = defaultdict(list)
for node_stats in self._node_stats_list:
node_stats_by_type[node_stats.invocation_type].append(node_stats)
for node_type, node_type_stats_list in node_stats_by_type.items():
num_calls = len(node_type_stats_list)
time_used = sum([n.total_time() for n in node_type_stats_list])
peak_vram = max([n.peak_vram_gb for n in node_type_stats_list])
summary = NodeExecutionStatsSummary(
node_type=node_type, num_calls=num_calls, time_used_seconds=time_used, peak_vram_gb=peak_vram
)
summaries.append(summary)
return summaries
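A quick end-to-end check of how these pieces compose, assuming this revision of InvokeAI is importable; the node type and timings below are made up:

import time

from invokeai.app.services.invocation_stats.invocation_stats_common import (
    GraphExecutionStats,
    NodeExecutionStats,
)

stats = GraphExecutionStats()
now = time.time()
stats.add_node_execution_stats(
    NodeExecutionStats(
        invocation_type="compel",
        start_time=now,
        end_time=now + 0.25,
        start_ram_gb=4.0,
        end_ram_gb=4.1,
        peak_vram_gb=1.2,
    )
)
summary = stats.get_graph_stats_summary("example-session-id")
print(f"{summary.execution_time_seconds:.2f}s")  # 0.25s
for node_summary in stats.get_node_stats_summaries():
    print(node_summary.node_type, node_summary.num_calls)  # compel 1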

View File

@@ -1,5 +1,7 @@
import json
import time
from typing import Dict
from contextlib import contextmanager
from pathlib import Path
import psutil
import torch
@@ -7,161 +9,163 @@ import torch
import invokeai.backend.util.logging as logger
from invokeai.app.invocations.baseinvocation import BaseInvocation
from invokeai.app.services.invoker import Invoker
from invokeai.app.services.model_manager.model_manager_base import ModelManagerServiceBase
from invokeai.app.services.item_storage.item_storage_common import ItemNotFoundError
from invokeai.backend.model_management.model_cache import CacheStats
from .invocation_stats_base import InvocationStatsServiceBase
from .invocation_stats_common import GIG, NodeLog, NodeStats
from .invocation_stats_common import (
GESStatsNotFoundError,
GraphExecutionStats,
GraphExecutionStatsSummary,
InvocationStatsSummary,
ModelCacheStatsSummary,
NodeExecutionStats,
NodeExecutionStatsSummary,
)
# Size of 1GB in bytes.
GB = 2**30
class InvocationStatsService(InvocationStatsServiceBase):
"""Accumulate performance information about a running graph. Collects time spent in each node,
as well as the maximum and current VRAM utilisation for CUDA systems"""
_invoker: Invoker
def __init__(self):
# {graph_id => NodeLog}
self._stats: Dict[str, NodeLog] = {}
self._cache_stats: Dict[str, CacheStats] = {}
self.ram_used: float = 0.0
self.ram_changed: float = 0.0
# Maps graph_execution_state_id to GraphExecutionStats.
self._stats: dict[str, GraphExecutionStats] = {}
# Maps graph_execution_state_id to model manager CacheStats.
self._cache_stats: dict[str, CacheStats] = {}
def start(self, invoker: Invoker) -> None:
self._invoker = invoker
class StatsContext:
"""Context manager for collecting statistics."""
invocation: BaseInvocation
collector: "InvocationStatsServiceBase"
graph_id: str
start_time: float
ram_used: int
model_manager: ModelManagerServiceBase
def __init__(
self,
invocation: BaseInvocation,
graph_id: str,
model_manager: ModelManagerServiceBase,
collector: "InvocationStatsServiceBase",
):
"""Initialize statistics for this run."""
self.invocation = invocation
self.collector = collector
self.graph_id = graph_id
self.start_time = 0.0
self.ram_used = 0
self.model_manager = model_manager
def __enter__(self):
self.start_time = time.time()
if torch.cuda.is_available():
torch.cuda.reset_peak_memory_stats()
self.ram_used = psutil.Process().memory_info().rss
if self.model_manager:
self.model_manager.collect_cache_stats(self.collector._cache_stats[self.graph_id])
def __exit__(self, *args):
"""Called on exit from the context."""
ram_used = psutil.Process().memory_info().rss
self.collector.update_mem_stats(
ram_used=ram_used / GIG,
ram_changed=(ram_used - self.ram_used) / GIG,
)
self.collector.update_invocation_stats(
graph_id=self.graph_id,
invocation_type=self.invocation.type, # type: ignore # `type` is not on the `BaseInvocation` model, but *is* on all invocations
time_used=time.time() - self.start_time,
vram_used=torch.cuda.max_memory_allocated() / GIG if torch.cuda.is_available() else 0.0,
)
def collect_stats(
self,
invocation: BaseInvocation,
graph_execution_state_id: str,
) -> StatsContext:
if not self._stats.get(graph_execution_state_id): # first time we're seeing this
self._stats[graph_execution_state_id] = NodeLog()
@contextmanager
def collect_stats(self, invocation: BaseInvocation, graph_execution_state_id: str):
if not self._stats.get(graph_execution_state_id):
# First time we're seeing this graph_execution_state_id.
self._stats[graph_execution_state_id] = GraphExecutionStats()
self._cache_stats[graph_execution_state_id] = CacheStats()
return self.StatsContext(invocation, graph_execution_state_id, self._invoker.services.model_manager, self)
def reset_all_stats(self):
"""Zero all statistics"""
self._stats = {}
# Prune stale stats. There should be none since we're starting a new graph, but just in case.
self._prune_stale_stats()
# Record state before the invocation.
start_time = time.time()
start_ram = psutil.Process().memory_info().rss
if torch.cuda.is_available():
torch.cuda.reset_peak_memory_stats()
if self._invoker.services.model_manager:
self._invoker.services.model_manager.collect_cache_stats(self._cache_stats[graph_execution_state_id])
def reset_stats(self, graph_execution_id: str):
try:
self._stats.pop(graph_execution_id)
except KeyError:
logger.warning(f"Attempted to clear statistics for unknown graph {graph_execution_id}")
# Let the invocation run.
yield None
finally:
# Record state after the invocation.
node_stats = NodeExecutionStats(
invocation_type=invocation.get_type(),
start_time=start_time,
end_time=time.time(),
start_ram_gb=start_ram / GB,
end_ram_gb=psutil.Process().memory_info().rss / GB,
peak_vram_gb=torch.cuda.max_memory_allocated() / GB if torch.cuda.is_available() else 0.0,
)
self._stats[graph_execution_state_id].add_node_execution_stats(node_stats)
def update_mem_stats(
self,
ram_used: float,
ram_changed: float,
):
self.ram_used = ram_used
self.ram_changed = ram_changed
def _prune_stale_stats(self):
"""Check all graphs being tracked and prune any that have completed/errored.
def update_invocation_stats(
self,
graph_id: str,
invocation_type: str,
time_used: float,
vram_used: float,
):
if not self._stats[graph_id].nodes.get(invocation_type):
self._stats[graph_id].nodes[invocation_type] = NodeStats()
stats = self._stats[graph_id].nodes[invocation_type]
stats.calls += 1
stats.time_used += time_used
stats.max_vram = max(stats.max_vram, vram_used)
def log_stats(self):
completed = set()
errored = set()
for graph_id, _node_log in self._stats.items():
This shouldn't be necessary, but we don't have totally robust upstream handling of graph completions/errors, so
for now we call this function periodically to prevent them from accumulating.
"""
to_prune: list[str] = []
for graph_execution_state_id in self._stats:
try:
current_graph_state = self._invoker.services.graph_execution_manager.get(graph_id)
except Exception:
errored.add(graph_id)
graph_execution_state = self._invoker.services.graph_execution_manager.get(graph_execution_state_id)
except ItemNotFoundError:
# TODO(ryand): What would cause this? Should this exception just be allowed to propagate?
logger.warning(f"Failed to get graph state for {graph_execution_state_id}.")
continue
if not current_graph_state.is_complete():
if not graph_execution_state.is_complete():
# The graph is still running, don't prune it.
continue
total_time = 0
logger.info(f"Graph stats: {graph_id}")
logger.info(f"{'Node':>30} {'Calls':>7}{'Seconds':>9} {'VRAM Used':>10}")
for node_type, stats in self._stats[graph_id].nodes.items():
logger.info(f"{node_type:>30} {stats.calls:>4} {stats.time_used:7.3f}s {stats.max_vram:4.3f}G")
total_time += stats.time_used
to_prune.append(graph_execution_state_id)
cache_stats = self._cache_stats[graph_id]
hwm = cache_stats.high_watermark / GIG
tot = cache_stats.cache_size / GIG
loaded = sum(list(cache_stats.loaded_model_sizes.values())) / GIG
for graph_execution_state_id in to_prune:
del self._stats[graph_execution_state_id]
del self._cache_stats[graph_execution_state_id]
logger.info(f"TOTAL GRAPH EXECUTION TIME: {total_time:7.3f}s")
logger.info("RAM used by InvokeAI process: " + "%4.2fG" % self.ram_used + f" ({self.ram_changed:+5.3f}G)")
logger.info(f"RAM used to load models: {loaded:4.2f}G")
if torch.cuda.is_available():
logger.info("VRAM in use: " + "%4.3fG" % (torch.cuda.memory_allocated() / GIG))
logger.info("RAM cache statistics:")
logger.info(f" Model cache hits: {cache_stats.hits}")
logger.info(f" Model cache misses: {cache_stats.misses}")
logger.info(f" Models cached: {cache_stats.in_cache}")
logger.info(f" Models cleared from cache: {cache_stats.cleared}")
logger.info(f" Cache high water mark: {hwm:4.2f}/{tot:4.2f}G")
if len(to_prune) > 0:
logger.info(f"Pruned stale graph stats for {to_prune}.")
completed.add(graph_id)
def reset_stats(self, graph_execution_state_id: str):
try:
del self._stats[graph_execution_state_id]
del self._cache_stats[graph_execution_state_id]
except KeyError as e:
msg = f"Attempted to clear statistics for unknown graph {graph_execution_state_id}: {e}."
logger.error(msg)
raise GESStatsNotFoundError(msg) from e
for graph_id in completed:
del self._stats[graph_id]
del self._cache_stats[graph_id]
def get_stats(self, graph_execution_state_id: str) -> InvocationStatsSummary:
graph_stats_summary = self._get_graph_summary(graph_execution_state_id)
node_stats_summaries = self._get_node_summaries(graph_execution_state_id)
model_cache_stats_summary = self._get_model_cache_summary(graph_execution_state_id)
vram_usage_gb = torch.cuda.memory_allocated() / GB if torch.cuda.is_available() else None
for graph_id in errored:
del self._stats[graph_id]
del self._cache_stats[graph_id]
return InvocationStatsSummary(
graph_stats=graph_stats_summary,
model_cache_stats=model_cache_stats_summary,
node_stats=node_stats_summaries,
vram_usage_gb=vram_usage_gb,
)
def log_stats(self, graph_execution_state_id: str) -> None:
stats = self.get_stats(graph_execution_state_id)
logger.info(str(stats))
def dump_stats(self, graph_execution_state_id: str, output_path: Path) -> None:
stats = self.get_stats(graph_execution_state_id)
with open(output_path, "w") as f:
f.write(json.dumps(stats.as_dict(), indent=2))
def _get_model_cache_summary(self, graph_execution_state_id: str) -> ModelCacheStatsSummary:
try:
cache_stats = self._cache_stats[graph_execution_state_id]
except KeyError as e:
msg = f"Attempted to get model cache statistics for unknown graph {graph_execution_state_id}: {e}."
logger.error(msg)
raise GESStatsNotFoundError(msg) from e
return ModelCacheStatsSummary(
cache_hits=cache_stats.hits,
cache_misses=cache_stats.misses,
high_water_mark_gb=cache_stats.high_watermark / GB,
cache_size_gb=cache_stats.cache_size / GB,
total_usage_gb=sum(list(cache_stats.loaded_model_sizes.values())) / GB,
models_cached=cache_stats.in_cache,
models_cleared=cache_stats.cleared,
)
def _get_graph_summary(self, graph_execution_state_id: str) -> GraphExecutionStatsSummary:
try:
graph_stats = self._stats[graph_execution_state_id]
except KeyError as e:
msg = f"Attempted to get graph statistics for unknown graph {graph_execution_state_id}: {e}."
logger.error(msg)
raise GESStatsNotFoundError(msg) from e
return graph_stats.get_graph_stats_summary(graph_execution_state_id)
def _get_node_summaries(self, graph_execution_state_id: str) -> list[NodeExecutionStatsSummary]:
try:
graph_stats = self._stats[graph_execution_state_id]
except KeyError as e:
msg = f"Attempted to get node statistics for unknown graph {graph_execution_state_id}: {e}."
logger.error(msg)
raise GESStatsNotFoundError(msg) from e
return graph_stats.get_node_stats_summaries()
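The rewritten collect_stats leans on @contextmanager with a try/finally, so node stats are recorded whether the wrapped invocation succeeds, raises, or is cancelled. A self-contained illustration of just that pattern:

import time
from contextlib import contextmanager

@contextmanager
def record_duration(durations: list[float]):
    start = time.time()
    try:
        yield
    finally:
        # Runs on success and on error alike, mirroring collect_stats.
        durations.append(time.time() - start)

durations: list[float] = []
try:
    with record_duration(durations):
        raise RuntimeError("node failed")
except RuntimeError:
    pass
assert len(durations) == 1  # the failed node was still recorded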

View File

@@ -1,10 +1,8 @@
from abc import ABC, abstractmethod
from typing import Callable, Generic, Optional, TypeVar
from typing import Callable, Generic, TypeVar
from pydantic import BaseModel
from invokeai.app.services.shared.pagination import PaginatedResults
T = TypeVar("T", bound=BaseModel)
@@ -22,26 +20,26 @@ class ItemStorageABC(ABC, Generic[T]):
@abstractmethod
def get(self, item_id: str) -> T:
"""Gets the item, parsing it into a Pydantic model"""
pass
@abstractmethod
def get_raw(self, item_id: str) -> Optional[str]:
"""Gets the raw item as a string, skipping Pydantic parsing"""
"""
Gets the item.
:param item_id: the id of the item to get
:raises ItemNotFoundError: if the item is not found
"""
pass
@abstractmethod
def set(self, item: T) -> None:
"""Sets the item"""
"""
Sets the item. The id will be extracted based on id_field.
:param item: the item to set
"""
pass
@abstractmethod
def list(self, page: int = 0, per_page: int = 10) -> PaginatedResults[T]:
"""Gets a paginated list of items"""
pass
@abstractmethod
def search(self, query: str, page: int = 0, per_page: int = 10) -> PaginatedResults[T]:
def delete(self, item_id: str) -> None:
"""
Deletes the item, if it exists.
"""
pass
def on_changed(self, on_changed: Callable[[T], None]) -> None:

View File

@@ -0,0 +1,5 @@
class ItemNotFoundError(KeyError):
"""Raised when an item is not found in storage"""
def __init__(self, item_id: str) -> None:
super().__init__(f"Item with id {item_id} not found")
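Because ItemNotFoundError subclasses KeyError, existing handlers written against KeyError keep working; a quick self-contained check of that property:

from contextlib import suppress

class ItemNotFoundError(KeyError):  # mirrors the class above
    def __init__(self, item_id: str) -> None:
        super().__init__(f"Item with id {item_id} not found")

with suppress(KeyError):
    raise ItemNotFoundError("no-such-id")
print("suppressed")  # reached: KeyError handlers catch the subclass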

View File

@@ -0,0 +1,52 @@
from collections import OrderedDict
from contextlib import suppress
from typing import Generic, TypeVar
from pydantic import BaseModel
from invokeai.app.services.item_storage.item_storage_base import ItemStorageABC
from invokeai.app.services.item_storage.item_storage_common import ItemNotFoundError
T = TypeVar("T", bound=BaseModel)
class ItemStorageMemory(ItemStorageABC[T], Generic[T]):
"""
Provides a simple in-memory storage for items, with a maximum number of items to store.
The storage uses the LRU strategy to evict items from storage when the max has been reached.
"""
def __init__(self, id_field: str = "id", max_items: int = 10) -> None:
super().__init__()
if max_items < 1:
raise ValueError("max_items must be at least 1")
if not id_field:
raise ValueError("id_field must not be empty")
self._id_field = id_field
self._items: OrderedDict[str, T] = OrderedDict()
self._max_items = max_items
def get(self, item_id: str) -> T:
# If the item exists, move it to the end of the OrderedDict.
item = self._items.pop(item_id, None)
if item is None:
raise ItemNotFoundError(item_id)
self._items[item_id] = item
return item
def set(self, item: T) -> None:
item_id = getattr(item, self._id_field)
if item_id in self._items:
# If item already exists, remove it and add it to the end
self._items.pop(item_id)
elif len(self._items) >= self._max_items:
# If cache is full, evict the least recently used item
self._items.popitem(last=False)
self._items[item_id] = item
self._on_changed(item)
def delete(self, item_id: str) -> None:
# This is a no-op if the item doesn't exist.
with suppress(KeyError):
del self._items[item_id]
self._on_deleted(item_id)
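A usage sketch of the LRU behaviour, assuming this branch of InvokeAI is importable; Session is a stand-in Pydantic model:

from pydantic import BaseModel

from invokeai.app.services.item_storage.item_storage_common import ItemNotFoundError
from invokeai.app.services.item_storage.item_storage_memory import ItemStorageMemory

class Session(BaseModel):
    id: str
    payload: str

storage = ItemStorageMemory[Session](id_field="id", max_items=2)
storage.set(Session(id="a", payload="first"))
storage.set(Session(id="b", payload="second"))
storage.get("a")  # touching "a" makes "b" the least recently used item
storage.set(Session(id="c", payload="third"))  # at capacity: "b" is evicted
try:
    storage.get("b")
except ItemNotFoundError:
    print("b was evicted")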

View File

@@ -1,147 +0,0 @@
import sqlite3
import threading
from typing import Generic, Optional, TypeVar, get_args
from pydantic import BaseModel, TypeAdapter
from invokeai.app.services.shared.pagination import PaginatedResults
from invokeai.app.services.shared.sqlite.sqlite_database import SqliteDatabase
from .item_storage_base import ItemStorageABC
T = TypeVar("T", bound=BaseModel)
class SqliteItemStorage(ItemStorageABC, Generic[T]):
_table_name: str
_conn: sqlite3.Connection
_cursor: sqlite3.Cursor
_id_field: str
_lock: threading.RLock
_validator: Optional[TypeAdapter[T]]
def __init__(self, db: SqliteDatabase, table_name: str, id_field: str = "id"):
super().__init__()
self._lock = db.lock
self._conn = db.conn
self._table_name = table_name
self._id_field = id_field # TODO: validate that T has this field
self._cursor = self._conn.cursor()
self._validator: Optional[TypeAdapter[T]] = None
self._create_table()
def _create_table(self):
try:
self._lock.acquire()
self._cursor.execute(
f"""CREATE TABLE IF NOT EXISTS {self._table_name} (
item TEXT,
id TEXT GENERATED ALWAYS AS (json_extract(item, '$.{self._id_field}')) VIRTUAL NOT NULL);"""
)
self._cursor.execute(
f"""CREATE UNIQUE INDEX IF NOT EXISTS {self._table_name}_id ON {self._table_name}(id);"""
)
finally:
self._lock.release()
def _parse_item(self, item: str) -> T:
if self._validator is None:
"""
We don't get access to `__orig_class__` in `__init__()`, and we need this before start(), so
we can create it when it is first needed instead.
__orig_class__ is technically an implementation detail of the typing module, not a supported API
"""
self._validator = TypeAdapter(get_args(self.__orig_class__)[0]) # type: ignore [attr-defined]
return self._validator.validate_json(item)
def set(self, item: T):
try:
self._lock.acquire()
self._cursor.execute(
f"""INSERT OR REPLACE INTO {self._table_name} (item) VALUES (?);""",
(item.model_dump_json(warnings=False, exclude_none=True),),
)
self._conn.commit()
finally:
self._lock.release()
self._on_changed(item)
def get(self, id: str) -> Optional[T]:
try:
self._lock.acquire()
self._cursor.execute(f"""SELECT item FROM {self._table_name} WHERE id = ?;""", (str(id),))
result = self._cursor.fetchone()
finally:
self._lock.release()
if not result:
return None
return self._parse_item(result[0])
def get_raw(self, id: str) -> Optional[str]:
try:
self._lock.acquire()
self._cursor.execute(f"""SELECT item FROM {self._table_name} WHERE id = ?;""", (str(id),))
result = self._cursor.fetchone()
finally:
self._lock.release()
if not result:
return None
return result[0]
def delete(self, id: str):
try:
self._lock.acquire()
self._cursor.execute(f"""DELETE FROM {self._table_name} WHERE id = ?;""", (str(id),))
self._conn.commit()
finally:
self._lock.release()
self._on_deleted(id)
def list(self, page: int = 0, per_page: int = 10) -> PaginatedResults[T]:
try:
self._lock.acquire()
self._cursor.execute(
f"""SELECT item FROM {self._table_name} LIMIT ? OFFSET ?;""",
(per_page, page * per_page),
)
result = self._cursor.fetchall()
items = [self._parse_item(r[0]) for r in result]
self._cursor.execute(f"""SELECT count(*) FROM {self._table_name};""")
count = self._cursor.fetchone()[0]
finally:
self._lock.release()
pageCount = int(count / per_page) + 1
return PaginatedResults[T](items=items, page=page, pages=pageCount, per_page=per_page, total=count)
def search(self, query: str, page: int = 0, per_page: int = 10) -> PaginatedResults[T]:
try:
self._lock.acquire()
self._cursor.execute(
f"""SELECT item FROM {self._table_name} WHERE item LIKE ? LIMIT ? OFFSET ?;""",
(f"%{query}%", per_page, page * per_page),
)
result = self._cursor.fetchall()
items = [self._parse_item(r[0]) for r in result]
self._cursor.execute(
f"""SELECT count(*) FROM {self._table_name} WHERE item LIKE ?;""",
(f"%{query}%",),
)
count = self._cursor.fetchone()[0]
finally:
self._lock.release()
pageCount = int(count / per_page) + 1
return PaginatedResults[T](items=items, page=page, pages=pageCount, per_page=per_page, total=count)

View File

@@ -1,6 +1,7 @@
"""Initialization file for model install service package."""
from .model_install_base import (
CivitaiModelSource,
HFModelSource,
InstallStatus,
LocalModelSource,
@@ -22,4 +23,5 @@ __all__ = [
"LocalModelSource",
"HFModelSource",
"URLModelSource",
"CivitaiModelSource",
]

View File

@@ -1,27 +1,42 @@
# Copyright 2023 Lincoln D. Stein and the InvokeAI development team
"""Baseclass definitions for the model installer."""
import re
import traceback
from abc import ABC, abstractmethod
from enum import Enum
from pathlib import Path
from typing import Any, Dict, List, Literal, Optional, Union
from typing import Any, Dict, List, Literal, Optional, Set, Union
from pydantic import BaseModel, Field, field_validator
from pydantic import BaseModel, Field, PrivateAttr, field_validator
from pydantic.networks import AnyHttpUrl
from typing_extensions import Annotated
from invokeai.app.services.config import InvokeAIAppConfig
from invokeai.app.services.download import DownloadJob, DownloadQueueServiceBase
from invokeai.app.services.events import EventServiceBase
from invokeai.app.services.invoker import Invoker
from invokeai.app.services.model_records import ModelRecordServiceBase
from invokeai.backend.model_manager import AnyModelConfig
from invokeai.backend.model_manager import AnyModelConfig, ModelRepoVariant
from invokeai.backend.model_manager.metadata import AnyModelRepoMetadata, ModelMetadataStore
class InstallStatus(str, Enum):
"""State of an install job running in the background."""
WAITING = "waiting" # waiting to be dequeued
DOWNLOADING = "downloading" # downloading of model files in process
RUNNING = "running" # being processed
COMPLETED = "completed" # finished running
ERROR = "error" # terminated with an error message
CANCELLED = "cancelled" # terminated with an error message
class ModelInstallPart(BaseModel):
url: AnyHttpUrl
path: Path
bytes: int = 0
total_bytes: int = 0
class UnknownInstallJobException(Exception):
@@ -74,12 +89,31 @@ class LocalModelSource(StringLikeSource):
return Path(self.path).as_posix()
class CivitaiModelSource(StringLikeSource):
"""A Civitai version id, with optional variant and access token."""
version_id: int
variant: Optional[ModelRepoVariant] = None
access_token: Optional[str] = None
type: Literal["civitai"] = "civitai"
def __str__(self) -> str:
"""Return string version of repoid when string rep needed."""
base: str = str(self.version_id)
base += f" ({self.variant})" if self.variant else ""
return base
class HFModelSource(StringLikeSource):
"""A HuggingFace repo_id, with optional variant and sub-folder."""
"""
A HuggingFace repo_id with optional variant, sub-folder and access token.
Note that the variant option, if not provided to the constructor, will default to fp16, which is
what people (almost) always want.
"""
repo_id: str
variant: Optional[str] = None
subfolder: Optional[str | Path] = None
variant: Optional[ModelRepoVariant] = ModelRepoVariant.FP16
subfolder: Optional[Path] = None
access_token: Optional[str] = None
type: Literal["hf"] = "hf"
@@ -103,19 +137,22 @@ class URLModelSource(StringLikeSource):
url: AnyHttpUrl
access_token: Optional[str] = None
type: Literal["generic_url"] = "generic_url"
type: Literal["url"] = "url"
def __str__(self) -> str:
"""Return string version of the url when string rep needed."""
return str(self.url)
ModelSource = Annotated[Union[LocalModelSource, HFModelSource, URLModelSource], Field(discriminator="type")]
ModelSource = Annotated[
Union[LocalModelSource, HFModelSource, CivitaiModelSource, URLModelSource], Field(discriminator="type")
]
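With the discriminated union, a plain dict deserializes to the right concrete source class via its type field. A sketch using pydantic's TypeAdapter, assuming these classes are importable from this module; the repo id is only an example:

from pydantic import TypeAdapter

from invokeai.app.services.model_install.model_install_base import HFModelSource, ModelSource

adapter = TypeAdapter(ModelSource)
src = adapter.validate_python({"type": "hf", "repo_id": "stabilityai/sd-turbo"})
assert isinstance(src, HFModelSource)  # and src.variant defaults to fp16, per the docstring above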
class ModelInstallJob(BaseModel):
"""Object that tracks the current status of an install request."""
id: int = Field(description="Unique ID for this job")
status: InstallStatus = Field(default=InstallStatus.WAITING, description="Current status of install process")
config_in: Dict[str, Any] = Field(
default_factory=dict, description="Configuration information (e.g. 'description') to apply to model."
@@ -128,15 +165,74 @@ class ModelInstallJob(BaseModel):
)
source: ModelSource = Field(description="Source (URL, repo_id, or local path) of model")
local_path: Path = Field(description="Path to locally-downloaded model; may be the same as the source")
error_type: Optional[str] = Field(default=None, description="Class name of the exception that led to status==ERROR")
error: Optional[str] = Field(default=None, description="Error traceback") # noqa #501
bytes: int = Field(
default=0, description="For a remote model, the number of bytes downloaded so far (may not be available)"
)
total_bytes: int = Field(default=0, description="Total size of the model to be installed")
source_metadata: Optional[AnyModelRepoMetadata] = Field(
default=None, description="Metadata provided by the model source"
)
download_parts: Set[DownloadJob] = Field(
default_factory=set, description="Download jobs contributing to this install"
)
# internal flags and transitory settings
_install_tmpdir: Optional[Path] = PrivateAttr(default=None)
_exception: Optional[Exception] = PrivateAttr(default=None)
def set_error(self, e: Exception) -> None:
"""Record the error and traceback from an exception."""
self.error_type = e.__class__.__name__
self.error = "".join(traceback.format_exception(e))
self._exception = e
self.status = InstallStatus.ERROR
def cancel(self) -> None:
"""Call to cancel the job."""
self.status = InstallStatus.CANCELLED
@property
def error_type(self) -> Optional[str]:
"""Class name of the exception that led to status==ERROR."""
return self._exception.__class__.__name__ if self._exception else None
@property
def error(self) -> Optional[str]:
"""Error traceback."""
return "".join(traceback.format_exception(self._exception)) if self._exception else None
@property
def cancelled(self) -> bool:
"""Set status to CANCELLED."""
return self.status == InstallStatus.CANCELLED
@property
def errored(self) -> bool:
"""Return true if job has errored."""
return self.status == InstallStatus.ERROR
@property
def waiting(self) -> bool:
"""Return true if job is waiting to run."""
return self.status == InstallStatus.WAITING
@property
def downloading(self) -> bool:
"""Return true if job is downloading."""
return self.status == InstallStatus.DOWNLOADING
@property
def running(self) -> bool:
"""Return true if job is running."""
return self.status == InstallStatus.RUNNING
@property
def complete(self) -> bool:
"""Return true if job completed without errors."""
return self.status == InstallStatus.COMPLETED
@property
def in_terminal_state(self) -> bool:
"""Return true if job is in a terminal state."""
return self.status in [InstallStatus.COMPLETED, InstallStatus.ERROR, InstallStatus.CANCELLED]
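These status properties make client-side polling straightforward. A hedged helper sketch; the function name and reporting format are illustrative:

import time

from invokeai.app.services.model_install.model_install_base import ModelInstallJob

def wait_and_report(job: ModelInstallJob, poll_seconds: float = 1.0) -> None:
    """Poll a queued install job until it reaches COMPLETED, ERROR, or CANCELLED."""
    while not job.in_terminal_state:
        if job.downloading and job.total_bytes:
            print(f"{job.source}: {job.bytes}/{job.total_bytes} bytes")
        time.sleep(poll_seconds)
    if job.errored:
        print(f"{job.source} failed: {job.error_type}")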
class ModelInstallServiceBase(ABC):
"""Abstract base class for InvokeAI model installation."""
@@ -146,6 +242,8 @@ class ModelInstallServiceBase(ABC):
self,
app_config: InvokeAIAppConfig,
record_store: ModelRecordServiceBase,
download_queue: DownloadQueueServiceBase,
metadata_store: ModelMetadataStore,
event_bus: Optional["EventServiceBase"] = None,
):
"""
@@ -156,12 +254,14 @@ class ModelInstallServiceBase(ABC):
:param event_bus: InvokeAI event bus for reporting events to.
"""
# make the invoker optional here because we don't need it and it
# makes the installer harder to use outside the web app
@abstractmethod
def start(self, *args: Any, **kwarg: Any) -> None:
def start(self, invoker: Optional[Invoker] = None) -> None:
"""Start the installer service."""
@abstractmethod
def stop(self, *args: Any, **kwarg: Any) -> None:
def stop(self, invoker: Optional[Invoker] = None) -> None:
"""Stop the model install service. After this the objection can be safely deleted."""
@property
@@ -264,9 +364,13 @@ class ModelInstallServiceBase(ABC):
"""
@abstractmethod
def get_job(self, source: ModelSource) -> List[ModelInstallJob]:
def get_job_by_source(self, source: ModelSource) -> List[ModelInstallJob]:
"""Return the ModelInstallJob(s) corresponding to the provided source."""
@abstractmethod
def get_job_by_id(self, id: int) -> ModelInstallJob:
"""Return the ModelInstallJob corresponding to the provided id. Raises ValueError if no job has that ID."""
@abstractmethod
def list_jobs(self) -> List[ModelInstallJob]: # noqa D102
"""
@@ -278,16 +382,19 @@ class ModelInstallServiceBase(ABC):
"""Prune all completed and errored jobs."""
@abstractmethod
def wait_for_installs(self) -> List[ModelInstallJob]:
def cancel_job(self, job: ModelInstallJob) -> None:
"""Cancel the indicated job."""
@abstractmethod
def wait_for_installs(self, timeout: int = 0) -> List[ModelInstallJob]:
"""
Wait for all pending installs to complete.
This will block until all pending installs have
completed, been cancelled, or errored out. It will
block indefinitely if one or more jobs are in the
paused state.
completed, been cancelled, or errored out.
It will return the current list of jobs.
:param timeout: Wait up to indicated number of seconds. Raise an Exception('timeout') if
installs do not complete within the indicated time.
"""
@abstractmethod

View File

@@ -1,60 +1,72 @@
"""Model installation class."""
import os
import re
import threading
import time
from hashlib import sha256
from logging import Logger
from pathlib import Path
from queue import Queue
from queue import Empty, Queue
from random import randbytes
from shutil import copyfile, copytree, move, rmtree
from tempfile import mkdtemp
from typing import Any, Dict, List, Optional, Set, Union
from huggingface_hub import HfFolder
from pydantic.networks import AnyHttpUrl
from requests import Session
from invokeai.app.services.config import InvokeAIAppConfig
from invokeai.app.services.events import EventServiceBase
from invokeai.app.services.model_records import DuplicateModelException, ModelRecordServiceBase, UnknownModelException
from invokeai.app.services.download import DownloadJob, DownloadQueueServiceBase
from invokeai.app.services.events.events_base import EventServiceBase
from invokeai.app.services.invoker import Invoker
from invokeai.app.services.model_records import DuplicateModelException, ModelRecordServiceBase, ModelRecordServiceSQL
from invokeai.backend.model_manager.config import (
AnyModelConfig,
BaseModelType,
InvalidModelConfigException,
ModelRepoVariant,
ModelType,
)
from invokeai.backend.model_manager.hash import FastModelHash
from invokeai.backend.model_manager.metadata import (
AnyModelRepoMetadata,
CivitaiMetadataFetch,
HuggingFaceMetadataFetch,
ModelMetadataStore,
ModelMetadataWithFiles,
RemoteModelFile,
)
from invokeai.backend.model_manager.probe import ModelProbe
from invokeai.backend.model_manager.search import ModelSearch
from invokeai.backend.util import Chdir, InvokeAILogger
from invokeai.backend.util.devices import choose_precision, choose_torch_device
from .model_install_base import (
CivitaiModelSource,
HFModelSource,
InstallStatus,
LocalModelSource,
ModelInstallJob,
ModelInstallServiceBase,
ModelSource,
URLModelSource,
)
# marker that the queue is done and that thread should exit
STOP_JOB = ModelInstallJob(
source=LocalModelSource(path="stop"),
local_path=Path("/dev/null"),
)
TMPDIR_PREFIX = "tmpinstall_"
class ModelInstallService(ModelInstallServiceBase):
"""class for InvokeAI model installation."""
_app_config: InvokeAIAppConfig
_record_store: ModelRecordServiceBase
_event_bus: Optional[EventServiceBase] = None
_install_queue: Queue[ModelInstallJob]
_install_jobs: List[ModelInstallJob]
_logger: Logger
_cached_model_paths: Set[Path]
_models_installed: Set[str]
def __init__(
self,
app_config: InvokeAIAppConfig,
record_store: ModelRecordServiceBase,
download_queue: DownloadQueueServiceBase,
metadata_store: Optional[ModelMetadataStore] = None,
event_bus: Optional[EventServiceBase] = None,
session: Optional[Session] = None,
):
"""
Initialize the installer object.
@@ -67,10 +79,26 @@ class ModelInstallService(ModelInstallServiceBase):
self._record_store = record_store
self._event_bus = event_bus
self._logger = InvokeAILogger.get_logger(name=self.__class__.__name__)
self._install_jobs = []
self._install_queue = Queue()
self._cached_model_paths = set()
self._models_installed = set()
self._install_jobs: List[ModelInstallJob] = []
self._install_queue: Queue[ModelInstallJob] = Queue()
self._cached_model_paths: Set[Path] = set()
self._models_installed: Set[str] = set()
self._lock = threading.Lock()
self._stop_event = threading.Event()
self._downloads_changed_event = threading.Event()
self._download_queue = download_queue
self._download_cache: Dict[AnyHttpUrl, ModelInstallJob] = {}
self._running = False
self._session = session
self._next_job_id = 0
# There may not necessarily be a metadata store initialized
# so we create one and initialize it with the same sql database
# used by the record store service.
if metadata_store:
self._metadata_store = metadata_store
else:
assert isinstance(record_store, ModelRecordServiceSQL)
self._metadata_store = ModelMetadataStore(record_store.db)
@property
def app_config(self) -> InvokeAIAppConfig: # noqa D102
@@ -84,69 +112,31 @@ class ModelInstallService(ModelInstallServiceBase):
def event_bus(self) -> Optional[EventServiceBase]: # noqa D102
return self._event_bus
def start(self, *args: Any, **kwarg: Any) -> None:
# make the invoker optional here because we don't need it and it
# makes the installer harder to use outside the web app
def start(self, invoker: Optional[Invoker] = None) -> None:
"""Start the installer thread."""
self._start_installer_thread()
self.sync_to_config()
with self._lock:
if self._running:
raise Exception("Attempt to start the installer service twice")
self._start_installer_thread()
self._remove_dangling_install_dirs()
self.sync_to_config()
def stop(self, *args: Any, **kwarg: Any) -> None:
def stop(self, invoker: Optional[Invoker] = None) -> None:
"""Stop the installer thread; after this the object can be deleted and garbage collected."""
self._install_queue.put(STOP_JOB)
def _start_installer_thread(self) -> None:
threading.Thread(target=self._install_next_item, daemon=True).start()
def _install_next_item(self) -> None:
done = False
while not done:
job = self._install_queue.get()
if job == STOP_JOB:
done = True
continue
assert job.local_path is not None
try:
self._signal_job_running(job)
if job.inplace:
key = self.register_path(job.local_path, job.config_in)
else:
key = self.install_path(job.local_path, job.config_in)
job.config_out = self.record_store.get_model(key)
self._signal_job_completed(job)
except (OSError, DuplicateModelException, InvalidModelConfigException) as excp:
self._signal_job_errored(job, excp)
finally:
self._install_queue.task_done()
self._logger.info("Install thread exiting")
def _signal_job_running(self, job: ModelInstallJob) -> None:
job.status = InstallStatus.RUNNING
self._logger.info(f"{job.source}: model installation started")
if self._event_bus:
self._event_bus.emit_model_install_started(str(job.source))
def _signal_job_completed(self, job: ModelInstallJob) -> None:
job.status = InstallStatus.COMPLETED
assert job.config_out
self._logger.info(
f"{job.source}: model installation completed. {job.local_path} registered key {job.config_out.key}"
)
if self._event_bus:
assert job.local_path is not None
assert job.config_out is not None
key = job.config_out.key
self._event_bus.emit_model_install_completed(str(job.source), key)
def _signal_job_errored(self, job: ModelInstallJob, excp: Exception) -> None:
job.set_error(excp)
self._logger.info(f"{job.source}: model installation encountered an exception: {job.error_type}")
if self._event_bus:
error_type = job.error_type
error = job.error
assert error_type is not None
assert error is not None
self._event_bus.emit_model_install_error(str(job.source), error_type, error)
with self._lock:
if not self._running:
raise Exception("Attempt to stop the install service before it was started")
self._stop_event.set()
with self._install_queue.mutex:
self._install_queue.queue.clear() # get rid of pending jobs
active_jobs = [x for x in self.list_jobs() if x.running]
if active_jobs:
self._logger.warning("Waiting for active install job to complete")
self.wait_for_installs()
self._download_cache.clear()
self._running = False
def register_path(
self,
@@ -172,7 +162,12 @@ class ModelInstallService(ModelInstallServiceBase):
info: AnyModelConfig = self._probe_model(Path(model_path), config)
old_hash = info.original_hash
dest_path = self.app_config.models_path / info.base.value / info.type.value / model_path.name
new_path = self._copy_model(model_path, dest_path)
try:
new_path = self._copy_model(model_path, dest_path)
except FileExistsError as excp:
raise DuplicateModelException(
f"A model named {model_path.name} is already installed at {dest_path.as_posix()}"
) from excp
new_hash = FastModelHash.hash(new_path)
assert new_hash == old_hash, f"{model_path}: Model hash changed during installation, possibly corrupted."
@@ -182,43 +177,56 @@ class ModelInstallService(ModelInstallServiceBase):
info,
)
def import_model(
self,
source: ModelSource,
config: Optional[Dict[str, Any]] = None,
) -> ModelInstallJob: # noqa D102
if not config:
config = {}
def import_model(self, source: ModelSource, config: Optional[Dict[str, Any]] = None) -> ModelInstallJob: # noqa D102
if isinstance(source, LocalModelSource):
install_job = self._import_local_model(source, config)
self._install_queue.put(install_job) # synchronously install
elif isinstance(source, CivitaiModelSource):
install_job = self._import_from_civitai(source, config)
elif isinstance(source, HFModelSource):
install_job = self._import_from_hf(source, config)
elif isinstance(source, URLModelSource):
install_job = self._import_from_url(source, config)
else:
raise ValueError(f"Unsupported model source: '{type(source)}'")
# Installing a local path
if isinstance(source, LocalModelSource) and Path(source.path).exists(): # a path that is already on disk
job = ModelInstallJob(
source=source,
config_in=config,
local_path=Path(source.path),
)
self._install_jobs.append(job)
self._install_queue.put(job)
return job
else: # here is where we'd download a URL or repo_id. Implementation pending download queue.
raise UnknownModelException("File or directory not found")
self._install_jobs.append(install_job)
return install_job
def list_jobs(self) -> List[ModelInstallJob]: # noqa D102
return self._install_jobs
def get_job(self, source: ModelSource) -> List[ModelInstallJob]: # noqa D102
def get_job_by_source(self, source: ModelSource) -> List[ModelInstallJob]: # noqa D102
return [x for x in self._install_jobs if x.source == source]
def wait_for_installs(self) -> List[ModelInstallJob]: # noqa D102
def get_job_by_id(self, id: int) -> ModelInstallJob: # noqa D102
jobs = [x for x in self._install_jobs if x.id == id]
if not jobs:
raise ValueError(f"No job with id {id} known")
assert len(jobs) == 1
assert isinstance(jobs[0], ModelInstallJob)
return jobs[0]
def wait_for_installs(self, timeout: int = 0) -> List[ModelInstallJob]: # noqa D102
"""Block until all installation jobs are done."""
start = time.time()
while len(self._download_cache) > 0:
if self._downloads_changed_event.wait(timeout=5): # in case we miss an event
self._downloads_changed_event.clear()
if timeout > 0 and time.time() - start > timeout:
raise Exception("Timeout exceeded")
self._install_queue.join()
return self._install_jobs
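The loop above combines an Event wait, with a five-second safety net against missed notifications, and an optional overall deadline. The same pattern in isolation:

import threading
import time
from typing import Callable

def wait_until_done(pending: Callable[[], bool], changed: threading.Event, timeout: float = 0) -> None:
    start = time.time()
    while pending():
        if changed.wait(timeout=5):  # wake early when a download settles
            changed.clear()
        if timeout > 0 and time.time() - start > timeout:
            raise TimeoutError("installs did not complete in time")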
def cancel_job(self, job: ModelInstallJob) -> None:
"""Cancel the indicated job."""
job.cancel()
with self._lock:
self._cancel_download_parts(job)
def prune_jobs(self) -> None:
"""Prune all completed and errored jobs."""
unfinished_jobs = [
x for x in self._install_jobs if x.status not in [InstallStatus.COMPLETED, InstallStatus.ERROR]
]
unfinished_jobs = [x for x in self._install_jobs if not x.in_terminal_state]
self._install_jobs = unfinished_jobs
def sync_to_config(self) -> None:
@@ -234,10 +242,108 @@ class ModelInstallService(ModelInstallServiceBase):
self._cached_model_paths = {Path(x.path) for x in self.record_store.all_models()}
callback = self._scan_install if install else self._scan_register
search = ModelSearch(on_model_found=callback)
self._models_installed: Set[str] = set()
self._models_installed.clear()
search.search(scan_dir)
return list(self._models_installed)
def unregister(self, key: str) -> None: # noqa D102
self.record_store.del_model(key)
def delete(self, key: str) -> None: # noqa D102
"""Unregister the model. Delete its files only if they are within our models directory."""
model = self.record_store.get_model(key)
models_dir = self.app_config.models_path
model_path = models_dir / model.path
if model_path.is_relative_to(models_dir):
self.unconditionally_delete(key)
else:
self.unregister(key)
def unconditionally_delete(self, key: str) -> None: # noqa D102
model = self.record_store.get_model(key)
path = self.app_config.models_path / model.path
if path.is_dir():
rmtree(path)
else:
path.unlink()
self.unregister(key)
# --------------------------------------------------------------------------------------------
# Internal functions that manage the installer threads
# --------------------------------------------------------------------------------------------
def _start_installer_thread(self) -> None:
threading.Thread(target=self._install_next_item, daemon=True).start()
self._running = True
def _install_next_item(self) -> None:
done = False
while not done:
if self._stop_event.is_set():
done = True
continue
try:
job = self._install_queue.get(timeout=1)
except Empty:
continue
assert job.local_path is not None
try:
if job.cancelled:
self._signal_job_cancelled(job)
elif job.errored:
self._signal_job_errored(job)
elif (
job.waiting or job.downloading
): # local jobs will be in waiting state, remote jobs will be downloading state
job.total_bytes = self._stat_size(job.local_path)
job.bytes = job.total_bytes
self._signal_job_running(job)
if job.inplace:
key = self.register_path(job.local_path, job.config_in)
else:
key = self.install_path(job.local_path, job.config_in)
job.config_out = self.record_store.get_model(key)
# enter the metadata, if there is any
if job.source_metadata:
self._metadata_store.add_metadata(key, job.source_metadata)
self._signal_job_completed(job)
except InvalidModelConfigException as excp:
if any(x.content_type is not None and "text/html" in x.content_type for x in job.download_parts):
job.set_error(
InvalidModelConfigException(
f"At least one file in {job.local_path} is an HTML page, not a model. This can happen when an access token is required to download."
)
)
else:
job.set_error(excp)
self._signal_job_errored(job)
except (OSError, DuplicateModelException) as excp:
job.set_error(excp)
self._signal_job_errored(job)
finally:
# if this is an install of a remote file, then clean up the temporary directory
if job._install_tmpdir is not None:
rmtree(job._install_tmpdir)
self._install_queue.task_done()
self._logger.info("Install thread exiting")
# --------------------------------------------------------------------------------------------
# Internal functions that manage the models directory
# --------------------------------------------------------------------------------------------
def _remove_dangling_install_dirs(self) -> None:
"""Remove leftover tmpdirs from aborted installs."""
path = self._app_config.models_path
for tmpdir in path.glob(f"{TMPDIR_PREFIX}*"):
self._logger.info(f"Removing dangling temporary directory {tmpdir}")
rmtree(tmpdir)
def _scan_models_directory(self) -> None:
"""
Scan the models directory for new and missing models.
@@ -320,28 +426,6 @@ class ModelInstallService(ModelInstallServiceBase):
pass
return True
def unregister(self, key: str) -> None: # noqa D102
self.record_store.del_model(key)
def delete(self, key: str) -> None: # noqa D102
"""Unregister the model. Delete its files only if they are within our models directory."""
model = self.record_store.get_model(key)
models_dir = self.app_config.models_path
model_path = models_dir / model.path
if model_path.is_relative_to(models_dir):
self.unconditionally_delete(key)
else:
self.unregister(key)
def unconditionally_delete(self, key: str) -> None: # noqa D102
model = self.record_store.get_model(key)
path = self.app_config.models_path / model.path
if path.is_dir():
rmtree(path)
else:
path.unlink()
self.unregister(key)
def _copy_model(self, old_path: Path, new_path: Path) -> Path:
if old_path == new_path:
return old_path
@@ -397,3 +481,280 @@ class ModelInstallService(ModelInstallServiceBase):
info.config = legacy_conf.relative_to(self.app_config.root_dir).as_posix()
self.record_store.add_model(key, info)
return key
def _next_id(self) -> int:
with self._lock:
id = self._next_job_id
self._next_job_id += 1
return id
@staticmethod
def _guess_variant() -> ModelRepoVariant:
"""Guess the best HuggingFace variant type to download."""
precision = choose_precision(choose_torch_device())
return ModelRepoVariant.FP16 if precision == "float16" else ModelRepoVariant.DEFAULT
def _import_local_model(self, source: LocalModelSource, config: Optional[Dict[str, Any]]) -> ModelInstallJob:
return ModelInstallJob(
id=self._next_id(),
source=source,
config_in=config or {},
local_path=Path(source.path),
inplace=source.inplace,
)
def _import_from_civitai(self, source: CivitaiModelSource, config: Optional[Dict[str, Any]]) -> ModelInstallJob:
if not source.access_token:
self._logger.info("No Civitai access token provided; some models may not be downloadable.")
metadata = CivitaiMetadataFetch(self._session).from_id(str(source.version_id))
assert isinstance(metadata, ModelMetadataWithFiles)
remote_files = metadata.download_urls(session=self._session)
return self._import_remote_model(source=source, config=config, metadata=metadata, remote_files=remote_files)
def _import_from_hf(self, source: HFModelSource, config: Optional[Dict[str, Any]]) -> ModelInstallJob:
# Add user's cached access token to HuggingFace requests
source.access_token = source.access_token or HfFolder.get_token()
if not source.access_token:
self._logger.info("No HuggingFace access token present; some models may not be downloadable.")
metadata = HuggingFaceMetadataFetch(self._session).from_id(source.repo_id)
assert isinstance(metadata, ModelMetadataWithFiles)
remote_files = metadata.download_urls(
variant=source.variant or self._guess_variant(),
subfolder=source.subfolder,
session=self._session,
)
return self._import_remote_model(
source=source,
config=config,
remote_files=remote_files,
metadata=metadata,
)
def _import_from_url(self, source: URLModelSource, config: Optional[Dict[str, Any]]) -> ModelInstallJob:
# URLs from Civitai or HuggingFace will be handled specially
url_patterns = {
r"^https?://civitai.com/": CivitaiMetadataFetch,
r"^https?://huggingface.co/[^/]+/[^/]+$": HuggingFaceMetadataFetch,
}
metadata = None
for pattern, fetcher in url_patterns.items():
if re.match(pattern, str(source.url), re.IGNORECASE):
metadata = fetcher(self._session).from_url(source.url)
break
self._logger.debug(f"metadata={metadata}")
if metadata and isinstance(metadata, ModelMetadataWithFiles):
remote_files = metadata.download_urls(session=self._session)
else:
remote_files = [RemoteModelFile(url=source.url, path=Path("."), size=0)]
return self._import_remote_model(
source=source,
config=config,
metadata=metadata,
remote_files=remote_files,
)
def _import_remote_model(
self,
source: ModelSource,
remote_files: List[RemoteModelFile],
metadata: Optional[AnyModelRepoMetadata],
config: Optional[Dict[str, Any]],
) -> ModelInstallJob:
# TODO: Replace with tempfile.tmpdir() when multithreading is cleaned up.
# Currently the tmpdir isn't automatically removed at exit because it is
# being held in a daemon thread.
tmpdir = Path(
mkdtemp(
dir=self._app_config.models_path,
prefix=TMPDIR_PREFIX,
)
)
install_job = ModelInstallJob(
id=self._next_id(),
source=source,
config_in=config or {},
source_metadata=metadata,
local_path=tmpdir, # local path may change once the download has started due to content-disposition handling
bytes=0,
total_bytes=0,
)
# we remember the path up to the top of the tmpdir so that it may be
# removed safely at the end of the install process.
install_job._install_tmpdir = tmpdir
assert install_job.total_bytes is not None # to avoid type checking complaints in the loop below
self._logger.info(f"Queuing {source} for downloading")
self._logger.debug(f"remote_files={remote_files}")
for model_file in remote_files:
url = model_file.url
path = model_file.path
self._logger.info(f"Downloading {url} => {path}")
install_job.total_bytes += model_file.size
assert hasattr(source, "access_token")
dest = tmpdir / path.parent
dest.mkdir(parents=True, exist_ok=True)
download_job = DownloadJob(
source=url,
dest=dest,
access_token=source.access_token,
)
self._download_cache[download_job.source] = install_job # matches a download job to an install job
install_job.download_parts.add(download_job)
self._download_queue.submit_download_job(
download_job,
on_start=self._download_started_callback,
on_progress=self._download_progress_callback,
on_complete=self._download_complete_callback,
on_error=self._download_error_callback,
on_cancelled=self._download_cancelled_callback,
)
return install_job
def _stat_size(self, path: Path) -> int:
size = 0
if path.is_file():
size = path.stat().st_size
elif path.is_dir():
for root, _, files in os.walk(path):
size += sum(self._stat_size(Path(root, x)) for x in files)
return size
# ------------------------------------------------------------------
# Callbacks are executed by the download queue in a separate thread
# ------------------------------------------------------------------
def _download_started_callback(self, download_job: DownloadJob) -> None:
self._logger.info(f"{download_job.source}: model download started")
with self._lock:
install_job = self._download_cache[download_job.source]
install_job.status = InstallStatus.DOWNLOADING
assert download_job.download_path
if install_job.local_path == install_job._install_tmpdir:
partial_path = download_job.download_path.relative_to(install_job._install_tmpdir)
dest_name = partial_path.parts[0]
install_job.local_path = install_job._install_tmpdir / dest_name
# Update the total bytes count for remote sources.
if not install_job.total_bytes:
install_job.total_bytes = sum(x.total_bytes for x in install_job.download_parts)
def _download_progress_callback(self, download_job: DownloadJob) -> None:
with self._lock:
install_job = self._download_cache[download_job.source]
if install_job.cancelled: # This catches the case in which the caller directly calls job.cancel()
self._cancel_download_parts(install_job)
else:
# update sizes
install_job.bytes = sum(x.bytes for x in install_job.download_parts)
self._signal_job_downloading(install_job)
def _download_complete_callback(self, download_job: DownloadJob) -> None:
with self._lock:
install_job = self._download_cache[download_job.source]
self._download_cache.pop(download_job.source, None)
# are there any more active jobs left in this task?
if all(x.complete for x in install_job.download_parts):
# now enqueue job for actual installation into the models directory
self._install_queue.put(install_job)
# Let other threads know that the number of downloads has changed
self._downloads_changed_event.set()
def _download_error_callback(self, download_job: DownloadJob, excp: Optional[Exception] = None) -> None:
with self._lock:
install_job = self._download_cache.pop(download_job.source, None)
assert install_job is not None
assert excp is not None
install_job.set_error(excp)
self._logger.error(
f"Cancelling {install_job.source} due to an error while downloading {download_job.source}: {str(excp)}"
)
self._cancel_download_parts(install_job)
# Let other threads know that the number of downloads has changed
self._downloads_changed_event.set()
def _download_cancelled_callback(self, download_job: DownloadJob) -> None:
with self._lock:
install_job = self._download_cache.pop(download_job.source, None)
if not install_job:
return
self._downloads_changed_event.set()
self._logger.warning(f"Download {download_job.source} cancelled.")
# if install job has already registered an error, then do not replace its status with cancelled
if not install_job.errored:
install_job.cancel()
self._cancel_download_parts(install_job)
# Let other threads know that the number of downloads has changed
self._downloads_changed_event.set()
def _cancel_download_parts(self, install_job: ModelInstallJob) -> None:
# on multipart downloads, _cancel_download_parts() will get called repeatedly from the download callbacks
# do not lock here because it gets called within a locked context
for s in install_job.download_parts:
self._download_queue.cancel_job(s)
if all(x.in_terminal_state for x in install_job.download_parts):
# When all parts have reached their terminal state, we finalize the job to clean up the temporary directory and other resources
self._install_queue.put(install_job)
# ------------------------------------------------------------------------------------------------
# Internal methods that put events on the event bus
# ------------------------------------------------------------------------------------------------
def _signal_job_running(self, job: ModelInstallJob) -> None:
job.status = InstallStatus.RUNNING
self._logger.info(f"{job.source}: model installation started")
if self._event_bus:
self._event_bus.emit_model_install_running(str(job.source))
def _signal_job_downloading(self, job: ModelInstallJob) -> None:
if self._event_bus:
parts: List[Dict[str, str | int]] = [
{
"url": str(x.source),
"local_path": str(x.download_path),
"bytes": x.bytes,
"total_bytes": x.total_bytes,
}
for x in job.download_parts
]
assert job.bytes is not None
assert job.total_bytes is not None
self._event_bus.emit_model_install_downloading(
str(job.source),
local_path=job.local_path.as_posix(),
parts=parts,
bytes=job.bytes,
total_bytes=job.total_bytes,
)
def _signal_job_completed(self, job: ModelInstallJob) -> None:
job.status = InstallStatus.COMPLETED
assert job.config_out
self._logger.info(
f"{job.source}: model installation completed. {job.local_path} registered key {job.config_out.key}"
)
if self._event_bus:
assert job.local_path is not None
assert job.config_out is not None
key = job.config_out.key
self._event_bus.emit_model_install_completed(str(job.source), key)
def _signal_job_errored(self, job: ModelInstallJob) -> None:
self._logger.info(f"{job.source}: model installation encountered an exception: {job.error_type}\n{job.error}")
if self._event_bus:
error_type = job.error_type
error = job.error
assert error_type is not None
assert error is not None
self._event_bus.emit_model_install_error(str(job.source), error_type, error)
def _signal_job_cancelled(self, job: ModelInstallJob) -> None:
self._logger.info(f"{job.source}: model installation was cancelled")
if self._event_bus:
self._event_bus.emit_model_install_cancelled(str(job.source))
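
The fan-out/fan-in bookkeeping above — one install job spawning several download jobs, `_download_cache` mapping each download back to its parent, and the parent enqueued for installation once every part reaches a terminal state — is easiest to see in isolation. A minimal sketch, using hypothetical `Part`/`Install` stand-ins rather than the real `DownloadJob`/`ModelInstallJob` classes:

from dataclasses import dataclass, field
from queue import Queue
from typing import Dict, List

@dataclass
class Part:
    url: str
    done: bool = False  # stands in for "reached a terminal state"

@dataclass
class Install:
    parts: List[Part] = field(default_factory=list)

download_to_install: Dict[str, Install] = {}  # analogous to _download_cache
install_queue: "Queue[Install]" = Queue()     # analogous to _install_queue

def on_part_complete(url: str) -> None:
    install = download_to_install.pop(url)
    next(p for p in install.parts if p.url == url).done = True
    # enqueue the parent job only once all of its parts are terminal
    if all(p.done for p in install.parts):
        install_queue.put(install)

job = Install(parts=[Part("https://example.com/a"), Part("https://example.com/b")])
for p in job.parts:
    download_to_install[p.url] = job
on_part_complete("https://example.com/a")
on_part_complete("https://example.com/b")
assert install_queue.qsize() == 1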

View File

@@ -4,6 +4,8 @@ from .model_records_base import ( # noqa F401
InvalidModelException,
ModelRecordServiceBase,
UnknownModelException,
ModelSummary,
ModelRecordOrderBy,
)
from .model_records_sql import ModelRecordServiceSQL # noqa F401
@@ -13,4 +15,6 @@ __all__ = [
"DuplicateModelException",
"InvalidModelException",
"UnknownModelException",
"ModelSummary",
"ModelRecordOrderBy",
]

View File

@@ -4,10 +4,15 @@ Abstract base class for storing and retrieving model configuration records.
"""
from abc import ABC, abstractmethod
from enum import Enum
from pathlib import Path
from typing import List, Optional, Union
from typing import Any, Dict, List, Optional, Set, Tuple, Union
from pydantic import BaseModel, Field
from invokeai.app.services.shared.pagination import PaginatedResults
from invokeai.backend.model_manager.config import AnyModelConfig, BaseModelType, ModelFormat, ModelType
from invokeai.backend.model_manager.metadata import AnyModelRepoMetadata, ModelMetadataStore
class DuplicateModelException(Exception):
@@ -26,11 +31,33 @@ class ConfigFileVersionMismatchException(Exception):
"""Raised on an attempt to open a config with an incompatible version."""
class ModelRecordOrderBy(str, Enum):
"""The order in which to return model summaries."""
Default = "default" # order by type, base, format and name
Type = "type"
Base = "base"
Name = "name"
Format = "format"
class ModelSummary(BaseModel):
"""A short summary of models for UI listing purposes."""
key: str = Field(description="model key")
type: ModelType = Field(description="model type")
base: BaseModelType = Field(description="base model")
format: ModelFormat = Field(description="model format")
name: str = Field(description="model name")
description: str = Field(description="short description of model")
tags: Set[str] = Field(description="tags associated with model")
class ModelRecordServiceBase(ABC):
"""Abstract base class for storage and retrieval of model configs."""
@abstractmethod
def add_model(self, key: str, config: Union[dict, AnyModelConfig]) -> AnyModelConfig:
def add_model(self, key: str, config: Union[Dict[str, Any], AnyModelConfig]) -> AnyModelConfig:
"""
Add a model to the database.
@@ -54,7 +81,7 @@ class ModelRecordServiceBase(ABC):
pass
@abstractmethod
def update_model(self, key: str, config: Union[dict, AnyModelConfig]) -> AnyModelConfig:
def update_model(self, key: str, config: Union[Dict[str, Any], AnyModelConfig]) -> AnyModelConfig:
"""
Update the model, returning the updated version.
@@ -75,6 +102,47 @@ class ModelRecordServiceBase(ABC):
"""
pass
@property
@abstractmethod
def metadata_store(self) -> ModelMetadataStore:
"""Return a ModelMetadataStore initialized on the same database."""
pass
@abstractmethod
def get_metadata(self, key: str) -> Optional[AnyModelRepoMetadata]:
"""
Retrieve metadata (if any) recorded when the model was downloaded from a repo.
:param key: Model key
"""
pass
@abstractmethod
def list_all_metadata(self) -> List[Tuple[str, AnyModelRepoMetadata]]:
"""List metadata for all models that have it."""
pass
@abstractmethod
def search_by_metadata_tag(self, tags: Set[str]) -> List[AnyModelConfig]:
"""
Search model metadata for ones with all listed tags and return their corresponding configs.
:param tags: Set of tags to search for. All tags must be present.
"""
pass
@abstractmethod
def list_tags(self) -> Set[str]:
"""Return a unique set of all the model tags in the metadata database."""
pass
@abstractmethod
def list_models(
self, page: int = 0, per_page: int = 10, order_by: ModelRecordOrderBy = ModelRecordOrderBy.Default
) -> PaginatedResults[ModelSummary]:
"""Return a paginated summary listing of each model in the database."""
pass
@abstractmethod
def exists(self, key: str) -> bool:
"""

View File

@@ -42,9 +42,11 @@ Typical usage:
import json
import sqlite3
from math import ceil
from pathlib import Path
from typing import List, Optional, Union
from typing import Any, Dict, List, Optional, Set, Tuple, Union
from invokeai.app.services.shared.pagination import PaginatedResults
from invokeai.backend.model_manager.config import (
AnyModelConfig,
BaseModelType,
@@ -52,11 +54,14 @@ from invokeai.backend.model_manager.config import (
ModelFormat,
ModelType,
)
from invokeai.backend.model_manager.metadata import AnyModelRepoMetadata, ModelMetadataStore, UnknownMetadataException
from ..shared.sqlite.sqlite_database import SqliteDatabase
from .model_records_base import (
DuplicateModelException,
ModelRecordOrderBy,
ModelRecordServiceBase,
ModelSummary,
UnknownModelException,
)
@@ -64,9 +69,6 @@ from .model_records_base import (
class ModelRecordServiceSQL(ModelRecordServiceBase):
"""Implementation of the ModelConfigStore ABC using a SQL database."""
_db: SqliteDatabase
_cursor: sqlite3.Cursor
def __init__(self, db: SqliteDatabase):
"""
Initialize a new object from preexisting sqlite3 connection and threading lock objects.
@@ -78,7 +80,12 @@ class ModelRecordServiceSQL(ModelRecordServiceBase):
self._db = db
self._cursor = self._db.conn.cursor()
def add_model(self, key: str, config: Union[dict, AnyModelConfig]) -> AnyModelConfig:
@property
def db(self) -> SqliteDatabase:
"""Return the underlying database."""
return self._db
def add_model(self, key: str, config: Union[Dict[str, Any], AnyModelConfig]) -> AnyModelConfig:
"""
Add a model to the database.
@@ -293,3 +300,95 @@ class ModelRecordServiceSQL(ModelRecordServiceBase):
)
results = [ModelConfigFactory.make_config(json.loads(x[0])) for x in self._cursor.fetchall()]
return results
@property
def metadata_store(self) -> ModelMetadataStore:
"""Return a ModelMetadataStore initialized on the same database."""
return ModelMetadataStore(self._db)
def get_metadata(self, key: str) -> Optional[AnyModelRepoMetadata]:
"""
Retrieve metadata (if any) recorded when the model was downloaded from a repo.
:param key: Model key
"""
store = self.metadata_store
try:
metadata = store.get_metadata(key)
return metadata
except UnknownMetadataException:
return None
def search_by_metadata_tag(self, tags: Set[str]) -> List[AnyModelConfig]:
"""
Search model metadata for ones with all listed tags and return their corresponding configs.
:param tags: Set of tags to search for. All tags must be present.
"""
store = ModelMetadataStore(self._db)
keys = store.search_by_tag(tags)
return [self.get_model(x) for x in keys]
def list_tags(self) -> Set[str]:
"""Return a unique set of all the model tags in the metadata database."""
store = ModelMetadataStore(self._db)
return store.list_tags()
def list_all_metadata(self) -> List[Tuple[str, AnyModelRepoMetadata]]:
"""List metadata for all models that have it."""
store = ModelMetadataStore(self._db)
return store.list_all_metadata()
def list_models(
self, page: int = 0, per_page: int = 10, order_by: ModelRecordOrderBy = ModelRecordOrderBy.Default
) -> PaginatedResults[ModelSummary]:
"""Return a paginated summary listing of each model in the database."""
ordering = {
ModelRecordOrderBy.Default: "a.type, a.base, a.format, a.name",
ModelRecordOrderBy.Type: "a.type",
ModelRecordOrderBy.Base: "a.base",
ModelRecordOrderBy.Name: "a.name",
ModelRecordOrderBy.Format: "a.format",
}
def _fixup(summary: Dict[str, str]) -> Dict[str, Union[str, int, Set[str]]]:
"""Fix up results so that there are no null values."""
result: Dict[str, Union[str, int, Set[str]]] = {}
for key, item in summary.items():
result[key] = item or ""
result["tags"] = set(json.loads(summary["tags"] or "[]"))
return result
# Lock so that the database isn't updated while we're doing the two queries.
with self._db.lock:
# query1: get the total number of model configs
self._cursor.execute(
"""--sql
select count(*) from model_config;
""",
(),
)
total = int(self._cursor.fetchone()[0])
# query2: fetch key fields from the join of model_config and model_metadata
self._cursor.execute(
f"""--sql
SELECT a.id as key, a.type, a.base, a.format, a.name,
json_extract(a.config, '$.description') as description,
json_extract(b.metadata, '$.tags') as tags
FROM model_config AS a
LEFT JOIN model_metadata AS b on a.id=b.id
ORDER BY {ordering[order_by]} -- SQLite binds values with ?, not identifiers, so the ORDER BY columns are interpolated from a fixed mapping
LIMIT ?
OFFSET ?;
""",
(
per_page,
page * per_page,
),
)
rows = self._cursor.fetchall()
items = [ModelSummary.model_validate(_fixup(dict(x))) for x in rows]
return PaginatedResults(
page=page, pages=ceil(total / per_page), per_page=per_page, total=total, items=items
)
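
Because `list_models()` reports `pages = ceil(total / per_page)`, a caller can walk the whole table page by page. A short sketch, where `iter_all_summaries` is a hypothetical helper relying only on the `PaginatedResults` fields populated above:

def iter_all_summaries(store: ModelRecordServiceSQL):
    page = 0
    while True:
        result = store.list_models(page=page, per_page=25)
        yield from result.items
        page += 1
        if page >= result.pages:  # pages == ceil(total / per_page)
            break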

View File

@@ -6,6 +6,8 @@ from invokeai.app.services.shared.sqlite.sqlite_database import SqliteDatabase
from invokeai.app.services.shared.sqlite_migrator.migrations.migration_1 import build_migration_1
from invokeai.app.services.shared.sqlite_migrator.migrations.migration_2 import build_migration_2
from invokeai.app.services.shared.sqlite_migrator.migrations.migration_3 import build_migration_3
from invokeai.app.services.shared.sqlite_migrator.migrations.migration_4 import build_migration_4
from invokeai.app.services.shared.sqlite_migrator.migrations.migration_5 import build_migration_5
from invokeai.app.services.shared.sqlite_migrator.sqlite_migrator_impl import SqliteMigrator
@@ -28,7 +30,9 @@ def init_db(config: InvokeAIAppConfig, logger: Logger, image_files: ImageFileSto
migrator = SqliteMigrator(db=db)
migrator.register_migration(build_migration_1())
migrator.register_migration(build_migration_2(image_files=image_files, logger=logger))
migrator.register_migration(build_migration_3())
migrator.register_migration(build_migration_3(app_config=config, logger=logger))
migrator.register_migration(build_migration_4())
migrator.register_migration(build_migration_5())
migrator.run_migrations()
return db
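
Each migration follows the same shape: a `Migration` declaring the version it migrates from, the version it produces, and a callback that takes a `sqlite3.Cursor`. A sketch of how a future step would slot into the chain above, where `Migration6Callback` is hypothetical:

def build_migration_6() -> Migration:
    return Migration(
        from_version=5,
        to_version=6,
        callback=Migration6Callback(),  # hypothetical; any callable accepting a sqlite3.Cursor
    )

# migrator.register_migration(build_migration_6())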

View File

@@ -11,8 +11,6 @@ from invokeai.app.services.workflow_records.workflow_records_common import (
UnsafeWorkflowWithVersionValidator,
)
from .util.migrate_yaml_config_1 import MigrateModelYamlToDb1
class Migration2Callback:
def __init__(self, image_files: ImageFileStorageBase, logger: Logger):
@@ -25,8 +23,6 @@ class Migration2Callback:
self._drop_old_workflow_tables(cursor)
self._add_workflow_library(cursor)
self._drop_model_manager_metadata(cursor)
self._recreate_model_config(cursor)
self._migrate_model_config_records(cursor)
self._migrate_embedded_workflows(cursor)
def _add_images_has_workflow(self, cursor: sqlite3.Cursor) -> None:
@@ -100,45 +96,6 @@ class Migration2Callback:
"""Drops the `model_manager_metadata` table."""
cursor.execute("DROP TABLE IF EXISTS model_manager_metadata;")
def _recreate_model_config(self, cursor: sqlite3.Cursor) -> None:
"""
Drops the `model_config` table, recreating it.
In 3.4.0, this table used explicit columns but was changed to use json_extract in 3.5.0.
Because this table is not used in production, we are able to simply drop it and recreate it.
"""
cursor.execute("DROP TABLE IF EXISTS model_config;")
cursor.execute(
"""--sql
CREATE TABLE IF NOT EXISTS model_config (
id TEXT NOT NULL PRIMARY KEY,
-- The next 3 fields are enums in python, unrestricted string here
base TEXT GENERATED ALWAYS as (json_extract(config, '$.base')) VIRTUAL NOT NULL,
type TEXT GENERATED ALWAYS as (json_extract(config, '$.type')) VIRTUAL NOT NULL,
name TEXT GENERATED ALWAYS as (json_extract(config, '$.name')) VIRTUAL NOT NULL,
path TEXT GENERATED ALWAYS as (json_extract(config, '$.path')) VIRTUAL NOT NULL,
format TEXT GENERATED ALWAYS as (json_extract(config, '$.format')) VIRTUAL NOT NULL,
original_hash TEXT, -- could be null
-- Serialized JSON representation of the whole config object,
-- which will contain additional fields from subclasses
config TEXT NOT NULL,
created_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
-- Updated via trigger
updated_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
-- unique constraint on combo of name, base and type
UNIQUE(name, base, type)
);
"""
)
def _migrate_model_config_records(self, cursor: sqlite3.Cursor) -> None:
"""After updating the model config table, we repopulate it."""
model_record_migrator = MigrateModelYamlToDb1(cursor)
model_record_migrator.migrate()
def _migrate_embedded_workflows(self, cursor: sqlite3.Cursor) -> None:
"""
In the v3.5.0 release, InvokeAI changed how it handles embedded workflows. The `images` table in

View File

@@ -1,13 +1,16 @@
import sqlite3
from logging import Logger
from invokeai.app.services.config import InvokeAIAppConfig
from invokeai.app.services.shared.sqlite_migrator.sqlite_migrator_common import Migration
from .util.migrate_yaml_config_1 import MigrateModelYamlToDb1
class Migration3Callback:
def __init__(self) -> None:
pass
def __init__(self, app_config: InvokeAIAppConfig, logger: Logger) -> None:
self._app_config = app_config
self._logger = logger
def __call__(self, cursor: sqlite3.Cursor) -> None:
self._drop_model_manager_metadata(cursor)
@@ -54,11 +57,12 @@ class Migration3Callback:
def _migrate_model_config_records(self, cursor: sqlite3.Cursor) -> None:
"""After updating the model config table, we repopulate it."""
model_record_migrator = MigrateModelYamlToDb1(cursor)
self._logger.info("Migrating model config records from models.yaml to database")
model_record_migrator = MigrateModelYamlToDb1(self._app_config, self._logger, cursor)
model_record_migrator.migrate()
def build_migration_3() -> Migration:
def build_migration_3(app_config: InvokeAIAppConfig, logger: Logger) -> Migration:
"""
Build the migration from database version 2 to 3.
@@ -69,7 +73,7 @@ def build_migration_3() -> Migration:
migration_3 = Migration(
from_version=2,
to_version=3,
callback=Migration3Callback(),
callback=Migration3Callback(app_config=app_config, logger=logger),
)
return migration_3

View File

@@ -0,0 +1,83 @@
import sqlite3
from invokeai.app.services.shared.sqlite_migrator.sqlite_migrator_common import Migration
class Migration4Callback:
"""Callback to do step 4 of migration."""
def __call__(self, cursor: sqlite3.Cursor) -> None: # noqa D102
self._create_model_metadata(cursor)
self._create_model_tags(cursor)
self._create_tags(cursor)
self._create_triggers(cursor)
def _create_model_metadata(self, cursor: sqlite3.Cursor) -> None:
"""Create the table used to store model metadata downloaded from remote sources."""
cursor.execute(
"""--sql
CREATE TABLE IF NOT EXISTS model_metadata (
id TEXT NOT NULL PRIMARY KEY,
name TEXT GENERATED ALWAYS AS (json_extract(metadata, '$.name')) VIRTUAL NOT NULL,
author TEXT GENERATED ALWAYS AS (json_extract(metadata, '$.author')) VIRTUAL NOT NULL,
-- Serialized JSON representation of the whole metadata object,
-- which will contain additional fields from subclasses
metadata TEXT NOT NULL,
created_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
-- Updated via trigger
updated_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
FOREIGN KEY(id) REFERENCES model_config(id) ON DELETE CASCADE
);
"""
)
def _create_model_tags(self, cursor: sqlite3.Cursor) -> None:
cursor.execute(
"""--sql
CREATE TABLE IF NOT EXISTS model_tags (
model_id TEXT NOT NULL,
tag_id INTEGER NOT NULL,
FOREIGN KEY(model_id) REFERENCES model_config(id) ON DELETE CASCADE,
FOREIGN KEY(tag_id) REFERENCES tags(tag_id) ON DELETE CASCADE,
UNIQUE(model_id,tag_id)
);
"""
)
def _create_tags(self, cursor: sqlite3.Cursor) -> None:
cursor.execute(
"""--sql
CREATE TABLE IF NOT EXISTS tags (
tag_id INTEGER NOT NULL PRIMARY KEY,
tag_text TEXT NOT NULL UNIQUE
);
"""
)
def _create_triggers(self, cursor: sqlite3.Cursor) -> None:
cursor.execute(
"""--sql
CREATE TRIGGER IF NOT EXISTS model_metadata_updated_at
AFTER UPDATE
ON model_metadata FOR EACH ROW
BEGIN
UPDATE model_metadata SET updated_at = STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')
WHERE id = old.id;
END;
"""
)
def build_migration_4() -> Migration:
"""
Build the migration from database version 3 to 4.
Adds the tables needed to store model metadata and tags.
"""
migration_4 = Migration(
from_version=3,
to_version=4,
callback=Migration4Callback(),
)
return migration_4
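
With `tags` keyed by `tag_id` and `model_tags` joining models to tags, tag lookups are a single join. A hypothetical read helper against the tables created here, assuming the module's existing `sqlite3` and `typing.List` imports:

def models_with_tag(cursor: sqlite3.Cursor, tag: str) -> List[str]:
    """Return the keys of all models carrying `tag` (hypothetical helper)."""
    cursor.execute(
        """--sql
        SELECT mt.model_id
        FROM model_tags AS mt
        JOIN tags AS t ON t.tag_id = mt.tag_id
        WHERE t.tag_text = ?;
        """,
        (tag,),
    )
    return [row[0] for row in cursor.fetchall()]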

View File

@@ -0,0 +1,34 @@
import sqlite3
from invokeai.app.services.shared.sqlite_migrator.sqlite_migrator_common import Migration
class Migration5Callback:
def __call__(self, cursor: sqlite3.Cursor) -> None:
self._drop_graph_executions(cursor)
def _drop_graph_executions(self, cursor: sqlite3.Cursor) -> None:
"""Drops the `graph_executions` table."""
cursor.execute(
"""--sql
DROP TABLE IF EXISTS graph_executions;
"""
)
def build_migration_5() -> Migration:
"""
Build the migration from database version 4 to 5.
Introduced in v3.6.3, this migration:
- Drops the `graph_executions` table. We are able to do this because we are moving the graph storage
to be purely in-memory.
"""
migration_5 = Migration(
from_version=4,
to_version=5,
callback=Migration5Callback(),
)
return migration_5

View File

@@ -23,7 +23,6 @@ from invokeai.backend.model_manager.config import (
ModelType,
)
from invokeai.backend.model_manager.hash import FastModelHash
from invokeai.backend.util.logging import InvokeAILogger
ModelsValidator = TypeAdapter(AnyModelConfig)
@@ -46,10 +45,9 @@ class MigrateModelYamlToDb1:
logger: Logger
cursor: sqlite3.Cursor
def __init__(self, cursor: sqlite3.Cursor = None) -> None:
self.config = InvokeAIAppConfig.get_config()
self.config.parse_args()
self.logger = InvokeAILogger.get_logger()
def __init__(self, config: InvokeAIAppConfig, logger: Logger, cursor: sqlite3.Cursor = None) -> None:
self.config = config
self.logger = logger
self.cursor = cursor
def get_yaml(self) -> DictConfig:
@@ -74,7 +72,12 @@ class MigrateModelYamlToDb1:
continue
base_type, model_type, model_name = str(model_key).split("/")
hash = FastModelHash.hash(self.config.models_path / stanza.path)
try:
hash = FastModelHash.hash(self.config.models_path / stanza.path)
except OSError:
self.logger.warning(f"The model at {stanza.path} is not a valid file or directory. Skipping migration.")
continue
assert isinstance(model_key, str)
new_key = sha1(model_key.encode("utf-8")).hexdigest()

View File

@@ -1,5 +1,4 @@
{
"id": "6bfa0b3a-7090-4cd9-ad2d-a4b8662b6e71",
"name": "ESRGAN Upscaling with Canny ControlNet",
"author": "InvokeAI",
"description": "Sample workflow for using Upscaling with ControlNet with SD1.5",
@@ -77,12 +76,12 @@
}
}
},
"width": 320,
"height": 256,
"position": {
"x": 1250,
"y": 1500
}
},
"width": 320,
"height": 219
},
{
"id": "d8ace142-c05f-4f1d-8982-88dc7473958d",
@@ -148,12 +147,12 @@
}
}
},
"width": 320,
"height": 227,
"position": {
"x": 700,
"y": 1375
}
},
"width": 320,
"height": 193
},
{
"id": "771bdf6a-0813-4099-a5d8-921a138754d4",
@@ -214,12 +213,12 @@
}
}
},
"width": 320,
"height": 225,
"position": {
"x": 375,
"y": 1900
}
},
"width": 320,
"height": 189
},
{
"id": "f7564dd2-9539-47f2-ac13-190804461f4e",
@@ -315,12 +314,12 @@
}
}
},
"width": 320,
"height": 340,
"position": {
"x": 775,
"y": 1900
}
},
"width": 320,
"height": 295
},
{
"id": "1d887701-df21-4966-ae6e-a7d82307d7bd",
@@ -416,12 +415,12 @@
}
}
},
"width": 320,
"height": 340,
"position": {
"x": 1200,
"y": 1900
}
},
"width": 320,
"height": 293
},
{
"id": "ca1d020c-89a8-4958-880a-016d28775cfa",
@@ -434,7 +433,7 @@
"notes": "",
"isIntermediate": true,
"useCache": true,
"version": "1.1.0",
"version": "1.1.1",
"nodePack": "invokeai",
"inputs": {
"image": {
@@ -537,12 +536,12 @@
}
}
},
"width": 320,
"height": 511,
"position": {
"x": 1650,
"y": 1900
}
},
"width": 320,
"height": 451
},
{
"id": "f50624ce-82bf-41d0-bdf7-8aab11a80d48",
@@ -640,12 +639,12 @@
}
}
},
"width": 320,
"height": 32,
"position": {
"x": 1650,
"y": 1775
}
},
"width": 320,
"height": 24
},
{
"id": "c3737554-8d87-48ff-a6f8-e71d2867f434",
@@ -658,7 +657,7 @@
"notes": "",
"isIntermediate": true,
"useCache": true,
"version": "1.5.0",
"version": "1.5.1",
"nodePack": "invokeai",
"inputs": {
"positive_conditioning": {
@@ -866,12 +865,12 @@
}
}
},
"width": 320,
"height": 705,
"position": {
"x": 2128.740065979906,
"y": 1232.6219060454753
}
},
"width": 320,
"height": 612
},
{
"id": "3ed9b2ef-f4ec-40a7-94db-92e63b583ec0",
@@ -978,12 +977,12 @@
}
}
},
"width": 320,
"height": 267,
"position": {
"x": 2559.4751127537957,
"y": 1246.6000376741406
}
},
"width": 320,
"height": 224
},
{
"id": "5ca498a4-c8c8-4580-a396-0c984317205d",
@@ -1079,12 +1078,12 @@
}
}
},
"width": 320,
"height": 32,
"position": {
"x": 1650,
"y": 1675
}
},
"width": 320,
"height": 24
},
{
"id": "63b6ab7e-5b05-4d1b-a3b1-42d8e53ce16b",
@@ -1137,12 +1136,12 @@
}
}
},
"width": 320,
"height": 256,
"position": {
"x": 1250,
"y": 1200
}
},
"width": 320,
"height": 219
},
{
"id": "eb8f6f8a-c7b1-4914-806e-045ee2717a35",
@@ -1195,168 +1194,168 @@
}
}
},
"width": 320,
"height": 32,
"position": {
"x": 1650,
"y": 1600
}
},
"width": 320,
"height": 24
}
],
"edges": [
{
"id": "5ca498a4-c8c8-4580-a396-0c984317205d-f50624ce-82bf-41d0-bdf7-8aab11a80d48-collapsed",
"type": "collapsed",
"source": "5ca498a4-c8c8-4580-a396-0c984317205d",
"target": "f50624ce-82bf-41d0-bdf7-8aab11a80d48",
"type": "collapsed"
"target": "f50624ce-82bf-41d0-bdf7-8aab11a80d48"
},
{
"id": "eb8f6f8a-c7b1-4914-806e-045ee2717a35-f50624ce-82bf-41d0-bdf7-8aab11a80d48-collapsed",
"type": "collapsed",
"source": "eb8f6f8a-c7b1-4914-806e-045ee2717a35",
"target": "f50624ce-82bf-41d0-bdf7-8aab11a80d48",
"type": "collapsed"
"target": "f50624ce-82bf-41d0-bdf7-8aab11a80d48"
},
{
"id": "reactflow__edge-771bdf6a-0813-4099-a5d8-921a138754d4image-f7564dd2-9539-47f2-ac13-190804461f4eimage",
"type": "default",
"source": "771bdf6a-0813-4099-a5d8-921a138754d4",
"target": "f7564dd2-9539-47f2-ac13-190804461f4e",
"type": "default",
"sourceHandle": "image",
"targetHandle": "image"
},
{
"id": "reactflow__edge-f7564dd2-9539-47f2-ac13-190804461f4eimage-1d887701-df21-4966-ae6e-a7d82307d7bdimage",
"type": "default",
"source": "f7564dd2-9539-47f2-ac13-190804461f4e",
"target": "1d887701-df21-4966-ae6e-a7d82307d7bd",
"type": "default",
"sourceHandle": "image",
"targetHandle": "image"
},
{
"id": "reactflow__edge-5ca498a4-c8c8-4580-a396-0c984317205dwidth-f50624ce-82bf-41d0-bdf7-8aab11a80d48width",
"type": "default",
"source": "5ca498a4-c8c8-4580-a396-0c984317205d",
"target": "f50624ce-82bf-41d0-bdf7-8aab11a80d48",
"type": "default",
"sourceHandle": "width",
"targetHandle": "width"
},
{
"id": "reactflow__edge-5ca498a4-c8c8-4580-a396-0c984317205dheight-f50624ce-82bf-41d0-bdf7-8aab11a80d48height",
"type": "default",
"source": "5ca498a4-c8c8-4580-a396-0c984317205d",
"target": "f50624ce-82bf-41d0-bdf7-8aab11a80d48",
"type": "default",
"sourceHandle": "height",
"targetHandle": "height"
},
{
"id": "reactflow__edge-f50624ce-82bf-41d0-bdf7-8aab11a80d48noise-c3737554-8d87-48ff-a6f8-e71d2867f434noise",
"type": "default",
"source": "f50624ce-82bf-41d0-bdf7-8aab11a80d48",
"target": "c3737554-8d87-48ff-a6f8-e71d2867f434",
"type": "default",
"sourceHandle": "noise",
"targetHandle": "noise"
},
{
"id": "reactflow__edge-5ca498a4-c8c8-4580-a396-0c984317205dlatents-c3737554-8d87-48ff-a6f8-e71d2867f434latents",
"type": "default",
"source": "5ca498a4-c8c8-4580-a396-0c984317205d",
"target": "c3737554-8d87-48ff-a6f8-e71d2867f434",
"type": "default",
"sourceHandle": "latents",
"targetHandle": "latents"
},
{
"id": "reactflow__edge-e8bf67fe-67de-4227-87eb-79e86afdfc74conditioning-c3737554-8d87-48ff-a6f8-e71d2867f434negative_conditioning",
"type": "default",
"source": "e8bf67fe-67de-4227-87eb-79e86afdfc74",
"target": "c3737554-8d87-48ff-a6f8-e71d2867f434",
"type": "default",
"sourceHandle": "conditioning",
"targetHandle": "negative_conditioning"
},
{
"id": "reactflow__edge-63b6ab7e-5b05-4d1b-a3b1-42d8e53ce16bconditioning-c3737554-8d87-48ff-a6f8-e71d2867f434positive_conditioning",
"type": "default",
"source": "63b6ab7e-5b05-4d1b-a3b1-42d8e53ce16b",
"target": "c3737554-8d87-48ff-a6f8-e71d2867f434",
"type": "default",
"sourceHandle": "conditioning",
"targetHandle": "positive_conditioning"
},
{
"id": "reactflow__edge-d8ace142-c05f-4f1d-8982-88dc7473958dclip-63b6ab7e-5b05-4d1b-a3b1-42d8e53ce16bclip",
"type": "default",
"source": "d8ace142-c05f-4f1d-8982-88dc7473958d",
"target": "63b6ab7e-5b05-4d1b-a3b1-42d8e53ce16b",
"type": "default",
"sourceHandle": "clip",
"targetHandle": "clip"
},
{
"id": "reactflow__edge-d8ace142-c05f-4f1d-8982-88dc7473958dclip-e8bf67fe-67de-4227-87eb-79e86afdfc74clip",
"type": "default",
"source": "d8ace142-c05f-4f1d-8982-88dc7473958d",
"target": "e8bf67fe-67de-4227-87eb-79e86afdfc74",
"type": "default",
"sourceHandle": "clip",
"targetHandle": "clip"
},
{
"id": "reactflow__edge-1d887701-df21-4966-ae6e-a7d82307d7bdimage-ca1d020c-89a8-4958-880a-016d28775cfaimage",
"type": "default",
"source": "1d887701-df21-4966-ae6e-a7d82307d7bd",
"target": "ca1d020c-89a8-4958-880a-016d28775cfa",
"type": "default",
"sourceHandle": "image",
"targetHandle": "image"
},
{
"id": "reactflow__edge-ca1d020c-89a8-4958-880a-016d28775cfacontrol-c3737554-8d87-48ff-a6f8-e71d2867f434control",
"type": "default",
"source": "ca1d020c-89a8-4958-880a-016d28775cfa",
"target": "c3737554-8d87-48ff-a6f8-e71d2867f434",
"type": "default",
"sourceHandle": "control",
"targetHandle": "control"
},
{
"id": "reactflow__edge-c3737554-8d87-48ff-a6f8-e71d2867f434latents-3ed9b2ef-f4ec-40a7-94db-92e63b583ec0latents",
"type": "default",
"source": "c3737554-8d87-48ff-a6f8-e71d2867f434",
"target": "3ed9b2ef-f4ec-40a7-94db-92e63b583ec0",
"type": "default",
"sourceHandle": "latents",
"targetHandle": "latents"
},
{
"id": "reactflow__edge-d8ace142-c05f-4f1d-8982-88dc7473958dvae-3ed9b2ef-f4ec-40a7-94db-92e63b583ec0vae",
"type": "default",
"source": "d8ace142-c05f-4f1d-8982-88dc7473958d",
"target": "3ed9b2ef-f4ec-40a7-94db-92e63b583ec0",
"type": "default",
"sourceHandle": "vae",
"targetHandle": "vae"
},
{
"id": "reactflow__edge-f7564dd2-9539-47f2-ac13-190804461f4eimage-5ca498a4-c8c8-4580-a396-0c984317205dimage",
"type": "default",
"source": "f7564dd2-9539-47f2-ac13-190804461f4e",
"target": "5ca498a4-c8c8-4580-a396-0c984317205d",
"type": "default",
"sourceHandle": "image",
"targetHandle": "image"
},
{
"id": "reactflow__edge-d8ace142-c05f-4f1d-8982-88dc7473958dunet-c3737554-8d87-48ff-a6f8-e71d2867f434unet",
"type": "default",
"source": "d8ace142-c05f-4f1d-8982-88dc7473958d",
"target": "c3737554-8d87-48ff-a6f8-e71d2867f434",
"type": "default",
"sourceHandle": "unet",
"targetHandle": "unet"
},
{
"id": "reactflow__edge-d8ace142-c05f-4f1d-8982-88dc7473958dvae-5ca498a4-c8c8-4580-a396-0c984317205dvae",
"type": "default",
"source": "d8ace142-c05f-4f1d-8982-88dc7473958d",
"target": "5ca498a4-c8c8-4580-a396-0c984317205d",
"type": "default",
"sourceHandle": "vae",
"targetHandle": "vae"
},
{
"id": "reactflow__edge-eb8f6f8a-c7b1-4914-806e-045ee2717a35value-f50624ce-82bf-41d0-bdf7-8aab11a80d48seed",
"type": "default",
"source": "eb8f6f8a-c7b1-4914-806e-045ee2717a35",
"target": "f50624ce-82bf-41d0-bdf7-8aab11a80d48",
"type": "default",
"sourceHandle": "value",
"targetHandle": "seed"
}

View File

@@ -1,5 +1,4 @@
{
"id": "1e385b84-86f8-452e-9697-9e5abed20518",
"name": "Multi ControlNet (Canny & Depth)",
"author": "InvokeAI",
"description": "A sample workflow using canny & depth ControlNets to guide the generation process. ",
@@ -93,12 +92,12 @@
}
}
},
"width": 320,
"height": 225,
"position": {
"x": 3625,
"y": -75
}
},
"width": 320,
"height": 189
},
{
"id": "a33199c2-8340-401e-b8a2-42ffa875fc1c",
@@ -111,7 +110,7 @@
"notes": "",
"isIntermediate": true,
"useCache": true,
"version": "1.1.0",
"version": "1.1.1",
"nodePack": "invokeai",
"inputs": {
"image": {
@@ -214,12 +213,12 @@
}
}
},
"width": 320,
"height": 511,
"position": {
"x": 4477.604342844504,
"y": -49.39005411272677
}
},
"width": 320,
"height": 451
},
{
"id": "273e3f96-49ea-4dc5-9d5b-9660390f14e1",
@@ -272,12 +271,12 @@
}
}
},
"width": 320,
"height": 256,
"position": {
"x": 4075,
"y": -825
}
},
"width": 320,
"height": 219
},
{
"id": "54486974-835b-4d81-8f82-05f9f32ce9e9",
@@ -343,12 +342,12 @@
}
}
},
"width": 320,
"height": 227,
"position": {
"x": 3600,
"y": -1000
}
},
"width": 320,
"height": 193
},
{
"id": "7ce68934-3419-42d4-ac70-82cfc9397306",
@@ -401,12 +400,12 @@
}
}
},
"width": 320,
"height": 256,
"position": {
"x": 4075,
"y": -1125
}
},
"width": 320,
"height": 219
},
{
"id": "d204d184-f209-4fae-a0a1-d152800844e1",
@@ -419,7 +418,7 @@
"notes": "",
"isIntermediate": true,
"useCache": true,
"version": "1.1.0",
"version": "1.1.1",
"nodePack": "invokeai",
"inputs": {
"image": {
@@ -522,12 +521,12 @@
}
}
},
"width": 320,
"height": 511,
"position": {
"x": 4479.68542130465,
"y": -618.4221638099414
}
},
"width": 320,
"height": 451
},
{
"id": "c4b23e64-7986-40c4-9cad-46327b12e204",
@@ -588,12 +587,12 @@
}
}
},
"width": 320,
"height": 225,
"position": {
"x": 3625,
"y": -425
}
},
"width": 320,
"height": 189
},
{
"id": "ca4d5059-8bfb-447f-b415-da0faba5a143",
@@ -633,12 +632,12 @@
}
}
},
"width": 320,
"height": 104,
"position": {
"x": 4875,
"y": -575
}
},
"width": 320,
"height": 87
},
{
"id": "018b1214-c2af-43a7-9910-fb687c6726d7",
@@ -734,12 +733,12 @@
}
}
},
"width": 320,
"height": 340,
"position": {
"x": 4100,
"y": -75
}
},
"width": 320,
"height": 293
},
{
"id": "c826ba5e-9676-4475-b260-07b85e88753c",
@@ -835,12 +834,12 @@
}
}
},
"width": 320,
"height": 340,
"position": {
"x": 4095.757337055795,
"y": -455.63440891935863
}
},
"width": 320,
"height": 293
},
{
"id": "9db25398-c869-4a63-8815-c6559341ef12",
@@ -947,12 +946,12 @@
}
}
},
"width": 320,
"height": 267,
"position": {
"x": 5675,
"y": -825
}
},
"width": 320,
"height": 224
},
{
"id": "ac481b7f-08bf-4a9d-9e0c-3a82ea5243ce",
@@ -965,7 +964,7 @@
"notes": "",
"isIntermediate": true,
"useCache": true,
"version": "1.5.0",
"version": "1.5.1",
"nodePack": "invokeai",
"inputs": {
"positive_conditioning": {
@@ -1173,12 +1172,12 @@
}
}
},
"width": 320,
"height": 705,
"position": {
"x": 5274.672987098195,
"y": -823.0752416664332
}
},
"width": 320,
"height": 612
},
{
"id": "2e77a0a1-db6a-47a2-a8bf-1e003be6423b",
@@ -1275,12 +1274,12 @@
}
}
},
"width": 320,
"height": 32,
"position": {
"x": 4875,
"y": -675
}
},
"width": 320,
"height": 24
},
{
"id": "8b260b4d-3fd6-44d4-b1be-9f0e43c628ce",
@@ -1333,146 +1332,146 @@
}
}
},
"width": 320,
"height": 32,
"position": {
"x": 4875,
"y": -750
}
},
"width": 320,
"height": 24
}
],
"edges": [
{
"id": "8b260b4d-3fd6-44d4-b1be-9f0e43c628ce-2e77a0a1-db6a-47a2-a8bf-1e003be6423b-collapsed",
"type": "collapsed",
"source": "8b260b4d-3fd6-44d4-b1be-9f0e43c628ce",
"target": "2e77a0a1-db6a-47a2-a8bf-1e003be6423b",
"type": "collapsed"
"target": "2e77a0a1-db6a-47a2-a8bf-1e003be6423b"
},
{
"id": "reactflow__edge-54486974-835b-4d81-8f82-05f9f32ce9e9clip-7ce68934-3419-42d4-ac70-82cfc9397306clip",
"type": "default",
"source": "54486974-835b-4d81-8f82-05f9f32ce9e9",
"target": "7ce68934-3419-42d4-ac70-82cfc9397306",
"type": "default",
"sourceHandle": "clip",
"targetHandle": "clip"
},
{
"id": "reactflow__edge-54486974-835b-4d81-8f82-05f9f32ce9e9clip-273e3f96-49ea-4dc5-9d5b-9660390f14e1clip",
"type": "default",
"source": "54486974-835b-4d81-8f82-05f9f32ce9e9",
"target": "273e3f96-49ea-4dc5-9d5b-9660390f14e1",
"type": "default",
"sourceHandle": "clip",
"targetHandle": "clip"
},
{
"id": "reactflow__edge-a33199c2-8340-401e-b8a2-42ffa875fc1ccontrol-ca4d5059-8bfb-447f-b415-da0faba5a143item",
"type": "default",
"source": "a33199c2-8340-401e-b8a2-42ffa875fc1c",
"target": "ca4d5059-8bfb-447f-b415-da0faba5a143",
"type": "default",
"sourceHandle": "control",
"targetHandle": "item"
},
{
"id": "reactflow__edge-d204d184-f209-4fae-a0a1-d152800844e1control-ca4d5059-8bfb-447f-b415-da0faba5a143item",
"type": "default",
"source": "d204d184-f209-4fae-a0a1-d152800844e1",
"target": "ca4d5059-8bfb-447f-b415-da0faba5a143",
"type": "default",
"sourceHandle": "control",
"targetHandle": "item"
},
{
"id": "reactflow__edge-8e860e51-5045-456e-bf04-9a62a2a5c49eimage-018b1214-c2af-43a7-9910-fb687c6726d7image",
"type": "default",
"source": "8e860e51-5045-456e-bf04-9a62a2a5c49e",
"target": "018b1214-c2af-43a7-9910-fb687c6726d7",
"type": "default",
"sourceHandle": "image",
"targetHandle": "image"
},
{
"id": "reactflow__edge-018b1214-c2af-43a7-9910-fb687c6726d7image-a33199c2-8340-401e-b8a2-42ffa875fc1cimage",
"type": "default",
"source": "018b1214-c2af-43a7-9910-fb687c6726d7",
"target": "a33199c2-8340-401e-b8a2-42ffa875fc1c",
"type": "default",
"sourceHandle": "image",
"targetHandle": "image"
},
{
"id": "reactflow__edge-c4b23e64-7986-40c4-9cad-46327b12e204image-c826ba5e-9676-4475-b260-07b85e88753cimage",
"type": "default",
"source": "c4b23e64-7986-40c4-9cad-46327b12e204",
"target": "c826ba5e-9676-4475-b260-07b85e88753c",
"type": "default",
"sourceHandle": "image",
"targetHandle": "image"
},
{
"id": "reactflow__edge-c826ba5e-9676-4475-b260-07b85e88753cimage-d204d184-f209-4fae-a0a1-d152800844e1image",
"type": "default",
"source": "c826ba5e-9676-4475-b260-07b85e88753c",
"target": "d204d184-f209-4fae-a0a1-d152800844e1",
"type": "default",
"sourceHandle": "image",
"targetHandle": "image"
},
{
"id": "reactflow__edge-54486974-835b-4d81-8f82-05f9f32ce9e9vae-9db25398-c869-4a63-8815-c6559341ef12vae",
"type": "default",
"source": "54486974-835b-4d81-8f82-05f9f32ce9e9",
"target": "9db25398-c869-4a63-8815-c6559341ef12",
"type": "default",
"sourceHandle": "vae",
"targetHandle": "vae"
},
{
"id": "reactflow__edge-ac481b7f-08bf-4a9d-9e0c-3a82ea5243celatents-9db25398-c869-4a63-8815-c6559341ef12latents",
"type": "default",
"source": "ac481b7f-08bf-4a9d-9e0c-3a82ea5243ce",
"target": "9db25398-c869-4a63-8815-c6559341ef12",
"type": "default",
"sourceHandle": "latents",
"targetHandle": "latents"
},
{
"id": "reactflow__edge-ca4d5059-8bfb-447f-b415-da0faba5a143collection-ac481b7f-08bf-4a9d-9e0c-3a82ea5243cecontrol",
"type": "default",
"source": "ca4d5059-8bfb-447f-b415-da0faba5a143",
"target": "ac481b7f-08bf-4a9d-9e0c-3a82ea5243ce",
"type": "default",
"sourceHandle": "collection",
"targetHandle": "control"
},
{
"id": "reactflow__edge-54486974-835b-4d81-8f82-05f9f32ce9e9unet-ac481b7f-08bf-4a9d-9e0c-3a82ea5243ceunet",
"type": "default",
"source": "54486974-835b-4d81-8f82-05f9f32ce9e9",
"target": "ac481b7f-08bf-4a9d-9e0c-3a82ea5243ce",
"type": "default",
"sourceHandle": "unet",
"targetHandle": "unet"
},
{
"id": "reactflow__edge-273e3f96-49ea-4dc5-9d5b-9660390f14e1conditioning-ac481b7f-08bf-4a9d-9e0c-3a82ea5243cenegative_conditioning",
"type": "default",
"source": "273e3f96-49ea-4dc5-9d5b-9660390f14e1",
"target": "ac481b7f-08bf-4a9d-9e0c-3a82ea5243ce",
"type": "default",
"sourceHandle": "conditioning",
"targetHandle": "negative_conditioning"
},
{
"id": "reactflow__edge-7ce68934-3419-42d4-ac70-82cfc9397306conditioning-ac481b7f-08bf-4a9d-9e0c-3a82ea5243cepositive_conditioning",
"type": "default",
"source": "7ce68934-3419-42d4-ac70-82cfc9397306",
"target": "ac481b7f-08bf-4a9d-9e0c-3a82ea5243ce",
"type": "default",
"sourceHandle": "conditioning",
"targetHandle": "positive_conditioning"
},
{
"id": "reactflow__edge-2e77a0a1-db6a-47a2-a8bf-1e003be6423bnoise-ac481b7f-08bf-4a9d-9e0c-3a82ea5243cenoise",
"type": "default",
"source": "2e77a0a1-db6a-47a2-a8bf-1e003be6423b",
"target": "ac481b7f-08bf-4a9d-9e0c-3a82ea5243ce",
"type": "default",
"sourceHandle": "noise",
"targetHandle": "noise"
},
{
"id": "reactflow__edge-8b260b4d-3fd6-44d4-b1be-9f0e43c628cevalue-2e77a0a1-db6a-47a2-a8bf-1e003be6423bseed",
"type": "default",
"source": "8b260b4d-3fd6-44d4-b1be-9f0e43c628ce",
"target": "2e77a0a1-db6a-47a2-a8bf-1e003be6423b",
"type": "default",
"sourceHandle": "value",
"targetHandle": "seed"
}

View File

@@ -20,7 +20,6 @@
"category": "default",
"version": "2.0.0"
},
"id": "d1609af5-eb0a-4f73-b573-c9af96a8d6bf",
"nodes": [
{
"id": "c2eaf1ba-5708-4679-9e15-945b8b432692",
@@ -73,12 +72,12 @@
}
}
},
"width": 320,
"height": 32,
"position": {
"x": 925,
"y": -200
}
},
"width": 320,
"height": 24
},
{
"id": "1b7e0df8-8589-4915-a4ea-c0088f15d642",
@@ -168,12 +167,12 @@
}
}
},
"width": 320,
"height": 580,
"position": {
"x": 475,
"y": -400
}
},
"width": 320,
"height": 506
},
{
"id": "1b89067c-3f6b-42c8-991f-e3055789b251",
@@ -233,12 +232,12 @@
}
}
},
"width": 320,
"height": 32,
"position": {
"x": 925,
"y": -400
}
},
"width": 320,
"height": 24
},
{
"id": "d6353b7f-b447-4e17-8f2e-80a88c91d426",
@@ -304,12 +303,12 @@
}
}
},
"width": 320,
"height": 227,
"position": {
"x": 0,
"y": -375
}
},
"width": 320,
"height": 193
},
{
"id": "fc9d0e35-a6de-4a19-84e1-c72497c823f6",
@@ -362,12 +361,12 @@
}
}
},
"width": 320,
"height": 32,
"position": {
"x": 925,
"y": -275
}
},
"width": 320,
"height": 24
},
{
"id": "0eb5f3f5-1b91-49eb-9ef0-41d67c7eae77",
@@ -465,12 +464,12 @@
}
}
},
"width": 320,
"height": 32,
"position": {
"x": 925,
"y": 25
}
},
"width": 320,
"height": 24
},
{
"id": "dfc20e07-7aef-4fc0-a3a1-7bf68ec6a4e5",
@@ -524,12 +523,12 @@
}
}
},
"width": 320,
"height": 32,
"position": {
"x": 925,
"y": -50
}
},
"width": 320,
"height": 24
},
{
"id": "491ec988-3c77-4c37-af8a-39a0c4e7a2a1",
@@ -636,12 +635,12 @@
}
}
},
"width": 320,
"height": 267,
"position": {
"x": 2037.861329274915,
"y": -329.8393457509562
}
},
"width": 320,
"height": 224
},
{
"id": "2fb1577f-0a56-4f12-8711-8afcaaaf1d5e",
@@ -654,7 +653,7 @@
"notes": "",
"isIntermediate": true,
"useCache": true,
"version": "1.5.0",
"version": "1.5.1",
"nodePack": "invokeai",
"inputs": {
"positive_conditioning": {
@@ -862,112 +861,112 @@
}
}
},
"width": 320,
"height": 705,
"position": {
"x": 1570.9941088179146,
"y": -407.6505491604564
}
},
"width": 320,
"height": 612
}
],
"edges": [
{
"id": "1b89067c-3f6b-42c8-991f-e3055789b251-fc9d0e35-a6de-4a19-84e1-c72497c823f6-collapsed",
"type": "collapsed",
"source": "1b89067c-3f6b-42c8-991f-e3055789b251",
"target": "fc9d0e35-a6de-4a19-84e1-c72497c823f6",
"type": "collapsed"
"target": "fc9d0e35-a6de-4a19-84e1-c72497c823f6"
},
{
"id": "dfc20e07-7aef-4fc0-a3a1-7bf68ec6a4e5-0eb5f3f5-1b91-49eb-9ef0-41d67c7eae77-collapsed",
"type": "collapsed",
"source": "dfc20e07-7aef-4fc0-a3a1-7bf68ec6a4e5",
"target": "0eb5f3f5-1b91-49eb-9ef0-41d67c7eae77",
"type": "collapsed"
"target": "0eb5f3f5-1b91-49eb-9ef0-41d67c7eae77"
},
{
"id": "reactflow__edge-1b7e0df8-8589-4915-a4ea-c0088f15d642collection-1b89067c-3f6b-42c8-991f-e3055789b251collection",
"type": "default",
"source": "1b7e0df8-8589-4915-a4ea-c0088f15d642",
"target": "1b89067c-3f6b-42c8-991f-e3055789b251",
"type": "default",
"sourceHandle": "collection",
"targetHandle": "collection"
},
{
"id": "reactflow__edge-d6353b7f-b447-4e17-8f2e-80a88c91d426clip-fc9d0e35-a6de-4a19-84e1-c72497c823f6clip",
"type": "default",
"source": "d6353b7f-b447-4e17-8f2e-80a88c91d426",
"target": "fc9d0e35-a6de-4a19-84e1-c72497c823f6",
"type": "default",
"sourceHandle": "clip",
"targetHandle": "clip"
},
{
"id": "reactflow__edge-1b89067c-3f6b-42c8-991f-e3055789b251item-fc9d0e35-a6de-4a19-84e1-c72497c823f6prompt",
"type": "default",
"source": "1b89067c-3f6b-42c8-991f-e3055789b251",
"target": "fc9d0e35-a6de-4a19-84e1-c72497c823f6",
"type": "default",
"sourceHandle": "item",
"targetHandle": "prompt"
},
{
"id": "reactflow__edge-d6353b7f-b447-4e17-8f2e-80a88c91d426clip-c2eaf1ba-5708-4679-9e15-945b8b432692clip",
"type": "default",
"source": "d6353b7f-b447-4e17-8f2e-80a88c91d426",
"target": "c2eaf1ba-5708-4679-9e15-945b8b432692",
"type": "default",
"sourceHandle": "clip",
"targetHandle": "clip"
},
{
"id": "reactflow__edge-dfc20e07-7aef-4fc0-a3a1-7bf68ec6a4e5value-0eb5f3f5-1b91-49eb-9ef0-41d67c7eae77seed",
"type": "default",
"source": "dfc20e07-7aef-4fc0-a3a1-7bf68ec6a4e5",
"target": "0eb5f3f5-1b91-49eb-9ef0-41d67c7eae77",
"type": "default",
"sourceHandle": "value",
"targetHandle": "seed"
},
{
"id": "reactflow__edge-fc9d0e35-a6de-4a19-84e1-c72497c823f6conditioning-2fb1577f-0a56-4f12-8711-8afcaaaf1d5epositive_conditioning",
"type": "default",
"source": "fc9d0e35-a6de-4a19-84e1-c72497c823f6",
"target": "2fb1577f-0a56-4f12-8711-8afcaaaf1d5e",
"type": "default",
"sourceHandle": "conditioning",
"targetHandle": "positive_conditioning"
},
{
"id": "reactflow__edge-c2eaf1ba-5708-4679-9e15-945b8b432692conditioning-2fb1577f-0a56-4f12-8711-8afcaaaf1d5enegative_conditioning",
"type": "default",
"source": "c2eaf1ba-5708-4679-9e15-945b8b432692",
"target": "2fb1577f-0a56-4f12-8711-8afcaaaf1d5e",
"type": "default",
"sourceHandle": "conditioning",
"targetHandle": "negative_conditioning"
},
{
"id": "reactflow__edge-0eb5f3f5-1b91-49eb-9ef0-41d67c7eae77noise-2fb1577f-0a56-4f12-8711-8afcaaaf1d5enoise",
"type": "default",
"source": "0eb5f3f5-1b91-49eb-9ef0-41d67c7eae77",
"target": "2fb1577f-0a56-4f12-8711-8afcaaaf1d5e",
"type": "default",
"sourceHandle": "noise",
"targetHandle": "noise"
},
{
"id": "reactflow__edge-d6353b7f-b447-4e17-8f2e-80a88c91d426unet-2fb1577f-0a56-4f12-8711-8afcaaaf1d5eunet",
"type": "default",
"source": "d6353b7f-b447-4e17-8f2e-80a88c91d426",
"target": "2fb1577f-0a56-4f12-8711-8afcaaaf1d5e",
"type": "default",
"sourceHandle": "unet",
"targetHandle": "unet"
},
{
"id": "reactflow__edge-2fb1577f-0a56-4f12-8711-8afcaaaf1d5elatents-491ec988-3c77-4c37-af8a-39a0c4e7a2a1latents",
"type": "default",
"source": "2fb1577f-0a56-4f12-8711-8afcaaaf1d5e",
"target": "491ec988-3c77-4c37-af8a-39a0c4e7a2a1",
"type": "default",
"sourceHandle": "latents",
"targetHandle": "latents"
},
{
"id": "reactflow__edge-d6353b7f-b447-4e17-8f2e-80a88c91d426vae-491ec988-3c77-4c37-af8a-39a0c4e7a2a1vae",
"type": "default",
"source": "d6353b7f-b447-4e17-8f2e-80a88c91d426",
"target": "491ec988-3c77-4c37-af8a-39a0c4e7a2a1",
"type": "default",
"sourceHandle": "vae",
"targetHandle": "vae"
}

View File

@@ -80,12 +80,12 @@
}
}
},
"width": 320,
"height": 32,
"position": {
"x": 750,
"y": -225
}
},
"width": 320,
"height": 24
},
{
"id": "719dabe8-8297-4749-aea1-37be301cd425",
@@ -126,12 +126,12 @@
}
}
},
"width": 320,
"height": 258,
"position": {
"x": 750,
"y": -125
}
},
"width": 320,
"height": 219
},
{
"id": "3193ad09-a7c2-4bf4-a3a9-1c61cc33a204",
@@ -279,12 +279,12 @@
}
}
},
"width": 320,
"height": 32,
"position": {
"x": 750,
"y": 200
}
},
"width": 320,
"height": 24
},
{
"id": "55705012-79b9-4aac-9f26-c0b10309785b",
@@ -382,12 +382,12 @@
}
}
},
"width": 320,
"height": 388,
"position": {
"x": 375,
"y": 0
}
},
"width": 320,
"height": 336
},
{
"id": "ea94bc37-d995-4a83-aa99-4af42479f2f2",
@@ -441,12 +441,12 @@
}
}
},
"width": 320,
"height": 32,
"position": {
"x": 375,
"y": -50
}
},
"width": 320,
"height": 24
},
{
"id": "30d3289c-773c-4152-a9d2-bd8a99c8fd22",
@@ -471,8 +471,7 @@
"isCollection": false,
"isCollectionOrScalar": false,
"name": "SDXLMainModelField"
},
"value": null
}
}
},
"outputs": {
@@ -518,12 +517,12 @@
}
}
},
"width": 320,
"height": 257,
"position": {
"x": 375,
"y": -500
}
},
"width": 320,
"height": 219
},
{
"id": "faf965a4-7530-427b-b1f3-4ba6505c2a08",
@@ -671,12 +670,12 @@
}
}
},
"width": 320,
"height": 32,
"position": {
"x": 750,
"y": -175
}
},
"width": 320,
"height": 24
},
{
"id": "63e91020-83b2-4f35-b174-ad9692aabb48",
@@ -783,12 +782,12 @@
}
}
},
"width": 320,
"height": 266,
"position": {
"x": 1475,
"y": -500
}
},
"width": 320,
"height": 224
},
{
"id": "50a36525-3c0a-4cc5-977c-e4bfc3fd6dfb",
@@ -801,7 +800,7 @@
"notes": "",
"isIntermediate": true,
"useCache": true,
"version": "1.5.0",
"version": "1.5.1",
"nodePack": "invokeai",
"inputs": {
"positive_conditioning": {
@@ -1009,12 +1008,12 @@
}
}
},
"width": 320,
"height": 702,
"position": {
"x": 1125,
"y": -500
}
},
"width": 320,
"height": 612
},
{
"id": "0093692f-9cf4-454d-a5b8-62f0e3eb3bb8",
@@ -1038,8 +1037,7 @@
"isCollection": false,
"isCollectionOrScalar": false,
"name": "VAEModelField"
},
"value": null
}
}
},
"outputs": {
@@ -1055,12 +1053,12 @@
}
}
},
"width": 320,
"height": 161,
"position": {
"x": 375,
"y": -225
}
},
"width": 320,
"height": 139
},
{
"id": "ade2c0d3-0384-4157-b39b-29ce429cfa15",
@@ -1101,12 +1099,12 @@
}
}
},
"width": 320,
"height": 258,
"position": {
"x": 750,
"y": -500
}
},
"width": 320,
"height": 219
},
{
"id": "ad8fa655-3a76-43d0-9c02-4d7644dea650",
@@ -1159,162 +1157,162 @@
}
}
},
"width": 320,
"height": 32,
"position": {
"x": 750,
"y": 150
}
},
"width": 320,
"height": 24
}
],
"edges": [
{
"id": "3774ec24-a69e-4254-864c-097d07a6256f-faf965a4-7530-427b-b1f3-4ba6505c2a08-collapsed",
"type": "collapsed",
"source": "3774ec24-a69e-4254-864c-097d07a6256f",
"target": "faf965a4-7530-427b-b1f3-4ba6505c2a08",
"type": "collapsed"
"target": "faf965a4-7530-427b-b1f3-4ba6505c2a08"
},
{
"id": "ad8fa655-3a76-43d0-9c02-4d7644dea650-3193ad09-a7c2-4bf4-a3a9-1c61cc33a204-collapsed",
"type": "collapsed",
"source": "ad8fa655-3a76-43d0-9c02-4d7644dea650",
"target": "3193ad09-a7c2-4bf4-a3a9-1c61cc33a204",
"type": "collapsed"
"target": "3193ad09-a7c2-4bf4-a3a9-1c61cc33a204"
},
{
"id": "reactflow__edge-ea94bc37-d995-4a83-aa99-4af42479f2f2value-55705012-79b9-4aac-9f26-c0b10309785bseed",
"type": "default",
"source": "ea94bc37-d995-4a83-aa99-4af42479f2f2",
"target": "55705012-79b9-4aac-9f26-c0b10309785b",
"type": "default",
"sourceHandle": "value",
"targetHandle": "seed"
},
{
"id": "reactflow__edge-30d3289c-773c-4152-a9d2-bd8a99c8fd22clip-faf965a4-7530-427b-b1f3-4ba6505c2a08clip",
"type": "default",
"source": "30d3289c-773c-4152-a9d2-bd8a99c8fd22",
"target": "faf965a4-7530-427b-b1f3-4ba6505c2a08",
"type": "default",
"sourceHandle": "clip",
"targetHandle": "clip"
},
{
"id": "reactflow__edge-30d3289c-773c-4152-a9d2-bd8a99c8fd22clip2-faf965a4-7530-427b-b1f3-4ba6505c2a08clip2",
"type": "default",
"source": "30d3289c-773c-4152-a9d2-bd8a99c8fd22",
"target": "faf965a4-7530-427b-b1f3-4ba6505c2a08",
"type": "default",
"sourceHandle": "clip2",
"targetHandle": "clip2"
},
{
"id": "reactflow__edge-30d3289c-773c-4152-a9d2-bd8a99c8fd22clip-3193ad09-a7c2-4bf4-a3a9-1c61cc33a204clip",
"type": "default",
"source": "30d3289c-773c-4152-a9d2-bd8a99c8fd22",
"target": "3193ad09-a7c2-4bf4-a3a9-1c61cc33a204",
"type": "default",
"sourceHandle": "clip",
"targetHandle": "clip"
},
{
"id": "reactflow__edge-30d3289c-773c-4152-a9d2-bd8a99c8fd22clip2-3193ad09-a7c2-4bf4-a3a9-1c61cc33a204clip2",
"type": "default",
"source": "30d3289c-773c-4152-a9d2-bd8a99c8fd22",
"target": "3193ad09-a7c2-4bf4-a3a9-1c61cc33a204",
"type": "default",
"sourceHandle": "clip2",
"targetHandle": "clip2"
},
{
"id": "reactflow__edge-30d3289c-773c-4152-a9d2-bd8a99c8fd22unet-50a36525-3c0a-4cc5-977c-e4bfc3fd6dfbunet",
"type": "default",
"source": "30d3289c-773c-4152-a9d2-bd8a99c8fd22",
"target": "50a36525-3c0a-4cc5-977c-e4bfc3fd6dfb",
"type": "default",
"sourceHandle": "unet",
"targetHandle": "unet"
},
{
"id": "reactflow__edge-faf965a4-7530-427b-b1f3-4ba6505c2a08conditioning-50a36525-3c0a-4cc5-977c-e4bfc3fd6dfbpositive_conditioning",
"type": "default",
"source": "faf965a4-7530-427b-b1f3-4ba6505c2a08",
"target": "50a36525-3c0a-4cc5-977c-e4bfc3fd6dfb",
"type": "default",
"sourceHandle": "conditioning",
"targetHandle": "positive_conditioning"
},
{
"id": "reactflow__edge-3193ad09-a7c2-4bf4-a3a9-1c61cc33a204conditioning-50a36525-3c0a-4cc5-977c-e4bfc3fd6dfbnegative_conditioning",
"type": "default",
"source": "3193ad09-a7c2-4bf4-a3a9-1c61cc33a204",
"target": "50a36525-3c0a-4cc5-977c-e4bfc3fd6dfb",
"type": "default",
"sourceHandle": "conditioning",
"targetHandle": "negative_conditioning"
},
{
"id": "reactflow__edge-55705012-79b9-4aac-9f26-c0b10309785bnoise-50a36525-3c0a-4cc5-977c-e4bfc3fd6dfbnoise",
"type": "default",
"source": "55705012-79b9-4aac-9f26-c0b10309785b",
"target": "50a36525-3c0a-4cc5-977c-e4bfc3fd6dfb",
"type": "default",
"sourceHandle": "noise",
"targetHandle": "noise"
},
{
"id": "reactflow__edge-50a36525-3c0a-4cc5-977c-e4bfc3fd6dfblatents-63e91020-83b2-4f35-b174-ad9692aabb48latents",
"type": "default",
"source": "50a36525-3c0a-4cc5-977c-e4bfc3fd6dfb",
"target": "63e91020-83b2-4f35-b174-ad9692aabb48",
"type": "default",
"sourceHandle": "latents",
"targetHandle": "latents"
},
{
"id": "reactflow__edge-0093692f-9cf4-454d-a5b8-62f0e3eb3bb8vae-63e91020-83b2-4f35-b174-ad9692aabb48vae",
"type": "default",
"source": "0093692f-9cf4-454d-a5b8-62f0e3eb3bb8",
"target": "63e91020-83b2-4f35-b174-ad9692aabb48",
"type": "default",
"sourceHandle": "vae",
"targetHandle": "vae"
},
{
"id": "reactflow__edge-ade2c0d3-0384-4157-b39b-29ce429cfa15value-faf965a4-7530-427b-b1f3-4ba6505c2a08prompt",
"type": "default",
"source": "ade2c0d3-0384-4157-b39b-29ce429cfa15",
"target": "faf965a4-7530-427b-b1f3-4ba6505c2a08",
"type": "default",
"sourceHandle": "value",
"targetHandle": "prompt"
},
{
"id": "reactflow__edge-719dabe8-8297-4749-aea1-37be301cd425value-3193ad09-a7c2-4bf4-a3a9-1c61cc33a204prompt",
"type": "default",
"source": "719dabe8-8297-4749-aea1-37be301cd425",
"target": "3193ad09-a7c2-4bf4-a3a9-1c61cc33a204",
"type": "default",
"sourceHandle": "value",
"targetHandle": "prompt"
},
{
"id": "reactflow__edge-719dabe8-8297-4749-aea1-37be301cd425value-ad8fa655-3a76-43d0-9c02-4d7644dea650string_left",
"type": "default",
"source": "719dabe8-8297-4749-aea1-37be301cd425",
"target": "ad8fa655-3a76-43d0-9c02-4d7644dea650",
"type": "default",
"sourceHandle": "value",
"targetHandle": "string_left"
},
{
"id": "reactflow__edge-ad8fa655-3a76-43d0-9c02-4d7644dea650value-3193ad09-a7c2-4bf4-a3a9-1c61cc33a204style",
"type": "default",
"source": "ad8fa655-3a76-43d0-9c02-4d7644dea650",
"target": "3193ad09-a7c2-4bf4-a3a9-1c61cc33a204",
"type": "default",
"sourceHandle": "value",
"targetHandle": "style"
},
{
"id": "reactflow__edge-ade2c0d3-0384-4157-b39b-29ce429cfa15value-3774ec24-a69e-4254-864c-097d07a6256fstring_left",
"type": "default",
"source": "ade2c0d3-0384-4157-b39b-29ce429cfa15",
"target": "3774ec24-a69e-4254-864c-097d07a6256f",
"type": "default",
"sourceHandle": "value",
"targetHandle": "string_left"
},
{
"id": "reactflow__edge-3774ec24-a69e-4254-864c-097d07a6256fvalue-faf965a4-7530-427b-b1f3-4ba6505c2a08style",
"type": "default",
"source": "3774ec24-a69e-4254-864c-097d07a6256f",
"target": "faf965a4-7530-427b-b1f3-4ba6505c2a08",
"type": "default",
"sourceHandle": "value",
"targetHandle": "style"
}
]
}
}

View File

@@ -84,12 +84,12 @@
}
}
},
"width": 320,
"height": 259,
"position": {
"x": 1000,
"y": 350
}
},
"width": 320,
"height": 219
},
{
"id": "55705012-79b9-4aac-9f26-c0b10309785b",
@@ -187,12 +187,12 @@
}
}
},
"width": 320,
"height": 388,
"position": {
"x": 600,
"y": 325
}
},
"width": 320,
"height": 388
},
{
"id": "c8d55139-f380-4695-b7f2-8b3d1e1e3db8",
@@ -258,12 +258,12 @@
}
}
},
"width": 320,
"height": 226,
"position": {
"x": 600,
"y": 25
}
},
"width": 320,
"height": 193
},
{
"id": "7d8bf987-284f-413a-b2fd-d825445a5d6c",
@@ -316,12 +316,12 @@
}
}
},
"width": 320,
"height": 259,
"position": {
"x": 1000,
"y": 25
}
},
"width": 320,
"height": 219
},
{
"id": "ea94bc37-d995-4a83-aa99-4af42479f2f2",
@@ -375,12 +375,12 @@
}
}
},
"width": 320,
"height": 32,
"position": {
"x": 600,
"y": 275
}
},
"width": 320,
"height": 32
},
{
"id": "eea2702a-19fb-45b5-9d75-56b4211ec03c",
@@ -393,7 +393,7 @@
"notes": "",
"isIntermediate": true,
"useCache": true,
"version": "1.5.0",
"version": "1.5.1",
"nodePack": "invokeai",
"inputs": {
"positive_conditioning": {
@@ -601,12 +601,12 @@
}
}
},
"width": 320,
"height": 703,
"position": {
"x": 1400,
"y": 25
}
},
"width": 320,
"height": 612
},
{
"id": "58c957f5-0d01-41fc-a803-b2bbf0413d4f",
@@ -713,86 +713,86 @@
}
}
},
"width": 320,
"height": 266,
"position": {
"x": 1800,
"y": 25
}
},
"width": 320,
"height": 224
}
],
"edges": [
{
"id": "reactflow__edge-ea94bc37-d995-4a83-aa99-4af42479f2f2value-55705012-79b9-4aac-9f26-c0b10309785bseed",
"type": "default",
"source": "ea94bc37-d995-4a83-aa99-4af42479f2f2",
"target": "55705012-79b9-4aac-9f26-c0b10309785b",
"type": "default",
"sourceHandle": "value",
"targetHandle": "seed"
},
{
"id": "reactflow__edge-c8d55139-f380-4695-b7f2-8b3d1e1e3db8clip-7d8bf987-284f-413a-b2fd-d825445a5d6cclip",
"type": "default",
"source": "c8d55139-f380-4695-b7f2-8b3d1e1e3db8",
"target": "7d8bf987-284f-413a-b2fd-d825445a5d6c",
"type": "default",
"sourceHandle": "clip",
"targetHandle": "clip"
},
{
"id": "reactflow__edge-c8d55139-f380-4695-b7f2-8b3d1e1e3db8clip-93dc02a4-d05b-48ed-b99c-c9b616af3402clip",
"type": "default",
"source": "c8d55139-f380-4695-b7f2-8b3d1e1e3db8",
"target": "93dc02a4-d05b-48ed-b99c-c9b616af3402",
"type": "default",
"sourceHandle": "clip",
"targetHandle": "clip"
},
{
"id": "reactflow__edge-55705012-79b9-4aac-9f26-c0b10309785bnoise-eea2702a-19fb-45b5-9d75-56b4211ec03cnoise",
"type": "default",
"source": "55705012-79b9-4aac-9f26-c0b10309785b",
"target": "eea2702a-19fb-45b5-9d75-56b4211ec03c",
"type": "default",
"sourceHandle": "noise",
"targetHandle": "noise"
},
{
"id": "reactflow__edge-7d8bf987-284f-413a-b2fd-d825445a5d6cconditioning-eea2702a-19fb-45b5-9d75-56b4211ec03cpositive_conditioning",
"type": "default",
"source": "7d8bf987-284f-413a-b2fd-d825445a5d6c",
"target": "eea2702a-19fb-45b5-9d75-56b4211ec03c",
"type": "default",
"sourceHandle": "conditioning",
"targetHandle": "positive_conditioning"
},
{
"id": "reactflow__edge-93dc02a4-d05b-48ed-b99c-c9b616af3402conditioning-eea2702a-19fb-45b5-9d75-56b4211ec03cnegative_conditioning",
"type": "default",
"source": "93dc02a4-d05b-48ed-b99c-c9b616af3402",
"target": "eea2702a-19fb-45b5-9d75-56b4211ec03c",
"type": "default",
"sourceHandle": "conditioning",
"targetHandle": "negative_conditioning"
},
{
"id": "reactflow__edge-c8d55139-f380-4695-b7f2-8b3d1e1e3db8unet-eea2702a-19fb-45b5-9d75-56b4211ec03cunet",
"type": "default",
"source": "c8d55139-f380-4695-b7f2-8b3d1e1e3db8",
"target": "eea2702a-19fb-45b5-9d75-56b4211ec03c",
"type": "default",
"sourceHandle": "unet",
"targetHandle": "unet"
},
{
"id": "reactflow__edge-eea2702a-19fb-45b5-9d75-56b4211ec03clatents-58c957f5-0d01-41fc-a803-b2bbf0413d4flatents",
"type": "default",
"source": "eea2702a-19fb-45b5-9d75-56b4211ec03c",
"target": "58c957f5-0d01-41fc-a803-b2bbf0413d4f",
"type": "default",
"sourceHandle": "latents",
"targetHandle": "latents"
},
{
"id": "reactflow__edge-c8d55139-f380-4695-b7f2-8b3d1e1e3db8vae-58c957f5-0d01-41fc-a803-b2bbf0413d4fvae",
"type": "default",
"source": "c8d55139-f380-4695-b7f2-8b3d1e1e3db8",
"target": "58c957f5-0d01-41fc-a803-b2bbf0413d4f",
"type": "default",
"sourceHandle": "vae",
"targetHandle": "vae"
}
]
}
}


@@ -25,10 +25,9 @@
}
],
"meta": {
"version": "2.0.0",
"category": "default"
"category": "default",
"version": "2.0.0"
},
"id": "a9d70c39-4cdd-4176-9942-8ff3fe32d3b1",
"nodes": [
{
"id": "85b77bb2-c67a-416a-b3e8-291abe746c44",
@@ -80,12 +79,12 @@
}
}
},
"width": 320,
"height": 256,
"position": {
"x": 3425,
"y": -300
}
},
"width": 320,
"height": 219
},
{
"id": "24e9d7ed-4836-4ec4-8f9e-e747721f9818",
@@ -150,12 +149,12 @@
}
}
},
"width": 320,
"height": 227,
"position": {
"x": 2500,
"y": -600
}
},
"width": 320,
"height": 193
},
{
"id": "c41e705b-f2e3-4d1a-83c4-e34bb9344966",
@@ -243,12 +242,12 @@
}
}
},
"width": 320,
"height": 252,
"position": {
"x": 2975,
"y": -600
}
},
"width": 320,
"height": 218
},
{
"id": "c3fa6872-2599-4a82-a596-b3446a66cf8b",
@@ -300,12 +299,12 @@
}
}
},
"width": 320,
"height": 256,
"position": {
"x": 3425,
"y": -575
}
},
"width": 320,
"height": 219
},
{
"id": "ad487d0c-dcbb-49c5-bb8e-b28d4cbc5a63",
@@ -318,7 +317,7 @@
"notes": "",
"isIntermediate": true,
"useCache": true,
"version": "1.5.0",
"version": "1.5.1",
"inputs": {
"positive_conditioning": {
"id": "025ff44b-c4c6-4339-91b4-5f461e2cadc5",
@@ -525,12 +524,12 @@
}
}
},
"width": 320,
"height": 705,
"position": {
"x": 3975,
"y": -575
}
},
"width": 320,
"height": 612
},
{
"id": "ea18915f-2c5b-4569-b725-8e9e9122e8d3",
@@ -627,12 +626,12 @@
}
}
},
"width": 320,
"height": 32,
"position": {
"x": 3425,
"y": 75
}
},
"width": 320,
"height": 24
},
{
"id": "6fd74a17-6065-47a5-b48b-f4e2b8fa7953",
@@ -685,12 +684,12 @@
}
}
},
"width": 320,
"height": 32,
"position": {
"x": 3425,
"y": 0
}
},
"width": 320,
"height": 24
},
{
"id": "a9683c0a-6b1f-4a5e-8187-c57e764b3400",
@@ -796,106 +795,106 @@
}
}
},
"width": 320,
"height": 267,
"position": {
"x": 4450,
"y": -550
}
},
"width": 320,
"height": 224
}
],
"edges": [
{
"id": "6fd74a17-6065-47a5-b48b-f4e2b8fa7953-ea18915f-2c5b-4569-b725-8e9e9122e8d3-collapsed",
"type": "collapsed",
"source": "6fd74a17-6065-47a5-b48b-f4e2b8fa7953",
"target": "ea18915f-2c5b-4569-b725-8e9e9122e8d3",
"type": "collapsed"
"target": "ea18915f-2c5b-4569-b725-8e9e9122e8d3"
},
{
"id": "reactflow__edge-24e9d7ed-4836-4ec4-8f9e-e747721f9818clip-c41e705b-f2e3-4d1a-83c4-e34bb9344966clip",
"type": "default",
"source": "24e9d7ed-4836-4ec4-8f9e-e747721f9818",
"target": "c41e705b-f2e3-4d1a-83c4-e34bb9344966",
"type": "default",
"sourceHandle": "clip",
"targetHandle": "clip"
},
{
"id": "reactflow__edge-c41e705b-f2e3-4d1a-83c4-e34bb9344966clip-c3fa6872-2599-4a82-a596-b3446a66cf8bclip",
"type": "default",
"source": "c41e705b-f2e3-4d1a-83c4-e34bb9344966",
"target": "c3fa6872-2599-4a82-a596-b3446a66cf8b",
"type": "default",
"sourceHandle": "clip",
"targetHandle": "clip"
},
{
"id": "reactflow__edge-24e9d7ed-4836-4ec4-8f9e-e747721f9818unet-c41e705b-f2e3-4d1a-83c4-e34bb9344966unet",
"type": "default",
"source": "24e9d7ed-4836-4ec4-8f9e-e747721f9818",
"target": "c41e705b-f2e3-4d1a-83c4-e34bb9344966",
"type": "default",
"sourceHandle": "unet",
"targetHandle": "unet"
},
{
"id": "reactflow__edge-c41e705b-f2e3-4d1a-83c4-e34bb9344966unet-ad487d0c-dcbb-49c5-bb8e-b28d4cbc5a63unet",
"type": "default",
"source": "c41e705b-f2e3-4d1a-83c4-e34bb9344966",
"target": "ad487d0c-dcbb-49c5-bb8e-b28d4cbc5a63",
"type": "default",
"sourceHandle": "unet",
"targetHandle": "unet"
},
{
"id": "reactflow__edge-85b77bb2-c67a-416a-b3e8-291abe746c44conditioning-ad487d0c-dcbb-49c5-bb8e-b28d4cbc5a63negative_conditioning",
"type": "default",
"source": "85b77bb2-c67a-416a-b3e8-291abe746c44",
"target": "ad487d0c-dcbb-49c5-bb8e-b28d4cbc5a63",
"type": "default",
"sourceHandle": "conditioning",
"targetHandle": "negative_conditioning"
},
{
"id": "reactflow__edge-c3fa6872-2599-4a82-a596-b3446a66cf8bconditioning-ad487d0c-dcbb-49c5-bb8e-b28d4cbc5a63positive_conditioning",
"type": "default",
"source": "c3fa6872-2599-4a82-a596-b3446a66cf8b",
"target": "ad487d0c-dcbb-49c5-bb8e-b28d4cbc5a63",
"type": "default",
"sourceHandle": "conditioning",
"targetHandle": "positive_conditioning"
},
{
"id": "reactflow__edge-ea18915f-2c5b-4569-b725-8e9e9122e8d3noise-ad487d0c-dcbb-49c5-bb8e-b28d4cbc5a63noise",
"type": "default",
"source": "ea18915f-2c5b-4569-b725-8e9e9122e8d3",
"target": "ad487d0c-dcbb-49c5-bb8e-b28d4cbc5a63",
"type": "default",
"sourceHandle": "noise",
"targetHandle": "noise"
},
{
"id": "reactflow__edge-6fd74a17-6065-47a5-b48b-f4e2b8fa7953value-ea18915f-2c5b-4569-b725-8e9e9122e8d3seed",
"type": "default",
"source": "6fd74a17-6065-47a5-b48b-f4e2b8fa7953",
"target": "ea18915f-2c5b-4569-b725-8e9e9122e8d3",
"type": "default",
"sourceHandle": "value",
"targetHandle": "seed"
},
{
"id": "reactflow__edge-ad487d0c-dcbb-49c5-bb8e-b28d4cbc5a63latents-a9683c0a-6b1f-4a5e-8187-c57e764b3400latents",
"type": "default",
"source": "ad487d0c-dcbb-49c5-bb8e-b28d4cbc5a63",
"target": "a9683c0a-6b1f-4a5e-8187-c57e764b3400",
"type": "default",
"sourceHandle": "latents",
"targetHandle": "latents"
},
{
"id": "reactflow__edge-24e9d7ed-4836-4ec4-8f9e-e747721f9818vae-a9683c0a-6b1f-4a5e-8187-c57e764b3400vae",
"type": "default",
"source": "24e9d7ed-4836-4ec4-8f9e-e747721f9818",
"target": "a9683c0a-6b1f-4a5e-8187-c57e764b3400",
"type": "default",
"sourceHandle": "vae",
"targetHandle": "vae"
},
{
"id": "reactflow__edge-c41e705b-f2e3-4d1a-83c4-e34bb9344966clip-85b77bb2-c67a-416a-b3e8-291abe746c44clip",
"type": "default",
"source": "c41e705b-f2e3-4d1a-83c4-e34bb9344966",
"target": "85b77bb2-c67a-416a-b3e8-291abe746c44",
"type": "default",
"sourceHandle": "clip",
"targetHandle": "clip"
}


@@ -31,6 +31,7 @@ class WorkflowRecordOrderBy(str, Enum, metaclass=MetaEnum):
class WorkflowCategory(str, Enum, metaclass=MetaEnum):
User = "user"
Default = "default"
Project = "project"
class WorkflowMeta(BaseModel):


@@ -169,7 +169,7 @@ class SqliteWorkflowRecordsStorage(WorkflowRecordsStorageBase):
self._cursor.execute(count_query, count_params)
total = self._cursor.fetchone()[0]
pages = total // per_page + (total % per_page > 0)
return PaginatedResults(
items=workflows,


@@ -0,0 +1,67 @@
import cProfile
from logging import Logger
from pathlib import Path
from typing import Optional
class Profiler:
"""
Simple wrapper around cProfile.
Usage
```
# Create a profiler
profiler = Profiler(logger, output_dir, "sql_query_perf")
# Start a new profile
profiler.start("my_profile")
# Do stuff
profiler.stop()
```
Visualize a profile as a flamegraph with [snakeviz](https://jiffyclub.github.io/snakeviz/)
```sh
snakeviz my_profile.prof
```
Visualize a profile as directed graph with [graphviz](https://graphviz.org/download/) & [gprof2dot](https://github.com/jrfonseca/gprof2dot)
```sh
gprof2dot -f pstats my_profile.prof | dot -Tpng -o my_profile.png
# SVG or PDF may be nicer - you can search for function names
gprof2dot -f pstats my_profile.prof | dot -Tsvg -o my_profile.svg
gprof2dot -f pstats my_profile.prof | dot -Tpdf -o my_profile.pdf
```
"""
def __init__(self, logger: Logger, output_dir: Path, prefix: Optional[str] = None) -> None:
self._logger = logger.getChild(f"profiler.{prefix}" if prefix else "profiler")
self._output_dir = output_dir
self._output_dir.mkdir(parents=True, exist_ok=True)
self._profiler: Optional[cProfile.Profile] = None
self._prefix = prefix
self.profile_id: Optional[str] = None
def start(self, profile_id: str) -> None:
if self._profiler:
self.stop()
self.profile_id = profile_id
self._profiler = cProfile.Profile()
self._profiler.enable()
self._logger.info(f"Started profiling {self.profile_id}.")
def stop(self) -> Path:
if not self._profiler:
raise RuntimeError("Profiler not initialized. Call start() first.")
self._profiler.disable()
filename = f"{self._prefix}_{self.profile_id}.prof" if self._prefix else f"{self.profile_id}.prof"
path = Path(self._output_dir, filename)
self._profiler.dump_stats(path)
self._logger.info(f"Stopped profiling, profile dumped to {path}.")
self._profiler = None
self.profile_id = None
return path
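
Beyond the docstring above, a minimal usage sketch; the logger name, output directory, profile id, and profiled workload are illustrative placeholders, not part of this commit:

```python
import logging
from pathlib import Path

def run_expensive_workload() -> int:
    # stand-in for the code under measurement
    return sum(i * i for i in range(1_000_000))

logger = logging.getLogger("invokeai")
profiler = Profiler(logger, Path("profiles"), prefix="graph_execution")
profiler.start("batch_42")   # hypothetical profile id
run_expensive_workload()
prof_path = profiler.stop()  # writes profiles/graph_execution_batch_42.prof
# then, e.g.: snakeviz profiles/graph_execution_batch_42.prof
```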


@@ -0,0 +1,109 @@
import pathlib
from typing import Literal, Union
import cv2
import numpy as np
import torch
import torch.nn.functional as F
from einops import repeat
from PIL import Image
from torchvision.transforms import Compose
from invokeai.app.services.config.config_default import InvokeAIAppConfig
from invokeai.backend.image_util.depth_anything.model.dpt import DPT_DINOv2
from invokeai.backend.image_util.depth_anything.utilities.util import NormalizeImage, PrepareForNet, Resize
from invokeai.backend.util.devices import choose_torch_device
from invokeai.backend.util.util import download_with_progress_bar
config = InvokeAIAppConfig.get_config()
DEPTH_ANYTHING_MODELS = {
"large": {
"url": "https://huggingface.co/spaces/LiheYoung/Depth-Anything/resolve/main/checkpoints/depth_anything_vitl14.pth?download=true",
"local": "any/annotators/depth_anything/depth_anything_vitl14.pth",
},
"base": {
"url": "https://huggingface.co/spaces/LiheYoung/Depth-Anything/resolve/main/checkpoints/depth_anything_vitb14.pth?download=true",
"local": "any/annotators/depth_anything/depth_anything_vitb14.pth",
},
"small": {
"url": "https://huggingface.co/spaces/LiheYoung/Depth-Anything/resolve/main/checkpoints/depth_anything_vits14.pth?download=true",
"local": "any/annotators/depth_anything/depth_anything_vits14.pth",
},
}
transform = Compose(
[
Resize(
width=518,
height=518,
resize_target=False,
keep_aspect_ratio=True,
ensure_multiple_of=14,
resize_method="lower_bound",
image_interpolation_method=cv2.INTER_CUBIC,
),
NormalizeImage(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
PrepareForNet(),
]
)
class DepthAnythingDetector:
def __init__(self) -> None:
self.model = None
self.model_size: Union[Literal["large", "base", "small"], None] = None
def load_model(self, model_size: Literal["large", "base", "small"]):
DEPTH_ANYTHING_MODEL_PATH = pathlib.Path(config.models_path / DEPTH_ANYTHING_MODELS[model_size]["local"])
if not DEPTH_ANYTHING_MODEL_PATH.exists():
download_with_progress_bar(DEPTH_ANYTHING_MODELS[model_size]["url"], DEPTH_ANYTHING_MODEL_PATH)
if not self.model or model_size != self.model_size:
del self.model
self.model_size = model_size
match self.model_size:
case "small":
self.model = DPT_DINOv2(encoder="vits", features=64, out_channels=[48, 96, 192, 384])
case "base":
self.model = DPT_DINOv2(encoder="vitb", features=128, out_channels=[96, 192, 384, 768])
case "large":
self.model = DPT_DINOv2(encoder="vitl", features=256, out_channels=[256, 512, 1024, 1024])
case _:
raise TypeError("Not a supported model")
self.model.load_state_dict(torch.load(DEPTH_ANYTHING_MODEL_PATH.as_posix(), map_location="cpu"))
self.model.eval()
self.model.to(choose_torch_device())
return self.model
def to(self, device):
self.model.to(device)
return self
def __call__(self, image, resolution=512, offload=False):
image = np.array(image, dtype=np.uint8)
image = image[:, :, ::-1] / 255.0
image_height, image_width = image.shape[:2]
image = transform({"image": image})["image"]
image = torch.from_numpy(image).unsqueeze(0).to(choose_torch_device())
with torch.no_grad():
depth = self.model(image)
depth = F.interpolate(depth[None], (image_height, image_width), mode="bilinear", align_corners=False)[0, 0]
depth = (depth - depth.min()) / (depth.max() - depth.min()) * 255.0
depth_map = repeat(depth, "h w -> h w 3").cpu().numpy().astype(np.uint8)
depth_map = Image.fromarray(depth_map)
new_height = int(image_height * (resolution / image_width))
depth_map = depth_map.resize((resolution, new_height))
if offload:
del self.model
return depth_map
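
A rough usage sketch for the detector, assuming the checkpoint downloads successfully and a PIL image is at hand; the file names are illustrative:

```python
from PIL import Image

detector = DepthAnythingDetector()
detector.load_model("small")                 # fetches the checkpoint on first use
image = Image.open("portrait.png").convert("RGB")
depth_map = detector(image, resolution=512)  # PIL image, values normalized to 0-255
depth_map.save("portrait_depth.png")
```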


@@ -0,0 +1,145 @@
import torch.nn as nn
def _make_scratch(in_shape, out_shape, groups=1, expand=False):
scratch = nn.Module()
out_shape1 = out_shape
out_shape2 = out_shape
out_shape3 = out_shape
if len(in_shape) >= 4:
out_shape4 = out_shape
if expand:
out_shape1 = out_shape
out_shape2 = out_shape * 2
out_shape3 = out_shape * 4
if len(in_shape) >= 4:
out_shape4 = out_shape * 8
scratch.layer1_rn = nn.Conv2d(
in_shape[0], out_shape1, kernel_size=3, stride=1, padding=1, bias=False, groups=groups
)
scratch.layer2_rn = nn.Conv2d(
in_shape[1], out_shape2, kernel_size=3, stride=1, padding=1, bias=False, groups=groups
)
scratch.layer3_rn = nn.Conv2d(
in_shape[2], out_shape3, kernel_size=3, stride=1, padding=1, bias=False, groups=groups
)
if len(in_shape) >= 4:
scratch.layer4_rn = nn.Conv2d(
in_shape[3], out_shape4, kernel_size=3, stride=1, padding=1, bias=False, groups=groups
)
return scratch
class ResidualConvUnit(nn.Module):
"""Residual convolution module."""
def __init__(self, features, activation, bn):
"""Init.
Args:
features (int): number of features
"""
super().__init__()
self.bn = bn
self.groups = 1
self.conv1 = nn.Conv2d(features, features, kernel_size=3, stride=1, padding=1, bias=True, groups=self.groups)
self.conv2 = nn.Conv2d(features, features, kernel_size=3, stride=1, padding=1, bias=True, groups=self.groups)
if self.bn:
self.bn1 = nn.BatchNorm2d(features)
self.bn2 = nn.BatchNorm2d(features)
self.activation = activation
self.skip_add = nn.quantized.FloatFunctional()
def forward(self, x):
"""Forward pass.
Args:
x (tensor): input
Returns:
tensor: output
"""
out = self.activation(x)
out = self.conv1(out)
if self.bn:
out = self.bn1(out)
out = self.activation(out)
out = self.conv2(out)
if self.bn:
out = self.bn2(out)
if self.groups > 1:
out = self.conv_merge(out)
return self.skip_add.add(out, x)
class FeatureFusionBlock(nn.Module):
"""Feature fusion block."""
def __init__(self, features, activation, deconv=False, bn=False, expand=False, align_corners=True, size=None):
"""Init.
Args:
features (int): number of features
"""
super(FeatureFusionBlock, self).__init__()
self.deconv = deconv
self.align_corners = align_corners
self.groups = 1
self.expand = expand
out_features = features
if self.expand:
out_features = features // 2
self.out_conv = nn.Conv2d(features, out_features, kernel_size=1, stride=1, padding=0, bias=True, groups=1)
self.resConfUnit1 = ResidualConvUnit(features, activation, bn)
self.resConfUnit2 = ResidualConvUnit(features, activation, bn)
self.skip_add = nn.quantized.FloatFunctional()
self.size = size
def forward(self, *xs, size=None):
"""Forward pass.
Returns:
tensor: output
"""
output = xs[0]
if len(xs) == 2:
res = self.resConfUnit1(xs[1])
output = self.skip_add.add(output, res)
output = self.resConfUnit2(output)
if (size is None) and (self.size is None):
modifier = {"scale_factor": 2}
elif size is None:
modifier = {"size": self.size}
else:
modifier = {"size": size}
output = nn.functional.interpolate(output, **modifier, mode="bilinear", align_corners=self.align_corners)
output = self.out_conv(output)
return output


@@ -0,0 +1,183 @@
from pathlib import Path
import torch
import torch.nn as nn
import torch.nn.functional as F
from .blocks import FeatureFusionBlock, _make_scratch
torchhub_path = Path(__file__).parent.parent / "torchhub"
def _make_fusion_block(features, use_bn, size=None):
return FeatureFusionBlock(
features,
nn.ReLU(False),
deconv=False,
bn=use_bn,
expand=False,
align_corners=True,
size=size,
)
class DPTHead(nn.Module):
def __init__(self, nclass, in_channels, features, out_channels, use_bn=False, use_clstoken=False):
super(DPTHead, self).__init__()
self.nclass = nclass
self.use_clstoken = use_clstoken
self.projects = nn.ModuleList(
[
nn.Conv2d(
in_channels=in_channels,
out_channels=out_channel,
kernel_size=1,
stride=1,
padding=0,
)
for out_channel in out_channels
]
)
self.resize_layers = nn.ModuleList(
[
nn.ConvTranspose2d(
in_channels=out_channels[0], out_channels=out_channels[0], kernel_size=4, stride=4, padding=0
),
nn.ConvTranspose2d(
in_channels=out_channels[1], out_channels=out_channels[1], kernel_size=2, stride=2, padding=0
),
nn.Identity(),
nn.Conv2d(
in_channels=out_channels[3], out_channels=out_channels[3], kernel_size=3, stride=2, padding=1
),
]
)
if use_clstoken:
self.readout_projects = nn.ModuleList()
for _ in range(len(self.projects)):
self.readout_projects.append(nn.Sequential(nn.Linear(2 * in_channels, in_channels), nn.GELU()))
self.scratch = _make_scratch(
out_channels,
features,
groups=1,
expand=False,
)
self.scratch.stem_transpose = None
self.scratch.refinenet1 = _make_fusion_block(features, use_bn)
self.scratch.refinenet2 = _make_fusion_block(features, use_bn)
self.scratch.refinenet3 = _make_fusion_block(features, use_bn)
self.scratch.refinenet4 = _make_fusion_block(features, use_bn)
head_features_1 = features
head_features_2 = 32
if nclass > 1:
self.scratch.output_conv = nn.Sequential(
nn.Conv2d(head_features_1, head_features_1, kernel_size=3, stride=1, padding=1),
nn.ReLU(True),
nn.Conv2d(head_features_1, nclass, kernel_size=1, stride=1, padding=0),
)
else:
self.scratch.output_conv1 = nn.Conv2d(
head_features_1, head_features_1 // 2, kernel_size=3, stride=1, padding=1
)
self.scratch.output_conv2 = nn.Sequential(
nn.Conv2d(head_features_1 // 2, head_features_2, kernel_size=3, stride=1, padding=1),
nn.ReLU(True),
nn.Conv2d(head_features_2, 1, kernel_size=1, stride=1, padding=0),
nn.ReLU(True),
nn.Identity(),
)
def forward(self, out_features, patch_h, patch_w):
out = []
for i, x in enumerate(out_features):
if self.use_clstoken:
x, cls_token = x[0], x[1]
readout = cls_token.unsqueeze(1).expand_as(x)
x = self.readout_projects[i](torch.cat((x, readout), -1))
else:
x = x[0]
x = x.permute(0, 2, 1).reshape((x.shape[0], x.shape[-1], patch_h, patch_w))
x = self.projects[i](x)
x = self.resize_layers[i](x)
out.append(x)
layer_1, layer_2, layer_3, layer_4 = out
layer_1_rn = self.scratch.layer1_rn(layer_1)
layer_2_rn = self.scratch.layer2_rn(layer_2)
layer_3_rn = self.scratch.layer3_rn(layer_3)
layer_4_rn = self.scratch.layer4_rn(layer_4)
path_4 = self.scratch.refinenet4(layer_4_rn, size=layer_3_rn.shape[2:])
path_3 = self.scratch.refinenet3(path_4, layer_3_rn, size=layer_2_rn.shape[2:])
path_2 = self.scratch.refinenet2(path_3, layer_2_rn, size=layer_1_rn.shape[2:])
path_1 = self.scratch.refinenet1(path_2, layer_1_rn)
out = self.scratch.output_conv1(path_1)
out = F.interpolate(out, (int(patch_h * 14), int(patch_w * 14)), mode="bilinear", align_corners=True)
out = self.scratch.output_conv2(out)
return out
class DPT_DINOv2(nn.Module):
def __init__(
self,
features,
out_channels,
encoder="vitl",
use_bn=False,
use_clstoken=False,
):
super(DPT_DINOv2, self).__init__()
assert encoder in ["vits", "vitb", "vitl"]
# # in case the Internet connection is not stable, please load the DINOv2 locally
# if use_local:
# self.pretrained = torch.hub.load(
# torchhub_path / "facebookresearch_dinov2_main",
# "dinov2_{:}14".format(encoder),
# source="local",
# pretrained=False,
# )
# else:
# self.pretrained = torch.hub.load(
# "facebookresearch/dinov2",
# "dinov2_{:}14".format(encoder),
# )
self.pretrained = torch.hub.load(
"facebookresearch/dinov2",
"dinov2_{:}14".format(encoder),
)
dim = self.pretrained.blocks[0].attn.qkv.in_features
self.depth_head = DPTHead(1, dim, features, out_channels=out_channels, use_bn=use_bn, use_clstoken=use_clstoken)
def forward(self, x):
h, w = x.shape[-2:]
features = self.pretrained.get_intermediate_layers(x, 4, return_class_token=True)
patch_h, patch_w = h // 14, w // 14
depth = self.depth_head(features, patch_h, patch_w)
depth = F.interpolate(depth, size=(h, w), mode="bilinear", align_corners=True)
depth = F.relu(depth)
return depth.squeeze(1)


@@ -0,0 +1,227 @@
import math
import cv2
import numpy as np
import torch
import torch.nn.functional as F
def apply_min_size(sample, size, image_interpolation_method=cv2.INTER_AREA):
"""Rezise the sample to ensure the given size. Keeps aspect ratio.
Args:
sample (dict): sample
size (tuple): image size
Returns:
tuple: new size
"""
shape = list(sample["disparity"].shape)
if shape[0] >= size[0] and shape[1] >= size[1]:
return sample
scale = [0, 0]
scale[0] = size[0] / shape[0]
scale[1] = size[1] / shape[1]
scale = max(scale)
shape[0] = math.ceil(scale * shape[0])
shape[1] = math.ceil(scale * shape[1])
# resize
sample["image"] = cv2.resize(sample["image"], tuple(shape[::-1]), interpolation=image_interpolation_method)
sample["disparity"] = cv2.resize(sample["disparity"], tuple(shape[::-1]), interpolation=cv2.INTER_NEAREST)
sample["mask"] = cv2.resize(
sample["mask"].astype(np.float32),
tuple(shape[::-1]),
interpolation=cv2.INTER_NEAREST,
)
sample["mask"] = sample["mask"].astype(bool)
return tuple(shape)
class Resize(object):
"""Resize sample to given size (width, height)."""
def __init__(
self,
width,
height,
resize_target=True,
keep_aspect_ratio=False,
ensure_multiple_of=1,
resize_method="lower_bound",
image_interpolation_method=cv2.INTER_AREA,
):
"""Init.
Args:
width (int): desired output width
height (int): desired output height
resize_target (bool, optional):
True: Resize the full sample (image, mask, target).
False: Resize image only.
Defaults to True.
keep_aspect_ratio (bool, optional):
True: Keep the aspect ratio of the input sample.
Output sample might not have the given width and height, and
resize behaviour depends on the parameter 'resize_method'.
Defaults to False.
ensure_multiple_of (int, optional):
Output width and height is constrained to be multiple of this parameter.
Defaults to 1.
resize_method (str, optional):
"lower_bound": Output will be at least as large as the given size.
"upper_bound": Output will be at max as large as the given size. (Output size might be smaller
than given size.)
"minimal": Scale as least as possible. (Output size might be smaller than given size.)
Defaults to "lower_bound".
"""
self.__width = width
self.__height = height
self.__resize_target = resize_target
self.__keep_aspect_ratio = keep_aspect_ratio
self.__multiple_of = ensure_multiple_of
self.__resize_method = resize_method
self.__image_interpolation_method = image_interpolation_method
def constrain_to_multiple_of(self, x, min_val=0, max_val=None):
y = (np.round(x / self.__multiple_of) * self.__multiple_of).astype(int)
if max_val is not None and y > max_val:
y = (np.floor(x / self.__multiple_of) * self.__multiple_of).astype(int)
if y < min_val:
y = (np.ceil(x / self.__multiple_of) * self.__multiple_of).astype(int)
return y
def get_size(self, width, height):
# determine new height and width
scale_height = self.__height / height
scale_width = self.__width / width
if self.__keep_aspect_ratio:
if self.__resize_method == "lower_bound":
# scale such that output size is lower bound
if scale_width > scale_height:
# fit width
scale_height = scale_width
else:
# fit height
scale_width = scale_height
elif self.__resize_method == "upper_bound":
# scale such that output size is upper bound
if scale_width < scale_height:
# fit width
scale_height = scale_width
else:
# fit height
scale_width = scale_height
elif self.__resize_method == "minimal":
# scale as little as possible
if abs(1 - scale_width) < abs(1 - scale_height):
# fit width
scale_height = scale_width
else:
# fit height
scale_width = scale_height
else:
raise ValueError(f"resize_method {self.__resize_method} not implemented")
if self.__resize_method == "lower_bound":
new_height = self.constrain_to_multiple_of(scale_height * height, min_val=self.__height)
new_width = self.constrain_to_multiple_of(scale_width * width, min_val=self.__width)
elif self.__resize_method == "upper_bound":
new_height = self.constrain_to_multiple_of(scale_height * height, max_val=self.__height)
new_width = self.constrain_to_multiple_of(scale_width * width, max_val=self.__width)
elif self.__resize_method == "minimal":
new_height = self.constrain_to_multiple_of(scale_height * height)
new_width = self.constrain_to_multiple_of(scale_width * width)
else:
raise ValueError(f"resize_method {self.__resize_method} not implemented")
return (new_width, new_height)
def __call__(self, sample):
width, height = self.get_size(sample["image"].shape[1], sample["image"].shape[0])
# resize sample
sample["image"] = cv2.resize(
sample["image"],
(width, height),
interpolation=self.__image_interpolation_method,
)
if self.__resize_target:
if "disparity" in sample:
sample["disparity"] = cv2.resize(
sample["disparity"],
(width, height),
interpolation=cv2.INTER_NEAREST,
)
if "depth" in sample:
sample["depth"] = cv2.resize(sample["depth"], (width, height), interpolation=cv2.INTER_NEAREST)
if "semseg_mask" in sample:
# sample["semseg_mask"] = cv2.resize(
# sample["semseg_mask"], (width, height), interpolation=cv2.INTER_NEAREST
# )
sample["semseg_mask"] = F.interpolate(
torch.from_numpy(sample["semseg_mask"]).float()[None, None, ...], (height, width), mode="nearest"
).numpy()[0, 0]
if "mask" in sample:
sample["mask"] = cv2.resize(
sample["mask"].astype(np.float32),
(width, height),
interpolation=cv2.INTER_NEAREST,
)
# sample["mask"] = sample["mask"].astype(bool)
# print(sample['image'].shape, sample['depth'].shape)
return sample
class NormalizeImage(object):
"""Normlize image by given mean and std."""
def __init__(self, mean, std):
self.__mean = mean
self.__std = std
def __call__(self, sample):
sample["image"] = (sample["image"] - self.__mean) / self.__std
return sample
class PrepareForNet(object):
"""Prepare sample for usage as network input."""
def __init__(self):
pass
def __call__(self, sample):
image = np.transpose(sample["image"], (2, 0, 1))
sample["image"] = np.ascontiguousarray(image).astype(np.float32)
if "mask" in sample:
sample["mask"] = sample["mask"].astype(np.float32)
sample["mask"] = np.ascontiguousarray(sample["mask"])
if "depth" in sample:
depth = sample["depth"].astype(np.float32)
sample["depth"] = np.ascontiguousarray(depth)
if "semseg_mask" in sample:
sample["semseg_mask"] = sample["semseg_mask"].astype(np.float32)
sample["semseg_mask"] = np.ascontiguousarray(sample["semseg_mask"])
return sample
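
The three `resize_method` policies in `Resize.get_size` differ only in how the width and height scale factors are reconciled. A small illustration with the defaults used elsewhere in this diff (518x518 target, multiples of 14); the outputs are hand-computed from the logic above:

```python
r_low = Resize(width=518, height=518, keep_aspect_ratio=True,
               ensure_multiple_of=14, resize_method="lower_bound")
r_up = Resize(width=518, height=518, keep_aspect_ratio=True,
              ensure_multiple_of=14, resize_method="upper_bound")
r_min = Resize(width=518, height=518, keep_aspect_ratio=True,
               ensure_multiple_of=14, resize_method="minimal")

# For a 640x480 input (width, height):
print(r_low.get_size(640, 480))  # (686, 518): both sides at least 518
print(r_up.get_size(640, 480))   # (518, 392): both sides at most 518
print(r_min.get_size(640, 480))  # (686, 518): smallest deviation from the original scale
```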


@@ -0,0 +1,281 @@
"""Utility (backend) functions used by model_install.py"""
import re
from logging import Logger
from pathlib import Path
from typing import Any, Dict, List, Optional
import omegaconf
from huggingface_hub import HfFolder
from pydantic import BaseModel, Field
from pydantic.dataclasses import dataclass
from pydantic.networks import AnyHttpUrl
from requests import HTTPError
from tqdm import tqdm
import invokeai.configs as configs
from invokeai.app.services.config import InvokeAIAppConfig
from invokeai.app.services.download import DownloadQueueService
from invokeai.app.services.events.events_base import EventServiceBase
from invokeai.app.services.image_files.image_files_disk import DiskImageFileStorage
from invokeai.app.services.model_install import (
HFModelSource,
LocalModelSource,
ModelInstallService,
ModelInstallServiceBase,
ModelSource,
URLModelSource,
)
from invokeai.app.services.model_records import ModelRecordServiceBase, ModelRecordServiceSQL
from invokeai.app.services.shared.sqlite.sqlite_util import init_db
from invokeai.backend.model_manager import (
BaseModelType,
InvalidModelConfigException,
ModelType,
)
from invokeai.backend.model_manager.metadata import UnknownMetadataException
from invokeai.backend.util.logging import InvokeAILogger
# name of the starter models file
INITIAL_MODELS = "INITIAL_MODELS2.yaml"
def initialize_record_store(app_config: InvokeAIAppConfig) -> ModelRecordServiceBase:
"""Return an initialized ModelConfigRecordServiceBase object."""
logger = InvokeAILogger.get_logger(config=app_config)
image_files = DiskImageFileStorage(f"{app_config.output_path}/images")
db = init_db(config=app_config, logger=logger, image_files=image_files)
obj: ModelRecordServiceBase = ModelRecordServiceSQL(db)
return obj
def initialize_installer(
app_config: InvokeAIAppConfig, event_bus: Optional[EventServiceBase] = None
) -> ModelInstallServiceBase:
"""Return an initialized ModelInstallService object."""
record_store = initialize_record_store(app_config)
metadata_store = record_store.metadata_store
download_queue = DownloadQueueService()
installer = ModelInstallService(
app_config=app_config,
record_store=record_store,
metadata_store=metadata_store,
download_queue=download_queue,
event_bus=event_bus,
)
download_queue.start()
installer.start()
return installer
class UnifiedModelInfo(BaseModel):
"""Catchall class for information in INITIAL_MODELS2.yaml."""
name: Optional[str] = None
base: Optional[BaseModelType] = None
type: Optional[ModelType] = None
source: Optional[str] = None
subfolder: Optional[str] = None
description: Optional[str] = None
recommended: bool = False
installed: bool = False
default: bool = False
requires: List[str] = Field(default_factory=list)
@dataclass
class InstallSelections:
"""Lists of models to install and remove."""
install_models: List[UnifiedModelInfo] = Field(default_factory=list)
remove_models: List[str] = Field(default_factory=list)
class TqdmEventService(EventServiceBase):
"""An event service to track downloads."""
def __init__(self) -> None:
"""Create a new TqdmEventService object."""
super().__init__()
self._bars: Dict[str, tqdm] = {}
self._last: Dict[str, int] = {}
def dispatch(self, event_name: str, payload: Any) -> None:
"""Dispatch an event by appending it to self.events."""
if payload["event"] == "model_install_downloading":
data = payload["data"]
dest = data["local_path"]
total_bytes = data["total_bytes"]
bytes = data["bytes"]
if dest not in self._bars:
self._bars[dest] = tqdm(desc=Path(dest).name, initial=0, total=total_bytes, unit="iB", unit_scale=True)
self._last[dest] = 0
self._bars[dest].update(bytes - self._last[dest])
self._last[dest] = bytes
class InstallHelper(object):
"""Capture information stored jointly in INITIAL_MODELS.yaml and the installed models db."""
def __init__(self, app_config: InvokeAIAppConfig, logger: Logger):
"""Create new InstallHelper object."""
self._app_config = app_config
self.all_models: Dict[str, UnifiedModelInfo] = {}
omega = omegaconf.OmegaConf.load(Path(configs.__path__[0]) / INITIAL_MODELS)
assert isinstance(omega, omegaconf.dictconfig.DictConfig)
self._installer = initialize_installer(app_config, TqdmEventService())
self._initial_models = omega
self._installed_models: List[str] = []
self._starter_models: List[str] = []
self._default_model: Optional[str] = None
self._logger = logger
self._initialize_model_lists()
@property
def installer(self) -> ModelInstallServiceBase:
"""Return the installer object used internally."""
return self._installer
def _initialize_model_lists(self) -> None:
"""
Initialize our model slots.
Set up the following:
installed_models -- list of installed model keys
starter_models -- list of starter model keys from INITIAL_MODELS
all_models -- dict of key => UnifiedModelInfo
default_model -- key to default model
"""
# previously-installed models
for model in self._installer.record_store.all_models():
info = UnifiedModelInfo.parse_obj(model.dict())
info.installed = True
model_key = f"{model.base.value}/{model.type.value}/{model.name}"
self.all_models[model_key] = info
self._installed_models.append(model_key)
for key in self._initial_models.keys():
assert isinstance(key, str)
if key in self.all_models:
# we want to preserve the description
description = self.all_models[key].description or self._initial_models[key].get("description")
self.all_models[key].description = description
else:
base_model, model_type, model_name = key.split("/")
info = UnifiedModelInfo(
name=model_name,
type=ModelType(model_type),
base=BaseModelType(base_model),
source=self._initial_models[key].source,
description=self._initial_models[key].get("description"),
recommended=self._initial_models[key].get("recommended", False),
default=self._initial_models[key].get("default", False),
subfolder=self._initial_models[key].get("subfolder"),
requires=list(self._initial_models[key].get("requires", [])),
)
self.all_models[key] = info
if not self.default_model():
self._default_model = key
elif self._initial_models[key].get("default", False):
self._default_model = key
self._starter_models.append(key)
def recommended_models(self) -> List[UnifiedModelInfo]:
"""List of the models recommended in INITIAL_MODELS.yaml."""
return [self._to_model(x) for x in self._starter_models if self._to_model(x).recommended]
def installed_models(self) -> List[UnifiedModelInfo]:
"""List of models already installed."""
return [self._to_model(x) for x in self._installed_models]
def starter_models(self) -> List[UnifiedModelInfo]:
"""List of starter models."""
return [self._to_model(x) for x in self._starter_models]
def default_model(self) -> Optional[UnifiedModelInfo]:
"""Return the default model."""
return self._to_model(self._default_model) if self._default_model else None
def _to_model(self, key: str) -> UnifiedModelInfo:
return self.all_models[key]
def _add_required_models(self, model_list: List[UnifiedModelInfo]) -> None:
installed = {x.source for x in self.installed_models()}
reverse_source = {x.source: x for x in self.all_models.values()}
additional_models: List[UnifiedModelInfo] = []
for model_info in model_list:
for requirement in model_info.requires:
if requirement not in installed and reverse_source.get(requirement):
additional_models.append(reverse_source[requirement])
model_list.extend(additional_models)
def _make_install_source(self, model_info: UnifiedModelInfo) -> ModelSource:
assert model_info.source
model_path_id_or_url = model_info.source.strip("\"' ")
model_path = Path(model_path_id_or_url)
if model_path.exists(): # local file on disk
return LocalModelSource(path=model_path.absolute(), inplace=True)
if re.match(r"^[^/]+/[^/]+$", model_path_id_or_url): # hugging face repo_id
return HFModelSource(
repo_id=model_path_id_or_url,
access_token=HfFolder.get_token(),
subfolder=model_info.subfolder,
)
if re.match(r"^(http|https):", model_path_id_or_url):
return URLModelSource(url=AnyHttpUrl(model_path_id_or_url))
raise ValueError(f"Unsupported model source: {model_path_id_or_url}")
def add_or_delete(self, selections: InstallSelections) -> None:
"""Add or delete selected models."""
installer = self._installer
self._add_required_models(selections.install_models)
for model in selections.install_models:
source = self._make_install_source(model)
config = (
{
"description": model.description,
"name": model.name,
}
if model.name
else None
)
try:
installer.import_model(
source=source,
config=config,
)
except (UnknownMetadataException, InvalidModelConfigException, HTTPError, OSError) as e:
self._logger.warning(f"{source}: {e}")
for model_to_remove in selections.remove_models:
parts = model_to_remove.split("/")
if len(parts) == 1:
base_model, model_type, model_name = (None, None, model_to_remove)
else:
base_model, model_type, model_name = parts
matches = installer.record_store.search_by_attr(
base_model=BaseModelType(base_model) if base_model else None,
model_type=ModelType(model_type) if model_type else None,
model_name=model_name,
)
if len(matches) > 1:
print(f"{model} is ambiguous. Please use model_type:model_name (e.g. main:my_model) to disambiguate.")
elif not matches:
print(f"{model}: unknown model")
else:
for m in matches:
print(f"Deleting {m.type}:{m.name}")
installer.delete(m.key)
installer.wait_for_installs()
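
The `_make_install_source` method above classifies a bare source string into one of three source types. The sketch below mirrors its heuristics with labels instead of real `ModelSource` objects; note that the real method checks for an existing local path before trying the regexes, so the order here is purely illustrative:

```python
import re

def classify(source: str) -> str:
    # mirrors _make_install_source, returning a label instead of a ModelSource
    if re.match(r"^[^/]+/[^/]+$", source):
        return "HFModelSource"    # a Hugging Face repo_id
    if re.match(r"^(http|https):", source):
        return "URLModelSource"   # a direct download URL
    return "LocalModelSource"     # anything else is treated as a path on disk

assert classify("runwayml/stable-diffusion-v1-5") == "HFModelSource"
assert classify("https://civitai.com/models/206883") == "URLModelSource"
assert classify("/data/models/my_model.safetensors") == "LocalModelSource"
```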


@@ -849,7 +849,7 @@ def migrate_if_needed(opt: Namespace, root: Path) -> bool:
# -------------------------------------
def main() -> None:
parser = argparse.ArgumentParser(description="InvokeAI model downloader")
parser.add_argument(
"--skip-sd-weights",


@@ -104,12 +104,14 @@ class ModelInstall(object):
prediction_type_helper: Optional[Callable[[Path], SchedulerPredictionType]] = None,
model_manager: Optional[ModelManager] = None,
access_token: Optional[str] = None,
civitai_api_key: Optional[str] = None,
):
self.config = config
self.mgr = model_manager or ModelManager(config.model_conf_path)
self.datasets = OmegaConf.load(Dataset_path)
self.prediction_helper = prediction_type_helper
self.access_token = access_token or HfFolder.get_token()
self.civitai_api_key = civitai_api_key or config.civitai_api_key
self.reverse_paths = self._reverse_paths(self.datasets)
def all_models(self) -> Dict[str, ModelLoadInfo]:
@@ -283,11 +285,16 @@ class ModelInstall(object):
def _remove_installed(self, model_list: List[str]):
all_models = self.all_models()
models_to_remove = []
for path in model_list:
key = self.reverse_paths.get(path)
if key and all_models[key].installed:
logger.warning(f"{path} already installed. Skipping.")
model_list.remove(path)
models_to_remove.append(path)
for path in models_to_remove:
logger.warning(f"{path} already installed. Skipping")
model_list.remove(path)
def _add_required_models(self, model_list: List[str]):
additional_models = []
@@ -321,7 +328,11 @@ class ModelInstall(object):
def _install_url(self, url: str) -> AddModelResult:
with TemporaryDirectory(dir=self.config.models_path) as staging:
CIVITAI_RE = r".*civitai.com.*"
civit_url = re.match(CIVITAI_RE, url, re.IGNORECASE)
location = download_with_resume(
url, Path(staging), access_token=self.civitai_api_key if civit_url else None
)
if not location:
logger.error(f"Unable to download {url}. Skipping.")
info = ModelProbe().heuristic_probe(location, self.prediction_helper)


@@ -42,8 +42,7 @@ from diffusers.schedulers import (
PNDMScheduler,
UnCLIPScheduler,
)
from diffusers.utils import is_accelerate_available
from picklescan.scanner import scan_file_path
from transformers import (
AutoFeatureExtractor,
@@ -1211,9 +1210,6 @@ def download_from_original_stable_diffusion_ckpt(
if prediction_type == "v-prediction":
prediction_type = "v_prediction"
if from_safetensors:
from safetensors.torch import load_file as safe_load
@@ -1647,11 +1643,6 @@ def download_controlnet_from_original_ckpt(
cross_attention_dim: Optional[bool] = None,
scan_needed: bool = False,
) -> DiffusionPipeline:
from omegaconf import OmegaConf
if from_safetensors:
from safetensors import safe_open


@@ -0,0 +1,31 @@
# Copyright (c) 2024 Lincoln Stein and the InvokeAI Development Team
"""
This module exports the function has_baked_in_sdxl_vae().
It returns True if an SDXL checkpoint model has a VAE other than the original
SDXL 1.0 VAE baked in; the original SDXL 1.0 VAE doesn't work properly in fp16 mode.
"""
import hashlib
from pathlib import Path
from safetensors.torch import load_file
SDXL_1_0_VAE_HASH = "bc40b16c3a0fa4625abdfc01c04ffc21bf3cefa6af6c7768ec61eb1f1ac0da51"
def has_baked_in_sdxl_vae(checkpoint_path: Path) -> bool:
"""Return true if the checkpoint contains a custom (non SDXL-1.0) VAE."""
hash = _vae_hash(checkpoint_path)
return hash != SDXL_1_0_VAE_HASH
def _vae_hash(checkpoint_path: Path) -> str:
checkpoint = load_file(checkpoint_path, device="cpu")
vae_keys = [x for x in checkpoint.keys() if x.startswith("first_stage_model.")]
hash = hashlib.new("sha256")
for key in vae_keys:
value = checkpoint[key]
hash.update(bytes(key, "UTF-8"))
hash.update(bytes(str(value), "UTF-8"))
return hash.hexdigest()
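
A short usage sketch (the checkpoint path is illustrative); per the logic above, a True result means the baked-in VAE differs from the stock SDXL 1.0 VAE:

```python
from pathlib import Path

ckpt = Path("models/sdxl/main/my_finetune.safetensors")  # illustrative path
if has_baked_in_sdxl_vae(ckpt):
    print("Custom VAE detected; leave it in place.")
else:
    print("Stock SDXL 1.0 VAE; consider swapping in sdxl-vae-fp16-fix for fp16 use.")
```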


@@ -13,6 +13,7 @@ from safetensors.torch import load_file
from transformers import CLIPTextModel, CLIPTokenizer
from invokeai.app.shared.models import FreeUConfig
from invokeai.backend.model_management.model_load_optimizations import skip_torch_weight_init
from .models.lora import LoRAModel
@@ -211,8 +212,12 @@ class ModelPatcher:
for i in range(ti_embedding.shape[0]):
new_tokens_added += ti_tokenizer.add_tokens(_get_trigger(ti_name, i))
# Modify text_encoder.
# resize_token_embeddings(...) constructs a new torch.nn.Embedding internally. Initializing the weights of
# this embedding is slow and unnecessary, so we wrap this step in skip_torch_weight_init() to save some
# time.
with skip_torch_weight_init():
text_encoder.resize_token_embeddings(init_tokens_count + new_tokens_added, pad_to_multiple_of)
model_embeddings = text_encoder.get_input_embeddings()
for ti_name, ti in ti_list:
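
The comment above explains why the call is wrapped: `resize_token_embeddings` allocates a fresh `torch.nn.Embedding`, and its random weight initialization is wasted work because every row is overwritten immediately afterwards. Conceptually, a context manager of this kind can be sketched as below; this is a simplified stand-in, not InvokeAI's actual `skip_torch_weight_init` implementation:

```python
from contextlib import contextmanager

import torch

@contextmanager
def skip_embedding_init_sketch():
    """Temporarily make nn.Embedding weight initialization a no-op."""
    original = torch.nn.Embedding.reset_parameters
    torch.nn.Embedding.reset_parameters = lambda self: None
    try:
        yield
    finally:
        torch.nn.Embedding.reset_parameters = original

# usage mirrors the patched call above:
# with skip_embedding_init_sketch():
#     text_encoder.resize_token_embeddings(new_size)
```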


@@ -759,7 +759,7 @@ class ModelManager(object):
model_type: ModelType,
new_name: Optional[str] = None,
new_base: Optional[BaseModelType] = None,
):
) -> None:
"""
Rename or rebase a model.
"""
@@ -781,6 +781,9 @@ class ModelManager(object):
# if this is a model file/directory that we manage ourselves, we need to move it
if old_path.is_relative_to(self.app_config.models_path):
# keep the suffix!
if old_path.is_file():
new_name = Path(new_name).with_suffix(old_path.suffix).as_posix()
new_path = self.resolve_model_path(
Path(
BaseModelType(new_base).value,


@@ -370,6 +370,8 @@ class LoRACheckpointProbe(CheckpointProbeBase):
return BaseModelType.StableDiffusion1
elif token_vector_length == 1024:
return BaseModelType.StableDiffusion2
elif token_vector_length == 1280:
return BaseModelType.StableDiffusionXL # recognizes format at https://civitai.com/models/224641
elif token_vector_length == 2048:
return BaseModelType.StableDiffusionXL
else:


@@ -1,11 +1,16 @@
import json
import os
from enum import Enum
from pathlib import Path
from typing import Literal, Optional
from omegaconf import OmegaConf
from pydantic import Field
from invokeai.app.services.config import InvokeAIAppConfig
from invokeai.backend.model_management.detect_baked_in_vae import has_baked_in_sdxl_vae
from invokeai.backend.util.logging import InvokeAILogger
from .base import (
BaseModelType,
DiffusersModel,
@@ -116,14 +121,28 @@ class StableDiffusionXLModel(DiffusersModel):
# The convert script adapted from the diffusers package uses
# strings for the base model type. To avoid making too many
# source code changes, we simply translate here
if Path(output_path).exists():
return output_path
if isinstance(config, cls.CheckpointConfig):
from invokeai.backend.model_management.models.stable_diffusion import _convert_ckpt_and_cache
# Hack in VAE-fp16 fix - If model sdxl-vae-fp16-fix is installed,
# then we bake it into the converted model unless there is already
# a nonstandard VAE installed.
kwargs = {}
app_config = InvokeAIAppConfig.get_config()
vae_path = app_config.models_path / "sdxl/vae/sdxl-vae-fp16-fix"
if vae_path.exists() and not has_baked_in_sdxl_vae(Path(model_path)):
InvokeAILogger.get_logger().warning("No baked-in VAE detected. Inserting sdxl-vae-fp16-fix.")
kwargs["vae_path"] = vae_path
return _convert_ckpt_and_cache(
version=base_model,
model_config=config,
output_path=output_path,
use_safetensors=True,
**kwargs,
)
else:
return model_path


@@ -6,6 +6,7 @@ from .config import (
InvalidModelConfigException,
ModelConfigFactory,
ModelFormat,
ModelRepoVariant,
ModelType,
ModelVariantType,
SchedulerPredictionType,
@@ -15,15 +16,16 @@ from .probe import ModelProbe
from .search import ModelSearch
__all__ = [
"AnyModelConfig",
"BaseModelType",
"InvalidModelConfigException",
"ModelConfigFactory",
"ModelFormat",
"ModelProbe",
"ModelRepoVariant",
"ModelSearch",
"ModelType",
"ModelVariantType",
"SchedulerPredictionType",
"SubModelType",
]


@@ -99,6 +99,17 @@ class SchedulerPredictionType(str, Enum):
Sample = "sample"
class ModelRepoVariant(str, Enum):
"""Various hugging face variants on the diffusers format."""
DEFAULT = "default" # model files without "fp16" or other qualifier
FP16 = "fp16"
FP32 = "fp32"
ONNX = "onnx"
OPENVINO = "openvino"
FLAX = "flax"
class ModelConfigBase(BaseModel):
"""Base class for model configuration information."""


@@ -0,0 +1,177 @@
"""
invokeai.backend.model_manager.merge exports:
merge_diffusion_models() -- combine multiple models by location and return a pipeline object
merge_diffusion_models_and_commit() -- combine multiple models by ModelManager ID and write to models.yaml
Copyright (c) 2023 Lincoln Stein and the InvokeAI Development Team
"""
import warnings
from enum import Enum
from pathlib import Path
from typing import Any, List, Optional, Set
import torch
from diffusers import AutoPipelineForText2Image
from diffusers import logging as dlogging
from invokeai.app.services.model_install import ModelInstallServiceBase
from invokeai.backend.util.devices import choose_torch_device, torch_dtype
from . import (
AnyModelConfig,
BaseModelType,
ModelType,
ModelVariantType,
)
from .config import MainDiffusersConfig
class MergeInterpolationMethod(str, Enum):
WeightedSum = "weighted_sum"
Sigmoid = "sigmoid"
InvSigmoid = "inv_sigmoid"
AddDifference = "add_difference"
class ModelMerger(object):
"""Wrapper class for model merge function."""
def __init__(self, installer: ModelInstallServiceBase):
"""
Initialize a ModelMerger object.
:param installer: ModelInstallServiceBase object, which provides access to the model record store and app config for the running process.
"""
self._installer = installer
def merge_diffusion_models(
self,
model_paths: List[Path],
alpha: float = 0.5,
interp: Optional[MergeInterpolationMethod] = None,
force: bool = False,
variant: Optional[str] = None,
**kwargs: Any,
) -> Any: # pipe.merge is an untyped function.
"""
:param model_paths: up to three models, designated by their local paths or HuggingFace repo_ids
:param alpha: The interpolation parameter. Ranges from 0 to 1. It affects the ratio in which the checkpoints are merged. A 0.8 alpha
would mean that the first model checkpoints would affect the final result far less than an alpha of 0.2
:param interp: The interpolation method to use for the merging. Supports "sigmoid", "inv_sigmoid", "add_difference" and None.
Passing None uses the default interpolation which is weighted sum interpolation. For merging three checkpoints, only "add_difference" is supported.
:param force: Whether to ignore mismatch in model_config.json for the current models. Defaults to False.
**kwargs - the default DiffusionPipeline.get_config_dict kwargs:
cache_dir, resume_download, force_download, proxies, local_files_only, use_auth_token, revision, torch_dtype, device_map
"""
with warnings.catch_warnings():
warnings.simplefilter("ignore")
verbosity = dlogging.get_verbosity()
dlogging.set_verbosity_error()
dtype = torch.float16 if variant == "fp16" else torch_dtype(choose_torch_device())
# Note that checkpoint_merger will not work with downloaded HuggingFace fp16 models
# until upstream https://github.com/huggingface/diffusers/pull/6670 is merged and released.
pipe = AutoPipelineForText2Image.from_pretrained(
model_paths[0],
custom_pipeline="checkpoint_merger",
torch_dtype=dtype,
variant=variant,
)
merged_pipe = pipe.merge(
pretrained_model_name_or_path_list=model_paths,
alpha=alpha,
interp=interp.value if interp else None, # diffusers API treats None as "weighted sum"
force=force,
torch_dtype=dtype,
variant=variant,
**kwargs,
)
dlogging.set_verbosity(verbosity)
return merged_pipe
def merge_diffusion_models_and_save(
self,
model_keys: List[str],
merged_model_name: str,
alpha: float = 0.5,
force: bool = False,
interp: Optional[MergeInterpolationMethod] = None,
merge_dest_directory: Optional[Path] = None,
variant: Optional[str] = None,
**kwargs: Any,
) -> AnyModelConfig:
"""
:param model_keys: up to three models, designated by their keys in the InvokeAI model record store
:param merged_model_name: name for new model
:param alpha: The interpolation parameter. Ranges from 0 to 1. It affects the ratio in which the checkpoints are merged. A 0.8 alpha
would mean that the first model checkpoints would affect the final result far less than an alpha of 0.2
:param interp: The interpolation method to use for the merging. Supports "weighted_average", "sigmoid", "inv_sigmoid", "add_difference" and None.
Passing None uses the default interpolation which is weighted sum interpolation. For merging three checkpoints, only "add_difference" is supported. Add_difference is A+(B-C).
:param force: Whether to ignore mismatch in model_config.json for the current models. Defaults to False.
:param merge_dest_directory: Save the merged model to the designated directory (with 'merged_model_name' appended)
**kwargs - the default DiffusionPipeline.get_config_dict kwargs:
cache_dir, resume_download, force_download, proxies, local_files_only, use_auth_token, revision, torch_dtype, device_map
"""
model_paths: List[Path] = []
model_names: List[str] = []
config = self._installer.app_config
store = self._installer.record_store
base_models: Set[BaseModelType] = set()
vae = None
variant = None if self._installer.app_config.full_precision else "fp16"
assert (
len(model_keys) <= 2 or interp == MergeInterpolationMethod.AddDifference
), "When merging three models, only the 'add_difference' merge method is supported"
for key in model_keys:
info = store.get_model(key)
model_names.append(info.name)
assert isinstance(
info, MainDiffusersConfig
), f"{info.name} ({info.key}) is not a diffusers model. It must be optimized before merging"
assert info.variant == ModelVariantType(
"normal"
), f"{info.name} ({info.key}) is a {info.variant} model, which cannot currently be merged"
# pick up the first model's vae
if key == model_keys[0]:
vae = info.vae
# tally base models used
base_models.add(info.base)
model_paths.extend([config.models_path / info.path])
assert len(base_models) == 1, f"All models to merge must have same base model, but found bases {base_models}"
base_model = base_models.pop()
merge_method = None if interp == "weighted_sum" else MergeInterpolationMethod(interp)
merged_pipe = self.merge_diffusion_models(model_paths, alpha, merge_method, force, variant=variant, **kwargs)
dump_path = (
Path(merge_dest_directory)
if merge_dest_directory
else config.models_path / base_model.value / ModelType.Main.value
)
dump_path.mkdir(parents=True, exist_ok=True)
dump_path = dump_path / merged_model_name
dtype = torch.float16 if variant == "fp16" else torch_dtype(choose_torch_device())
merged_pipe.save_pretrained(dump_path.as_posix(), safe_serialization=True, torch_dtype=dtype, variant=variant)
# register model and get its unique key
key = self._installer.register_path(dump_path)
# update model's config
model_config = self._installer.record_store.get_model(key)
model_config.update(
{
"name": merged_model_name,
"description": f"Merge of models {', '.join(model_names)}",
"vae": vae,
}
)
self._installer.record_store.update_model(key, model_config)
return model_config
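
For the default weighted-sum method, the per-tensor math is a plain linear interpolation between checkpoints, matching the `alpha` convention described in the docstrings above (a high alpha favors the later model). A minimal sketch of what the `checkpoint_merger` pipeline computes for two models, ignoring key matching and three-way merges:

```python
import torch

def weighted_sum(theta_a: torch.Tensor, theta_b: torch.Tensor, alpha: float) -> torch.Tensor:
    # alpha=0.8 keeps 20% of model A and 80% of model B
    return (1.0 - alpha) * theta_a + alpha * theta_b

a = torch.ones(4)
b = torch.zeros(4)
print(weighted_sum(a, b, alpha=0.8))  # tensor of 0.2s: model A contributes little
```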


@@ -0,0 +1,50 @@
"""
Initialization file for invokeai.backend.model_manager.metadata
Usage:
from invokeai.backend.model_manager.metadata import(
AnyModelRepoMetadata,
CommercialUsage,
LicenseRestrictions,
HuggingFaceMetadata,
CivitaiMetadata,
)
from invokeai.backend.model_manager.metadata.fetch import CivitaiMetadataFetch
data = CivitaiMetadataFetch().from_url("https://civitai.com/models/206883/split")
assert isinstance(data, CivitaiMetadata)
if data.allow_commercial_use:
print("Commercial use of this model is allowed")
"""
from .fetch import CivitaiMetadataFetch, HuggingFaceMetadataFetch
from .metadata_base import (
AnyModelRepoMetadata,
AnyModelRepoMetadataValidator,
BaseMetadata,
CivitaiMetadata,
CommercialUsage,
HuggingFaceMetadata,
LicenseRestrictions,
ModelMetadataWithFiles,
RemoteModelFile,
UnknownMetadataException,
)
from .metadata_store import ModelMetadataStore
__all__ = [
"AnyModelRepoMetadata",
"AnyModelRepoMetadataValidator",
"CivitaiMetadata",
"CivitaiMetadataFetch",
"CommercialUsage",
"HuggingFaceMetadata",
"HuggingFaceMetadataFetch",
"LicenseRestrictions",
"ModelMetadataStore",
"BaseMetadata",
"ModelMetadataWithFiles",
"RemoteModelFile",
"UnknownMetadataException",
]


@@ -0,0 +1,21 @@
"""
Initialization file for invokeai.backend.model_manager.metadata.fetch
Usage:
from invokeai.backend.model_manager.metadata.fetch import (
CivitaiMetadataFetch,
HuggingFaceMetadataFetch,
)
from invokeai.backend.model_manager.metadata import CivitaiMetadata
data = CivitaiMetadataFetch().from_url("https://civitai.com/models/206883/split")
assert isinstance(data, CivitaiMetadata)
if data.allow_commercial_use:
print("Commercial use of this model is allowed")
"""
from .civitai import CivitaiMetadataFetch
from .fetch_base import ModelMetadataFetchBase
from .huggingface import HuggingFaceMetadataFetch
__all__ = ["ModelMetadataFetchBase", "CivitaiMetadataFetch", "HuggingFaceMetadataFetch"]

Some files were not shown because too many files have changed in this diff.