Compare commits


125 Commits

Author SHA1 Message Date
Lincoln Stein
ded521b019 pass regression tests when translator package not installed 2023-07-31 12:35:39 -04:00
Lincoln Stein
a3980cc756 changed declaration of dummy translate class for pytest 2023-07-31 09:04:26 -04:00
Lincoln Stein
6f15a67592 Merge branch 'main' into feat/translate 2023-07-31 08:40:17 -04:00
Lincoln Stein
a597b4bfaf add dummy ts object to pass pytests 2023-07-31 08:28:23 -04:00
Lincoln Stein
6ac4338f00 blackified 2023-07-31 08:07:36 -04:00
Alexandre Macabies
eb642653cb Add Nix Flake for development, which uses Python virtualenv. 2023-07-31 19:14:30 +10:00
Lincoln Stein
f4ead5e07f fix keyerror bug that was causing merge script to crash 2023-07-30 19:25:44 -04:00
Lincoln Stein
6d24ca7f52 3.0.1post3 (#4082)
This is a relatively stable release that corrects the urgent Windows
install and model manager problems in 3.0.1. It still has two known
bugs:

1. Many inpainting models are not loading correctly.
2. The merge script is failing to start.
2023-07-30 18:03:35 -04:00
Lincoln Stein
2164da8592 blackify 2023-07-30 16:25:06 -04:00
Lincoln Stein
4121c261a0 fix missing models when INVOKEAI_ROOT="." 2023-07-30 13:37:18 -04:00
Lincoln Stein
99823d5039 more fixes to update and install 2023-07-30 11:57:06 -04:00
Lincoln Stein
0abceb0e7b Merge branch 'main' of github.com:invoke-ai/InvokeAI 2023-07-30 11:08:27 -04:00
Lincoln Stein
83d3f2347e fix "unrecognized arguments: --yes" bug on unattended upgrade 2023-07-30 11:07:06 -04:00
ymgenesis
73e25d8dbe Update communityNodes.md
- Remove FaceMask and add a link to the FaceTools repository, which includes FaceMask, FaceOff, and FacePlace
- Move Ideal Size up from under the template entry
2023-07-30 10:59:56 -04:00
Lincoln Stein
03594c949a blackified 2023-07-30 10:18:39 -04:00
Lincoln Stein
adb85036e6 dependency tweaks to avoid installing/uninstalling pkgs 2023-07-30 10:17:04 -04:00
Lincoln Stein
7d7a9273ed Merge branch 'main' of github.com:invoke-ai/InvokeAI 2023-07-30 09:19:14 -04:00
Lincoln Stein
ba817b5648 added popup for translation service 2023-07-30 09:14:26 -04:00
Lincoln Stein
f17ad227cf fix relative model paths to be against config.models_path, not root (#4061)
## What type of PR is this? (check all applicable)

- [x] Bug Fix

## Have you discussed this change with the InvokeAI team?
- [X] Yes - bug discovered by jpphoto
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [x] Not needed

## Description

The user can customize the location of the models directory by setting the
configuration variable `models_dir`. However, the model manager and the
TUI installer were both treating model-relative paths as relative to the
InvokeAI root rather than the designated models directory. This has been
fixed by changing path resolution calls from `config.root_path` to
`config.models_path`.

Unfortunately there were many instances that needed replacement, so this
needs a bit of functional testing - try adding models, removing models,
renaming them, converting checkpoints, etc.
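
A minimal sketch of the pattern behind the fix; the `resolve_model_path` helper appears in the diff further below, while the config object and call site here are illustrative:

```python
from pathlib import Path

# Sketch only: model-relative paths now resolve against the configured
# models directory rather than the InvokeAI root.
def resolve_model_path(config, path) -> Path:
    return config.models_path / path  # was: config.root_path / path
```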
2023-07-30 08:51:35 -04:00
Lincoln Stein
f91d01eb38 Merge branch 'main' into bugfix/model-manager-rel-paths 2023-07-30 08:25:37 -04:00
Lincoln Stein
adfcb610b6 Installer tweaks (#4070)
## What type of PR is this? (check all applicable)


- [x] Optimization

## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [x] Yes
- [ ] No


## Description

This PR does two things:

1. If the environment variable INVOKEAI_ROOT is defined at install time,
the zipfile installer will default to its value when asking the user
where to install the software.
2. If the user has more than 72 models of any type installed, the
list will be truncated in the TUI and the user given a warning. Anything
larger than this number of models causes the vertical space to overflow.
The only effect of truncation is that the user will not be able to see
and delete the models that were truncated; the message advises the user
to go to the Web Model Manager interface in this event.
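
A sketch of the default-path logic from point 1; the `INVOKEAI_ROOT` handling matches the installer diff further below, while the prompt line is illustrative:

```python
import os
from pathlib import Path

# Use INVOKEAI_ROOT as the default install location when it is set,
# otherwise fall back to ~/invokeai.
root = "~/invokeai"
default_path = os.environ.get("INVOKEAI_ROOT") or Path(root).expanduser().resolve()
print(f"Install InvokeAI to [{default_path}]?")
```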
2023-07-30 08:25:11 -04:00
Lincoln Stein
cafcd16657 Merge branch 'main' into install/tui-tweaks 2023-07-30 08:19:45 -04:00
Lincoln Stein
2537ff0280 Merge branch 'main' into bugfix/model-manager-rel-paths 2023-07-30 08:17:36 -04:00
Lincoln Stein
0f5f08e494 Merge branch 'bugfix/model-manager-rel-paths' of github.com:invoke-ai/InvokeAI into bugfix/model-manager-rel-paths 2023-07-30 08:17:21 -04:00
Lincoln Stein
e20c4dc1e8 blackified 2023-07-30 08:17:10 -04:00
Lincoln Stein
6dc4ddef1b Fix various bugs in ckpt to diffusers conversion script (#4065)
## What type of PR is this? (check all applicable)


- [x] Bug Fix


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [x] No


## Description

This PR fixes several issues with the 3.0.0 conversion script:

- Handles checkpoint variants that don't put dots between fields in the
long state-dict key names
- Handles EMA, non-EMA, pruned and non-pruned ckpts
- Does not add safety checker to converted checkpoints
- Respects precision of original checkpoint file
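
For the precision behavior in the last bullet, the conversion diff below probes a known weight in the original state dict; a minimal sketch, assuming `checkpoint` is the loaded state dict:

```python
import torch

# Respect the precision of the original checkpoint: when no precision is
# requested, fall back to the dtype of a known weight in the state dict.
precision_probing_key = "model.diffusion_model.input_blocks.0.0.bias"

def effective_precision(checkpoint: dict, precision=None) -> torch.dtype:
    return precision or checkpoint[precision_probing_key].dtype
```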
2023-07-30 08:16:37 -04:00
Lincoln Stein
26af5ec341 Merge branch 'main' into bugfix/model-manager-rel-paths 2023-07-30 08:08:17 -04:00
Lincoln Stein
10b182f316 Merge branch 'main' into bugfix/convert-script 2023-07-30 08:07:51 -04:00
Lincoln Stein
ac84a9f915 reenable display of autoloaded models 2023-07-30 08:05:05 -04:00
Lincoln Stein
844578ab88 fix lora loading crash 2023-07-30 07:57:10 -04:00
Lincoln Stein
444390617f rebuild front end 2023-07-29 22:00:16 -04:00
Lincoln Stein
6cb40d9d7b bump version for hotfix 3.0.1post1 2023-07-29 21:58:57 -04:00
Lincoln Stein
ca895a9cd0 Unpin pydantic and numpy in pyproject.toml (#4062)
## What type of PR is this? (check all applicable)

- [x] Bug Fix


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [X] Not needed

## Description

Windows users have been getting a lot of OSErrors while installing 3.0.1
during the pip dependency installation phase. Generally the errors have
involved just two packages, pydantic and numpy. Looking at the install
logs, I see that both of these packages are first installed under one
version number by a dependency, and then uninstalled and replaced by a
slightly different version specified in Invoke's `pyproject.toml`. I
think this is the problem: maybe the earlier package's files are not
completely closed before it is uninstalled and reinstalled.

This PR relaxes the pinning of numpy and pydantic in `pyproject.toml`.
Everything seems to install and run properly. Hopefully it will address
the Windows install bug as well.
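
A quick diagnostic (not part of the PR) to see which versions the resolver settles on after the unpinned install:

```python
# Print the installed versions of the two previously pinned packages.
from importlib.metadata import version

for pkg in ("pydantic", "numpy"):
    print(pkg, version(pkg))
```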
2023-07-29 21:57:21 -04:00
Lincoln Stein
7d27c7b1a4 Merge branch 'main' into lstein/no-pydantic-in-pyproject 2023-07-29 21:47:16 -04:00
Lincoln Stein
0c31eaee61 blackified 2023-07-29 21:14:18 -04:00
Lincoln Stein
e73c12cac2 add non-English language translator node 2023-07-29 20:21:57 -04:00
Lincoln Stein
6c82229910 fix recovery recipe 2023-07-29 20:00:06 -04:00
Lincoln Stein
43b1eb8e84 wording changes 2023-07-29 19:49:58 -04:00
Lincoln Stein
b10b07220e blackify code 2023-07-29 19:20:20 -04:00
Lincoln Stein
c2eb50d1cd make installer use initial INVOKEAI_ROOT as default install location 2023-07-29 19:19:42 -04:00
Lincoln Stein
73f3b7f84b remove dangling comment 2023-07-29 17:32:33 -04:00
Lincoln Stein
bb18251fad Merge branch 'bugfix/convert-script' of github.com:invoke-ai/InvokeAI into bugfix/convert-script 2023-07-29 17:31:02 -04:00
Lincoln Stein
348bee8981 blackified 2023-07-29 17:30:54 -04:00
Lincoln Stein
078b33bda2 Merge branch 'main' into bugfix/convert-script 2023-07-29 17:30:40 -04:00
Lincoln Stein
e82eb0b9fc add correct optional annotation to precision arg 2023-07-29 17:30:21 -04:00
Lincoln Stein
ad976e5198 Merge branch 'main' into bugfix/model-manager-rel-paths 2023-07-29 17:27:16 -04:00
Lincoln Stein
0e28961e69 Merge branch 'main' into lstein/no-pydantic-in-pyproject 2023-07-29 17:27:02 -04:00
Lincoln Stein
6ce059f063 blackified again 2023-07-29 17:26:40 -04:00
Lincoln Stein
1de783b1ce fix mistake in indexing flat_ema_key 2023-07-29 17:20:26 -04:00
Lincoln Stein
3f9105be50 make convert script respect setting of use_ema in config file 2023-07-29 17:17:45 -04:00
Lincoln Stein
781322a647 installer respects INVOKEAI_ROOT for default root location 2023-07-29 16:16:44 -04:00
Lincoln Stein
9a1cfadd8b fix: SDXL Metadata not being retrieved (#4057)
## What type of PR is this? (check all applicable)

- [x] Bug Fix

## Have you discussed this change with the InvokeAI team?
- [x] Yes

## Description

- SDXL Metadata was not being retrieved. This PR fixes it.
2023-07-29 15:37:02 -04:00
Lincoln Stein
2a2d988928 convert script handles more ckpt variants 2023-07-29 15:28:39 -04:00
Lincoln Stein
72c519c6ad fix incorrect key construction 2023-07-29 13:51:47 -04:00
Lincoln Stein
af12f67948 Merge branch 'lstein/no-pydantic-in-pyproject' of github.com:invoke-ai/InvokeAI into lstein/no-pydantic-in-pyproject 2023-07-29 13:28:38 -04:00
Lincoln Stein
60f5606c2d downgrade torchmetrics to fix model import problem 2023-07-29 13:28:29 -04:00
Lincoln Stein
24b19166dd further refactoring 2023-07-29 13:13:22 -04:00
Lincoln Stein
0396bce4f9 Merge branch 'main' into lstein/no-pydantic-in-pyproject 2023-07-29 13:06:30 -04:00
Lincoln Stein
71768f5988 restore unpinned versions of pydantic and numpy 2023-07-29 13:04:34 -04:00
Lincoln Stein
0fb7328022 blackify code 2023-07-29 13:00:43 -04:00
Lincoln Stein
99daa97978 more refactoring; fixed place where rel conversion missed 2023-07-29 13:00:07 -04:00
Lincoln Stein
982a568349 blackify pr 2023-07-29 10:47:55 -04:00
Lincoln Stein
d79d5a4ff7 modest refactoring 2023-07-29 10:45:26 -04:00
Lincoln Stein
9968ff2893 fix relative model paths to be against config.models_path, not root 2023-07-29 10:30:27 -04:00
blessedcoolant
6d82a1019a fix: Black linting 2023-07-29 17:34:43 +12:00
blessedcoolant
6ed1bf7084 Merge branch 'main' into metadata-fix 2023-07-29 17:33:30 +12:00
blessedcoolant
974175be45 fix: Prompt Node using incorrect output type (#4058)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No


## Description


## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-07-29 17:32:10 +12:00
blessedcoolant
bee678fdd1 fix: Prompt Node using incorrect output type 2023-07-29 17:12:25 +12:00
blessedcoolant
c5caf1e8fe fix: SDXL Metadata not being retrieved 2023-07-29 17:03:19 +12:00
blessedcoolant
72708eb53c Feat/Nodes: Change Input to Textbox (#3853)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [X] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [X] No, because:
Not yet; making a PR to show.
      
## Have you updated relevant documentation?
- [ ] Yes
- [ ] No


## Description
Temporarily change the node String input to a textbox, to allow easier input of
prompts and larger strings. It works for me, but please tell me if I did
it wrong and whether the size is OK.

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-07-29 16:10:32 +12:00
blessedcoolant
aae1670080 fix: Incorrect Prompt Node output type 2023-07-29 16:04:19 +12:00
blessedcoolant
1e776d2523 chore: Regen types 2023-07-29 15:59:52 +12:00
blessedcoolant
8e06e6abbc feat: Update 'style' string input to also display text area 2023-07-29 15:52:59 +12:00
blessedcoolant
8a0e1b6cfc feat: Create Prompt Input Node 2023-07-29 15:52:37 +12:00
mickr777
2d9bc79ca4 Merge branch 'main' into nodepromptsize 2023-07-29 12:43:29 +10:00
mickr777
6886eb094d Make more Simple 2023-07-29 12:40:17 +10:00
Lincoln Stein
d633eb1612 remove pydantic and numpy from pyproject.toml 2023-07-28 21:56:22 -04:00
Lincoln Stein
ac22652686 rebuild front end 2023-07-28 18:21:05 -04:00
Lincoln Stein
77cfec5cc8 Release 3.0.1 release candidate 3 (#4025)
Branch used to rebuild front end and make minor doc changes during
release process. Merge before next release.
2023-07-28 18:03:44 -04:00
Lincoln Stein
3e4420c1ae bugfix: Float64 error for mps devices on set_timesteps (#4040)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [x] No, because: minor fix, let me know your thoughts

      
## Have you updated all relevant documentation?
- [x] Yes
- [ ] No

## Description


## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue # https://github.com/invoke-ai/InvokeAI/issues/4017
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Added/updated tests?

- [ ] Yes
- [x] No : Requires mps device

## [optional] Are there any post deployment tasks we need to perform?

Please test on an MPS (M1/M2) device. 

Relevant code causing the error in #4017 


01b6ec21fa/src/diffusers/schedulers/scheduling_euler_discrete.py (L263C3-L268C75)

```python
        self.sigmas = torch.from_numpy(sigmas).to(device=device)
        if str(device).startswith("mps"):
            # mps does not support float64
            self.timesteps = torch.from_numpy(timesteps).to(device, dtype=torch.float32)
        else:
            self.timesteps = torch.from_numpy(timesteps).to(device=device)
```
2023-07-28 18:02:39 -04:00
Lincoln Stein
f8181ab1b3 fix: Concat Link Styling (#4048)
## What type of PR is this? (check all applicable)

- [x] Bug Fix

## Description

- Fix SDXL Concat Link animation not considering the fact that prompt
boxes can be resized.
- Also fixed a minor issue where the overlaying animation box would
block the prompt input resize slightly. Should be good now.
2023-07-28 18:02:22 -04:00
Lincoln Stein
3dfeead9b8 Update troubleshooting guide with pydantic and SDXL unet issue advice (#4054)
## What type of PR is this? (check all applicable)


- [x] Documentation Update


## Have you discussed this change with the InvokeAI team?
- [x] Yes

      
## Have you updated all relevant documentation?
- [x] Yes

## Description

Added solutions for installation issues related to large SDXL files and
Windows dependency glitches.
2023-07-28 18:02:04 -04:00
Lincoln Stein
3da0be7eb9 update troubleshooting guide with pydantic and SDXL unet issue workarounds 2023-07-28 16:42:57 -04:00
ZachNagengast
31e5f4bb0e Merge branch 'main' into set-timestep-mps-fix 2023-07-28 08:58:12 -07:00
ZachNagengast
2164674b01 Black format 2023-07-28 07:49:29 -07:00
blessedcoolant
8f2a646286 fix: Lint errors 2023-07-29 02:37:59 +12:00
blessedcoolant
5ff4dd26bb fix: Concat Link Styling 2023-07-29 02:30:10 +12:00
Lincoln Stein
e342ca872f fix to work on non-MPS systems 2023-07-28 10:27:49 -04:00
blessedcoolant
062a369044 feat: Unify Prompt Area Styling (#4033)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No


## Description

Make the prompt area styling match across all tabs/models and move
all prompt-related components into a parent component for quick additions.

Cherry-picked some things from the Styles PR, since we aren't going to merge that.


## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-07-28 22:10:08 +12:00
psychedelicious
e4a2f56ad1 feat(ui): tweak link colors
- make the `SDXLConcatLink` icon match existing colors in light mode
- make the link toggle button use the accent color when active (it's not super obvious, but at least there is *some* visual difference for the button)
2023-07-28 19:57:05 +10:00
blessedcoolant
1df30f7260 feat: Pulse Animate SDXL Concat Link 2023-07-28 20:45:39 +12:00
blessedcoolant
14c4650801 fix: Lint Errors (returning possible null component) 2023-07-28 19:03:29 +12:00
blessedcoolant
f155b03eee feat: New animation for Concat Link 2023-07-28 18:55:59 +12:00
ZachNagengast
ddaf753f7b Merge branch 'set-timestep-mps-fix' of ssh://github.com/ZachNagengast/InvokeAI into set-timestep-mps-fix 2023-07-27 23:40:44 -07:00
ZachNagengast
e6d14c708c Fix variable name 2023-07-27 23:40:23 -07:00
Millun Atluri
7f81a95b20 Merge branch 'main' into set-timestep-mps-fix 2023-07-28 16:12:07 +10:00
blessedcoolant
6a49eec606 feat: Add Concat Link Animation
Might remove the lines. Just pushing to save changes for now.
2023-07-28 15:01:40 +12:00
blessedcoolant
fd67b18c9a Merge branch 'main' into unify-prompt 2023-07-28 14:48:13 +12:00
psychedelicious
9affdbbaad chore: black 2023-07-28 11:38:52 +10:00
psychedelicious
8d300bddd0 feat(ui): alias existing type for UpdateLoRAModelResponse 2023-07-28 11:38:52 +10:00
Lincoln Stein
aa2c94be9e make LoRAs editable 2023-07-28 11:38:52 +10:00
Lincoln Stein
4c79350300 persist LoRA model info in models.yaml 2023-07-28 11:38:52 +10:00
Alexandre Macabies
10e1d623c3 Add LoRAs to the model manager. 2023-07-28 11:38:52 +10:00
ZachNagengast
aa1f827271 Fix unet_info location, can have no device prop 2023-07-27 14:47:09 -07:00
Lincoln Stein
fb113b9077 Merge branch 'main' into release/invokeai-3-0-1 2023-07-27 16:24:29 -04:00
ZachNagengast
6edeb4e072 Pass device to set_timestep to avoid float64 error 2023-07-27 12:52:18 -07:00
Lincoln Stein
006075483d Merge branch 'main' into release/invokeai-3-0-1 2023-07-27 15:21:08 -04:00
Lincoln Stein
52bd29d484 Merge branch 'main' into release/invokeai-3-0-1 2023-07-27 15:19:05 -04:00
blessedcoolant
3bb81bedbd Merge branch 'main' into unify-prompt 2023-07-28 05:36:04 +12:00
blessedcoolant
b8b46aec09 Revert "fix: Lint Errors"
This reverts commit f057d5c85b.
2023-07-28 04:34:41 +12:00
psychedelicious
4d2b87ea01 fix(ui): fix types for controlnet models
`ControlNetModelConfig` was split into `ControlNetModelCheckpointConfig` and `ControlNetModelDiffusersConfig`, need to update the UI types
2023-07-28 04:34:29 +12:00
blessedcoolant
611f31c057 fix: Adjust embedding button on PositivePrompt for new changes 2023-07-28 03:07:50 +12:00
blessedcoolant
b60adc31d0 feat: Unify Prompt Area Design Between SDXL and Regular Models 2023-07-28 03:07:50 +12:00
blessedcoolant
a98ed3a5ba fix: TextArea Resizer styling when disabled 2023-07-28 03:06:31 +12:00
blessedcoolant
f057d5c85b fix: Lint Errors 2023-07-28 03:06:31 +12:00
mickr777
36455f6cac Merge branch 'main' into nodepromptsize 2023-07-26 18:54:54 +10:00
mickr777
2d0f932737 Lint Code 2023-07-26 18:35:04 +10:00
mickr777
de73e4f5b9 Merge branch 'main' into nodepromptsize 2023-07-23 18:28:25 +10:00
mickr777
0689e36390 Merge branch 'main' into nodepromptsize 2023-07-22 07:20:28 +10:00
mickr777
13e7614508 add text so string node uses textarea 2023-07-21 19:36:27 +10:00
mickr777
4e1786d9ae Remove Resize: none 2023-07-21 13:55:40 +10:00
mickr777
585520d8d2 Only apply Textaera to Prompt 2023-07-21 13:17:27 +10:00
mickr777
98b2734240 Merge branch 'main' into nodepromptsize 2023-07-21 08:07:55 +10:00
mickr777
7b428b5240 Make height smaller and allow width to change with node 2023-07-21 08:03:01 +10:00
mickr777
f73b45bcb5 Feat: Change Input to Textbox 2023-07-20 19:11:18 +10:00
66 changed files with 2044 additions and 1120 deletions

Binary file not shown (image added, 131 KiB).

View File

@@ -80,11 +80,11 @@ Q&A</a>]
!!! note
This fork is rapidly evolving. Please use the [Issues tab](https://github.com/invoke-ai/InvokeAI/issues) to report bugs and make feature requests. Be sure to use the provided templates; they will help us diagnose issues faster.
This software is rapidly evolving. Please use the [Issues tab](https://github.com/invoke-ai/InvokeAI/issues) to report bugs and make feature requests. Be sure to use the provided templates; they will help us diagnose issues faster.
## :octicons-package-dependencies-24: Installation
This fork is supported across Linux, Windows and Macintosh. Linux users can use
This software is supported across Linux, Windows and Macintosh. Linux users can use
either an Nvidia-based card (with CUDA support) or an AMD card (using the ROCm
driver).
@@ -95,6 +95,8 @@ driver).
This method is recommended for experienced users and developers
#### [Docker Installation](installation/040_INSTALL_DOCKER.md)
This method is recommended for those familiar with running Docker containers
#### [Installation Troubleshooting](installation/010_INSTALL_AUTOMATED.md#troubleshooting)
Installation troubleshooting guide.
### Other Installation Guides
- [PyPatchMatch](installation/060_INSTALL_PATCHMATCH.md)
- [XFormers](installation/070_INSTALL_XFORMERS.md)
@@ -230,7 +232,7 @@ encouraged to do so.
## :octicons-person-24: Contributors
This fork is a combined effort of various people from across the world.
This software is a combined effort of various people from across the world.
[Check out the list of all these amazing people](other/CONTRIBUTORS.md). We
thank them for their time, hard work and effort.

View File

@@ -372,8 +372,71 @@ experimental versions later.
Once InvokeAI is installed, do not move or remove this directory."
<a name="troubleshooting"></a>
## Troubleshooting
### _OSErrors on Windows while installing dependencies_
During a zip file installation or an online update, installation stops
with an error like this:
![broken-dependency-screenshot](../assets/troubleshooting/broken-dependency.png){:width="800px"}
This seems to happen particularly often with the `pydantic` and
`numpy` packages. The most reliable solution requires several manual
steps to complete installation.
Open up a Powershell window and navigate to the `invokeai` directory
created by the installer. Then give the following series of commands:
```cmd
rm .\.venv -r -force
python -mvenv .venv
.\.venv\Scripts\activate
pip install invokeai
invokeai-configure --yes --root .
```
If you see anything marked as an error during this process please stop
and seek help on the Discord [installation support
channel](https://discord.com/channels/1020123559063990373/1041391462190956654). A
few warning messages are OK.
If you are updating from a previous version, this should restore your
system to a working state. If you are installing from scratch, there
is one additional command to give:
```cmd
wget -O invoke.bat https://raw.githubusercontent.com/invoke-ai/InvokeAI/main/installer/templates/invoke.bat.in
```
This will create the `invoke.bat` script needed to launch InvokeAI and
its related programs.
### _Stable Diffusion XL Generation Fails after Trying to Load unet_
InvokeAI is working in other respects, but when trying to generate
images with Stable Diffusion XL you get a "Server Error". The text log
in the launch window contains this log line above several more lines of
error messages:
```INFO --> Loading model:D:\LONG\PATH\TO\MODEL, type sdxl:main:unet```
This failure mode occurs when there is a network glitch during
downloading the very large SDXL model.
To address this, first go to the Web Model Manager and delete the
Stable-Diffusion-XL-base-1.X model. Then navigate to HuggingFace and
manually download the .safetensors version of the model. The 1.0
version is located at
https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/tree/main
and the file is named `sd_xl_base_1.0.safetensors`.
Save this file to disk and then reenter the Model Manager. Navigate to
Import Models->Add Model, then type (or drag-and-drop) the path to the
.safetensors file. Press "Add Model".
### _Package dependency conflicts_
If you have previously installed InvokeAI or another Stable Diffusion

View File

@@ -14,20 +14,25 @@ The nodes linked below have been developed and contributed by members of the Inv
## List of Nodes
### Face Mask
### FaceTools
**Description:** This node autodetects a face in the image using MediaPipe and masks it by making it transparent. Via outpainting you can swap faces with other faces, or invert the mask and swap things around the face with other things. Additionally, you can supply X and Y offset values to scale/change the shape of the mask for finer control. The node also outputs an all-white mask in the same dimensions as the input image. This is needed by the inpaint node (and unified canvas) for outpainting.
**Description:** FaceTools is a collection of nodes created to manipulate faces as you would in Unified Canvas. It includes FaceMask, FaceOff, and FacePlace. FaceMask autodetects a face in the image using MediaPipe and creates a mask from it. FaceOff similarly detects a face, then takes the face off of the image by adding a square bounding box around it and cropping/scaling it. FacePlace puts the bounded face image from FaceOff back onto the original image. Using these nodes with other inpainting node(s), you can put new faces on existing things, put new things around existing faces, and work closer with a face as a bounded image. Additionally, you can supply X and Y offset values to scale/change the shape of the mask for finer control on FaceMask and FaceOff. See GitHub repository below for usage examples.
**Node Link:** https://github.com/ymgenesis/InvokeAI/blob/facemaskmediapipe/invokeai/app/invocations/facemask.py
**Node Link:** https://github.com/ymgenesis/FaceTools/
**Example Node Graph:** https://www.mediafire.com/file/gohn5sb1bfp8use/21-July_2023-FaceMask.json/file
**FaceMask Output Examples**
**Output Examples**
![5cc8abce-53b0-487a-b891-3bf94dcc8960](https://github.com/invoke-ai/InvokeAI/assets/25252829/43f36d24-1429-4ab1-bd06-a4bedfe0955e)
![b920b710-1882-49a0-8d02-82dff2cca907](https://github.com/invoke-ai/InvokeAI/assets/25252829/7660c1ed-bf7d-4d0a-947f-1fc1679557ba)
![71a91805-fda5-481c-b380-264665703133](https://github.com/invoke-ai/InvokeAI/assets/25252829/f8f6a2ee-2b68-4482-87da-b90221d5c3e2)
![2e3168cb-af6a-475d-bfac-c7b7fd67b4c2](https://github.com/ymgenesis/InvokeAI/assets/25252829/a5ad7d44-2ada-4b3c-a56e-a21f8244a1ac)
![2_annotated](https://github.com/ymgenesis/InvokeAI/assets/25252829/53416c8a-a23b-4d76-bb6d-3cfd776e0096)
![2fe2150c-fd08-4e26-8c36-f0610bf441bb](https://github.com/ymgenesis/InvokeAI/assets/25252829/b0f7ecfe-f093-4147-a904-b9f131b41dc9)
![831b6b98-4f0f-4360-93c8-69a9c1338cbe](https://github.com/ymgenesis/InvokeAI/assets/25252829/fc7b0622-e361-4155-8a76-082894d084f0)
<hr>
### Ideal Size
**Description:** This node calculates an ideal image size for a first pass of a multi-pass upscaling. The aim is to avoid duplication that results from choosing a size larger than the model is capable of.
**Node Link:** https://github.com/JPPhoto/ideal-size-node
--------------------------------
### Super Cool Node Template
@@ -42,11 +47,5 @@ The nodes linked below have been developed and contributed by members of the Inv
![Invoke AI](https://invoke-ai.github.io/InvokeAI/assets/invoke_ai_banner.png)
### Ideal Size
**Description:** This node calculates an ideal image size for a first pass of a multi-pass upscaling. The aim is to avoid duplication that results from choosing a size larger than the model is capable of.
**Node Link:** https://github.com/JPPhoto/ideal-size-node
## Help
If you run into any issues with a node, please post in the [InvokeAI Discord](https://discord.gg/ZmtBAhwWhy).

flake.lock (generated, new file, 25 lines)
View File

@@ -0,0 +1,25 @@
{
"nodes": {
"nixpkgs": {
"locked": {
"lastModified": 1690630721,
"narHash": "sha256-Y04onHyBQT4Erfr2fc82dbJTfXGYrf4V0ysLUYnPOP8=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "d2b52322f35597c62abf56de91b0236746b2a03d",
"type": "github"
},
"original": {
"id": "nixpkgs",
"type": "indirect"
}
},
"root": {
"inputs": {
"nixpkgs": "nixpkgs"
}
}
},
"root": "root",
"version": 7
}

flake.nix (new file, 81 lines)
View File

@@ -0,0 +1,81 @@
# Important note: this flake does not attempt to create a fully isolated, 'pure'
# Python environment for InvokeAI. Instead, it depends on local invocations of
# virtualenv/pip to install the required (binary) packages, most importantly the
# prebuilt binary pytorch packages with CUDA support.
# ML Python packages with CUDA support, like pytorch, are notoriously expensive
# to compile, so it's purposefully not what this flake does.
{
description = "An (impure) flake to develop on InvokeAI.";
outputs = { self, nixpkgs }:
let
system = "x86_64-linux";
pkgs = import nixpkgs {
inherit system;
config.allowUnfree = true;
};
python = pkgs.python310;
mkShell = { dir, install }:
let
setupScript = pkgs.writeScript "setup-invokai" ''
# This must be sourced using 'source', not executed.
${python}/bin/python -m venv ${dir}
${dir}/bin/python -m pip install ${install}
# ${dir}/bin/python -c 'import torch; assert(torch.cuda.is_available())'
source ${dir}/bin/activate
'';
in
pkgs.mkShell rec {
buildInputs = with pkgs; [
# Backend: graphics, CUDA.
cudaPackages.cudnn
cudaPackages.cuda_nvrtc
cudatoolkit
freeglut
glib
gperf
procps
libGL
libGLU
linuxPackages.nvidia_x11
python
stdenv.cc
stdenv.cc.cc.lib
xorg.libX11
xorg.libXext
xorg.libXi
xorg.libXmu
xorg.libXrandr
xorg.libXv
zlib
# Pre-commit hooks.
black
# Frontend.
yarn
nodejs
];
LD_LIBRARY_PATH = pkgs.lib.makeLibraryPath buildInputs;
CUDA_PATH = pkgs.cudatoolkit;
EXTRA_LDFLAGS = "-L${pkgs.linuxPackages.nvidia_x11}/lib";
shellHook = ''
if [[ -f "${dir}/bin/activate" ]]; then
source "${dir}/bin/activate"
echo "Using Python: $(which python)"
else
echo "Use 'source ${setupScript}' to set up the environment."
fi
'';
};
in
{
devShells.${system} = rec {
develop = mkShell { dir = "venv"; install = "-e '.[xformers]' --extra-index-url https://download.pytorch.org/whl/cu118"; };
default = develop;
};
};
}

View File

@@ -13,7 +13,7 @@ from pathlib import Path
from tempfile import TemporaryDirectory
from typing import Union
SUPPORTED_PYTHON = ">=3.9.0,<3.11"
SUPPORTED_PYTHON = ">=3.9.0,<=3.11.100"
INSTALLER_REQS = ["rich", "semver", "requests", "plumbum", "prompt-toolkit"]
BOOTSTRAP_VENV_PREFIX = "invokeai-installer-tmp"
@@ -149,7 +149,7 @@ class Installer:
return venv_dir
def install(
self, root: str = "~/invokeai-3", version: str = "latest", yes_to_all=False, find_links: Path = None
self, root: str = "~/invokeai", version: str = "latest", yes_to_all=False, find_links: Path = None
) -> None:
"""
Install the InvokeAI application into the given runtime path
@@ -168,7 +168,8 @@ class Installer:
messages.welcome()
self.dest = Path(root).expanduser().resolve() if yes_to_all else messages.dest_path(root)
default_path = os.environ.get("INVOKEAI_ROOT") or Path(root).expanduser().resolve()
self.dest = default_path if yes_to_all else messages.dest_path(root)
# create the venv for the app
self.venv = self.app_venv()
@@ -248,6 +249,9 @@ class InvokeAiInstance:
pip[
"install",
"--require-virtualenv",
"numpy~=1.24.0", # choose versions that won't be uninstalled during phase 2
"urllib3~=1.26.0",
"requests~=2.28.0",
"torch~=2.0.0",
"torchmetrics==0.11.4",
"torchvision>=0.14.1",

View File

@@ -3,6 +3,7 @@ InvokeAI Installer
"""
import argparse
import os
from pathlib import Path
from installer import Installer
@@ -15,7 +16,7 @@ if __name__ == "__main__":
dest="root",
type=str,
help="Destination path for installation",
default="~/invokeai",
default=os.environ.get("INVOKEAI_ROOT") or "~/invokeai",
)
parser.add_argument(
"-y",

View File

@@ -41,7 +41,7 @@ IF /I "%choice%" == "1" (
python .venv\Scripts\invokeai-configure.exe --skip-sd-weight --skip-support-models
) ELSE IF /I "%choice%" == "7" (
echo Running invokeai-configure...
python .venv\Scripts\invokeai-configure.exe --yes --default_only
python .venv\Scripts\invokeai-configure.exe --yes --skip-sd-weight
) ELSE IF /I "%choice%" == "8" (
echo Developer Console
echo Python command is:

View File

@@ -82,7 +82,7 @@ do_choice() {
7)
clear
printf "Re-run the configure script to fix a broken install or to complete a major upgrade\n"
invokeai-configure --root ${INVOKEAI_ROOT} --yes --default_only
invokeai-configure --root ${INVOKEAI_ROOT} --yes --default_only --skip-sd-weights
;;
8)
clear

View File

@@ -211,7 +211,7 @@ def invoke_api():
port = find_port(app_config.port)
if port != app_config.port:
logger.warn(f"Port {app_config.port} in use, using port {port}")
# Start our own event loop for eventing usage
loop = asyncio.new_event_loop()
config = uvicorn.Config(
@@ -229,7 +229,7 @@ def invoke_api():
l.handlers.clear()
for ch in logger.handlers:
l.addHandler(ch)
loop.run_until_complete(server.serve())

View File

@@ -4,6 +4,8 @@ from typing import Literal
from pydantic import Field
from invokeai.app.invocations.prompt import PromptOutput
from .baseinvocation import BaseInvocation, BaseInvocationOutput, InvocationConfig, InvocationContext
from .math import FloatOutput, IntOutput
@@ -64,3 +66,18 @@ class ParamStringInvocation(BaseInvocation):
def invoke(self, context: InvocationContext) -> StringOutput:
return StringOutput(text=self.text)
class ParamPromptInvocation(BaseInvocation):
"""A prompt input parameter"""
type: Literal["param_prompt"] = "param_prompt"
prompt: str = Field(default="", description="The prompt value")
class Config(InvocationConfig):
schema_extra = {
"ui": {"tags": ["param", "prompt"], "title": "Prompt"},
}
def invoke(self, context: InvocationContext) -> PromptOutput:
return PromptOutput(prompt=self.prompt)

View File

@@ -292,15 +292,16 @@ class SDXLTextToLatentsInvocation(BaseInvocation):
)
num_inference_steps = self.steps
scheduler.set_timesteps(num_inference_steps)
timesteps = scheduler.timesteps
latents = latents * scheduler.init_noise_sigma
unet_info = context.services.model_manager.get_model(**self.unet.unet.dict(), context=context)
do_classifier_free_guidance = True
cross_attention_kwargs = None
with unet_info as unet:
scheduler.set_timesteps(num_inference_steps, device=unet.device)
timesteps = scheduler.timesteps
latents = latents.to(device=unet.device, dtype=unet.dtype) * scheduler.init_noise_sigma
extra_step_kwargs = dict()
if "eta" in set(inspect.signature(scheduler.step).parameters.keys()):
extra_step_kwargs.update(
@@ -537,27 +538,28 @@ class SDXLLatentsToLatentsInvocation(BaseInvocation):
scheduler_name=self.scheduler,
)
# apply denoising_start
num_inference_steps = self.steps
scheduler.set_timesteps(num_inference_steps)
t_start = int(round(self.denoising_start * num_inference_steps))
timesteps = scheduler.timesteps[t_start * scheduler.order :]
num_inference_steps = num_inference_steps - t_start
# apply noise(if provided)
if self.noise is not None and timesteps.shape[0] > 0:
noise = context.services.latents.get(self.noise.latents_name)
latents = scheduler.add_noise(latents, noise, timesteps[:1])
del noise
unet_info = context.services.model_manager.get_model(
**self.unet.unet.dict(),
context=context,
)
do_classifier_free_guidance = True
cross_attention_kwargs = None
with unet_info as unet:
# apply denoising_start
num_inference_steps = self.steps
scheduler.set_timesteps(num_inference_steps, device=unet.device)
t_start = int(round(self.denoising_start * num_inference_steps))
timesteps = scheduler.timesteps[t_start * scheduler.order :]
num_inference_steps = num_inference_steps - t_start
# apply noise(if provided)
if self.noise is not None and timesteps.shape[0] > 0:
noise = context.services.latents.get(self.noise.latents_name)
latents = scheduler.add_noise(latents, noise, timesteps[:1])
del noise
# apply scheduler extra args
extra_step_kwargs = dict()
if "eta" in set(inspect.signature(scheduler.step).parameters.keys()):

View File

@@ -0,0 +1,52 @@
# Copyright (c) 2023 Lincoln D. Stein
from typing import Literal, Union, List
from pydantic import Field
from .baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
InvocationContext,
InvocationConfig,
)
# from .params import StringOutput
translate_available = False
try:
import translators as ts
translate_available = True
TRANSLATORS = tuple(ts.translators_pool)
except:
TRANSLATORS = ("google", "bing")
DEFAULT_PROMPT = "" if translate_available else "To use this node, please 'pip install --upgrade translators'"
class TranslateOutput(BaseInvocationOutput):
"""Translated string output"""
type: Literal["translated_string_output"] = "translated_string_output"
prompt: str = Field(default=None, description="The translated prompt string")
class TranslateInvocation(BaseInvocation):
"""Use the translators package to translate 330 languages into English prompts"""
# fmt: off
type: Literal["translate"] = "translate"
# Inputs
text: str = Field(default=DEFAULT_PROMPT, description="Prompt in any language")
translator: Literal[TRANSLATORS] = Field(default="google", description="The translator service to use")
# fmt: on
# Schema customisation
class Config(InvocationConfig):
schema_extra = {
"ui": {"title": "Translate", "tags": ["prompt", "translate", "translator"]},
}
def invoke(self, context: InvocationContext) -> TranslateOutput:
translation: str = ts.translate_text(self.text, translator=self.translator)
return TranslateOutput(prompt=translation)

View File

@@ -171,7 +171,6 @@ from pydantic import BaseSettings, Field, parse_obj_as
from typing import ClassVar, Dict, List, Set, Literal, Union, get_origin, get_type_hints, get_args
INIT_FILE = Path("invokeai.yaml")
MODEL_CORE = Path("models/core")
DB_FILE = Path("invokeai.db")
LEGACY_INIT_FILE = Path("invokeai.init")
@@ -275,7 +274,7 @@ class InvokeAISettings(BaseSettings):
@classmethod
def _excluded(self) -> List[str]:
# internal fields that shouldn't be exposed as command line options
return ["type", "initconf"]
return ["type", "initconf", "cached_root"]
@classmethod
def _excluded_from_yaml(self) -> List[str]:
@@ -291,6 +290,7 @@ class InvokeAISettings(BaseSettings):
"restore",
"root",
"nsfw_checker",
"cached_root",
]
class Config:
@@ -357,7 +357,7 @@ def _find_root() -> Path:
venv = Path(os.environ.get("VIRTUAL_ENV") or ".")
if os.environ.get("INVOKEAI_ROOT"):
root = Path(os.environ.get("INVOKEAI_ROOT")).resolve()
elif any([(venv.parent / x).exists() for x in [INIT_FILE, LEGACY_INIT_FILE, MODEL_CORE]]):
elif any([(venv.parent / x).exists() for x in [INIT_FILE, LEGACY_INIT_FILE]]):
root = (venv.parent).resolve()
else:
root = Path("~/invokeai").expanduser().resolve()
@@ -424,6 +424,7 @@ class InvokeAIAppConfig(InvokeAISettings):
log_level : Literal[tuple(["debug","info","warning","error","critical"])] = Field(default="info", description="Emit logging messages at this level or higher", category="Logging")
version : bool = Field(default=False, description="Show InvokeAI version and exit", category="Other")
cached_root : Path = Field(default=None, description="internal use only", category="DEPRECATED")
# fmt: on
def parse_args(self, argv: List[str] = None, conf: DictConfig = None, clobber=False):
@@ -471,10 +472,15 @@ class InvokeAIAppConfig(InvokeAISettings):
"""
Path to the runtime root directory
"""
if self.root:
return Path(self.root).expanduser().absolute()
# we cache value of root to protect against it being '.' and the cwd changing
if self.cached_root:
root = self.cached_root
elif self.root:
root = Path(self.root).expanduser().absolute()
else:
return self.find_root()
root = self.find_root()
self.cached_root = root
return self.cached_root
@property
def root_dir(self) -> Path:

View File

@@ -181,7 +181,7 @@ def download_with_progress_bar(model_url: str, model_dest: str, label: str = "th
def download_conversion_models():
target_dir = config.root_path / "models/core/convert"
target_dir = config.models_path / "core/convert"
kwargs = dict() # for future use
try:
logger.info("Downloading core tokenizers and text encoders")

View File

@@ -7,7 +7,7 @@ import warnings
from dataclasses import dataclass, field
from pathlib import Path
from tempfile import TemporaryDirectory
from typing import List, Dict, Callable, Union, Set
from typing import List, Dict, Callable, Union, Set, Optional
import requests
from diffusers import DiffusionPipeline
@@ -128,7 +128,9 @@ class ModelInstall(object):
model_dict[key] = ModelLoadInfo(**value)
# supplement with entries in models.yaml
installed_models = self.mgr.list_models()
installed_models = [x for x in self.mgr.list_models()]
# suppresses autoloaded models
# installed_models = [x for x in self.mgr.list_models() if not self._is_autoloaded(x)]
for md in installed_models:
base = md["base_model"]
@@ -147,6 +149,17 @@ class ModelInstall(object):
)
return {x: model_dict[x] for x in sorted(model_dict.keys(), key=lambda y: model_dict[y].name.lower())}
def _is_autoloaded(self, model_info: dict) -> bool:
path = model_info.get("path")
if not path:
return False
for autodir in ["autoimport_dir", "lora_dir", "embedding_dir", "controlnet_dir"]:
if autodir_path := getattr(self.config, autodir):
autodir_path = self.config.root_path / autodir_path
if Path(path).is_relative_to(autodir_path):
return True
return False
def list_models(self, model_type):
installed = self.mgr.list_models(model_type=model_type)
print(f"Installed models of type `{model_type}`:")
@@ -273,6 +286,7 @@ class ModelInstall(object):
logger.error(f"Unable to download {url}. Skipping.")
info = ModelProbe().heuristic_probe(location)
dest = self.config.models_path / info.base_type.value / info.model_type.value / location.name
dest.parent.mkdir(parents=True, exist_ok=True)
models_path = shutil.move(location, dest)
# staged version will be garbage-collected at this time
@@ -346,7 +360,7 @@ class ModelInstall(object):
if key in self.datasets:
description = self.datasets[key].get("description") or description
rel_path = self.relative_to_root(path)
rel_path = self.relative_to_root(path, self.config.models_path)
attributes = dict(
path=str(rel_path),
@@ -386,8 +400,8 @@ class ModelInstall(object):
attributes.update(dict(config=str(legacy_conf)))
return attributes
def relative_to_root(self, path: Path) -> Path:
root = self.config.root_path
def relative_to_root(self, path: Path, root: Optional[Path] = None) -> Path:
root = root or self.config.root_path
if path.is_relative_to(root):
return path.relative_to(root)
else:

View File

@@ -63,7 +63,7 @@ from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionS
from diffusers.pipelines.stable_diffusion.stable_unclip_image_normalizer import StableUnCLIPImageNormalizer
from invokeai.backend.util.logging import InvokeAILogger
from invokeai.app.services.config import InvokeAIAppConfig, MODEL_CORE
from invokeai.app.services.config import InvokeAIAppConfig
from picklescan.scanner import scan_file_path
from .models import BaseModelType, ModelVariantType
@@ -81,7 +81,7 @@ if is_accelerate_available():
from accelerate.utils import set_module_tensor_to_device
logger = InvokeAILogger.getLogger(__name__)
CONVERT_MODEL_ROOT = InvokeAIAppConfig.get_config().root_path / MODEL_CORE / "convert"
CONVERT_MODEL_ROOT = InvokeAIAppConfig.get_config().models_path / "core/convert"
def shave_segments(path, n_shave_prefix_segments=1):
@@ -1070,7 +1070,7 @@ def convert_controlnet_checkpoint(
extract_ema,
use_linear_projection=None,
cross_attention_dim=None,
precision: torch.dtype = torch.float32,
precision: Optional[torch.dtype] = None,
):
ctrlnet_config = create_unet_diffusers_config(original_config, image_size=image_size, controlnet=True)
ctrlnet_config["upcast_attention"] = upcast_attention
@@ -1111,7 +1111,6 @@ def convert_controlnet_checkpoint(
return controlnet.to(precision)
# TO DO - PASS PRECISION
def download_from_original_stable_diffusion_ckpt(
checkpoint_path: str,
model_version: BaseModelType,
@@ -1121,7 +1120,7 @@ def download_from_original_stable_diffusion_ckpt(
prediction_type: str = None,
model_type: str = None,
extract_ema: bool = False,
precision: torch.dtype = torch.float32,
precision: Optional[torch.dtype] = None,
scheduler_type: str = "pndm",
num_in_channels: Optional[int] = None,
upcast_attention: Optional[bool] = None,
@@ -1194,6 +1193,8 @@ def download_from_original_stable_diffusion_ckpt(
[CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer)
to use. If this parameter is `None`, the function will load a new instance of [CLIPTokenizer] by itself, if
needed.
precision (`torch.dtype`, *optional*, defauts to `None`):
If not provided the precision will be set to the precision of the original file.
return: A StableDiffusionPipeline object representing the passed-in `.ckpt`/`.safetensors` file.
"""
@@ -1252,6 +1253,10 @@ def download_from_original_stable_diffusion_ckpt(
logger.debug(f"model_type = {model_type}; original_config_file = {original_config_file}")
precision_probing_key = "model.diffusion_model.input_blocks.0.0.bias"
logger.debug(f"original checkpoint precision == {checkpoint[precision_probing_key].dtype}")
precision = precision or checkpoint[precision_probing_key].dtype
if original_config_file is None:
key_name_v2_1 = "model.diffusion_model.input_blocks.2.1.transformer_blocks.0.attn2.to_k.weight"
key_name_sd_xl_base = "conditioner.embedders.1.model.transformer.resblocks.9.mlp.c_proj.bias"
@@ -1279,9 +1284,12 @@ def download_from_original_stable_diffusion_ckpt(
original_config_file = BytesIO(requests.get(config_url).content)
original_config = OmegaConf.load(original_config_file)
if original_config["model"]["params"].get("use_ema") is not None:
extract_ema = original_config["model"]["params"]["use_ema"]
if (
model_version == BaseModelType.StableDiffusion2
and original_config["model"]["params"]["parameterization"] == "v"
and original_config["model"]["params"].get("parameterization") == "v"
):
prediction_type = "v_prediction"
upcast_attention = True
@@ -1447,7 +1455,7 @@ def download_from_original_stable_diffusion_ckpt(
if controlnet:
pipe = pipeline_class(
vae=vae.to(precision),
text_encoder=text_model,
text_encoder=text_model.to(precision),
tokenizer=tokenizer,
unet=unet.to(precision),
scheduler=scheduler,
@@ -1459,7 +1467,7 @@ def download_from_original_stable_diffusion_ckpt(
else:
pipe = pipeline_class(
vae=vae.to(precision),
text_encoder=text_model,
text_encoder=text_model.to(precision),
tokenizer=tokenizer,
unet=unet.to(precision),
scheduler=scheduler,
@@ -1484,8 +1492,8 @@ def download_from_original_stable_diffusion_ckpt(
image_noising_scheduler=image_noising_scheduler,
# regular denoising components
tokenizer=tokenizer,
text_encoder=text_model,
unet=unet,
text_encoder=text_model.to(precision),
unet=unet.to(precision),
scheduler=scheduler,
# vae
vae=vae,
@@ -1560,7 +1568,7 @@ def download_from_original_stable_diffusion_ckpt(
if controlnet:
pipe = pipeline_class(
vae=vae.to(precision),
text_encoder=text_model,
text_encoder=text_model.to(precision),
tokenizer=tokenizer,
unet=unet.to(precision),
controlnet=controlnet,
@@ -1571,7 +1579,7 @@ def download_from_original_stable_diffusion_ckpt(
else:
pipe = pipeline_class(
vae=vae.to(precision),
text_encoder=text_model,
text_encoder=text_model.to(precision),
tokenizer=tokenizer,
unet=unet.to(precision),
scheduler=scheduler,
@@ -1594,9 +1602,9 @@ def download_from_original_stable_diffusion_ckpt(
pipe = StableDiffusionXLPipeline(
vae=vae.to(precision),
text_encoder=text_encoder,
text_encoder=text_encoder.to(precision),
tokenizer=tokenizer,
text_encoder_2=text_encoder_2,
text_encoder_2=text_encoder_2.to(precision),
tokenizer_2=tokenizer_2,
unet=unet.to(precision),
scheduler=scheduler,
@@ -1639,7 +1647,7 @@ def download_controlnet_from_original_ckpt(
original_config_file: str,
image_size: int = 512,
extract_ema: bool = False,
precision: torch.dtype = torch.float32,
precision: Optional[torch.dtype] = None,
num_in_channels: Optional[int] = None,
upcast_attention: Optional[bool] = None,
device: str = None,
@@ -1680,6 +1688,12 @@ def download_controlnet_from_original_ckpt(
while "state_dict" in checkpoint:
checkpoint = checkpoint["state_dict"]
# use original precision
precision_probing_key = "input_blocks.0.0.bias"
ckpt_precision = checkpoint[precision_probing_key].dtype
logger.debug(f"original controlnet precision = {ckpt_precision}")
precision = precision or ckpt_precision
original_config = OmegaConf.load(original_config_file)
if num_in_channels is not None:
@@ -1699,7 +1713,7 @@ def download_controlnet_from_original_ckpt(
cross_attention_dim=cross_attention_dim,
)
return controlnet
return controlnet.to(precision)
def convert_ldm_vae_to_diffusers(checkpoint, vae_config: DictConfig, image_size: int) -> AutoencoderKL:

View File

@@ -187,7 +187,9 @@ class ModelCache(object):
# TODO: lock for no copies on simultaneous calls?
cache_entry = self._cached_models.get(key, None)
if cache_entry is None:
self.logger.info(f"Loading model {model_path}, type {base_model}:{model_type}:{submodel}")
self.logger.info(
f"Loading model {model_path}, type {base_model.value}:{model_type.value}:{submodel.value if submodel else ''}"
)
# this will remove older cached models until
# there is sufficient room to load the requested model

View File

@@ -423,7 +423,7 @@ class ModelManager(object):
return (model_name, base_model, model_type)
def _get_model_cache_path(self, model_path):
return self.app_config.models_path / ".cache" / hashlib.md5(str(model_path).encode()).hexdigest()
return self.resolve_model_path(Path(".cache") / hashlib.md5(str(model_path).encode()).hexdigest())
@classmethod
def initialize_model_config(cls, config_path: Path):
@@ -456,7 +456,7 @@ class ModelManager(object):
raise ModelNotFoundException(f"Model not found - {model_key}")
model_config = self.models[model_key]
model_path = self.app_config.root_path / model_config.path
model_path = self.resolve_model_path(model_config.path)
if not model_path.exists():
if model_class.save_to_config:
@@ -586,7 +586,7 @@ class ModelManager(object):
# expose paths as absolute to help web UI
if path := model_dict.get("path"):
model_dict["path"] = str(self.app_config.root_path / path)
model_dict["path"] = str(self.resolve_model_path(path))
models.append(model_dict)
return models
@@ -623,7 +623,7 @@ class ModelManager(object):
self.cache.uncache_model(cache_id)
# if model inside invoke models folder - delete files
model_path = self.app_config.root_path / model_cfg.path
model_path = self.resolve_model_path(model_cfg.path)
cache_path = self._get_model_cache_path(model_path)
if cache_path.exists():
rmtree(str(cache_path))
@@ -654,10 +654,9 @@ class ModelManager(object):
The returned dict has the same format as the dict returned by
model_info().
"""
# relativize paths as they go in - this makes it easier to move the root directory around
# relativize paths as they go in - this makes it easier to move the models directory around
if path := model_attributes.get("path"):
if Path(path).is_relative_to(self.app_config.root_path):
model_attributes["path"] = str(Path(path).relative_to(self.app_config.root_path))
model_attributes["path"] = str(self.relative_model_path(Path(path)))
model_class = MODEL_CLASSES[base_model][model_type]
model_config = model_class.create_config(**model_attributes)
@@ -715,7 +714,7 @@ class ModelManager(object):
if not model_cfg:
raise ModelNotFoundException(f"Unknown model: {model_key}")
old_path = self.app_config.root_path / model_cfg.path
old_path = self.resolve_model_path(model_cfg.path)
new_name = new_name or model_name
new_base = new_base or base_model
new_key = self.create_key(new_name, new_base, model_type)
@@ -724,15 +723,15 @@ class ModelManager(object):
# if this is a model file/directory that we manage ourselves, we need to move it
if old_path.is_relative_to(self.app_config.models_path):
new_path = (
self.app_config.root_path
/ "models"
/ BaseModelType(new_base).value
/ ModelType(model_type).value
/ new_name
new_path = self.resolve_model_path(
Path(
BaseModelType(new_base).value,
ModelType(model_type).value,
new_name,
)
)
move(old_path, new_path)
model_cfg.path = str(new_path.relative_to(self.app_config.root_path))
model_cfg.path = str(new_path.relative_to(self.app_config.models_path))
# clean up caches
old_model_cache = self._get_model_cache_path(old_path)
@@ -782,7 +781,7 @@ class ModelManager(object):
**submodel,
)
checkpoint_path = self.app_config.root_path / info["path"]
old_diffusers_path = self.app_config.models_path / model.location
old_diffusers_path = self.resolve_model_path(model.location)
new_diffusers_path = (
dest_directory or self.app_config.models_path / base_model.value / model_type.value
) / model_name
@@ -795,7 +794,7 @@ class ModelManager(object):
info["path"] = (
str(new_diffusers_path)
if dest_directory
else str(new_diffusers_path.relative_to(self.app_config.root_path))
else str(new_diffusers_path.relative_to(self.app_config.models_path))
)
info.pop("config")
@@ -810,6 +809,15 @@ class ModelManager(object):
return result
def resolve_model_path(self, path: Union[Path, str]) -> Path:
"""return relative paths based on configured models_path"""
return self.app_config.models_path / path
def relative_model_path(self, model_path: Path) -> Path:
if model_path.is_relative_to(self.app_config.models_path):
model_path = model_path.relative_to(self.app_config.models_path)
return model_path
def search_models(self, search_folder):
self.logger.info(f"Finding Models In: {search_folder}")
models_folder_ckpt = Path(search_folder).glob("**/*.ckpt")
@@ -883,10 +891,17 @@ class ModelManager(object):
new_models_found = False
self.logger.info(f"Scanning {self.app_config.models_path} for new models")
with Chdir(self.app_config.root_path):
with Chdir(self.app_config.models_path):
for model_key, model_config in list(self.models.items()):
model_name, cur_base_model, cur_model_type = self.parse_key(model_key)
model_path = self.app_config.root_path.absolute() / model_config.path
# Patch for relative path bug in older models.yaml - paths should not
# be starting with a hard-coded 'models'. This will also fix up
# models.yaml when committed.
if model_config.path.startswith("models"):
model_config.path = str(Path(*Path(model_config.path).parts[1:]))
model_path = self.resolve_model_path(model_config.path).absolute()
if not model_path.exists():
model_class = MODEL_CLASSES[cur_base_model][cur_model_type]
if model_class.save_to_config:
@@ -905,7 +920,7 @@ class ModelManager(object):
if model_type is not None and cur_model_type != model_type:
continue
model_class = MODEL_CLASSES[cur_base_model][cur_model_type]
models_dir = self.app_config.models_path / cur_base_model.value / cur_model_type.value
models_dir = self.resolve_model_path(Path(cur_base_model.value, cur_model_type.value))
if not models_dir.exists():
continue # TODO: or create all folders?
@@ -919,9 +934,7 @@ class ModelManager(object):
if model_key in self.models:
raise DuplicateModelException(f"Model with key {model_key} added twice")
if model_path.is_relative_to(self.app_config.root_path):
model_path = model_path.relative_to(self.app_config.root_path)
model_path = self.relative_model_path(model_path)
model_config: ModelConfigBase = model_class.probe_config(str(model_path))
self.models[model_key] = model_config
new_models_found = True
@@ -932,12 +945,11 @@ class ModelManager(object):
except NotImplementedError as e:
self.logger.warning(e)
imported_models = self.autoimport()
imported_models = self.scan_autoimport_directory()
if (new_models_found or imported_models) and self.config_path:
self.commit()
def autoimport(self) -> Dict[str, AddModelResult]:
def scan_autoimport_directory(self) -> Dict[str, AddModelResult]:
"""
Scan the autoimport directory (if defined), import new models, and delete defunct ones.
"""
@@ -971,7 +983,7 @@ class ModelManager(object):
# LS: hacky
# Patch in the SD VAE from core so that it is available for use by the UI
try:
self.heuristic_import({config.root_path / "models/core/convert/sd-vae-ft-mse"})
self.heuristic_import({self.resolve_model_path("core/convert/sd-vae-ft-mse")})
except:
pass
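The hunks above replace ad-hoc `root_path` joins with the two new `ModelManager` helpers. A minimal sketch of how they and the models.yaml prefix patch fit together, using an illustrative `MODELS_PATH` constant and module-level functions as stand-ins for the real configured `app_config.models_path` and `ModelManager` methods:

```python
from pathlib import Path

# Illustrative stand-in for app_config.models_path.
MODELS_PATH = Path("/invokeai/models")

def resolve_model_path(path) -> Path:
    # Relative paths resolve against models_path; absolute paths pass through
    # unchanged, since joining a pathlib base with an absolute path discards the base.
    return MODELS_PATH / path

def relative_model_path(model_path: Path) -> Path:
    # Paths under models_path are stored relative to it in models.yaml.
    if model_path.is_relative_to(MODELS_PATH):
        model_path = model_path.relative_to(MODELS_PATH)
    return model_path

# The models.yaml patch strips the legacy hard-coded "models/" prefix:
legacy = "models/sd-1/main/my-model"
patched = str(Path(*Path(legacy).parts[1:]))  # "sd-1/main/my-model"

assert resolve_model_path(patched) == MODELS_PATH / "sd-1" / "main" / "my-model"
assert relative_model_path(MODELS_PATH / patched) == Path(patched)
```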

View File

@@ -17,6 +17,7 @@ from .base import (
ModelNotFoundException,
)
from invokeai.app.services.config import InvokeAIAppConfig
import invokeai.backend.util.logging as logger
class ControlNetModelFormat(str, Enum):
@@ -66,7 +67,7 @@ class ControlNetModel(ModelBase):
child_type: Optional[SubModelType] = None,
):
if child_type is not None:
raise Exception("There is no child models in controlnet model")
raise Exception("There are no child models in controlnet model")
model = None
for variant in ["fp16", None]:
@@ -124,9 +125,7 @@ class ControlNetModel(ModelBase):
return model_path
@classmethod
def _convert_controlnet_ckpt_and_cache(
cls,
model_path: str,
output_path: str,
base_model: BaseModelType,
@@ -141,6 +140,7 @@ def _convert_controlnet_ckpt_and_cache(
weights = app_config.root_path / model_path
output_path = Path(output_path)
logger.info(f"Converting {weights} to diffusers format")
# return cached version if it exists
if output_path.exists():
return output_path
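The early return above is the whole caching story for checkpoint conversion. A hedged sketch of the pattern, with the conversion step itself elided:

```python
from pathlib import Path

def convert_ckpt_and_cache(weights: Path, output_path: Path) -> Path:
    # Sketch of the convert-once pattern used by _convert_controlnet_ckpt_and_cache:
    # a finished diffusers directory at output_path doubles as the cache entry,
    # so the expensive checkpoint -> diffusers conversion runs at most once.
    if output_path.exists():
        return output_path
    # ... perform the actual conversion here, writing into output_path ...
    return output_path
```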

View File

@@ -57,7 +57,7 @@ class LoRAModel(ModelBase):
@classproperty
def save_to_config(cls) -> bool:
return False
return True
@classmethod
def detect_format(cls, path: str):

View File

@@ -123,6 +123,7 @@ class StableDiffusion1Model(DiffusersModel):
return _convert_ckpt_and_cache(
version=BaseModelType.StableDiffusion1,
model_config=config,
load_safety_checker=False,
output_path=output_path,
)
else:
@@ -259,7 +260,7 @@ def _convert_ckpt_and_cache(
"""
app_config = InvokeAIAppConfig.get_config()
weights = app_config.root_path / model_config.path
weights = app_config.models_path / model_config.path
config_file = app_config.root_path / model_config.config
output_path = Path(output_path)
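Note the asymmetry the one-line fix above creates: model weights are now resolved against `models_path`, while the legacy `.yaml` config file stays relative to the root. A sketch with illustrative values (the real ones come from `InvokeAIAppConfig`):

```python
from pathlib import Path

root_path = Path("/invokeai")           # illustrative
models_path = root_path / "models"      # illustrative

model_path = "sd-1/main/my-model.ckpt"  # stored relative to models_path
model_config = "configs/stable-diffusion/v1-inference.yaml"  # relative to root

weights = models_path / model_path      # /invokeai/models/sd-1/main/my-model.ckpt
config_file = root_path / model_config  # /invokeai/configs/stable-diffusion/v1-inference.yaml
```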

View File

@@ -112,7 +112,7 @@ def main():
extras = get_extras()
print(f":crossed_fingers: Upgrading to [yellow]{tag if tag else release}[/yellow]")
print(f":crossed_fingers: Upgrading to [yellow]{tag or release or branch}[/yellow]")
if release:
cmd = f'pip install "invokeai{extras} @ {INVOKE_AI_SRC}/{release}.zip" --use-pep517 --upgrade'
elif tag:

View File

@@ -58,6 +58,9 @@ logger = InvokeAILogger.getLogger()
# from https://stackoverflow.com/questions/92438/stripping-non-printable-characters-from-a-string-in-python
NOPRINT_TRANS_TABLE = {i: None for i in range(0, sys.maxunicode + 1) if not chr(i).isprintable()}
# maximum number of installed models we can display before overflowing vertically
MAX_OTHER_MODELS = 72
def make_printable(s: str) -> str:
"""Replace non-printable characters in a string"""
@@ -102,7 +105,7 @@ class addModelsForm(CyclingForm, npyscreen.FormMultiPage):
SingleSelectColumns,
values=[
"STARTER MODELS",
"MORE MODELS",
"MAIN MODELS",
"CONTROLNETS",
"LORA/LYCORIS",
"TEXTUAL INVERSION",
@@ -153,7 +156,7 @@ class addModelsForm(CyclingForm, npyscreen.FormMultiPage):
BufferBox,
name="Log Messages",
editable=False,
max_height=8,
max_height=15,
)
self.nextrely += 1
@@ -253,6 +256,7 @@ class addModelsForm(CyclingForm, npyscreen.FormMultiPage):
model_labels = [self.model_labels[x] for x in model_list]
show_recommended = len(self.installed_models) == 0
truncated = False
if len(model_list) > 0:
max_width = max([len(x) for x in model_labels])
columns = window_width // (max_width + 8) # 8 characters for "[x] " and padding
@@ -271,6 +275,10 @@ class addModelsForm(CyclingForm, npyscreen.FormMultiPage):
)
)
if len(model_labels) > MAX_OTHER_MODELS:
model_labels = model_labels[0:MAX_OTHER_MODELS]
truncated = True
widgets.update(
models_selected=self.add_widget_intelligent(
MultiSelectColumns,
@@ -289,6 +297,16 @@ class addModelsForm(CyclingForm, npyscreen.FormMultiPage):
models=model_list,
)
if truncated:
widgets.update(
warning_message=self.add_widget_intelligent(
npyscreen.FixedText,
value=f"Too many models to display (max={MAX_OTHER_MODELS}). Some are not displayed.",
editable=False,
color="CAUTION",
)
)
self.nextrely += 1
widgets.update(
download_ids=self.add_widget_intelligent(
@@ -313,7 +331,7 @@ class addModelsForm(CyclingForm, npyscreen.FormMultiPage):
widgets = self.add_model_widgets(
model_type=model_type,
window_width=window_width,
install_prompt=f"Additional {model_type.value.title()} models already installed.",
install_prompt=f"Installed {model_type.value.title()} models. Unchecked models in the InvokeAI root directory will be deleted. Enter URLs, paths or repo_ids to import.",
**kwargs,
)
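The truncation guard added above is self-contained enough to demo in isolation. A sketch with toy labels standing in for the npyscreen widget state:

```python
MAX_OTHER_MODELS = 72  # vertical-overflow limit from the form above

model_labels = [f"model-{i:03d}" for i in range(100)]  # toy data
truncated = False

if len(model_labels) > MAX_OTHER_MODELS:
    model_labels = model_labels[:MAX_OTHER_MODELS]
    truncated = True

if truncated:
    # Mirrors the FixedText warning widget added to the form.
    print(f"Too many models to display (max={MAX_OTHER_MODELS}). Some are not displayed.")
```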
@@ -399,7 +417,7 @@ class addModelsForm(CyclingForm, npyscreen.FormMultiPage):
self.ok_button.hidden = True
self.display()
# for communication with the subprocess
# TO DO: Spawn a worker thread, not a subprocess
parent_conn, child_conn = Pipe()
p = Process(
target=process_and_execute,
@@ -414,7 +432,6 @@ class addModelsForm(CyclingForm, npyscreen.FormMultiPage):
self.subprocess_connection = parent_conn
self.subprocess = p
app.install_selections = InstallSelections()
# process_and_execute(app.opt, app.install_selections)
def on_back(self):
self.parentApp.switchFormPrevious()
@@ -489,8 +506,6 @@ class addModelsForm(CyclingForm, npyscreen.FormMultiPage):
# rebuild the form, saving and restoring some of the fields that need to be preserved.
saved_messages = self.monitor.entry_widget.values
# autoload_dir = str(config.root_path / self.pipeline_models['autoload_directory'].value)
# autoscan = self.pipeline_models['autoscan_on_startup'].value
app.main_form = app.addForm(
"MAIN",
@@ -544,12 +559,6 @@ class addModelsForm(CyclingForm, npyscreen.FormMultiPage):
if downloads := section.get("download_ids"):
selections.install_models.extend(downloads.value.split())
# load directory and whether to scan on startup
# if self.parentApp.autoload_pending:
# selections.scan_directory = str(config.root_path / self.pipeline_models['autoload_directory'].value)
# self.parentApp.autoload_pending = False
# selections.autoscan_on_startup = self.pipeline_models['autoscan_on_startup'].value
class AddModelApplication(npyscreen.NPSAppManaged):
def __init__(self, opt):
@@ -639,6 +648,11 @@ def process_and_execute(
selections: InstallSelections,
conn_out: Connection = None,
):
# need to reinitialize config in subprocess
config = InvokeAIAppConfig.get_config()
args = ["--root", opt.root] if opt.root else []
config.parse_args(args)
# set up so that stderr is sent to conn_out
if conn_out:
translator = StderrToMessage(conn_out)
@@ -656,38 +670,11 @@ def process_and_execute(
conn_out.close()
def do_listings(opt) -> bool:
"""List installed models of various sorts, and return
True if any were requested."""
model_manager = ModelManager(config.model_conf_path)
if opt.list_models == "diffusers":
print("Diffuser models:")
model_manager.print_models()
elif opt.list_models == "controlnets":
print("Installed Controlnet Models:")
cnm = model_manager.list_controlnet_models()
print(textwrap.indent("\n".join([x for x in cnm if cnm[x]]), prefix=" "))
elif opt.list_models == "loras":
print("Installed LoRA/LyCORIS Models:")
cnm = model_manager.list_lora_models()
print(textwrap.indent("\n".join([x for x in cnm if cnm[x]]), prefix=" "))
elif opt.list_models == "tis":
print("Installed Textual Inversion Embeddings:")
cnm = model_manager.list_ti_models()
print(textwrap.indent("\n".join([x for x in cnm if cnm[x]]), prefix=" "))
else:
return False
return True
# --------------------------------------------------------
def select_and_download_models(opt: Namespace):
precision = "float32" if opt.full_precision else choose_precision(torch.device(choose_torch_device()))
config.precision = precision
helper = lambda x: ask_user_for_prediction_type(x)
# if do_listings(opt):
# pass
installer = ModelInstall(config, prediction_type_helper=helper)
if opt.list_models:
installer.list_models(opt.list_models)
@@ -706,8 +693,6 @@ def select_and_download_models(opt: Namespace):
# needed to support the probe() method running under a subprocess
torch.multiprocessing.set_start_method("spawn")
# the third argument is needed in the Windows 11 environment in
# order to launch and resize a console window running this program
set_min_terminal_size(MIN_COLS, MIN_LINES)
installApp = AddModelApplication(opt)
try:

View File

@@ -320,7 +320,7 @@ class mergeModelsForm(npyscreen.FormMultiPageAction):
def get_model_names(self, base_model: BaseModelType = None) -> List[str]:
model_names = [
info["name"]
info["model_name"]
for info in self.model_manager.list_models(model_type=ModelType.Main, base_model=base_model)
if info["model_format"] == "diffusers"
]
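This fix swaps a stale dictionary key. A sketch assuming `list_models()` yields dicts shaped like the new code expects (only the two keys used here matter; the old code read a `"name"` key that no longer exists):

```python
def get_model_names(model_infos):
    return [
        info["model_name"]
        for info in model_infos
        if info["model_format"] == "diffusers"
    ]

assert get_model_names(
    [
        {"model_name": "dreamshaper", "model_format": "diffusers"},
        {"model_name": "v1-5-pruned", "model_format": "checkpoint"},
    ]
) == ["dreamshaper"]
```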

Six file diffs suppressed because one or more lines are too long

View File

@@ -12,7 +12,7 @@
margin: 0;
}
</style>
<script type="module" crossorigin src="./assets/index-5a784cdd.js"></script>
<script type="module" crossorigin src="./assets/index-9bb68e3a.js"></script>
</head>
<body dir="ltr">

View File

@@ -340,6 +340,7 @@
"allModels": "All Models",
"checkpointModels": "Checkpoints",
"diffusersModels": "Diffusers",
"loraModels": "LoRAs",
"safetensorModels": "SafeTensors",
"modelAdded": "Model Added",
"modelUpdated": "Model Updated",

View File

@@ -340,6 +340,7 @@
"allModels": "All Models",
"checkpointModels": "Checkpoints",
"diffusersModels": "Diffusers",
"loraModels": "LoRAs",
"safetensorModels": "SafeTensors",
"modelAdded": "Model Added",
"modelUpdated": "Model Updated",

View File

@@ -139,8 +139,19 @@ const CurrentImageButtons = (props: CurrentImageButtonsProps) => {
useHotkeys('s', handleUseSeed, [imageDTO]);
const handleUsePrompt = useCallback(() => {
recallBothPrompts(metadata?.positive_prompt, metadata?.negative_prompt);
}, [metadata?.negative_prompt, metadata?.positive_prompt, recallBothPrompts]);
recallBothPrompts(
metadata?.positive_prompt,
metadata?.negative_prompt,
metadata?.positive_style_prompt,
metadata?.negative_style_prompt
);
}, [
metadata?.negative_prompt,
metadata?.positive_prompt,
metadata?.positive_style_prompt,
metadata?.negative_style_prompt,
recallBothPrompts,
]);
useHotkeys('p', handleUsePrompt, [imageDTO]);

View File

@@ -102,8 +102,19 @@ const SingleSelectionMenuItems = (props: SingleSelectionMenuItemsProps) => {
// Recall parameters handlers
const handleRecallPrompt = useCallback(() => {
recallBothPrompts(metadata?.positive_prompt, metadata?.negative_prompt);
}, [metadata?.negative_prompt, metadata?.positive_prompt, recallBothPrompts]);
recallBothPrompts(
metadata?.positive_prompt,
metadata?.negative_prompt,
metadata?.positive_style_prompt,
metadata?.negative_style_prompt
);
}, [
metadata?.negative_prompt,
metadata?.positive_prompt,
metadata?.positive_style_prompt,
metadata?.negative_style_prompt,
recallBothPrompts,
]);
const handleRecallSeed = useCallback(() => {
recallSeed(metadata?.seed);

View File

@@ -1,4 +1,4 @@
import { Input } from '@chakra-ui/react';
import { Input, Textarea } from '@chakra-ui/react';
import { useAppDispatch } from 'app/store/storeHooks';
import { fieldValueChanged } from 'features/nodes/store/nodesSlice';
import {
@@ -12,10 +12,11 @@ const StringInputFieldComponent = (
props: FieldComponentProps<StringInputFieldValue, StringInputFieldTemplate>
) => {
const { nodeId, field } = props;
const dispatch = useAppDispatch();
const handleValueChanged = (e: ChangeEvent<HTMLInputElement>) => {
const handleValueChanged = (
e: ChangeEvent<HTMLInputElement | HTMLTextAreaElement>
) => {
dispatch(
fieldValueChanged({
nodeId,
@@ -25,7 +26,11 @@ const StringInputFieldComponent = (
);
};
return <Input onChange={handleValueChanged} value={field.value}></Input>;
return ['prompt', 'style'].includes(field.name.toLowerCase()) ? (
<Textarea onChange={handleValueChanged} value={field.value} rows={2} />
) : (
<Input onChange={handleValueChanged} value={field.value} />
);
};
export default memo(StringInputFieldComponent);

View File

@@ -148,7 +148,7 @@ const ParamPositiveConditioning = () => {
<Box
sx={{
position: 'absolute',
top: shouldPinParametersPanel ? 6 : 0,
top: shouldPinParametersPanel ? 5 : 0,
insetInlineEnd: 0,
}}
>

View File

@@ -0,0 +1,17 @@
import { Flex } from '@chakra-ui/react';
import ParamNegativeConditioning from 'features/parameters/components/Parameters/Core/ParamNegativeConditioning';
import ParamPositiveConditioning from 'features/parameters/components/Parameters/Core/ParamPositiveConditioning';
export default function ParamPromptArea() {
return (
<Flex
sx={{
flexDirection: 'column',
gap: 2,
}}
>
<ParamPositiveConditioning />
<ParamNegativeConditioning />
</Flex>
);
}

View File

@@ -1,5 +1,15 @@
import { useAppToaster } from 'app/components/Toaster';
import { useAppDispatch } from 'app/store/storeHooks';
import {
refinerModelChanged,
setNegativeStylePromptSDXL,
setPositiveStylePromptSDXL,
setRefinerAestheticScore,
setRefinerCFGScale,
setRefinerScheduler,
setRefinerStart,
setRefinerSteps,
} from 'features/sdxl/store/sdxlSlice';
import { useCallback } from 'react';
import { useTranslation } from 'react-i18next';
import { UnsafeImageMetadata } from 'services/api/endpoints/images';
@@ -22,6 +32,10 @@ import {
isValidMainModel,
isValidNegativePrompt,
isValidPositivePrompt,
isValidSDXLNegativeStylePrompt,
isValidSDXLPositiveStylePrompt,
isValidSDXLRefinerAestheticScore,
isValidSDXLRefinerStart,
isValidScheduler,
isValidSeed,
isValidSteps,
@@ -74,17 +88,34 @@ export const useRecallParameters = () => {
* Recall both prompts with toast
*/
const recallBothPrompts = useCallback(
(positivePrompt: unknown, negativePrompt: unknown) => {
(
positivePrompt: unknown,
negativePrompt: unknown,
positiveStylePrompt: unknown,
negativeStylePrompt: unknown
) => {
if (
isValidPositivePrompt(positivePrompt) ||
isValidNegativePrompt(negativePrompt)
isValidNegativePrompt(negativePrompt) ||
isValidSDXLPositiveStylePrompt(positiveStylePrompt) ||
isValidSDXLNegativeStylePrompt(negativeStylePrompt)
) {
if (isValidPositivePrompt(positivePrompt)) {
dispatch(setPositivePrompt(positivePrompt));
}
if (isValidNegativePrompt(negativePrompt)) {
dispatch(setNegativePrompt(negativePrompt));
}
if (isValidSDXLPositiveStylePrompt(positiveStylePrompt)) {
dispatch(setPositiveStylePromptSDXL(positiveStylePrompt));
}
if (isValidSDXLNegativeStylePrompt(negativeStylePrompt)) {
dispatch(setNegativeStylePromptSDXL(negativeStylePrompt));
}
parameterSetToast();
return;
}
@@ -123,6 +154,36 @@ export const useRecallParameters = () => {
[dispatch, parameterSetToast, parameterNotSetToast]
);
/**
* Recall SDXL Positive Style Prompt with toast
*/
const recallSDXLPositiveStylePrompt = useCallback(
(positiveStylePrompt: unknown) => {
if (!isValidSDXLPositiveStylePrompt(positiveStylePrompt)) {
parameterNotSetToast();
return;
}
dispatch(setPositiveStylePromptSDXL(positiveStylePrompt));
parameterSetToast();
},
[dispatch, parameterSetToast, parameterNotSetToast]
);
/**
* Recall SDXL Negative Style Prompt with toast
*/
const recallSDXLNegativeStylePrompt = useCallback(
(negativeStylePrompt: unknown) => {
if (!isValidSDXLNegativeStylePrompt(negativeStylePrompt)) {
parameterNotSetToast();
return;
}
dispatch(setNegativeStylePromptSDXL(negativeStylePrompt));
parameterSetToast();
},
[dispatch, parameterSetToast, parameterNotSetToast]
);
/**
* Recall seed with toast
*/
@@ -271,6 +332,14 @@ export const useRecallParameters = () => {
steps,
width,
strength,
positive_style_prompt,
negative_style_prompt,
refiner_model,
refiner_cfg_scale,
refiner_steps,
refiner_scheduler,
refiner_aesthetic_store,
refiner_start,
} = metadata;
if (isValidCfgScale(cfg_scale)) {
@@ -304,6 +373,38 @@ export const useRecallParameters = () => {
dispatch(setImg2imgStrength(strength));
}
if (isValidSDXLPositiveStylePrompt(positive_style_prompt)) {
dispatch(setPositiveStylePromptSDXL(positive_style_prompt));
}
if (isValidSDXLNegativeStylePrompt(negative_style_prompt)) {
dispatch(setNegativeStylePromptSDXL(negative_style_prompt));
}
if (isValidMainModel(refiner_model)) {
dispatch(refinerModelChanged(refiner_model));
}
if (isValidSteps(refiner_steps)) {
dispatch(setRefinerSteps(refiner_steps));
}
if (isValidCfgScale(refiner_cfg_scale)) {
dispatch(setRefinerCFGScale(refiner_cfg_scale));
}
if (isValidScheduler(refiner_scheduler)) {
dispatch(setRefinerScheduler(refiner_scheduler));
}
if (isValidSDXLRefinerAestheticScore(refiner_aesthetic_store)) {
dispatch(setRefinerAestheticScore(refiner_aesthetic_store));
}
if (isValidSDXLRefinerStart(refiner_start)) {
dispatch(setRefinerStart(refiner_start));
}
allParameterSetToast();
},
[allParameterNotSetToast, allParameterSetToast, dispatch]
@@ -313,6 +414,8 @@ export const useRecallParameters = () => {
recallBothPrompts,
recallPositivePrompt,
recallNegativePrompt,
recallSDXLPositiveStylePrompt,
recallSDXLNegativeStylePrompt,
recallSeed,
recallCfgScale,
recallModel,

View File

@@ -1,3 +1,5 @@
import { components } from 'services/api/schema';
export const MODEL_TYPE_MAP = {
'sd-1': 'Stable Diffusion 1.x',
'sd-2': 'Stable Diffusion 2.x',
@@ -5,6 +7,13 @@ export const MODEL_TYPE_MAP = {
'sdxl-refiner': 'Stable Diffusion XL Refiner',
};
export const MODEL_TYPE_SHORT_MAP = {
'sd-1': 'SD1',
'sd-2': 'SD2',
sdxl: 'SDXL',
'sdxl-refiner': 'SDXLR',
};
export const clipSkipMap = {
'sd-1': {
maxClip: 12,
@@ -23,3 +32,12 @@ export const clipSkipMap = {
markers: [0, 1, 2, 3, 5, 10, 15, 20, 24],
},
};
type LoRAModelFormatMap = {
[key in components['schemas']['LoRAModelFormat']]: string;
};
export const LORA_MODEL_FORMAT_MAP: LoRAModelFormatMap = {
lycoris: 'LyCORIS',
diffusers: 'Diffusers',
};

View File

@@ -310,6 +310,39 @@ export type PrecisionParam = z.infer<typeof zPrecision>;
export const isValidPrecision = (val: unknown): val is PrecisionParam =>
zPrecision.safeParse(val).success;
/**
* Zod schema for SDXL refiner aesthetic score parameter
*/
export const zSDXLRefinerAestheticScore = z.number().min(1).max(10);
/**
* Type alias for SDXL refiner aesthetic score parameter, inferred from its zod schema
*/
export type SDXLRefinerAestheticScoreParam = z.infer<
typeof zSDXLRefinerAestheticScore
>;
/**
* Validates/type-guards a value as a SDXL refiner aesthetic score parameter
*/
export const isValidSDXLRefinerAestheticScore = (
val: unknown
): val is SDXLRefinerAestheticScoreParam =>
zSDXLRefinerAestheticScore.safeParse(val).success;
/**
* Zod schema for the SDXL refiner start parameter
*/
export const zSDXLRefinerstart = z.number().min(0).max(1);
/**
* Type alias for the SDXL refiner start parameter, inferred from its zod schema
*/
export type SDXLRefinerStartParam = z.infer<typeof zSDXLRefinerstart>;
/**
* Validates/type-guards a value as an SDXL refiner start parameter
*/
export const isValidSDXLRefinerStart = (
val: unknown
): val is SDXLRefinerStartParam => zSDXLRefinerstart.safeParse(val).success;
// /**
// * Zod schema for BaseModelType
// */

View File

@@ -0,0 +1,43 @@
import { RootState } from 'app/store/store';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import IAIIconButton from 'common/components/IAIIconButton';
import { FaLink } from 'react-icons/fa';
import { setShouldConcatSDXLStylePrompt } from '../store/sdxlSlice';
export default function ParamSDXLConcatButton() {
const shouldConcatSDXLStylePrompt = useAppSelector(
(state: RootState) => state.sdxl.shouldConcatSDXLStylePrompt
);
const shouldPinParametersPanel = useAppSelector(
(state: RootState) => state.ui.shouldPinParametersPanel
);
const dispatch = useAppDispatch();
const handleShouldConcatPromptChange = () => {
dispatch(setShouldConcatSDXLStylePrompt(!shouldConcatSDXLStylePrompt));
};
return (
<IAIIconButton
aria-label="Concatenate Prompt & Style"
tooltip="Concatenate Prompt & Style"
variant="outline"
isChecked={shouldConcatSDXLStylePrompt}
onClick={handleShouldConcatPromptChange}
icon={<FaLink />}
size="xs"
sx={{
position: 'absolute',
insetInlineEnd: 1,
top: shouldPinParametersPanel ? 12 : 20,
border: 'none',
color: shouldConcatSDXLStylePrompt ? 'accent.500' : 'base.500',
_hover: {
bg: 'none',
},
}}
></IAIIconButton>
);
}

View File

@@ -1,33 +0,0 @@
import { Box } from '@chakra-ui/react';
import { RootState } from 'app/store/store';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import IAISwitch from 'common/components/IAISwitch';
import { ChangeEvent } from 'react';
import { setShouldConcatSDXLStylePrompt } from '../store/sdxlSlice';
export default function ParamSDXLConcatPrompt() {
const shouldConcatSDXLStylePrompt = useAppSelector(
(state: RootState) => state.sdxl.shouldConcatSDXLStylePrompt
);
const dispatch = useAppDispatch();
const handleShouldConcatPromptChange = (e: ChangeEvent<HTMLInputElement>) => {
dispatch(setShouldConcatSDXLStylePrompt(e.target.checked));
};
return (
<Box
sx={{
px: 2,
}}
>
<IAISwitch
label="Concat Style Prompt"
tooltip="Concatenates Basic Prompt with Style (Recommended)"
isChecked={shouldConcatSDXLStylePrompt}
onChange={handleShouldConcatPromptChange}
/>
</Box>
);
}

View File

@@ -13,15 +13,20 @@ import { useIsReadyToInvoke } from 'common/hooks/useIsReadyToInvoke';
import AddEmbeddingButton from 'features/embedding/components/AddEmbeddingButton';
import ParamEmbeddingPopover from 'features/embedding/components/ParamEmbeddingPopover';
import { useFeatureStatus } from 'features/system/hooks/useFeatureStatus';
import { AnimatePresence } from 'framer-motion';
import { isEqual } from 'lodash-es';
import { flushSync } from 'react-dom';
import { setNegativeStylePromptSDXL } from '../store/sdxlSlice';
import SDXLConcatLink from './SDXLConcatLink';
const promptInputSelector = createSelector(
[stateSelector, activeTabNameSelector],
({ sdxl }, activeTabName) => {
const { negativeStylePrompt, shouldConcatSDXLStylePrompt } = sdxl;
return {
prompt: sdxl.negativeStylePrompt,
prompt: negativeStylePrompt,
shouldConcatSDXLStylePrompt,
activeTabName,
};
},
@@ -37,11 +42,13 @@ const promptInputSelector = createSelector(
*/
const ParamSDXLNegativeStyleConditioning = () => {
const dispatch = useAppDispatch();
const { prompt, activeTabName } = useAppSelector(promptInputSelector);
const isReady = useIsReadyToInvoke();
const promptRef = useRef<HTMLTextAreaElement>(null);
const { isOpen, onClose, onOpen } = useDisclosure();
const { prompt, activeTabName, shouldConcatSDXLStylePrompt } =
useAppSelector(promptInputSelector);
const handleChangePrompt = useCallback(
(e: ChangeEvent<HTMLTextAreaElement>) => {
dispatch(setNegativeStylePromptSDXL(e.target.value));
@@ -111,6 +118,20 @@ const ParamSDXLNegativeStyleConditioning = () => {
return (
<Box position="relative">
<AnimatePresence>
{shouldConcatSDXLStylePrompt && (
<Box
sx={{
position: 'absolute',
left: '3',
w: '94%',
top: '-17px',
}}
>
<SDXLConcatLink />
</Box>
)}
</AnimatePresence>
<FormControl>
<ParamEmbeddingPopover
isOpen={isOpen}

View File

@@ -13,15 +13,20 @@ import { useIsReadyToInvoke } from 'common/hooks/useIsReadyToInvoke';
import AddEmbeddingButton from 'features/embedding/components/AddEmbeddingButton';
import ParamEmbeddingPopover from 'features/embedding/components/ParamEmbeddingPopover';
import { useFeatureStatus } from 'features/system/hooks/useFeatureStatus';
import { AnimatePresence } from 'framer-motion';
import { isEqual } from 'lodash-es';
import { flushSync } from 'react-dom';
import { setPositiveStylePromptSDXL } from '../store/sdxlSlice';
import SDXLConcatLink from './SDXLConcatLink';
const promptInputSelector = createSelector(
[stateSelector, activeTabNameSelector],
({ sdxl }, activeTabName) => {
const { positiveStylePrompt, shouldConcatSDXLStylePrompt } = sdxl;
return {
prompt: sdxl.positiveStylePrompt,
prompt: positiveStylePrompt,
shouldConcatSDXLStylePrompt,
activeTabName,
};
},
@@ -37,11 +42,13 @@ const promptInputSelector = createSelector(
*/
const ParamSDXLPositiveStyleConditioning = () => {
const dispatch = useAppDispatch();
const { prompt, activeTabName } = useAppSelector(promptInputSelector);
const isReady = useIsReadyToInvoke();
const promptRef = useRef<HTMLTextAreaElement>(null);
const { isOpen, onClose, onOpen } = useDisclosure();
const { prompt, activeTabName, shouldConcatSDXLStylePrompt } =
useAppSelector(promptInputSelector);
const handleChangePrompt = useCallback(
(e: ChangeEvent<HTMLTextAreaElement>) => {
dispatch(setPositiveStylePromptSDXL(e.target.value));
@@ -111,6 +118,20 @@ const ParamSDXLPositiveStyleConditioning = () => {
return (
<Box position="relative">
<AnimatePresence>
{shouldConcatSDXLStylePrompt && (
<Box
sx={{
position: 'absolute',
left: '3',
w: '94%',
top: '-17px',
}}
>
<SDXLConcatLink />
</Box>
)}
</AnimatePresence>
<FormControl>
<ParamEmbeddingPopover
isOpen={isOpen}

View File

@@ -0,0 +1,23 @@
import { Flex } from '@chakra-ui/react';
import ParamNegativeConditioning from 'features/parameters/components/Parameters/Core/ParamNegativeConditioning';
import ParamPositiveConditioning from 'features/parameters/components/Parameters/Core/ParamPositiveConditioning';
import ParamSDXLConcatButton from './ParamSDXLConcatButton';
import ParamSDXLNegativeStyleConditioning from './ParamSDXLNegativeStyleConditioning';
import ParamSDXLPositiveStyleConditioning from './ParamSDXLPositiveStyleConditioning';
export default function ParamSDXLPromptArea() {
return (
<Flex
sx={{
flexDirection: 'column',
gap: 2,
}}
>
<ParamPositiveConditioning />
<ParamSDXLConcatButton />
<ParamSDXLPositiveStyleConditioning />
<ParamNegativeConditioning />
<ParamSDXLNegativeStyleConditioning />
</Flex>
);
}

View File

@@ -0,0 +1,101 @@
import { Box, Flex } from '@chakra-ui/react';
import { CSSObject } from '@emotion/react';
import { motion } from 'framer-motion';
import { FaLink } from 'react-icons/fa';
const sharedConcatLinkStyle: CSSObject = {
position: 'absolute',
bg: 'none',
w: 'full',
minH: 2,
borderRadius: 0,
borderLeft: 'none',
borderRight: 'none',
zIndex: 2,
maskImage:
'radial-gradient(circle at center, black, black 65%, black 30%, black 15%, transparent)',
};
export default function SDXLConcatLink() {
return (
<Flex>
<Box
as={motion.div}
initial={{
scaleX: 0,
borderWidth: 0,
display: 'none',
}}
animate={{
display: ['block', 'block', 'block', 'none'],
scaleX: [0, 0.25, 0.5, 1],
borderWidth: [0, 3, 3, 0],
transition: { duration: 0.37, times: [0, 0.25, 0.5, 1] },
}}
sx={{
top: '1px',
borderTop: 'none',
borderColor: 'base.400',
...sharedConcatLinkStyle,
_dark: {
borderColor: 'accent.500',
},
}}
/>
<Box
as={motion.div}
initial={{
opacity: 0,
scale: 0,
}}
animate={{
opacity: [0, 1, 1, 1],
scale: [0, 0.75, 1.5, 1],
transition: { duration: 0.42, times: [0, 0.25, 0.5, 1] },
}}
exit={{
opacity: 0,
scale: 0,
}}
sx={{
zIndex: 3,
position: 'absolute',
left: '48%',
top: '3px',
p: 1,
borderRadius: 4,
bg: 'accent.400',
color: 'base.50',
_dark: {
bg: 'accent.500',
},
}}
>
<FaLink size={12} />
</Box>
<Box
as={motion.div}
initial={{
scaleX: 0,
borderWidth: 0,
display: 'none',
}}
animate={{
display: ['block', 'block', 'block', 'none'],
scaleX: [0, 0.25, 0.5, 1],
borderWidth: [0, 3, 3, 0],
transition: { duration: 0.37, times: [0, 0.25, 0.5, 1] },
}}
sx={{
top: '17px',
borderBottom: 'none',
borderColor: 'base.400',
...sharedConcatLinkStyle,
_dark: {
borderColor: 'accent.500',
},
}}
/>
</Flex>
);
}

View File

@@ -1,34 +1,14 @@
import { Flex } from '@chakra-ui/react';
import ParamDynamicPromptsCollapse from 'features/dynamicPrompts/components/ParamDynamicPromptsCollapse';
import ParamNegativeConditioning from 'features/parameters/components/Parameters/Core/ParamNegativeConditioning';
import ParamPositiveConditioning from 'features/parameters/components/Parameters/Core/ParamPositiveConditioning';
import ParamNoiseCollapse from 'features/parameters/components/Parameters/Noise/ParamNoiseCollapse';
import ProcessButtons from 'features/parameters/components/ProcessButtons/ProcessButtons';
import ParamSDXLConcatPrompt from './ParamSDXLConcatPrompt';
import ParamSDXLNegativeStyleConditioning from './ParamSDXLNegativeStyleConditioning';
import ParamSDXLPositiveStyleConditioning from './ParamSDXLPositiveStyleConditioning';
import ParamSDXLPromptArea from './ParamSDXLPromptArea';
import ParamSDXLRefinerCollapse from './ParamSDXLRefinerCollapse';
import SDXLImageToImageTabCoreParameters from './SDXLImageToImageTabCoreParameters';
const SDXLImageToImageTabParameters = () => {
return (
<>
<Flex
sx={{
flexDirection: 'column',
gap: 2,
p: 2,
borderRadius: 4,
bg: 'base.100',
_dark: { bg: 'base.850' },
}}
>
<ParamPositiveConditioning />
<ParamSDXLPositiveStyleConditioning />
<ParamNegativeConditioning />
<ParamSDXLNegativeStyleConditioning />
<ParamSDXLConcatPrompt />
</Flex>
<ParamSDXLPromptArea />
<ProcessButtons />
<SDXLImageToImageTabCoreParameters />
<ParamSDXLRefinerCollapse />

View File

@@ -1,34 +1,14 @@
import { Flex } from '@chakra-ui/react';
import ParamDynamicPromptsCollapse from 'features/dynamicPrompts/components/ParamDynamicPromptsCollapse';
import ParamNegativeConditioning from 'features/parameters/components/Parameters/Core/ParamNegativeConditioning';
import ParamPositiveConditioning from 'features/parameters/components/Parameters/Core/ParamPositiveConditioning';
import ParamNoiseCollapse from 'features/parameters/components/Parameters/Noise/ParamNoiseCollapse';
import ProcessButtons from 'features/parameters/components/ProcessButtons/ProcessButtons';
import TextToImageTabCoreParameters from 'features/ui/components/tabs/TextToImage/TextToImageTabCoreParameters';
import ParamSDXLConcatPrompt from './ParamSDXLConcatPrompt';
import ParamSDXLNegativeStyleConditioning from './ParamSDXLNegativeStyleConditioning';
import ParamSDXLPositiveStyleConditioning from './ParamSDXLPositiveStyleConditioning';
import ParamSDXLPromptArea from './ParamSDXLPromptArea';
import ParamSDXLRefinerCollapse from './ParamSDXLRefinerCollapse';
const SDXLTextToImageTabParameters = () => {
return (
<>
<Flex
sx={{
flexDirection: 'column',
gap: 2,
p: 2,
borderRadius: 4,
bg: 'base.100',
_dark: { bg: 'base.850' },
}}
>
<ParamPositiveConditioning />
<ParamSDXLPositiveStyleConditioning />
<ParamNegativeConditioning />
<ParamSDXLNegativeStyleConditioning />
<ParamSDXLConcatPrompt />
</Flex>
<ParamSDXLPromptArea />
<ProcessButtons />
<TextToImageTabCoreParameters />
<ParamSDXLRefinerCollapse />

View File

@@ -2,20 +2,18 @@ import ParamDynamicPromptsCollapse from 'features/dynamicPrompts/components/Para
import ParamLoraCollapse from 'features/lora/components/ParamLoraCollapse';
import ParamAdvancedCollapse from 'features/parameters/components/Parameters/Advanced/ParamAdvancedCollapse';
import ParamControlNetCollapse from 'features/parameters/components/Parameters/ControlNet/ParamControlNetCollapse';
import ParamNegativeConditioning from 'features/parameters/components/Parameters/Core/ParamNegativeConditioning';
import ParamPositiveConditioning from 'features/parameters/components/Parameters/Core/ParamPositiveConditioning';
import ParamNoiseCollapse from 'features/parameters/components/Parameters/Noise/ParamNoiseCollapse';
import ParamSeamlessCollapse from 'features/parameters/components/Parameters/Seamless/ParamSeamlessCollapse';
import ParamSymmetryCollapse from 'features/parameters/components/Parameters/Symmetry/ParamSymmetryCollapse';
// import ParamVariationCollapse from 'features/parameters/components/Parameters/Variations/ParamVariationCollapse';
import ParamPromptArea from 'features/parameters/components/Parameters/Prompt/ParamPromptArea';
import ProcessButtons from 'features/parameters/components/ProcessButtons/ProcessButtons';
import ImageToImageTabCoreParameters from './ImageToImageTabCoreParameters';
const ImageToImageTabParameters = () => {
return (
<>
<ParamPositiveConditioning />
<ParamNegativeConditioning />
<ParamPromptArea />
<ProcessButtons />
<ImageToImageTabCoreParameters />
<ParamControlNetCollapse />

View File

@@ -3,20 +3,31 @@ import { Flex, Text } from '@chakra-ui/react';
import { useState } from 'react';
import {
MainModelConfigEntity,
DiffusersModelConfigEntity,
LoRAModelConfigEntity,
useGetMainModelsQuery,
useGetLoRAModelsQuery,
} from 'services/api/endpoints/models';
import CheckpointModelEdit from './ModelManagerPanel/CheckpointModelEdit';
import DiffusersModelEdit from './ModelManagerPanel/DiffusersModelEdit';
import LoRAModelEdit from './ModelManagerPanel/LoRAModelEdit';
import ModelList from './ModelManagerPanel/ModelList';
import { ALL_BASE_MODELS } from 'services/api/constants';
export default function ModelManagerPanel() {
const [selectedModelId, setSelectedModelId] = useState<string>();
const { model } = useGetMainModelsQuery(ALL_BASE_MODELS, {
const { mainModel } = useGetMainModelsQuery(ALL_BASE_MODELS, {
selectFromResult: ({ data }) => ({
model: selectedModelId ? data?.entities[selectedModelId] : undefined,
mainModel: selectedModelId ? data?.entities[selectedModelId] : undefined,
}),
});
const { loraModel } = useGetLoRAModelsQuery(undefined, {
selectFromResult: ({ data }) => ({
loraModel: selectedModelId ? data?.entities[selectedModelId] : undefined,
}),
});
const model = mainModel ? mainModel : loraModel;
return (
<Flex sx={{ gap: 8, w: 'full', h: 'full' }}>
@@ -30,7 +41,7 @@ export default function ModelManagerPanel() {
}
type ModelEditProps = {
model: MainModelConfigEntity | undefined;
model: MainModelConfigEntity | LoRAModelConfigEntity | undefined;
};
const ModelEdit = (props: ModelEditProps) => {
@@ -41,7 +52,16 @@ const ModelEdit = (props: ModelEditProps) => {
}
if (model?.model_format === 'diffusers') {
return <DiffusersModelEdit key={model.id} model={model} />;
return (
<DiffusersModelEdit
key={model.id}
model={model as DiffusersModelConfigEntity}
/>
);
}
if (model?.model_type === 'lora') {
return <LoRAModelEdit key={model.id} model={model} />;
}
return (

View File

@@ -0,0 +1,137 @@
import { Divider, Flex, Text } from '@chakra-ui/react';
import { useForm } from '@mantine/form';
import { makeToast } from 'features/system/util/makeToast';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import IAIButton from 'common/components/IAIButton';
import IAIMantineTextInput from 'common/components/IAIMantineInput';
import { selectIsBusy } from 'features/system/store/systemSelectors';
import { addToast } from 'features/system/store/systemSlice';
import { useCallback } from 'react';
import { useTranslation } from 'react-i18next';
import {
LORA_MODEL_FORMAT_MAP,
MODEL_TYPE_MAP,
} from 'features/parameters/types/constants';
import {
LoRAModelConfigEntity,
useUpdateLoRAModelsMutation,
} from 'services/api/endpoints/models';
import { LoRAModelConfig } from 'services/api/types';
import BaseModelSelect from '../shared/BaseModelSelect';
type LoRAModelEditProps = {
model: LoRAModelConfigEntity;
};
export default function LoRAModelEdit(props: LoRAModelEditProps) {
const isBusy = useAppSelector(selectIsBusy);
const { model } = props;
const [updateLoRAModel, { isLoading }] = useUpdateLoRAModelsMutation();
const dispatch = useAppDispatch();
const { t } = useTranslation();
const loraEditForm = useForm<LoRAModelConfig>({
initialValues: {
model_name: model.model_name ? model.model_name : '',
base_model: model.base_model,
model_type: 'lora',
path: model.path ? model.path : '',
description: model.description ? model.description : '',
model_format: model.model_format,
},
validate: {
path: (value) =>
value.trim().length === 0 ? 'Must provide a path' : null,
},
});
const editModelFormSubmitHandler = useCallback(
(values: LoRAModelConfig) => {
const responseBody = {
base_model: model.base_model,
model_name: model.model_name,
body: values,
};
updateLoRAModel(responseBody)
.unwrap()
.then((payload) => {
loraEditForm.setValues(payload as LoRAModelConfig);
dispatch(
addToast(
makeToast({
title: t('modelManager.modelUpdated'),
status: 'success',
})
)
);
})
.catch((_) => {
loraEditForm.reset();
dispatch(
addToast(
makeToast({
title: t('modelManager.modelUpdateFailed'),
status: 'error',
})
)
);
});
},
[
dispatch,
loraEditForm,
model.base_model,
model.model_name,
t,
updateLoRAModel,
]
);
return (
<Flex flexDirection="column" rowGap={4} width="100%">
<Flex flexDirection="column">
<Text fontSize="lg" fontWeight="bold">
{model.model_name}
</Text>
<Text fontSize="sm" color="base.400">
{MODEL_TYPE_MAP[model.base_model]} Model {' '}
{LORA_MODEL_FORMAT_MAP[model.model_format]} format
</Text>
</Flex>
<Divider />
<form
onSubmit={loraEditForm.onSubmit((values) =>
editModelFormSubmitHandler(values)
)}
>
<Flex flexDirection="column" overflowY="scroll" gap={4}>
<IAIMantineTextInput
label={t('modelManager.name')}
{...loraEditForm.getInputProps('model_name')}
/>
<IAIMantineTextInput
label={t('modelManager.description')}
{...loraEditForm.getInputProps('description')}
/>
<BaseModelSelect {...loraEditForm.getInputProps('base_model')} />
<IAIMantineTextInput
label={t('modelManager.modelLocation')}
{...loraEditForm.getInputProps('path')}
/>
<IAIButton
type="submit"
isDisabled={isBusy || isLoading}
isLoading={isLoading}
>
{t('modelManager.updateModel')}
</IAIButton>
</Flex>
</form>
</Flex>
);
}

View File

@@ -9,6 +9,8 @@ import { useTranslation } from 'react-i18next';
import {
MainModelConfigEntity,
useGetMainModelsQuery,
useGetLoRAModelsQuery,
LoRAModelConfigEntity,
} from 'services/api/endpoints/models';
import ModelListItem from './ModelListItem';
import { ALL_BASE_MODELS } from 'services/api/constants';
@@ -20,22 +22,42 @@ type ModelListProps = {
type ModelFormat = 'images' | 'checkpoint' | 'diffusers';
type ModelType = 'main' | 'lora';
type CombinedModelFormat = ModelFormat | 'lora';
const ModelList = (props: ModelListProps) => {
const { selectedModelId, setSelectedModelId } = props;
const { t } = useTranslation();
const [nameFilter, setNameFilter] = useState<string>('');
const [modelFormatFilter, setModelFormatFilter] =
useState<ModelFormat>('images');
useState<CombinedModelFormat>('images');
const { filteredDiffusersModels } = useGetMainModelsQuery(ALL_BASE_MODELS, {
selectFromResult: ({ data }) => ({
filteredDiffusersModels: modelsFilter(data, 'diffusers', nameFilter),
filteredDiffusersModels: modelsFilter(
data,
'main',
'diffusers',
nameFilter
),
}),
});
const { filteredCheckpointModels } = useGetMainModelsQuery(ALL_BASE_MODELS, {
selectFromResult: ({ data }) => ({
filteredCheckpointModels: modelsFilter(data, 'checkpoint', nameFilter),
filteredCheckpointModels: modelsFilter(
data,
'main',
'checkpoint',
nameFilter
),
}),
});
const { filteredLoraModels } = useGetLoRAModelsQuery(undefined, {
selectFromResult: ({ data }) => ({
filteredLoraModels: modelsFilter(data, 'lora', undefined, nameFilter),
}),
});
@@ -68,6 +90,13 @@ const ModelList = (props: ModelListProps) => {
>
{t('modelManager.checkpointModels')}
</IAIButton>
<IAIButton
size="sm"
onClick={() => setModelFormatFilter('lora')}
isChecked={modelFormatFilter === 'lora'}
>
{t('modelManager.loraModels')}
</IAIButton>
</ButtonGroup>
<IAIInput
@@ -118,6 +147,24 @@ const ModelList = (props: ModelListProps) => {
</Flex>
</StyledModelContainer>
)}
{['images', 'lora'].includes(modelFormatFilter) &&
filteredLoraModels.length > 0 && (
<StyledModelContainer>
<Flex sx={{ gap: 2, flexDir: 'column' }}>
<Text variant="subtext" fontSize="sm">
LoRAs
</Text>
{filteredLoraModels.map((model) => (
<ModelListItem
key={model.id}
model={model}
isSelected={selectedModelId === model.id}
setSelectedModelId={setSelectedModelId}
/>
))}
</Flex>
</StyledModelContainer>
)}
</Flex>
</Flex>
</Flex>
@@ -126,12 +173,13 @@ const ModelList = (props: ModelListProps) => {
export default ModelList;
const modelsFilter = (
data: EntityState<MainModelConfigEntity> | undefined,
model_format: ModelFormat,
const modelsFilter = <T extends MainModelConfigEntity | LoRAModelConfigEntity>(
data: EntityState<T> | undefined,
model_type: ModelType,
model_format: ModelFormat | undefined,
nameFilter: string
) => {
const filteredModels: MainModelConfigEntity[] = [];
const filteredModels: T[] = [];
forEach(data?.entities, (model) => {
if (!model) {
return;
@@ -141,9 +189,11 @@ const modelsFilter = (
.toLowerCase()
.includes(nameFilter.toLowerCase());
const matchesFormat = model.model_format === model_format;
const matchesFormat =
model_format === undefined || model.model_format === model_format;
const matchesType = model.model_type === model_type;
if (matchesFilter && matchesFormat) {
if (matchesFilter && matchesFormat && matchesType) {
filteredModels.push(model);
}
});

View File

@@ -9,29 +9,26 @@ import { selectIsBusy } from 'features/system/store/systemSelectors';
import { addToast } from 'features/system/store/systemSlice';
import { useCallback } from 'react';
import { useTranslation } from 'react-i18next';
import { MODEL_TYPE_SHORT_MAP } from 'features/parameters/types/constants';
import {
MainModelConfigEntity,
LoRAModelConfigEntity,
useDeleteMainModelsMutation,
useDeleteLoRAModelsMutation,
} from 'services/api/endpoints/models';
type ModelListItemProps = {
model: MainModelConfigEntity;
model: MainModelConfigEntity | LoRAModelConfigEntity;
isSelected: boolean;
setSelectedModelId: (v: string | undefined) => void;
};
const modelBaseTypeMap = {
'sd-1': 'SD1',
'sd-2': 'SD2',
sdxl: 'SDXL',
'sdxl-refiner': 'SDXLR',
};
export default function ModelListItem(props: ModelListItemProps) {
const isBusy = useAppSelector(selectIsBusy);
const { t } = useTranslation();
const dispatch = useAppDispatch();
const [deleteMainModel] = useDeleteMainModelsMutation();
const [deleteLoRAModel] = useDeleteLoRAModelsMutation();
const { model, isSelected, setSelectedModelId } = props;
@@ -40,7 +37,10 @@ export default function ModelListItem(props: ModelListItemProps) {
}, [model.id, setSelectedModelId]);
const handleModelDelete = useCallback(() => {
deleteMainModel(model)
const method = { main: deleteMainModel, lora: deleteLoRAModel }[
model.model_type
];
method(model)
.unwrap()
.then((_) => {
dispatch(
@@ -60,14 +60,21 @@ export default function ModelListItem(props: ModelListItemProps) {
title: `${t('modelManager.modelDeleteFailed')}: ${
model.model_name
}`,
status: 'success',
status: 'error',
})
)
);
}
});
setSelectedModelId(undefined);
}, [deleteMainModel, model, setSelectedModelId, dispatch, t]);
}, [
deleteMainModel,
deleteLoRAModel,
model,
setSelectedModelId,
dispatch,
t,
]);
return (
<Flex sx={{ gap: 2, alignItems: 'center', w: 'full' }}>
@@ -100,8 +107,8 @@ export default function ModelListItem(props: ModelListItemProps) {
<Flex gap={4} alignItems="center">
<Badge minWidth={14} p={0.5} fontSize="sm" variant="solid">
{
modelBaseTypeMap[
model.base_model as keyof typeof modelBaseTypeMap
MODEL_TYPE_SHORT_MAP[
model.base_model as keyof typeof MODEL_TYPE_SHORT_MAP
]
}
</Badge>

View File

@@ -2,20 +2,18 @@ import ParamDynamicPromptsCollapse from 'features/dynamicPrompts/components/Para
import ParamLoraCollapse from 'features/lora/components/ParamLoraCollapse';
import ParamAdvancedCollapse from 'features/parameters/components/Parameters/Advanced/ParamAdvancedCollapse';
import ParamControlNetCollapse from 'features/parameters/components/Parameters/ControlNet/ParamControlNetCollapse';
import ParamNegativeConditioning from 'features/parameters/components/Parameters/Core/ParamNegativeConditioning';
import ParamPositiveConditioning from 'features/parameters/components/Parameters/Core/ParamPositiveConditioning';
import ParamNoiseCollapse from 'features/parameters/components/Parameters/Noise/ParamNoiseCollapse';
import ParamSeamlessCollapse from 'features/parameters/components/Parameters/Seamless/ParamSeamlessCollapse';
import ParamSymmetryCollapse from 'features/parameters/components/Parameters/Symmetry/ParamSymmetryCollapse';
// import ParamVariationCollapse from 'features/parameters/components/Parameters/Variations/ParamVariationCollapse';
import ProcessButtons from 'features/parameters/components/ProcessButtons/ProcessButtons';
import ParamPromptArea from '../../../../parameters/components/Parameters/Prompt/ParamPromptArea';
import TextToImageTabCoreParameters from './TextToImageTabCoreParameters';
const TextToImageTabParameters = () => {
return (
<>
<ParamPositiveConditioning />
<ParamNegativeConditioning />
<ParamPromptArea />
<ProcessButtons />
<TextToImageTabCoreParameters />
<ParamControlNetCollapse />

View File

@@ -4,18 +4,16 @@ import ParamAdvancedCollapse from 'features/parameters/components/Parameters/Adv
import ParamInfillAndScalingCollapse from 'features/parameters/components/Parameters/Canvas/InfillAndScaling/ParamInfillAndScalingCollapse';
import ParamSeamCorrectionCollapse from 'features/parameters/components/Parameters/Canvas/SeamCorrection/ParamSeamCorrectionCollapse';
import ParamControlNetCollapse from 'features/parameters/components/Parameters/ControlNet/ParamControlNetCollapse';
import ParamNegativeConditioning from 'features/parameters/components/Parameters/Core/ParamNegativeConditioning';
import ParamPositiveConditioning from 'features/parameters/components/Parameters/Core/ParamPositiveConditioning';
import ParamSymmetryCollapse from 'features/parameters/components/Parameters/Symmetry/ParamSymmetryCollapse';
// import ParamVariationCollapse from 'features/parameters/components/Parameters/Variations/ParamVariationCollapse';
import ParamPromptArea from 'features/parameters/components/Parameters/Prompt/ParamPromptArea';
import ProcessButtons from 'features/parameters/components/ProcessButtons/ProcessButtons';
import UnifiedCanvasCoreParameters from './UnifiedCanvasCoreParameters';
const UnifiedCanvasParameters = () => {
return (
<>
<ParamPositiveConditioning />
<ParamNegativeConditioning />
<ParamPromptArea />
<ProcessButtons />
<UnifiedCanvasCoreParameters />
<ParamControlNetCollapse />

View File

@@ -52,9 +52,17 @@ type UpdateMainModelArg = {
body: MainModelConfig;
};
type UpdateLoRAModelArg = {
base_model: BaseModelType;
model_name: string;
body: LoRAModelConfig;
};
type UpdateMainModelResponse =
paths['/api/v1/models/{base_model}/{model_type}/{model_name}']['patch']['responses']['200']['content']['application/json'];
type UpdateLoRAModelResponse = UpdateMainModelResponse;
type DeleteMainModelArg = {
base_model: BaseModelType;
model_name: string;
@@ -62,6 +70,10 @@ type DeleteMainModelArg = {
type DeleteMainModelResponse = void;
type DeleteLoRAModelArg = DeleteMainModelArg;
type DeleteLoRAModelResponse = void;
type ConvertMainModelArg = {
base_model: BaseModelType;
model_name: string;
@@ -320,6 +332,31 @@ export const modelsApi = api.injectEndpoints({
);
},
}),
updateLoRAModels: build.mutation<
UpdateLoRAModelResponse,
UpdateLoRAModelArg
>({
query: ({ base_model, model_name, body }) => {
return {
url: `models/${base_model}/lora/${model_name}`,
method: 'PATCH',
body: body,
};
},
invalidatesTags: [{ type: 'LoRAModel', id: LIST_TAG }],
}),
deleteLoRAModels: build.mutation<
DeleteLoRAModelResponse,
DeleteLoRAModelArg
>({
query: ({ base_model, model_name }) => {
return {
url: `models/${base_model}/lora/${model_name}`,
method: 'DELETE',
};
},
invalidatesTags: [{ type: 'LoRAModel', id: LIST_TAG }],
}),
getControlNetModels: build.query<
EntityState<ControlNetModelConfigEntity>,
void
@@ -467,6 +504,8 @@ export const {
useAddMainModelsMutation,
useConvertMainModelsMutation,
useMergeMainModelsMutation,
useDeleteLoRAModelsMutation,
useUpdateLoRAModelsMutation,
useSyncModelsMutation,
useGetModelsInFolderQuery,
useGetCheckpointConfigsQuery,

View File

@@ -1381,7 +1381,7 @@ export type components = {
* @description The nodes in this graph
*/
nodes?: {
[key: string]: (components["schemas"]["ControlNetInvocation"] | components["schemas"]["ImageProcessorInvocation"] | components["schemas"]["MainModelLoaderInvocation"] | components["schemas"]["LoraLoaderInvocation"] | components["schemas"]["VaeLoaderInvocation"] | components["schemas"]["MetadataAccumulatorInvocation"] | components["schemas"]["LoadImageInvocation"] | components["schemas"]["ShowImageInvocation"] | components["schemas"]["ImageCropInvocation"] | components["schemas"]["ImagePasteInvocation"] | components["schemas"]["MaskFromAlphaInvocation"] | components["schemas"]["ImageMultiplyInvocation"] | components["schemas"]["ImageChannelInvocation"] | components["schemas"]["ImageConvertInvocation"] | components["schemas"]["ImageBlurInvocation"] | components["schemas"]["ImageResizeInvocation"] | components["schemas"]["ImageScaleInvocation"] | components["schemas"]["ImageLerpInvocation"] | components["schemas"]["ImageInverseLerpInvocation"] | components["schemas"]["ImageNSFWBlurInvocation"] | components["schemas"]["ImageWatermarkInvocation"] | components["schemas"]["CvInpaintInvocation"] | components["schemas"]["CompelInvocation"] | components["schemas"]["SDXLCompelPromptInvocation"] | components["schemas"]["SDXLRefinerCompelPromptInvocation"] | components["schemas"]["SDXLRawPromptInvocation"] | components["schemas"]["SDXLRefinerRawPromptInvocation"] | components["schemas"]["ClipSkipInvocation"] | components["schemas"]["TextToLatentsInvocation"] | components["schemas"]["LatentsToImageInvocation"] | components["schemas"]["ResizeLatentsInvocation"] | components["schemas"]["ScaleLatentsInvocation"] | components["schemas"]["ImageToLatentsInvocation"] | components["schemas"]["InpaintInvocation"] | components["schemas"]["AddInvocation"] | components["schemas"]["SubtractInvocation"] | components["schemas"]["MultiplyInvocation"] | components["schemas"]["DivideInvocation"] | components["schemas"]["RandomIntInvocation"] | components["schemas"]["ParamIntInvocation"] | components["schemas"]["ParamFloatInvocation"] | components["schemas"]["ParamStringInvocation"] | components["schemas"]["ESRGANInvocation"] | components["schemas"]["RangeInvocation"] | components["schemas"]["RangeOfSizeInvocation"] | components["schemas"]["RandomRangeInvocation"] | components["schemas"]["ImageCollectionInvocation"] | components["schemas"]["DynamicPromptInvocation"] | components["schemas"]["PromptsFromFileInvocation"] | components["schemas"]["InfillColorInvocation"] | components["schemas"]["InfillTileInvocation"] | components["schemas"]["InfillPatchMatchInvocation"] | components["schemas"]["NoiseInvocation"] | components["schemas"]["SDXLModelLoaderInvocation"] | components["schemas"]["SDXLRefinerModelLoaderInvocation"] | components["schemas"]["SDXLTextToLatentsInvocation"] | components["schemas"]["SDXLLatentsToLatentsInvocation"] | components["schemas"]["FloatLinearRangeInvocation"] | components["schemas"]["StepParamEasingInvocation"] | components["schemas"]["GraphInvocation"] | components["schemas"]["IterateInvocation"] | components["schemas"]["CollectInvocation"] | components["schemas"]["CannyImageProcessorInvocation"] | components["schemas"]["HedImageProcessorInvocation"] | components["schemas"]["LineartImageProcessorInvocation"] | components["schemas"]["LineartAnimeImageProcessorInvocation"] | components["schemas"]["OpenposeImageProcessorInvocation"] | components["schemas"]["MidasDepthImageProcessorInvocation"] | components["schemas"]["NormalbaeImageProcessorInvocation"] | 
components["schemas"]["MlsdImageProcessorInvocation"] | components["schemas"]["PidiImageProcessorInvocation"] | components["schemas"]["ContentShuffleImageProcessorInvocation"] | components["schemas"]["ZoeDepthImageProcessorInvocation"] | components["schemas"]["MediapipeFaceProcessorInvocation"] | components["schemas"]["LeresImageProcessorInvocation"] | components["schemas"]["TileResamplerProcessorInvocation"] | components["schemas"]["SegmentAnythingProcessorInvocation"] | components["schemas"]["LatentsToLatentsInvocation"]) | undefined;
[key: string]: (components["schemas"]["ControlNetInvocation"] | components["schemas"]["ImageProcessorInvocation"] | components["schemas"]["MainModelLoaderInvocation"] | components["schemas"]["LoraLoaderInvocation"] | components["schemas"]["VaeLoaderInvocation"] | components["schemas"]["MetadataAccumulatorInvocation"] | components["schemas"]["RangeInvocation"] | components["schemas"]["RangeOfSizeInvocation"] | components["schemas"]["RandomRangeInvocation"] | components["schemas"]["ImageCollectionInvocation"] | components["schemas"]["CompelInvocation"] | components["schemas"]["SDXLCompelPromptInvocation"] | components["schemas"]["SDXLRefinerCompelPromptInvocation"] | components["schemas"]["SDXLRawPromptInvocation"] | components["schemas"]["SDXLRefinerRawPromptInvocation"] | components["schemas"]["ClipSkipInvocation"] | components["schemas"]["LoadImageInvocation"] | components["schemas"]["ShowImageInvocation"] | components["schemas"]["ImageCropInvocation"] | components["schemas"]["ImagePasteInvocation"] | components["schemas"]["MaskFromAlphaInvocation"] | components["schemas"]["ImageMultiplyInvocation"] | components["schemas"]["ImageChannelInvocation"] | components["schemas"]["ImageConvertInvocation"] | components["schemas"]["ImageBlurInvocation"] | components["schemas"]["ImageResizeInvocation"] | components["schemas"]["ImageScaleInvocation"] | components["schemas"]["ImageLerpInvocation"] | components["schemas"]["ImageInverseLerpInvocation"] | components["schemas"]["ImageNSFWBlurInvocation"] | components["schemas"]["ImageWatermarkInvocation"] | components["schemas"]["CvInpaintInvocation"] | components["schemas"]["TextToLatentsInvocation"] | components["schemas"]["LatentsToImageInvocation"] | components["schemas"]["ResizeLatentsInvocation"] | components["schemas"]["ScaleLatentsInvocation"] | components["schemas"]["ImageToLatentsInvocation"] | components["schemas"]["InpaintInvocation"] | components["schemas"]["InfillColorInvocation"] | components["schemas"]["InfillTileInvocation"] | components["schemas"]["InfillPatchMatchInvocation"] | components["schemas"]["AddInvocation"] | components["schemas"]["SubtractInvocation"] | components["schemas"]["MultiplyInvocation"] | components["schemas"]["DivideInvocation"] | components["schemas"]["RandomIntInvocation"] | components["schemas"]["NoiseInvocation"] | components["schemas"]["DynamicPromptInvocation"] | components["schemas"]["PromptsFromFileInvocation"] | components["schemas"]["ParamIntInvocation"] | components["schemas"]["ParamFloatInvocation"] | components["schemas"]["ParamStringInvocation"] | components["schemas"]["ParamPromptInvocation"] | components["schemas"]["FloatLinearRangeInvocation"] | components["schemas"]["StepParamEasingInvocation"] | components["schemas"]["SDXLModelLoaderInvocation"] | components["schemas"]["SDXLRefinerModelLoaderInvocation"] | components["schemas"]["SDXLTextToLatentsInvocation"] | components["schemas"]["SDXLLatentsToLatentsInvocation"] | components["schemas"]["ESRGANInvocation"] | components["schemas"]["GraphInvocation"] | components["schemas"]["IterateInvocation"] | components["schemas"]["CollectInvocation"] | components["schemas"]["CannyImageProcessorInvocation"] | components["schemas"]["HedImageProcessorInvocation"] | components["schemas"]["LineartImageProcessorInvocation"] | components["schemas"]["LineartAnimeImageProcessorInvocation"] | components["schemas"]["OpenposeImageProcessorInvocation"] | components["schemas"]["MidasDepthImageProcessorInvocation"] | 
components["schemas"]["NormalbaeImageProcessorInvocation"] | components["schemas"]["MlsdImageProcessorInvocation"] | components["schemas"]["PidiImageProcessorInvocation"] | components["schemas"]["ContentShuffleImageProcessorInvocation"] | components["schemas"]["ZoeDepthImageProcessorInvocation"] | components["schemas"]["MediapipeFaceProcessorInvocation"] | components["schemas"]["LeresImageProcessorInvocation"] | components["schemas"]["TileResamplerProcessorInvocation"] | components["schemas"]["SegmentAnythingProcessorInvocation"] | components["schemas"]["LatentsToLatentsInvocation"]) | undefined;
};
/**
* Edges
@@ -1424,7 +1424,7 @@ export type components = {
* @description The results of node executions
*/
results: {
[key: string]: (components["schemas"]["ImageOutput"] | components["schemas"]["MaskOutput"] | components["schemas"]["ControlOutput"] | components["schemas"]["ModelLoaderOutput"] | components["schemas"]["LoraLoaderOutput"] | components["schemas"]["VaeLoaderOutput"] | components["schemas"]["MetadataAccumulatorOutput"] | components["schemas"]["CompelOutput"] | components["schemas"]["ClipSkipInvocationOutput"] | components["schemas"]["LatentsOutput"] | components["schemas"]["IntOutput"] | components["schemas"]["FloatOutput"] | components["schemas"]["StringOutput"] | components["schemas"]["IntCollectionOutput"] | components["schemas"]["FloatCollectionOutput"] | components["schemas"]["ImageCollectionOutput"] | components["schemas"]["PromptOutput"] | components["schemas"]["PromptCollectionOutput"] | components["schemas"]["NoiseOutput"] | components["schemas"]["SDXLModelLoaderOutput"] | components["schemas"]["SDXLRefinerModelLoaderOutput"] | components["schemas"]["GraphInvocationOutput"] | components["schemas"]["IterateInvocationOutput"] | components["schemas"]["CollectInvocationOutput"]) | undefined;
[key: string]: (components["schemas"]["ImageOutput"] | components["schemas"]["MaskOutput"] | components["schemas"]["ControlOutput"] | components["schemas"]["ModelLoaderOutput"] | components["schemas"]["LoraLoaderOutput"] | components["schemas"]["VaeLoaderOutput"] | components["schemas"]["MetadataAccumulatorOutput"] | components["schemas"]["IntCollectionOutput"] | components["schemas"]["FloatCollectionOutput"] | components["schemas"]["ImageCollectionOutput"] | components["schemas"]["CompelOutput"] | components["schemas"]["ClipSkipInvocationOutput"] | components["schemas"]["LatentsOutput"] | components["schemas"]["IntOutput"] | components["schemas"]["FloatOutput"] | components["schemas"]["NoiseOutput"] | components["schemas"]["PromptOutput"] | components["schemas"]["PromptCollectionOutput"] | components["schemas"]["StringOutput"] | components["schemas"]["SDXLModelLoaderOutput"] | components["schemas"]["SDXLRefinerModelLoaderOutput"] | components["schemas"]["GraphInvocationOutput"] | components["schemas"]["IterateInvocationOutput"] | components["schemas"]["CollectInvocationOutput"]) | undefined;
};
/**
* Errors
@@ -2752,7 +2752,7 @@ export type components = {
vae?: components["schemas"]["VaeField"];
/**
* Tiled
* @description Decode latents by overlapping tiles(less memory consumption)
* @description Decode latents by overlapping tiles (less memory consumption)
* @default false
*/
tiled?: boolean;
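For context, tiled decoding trades a little compute for a lot of memory: the latent is decoded in overlapping tiles and the results are blended, so the VAE never holds the full image activations at once. A minimal sketch of the idea — `decode_tiled` is a hypothetical helper written against the diffusers convention (`vae.decode(...).sample`, 8x spatial upscale), not this repo's implementation; diffusers itself ships a production variant behind `vae.enable_tiling()`:

```python
import torch

def decode_tiled(vae, latents: torch.Tensor, tile: int = 64, overlap: int = 16) -> torch.Tensor:
    """Hypothetical sketch of tiled VAE decoding: decode overlapping latent
    tiles and average the pixels where tiles overlap, so the full latent is
    never decoded in one pass. Real implementations also feather the seams."""
    b, _, h, w = latents.shape
    scale = 8  # assumption: the SD VAE upsamples latents 8x per spatial dim
    out = latents.new_zeros(b, 3, h * scale, w * scale)
    weight = latents.new_zeros(b, 3, h * scale, w * scale)
    step = tile - overlap
    for y in range(0, h, step):
        for x in range(0, w, step):
            lat = latents[:, :, y:y + tile, x:x + tile]
            px = vae.decode(lat).sample  # decode just this tile
            oy, ox = y * scale, x * scale
            out[:, :, oy:oy + px.shape[2], ox:ox + px.shape[3]] += px
            weight[:, :, oy:oy + px.shape[2], ox:ox + px.shape[3]] += 1
    return out / weight.clamp(min=1)  # average the overlapped regions
```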
@@ -3965,6 +3965,35 @@ export type components = {
*/
a?: number;
};
/**
* ParamPromptInvocation
* @description A prompt input parameter
*/
ParamPromptInvocation: {
/**
* Id
* @description The id of this node. Must be unique among all nodes.
*/
id: string;
/**
* Is Intermediate
* @description Whether or not this node is an intermediate node.
* @default false
*/
is_intermediate?: boolean;
/**
* Type
* @default param_prompt
* @enum {string}
*/
type?: "param_prompt";
/**
* Prompt
* @description The prompt value
* @default
*/
prompt?: string;
};
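Per the schema above, a `param_prompt` node needs only a unique `id` and the `type` discriminator, with `prompt` and `is_intermediate` optional. A minimal sketch of adding one to a session graph through the node endpoints whose request-body unions appear later in this diff — the host, port, and session id are illustrative placeholders:

```python
import requests

node = {
    "id": "prompt_param_1",        # must be unique among the graph's nodes
    "type": "param_prompt",        # discriminator, per the schema above
    "prompt": "a castle at dusk",  # optional; defaults to ""
    "is_intermediate": False,      # optional; defaults to false
}

session_id = "your-session-id"  # placeholder
resp = requests.post(
    f"http://127.0.0.1:9090/api/v1/sessions/{session_id}/nodes",  # illustrative host/port
    json=node,
)
resp.raise_for_status()
print(resp.json())
```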
/**
* ParamStringInvocation
* @description A string parameter
@@ -5568,18 +5597,18 @@ export type components = {
* @enum {string}
*/
StableDiffusionXLModelFormat: "checkpoint" | "diffusers";
/**
* ControlNetModelFormat
* @description An enumeration.
* @enum {string}
*/
ControlNetModelFormat: "checkpoint" | "diffusers";
/**
* StableDiffusion1ModelFormat
* @description An enumeration.
* @enum {string}
*/
StableDiffusion1ModelFormat: "checkpoint" | "diffusers";
/**
* ControlNetModelFormat
* @description An enumeration.
* @enum {string}
*/
ControlNetModelFormat: "checkpoint" | "diffusers";
};
responses: never;
parameters: never;
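The format enums above only distinguish `checkpoint` from `diffusers`. As a rough illustration of what that distinction means on disk — a hypothetical heuristic, not the repo's actual model-probe logic — the two are usually told apart like so:

```python
from pathlib import Path

def guess_model_format(path: str) -> str:
    # Hypothetical heuristic, not InvokeAI's probe: diffusers models are
    # directories (pipelines typically carry a model_index.json, single
    # models a config.json); checkpoints are single weight files.
    p = Path(path)
    if p.is_dir() and ((p / "model_index.json").exists() or (p / "config.json").exists()):
        return "diffusers"
    if p.is_file() and p.suffix in {".ckpt", ".safetensors"}:
        return "checkpoint"
    raise ValueError(f"unrecognized model format: {path}")
```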
@@ -5690,7 +5719,7 @@ export type operations = {
};
requestBody: {
content: {
"application/json": components["schemas"]["ControlNetInvocation"] | components["schemas"]["ImageProcessorInvocation"] | components["schemas"]["MainModelLoaderInvocation"] | components["schemas"]["LoraLoaderInvocation"] | components["schemas"]["VaeLoaderInvocation"] | components["schemas"]["MetadataAccumulatorInvocation"] | components["schemas"]["LoadImageInvocation"] | components["schemas"]["ShowImageInvocation"] | components["schemas"]["ImageCropInvocation"] | components["schemas"]["ImagePasteInvocation"] | components["schemas"]["MaskFromAlphaInvocation"] | components["schemas"]["ImageMultiplyInvocation"] | components["schemas"]["ImageChannelInvocation"] | components["schemas"]["ImageConvertInvocation"] | components["schemas"]["ImageBlurInvocation"] | components["schemas"]["ImageResizeInvocation"] | components["schemas"]["ImageScaleInvocation"] | components["schemas"]["ImageLerpInvocation"] | components["schemas"]["ImageInverseLerpInvocation"] | components["schemas"]["ImageNSFWBlurInvocation"] | components["schemas"]["ImageWatermarkInvocation"] | components["schemas"]["CvInpaintInvocation"] | components["schemas"]["CompelInvocation"] | components["schemas"]["SDXLCompelPromptInvocation"] | components["schemas"]["SDXLRefinerCompelPromptInvocation"] | components["schemas"]["SDXLRawPromptInvocation"] | components["schemas"]["SDXLRefinerRawPromptInvocation"] | components["schemas"]["ClipSkipInvocation"] | components["schemas"]["TextToLatentsInvocation"] | components["schemas"]["LatentsToImageInvocation"] | components["schemas"]["ResizeLatentsInvocation"] | components["schemas"]["ScaleLatentsInvocation"] | components["schemas"]["ImageToLatentsInvocation"] | components["schemas"]["InpaintInvocation"] | components["schemas"]["AddInvocation"] | components["schemas"]["SubtractInvocation"] | components["schemas"]["MultiplyInvocation"] | components["schemas"]["DivideInvocation"] | components["schemas"]["RandomIntInvocation"] | components["schemas"]["ParamIntInvocation"] | components["schemas"]["ParamFloatInvocation"] | components["schemas"]["ParamStringInvocation"] | components["schemas"]["ESRGANInvocation"] | components["schemas"]["RangeInvocation"] | components["schemas"]["RangeOfSizeInvocation"] | components["schemas"]["RandomRangeInvocation"] | components["schemas"]["ImageCollectionInvocation"] | components["schemas"]["DynamicPromptInvocation"] | components["schemas"]["PromptsFromFileInvocation"] | components["schemas"]["InfillColorInvocation"] | components["schemas"]["InfillTileInvocation"] | components["schemas"]["InfillPatchMatchInvocation"] | components["schemas"]["NoiseInvocation"] | components["schemas"]["SDXLModelLoaderInvocation"] | components["schemas"]["SDXLRefinerModelLoaderInvocation"] | components["schemas"]["SDXLTextToLatentsInvocation"] | components["schemas"]["SDXLLatentsToLatentsInvocation"] | components["schemas"]["FloatLinearRangeInvocation"] | components["schemas"]["StepParamEasingInvocation"] | components["schemas"]["GraphInvocation"] | components["schemas"]["IterateInvocation"] | components["schemas"]["CollectInvocation"] | components["schemas"]["CannyImageProcessorInvocation"] | components["schemas"]["HedImageProcessorInvocation"] | components["schemas"]["LineartImageProcessorInvocation"] | components["schemas"]["LineartAnimeImageProcessorInvocation"] | components["schemas"]["OpenposeImageProcessorInvocation"] | components["schemas"]["MidasDepthImageProcessorInvocation"] | components["schemas"]["NormalbaeImageProcessorInvocation"] | 
components["schemas"]["MlsdImageProcessorInvocation"] | components["schemas"]["PidiImageProcessorInvocation"] | components["schemas"]["ContentShuffleImageProcessorInvocation"] | components["schemas"]["ZoeDepthImageProcessorInvocation"] | components["schemas"]["MediapipeFaceProcessorInvocation"] | components["schemas"]["LeresImageProcessorInvocation"] | components["schemas"]["TileResamplerProcessorInvocation"] | components["schemas"]["SegmentAnythingProcessorInvocation"] | components["schemas"]["LatentsToLatentsInvocation"];
"application/json": components["schemas"]["ControlNetInvocation"] | components["schemas"]["ImageProcessorInvocation"] | components["schemas"]["MainModelLoaderInvocation"] | components["schemas"]["LoraLoaderInvocation"] | components["schemas"]["VaeLoaderInvocation"] | components["schemas"]["MetadataAccumulatorInvocation"] | components["schemas"]["RangeInvocation"] | components["schemas"]["RangeOfSizeInvocation"] | components["schemas"]["RandomRangeInvocation"] | components["schemas"]["ImageCollectionInvocation"] | components["schemas"]["CompelInvocation"] | components["schemas"]["SDXLCompelPromptInvocation"] | components["schemas"]["SDXLRefinerCompelPromptInvocation"] | components["schemas"]["SDXLRawPromptInvocation"] | components["schemas"]["SDXLRefinerRawPromptInvocation"] | components["schemas"]["ClipSkipInvocation"] | components["schemas"]["LoadImageInvocation"] | components["schemas"]["ShowImageInvocation"] | components["schemas"]["ImageCropInvocation"] | components["schemas"]["ImagePasteInvocation"] | components["schemas"]["MaskFromAlphaInvocation"] | components["schemas"]["ImageMultiplyInvocation"] | components["schemas"]["ImageChannelInvocation"] | components["schemas"]["ImageConvertInvocation"] | components["schemas"]["ImageBlurInvocation"] | components["schemas"]["ImageResizeInvocation"] | components["schemas"]["ImageScaleInvocation"] | components["schemas"]["ImageLerpInvocation"] | components["schemas"]["ImageInverseLerpInvocation"] | components["schemas"]["ImageNSFWBlurInvocation"] | components["schemas"]["ImageWatermarkInvocation"] | components["schemas"]["CvInpaintInvocation"] | components["schemas"]["TextToLatentsInvocation"] | components["schemas"]["LatentsToImageInvocation"] | components["schemas"]["ResizeLatentsInvocation"] | components["schemas"]["ScaleLatentsInvocation"] | components["schemas"]["ImageToLatentsInvocation"] | components["schemas"]["InpaintInvocation"] | components["schemas"]["InfillColorInvocation"] | components["schemas"]["InfillTileInvocation"] | components["schemas"]["InfillPatchMatchInvocation"] | components["schemas"]["AddInvocation"] | components["schemas"]["SubtractInvocation"] | components["schemas"]["MultiplyInvocation"] | components["schemas"]["DivideInvocation"] | components["schemas"]["RandomIntInvocation"] | components["schemas"]["NoiseInvocation"] | components["schemas"]["DynamicPromptInvocation"] | components["schemas"]["PromptsFromFileInvocation"] | components["schemas"]["ParamIntInvocation"] | components["schemas"]["ParamFloatInvocation"] | components["schemas"]["ParamStringInvocation"] | components["schemas"]["ParamPromptInvocation"] | components["schemas"]["FloatLinearRangeInvocation"] | components["schemas"]["StepParamEasingInvocation"] | components["schemas"]["SDXLModelLoaderInvocation"] | components["schemas"]["SDXLRefinerModelLoaderInvocation"] | components["schemas"]["SDXLTextToLatentsInvocation"] | components["schemas"]["SDXLLatentsToLatentsInvocation"] | components["schemas"]["ESRGANInvocation"] | components["schemas"]["GraphInvocation"] | components["schemas"]["IterateInvocation"] | components["schemas"]["CollectInvocation"] | components["schemas"]["CannyImageProcessorInvocation"] | components["schemas"]["HedImageProcessorInvocation"] | components["schemas"]["LineartImageProcessorInvocation"] | components["schemas"]["LineartAnimeImageProcessorInvocation"] | components["schemas"]["OpenposeImageProcessorInvocation"] | components["schemas"]["MidasDepthImageProcessorInvocation"] | 
components["schemas"]["NormalbaeImageProcessorInvocation"] | components["schemas"]["MlsdImageProcessorInvocation"] | components["schemas"]["PidiImageProcessorInvocation"] | components["schemas"]["ContentShuffleImageProcessorInvocation"] | components["schemas"]["ZoeDepthImageProcessorInvocation"] | components["schemas"]["MediapipeFaceProcessorInvocation"] | components["schemas"]["LeresImageProcessorInvocation"] | components["schemas"]["TileResamplerProcessorInvocation"] | components["schemas"]["SegmentAnythingProcessorInvocation"] | components["schemas"]["LatentsToLatentsInvocation"];
};
};
responses: {
@@ -5727,7 +5756,7 @@ export type operations = {
};
requestBody: {
content: {
"application/json": components["schemas"]["ControlNetInvocation"] | components["schemas"]["ImageProcessorInvocation"] | components["schemas"]["MainModelLoaderInvocation"] | components["schemas"]["LoraLoaderInvocation"] | components["schemas"]["VaeLoaderInvocation"] | components["schemas"]["MetadataAccumulatorInvocation"] | components["schemas"]["LoadImageInvocation"] | components["schemas"]["ShowImageInvocation"] | components["schemas"]["ImageCropInvocation"] | components["schemas"]["ImagePasteInvocation"] | components["schemas"]["MaskFromAlphaInvocation"] | components["schemas"]["ImageMultiplyInvocation"] | components["schemas"]["ImageChannelInvocation"] | components["schemas"]["ImageConvertInvocation"] | components["schemas"]["ImageBlurInvocation"] | components["schemas"]["ImageResizeInvocation"] | components["schemas"]["ImageScaleInvocation"] | components["schemas"]["ImageLerpInvocation"] | components["schemas"]["ImageInverseLerpInvocation"] | components["schemas"]["ImageNSFWBlurInvocation"] | components["schemas"]["ImageWatermarkInvocation"] | components["schemas"]["CvInpaintInvocation"] | components["schemas"]["CompelInvocation"] | components["schemas"]["SDXLCompelPromptInvocation"] | components["schemas"]["SDXLRefinerCompelPromptInvocation"] | components["schemas"]["SDXLRawPromptInvocation"] | components["schemas"]["SDXLRefinerRawPromptInvocation"] | components["schemas"]["ClipSkipInvocation"] | components["schemas"]["TextToLatentsInvocation"] | components["schemas"]["LatentsToImageInvocation"] | components["schemas"]["ResizeLatentsInvocation"] | components["schemas"]["ScaleLatentsInvocation"] | components["schemas"]["ImageToLatentsInvocation"] | components["schemas"]["InpaintInvocation"] | components["schemas"]["AddInvocation"] | components["schemas"]["SubtractInvocation"] | components["schemas"]["MultiplyInvocation"] | components["schemas"]["DivideInvocation"] | components["schemas"]["RandomIntInvocation"] | components["schemas"]["ParamIntInvocation"] | components["schemas"]["ParamFloatInvocation"] | components["schemas"]["ParamStringInvocation"] | components["schemas"]["ESRGANInvocation"] | components["schemas"]["RangeInvocation"] | components["schemas"]["RangeOfSizeInvocation"] | components["schemas"]["RandomRangeInvocation"] | components["schemas"]["ImageCollectionInvocation"] | components["schemas"]["DynamicPromptInvocation"] | components["schemas"]["PromptsFromFileInvocation"] | components["schemas"]["InfillColorInvocation"] | components["schemas"]["InfillTileInvocation"] | components["schemas"]["InfillPatchMatchInvocation"] | components["schemas"]["NoiseInvocation"] | components["schemas"]["SDXLModelLoaderInvocation"] | components["schemas"]["SDXLRefinerModelLoaderInvocation"] | components["schemas"]["SDXLTextToLatentsInvocation"] | components["schemas"]["SDXLLatentsToLatentsInvocation"] | components["schemas"]["FloatLinearRangeInvocation"] | components["schemas"]["StepParamEasingInvocation"] | components["schemas"]["GraphInvocation"] | components["schemas"]["IterateInvocation"] | components["schemas"]["CollectInvocation"] | components["schemas"]["CannyImageProcessorInvocation"] | components["schemas"]["HedImageProcessorInvocation"] | components["schemas"]["LineartImageProcessorInvocation"] | components["schemas"]["LineartAnimeImageProcessorInvocation"] | components["schemas"]["OpenposeImageProcessorInvocation"] | components["schemas"]["MidasDepthImageProcessorInvocation"] | components["schemas"]["NormalbaeImageProcessorInvocation"] | 
components["schemas"]["MlsdImageProcessorInvocation"] | components["schemas"]["PidiImageProcessorInvocation"] | components["schemas"]["ContentShuffleImageProcessorInvocation"] | components["schemas"]["ZoeDepthImageProcessorInvocation"] | components["schemas"]["MediapipeFaceProcessorInvocation"] | components["schemas"]["LeresImageProcessorInvocation"] | components["schemas"]["TileResamplerProcessorInvocation"] | components["schemas"]["SegmentAnythingProcessorInvocation"] | components["schemas"]["LatentsToLatentsInvocation"];
"application/json": components["schemas"]["ControlNetInvocation"] | components["schemas"]["ImageProcessorInvocation"] | components["schemas"]["MainModelLoaderInvocation"] | components["schemas"]["LoraLoaderInvocation"] | components["schemas"]["VaeLoaderInvocation"] | components["schemas"]["MetadataAccumulatorInvocation"] | components["schemas"]["RangeInvocation"] | components["schemas"]["RangeOfSizeInvocation"] | components["schemas"]["RandomRangeInvocation"] | components["schemas"]["ImageCollectionInvocation"] | components["schemas"]["CompelInvocation"] | components["schemas"]["SDXLCompelPromptInvocation"] | components["schemas"]["SDXLRefinerCompelPromptInvocation"] | components["schemas"]["SDXLRawPromptInvocation"] | components["schemas"]["SDXLRefinerRawPromptInvocation"] | components["schemas"]["ClipSkipInvocation"] | components["schemas"]["LoadImageInvocation"] | components["schemas"]["ShowImageInvocation"] | components["schemas"]["ImageCropInvocation"] | components["schemas"]["ImagePasteInvocation"] | components["schemas"]["MaskFromAlphaInvocation"] | components["schemas"]["ImageMultiplyInvocation"] | components["schemas"]["ImageChannelInvocation"] | components["schemas"]["ImageConvertInvocation"] | components["schemas"]["ImageBlurInvocation"] | components["schemas"]["ImageResizeInvocation"] | components["schemas"]["ImageScaleInvocation"] | components["schemas"]["ImageLerpInvocation"] | components["schemas"]["ImageInverseLerpInvocation"] | components["schemas"]["ImageNSFWBlurInvocation"] | components["schemas"]["ImageWatermarkInvocation"] | components["schemas"]["CvInpaintInvocation"] | components["schemas"]["TextToLatentsInvocation"] | components["schemas"]["LatentsToImageInvocation"] | components["schemas"]["ResizeLatentsInvocation"] | components["schemas"]["ScaleLatentsInvocation"] | components["schemas"]["ImageToLatentsInvocation"] | components["schemas"]["InpaintInvocation"] | components["schemas"]["InfillColorInvocation"] | components["schemas"]["InfillTileInvocation"] | components["schemas"]["InfillPatchMatchInvocation"] | components["schemas"]["AddInvocation"] | components["schemas"]["SubtractInvocation"] | components["schemas"]["MultiplyInvocation"] | components["schemas"]["DivideInvocation"] | components["schemas"]["RandomIntInvocation"] | components["schemas"]["NoiseInvocation"] | components["schemas"]["DynamicPromptInvocation"] | components["schemas"]["PromptsFromFileInvocation"] | components["schemas"]["ParamIntInvocation"] | components["schemas"]["ParamFloatInvocation"] | components["schemas"]["ParamStringInvocation"] | components["schemas"]["ParamPromptInvocation"] | components["schemas"]["FloatLinearRangeInvocation"] | components["schemas"]["StepParamEasingInvocation"] | components["schemas"]["SDXLModelLoaderInvocation"] | components["schemas"]["SDXLRefinerModelLoaderInvocation"] | components["schemas"]["SDXLTextToLatentsInvocation"] | components["schemas"]["SDXLLatentsToLatentsInvocation"] | components["schemas"]["ESRGANInvocation"] | components["schemas"]["GraphInvocation"] | components["schemas"]["IterateInvocation"] | components["schemas"]["CollectInvocation"] | components["schemas"]["CannyImageProcessorInvocation"] | components["schemas"]["HedImageProcessorInvocation"] | components["schemas"]["LineartImageProcessorInvocation"] | components["schemas"]["LineartAnimeImageProcessorInvocation"] | components["schemas"]["OpenposeImageProcessorInvocation"] | components["schemas"]["MidasDepthImageProcessorInvocation"] | 
components["schemas"]["NormalbaeImageProcessorInvocation"] | components["schemas"]["MlsdImageProcessorInvocation"] | components["schemas"]["PidiImageProcessorInvocation"] | components["schemas"]["ContentShuffleImageProcessorInvocation"] | components["schemas"]["ZoeDepthImageProcessorInvocation"] | components["schemas"]["MediapipeFaceProcessorInvocation"] | components["schemas"]["LeresImageProcessorInvocation"] | components["schemas"]["TileResamplerProcessorInvocation"] | components["schemas"]["SegmentAnythingProcessorInvocation"] | components["schemas"]["LatentsToLatentsInvocation"];
};
};
responses: {

View File

@@ -13,6 +13,15 @@ const invokeAI = defineStyle((props) => ({
var(--invokeai-colors-base-200) 70%,
var(--invokeai-colors-base-200) 100%)`,
},
_disabled: {
'::-webkit-resizer': {
backgroundImage: `linear-gradient(135deg,
var(--invokeai-colors-base-50) 0%,
var(--invokeai-colors-base-50) 70%,
var(--invokeai-colors-base-200) 70%,
var(--invokeai-colors-base-200) 100%)`,
},
},
_dark: {
'::-webkit-resizer': {
backgroundImage: `linear-gradient(135deg,
@@ -21,6 +30,15 @@ const invokeAI = defineStyle((props) => ({
var(--invokeai-colors-base-800) 70%,
var(--invokeai-colors-base-800) 100%)`,
},
_disabled: {
'::-webkit-resizer': {
backgroundImage: `linear-gradient(135deg,
var(--invokeai-colors-base-900) 0%,
var(--invokeai-colors-base-900) 70%,
var(--invokeai-colors-base-800) 70%,
var(--invokeai-colors-base-800) 100%)`,
},
},
},
}));

View File

@@ -1 +1 @@
__version__ = "3.0.1rc2"
__version__ = "3.0.1post3"

View File

@@ -1,281 +1,283 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "ycYWcsEKc6w7"
},
"source": [
"# Stable Diffusion AI Notebook (Release 2.0.0)\n",
"\n",
"<img src=\"https://user-images.githubusercontent.com/60411196/186547976-d9de378a-9de8-4201-9c25-c057a9c59bad.jpeg\" alt=\"stable-diffusion-ai\" width=\"170px\"/> <br>\n",
"#### Instructions:\n",
"1. Execute each cell in order to mount a Dream bot and create images from text. <br>\n",
"2. Once cells 1-8 were run correctly you'll be executing a terminal in cell #9, you'll need to enter `python scripts/dream.py` command to run Dream bot.<br> \n",
"3. After launching dream bot, you'll see: <br> `Dream > ` in terminal. <br> Insert a command, eg. `Dream > Astronaut floating in a distant galaxy`, or type `-h` for help.\n",
"3. After completion you'll see your generated images in path `stable-diffusion/outputs/img-samples/`, you can also show last generated images in cell #10.\n",
"4. To quit Dream bot use `q` command. <br> \n",
"---\n",
"<font color=\"red\">Note:</font> It takes some time to load, but after installing all dependencies you can use the bot all time you want while colab instance is up. <br>\n",
"<font color=\"red\">Requirements:</font> For this notebook to work you need to have [Stable-Diffusion-v-1-4](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original) stored in your Google Drive, it will be needed in cell #7\n",
"##### For more details visit Github repository: [invoke-ai/InvokeAI](https://github.com/invoke-ai/InvokeAI)\n",
"---\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "dr32VLxlnouf"
},
"source": [
"## ◢ Installation"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "a2Z5Qu_o8VtQ"
},
"outputs": [],
"source": [
"#@title 1. Check current GPU assigned\n",
"!nvidia-smi -L\n",
"!nvidia-smi"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "vbI9ZsQHzjqF"
},
"outputs": [],
"source": [
"#@title 2. Download stable-diffusion Repository\n",
"from os.path import exists\n",
"\n",
"!git clone --quiet https://github.com/invoke-ai/InvokeAI.git # Original repo\n",
"%cd /content/InvokeAI/\n",
"!git checkout --quiet tags/v2.0.0"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "QbXcGXYEFSNB"
},
"outputs": [],
"source": [
"#@title 3. Install dependencies\n",
"import gc\n",
"\n",
"!wget https://raw.githubusercontent.com/invoke-ai/InvokeAI/development/environments-and-requirements/requirements-base.txt\n",
"!wget https://raw.githubusercontent.com/invoke-ai/InvokeAI/development/environments-and-requirements/requirements-win-colab-cuda.txt\n",
"!pip install colab-xterm\n",
"!pip install -r requirements-lin-win-colab-CUDA.txt\n",
"!pip install clean-fid torchtext\n",
"!pip install transformers\n",
"gc.collect()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "8rSMhgnAttQa"
},
"outputs": [],
"source": [
"#@title 4. Restart Runtime\n",
"exit()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "ChIDWxLVHGGJ"
},
"outputs": [],
"source": [
"#@title 5. Load small ML models required\n",
"import gc\n",
"%cd /content/InvokeAI/\n",
"!python scripts/preload_models.py\n",
"gc.collect()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "795x1tMoo8b1"
},
"source": [
"## ◢ Configuration"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "YEWPV-sF1RDM"
},
"outputs": [],
"source": [
"#@title 6. Mount google Drive\n",
"from google.colab import drive\n",
"drive.mount('/content/drive')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "zRTJeZ461WGu"
},
"outputs": [],
"source": [
"#@title 7. Drive Path to model\n",
"#@markdown Path should start with /content/drive/path-to-your-file <br>\n",
"#@markdown <font color=\"red\">Note:</font> Model should be downloaded from https://huggingface.co <br>\n",
"#@markdown Lastest release: [Stable-Diffusion-v-1-4](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original)\n",
"from os.path import exists\n",
"\n",
"model_path = \"\" #@param {type:\"string\"}\n",
"if exists(model_path):\n",
" print(\"✅ Valid directory\")\n",
"else: \n",
" print(\"❌ File doesn't exist\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "UY-NNz4I8_aG"
},
"outputs": [],
"source": [
"#@title 8. Symlink to model\n",
"\n",
"from os.path import exists\n",
"import os \n",
"\n",
"# Folder creation if it doesn't exist\n",
"if exists(\"/content/InvokeAI/models/ldm/stable-diffusion-v1\"):\n",
" print(\"❗ Dir stable-diffusion-v1 already exists\")\n",
"else:\n",
" %mkdir /content/InvokeAI/models/ldm/stable-diffusion-v1\n",
" print(\"✅ Dir stable-diffusion-v1 created\")\n",
"\n",
"# Symbolic link if it doesn't exist\n",
"if exists(\"/content/InvokeAI/models/ldm/stable-diffusion-v1/model.ckpt\"):\n",
" print(\"❗ Symlink already created\")\n",
"else: \n",
" src = model_path\n",
" dst = '/content/InvokeAI/models/ldm/stable-diffusion-v1/model.ckpt'\n",
" os.symlink(src, dst) \n",
" print(\"✅ Symbolic link created successfully\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Mc28N0_NrCQH"
},
"source": [
"## ◢ Execution"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "ir4hCrMIuUpl"
},
"outputs": [],
"source": [
"#@title 9. Run Terminal and Execute Dream bot\n",
"#@markdown <font color=\"blue\">Steps:</font> <br>\n",
"#@markdown 1. Execute command `python scripts/invoke.py` to run InvokeAI.<br>\n",
"#@markdown 2. After initialized you'll see `Dream>` line.<br>\n",
"#@markdown 3. Example text: `Astronaut floating in a distant galaxy` <br>\n",
"#@markdown 4. To quit Dream bot use: `q` command.<br>\n",
"\n",
"%load_ext colabxterm\n",
"%xterm\n",
"gc.collect()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "qnLohSHmKoGk"
},
"outputs": [],
"source": [
"#@title 10. Show the last 15 generated images\n",
"import glob\n",
"import matplotlib.pyplot as plt\n",
"import matplotlib.image as mpimg\n",
"%matplotlib inline\n",
"\n",
"images = []\n",
"for img_path in sorted(glob.glob('/content/InvokeAI/outputs/img-samples/*.png'), reverse=True):\n",
" images.append(mpimg.imread(img_path))\n",
"\n",
"images = images[:15] \n",
"\n",
"plt.figure(figsize=(20,10))\n",
"\n",
"columns = 5\n",
"for i, image in enumerate(images):\n",
" ax = plt.subplot(len(images) / columns + 1, columns, i + 1)\n",
" ax.axes.xaxis.set_visible(False)\n",
" ax.axes.yaxis.set_visible(False)\n",
" ax.axis('off')\n",
" plt.imshow(image)\n",
" gc.collect()\n",
"\n"
]
}
],
"metadata": {
"accelerator": "GPU",
"colab": {
"collapsed_sections": [],
"private_outputs": true,
"provenance": []
},
"gpuClass": "standard",
"kernelspec": {
"display_name": "Python 3.9.12 64-bit",
"language": "python",
"name": "python3"
},
"language_info": {
"name": "python",
"version": "3.9.12"
},
"vscode": {
"interpreter": {
"hash": "4e870c5c5fe42db7e2c5647ae5af656ff3391bf8c2b729cbf7fa0e16ca8cb5af"
}
}
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "ycYWcsEKc6w7"
},
"source": [
"# Stable Diffusion AI Notebook (Release 2.0.0)\n",
"\n",
"<img src=\"https://user-images.githubusercontent.com/60411196/186547976-d9de378a-9de8-4201-9c25-c057a9c59bad.jpeg\" alt=\"stable-diffusion-ai\" width=\"170px\"/> <br>\n",
"#### Instructions:\n",
"1. Execute each cell in order to mount a Dream bot and create images from text. <br>\n",
"2. Once cells 1-8 were run correctly you'll be executing a terminal in cell #9, you'll need to enter `python scripts/dream.py` command to run Dream bot.<br> \n",
"3. After launching dream bot, you'll see: <br> `Dream > ` in terminal. <br> Insert a command, eg. `Dream > Astronaut floating in a distant galaxy`, or type `-h` for help.\n",
"3. After completion you'll see your generated images in path `stable-diffusion/outputs/img-samples/`, you can also show last generated images in cell #10.\n",
"4. To quit Dream bot use `q` command. <br> \n",
"---\n",
"<font color=\"red\">Note:</font> It takes some time to load, but after installing all dependencies you can use the bot all time you want while colab instance is up. <br>\n",
"<font color=\"red\">Requirements:</font> For this notebook to work you need to have [Stable-Diffusion-v-1-4](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original) stored in your Google Drive, it will be needed in cell #7\n",
"##### For more details visit Github repository: [invoke-ai/InvokeAI](https://github.com/invoke-ai/InvokeAI)\n",
"---\n"
]
},
"nbformat": 4,
"nbformat_minor": 0
{
"cell_type": "markdown",
"metadata": {
"id": "dr32VLxlnouf"
},
"source": [
"## ◢ Installation"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "a2Z5Qu_o8VtQ"
},
"outputs": [],
"source": [
"# @title 1. Check current GPU assigned\n",
"!nvidia-smi -L\n",
"!nvidia-smi"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "vbI9ZsQHzjqF"
},
"outputs": [],
"source": [
"# @title 2. Download stable-diffusion Repository\n",
"from os.path import exists\n",
"\n",
"!git clone --quiet https://github.com/invoke-ai/InvokeAI.git # Original repo\n",
"%cd /content/InvokeAI/\n",
"!git checkout --quiet tags/v2.0.0"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "QbXcGXYEFSNB"
},
"outputs": [],
"source": [
"# @title 3. Install dependencies\n",
"import gc\n",
"\n",
"!wget https://raw.githubusercontent.com/invoke-ai/InvokeAI/development/environments-and-requirements/requirements-base.txt\n",
"!wget https://raw.githubusercontent.com/invoke-ai/InvokeAI/development/environments-and-requirements/requirements-win-colab-cuda.txt\n",
"!pip install colab-xterm\n",
"!pip install -r requirements-lin-win-colab-CUDA.txt\n",
"!pip install clean-fid torchtext\n",
"!pip install transformers\n",
"gc.collect()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "8rSMhgnAttQa"
},
"outputs": [],
"source": [
"# @title 4. Restart Runtime\n",
"exit()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "ChIDWxLVHGGJ"
},
"outputs": [],
"source": [
"# @title 5. Load small ML models required\n",
"import gc\n",
"\n",
"%cd /content/InvokeAI/\n",
"!python scripts/preload_models.py\n",
"gc.collect()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "795x1tMoo8b1"
},
"source": [
"## ◢ Configuration"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "YEWPV-sF1RDM"
},
"outputs": [],
"source": [
"# @title 6. Mount google Drive\n",
"from google.colab import drive\n",
"\n",
"drive.mount(\"/content/drive\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "zRTJeZ461WGu"
},
"outputs": [],
"source": [
"# @title 7. Drive Path to model\n",
"# @markdown Path should start with /content/drive/path-to-your-file <br>\n",
"# @markdown <font color=\"red\">Note:</font> Model should be downloaded from https://huggingface.co <br>\n",
"# @markdown Lastest release: [Stable-Diffusion-v-1-4](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original)\n",
"from os.path import exists\n",
"\n",
"model_path = \"\" # @param {type:\"string\"}\n",
"if exists(model_path):\n",
" print(\"✅ Valid directory\")\n",
"else:\n",
" print(\"❌ File doesn't exist\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "UY-NNz4I8_aG"
},
"outputs": [],
"source": [
"# @title 8. Symlink to model\n",
"\n",
"from os.path import exists\n",
"import os\n",
"\n",
"# Folder creation if it doesn't exist\n",
"if exists(\"/content/InvokeAI/models/ldm/stable-diffusion-v1\"):\n",
" print(\"❗ Dir stable-diffusion-v1 already exists\")\n",
"else:\n",
" %mkdir /content/InvokeAI/models/ldm/stable-diffusion-v1\n",
" print(\"✅ Dir stable-diffusion-v1 created\")\n",
"\n",
"# Symbolic link if it doesn't exist\n",
"if exists(\"/content/InvokeAI/models/ldm/stable-diffusion-v1/model.ckpt\"):\n",
" print(\"❗ Symlink already created\")\n",
"else:\n",
" src = model_path\n",
" dst = \"/content/InvokeAI/models/ldm/stable-diffusion-v1/model.ckpt\"\n",
" os.symlink(src, dst)\n",
" print(\"✅ Symbolic link created successfully\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Mc28N0_NrCQH"
},
"source": [
"## ◢ Execution"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "ir4hCrMIuUpl"
},
"outputs": [],
"source": [
"# @title 9. Run Terminal and Execute Dream bot\n",
"# @markdown <font color=\"blue\">Steps:</font> <br>\n",
"# @markdown 1. Execute command `python scripts/invoke.py` to run InvokeAI.<br>\n",
"# @markdown 2. After initialized you'll see `Dream>` line.<br>\n",
"# @markdown 3. Example text: `Astronaut floating in a distant galaxy` <br>\n",
"# @markdown 4. To quit Dream bot use: `q` command.<br>\n",
"\n",
"%load_ext colabxterm\n",
"%xterm\n",
"gc.collect()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "qnLohSHmKoGk"
},
"outputs": [],
"source": [
"#@title 10. Show the last 15 generated images\n",
"import glob\n",
"import matplotlib.pyplot as plt\n",
"import matplotlib.image as mpimg\n",
"%matplotlib inline\n",
"\n",
"images = []\n",
"for img_path in sorted(glob.glob('/content/InvokeAI/outputs/img-samples/*.png'), reverse=True):\n",
" images.append(mpimg.imread(img_path))\n",
"\n",
"images = images[:15] \n",
"\n",
"plt.figure(figsize=(20,10))\n",
"\n",
"columns = 5\n",
"for i, image in enumerate(images):\n",
" ax = plt.subplot(len(images) / columns + 1, columns, i + 1)\n",
" ax.axes.xaxis.set_visible(False)\n",
" ax.axes.yaxis.set_visible(False)\n",
" ax.axis('off')\n",
" plt.imshow(image)\n",
" gc.collect()\n",
"\n"
]
}
],
"metadata": {
"accelerator": "GPU",
"colab": {
"collapsed_sections": [],
"private_outputs": true,
"provenance": []
},
"gpuClass": "standard",
"kernelspec": {
"display_name": "Python 3.9.12 64-bit",
"language": "python",
"name": "python3"
},
"language_info": {
"name": "python",
"version": "3.9.12"
},
"vscode": {
"interpreter": {
"hash": "4e870c5c5fe42db7e2c5647ae5af656ff3391bf8c2b729cbf7fa0e16ca8cb5af"
}
}
},
"nbformat": 4,
"nbformat_minor": 0
}
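The notebook changes above are mechanical formatter output (`#@title` → `# @title`, single to double quotes, call arguments wrapped one per line). A hedged sketch of reproducing them, assuming Black with its jupyter extra was the formatter in use and using an illustrative notebook path:

```python
# Assumption: Black produced the reformatting above; requires
#   pip install "black[jupyter]"
# The notebook path below is illustrative, not taken from this diff.
import subprocess

subprocess.run(
    ["black", "notebooks/Stable_Diffusion_AI_Notebook.ipynb"],
    check=True,  # raise if Black exits non-zero
)
```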

View File

@@ -58,14 +58,14 @@ dependencies = [
"invisible-watermark~=0.2.0", # needed to install SDXL base and refiner using their repo_ids
"matplotlib", # needed for plotting of Penner easing functions
"mediapipe", # needed for "mediapipeface" controlnet model
"numpy",
"npyscreen",
"numpy==1.24.4",
"omegaconf",
"opencv-python",
"pydantic==1.*",
"picklescan",
"pillow",
"prompt-toolkit",
"pydantic==1.10.10",
"pympler~=1.0.1",
"pypatchmatch",
'pyperclip',
@@ -81,7 +81,7 @@ dependencies = [
"test-tube~=0.7.5",
"torch~=2.0.1",
"torchvision~=0.15.2",
"torchmetrics~=1.0.1",
"torchmetrics~=0.11.0",
"torchsde~=0.2.5",
"transformers~=4.31.0",
"uvicorn[standard]~=0.21.1",

View File

@@ -52,17 +52,17 @@
"name": "stdout",
"text": [
"Cloning into 'latent-diffusion'...\n",
"remote: Enumerating objects: 992, done.\u001B[K\n",
"remote: Counting objects: 100% (695/695), done.\u001B[K\n",
"remote: Compressing objects: 100% (397/397), done.\u001B[K\n",
"remote: Total 992 (delta 375), reused 564 (delta 253), pack-reused 297\u001B[K\n",
"remote: Enumerating objects: 992, done.\u001b[K\n",
"remote: Counting objects: 100% (695/695), done.\u001b[K\n",
"remote: Compressing objects: 100% (397/397), done.\u001b[K\n",
"remote: Total 992 (delta 375), reused 564 (delta 253), pack-reused 297\u001b[K\n",
"Receiving objects: 100% (992/992), 30.78 MiB | 29.43 MiB/s, done.\n",
"Resolving deltas: 100% (510/510), done.\n",
"Cloning into 'taming-transformers'...\n",
"remote: Enumerating objects: 1335, done.\u001B[K\n",
"remote: Counting objects: 100% (525/525), done.\u001B[K\n",
"remote: Compressing objects: 100% (493/493), done.\u001B[K\n",
"remote: Total 1335 (delta 58), reused 481 (delta 30), pack-reused 810\u001B[K\n",
"remote: Enumerating objects: 1335, done.\u001b[K\n",
"remote: Counting objects: 100% (525/525), done.\u001b[K\n",
"remote: Compressing objects: 100% (493/493), done.\u001b[K\n",
"remote: Total 1335 (delta 58), reused 481 (delta 30), pack-reused 810\u001b[K\n",
"Receiving objects: 100% (1335/1335), 412.35 MiB | 30.53 MiB/s, done.\n",
"Resolving deltas: 100% (267/267), done.\n",
"Obtaining file:///content/taming-transformers\n",
@@ -73,23 +73,24 @@
"Installing collected packages: taming-transformers\n",
" Running setup.py develop for taming-transformers\n",
"Successfully installed taming-transformers-0.0.1\n",
"\u001B[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\n",
"\u001b[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\n",
"tensorflow 2.8.0 requires tf-estimator-nightly==2.8.0.dev2021122109, which is not installed.\n",
"arviz 0.11.4 requires typing-extensions<4,>=3.7.4.3, but you have typing-extensions 4.1.1 which is incompatible.\u001B[0m\n"
"arviz 0.11.4 requires typing-extensions<4,>=3.7.4.3, but you have typing-extensions 4.1.1 which is incompatible.\u001b[0m\n"
]
}
],
"source": [
"#@title Installation\n",
"# @title Installation\n",
"!git clone https://github.com/CompVis/latent-diffusion.git\n",
"!git clone https://github.com/CompVis/taming-transformers\n",
"!pip install -e ./taming-transformers\n",
"!pip install omegaconf>=2.0.0 pytorch-lightning>=1.0.8 torch-fidelity einops\n",
"\n",
"import sys\n",
"\n",
"sys.path.append(\".\")\n",
"sys.path.append('./taming-transformers')\n",
"from taming.models import vqgan "
"sys.path.append(\"./taming-transformers\")\n",
"from taming.models import vqgan"
]
},
{
@@ -104,11 +105,11 @@
{
"cell_type": "code",
"source": [
"#@title Download\n",
"%cd latent-diffusion/ \n",
"# @title Download\n",
"%cd latent-diffusion/\n",
"\n",
"!mkdir -p models/ldm/cin256-v2/\n",
"!wget -O models/ldm/cin256-v2/model.ckpt https://ommer-lab.com/files/latent-diffusion/nitro/cin/model.ckpt "
"!wget -O models/ldm/cin256-v2/model.ckpt https://ommer-lab.com/files/latent-diffusion/nitro/cin/model.ckpt"
],
"metadata": {
"colab": {
@@ -203,7 +204,7 @@
{
"cell_type": "code",
"source": [
"#@title loading utils\n",
"# @title loading utils\n",
"import torch\n",
"from omegaconf import OmegaConf\n",
"\n",
@@ -212,7 +213,7 @@
"\n",
"def load_model_from_config(config, ckpt):\n",
" print(f\"Loading model from {ckpt}\")\n",
" pl_sd = torch.load(ckpt)#, map_location=\"cpu\")\n",
" pl_sd = torch.load(ckpt) # , map_location=\"cpu\")\n",
" sd = pl_sd[\"state_dict\"]\n",
" model = instantiate_from_config(config.model)\n",
" m, u = model.load_state_dict(sd, strict=False)\n",
@@ -222,7 +223,7 @@
"\n",
"\n",
"def get_model():\n",
" config = OmegaConf.load(\"configs/latent-diffusion/cin256-v2.yaml\") \n",
" config = OmegaConf.load(\"configs/latent-diffusion/cin256-v2.yaml\")\n",
" model = load_model_from_config(config, \"models/ldm/cin256-v2/model.ckpt\")\n",
" return model"
],
@@ -276,18 +277,18 @@
{
"cell_type": "code",
"source": [
"import numpy as np \n",
"import numpy as np\n",
"from PIL import Image\n",
"from einops import rearrange\n",
"from torchvision.utils import make_grid\n",
"\n",
"\n",
"classes = [25, 187, 448, 992] # define classes to be sampled here\n",
"classes = [25, 187, 448, 992] # define classes to be sampled here\n",
"n_samples_per_class = 6\n",
"\n",
"ddim_steps = 20\n",
"ddim_eta = 0.0\n",
"scale = 3.0 # for unconditional guidance\n",
"scale = 3.0 # for unconditional guidance\n",
"\n",
"\n",
"all_samples = list()\n",
@@ -295,36 +296,39 @@
"with torch.no_grad():\n",
" with model.ema_scope():\n",
" uc = model.get_learned_conditioning(\n",
" {model.cond_stage_key: torch.tensor(n_samples_per_class*[1000]).to(model.device)}\n",
" )\n",
" \n",
" {model.cond_stage_key: torch.tensor(n_samples_per_class * [1000]).to(model.device)}\n",
" )\n",
"\n",
" for class_label in classes:\n",
" print(f\"rendering {n_samples_per_class} examples of class '{class_label}' in {ddim_steps} steps and using s={scale:.2f}.\")\n",
" xc = torch.tensor(n_samples_per_class*[class_label])\n",
" print(\n",
" f\"rendering {n_samples_per_class} examples of class '{class_label}' in {ddim_steps} steps and using s={scale:.2f}.\"\n",
" )\n",
" xc = torch.tensor(n_samples_per_class * [class_label])\n",
" c = model.get_learned_conditioning({model.cond_stage_key: xc.to(model.device)})\n",
" \n",
" samples_ddim, _ = sampler.sample(S=ddim_steps,\n",
" conditioning=c,\n",
" batch_size=n_samples_per_class,\n",
" shape=[3, 64, 64],\n",
" verbose=False,\n",
" unconditional_guidance_scale=scale,\n",
" unconditional_conditioning=uc, \n",
" eta=ddim_eta)\n",
"\n",
" samples_ddim, _ = sampler.sample(\n",
" S=ddim_steps,\n",
" conditioning=c,\n",
" batch_size=n_samples_per_class,\n",
" shape=[3, 64, 64],\n",
" verbose=False,\n",
" unconditional_guidance_scale=scale,\n",
" unconditional_conditioning=uc,\n",
" eta=ddim_eta,\n",
" )\n",
"\n",
" x_samples_ddim = model.decode_first_stage(samples_ddim)\n",
" x_samples_ddim = torch.clamp((x_samples_ddim+1.0)/2.0, \n",
" min=0.0, max=1.0)\n",
" x_samples_ddim = torch.clamp((x_samples_ddim + 1.0) / 2.0, min=0.0, max=1.0)\n",
" all_samples.append(x_samples_ddim)\n",
"\n",
"\n",
"# display as grid\n",
"grid = torch.stack(all_samples, 0)\n",
"grid = rearrange(grid, 'n b c h w -> (n b) c h w')\n",
"grid = rearrange(grid, \"n b c h w -> (n b) c h w\")\n",
"grid = make_grid(grid, nrow=n_samples_per_class)\n",
"\n",
"# to image\n",
"grid = 255. * rearrange(grid, 'c h w -> h w c').cpu().numpy()\n",
"grid = 255.0 * rearrange(grid, \"c h w -> h w c\").cpu().numpy()\n",
"Image.fromarray(grid.astype(np.uint8))"
],
"metadata": {