Compare commits


76 Commits

Author SHA1 Message Date
Lincoln Stein
c013fe5b5d Merge branch 'main' into release/invokeai-3-0-rc 2023-07-20 12:22:27 -04:00
Lincoln Stein
ddf7ddc2c1 Add sdxl generation preview (#3862)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:


## Description
Add progress preview for sdxl generation nodes
2023-07-20 12:21:57 -04:00
Sergey Borisov
4a0774b260 Use scale from vae 2023-07-20 18:54:51 +03:00
Lincoln Stein
17e401cb8c rebuild frontend 2023-07-20 11:47:04 -04:00
Sergey Borisov
29a590cced Add sdxl generation preview 2023-07-20 18:45:54 +03:00
Lincoln Stein
7deafa838b merge with main 2023-07-20 11:45:54 -04:00
Lincoln Stein
20757d1c02 Add get_log_level and set_log_level operations to the app route (#3858)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated relevant documentation?
- [x] Yes (swagger)
- [ ] No


## Description

This adds new routes for getting and setting the command-line console logging level.
2023-07-20 11:36:47 -04:00
Lincoln Stein
5134de7cfa Merge branch 'main' into lstein/logger-route 2023-07-20 11:29:48 -04:00
Lincoln Stein
b1a6ba552b reinitialize models.yaml if corrupt or missing 2023-07-20 11:26:20 -04:00
psychedelicious
cd21d2f2b6 fix(ui): fix no_board cache not updating
two areas marked TODO were not TODONE!
2023-07-20 23:50:14 +10:00
Mary Hipp
9dc28373d8 use brackets 2023-07-20 23:45:49 +10:00
Mary Hipp
ffe7d5785b if updating intermediate, dont add to gallery list cache 2023-07-20 23:45:49 +10:00
Lincoln Stein
a2e2f0858d bump version number 2023-07-20 09:42:02 -04:00
blessedcoolant
f73c70ca96 feat: ControlNet Resize Mode (#3854)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [X] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [X] Yes (discussed with @hipsterusername yesterday)
- [ ] No, because:

      
## Have you updated relevant documentation?
- [ ] Yes 
- [X] No, not yet (the change to the default ControlNet resizing doesn't require any user documentation)


## Description
This PR adds resize modes (just_resize, crop_resize, fill_resize) to InvokeAI's ControlNet node. The implementation is largely based on lllyasviel's, which includes a high-quality resizer specifically intended to handle common ControlNet preprocessor outputs, such as binary (black/white) images, grayscale images, and binary or grayscale thin lines. Previously, the InvokeAI ControlNet implementation only did a simple resize with independent x/y scaling to match the noise latent.

### "just_resize" mode (the default setting)
With the new implementation, using the default "just_resize" mode,
ControlNet images are still resized with independent x/y scaling to
match the noise latent resolution, but with the high quality resizer. As
a result, images generated in InvokeAI now look much closer to
counterparts generated via sd-webui-controlnet. See example below. All
inference runs are using prompt="old man", same ControlNet canny edge
detection preprocessor and model and control image, identical other
parameters except for control_mode. The top row is previous simple
resize implementation, the bottom row is with new high quality resizer
and "just_resize" mode. Control_mode is: left="balanced", middle="more
prompt", right="more control". The high quality resize images are
identical (at least by eye) to output from sd-webui-controlnet with same
settings.


![just_resize_simple_vs_just_resize_lvmin](https://github.com/invoke-ai/InvokeAI/assets/303100/5fe02121-616a-4531-b2a4-b423cc054b99)

## "crop_resize" and "fill_resize" modes
The other two resize modes are "crop_resize" and "fill_resize". Whereas
"just_resize" ignores any aspect ratio mismatch between the ControlNet
image and the noise latent, these other modes preserve the aspect ratio
of the ControlNet image. The "crop_resize" mode does this by cropping
the image, and the "fill_resize" option does this by expanding the image
(adding fill pixels). See example below. In this case all inference runs
are using prompt="old man", the ControlNet Midas depth detection
preprocessor and depth model, same control image of size 512x512,
control_mode="balanced", and identical other parameters except for
resize_mode and noise latent dimensions. For top row noise latent size
is 768x512, and for bottom row noise latent size is 512x768. Resize_mode
is: left="just_resize", middle="crop_resize", right="fill_resize"

![Screenshot from 2023-07-20 02-09-22](https://github.com/invoke-ai/InvokeAI/assets/303100/7b4df456-2a5e-4ec4-bce1-fafdba52f025)

## Are there any post deployment tasks we need to perform?
To use "just_resize" mode in the linear UI, no post-deployment work is needed. The default has been switched from the old resizer to the new high-quality resizer.

To use the "just_resize", "crop_resize", and "fill_resize" modes in the node UI, no post-deployment work is needed. There is also an additional option, "just_resize_simple", that uses the old resizer; it is mainly left in for testing and for anyone curious to see the difference.

To use "crop_resize" and "fill_resize" in the linear UI, some work will be needed to incorporate a choice of the three modes in the ControlNet UI (it is probably best not to expose "just_resize_simple" in the linear UI, as it just confuses things).
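To make the aspect-ratio behavior of the three modes concrete, here is a minimal, hypothetical sketch of how a control image could be mapped onto the latent resolution. It is not the code from `controlnet_utils.py` (which uses lllyasviel's high-quality resizer); it only illustrates the crop-versus-fill geometry using Pillow.

```python
from PIL import Image, ImageOps

def resize_control_image(image: Image.Image, width: int, height: int, mode: str) -> Image.Image:
    """Illustrative only -- not the InvokeAI implementation."""
    if mode == "just_resize":
        # Independent x/y scaling; the control image's aspect ratio is ignored.
        return image.resize((width, height))
    if mode == "crop_resize":
        # Preserve aspect ratio by scaling to cover the target and cropping the overflow.
        return ImageOps.fit(image, (width, height))
    if mode == "fill_resize":
        # Preserve aspect ratio by scaling to fit inside the target and padding the rest.
        return ImageOps.pad(image, (width, height), color=(0, 0, 0))
    raise ValueError(f"unknown resize mode: {mode}")

# e.g. a 512x512 control image mapped onto a 768x512 target
control = Image.new("RGB", (512, 512), "white")
print(resize_control_image(control, 768, 512, "fill_resize").size)  # (768, 512), with black side bars
```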
2023-07-21 01:31:52 +12:00
blessedcoolant
e2240feae4 fix: Chevron icon styling 2023-07-21 01:21:04 +12:00
blessedcoolant
e06348bfab fix: Expand chevron icon being too small 2023-07-21 01:14:19 +12:00
blessedcoolant
8fb970d436 fix: Use layout gap to control layout instead of margin 2023-07-21 01:07:00 +12:00
blessedcoolant
15256ed3a4 fix: Layout shift on the ControlNet Panel 2023-07-21 01:04:16 +12:00
Lincoln Stein
89a15f78dd collapse all autoimport directories into a single folder 2023-07-20 09:01:49 -04:00
blessedcoolant
8fc20c837b Merge branch 'main' into feat/controlnet-resize-mode 2023-07-21 00:58:28 +12:00
blessedcoolant
8dfe196c4f feat: Add Image Count to Board Name 2023-07-20 22:56:52 +10:00
psychedelicious
9e27fd9b90 feat(ui): color tweak on board 2023-07-20 22:56:52 +10:00
psychedelicious
2771328853 feat(ui): reduce saturation by 8% for 1337 contrast 2023-07-20 22:56:52 +10:00
psychedelicious
a481607d3f feat(ui): boards are only punch-you-in-the-face-purple if selected 2023-07-20 22:56:52 +10:00
psychedelicious
1e3cebbf42 feat(ui): add useBoardTotal hook to get total items in board
actually not using it now, but it's there
2023-07-20 22:56:52 +10:00
blessedcoolant
d523556558 fix: Truncate board name if longer than 20 chars 2023-07-20 22:56:52 +10:00
blessedcoolant
da523fa32f fix: Editable text aligning left instead of in place. 2023-07-20 22:56:52 +10:00
blessedcoolant
ab9b5f3b95 fix: Possible fix to the name plate getting displaced 2023-07-20 22:56:52 +10:00
blessedcoolant
f32bd5dd10 fix: Minor color tweaks to the name plate on boards 2023-07-20 22:56:52 +10:00
psychedelicious
190ba5af59 feat(ui): boards styling 2023-07-20 22:56:52 +10:00
Lincoln Stein
cb29ac63a8 prevent crashes on quick install when hftoken not defined 2023-07-20 08:38:37 -04:00
Lincoln Stein
603989dc0d added get_log_level and set_log_level operations to the app route 2023-07-20 08:33:01 -04:00
blessedcoolant
2872ae2aab fix: Adjust layout of Resize Mode dropdown
Moved it next to ControlMode to make it more compact
2023-07-20 22:53:45 +12:00
blessedcoolant
b7cdda0781 feat: Add ControlNet Resize Mode to Linear UI 2023-07-20 22:48:35 +12:00
blessedcoolant
267940a77e Merge branch 'main' into feat/controlnet-resize-mode 2023-07-20 22:24:11 +12:00
blessedcoolant
8d77c5ca96 feat: Add Sync Models (#3850)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Description

This changes the "sync" route from a GET to POST method, in keeping with
the Representational Existential(?) State Transfer (REST) protocol.
2023-07-20 20:26:10 +12:00
blessedcoolant
0795d8764f Merge branch 'main' into fix/post-model-sync 2023-07-20 20:16:14 +12:00
user1
2db56306e4 Merge branch 'feat/controlnet-resize-mode' of github.com:invoke-ai/InvokeAI into feat/controlnet-resize-mode 2023-07-20 00:45:29 -07:00
user1
70fec9ddab Added pixel_perfect_resolution() method to controlnet_utils.py, but not using yet. To be usable this will likely require modification of ControlNet preprocessors 2023-07-20 00:41:49 -07:00
user1
909f538fb5 Switching over to controlnet_utils prepare_control_image(), with added resize_mode. 2023-07-20 00:41:49 -07:00
user1
bab8b6d240 Removed diffusers_pipeline prepare_control_image() -- replaced with controlnet_utils.prepare_control_image()
Added resize_mode to ControlNetData class.
2023-07-20 00:41:49 -07:00
user1
f2f49bd8d0 Added resize_mode param to ControlNet node 2023-07-20 00:41:49 -07:00
user1
b8e0810ed1 Added revised prepare_control_image() that leverages lvmin high quality resizing 2023-07-20 00:41:49 -07:00
user1
6cb9167a1b Added controlnet_utils.py with code from lvmin for high quality resize, crop+resize, fill+resize 2023-07-20 00:41:49 -07:00
user1
09dfcc4277 Added pixel_perfect_resolution() method to controlnet_utils.py, but not using yet. To be usable this will likely require modification of ControlNet preprocessors 2023-07-20 00:38:20 -07:00
blessedcoolant
82eb1f1075 feat: Add Sync Models to UI 2023-07-20 18:50:43 +12:00
psychedelicious
187cf906fa ui: enhance intermediates clear, enhance board auto-add (#3851)
* feat(ui): enhance clear intermediates feature

- retrieve the # of intermediates using a new query (just uses list images endpoint w/ limit of 0)
- display the count in the UI
- add types for clearIntermediates mutation
- minor styling and verbiage changes

* feat(ui): remove unused settings option for guides

* feat(ui): use solid badge variant

consistent with the rest of the usage of badges

* feat(ui): update board ctx menu, add board auto-add

- add context menu to system boards - the only option is "select board"; did this so that you don't think it's broken when you click it
- add auto-add board. you can right-click a user board to enable it for auto-add, or use the gallery settings popover to select it. the invoke button has a tooltip on a short delay to remind you that you have auto-add enabled
- made useBoardName hook; provide it a board id and it gets you the board name
- removed `boardIdToAdTo` state & logic, updated workflows to auto-switch and auto-add on image generation

* fix(ui): clear controlnet when clearing intermediates

* feat: Make Add Board icon a button

* feat(db, api): clear intermediates now clears all of them

* feat(ui): make reset webui text subtext style

* feat(ui): board name change submits on blur

---------

Co-authored-by: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
2023-07-20 17:44:22 +12:00
Millun Atluri
82554b25fe Updated documentation (#3832)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [x] Documentation Update


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [x] No, because: documentation update that needs review from the team
before going live

      
## Description

I updated the contribution guidelines, adding more structure and a
getting started guide. Also re-organized the tabs to be in the order of
most commonly used.

## Related Tickets & Documents


- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings
run `mkdocs serve` to check it out


## Added/updated tests?

- [ ] Yes
- [x] No

## [optional] Are there any post deployment tasks we need to perform?
2023-07-20 14:27:50 +10:00
Millun Atluri
039091c5d4 Updated frontend docs to be more accurate 2023-07-20 13:16:55 +10:00
Lincoln Stein
d76bf4444c Update invokeai/app/api/routers/models.py
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2023-07-19 22:46:49 -04:00
Lincoln Stein
82496fee14 Merge branch 'main' into main 2023-07-19 22:43:18 -04:00
user1
c2b99e7545 Switching over to controlnet_utils prepare_control_image(), with added resize_mode. 2023-07-19 19:26:49 -07:00
user1
e918168f7a Removed diffusers_pipeline prepare_control_image() -- replaced with controlnet_utils.prepare_control_image()
Added resize_mode to ControlNetData class.
2023-07-19 19:21:17 -07:00
blessedcoolant
6e36c275c9 feat: Add Setting Switch Component (#3847) 2023-07-20 14:17:51 +12:00
user1
6affe42310 Added resize_mode param to ControlNet node 2023-07-19 19:17:24 -07:00
Lincoln Stein
170bbd7da3 change GET to POST method for model synchronization route 2023-07-19 22:16:56 -04:00
blessedcoolant
f6d5e93020 fix: Model List not scrolling through checkpoints (#3849) 2023-07-20 14:16:32 +12:00
Lincoln Stein
f2515d9480 fix v1-finetune.yaml is not in the subpath of "" (#3848)
Co-authored-by: Lincoln Stein <lstein@gmail.com>
2023-07-20 14:13:56 +12:00
Lincoln Stein
4d8f17c69d fix v1-finetune.yaml is not in the subpath of "" 2023-07-19 22:06:55 -04:00
user1
3a987b2e72 Added revised prepare_control_image() that leverages lvmin high quality resizing 2023-07-19 19:01:14 -07:00
user1
4e3f58552c Added controlnet_utils.py with code from lvmin for high quality resize, crop+resize, fill+resize 2023-07-19 18:52:30 -07:00
Lincoln Stein
77d9657980 don't write root into invokeai.yaml 2023-07-19 21:12:52 -04:00
Lincoln Stein
12cae33dcd fix inpaint model detection (#3843)
Co-authored-by: Lincoln Stein <lstein@gmail.com>
2023-07-20 12:57:14 +12:00
Millun Atluri
1e5310793c Updated PR template 2023-07-20 09:46:05 +10:00
Millun Atluri
a0b5930340 Updated Code of Conduct URL 2023-07-20 09:35:09 +10:00
Millun Atluri
53ed252168 Fixed typos in docs 2023-07-20 09:34:16 +10:00
Millun Atluri
a683379dda Updated docs to be more accurate based on Lincoln's feedback 2023-07-20 09:28:21 +10:00
Millun Atluri
899aa1d251 Merge branch 'invoke-ai:main' into main 2023-07-20 09:22:26 +10:00
Lincoln Stein
5f940bf3b3 default precision to "auto" 2023-07-19 18:23:00 -04:00
psychedelicious
509514f11d feat(api): display warning when port is in use 2023-07-19 13:29:31 -04:00
psychedelicious
c557402dbb feat(api): use next available port
Resolves #3515

@ebr @brandonrising can't imagine this would cause issues but just FYI
2023-07-19 13:29:31 -04:00
Millun Atluri
c291b82b94 Added contribution disclaimer 2023-07-19 23:56:38 +10:00
Millun Atluri
6ba48af0a9 Added community node documentation 2023-07-19 22:04:17 +10:00
Millun Atluri
40fffec0b6 Merge branch 'invoke-ai:main' into main 2023-07-19 21:31:24 +10:00
Millun Atluri
ff74370eda • Updated best practices
• Updated index with new contribution guide link
2023-07-19 15:39:29 +10:00
Millun Atluri
446d87516a * Updated contribution guide
* Updated nav to be in new order prioritizing more commonly used tabs
* Added set nav in mkdocs.yaml
2023-07-19 14:34:03 +10:00
89 changed files with 2956 additions and 1121 deletions

View File

@@ -1,42 +1,38 @@
# How to Contribute
## Welcome to Invoke AI
We're thrilled to have you here and we're excited for you to contribute.
Invoke AI originated as a project built by the community, and that vision carries forward today as we aim to build the best pro-grade tools available. We work together to incorporate the latest in AI/ML research, making these tools available in over 20 languages to artists and creatives around the world as part of our fully permissive OSS project designed for individual users to self-host and use.
Here are some guidelines to help you get started:
### Technical Prerequisites
## Contributing to Invoke AI
Anyone who wishes to contribute to InvokeAI, whether features, bug fixes, code cleanup, testing, code reviews, documentation or translation is very much encouraged to do so.
Front-end: You'll need a working knowledge of React and TypeScript.
To join, just raise your hand on the InvokeAI Discord server (#dev-chat) or the GitHub discussion board.
Back-end: Depending on the scope of your contribution, you may need to know SQLite, FastAPI, Python, and Socketio. Also, a good majority of the backend logic involved in processing images is built in a modular way using a concept called "Nodes", which are isolated functions that carry out individual, discrete operations. This design allows for easy contributions of novel pipelines and capabilities.
### Areas of contribution:
### How to Submit Contributions
#### Development
If you'd like to help with development, please see our [development guide](contribution_guides/development.md). If you're unfamiliar with contributing to open source projects, there is a tutorial contained within the development guide.
To start contributing, please follow these steps:
#### Documentation
If you'd like to help with documentation, please see our [documentation guide](contribution_guides/documenation.md).
1. Familiarize yourself with our roadmap and open projects to see where your skills and interests align. These documents can serve as a source of inspiration.
2. Open a Pull Request (PR) with a clear description of the feature you're adding or the problem you're solving. Make sure your contribution aligns with the project's vision.
3. Adhere to general best practices. This includes assuming interoperability with other nodes, keeping the scope of your functions as small as possible, and organizing your code according to our architecture documents.
#### Translation
If you'd like to help with translation, please see our [translation guide](docs/contributing/.contribution_guides/translation.md).
### Types of Contributions We're Looking For
#### Tutorials
Please reach out to @imic or @hipsterusername on [Discord](https://discord.gg/ZmtBAhwWhy) to help create tutorials for InvokeAI.
We welcome all contributions that improve the project. Right now, we're especially looking for:
We hope you enjoy using our software as much as we enjoy creating it, and we hope that some of those of you who are reading this will elect to become part of our contributor community.
1. Quality of life (QOL) enhancements on the front-end.
2. New backend capabilities added through nodes.
3. Incorporating additional optimizations from the broader open-source software community.
### Communication and Decision-making Process
### Contributors
Project maintainers and code owners review PRs to ensure they align with the project's goals. They may provide design or architectural guidance, suggestions on user experience, or provide more significant feedback on the contribution itself. Expect to receive feedback on your submissions, and don't hesitate to ask questions or propose changes.
This project is a combined effort of dedicated people from across the world. [Check out the list of all these amazing people](https://invoke-ai.github.io/InvokeAI/other/CONTRIBUTORS/). We thank them for their time, hard work and effort.
For more robust discussions, or if you're planning to add capabilities not currently listed on our roadmap, please reach out to us on our Discord server. That way, we can ensure your proposed contribution aligns with the project's direction before you start writing code.
### Code of Conduct
### Code of Conduct and Contribution Expectations
We want everyone in our community to have a positive experience. To facilitate this, we've established a code of conduct and a statement of values that we expect all contributors to adhere to. Please take a moment to review these documents—they're essential to maintaining a respectful and inclusive environment.
The InvokeAI community is a welcoming place, and we want your help in maintaining that. Please review our [Code of Conduct](https://github.com/invoke-ai/InvokeAI/blob/main/CODE_OF_CONDUCT.md) to learn more - it's essential to maintaining a respectful and inclusive environment.
By making a contribution to this project, you certify that:
@@ -49,6 +45,12 @@ This disclaimer is not a license and does not grant any rights or permissions. Y
This disclaimer is provided "as is" without warranty of any kind, whether expressed or implied, including but not limited to the warranties of merchantability, fitness for a particular purpose, or non-infringement. In no event shall the authors or copyright holders be liable for any claim, damages, or other liability, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the contribution or the use or other dealings in the contribution.
### Support
For support, please use this repository's [GitHub Issues](https://github.com/invoke-ai/InvokeAI/issues), or join the [Discord](https://discord.gg/ZmtBAhwWhy).
Original portions of the software are Copyright (c) 2023 by respective contributors.
---
Remember, your contributions help make this project great. We're excited to see what you'll bring to our community!

View File

@@ -0,0 +1,91 @@
# Development
## **What do I need to know to help?**
If you are looking to help with a code contribution, InvokeAI uses several different technologies under the hood: Python (Pydantic, FastAPI, diffusers) and Typescript (React, Redux Toolkit, ChakraUI, Mantine, Konva). Familiarity with StableDiffusion and image generation concepts is helpful, but not essential.
For more information, please review our area specific documentation:
* #### [InvokeAI Architecture](../ARCHITECTURE.md)
* #### [Frontend Documentation](development_guides/contributingToFrontend.md)
* #### [Node Documentation](../INVOCATIONS.md)
* #### [Local Development](../LOCAL_DEVELOPMENT.md)
If you don't feel ready to make a code contribution yet, no problem! You can also help out in other ways, such as [documentation](documentation.md) or [translation](translation.md).
There are two paths to making a development contribution:
1. Choosing an open issue to address. Open issues can be found in the [Issues](https://github.com/invoke-ai/InvokeAI/issues?q=is%3Aissue+is%3Aopen) section of the InvokeAI repository. These are tagged by the issue type (bug, enhancement, etc.) along with the “good first issues” tag denoting if they are suitable for first time contributors.
1. Additional items can be found on our roadmap <link to roadmap>. The roadmap is organized in terms of priority, and contains features of varying size and complexity. If there is an in-flight item you'd like to help with, reach out to the contributor assigned to the item to see how you can help.
2. Opening a new issue or feature to add. **Please make sure you have searched through existing issues before creating new ones.**
*Regardless of what you choose, please post in the [#dev-chat](https://discord.com/channels/1020123559063990373/1049495067846524939) channel of the Discord before you start development in order to confirm that the issue or feature is aligned with the current direction of the project. We value our contributors' time and effort and want to ensure that no one's time is being misspent.*
## Best Practices:
* Keep your pull requests small. Smaller pull requests are more likely to be accepted and merged
* Comments! Commenting your code helps reviewers easily understand your contribution
* Use Python's and TypeScript's typing systems, and consider using an editor with [LSP](https://microsoft.github.io/language-server-protocol/) support to streamline development
* Make all communications public. This ensures knowledge is shared with the whole community
## **How do I make a contribution?**
Never made an open source contribution before? Wondering how contributions work in our project? Here's a quick rundown!
Before starting these steps, ensure you have your local environment [configured for development](../LOCAL_DEVELOPMENT.md).
1. Find a [good first issue](https://github.com/invoke-ai/InvokeAI/contribute) that you are interested in addressing or a feature that you would like to add. Then, reach out to our team in the [#dev-chat](https://discord.com/channels/1020123559063990373/1049495067846524939) channel of the Discord to ensure you are setup for success.
2. Fork the [InvokeAI](https://github.com/invoke-ai/InvokeAI) repository to your GitHub profile. This means that you will have a copy of the repository under **your-GitHub-username/InvokeAI**.
3. Clone the repository to your local machine using:
```bash
git clone https://github.com/your-GitHub-username/InvokeAI.git
```
If you're unfamiliar with using Git through the command line, [GitHub Desktop](https://desktop.github.com) is an easy-to-use alternative with a UI. You can do all the same steps listed here, but through the interface.
4. Create a new branch for your fix using:
```bash
git checkout -b branch-name-here
```
5. Make the appropriate changes for the issue you are trying to address or the feature that you want to add.
6. Add the file contents of the changed files to the "snapshot" git uses to manage the state of the project, also known as the index:
```bash
git add insert-paths-of-changed-files-here
```
7. Store the contents of the index with a descriptive message.
```bash
git commit -m "Insert a short message of the changes made here"
```
8. Push the changes to the remote repository using:
```bash
git push origin branch-name-here
```
9. Submit a pull request to the **main** branch of the InvokeAI repository.
10. Title the pull request with a short description of the changes made and the issue or bug number associated with your change. For example, you can title a pull request like so: "Added more log outputting to resolve #1234".
11. In the description of the pull request, explain the changes that you made, any issues you think exist with the pull request you made, and any questions you have for the maintainer. It's OK if your pull request is not perfect (no pull request is); the reviewer will be able to help you fix any problems and improve it!
12. Wait for the pull request to be reviewed by other collaborators.
13. Make changes to the pull request if the reviewer(s) recommend them.
14. Celebrate your success after your pull request is merged!
If you'd like to learn more about contributing to Open Source projects, here is a [Getting Started Guide](https://opensource.com/article/19/7/create-pull-request-github).
## **Where can I go for help?**
If you need help, you can ask questions in the [#dev-chat](https://discord.com/channels/1020123559063990373/1049495067846524939) channel of the Discord.
For frontend related work, **@psychedelicious** is the best person to reach out to.
For backend related work, please reach out to **@blessedcoolant**, **@lstein**, **@StAlKeR7779** or **@psychedelicious**.
## **What does the Code of Conduct mean for me?**
Our [Code of Conduct](CODE_OF_CONDUCT.md) means that you are responsible for treating everyone on the project with respect and courtesy regardless of their identity. If you are the victim of any inappropriate behavior or comments as described in our Code of Conduct, we are here for you and will do our best to ensure that the abuser is reprimanded appropriately, per our code.

View File

@@ -0,0 +1,75 @@
# Contributing to the Frontend
# InvokeAI Web UI
- [InvokeAI Web UI](https://github.com/invoke-ai/InvokeAI/tree/main/invokeai/frontend/web/docs#invokeai-web-ui)
- [Stack](https://github.com/invoke-ai/InvokeAI/tree/main/invokeai/frontend/web/docs#stack)
- [Contributing](https://github.com/invoke-ai/InvokeAI/tree/main/invokeai/frontend/web/docs#contributing)
- [Dev Environment](https://github.com/invoke-ai/InvokeAI/tree/main/invokeai/frontend/web/docs#dev-environment)
- [Production builds](https://github.com/invoke-ai/InvokeAI/tree/main/invokeai/frontend/web/docs#production-builds)
The UI is a fairly straightforward Typescript React app, with the Unified Canvas being more complex.
Code is located in `invokeai/frontend/web/` for review.
## Stack
State management is Redux via [Redux Toolkit](https://github.com/reduxjs/redux-toolkit). We lean heavily on RTK:
- `createAsyncThunk` for HTTP requests
- `createEntityAdapter` for fetching images and models
- `createListenerMiddleware` for workflows
The API client and associated types are generated from the OpenAPI schema. See API_CLIENT.md.
Communication with server is a mix of HTTP and [socket.io](https://github.com/socketio/socket.io-client) (with a simple socket.io redux middleware to help).
[Chakra-UI](https://github.com/chakra-ui/chakra-ui) & [Mantine](https://github.com/mantinedev/mantine) for components and styling.
[Konva](https://github.com/konvajs/react-konva) for the canvas, but we are pushing the limits of what is feasible with it (and HTML canvas in general). We plan to rebuild it with [PixiJS](https://github.com/pixijs/pixijs) to take advantage of WebGL's improved raster handling.
[Vite](https://vitejs.dev/) for bundling.
Localisation is via [i18next](https://github.com/i18next/react-i18next), but translation happens on our [Weblate](https://hosted.weblate.org/engage/invokeai/) project. Only the English source strings should be changed on this repo.
## Contributing
Thanks for your interest in contributing to the InvokeAI Web UI!
We encourage you to ping @psychedelicious and @blessedcoolant on [Discord](https://discord.gg/ZmtBAhwWhy) if you want to contribute, just to touch base and ensure your work doesn't conflict with anything else going on. The project is very active.
### Dev Environment
**Setup**
1. Install [node](https://nodejs.org/en/download/). You can confirm node is installed with:
```bash
node --version
```
2. Install [yarn classic](https://classic.yarnpkg.com/lang/en/) and confirm it is installed by running this:
```bash
npm install --global yarn
yarn --version
```
From `invokeai/frontend/web/` run `yarn install` to get everything set up.
Start everything in dev mode:
1. Ensure your virtual environment is running
2. Start the dev server: `yarn dev`
3. Start the InvokeAI Nodes backend: `python scripts/invokeai-web.py # run from the repo root`
4. Point your browser to the dev server address e.g. [http://localhost:5173/](http://localhost:5173/)
### VSCode Remote Dev
We've noticed an intermittent issue with the VSCode Remote Dev port forwarding. If you use this feature of VSCode, you may intermittently click the Invoke button and then get nothing until the request times out. Suggest disabling the IDE's port forwarding feature and doing it manually via SSH:
`ssh -L 9090:localhost:9090 -L 5173:localhost:5173 user@host`
### Production builds
For a number of technical and logistical reasons, we need to commit UI build artefacts to the repo.
If you submit a PR, there is a good chance we will ask you to include a separate commit with a build of the app.
To build for production, run `yarn build`.

View File

@@ -0,0 +1,13 @@
# Documentation
Documentation is an important part of any open source project. It provides a clear and concise way to communicate how the software works, how to use it, and how to troubleshoot issues. Without proper documentation, it can be difficult for users to understand the purpose and functionality of the project.
## Contributing
All documentation is maintained in the InvokeAI GitHub repository. If you come across documentation that is out of date or incorrect, please submit a pull request with the necessary changes.
When updating or creating documentation, please keep in mind InvokeAI is a tool for everyone, not just those who have familiarity with generative art.
## Help & Questions
Please ping @imic1 or @hipsterusername in the [Discord](https://discord.com/channels/1020123559063990373/1049495067846524939) if you have any questions.

View File

@@ -0,0 +1,19 @@
# Translation
InvokeAI uses [Weblate](https://weblate.org/) for translation. Weblate is a FOSS project providing a scalable translation service. Weblate automates the tedious parts of managing translation of a growing project, and the service is generously provided at no cost to FOSS projects like InvokeAI.
## Contributing
If you'd like to contribute by adding or updating a translation, please visit our [Weblate project](https://hosted.weblate.org/engage/invokeai/). You'll need to sign in with your GitHub account (a number of other accounts are supported, including Google).
Once signed in, select a language and then the Web UI component. From here you can Browse and Translate strings from English to your chosen language. Zen mode offers a simpler translation experience.
Your changes will be attributed to you in the automated PR process; you don't need to do anything else.
## Help & Questions
Please check Weblate's [documentation](https://docs.weblate.org/en/latest/index.html) or ping @Harvestor on [Discord](https://discord.com/channels/1020123559063990373/1049495067846524939) if you have any questions.
## Thanks
Thanks to the InvokeAI community for their efforts to translate the project!

View File

@@ -0,0 +1,11 @@
# Tutorials
Tutorials help new & existing users expand their ability to use InvokeAI to the full extent of our features and services.
Currently, we have a set of tutorials available on our [YouTube channel](https://www.youtube.com/@invokeai), but as InvokeAI continues to evolve with new updates, we want to ensure that we are giving our users the resources they need to succeed.
Tutorials can be in the form of videos or article walkthroughs on a subject of your choice. We recommend focusing tutorials on the key image generation methods, or on a specific component within one of the image generation methods.
## Contributing
Please reach out to @imic or @hipsterusername on [Discord](https://discord.gg/ZmtBAhwWhy) to help create tutorials for InvokeAI.

View File

@@ -222,14 +222,10 @@ get solutions for common installation problems and other issues.
Anyone who wishes to contribute to this project, whether documentation,
features, bug fixes, code cleanup, testing, or code reviews, is very much
encouraged to do so. If you are unfamiliar with how to contribute to GitHub
projects, here is a
[Getting Started Guide](https://opensource.com/article/19/7/create-pull-request-github).
encouraged to do so.
A full set of contribution guidelines, along with templates, are in progress,
but for now the most important thing is to **make your pull request against the
"development" branch**, and not against "main". This will help keep public
breakage to a minimum and will allow you to propose more radical changes.
[Please take a look at our Contribution documentation to learn more about contributing to InvokeAI.](contributing/CONTRIBUTING.md)
## :octicons-person-24: Contributors

View File

@@ -0,0 +1,28 @@
# Community Nodes
These are nodes that have been developed by the community for the community. If you're not sure what a node is, you can learn more about nodes [here](overview.md).
If you'd like to submit a node for the community, please refer to the [node creation overview](overview.md).
To download a node, simply download the `.py` node file from the link and add it to the `invokeai/app/invocations/` folder in your Invoke AI install location. Along with the node, an example node graph should be provided to help you get started with the node.
To use a community node graph, download the `.json` node graph file and load it into Invoke AI via the **Load Nodes** button on the Node Editor.
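If you prefer to script the node install step, here is a tiny sketch; both the URL and the destination path are placeholders, so substitute your own node link and InvokeAI install location.

```python
from pathlib import Path
from urllib.request import urlretrieve

node_url = "https://example.com/fake_node.py"        # placeholder node link
invocations_dir = Path("invokeai/app/invocations")   # inside your InvokeAI install location
urlretrieve(node_url, invocations_dir / "fake_node.py")
```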
## List of Nodes
--------------------------------
### Super Cool Node Template
**Description:** This node allows you to do super cool things with InvokeAI.
**Node Link:** https://github.com/invoke-ai/InvokeAI/fake_node.py
**Example Node Graph:** https://github.com/invoke-ai/InvokeAI/fake_node_graph.json
**Output Examples**
![Invoke AI](https://invoke-ai.github.io/InvokeAI/assets/invoke_ai_banner.png)
## Help
If you run into any issues with a node, please post in the [InvokeAI Discord](https://discord.gg/ZmtBAhwWhy).

docs/nodes/overview.md Normal file
View File

@@ -0,0 +1,41 @@
# Nodes
## What are Nodes?
A Node is simply a single operation that takes in some inputs and gives out some outputs. We can then chain multiple nodes together to create more complex functionality. All InvokeAI features are added through nodes.
This means nodes can be used to easily extend the image generation capabilities of InvokeAI, and allow you to build workflows to suit your needs.
You can read more about nodes and the node editor [here](../features/NODES.md).
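As a purely illustrative sketch (plain Python, not InvokeAI's actual invocation API), the idea is that each node is an isolated, typed operation, and a workflow is just a chain of them:

```python
# Conceptual sketch only -- real InvokeAI nodes are classes built on BaseInvocation,
# but the underlying idea is the same: small typed operations chained together.
from dataclasses import dataclass

@dataclass
class ImageField:
    width: int
    height: int

def resize_node(image: ImageField, scale: float) -> ImageField:
    """One 'node': a single discrete operation with typed inputs and outputs."""
    return ImageField(int(image.width * scale), int(image.height * scale))

def crop_node(image: ImageField, width: int, height: int) -> ImageField:
    """Another node; a real one would operate on pixel data, not just dimensions."""
    return ImageField(min(image.width, width), min(image.height, height))

# A 'workflow' chains nodes so that one node's output feeds the next node's input.
result = crop_node(resize_node(ImageField(512, 512), 1.5), 640, 640)
print(result)  # ImageField(width=640, height=640)
```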
## Downloading Nodes
To download a new node, visit our list of [Community Nodes](communityNodes.md). These are nodes that have been created by the community, for the community.
## Contributing Nodes
To learn about creating a new node, please visit our [Node creation documentation](../contributing/INVOCATIONS.md).
Once you've created a node and confirmed that it behaves as expected locally, follow these steps:
- Make sure the node is contained in a new Python (.py) file
- Submit a pull request with a link to your node in GitHub against the `nodes` branch to add the node to the [Community Nodes](communityNodes.md) list
- Make sure you are following the template below and have provided all relevant details about the node and what it does.
- A maintainer will review the pull request and node. If the node is aligned with the direction of the project, you might be asked for permission to include it in the core project.
### Community Node Template
```markdown
--------------------------------
### Super Cool Node Template
**Description:** This node allows you to do super cool things with InvokeAI.
**Node Link:** https://github.com/invoke-ai/InvokeAI/fake_node.py
**Example Node Graph:** https://github.com/invoke-ai/InvokeAI/fake_node_graph.json
**Output Examples**
![InvokeAI](https://invoke-ai.github.io/InvokeAI/assets/invoke_ai_banner.png)
```

View File

@@ -1,9 +1,22 @@
from enum import Enum
from fastapi import Body
from fastapi.routing import APIRouter
from pydantic import BaseModel, Field
from invokeai.backend.image_util.patchmatch import PatchMatch
from invokeai.version import __version__
from ..dependencies import ApiDependencies
from invokeai.backend.util.logging import logging
class LogLevel(int, Enum):
NotSet = logging.NOTSET
Debug = logging.DEBUG
Info = logging.INFO
Warning = logging.WARNING
Error = logging.ERROR
Critical = logging.CRITICAL
app_router = APIRouter(prefix="/v1/app", tags=["app"])
@@ -34,3 +47,27 @@ async def get_config() -> AppConfig:
if PatchMatch.patchmatch_available():
infill_methods.append('patchmatch')
return AppConfig(infill_methods=infill_methods)
@app_router.get(
"/logging",
operation_id="get_log_level",
responses={200: {"description" : "The operation was successful"}},
response_model = LogLevel,
)
async def get_log_level(
) -> LogLevel:
"""Returns the log level"""
return LogLevel(ApiDependencies.invoker.services.logger.level)
@app_router.post(
"/logging",
operation_id="set_log_level",
responses={200: {"description" : "The operation was successful"}},
response_model = LogLevel,
)
async def set_log_level(
level: LogLevel = Body(description="New log verbosity level"),
) -> LogLevel:
"""Sets the log verbosity level"""
ApiDependencies.invoker.services.logger.setLevel(level)
return LogLevel(ApiDependencies.invoker.services.logger.level)
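A quick way to exercise the new routes from Python, sketched with the requests library. The host/port and the /api mount prefix are assumptions based on a default local install; the POST body is the bare integer level, matching the Python logging constants behind the LogLevel enum.

```python
import logging
import requests

BASE = "http://localhost:9090/api/v1/app"  # assumed default local address and mount prefix

# Read the current console log level (returned as an integer LogLevel value).
current = requests.get(f"{BASE}/logging").json()
print(logging.getLevelName(current))  # e.g. "INFO"

# Raise verbosity to DEBUG; the route echoes back the level now in effect.
updated = requests.post(f"{BASE}/logging", json=logging.DEBUG).json()
print(logging.getLevelName(updated))  # "DEBUG"
```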

View File

@@ -1,8 +1,7 @@
import io
from typing import Optional
from fastapi import (Body, HTTPException, Path, Query, Request, Response,
UploadFile)
from fastapi import Body, HTTPException, Path, Query, Request, Response, UploadFile
from fastapi.responses import FileResponse
from fastapi.routing import APIRouter
from PIL import Image
@@ -11,9 +10,11 @@ from invokeai.app.invocations.metadata import ImageMetadata
from invokeai.app.models.image import ImageCategory, ResourceOrigin
from invokeai.app.services.image_record_storage import OffsetPaginatedResults
from invokeai.app.services.item_storage import PaginatedResults
from invokeai.app.services.models.image_record import (ImageDTO,
ImageRecordChanges,
ImageUrlsDTO)
from invokeai.app.services.models.image_record import (
ImageDTO,
ImageRecordChanges,
ImageUrlsDTO,
)
from ..dependencies import ApiDependencies
@@ -84,15 +85,16 @@ async def delete_image(
# TODO: Does this need any exception handling at all?
pass
@images_router.post("/clear-intermediates", operation_id="clear_intermediates")
async def clear_intermediates() -> int:
"""Clears first 100 intermediates"""
"""Clears all intermediates"""
try:
count_deleted = ApiDependencies.invoker.services.images.delete_many(is_intermediate=True)
count_deleted = ApiDependencies.invoker.services.images.delete_intermediates()
return count_deleted
except Exception as e:
# TODO: Does this need any exception handling at all?
raise HTTPException(status_code=500, detail="Failed to clear intermediates")
pass
@@ -130,6 +132,7 @@ async def get_image_dto(
except Exception as e:
raise HTTPException(status_code=404)
@images_router.get(
"/{image_name}/metadata",
operation_id="get_image_metadata",
@@ -254,7 +257,8 @@ async def list_image_dtos(
default=None, description="Whether to list intermediate images."
),
board_id: Optional[str] = Query(
default=None, description="The board id to filter by. Use 'none' to find images without a board."
default=None,
description="The board id to filter by. Use 'none' to find images without a board.",
),
offset: int = Query(default=0, description="The page offset"),
limit: int = Query(default=10, description="The number of images per page"),

View File

@@ -315,20 +315,21 @@ async def list_ckpt_configs(
return ApiDependencies.invoker.services.model_manager.list_checkpoint_configs()
@models_router.get(
@models_router.post(
"/sync",
operation_id="sync_to_config",
responses={
201: { "description": "synchronization successful" },
},
status_code = 201,
response_model = None
response_model = bool
)
async def sync_to_config(
)->None:
)->bool:
"""Call after making changes to models.yaml, autoimport directories or models directory to synchronize
in-memory data structures with disk data structures."""
return ApiDependencies.invoker.services.model_manager.sync_to_config()
ApiDependencies.invoker.services.model_manager.sync_to_config()
return True
@models_router.put(
"/merge/{base_model}",
@@ -373,50 +374,3 @@ async def merge_models(
except ValueError as e:
raise HTTPException(status_code=400, detail=str(e))
return response
# The rename operation is now supported by update_model and no longer needs to be
# a standalone route.
# @models_router.post(
# "/rename/{base_model}/{model_type}/{model_name}",
# operation_id="rename_model",
# responses= {
# 201: {"description" : "The model was renamed successfully"},
# 404: {"description" : "The model could not be found"},
# 409: {"description" : "There is already a model corresponding to the new name"},
# },
# status_code=201,
# response_model=ImportModelResponse
# )
# async def rename_model(
# base_model: BaseModelType = Path(description="Base model"),
# model_type: ModelType = Path(description="The type of model"),
# model_name: str = Path(description="current model name"),
# new_name: Optional[str] = Query(description="new model name", default=None),
# new_base: Optional[BaseModelType] = Query(description="new model base", default=None),
# ) -> ImportModelResponse:
# """ Rename a model"""
# logger = ApiDependencies.invoker.services.logger
# try:
# result = ApiDependencies.invoker.services.model_manager.rename_model(
# base_model = base_model,
# model_type = model_type,
# model_name = model_name,
# new_name = new_name,
# new_base = new_base,
# )
# logger.debug(result)
# logger.info(f'Successfully renamed {model_name}=>{new_name}')
# model_raw = ApiDependencies.invoker.services.model_manager.list_model(
# model_name=new_name or model_name,
# base_model=new_base or base_model,
# model_type=model_type
# )
# return parse_obj_as(ImportModelResponse, model_raw)
# except ModelNotFoundException as e:
# logger.error(str(e))
# raise HTTPException(status_code=404, detail=str(e))
# except ValueError as e:
# logger.error(str(e))
# raise HTTPException(status_code=409, detail=str(e))
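For completeness, the re-homed sync endpoint above can be driven the same way; the full path below is an assumption (the router defines the "/sync" suffix, and the prefix depends on how the app mounts it):

```python
import requests

response = requests.post("http://localhost:9090/api/v1/models/sync")
print(response.status_code, response.json())  # expect 201 and True on success
```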

View File

@@ -4,6 +4,7 @@ import sys
from inspect import signature
import uvicorn
import socket
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
@@ -193,9 +194,22 @@ app.mount("/",
)
def invoke_api():
def find_port(port: int):
"""Find a port not in use starting at given port"""
# Taken from https://waylonwalker.com/python-find-available-port/, thanks Waylon!
# https://github.com/WaylonWalker
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
if s.connect_ex(("localhost", port)) == 0:
return find_port(port=port + 1)
else:
return port
port = find_port(app_config.port)
if port != app_config.port:
logger.warn(f"Port {app_config.port} in use, using port {port}")
# Start our own event loop for eventing usage
loop = asyncio.new_event_loop()
config = uvicorn.Config(app=app, host=app_config.host, port=app_config.port, loop=loop)
config = uvicorn.Config(app=app, host=app_config.host, port=port, loop=loop)
# Use access_log to turn off logging
server = uvicorn.Server(config)
loop.run_until_complete(server.serve())
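The port probe can be exercised in isolation. This is a standalone sketch of the same connect_ex idea, not the application code, and it assumes ports 9090 and 9091 are otherwise free on your machine:

```python
import socket

def find_port(port: int) -> int:
    """Return the first port at or above `port` that nothing is listening on."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        # connect_ex returns 0 when something already answers on the port.
        if s.connect_ex(("localhost", port)) == 0:
            return find_port(port + 1)
        return port

# Occupy 9090 locally, then watch the probe skip to the next free port.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as busy:
    busy.bind(("localhost", 9090))
    busy.listen(1)
    print(find_port(9090))  # 9091
```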

View File

@@ -85,8 +85,8 @@ CONTROLNET_DEFAULT_MODELS = [
CONTROLNET_NAME_VALUES = Literal[tuple(CONTROLNET_DEFAULT_MODELS)]
CONTROLNET_MODE_VALUES = Literal[tuple(
["balanced", "more_prompt", "more_control", "unbalanced"])]
# crop and fill options not ready yet
# CONTROLNET_RESIZE_VALUES = Literal[tuple(["just_resize", "crop_resize", "fill_resize"])]
CONTROLNET_RESIZE_VALUES = Literal[tuple(
["just_resize", "crop_resize", "fill_resize", "just_resize_simple",])]
class ControlNetModelField(BaseModel):
@@ -111,7 +111,8 @@ class ControlField(BaseModel):
description="When the ControlNet is last applied (% of total steps)")
control_mode: CONTROLNET_MODE_VALUES = Field(
default="balanced", description="The control mode to use")
# resize_mode: CONTROLNET_RESIZE_VALUES = Field(default="just_resize", description="The resize mode to use")
resize_mode: CONTROLNET_RESIZE_VALUES = Field(
default="just_resize", description="The resize mode to use")
@validator("control_weight")
def validate_control_weight(cls, v):
@@ -161,6 +162,7 @@ class ControlNetInvocation(BaseInvocation):
end_step_percent: float = Field(default=1, ge=0, le=1,
description="When the ControlNet is last applied (% of total steps)")
control_mode: CONTROLNET_MODE_VALUES = Field(default="balanced", description="The control mode used")
resize_mode: CONTROLNET_RESIZE_VALUES = Field(default="just_resize", description="The resize mode used")
# fmt: on
class Config(InvocationConfig):
@@ -187,6 +189,7 @@ class ControlNetInvocation(BaseInvocation):
begin_step_percent=self.begin_step_percent,
end_step_percent=self.end_step_percent,
control_mode=self.control_mode,
resize_mode=self.resize_mode,
),
)

View File

@@ -30,6 +30,7 @@ from .compel import ConditioningField
from .controlnet_image_processors import ControlField
from .image import ImageOutput
from .model import ModelInfo, UNetField, VaeField
from invokeai.app.util.controlnet_utils import prepare_control_image
from diffusers.models.attention_processor import (
AttnProcessor2_0,
@@ -288,7 +289,7 @@ class TextToLatentsInvocation(BaseInvocation):
# and add in batch_size, num_images_per_prompt?
# and do real check for classifier_free_guidance?
# prepare_control_image should return torch.Tensor of shape(batch_size, 3, height, width)
control_image = model.prepare_control_image(
control_image = prepare_control_image(
image=input_image,
do_classifier_free_guidance=do_classifier_free_guidance,
width=control_width_resize,
@@ -298,13 +299,18 @@ class TextToLatentsInvocation(BaseInvocation):
device=control_model.device,
dtype=control_model.dtype,
control_mode=control_info.control_mode,
resize_mode=control_info.resize_mode,
)
control_item = ControlNetData(
model=control_model, image_tensor=control_image,
model=control_model,
image_tensor=control_image,
weight=control_info.control_weight,
begin_step_percent=control_info.begin_step_percent,
end_step_percent=control_info.end_step_percent,
control_mode=control_info.control_mode,
# any resizing needed should currently be happening in prepare_control_image(),
# but adding resize_mode to ControlNetData in case needed in the future
resize_mode=control_info.resize_mode,
)
control_data.append(control_item)
# MultiControlNetModel has been refactored out, just need list[ControlNetData]
@@ -601,7 +607,7 @@ class ResizeLatentsInvocation(BaseInvocation):
antialias: bool = Field(
default=False,
description="Whether or not to antialias (applied in bilinear and bicubic modes only)")
class Config(InvocationConfig):
schema_extra = {
"ui": {
@@ -647,7 +653,7 @@ class ScaleLatentsInvocation(BaseInvocation):
antialias: bool = Field(
default=False,
description="Whether or not to antialias (applied in bilinear and bicubic modes only)")
class Config(InvocationConfig):
schema_extra = {
"ui": {
@@ -758,7 +764,7 @@ class ImageToLatentsInvocation(BaseInvocation):
dtype=vae.dtype
) # FIXME: uses torch.randn. make reproducible!
latents = 0.18215 * latents
latents = vae.config.scaling_factor * latents
latents = latents.to(dtype=orig_dtype)
name = f"{context.graph_execution_state_id}__{self.id}"

View File

@@ -6,6 +6,7 @@ from typing import List, Literal, Optional, Union
from pydantic import Field, validator
from ...backend.model_management import ModelType, SubModelType
from invokeai.app.util.step_callback import stable_diffusion_xl_step_callback
from .baseinvocation import (BaseInvocation, BaseInvocationOutput,
InvocationConfig, InvocationContext)
@@ -243,10 +244,31 @@ class SDXLTextToLatentsInvocation(BaseInvocation):
},
}
def dispatch_progress(
self,
context: InvocationContext,
source_node_id: str,
sample,
step,
total_steps,
) -> None:
stable_diffusion_xl_step_callback(
context=context,
node=self.dict(),
source_node_id=source_node_id,
sample=sample,
step=step,
total_steps=total_steps,
)
# based on
# https://github.com/huggingface/diffusers/blob/3ebbaf7c96801271f9e6c21400033b6aa5ffcf29/src/diffusers/pipelines/stable_diffusion/pipeline_onnx_stable_diffusion.py#L375
@torch.no_grad()
def invoke(self, context: InvocationContext) -> LatentsOutput:
graph_execution_state = context.services.graph_execution_manager.get(
context.graph_execution_state_id
)
source_node_id = graph_execution_state.prepared_source_mapping[self.id]
latents = context.services.latents.get(self.noise.latents_name)
positive_cond_data = context.services.latents.get(self.positive_conditioning.conditioning_name)
@@ -341,6 +363,7 @@ class SDXLTextToLatentsInvocation(BaseInvocation):
# call the callback, if provided
if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % scheduler.order == 0):
progress_bar.update()
self.dispatch_progress(context, source_node_id, latents, i, num_inference_steps)
#if callback is not None and i % callback_steps == 0:
# callback(i, t, latents)
else:
@@ -409,6 +432,7 @@ class SDXLTextToLatentsInvocation(BaseInvocation):
# call the callback, if provided
if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % scheduler.order == 0):
progress_bar.update()
self.dispatch_progress(context, source_node_id, latents, i, num_inference_steps)
#if callback is not None and i % callback_steps == 0:
# callback(i, t, latents)
@@ -473,10 +497,31 @@ class SDXLLatentsToLatentsInvocation(BaseInvocation):
},
}
def dispatch_progress(
self,
context: InvocationContext,
source_node_id: str,
sample,
step,
total_steps,
) -> None:
stable_diffusion_xl_step_callback(
context=context,
node=self.dict(),
source_node_id=source_node_id,
sample=sample,
step=step,
total_steps=total_steps,
)
# based on
# https://github.com/huggingface/diffusers/blob/3ebbaf7c96801271f9e6c21400033b6aa5ffcf29/src/diffusers/pipelines/stable_diffusion/pipeline_onnx_stable_diffusion.py#L375
@torch.no_grad()
def invoke(self, context: InvocationContext) -> LatentsOutput:
graph_execution_state = context.services.graph_execution_manager.get(
context.graph_execution_state_id
)
source_node_id = graph_execution_state.prepared_source_mapping[self.id]
latents = context.services.latents.get(self.latents.latents_name)
positive_cond_data = context.services.latents.get(self.positive_conditioning.conditioning_name)
@@ -579,6 +624,7 @@ class SDXLLatentsToLatentsInvocation(BaseInvocation):
# call the callback, if provided
if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % scheduler.order == 0):
progress_bar.update()
self.dispatch_progress(context, source_node_id, latents, i, num_inference_steps)
#if callback is not None and i % callback_steps == 0:
# callback(i, t, latents)
else:
@@ -647,6 +693,7 @@ class SDXLLatentsToLatentsInvocation(BaseInvocation):
# call the callback, if provided
if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % scheduler.order == 0):
progress_bar.update()
self.dispatch_progress(context, source_node_id, latents, i, num_inference_steps)
#if callback is not None and i % callback_steps == 0:
# callback(i, t, latents)
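The condition guarding progress_bar.update() and dispatch_progress() reads as "report on the final step, and otherwise only after warmup and once per scheduler-order group of steps". A standalone sketch of just that predicate:

```python
def should_report(i: int, total_steps: int, num_warmup_steps: int, scheduler_order: int) -> bool:
    """Mirror of the condition used above to gate the progress callback."""
    is_last_step = i == total_steps - 1
    past_warmup = (i + 1) > num_warmup_steps
    on_order_boundary = (i + 1) % scheduler_order == 0
    return is_last_step or (past_warmup and on_order_boundary)

# With a 2nd-order scheduler and one warmup step, progress fires every other step.
print([i for i in range(10) if should_report(i, 10, 1, 2)])  # [1, 3, 5, 7, 9]
```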

View File

@@ -277,7 +277,7 @@ class InvokeAISettings(BaseSettings):
@classmethod
def _excluded_from_yaml(self)->List[str]:
# combination of deprecated parameters and internal ones that shouldn't be exposed as invokeai.yaml options
return ['type','initconf', 'gpu_mem_reserved', 'max_loaded_models', 'version', 'from_file', 'model', 'restore']
return ['type','initconf', 'gpu_mem_reserved', 'max_loaded_models', 'version', 'from_file', 'model', 'restore', 'root']
class Config:
env_file_encoding = 'utf-8'
@@ -374,16 +374,16 @@ setting environment variables INVOKEAI_<setting>.
max_cache_size : float = Field(default=6.0, gt=0, description="Maximum memory amount used by model cache for rapid switching", category='Memory/Performance')
max_vram_cache_size : float = Field(default=2.75, ge=0, description="Amount of VRAM reserved for model storage", category='Memory/Performance')
gpu_mem_reserved : float = Field(default=2.75, ge=0, description="DEPRECATED: use max_vram_cache_size. Amount of VRAM reserved for model storage", category='DEPRECATED')
precision : Literal[tuple(['auto','float16','float32','autocast'])] = Field(default='float16',description='Floating point precision', category='Memory/Performance')
precision : Literal[tuple(['auto','float16','float32','autocast'])] = Field(default='auto',description='Floating point precision', category='Memory/Performance')
sequential_guidance : bool = Field(default=False, description="Whether to calculate guidance in serial instead of in parallel, lowering memory requirements", category='Memory/Performance')
xformers_enabled : bool = Field(default=True, description="Enable/disable memory-efficient attention", category='Memory/Performance')
tiled_decode : bool = Field(default=False, description="Whether to enable tiled VAE decode (reduces memory consumption with some performance penalty)", category='Memory/Performance')
root : Path = Field(default=_find_root(), description='InvokeAI runtime root directory', category='Paths')
autoimport_dir : Path = Field(default='autoimport/main', description='Path to a directory of models files to be imported on startup.', category='Paths')
lora_dir : Path = Field(default='autoimport/lora', description='Path to a directory of LoRA/LyCORIS models to be imported on startup.', category='Paths')
embedding_dir : Path = Field(default='autoimport/embedding', description='Path to a directory of Textual Inversion embeddings to be imported on startup.', category='Paths')
controlnet_dir : Path = Field(default='autoimport/controlnet', description='Path to a directory of ControlNet embeddings to be imported on startup.', category='Paths')
autoimport_dir : Path = Field(default='autoimport', description='Path to a directory of models files to be imported on startup.', category='Paths')
lora_dir : Path = Field(default=None, description='Path to a directory of LoRA/LyCORIS models to be imported on startup.', category='Paths')
embedding_dir : Path = Field(default=None, description='Path to a directory of Textual Inversion embeddings to be imported on startup.', category='Paths')
controlnet_dir : Path = Field(default=None, description='Path to a directory of ControlNet embeddings to be imported on startup.', category='Paths')
conf_path : Path = Field(default='configs/models.yaml', description='Path to models definition file', category='Paths')
models_dir : Path = Field(default='models', description='Path to the models directory', category='Paths')
legacy_conf_dir : Path = Field(default='configs/stable-diffusion', description='Path to directory of legacy checkpoint config files', category='Paths')
@@ -446,7 +446,7 @@ setting environment variables INVOKEAI_<setting>.
Path to the runtime root directory
'''
if self.root:
return Path(self.root).expanduser()
return Path(self.root).expanduser().absolute()
else:
return self.find_root()

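For orientation (not part of the diff): these are pydantic BaseSettings fields, and the hunk header notes they can be overridden with INVOKEAI_<setting> environment variables. A minimal stand-alone sketch of that pattern, using placeholder fields rather than the real InvokeAIAppConfig class:

import os
from pydantic import BaseSettings, Field  # pydantic v1-style BaseSettings, matching this codebase

class SketchSettings(BaseSettings):
    # placeholder fields mirroring the shape of the options above
    precision: str = Field(default="auto", description="Floating point precision")
    max_cache_size: float = Field(default=6.0, gt=0, description="Model cache size in GB")

    class Config:
        env_prefix = "INVOKEAI_"  # so INVOKEAI_PRECISION, INVOKEAI_MAX_CACHE_SIZE override the defaults

os.environ["INVOKEAI_PRECISION"] = "float16"
print(SketchSettings().precision)  # -> "float16" (falls back to "auto" when the variable is unset)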
View File

@@ -122,6 +122,11 @@ class ImageRecordStorageBase(ABC):
"""Deletes many image records."""
pass
@abstractmethod
def delete_intermediates(self) -> list[str]:
"""Deletes all intermediate image records, returning a list of deleted image names."""
pass
@abstractmethod
def save(
self,
@@ -461,6 +466,32 @@ class SqliteImageRecordStorage(ImageRecordStorageBase):
finally:
self._lock.release()
def delete_intermediates(self) -> list[str]:
try:
self._lock.acquire()
self._cursor.execute(
"""--sql
SELECT image_name FROM images
WHERE is_intermediate = TRUE;
"""
)
result = cast(list[sqlite3.Row], self._cursor.fetchall())
image_names = list(map(lambda r: r[0], result))
self._cursor.execute(
"""--sql
DELETE FROM images
WHERE is_intermediate = TRUE;
"""
)
self._conn.commit()
return image_names
except sqlite3.Error as e:
self._conn.rollback()
raise ImageRecordDeleteException from e
finally:
self._lock.release()
def save(
self,
image_name: str,

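A minimal stand-alone sketch of the delete pattern above (hypothetical table and data, not the real schema): collect the intermediate image names, delete those rows in the same transaction, and roll back on failure.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE images (image_name TEXT, is_intermediate BOOLEAN)")
conn.executemany(
    "INSERT INTO images VALUES (?, ?)",
    [("a.png", True), ("b.png", False), ("c.png", True)],
)
try:
    cur = conn.cursor()
    cur.execute("SELECT image_name FROM images WHERE is_intermediate = TRUE")
    names = [row[0] for row in cur.fetchall()]  # -> ['a.png', 'c.png']
    cur.execute("DELETE FROM images WHERE is_intermediate = TRUE")
    conn.commit()
except sqlite3.Error:
    conn.rollback()
    raise
print(names)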
View File

@@ -6,21 +6,33 @@ from typing import TYPE_CHECKING, Optional
from PIL.Image import Image as PILImageType
from invokeai.app.invocations.metadata import ImageMetadata
from invokeai.app.models.image import (ImageCategory,
InvalidImageCategoryException,
InvalidOriginException, ResourceOrigin)
from invokeai.app.services.board_image_record_storage import \
BoardImageRecordStorageBase
from invokeai.app.models.image import (
ImageCategory,
InvalidImageCategoryException,
InvalidOriginException,
ResourceOrigin,
)
from invokeai.app.services.board_image_record_storage import BoardImageRecordStorageBase
from invokeai.app.services.image_file_storage import (
ImageFileDeleteException, ImageFileNotFoundException,
ImageFileSaveException, ImageFileStorageBase)
ImageFileDeleteException,
ImageFileNotFoundException,
ImageFileSaveException,
ImageFileStorageBase,
)
from invokeai.app.services.image_record_storage import (
ImageRecordDeleteException, ImageRecordNotFoundException,
ImageRecordSaveException, ImageRecordStorageBase, OffsetPaginatedResults)
ImageRecordDeleteException,
ImageRecordNotFoundException,
ImageRecordSaveException,
ImageRecordStorageBase,
OffsetPaginatedResults,
)
from invokeai.app.services.item_storage import ItemStorageABC
from invokeai.app.services.models.image_record import (ImageDTO, ImageRecord,
ImageRecordChanges,
image_record_to_dto)
from invokeai.app.services.models.image_record import (
ImageDTO,
ImageRecord,
ImageRecordChanges,
image_record_to_dto,
)
from invokeai.app.services.resource_name import NameServiceBase
from invokeai.app.services.urls import UrlServiceBase
from invokeai.app.util.metadata import get_metadata_graph_from_raw_session
@@ -109,12 +121,10 @@ class ImageServiceABC(ABC):
pass
@abstractmethod
def delete_many(self, is_intermediate: bool) -> int:
"""Deletes many images."""
def delete_intermediates(self) -> int:
"""Deletes all intermediate images."""
pass
@abstractmethod
def delete_images_on_board(self, board_id: str):
"""Deletes all images on a board."""
@@ -401,21 +411,13 @@ class ImageService(ImageServiceABC):
except Exception as e:
self._services.logger.error("Problem deleting image records and files")
raise e
def delete_many(self, is_intermediate: bool):
def delete_intermediates(self) -> int:
try:
# only clears 100 at a time
images = self._services.image_records.get_many(offset=0, limit=100, is_intermediate=is_intermediate,)
count = len(images.items)
image_name_list = list(
map(
lambda r: r.image_name,
images.items,
)
)
for image_name in image_name_list:
image_names = self._services.image_records.delete_intermediates()
count = len(image_names)
for image_name in image_names:
self._services.image_files.delete(image_name)
self._services.image_records.delete_many(image_name_list)
return count
except ImageRecordDeleteException:
self._services.logger.error(f"Failed to delete image records")


View File

@@ -0,0 +1,342 @@
import torch
import numpy as np
import cv2
from PIL import Image
from diffusers.utils import PIL_INTERPOLATION
from einops import rearrange
from controlnet_aux.util import HWC3, resize_image
###################################################################
# Copy of scripts/lvminthin.py from Mikubill/sd-webui-controlnet
###################################################################
# High Quality Edge Thinning using Pure Python
# Written by Lvmin Zhang
# 2023 April
# Stanford University
# If you use this, please Cite "High Quality Edge Thinning using Pure Python", Lvmin Zhang, In Mikubill/sd-webui-controlnet.
lvmin_kernels_raw = [
np.array([
[-1, -1, -1],
[0, 1, 0],
[1, 1, 1]
], dtype=np.int32),
np.array([
[0, -1, -1],
[1, 1, -1],
[0, 1, 0]
], dtype=np.int32)
]
lvmin_kernels = []
lvmin_kernels += [np.rot90(x, k=0, axes=(0, 1)) for x in lvmin_kernels_raw]
lvmin_kernels += [np.rot90(x, k=1, axes=(0, 1)) for x in lvmin_kernels_raw]
lvmin_kernels += [np.rot90(x, k=2, axes=(0, 1)) for x in lvmin_kernels_raw]
lvmin_kernels += [np.rot90(x, k=3, axes=(0, 1)) for x in lvmin_kernels_raw]
lvmin_prunings_raw = [
np.array([
[-1, -1, -1],
[-1, 1, -1],
[0, 0, -1]
], dtype=np.int32),
np.array([
[-1, -1, -1],
[-1, 1, -1],
[-1, 0, 0]
], dtype=np.int32)
]
lvmin_prunings = []
lvmin_prunings += [np.rot90(x, k=0, axes=(0, 1)) for x in lvmin_prunings_raw]
lvmin_prunings += [np.rot90(x, k=1, axes=(0, 1)) for x in lvmin_prunings_raw]
lvmin_prunings += [np.rot90(x, k=2, axes=(0, 1)) for x in lvmin_prunings_raw]
lvmin_prunings += [np.rot90(x, k=3, axes=(0, 1)) for x in lvmin_prunings_raw]
def remove_pattern(x, kernel):
objects = cv2.morphologyEx(x, cv2.MORPH_HITMISS, kernel)
objects = np.where(objects > 127)
x[objects] = 0
return x, objects[0].shape[0] > 0
def thin_one_time(x, kernels):
y = x
is_done = True
for k in kernels:
y, has_update = remove_pattern(y, k)
if has_update:
is_done = False
return y, is_done
def lvmin_thin(x, prunings=True):
y = x
for i in range(32):
y, is_done = thin_one_time(y, lvmin_kernels)
if is_done:
break
if prunings:
y, _ = thin_one_time(y, lvmin_prunings)
return y
def nake_nms(x):
f1 = np.array([[0, 0, 0], [1, 1, 1], [0, 0, 0]], dtype=np.uint8)
f2 = np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]], dtype=np.uint8)
f3 = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=np.uint8)
f4 = np.array([[0, 0, 1], [0, 1, 0], [1, 0, 0]], dtype=np.uint8)
y = np.zeros_like(x)
for f in [f1, f2, f3, f4]:
np.putmask(y, cv2.dilate(x, kernel=f) == x, x)
return y
################################################################################
# copied from Mikubill/sd-webui-controlnet external_code.py and modified for InvokeAI
################################################################################
# FIXME: not using yet, if used in the future will most likely require modification of preprocessors
def pixel_perfect_resolution(
image: np.ndarray,
target_H: int,
target_W: int,
resize_mode: str,
) -> int:
"""
Calculate the estimated resolution for resizing an image while preserving aspect ratio.
The function first calculates scaling factors for height and width of the image based on the target
height and width. Then, based on the chosen resize mode, it either takes the smaller or the larger
scaling factor to estimate the new resolution.
If the resize mode is OUTER_FIT, the function uses the smaller scaling factor, ensuring the whole image
fits within the target dimensions, potentially leaving some empty space.
If the resize mode is not OUTER_FIT, the function uses the larger scaling factor, ensuring the target
dimensions are fully filled, potentially cropping the image.
After calculating the estimated resolution, the function prints some debugging information.
Args:
image (np.ndarray): A 3D numpy array representing an image. The dimensions represent [height, width, channels].
target_H (int): The target height for the image.
target_W (int): The target width for the image.
resize_mode (ResizeMode): The mode for resizing.
Returns:
int: The estimated resolution after resizing.
"""
raw_H, raw_W, _ = image.shape
k0 = float(target_H) / float(raw_H)
k1 = float(target_W) / float(raw_W)
if resize_mode == "fill_resize":
estimation = min(k0, k1) * float(min(raw_H, raw_W))
else: # "crop_resize" or "just_resize" (or possibly "just_resize_simple"?)
estimation = max(k0, k1) * float(min(raw_H, raw_W))
# print(f"Pixel Perfect Computation:")
# print(f"resize_mode = {resize_mode}")
# print(f"raw_H = {raw_H}")
# print(f"raw_W = {raw_W}")
# print(f"target_H = {target_H}")
# print(f"target_W = {target_W}")
# print(f"estimation = {estimation}")
return int(np.round(estimation))
###########################################################################
# Copied from detectmap_proc method in scripts/detectmap_proc.py in Mikubill/sd-webui-controlnet
# modified for InvokeAI
###########################################################################
# def detectmap_proc(detected_map, module, resize_mode, h, w):
def np_img_resize(
np_img: np.ndarray,
resize_mode: str,
h: int,
w: int,
device: torch.device = torch.device('cpu')
):
# if 'inpaint' in module:
# np_img = np_img.astype(np.float32)
# else:
# np_img = HWC3(np_img)
np_img = HWC3(np_img)
def safe_numpy(x):
# A very safe method to make sure that Apple/Mac works
y = x
# below is very boring but do not change these. If you change these Apple or Mac may fail.
y = y.copy()
y = np.ascontiguousarray(y)
y = y.copy()
return y
def get_pytorch_control(x):
# A very safe method to make sure that Apple/Mac works
y = x
# below is very boring but do not change these. If you change these Apple or Mac may fail.
y = torch.from_numpy(y)
y = y.float() / 255.0
y = rearrange(y, 'h w c -> 1 c h w')
y = y.clone()
# y = y.to(devices.get_device_for("controlnet"))
y = y.to(device)
y = y.clone()
return y
def high_quality_resize(x: np.ndarray,
size):
# Written by lvmin
# Super high-quality control map up-scaling, considering binary, seg, and one-pixel edges
inpaint_mask = None
if x.ndim == 3 and x.shape[2] == 4:
inpaint_mask = x[:, :, 3]
x = x[:, :, 0:3]
new_size_is_smaller = (size[0] * size[1]) < (x.shape[0] * x.shape[1])
new_size_is_bigger = (size[0] * size[1]) > (x.shape[0] * x.shape[1])
unique_color_count = np.unique(x.reshape(-1, x.shape[2]), axis=0).shape[0]
is_one_pixel_edge = False
is_binary = False
if unique_color_count == 2:
is_binary = np.min(x) < 16 and np.max(x) > 240
if is_binary:
xc = x
xc = cv2.erode(xc, np.ones(shape=(3, 3), dtype=np.uint8), iterations=1)
xc = cv2.dilate(xc, np.ones(shape=(3, 3), dtype=np.uint8), iterations=1)
one_pixel_edge_count = np.where(xc < x)[0].shape[0]
all_edge_count = np.where(x > 127)[0].shape[0]
is_one_pixel_edge = one_pixel_edge_count * 2 > all_edge_count
if 2 < unique_color_count < 200:
interpolation = cv2.INTER_NEAREST
elif new_size_is_smaller:
interpolation = cv2.INTER_AREA
else:
interpolation = cv2.INTER_CUBIC # Must be CUBIC because we now use nms. NEVER CHANGE THIS
y = cv2.resize(x, size, interpolation=interpolation)
if inpaint_mask is not None:
inpaint_mask = cv2.resize(inpaint_mask, size, interpolation=interpolation)
if is_binary:
y = np.mean(y.astype(np.float32), axis=2).clip(0, 255).astype(np.uint8)
if is_one_pixel_edge:
y = nake_nms(y)
_, y = cv2.threshold(y, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
y = lvmin_thin(y, prunings=new_size_is_bigger)
else:
_, y = cv2.threshold(y, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
y = np.stack([y] * 3, axis=2)
if inpaint_mask is not None:
inpaint_mask = (inpaint_mask > 127).astype(np.float32) * 255.0
inpaint_mask = inpaint_mask[:, :, None].clip(0, 255).astype(np.uint8)
y = np.concatenate([y, inpaint_mask], axis=2)
return y
# if resize_mode == external_code.ResizeMode.RESIZE:
if resize_mode == "just_resize": # RESIZE
np_img = high_quality_resize(np_img, (w, h))
np_img = safe_numpy(np_img)
return get_pytorch_control(np_img), np_img
old_h, old_w, _ = np_img.shape
old_w = float(old_w)
old_h = float(old_h)
k0 = float(h) / old_h
k1 = float(w) / old_w
safeint = lambda x: int(np.round(x))
# if resize_mode == external_code.ResizeMode.OUTER_FIT:
if resize_mode == "fill_resize": # OUTER_FIT
k = min(k0, k1)
borders = np.concatenate([np_img[0, :, :], np_img[-1, :, :], np_img[:, 0, :], np_img[:, -1, :]], axis=0)
high_quality_border_color = np.median(borders, axis=0).astype(np_img.dtype)
if len(high_quality_border_color) == 4:
# Inpaint hijack
high_quality_border_color[3] = 255
high_quality_background = np.tile(high_quality_border_color[None, None], [h, w, 1])
np_img = high_quality_resize(np_img, (safeint(old_w * k), safeint(old_h * k)))
new_h, new_w, _ = np_img.shape
pad_h = max(0, (h - new_h) // 2)
pad_w = max(0, (w - new_w) // 2)
high_quality_background[pad_h:pad_h + new_h, pad_w:pad_w + new_w] = np_img
np_img = high_quality_background
np_img = safe_numpy(np_img)
return get_pytorch_control(np_img), np_img
else: # resize_mode == "crop_resize" (INNER_FIT)
k = max(k0, k1)
np_img = high_quality_resize(np_img, (safeint(old_w * k), safeint(old_h * k)))
new_h, new_w, _ = np_img.shape
pad_h = max(0, (new_h - h) // 2)
pad_w = max(0, (new_w - w) // 2)
np_img = np_img[pad_h:pad_h + h, pad_w:pad_w + w]
np_img = safe_numpy(np_img)
return get_pytorch_control(np_img), np_img
def prepare_control_image(
# image used to be Union[PIL.Image.Image, List[PIL.Image.Image], torch.Tensor, List[torch.Tensor]]
# but now should be able to assume that image is a single PIL.Image, which simplifies things
image: Image,
# FIXME: need to fix hardwiring of width and height, change to basing on latents dimensions?
# latents_to_match_resolution, # TorchTensor of shape (batch_size, 3, height, width)
width=512, # should be 8 * latent.shape[3]
height=512, # should be 8 * latent.shape[2]
# batch_size=1, # currently no batching
# num_images_per_prompt=1, # currently only single image
device="cuda",
dtype=torch.float16,
do_classifier_free_guidance=True,
control_mode="balanced",
resize_mode="just_resize_simple",
):
# FIXME: implement "crop_resize_simple" and "fill_resize_simple", or pull them out
if (resize_mode == "just_resize_simple" or
resize_mode == "crop_resize_simple" or
resize_mode == "fill_resize_simple"):
image = image.convert("RGB")
if (resize_mode == "just_resize_simple"):
image = image.resize((width, height), resample=PIL_INTERPOLATION["lanczos"])
elif (resize_mode == "crop_resize_simple"): # not yet implemented
pass
elif (resize_mode == "fill_resize_simple"): # not yet implemented
pass
nimage = np.array(image)
nimage = nimage[None, :]
nimage = np.concatenate([nimage], axis=0)
# normalizing RGB values to [0,1] range (in PIL.Image they are [0-255])
nimage = np.array(nimage).astype(np.float32) / 255.0
nimage = nimage.transpose(0, 3, 1, 2)
timage = torch.from_numpy(nimage)
# use fancy lvmin controlnet resizing
elif (resize_mode == "just_resize" or resize_mode == "crop_resize" or resize_mode == "fill_resize"):
nimage = np.array(image)
timage, nimage = np_img_resize(
np_img=nimage,
resize_mode=resize_mode,
h=height,
w=width,
# device=torch.device('cpu')
device=device,
)
else:
pass
print("ERROR: invalid resize_mode ==> ", resize_mode)
exit(1)
timage = timage.to(device=device, dtype=dtype)
cfg_injection = (control_mode == "more_control" or control_mode == "unbalanced")
if do_classifier_free_guidance and not cfg_injection:
timage = torch.cat([timage] * 2)
return timage

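As a quick illustration of the resize-mode arithmetic in np_img_resize above (not part of the diff): fill_resize takes the smaller of the two scale factors so the whole control image fits inside the target and is padded with the median border color, while crop_resize takes the larger factor so the target is fully covered and the overflow is cropped.

def fitted_size(old_w, old_h, target_w, target_h, resize_mode):
    # mirrors k = min(k0, k1) for "fill_resize" and k = max(k0, k1) for "crop_resize"
    k0 = target_h / old_h
    k1 = target_w / old_w
    k = min(k0, k1) if resize_mode == "fill_resize" else max(k0, k1)
    return round(old_w * k), round(old_h * k)

print(fitted_size(600, 400, 512, 512, "fill_resize"))  # (512, 341): fits inside, padded to 512x512
print(fitted_size(600, 400, 512, 512, "crop_resize"))  # (768, 512): covers the target, cropped to 512x512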
View File

@@ -1,9 +1,30 @@
import torch
from PIL import Image
from invokeai.app.models.exceptions import CanceledException
from invokeai.app.models.image import ProgressImage
from ..invocations.baseinvocation import InvocationContext
from ...backend.util.util import image_to_dataURL
from ...backend.generator.base import Generator
from ...backend.stable_diffusion import PipelineIntermediateState
from invokeai.app.services.config import InvokeAIAppConfig
def sample_to_lowres_estimated_image(samples, latent_rgb_factors, smooth_matrix = None):
latent_image = samples[0].permute(1, 2, 0) @ latent_rgb_factors
if smooth_matrix is not None:
latent_image = latent_image.unsqueeze(0).permute(3, 0, 1, 2)
latent_image = torch.nn.functional.conv2d(latent_image, smooth_matrix.reshape((1,1,3,3)), padding=1)
latent_image = latent_image.permute(1, 2, 3, 0).squeeze(0)
latents_ubyte = (
((latent_image + 1) / 2)
.clamp(0, 1) # change scale from -1..1 to 0..1
.mul(0xFF) # to 0..255
.byte()
).cpu()
return Image.fromarray(latents_ubyte.numpy())
def stable_diffusion_step_callback(
@@ -37,7 +58,24 @@ def stable_diffusion_step_callback(
# step = intermediate_state.step
# TODO: only output a preview image when requested
image = Generator.sample_to_lowres_estimated_image(sample)
# originally adapted from code by @erucipe and @keturn here:
# https://discuss.huggingface.co/t/decoding-latents-to-rgb-without-upscaling/23204/7
# these updated numbers for v1.5 are from @torridgristle
v1_5_latent_rgb_factors = torch.tensor(
[
# R G B
[0.3444, 0.1385, 0.0670], # L1
[0.1247, 0.4027, 0.1494], # L2
[-0.3192, 0.2513, 0.2103], # L3
[-0.1307, -0.1874, -0.7445], # L4
],
dtype=sample.dtype,
device=sample.device,
)
image = sample_to_lowres_estimated_image(sample, v1_5_latent_rgb_factors)
(width, height) = image.size
width *= 8
@@ -53,3 +91,56 @@ def stable_diffusion_step_callback(
step=intermediate_state.step,
total_steps=node["steps"],
)
def stable_diffusion_xl_step_callback(
context: InvocationContext,
node: dict,
source_node_id: str,
sample,
step,
total_steps,
):
if context.services.queue.is_canceled(context.graph_execution_state_id):
raise CanceledException
sdxl_latent_rgb_factors = torch.tensor(
[
# R G B
[ 0.3816, 0.4930, 0.5320],
[-0.3753, 0.1631, 0.1739],
[ 0.1770, 0.3588, -0.2048],
[-0.4350, -0.2644, -0.4289],
],
dtype=sample.dtype,
device=sample.device,
)
sdxl_smooth_matrix = torch.tensor(
[
#[ 0.0478, 0.1285, 0.0478],
#[ 0.1285, 0.2948, 0.1285],
#[ 0.0478, 0.1285, 0.0478],
[0.0358, 0.0964, 0.0358],
[0.0964, 0.4711, 0.0964],
[0.0358, 0.0964, 0.0358],
],
dtype=sample.dtype,
device=sample.device,
)
image = sample_to_lowres_estimated_image(sample, sdxl_latent_rgb_factors, sdxl_smooth_matrix)
(width, height) = image.size
width *= 8
height *= 8
dataURL = image_to_dataURL(image, image_format="JPEG")
context.services.events.emit_generator_progress(
graph_execution_state_id=context.graph_execution_state_id,
node=node,
source_node_id=source_node_id,
progress_image=ProgressImage(width=width, height=height, dataURL=dataURL),
step=step,
total_steps=total_steps,
)

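A small shape-check sketch of the preview path above (illustrative only; random tensors stand in for the RGB factor tables and the SDXL smoothing matrix): a 4-channel latent is projected to a low-res RGB estimate with a (4, 3) matrix, optionally smoothed with a 3x3 convolution, and the reported preview size is the latent size times 8.

import torch

latent = torch.randn(1, 4, 64, 96)                    # (batch, channels, H/8, W/8)
factors = torch.randn(4, 3)                            # stand-in for the latent_rgb_factors above
rgb = latent[0].permute(1, 2, 0) @ factors             # -> (64, 96, 3) low-res RGB estimate

smooth = torch.randn(3, 3)                             # stand-in for sdxl_smooth_matrix
x = rgb.unsqueeze(0).permute(3, 0, 1, 2)               # -> (3, 1, 64, 96): one conv "batch" per channel
x = torch.nn.functional.conv2d(x, smooth.reshape(1, 1, 3, 3), padding=1)
rgb_smoothed = x.permute(1, 2, 3, 0).squeeze(0)        # back to (64, 96, 3)

print(rgb.shape, rgb_smoothed.shape, 64 * 8, 96 * 8)   # preview is 64x96; reported size is 512x768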
View File

@@ -23,6 +23,7 @@ from urllib import request
import npyscreen
import transformers
import omegaconf
from diffusers import AutoencoderKL
from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
from huggingface_hub import HfFolder
@@ -44,6 +45,7 @@ from invokeai.backend.util.logging import InvokeAILogger
from invokeai.frontend.install.model_install import addModelsForm, process_and_execute
from invokeai.frontend.install.widgets import (
CenteredButtonPress,
FileBox,
IntTitleSlider,
set_min_terminal_size,
CyclingForm,
@@ -409,21 +411,21 @@ Use cursor arrows to make a checkbox selection, and space to toggle.
self.nextrely += 1
self.add_widget_intelligent(
npyscreen.FixedText,
value="Directories containing textual inversion, controlnet and LoRA models (<tab> autocompletes, ctrl-N advances):",
value="Folder to recursively scan for new checkpoints, ControlNets, LoRAs and TI models (<tab> autocompletes, ctrl-N advances):",
editable=False,
color="CONTROL",
)
self.autoimport_dirs = {}
for description, config_name, path in autoimport_paths(old_opts):
self.autoimport_dirs[config_name] = self.add_widget_intelligent(
npyscreen.TitleFilename,
name=description+':',
value=str(path),
self.autoimport_dirs['autoimport_dir'] = self.add_widget_intelligent(
FileBox,
name=f'Autoimport Folder',
value=str(config.root_path / config.autoimport_dir),
select_dir=True,
must_exist=False,
use_two_lines=False,
labelColor="GOOD",
begin_entry_at=32,
max_height = 3,
scroll_exit=True
)
self.nextrely += 1
@@ -567,7 +569,14 @@ def default_startup_options(init_file: Path) -> Namespace:
return opts
def default_user_selections(program_opts: Namespace) -> InstallSelections:
installer = ModelInstall(config)
try:
installer = ModelInstall(config)
except omegaconf.errors.ConfigKeyError:
logger.warning('Your models.yaml file is corrupt or out of date. Reinitializing')
initialize_rootdir(config.root_path, True)
installer = ModelInstall(config)
models = installer.all_models()
return InstallSelections(
install_models=[models[installer.default_model()].path or models[installer.default_model()].repo_id]
@@ -575,19 +584,8 @@ def default_user_selections(program_opts: Namespace) -> InstallSelections:
else [models[x].path or models[x].repo_id for x in installer.recommended_models()]
if program_opts.yes_to_all
else list(),
# scan_directory=None,
# autoscan_on_startup=None,
)
# -------------------------------------
def autoimport_paths(config: InvokeAIAppConfig):
return [
('Checkpoints & diffusers models', 'autoimport_dir', config.root_path / config.autoimport_dir),
('LoRA/LyCORIS models', 'lora_dir', config.root_path / config.lora_dir),
('Controlnet models', 'controlnet_dir', config.root_path / config.controlnet_dir),
('Textual Inversion Embeddings', 'embedding_dir', config.root_path / config.embedding_dir),
]
# -------------------------------------
def initialize_rootdir(root: Path, yes_to_all: bool = False):
logger.info("** INITIALIZING INVOKEAI RUNTIME DIRECTORY **")
@@ -663,7 +661,7 @@ def write_opts(opts: Namespace, init_file: Path):
with open(init_file,'w', encoding='utf-8') as file:
file.write(new_config.to_yaml())
if opts.hf_token:
if hasattr(opts,'hf_token'):
HfLogin(opts.hf_token)
# -------------------------------------

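The default_user_selections change above follows a simple recover-and-retry shape: if building the installer fails because models.yaml is corrupt or stale, reinitialize the runtime directory and try once more. A minimal stand-alone sketch with hypothetical stand-ins (not the real ModelInstall or initialize_rootdir):

class ConfigKeyError(Exception):
    """Stand-in for omegaconf.errors.ConfigKeyError."""

def build_installer(state):
    if not state["models_yaml_ok"]:
        raise ConfigKeyError("models.yaml is corrupt or out of date")
    return "installer"

state = {"models_yaml_ok": False}
try:
    installer = build_installer(state)
except ConfigKeyError:
    state["models_yaml_ok"] = True    # stand-in for initialize_rootdir(config.root_path, True)
    installer = build_installer(state)
print(installer)                       # -> "installer" after one reinitialize-and-retry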
View File

@@ -956,7 +956,7 @@ class ModelManager(object):
config.lora_dir,
config.embedding_dir,
config.controlnet_dir,
]
] if x
}
scanner = ScanAndImport(directories, self.logger, ignore=known_paths, installer=installer)
scanner.search()

View File

@@ -219,6 +219,7 @@ class ControlNetData:
begin_step_percent: float = Field(default=0.0)
end_step_percent: float = Field(default=1.0)
control_mode: str = Field(default="balanced")
resize_mode: str = Field(default="just_resize")
@dataclass
@@ -653,7 +654,7 @@ class StableDiffusionGeneratorPipeline(StableDiffusionPipeline):
if cfg_injection:
# Inferred ControlNet only for the conditional batch.
# To apply the output of ControlNet to both the unconditional and conditional batches,
# add 0 to the unconditional batch to keep it unchanged.
# prepend zeros for unconditional batch
down_samples = [torch.cat([torch.zeros_like(d), d]) for d in down_samples]
mid_sample = torch.cat([torch.zeros_like(mid_sample), mid_sample])
@@ -954,53 +955,3 @@ class StableDiffusionGeneratorPipeline(StableDiffusionPipeline):
debug_image(
img, f"latents {msg} {i+1}/{len(decoded)}", debug_status=True
)
# Copied from diffusers pipeline_stable_diffusion_controlnet.py
# Returns torch.Tensor of shape (batch_size, 3, height, width)
@staticmethod
def prepare_control_image(
image,
# FIXME: need to fix hardwiring of width and height, change to basing on latents dimensions?
# latents,
width=512, # should be 8 * latent.shape[3]
height=512, # should be 8 * latent height[2]
batch_size=1,
num_images_per_prompt=1,
device="cuda",
dtype=torch.float16,
do_classifier_free_guidance=True,
control_mode="balanced"
):
if not isinstance(image, torch.Tensor):
if isinstance(image, PIL.Image.Image):
image = [image]
if isinstance(image[0], PIL.Image.Image):
images = []
for image_ in image:
image_ = image_.convert("RGB")
image_ = image_.resize((width, height), resample=PIL_INTERPOLATION["lanczos"])
image_ = np.array(image_)
image_ = image_[None, :]
images.append(image_)
image = images
image = np.concatenate(image, axis=0)
image = np.array(image).astype(np.float32) / 255.0
image = image.transpose(0, 3, 1, 2)
image = torch.from_numpy(image)
elif isinstance(image[0], torch.Tensor):
image = torch.cat(image, dim=0)
image_batch_size = image.shape[0]
if image_batch_size == 1:
repeat_by = batch_size
else:
# image batch size is the same as prompt batch size
repeat_by = num_images_per_prompt
image = image.repeat_interleave(repeat_by, dim=0)
image = image.to(device=device, dtype=dtype)
cfg_injection = (control_mode == "more_control" or control_mode == "unbalanced")
if do_classifier_free_guidance and not cfg_injection:
image = torch.cat([image] * 2)
return image

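Why the zero-prepend for cfg_injection above is a no-op for the unconditional half (illustrative check, not part of the diff): ControlNet residuals are added to the UNet features, so concatenating zeros in front of the conditional residuals leaves the unconditional batch entry unchanged while still steering the conditional one.

import torch

cond_residual = torch.randn(1, 4, 8, 8)                                   # ControlNet output for the conditional batch
residuals = torch.cat([torch.zeros_like(cond_residual), cond_residual])   # (2, 4, 8, 8): zeros for the unconditional half
features = torch.randn(2, 4, 8, 8)                                        # stand-in UNet features for both halves
out = features + residuals
print(torch.equal(out[0], features[0]), torch.equal(out[1], features[1])) # True False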
File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@@ -12,7 +12,7 @@
margin: 0;
}
</style>
<script type="module" crossorigin src="./assets/index-0ec007dd.js"></script>
<script type="module" crossorigin src="./assets/index-3a8b43e1.js"></script>
</head>
<body dir="ltr">

View File

@@ -455,7 +455,12 @@
"addDifference": "Add Difference",
"pickModelType": "Pick Model Type",
"selectModel": "Select Model",
"importModels": "Import Models"
"importModels": "Import Models",
"settings": "Settings",
"syncModels": "Sync Models",
"syncModelsDesc": "If your models are out of sync with the backend, you can refresh them up using this option. This is generally handy in cases where you manually update your models.yaml file or add models to the InvokeAI root folder after the application has booted.",
"modelsSynced": "Models Synced",
"modelSyncFailed": "Model Sync Failed"
},
"parameters": {
"general": "General",
@@ -547,7 +552,8 @@
"saveSteps": "Save images every n steps",
"confirmOnDelete": "Confirm On Delete",
"displayHelpIcons": "Display Help Icons",
"useCanvasBeta": "Use Canvas Beta Layout",
"alternateCanvasLayout": "Alternate Canvas Layout",
"enableNodesEditor": "Enable Nodes Editor",
"enableImageDebugging": "Enable Image Debugging",
"useSlidersForAll": "Use Sliders For All Options",
"showProgressInViewer": "Show Progress Images in Viewer",
@@ -564,7 +570,9 @@
"ui": "User Interface",
"favoriteSchedulers": "Favorite Schedulers",
"favoriteSchedulersPlaceholder": "No schedulers favorited",
"showAdvancedOptions": "Show Advanced Options"
"showAdvancedOptions": "Show Advanced Options",
"experimental": "Experimental",
"beta": "Beta"
},
"toast": {
"serverError": "Server Error",

View File

@@ -455,7 +455,12 @@
"addDifference": "Add Difference",
"pickModelType": "Pick Model Type",
"selectModel": "Select Model",
"importModels": "Import Models"
"importModels": "Import Models",
"settings": "Settings",
"syncModels": "Sync Models",
"syncModelsDesc": "If your models are out of sync with the backend, you can refresh them up using this option. This is generally handy in cases where you manually update your models.yaml file or add models to the InvokeAI root folder after the application has booted.",
"modelsSynced": "Models Synced",
"modelSyncFailed": "Model Sync Failed"
},
"parameters": {
"general": "General",
@@ -547,7 +552,8 @@
"saveSteps": "Save images every n steps",
"confirmOnDelete": "Confirm On Delete",
"displayHelpIcons": "Display Help Icons",
"useCanvasBeta": "Use Canvas Beta Layout",
"alternateCanvasLayout": "Alternate Canvas Layout",
"enableNodesEditor": "Enable Nodes Editor",
"enableImageDebugging": "Enable Image Debugging",
"useSlidersForAll": "Use Sliders For All Options",
"showProgressInViewer": "Show Progress Images in Viewer",
@@ -564,7 +570,9 @@
"ui": "User Interface",
"favoriteSchedulers": "Favorite Schedulers",
"favoriteSchedulersPlaceholder": "No schedulers favorited",
"showAdvancedOptions": "Show Advanced Options"
"showAdvancedOptions": "Show Advanced Options",
"experimental": "Experimental",
"beta": "Beta"
},
"toast": {
"serverError": "Server Error",

View File

@@ -6,11 +6,7 @@ import {
imageSelected,
} from 'features/gallery/store/gallerySlice';
import { progressImageSet } from 'features/system/store/systemSlice';
import {
SYSTEM_BOARDS,
imagesAdapter,
imagesApi,
} from 'services/api/endpoints/images';
import { imagesAdapter, imagesApi } from 'services/api/endpoints/images';
import { isImageOutput } from 'services/api/guards';
import { sessionCanceled } from 'services/api/thunks/session';
import {
@@ -32,8 +28,7 @@ export const addInvocationCompleteEventListener = () => {
);
const session_id = action.payload.data.graph_execution_state_id;
const { cancelType, isCancelScheduled, boardIdToAddTo } =
getState().system;
const { cancelType, isCancelScheduled } = getState().system;
// Handle scheduled cancelation
if (cancelType === 'scheduled' && isCancelScheduled) {
@@ -88,26 +83,28 @@ export const addInvocationCompleteEventListener = () => {
)
);
// add image to the board if we had one selected
if (boardIdToAddTo && !SYSTEM_BOARDS.includes(boardIdToAddTo)) {
const { autoAddBoardId } = gallery;
// add image to the board if auto-add is enabled
if (autoAddBoardId) {
dispatch(
imagesApi.endpoints.addImageToBoard.initiate({
board_id: boardIdToAddTo,
board_id: autoAddBoardId,
imageDTO,
})
);
}
const { selectedBoardId } = gallery;
if (boardIdToAddTo && boardIdToAddTo !== selectedBoardId) {
dispatch(boardIdSelected(boardIdToAddTo));
} else if (!boardIdToAddTo) {
dispatch(boardIdSelected('all'));
}
const { selectedBoardId, shouldAutoSwitch } = gallery;
// If auto-switch is enabled, select the new image
if (getState().gallery.shouldAutoSwitch) {
if (shouldAutoSwitch) {
// if auto-add is enabled, switch the board as the image comes in
if (autoAddBoardId && autoAddBoardId !== selectedBoardId) {
dispatch(boardIdSelected(autoAddBoardId));
} else if (!autoAddBoardId) {
dispatch(boardIdSelected('images'));
}
dispatch(imageSelected(imageDTO.image_name));
}
}

View File

@@ -17,13 +17,13 @@ import {
} from 'common/components/IAIImageFallback';
import ImageMetadataOverlay from 'common/components/ImageMetadataOverlay';
import { useImageUploadButton } from 'common/hooks/useImageUploadButton';
import ImageContextMenu from 'features/gallery/components/ImageContextMenu/ImageContextMenu';
import { MouseEvent, ReactElement, SyntheticEvent, memo } from 'react';
import { FaImage, FaUndo, FaUpload } from 'react-icons/fa';
import { ImageDTO, PostUploadAction } from 'services/api/types';
import { mode } from 'theme/util/mode';
import IAIDraggable from './IAIDraggable';
import IAIDroppable from './IAIDroppable';
import ImageContextMenu from 'features/gallery/components/ImageContextMenu/ImageContextMenu';
type IAIDndImageProps = {
imageDTO: ImageDTO | undefined;
@@ -148,7 +148,9 @@ const IAIDndImage = (props: IAIDndImageProps) => {
maxH: 'full',
borderRadius: 'base',
shadow: isSelected ? 'selected.light' : undefined,
_dark: { shadow: isSelected ? 'selected.dark' : undefined },
_dark: {
shadow: isSelected ? 'selected.dark' : undefined,
},
...imageSx,
}}
/>
@@ -183,13 +185,6 @@ const IAIDndImage = (props: IAIDndImageProps) => {
</>
)}
{!imageDTO && isUploadDisabled && noContentFallback}
{!isDropDisabled && (
<IAIDroppable
data={droppableData}
disabled={isDropDisabled}
dropLabel={dropLabel}
/>
)}
{imageDTO && !isDragDisabled && (
<IAIDraggable
data={draggableData}
@@ -197,6 +192,13 @@ const IAIDndImage = (props: IAIDndImageProps) => {
onClick={onClick}
/>
)}
{!isDropDisabled && (
<IAIDroppable
data={droppableData}
disabled={isDropDisabled}
dropLabel={dropLabel}
/>
)}
{onClickReset && withResetIcon && imageDTO && (
<IAIIconButton
onClick={onClickReset}

View File

@@ -13,10 +13,11 @@ type IAIDroppableProps = {
dropLabel?: ReactNode;
disabled?: boolean;
data?: TypesafeDroppableData;
hoverRef?: React.Ref<HTMLDivElement>;
};
const IAIDroppable = (props: IAIDroppableProps) => {
const { dropLabel, data, disabled } = props;
const { dropLabel, data, disabled, hoverRef } = props;
const dndId = useRef(uuidv4());
const { isOver, setNodeRef, active } = useDroppable({

View File

@@ -9,7 +9,7 @@ import {
} from '@chakra-ui/react';
import { memo } from 'react';
interface Props extends SwitchProps {
export interface IAISwitchProps extends SwitchProps {
label?: string;
width?: string | number;
formControlProps?: FormControlProps;
@@ -20,7 +20,7 @@ interface Props extends SwitchProps {
/**
* Customized Chakra FormControl + Switch multi-part component.
*/
const IAISwitch = (props: Props) => {
const IAISwitch = (props: IAISwitchProps) => {
const {
label,
isDisabled = false,

View File

@@ -24,6 +24,7 @@ import ParamControlNetShouldAutoConfig from './ParamControlNetShouldAutoConfig';
import ParamControlNetBeginEnd from './parameters/ParamControlNetBeginEnd';
import ParamControlNetControlMode from './parameters/ParamControlNetControlMode';
import ParamControlNetProcessorSelect from './parameters/ParamControlNetProcessorSelect';
import ParamControlNetResizeMode from './parameters/ParamControlNetResizeMode';
type ControlNetProps = {
controlNetId: string;
@@ -68,7 +69,7 @@ const ControlNet = (props: ControlNetProps) => {
<Flex
sx={{
flexDir: 'column',
gap: 2,
gap: 3,
p: 3,
borderRadius: 'base',
position: 'relative',
@@ -117,7 +118,12 @@ const ControlNet = (props: ControlNetProps) => {
tooltip={isExpanded ? 'Hide Advanced' : 'Show Advanced'}
aria-label={isExpanded ? 'Hide Advanced' : 'Show Advanced'}
onClick={toggleIsExpanded}
variant="link"
variant="ghost"
sx={{
_hover: {
bg: 'none',
},
}}
icon={
<ChevronUpIcon
sx={{
@@ -151,7 +157,7 @@ const ControlNet = (props: ControlNetProps) => {
/>
)}
</Flex>
<Flex sx={{ w: 'full', flexDirection: 'column' }}>
<Flex sx={{ w: 'full', flexDirection: 'column', gap: 3 }}>
<Flex sx={{ gap: 4, w: 'full', alignItems: 'center' }}>
<Flex
sx={{
@@ -176,16 +182,16 @@ const ControlNet = (props: ControlNetProps) => {
h: 28,
w: 28,
aspectRatio: '1/1',
mt: 3,
}}
>
<ControlNetImagePreview controlNetId={controlNetId} height={28} />
</Flex>
)}
</Flex>
<Box mt={2}>
<Flex sx={{ gap: 2 }}>
<ParamControlNetControlMode controlNetId={controlNetId} />
</Box>
<ParamControlNetResizeMode controlNetId={controlNetId} />
</Flex>
<ParamControlNetProcessorSelect controlNetId={controlNetId} />
</Flex>

View File

@@ -0,0 +1,62 @@
import { createSelector } from '@reduxjs/toolkit';
import { stateSelector } from 'app/store/store';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { defaultSelectorOptions } from 'app/store/util/defaultMemoizeOptions';
import IAIMantineSelect from 'common/components/IAIMantineSelect';
import {
ResizeModes,
controlNetResizeModeChanged,
} from 'features/controlNet/store/controlNetSlice';
import { useCallback, useMemo } from 'react';
import { useTranslation } from 'react-i18next';
type ParamControlNetResizeModeProps = {
controlNetId: string;
};
const RESIZE_MODE_DATA = [
{ label: 'Resize', value: 'just_resize' },
{ label: 'Crop', value: 'crop_resize' },
{ label: 'Fill', value: 'fill_resize' },
];
export default function ParamControlNetResizeMode(
props: ParamControlNetResizeModeProps
) {
const { controlNetId } = props;
const dispatch = useAppDispatch();
const selector = useMemo(
() =>
createSelector(
stateSelector,
({ controlNet }) => {
const { resizeMode, isEnabled } =
controlNet.controlNets[controlNetId];
return { resizeMode, isEnabled };
},
defaultSelectorOptions
),
[controlNetId]
);
const { resizeMode, isEnabled } = useAppSelector(selector);
const { t } = useTranslation();
const handleResizeModeChange = useCallback(
(resizeMode: ResizeModes) => {
dispatch(controlNetResizeModeChanged({ controlNetId, resizeMode }));
},
[controlNetId, dispatch]
);
return (
<IAIMantineSelect
disabled={!isEnabled}
label="Resize Mode"
data={RESIZE_MODE_DATA}
value={String(resizeMode)}
onChange={handleResizeModeChange}
/>
);
}

View File

@@ -3,6 +3,7 @@ import { RootState } from 'app/store/store';
import { ControlNetModelParam } from 'features/parameters/types/parameterSchemas';
import { cloneDeep, forEach } from 'lodash-es';
import { imagesApi } from 'services/api/endpoints/images';
import { components } from 'services/api/schema';
import { isAnySessionRejected } from 'services/api/thunks/session';
import { appSocketInvocationError } from 'services/events/actions';
import { controlNetImageProcessed } from './actions';
@@ -16,11 +17,13 @@ import {
RequiredControlNetProcessorNode,
} from './types';
export type ControlModes =
| 'balanced'
| 'more_prompt'
| 'more_control'
| 'unbalanced';
export type ControlModes = NonNullable<
components['schemas']['ControlNetInvocation']['control_mode']
>;
export type ResizeModes = NonNullable<
components['schemas']['ControlNetInvocation']['resize_mode']
>;
export const initialControlNet: Omit<ControlNetConfig, 'controlNetId'> = {
isEnabled: true,
@@ -29,6 +32,7 @@ export const initialControlNet: Omit<ControlNetConfig, 'controlNetId'> = {
beginStepPct: 0,
endStepPct: 1,
controlMode: 'balanced',
resizeMode: 'just_resize',
controlImage: null,
processedControlImage: null,
processorType: 'canny_image_processor',
@@ -45,6 +49,7 @@ export type ControlNetConfig = {
beginStepPct: number;
endStepPct: number;
controlMode: ControlModes;
resizeMode: ResizeModes;
controlImage: string | null;
processedControlImage: string | null;
processorType: ControlNetProcessorType;
@@ -215,6 +220,16 @@ export const controlNetSlice = createSlice({
const { controlNetId, controlMode } = action.payload;
state.controlNets[controlNetId].controlMode = controlMode;
},
controlNetResizeModeChanged: (
state,
action: PayloadAction<{
controlNetId: string;
resizeMode: ResizeModes;
}>
) => {
const { controlNetId, resizeMode } = action.payload;
state.controlNets[controlNetId].resizeMode = resizeMode;
},
controlNetProcessorParamsChanged: (
state,
action: PayloadAction<{
@@ -342,6 +357,7 @@ export const {
controlNetBeginStepPctChanged,
controlNetEndStepPctChanged,
controlNetControlModeChanged,
controlNetResizeModeChanged,
controlNetProcessorParamsChanged,
controlNetProcessorTypeChanged,
controlNetReset,

View File

@@ -0,0 +1,80 @@
import { SelectItem } from '@mantine/core';
import { createSelector } from '@reduxjs/toolkit';
import { stateSelector } from 'app/store/store';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { defaultSelectorOptions } from 'app/store/util/defaultMemoizeOptions';
import IAIMantineSearchableSelect from 'common/components/IAIMantineSearchableSelect';
import IAIMantineSelectItemWithTooltip from 'common/components/IAIMantineSelectItemWithTooltip';
import { autoAddBoardIdChanged } from 'features/gallery/store/gallerySlice';
import { useCallback, useRef } from 'react';
import { useListAllBoardsQuery } from 'services/api/endpoints/boards';
const selector = createSelector(
[stateSelector],
({ gallery }) => {
const { autoAddBoardId } = gallery;
return {
autoAddBoardId,
};
},
defaultSelectorOptions
);
const BoardAutoAddSelect = () => {
const dispatch = useAppDispatch();
const { autoAddBoardId } = useAppSelector(selector);
const inputRef = useRef<HTMLInputElement>(null);
const { boards, hasBoards } = useListAllBoardsQuery(undefined, {
selectFromResult: ({ data }) => {
const boards: SelectItem[] = [
{
label: 'None',
value: 'none',
},
];
data?.forEach(({ board_id, board_name }) => {
boards.push({
label: board_name,
value: board_id,
});
});
return {
boards,
hasBoards: boards.length > 1,
};
},
});
const handleChange = useCallback(
(v: string | null) => {
if (!v) {
return;
}
dispatch(autoAddBoardIdChanged(v === 'none' ? null : v));
},
[dispatch]
);
return (
<IAIMantineSearchableSelect
label="Auto-Add Board"
inputRef={inputRef}
autoFocus
placeholder={'Select a Board'}
value={autoAddBoardId}
data={boards}
nothingFound="No matching Boards"
itemComponent={IAIMantineSelectItemWithTooltip}
disabled={!hasBoards}
filter={(value, item: SelectItem) =>
item.label?.toLowerCase().includes(value.toLowerCase().trim()) ||
item.value.toLowerCase().includes(value.toLowerCase().trim())
}
onChange={handleChange}
/>
);
};
export default BoardAutoAddSelect;

View File

@@ -0,0 +1,58 @@
import { Box, MenuItem, MenuList } from '@chakra-ui/react';
import { useAppDispatch } from 'app/store/storeHooks';
import { ContextMenu, ContextMenuProps } from 'chakra-ui-contextmenu';
import { boardIdSelected } from 'features/gallery/store/gallerySlice';
import { memo, useCallback } from 'react';
import { FaFolder } from 'react-icons/fa';
import { BoardDTO } from 'services/api/types';
import { menuListMotionProps } from 'theme/components/menu';
import GalleryBoardContextMenuItems from './GalleryBoardContextMenuItems';
import SystemBoardContextMenuItems from './SystemBoardContextMenuItems';
type Props = {
board?: BoardDTO;
board_id: string;
children: ContextMenuProps<HTMLDivElement>['children'];
setBoardToDelete?: (board?: BoardDTO) => void;
};
const BoardContextMenu = memo(
({ board, board_id, setBoardToDelete, children }: Props) => {
const dispatch = useAppDispatch();
const handleSelectBoard = useCallback(() => {
dispatch(boardIdSelected(board?.board_id ?? board_id));
}, [board?.board_id, board_id, dispatch]);
return (
<ContextMenu<HTMLDivElement>
menuProps={{ size: 'sm', isLazy: true }}
menuButtonProps={{
bg: 'transparent',
_hover: { bg: 'transparent' },
}}
renderMenu={() => (
<MenuList
sx={{ visibility: 'visible !important' }}
motionProps={menuListMotionProps}
>
<MenuItem icon={<FaFolder />} onClickCapture={handleSelectBoard}>
Select Board
</MenuItem>
{!board && <SystemBoardContextMenuItems board_id={board_id} />}
{board && (
<GalleryBoardContextMenuItems
board={board}
setBoardToDelete={setBoardToDelete}
/>
)}
</MenuList>
)}
>
{children}
</ContextMenu>
);
}
);
BoardContextMenu.displayName = 'BoardContextMenu';
export default BoardContextMenu;

View File

@@ -1,5 +1,6 @@
import IAIButton from 'common/components/IAIButton';
import IAIIconButton from 'common/components/IAIIconButton';
import { useCallback } from 'react';
import { FaPlus } from 'react-icons/fa';
import { useCreateBoardMutation } from 'services/api/endpoints/boards';
const DEFAULT_BOARD_NAME = 'My Board';
@@ -12,15 +13,14 @@ const AddBoardButton = () => {
}, [createBoard]);
return (
<IAIButton
<IAIIconButton
icon={<FaPlus />}
isLoading={isLoading}
tooltip="Add Board"
aria-label="Add Board"
onClick={handleCreateBoard}
size="sm"
sx={{ px: 4 }}
>
Add Board
</IAIButton>
/>
);
};

View File

@@ -38,6 +38,7 @@ const AllAssetsBoard = ({ isSelected }: { isSelected: boolean }) => {
return (
<GenericBoard
board_id="assets"
onClick={handleClick}
isSelected={isSelected}
icon={FaFileImage}

View File

@@ -38,6 +38,7 @@ const AllImagesBoard = ({ isSelected }: { isSelected: boolean }) => {
return (
<GenericBoard
board_id="images"
onClick={handleClick}
isSelected={isSelected}
icon={FaImages}

View File

@@ -29,6 +29,7 @@ const BatchBoard = ({ isSelected }: { isSelected: boolean }) => {
return (
<GenericBoard
board_id="batch"
droppableData={droppableData}
onClick={handleBatchBoardClick}
isSelected={isSelected}

View File

@@ -1,27 +1,21 @@
import {
Collapse,
Flex,
Grid,
GridItem,
useDisclosure,
} from '@chakra-ui/react';
import { ButtonGroup, Collapse, Flex, Grid, GridItem } from '@chakra-ui/react';
import { createSelector } from '@reduxjs/toolkit';
import { stateSelector } from 'app/store/store';
import { useAppSelector } from 'app/store/storeHooks';
import { defaultSelectorOptions } from 'app/store/util/defaultMemoizeOptions';
import IAIIconButton from 'common/components/IAIIconButton';
import { AnimatePresence, motion } from 'framer-motion';
import { OverlayScrollbarsComponent } from 'overlayscrollbars-react';
import { memo, useState } from 'react';
import { memo, useCallback, useState } from 'react';
import { FaSearch } from 'react-icons/fa';
import { useListAllBoardsQuery } from 'services/api/endpoints/boards';
import { BoardDTO } from 'services/api/types';
import { useFeatureStatus } from '../../../../system/hooks/useFeatureStatus';
import DeleteBoardModal from '../DeleteBoardModal';
import AddBoardButton from './AddBoardButton';
import AllAssetsBoard from './AllAssetsBoard';
import AllImagesBoard from './AllImagesBoard';
import BatchBoard from './BatchBoard';
import BoardsSearch from './BoardsSearch';
import GalleryBoard from './GalleryBoard';
import NoBoardBoard from './NoBoardBoard';
import DeleteBoardModal from '../DeleteBoardModal';
import { BoardDTO } from 'services/api/types';
import SystemBoardButton from './SystemBoardButton';
const selector = createSelector(
[stateSelector],
@@ -48,7 +42,10 @@ const BoardsList = (props: Props) => {
)
: boards;
const [boardToDelete, setBoardToDelete] = useState<BoardDTO>();
const [searchMode, setSearchMode] = useState(false);
const [isSearching, setIsSearching] = useState(false);
const handleClickSearchIcon = useCallback(() => {
setIsSearching((v) => !v);
}, []);
return (
<>
@@ -64,7 +61,54 @@ const BoardsList = (props: Props) => {
}}
>
<Flex sx={{ gap: 2, alignItems: 'center' }}>
<BoardsSearch setSearchMode={setSearchMode} />
<AnimatePresence mode="popLayout">
{isSearching ? (
<motion.div
key="boards-search"
initial={{
opacity: 0,
}}
exit={{
opacity: 0,
}}
animate={{
opacity: 1,
transition: { duration: 0.1 },
}}
style={{ width: '100%' }}
>
<BoardsSearch setIsSearching={setIsSearching} />
</motion.div>
) : (
<motion.div
key="system-boards-select"
initial={{
opacity: 0,
}}
exit={{
opacity: 0,
}}
animate={{
opacity: 1,
transition: { duration: 0.1 },
}}
style={{ width: '100%' }}
>
<ButtonGroup sx={{ w: 'full', ps: 1.5 }} isAttached>
<SystemBoardButton board_id="images" />
<SystemBoardButton board_id="assets" />
<SystemBoardButton board_id="no_board" />
</ButtonGroup>
</motion.div>
)}
</AnimatePresence>
<IAIIconButton
aria-label="Search Boards"
size="sm"
isChecked={isSearching}
onClick={handleClickSearchIcon}
icon={<FaSearch />}
/>
<AddBoardButton />
</Flex>
<OverlayScrollbarsComponent
@@ -82,29 +126,10 @@ const BoardsList = (props: Props) => {
<Grid
className="list-container"
sx={{
gridTemplateRows: '6.5rem 6.5rem',
gridAutoFlow: 'column dense',
gridAutoColumns: '5rem',
gridTemplateColumns: `repeat(auto-fill, minmax(96px, 1fr));`,
maxH: 346,
}}
>
{!searchMode && (
<>
<GridItem sx={{ p: 1.5 }}>
<AllImagesBoard isSelected={selectedBoardId === 'images'} />
</GridItem>
<GridItem sx={{ p: 1.5 }}>
<AllAssetsBoard isSelected={selectedBoardId === 'assets'} />
</GridItem>
<GridItem sx={{ p: 1.5 }}>
<NoBoardBoard isSelected={selectedBoardId === 'no_board'} />
</GridItem>
{isBatchEnabled && (
<GridItem sx={{ p: 1.5 }}>
<BatchBoard isSelected={selectedBoardId === 'batch'} />
</GridItem>
)}
</>
)}
{filteredBoards &&
filteredBoards.map((board) => (
<GridItem key={board.board_id} sx={{ p: 1.5 }}>

View File

@@ -10,7 +10,14 @@ import { stateSelector } from 'app/store/store';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { defaultSelectorOptions } from 'app/store/util/defaultMemoizeOptions';
import { setBoardSearchText } from 'features/gallery/store/boardSlice';
import { memo } from 'react';
import {
ChangeEvent,
KeyboardEvent,
memo,
useCallback,
useEffect,
useRef,
} from 'react';
const selector = createSelector(
[stateSelector],
@@ -22,31 +29,60 @@ const selector = createSelector(
);
type Props = {
setSearchMode: (searchMode: boolean) => void;
setIsSearching: (isSearching: boolean) => void;
};
const BoardsSearch = (props: Props) => {
const { setSearchMode } = props;
const { setIsSearching } = props;
const dispatch = useAppDispatch();
const { searchText } = useAppSelector(selector);
const inputRef = useRef<HTMLInputElement>(null);
const handleBoardSearch = (searchTerm: string) => {
setSearchMode(searchTerm.length > 0);
dispatch(setBoardSearchText(searchTerm));
};
const clearBoardSearch = () => {
setSearchMode(false);
const handleBoardSearch = useCallback(
(searchTerm: string) => {
dispatch(setBoardSearchText(searchTerm));
},
[dispatch]
);
const clearBoardSearch = useCallback(() => {
dispatch(setBoardSearchText(''));
};
setIsSearching(false);
}, [dispatch, setIsSearching]);
const handleKeydown = useCallback(
(e: KeyboardEvent<HTMLInputElement>) => {
// exit search mode on escape
if (e.key === 'Escape') {
clearBoardSearch();
}
},
[clearBoardSearch]
);
const handleChange = useCallback(
(e: ChangeEvent<HTMLInputElement>) => {
handleBoardSearch(e.target.value);
},
[handleBoardSearch]
);
useEffect(() => {
// focus the search box on mount
if (!inputRef.current) {
return;
}
inputRef.current.focus();
}, []);
return (
<InputGroup>
<Input
ref={inputRef}
placeholder="Search Boards..."
value={searchText}
onChange={(e) => {
handleBoardSearch(e.target.value);
}}
onKeyDown={handleKeydown}
onChange={handleChange}
/>
{searchText && searchText.length && (
<InputRightElement>
@@ -55,7 +91,8 @@ const BoardsSearch = (props: Props) => {
size="xs"
variant="ghost"
aria-label="Clear Search"
icon={<CloseIcon boxSize={3} />}
opacity={0.5}
icon={<CloseIcon boxSize={2} />}
/>
</InputRightElement>
)}

View File

@@ -1,31 +1,39 @@
import {
Badge,
Box,
ChakraProps,
Editable,
EditableInput,
EditablePreview,
Flex,
Icon,
Image,
MenuItem,
MenuList,
Text,
useColorMode,
} from '@chakra-ui/react';
import { createSelector } from '@reduxjs/toolkit';
import { skipToken } from '@reduxjs/toolkit/dist/query';
import { MoveBoardDropData } from 'app/components/ImageDnd/typesafeDnd';
import { useAppDispatch } from 'app/store/storeHooks';
import { ContextMenu } from 'chakra-ui-contextmenu';
import { stateSelector } from 'app/store/store';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { defaultSelectorOptions } from 'app/store/util/defaultMemoizeOptions';
import IAIDroppable from 'common/components/IAIDroppable';
import { IAINoContentFallback } from 'common/components/IAIImageFallback';
import { boardIdSelected } from 'features/gallery/store/gallerySlice';
import { memo, useCallback, useMemo } from 'react';
import { FaTrash, FaUser } from 'react-icons/fa';
import { memo, useCallback, useMemo, useState } from 'react';
import { FaFolder } from 'react-icons/fa';
import { useUpdateBoardMutation } from 'services/api/endpoints/boards';
import { useGetImageDTOQuery } from 'services/api/endpoints/images';
import { BoardDTO } from 'services/api/types';
import { menuListMotionProps } from 'theme/components/menu';
import { mode } from 'theme/util/mode';
import BoardContextMenu from '../BoardContextMenu';
const AUTO_ADD_BADGE_STYLES: ChakraProps['sx'] = {
bg: 'accent.200',
color: 'blackAlpha.900',
};
const BASE_BADGE_STYLES: ChakraProps['sx'] = {
bg: 'base.500',
color: 'whiteAlpha.900',
};
interface GalleryBoardProps {
board: BoardDTO;
isSelected: boolean;
@@ -35,13 +43,30 @@ interface GalleryBoardProps {
const GalleryBoard = memo(
({ board, isSelected, setBoardToDelete }: GalleryBoardProps) => {
const dispatch = useAppDispatch();
const selector = useMemo(
() =>
createSelector(
stateSelector,
({ gallery }) => {
const isSelectedForAutoAdd =
board.board_id === gallery.autoAddBoardId;
return { isSelectedForAutoAdd };
},
defaultSelectorOptions
),
[board.board_id]
);
const { isSelectedForAutoAdd } = useAppSelector(selector);
const { currentData: coverImage } = useGetImageDTOQuery(
board.cover_image_name ?? skipToken
);
const { colorMode } = useColorMode();
const { board_name, board_id } = board;
const [localBoardName, setLocalBoardName] = useState(board_name);
const handleSelectBoard = useCallback(() => {
dispatch(boardIdSelected(board_id));
}, [board_id, dispatch]);
@@ -49,14 +74,6 @@ const GalleryBoard = memo(
const [updateBoard, { isLoading: isUpdateBoardLoading }] =
useUpdateBoardMutation();
const handleUpdateBoardName = (newBoardName: string) => {
updateBoard({ board_id, changes: { board_name: newBoardName } });
};
const handleDeleteBoard = useCallback(() => {
setBoardToDelete(board);
}, [board, setBoardToDelete]);
const droppableData: MoveBoardDropData = useMemo(
() => ({
id: board_id,
@@ -66,86 +83,116 @@ const GalleryBoard = memo(
[board_id]
);
const handleSubmit = useCallback(
(newBoardName: string) => {
if (!newBoardName) {
// empty strings are not allowed
setLocalBoardName(board_name);
return;
}
if (newBoardName === board_name) {
// don't update the board name if it hasn't changed
return;
}
updateBoard({ board_id, changes: { board_name: newBoardName } })
.unwrap()
.then((response) => {
// update local state
setLocalBoardName(response.board_name);
})
.catch(() => {
// revert on error
setLocalBoardName(board_name);
});
},
[board_id, board_name, updateBoard]
);
const handleChange = useCallback((newBoardName: string) => {
setLocalBoardName(newBoardName);
}, []);
return (
<Box sx={{ touchAction: 'none', height: 'full' }}>
<ContextMenu<HTMLDivElement>
menuProps={{ size: 'sm', isLazy: true }}
menuButtonProps={{
bg: 'transparent',
_hover: { bg: 'transparent' },
<Box
sx={{ w: 'full', h: 'full', touchAction: 'none', userSelect: 'none' }}
>
<Flex
sx={{
position: 'relative',
justifyContent: 'center',
alignItems: 'center',
aspectRatio: '1/1',
w: 'full',
h: 'full',
}}
renderMenu={() => (
<MenuList
sx={{ visibility: 'visible !important' }}
motionProps={menuListMotionProps}
>
{board.image_count > 0 && (
<>
{/* <MenuItem
isDisabled={!board.image_count}
icon={<FaImages />}
onClickCapture={handleAddBoardToBatch}
>
Add Board to Batch
</MenuItem> */}
</>
)}
<MenuItem
sx={{ color: 'error.600', _dark: { color: 'error.300' } }}
icon={<FaTrash />}
onClickCapture={handleDeleteBoard}
>
Delete Board
</MenuItem>
</MenuList>
)}
>
{(ref) => (
<Flex
key={board_id}
userSelect="none"
ref={ref}
sx={{
flexDir: 'column',
justifyContent: 'space-between',
alignItems: 'center',
cursor: 'pointer',
w: 'full',
h: 'full',
}}
>
<BoardContextMenu
board={board}
board_id={board_id}
setBoardToDelete={setBoardToDelete}
>
{(ref) => (
<Flex
ref={ref}
onClick={handleSelectBoard}
sx={{
w: 'full',
h: 'full',
position: 'relative',
justifyContent: 'center',
alignItems: 'center',
borderRadius: 'base',
w: 'full',
aspectRatio: '1/1',
overflow: 'hidden',
shadow: isSelected ? 'selected.light' : undefined,
_dark: { shadow: isSelected ? 'selected.dark' : undefined },
flexShrink: 0,
cursor: 'pointer',
}}
>
{board.cover_image_name && coverImage?.thumbnail_url && (
<Image src={coverImage?.thumbnail_url} draggable={false} />
)}
{!(board.cover_image_name && coverImage?.thumbnail_url) && (
<IAINoContentFallback
boxSize={8}
icon={FaUser}
sx={{
borderWidth: '2px',
borderStyle: 'solid',
borderColor: 'base.200',
_dark: {
borderColor: 'base.800',
},
}}
/>
)}
<Flex
sx={{
w: 'full',
h: 'full',
justifyContent: 'center',
alignItems: 'center',
borderRadius: 'base',
bg: 'base.200',
_dark: {
bg: 'base.800',
},
}}
>
{coverImage?.thumbnail_url ? (
<Image
src={coverImage?.thumbnail_url}
draggable={false}
sx={{
maxW: 'full',
maxH: 'full',
borderRadius: 'base',
borderBottomRadius: 'lg',
}}
/>
) : (
<Flex
sx={{
w: 'full',
h: 'full',
justifyContent: 'center',
alignItems: 'center',
}}
>
<Icon
boxSize={12}
as={FaFolder}
sx={{
mt: -3,
opacity: 0.7,
color: 'base.500',
_dark: {
color: 'base.500',
},
}}
/>
</Flex>
)}
</Flex>
<Flex
sx={{
position: 'absolute',
@@ -154,58 +201,97 @@ const GalleryBoard = memo(
p: 1,
}}
>
<Badge variant="solid">{board.image_count}</Badge>
<Badge
variant="solid"
sx={
isSelectedForAutoAdd
? AUTO_ADD_BADGE_STYLES
: BASE_BADGE_STYLES
}
>
{board.image_count}
</Badge>
</Flex>
<Box
className="selection-box"
sx={{
position: 'absolute',
top: 0,
insetInlineEnd: 0,
bottom: 0,
insetInlineStart: 0,
borderRadius: 'base',
transitionProperty: 'common',
transitionDuration: 'common',
shadow: isSelected ? 'selected.light' : undefined,
_dark: {
shadow: isSelected ? 'selected.dark' : undefined,
},
}}
/>
<Flex
sx={{
position: 'absolute',
bottom: 0,
left: 0,
p: 1,
justifyContent: 'center',
alignItems: 'center',
w: 'full',
maxW: 'full',
borderBottomRadius: 'base',
bg: isSelected ? 'accent.400' : 'base.500',
color: isSelected ? 'base.50' : 'base.100',
_dark: {
bg: isSelected ? 'accent.500' : 'base.600',
color: isSelected ? 'base.50' : 'base.100',
},
lineHeight: 'short',
fontSize: 'xs',
}}
>
<Editable
value={localBoardName}
isDisabled={isUpdateBoardLoading}
submitOnBlur={true}
onChange={handleChange}
onSubmit={handleSubmit}
sx={{
w: 'full',
}}
>
<EditablePreview
sx={{
p: 0,
fontWeight: isSelected ? 700 : 500,
textAlign: 'center',
overflow: 'hidden',
textOverflow: 'ellipsis',
}}
noOfLines={1}
/>
<EditableInput
sx={{
p: 0,
_focusVisible: {
p: 0,
textAlign: 'center',
// get rid of the edit border
boxShadow: 'none',
},
}}
/>
</Editable>
</Flex>
<IAIDroppable
data={droppableData}
dropLabel={<Text fontSize="md">Move</Text>}
/>
</Flex>
<Flex
sx={{
width: 'full',
height: 'full',
justifyContent: 'center',
alignItems: 'center',
}}
>
<Editable
defaultValue={board_name}
submitOnBlur={false}
onSubmit={(nextValue) => {
handleUpdateBoardName(nextValue);
}}
sx={{ maxW: 'full' }}
>
<EditablePreview
sx={{
color: isSelected
? mode('base.900', 'base.50')(colorMode)
: mode('base.700', 'base.200')(colorMode),
fontWeight: isSelected ? 600 : undefined,
fontSize: 'xs',
textAlign: 'center',
p: 0,
overflow: 'hidden',
textOverflow: 'ellipsis',
}}
noOfLines={1}
/>
<EditableInput
sx={{
color: mode('base.900', 'base.50')(colorMode),
fontSize: 'xs',
borderColor: mode('base.500', 'base.500')(colorMode),
p: 0,
outline: 0,
}}
/>
</Editable>
</Flex>
</Flex>
)}
</ContextMenu>
)}
</BoardContextMenu>
</Flex>
</Box>
);
}

View File

@@ -2,9 +2,12 @@ import { As, Badge, Flex } from '@chakra-ui/react';
import { TypesafeDroppableData } from 'app/components/ImageDnd/typesafeDnd';
import IAIDroppable from 'common/components/IAIDroppable';
import { IAINoContentFallback } from 'common/components/IAIImageFallback';
import { BoardId } from 'features/gallery/store/gallerySlice';
import { ReactNode } from 'react';
import BoardContextMenu from '../BoardContextMenu';
type GenericBoardProps = {
board_id: BoardId;
droppableData?: TypesafeDroppableData;
onClick: () => void;
isSelected: boolean;
@@ -14,7 +17,7 @@ type GenericBoardProps = {
badgeCount?: number;
};
const formatBadgeCount = (count: number) =>
export const formatBadgeCount = (count: number) =>
Intl.NumberFormat('en-US', {
notation: 'compact',
maximumFractionDigits: 1,
@@ -22,6 +25,7 @@ const formatBadgeCount = (count: number) =>
const GenericBoard = (props: GenericBoardProps) => {
const {
board_id,
droppableData,
onClick,
isSelected,
@@ -32,67 +36,72 @@ const GenericBoard = (props: GenericBoardProps) => {
} = props;
return (
<Flex
sx={{
flexDir: 'column',
justifyContent: 'space-between',
alignItems: 'center',
cursor: 'pointer',
w: 'full',
h: 'full',
borderRadius: 'base',
}}
>
<Flex
onClick={onClick}
sx={{
position: 'relative',
justifyContent: 'center',
alignItems: 'center',
borderRadius: 'base',
w: 'full',
aspectRatio: '1/1',
overflow: 'hidden',
shadow: isSelected ? 'selected.light' : undefined,
_dark: { shadow: isSelected ? 'selected.dark' : undefined },
flexShrink: 0,
}}
>
<IAINoContentFallback
boxSize={8}
icon={icon}
sx={{
border: '2px solid var(--invokeai-colors-base-200)',
_dark: { border: '2px solid var(--invokeai-colors-base-800)' },
}}
/>
<BoardContextMenu board_id={board_id}>
{(ref) => (
<Flex
ref={ref}
sx={{
position: 'absolute',
insetInlineEnd: 0,
top: 0,
p: 1,
flexDir: 'column',
justifyContent: 'space-between',
alignItems: 'center',
cursor: 'pointer',
w: 'full',
h: 'full',
borderRadius: 'base',
}}
>
{badgeCount !== undefined && (
<Badge variant="solid">{formatBadgeCount(badgeCount)}</Badge>
)}
<Flex
onClick={onClick}
sx={{
position: 'relative',
justifyContent: 'center',
alignItems: 'center',
borderRadius: 'base',
w: 'full',
aspectRatio: '1/1',
overflow: 'hidden',
shadow: isSelected ? 'selected.light' : undefined,
_dark: { shadow: isSelected ? 'selected.dark' : undefined },
flexShrink: 0,
}}
>
<IAINoContentFallback
boxSize={8}
icon={icon}
sx={{
border: '2px solid var(--invokeai-colors-base-200)',
_dark: { border: '2px solid var(--invokeai-colors-base-800)' },
}}
/>
<Flex
sx={{
position: 'absolute',
insetInlineEnd: 0,
top: 0,
p: 1,
}}
>
{badgeCount !== undefined && (
<Badge variant="solid">{formatBadgeCount(badgeCount)}</Badge>
)}
</Flex>
<IAIDroppable data={droppableData} dropLabel={dropLabel} />
</Flex>
<Flex
sx={{
h: 'full',
alignItems: 'center',
fontWeight: isSelected ? 600 : undefined,
fontSize: 'sm',
color: isSelected ? 'base.900' : 'base.700',
_dark: { color: isSelected ? 'base.50' : 'base.200' },
}}
>
{label}
</Flex>
</Flex>
<IAIDroppable data={droppableData} dropLabel={dropLabel} />
</Flex>
<Flex
sx={{
h: 'full',
alignItems: 'center',
fontWeight: isSelected ? 600 : undefined,
fontSize: 'xs',
color: isSelected ? 'base.900' : 'base.700',
_dark: { color: isSelected ? 'base.50' : 'base.200' },
}}
>
{label}
</Flex>
</Flex>
)}
</BoardContextMenu>
);
};
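As a quick reference, the formatBadgeCount helper exported near the top of this file wraps Intl.NumberFormat in compact notation. A minimal sketch of its expected output, assuming the truncated definition above closes with `}).format(count)` and that the import path matches this component's location (neither is shown in the diff):

import { formatBadgeCount } from 'features/gallery/components/Boards/GenericBoard';

// compact 'en-US' notation with at most one fraction digit
formatBadgeCount(7); // '7'
formatBadgeCount(1234); // '1.2K'
formatBadgeCount(2500000); // '2.5M'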

View File

@@ -39,6 +39,7 @@ const NoBoardBoard = ({ isSelected }: { isSelected: boolean }) => {
return (
<GenericBoard
board_id="no_board"
droppableData={droppableData}
dropLabel={<Text fontSize="md">Move</Text>}
onClick={handleClick}

View File

@@ -0,0 +1,53 @@
import { createSelector } from '@reduxjs/toolkit';
import { stateSelector } from 'app/store/store';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { defaultSelectorOptions } from 'app/store/util/defaultMemoizeOptions';
import IAIButton from 'common/components/IAIButton';
import { boardIdSelected } from 'features/gallery/store/gallerySlice';
import { memo, useCallback, useMemo } from 'react';
import { useBoardName } from 'services/api/hooks/useBoardName';
type Props = {
board_id: 'images' | 'assets' | 'no_board';
};
const SystemBoardButton = ({ board_id }: Props) => {
const dispatch = useAppDispatch();
const selector = useMemo(
() =>
createSelector(
[stateSelector],
({ gallery }) => {
const { selectedBoardId } = gallery;
return { isSelected: selectedBoardId === board_id };
},
defaultSelectorOptions
),
[board_id]
);
const { isSelected } = useAppSelector(selector);
const boardName = useBoardName(board_id);
const handleClick = useCallback(() => {
dispatch(boardIdSelected(board_id));
}, [board_id, dispatch]);
return (
<IAIButton
onClick={handleClick}
size="sm"
isChecked={isSelected}
sx={{
flexGrow: 1,
borderRadius: 'base',
}}
>
{boardName}
</IAIButton>
);
};
export default memo(SystemBoardButton);
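A hypothetical composition of the new SystemBoardButton, intended only to illustrate how the three system boards map onto buttons; the wrapper component and the import path are assumptions, since the diff does not show file locations:

import { Flex } from '@chakra-ui/react';
import SystemBoardButton from 'features/gallery/components/Boards/SystemBoardButton';

// each button highlights itself when its board is selected and
// dispatches boardIdSelected(board_id) on click
const SystemBoardButtons = () => (
  <Flex gap={2} w="full">
    <SystemBoardButton board_id="images" />
    <SystemBoardButton board_id="assets" />
    <SystemBoardButton board_id="no_board" />
  </Flex>
);

export default SystemBoardButtons;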

View File

@@ -0,0 +1,79 @@
import { MenuItem } from '@chakra-ui/react';
import { createSelector } from '@reduxjs/toolkit';
import { stateSelector } from 'app/store/store';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { defaultSelectorOptions } from 'app/store/util/defaultMemoizeOptions';
import { autoAddBoardIdChanged } from 'features/gallery/store/gallerySlice';
import { memo, useCallback, useMemo } from 'react';
import { FaMinus, FaPlus, FaTrash } from 'react-icons/fa';
import { BoardDTO } from 'services/api/types';
type Props = {
board: BoardDTO;
setBoardToDelete?: (board?: BoardDTO) => void;
};
const GalleryBoardContextMenuItems = ({ board, setBoardToDelete }: Props) => {
const dispatch = useAppDispatch();
const selector = useMemo(
() =>
createSelector(
stateSelector,
({ gallery }) => {
const isSelectedForAutoAdd =
board.board_id === gallery.autoAddBoardId;
return { isSelectedForAutoAdd };
},
defaultSelectorOptions
),
[board.board_id]
);
const { isSelectedForAutoAdd } = useAppSelector(selector);
const handleDelete = useCallback(() => {
if (!setBoardToDelete) {
return;
}
setBoardToDelete(board);
}, [board, setBoardToDelete]);
const handleToggleAutoAdd = useCallback(() => {
dispatch(
autoAddBoardIdChanged(isSelectedForAutoAdd ? null : board.board_id)
);
}, [board.board_id, dispatch, isSelectedForAutoAdd]);
return (
<>
{board.image_count > 0 && (
<>
{/* <MenuItem
isDisabled={!board.image_count}
icon={<FaImages />}
onClickCapture={handleAddBoardToBatch}
>
Add Board to Batch
</MenuItem> */}
</>
)}
<MenuItem
icon={isSelectedForAutoAdd ? <FaMinus /> : <FaPlus />}
onClickCapture={handleToggleAutoAdd}
>
{isSelectedForAutoAdd ? 'Disable Auto-Add' : 'Auto-Add to this Board'}
</MenuItem>
<MenuItem
sx={{ color: 'error.600', _dark: { color: 'error.300' } }}
icon={<FaTrash />}
onClickCapture={handleDelete}
>
Delete Board
</MenuItem>
</>
);
};
export default memo(GalleryBoardContextMenuItems);
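These menu items are designed to be dropped into a Chakra MenuList, much like the inline context menu rendered by GalleryBoard earlier in this diff. A rough sketch, with the import path and wrapper component assumed for illustration:

import { MenuList } from '@chakra-ui/react';
import GalleryBoardContextMenuItems from 'features/gallery/components/Boards/GalleryBoardContextMenuItems';
import { BoardDTO } from 'services/api/types';

type BoardMenuProps = {
  board: BoardDTO;
  setBoardToDelete?: (board?: BoardDTO) => void;
};

// renders the auto-add toggle and the delete entry for a user board
const BoardMenu = ({ board, setBoardToDelete }: BoardMenuProps) => (
  <MenuList>
    <GalleryBoardContextMenuItems
      board={board}
      setBoardToDelete={setBoardToDelete}
    />
  </MenuList>
);

export default BoardMenu;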

View File

@@ -0,0 +1,12 @@
import { BoardId } from 'features/gallery/store/gallerySlice';
import { memo } from 'react';
type Props = {
board_id: BoardId;
};
const SystemBoardContextMenuItems = ({ board_id }: Props) => {
return <></>;
};
export default memo(SystemBoardContextMenuItems);

View File

@@ -1,20 +1,19 @@
import { ChevronUpIcon } from '@chakra-ui/icons';
import { Button, Flex, Text } from '@chakra-ui/react';
import { Box, Button, Flex, Spacer, Text } from '@chakra-ui/react';
import { createSelector } from '@reduxjs/toolkit';
import { stateSelector } from 'app/store/store';
import { useAppSelector } from 'app/store/storeHooks';
import { defaultSelectorOptions } from 'app/store/util/defaultMemoizeOptions';
import { memo } from 'react';
import { useListAllBoardsQuery } from 'services/api/endpoints/boards';
import { memo, useMemo } from 'react';
import { useBoardName } from 'services/api/hooks/useBoardName';
import { useBoardTotal } from 'services/api/hooks/useBoardTotal';
const selector = createSelector(
[stateSelector],
(state) => {
const { selectedBoardId } = state.gallery;
return {
selectedBoardId,
};
return { selectedBoardId };
},
defaultSelectorOptions
);
@@ -27,25 +26,18 @@ type Props = {
const GalleryBoardName = (props: Props) => {
const { isOpen, onToggle } = props;
const { selectedBoardId } = useAppSelector(selector);
const { selectedBoardName } = useListAllBoardsQuery(undefined, {
selectFromResult: ({ data }) => {
let selectedBoardName = '';
if (selectedBoardId === 'images') {
selectedBoardName = 'All Images';
} else if (selectedBoardId === 'assets') {
selectedBoardName = 'All Assets';
} else if (selectedBoardId === 'no_board') {
selectedBoardName = 'No Board';
} else if (selectedBoardId === 'batch') {
selectedBoardName = 'Batch';
} else {
const selectedBoard = data?.find((b) => b.board_id === selectedBoardId);
selectedBoardName = selectedBoard?.board_name || 'Unknown Board';
}
const boardName = useBoardName(selectedBoardId);
const numOfBoardImages = useBoardTotal(selectedBoardId);
return { selectedBoardName };
},
});
const formattedBoardName = useMemo(() => {
if (!boardName || !numOfBoardImages) {
return '';
}
if (boardName.length > 20) {
return `${boardName.substring(0, 20)}... (${numOfBoardImages})`;
}
return `${boardName} (${numOfBoardImages})`;
}, [boardName, numOfBoardImages]);
return (
<Flex
@@ -54,6 +46,8 @@ const GalleryBoardName = (props: Props) => {
size="sm"
variant="ghost"
sx={{
position: 'relative',
gap: 2,
w: 'full',
justifyContent: 'center',
alignItems: 'center',
@@ -64,19 +58,22 @@ const GalleryBoardName = (props: Props) => {
},
}}
>
<Text
noOfLines={1}
sx={{
w: 'full',
fontWeight: 600,
color: 'base.800',
_dark: {
color: 'base.200',
},
}}
>
{selectedBoardName}
</Text>
<Spacer />
<Box position="relative">
<Text
noOfLines={1}
sx={{
fontWeight: 600,
color: 'base.800',
_dark: {
color: 'base.200',
},
}}
>
{formattedBoardName}
</Text>
</Box>
<Spacer />
<ChevronUpIcon
sx={{
transform: isOpen ? 'rotate(0deg)' : 'rotate(180deg)',
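The formattedBoardName memo above can be read as a small pure function; the sketch below restates it under a hypothetical name, keeping the 20-character cutoff and the "(count)" suffix from the diff:

// empty until both the name and the total have resolved, then
// "<name> (<count>)", truncating names longer than 20 characters
const formatBoardName = (
  boardName: string | undefined,
  numOfBoardImages: number | undefined
): string => {
  if (!boardName || !numOfBoardImages) {
    return '';
  }
  if (boardName.length > 20) {
    return `${boardName.substring(0, 20)}... (${numOfBoardImages})`;
  }
  return `${boardName} (${numOfBoardImages})`;
};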

View File

@@ -109,7 +109,7 @@ const GalleryDrawer = () => {
isResizable={true}
isOpen={shouldShowGallery}
onClose={handleCloseGallery}
minWidth={337}
minWidth={400}
>
<ImageGalleryContent />
</ResizableDrawer>

View File

@@ -1,19 +1,20 @@
import { Flex } from '@chakra-ui/react';
import { createSelector } from '@reduxjs/toolkit';
import { stateSelector } from 'app/store/store';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { defaultSelectorOptions } from 'app/store/util/defaultMemoizeOptions';
import IAIIconButton from 'common/components/IAIIconButton';
import IAIPopover from 'common/components/IAIPopover';
import IAISimpleCheckbox from 'common/components/IAISimpleCheckbox';
import IAISlider from 'common/components/IAISlider';
import { setGalleryImageMinimumWidth } from 'features/gallery/store/gallerySlice';
import {
setGalleryImageMinimumWidth,
shouldAutoSwitchChanged,
} from 'features/gallery/store/gallerySlice';
import { ChangeEvent } from 'react';
import { useTranslation } from 'react-i18next';
import { FaWrench } from 'react-icons/fa';
import { createSelector } from '@reduxjs/toolkit';
import { stateSelector } from 'app/store/store';
import { defaultSelectorOptions } from 'app/store/util/defaultMemoizeOptions';
import { shouldAutoSwitchChanged } from 'features/gallery/store/gallerySlice';
import BoardAutoAddSelect from './Boards/BoardAutoAddSelect';
const selector = createSelector(
[stateSelector],
@@ -50,7 +51,7 @@ const GallerySettingsPopover = () => {
/>
}
>
<Flex direction="column" gap={2}>
<Flex direction="column" gap={4}>
<IAISlider
value={galleryImageMinimumWidth}
onChange={handleChangeGalleryImageMinimumWidth}
@@ -68,6 +69,7 @@ const GallerySettingsPopover = () => {
dispatch(shouldAutoSwitchChanged(e.target.checked))
}
/>
<BoardAutoAddSelect />
</Flex>
</IAIPopover>
);

View File

@@ -1,4 +1,4 @@
import { Box } from '@chakra-ui/react';
import { Box, Flex } from '@chakra-ui/react';
import { createSelector } from '@reduxjs/toolkit';
import { TypesafeDraggableData } from 'app/components/ImageDnd/typesafeDnd';
import { stateSelector } from 'app/store/store';
@@ -86,38 +86,31 @@ const GalleryImage = (props: HoverableImageProps) => {
return (
<Box sx={{ w: 'full', h: 'full', touchAction: 'none' }}>
<ImageContextMenu imageDTO={imageDTO}>
{(ref) => (
<Box
position="relative"
key={imageName}
userSelect="none"
ref={ref}
sx={{
display: 'flex',
justifyContent: 'center',
alignItems: 'center',
aspectRatio: '1/1',
}}
>
<IAIDndImage
onClick={handleClick}
imageDTO={imageDTO}
draggableData={draggableData}
isSelected={isSelected}
minSize={0}
onClickReset={handleDelete}
imageSx={{ w: 'full', h: 'full' }}
isDropDisabled={true}
isUploadDisabled={true}
thumbnail={true}
// resetIcon={<FaTrash />}
// resetTooltip="Delete image"
// withResetIcon // removed bc it's too easy to accidentally delete images
/>
</Box>
)}
</ImageContextMenu>
<Flex
userSelect="none"
sx={{
position: 'relative',
justifyContent: 'center',
alignItems: 'center',
aspectRatio: '1/1',
}}
>
<IAIDndImage
onClick={handleClick}
imageDTO={imageDTO}
draggableData={draggableData}
isSelected={isSelected}
minSize={0}
onClickReset={handleDelete}
imageSx={{ w: 'full', h: 'full' }}
isDropDisabled={true}
isUploadDisabled={true}
thumbnail={true}
// resetIcon={<FaTrash />}
// resetTooltip="Delete image"
// withResetIcon // removed bc it's too easy to accidentally delete images
/>
</Flex>
</Box>
);
};

View File

@@ -25,6 +25,7 @@ export type BoardId =
type GalleryState = {
selection: string[];
shouldAutoSwitch: boolean;
autoAddBoardId: string | null;
galleryImageMinimumWidth: number;
selectedBoardId: BoardId;
batchImageNames: string[];
@@ -34,6 +35,7 @@ type GalleryState = {
export const initialGalleryState: GalleryState = {
selection: [],
shouldAutoSwitch: true,
autoAddBoardId: null,
galleryImageMinimumWidth: 96,
selectedBoardId: 'images',
batchImageNames: [],
@@ -123,14 +125,34 @@ export const gallerySlice = createSlice({
state.batchImageNames = [];
state.selection = [];
},
autoAddBoardIdChanged: (state, action: PayloadAction<string | null>) => {
state.autoAddBoardId = action.payload;
},
},
extraReducers: (builder) => {
builder.addMatcher(
boardsApi.endpoints.deleteBoard.matchFulfilled,
(state, action) => {
if (action.meta.arg.originalArgs === state.selectedBoardId) {
const deletedBoardId = action.meta.arg.originalArgs;
if (deletedBoardId === state.selectedBoardId) {
state.selectedBoardId = 'images';
}
if (deletedBoardId === state.autoAddBoardId) {
state.autoAddBoardId = null;
}
}
);
builder.addMatcher(
boardsApi.endpoints.listAllBoards.matchFulfilled,
(state, action) => {
const boards = action.payload;
if (!state.autoAddBoardId) {
return;
}
if (!boards.map((b) => b.board_id).includes(state.autoAddBoardId)) {
state.autoAddBoardId = null;
}
}
);
},
@@ -147,6 +169,7 @@ export const {
isBatchEnabledChanged,
imagesAddedToBatch,
imagesRemovedFromBatch,
autoAddBoardIdChanged,
} = gallerySlice.actions;
export default gallerySlice.reducer;
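A brief sketch of the new auto-add state in isolation, using the exports shown above; the board id is a placeholder, and the deleteBoard/listAllBoards matchers (which clear a stale autoAddBoardId) are only noted in the comments:

import {
  autoAddBoardIdChanged,
  gallerySlice,
  initialGalleryState,
} from 'features/gallery/store/gallerySlice';

// pick a board to receive newly generated images
let state = gallerySlice.reducer(
  initialGalleryState,
  autoAddBoardIdChanged('some-board-id') // placeholder id
);
// state.autoAddBoardId === 'some-board-id'

// passing null restores the default behaviour; the extraReducers above do the
// same automatically when the board is deleted or vanishes from the boards list
state = gallerySlice.reducer(state, autoAddBoardIdChanged(null));
// state.autoAddBoardId === null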

View File

@@ -5,10 +5,12 @@ import {
ModelInputFieldTemplate,
} from 'features/nodes/types/types';
import { Box, Flex } from '@chakra-ui/react';
import { SelectItem } from '@mantine/core';
import IAIMantineSearchableSelect from 'common/components/IAIMantineSearchableSelect';
import { MODEL_TYPE_MAP } from 'features/parameters/types/constants';
import { modelIdToMainModelParam } from 'features/parameters/util/modelIdToMainModelParam';
import SyncModelsButton from 'features/ui/components/tabs/ModelManager/subpanels/ModelManagerSettingsPanel/SyncModelsButton';
import { forEach } from 'lodash-es';
import { memo, useCallback, useMemo } from 'react';
import { useTranslation } from 'react-i18next';
@@ -88,18 +90,23 @@ const ModelInputFieldComponent = (
data={[]}
/>
) : (
<IAIMantineSearchableSelect
tooltip={selectedModel?.description}
label={
selectedModel?.base_model && MODEL_TYPE_MAP[selectedModel?.base_model]
}
value={selectedModel?.id}
placeholder={data.length > 0 ? 'Select a model' : 'No models available'}
data={data}
error={data.length === 0}
disabled={data.length === 0}
onChange={handleChangeModel}
/>
<Flex w="100%" alignItems="center" gap={2}>
<IAIMantineSearchableSelect
tooltip={selectedModel?.description}
label={
selectedModel?.base_model && MODEL_TYPE_MAP[selectedModel?.base_model]
}
value={selectedModel?.id}
placeholder={data.length > 0 ? 'Select a model' : 'No models available'}
data={data}
error={data.length === 0}
disabled={data.length === 0}
onChange={handleChangeModel}
/>
<Box mt={7}>
<SyncModelsButton iconMode />
</Box>
</Flex>
);
};

View File

@@ -48,6 +48,7 @@ export const addControlNetToLinearGraph = (
beginStepPct,
endStepPct,
controlMode,
resizeMode,
model,
processorType,
weight,
@@ -60,6 +61,7 @@ export const addControlNetToLinearGraph = (
begin_step_percent: beginStepPct,
end_step_percent: endStepPct,
control_mode: controlMode,
resize_mode: resizeMode,
control_model: model as ControlNetInvocation['control_model'],
control_weight: weight,
};
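For clarity, the hunk above maps the camelCase ControlNet state onto the snake_case node fields, now including resize_mode. A hedged restatement of that mapping as a standalone function, with the parameter shape approximated from the destructuring above:

// mirrors the field mapping in addControlNetToLinearGraph; resize_mode is the newly
// forwarded field ('just_resize' | 'crop_resize' | 'fill_resize' | 'just_resize_simple')
const toControlNetNodeFields = (controlNet: {
  beginStepPct: number;
  endStepPct: number;
  controlMode: string;
  resizeMode: string;
  model: string;
  weight: number;
}) => ({
  begin_step_percent: controlNet.beginStepPct,
  end_step_percent: controlNet.endStepPct,
  control_mode: controlNet.controlMode,
  resize_mode: controlNet.resizeMode,
  control_model: controlNet.model,
  control_weight: controlNet.weight,
});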

View File

@@ -4,6 +4,7 @@ import { useTranslation } from 'react-i18next';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import IAIMantineSearchableSelect from 'common/components/IAIMantineSearchableSelect';
import { Box, Flex } from '@chakra-ui/react';
import { SelectItem } from '@mantine/core';
import { createSelector } from '@reduxjs/toolkit';
import { stateSelector } from 'app/store/store';
@@ -11,6 +12,7 @@ import { defaultSelectorOptions } from 'app/store/util/defaultMemoizeOptions';
import { modelSelected } from 'features/parameters/store/actions';
import { MODEL_TYPE_MAP } from 'features/parameters/types/constants';
import { modelIdToMainModelParam } from 'features/parameters/util/modelIdToMainModelParam';
import SyncModelsButton from 'features/ui/components/tabs/ModelManager/subpanels/ModelManagerSettingsPanel/SyncModelsButton';
import { forEach } from 'lodash-es';
import { useGetMainModelsQuery } from 'services/api/endpoints/models';
@@ -84,16 +86,22 @@ const ParamMainModelSelect = () => {
data={[]}
/>
) : (
<IAIMantineSearchableSelect
tooltip={selectedModel?.description}
label={t('modelManager.model')}
value={selectedModel?.id}
placeholder={data.length > 0 ? 'Select a model' : 'No models available'}
data={data}
error={data.length === 0}
disabled={data.length === 0}
onChange={handleChangeModel}
/>
<Flex w="100%" alignItems="center" gap={2}>
<IAIMantineSearchableSelect
tooltip={selectedModel?.description}
label={t('modelManager.model')}
value={selectedModel?.id}
placeholder={data.length > 0 ? 'Select a model' : 'No models available'}
data={data}
error={data.length === 0}
disabled={data.length === 0}
onChange={handleChangeModel}
w="100%"
/>
<Box mt={7}>
<SyncModelsButton iconMode />
</Box>
</Flex>
);
};

View File

@@ -1,6 +1,9 @@
import { Box, ChakraProps } from '@chakra-ui/react';
import { Box, ChakraProps, Tooltip } from '@chakra-ui/react';
import { createSelector } from '@reduxjs/toolkit';
import { userInvoked } from 'app/store/actions';
import { stateSelector } from 'app/store/store';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { defaultSelectorOptions } from 'app/store/util/defaultMemoizeOptions';
import IAIButton, { IAIButtonProps } from 'common/components/IAIButton';
import IAIIconButton, {
IAIIconButtonProps,
@@ -8,11 +11,13 @@ import IAIIconButton, {
import { useIsReadyToInvoke } from 'common/hooks/useIsReadyToInvoke';
import { clampSymmetrySteps } from 'features/parameters/store/generationSlice';
import ProgressBar from 'features/system/components/ProgressBar';
import { selectIsBusy } from 'features/system/store/systemSelectors';
import { activeTabNameSelector } from 'features/ui/store/uiSelectors';
import { useCallback } from 'react';
import { useHotkeys } from 'react-hotkeys-hook';
import { useTranslation } from 'react-i18next';
import { FaPlay } from 'react-icons/fa';
import { useBoardName } from 'services/api/hooks/useBoardName';
const IN_PROGRESS_STYLES: ChakraProps['sx'] = {
_disabled: {
@@ -26,6 +31,20 @@ const IN_PROGRESS_STYLES: ChakraProps['sx'] = {
},
};
const selector = createSelector(
[stateSelector, activeTabNameSelector, selectIsBusy],
({ gallery }, activeTabName, isBusy) => {
const { autoAddBoardId } = gallery;
return {
isBusy,
autoAddBoardId,
activeTabName,
};
},
defaultSelectorOptions
);
interface InvokeButton
extends Omit<IAIButtonProps | IAIIconButtonProps, 'aria-label'> {
iconButton?: boolean;
@@ -35,8 +54,8 @@ export default function InvokeButton(props: InvokeButton) {
const { iconButton = false, ...rest } = props;
const dispatch = useAppDispatch();
const isReady = useIsReadyToInvoke();
const activeTabName = useAppSelector(activeTabNameSelector);
const isProcessing = useAppSelector((state) => state.system.isProcessing);
const { isBusy, autoAddBoardId, activeTabName } = useAppSelector(selector);
const autoAddBoardName = useBoardName(autoAddBoardId);
const handleInvoke = useCallback(() => {
dispatch(clampSymmetrySteps());
@@ -75,43 +94,52 @@ export default function InvokeButton(props: InvokeButton) {
<ProgressBar />
</Box>
)}
{iconButton ? (
<IAIIconButton
aria-label={t('parameters.invoke')}
type="submit"
icon={<FaPlay />}
isDisabled={!isReady || isProcessing}
onClick={handleInvoke}
tooltip={t('parameters.invoke')}
tooltipProps={{ placement: 'top' }}
colorScheme="accent"
id="invoke-button"
{...rest}
sx={{
w: 'full',
flexGrow: 1,
...(isProcessing ? IN_PROGRESS_STYLES : {}),
}}
/>
) : (
<IAIButton
aria-label={t('parameters.invoke')}
type="submit"
isDisabled={!isReady || isProcessing}
onClick={handleInvoke}
colorScheme="accent"
id="invoke-button"
{...rest}
sx={{
w: 'full',
flexGrow: 1,
fontWeight: 700,
...(isProcessing ? IN_PROGRESS_STYLES : {}),
}}
>
Invoke
</IAIButton>
)}
<Tooltip
placement="top"
hasArrow
openDelay={500}
label={
autoAddBoardId ? `Auto-Adding to ${autoAddBoardName}` : undefined
}
>
{iconButton ? (
<IAIIconButton
aria-label={t('parameters.invoke')}
type="submit"
icon={<FaPlay />}
isDisabled={!isReady || isBusy}
onClick={handleInvoke}
tooltip={t('parameters.invoke')}
tooltipProps={{ placement: 'top' }}
colorScheme="accent"
id="invoke-button"
{...rest}
sx={{
w: 'full',
flexGrow: 1,
...(isBusy ? IN_PROGRESS_STYLES : {}),
}}
/>
) : (
<IAIButton
aria-label={t('parameters.invoke')}
type="submit"
isDisabled={!isReady || isBusy}
onClick={handleInvoke}
colorScheme="accent"
id="invoke-button"
{...rest}
sx={{
w: 'full',
flexGrow: 1,
fontWeight: 700,
...(isBusy ? IN_PROGRESS_STYLES : {}),
}}
>
Invoke
</IAIButton>
)}
</Tooltip>
</Box>
</Box>
);

View File

@@ -1,33 +0,0 @@
import { createSelector } from '@reduxjs/toolkit';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import IAIIconButton from 'common/components/IAIIconButton';
import { postprocessingSelector } from 'features/parameters/store/postprocessingSelectors';
import { setShouldLoopback } from 'features/parameters/store/postprocessingSlice';
import { useTranslation } from 'react-i18next';
import { FaRecycle } from 'react-icons/fa';
const loopbackSelector = createSelector(
postprocessingSelector,
({ shouldLoopback }) => shouldLoopback
);
const LoopbackButton = () => {
const dispatch = useAppDispatch();
const shouldLoopback = useAppSelector(loopbackSelector);
const { t } = useTranslation();
return (
<IAIIconButton
aria-label={t('parameters.toggleLoopback')}
tooltip={t('parameters.toggleLoopback')}
isChecked={shouldLoopback}
icon={<FaRecycle />}
onClick={() => {
dispatch(setShouldLoopback(!shouldLoopback));
}}
/>
);
};
export default LoopbackButton;

View File

@@ -9,7 +9,6 @@ const ProcessButtons = () => {
return (
<Flex gap={2}>
<InvokeButton />
{/* {activeTabName === 'img2img' && <LoopbackButton />} */}
<CancelButton />
</Flex>
);

View File

@@ -0,0 +1,57 @@
import { Badge, BadgeProps, Flex, Text, TextProps } from '@chakra-ui/react';
import IAISwitch, { IAISwitchProps } from 'common/components/IAISwitch';
import { useTranslation } from 'react-i18next';
type SettingSwitchProps = IAISwitchProps & {
label: string;
useBadge?: boolean;
badgeLabel?: string;
textProps?: TextProps;
badgeProps?: BadgeProps;
};
export default function SettingSwitch(props: SettingSwitchProps) {
const { t } = useTranslation();
const {
label,
textProps,
useBadge = false,
badgeLabel = t('settings.experimental'),
badgeProps,
...rest
} = props;
return (
<Flex justifyContent="space-between" py={1}>
<Flex gap={2} alignItems="center">
<Text
sx={{
fontSize: 14,
_dark: {
color: 'base.300',
},
}}
{...textProps}
>
{label}
</Text>
{useBadge && (
<Badge
size="xs"
sx={{
px: 2,
color: 'base.700',
bg: 'accent.200',
_dark: { bg: 'accent.500', color: 'base.200' },
}}
{...badgeProps}
>
{badgeLabel}
</Badge>
)}
</Flex>
<IAISwitch {...rest} />
</Flex>
);
}

View File

@@ -1,60 +1,71 @@
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { useCallback, useEffect, useState } from 'react';
import { StyledFlex } from './SettingsModal';
import { Heading, Text } from '@chakra-ui/react';
import { useAppDispatch } from 'app/store/storeHooks';
import { useCallback, useEffect } from 'react';
import IAIButton from '../../../../common/components/IAIButton';
import { useClearIntermediatesMutation } from '../../../../services/api/endpoints/images';
import { addToast } from '../../store/systemSlice';
import {
useClearIntermediatesMutation,
useGetIntermediatesCountQuery,
} from '../../../../services/api/endpoints/images';
import { resetCanvas } from '../../../canvas/store/canvasSlice';
import { addToast } from '../../store/systemSlice';
import { StyledFlex } from './SettingsModal';
import { controlNetReset } from 'features/controlNet/store/controlNetSlice';
export default function SettingsClearIntermediates() {
const dispatch = useAppDispatch();
const [isDisabled, setIsDisabled] = useState(false);
const { data: intermediatesCount, refetch: updateIntermediatesCount } =
useGetIntermediatesCountQuery();
const [clearIntermediates, { isLoading: isLoadingClearIntermediates }] =
useClearIntermediatesMutation();
const handleClickClearIntermediates = useCallback(() => {
clearIntermediates({})
clearIntermediates()
.unwrap()
.then((response) => {
dispatch(controlNetReset());
dispatch(resetCanvas());
dispatch(
addToast({
title:
response === 0
? `No intermediates to clear`
: `Successfully cleared ${response} intermediates`,
title: `Cleared ${response} intermediates`,
status: 'info',
})
);
if (response < 100) {
setIsDisabled(true);
}
});
}, [clearIntermediates, dispatch]);
useEffect(() => {
// update the count on mount
updateIntermediatesCount();
}, [updateIntermediatesCount]);
const buttonText = intermediatesCount
? `Clear ${intermediatesCount} Intermediate${
intermediatesCount > 1 ? 's' : ''
}`
: 'No Intermediates to Clear';
return (
<StyledFlex>
<Heading size="sm">Clear Intermediates</Heading>
<IAIButton
colorScheme="error"
colorScheme="warning"
onClick={handleClickClearIntermediates}
isLoading={isLoadingClearIntermediates}
isDisabled={isDisabled}
isDisabled={!intermediatesCount}
>
{isDisabled ? 'Intermediates Cleared' : 'Clear 100 Intermediates'}
{buttonText}
</IAIButton>
<Text>
Will permanently delete first 100 intermediates found on disk and in
database
<Text fontWeight="bold">
Clearing intermediates will reset your Canvas and ControlNet state.
</Text>
<Text fontWeight="bold">This will also clear your canvas state.</Text>
<Text>
<Text variant="subtext">
Intermediate images are byproducts of generation, different from the
result images in the gallery. Purging intermediates will free disk
space. Your gallery images will not be deleted.
result images in the gallery. Clearing intermediates will free disk
space.
</Text>
<Text variant="subtext">Your gallery images will not be deleted.</Text>
</StyledFlex>
);
}

View File

@@ -11,13 +11,12 @@ import {
Text,
useDisclosure,
} from '@chakra-ui/react';
import { createSelector, current } from '@reduxjs/toolkit';
import { createSelector } from '@reduxjs/toolkit';
import { VALID_LOG_LEVELS } from 'app/logging/useLogger';
import { LOCALSTORAGE_KEYS, LOCALSTORAGE_PREFIX } from 'app/store/constants';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import IAIButton from 'common/components/IAIButton';
import IAIMantineSelect from 'common/components/IAIMantineSelect';
import IAISwitch from 'common/components/IAISwitch';
import { systemSelector } from 'features/system/store/systemSelectors';
import {
SystemState,
@@ -25,7 +24,6 @@ import {
setEnableImageDebugging,
setIsNodesEnabled,
setShouldConfirmOnDelete,
setShouldDisplayGuides,
shouldAntialiasProgressImageChanged,
shouldLogToConsoleChanged,
} from 'features/system/store/systemSlice';
@@ -48,15 +46,15 @@ import {
} from 'react';
import { useTranslation } from 'react-i18next';
import { LogLevelName } from 'roarr';
import SettingsSchedulers from './SettingsSchedulers';
import SettingSwitch from './SettingSwitch';
import SettingsClearIntermediates from './SettingsClearIntermediates';
import SettingsSchedulers from './SettingsSchedulers';
const selector = createSelector(
[systemSelector, uiSelector],
(system: SystemState, ui: UIState) => {
const {
shouldConfirmOnDelete,
shouldDisplayGuides,
enableImageDebugging,
consoleLogLevel,
shouldLogToConsole,
@@ -73,7 +71,6 @@ const selector = createSelector(
return {
shouldConfirmOnDelete,
shouldDisplayGuides,
enableImageDebugging,
shouldUseCanvasBetaLayout,
shouldUseSliders,
@@ -139,7 +136,6 @@ const SettingsModal = ({ children, config }: SettingsModalProps) => {
const {
shouldConfirmOnDelete,
shouldDisplayGuides,
enableImageDebugging,
shouldUseCanvasBetaLayout,
shouldUseSliders,
@@ -195,7 +191,7 @@ const SettingsModal = ({ children, config }: SettingsModalProps) => {
<Modal
isOpen={isSettingsModalOpen}
onClose={onSettingsModalClose}
size="xl"
size="2xl"
isCentered
>
<ModalOverlay />
@@ -206,7 +202,7 @@ const SettingsModal = ({ children, config }: SettingsModalProps) => {
<Flex sx={{ gap: 4, flexDirection: 'column' }}>
<StyledFlex>
<Heading size="sm">{t('settings.general')}</Heading>
<IAISwitch
<SettingSwitch
label={t('settings.confirmOnDelete')}
isChecked={shouldConfirmOnDelete}
onChange={(e: ChangeEvent<HTMLInputElement>) =>
@@ -214,7 +210,7 @@ const SettingsModal = ({ children, config }: SettingsModalProps) => {
}
/>
{shouldShowAdvancedOptionsSettings && (
<IAISwitch
<SettingSwitch
label={t('settings.showAdvancedOptions')}
isChecked={shouldShowAdvancedOptions}
onChange={(e: ChangeEvent<HTMLInputElement>) =>
@@ -231,37 +227,21 @@ const SettingsModal = ({ children, config }: SettingsModalProps) => {
<StyledFlex>
<Heading size="sm">{t('settings.ui')}</Heading>
<IAISwitch
label={t('settings.displayHelpIcons')}
isChecked={shouldDisplayGuides}
onChange={(e: ChangeEvent<HTMLInputElement>) =>
dispatch(setShouldDisplayGuides(e.target.checked))
}
/>
{shouldShowBetaLayout && (
<IAISwitch
label={t('settings.useCanvasBeta')}
isChecked={shouldUseCanvasBetaLayout}
onChange={(e: ChangeEvent<HTMLInputElement>) =>
dispatch(setShouldUseCanvasBetaLayout(e.target.checked))
}
/>
)}
<IAISwitch
<SettingSwitch
label={t('settings.useSlidersForAll')}
isChecked={shouldUseSliders}
onChange={(e: ChangeEvent<HTMLInputElement>) =>
dispatch(setShouldUseSliders(e.target.checked))
}
/>
<IAISwitch
<SettingSwitch
label={t('settings.showProgressInViewer')}
isChecked={shouldShowProgressInViewer}
onChange={(e: ChangeEvent<HTMLInputElement>) =>
dispatch(setShouldShowProgressInViewer(e.target.checked))
}
/>
<IAISwitch
<SettingSwitch
label={t('settings.antialiasProgressImages')}
isChecked={shouldAntialiasProgressImage}
onChange={(e: ChangeEvent<HTMLInputElement>) =>
@@ -270,9 +250,21 @@ const SettingsModal = ({ children, config }: SettingsModalProps) => {
)
}
/>
{shouldShowBetaLayout && (
<SettingSwitch
label={t('settings.alternateCanvasLayout')}
useBadge
badgeLabel={t('settings.beta')}
isChecked={shouldUseCanvasBetaLayout}
onChange={(e: ChangeEvent<HTMLInputElement>) =>
dispatch(setShouldUseCanvasBetaLayout(e.target.checked))
}
/>
)}
{shouldShowNodesToggle && (
<IAISwitch
label="Enable Nodes Editor (Experimental)"
<SettingSwitch
label={t('settings.enableNodesEditor')}
useBadge
isChecked={isNodesEnabled}
onChange={handleToggleNodes}
/>
@@ -282,7 +274,7 @@ const SettingsModal = ({ children, config }: SettingsModalProps) => {
{shouldShowDeveloperSettings && (
<StyledFlex>
<Heading size="sm">{t('settings.developer')}</Heading>
<IAISwitch
<SettingSwitch
label={t('settings.shouldLogToConsole')}
isChecked={shouldLogToConsole}
onChange={handleLogToConsoleChanged}
@@ -294,7 +286,7 @@ const SettingsModal = ({ children, config }: SettingsModalProps) => {
value={consoleLogLevel}
data={VALID_LOG_LEVELS.concat()}
/>
<IAISwitch
<SettingSwitch
label={t('settings.enableImageDebugging')}
isChecked={enableImageDebugging}
onChange={(e: ChangeEvent<HTMLInputElement>) =>
@@ -313,8 +305,12 @@ const SettingsModal = ({ children, config }: SettingsModalProps) => {
</IAIButton>
{shouldShowResetWebUiText && (
<>
<Text>{t('settings.resetWebUIDesc1')}</Text>
<Text>{t('settings.resetWebUIDesc2')}</Text>
<Text variant="subtext">
{t('settings.resetWebUIDesc1')}
</Text>
<Text variant="subtext">
{t('settings.resetWebUIDesc2')}
</Text>
</>
)}
</StyledFlex>

View File

@@ -38,7 +38,6 @@ export interface SystemState {
currentIteration: number;
totalIterations: number;
currentStatusHasSteps: boolean;
shouldDisplayGuides: boolean;
isCancelable: boolean;
enableImageDebugging: boolean;
toastQueue: UseToastOptions[];
@@ -84,14 +83,12 @@ export interface SystemState {
shouldAntialiasProgressImage: boolean;
language: keyof typeof LANGUAGES;
isUploading: boolean;
boardIdToAddTo?: string;
isNodesEnabled: boolean;
}
export const initialSystemState: SystemState = {
isConnected: false,
isProcessing: false,
shouldDisplayGuides: true,
isGFPGANAvailable: true,
isESRGANAvailable: true,
shouldConfirmOnDelete: true,
@@ -134,9 +131,6 @@ export const systemSlice = createSlice({
setShouldConfirmOnDelete: (state, action: PayloadAction<boolean>) => {
state.shouldConfirmOnDelete = action.payload;
},
setShouldDisplayGuides: (state, action: PayloadAction<boolean>) => {
state.shouldDisplayGuides = action.payload;
},
setIsCancelable: (state, action: PayloadAction<boolean>) => {
state.isCancelable = action.payload;
},
@@ -204,7 +198,6 @@ export const systemSlice = createSlice({
*/
builder.addCase(appSocketSubscribed, (state, action) => {
state.sessionId = action.payload.sessionId;
state.boardIdToAddTo = action.payload.boardId;
state.canceledSession = '';
});
@@ -213,7 +206,6 @@ export const systemSlice = createSlice({
*/
builder.addCase(appSocketUnsubscribed, (state) => {
state.sessionId = null;
state.boardIdToAddTo = undefined;
});
/**
@@ -390,7 +382,6 @@ export const {
setIsProcessing,
setShouldConfirmOnDelete,
setCurrentStatus,
setShouldDisplayGuides,
setIsCancelable,
setEnableImageDebugging,
addToast,

View File

@@ -105,7 +105,7 @@ const enabledTabsSelector = createSelector(
}
);
const MIN_GALLERY_WIDTH = 300;
const MIN_GALLERY_WIDTH = 350;
const DEFAULT_GALLERY_PCT = 20;
export const NO_GALLERY_TABS: InvokeTabName[] = ['modelManager'];

View File

@@ -4,8 +4,13 @@ import { ReactNode, memo } from 'react';
import ImportModelsPanel from './subpanels/ImportModelsPanel';
import MergeModelsPanel from './subpanels/MergeModelsPanel';
import ModelManagerPanel from './subpanels/ModelManagerPanel';
import ModelManagerSettingsPanel from './subpanels/ModelManagerSettingsPanel';
type ModelManagerTabName = 'modelManager' | 'importModels' | 'mergeModels';
type ModelManagerTabName =
| 'modelManager'
| 'importModels'
| 'mergeModels'
| 'settings';
type ModelManagerTabInfo = {
id: ModelManagerTabName;
@@ -29,6 +34,11 @@ const tabs: ModelManagerTabInfo[] = [
label: i18n.t('modelManager.mergeModels'),
content: <MergeModelsPanel />,
},
{
id: 'settings',
label: i18n.t('modelManager.settings'),
content: <ModelManagerSettingsPanel />,
},
];
const ModelManagerTab = () => {

View File

@@ -75,42 +75,49 @@ const ModelList = (props: ModelListProps) => {
labelPos="side"
/>
{['images', 'diffusers'].includes(modelFormatFilter) &&
filteredDiffusersModels.length > 0 && (
<StyledModelContainer>
<Flex sx={{ gap: 2, flexDir: 'column' }}>
<Text variant="subtext" fontSize="sm">
Diffusers
</Text>
{filteredDiffusersModels.map((model) => (
<ModelListItem
key={model.id}
model={model}
isSelected={selectedModelId === model.id}
setSelectedModelId={setSelectedModelId}
/>
))}
</Flex>
</StyledModelContainer>
)}
{['images', 'checkpoint'].includes(modelFormatFilter) &&
filteredCheckpointModels.length > 0 && (
<StyledModelContainer>
<Flex sx={{ gap: 2, flexDir: 'column' }}>
<Text variant="subtext" fontSize="sm">
Checkpoint
</Text>
{filteredCheckpointModels.map((model) => (
<ModelListItem
key={model.id}
model={model}
isSelected={selectedModelId === model.id}
setSelectedModelId={setSelectedModelId}
/>
))}
</Flex>
</StyledModelContainer>
)}
<Flex
flexDirection="column"
gap={4}
maxHeight={window.innerHeight - 280}
overflow="scroll"
>
{['images', 'diffusers'].includes(modelFormatFilter) &&
filteredDiffusersModels.length > 0 && (
<StyledModelContainer>
<Flex sx={{ gap: 2, flexDir: 'column' }}>
<Text variant="subtext" fontSize="sm">
Diffusers
</Text>
{filteredDiffusersModels.map((model) => (
<ModelListItem
key={model.id}
model={model}
isSelected={selectedModelId === model.id}
setSelectedModelId={setSelectedModelId}
/>
))}
</Flex>
</StyledModelContainer>
)}
{['images', 'checkpoint'].includes(modelFormatFilter) &&
filteredCheckpointModels.length > 0 && (
<StyledModelContainer>
<Flex sx={{ gap: 2, flexDir: 'column' }}>
<Text variant="subtext" fontSize="sm">
Checkpoints
</Text>
{filteredCheckpointModels.map((model) => (
<ModelListItem
key={model.id}
model={model}
isSelected={selectedModelId === model.id}
setSelectedModelId={setSelectedModelId}
/>
))}
</Flex>
</StyledModelContainer>
)}
</Flex>
</Flex>
</Flex>
);
@@ -146,8 +153,6 @@ const StyledModelContainer = (props: PropsWithChildren) => {
return (
<Flex
flexDirection="column"
maxHeight={window.innerHeight - 280}
overflow="scroll"
gap={4}
borderRadius={4}
p={4}

View File

@@ -98,16 +98,7 @@ export default function ModelListItem(props: ModelListItemProps) {
onClick={handleSelectModel}
>
<Flex gap={4} alignItems="center">
<Badge
minWidth={14}
p={1}
fontSize="sm"
sx={{
bg: 'base.350',
color: 'base.900',
_dark: { bg: 'base.500' },
}}
>
<Badge minWidth={14} p={0.5} fontSize="sm" variant="solid">
{
modelBaseTypeMap[
model.base_model as keyof typeof modelBaseTypeMap

View File

@@ -0,0 +1,10 @@
import { Flex } from '@chakra-ui/react';
import SyncModels from './ModelManagerSettingsPanel/SyncModels';
export default function ModelManagerSettingsPanel() {
return (
<Flex>
<SyncModels />
</Flex>
);
}

View File

@@ -0,0 +1,35 @@
import { Flex, Text } from '@chakra-ui/react';
import { useTranslation } from 'react-i18next';
import SyncModelsButton from './SyncModelsButton';
export default function SyncModels() {
const { t } = useTranslation();
return (
<Flex
sx={{
w: 'full',
p: 4,
borderRadius: 4,
gap: 4,
justifyContent: 'space-between',
alignItems: 'center',
bg: 'base.200',
_dark: { bg: 'base.800' },
}}
>
<Flex
sx={{
flexDirection: 'column',
gap: 2,
}}
>
<Text sx={{ fontWeight: 600 }}>{t('modelManager.syncModels')}</Text>
<Text fontSize="sm" sx={{ _dark: { color: 'base.400' } }}>
{t('modelManager.syncModelsDesc')}
</Text>
</Flex>
<SyncModelsButton />
</Flex>
);
}

View File

@@ -0,0 +1,66 @@
import { makeToast } from 'app/components/Toaster';
import { useAppDispatch } from 'app/store/storeHooks';
import IAIButton from 'common/components/IAIButton';
import IAIIconButton from 'common/components/IAIIconButton';
import { addToast } from 'features/system/store/systemSlice';
import { useTranslation } from 'react-i18next';
import { FaSync } from 'react-icons/fa';
import { useSyncModelsMutation } from 'services/api/endpoints/models';
type SyncModelsButtonProps = {
iconMode?: boolean;
};
export default function SyncModelsButton(props: SyncModelsButtonProps) {
const { iconMode = false } = props;
const dispatch = useAppDispatch();
const { t } = useTranslation();
const [syncModels, { isLoading }] = useSyncModelsMutation();
const syncModelsHandler = () => {
syncModels()
.unwrap()
.then((_) => {
dispatch(
addToast(
makeToast({
title: `${t('modelManager.modelsSynced')}`,
status: 'success',
})
)
);
})
.catch((error) => {
if (error) {
dispatch(
addToast(
makeToast({
title: `${t('modelManager.modelSyncFailed')}`,
status: 'error',
})
)
);
}
});
};
return !iconMode ? (
<IAIButton
isLoading={isLoading}
onClick={syncModelsHandler}
minW="max-content"
>
Sync Models
</IAIButton>
) : (
<IAIIconButton
icon={<FaSync />}
tooltip={t('modelManager.syncModels')}
aria-label={t('modelManager.syncModels')}
isLoading={isLoading}
onClick={syncModelsHandler}
size="sm"
/>
);
}

View File

@@ -3,7 +3,7 @@ import { memo } from 'react';
import { PanelResizeHandle } from 'react-resizable-panels';
import { mode } from 'theme/util/mode';
type ResizeHandleProps = FlexProps & {
type ResizeHandleProps = Omit<FlexProps, 'direction'> & {
direction?: 'horizontal' | 'vertical';
};

View File

@@ -127,6 +127,13 @@ export const imagesApi = api.injectEndpoints({
// 24 hours - reducing this to a few minutes would reduce memory usage.
keepUnusedDataFor: 86400,
}),
getIntermediatesCount: build.query<number, void>({
query: () => ({ url: getListImagesUrl({ is_intermediate: true }) }),
providesTags: ['IntermediatesCount'],
transformResponse: (response: OffsetPaginatedResults_ImageDTO_) => {
return response.total;
},
}),
getImageDTO: build.query<ImageDTO, string>({
query: (image_name) => ({ url: `images/${image_name}` }),
providesTags: (result, error, arg) => {
@@ -148,8 +155,9 @@ export const imagesApi = api.injectEndpoints({
},
keepUnusedDataFor: 86400, // 24 hours
}),
clearIntermediates: build.mutation({
clearIntermediates: build.mutation<number, void>({
query: () => ({ url: `images/clear-intermediates`, method: 'POST' }),
invalidatesTags: ['IntermediatesCount'],
}),
deleteImage: build.mutation<void, ImageDTO>({
query: ({ image_name }) => ({
@@ -161,10 +169,11 @@ export const imagesApi = api.injectEndpoints({
],
async onQueryStarted(imageDTO, { dispatch, queryFulfilled }) {
/**
* Cache changes for deleteImage:
* - Remove from "All Images"
* - Remove from image's `board_id` if it has one, or "No Board" if not
* - Remove from "Batch"
* Cache changes for `deleteImage`:
* - *remove* from "All Images" / "All Assets"
* - IF it has a board:
* - THEN *remove* from its own board
* - ELSE *remove* from "No Board"
*/
const { image_name, board_id, image_category } = imageDTO;
@@ -173,22 +182,23 @@ export const imagesApi = api.injectEndpoints({
// That means constructing the possible query args that are serialized into the cache key...
const removeFromCacheKeys: ListImagesArgs[] = [];
// determine `categories`, i.e. do we update "All Images" or "All Assets"
const categories = IMAGE_CATEGORIES.includes(image_category)
? IMAGE_CATEGORIES
: ASSETS_CATEGORIES;
// All Images board (e.g. no board)
// remove from "All Images"
removeFromCacheKeys.push({ categories });
// Board specific
if (board_id) {
// remove from its own board
removeFromCacheKeys.push({ board_id });
} else {
// TODO: No Board
// remove from "No Board"
removeFromCacheKeys.push({ board_id: 'none' });
}
// TODO: Batch
const patches: PatchCollection[] = [];
removeFromCacheKeys.forEach((cacheKey) => {
patches.push(
@@ -232,32 +242,37 @@ export const imagesApi = api.injectEndpoints({
{ imageDTO: oldImageDTO, changes: _changes },
{ dispatch, queryFulfilled, getState }
) {
// TODO: Should we handle changes to boards via this mutation? Seems reasonable...
// let's be extra-sure we do not accidentally change categories
const changes = omit(_changes, 'image_category');
/**
* Cache changes for `updateImage`:
* - Update the ImageDTO
* - Update the image in "All Images" board:
* - IF it is in the date range represented by the cache:
* - add the image IF it is not already in the cache & update the total
* - ELSE update the image IF it is already in the cache
* Cache changes for "updateImage":
* - *update* "getImageDTO" cache
* - for "All Images" || "All Assets":
* - IF it is not already in the cache
* - THEN *add* it to "All Images" / "All Assets" and update the total
* - ELSE *update* it
* - IF the image has a board:
* - Update the image in its own board
* - ELSE Update the image in the "No Board" board (TODO)
* - THEN *update* its own board
* - ELSE *update* the "No Board" board
*/
const patches: PatchCollection[] = [];
const { image_name, board_id, image_category } = oldImageDTO;
const { image_name, board_id, image_category, is_intermediate } =
oldImageDTO;
const isChangingFromIntermediate = changes.is_intermediate === false;
// do not add intermediates to gallery cache
if (is_intermediate && !isChangingFromIntermediate) {
return;
}
// determine `categories`, i.e. do we update "All Images" or "All Assets"
const categories = IMAGE_CATEGORIES.includes(image_category)
? IMAGE_CATEGORIES
: ASSETS_CATEGORIES;
// TODO: No Board
// Update `getImageDTO` cache
// update `getImageDTO` cache
patches.push(
dispatch(
imagesApi.util.updateQueryData(
@@ -273,9 +288,13 @@ export const imagesApi = api.injectEndpoints({
// Update the "All Image" or "All Assets" board
const queryArgsToUpdate: ListImagesArgs[] = [{ categories }];
// IF the image has a board:
if (board_id) {
// We also need to update the user board
// THEN update its own board
queryArgsToUpdate.push({ board_id });
} else {
// ELSE update the "No Board" board
queryArgsToUpdate.push({ board_id: 'none' });
}
queryArgsToUpdate.forEach((queryArg) => {
@@ -363,12 +382,12 @@ export const imagesApi = api.injectEndpoints({
return;
}
// Add the image to the "All Images" / "All Assets" board
const queryArg = {
categories: IMAGE_CATEGORIES.includes(image_category)
? IMAGE_CATEGORIES
: ASSETS_CATEGORIES,
};
// determine `categories`, i.e. do we update "All Images" or "All Assets"
const categories = IMAGE_CATEGORIES.includes(image_category)
? IMAGE_CATEGORIES
: ASSETS_CATEGORIES;
const queryArg = { categories };
dispatch(
imagesApi.util.updateQueryData('listImages', queryArg, (draft) => {
@@ -402,16 +421,14 @@ export const imagesApi = api.injectEndpoints({
{ dispatch, queryFulfilled, getState }
) {
/**
* Cache changes for addImageToBoard:
* - Remove from "No Board"
* - Remove from `old_board_id` if it has one
* - Add to new `board_id`
* - IF the image's `created_at` is within the range of the board's cached images
* Cache changes for `addImageToBoard`:
* - *update* the `getImageDTO` cache
* - *remove* from "No Board"
* - IF the image has an old `board_id`:
* - THEN *remove* from its old `board_id`
* - IF the image's `created_at` is within the range of the board's cached images
* - OR the board cache has length of 0 or 1
* - Update the `total` for each board whose cache is updated
* - Update the ImageDTO
*
* TODO: maybe total should just be updated in the boards endpoints?
* - THEN *add* it to new `board_id`
*/
const { image_name, board_id: old_board_id } = oldImageDTO;
@@ -419,13 +436,10 @@ export const imagesApi = api.injectEndpoints({
// Figure out the `listImages` caches that we need to update
const removeFromQueryArgs: ListImagesArgs[] = [];
// TODO: No Board
// TODO: Batch
// Remove from No Board
// remove from "No Board"
removeFromQueryArgs.push({ board_id: 'none' });
// Remove from old board
// remove from old board
if (old_board_id) {
removeFromQueryArgs.push({ board_id: old_board_id });
}
@@ -526,17 +540,15 @@ export const imagesApi = api.injectEndpoints({
{ dispatch, queryFulfilled, getState }
) {
/**
* Cache changes for removeImageFromBoard:
* - Add to "No Board"
* - IF the image's `created_at` is within the range of the board's cached images
* - Remove from `old_board_id`
* - Update the ImageDTO
* Cache changes for `removeImageFromBoard`:
* - *update* `getImageDTO`
* - IF the image's `created_at` is within the range of the board's cached images
* - THEN *add* to "No Board"
* - *remove* from `old_board_id`
*/
const { image_name, board_id: old_board_id } = imageDTO;
// TODO: Batch
const patches: PatchCollection[] = [];
// Updated imageDTO with new board_id
@@ -617,6 +629,7 @@ export const imagesApi = api.injectEndpoints({
});
export const {
useGetIntermediatesCountQuery,
useListImagesQuery,
useLazyListImagesQuery,
useGetImageDTOQuery,

View File

@@ -93,6 +93,9 @@ type AddMainModelArg = {
type AddMainModelResponse =
paths['/api/v1/models/add']['post']['responses']['201']['content']['application/json'];
type SyncModelsResponse =
paths['/api/v1/models/sync']['post']['responses']['201']['content']['application/json'];
export type SearchFolderResponse =
paths['/api/v1/models/search']['get']['responses']['200']['content']['application/json'];
@@ -244,6 +247,15 @@ export const modelsApi = api.injectEndpoints({
},
invalidatesTags: [{ type: 'MainModel', id: LIST_TAG }],
}),
syncModels: build.mutation<SyncModelsResponse, void>({
query: () => {
return {
url: `models/sync`,
method: 'POST',
};
},
invalidatesTags: [{ type: 'MainModel', id: LIST_TAG }],
}),
getLoRAModels: build.query<EntityState<LoRAModelConfigEntity>, void>({
query: () => ({ url: 'models/', params: { model_type: 'lora' } }),
providesTags: (result, error, arg) => {
@@ -423,6 +435,7 @@ export const {
useAddMainModelsMutation,
useConvertMainModelsMutation,
useMergeMainModelsMutation,
useSyncModelsMutation,
useGetModelsInFolderQuery,
useGetCheckpointConfigsQuery,
} = modelsApi;

View File

@@ -0,0 +1,26 @@
import { BoardId } from 'features/gallery/store/gallerySlice';
import { useListAllBoardsQuery } from '../endpoints/boards';
export const useBoardName = (board_id: BoardId | null | undefined) => {
const { boardName } = useListAllBoardsQuery(undefined, {
selectFromResult: ({ data }) => {
let boardName = '';
if (board_id === 'images') {
boardName = 'Images';
} else if (board_id === 'assets') {
boardName = 'Assets';
} else if (board_id === 'no_board') {
boardName = 'No Board';
} else if (board_id === 'batch') {
boardName = 'Batch';
} else {
const selectedBoard = data?.find((b) => b.board_id === board_id);
boardName = selectedBoard?.board_name || 'Unknown Board';
}
return { boardName };
},
});
return boardName;
};
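A minimal usage sketch for the new useBoardName hook, mirroring how InvokeButton and SystemBoardButton consume it earlier in this diff; the component itself is illustrative only:

import { Text } from '@chakra-ui/react';
import { BoardId } from 'features/gallery/store/gallerySlice';
import { useBoardName } from 'services/api/hooks/useBoardName';

// system ids resolve to fixed labels; anything else is looked up in the
// boards list, falling back to 'Unknown Board'
const BoardLabel = ({ board_id }: { board_id: BoardId }) => {
  const boardName = useBoardName(board_id);
  return <Text noOfLines={1}>{boardName}</Text>;
};

export default BoardLabel;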

View File

@@ -0,0 +1,53 @@
import { skipToken } from '@reduxjs/toolkit/dist/query';
import {
ASSETS_CATEGORIES,
BoardId,
IMAGE_CATEGORIES,
INITIAL_IMAGE_LIMIT,
} from 'features/gallery/store/gallerySlice';
import { useMemo } from 'react';
import { ListImagesArgs, useListImagesQuery } from '../endpoints/images';
const baseQueryArg: ListImagesArgs = {
offset: 0,
limit: INITIAL_IMAGE_LIMIT,
is_intermediate: false,
};
const imagesQueryArg: ListImagesArgs = {
categories: IMAGE_CATEGORIES,
...baseQueryArg,
};
const assetsQueryArg: ListImagesArgs = {
categories: ASSETS_CATEGORIES,
...baseQueryArg,
};
const noBoardQueryArg: ListImagesArgs = {
board_id: 'none',
...baseQueryArg,
};
export const useBoardTotal = (board_id: BoardId | null | undefined) => {
const queryArg = useMemo(() => {
if (!board_id) {
return;
}
if (board_id === 'images') {
return imagesQueryArg;
} else if (board_id === 'assets') {
return assetsQueryArg;
} else if (board_id === 'no_board') {
return noBoardQueryArg;
} else {
return { board_id, ...baseQueryArg };
}
}, [board_id]);
const { total } = useListImagesQuery(queryArg ?? skipToken, {
selectFromResult: ({ currentData }) => ({ total: currentData?.total }),
});
return total;
};
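A companion sketch for useBoardTotal, which GalleryBoardName uses above to append an image count to the board name; again, the component is purely illustrative:

import { Badge } from '@chakra-ui/react';
import { BoardId } from 'features/gallery/store/gallerySlice';
import { useBoardTotal } from 'services/api/hooks/useBoardTotal';

// total is undefined until the board's first page of images is in the cache
const BoardTotalBadge = ({ board_id }: { board_id: BoardId }) => {
  const total = useBoardTotal(board_id);
  if (total === undefined) {
    return null;
  }
  return <Badge variant="solid">{total}</Badge>;
};

export default BoardTotalBadge;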

View File

@@ -126,7 +126,7 @@ export type paths = {
* @description Call after making changes to models.yaml, autoimport directories or models directory to synchronize
* in-memory data structures with disk data structures.
*/
get: operations["sync_to_config"];
post: operations["sync_to_config"];
};
"/api/v1/models/merge/{base_model}": {
/**
@@ -167,7 +167,7 @@ export type paths = {
"/api/v1/images/clear-intermediates": {
/**
* Clear Intermediates
* @description Clears first 100 intermediates
* @description Clears all intermediates
*/
post: operations["clear_intermediates"];
};
@@ -228,6 +228,13 @@ export type paths = {
*/
patch: operations["update_board"];
};
"/api/v1/boards/{board_id}/image_names": {
/**
* List All Board Image Names
* @description Gets a list of images for a board
*/
get: operations["list_all_board_image_names"];
};
"/api/v1/board_images/": {
/**
* Create Board Image
@@ -240,13 +247,6 @@ export type paths = {
*/
delete: operations["remove_board_image"];
};
"/api/v1/board_images/{board_id}": {
/**
* List Board Images
* @description Gets a list of images for a board
*/
get: operations["list_board_images"];
};
"/api/v1/app/version": {
/** Get Version */
get: operations["app_version"];
@@ -255,6 +255,18 @@ export type paths = {
/** Get Config */
get: operations["get_config"];
};
"/api/v1/app/logging": {
/**
* Get Log Level
* @description Returns the log level
*/
get: operations["get_log_level"];
/**
* Set Log Level
* @description Sets the log verbosity level
*/
post: operations["set_log_level"];
};
};
export type webhooks = Record<string, never>;
@@ -800,6 +812,13 @@ export type components = {
* @enum {string}
*/
control_mode?: "balanced" | "more_prompt" | "more_control" | "unbalanced";
/**
* Resize Mode
* @description The resize mode to use
* @default just_resize
* @enum {string}
*/
resize_mode?: "just_resize" | "crop_resize" | "fill_resize" | "just_resize_simple";
};
/**
* ControlNetInvocation
@@ -859,6 +878,13 @@ export type components = {
* @enum {string}
*/
control_mode?: "balanced" | "more_prompt" | "more_control" | "unbalanced";
/**
* Resize Mode
* @description The resize mode used
* @default just_resize
* @enum {string}
*/
resize_mode?: "just_resize" | "crop_resize" | "fill_resize" | "just_resize_simple";
};
/** ControlNetModelConfig */
ControlNetModelConfig: {
@@ -1037,6 +1063,24 @@ export type components = {
*/
mask?: components["schemas"]["ImageField"];
};
/** DeleteBoardResult */
DeleteBoardResult: {
/**
* Board Id
* @description The id of the board that was deleted.
*/
board_id: string;
/**
* Deleted Board Images
* @description The image names of the board-images relationships that were deleted.
*/
deleted_board_images: (string)[];
/**
* Deleted Images
* @description The names of the images that were deleted.
*/
deleted_images: (string)[];
};
/**
* DivideInvocation
* @description Divides two numbers
@@ -2878,6 +2922,12 @@ export type components = {
*/
image?: components["schemas"]["ImageField"];
};
/**
* LogLevel
* @description An enumeration.
* @enum {integer}
*/
LogLevel: 0 | 10 | 20 | 30 | 40 | 50;
/** LoraInfo */
LoraInfo: {
/**
@@ -5305,18 +5355,18 @@ export type components = {
*/
image?: components["schemas"]["ImageField"];
};
/**
* StableDiffusion1ModelFormat
* @description An enumeration.
* @enum {string}
*/
StableDiffusion1ModelFormat: "checkpoint" | "diffusers";
/**
* StableDiffusionXLModelFormat
* @description An enumeration.
* @enum {string}
*/
StableDiffusionXLModelFormat: "checkpoint" | "diffusers";
/**
* StableDiffusion1ModelFormat
* @description An enumeration.
* @enum {string}
*/
StableDiffusion1ModelFormat: "checkpoint" | "diffusers";
/**
* StableDiffusion2ModelFormat
* @description An enumeration.
@@ -5909,7 +5959,7 @@ export type operations = {
/** @description synchronization successful */
201: {
content: {
"application/json": unknown;
"application/json": boolean;
};
};
};
@@ -5956,13 +6006,13 @@ export type operations = {
list_image_dtos: {
parameters: {
query?: {
/** @description The origin of images to list */
/** @description The origin of images to list. */
image_origin?: components["schemas"]["ResourceOrigin"];
/** @description The categories of image to include */
/** @description The categories of image to include. */
categories?: (components["schemas"]["ImageCategory"])[];
/** @description Whether to list intermediate images */
/** @description Whether to list intermediate images. */
is_intermediate?: boolean;
/** @description The board id to filter by */
/** @description The board id to filter by. Use 'none' to find images without a board. */
board_id?: string;
/** @description The page offset */
offset?: number;
@@ -6107,7 +6157,7 @@ export type operations = {
};
/**
* Clear Intermediates
* @description Clears first 100 intermediates
* @description Clears all intermediates
*/
clear_intermediates: {
responses: {
@@ -6328,7 +6378,7 @@ export type operations = {
/** @description Successful Response */
200: {
content: {
"application/json": unknown;
"application/json": components["schemas"]["DeleteBoardResult"];
};
};
/** @description Validation Error */
@@ -6370,6 +6420,32 @@ export type operations = {
};
};
};
/**
* List All Board Image Names
* @description Gets a list of images for a board
*/
list_all_board_image_names: {
parameters: {
path: {
/** @description The id of the board */
board_id: string;
};
};
responses: {
/** @description Successful Response */
200: {
content: {
"application/json": (string)[];
};
};
/** @description Validation Error */
422: {
content: {
"application/json": components["schemas"]["HTTPValidationError"];
};
};
};
};
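For illustration, a minimal client sketch of this new endpoint, which replaces the paginated `list_board_images` operation removed further down. The URL is an assumption (the route path is not part of this hunk); the response shape, a plain array of image names, comes from the schema above.

```typescript
// Sketch: fetch every image name on a board in one call.
// The URL is a placeholder assumption for illustration.
async function listAllBoardImageNames(boardId: string): Promise<string[]> {
  const res = await fetch(`/api/v1/boards/${boardId}/image_names`);
  if (!res.ok) {
    throw new Error(`Failed to list board image names: ${res.status}`);
  }
  return (await res.json()) as string[];
}
```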
/**
* Create Board Image
* @description Creates a board_image
@@ -6420,38 +6496,6 @@ export type operations = {
};
};
};
/**
* List Board Images
* @description Gets a list of images for a board
*/
list_board_images: {
parameters: {
query?: {
/** @description The page offset */
offset?: number;
/** @description The number of boards per page */
limit?: number;
};
path: {
/** @description The id of the board */
board_id: string;
};
};
responses: {
/** @description Successful Response */
200: {
content: {
"application/json": components["schemas"]["OffsetPaginatedResults_ImageDTO_"];
};
};
/** @description Validation Error */
422: {
content: {
"application/json": components["schemas"]["HTTPValidationError"];
};
};
};
};
/** Get Version */
app_version: {
responses: {
@@ -6474,4 +6518,43 @@ export type operations = {
};
};
};
/**
* Get Log Level
* @description Returns the log level
*/
get_log_level: {
responses: {
/** @description The operation was successful */
200: {
content: {
"application/json": components["schemas"]["LogLevel"];
};
};
};
};
/**
* Set Log Level
* @description Sets the log verbosity level
*/
set_log_level: {
requestBody: {
content: {
"application/json": components["schemas"]["LogLevel"];
};
};
responses: {
/** @description The operation was successful */
200: {
content: {
"application/json": components["schemas"]["LogLevel"];
};
};
/** @description Validation Error */
422: {
content: {
"application/json": components["schemas"]["HTTPValidationError"];
};
};
};
};
};
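For illustration, a minimal sketch of the new log-level round trip. The URL is a placeholder assumption (the router prefix is not shown here); per the schema, both operations exchange a bare `LogLevel` integer as JSON.

```typescript
// Sketch: read and update the server-side console log level.
// './types' and the '/api/v1/app/logging' URL are placeholder assumptions.
import type { components } from './types';

type LogLevel = components['schemas']['LogLevel'];

async function getLogLevel(): Promise<LogLevel> {
  const res = await fetch('/api/v1/app/logging');
  return (await res.json()) as LogLevel;
}

async function setLogLevel(level: LogLevel): Promise<LogLevel> {
  const res = await fetch('/api/v1/app/logging', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(level),
  });
  return (await res.json()) as LogLevel;
}
```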


@@ -2,11 +2,16 @@ import { InvokeAIThemeColors } from 'theme/themeTypes';
import { generateColorPalette } from 'theme/util/generateColorPalette';
const BASE = { H: 220, S: 16 };
const ACCENT = { H: 250, S: 52 };
const WORKING = { H: 47, S: 50 };
const WARNING = { H: 28, S: 50 };
const OK = { H: 113, S: 50 };
const ERROR = { H: 0, S: 50 };
const ACCENT = { H: 250, S: 42 };
// const ACCENT = { H: 250, S: 52 };
const WORKING = { H: 47, S: 42 };
// const WORKING = { H: 47, S: 50 };
const WARNING = { H: 28, S: 42 };
// const WARNING = { H: 28, S: 50 };
const OK = { H: 113, S: 42 };
// const OK = { H: 113, S: 50 };
const ERROR = { H: 0, S: 42 };
// const ERROR = { H: 0, S: 50 };
export const InvokeAIColors: InvokeAIThemeColors = {
base: generateColorPalette(BASE.H, BASE.S),


@@ -0,0 +1,56 @@
import { editableAnatomy as parts } from '@chakra-ui/anatomy';
import {
createMultiStyleConfigHelpers,
defineStyle,
} from '@chakra-ui/styled-system';
import { mode } from '@chakra-ui/theme-tools';
const { definePartsStyle, defineMultiStyleConfig } =
createMultiStyleConfigHelpers(parts.keys);
const baseStylePreview = defineStyle({
borderRadius: 'md',
py: '1',
transitionProperty: 'common',
transitionDuration: 'normal',
});
const baseStyleInput = defineStyle((props) => ({
borderRadius: 'md',
py: '1',
transitionProperty: 'common',
transitionDuration: 'normal',
width: 'full',
_focusVisible: { boxShadow: 'outline' },
_placeholder: { opacity: 0.6 },
'::selection': {
color: mode('accent.900', 'accent.50')(props),
bg: mode('accent.200', 'accent.400')(props),
},
}));
const baseStyleTextarea = defineStyle({
borderRadius: 'md',
py: '1',
transitionProperty: 'common',
transitionDuration: 'normal',
width: 'full',
_focusVisible: { boxShadow: 'outline' },
_placeholder: { opacity: 0.6 },
});
const invokeAI = definePartsStyle((props) => ({
preview: baseStylePreview,
input: baseStyleInput(props),
textarea: baseStyleTextarea,
}));
export const editableTheme = defineMultiStyleConfig({
variants: {
invokeAI,
},
defaultProps: {
size: 'sm',
variant: 'invokeAI',
},
});


@@ -4,6 +4,7 @@ import { InvokeAIColors } from './colors/colors';
import { accordionTheme } from './components/accordion';
import { buttonTheme } from './components/button';
import { checkboxTheme } from './components/checkbox';
import { editableTheme } from './components/editable';
import { formLabelTheme } from './components/formLabel';
import { inputTheme } from './components/input';
import { menuTheme } from './components/menu';
@@ -72,7 +73,17 @@ export const theme: ThemeOverride = {
selected: {
light:
'0px 0px 0px 1px var(--invokeai-colors-base-150), 0px 0px 0px 4px var(--invokeai-colors-accent-400)',
dark: '0px 0px 0px 1px var(--invokeai-colors-base-900), 0px 0px 0px 4px var(--invokeai-colors-accent-400)',
dark: '0px 0px 0px 1px var(--invokeai-colors-base-900), 0px 0px 0px 4px var(--invokeai-colors-accent-500)',
},
hoverSelected: {
light:
'0px 0px 0px 1px var(--invokeai-colors-base-150), 0px 0px 0px 4px var(--invokeai-colors-accent-500)',
dark: '0px 0px 0px 1px var(--invokeai-colors-base-900), 0px 0px 0px 4px var(--invokeai-colors-accent-300)',
},
hoverUnselected: {
light:
'0px 0px 0px 1px var(--invokeai-colors-base-150), 0px 0px 0px 4px var(--invokeai-colors-accent-200)',
dark: '0px 0px 0px 1px var(--invokeai-colors-base-900), 0px 0px 0px 4px var(--invokeai-colors-accent-600)',
},
nodeSelectedOutline: `0 0 0 2px var(--invokeai-colors-accent-450)`,
},
@@ -80,6 +91,7 @@ export const theme: ThemeOverride = {
components: {
Button: buttonTheme, // Button and IconButton
Input: inputTheme,
Editable: editableTheme,
Textarea: textareaTheme,
Tabs: tabsTheme,
Progress: progressTheme,


@@ -37,4 +37,7 @@ export const getInputOutlineStyles = (props: StyleFunctionProps) => ({
_placeholder: {
color: mode('base.700', 'base.400')(props),
},
'::selection': {
bg: mode('accent.200', 'accent.400')(props),
},
});


@@ -1 +1 @@
__version__ = "3.0.0rc1"
__version__ = "3.0.0rc2"


@@ -12,7 +12,7 @@ repo_url: 'https://github.com/invoke-ai/InvokeAI'
edit_uri: edit/main/docs/
# Copyright
copyright: Copyright &copy; 2022 InvokeAI Team
copyright: Copyright &copy; 2023 InvokeAI Team
# Configuration
theme:
@@ -35,8 +35,11 @@ theme:
features:
- navigation.instant
- navigation.tabs
- navigation.tabs.sticky
- navigation.top
- navigation.tracking
- navigation.indexes
- navigation.path
- search.highlight
- search.suggest
- toc.integrate
@@ -95,3 +98,68 @@ plugins:
'installation/INSTALL_DOCKER.md': 'installation/040_INSTALL_DOCKER.md'
'installation/INSTALLING_MODELS.md': 'installation/050_INSTALLING_MODELS.md'
'installation/INSTALL_PATCHMATCH.md': 'installation/060_INSTALL_PATCHMATCH.md'
nav:
- Home: 'index.md'
- Installation:
- Overview: 'installation/index.md'
- Installing with the Automated Installer: 'installation/010_INSTALL_AUTOMATED.md'
- Installing manually: 'installation/020_INSTALL_MANUAL.md'
- NVIDIA CUDA / AMD ROCm: 'installation/030_INSTALL_CUDA_AND_ROCM.md'
- Installing with Docker: 'installation/040_INSTALL_DOCKER.md'
- Installing Models: 'installation/050_INSTALLING_MODELS.md'
- Installing PyPatchMatch: 'installation/060_INSTALL_PATCHMATCH.md'
- Installing xFormers: 'installation/070_INSTALL_XFORMERS.md'
- Developers Documentation: 'installation/Developers_documentation/BUILDING_BINARY_INSTALLERS.md'
- Deprecated Documentation:
- Binary Installer: 'installation/deprecated_documentation/INSTALL_BINARY.md'
- Running InvokeAI on Google Colab: 'installation/deprecated_documentation/INSTALL_JUPYTER.md'
- Manual Installation on Linux: 'installation/deprecated_documentation/INSTALL_LINUX.md'
- Manual Installation on macOS: 'installation/deprecated_documentation/INSTALL_MAC.md'
- Manual Installation on Windows: 'installation/deprecated_documentation/INSTALL_WINDOWS.md'
- Installing Invoke with pip: 'installation/deprecated_documentation/INSTALL_PCP.md'
- Source Installer: 'installation/deprecated_documentation/INSTALL_SOURCE.md'
- Community Nodes:
- Community Nodes: 'nodes/communityNodes.md'
- Overview: 'nodes/overview.md'
- Features:
- Overview: 'features/index.md'
- Concepts: 'features/CONCEPTS.md'
- Configuration: 'features/CONFIGURATION.md'
- ControlNet: 'features/CONTROLNET.md'
- Image-to-Image: 'features/IMG2IMG.md'
- Controlling Logging: 'features/LOGGING.md'
- Model Merging: 'features/MODEL_MERGING.md'
- Nodes Editor (Experimental): 'features/NODES.md'
- NSFW Checker: 'features/NSFW.md'
- Postprocessing: 'features/POSTPROCESS.md'
- Prompting Features: 'features/PROMPTS.md'
- Training: 'features/TRAINING.md'
- Unified Canvas: 'features/UNIFIED_CANVAS.md'
- Variations: 'features/VARIATIONS.md'
- InvokeAI Web Server: 'features/WEB.md'
- WebUI Hotkeys: "features/WEBUIHOTKEYS.md"
- Other: 'features/OTHER.md'
- Contributing:
- How to Contribute: 'contributing/CONTRIBUTING.md'
- Development:
- Overview: 'contributing/contribution_guides/development.md'
- InvokeAI Architecture: 'contributing/ARCHITECTURE.md'
- Frontend Documentation: 'contributing/contribution_guides/development_guides/contributingToFrontend.md'
- Local Development: 'contributing/LOCAL_DEVELOPMENT.md'
- Documentation: 'contributing/contribution_guides/documentation.md'
- Translation: 'contributing/contribution_guides/translation.md'
- Tutorials: 'contributing/contribution_guides/tutorials.md'
- Changelog: 'CHANGELOG.md'
- Deprecated:
- Command Line Interface: 'deprecated/CLI.md'
- Embiggen: 'deprecated/EMBIGGEN.md'
- Inpainting: 'deprecated/INPAINTING.md'
- Outpainting: 'deprecated/OUTPAINTING.md'
- Help:
- Sampler Convergence: 'help/SAMPLER_CONVERGENCE.md'
- Other:
- Contributors: 'other/CONTRIBUTORS.md'
- CompViz-README: 'other/README-CompViz.md'


@@ -5,6 +5,7 @@
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
@@ -12,6 +13,11 @@
- [ ] No, because:
## Have you updated relevant documentation?
- [ ] Yes
- [ ] No
## Description