Compare commits


8 Commits

Author SHA1 Message Date
blessedcoolant
fd67df9447 Remove gfpgan_dir
+ Update GFPGAN Model Path Defaults
>  Update them to match the new file hierarchy
2022-11-13 00:27:56 +00:00
Lincoln Stein
45e5053d06 added assets back 2022-11-13 00:23:51 +00:00
Lincoln Stein
9c5999ede1 added caveats to use of installer script 2022-11-12 20:02:01 +00:00
Lincoln Stein
7ddf7f0b7d use invoke-ai gfpgan to store weights in right place 2022-11-12 19:31:41 +00:00
psychedelicious
b8de5244b1 Fixes issue with intermediates size
Sorry @lstein !
2022-11-12 19:03:22 +00:00
Lincoln Stein
72e011a4e4 stop crash on text mask generation 2022-11-12 18:23:16 +00:00
Lincoln Stein
98db0d746c fix crash in inpaint when no seed in original image 2022-11-12 17:57:43 +00:00
Lincoln Stein
1a8e007066 merge release-candidate-2-1-3 into main.
Squashed commit of the following:

commit 9a1fe8e7fb
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Sat Nov 12 17:07:40 2022 +0000

    swap in release URLs for installers

commit ff56f5251b
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Sat Nov 12 17:03:21 2022 +0000

    fix up bad unicode chars in invoke.py

commit ed943bd6c7
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Sat Nov 12 16:05:45 2022 +0000

    outcrop improvements, hand-added

commit 7ad2355b1d
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Sat Nov 12 15:14:33 2022 +0000

    documentation fixes

commit 66c920fc19
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Thu Nov 10 16:49:25 2022 -0500

    Revert "Resize hires as an image"

    This reverts commit d05b1b3544.

commit 3fc5cb09f8
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Sat Nov 12 12:43:17 2022 +0000

    fix incorrect link in install

commit 1345ec77ab
Author: tildebyte <337875+tildebyte@users.noreply.github.com>
Date:   Sun Nov 6 19:07:31 2022 -0500

    toil(repo): add tildebyte as owner of installer/ directory

commit b116715490
Author: Kyle Schouviller <kyle0654@hotmail.com>
Date:   Thu Nov 10 21:43:56 2022 -0800

    Fix performance issue introduced by torch cuda cache clear during generation

commit fa3670270e
Author: mauwii <Mauwii@outlook.de>
Date:   Fri Nov 11 12:42:03 2022 +0100

    small update to dockers huggingface section

commit c304250ef6
Author: mauwii <Mauwii@outlook.de>
Date:   Fri Nov 11 12:19:27 2022 +0100

    fix format and Link in INSTALL_INVOKE.md

commit 802ce5dde5
Author: mauwii <Mauwii@outlook.de>
Date:   Fri Nov 11 11:17:49 2022 +0100

    small fixes to format and a link in INSTALL_MANUAL

commit 311ee320ec
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Fri Nov 11 07:23:35 2022 +0000

    ignore installer intermediate files

commit e9df17b374
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Fri Nov 11 07:19:25 2022 +0000

    fix backslash-related syntax error

commit 061fb4ef00
Merge: 52be0d23 4095acd1
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Fri Nov 11 06:50:04 2022 +0000

    Merge branch 'release-candidate-2-1-3' of github.com:/invoke-ai/InvokeAI into release-candidate-2-1-3

commit 52be0d2396
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Fri Nov 11 06:49:45 2022 +0000

    add WindowsLongFileName batfile to source installer

commit 4095acd10e
Author: mauwii <Mauwii@outlook.de>
Date:   Fri Nov 11 07:05:17 2022 +0100

    Doc Updates
    A lot of re-formatting of new Installation Docs
    also some content updates/corrections

commit 201eb22d76
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Fri Nov 11 04:41:02 2022 +0000

    prevent two models from being marked default in models.yaml

commit 17ab982200
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Fri Nov 11 03:56:54 2022 +0000

    installers download branch HEAD not tag

commit a04965b0e9
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Fri Nov 11 03:48:21 2022 +0000

    improve messaging during installation process

commit 0b529f0c57
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Tue Nov 8 15:22:32 2022 +0000

    enable outcropping of random JPG/PNG images

    - Works best with runwayML inpainting model
    - Numerous code changes required to propagate seed to final metadata.
      Original code predicated on the image being generated within InvokeAI.

commit 6f9f848345
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Tue Nov 8 17:27:42 2022 +0000

    enhance outcropping with ability to direct contents of new regions

    - When outcropping an image you can now add a `--new_prompt` option, to specify
      a new prompt to be used instead of the original one used to generate the image.

    - Similarly you can provide a new seed using `--seed` (or `-S`). A seed of zero
      will pick one randomly.

    - This PR also fixes the crash that happened when trying to outcrop an image
      that does not contain InvokeAI metadata.
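
    Putting these options together, a hypothetical invocation might look like this (the `!fix ... --outcrop` syntax is the one documented in the outpainting docs; the image path, directions and prompt are illustrative):

```bash
# extend the image 64 pixels up and to the right, with a fresh prompt and a random seed
invoke> !fix ./outputs/sketch.png --outcrop top 64 right 64 --new_prompt "snowy mountain lake" -S 0
```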

commit 918c1589ef
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Thu Nov 10 20:16:47 2022 +0000

    fix #1402

commit 116415b3fc
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Thu Nov 10 21:27:25 2022 +0000

    fix invoke.py crash if no models.yaml file present

    - Script will now offer the user the ability to create a
      minimal models.yaml and then gracefully exit.
    - Closes #1420
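
    For reference, a sketch of the kind of minimal entry such a file holds (the stanza below is illustrative, not the exact one the script writes; adjust the weights path to your install):

```bash
# hand-rolled minimal models.yaml as an alternative to letting the script create one
cat > configs/models.yaml <<'EOF'
stable-diffusion-1.5:
  description: Stable Diffusion version 1.5
  weights: ./models/ldm/stable-diffusion-v1/v1-5-pruned-emaonly.ckpt
  config: ./configs/stable-diffusion/v1-inference.yaml
  width: 512
  height: 512
  default: true
EOF
```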

commit b4b6eabaac
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Thu Nov 10 16:49:25 2022 -0500

    Revert "Log strength with hires"

    This reverts commit 82d4904c07.

commit 4ef1f4a854
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Thu Nov 10 20:01:49 2022 +0000

    remove temporary directory from repo

commit 510fc4ebaa
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Thu Nov 10 19:59:03 2022 +0000

    remove -e from clipseg load in installer

commit a20914434b
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Thu Nov 10 19:37:07 2022 +0000

    change clipseg repo branch to avoid clipseg not found error

commit 0d134195fd
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Thu Nov 10 18:39:29 2022 +0000

    update repo URL to point to rc

commit 649d8c8573
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Thu Nov 10 18:13:28 2022 +0000

    integrate tildebyte installer

commit a358d370a0
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Thu Nov 10 17:48:14 2022 +0000

    add @tildebyte compiled pip installer

commit 94a9033c4f
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Thu Nov 10 14:52:00 2022 +0000

    ignore source installer zip files

commit 18a947c503
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Thu Nov 10 14:46:36 2022 +0000

    documentation and environment file fixes

    - Have clarified the relationship between the @tildebyte and @cmdr2 installers;
      However, @tildebyte installer merge is still a WIP due to conflicts over
      such things as `invoke.sh`.
    - Rechristened 1click installer as "source" installer. @tildebyte installer will be
      "the" installer. (We'll see which one generates the least support requests and
      maintenance work.)
    - Sync'd `environment-mac.yml` with `development`. The former was failing with a
      taming-transformers error as per https://discord.com/channels/@me/1037201214154231899/1040060947378749460

commit a23b031895
Author: Mike DiGiovanni <vinblau@gmail.com>
Date:   Wed Nov 9 16:44:59 2022 -0500

    Fixes typos in README.md

commit 23af68c7d7
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Thu Nov 10 07:02:27 2022 -0500

    downgrade win installs to basicsr==1.4.1

commit e258beeb51
Merge: 7460c069 e481bfac
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Thu Nov 10 06:37:45 2022 -0500

    Merge branch 'release-candidate-2-1-3' of github.com:invoke-ai/InvokeAI into release-candidate-2-1-3

commit 7460c069b8
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Thu Nov 10 06:36:48 2022 -0500

    remove --prefer-binary from requirements-base.txt

    It appears that some versions of pip do not recognize this option
    when it appears in the requirements file. Did not explore this further
    but recommend --prefer-binary in the manual install instructions on
    the command line.

commit e481bfac61
Merge: 5040747c d1ab65a4
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Thu Nov 10 11:21:56 2022 +0000

    Merge branch 'release-candidate-2-1-3' of github.com:/invoke-ai/InvokeAI into release-candidate-2-1-3

commit 5040747c67
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Thu Nov 10 11:21:43 2022 +0000

    fix windows install instructions & bat file

commit d1ab65a431
Author: mauwii <Mauwii@outlook.de>
Date:   Thu Nov 10 07:18:59 2022 +0100

    update WEBUIHOTKEYS.md

commit af4ee7feb8
Author: mauwii <Mauwii@outlook.de>
Date:   Thu Nov 10 06:33:49 2022 +0100

    update INSTALL_DOCKER.md

commit 764fb29ade
Author: mauwii <Mauwii@outlook.de>
Date:   Thu Nov 10 06:30:15 2022 +0100

    fix formatting in INSTALL.md

commit 1014d3ba44
Author: mauwii <Mauwii@outlook.de>
Date:   Thu Nov 10 06:29:14 2022 +0100

    fix build.sh invokeai_conda_env_file default value

commit 40a48aca88
Author: mauwii <Mauwii@outlook.de>
Date:   Thu Nov 10 05:25:30 2022 +0100

    fix environment-mac.yml
    moved taming-transformers-rom1504 to pip dependencies

commit 92abc00f16
Author: mauwii <Mauwii@outlook.de>
Date:   Thu Nov 10 05:19:52 2022 +0100

    fix test-invoke-conda
    - copy required conda environment yaml
    - use environment.yml
    - I use cp instead of ln since it is compatible with Windows runners

commit a5719aabf8
Author: mauwii <Mauwii@outlook.de>
Date:   Thu Nov 10 04:14:35 2022 +0100

    update Dockerfile
    - link environment.yml from new environments path
    - change default conda_env_file
    - quote all variables to avoid splitting
    - also remove paths from conda-env-files in build-container.yml

commit 44a18511fa
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Wed Nov 9 20:51:06 2022 +0000

    update paths in container build workflow

commit b850dbadaf
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Wed Nov 9 20:16:57 2022 +0000

    finished reorganization of install docs

commit 9ef8b944d5
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Wed Nov 9 18:50:58 2022 +0000

    tweaks to manual install documentation

    --prefer-binary is an iffy option in the requirements file. It isn't
    supported by some versions of pip, so I removed it from
    requirements-base.txt and inserted it into the manual install
    instructions where it seems to do what it is supposed to.

commit efc5a98488
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Wed Nov 9 18:20:03 2022 +0000

    manual installation documentation tested on Linux

commit 1417c87928
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Wed Nov 9 17:37:06 2022 +0000

    change name of requirements.txt to avoid confusion

commit 2dd6fc2b93
Merge: 22213612 71ee44a8
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Wed Nov 9 17:26:24 2022 +0000

    Merge branch 'release-candidate-2-1-3' of github.com:/invoke-ai/InvokeAI into release-candidate-2-1-3

commit 22213612a0
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Wed Nov 9 17:25:59 2022 +0000

    directory cleanup; working on install docs

commit 71ee44a827
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Wed Nov 9 02:07:13 2022 +0000

    prevent crash when switching to an invalid model

commit b17ca0a5e7
Author: damian0815 <null@damianstewart.com>
Date:   Tue Nov 8 14:28:38 2022 +0100

    don't suppress exceptions when doing cross-attention control

commit 71bbfe4a1a
Author: damian0815 <null@damianstewart.com>
Date:   Tue Nov 8 12:59:34 2022 +0100

    Fix #1362 by improving VRAM usage patterns when doing .swap()

    commit ef3f7a26e242b73c2beb0195c7fd8f654ef47f55
    Author: damian0815 <null@damianstewart.com>
    Date:   Tue Nov 8 12:18:37 2022 +0100

        remove log spam

    commit 7189d649622d4668b120b0dd278388ad672142c4
    Author: damian0815 <null@damianstewart.com>
    Date:   Tue Nov 8 12:10:28 2022 +0100

        change the way saved slicing strategy is applied

    commit 01c40f751ab72955140165c16f95ae411732265b
    Author: damian0815 <null@damianstewart.com>
    Date:   Tue Nov 8 12:04:43 2022 +0100

        fix slicing_strategy_getter callsite

    commit f8cfe25150a346958903316bc710737d99839923
    Author: damian0815 <null@damianstewart.com>
    Date:   Tue Nov 8 11:56:22 2022 +0100

        cleanup, consistent dim=0 also tested

    commit 5bf9b1e890d48e962afd4a668a219b68271e5dc1
    Author: damian0815 <null@damianstewart.com>
    Date:   Tue Nov 8 11:34:09 2022 +0100

        refactored context, tested with non-sliced cross attention control

    commit d58a46e39bf562e7459290d2444256e8c08ad0b6
    Author: damian0815 <null@damianstewart.com>
    Date:   Sun Nov 6 00:41:52 2022 +0100

        cleanup

    commit 7e2c658b4c06fe239311b65b9bb16fa3adec7fd7
    Author: damian0815 <null@damianstewart.com>
    Date:   Sat Nov 5 22:57:31 2022 +0100

        disable logs

    commit 20ee89d93841b070738b3d8a4385c93b097d92eb
    Author: damian0815 <null@damianstewart.com>
    Date:   Sat Nov 5 22:36:58 2022 +0100

        slice saved attention if necessary

    commit 0a7684a22c880ec0f48cc22bfed4526358f71546
    Author: damian0815 <null@damianstewart.com>
    Date:   Sat Nov 5 22:32:38 2022 +0100

        raise instead of asserting

    commit 7083104c7f3a0d8fd96e94a2f391de50a3c942e4
    Author: damian0815 <null@damianstewart.com>
    Date:   Sat Nov 5 22:31:00 2022 +0100

        store dim when saving slices

    commit f7c0808ed383ec1dc70645288a798ed2aa4fa85c
    Author: damian0815 <null@damianstewart.com>
    Date:   Sat Nov 5 22:27:16 2022 +0100

        don't retry on exception

    commit 749a721e939b3fe7c1741e7998dab6bd2c85a0cb
    Author: damian0815 <null@damianstewart.com>
    Date:   Sat Nov 5 22:24:50 2022 +0100

        stuff

    commit 032ab90e9533be8726301ec91b97137e2aadef9a
    Author: damian0815 <null@damianstewart.com>
    Date:   Sat Nov 5 22:20:17 2022 +0100

        more logging

    commit 3dc34b387f033482305360e605809d95a40bf6f8
    Author: damian0815 <null@damianstewart.com>
    Date:   Sat Nov 5 22:16:47 2022 +0100

        logs

    commit 901c4c1aa4b9bcef695a6551867ec8149e6e6a93
    Author: damian0815 <null@damianstewart.com>
    Date:   Sat Nov 5 22:12:39 2022 +0100

        actually set save_slicing_strategy to True

    commit f780e0a0a7c6b6a3db320891064da82589358c8a
    Author: damian0815 <null@damianstewart.com>
    Date:   Sat Nov 5 22:10:35 2022 +0100

        store slicing strategy

    commit 93bb6d566fd18c5c69ef7dacc8f74ba2cf671cb7
    Author: damian <git@damianstewart.com>
    Date:   Sat Nov 5 20:43:48 2022 +0100

        still not it

    commit 5e3a9541f8ae00bde524046963910323e20c40b7
    Author: damian <git@damianstewart.com>
    Date:   Sat Nov 5 17:20:02 2022 +0100

        wip offloading attention slices on-demand

    commit 4c2966aa856b6f3b446216da3619ae931552ef08
    Author: damian0815 <null@damianstewart.com>
    Date:   Sat Nov 5 15:47:40 2022 +0100

        pre-emptive offloading, idk if it works

    commit 572576755e9f0a878d38e8173e485126c0efbefb
    Author: root <you@example.com>
    Date:   Sat Nov 5 11:25:32 2022 +0000

        push attention slices to cpu. slow but saves memory.

    commit b57c83a68f2ac03976ebc89ce2ff03812d6d185f
    Author: damian0815 <null@damianstewart.com>
    Date:   Sat Nov 5 12:04:22 2022 +0100

        verbose logging

    commit 3a5dae116f110a96585d9eb71d713b5ed2bc3d2b
    Author: damian0815 <null@damianstewart.com>
    Date:   Sat Nov 5 11:50:48 2022 +0100

        wip fixing mem strategy crash (4 test on runpod)

    commit 3cf237db5fae0c7b0b4cc3c47c81830bdb2ae7de
    Author: damian0815 <null@damianstewart.com>
    Date:   Fri Nov 4 09:02:40 2022 +0100

        wip, only works on cuda

commit 5702271991
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Wed Nov 9 14:09:36 2022 +0000

    speculative reorganization of the requirements & environment files

    - This is only a test!
    - The various environment*.yml and requirements*.txt files have all
      been moved into a directory named "environments-and-requirements".
    - The idea is to clean up our root directory so that the github home
      page is tidy.
    - The manual install instructions will start with the instructions to
      create a symbolic link from environment.yml to the appropriate file
      for OS and GPU.
    - The 1-click installers have been updated to accommodate this change.
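
    On a Linux/CUDA box, the manual step described above would look roughly like this (the file name is illustrative; pick the one matching your OS and GPU):

```bash
# link the matching environment file to the top level, then build/update the conda env
ln -sf environments-and-requirements/environment-cuda.yml environment.yml
conda env update -f environment.yml
```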

commit 10781e7dc4
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Wed Nov 9 01:59:45 2022 +0000

    refactoring requirements

commit 099d1157c5
Author: mauwii <Mauwii@outlook.de>
Date:   Wed Nov 9 00:16:18 2022 +0100

    better way to make sure conda is usable

commit ab825bf7ee
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Tue Nov 8 22:05:33 2022 +0000

    add back --prefer-binaries to requirements

commit 10cfeb5ada
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Nov 8 22:27:19 2022 +0100

    add quotes to set and use `$environment_file`

commit e97515d045
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Nov 8 22:24:21 2022 +0100

    set environment file for conda update

commit 0f04bc5789
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Nov 8 22:21:25 2022 +0100

    use conda env update

commit 3f74aabecd
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Nov 8 22:20:44 2022 +0100

    use command instead of hash

commit b1a99a51b7
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Tue Nov 8 14:44:44 2022 -0500

    remove --global git config from 1-click installers

commit 8004f8a6d9
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Mon Nov 7 09:07:20 2022 -0500

    Revert "Use array slicing to calc ddim timesteps"

    This reverts commit 1f0c5b4cf1.

commit ff8ff2212a
Merge: 8e5363cd 636620b1
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Tue Nov 8 14:01:40 2022 +0000

    add initfile support from PR #1386

commit 8e5363cd83
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Tue Nov 8 13:26:18 2022 +0000

    move 'installer/' to '1-click-installer' to make room for tildebyte installer

commit 1450779146
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Tue Nov 8 12:56:36 2022 +0000

    update branch for installer to pull against

commit 8cd5d95b8a
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Tue Nov 8 05:30:20 2022 +0000

    move all models into subdirectories of ./models

    - this required an update to the invoke-ai fork of gfpgan
    - simultaneously reverted consolidation of environment and
      requirements files, as their presence in a directory
      triggered setup.py to try to install a sub-package.

commit abd6407394
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Tue Nov 8 03:52:46 2022 +0000

    leave a copy of environment-cuda.yml at top level

    - named it environment.yml
    - need to avoid a big change for users and breaking older support
      instructions.

commit 734dacfbe9
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Tue Nov 8 03:50:07 2022 +0000

    consolidate environment files

    - starting to remove unneeded entries and pins
    - no longer require -e in front of github dependencies
    - update setup.py with release number
    - update manual installation instructions

commit 636620b1d5
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Tue Nov 8 03:26:16 2022 +0000

    change initfile to ~/.invokeai

    - adjust documentation
    - also fix 'clipseg_models' to 'clipseg', which seems to be working now

commit 1fe41146f0
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Sat Nov 5 14:28:01 2022 -0400

    add support for an initialization file, invokeai.init

    - Place preferred startup command switches in a file named
      "invokeai.init". The file can consist of a single line of switches
      such as "--web --steps=28", a series of switches on each
      line, or any combination of the two.

     Example:
     ```
       --web
       --host=0.0.0.0
       --steps=28
       --grid
       -f 0.6 -C 11.0 -A k_euler_a
    ```

    - The following options, which were previously only available within
      the CLI, are now available on the command line as well:

      --steps
      --strength
      --cfg_scale
      --width
      --height
      --fit
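
    For example, a hypothetical launch that sets several of these directly on the command line instead of in the init file (values are illustrative):

```bash
python scripts/invoke.py --web --steps 28 --cfg_scale 7.5 --width 512 --height 512
```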

commit 2ad6ef355a
Merge: 865502ee 8b47c829
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Sun Nov 6 18:08:36 2022 +0000

    update discord link

commit 865502ee4f
Author: mauwii <Mauwii@outlook.de>
Date:   Sun Nov 6 18:00:16 2022 +0100

    update changelog

commit c7984f3299
Author: mauwii <Mauwii@outlook.de>
Date:   Sun Nov 6 17:07:27 2022 +0100

    update TROUBLESHOOT.md

commit 7f150ed833
Author: mauwii <Mauwii@outlook.de>
Date:   Sun Nov 6 16:56:58 2022 +0100

    remove `:` from headlines in CONTRIBUTORS.md

commit badf4e256c
Author: mauwii <Mauwii@outlook.de>
Date:   Sun Nov 6 16:56:37 2022 +0100

    enable navigation tabs
    Since the docs are growing, this way they look cleaner

commit e64c60bbb3
Author: mauwii <Mauwii@outlook.de>
Date:   Sun Nov 6 16:18:59 2022 +0100

    remove preflight checks from assets
    seems like somebody executed tests and committed them

commit 1780618543
Author: mauwii <Mauwii@outlook.de>
Date:   Sun Nov 6 16:15:06 2022 +0100

    update INSTALLING_MODELS.md

commit f91fd27624
Author: Kyle Schouviller <kyle0654@hotmail.com>
Date:   Sat Nov 5 14:47:53 2022 -0700

    Bug fix for inpaint size

commit 09e41e8f76
Author: Kyle Schouviller <kyle0654@hotmail.com>
Date:   Sat Nov 5 14:34:52 2022 -0700

    Add inpaint size options to inpaint at a larger size than the actual inpaint image, then scale back down for recombination

commit 6eeb2107b3
Author: mauwii <Mauwii@outlook.de>
Date:   Sat Nov 5 21:01:14 2022 +0100

    remove create-caches.yml since not used anywhere

commit 17053ad8b7
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Sat Nov 5 16:01:55 2022 -0400

    fix duplicated argument introduced by conflict resolution

commit fefb4dc1f8
Merge: 762ca60a d05b1b35
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Sat Nov 5 12:47:35 2022 -0700

    Merge branch 'development' into fix_generate.py

commit d05b1b3544
Author: Craig <cwallen@users.noreply.github.com>
Date:   Sat Oct 29 20:40:30 2022 -0400

    Resize hires as an image

commit 82d4904c07
Author: Craig <cwallen@users.noreply.github.com>
Date:   Sat Oct 29 20:37:40 2022 -0400

    Log strength with hires

commit 1cdcf33cfa
Merge: 6616fa83 cbc029c6
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Sat Nov 5 09:57:38 2022 -0400

    Merge branch 'main' into development

    - this synchronizes recent document fixes by mauwii

commit 6616fa835a
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Fri Nov 4 00:47:03 2022 -0400

    fix Windows library dependency issues

    This commit addresses two bugs:

    1) invokeai.py crashes immediately with a message about an undefined
       attribute sigKILL (closes #1288). The fix is to pin torch at 1.12.1.

    2) Version 1.4.2 of basicsr fails to load properly on Windows, and is
       a requirement of realesrgan, however 1.4.1 works. Pinning basicsr
       in our requirements file resulted in a dependency conflict, so I
       ended up cloning realesrgan into the invoke-ai Git space and changing
       the requirements file there.

    If there is a more elegant solution, please advise.
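
    For anyone applying the same fix by hand, the pins described above amount to something like this (the command is illustrative; the versions are the ones named in this message):

```bash
pip install "torch==1.12.1" "basicsr==1.4.1"
```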

commit 7b9a4564b1
Author: Matthias Wild <40327258+mauwii@users.noreply.github.com>
Date:   Sat Nov 5 14:36:45 2022 +0100

    Update-docs (#1382)

    * update IMG2IMG.md

    * update INPAINTING.md

    * update WEBUIHOTKEYS.md

    * more doc updates (mostly fix formatting):
    - OUTPAINTING.md
    - POSTPROCESS.md
    - PROMPTS.md
    - VARIATIONS.md
    - WEB.md
    - WEBUIHOTKEYS.md

commit fcdefa0620
Author: Matthias Wild <40327258+mauwii@users.noreply.github.com>
Date:   Fri Nov 4 20:47:31 2022 +0100

    Hotfix docs (#1376) (#1377)

commit ef8b3ce639
Merge: b7042095 36870a8f
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Fri Nov 4 12:08:44 2022 -0400

    Merge-main-into-development (#1373)

    To get rid of the difference between main and development.

    Since otherwise it will be a pain to start fixing the documentation
    (when the state between main and development is not the same ...)

    Also this should fix the problem of all tests failing since environment
    yamls get updated.

commit 36870a8f53
Merge: 6b89adfa b7042095
Author: Matthias Wild <40327258+mauwii@users.noreply.github.com>
Date:   Fri Nov 4 16:25:00 2022 +0100

    Merge branch 'development' into merge-main-into-development

commit b70420951d
Author: damian0815 <null@damianstewart.com>
Date:   Thu Nov 3 12:39:45 2022 +0100

    fix parsing error doing eg `forest ().swap(in winter)`

commit 1f0c5b4cf1
Author: wfng92 <43742196+wfng92@users.noreply.github.com>
Date:   Thu Nov 3 17:13:52 2022 +0800

    Use array slicing to calc ddim timesteps

commit 8648da8111
Author: mauwii <Mauwii@outlook.de>
Date:   Fri Nov 4 00:06:19 2022 +0100

    update environment-linux-aarch64 to use python 3.9

commit 45b4593563
Author: mauwii <Mauwii@outlook.de>
Date:   Thu Nov 3 22:31:46 2022 +0100

    update environment-linux-aarch64.yml
    - move getpass_asterisk to pip

commit 41b04316cf
Author: mauwii <Mauwii@outlook.de>
Date:   Thu Nov 3 20:40:08 2022 +0100

    rename job, remove debug branch from triggers

commit e97c6db2a3
Author: mauwii <Mauwii@outlook.de>
Date:   Thu Nov 3 20:34:01 2022 +0100

    include build matrix to build x86_64 and aarch64

commit 896820a349
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Nov 1 05:01:15 2022 +0100

    disable caching

commit 06c8f468bf
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Nov 1 04:26:39 2022 +0100

    disable PR-Validation
    since there are no files passed from context, this is unnecessary

commit 61920e2701
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Nov 1 04:09:39 2022 +0100

    update action to use current branch
    also update build-args of dockerfile and build.sh

commit f34ba7ca70
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Nov 1 02:30:24 2022 +0100

    remove unnecessary mkdir command again

commit c30ef0895d
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Nov 1 01:51:12 2022 +0100

    remove symlink to GFPGANv1.4
    also re-add mkdir to prevent action from failing

commit aa3a774f73
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Nov 1 01:48:59 2022 +0100

    update build-container.yml to use cachev3

commit 2c30555b84
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Nov 1 01:34:20 2022 +0100

    update Dockerfile
    - create models.yaml from models.yaml.example
    - run preload_models.py with --no-interactive

commit 743f605773
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Nov 1 01:21:15 2022 +0100

    update build.sh to download sd-v1.5 model

commit 519c661abb
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Oct 25 01:26:50 2022 +0200

    replace old-fashioned markdown templates with forms
    this will help the readability of issues a lot 🤓

commit 22c956c75f
Merge: 13696adc 0196571a
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Thu Nov 3 10:20:21 2022 -0400

    Merge branch 'development' of github.com:invoke-ai/InvokeAI into development

commit 13696adc3a
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Thu Nov 3 10:20:10 2022 -0400

    speculative change to solve windows esrgan issues

commit 0196571a12
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Wed Nov 2 22:39:35 2022 -0400

    remove merge markers from preload_models.py

commit 9666f466ab
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Wed Nov 2 18:29:34 2022 -0400

    use refined model by default

commit 240e5486c8
Merge: 8164b6b9 aa247e68
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Wed Nov 2 18:35:00 2022 -0400

    Merge branch 'spezialspezial-patch-9' into development

commit 8164b6b9cf
Merge: 4fc82d55 dd5a88dc
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Wed Nov 2 17:06:46 2022 -0400

    Merge branch 'development' of github.com:invoke-ai/InvokeAI into development

commit 4fc82d554f
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date:   Thu Nov 3 04:17:28 2022 +1300

    [WebUI] Final 2.1 Release Build

commit 96b34c0f85
Author: damian0815 <null@damianstewart.com>
Date:   Wed Nov 2 09:08:11 2022 +0100

    Final WebUI build for Release 2.1
    - squashed commit of 52 commits from PR #1327

    don't log base64 progress images

    Fresh Build For WebUI

    [WebUI] Loopback Default False

    Fixes bugs/styling

    - Fixes missing web app state on new version:
    Adds stateReconciler to redux-persist.

    When we add more values to the state and then release the updated app, they will be automatically merged in.

    Resetting the web UI will be needed far less.
    7159ec

    - Fixes console z-index
    - Moves reset web UI button to visible area

    Decreases gallery width on inpainting

    Increases workarea split padding to 1rem

    Adds missing tooltips to site header

    Changes inpainting controls settings to hover

    Fixes hotkeys and settings buttons not working

    Improves bounding box interactions

    - Bounding box can now be moved by dragging any of its edges
    - Bounding box does not affect drawing if already drawing a stroke
    - Can lock bounding box to draw directly on the bounding box edges
    - Removes spacebar-hold behaviour due to technical issues

    Fixes silent crash when init image too large

    To send the mask to the server, the UI rendered the mask onto the init image and sent the whole image. The mask was then cropped by the server.

    If the image was too large, the app silently failed. Maybe it exceeds the websocket size limit.

    Fixed by cropping the mask in the UI layer, sending only bounding-box-sized mask image data.

    Disabled bounding box settings when locked

    Styles image uploader

    Builds fresh bundle

    Improves bounding box interaction

    Added spacebar-hold-to-transform back.

    Address bounding box feedback

    - Adds back toggle to hide bounding box
    - Box quick toggle = q, normal toggle = shift + q
    - Styles canvas alert icons

    Adds hints when unable to invoke

    - Popover on Invoke button indicates why exactly it is disabled, e.g. prompt is empty, something else is processing, etc.
    - There may be more than one reason; all are displayed.

    Fix Inpainting Alerts Styling

    Preventing unnecessary re-renders across the app

    Code Split Inpaint Options

    Isolate features to their own components so they don't re-render the other stuff each time.

    [TESTING] Remove  global isReady checking

    I don't believe this is needed at all because the isReady state is constantly updated when needed and tracked in real time in the Redux store. This causes massive re-renders. @psychedelicious If this is absolutely essential for a reason that I do not see, please hit me up on Discord.

    Fresh Bundle

    Fix Bounding Box Settings re-rendering on brush stroke

    [Code Splitting] Bounding Box Options

    Isolated all bounding box components so they no longer trigger unnecessary re-renders. Still need to fix the bounding box triggering re-renders on the control panel inside the canvas itself. But the options panel should be good to go with this change.

    Inpainting Controls Code Splitting and Performance

    Codesplit the entirety of the inpainting controls. Created new selectors for each and every component to ensure there are no unnecessary re-renders. App feels a lot smoother.

    Fixes rerenders on ClearBrushHistory

    Fixes crash when requesting post-generation upscale/face restoration

    - Moves the inpainting paste to before the postprocessing.

    Removes unused isReady state

    Changes Report Bug icon to a bug

    Restores shift+q bounding box shortcut

    Adds alert for bounding box size to status icons

    Adds asCheckbox to IAIIconButton

    Rough draft of this. Not happy with the styling but it's clearer than having them look just like buttons.

    Fixes crash related to old value of progress_latents in state

    Styling changes and settings modal minor refactor

    Fixes: uploaded JPG images not loading

    Reworks CurrentImageButtons.tsx

    - Change all icons to FA iconset for consistency
    - Refactors IAIIconButton, IAIButton, IAIPopover to handle ref forwarding
    - Redesigns buttons into group

    Only generate 1 iteration when seed fixed & variations disabled

    Fixes progress images select

    Fixes edge case: upload over gets stuck while alt tabbing

    - Press esc to close it now

    Fixes display progress images select typing

    Fixes current image button rerenders

    Adds min width to ImageUploader

    Makes fast-latents in progress default

    Update Icon Button Checkbox Style Styling

    Fixes next/prev image buttons

    Refactor canvas buttons + more

    Add Save Intermediates Step Count

    For accurate mode only.

    Co-Authored-By: Richard Macarthy <richardmacarthy@protonmail.com>

    Restores "initial image" text

    Address feedback

    - moves mask clear button
    - fixes intermediates
    - shrinks inpainting icons by 10%

    Fix Loopback Styling

    Adds escape hotkey to close floating panels

    Readd Hotkey for Dual Display

    Updated Current Image Button Styling

commit dd5a88dcee
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date:   Thu Nov 3 04:17:28 2022 +1300

    [WebUI] Final 2.1 Release Build

commit 95ed56bf82
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date:   Thu Nov 3 04:16:31 2022 +1300

    Updated Current Image Button Styling

commit 1ae80f5ab9
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date:   Thu Nov 3 04:07:57 2022 +1300

    Readd Hotkey for Dual Display

commit 1f0bd3ca6c
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Thu Nov 3 02:07:00 2022 +1100

    Adds escape hotkey to close floating panels

commit a1971f6830
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date:   Thu Nov 3 03:38:15 2022 +1300

    Fix Loopback Styling

commit c6118e8898
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Thu Nov 3 01:29:51 2022 +1100

    Address feedback

    - moves mask clear button
    - fixes intermediates
    - shrinks inpainting icons by 10%

commit 7ba958cf7f
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Thu Nov 3 01:10:38 2022 +1100

    Restores "initial image" text

commit 383905d5d2
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date:   Thu Nov 3 02:59:11 2022 +1300

    Add Save Intermediates Step Count

    For accurate mode only.

    Co-Authored-By: Richard Macarthy <richardmacarthy@protonmail.com>

commit 6173e3e9ca
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Thu Nov 3 00:53:53 2022 +1100

    Refactor canvas buttons + more

commit 3feb7d8922
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Thu Nov 3 00:49:23 2022 +1100

    Fixes next/prev image buttons

commit 1d9edbd0dd
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date:   Thu Nov 3 00:50:44 2022 +1300

    Update Icon Button Checkbox Style Styling

commit d439abdb89
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Wed Nov 2 21:37:24 2022 +1100

    Makes fast-latents in progress default

commit ee47ea0c89
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Wed Nov 2 21:37:09 2022 +1100

    Adds min width to ImageUploader

commit 300bb2e627
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Wed Nov 2 21:28:22 2022 +1100

    Fixes current image button rerenders

commit ccf8593501
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Wed Nov 2 21:27:43 2022 +1100

    Fixes display progress images select typing

commit 0fda612f3f
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Wed Nov 2 21:02:01 2022 +1100

    Fixes edge case: upload over gets stuck while alt tabbing

    - Press esc to close it now

commit 5afff65b71
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Wed Nov 2 20:33:19 2022 +1100

    Fixes progress images select

commit 7e55bdefce
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Wed Nov 2 20:27:47 2022 +1100

    Only generate 1 iteration when seed fixed & variations disabled

commit 620cf84d3d
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Wed Nov 2 19:51:38 2022 +1100

    Reworks CurrentImageButtons.tsx

    - Change all icons to FA iconset for consistency
    - Refactors IAIIconButton, IAIButton, IAIPopover to handle ref forwarding
    - Redesigns buttons into group

commit cfe567c62a
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Wed Nov 2 16:14:50 2022 +1100

    Fixes: uploaded JPG images not loading

commit cefe12f1df
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Wed Nov 2 15:31:18 2022 +1100

    Styling changes and settings modal minor refactor

commit 1e51c39928
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Wed Nov 2 15:27:46 2022 +1100

    Fixes crash related to old value of progress_latents in state

commit 42a02bbb80
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Wed Nov 2 13:15:06 2022 +1100

    Adds asCheckbox to IAIIconButton

    Rough draft of this. Not happy with the styling but it's clearer than having them look just like buttons.

commit f1ae6dae4c
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Wed Nov 2 13:13:56 2022 +1100

    Adds alert for bounding box size to status icons

commit 6195579910
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Wed Nov 2 11:52:19 2022 +1100

    Restores shift+q bounding box shortcut

commit 16c8b23b34
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Wed Nov 2 11:32:07 2022 +1100

    Changes Report Bug icon to a bug

commit 07ae626b22
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Wed Nov 2 11:17:16 2022 +1100

    Removes unused isReady state

commit 8d171bb044
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Wed Nov 2 11:13:26 2022 +1100

    Fixes crash when requesting post-generation upscale/face restoration

    - Moves the inpainting paste to before the postprocessing.

commit 6e33ca7e9e
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Wed Nov 2 10:59:01 2022 +1100

    Fixes rerenders on ClearBrushHistory

commit db46e12f2b
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date:   Wed Nov 2 11:36:28 2022 +1300

    Inpainting Controls Code Splitting and Performance

    Codesplit the entirety of the inpainting controls. Created new selectors for each and every component to ensure there are no unnecessary re-renders. App feels a lot smoother.

commit 868e4b2db8
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date:   Wed Nov 2 07:40:31 2022 +1300

    [Code Splitting] Bounding Box Options

    Isolated all bounding box components so they no longer trigger unnecessary re-renders. Still need to fix the bounding box triggering re-renders on the control panel inside the canvas itself. But the options panel should be good to go with this change.

commit 2e562742c1
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date:   Wed Nov 2 06:40:27 2022 +1300

    Fix Bounding Box Settings re-rendering on brush stroke

commit 68e6958009
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date:   Wed Nov 2 06:28:34 2022 +1300

    Fresh Bundle

commit ea6e3a7949
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date:   Wed Nov 2 06:26:56 2022 +1300

    [TESTING] Remove  global isReady checking

    I don't believe this is needed at all because the isReady state is constantly updated when needed and tracked in real time in the Redux store. This causes massive re-renders. @psychedelicious If this is absolutely essential for a reason that I do not see, please hit me up on Discord.

commit b2879ca99f
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date:   Wed Nov 2 06:08:59 2022 +1300

    Code Split Inpaint Options

    Isolate features to their own components so they don't re-render the other stuff each time.

commit 4e911566c3
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date:   Wed Nov 2 03:50:56 2022 +1300

    Preventing unnecessary re-renders across the app

commit 9bafda6a15
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date:   Wed Nov 2 03:02:35 2022 +1300

    Fix Inpainting Alerts Styling

commit 871a8a5375
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Tue Nov 1 23:52:07 2022 +1100

    Adds hints when unable to invoke

    - Popover on Invoke button indicates why exactly it is disabled, e.g. prompt is empty, something else is processing, etc.
    - There may be more than one reason; all are displayed.

commit 0eef74bc00
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Tue Nov 1 23:40:11 2022 +1100

    Address bounding box feedback

    - Adds back toggle to hide bounding box
    - Box quick toggle = q, normal toggle = shift + q
    - Styles canvas alert icons

commit 423ae32097
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Tue Nov 1 17:06:07 2022 +1100

    Improves bounding box interaction

    Added spacebar-hold-to-transform back.

commit 8282e5d045
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Tue Nov 1 16:57:07 2022 +1100

    Builds fresh bundle

commit 19305cdbdf
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Tue Nov 1 16:51:11 2022 +1100

    Styles image uploader

commit eb9028ab30
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Tue Nov 1 16:51:03 2022 +1100

    Disabled bounding box settings when locked

commit 21483f5d07
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Tue Nov 1 16:50:24 2022 +1100

    Fixes silent crash when init image too large

    To send the mask to the server, the UI rendered the mask onto the init image and sent the whole image. The mask was then cropped by the server.

    If the image was too large, the app silently failed. Maybe it exceeds the websocket size limit.

    Fixed by cropping the mask in the UI layer, sending only bounding-box-sized mask image data.

commit 82dcbac28f
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Tue Nov 1 16:28:30 2022 +1100

    Improves bounding box interactions

    - Bounding box can now be moved by dragging any of its edges
    - Bounding box does not affect drawing if already drawing a stroke
    - Can lock bounding box to draw directly on the bounding box edges
    - Removes spacebar-hold behaviour due to technical issues

commit d43bd4625d
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Tue Nov 1 15:10:49 2022 +1100

    Fixes hotkeys and settings buttons not working

commit ea891324a2
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Tue Nov 1 11:04:02 2022 +1100

    Changes inpainting controls settings to hover

commit 8fd9ea2193
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Tue Nov 1 11:03:41 2022 +1100

    Adds missing tooltips to site header

commit fb02666856
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Tue Nov 1 11:03:25 2022 +1100

    Increases workarea split padding to 1rem

commit f6f5c2731b
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Tue Nov 1 11:03:10 2022 +1100

    Decreases gallery width on inpainting

commit b4e3f771e0
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Tue Nov 1 10:54:59 2022 +1100

    Fixes bugs/styling

    - Fixes missing web app state on new version:
    Adds stateReconciler to redux-persist.

    When we add more values to the state and then release the updated app, they will be automatically merged in.

    Resetting the web UI will be needed far less.
    7159ec

    - Fixes console z-index
    - Moves reset web UI button to visible area

commit 99bb9491ac
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date:   Tue Nov 1 08:35:45 2022 +1300

    [WebUI] Loopback Default False

commit 0453f21127
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date:   Wed Nov 2 23:23:51 2022 +1300

    Fresh Build For WebUI

commit 9fc09aa4bd
Author: damian0815 <null@damianstewart.com>
Date:   Wed Nov 2 09:08:11 2022 +0100

    don't log base64 progress images

commit 5e87062cf8
Author: spezialspezial <75758219+spezialspezial@users.noreply.github.com>
Date:   Wed Nov 2 00:21:27 2022 +0100

    Option to directly invert the grayscale heatmap - fix

commit 3e7a459990
Author: spezialspezial <75758219+spezialspezial@users.noreply.github.com>
Date:   Tue Nov 1 21:37:33 2022 +0100

    Update txt2mask.py

commit bbf4c03e50
Author: spezialspezial <75758219+spezialspezial@users.noreply.github.com>
Date:   Tue Nov 1 21:11:19 2022 +0100

    Option to directly invert the grayscale heatmap

    Theoretically it is less work to invert the image while it's small, but I can't measure a significant difference. Still, it's a handy option to have in some cases.

commit 611a3a9753
Author: mauwii <Mauwii@outlook.de>
Date:   Wed Nov 2 02:23:09 2022 +0100

    fix name of caching step

commit 1611f0d181
Author: mauwii <Mauwii@outlook.de>
Date:   Wed Nov 2 02:18:46 2022 +0100

    readd caching of sd-models
    - this would remove the need for the secret to be available in PRs

commit 08835115e4
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Tue Nov 1 22:10:12 2022 -0400

    pin pytorch_lightning to 1.7.7, issue #1331

commit 2d84e28d32
Merge: 533fd04e ef17aae8
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Tue Nov 1 22:11:04 2022 -0400

    Merge branch 'development' of github.com:invoke-ai/InvokeAI into development

commit ef17aae8ab
Author: damian0815 <null@damianstewart.com>
Date:   Wed Nov 2 00:39:48 2022 +0100

    add damian0815 to contributors list

commit 0cc39f01a3
Author: damian0815 <null@damianstewart.com>
Date:   Wed Nov 2 01:18:50 2022 +0100

    report full size for fast latents and update conversion matrix for v1.5

commit 688d7258f1
Author: damian0815 <null@damianstewart.com>
Date:   Wed Nov 2 00:33:00 2022 +0100

    fix a bug that broke cross attention control index mapping

commit 4513320bf1
Author: damian0815 <null@damianstewart.com>
Date:   Wed Nov 2 00:31:58 2022 +0100

    save VRAM by not recombining tensors that have been sliced to save VRAM

commit 533fd04ef0
Merge: 6215592b dff5681c
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Tue Nov 1 17:40:36 2022 -0400

    Merge branch 'development' of github.com:invoke-ai/InvokeAI into development

commit dff5681cf0
Author: damian0815 <d@d.com>
Date:   Tue Nov 1 13:56:03 2022 +0100

    shorter strings

commit 5a2790a69b
Author: damian0815 <d@d.com>
Date:   Tue Nov 1 13:19:20 2022 +0100

    convert progress display to a drop-down

commit 7c5305ccba
Author: damian0815 <d@d.com>
Date:   Tue Nov 1 12:54:46 2022 +0100

    do not try to save base64 intermediates in gallery on cancellation

commit 4013e8ad6f
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Tue Nov 1 21:54:35 2022 +1100

    Fixes b64 image sending and displaying

commit d1dfd257f9
Author: damian <d@d.com>
Date:   Tue Nov 1 11:40:40 2022 +0100

    wip base64

commit 5322d735ee
Author: damian <d@d.com>
Date:   Tue Nov 1 11:31:42 2022 +0100

    update frontend

commit cdb107dcda
Author: damian <d@d.com>
Date:   Tue Nov 1 11:17:43 2022 +0100

    add option to show intermediate latent space

commit be1393a41c
Author: damian <d@d.com>
Date:   Tue Nov 1 10:16:55 2022 +0100

    ensure existing exception handling code also handles new exception class

commit e554c2607f
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Tue Nov 1 10:08:42 2022 +0100

    Rebuilt prompt parsing logic

    Complete re-write of the prompt parsing logic to be more readable and
    logical, and therefore also hopefully easier to debug, maintain, and
    augment.

    In the process it has also become more robust to badly-formed prompts.

    Squashed commit of the following:

    commit 8fcfa88a16e1390d41717e940d72aed64712171c
    Author: Damian at mba <damian@frey.NOSPAMco.nz>
    Date:   Sun Oct 30 17:05:57 2022 +0100

        further cleanup

    commit 1a1fd78bcfeb49d072e3e6d5808aa8df15441629
    Author: Damian at mba <damian@frey.NOSPAMco.nz>
    Date:   Sun Oct 30 16:07:57 2022 +0100

        cleanup and document

    commit 099c9659fa8b8135876f9a5a50fe80b20bc0635c
    Author: Damian at mba <damian@frey.NOSPAMco.nz>
    Date:   Sun Oct 30 15:54:58 2022 +0100

        works fully

    commit 5e6887ea8c25a1e21438ff6defb381fd027d25fd
    Author: Damian at mba <damian@frey.NOSPAMco.nz>
    Date:   Sun Oct 30 15:24:31 2022 +0100

        further...

    commit 492fda120844d9bc1ad4ec7dd408a3374762d0ff
    Author: Damian at mba <damian@frey.NOSPAMco.nz>
    Date:   Sun Oct 30 14:08:57 2022 +0100

        getting there...

    commit c6aab05a8450cc3c95c8691daf38fdc64c74f52d
    Author: Damian at mba <damian@frey.NOSPAMco.nz>
    Date:   Fri Oct 28 14:29:03 2022 +0200

        wip doesn't compile

    commit 5e533f731cfd20cd435330eeb0012e5689e87e81
    Author: Damian at mba <damian@frey.NOSPAMco.nz>
    Date:   Fri Oct 28 13:21:43 2022 +0200

        working with CrossAttentionControl but no Attention support yet

    commit 9678348773431e500e110e8aede99086bb7b5955
    Author: Damian at mba <damian@frey.NOSPAMco.nz>
    Date:   Fri Oct 28 13:04:52 2022 +0200

        wip rebuilding prompt parser

commit 6215592b12
Merge: ef24d76a 349cc254
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Tue Nov 1 17:34:55 2022 -0400

    Merge branch 'development' of github.com:invoke-ai/InvokeAI into development

commit 349cc25433
Author: damian0815 <d@d.com>
Date:   Tue Nov 1 20:08:52 2022 +0100

    fix crash (be a little less aggressive clearing out the attention slice)

commit 214d276379
Author: damian0815 <d@d.com>
Date:   Tue Nov 1 19:57:55 2022 +0100

    be more aggressive at clearing out saved_attn_slice

commit ef24d76adc
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Tue Nov 1 14:34:23 2022 -0400

    fix library problems in preload_modules

commit ab2b5a691d
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Tue Nov 1 17:22:48 2022 -0400

    fix model_cache memory management issues

commit c7de2b2801
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Nov 1 02:02:14 2022 +0100

    disable checks with sd-V1.4 model...
    ...to save some resources, since V1.5 is the default now

commit e8075658ac
Author: mauwii <Mauwii@outlook.de>
Date:   Mon Oct 31 22:20:51 2022 +0100

    update test-invoke-conda.yml
    - fix model dl path for sd-v1-4.ckpt
    - copy configs/models.yaml.example to configs/models.yaml

commit 4202dabee1
Author: mauwii <Mauwii@outlook.de>
Date:   Mon Oct 31 22:17:21 2022 +0100

    fix models example weights for sd-v1.4

commit d67db2bcf1
Author: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Date:   Tue Nov 1 08:35:45 2022 +1300

    [WebUI] Loopback Default False

commit 7159ec885f
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Mon Oct 31 11:33:05 2022 -0400

    further improvements to preload_models.py

    - Faster startup for command line switch processing
    - Specify configuration file to modify using --config option:

      ./scripts/preload_models.py --config models/my-models-file.yaml

commit b5cf734ba9
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Mon Oct 31 11:08:19 2022 -0400

    improve behavior of preload_models.py

    - NEVER overwrite user's existing models.yaml
    - Instead, merge its contents into new config file,
      and rename original to models.yaml.orig (with
      message)
    - models.yaml has been removed from repository and renamed
      models.yaml.example

commit f7dc8eafee
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Mon Oct 31 10:47:35 2022 -0400

    restore models.yaml to virgin state

commit 762ca60a30
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Tue Oct 4 22:55:10 2022 -0400

    Update INPAINTING.md

commit e7fb9f342c
Author: Hideyuki Katsushiro <h.katsushiro@qualia.tokyo.jp>
Date:   Wed Oct 5 10:08:53 2022 +0900

    add argument --outdir
2022-11-12 17:17:07 +00:00
52 changed files with 2190 additions and 15982 deletions

.github/CODEOWNERS

@@ -2,3 +2,4 @@ ldm/invoke/pngwriter.py @CapableWeb
ldm/invoke/server_legacy.py @CapableWeb
scripts/legacy_api.py @CapableWeb
tests/legacy_tests.sh @CapableWeb
installer/ @tildebyte

.gitignore

@@ -228,6 +228,10 @@ requirements.txt
# source installer files
source_installer/*zip
source_installer/invokeAI
install.bat
install.sh
update.bat
update.sh
# this may be present if the user created a venv
invokeai


@@ -99,8 +99,7 @@ overridden on a per-prompt basis (see
| `--sampler <sampler>` | `-A<sampler>` | `k_lms` | Sampler to use. Use `-h` to get list of available samplers. |
| `--seamless` | | `False` | Create interesting effects by tiling elements of the image. |
| `--embedding_path <path>` | | `None` | Path to pre-trained embedding manager checkpoints, for custom models |
| `--gfpgan_dir` | | `src/gfpgan` | Path to where GFPGAN is installed. |
| `--gfpgan_model_path` | | `experiments/pretrained_models/GFPGANv1.4.pth` | Path to GFPGAN model file, relative to `--gfpgan_dir`. |
| `--gfpgan_model_path` | | `experiments/pretrained_models/GFPGANv1.4.pth` | Path to GFPGAN model file. |
| `--free_gpu_mem` | | `False` | Free GPU memory after sampling, to allow image decoding and saving in low VRAM conditions |
| `--precision` | | `auto` | Set model precision, default is selected by device. Options: auto, float32, float16, autocast |
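
A hypothetical launch using the revised option, which now takes a plain path to the model file rather than a path relative to `--gfpgan_dir` (the path shown is illustrative):

```bash
python scripts/invoke.py --gfpgan_model_path ./models/gfpgan/GFPGANv1.4.pth
```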


@@ -19,13 +19,13 @@ tree on a hill with a river, nature photograph, national geographic -I./test-pic
This will take the original image shown here:
<figure markdown>
![](https://user-images.githubusercontent.com/50542132/193946000-c42a96d8-5a74-4f8a-b4c3-5213e6cadcce.png)
![original-image](https://user-images.githubusercontent.com/50542132/193946000-c42a96d8-5a74-4f8a-b4c3-5213e6cadcce.png){ width=320 }
</figure>
and generate a new image based on it as shown here:
<figure markdown>
![](https://user-images.githubusercontent.com/111189/194135515-53d4c060-e994-4016-8121-7c685e281ac9.png)
![generated-image](https://user-images.githubusercontent.com/111189/194135515-53d4c060-e994-4016-8121-7c685e281ac9.png){ width=320 }
</figure>
The `--init_img` (`-I`) option gives the path to the seed picture. `--strength`
@@ -45,15 +45,16 @@ Note that the prompt makes a big difference. For example, this slight variation
on the prompt produces a very different image:
<figure markdown>
![](https://user-images.githubusercontent.com/111189/194135220-16b62181-b60c-4248-8989-4834a8fd7fbd.png)
![](https://user-images.githubusercontent.com/111189/194135220-16b62181-b60c-4248-8989-4834a8fd7fbd.png){ width=320 }
<caption markdown>photograph of a tree on a hill with a river</caption>
</figure>
!!! tip
When designing prompts, think about how the images scraped from the internet were captioned. Very few photographs will
be labeled "photograph" or "photorealistic." They will, however, be captioned with the publication, photographer, camera
model, or film settings.
When designing prompts, think about how the images scraped from the internet were
captioned. Very few photographs will be labeled "photograph" or "photorealistic."
They will, however, be captioned with the publication, photographer, camera model,
or film settings.
If the initial image contains transparent regions, then Stable Diffusion will
only draw within the transparent regions, a process called
@@ -61,17 +62,17 @@ only draw within the transparent regions, a process called
However, for this to work correctly, the color information underneath the
transparent needs to be preserved, not erased.
!!! warning
!!! warning "**IMPORTANT ISSUE** "
**IMPORTANT ISSUE** `img2img` does not work properly on initial images smaller
than 512x512. Please scale your image to at least 512x512 before using it.
Larger images are not a problem, but may run out of VRAM on your GPU card. To
fix this, use the --fit option, which downscales the initial image to fit within
the box specified by width x height:
`img2img` does not work properly on initial images smaller
than 512x512. Please scale your image to at least 512x512 before using it.
Larger images are not a problem, but may run out of VRAM on your GPU card. To
fix this, use the `--fit` option, which downscales the initial image to fit within
the box specified by width x height:
```
tree on a hill with a river, national geographic -I./test-pictures/big-sketch.png -H512 -W512 --fit
```
```
tree on a hill with a river, national geographic -I./test-pictures/big-sketch.png -H512 -W512 --fit
```
## How does it actually work, though?
@@ -87,7 +88,7 @@ from a prompt. If the step count is 10, then the "latent space" (Stable
Diffusion's internal representation of the image) for the prompt "fire" with
seed `1592514025` develops something like this:
```commandline
```bash
invoke> "fire" -s10 -W384 -H384 -S1592514025
```
@@ -133,9 +134,9 @@ Notice how much more fuzzy the starting image is for strength `0.7` compared to
| | strength = 0.7 | strength = 0.4 |
| --------------------------- | ------------------------------------------------------------- | ------------------------------------------------------------- |
| initial image that SD sees | ![](../assets/img2img/000032.step-0.png) | ![](../assets/img2img/000030.step-0.png) |
| initial image that SD sees | ![step-0](../assets/img2img/000032.step-0.png) | ![step-0](../assets/img2img/000030.step-0.png) |
| steps argument to `invoke>` | `-S10` | `-S10` |
| steps actually taken | 7 | 4 |
| steps actually taken | `7` | `4` |
| latent space at each step | ![gravity32](../assets/img2img/000032.steps.gravity.png) | ![gravity30](../assets/img2img/000030.steps.gravity.png) |
| output | ![000032.1592514025](../assets/img2img/000032.1592514025.png) | ![000030.1592514025](../assets/img2img/000030.1592514025.png) |
@@ -150,7 +151,7 @@ If you want to try this out yourself, all of these are using a seed of
`1592514025` with a width/height of `384`, step count `10`, the default sampler
(`k_lms`), and the single-word prompt `"fire"`:
```commandline
```bash
invoke> "fire" -s10 -W384 -H384 -S1592514025 -I /tmp/fire-drawing.png --strength 0.7
```
@@ -170,7 +171,7 @@ give each generation 20 steps.
Here's strength `0.4` (note step count `50`, which is `20 ÷ 0.4` to make sure SD
does `20` steps from my image):
```commandline
```bash
invoke> "fire" -s50 -W384 -H384 -S1592514025 -I /tmp/fire-drawing.png -f 0.4
```

View File

@@ -92,6 +92,21 @@ The new image is larger than the original (576x704) because 64 pixels were added
to the top and right sides. You will need enough VRAM to process an image of
this size.
#### Outcropping non-InvokeAI images
You can outcrop an arbitrary image that was not generated by InvokeAI,
but your results will vary. The `inpainting-1.5` model is highly
recommended, but if not feasible, then you may be able to improve the
output by conditioning the outcropping with a text prompt that
describes the scene using the `--new_prompt` argument:
```bash
invoke> !fix images/vacation.png --outcrop top 128 --new_prompt "family vacation"
```
You may also provide a different seed for the outcropping operation by passing
`-S<seed>`. A negative seed will generate a new random seed.
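For example, a minimal sketch that combines the outcrop example above with an explicit seed (the seed value is arbitrary and only for illustration):

```bash
invoke> !fix images/vacation.png --outcrop top 128 --new_prompt "family vacation" -S 31337
```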
A number of caveats:
1. Although you can specify any pixel values, they will be rounded up to the

View File

@@ -6,49 +6,39 @@ title: Postprocessing
## Intro
This extension provides the ability to restore faces and upscale
images.
This extension provides the ability to restore faces and upscale images.
Face restoration and upscaling can be applied at the time you generate
the images, or at any later time against a previously-generated PNG
file, using the [!fix](#fixing-previously-generated-images)
command. [Outpainting and outcropping](OUTPAINTING.md) can only be
applied after the fact.
Face restoration and upscaling can be applied at the time you generate the
images, or at any later time against a previously-generated PNG file, using the
[!fix](#fixing-previously-generated-images) command.
[Outpainting and outcropping](OUTPAINTING.md) can only be applied after the
fact.
## Face Fixing
The default face restoration module is GFPGAN. The default upscaler is
Real-ESRGAN. For an alternative face restoration module, see [CodeFormer
Support](#codeformer-support) below.
Real-ESRGAN. For an alternative face restoration module, see
[CodeFormer Support](#codeformer-support) below.
As of version 1.14, environment.yaml will install the Real-ESRGAN
package into the standard install location for python packages, and
will put GFPGAN into a subdirectory of "src" in the InvokeAI
directory. Upscaling with Real-ESRGAN should "just work" without
further intervention. Simply pass the `--upscale` (`-U`) option on the
`invoke>` command line, or indicate the desired scale on the popup in
the Web GUI.
As of version 1.14, environment.yaml will install the Real-ESRGAN package into
the standard install location for python packages, and will put GFPGAN into a
subdirectory of "src" in the InvokeAI directory. Upscaling with Real-ESRGAN
should "just work" without further intervention. Simply pass the `--upscale`
(`-U`) option on the `invoke>` command line, or indicate the desired scale on
the popup in the Web GUI.
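For instance, a minimal sketch of requesting 2x upscaling at generation time (the prompt is illustrative, and this assumes the default upscaling strength is applied when only a scale factor is given):

```bash
invoke> "a cottage on a hillside, golden hour" -U 2
```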
**GFPGAN** requires a series of downloadable model files to
work. These are loaded when you run `scripts/preload_models.py`. If
GFPAN is failing with an error, please run the following from the
InvokeAI directory:
**GFPGAN** requires a series of downloadable model files to work. These are
loaded when you run `scripts/preload_models.py`. If GFPGAN is failing with an
error, please run the following from the InvokeAI directory:
```bash
python scripts/preload_models.py
```
If you do not run this script in advance, the GFPGAN module will attempt
to download the models files the first time you try to perform facial
If you do not run this script in advance, the GFPGAN module will attempt to
download the model files the first time you try to perform facial
reconstruction.
Alternatively, if you have GFPGAN installed elsewhere, or if you are
using an earlier version of this package which asked you to install
GFPGAN in a sibling directory, you may use the `--gfpgan_dir` argument
with `invoke.py` to set a custom path to your GFPGAN directory. _There
are other GFPGAN related boot arguments if you wish to customize
further._
## Usage
You will now have access to two new prompt arguments.
@@ -119,15 +109,15 @@ actions.
This repo also allows you to perform face restoration using
[CodeFormer](https://github.com/sczhou/CodeFormer).
In order to setup CodeFormer to work, you need to download the models
like with GFPGAN. You can do this either by running
`preload_models.py` or by manually downloading the [model
file](https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/codeformer.pth)
In order to set up CodeFormer, you need to download its models just as with
GFPGAN. You can do this either by running `preload_models.py` or by manually
downloading the
[model file](https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/codeformer.pth)
and saving it to the `ldm/invoke/restoration/codeformer/weights` folder.
You can use `-ft` prompt argument to swap between CodeFormer and the
default GFPGAN. The above mentioned `-G` prompt argument will allow
you to control the strength of the restoration effect.
You can use the `-ft` prompt argument to swap between CodeFormer and the default
GFPGAN. The above-mentioned `-G` prompt argument will allow you to control the
strength of the restoration effect.
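As a hedged sketch of what that looks like in practice (the prompt and strength value are illustrative, and the `codeformer` value is assumed from the module name described above):

```bash
invoke> "portrait of an elderly fisherman" -G 0.8 -ft codeformer
```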
### Usage
@@ -157,9 +147,9 @@ situations when there is very little facial data to work with.
## Fixing Previously-Generated Images
It is easy to apply face restoration and/or upscaling to any
previously-generated file. Just use the syntax `!fix path/to/file.png
<options>`. For example, to apply GFPGAN at strength 0.8 and upscale
2X for a file named `./outputs/img-samples/000044.2945021133.png`,
previously-generated file. Just use the syntax
`!fix path/to/file.png <options>`. For example, to apply GFPGAN at strength 0.8
and upscale 2X for a file named `./outputs/img-samples/000044.2945021133.png`,
just run:
```bash

View File

@@ -4,12 +4,17 @@ title: Docker
# :fontawesome-brands-docker: Docker
## Before you begin
!!! warning "For end users"
- For end users: Install InvokeAI locally using the instructions for your OS.
- For developers: For container-related development tasks or for enabling easy
deployment to other environments (on-premises or cloud), follow these
instructions. For general use, install locally to leverage your machine's GPU.
We highly recommend installing InvokeAI locally using [these instructions](index.md).
!!! tip "For developers"
For container-related development tasks or for enabling easy
deployment to other environments (on-premises or cloud), follow these
instructions.
For general use, install locally to leverage your machine's GPU.
## Why containers?
@@ -37,16 +42,19 @@ another environment with NVIDIA GPUs on-premises or in the cloud.
#### Install [Docker](https://github.com/santisbon/guides#docker)
On the Docker Desktop app, go to Preferences, Resources, Advanced. Increase the
CPUs and Memory to avoid this
On the [Docker Desktop app](https://docs.docker.com/get-docker/), go to
Preferences, Resources, Advanced. Increase the CPUs and Memory to avoid this
[Issue](https://github.com/invoke-ai/InvokeAI/issues/342). You may need to
increase Swap and Disk image size too.
#### Get a Huggingface-Token
Go to [Hugging Face](https://huggingface.co/settings/tokens), create a token and
temporary place it somewhere like a open texteditor window (but dont save it!,
only keep it open, we need it in the next step)
Besides the Docker Agent you will need an Account on
[huggingface.co](https://huggingface.co/join).
After you have successfully registered your account, go to
[huggingface.co/settings/tokens](https://huggingface.co/settings/tokens), create
a token and copy it, since you will need it in the next step.
### Setup
@@ -94,25 +102,25 @@ After the build process is done, you can run the container via the provided
./docker-build/run.sh
```
When used without arguments, the container will start the website and provide
When used without arguments, the container will start the webserver and provide
you the link to open it. But if you want to use some other parameters you can
also do so.
!!! example
!!! example ""
```bash
docker-build/run.sh --from_file tests/validate_pr_prompt.txt
./docker-build/run.sh --from_file tests/validate_pr_prompt.txt
```
The output folder is located on the volume which is also used to store the model.
Find out more about available CLI-Parameter at [features/CLI.md](../features/CLI.md)
Find out more about the available CLI parameters at [features/CLI.md](../features/CLI.md/#arguments)
---
!!! warning "Deprecated"
From here on you will find the rest of the previous Docker-Docs, which will still
From here on you will find the previous Docker docs, which still
provide some useful information.
## Usage (time to have fun)

View File

@@ -2,30 +2,41 @@
title: InvokeAI Installer
---
The InvokeAI installer is a shell script that will install InvokeAI
onto a stock computer running recent versions of Linux, MacOSX or
Windows. It will leave you with a version that runs a stable version
of InvokeAI. When a new version of InvokeAI is released, you will
download and reinstall the new version.
The InvokeAI installer is a shell script that will install InvokeAI onto a stock
computer running recent versions of Linux, MacOSX or Windows. It will leave you
with a version that runs a stable version of InvokeAI. When a new version of
InvokeAI is released, you will download and reinstall the new version.
If you wish to tinker with unreleased versions of InvokeAI that
introduce potentially unstable new features, you should consider using
the [source installer](INSTALL_SOURCE.md) or one of the [manual
install](INSTALL_MANUAL.md) methods.
If you wish to tinker with unreleased versions of InvokeAI that introduce
potentially unstable new features, you should consider using the
[source installer](INSTALL_SOURCE.md) or one of the
[manual install](INSTALL_MANUAL.md) methods.
Before you begin, make sure that you meet the [hardware
requirements](index.md#Hardware_Requirements) and has the appropriate
GPU drivers installed. In particular, if you are a Linux user with an
AMD GPU installed, you may need to install the [ROCm
driver](https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html).
**Important Caveats**
- This script does not support AMD GPUs. For Linux AMD support,
please use the manual or source code installer methods.
Installation requires roughly 18G of free disk space to load the
libraries and recommended model weights files.
- This script has difficulty on some Macintosh machines
that have previously been used for Python development due to
conflicting development tools versions. Mac developers may wish
to try the source code installer or one of the manual methods instead.
!!! todo
Before you begin, make sure that you meet
the [hardware requirements](/#hardware-requirements) and have the
appropriate GPU drivers installed. In particular, if you are a Linux user with
an AMD GPU installed, you may need to install the
[ROCm-driver](https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html).
Installation requires roughly 18G of free disk space to load the libraries and
recommended model weights files.
## Steps to Install
1. Download the [latest release](https://github.com/invoke-ai/InvokeAI/releases/latest)
of InvokeAI's installer for your platform
1. Download the
[latest release](https://github.com/invoke-ai/InvokeAI/releases/latest) of
InvokeAI's installer for your platform
2. Place the downloaded package someplace where you have plenty of HDD space,
and have full permissions (i.e. `~/` on Lin/Mac; your home folder on Windows)
@@ -34,7 +45,8 @@ libraries and recommended model weights files.
4. Open the extracted 'InvokeAI' folder
5. Double-click 'install.bat' (Windows), or 'install.sh' (Lin/Mac) (or run from a terminal)
5. Double-click 'install.bat' (Windows), or 'install.sh' (Lin/Mac) (or run from
a terminal)
6. Follow the prompts
@@ -43,10 +55,10 @@ libraries and recommended model weights files.
## Troubleshooting
If you run into problems during or after installation, the InvokeAI
team is available to help you. Either create an
[Issue](https://github.com/invoke-ai/InvokeAI/issues) at our GitHub
site, or make a request for help on the "bugs-and-support" channel of
our [Discord server](https://discord.gg/ZmtBAhwWhy). We are a 100%
volunteer organization, but typically somebody will be available to
help you within 24 hours, and often much sooner.
If you run into problems during or after installation, the InvokeAI team is
available to help you. Either create an
[Issue](https://github.com/invoke-ai/InvokeAI/issues) at our GitHub site, or
make a request for help on the "bugs-and-support" channel of our
[Discord server](https://discord.gg/ZmtBAhwWhy). We are a 100% volunteer
organization, but typically somebody will be available to help you within 24
hours, and often much sooner.

View File

@@ -14,8 +14,7 @@ download the notebook from the link above and load it up in VSCode
(with the appropriate extensions installed)/Jupyter/JupyterLab and
start running the cells one-by-one.
Note that you will need NVIDIA drivers, Python 3.10, and Git installed beforehand.
!!! Note "you will need NVIDIA drivers, Python 3.10, and Git installed beforehand"
## Walkthrough
@@ -25,4 +24,4 @@ Note that you will need NVIDIA drivers, Python 3.10, and Git installed beforehan
### Updating to the development version
## Troubleshooting
## Troubleshooting

View File

@@ -2,51 +2,54 @@
title: Manual Installation
---
# :fontawesome-brands-linux: Linux
# :fontawesome-brands-apple: macOS
# :fontawesome-brands-windows: Windows
<figure markdown>
# :fontawesome-brands-linux: Linux | :fontawesome-brands-apple: macOS | :fontawesome-brands-windows: Windows
</figure>
!!! warning "This is for advanced Users"
who are already experienced with using conda or pip
## Introduction
You have two choices for manual installation, the [first
one](#Conda_method) based on the Anaconda3 package manager (`conda`),
and [a second one](#PIP_method) which uses basic Python virtual
environment (`venv`) commands and the PIP package manager. Both
methods require you to enter commands on the command-line shell, also
known as the "console".
You have two choices for manual installation, the [first one](#Conda_method)
based on the Anaconda3 package manager (`conda`), and
[a second one](#PIP_method) which uses basic Python virtual environment (`venv`)
commands and the PIP package manager. Both methods require you to enter commands
on the terminal, also known as the "console".
On Windows systems you are encouraged to install and use the
[Powershell](https://learn.microsoft.com/en-us/powershell/scripting/install/installing-powershell-on-windows?view=powershell-7.3),
which provides compatibility with Linux and Mac shells and nice
features such as command-line completion.
which provides compatibility with Linux and Mac shells and nice features such as
command-line completion.
### Conda method
1. Check that your system meets the [hardware
requirements](index.md#Hardware_Requirements) and has the appropriate
GPU drivers installed. In particular, if you are a Linux user with an
AMD GPU installed, you may need to install the [ROCm
driver](https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html).
1. Check that your system meets the
[hardware requirements](index.md#Hardware_Requirements) and has the
appropriate GPU drivers installed. In particular, if you are a Linux user
with an AMD GPU installed, you may need to install the
[ROCm driver](https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html).
InvokeAI does not yet support Windows machines with AMD GPUs due to
the lack of ROCm driver support on this platform.
InvokeAI does not yet support Windows machines with AMD GPUs due to the lack
of ROCm driver support on this platform.
To confirm that the appropriate drivers are installed, run
`nvidia-smi` on NVIDIA/CUDA systems, and `rocm-smi` on AMD
systems. These should return information about the installed video
card.
To confirm that the appropriate drivers are installed, run `nvidia-smi` on
NVIDIA/CUDA systems, and `rocm-smi` on AMD systems. These should return
information about the installed video card.
Macintosh users with MPS acceleration, or anybody with a CPU-only
system, can skip this step.
Macintosh users with MPS acceleration, or anybody with a CPU-only system,
can skip this step.
2. You will need to install Anaconda3 and Git if they are not already
available. Use your operating system's preferred installer, or
download installers from the following URLs
2. You will need to install Anaconda3 and Git if they are not already
available. Use your operating system's preferred package manager, or
download the installers manually. You can find them here:
- Anaconda3 (https://www.anaconda.com/)
- git (https://git-scm.com/downloads)
- [Anaconda3](https://www.anaconda.com/)
- [git](https://git-scm.com/downloads)
3. Copy the InvokeAI source code from GitHub using `git`:
3. Clone the [InvokeAI](https://github.com/invoke-ai/InvokeAI) source code from
GitHub:
```bash
git clone https://github.com/invoke-ai/InvokeAI.git
@@ -55,122 +58,158 @@ download installers from the following URLs
This will create an InvokeAI folder where you will follow the rest of the
steps.
3. Enter the newly-created InvokeAI folder. From this step forward make sure
that you are working in the InvokeAI directory!
4. Enter the newly-created InvokeAI folder:
```bash
cd InvokeAI
```
4. Select the appropriate environment file:
We have created a series of environment files suited for different
operating systems and GPU hardware. They are located in the
From this step forward make sure that you are working in the InvokeAI
directory!
5. Select the appropriate environment file:
We have created a series of environment files suited for different operating
systems and GPU hardware. They are located in the
`environments-and-requirements` directory:
<figure markdown>
| filename | OS |
| :----------------------: | :----------------------------: |
| environment-lin-amd.yml | Linux with an AMD (ROCm) GPU |
| environment-lin-cuda.yml | Linux with an NVIDIA CUDA GPU |
| environment-mac.yml | Macintosh |
| environment-win-cuda.yml | Windows with an NVIDIA CUDA GPU |
</figure>
Choose the appropriate environment file for your system and link or copy it
to `environment.yml` in InvokeAI's top-level directory. To do so, run the
following command from the repository root:
!!! Example ""
=== "Macintosh and Linux"
!!! todo "Replace `xxx` and `yyy` with the appropriate OS and GPU codes as seen in the table above"
```bash
ln -sf environments-and-requirements/environment-xxx-yyy.yml environment.yml
```
When this is done, confirm that a file `environment.yml` has been linked in
the InvokeAI root directory and that it points to the correct file in the
`environments-and-requirements` directory.
```bash
ls -la
```
=== "Windows"
!!! todo " Since it requires admin privileges to create links, we will use the copy command to create your `environment.yml`"
```cmd
copy environments-and-requirements\environment-win-cuda.yml environment.yml
```
Afterwards, verify that the file `environment.yml` has been created, either via
Explorer or by running `dir` from the terminal:
```cmd
dir
```
!!! warning "Do not try to run conda on directly on the subdirectory environments file. This won't work. Instead, copy or link it to the top-level directory as shown."
6. Create the conda environment:
```bash
environment-lin-amd.yml # Linux with an AMD (ROCm) GPU
environment-lin-cuda.yml # Linux with an NVIDIA CUDA GPU
environment-mac.yml # Macintoshes with MPS acceleration
environment-win-cuda.yml # Windows with an NVIDA CUDA GPU
conda env update
```
Select the appropriate environment file, and make a link to it
from `environment.yml` in the top-level InvokeAI directory. The
command to do this from the top-level directory is:
This will create a new environment named `invokeai` and install all InvokeAI
dependencies into it. If something goes wrong, you should take a look at
[troubleshooting](#troubleshooting).
!!! todo "Macintosh and Linux"
```bash
ln -sf environments-and-requirements/environment-xxx-yyy.yml environment.yml
```
7. Activate the `invokeai` environment:
Replace `xxx` and `yyy` with the appropriate OS and GPU codes.
In order to use the newly created environment you will first need to
activate it
!!! todo "Windows requires admin privileges to make links, so we use the copy (cp) command"
```bash
cp environments-and-requirements\environment-win-cuda.yml environment.yml
```
```bash
conda activate invokeai
```
When this is done, confirm that a file `environment.yml` has been created in
the InvokeAI root directory and that it points to the correct file in the
`environments-and-requirements`.
Your command-line prompt should change to indicate that `invokeai` is active
by prepending `(invokeai)`.
4. Run conda:
8. Pre-Load the model weights files:
```bash
conda env update
```
!!! tip
This will create a new environment named `invokeai` and install all
InvokeAI dependencies into it.
If you have already downloaded the weights file(s) for another Stable
Diffusion distribution, you may skip this step (by selecting "skip" when
prompted) and configure InvokeAI to use the previously-downloaded files. The
process for this is described [here](INSTALLING_MODELS.md).
If something goes wrong at this point, see
[troubleshooting](#Troubleshooting).
```bash
python scripts/preload_models.py
```
5. Activate the `invokeai` environment:
The script `preload_models.py` will interactively guide you through the
process of downloading and installing the weights files needed for InvokeAI.
Note that the main Stable Diffusion weights file is protected by a license
agreement that you have to agree to. The script will list the steps you need
to take to create an account on the site that hosts the weights files,
accept the agreement, and provide an access token that allows InvokeAI to
legally download and install the weights files.
```bash
conda activate invokeai
```
If you get an error message about a module not being installed, check that
the `invokeai` environment is active and, if not, repeat step 7 to activate it.
Your command-line prompt should change to indicate that `invokeai` is active.
9. Run the command-line or the web interface:
6. Load the model weights files:
!!! example ""
```bash
python scripts/preload_models.py
```
!!! warning "Make sure that the conda environment is activated, which should create `(invokeai)` in front of your prompt!"
(Windows users should use the backslash instead of the slash)
=== "CLI"
The script `preload_models.py` will interactively guide you through
downloading and installing the weights files needed for
InvokeAI. Note that the main Stable Diffusion weights file is
protected by a license agreement that you have to agree to. The
script will list the steps you need to take to create an account on
the site that hosts the weights files, accept the agreement, and
provide an access token that allows InvokeAI to legally download
and install the weights files.
```bash
python scripts/invoke.py
```
If you have already downloaded the weights file(s) for another
Stable Diffusion distribution, you may skip this step (by selecting
"skip" when prompted) and configure InvokeAI to use the
previously-downloaded files. The process for this is described in
[INSTALLING_MODELS.md].
=== "local Webserver"
If you get an error message about a module not being installed,
check that the `invokeai` environment is active and if not, repeat
step 5.
```bash
python scripts/invoke.py --web
```
7. Run the command-line interface or the web interface:
=== "Public Webserver"
```bash
python scripts/invoke.py # command line
python scripts/invoke.py --web # web interface
```
```bash
python scripts/invoke.py --web --host 0.0.0.0
```
(Windows users replace backslash with forward slash)
If you choose the run the web interface, point your browser at
http://localhost:9090 in order to load the GUI.
If you choose to run the web interface, point your browser at
http://localhost:9090 in order to load the GUI.
8. Render away!
10. Render away!
Browse the features listed in the [Stable Diffusion Toolkit
Docs](https://invoke-ai.git) to learn about all the things you can
do with InvokeAI.
Browse the [features](../features/CLI.md) section to learn about all the things you
can do with InvokeAI.
Note that some GPUs are slow to warm up. In particular, when using
an AMD card with the ROCm driver, you may have to wait for over a
minute the first time you try to generate an image. Fortunately, after
the warm up period rendering will be fast.
Note that some GPUs are slow to warm up. In particular, when using an AMD
card with the ROCm driver, you may have to wait for over a minute the first
time you try to generate an image. Fortunately, after the warm up period
rendering will be fast.
9. Subsequently, to relaunch the script, be sure to run "conda
activate invokeai", enter the `InvokeAI` directory, and then launch
the invoke script. If you forget to activate the 'invokeai'
environment, the script will fail with multiple `ModuleNotFound`
errors.
11. Subsequently, to relaunch the script, be sure to run "conda activate
invokeai", enter the `InvokeAI` directory, and then launch the invoke
script. If you forget to activate the 'invokeai' environment, the script
will fail with multiple `ModuleNotFound` errors.
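Spelled out as a shell sketch, that relaunch sequence looks like this:

```bash
conda activate invokeai
cd InvokeAI
python scripts/invoke.py
```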
## Updating to newer versions of the script
@@ -184,185 +223,194 @@ conda env update
python scripts/preload_models.py --no-interactive #optional
```
This will bring your local copy into sync with the remote one. The
last step may be needed to take advantage of new features or released
models. The `--no-interactive` flag will prevent the script from
prompting you to download the big Stable Diffusion weights files.
This will bring your local copy into sync with the remote one. The last step may
be needed to take advantage of new features or released models. The
`--no-interactive` flag will prevent the script from prompting you to download
the big Stable Diffusion weights files.
## pip Install
To install InvokeAI with only the PIP package manager, please follow
these steps:
To install InvokeAI with only the PIP package manager, please follow these
steps:
1. Make sure you are using Python 3.9 or higher. The rest of the install
procedure depends on this:
```bash
python -V
```
2. Install the `virtualenv` tool if you don't have it already:
```bash
pip install virtualenv
```
3. From within the InvokeAI top-level directory, create and activate a
virtual environment named `invokeai`:
```bash
virtualenv invokeai
source invokeai/bin/activate
```
4. Pick the correct `requirements*.txt` file for your hardware and
operating system.
We have created a series of environment files suited for different
operating systems and GPU hardware. They are located in the
`environments-and-requirements` directory:
1. Make sure you are using Python 3.9 or higher. The rest of the install
procedure depends on this:
```bash
requirements-lin-amd.txt # Linux with an AMD (ROCm) GPU
requirements-lin-arm64.txt # Linux running on arm64 systems
requirements-lin-cuda.txt # Linux with an NVIDIA (CUDA) GPU
requirements-mac-mps-cpu.txt # Macintoshes with MPS acceleration
requirements-lin-win-colab-cuda.txt # Windows with an NVIDA (CUDA) GPU
# (supports Google Colab too)
python -V
```
Select the appropriate requirements file, and make a link to it
from `environment.txt` in the top-level InvokeAI directory. The
command to do this from the top-level directory is:
2. Install the `virtualenv` tool if you don't have it already:
!!! todo "Macintosh and Linux"
```bash
ln -sf environments-and-requirements/requirements-xxx-yyy.txt requirements.txt
```
```bash
pip install virtualenv
```
Replace `xxx` and `yyy` with the appropriate OS and GPU codes.
3. From within the InvokeAI top-level directory, create and activate a virtual
environment named `invokeai`:
!!! todo "Windows requires admin privileges to make links, so we use the copy (cp) command instead"
```bash
cp environments-and-requirements\requirements-lin-win-colab-cuda.txt requirements.txt
```
```bash
virtualenv invokeai
source invokeai/bin/activate
```
Note that the order of arguments is reversed between the Linux/Mac and Windows
commands!
4. Pick the correct `requirements*.txt` file for your hardware and operating
system.
Please do not link directly to the file
`environments-and-requirements/requirements.txt`. This is a base requirements
file that does not have the platform-specific libraries.
We have created a series of environment files suited for different operating
systems and GPU hardware. They are located in the
`environments-and-requirements` directory:
When this is done, confirm that a file `requirements.txt` has been
created in the InvokeAI root directory and that it points to the
correct file in the `environments-and-requirements`.
<figure markdown>
5. Run PIP
| filename | OS |
| :---------------------------------: | :-------------------------------------------------------------: |
| requirements-lin-amd.txt | Linux with an AMD (ROCm) GPU |
| requirements-lin-arm64.txt | Linux running on arm64 systems |
| requirements-lin-cuda.txt | Linux with an NVIDIA (CUDA) GPU |
| requirements-mac-mps-cpu.txt | Macintoshes with MPS acceleration |
| requirements-lin-win-colab-cuda.txt | Windows with an NVIDIA (CUDA) GPU<br>(supports Google Colab too) |
Be sure that the `invokeai` environment is active before doing
this:
</figure>
```bash
pip install --prefer-binary -r requirements.txt
```
Select the appropriate requirements file, and make a link to it from
`requirements.txt` in the top-level InvokeAI directory. The command to do
this from the top-level directory is:
!!! example ""
=== "Macintosh and Linux"
!!! info "Replace `xxx` and `yyy` with the appropriate OS and GPU codes."
```bash
ln -sf environments-and-requirements/requirements-xxx-yyy.txt requirements.txt
```
=== "Windows"
!!! info "on Windows, admin privileges are required to make links, so we use the copy command instead"
```cmd
copy environments-and-requirements\requirements-lin-win-colab-cuda.txt requirements.txt
```
!!! warning
Please do not link or copy `environments-and-requirements/requirements-base.txt`.
This is a base requirements file that does not have the platform-specific
libraries. Also, be sure to link or copy the platform-specific file to
a top-level file named `requirements.txt` as shown here. Running pip on
a requirements file in a subdirectory will not work as expected.
When this is done, confirm that a file named `requirements.txt` has been
created in the InvokeAI root directory and that it points to the correct
file in `environments-and-requirements`.
5. Run PIP
Be sure that the `invokeai` environment is active before doing this:
```bash
pip install --prefer-binary -r requirements.txt
```
---
## Troubleshooting
Here are some common issues and their suggested solutions.
### Conda install
### Conda
1. Conda fails before completing `conda update`:
#### Conda fails before completing `conda update`
The usual source of these errors is a package
incompatibility. While we have tried to minimize these, over time
packages get updated and sometimes introduce incompatibilities.
The usual source of these errors is a package incompatibility. While we have
tried to minimize these, over time packages get updated and sometimes introduce
incompatibilities.
We suggest that you search
[Issues](https://github.com/invoke-ai/InvokeAI/issues) or the
"bugs-and-support" channel of the [InvokeAI
Discord](https://discord.gg/ZmtBAhwWhy).
We suggest that you search
[Issues](https://github.com/invoke-ai/InvokeAI/issues) or the "bugs-and-support"
channel of the [InvokeAI Discord](https://discord.gg/ZmtBAhwWhy).
You may also try to install the broken packages manually using PIP. To do this, activate
the `invokeai` environment, and run `pip install` with the name and version of the
package that is causing the incompatibility. For example:
You may also try to install the broken packages manually using PIP. To do this,
activate the `invokeai` environment, and run `pip install` with the name and
version of the package that is causing the incompatibility. For example:
```bash
pip install test-tube==0.7.5
```
```bash
pip install test-tube==0.7.5
```
You can keep doing this until all requirements are satisfied and
the `invoke.py` script runs without errors. Please report to
[Issues](https://github.com/invoke-ai/InvokeAI/issues) what you
were able to do to work around the problem so that others can
benefit from your investigation.
You can keep doing this until all requirements are satisfied and the `invoke.py`
script runs without errors. Please report to
[Issues](https://github.com/invoke-ai/InvokeAI/issues) what you were able to do
to work around the problem so that others can benefit from your investigation.
2. `preload_models.py` or `invoke.py` crashes at an early stage
#### `preload_models.py` or `invoke.py` crashes at an early stage
This is usually due to an incomplete or corrupted Conda install.
Make sure you have linked to the correct environment file and run
`conda update` again.
This is usually due to an incomplete or corrupted Conda install. Make sure you
have linked to the correct environment file and run `conda update` again.
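As a concrete sketch of that recovery step (the CUDA environment file is only an example; link the file that matches your platform):

```bash
# re-link the environment file for your platform, then rebuild the environment
ln -sf environments-and-requirements/environment-lin-cuda.yml environment.yml
conda env update
```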
If the problem persists, a more extreme measure is to clear Conda's
caches and remove the `invokeai` environment:
If the problem persists, a more extreme measure is to clear Conda's caches and
remove the `invokeai` environment:
```bash
conda deactivate
conda env remove -n invokeai
conda clean -a
conda update
```
```bash
conda deactivate
conda env remove -n invokeai
conda clean -a
conda update
```
This removes all cached library files, including ones that may have
been corrupted somehow. (This is not supposed to happen, but does
anyway).
3. `invoke.py` crashes at a later stage.
This removes all cached library files, including ones that may have been
corrupted somehow. (This is not supposed to happen, but does anyway).
If the CLI or web site had been working ok, but something
unexpected happens later on during the session, you've encountered
a code bug that is probably unrelated to an install issue. Please
search [Issues](https://github.com/invoke-ai/InvokeAI/issues), file
a bug report, or ask for help on [Discord](https://discord.gg/ZmtBAhwWhy)
#### `invoke.py` crashes at a later stage
4. My renders are running very slowly!
If the CLI or website had been working OK, but something unexpected happens
later on during the session, you've encountered a code bug that is probably
unrelated to an install issue. Please search
[Issues](https://github.com/invoke-ai/InvokeAI/issues), file a bug report, or
ask for help on [Discord](https://discord.gg/ZmtBAhwWhy).
You may have installed the wrong torch (machine learning) package,
and the system is running on CPU rather than the GPU. To check,
look at the log messages that appear when `invoke.py` is first
starting up. One of the earlier lines should say `Using device type
cuda`. On AMD systems, it will also say "cuda", and on Macintoshes,
it should say "mps". If instead the message says it is running on
"cpu", then you may need to install the correct torch library.
#### My renders are running very slowly
You may be able to fix this by installing a different torch
library. Here are the magic incantations for Conda and PIP.
You may have installed the wrong torch (machine learning) package, and the
system is running on CPU rather than the GPU. To check, look at the log messages
that appear when `invoke.py` is first starting up. One of the earlier lines
should say `Using device type cuda`. On AMD systems, it will also say "cuda",
and on Macintoshes, it should say "mps". If instead the message says it is
running on "cpu", then you may need to install the correct torch library.
!!! todo "For CUDA systems"
You may be able to fix this by installing a different torch library. Here are
the magic incantations for Conda and PIP.
(conda)
```bash
conda install pytorch torchvision torchaudio pytorch-cuda=11.6 -c pytorch -c nvidia
```
!!! todo "For CUDA systems"
(pip)
```bash
pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu116
```
- conda
!!! todo "For AMD systems"
```bash
conda install pytorch torchvision torchaudio pytorch-cuda=11.6 -c pytorch -c nvidia
```
(conda)
```bash
conda activate invokeai
pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/rocm5.2/
```
- pip
(pip)
```bash
pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/rocm5.2/
```
```bash
pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu116
```
More information and troubleshooting tips can be found at https://pytorch.org.
!!! todo "For AMD systems"
- conda
```bash
conda activate invokeai
pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/rocm5.2/
```
- pip
```bash
pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/rocm5.2/
```
More information and troubleshooting tips can be found at https://pytorch.org.

View File

@@ -1,148 +1,137 @@
---
title: The InvokeAI Source Installer
title: Source Installer
---
# The InvokeAI Source Installer
## Introduction
The source installer is a shell script that attempts to automate every
step needed to install and run InvokeAI on a stock computer running
recent versions of Linux, MacOSX or Windows. It will leave you with a
version that runs a stable version of InvokeAI with the option to
upgrade to experimental versions later. It is not as foolproof as the
[InvokeAI installer](INSTALL_INVOKE.md)
The source installer is a shell script that attempts to automate every step
needed to install and run InvokeAI on a stock computer running recent versions
of Linux, MacOS or Windows. It will leave you with a version that runs a stable
version of InvokeAI with the option to upgrade to experimental versions later.
It is not as foolproof as the [InvokeAI installer](INSTALL_INVOKE.md).
Before you begin, make sure that you meet the [hardware
requirements](index.md#Hardware_Requirements) and has the appropriate
GPU drivers installed. In particular, if you are a Linux user with an
AMD GPU installed, you may need to install the [ROCm
driver](https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html).
Before you begin, make sure that you meet the
[hardware requirements](index.md#Hardware_Requirements) and have the appropriate
GPU drivers installed. In particular, if you are a Linux user with an AMD GPU
installed, you may need to install the
[ROCm driver](https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html).
Installation requires roughly 18G of free disk space to load the
libraries and recommended model weights files.
Installation requires roughly 18G of free disk space to load the libraries and
recommended model weights files.
## Walk through
Though there are multiple steps, there really is only one click
involved to kick off the process.
Though there are multiple steps, there really is only one click involved to kick
off the process.
1. The source installer is distributed in ZIP files. Go to the [latest
release](https://github.com/invoke-ai/InvokeAI/releases/latest), and
look for a series of files named:
1. The source installer is distributed in ZIP files. Go to the
[latest release](https://github.com/invoke-ai/InvokeAI/releases/latest), and
look for a series of files named:
- invokeAI-src-installer-mac.zip
- invokeAI-src-installer-windows.zip
- invokeAI-src-installer-linux.zip
- invokeAI-src-installer-mac.zip
- invokeAI-src-installer-windows.zip
- invokeAI-src-installer-linux.zip
Download the one that is appropriate for your operating system.
Download the one that is appropriate for your operating system.
2. Unpack the zip file into a directory that has at least 18G of free
space. Do *not* unpack into a directory that has an earlier version of
InvokeAI.
2. Unpack the zip file into a directory that has at least 18G of free space. Do
_not_ unpack into a directory that has an earlier version of InvokeAI.
This will create a new directory named "InvokeAI". This example
shows how this would look using the `unzip` command-line tool,
but you may use any graphical or command-line Zip extractor:
```bash
C:\Documents\Linco> unzip invokeAI-windows.zip
Archive: C: \Linco\Downloads\invokeAI-linux.zip
creating: invokeAI\
inflating: invokeAI\install.bat
inflating: invokeAI\readme.txt
```
This will create a new directory named "InvokeAI". This example shows how
this would look using the `unzip` command-line tool, but you may use any
graphical or command-line Zip extractor:
3. If you are using a desktop GUI, double-click the installer file.
It will be named `install.bat` on Windows systems and `install.sh`
on Linux and Macintosh systems.
```cmd
C:\Documents\Linco> unzip invokeAI-windows.zip
Archive: C:\Linco\Downloads\invokeAI-windows.zip
creating: invokeAI\
inflating: invokeAI\install.bat
inflating: invokeAI\readme.txt
```
4. Alternatively, form the command line, run the shell script or .bat
file:
3. If you are using a desktop GUI, double-click the installer file. It will be
named `install.bat` on Windows systems and `install.sh` on Linux and
Macintosh systems.
```bash
C:\Documents\Linco> cd invokeAI
C:\Documents\Linco> install.bat
```
4. Alternatively, from the command line, run the shell script or .bat file:
5. Sit back and let the install script work. It will install various
binary requirements including Conda, Git and Python, then download
the current InvokeAI code and install it along with its
dependencies.
```cmd
C:\Documents\Linco> cd invokeAI
C:\Documents\Linco\invokeAI> install.bat
```
6. After installation completes, the installer will launch a script
called `preload_models.py`, which will guide you through the
first-time process of selecting one or more Stable Diffusion model
weights files, downloading and configuring them.
5. Sit back and let the install script work. It will install various binary
requirements including Conda, Git and Python, then download the current
InvokeAI code and install it along with its dependencies.
Note that the main Stable Diffusion weights file is protected by a
license agreement that you must agree to in order to use. The
script will list the steps you need to take to create an account on
the official site that hosts the weights files, accept the
agreement, and provide an access token that allows InvokeAI to
legally download and install the weights files.
6. After installation completes, the installer will launch a script called
`preload_models.py`, which will guide you through the first-time process of
selecting one or more Stable Diffusion model weights files, downloading and
configuring them.
If you have already downloaded the weights file(s) for another
Stable Diffusion distribution, you may skip this step (by selecting
"skip" when prompted) and configure InvokeAI to use the
previously-downloaded files. The process for this is described in
[INSTALLING_MODELS.md].
Note that the main Stable Diffusion weights file is protected by a license
agreement that you must agree to in order to use it. The script will list the
steps you need to take to create an account on the official site that hosts
the weights files, accept the agreement, and provide an access token that
allows InvokeAI to legally download and install the weights files.
7. The script will now exit and you'll be ready to generate some
images. The invokeAI directory will contain numerous files. Look
for a shell script named `invoke.sh` (Linux/Mac) or `invoke.bat`
(Windows). Launch the script by double-clicking it or typing
its name at the command-line:
If you have already downloaded the weights file(s) for another Stable
Diffusion distribution, you may skip this step (by selecting "skip" when
prompted) and configure InvokeAI to use the previously-downloaded files. The
process for this is described in [Installing Models](INSTALLING_MODELS.md).
```bash
C:\Documents\Linco\invokeAI> cd invokeAI
C:\Documents\Linco\invokeAI> invoke.bat
```
7. The script will now exit and you'll be ready to generate some images. The
invokeAI directory will contain numerous files. Look for a shell script
named `invoke.sh` (Linux/Mac) or `invoke.bat` (Windows). Launch the script
by double-clicking it or typing its name at the command-line:
The `invoke.bat` (`invoke.sh`) script will give you the choice of
starting (1) the command-line interface, or (2) the web GUI. If you
start the latter, you can load the user interface by pointing your
browser at http://localhost:9090.
```cmd
C:\Documents\Linco> cd invokeAI
C:\Documents\Linco\invokeAI> invoke.bat
```
The `invoke` script also offers you a third option labeled "open
the developer console". If you choose this option, you will be
dropped into a command-line interface in which you can run python
commands directly, access developer tools, and launch InvokeAI
with customized options. To do the latter, you would launch the
script `scripts/invoke.py` as shown in this example:
The `invoke.bat` (`invoke.sh`) script will give you the choice of starting (1)
the command-line interface, or (2) the web GUI. If you start the latter, you can
load the user interface by pointing your browser at http://localhost:9090.
```bash
python scripts\invoke.py --web --max_load_models=3 \
--model=waifu-1.3 --steps=30 --outdir=C:/Documents/AIPhotos
```
The `invoke` script also offers you a third option labeled "open the developer
console". If you choose this option, you will be dropped into a command-line
interface in which you can run python commands directly, access developer tools,
and launch InvokeAI with customized options. To do the latter, you would launch
the script `scripts/invoke.py` as shown in this example:
These options are described in detail in the [Command-Line
Interface](../features/CLI.md) documentation.
```cmd
python scripts/invoke.py --web --max_load_models=3 \
--model=waifu-1.3 --steps=30 --outdir=C:/Documents/AIPhotos
```
These options are described in detail in the
[Command-Line Interface](../features/CLI.md) documentation.
## Updating to newer versions
This section describes how to update InvokeAI to new versions of the
software.
This section describes how to update InvokeAI to new versions of the software.
### Updating the stable version
This distribution is changing rapidly, and we add new features on a
daily basis. To update to the latest released version (recommended),
run the `update.sh` (Linux/Mac) or `update.bat` (Windows)
scripts. This will fetch the latest release and re-run the
`preload_models` script to download any updated models files that may
be needed. You can also use this to add additional models that you did
not select at installation time.
This distribution is changing rapidly, and we add new features on a daily basis.
To update to the latest released version (recommended), run the `update.sh`
(Linux/Mac) or `update.bat` (Windows) scripts. This will fetch the latest
release and re-run the `preload_models` script to download any updated models
files that may be needed. You can also use this to add additional models that
you did not select at installation time.
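For example, from within the invokeAI directory (the exact invocation may vary slightly depending on your shell):

```bash
./update.sh        # Linux/Mac; on Windows, run update.bat instead
```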
### Updating to the development version
There may be times that there is a feature in the `development` branch
of InvokeAI that you'd like to take advantage of. Or perhaps there is
a branch that corrects an annoying bug. To do this, you will use the
developer's console.
There may be times that there is a feature in the `development` branch of
InvokeAI that you'd like to take advantage of. Or perhaps there is a branch that
corrects an annoying bug. To do this, you will use the developer's console.
From within the invokeAI directory, run the command `invoke.sh`
(Linux/Mac) or `invoke.bat` (Windows) and selection option (3) to open
the developers console. Then run the following command to get the
`development branch`:
From within the invokeAI directory, run the command `invoke.sh` (Linux/Mac) or
`invoke.bat` (Windows) and select option (3) to open the developer's console.
Then run the following command to switch to the `development` branch:
```bash
git checkout development
@@ -150,19 +139,18 @@ git pull
conda env update
```
You can now close the developer console and run `invoke` as before.
If you get complaints about missing models, then you may need to do
the additional step of running `preload_models.py`. This happens
relatively infrequently. To do this, simply open up the developer's
console again and type `python scripts/preload_models.py`.
You can now close the developer console and run `invoke` as before. If you get
complaints about missing models, then you may need to do the additional step of
running `preload_models.py`. This happens relatively infrequently. To do this,
simply open up the developer's console again and type
`python scripts/preload_models.py`.
## Troubleshooting
If you run into problems during or after installation, the InvokeAI
team is available to help you. Either create an
[Issue](https://github.com/invoke-ai/InvokeAI/issues) at our GitHub
site, or make a request for help on the "bugs-and-support" channel of
our [Discord server](https://discord.gg/ZmtBAhwWhy). We are a 100%
volunteer organization, but typically somebody will be available to
help you within 24 hours, and often much sooner.
If you run into problems during or after installation, the InvokeAI team is
available to help you. Either create an
[Issue](https://github.com/invoke-ai/InvokeAI/issues) at our GitHub site, or
make a request for help on the "bugs-and-support" channel of our
[Discord server](https://discord.gg/ZmtBAhwWhy). We are a 100% volunteer
organization, but typically somebody will be available to help you within 24
hours, and often much sooner.

View File

@@ -1,9 +1,7 @@
---
title: Installation Overview
title: Overview
---
## Installation
We offer several ways to install InvokeAI, each one suited to your
experience and preferences.
@@ -18,6 +16,14 @@ experience and preferences.
work", don't have an interest in tinkering with it, and do not
care about upgrading to unreleased experimental features.
**Important Caveats**
- This script does not support AMD GPUs. For Linux AMD support,
please use the manual or source code installer methods.
- This script has difficulty on some Macintosh machines
that have previously been used for Python development due to
conflicting development tools versions. Mac developers may wish
to try the source code installer or one of the manual methods instead.
2. [Source code installer](INSTALL_SOURCE.md)
This is a script that will install InvokeAI and all its essential

View File

@@ -31,7 +31,6 @@ dependencies:
- pip:
- dependency_injector==4.40.0
- getpass_asterisk
- gfpgan
- omegaconf==2.1.1
- pyreadline3
- realesrgan
@@ -40,6 +39,7 @@ dependencies:
- git+https://github.com/openai/CLIP.git@main#egg=clip
- git+https://github.com/Birch-san/k-diffusion.git@mps#egg=k_diffusion
- git+https://github.com/invoke-ai/clipseg.git@relaxed-python-requirement#egg=clipseg
- git+https://github.com/invoke-ai/GFPGAN#egg=gfpgan
- -e .
variables:
PYTORCH_ENABLE_MPS_FALLBACK: 1

View File

@@ -18,7 +18,6 @@ dependencies:
- flask_cors==3.0.10
- flask_socketio==5.3.0
- getpass_asterisk
- gfpgan
- imageio-ffmpeg==0.4.2
- imageio==2.9.0
- kornia==0.6.0
@@ -42,4 +41,5 @@ dependencies:
- git+https://github.com/openai/CLIP.git@main#egg=clip
- git+https://github.com/Birch-san/k-diffusion.git@mps#egg=k_diffusion
- git+https://github.com/invoke-ai/clipseg.git@relaxed-python-requirement#egg=clipseg
- git+https://github.com/invoke-ai/GFPGAN#egg=gfpgan
- -e .

View File

@@ -21,7 +21,6 @@ dependencies:
- flask_cors==3.0.10
- flask_socketio==5.3.0
- getpass_asterisk
- gfpgan
- imageio-ffmpeg==0.4.2
- imageio==2.9.0
- kornia==0.6.0
@@ -42,4 +41,5 @@ dependencies:
- git+https://github.com/openai/CLIP.git@main#egg=clip
- git+https://github.com/Birch-san/k-diffusion.git@mps#egg=k_diffusion
- git+https://github.com/invoke-ai/clipseg.git@relaxed-python-requirement#egg=clipseg
- git+https://github.com/invoke-ai/GFPGAN#egg=gfpgan
- -e .

View File

@@ -22,7 +22,6 @@ dependencies:
- flask_cors==3.0.10
- flask_socketio==5.3.0
- getpass_asterisk
- gfpgan
- imageio-ffmpeg==0.4.2
- imageio==2.9.0
- kornia==0.6.0
@@ -43,4 +42,5 @@ dependencies:
- git+https://github.com/openai/CLIP.git@main#egg=clip
- git+https://github.com/Birch-san/k-diffusion.git@mps#egg=k_diffusion
- git+https://github.com/invoke-ai/clipseg.git@relaxed-python-requirement#egg=clipseg
- git+https://github.com/invoke-ai/GFPGAN#egg=gfpgan
- -e .

View File

@@ -9,7 +9,6 @@ flask_cors==3.0.10
flask_socketio==5.3.0
flaskwebgui==0.3.7
getpass_asterisk
gfpgan
huggingface-hub
imageio
imageio-ffmpeg
@@ -34,3 +33,4 @@ transformers==4.21.*
git+https://github.com/openai/CLIP.git@main#egg=clip
git+https://github.com/Birch-san/k-diffusion.git@mps#egg=k-diffusion
git+https://github.com/invoke-ai/clipseg.git@relaxed-python-requirement#egg=clipseg
git+https://github.com/invoke-ai/GFPGAN#egg=gfpgan

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@@ -6,7 +6,7 @@
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>InvokeAI - A Stable Diffusion Toolkit</title>
<link rel="shortcut icon" type="icon" href="./assets/favicon.0d253ced.ico" />
<script type="module" crossorigin src="./assets/index.1fc0290b.js"></script>
<script type="module" crossorigin src="./assets/index.a8ba2a6c.js"></script>
<link rel="stylesheet" href="./assets/index.40a72c80.css">
</head>

frontend/package-lock.json (generated, 10651 lines)

File diff suppressed because it is too large

View File

@@ -1,4 +1,4 @@
import { IconButton, Image } from '@chakra-ui/react';
import { IconButton, Image, Spinner } from '@chakra-ui/react';
import { useState } from 'react';
import { FaAngleLeft, FaAngleRight } from 'react-icons/fa';
import { RootState, useAppDispatch, useAppSelector } from '../../app/store';
@@ -30,7 +30,6 @@ export const imagesSelector = createSelector(
return {
imageToDisplay: intermediateImage ? intermediateImage : currentImage,
isIntermediate: intermediateImage,
currentCategory,
isOnFirstImage: currentImageIndex === 0,
isOnLastImage:
@@ -56,7 +55,6 @@ export default function CurrentImagePreview() {
isOnLastImage,
shouldShowImageDetails,
imageToDisplay,
isIntermediate,
} = useAppSelector(imagesSelector);
const [shouldShowNextPrevButtons, setShouldShowNextPrevButtons] =
@@ -83,8 +81,8 @@ export default function CurrentImagePreview() {
{imageToDisplay && (
<Image
src={imageToDisplay.url}
width={isIntermediate ? imageToDisplay.width : undefined}
height={isIntermediate ? imageToDisplay.height : undefined}
width={imageToDisplay.width}
height={imageToDisplay.height}
/>
)}
{!shouldShowImageDetails && (

File diff suppressed because it is too large

View File

@@ -1,172 +0,0 @@
@echo off
@rem This script will install git (if not found on the PATH variable)
@rem using micromamba (an 8mb static-linked single-file binary, conda replacement).
@rem For users who already have git, this step will be skipped.
@rem Next, it'll download the project's source code.
@rem Then it will download a self-contained, standalone Python and unpack it.
@rem Finally, it'll create the Python virtual environment and preload the models.
@rem This enables a user to install this project without manually installing git or Python
echo ***** Installing InvokeAI.. *****
set PATH=c:\windows\system32
@rem Config
set INSTALL_ENV_DIR=%cd%\installer_files\env
@rem https://mamba.readthedocs.io/en/latest/installation.html
set MICROMAMBA_DOWNLOAD_URL=https://micro.mamba.pm/api/micromamba/win-64/latest
set RELEASE_URL=https://github.com/tildebyte/InvokeAI
set RELEASE_SOURCEBALL=/archive/feat-install-pip-compile.tar.gz
set PYTHON_BUILD_STANDALONE_URL=https://github.com/indygreg/python-build-standalone/releases/download
set PYTHON_BUILD_STANDALONE=20221002/cpython-3.10.7+20221002-x86_64-pc-windows-msvc-shared-install_only.tar.gz
set PACKAGES_TO_INSTALL=
call git --version >.tmp1 2>.tmp2
if "%ERRORLEVEL%" NEQ "0" set PACKAGES_TO_INSTALL=%PACKAGES_TO_INSTALL% git
@rem Cleanup
del /q .tmp1 .tmp2
@rem (if necessary) install git into a contained environment
if "%PACKAGES_TO_INSTALL%" NEQ "" (
@rem download micromamba
echo ***** Downloading micromamba from %MICROMAMBA_DOWNLOAD_URL% to micromamba.exe *****
call curl -L "%MICROMAMBA_DOWNLOAD_URL%" > micromamba.tbz2
set err_msg=----- micromamba source unpack failed -----
tar -jxf micromamba.tbz2
if %errorlevel% neq 0 goto err_exit
move Library\bin\micromamba.exe micromamba.exe
rd /s /q Library info
del /q micromamba.tbz2
@rem test the mamba binary
echo ***** Micromamba version: *****
call micromamba.exe --version
@rem create the installer env
if not exist "%INSTALL_ENV_DIR%" (
call micromamba.exe create -y --prefix "%INSTALL_ENV_DIR%"
)
echo ***** Packages to install:%PACKAGES_TO_INSTALL% *****
call micromamba.exe install -y --prefix "%INSTALL_ENV_DIR%" -c conda-forge %PACKAGES_TO_INSTALL%
if not exist "%INSTALL_ENV_DIR%" (
echo ----- There was a problem while installing "%PACKAGES_TO_INSTALL%" using micromamba. Cannot continue. -----
pause
exit /b
)
)
del /q micromamba.exe
@rem For 'git' only
set PATH=%INSTALL_ENV_DIR%\Library\bin;%PATH%
@rem Download/unpack/clean up InvokeAI release sourceball
set err_msg=----- InvokeAI source download failed -----
curl -L %RELEASE_URL%/%RELEASE_SOURCEBALL% --output InvokeAI.tgz
if %errorlevel% neq 0 goto err_exit
set err_msg=----- InvokeAI source unpack failed -----
tar -zxf InvokeAI.tgz
if %errorlevel% neq 0 goto err_exit
del /q InvokeAI.tgz
set err_msg=----- InvokeAI source copy failed -----
cd InvokeAI-*
xcopy . .. /e /h
if %errorlevel% neq 0 goto err_exit
cd ..
@rem cleanup
for /f %%i in ('dir /b InvokeAI-*') do rd /s /q %%i
rd /s /q .dev_scripts .github docker-build tests
del /q requirements.in requirements-mkdocs.txt shell.nix
echo ***** Unpacked InvokeAI source *****
@rem Download/unpack/clean up python-build-standalone
set err_msg=----- Python download failed -----
curl -L %PYTHON_BUILD_STANDALONE_URL%/%PYTHON_BUILD_STANDALONE% --output python.tgz
if %errorlevel% neq 0 goto err_exit
set err_msg=----- Python unpack failed -----
tar -zxf python.tgz
if %errorlevel% neq 0 goto err_exit
del /q python.tgz
echo ***** Unpacked python-build-standalone *****
@rem create venv
set err_msg=----- problem creating venv -----
.\python\python -E -s -m venv .venv
@rem In reality, the following is ALL that 'activate.bat' does,
@rem aside from setting the prompt, which we don't care about
set PYTHONPATH=
set PATH=.venv\Scripts;%PATH%
if %errorlevel% neq 0 goto err_exit
echo ***** Created Python virtual environment *****
@rem Print venv's Python version
set err_msg=----- problem calling venv's python -----
echo We're running under
.venv\Scripts\python --version
if %errorlevel% neq 0 goto err_exit
set err_msg=----- pip update failed -----
.venv\Scripts\python -m pip install --no-cache-dir --no-warn-script-location --upgrade pip
if %errorlevel% neq 0 goto err_exit
echo ***** Updated pip *****
set err_msg=----- requirements file copy failed -----
copy installer\py3.10-windows-x86_64-cuda-reqs.txt requirements.txt
if %errorlevel% neq 0 goto err_exit
set err_msg=----- main pip install failed -----
.venv\Scripts\python -m pip install --no-cache-dir --no-warn-script-location -r requirements.txt
if %errorlevel% neq 0 goto err_exit
set err_msg=----- clipseg install failed -----
.venv\Scripts\python -m pip install --no-cache-dir --no-warn-script-location -e git+https://github.com/invoke-ai/clipseg.git@models-rename#egg=clipseg
if %errorlevel% neq 0 goto err_exit
set err_msg=----- InvokeAI setup failed -----
.venv\Scripts\python -m pip install --no-cache-dir --no-warn-script-location -e .
if %errorlevel% neq 0 goto err_exit
echo ***** Installed Python dependencies *****
@rem preload the models
call .venv\Scripts\python scripts\preload_models.py
set err_msg=----- model download clone failed -----
if %errorlevel% neq 0 goto err_exit
echo ***** Finished downloading models *****
echo ***** Installing invoke.bat ******
cp installer\invoke.bat .\invoke.bat
@rem more cleanup
rd /s /q installer installer_files
pause
exit
:err_exit
echo %err_msg%
pause
exit

View File

@@ -1,17 +0,0 @@
InvokeAI
Project homepage: https://github.com/invoke-ai/InvokeAI
Installation on Windows:
NOTE: You might need to enable Windows Long Paths. If you're not sure,
then you almost certainly need to. Simply double-click the 'WinLongPathsEnabled.reg'
file. Note that you will need to have admin privileges in order to
do this.
Please double-click the 'install.bat' file (while keeping it inside the invokeAI folder).
Installation on Linux and Mac:
Please open the terminal, and run './install.sh' (while keeping it inside the invokeAI folder).
After installation, please run the 'invoke.bat' file (on Windows) or 'invoke.sh'
file (on Linux/Mac) to start InvokeAI.

View File

@@ -17,9 +17,9 @@ set PATH=c:\windows\system32
@rem Config
set INSTALL_ENV_DIR=%cd%\installer_files\env
@rem https://mamba.readthedocs.io/en/latest/installation.html
set MICROMAMBA_DOWNLOAD_URL=https://micro.mamba.pm/api/micromamba/win-64/latest
set MICROMAMBA_DOWNLOAD_URL=https://github.com/cmdr2/stable-diffusion-ui/releases/download/v1.1/micromamba.exe
set RELEASE_URL=https://github.com/invoke-ai/InvokeAI
set RELEASE_SOURCEBALL=/archive/refs/tags/2.1.3-rc5.tar.gz
set RELEASE_SOURCEBALL=/archive/refs/heads/v2.1.3.tar.gz
set PYTHON_BUILD_STANDALONE_URL=https://github.com/indygreg/python-build-standalone/releases/download
set PYTHON_BUILD_STANDALONE=20221002/cpython-3.10.7+20221002-x86_64-pc-windows-msvc-shared-install_only.tar.gz
@@ -36,15 +36,7 @@ if "%PACKAGES_TO_INSTALL%" NEQ "" (
@rem download micromamba
echo ***** Downloading micromamba from %MICROMAMBA_DOWNLOAD_URL% to micromamba.exe *****
call curl -L "%MICROMAMBA_DOWNLOAD_URL%" > micromamba.tbz2
set err_msg=----- micromamba source unpack failed -----
tar -jxf micromamba.tbz2
if %errorlevel% neq 0 goto err_exit
move Library\bin\micromamba.exe micromamba.exe
rd /s /q Library info
del /q micromamba.tbz2
call curl -L "%MICROMAMBA_DOWNLOAD_URL%" > micromamba.exe
@rem test the mamba binary
echo ***** Micromamba version: *****
@@ -140,7 +132,7 @@ set err_msg=----- main pip install failed -----
if %errorlevel% neq 0 goto err_exit
set err_msg=----- clipseg install failed -----
.venv\Scripts\python -m pip install --no-cache-dir --no-warn-script-location -e git+https://github.com/invoke-ai/clipseg.git@relaxed-python-requirement#egg=clipseg
.venv\Scripts\python -m pip install --no-cache-dir --no-warn-script-location git+https://github.com/invoke-ai/clipseg.git@relaxed-python-requirement#egg=clipseg
if %errorlevel% neq 0 goto err_exit
set err_msg=----- InvokeAI setup failed -----
@@ -157,8 +149,8 @@ if %errorlevel% neq 0 goto err_exit
echo ***** Finished downloading models *****
echo ***** Installing invoke.bat ******
cp installer\invoke.bat .\invoke.bat
copy installer\invoke.bat .\invoke.bat
echo All done! Execute the file invoke.bat in this directory to start InvokeAI
@rem more cleanup
rd /s /q installer installer_files

View File

@@ -74,7 +74,7 @@ fi
INSTALL_ENV_DIR="$(pwd)/installer_files/env"
MICROMAMBA_DOWNLOAD_URL="https://micro.mamba.pm/api/micromamba/${MAMBA_OS_NAME}-${MAMBA_ARCH}/latest"
RELEASE_URL=https://github.com/invoke-ai/InvokeAI
RELEASE_SOURCEBALL=/archive/refs/tags/2.1.3-rc5.tar.gz
RELEASE_SOURCEBALL=/archive/refs/heads/v2.1.3.tar.gz
PYTHON_BUILD_STANDALONE_URL=https://github.com/indygreg/python-build-standalone/releases/download
if [ "$OS_NAME" == "darwin" ]; then
PYTHON_BUILD_STANDALONE=20221002/cpython-3.10.7+20221002-${PY_ARCH}-apple-darwin-install_only.tar.gz
@@ -184,7 +184,7 @@ _err_msg="\n----- main pip install failed -----\n"
_err_exit $? _err_msg
_err_msg="\n----- clipseg install failed -----\n"
.venv/bin/python3 -m pip install --no-cache-dir --no-warn-script-location -e git+https://github.com/invoke-ai/clipseg.git@relaxed-python-requirement#egg=clipseg
.venv/bin/python3 -m pip install --no-cache-dir --no-warn-script-location git+https://github.com/invoke-ai/clipseg.git@relaxed-python-requirement#egg=clipseg
_err_exit $? _err_msg
_err_msg="\n----- InvokeAI setup failed -----\n"
@@ -206,5 +206,6 @@ cp installer/invoke.sh .
# more cleanup
rm -rf installer/ installer_files/
echo "All done! Run the command './invoke.sh' to start InvokeAI."
read -p "Press any key to exit..."
exit

View File

@@ -16,7 +16,9 @@ IF /I "%restore%" == "1" (
.venv\Scripts\python scripts\invoke.py --web
) ELSE IF /I "%restore%" == "3" (
echo Developer Console
.venv\Scripts\python
call where python
call python --version
cmd /k
) ELSE (
echo Invalid selection

View File

@@ -561,17 +561,15 @@ class Generate:
):
# retrieve the seed from the image;
seed = None
image_metadata = None
prompt = None
args = metadata_from_png(image_path)
seed = args.seed
prompt = args.prompt
print(f'>> retrieved seed {seed} and prompt "{prompt}" from {image_path}')
if not seed:
print('* Could not recover seed for image. Replacing with 42. This will not affect image quality')
seed = 42
args = metadata_from_png(image_path)
seed = opt.seed or args.seed
if seed is None or seed < 0:
seed = random.randrange(0, np.iinfo(np.uint32).max)
prompt = opt.prompt or args.prompt or ''
print(f'>> using seed {seed} and prompt "{prompt}" for {image_path}')
# try to reuse the same filename prefix as the original file.
# we take everything up to the first period
@@ -618,7 +616,11 @@ class Generate:
extend_instructions[direction]=int(pixels)
except ValueError:
print(f'** invalid extension instruction. Use <directions> <pixels>..., as in "top 64 left 128 right 64 bottom 64"')
if len(extend_instructions)>0:
opt.seed = seed
opt.prompt = prompt
if len(extend_instructions) > 0:
restorer = Outcrop(image,self,)
return restorer.process (
extend_instructions,
@@ -1033,7 +1035,9 @@ class Generate:
return True
return False
def _check_for_erasure(self, image):
def _check_for_erasure(self, image:Image.Image)->bool:
if image.mode not in ('RGBA','RGB'):
return False
width, height = image.size
pixdata = image.load()
colored = 0
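The seed-recovery logic above prefers an explicitly supplied seed, then the seed stored in the PNG metadata, and finally a fresh random seed, instead of the old hard-coded fallback of 42. A minimal sketch of that fallback order, with hypothetical helper and argument names:

```python
import random
import numpy as np

def recover_seed_and_prompt(opt_seed, opt_prompt, image_path, metadata_from_png):
    # hypothetical helper mirroring the hunk above: CLI values win,
    # then PNG metadata, then a random 32-bit seed / empty prompt
    args = metadata_from_png(image_path)
    seed = opt_seed or args.seed
    if seed is None or seed < 0:
        seed = random.randrange(0, np.iinfo(np.uint32).max)
    prompt = opt_prompt or args.prompt or ''
    print(f'>> using seed {seed} and prompt "{prompt}" for {image_path}')
    return seed, prompt
```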

View File

@@ -247,8 +247,6 @@ class Args(object):
switches.append('--seamless')
if a['hires_fix']:
switches.append('--hires_fix')
if a['strength'] and a['strength']>0:
switches.append(f'-f {a["strength"]}')
# img2img generations have parameters relevant only to them and have special handling
if a['init_img'] and len(a['init_img'])>0:
@@ -554,14 +552,8 @@ class Args(object):
postprocessing_group.add_argument(
'--gfpgan_model_path',
type=str,
default='./GFPGANv1.4.pth',
help='Indicates the path to the GFPGAN model, relative to --gfpgan_dir.',
)
postprocessing_group.add_argument(
'--gfpgan_dir',
type=str,
default='./models/gfpgan',
help='Indicates the directory containing the GFPGAN code.',
default='./models/gfpgan/GFPGANv1.4.pth',
help='Indicates the path to the GFPGAN model',
)
web_server_group.add_argument(
'--web',
@@ -866,6 +858,11 @@ class Args(object):
default=32,
help='When outpainting, the tile size to use for filling outpaint areas',
)
postprocessing_group.add_argument(
'--new_prompt',
type=str,
help='Change the text prompt applied during postprocessing (default, use original generation prompt)',
)
postprocessing_group.add_argument(
'-ft',
'--facetool',
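For reference, a standalone argparse sketch of the consolidated GFPGAN option and the new --new_prompt flag; only the options shown in the hunk are included, and the surrounding parser setup is assumed:

```python
import argparse

parser = argparse.ArgumentParser()
# --gfpgan_dir is gone; the model is now addressed by a single path
parser.add_argument(
    '--gfpgan_model_path',
    type=str,
    default='./models/gfpgan/GFPGANv1.4.pth',
    help='Indicates the path to the GFPGAN model',
)
# lets postprocessing (!fix) run with a different text prompt than the original
parser.add_argument(
    '--new_prompt',
    type=str,
    help='Change the text prompt applied during postprocessing (default: use original generation prompt)',
)
```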

View File

@@ -63,7 +63,7 @@ class Generator():
**kwargs
)
results = []
seed = seed if seed is not None else self.new_seed()
seed = seed if seed is not None and seed >= 0 else self.new_seed()
first_seed = seed
seed, initial_noise = self.generate_initial_noise(seed, width, height)
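Both this hunk and the Inpaint change further down apply the same rule: treat a missing or negative seed as "no seed supplied" and mint a new one. A one-function sketch with hypothetical names:

```python
def resolve_seed(seed, new_seed):
    # None and negative values both mean "no seed supplied";
    # anything else is used verbatim, e.g. resolve_seed(seed, self.new_seed)
    return seed if seed is not None and seed >= 0 else new_seed()
```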

View File

@@ -169,7 +169,8 @@ class Inpaint(Img2Img):
# Fill missing areas of original image
init_filled = self.tile_fill_missing(
self.pil_image.copy(),
seed = self.seed,
seed = self.seed if (self.seed is not None
and self.seed >= 0) else self.new_seed(),
tile_size = tile_size
)
init_filled.paste(init_image, (0,0), init_image.split()[-1])

View File

@@ -10,8 +10,6 @@ from ldm.models.diffusion.ddim import DDIMSampler
from ldm.invoke.generator.omnibus import Omnibus
from ldm.models.diffusion.shared_invokeai_diffusion import InvokeAIDiffuserComponent
from PIL import Image
from ldm.invoke.devices import choose_autocast
from ldm.invoke.image_util import InitImageResizer
class Txt2Img2Img(Generator):
def __init__(self, model, precision):
@@ -46,13 +44,16 @@ class Txt2Img2Img(Generator):
ddim_num_steps=steps, ddim_eta=ddim_eta, verbose=False
)
#x = self.get_noise(init_width, init_height)
x = x_T
if self.free_gpu_mem and self.model.model.device != self.model.device:
self.model.model.to(self.model.device)
samples, _ = sampler.sample(
batch_size = 1,
S = steps,
x_T = x_T,
x_T = x,
conditioning = c,
shape = shape,
verbose = False,
@@ -68,21 +69,11 @@ class Txt2Img2Img(Generator):
)
# resizing
image = self.sample_to_image(samples)
image = InitImageResizer(image).resize(width, height)
image = np.array(image).astype(np.float32) / 255.0
image = image[None].transpose(0, 3, 1, 2)
image = torch.from_numpy(image)
image = 2.0 * image - 1.0
image = image.to(self.model.device)
scope = choose_autocast(self.precision)
with scope(self.model.device.type):
samples = self.model.get_first_stage_encoding(
self.model.encode_first_stage(image)
) # move back to latent space
samples = torch.nn.functional.interpolate(
samples,
size=(height // self.downsampling_factor, width // self.downsampling_factor),
mode="bilinear"
)
t_enc = int(strength * steps)
ddim_sampler = DDIMSampler(self.model, device=self.model.device)
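This hunk reverts the image-space hires resize and restores interpolation in latent space. A sketch of the restored step, assuming the usual downsampling factor of 8:

```python
import torch

def upscale_latents(samples, width, height, downsampling_factor=8):
    # resize directly in latent space instead of decoding, resizing
    # the image, and re-encoding it through the first stage
    return torch.nn.functional.interpolate(
        samples,
        size=(height // downsampling_factor, width // downsampling_factor),
        mode="bilinear",
    )
```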

View File

@@ -109,10 +109,13 @@ class ModelCache(object):
Set the default model. The change will not take
effect until you call model_cache.commit()
'''
print(f'DEBUG: before set_default_model()\n{OmegaConf.to_yaml(self.config)}')
assert model_name in self.models,f"unknown model '{model_name}'"
for model in self.models:
self.models[model].pop('default',None)
self.models[model_name]['default'] = True
config = self.config
for model in config:
config[model].pop('default',None)
config[model_name]['default'] = True
print(f'DEBUG: after set_default_model():\n{OmegaConf.to_yaml(self.config)}')
def list_models(self) -> dict:
'''
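The fix writes the 'default' flag into the loaded config rather than the in-memory models dict, so the change actually lands when model_cache.commit() is called. A minimal sketch using a plain dict as a stand-in for the OmegaConf config:

```python
def set_default_model(config: dict, model_name: str) -> None:
    # clear any existing default, then mark the requested model
    assert model_name in config, f"unknown model '{model_name}'"
    for model in config:
        config[model].pop('default', None)
    config[model_name]['default'] = True
```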

View File

@@ -636,7 +636,7 @@ def split_weighted_subprompts(text, skip_normalize=False)->list:
weight_sum = sum(map(lambda x: x[1], parsed_prompts))
if weight_sum == 0:
print(
"Warning: Subprompt weights add up to zero. Discarding and using even weights instead.")
"* Warning: Subprompt weights add up to zero. Discarding and using even weights instead.")
equal_weight = 1 / max(len(parsed_prompts), 1)
return [(x[0], equal_weight) for x in parsed_prompts]
return [(x[0], x[1] / weight_sum) for x in parsed_prompts]
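The weight normalisation, including the zero-sum fallback warned about above, sketched in isolation:

```python
def normalize_subprompt_weights(parsed_prompts):
    # parsed_prompts: list of (subprompt, weight) pairs
    weight_sum = sum(weight for _, weight in parsed_prompts)
    if weight_sum == 0:
        # degenerate case: fall back to even weights rather than dividing by zero
        equal_weight = 1 / max(len(parsed_prompts), 1)
        return [(prompt, equal_weight) for prompt, _ in parsed_prompts]
    return [(prompt, weight / weight_sum) for prompt, weight in parsed_prompts]
```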

View File

@@ -284,6 +284,7 @@ class Completer(object):
switch,partial_path = match.groups()
partial_path = partial_path.lstrip()
matches = list()
path = os.path.expanduser(partial_path)
@@ -321,6 +322,7 @@ class Completer(object):
matches.append(
switch+os.path.join(os.path.dirname(full_path), node)
)
return matches
class DummyCompleter(Completer):

View File

@@ -2,9 +2,9 @@ class Restoration():
def __init__(self) -> None:
pass
def load_face_restore_models(self, gfpgan_dir='./src/gfpgan', gfpgan_model_path='experiments/pretrained_models/GFPGANv1.4.pth'):
def load_face_restore_models(self, gfpgan_model_path='./models/gfpgan/GFPGANv1.4.pth'):
# Load GFPGAN
gfpgan = self.load_gfpgan(gfpgan_dir, gfpgan_model_path)
gfpgan = self.load_gfpgan(gfpgan_model_path)
if gfpgan.gfpgan_model_exists:
print('>> GFPGAN Initialized')
else:
@@ -22,9 +22,9 @@ class Restoration():
return gfpgan, codeformer
# Face Restore Models
def load_gfpgan(self, gfpgan_dir, gfpgan_model_path):
def load_gfpgan(self, gfpgan_model_path):
from ldm.invoke.restoration.gfpgan import GFPGAN
return GFPGAN(gfpgan_dir, gfpgan_model_path)
return GFPGAN(gfpgan_model_path)
def load_codeformer(self):
from ldm.invoke.restoration.codeformer import CodeFormerRestoration

View File

@@ -10,17 +10,14 @@ from PIL import Image
class GFPGAN():
def __init__(
self,
gfpgan_dir='models/gfpgan',
gfpgan_model_path='GFPGANv1.4.pth'
) -> None:
gfpgan_model_path='./models/gfpgan/GFPGANv1.4.pth') -> None:
self.model_path = os.path.join(gfpgan_dir, gfpgan_model_path)
self.model_path = os.path.join(gfpgan_model_path)
self.gfpgan_model_exists = os.path.isfile(self.model_path)
if not self.gfpgan_model_exists:
print('## NOT FOUND: GFPGAN model not found at ' + self.model_path)
return None
sys.path.append(os.path.abspath(gfpgan_dir))
def model_exists(self):
return os.path.isfile(self.model_path)
@@ -51,7 +48,7 @@ class GFPGAN():
f'>> WARNING: GFPGAN not initialized.'
)
print(
f'>> Download https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.4.pth to {self.model_path}, \nor change GFPGAN directory with --gfpgan_dir.'
f'>> Download https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.4.pth to {self.model_path}'
)
image = image.convert('RGB')
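With --gfpgan_dir removed, the restorer only needs the one model path. A stripped-down sketch of the new constructor behaviour (class name hypothetical):

```python
import os

class GFPGANRestorer:
    def __init__(self, gfpgan_model_path='./models/gfpgan/GFPGANv1.4.pth') -> None:
        # a single path replaces the old gfpgan_dir + relative-model-path pair
        self.model_path = gfpgan_model_path
        self.gfpgan_model_exists = os.path.isfile(self.model_path)
        if not self.gfpgan_model_exists:
            print('## NOT FOUND: GFPGAN model not found at ' + self.model_path)

    def model_exists(self) -> bool:
        return os.path.isfile(self.model_path)
```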

View File

@@ -28,11 +28,12 @@ class Outcrop(object):
self.generate._set_sampler()
def wrapped_callback(img,seed,**kwargs):
image_callback(img,orig_opt.seed,use_prefix=prefix,**kwargs)
preferred_seed = orig_opt.seed if orig_opt.seed >= 0 else seed
image_callback(img,preferred_seed,use_prefix=prefix,**kwargs)
result= self.generate.prompt2image(
orig_opt.prompt,
seed = orig_opt.seed, # uncomment to make it deterministic
opt.prompt,
seed = opt.seed or orig_opt.seed,
sampler = self.generate.sampler,
steps = opt.steps,
cfg_scale = opt.cfg_scale,
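The outcrop callback now keeps the original image's seed only when it was explicitly set; otherwise it reports the seed of the newly generated result. A sketch with the callback signature simplified:

```python
def wrapped_callback(img, seed, orig_seed, image_callback, prefix):
    # prefer the caller-supplied seed (>= 0 means "explicitly set"),
    # otherwise fall back to the seed actually used for the outcrop
    preferred_seed = orig_seed if orig_seed >= 0 else seed
    image_callback(img, preferred_seed, use_prefix=prefix)
```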

View File

@@ -282,7 +282,6 @@ class CrossAttention(nn.Module):
def get_attention_mem_efficient(self, q, k, v):
if q.device.type == 'cuda':
torch.cuda.empty_cache()
#print("in get_attention_mem_efficient with q shape", q.shape, ", k shape", k.shape, ", free memory is", get_mem_free_total(q.device))
return self.einsum_op_cuda(q, k, v)

View File

@@ -29,6 +29,7 @@ infile = None
def main():
"""Initialize command-line parsers and the diffusion model"""
global infile
print('* Initializing, be patient...')
opt = Args()
args = opt.parse_args()
@@ -46,7 +47,6 @@ def main():
print('--max_loaded_models must be >= 1; using 1')
args.max_loaded_models = 1
print('* Initializing, be patient...')
from ldm.generate import Generate
# these two lines prevent a horrible warning message from appearing
@@ -90,7 +90,12 @@ def main():
safety_checker=opt.safety_checker,
max_loaded_models=opt.max_loaded_models,
)
except (FileNotFoundError, IOError, KeyError) as e:
except FileNotFoundError:
print('** You appear to be missing configs/models.yaml')
print('** You can either exit this script and run scripts/preload_models.py, or fix the problem now.')
emergency_model_create(opt)
sys.exit(-1)
except (IOError, KeyError) as e:
print(f'{e}. Aborting.')
sys.exit(-1)
@@ -208,7 +213,10 @@ def main_loop(gen, opt):
setattr(opt,attr,path)
# retrieve previous value of seed if requested
if opt.seed is not None and opt.seed < 0:
# Exception: for postprocess operations negative seed values
# mean "discard the original seed and generate a new one"
# (this is a non-obvious hack and needs to be reworked)
if opt.seed is not None and opt.seed < 0 and operation != 'postprocess':
try:
opt.seed = last_results[opt.seed][1]
print(f'>> Reusing previous seed {opt.seed}')
@@ -277,7 +285,7 @@ def main_loop(gen, opt):
filename = f'{prefix}.{use_prefix}.{seed}.png'
tm = opt.text_mask[0]
th = opt.text_mask[1] if len(opt.text_mask)>1 else 0.5
formatted_dream_prompt = f'!mask {opt.prompt} -tm {tm} {th}'
formatted_dream_prompt = f'!mask {opt.input_file_path} -tm {tm} {th}'
path = file_writer.save_image_and_prompt_to_png(
image = image,
dream_prompt = formatted_dream_prompt,
@@ -317,7 +325,7 @@ def main_loop(gen, opt):
tool = re.match('postprocess:(\w+)',opt.last_operation).groups()[0]
add_postprocessing_to_metadata(
opt,
opt.prompt,
opt.input_file_path,
filename,
tool,
formatted_dream_prompt,
@@ -482,6 +490,7 @@ def do_command(command:str, gen, opt:Args, completer) -> tuple:
command = '-h'
return command, operation
def add_weights_to_config(model_path:str, gen, opt, completer):
print(f'>> Model import in process. Please enter the values needed to configure this model:')
print()
@@ -578,7 +587,7 @@ def write_config_file(conf_path, gen, model_name, new_config, clobber=False, mak
try:
print('>> Verifying that new model loads...')
yaml_str = gen.model_cache.add_model(model_name, new_config, clobber)
gen.model_cache.add_model(model_name, new_config, clobber)
assert gen.set_model(model_name) is not None, 'model failed to load'
except AssertionError as e:
print(f'** aborting **')
@@ -604,6 +613,7 @@ def do_textmask(gen, opt, callback):
image_path = os.path.join(opt.outdir,image_path)
assert os.path.exists(image_path), '** "{opt.prompt}" not found. Please enter the name of an existing image file to mask **'
assert opt.text_mask is not None and len(opt.text_mask) >= 1, '** Please provide a text mask with -tm **'
opt.input_file_path = image_path
tm = opt.text_mask[0]
threshold = float(opt.text_mask[1]) if len(opt.text_mask) > 1 else 0.5
gen.apply_textmask(
@@ -614,10 +624,17 @@ def do_textmask(gen, opt, callback):
)
def do_postprocess (gen, opt, callback):
file_path = opt.prompt # treat the prompt as the file pathname
file_path = opt.prompt # treat the prompt as the file pathname
if opt.new_prompt is not None:
opt.prompt = opt.new_prompt
else:
opt.prompt = None
if os.path.dirname(file_path) == '': #basename given
file_path = os.path.join(opt.outdir,file_path)
opt.input_file_path = file_path
tool=None
if opt.facetool_strength > 0:
tool = opt.facetool
@@ -656,7 +673,10 @@ def do_postprocess (gen, opt, callback):
def add_postprocessing_to_metadata(opt,original_file,new_file,tool,command):
original_file = original_file if os.path.exists(original_file) else os.path.join(opt.outdir,original_file)
new_file = new_file if os.path.exists(new_file) else os.path.join(opt.outdir,new_file)
meta = retrieve_metadata(original_file)['sd-metadata']
try:
meta = retrieve_metadata(original_file)['sd-metadata']
except AttributeError:
meta = retrieve_metadata(new_file)['sd-metadata']
if 'image' not in meta:
meta = metadata_dumps(opt,seeds=[opt.seed])['image']
meta['image'] = {}
@@ -704,7 +724,7 @@ def prepare_image_metadata(
elif len(prior_variations) > 0:
formatted_dream_prompt = opt.dream_prompt_str(seed=first_seed)
elif operation == 'postprocess':
formatted_dream_prompt = '!fix '+opt.dream_prompt_str(seed=seed)
formatted_dream_prompt = '!fix '+opt.dream_prompt_str(seed=seed,prompt=opt.input_file_path)
else:
formatted_dream_prompt = opt.dream_prompt_str(seed=seed)
return filename,formatted_dream_prompt
@@ -789,7 +809,7 @@ def load_face_restoration(opt):
from ldm.invoke.restoration import Restoration
restoration = Restoration()
if opt.restore:
gfpgan, codeformer = restoration.load_face_restore_models(opt.gfpgan_dir, opt.gfpgan_model_path)
gfpgan, codeformer = restoration.load_face_restore_models(opt.gfpgan_model_path)
else:
print('>> Face restoration disabled')
if opt.esrgan:
@@ -878,6 +898,36 @@ def write_commands(opt, file_path:str, outfilepath:str):
f.write('\n'.join(commands))
print(f'>> File {outfilepath} with commands created')
def emergency_model_create(opt:Args):
completer = get_completer(opt)
completer.complete_extensions(('.yaml','.yml','.ckpt','.vae.pt'))
completer.set_default_dir('.')
valid_path = False
while not valid_path:
weights_file = input('Enter the path to a downloaded models file, or ^C to exit: ')
valid_path = os.path.exists(weights_file)
dir,basename = os.path.split(weights_file)
valid_name = False
while not valid_name:
name = input('Enter a short name for this model (no spaces): ')
name = 'unnamed model' if len(name)==0 else name
valid_name = ' ' not in name
description = input('Enter a description for this model: ')
description = 'no description' if len(description)==0 else description
with open(opt.conf, 'w', encoding='utf-8') as f:
f.write(f'{name}:\n')
f.write(f' description: {description}\n')
f.write(f' weights: {weights_file}\n')
f.write(f' config: ./configs/stable-diffusion/v1-inference.yaml\n')
f.write(f' width: 512\n')
f.write(f' height: 512\n')
f.write(f' default: true\n')
print(f'Config file {opt.conf} is created. This script will now exit.')
print(f'After restarting you may examine the entry with !models and edit it with !edit.')
######################################
if __name__ == '__main__':
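Among the invoke.py changes, the postprocess path now records the input file separately from the prompt and lets --new_prompt override the text used for metadata. A sketch of that bookkeeping, with the opt fields as in the hunk and everything else assumed:

```python
import os

def resolve_postprocess_inputs(opt):
    # in postprocess mode the "prompt" is really the file to fix
    file_path = opt.prompt
    opt.prompt = opt.new_prompt if opt.new_prompt is not None else None
    if os.path.dirname(file_path) == '':          # bare basename given
        file_path = os.path.join(opt.outdir, file_path)
    opt.input_file_path = file_path               # reused later for !fix metadata
    return file_path
```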

View File

@@ -487,14 +487,8 @@ def create_argv_parser():
parser.add_argument(
'--gfpgan_model_path',
type=str,
default='experiments/pretrained_models/GFPGANv1.3.pth',
help='Indicates the path to the GFPGAN model, relative to --gfpgan_dir.',
)
parser.add_argument(
'--gfpgan_dir',
type=str,
default='./src/gfpgan',
help='Indicates the directory containing the GFPGAN code.',
default='./models/gfpgan/GFPGANv1.4.pth',
help='Indicates the path to the GFPGAN model.',
)
parser.add_argument(
'--web',

View File

@@ -104,13 +104,15 @@ def postscript():
print(
'''\n** Model Installation Successful **\nYou're all set! You may now launch InvokeAI using one of these two commands:
Web version:
python scripts/invoke.py --web (connect to http://localhost:9090)
Command-line version:
python scripts/invoke.py
Remember to activate that 'invokeai' environment before running invoke.py.
Or, if you used one of the automated installers, execute "invoke.sh" (Linux/Mac)
or "invoke.bat" (Windows) to start the script.
Have fun!
'''
)
@@ -446,15 +448,15 @@ def download_gfpgan():
for model in (
[
'https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.4.pth',
'models/gfpgan/GFPGANv1.4.pth'
'./models/gfpgan/GFPGANv1.4.pth'
],
[
'https://github.com/xinntao/facexlib/releases/download/v0.1.0/detection_Resnet50_Final.pth',
'models/gfpgan/weights/detection_Resnet50_Final.pth'
'./models/gfpgan/weights/detection_Resnet50_Final.pth'
],
[
'https://github.com/xinntao/facexlib/releases/download/v0.2.2/parsing_parsenet.pth',
'models/gfpgan/weights/parsing_parsenet.pth'
'./models/gfpgan/weights/parsing_parsenet.pth'
],
):
model_url,model_dest = model
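The GFPGAN weights now land under ./models/gfpgan/ instead of the old src/gfpgan checkout. The download table from the hunk, as plain data:

```python
# (url, destination) pairs copied from the hunk above
GFPGAN_WEIGHTS = [
    ('https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.4.pth',
     './models/gfpgan/GFPGANv1.4.pth'),
    ('https://github.com/xinntao/facexlib/releases/download/v0.1.0/detection_Resnet50_Final.pth',
     './models/gfpgan/weights/detection_Resnet50_Final.pth'),
    ('https://github.com/xinntao/facexlib/releases/download/v0.2.2/parsing_parsenet.pth',
     './models/gfpgan/weights/parsing_parsenet.pth'),
]
```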

View File

@@ -16,6 +16,7 @@ rm -rf invokeAI
mkdir -p invokeAI
cp install.bat invokeAI
cp readme.txt invokeAI
cp WinLongPathsEnabled.reg invokeAI
zip -r invokeAI-src-installer-windows.zip invokeAI

View File

@@ -72,7 +72,7 @@ if not exist ".git" (
call git config --local init.defaultBranch main
call git remote add origin %REPO_URL%
call git fetch
call git checkout origin/release-candidate-2-1-3 -ft
call git checkout origin/main -ft
)
@rem activate the base env
@@ -80,7 +80,7 @@ call conda activate
@rem create the environment
call conda env remove -n invokeai
cp environments-and-requirements\environment-win-cuda.yml environment.yml
copy environments-and-requirements\environment-win-cuda.yml environment.yml
call conda env create
if "%ERRORLEVEL%" NEQ "0" (
echo ""
@@ -92,8 +92,8 @@ if "%ERRORLEVEL%" NEQ "0" (
exit /b
)
cp source_installer/install.bat install.bat
cp source_installer/update.bat update.bat
copy source_installer\invoke.bat invoke.bat
copy source_installer\update.bat update.bat
call conda activate invokeai
@rem preload the models

View File

@@ -86,7 +86,7 @@ if [ ! -e ".git" ]; then
git config --local init.defaultBranch main
git remote add origin "$REPO_URL"
git fetch
git checkout origin/release-candidate-2-1-3 -ft
git checkout origin/main -ft
fi
# create the environment
@@ -116,8 +116,9 @@ then
echo "Please visit https://invoke-ai.github.io/InvokeAI/#installation for alternative"
echo "installation methods"
else
ln -sf source_installer/install.sh .
ln -sf source_installer/update.sh .
ln -sf ./source_installer/invoke.sh .
ln -sf ./source_installer/update.sh .
conda activate invokeai
# preload the models
echo "Calling the preload_models.py script"

View File

@@ -3,9 +3,14 @@ InvokeAI
Project homepage: https://github.com/invoke-ai/InvokeAI
Installation on Windows:
Please double-click the 'install.bat' file (while keeping it inside the invokeAI folder).
You may need to enable Windows Long Paths to install InvokeAI. If you're not
sure what this is, you almost certainly need to do this. Simply double-click the
"WinLongPathsEnabled.reg" file located in this directory, and approve the Windows
warnings. Note that you will need to have admin privileges in order to do this.
Then double-click the 'install.bat' file (while keeping it inside the invokeAI folder).
Installation on Linux and Mac:
Please open the terminal, and run './install.sh' (while keeping it inside the invokeAI folder).
After installation, please run the 'invoke.bat' file (on Windows) or 'invoke.sh' file (on Linux/Mac) to start InvokeAI.
After installation, please run the 'invoke.bat' file (on Windows) or 'invoke.sh' file (on Linux/Mac) to start InvokeAI.