Commit Graph

3498 Commits

Author SHA1 Message Date
Lincoln Stein
a91d01c27a enhancements to update routines
- Allow invokeai-update to update using a release, tag or branch.
- Allow CLI's root directory update routine to update directory
  contents regardless of whether current version is released.
- In the model importation routine, clarify the wording of the instructions shown
  when the user is asked to choose the type of model being imported.
v2.3.3-rc6
2023-03-28 15:58:36 -04:00
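A minimal sketch of how an updater could turn a user-supplied release, tag, or branch into a pip-installable source URL (the helper names and invocation below are assumptions, not the actual invokeai-update code):

```python
# Hypothetical sketch: build a pip-installable source-archive URL for a chosen
# release tag or branch and upgrade the current environment with it.
# Not the actual invokeai-update implementation.
import subprocess
import sys

REPO = "https://github.com/invoke-ai/InvokeAI"

def source_url(ref: str, is_branch: bool = False) -> str:
    """Return a GitHub source-archive URL for a tag/release or a branch."""
    kind = "heads" if is_branch else "tags"
    return f"{REPO}/archive/refs/{kind}/{ref}.zip"

def update_to(ref: str, is_branch: bool = False) -> None:
    url = source_url(ref, is_branch)
    subprocess.run(
        [sys.executable, "-m", "pip", "install", "--upgrade", url],
        check=True,
    )

if __name__ == "__main__":
    update_to("v2.3.3-rc6")   # e.g. the tag shown on this commit
```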
Lincoln Stein
5eeca47887 bump rc version number v2.3.3-rc5 2023-03-28 13:08:38 -04:00
Lincoln Stein
66b361294b update embedding file documentation 2023-03-28 12:24:01 -04:00
Lincoln Stein
0fb1e79a0b update model installation documentation 2023-03-28 12:07:47 -04:00
Lincoln Stein
14f1efaf4f launch --model supersedes persistent model 2023-03-28 10:53:32 -04:00
Lincoln Stein
23aa17e387 fix typo in name of vae cache 2023-03-28 10:48:03 -04:00
Lincoln Stein
f23cc54e1b save and restore selected model on startup/exit 2023-03-28 10:39:19 -04:00
Lincoln Stein
e3d992d5d7 add metadata dump script 2023-03-28 10:01:31 -04:00
Lincoln Stein
bb972b2e3d Add support for yet another TI embedding file format (2.3 version) (#3045)
- This variant, exemplified by "easynegative.safetensors", has a single
'embparam' key containing a Tensor.
- Also refactored code to make it easier to read.
- Handle both pickle and safetensor formats.
2023-03-28 00:46:30 -04:00
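A minimal sketch of a loader tolerant to both formats and to a single-key embedding layout like the one described above (the key handling and helper are hypothetical, not the repository's actual code):

```python
# Sketch of a tolerant TI embedding loader: accepts pickle (.pt/.bin) and
# .safetensors files and copes with a few historical layouts.
# Hypothetical helper, not InvokeAI's actual implementation.
from pathlib import Path

import torch
from safetensors.torch import load_file  # pip install safetensors

def load_embedding_tensor(path: str) -> torch.Tensor:
    p = Path(path)
    if p.suffix == ".safetensors":
        data = load_file(str(p))                   # dict of str -> Tensor
    else:
        data = torch.load(p, map_location="cpu")   # legacy pickle formats

    # Classic layout: {'string_to_param': {'*': Tensor}}
    if isinstance(data, dict) and "string_to_param" in data:
        return next(iter(data["string_to_param"].values()))

    # Single-key layout, e.g. the 'embparam' variant described above.
    if isinstance(data, dict) and len(data) == 1:
        value = next(iter(data.values()))
        if isinstance(value, torch.Tensor):
            return value

    raise ValueError(f"Unrecognized embedding format: {path}")
```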
Lincoln Stein
41a8fdea53 fix bugs in online ckpt conversion of 2.0 models
This commit fixes bugs related to the on-the-fly conversion and loading of
legacy checkpoint models built on an SD-2.0 base.

- When legacy checkpoints built on SD-2.0 models were converted
  on-the-fly using --ckpt_convert, generation would crash with a
  precision incompatibility error.

- In addition, broken logic was causing some 2.0-derived ckpt files to
  be converted into diffusers and then processed through the legacy
  generation routines - not good.
v2.3.3-rc3
2023-03-28 00:11:37 -04:00
Lincoln Stein
a78ff86e42 Merge branch 'v2.3' into enhance/handle-another-embedding-variant 2023-03-27 22:38:36 -04:00
Lincoln Stein
8e2fd4c96a fix ROCm version latest v2.3.3-rc2 2023-03-27 22:38:04 -04:00
Lincoln Stein
2f424f29a0 generalized root directory version updating 2023-03-27 22:35:12 -04:00
Lincoln Stein
90f00db032 version 2.3.3-rc2
- installer now installs the pretty dialog-based console launcher
- added dialogrc for custom colors
- add updater to download new launcher when users do an update
2023-03-27 21:10:24 -04:00
Lincoln Stein
77a63e5310 this is release candidate 2.3.3-rc1 (#3033)
This includes a number of bug fixes described in the draft release
notes.

It also incorporates a modified version of the dialog-based invoke.sh
script suggested by JoshuaKimsey:
https://discord.com/channels/1020123559063990373/1089119602425995304
2023-03-27 12:09:56 -04:00
Lincoln Stein
8f921741a5 Update installer/templates/invoke.sh.in
Co-authored-by: Eugene Brodsky <ebr@users.noreply.github.com>
2023-03-26 23:45:00 -04:00
Lincoln Stein
071df30597 handle a fourth variant of embedding .pt files
- This variant, exemplified by "easynegative.safetensors", has a single
  'embparam' key containing a Tensor.
- Also refactored code to make it easier to read.
- Handle both pickle and safetensor formats.
2023-03-26 23:40:29 -04:00
Lincoln Stein
589a817952 enhance model autodetection during import (#3043)
- Imported V2 legacy models will now autoconvert into diffusers at load
time regardless of the setting of --ckpt_convert.

- model manager `heuristic_import()` function now looks for side-by-side
yaml and vae files for custom configuration and VAE respectively.

Example of this:

  illuminati-v1.1.safetensors
  illuminati-v1.1.vae.safetensors
  illuminati-v1.1.yaml

When the user tries to import `illuminati-v1.1.safetensors`, the yaml
file will be used for its configuration, and the VAE will be used for
its VAE. Conversion to diffusers will happen if needed, and the yaml
file will be used to determine which V2 format (if any) to apply.

NOTE that the changes to `ckpt_to_diffusers.py` were previously reviewed
by @JPPhoto on the `main` branch and approved.
2023-03-26 11:49:00 -04:00
Lincoln Stein
dcb21c0f46 enhance model autodetection during import
- Imported V2 legacy models will now autoconvert into diffusers
  at load time regardless of the setting of --ckpt_convert.

- model manager `heuristic_import()` function now looks for
  side-by-side yaml and vae files for custom configuration and VAE
  respectively.

Example of this:

  illuminati-v1.1.safetensors
  illuminati-v1.1.vae.safetensors
  illuminati-v1.1.yaml

When the user tries to import `illuminati-v1.1.safetensors`, the yaml
file will be used for its configuration, and the VAE will be used for
its VAE. Conversion to diffusers will happen if needed, and the yaml
file will be used to determine which V2 format (if any) to apply.
2023-03-26 10:20:51 -04:00
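A minimal sketch of the side-by-side lookup described in the two entries above (hypothetical helper; the real heuristic_import() may differ):

```python
# Sketch: given a checkpoint path, look for sibling .yaml config and .vae.*
# files sharing the same stem. Hypothetical helper, not the model manager's
# actual heuristic_import() code.
from pathlib import Path
from typing import Optional, Tuple

def find_sidecar_files(ckpt: str) -> Tuple[Optional[Path], Optional[Path]]:
    p = Path(ckpt)
    stem = p.name
    for suffix in (".safetensors", ".ckpt", ".pt"):
        if stem.endswith(suffix):
            stem = stem[: -len(suffix)]
            break

    config = p.with_name(f"{stem}.yaml")
    config = config if config.exists() else None

    vae = None
    for ext in (".vae.safetensors", ".vae.pt", ".vae.ckpt"):
        candidate = p.with_name(f"{stem}{ext}")
        if candidate.exists():
            vae = candidate
            break
    return config, vae

# Importing illuminati-v1.1.safetensors would then pick up
# illuminati-v1.1.yaml and illuminati-v1.1.vae.safetensors when present.
```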
Lincoln Stein
1cb88960fe this is release candidate 2.3.3-rc1
Incorporates a modified version of the dialog-based invoke.sh script
suggested by JoshuaKimsey:
https://discord.com/channels/1020123559063990373/1089119602425995304
v2.3.3-rc1
2023-03-25 16:58:08 -04:00
Eugene Brodsky
610a1483b7 installer: fix indentation in invoke.sh template (tabs -> spaces) 2023-03-25 13:52:37 -04:00
Lincoln Stein
b4e7fc0d1d prevent infinite loop when launching developer's console 2023-03-25 13:52:37 -04:00
blessedcoolant
b792b7d68c Security patch: Scan all pickle files, including VAEs; default to safetensor loading (#3011)
Several related security fixes:

1. Port #2946 from main to 2.3.2 branch - this closes a hole that allows
a pickle checkpoint file to masquerade as a safetensors file.
2. Add pickle scanning to the checkpoint to diffusers conversion script.
3. Pickle-scan non-safetensors VAE files.
4. Avoid running scanner twice on same file during the probing and
conversion process.
5. Clean up diagnostic messages.
2023-03-24 22:35:15 +13:00
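A minimal sketch of the scan-before-unpickle policy listed above (assumes the third-party picklescan package; the helper and its wiring are not the project's exact code):

```python
# Sketch: prefer safetensors when available, and refuse to torch.load() a
# legacy pickle checkpoint until a static malware scan passes. Assumes the
# `picklescan` package; hypothetical helper, not InvokeAI's scanner wiring.
from pathlib import Path

import torch
from picklescan.scanner import scan_file_path   # pip install picklescan
from safetensors.torch import load_file

def safe_load_checkpoint(path: str) -> dict:
    p = Path(path)
    if p.suffix == ".safetensors":
        return load_file(str(p))                 # no pickle code is executed

    result = scan_file_path(str(p))              # static scan of pickle opcodes
    if result.infected_files or result.scan_err:
        raise RuntimeError(f"Refusing to load potentially malicious file: {p}")
    return torch.load(p, map_location="cpu")
```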
blessedcoolant
abaa91195d Merge branch 'v2.3' into security/scan-ckpt-models 2023-03-24 22:11:34 +13:00
Lincoln Stein
1806bfb755 fix batch generation logfile name to be compatible with Windows OS (#3018)
- The command `invokeai-batch --invoke` was creating a time-stamped
logfile with colons in its name, which is a Windows no-no. This corrects
the problem by writing the timestamp out as "13-06-2023_8-35-10"

- Closes #3005
2023-03-24 01:32:24 -04:00
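A minimal sketch of a colon-free timestamp for the logfile name (the exact format string and filename prefix are assumptions based on the example in the message):

```python
# Sketch: build a Windows-safe, colon-free logfile name. The strftime format
# and the "invokeai-batch-" prefix are assumptions based on the example above.
from datetime import datetime
from pathlib import Path

def batch_logfile_name(outdir: str = ".") -> Path:
    stamp = datetime.now().strftime("%d-%m-%Y_%H-%M-%S")   # no ':' characters
    return Path(outdir) / f"invokeai-batch-{stamp}.log"

print(batch_logfile_name())   # e.g. invokeai-batch-13-06-2023_08-35-10.log
```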
blessedcoolant
7377855c02 Merge branch 'v2.3' into bugfix/batch-logfile-format 2023-03-24 18:10:00 +13:00
Lincoln Stein
5f2a6f24cf fix corrupted outputs/.next_prefix file (#3020)
- Since 2.3.2 invokeai stores the next PNG file's numeric prefix in a
file named `.next_prefix` in the outputs directory. This avoids the
overhead of doing a directory listing to find out what file number comes
next.

- The code uses advisory locking to prevent corruption of this file in
the event that multiple invokeai instances try to access it simultaneously, but
some users have experienced corruption of the file nevertheless.

- This PR addresses the problem by detecting a potentially corrupted
`.next_prefix` file and falling back to the directory listing method. A
fixed version of the file is then written out.

- Closes #3001
2023-03-23 23:53:10 -04:00
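A minimal sketch of the fallback described in this fix (hypothetical helper; the real code also takes an advisory lock around the read/write):

```python
# Sketch: read the next image number from .next_prefix; if the file is missing
# or corrupted, fall back to scanning existing PNGs for the highest numeric
# prefix, then write a fixed value back. Hypothetical helper only.
import re
from pathlib import Path

def next_prefix(outputs_dir: str) -> int:
    outputs = Path(outputs_dir)
    marker = outputs / ".next_prefix"
    try:
        value = int(marker.read_text().strip())
    except (FileNotFoundError, ValueError):
        # Corrupted or missing marker: recover via a directory listing.
        numbers = [
            int(m.group(1))
            for f in outputs.glob("*.png")
            if (m := re.match(r"(\d+)\.", f.name))
        ]
        value = max(numbers, default=0) + 1
    marker.write_text(str(value + 1))   # leave a repaired marker for next time
    return value
```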
Lincoln Stein
5b8b92d957 Merge branch 'v2.3' into bugfix/batch-logfile-format 2023-03-23 23:34:05 -04:00
Lincoln Stein
352202a7bc Merge branch 'v2.3' into bugfix/fix-corrupted-image-sequence-file 2023-03-23 23:28:11 -04:00
blessedcoolant
82144de85f Fix textual inversion documentation and code (#3015)
This PR addresses issues raised by #3008.
    
1. Update documentation to indicate the correct maximum batch size for
TI training when xformers is and isn't used.
    
2. Update textual inversion code so that the default for batch size is
aware of xformer availability.
    
3. Add documentation for how to launch TI with distributed learning.
2023-03-24 16:14:47 +13:00
Lincoln Stein
b70d713e89 Merge branch 'v2.3' into bugfix/batch-logfile-format 2023-03-23 23:12:43 -04:00
blessedcoolant
e39dde4140 Merge branch 'v2.3' into feat/adjust-ti-param-for-xformers 2023-03-24 15:40:38 +13:00
blessedcoolant
c151541703 bump version to 2.3.3-rc1 (#3019)
Lots of little bugs have been squashed since 2.3.2 and a new minor point
release is imminent. This PR updates the version number in preparation
for an RC.
2023-03-24 15:27:57 +13:00
Lincoln Stein
29b348ece1 fix corrupted outputs/.next_prefix file
- Since 2.3.2 invokeai stores the next PNG file's numeric prefix in a
  file named `.next_prefix` in the outputs directory. This avoids the
  overhead of doing a directory listing to find out what file number
  comes next.

- The code uses advisory locking to prevent corruption of this file in
  the event that multiple invokeai instances try to access it simultaneously,
  but some users have experienced corruption of the file nevertheless.

- This PR addresses the problem by detecting a potentially corrupted
  `.next_prefix` file and falling back to the directory listing method.
  A fixed version of the file is then written out.

- Closes #3001
2023-03-23 22:07:05 -04:00
Lincoln Stein
9f7c86c33e bump version to 2.3.3-rc1
Lots of little bugs have been squashed since 2.3.2 and a new minor
point release is imminent. This PR updates the version number in
preparation for an RC.
2023-03-23 21:47:56 -04:00
Lincoln Stein
a79d40519c fix batch generation logfile name to be compatible with Windows OS
- `invokeai-batch --invoke` was creating a time-stamped logfile with colons
  in its name, which is a Windows no-no. This corrects the problem by writing
  the timestamp out as "13-06-2023_8-35-10"

- Closes #3005
2023-03-23 21:43:21 -04:00
Lincoln Stein
4515d52a42 fix textual inversion documentation and code
This PR addresses issues raised by #3008.

1. Update documentation to indicate the correct maximum batch size for
   TI training when xformers is and isn't used.

2. Update textual inversion code so that the default for batch size
   is aware of xformer availability.

3. Add documentation for how to launch TI with distributed learning.
2023-03-23 21:00:54 -04:00
Lincoln Stein
2a8513eee0 adjust textual inversion training parameters according to xformers availability
- If xformers is available, then default the "use xformers" checkbox to on.
- Increase batch size to 8 (from 3).
2023-03-23 19:49:13 -04:00
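A minimal sketch of choosing the defaults by xformers availability (the 8-vs-3 batch sizes follow the commit above; the detection helper and key names are hypothetical):

```python
# Sketch: pick textual-inversion training defaults based on whether xformers
# can be imported. Batch sizes 8 and 3 come from the commit message above;
# the helper and key names are assumptions.
import importlib.util

def ti_training_defaults() -> dict:
    has_xformers = importlib.util.find_spec("xformers") is not None
    return {
        "use_xformers": has_xformers,
        "train_batch_size": 8 if has_xformers else 3,
    }

print(ti_training_defaults())
```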
Jonathan
b856fac713 Keep torch version at 1.13.1 (#2985)
Now that torch 2.0 is out, Invoke 2.3 should lock its torch version to 1.13.1 for new installs and upgrades.
2023-03-23 15:27:12 -04:00
Lincoln Stein
4a3951681c prevent double-scanning during convert
- Avoid running scanner twice on same file during the probing and
  conversion process.

- Clean up diagnostic messages.
2023-03-23 14:24:10 -04:00
Lincoln Stein
ba89444e36 scan legacy checkpoint models in converter script prior to unpickling
Two related security fixes:

1. Port #2946 from main to 2.3.2 branch - this closes a hole that
   allows a pickle checkpoint file to masquerade as a safetensors
   file.

2. Add pickle scanning to the checkpoint to diffusers conversion
   script. This will be ported to main in a separate PR.
2023-03-23 13:44:08 -04:00
Lincoln Stein
a044403ac3 Bugfix/fix 2.3.2 upgrade path (#2943)
This fixes #2930 by adding a missing line in `pyproject.toml` needed to create the `config/stable-diffusion` directory.
v2.3.2.post1
2023-03-13 10:14:37 -07:00
Lincoln Stein
16dea46b79 remove outdated comment 2023-03-13 12:51:27 -04:00
Lincoln Stein
1f80b5335b reenable run_patches() 2023-03-13 10:38:08 -04:00
Lincoln Stein
eee7f13771 add back stable diffusion config files 2023-03-13 10:35:39 -04:00
Lincoln Stein
6db509a4ff add --upgrade to update script 2023-03-13 10:15:33 -04:00
Lincoln Stein
b7965e1ee6 restore find-packages to pyproject.toml 2023-03-13 10:11:37 -04:00
Lincoln Stein
c3d292e8f9 bump version to post1 2023-03-13 09:35:25 -04:00
Lincoln Stein
206593ec99 update version number 2023-03-13 09:34:00 -04:00
Lincoln Stein
1b62c781d7 temporarily disable run-patches 2023-03-13 09:33:32 -04:00