Compare commits
9 Commits
- 0d134195fd
- 649d8c8573
- a358d370a0
- 94a9033c4f
- 18a947c503
- a23b031895
- 23af68c7d7
- e258beeb51
- 7460c069b8
.gitignore (vendored, 4 lines changed)
@@ -225,5 +225,9 @@ invokeai.init
environment.yml
requirements.txt

# source installer files
source_installer/*zip
source_installer/invokeAI

# this may be present if the user created a venv
invokeai

@@ -7,28 +7,27 @@ title: Installation Overview
We offer several ways to install InvokeAI, each one suited to your
experience and preferences.

1. [1-click installer](INSTALL_1CLICK.md)
1. [InvokeAI installer](INSTALL_INVOKE.md)

   This is an automated shell script that will handle installation of
   all dependencies for you, and is recommended for those who have
   limited or no experience with the Python programming language, are
   not currently interested in contributing to the project, and just want
   the thing to install and run. In this version, you interact with the
   web server and command-line clients through a shell script named
   `invoke.sh` (Linux/Mac) or `invoke.bat` (Windows), and perform
   updates using `update.sh` and `update.bat`.
   This is an installer script that installs InvokeAI and all the
   third party libraries it depends on. When a new version of
   InvokeAI is released, you will download and reinstall the new
   version.

2. [Pre-compiled PIP installer](INSTALL_PCP.md)
   This installer is designed for people who want the system to "just
   work", don't have an interest in tinkering with it, and do not
   care about upgrading to unreleased experimental features.

   This is a series of installer files for which all the requirements
   for InvokeAI have been precompiled, thereby preventing the conflicts
   that sometimes occur when an external library is changed unexpectedly.
   It will leave you with an environment in which you interact directly
   with the scripts for running the web and command line clients, and
   you will update to new versions using standard developer commands.
2. [Source code installer](INSTALL_SOURCE.md)

   This method is recommended for users with a bit of experience using
   the `git` and `pip` tools.
   This is a script that will install InvokeAI and all its essential
   third party libraries. In contrast to the previous installer, it
   includes access to a "developer console" which will allow you to
   access experimental features on the development branch.

   This method is recommended for individuals who wish to stay
   on the cutting edge of InvokeAI development and are not afraid
   of occasional breakage.

3. [Manual Installation](INSTALL_MANUAL.md)

docs/installation/INSTALL_INVOKE.md (new file, 52 lines)
@@ -0,0 +1,52 @@
---
title: InvokeAI Installer
---

The InvokeAI installer is a shell script that will install InvokeAI
onto a stock computer running recent versions of Linux, MacOSX or
Windows. It will leave you with a version that runs a stable version
of InvokeAI. When a new version of InvokeAI is released, you will
download and reinstall the new version.

If you wish to tinker with unreleased versions of InvokeAI that
introduce potentially unstable new features, you should consider using
the [source installer](INSTALL_SOURCE.md) or one of the [manual
install](INSTALL_MANUAL.md) methods.

Before you begin, make sure that you meet the [hardware
requirements](index.md#Hardware_Requirements) and have the appropriate
GPU drivers installed. In particular, if you are a Linux user with an
AMD GPU installed, you may need to install the [ROCm
driver](https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html).

Installation requires roughly 18G of free disk space to load the
libraries and recommended model weights files.

## Steps to Install

1. Download the [latest release](https://github.com/invoke-ai/InvokeAI/releases/latest)
   of InvokeAI's installer for your platform

2. Place the downloaded package someplace where you have plenty of HDD space,
   and have full permissions (i.e. `~/` on Lin/Mac; your home folder on Windows)

3. Extract the 'InvokeAI' folder from the downloaded package

4. Open the extracted 'InvokeAI' folder

5. Double-click 'install.bat' (Windows), or 'install.sh' (Lin/Mac) (or run from a terminal)

6. Follow the prompts

7. After installation, please run the 'invoke.bat' file (on Windows) or
   'invoke.sh' file (on Linux/Mac) to start InvokeAI (a condensed Linux/Mac
   example follows below).

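For Linux and Mac users, the numbered steps above boil down to a short terminal session. The following is a minimal sketch only; the archive name `InvokeAI-linux.zip` and the `~/Downloads` location are assumptions, so substitute whatever the release page actually provides for your platform.

```bash
# Hypothetical walk-through of steps 2-7 on Linux; names and paths are examples.
cd ~                                   # somewhere with ~18G free and full permissions
unzip ~/Downloads/InvokeAI-linux.zip   # extracts the 'InvokeAI' folder
cd InvokeAI
./install.sh                           # follow the prompts
./invoke.sh                            # start InvokeAI once installation finishes
```
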
## Troubleshooting

If you run into problems during or after installation, the InvokeAI
team is available to help you. Either create an
[Issue](https://github.com/invoke-ai/InvokeAI/issues) at our GitHub
site, or make a request for help on the "bugs-and-support" channel of
our [Discord server](https://discord.gg/ZmtBAhwWhy). We are a 100%
volunteer organization, but typically somebody will be available to
help you within 24 hours, and often much sooner.

@@ -1,12 +1,15 @@
---
title: The "One-Click" Installer
title: The InvokeAI Source Installer
---

## Introduction

The one-click installer is a shell script that attempts to automate
every step needed to install and run InvokeAI on a stock computer
running recent versions of Linux, MacOSX or Windows.
The source installer is a shell script that attempts to automate every
step needed to install and run InvokeAI on a stock computer running
recent versions of Linux, MacOSX or Windows. It will leave you with a
version that runs a stable version of InvokeAI with the option to
upgrade to experimental versions later. It is not as foolproof as the
[InvokeAI installer](INSTALL_INVOKE.md).

Before you begin, make sure that you meet the [hardware
requirements](index.md#Hardware_Requirements) and have the appropriate
@@ -22,14 +25,15 @@ libraries and recommended model weights files.
Though there are multiple steps, there really is only one click
involved to kick off the process.

1. The 1-click installer is distributed in ZIP files. Download the one
   that is appropriate for your operating system:
1. The source installer is distributed in ZIP files. Go to the [latest
   release](https://github.com/invoke-ai/InvokeAI/releases/latest), and
   look for a series of files named:

   !!! todo "Change the URLs after release"

   - [invokeAI-mac.zip](https://github.com/invoke-ai/InvokeAI/releases/download/2.1.3-rc1/invokeAI-mac.zip)
   - [invokeAI-linux.zip](https://github.com/invoke-ai/InvokeAI/releases/download/2.1.3-rc1/invokeAI-linux.zip)
   - [invokeAI-windows.zip](https://github.com/invoke-ai/InvokeAI/releases/download/2.1.3-rc1/invokeAI-windows.zip)
   - invokeAI-src-installer-mac.zip
   - invokeAI-src-installer-windows.zip
   - invokeAI-src-installer-linux.zip

   Download the one that is appropriate for your operating system.

2. Unpack the zip file into a directory that has at least 18G of free
   space. Do *not* unpack into a directory that has an earlier version of
@@ -2,54 +2,62 @@ name: invokeai
channels:
  - pytorch
  - conda-forge
  - defaults
dependencies:
  - python=3.9.13
  - pip=22.2.2
  - pytorch=1.12.1
  - torchvision=0.13.1
  - python=3.10
  - pip>=22.2
  - pytorch=1.12
  - pytorch-lightning=1.7
  - torchvision=0.13
  - torchmetrics=0.10
  - torch-fidelity=0.3

  - albumentations=1.2.1
  - coloredlogs=15.0.1
  - diffusers=0.6.0
  - einops=0.4.1
  - grpcio=1.46.4
  # I suggest to keep the other deps sorted for convenience.
  # To determine what the latest versions should be, run:
  #
  # ```shell
  # sed -E 's/invokeai/invokeai-updated/;20,99s/- ([^=]+)==.+/- \1/' environment-mac.yml > environment-mac-updated.yml
  # CONDA_SUBDIR=osx-arm64 conda env create -f environment-mac-updated.yml && conda list -n invokeai-updated | awk ' {print "    - " $1 "==" $2;} '
  # ```

  - albumentations=1.2
  - coloredlogs=15.0
  - diffusers=0.6
  - einops=0.3
  - eventlet
  - grpcio=1.46
  - flask=2.1
  - flask-socketio=5.3
  - flask-cors=3.0
  - humanfriendly=10.0
  - imageio=2.21.2
  - imageio-ffmpeg=0.4.7
  - imgaug=0.4.0
  - kornia=0.6.7
  - mpmath=1.2.1
  - nomkl # arm64 has only 1.0 while x64 needs 3.0
  - numpy=1.23.4
  - omegaconf=2.1.1
  - openh264=2.3.0
  - onnx=1.12.0
  - onnxruntime=1.12.1
  - pudb=2022.1
  - pytorch-lightning=1.7.7
  - scipy=1.9.3
  - streamlit=1.12.2
  - sympy=1.10.1
  - tensorboard=2.10.0
  - torchmetrics=0.10.1
  - py-opencv=4.6.0
  - flask=2.1.3
  - flask-socketio=5.3.0
  - flask-cors=3.0.10
  - eventlet=0.33.1
  - protobuf=3.20.1
  - send2trash=1.8.0
  - transformers=4.23.1
  - torch-fidelity=0.3.0
  - imageio=2.21
  - imageio-ffmpeg=0.4
  - imgaug=0.4
  - kornia=0.6
  - mpmath=1.2
  - nomkl=3
  - numpy=1.23
  - omegaconf=2.1
  - openh264=2.3
  - onnx=1.12
  - onnxruntime=1.12
  - pudb=2019.2
  - protobuf=3.20
  - py-opencv=4.6
  - scipy=1.9
  - streamlit=1.12
  - sympy=1.10
  - send2trash=1.8
  - tensorboard=2.10
  - transformers=4.23
  - pip:
    - getpass_asterisk
    - dependency_injector==4.40.0
    - realesrgan==0.2.5.0
    - taming-transformers-rom1504
    - test-tube==0.7.5
    - git+https://github.com/openai/CLIP.git@main#egg=clip
    - git+https://github.com/Birch-san/k-diffusion.git@mps#egg=k_diffusion
    - git+https://github.com/TencentARC/GFPGAN.git#egg=gfpgan
    - git+https://github.com/invoke-ai/k-diffusion.git@mps#egg=k_diffusion
    - git+https://github.com/invoke-ai/Real-ESRGAN.git#egg=realesrgan
    - git+https://github.com/invoke-ai/GFPGAN.git#egg=gfpgan
    - git+https://github.com/invoke-ai/clipseg.git@models-rename#egg=clipseg
    - -e .
variables:
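For reference, a conda environment file like the one above is normally consumed with the standard conda commands below. This is a generic usage sketch, not a command taken from this diff, and it assumes you run it from the repository root where `environment-mac.yml` lives.

```bash
# Generic conda workflow for the Mac environment file shown above.
conda env create -f environment-mac.yml   # first-time creation
conda env update -f environment-mac.yml   # refresh in place after the pins change
conda activate invokeai                   # 'invokeai' is the name: field of the file
```
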
@@ -13,6 +13,7 @@ dependencies:
  - cudatoolkit=11.6
  - pip:
    - albumentations==0.4.3
    - basicsr==1.4.1
    - dependency_injector==4.40.0
    - diffusers==0.6.0
    - einops==0.3.0

@@ -2,6 +2,7 @@

# Get hardware-appropriate torch/torchvision
--extra-index-url https://download.pytorch.org/whl/cu116 --trusted-host https://download.pytorch.org
basicsr==1.4.1
torch==1.12.1
torchvision==0.13.1
-e .

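The `--extra-index-url`/`--trusted-host` line above is what lets a plain pip invocation resolve the CUDA 11.6 builds of torch and torchvision. A minimal usage sketch relying only on standard pip behaviour (pip reads those options from the requirements file itself):

```bash
# Install the pinned requirements into a fresh virtual environment.
python -m venv .venv
source .venv/bin/activate
python -m pip install -r requirements.txt
```
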
@@ -5,7 +5,7 @@
- `python scripts/dream.py --web` serves both frontend and backend at
  http://localhost:9090

## Evironment
## Environment

Install [node](https://nodejs.org/en/download/) (includes npm) and optionally
[yarn](https://yarnpkg.com/getting-started/install).
@@ -15,7 +15,7 @@ packages.

## Dev

1. From `frontend/`, run `npm dev` / `yarn dev` to start the dev server.
1. From `frontend/`, run `npm run dev` / `yarn dev` to start the dev server.
2. Run `python scripts/dream.py --web`.
3. Navigate to the dev server address e.g. `http://localhost:5173/`.
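Putting the dev steps together, a typical session might look like the sketch below; it assumes node is installed, the frontend packages have been fetched, and the default ports are free.

```bash
# Hedged sketch of the frontend dev loop described above.
cd frontend
npm install                     # or: yarn
npm run dev &                   # hot-reloading dev server, e.g. http://localhost:5173/
cd ..
python scripts/dream.py --web   # backend (and built frontend) at http://localhost:9090
```
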
installer/InvokeAI/WinLongPathsEnabled.reg (new binary file, not shown)

installer/InvokeAI/install.bat (new file, 172 lines)
@@ -0,0 +1,172 @@
@echo off

@rem This script will install git (if not found on the PATH variable)
@rem using micromamba (an 8mb static-linked single-file binary, conda replacement).
@rem For users who already have git, this step will be skipped.

@rem Next, it'll download the project's source code.
@rem Then it will download a self-contained, standalone Python and unpack it.
@rem Finally, it'll create the Python virtual environment and preload the models.

@rem This enables a user to install this project without manually installing git or Python

echo ***** Installing InvokeAI.. *****

set PATH=c:\windows\system32

@rem Config
set INSTALL_ENV_DIR=%cd%\installer_files\env
@rem https://mamba.readthedocs.io/en/latest/installation.html
set MICROMAMBA_DOWNLOAD_URL=https://micro.mamba.pm/api/micromamba/win-64/latest
set RELEASE_URL=https://github.com/tildebyte/InvokeAI
set RELEASE_SOURCEBALL=/archive/feat-install-pip-compile.tar.gz
set PYTHON_BUILD_STANDALONE_URL=https://github.com/indygreg/python-build-standalone/releases/download
set PYTHON_BUILD_STANDALONE=20221002/cpython-3.10.7+20221002-x86_64-pc-windows-msvc-shared-install_only.tar.gz

set PACKAGES_TO_INSTALL=

call git --version >.tmp1 2>.tmp2
if "%ERRORLEVEL%" NEQ "0" set PACKAGES_TO_INSTALL=%PACKAGES_TO_INSTALL% git

@rem Cleanup
del /q .tmp1 .tmp2

@rem (if necessary) install git into a contained environment
if "%PACKAGES_TO_INSTALL%" NEQ "" (
    @rem download micromamba
    echo ***** Downloading micromamba from %MICROMAMBA_DOWNLOAD_URL% to micromamba.exe *****

    call curl -L "%MICROMAMBA_DOWNLOAD_URL%" > micromamba.tbz2

    set err_msg=----- micromamba source unpack failed -----
    tar -jxf micromamba.tbz2
    if %errorlevel% neq 0 goto err_exit

    move Library\bin\micromamba.exe micromamba.exe
    rd /s /q Library info
    del /q micromamba.tbz2

    @rem test the mamba binary
    echo ***** Micromamba version: *****
    call micromamba.exe --version

    @rem create the installer env
    if not exist "%INSTALL_ENV_DIR%" (
        call micromamba.exe create -y --prefix "%INSTALL_ENV_DIR%"
    )

    echo ***** Packages to install:%PACKAGES_TO_INSTALL% *****

    call micromamba.exe install -y --prefix "%INSTALL_ENV_DIR%" -c conda-forge %PACKAGES_TO_INSTALL%

    if not exist "%INSTALL_ENV_DIR%" (
        echo ----- There was a problem while installing "%PACKAGES_TO_INSTALL%" using micromamba. Cannot continue. -----
        pause
        exit /b
    )
)

del /q micromamba.exe

@rem For 'git' only
set PATH=%INSTALL_ENV_DIR%\Library\bin;%PATH%

@rem Download/unpack/clean up InvokeAI release sourceball
set err_msg=----- InvokeAI source download failed -----
curl -L %RELEASE_URL%/%RELEASE_SOURCEBALL% --output InvokeAI.tgz
if %errorlevel% neq 0 goto err_exit

set err_msg=----- InvokeAI source unpack failed -----
tar -zxf InvokeAI.tgz
if %errorlevel% neq 0 goto err_exit

del /q InvokeAI.tgz

set err_msg=----- InvokeAI source copy failed -----
cd InvokeAI-*
xcopy . .. /e /h
if %errorlevel% neq 0 goto err_exit
cd ..

@rem cleanup
for /f %%i in ('dir /b InvokeAI-*') do rd /s /q %%i
rd /s /q .dev_scripts .github docker-build tests
del /q requirements.in requirements-mkdocs.txt shell.nix

echo ***** Unpacked InvokeAI source *****

@rem Download/unpack/clean up python-build-standalone
set err_msg=----- Python download failed -----
curl -L %PYTHON_BUILD_STANDALONE_URL%/%PYTHON_BUILD_STANDALONE% --output python.tgz
if %errorlevel% neq 0 goto err_exit

set err_msg=----- Python unpack failed -----
tar -zxf python.tgz
if %errorlevel% neq 0 goto err_exit

del /q python.tgz

echo ***** Unpacked python-build-standalone *****

@rem create venv
set err_msg=----- problem creating venv -----
.\python\python -E -s -m venv .venv
@rem In reality, the following is ALL that 'activate.bat' does,
@rem aside from setting the prompt, which we don't care about
set PYTHONPATH=
set PATH=.venv\Scripts;%PATH%
if %errorlevel% neq 0 goto err_exit

echo ***** Created Python virtual environment *****

@rem Print venv's Python version
set err_msg=----- problem calling venv's python -----
echo We're running under
.venv\Scripts\python --version
if %errorlevel% neq 0 goto err_exit

set err_msg=----- pip update failed -----
.venv\Scripts\python -m pip install --no-cache-dir --no-warn-script-location --upgrade pip
if %errorlevel% neq 0 goto err_exit

echo ***** Updated pip *****

set err_msg=----- requirements file copy failed -----
copy installer\py3.10-windows-x86_64-cuda-reqs.txt requirements.txt
if %errorlevel% neq 0 goto err_exit

set err_msg=----- main pip install failed -----
.venv\Scripts\python -m pip install --no-cache-dir --no-warn-script-location -r requirements.txt
if %errorlevel% neq 0 goto err_exit

set err_msg=----- clipseg install failed -----
.venv\Scripts\python -m pip install --no-cache-dir --no-warn-script-location -e git+https://github.com/invoke-ai/clipseg.git@models-rename#egg=clipseg
if %errorlevel% neq 0 goto err_exit

set err_msg=----- InvokeAI setup failed -----
.venv\Scripts\python -m pip install --no-cache-dir --no-warn-script-location -e .
if %errorlevel% neq 0 goto err_exit

echo ***** Installed Python dependencies *****

@rem preload the models
call .venv\Scripts\python scripts\preload_models.py
set err_msg=----- model download clone failed -----
if %errorlevel% neq 0 goto err_exit

echo ***** Finished downloading models *****

echo ***** Installing invoke.bat *****
copy installer\invoke.bat .\invoke.bat

@rem more cleanup
rd /s /q installer installer_files

pause
exit

:err_exit
echo %err_msg%
pause
exit
installer/InvokeAI/readme.txt (new file, 17 lines)
@@ -0,0 +1,17 @@
InvokeAI

Project homepage: https://github.com/invoke-ai/InvokeAI

Installation on Windows:
NOTE: You might need to enable Windows Long Paths. If you're not sure,
then you almost certainly need to. Simply double-click the 'WinLongPathsEnabled.reg'
file. Note that you will need to have admin privileges in order to
do this.

Please double-click the 'install.bat' file (while keeping it inside the invokeAI folder).

Installation on Linux and Mac:
Please open the terminal, and run './install.sh' (while keeping it inside the invokeAI folder).

After installation, please run the 'invoke.bat' file (on Windows) or 'invoke.sh'
file (on Linux/Mac) to start InvokeAI.
installer/WinLongPathsEnabled.reg (new binary file, not shown)
installer/create_installers.sh (new executable file, 29 lines)
@@ -0,0 +1,29 @@
#!/usr/bin/env bash

set -euo pipefail
IFS=$'\n\t'

echo "Be certain that you're in the 'installer' directory before continuing."
read -p "Press any key to continue, or CTRL-C to exit..."

# make the installer zip for linux and mac
rm -rf InvokeAI
mkdir -p InvokeAI
cp install.sh InvokeAI
cp readme.txt InvokeAI

zip -r InvokeAI-linux.zip InvokeAI
zip -r InvokeAI-mac.zip InvokeAI

# make the installer zip for windows
rm -rf InvokeAI
mkdir -p InvokeAI
cp install.bat InvokeAI
cp readme.txt InvokeAI
cp WinLongPathsEnabled.reg InvokeAI

zip -r InvokeAI-windows.zip InvokeAI

rm -rf InvokeAI

echo "The installer zips are ready for distribution."
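Because the script packages whatever `install.sh`, `install.bat`, `readme.txt` and `WinLongPathsEnabled.reg` sit next to it, it is meant to be run from inside the `installer/` directory; a short usage sketch:

```bash
# Run from a repository checkout; the zips are written into installer/.
cd installer
./create_installers.sh
ls InvokeAI-*.zip   # InvokeAI-linux.zip, InvokeAI-mac.zip, InvokeAI-windows.zip
```
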
installer/install.bat (new file, 172 lines)
@@ -0,0 +1,172 @@
@echo off

@rem This script will install git (if not found on the PATH variable)
@rem using micromamba (an 8mb static-linked single-file binary, conda replacement).
@rem For users who already have git, this step will be skipped.

@rem Next, it'll download the project's source code.
@rem Then it will download a self-contained, standalone Python and unpack it.
@rem Finally, it'll create the Python virtual environment and preload the models.

@rem This enables a user to install this project without manually installing git or Python

echo ***** Installing InvokeAI.. *****

set PATH=c:\windows\system32

@rem Config
set INSTALL_ENV_DIR=%cd%\installer_files\env
@rem https://mamba.readthedocs.io/en/latest/installation.html
set MICROMAMBA_DOWNLOAD_URL=https://micro.mamba.pm/api/micromamba/win-64/latest
set RELEASE_URL=https://github.com/invoke-ai/InvokeAI
set RELEASE_SOURCEBALL=/archive/refs/tags/2.1.3-rc4.tar.gz
set PYTHON_BUILD_STANDALONE_URL=https://github.com/indygreg/python-build-standalone/releases/download
set PYTHON_BUILD_STANDALONE=20221002/cpython-3.10.7+20221002-x86_64-pc-windows-msvc-shared-install_only.tar.gz

set PACKAGES_TO_INSTALL=

call git --version >.tmp1 2>.tmp2
if "%ERRORLEVEL%" NEQ "0" set PACKAGES_TO_INSTALL=%PACKAGES_TO_INSTALL% git

@rem Cleanup
del /q .tmp1 .tmp2

@rem (if necessary) install git into a contained environment
if "%PACKAGES_TO_INSTALL%" NEQ "" (
    @rem download micromamba
    echo ***** Downloading micromamba from %MICROMAMBA_DOWNLOAD_URL% to micromamba.exe *****

    call curl -L "%MICROMAMBA_DOWNLOAD_URL%" > micromamba.tbz2

    set err_msg=----- micromamba source unpack failed -----
    tar -jxf micromamba.tbz2
    if %errorlevel% neq 0 goto err_exit

    move Library\bin\micromamba.exe micromamba.exe
    rd /s /q Library info
    del /q micromamba.tbz2

    @rem test the mamba binary
    echo ***** Micromamba version: *****
    call micromamba.exe --version

    @rem create the installer env
    if not exist "%INSTALL_ENV_DIR%" (
        call micromamba.exe create -y --prefix "%INSTALL_ENV_DIR%"
    )

    echo ***** Packages to install:%PACKAGES_TO_INSTALL% *****

    call micromamba.exe install -y --prefix "%INSTALL_ENV_DIR%" -c conda-forge %PACKAGES_TO_INSTALL%

    if not exist "%INSTALL_ENV_DIR%" (
        echo ----- There was a problem while installing "%PACKAGES_TO_INSTALL%" using micromamba. Cannot continue. -----
        pause
        exit /b
    )
)

del /q micromamba.exe

@rem For 'git' only
set PATH=%INSTALL_ENV_DIR%\Library\bin;%PATH%

@rem Download/unpack/clean up InvokeAI release sourceball
set err_msg=----- InvokeAI source download failed -----
curl -L %RELEASE_URL%/%RELEASE_SOURCEBALL% --output InvokeAI.tgz
if %errorlevel% neq 0 goto err_exit

set err_msg=----- InvokeAI source unpack failed -----
tar -zxf InvokeAI.tgz
if %errorlevel% neq 0 goto err_exit

del /q InvokeAI.tgz

set err_msg=----- InvokeAI source copy failed -----
cd InvokeAI-*
xcopy . .. /e /h
if %errorlevel% neq 0 goto err_exit
cd ..

@rem cleanup
for /f %%i in ('dir /b InvokeAI-*') do rd /s /q %%i
rd /s /q .dev_scripts .github docker-build tests
del /q requirements.in requirements-mkdocs.txt shell.nix

echo ***** Unpacked InvokeAI source *****

@rem Download/unpack/clean up python-build-standalone
set err_msg=----- Python download failed -----
curl -L %PYTHON_BUILD_STANDALONE_URL%/%PYTHON_BUILD_STANDALONE% --output python.tgz
if %errorlevel% neq 0 goto err_exit

set err_msg=----- Python unpack failed -----
tar -zxf python.tgz
if %errorlevel% neq 0 goto err_exit

del /q python.tgz

echo ***** Unpacked python-build-standalone *****

@rem create venv
set err_msg=----- problem creating venv -----
.\python\python -E -s -m venv .venv
@rem In reality, the following is ALL that 'activate.bat' does,
@rem aside from setting the prompt, which we don't care about
set PYTHONPATH=
set PATH=.venv\Scripts;%PATH%
if %errorlevel% neq 0 goto err_exit

echo ***** Created Python virtual environment *****

@rem Print venv's Python version
set err_msg=----- problem calling venv's python -----
echo We're running under
.venv\Scripts\python --version
if %errorlevel% neq 0 goto err_exit

set err_msg=----- pip update failed -----
.venv\Scripts\python -m pip install --no-cache-dir --no-warn-script-location --upgrade pip
if %errorlevel% neq 0 goto err_exit

echo ***** Updated pip *****

set err_msg=----- requirements file copy failed -----
copy installer\py3.10-windows-x86_64-cuda-reqs.txt requirements.txt
if %errorlevel% neq 0 goto err_exit

set err_msg=----- main pip install failed -----
.venv\Scripts\python -m pip install --no-cache-dir --no-warn-script-location -r requirements.txt
if %errorlevel% neq 0 goto err_exit

set err_msg=----- clipseg install failed -----
.venv\Scripts\python -m pip install --no-cache-dir --no-warn-script-location -e git+https://github.com/invoke-ai/clipseg.git@models-rename#egg=clipseg
if %errorlevel% neq 0 goto err_exit

set err_msg=----- InvokeAI setup failed -----
.venv\Scripts\python -m pip install --no-cache-dir --no-warn-script-location -e .
if %errorlevel% neq 0 goto err_exit

echo ***** Installed Python dependencies *****

@rem preload the models
call .venv\Scripts\python scripts\preload_models.py
set err_msg=----- model download clone failed -----
if %errorlevel% neq 0 goto err_exit

echo ***** Finished downloading models *****

echo ***** Installing invoke.bat *****
copy installer\invoke.bat .\invoke.bat

@rem more cleanup
rd /s /q installer installer_files

pause
exit

:err_exit
echo %err_msg%
pause
exit
installer/install.sh (new executable file, 210 lines)
@@ -0,0 +1,210 @@
#!/usr/bin/env bash

set -euo pipefail
IFS=$'\n\t'

function _err_exit {
    if test "$1" -ne 0
    then
        echo -e "Error code $1; Error caught was '$2'"
        read -p "Press any key to exit..."
        exit
    fi
}

# This script will install git (if not found on the PATH variable)
# using micromamba (an 8mb static-linked single-file binary, conda replacement).
# For users who already have git, this step will be skipped.

# Next, it'll download the project's source code.
# Then it will download a self-contained, standalone Python and unpack it.
# Finally, it'll create the Python virtual environment and preload the models.

# This enables a user to install this project without manually installing git or Python

echo -e "\n***** Installing InvokeAI... *****\n"

OS_NAME=$(uname -s)
case "${OS_NAME}" in
    Linux*) OS_NAME="linux";;
    Darwin*) OS_NAME="darwin";;
    *) echo -e "\n----- Unknown OS: $OS_NAME! This script runs only on Linux or MacOS -----\n" && exit
esac

OS_ARCH=$(uname -m)
case "${OS_ARCH}" in
    x86_64*) ;;
    arm64*) ;;
    *) echo -e "\n----- Unknown system architecture: $OS_ARCH! This script runs only on x86_64 or arm64 -----\n" && exit
esac

# https://mamba.readthedocs.io/en/latest/installation.html
MAMBA_OS_NAME=$OS_NAME
MAMBA_ARCH=$OS_ARCH
if [ "$OS_NAME" == "darwin" ]; then
    MAMBA_OS_NAME="osx"
fi

if [ "$OS_ARCH" == "arm64" ]; then
    MAMBA_ARCH="aarch64"
fi

if [ "$OS_ARCH" == "x86_64" ]; then
    MAMBA_ARCH="64"
fi

PY_ARCH=$OS_ARCH
if [ "$OS_ARCH" == "arm64" ]; then
    PY_ARCH="aarch64"
fi

# Compute device ('cd' segment of reqs files) detect goes here
# This needs a ton of work
# Suggestions:
# - lspci
# - check $PATH for nvidia-smi, get CUDA/GPU version from output
# - Surely there's a similar utility for AMD?
CD="cuda"
if [ "$OS_NAME" == "darwin" ] && [ "$OS_ARCH" == "arm64" ]; then
    CD="mps"
fi

# config
INSTALL_ENV_DIR="$(pwd)/installer_files/env"
MICROMAMBA_DOWNLOAD_URL="https://micro.mamba.pm/api/micromamba/${MAMBA_OS_NAME}-${MAMBA_ARCH}/latest"
RELEASE_URL=https://github.com/invoke-ai/InvokeAI
RELEASE_SOURCEBALL=/archive/refs/tags/2.1.3-rc4.tar.gz
PYTHON_BUILD_STANDALONE_URL=https://github.com/indygreg/python-build-standalone/releases/download
if [ "$OS_NAME" == "darwin" ]; then
    PYTHON_BUILD_STANDALONE=20221002/cpython-3.10.7+20221002-${PY_ARCH}-apple-darwin-install_only.tar.gz
elif [ "$OS_NAME" == "linux" ]; then
    PYTHON_BUILD_STANDALONE=20221002/cpython-3.10.7+20221002-${PY_ARCH}-unknown-linux-gnu-install_only.tar.gz
fi

PACKAGES_TO_INSTALL=""

if ! hash "git" &>/dev/null; then PACKAGES_TO_INSTALL="$PACKAGES_TO_INSTALL git"; fi

# (if necessary) install git and conda into a contained environment
if [ "$PACKAGES_TO_INSTALL" != "" ]; then
    # download micromamba
    echo -e "\n***** Downloading micromamba from $MICROMAMBA_DOWNLOAD_URL to micromamba *****\n"

    curl -L "$MICROMAMBA_DOWNLOAD_URL" | tar -xvj bin/micromamba -O > micromamba

    chmod u+x "micromamba"

    # test the mamba binary
    echo -e "\n***** Micromamba version: *****\n"
    "micromamba" --version

    # create the installer env
    if [ ! -e "$INSTALL_ENV_DIR" ]; then
        "micromamba" create -y --prefix "$INSTALL_ENV_DIR"
    fi

    echo -e "\n***** Packages to install:$PACKAGES_TO_INSTALL *****\n"

    "micromamba" install -y --prefix "$INSTALL_ENV_DIR" -c conda-forge $PACKAGES_TO_INSTALL

    if [ ! -e "$INSTALL_ENV_DIR" ]; then
        echo -e "\n----- There was a problem while initializing micromamba. Cannot continue. -----\n"
        exit
    fi
fi

rm -f micromamba.exe

export PATH="$INSTALL_ENV_DIR/bin:$PATH"

# Download/unpack/clean up InvokeAI release sourceball
_err_msg="\n----- InvokeAI source download failed -----\n"
curl -L $RELEASE_URL/$RELEASE_SOURCEBALL --output InvokeAI.tgz
_err_exit $? "$_err_msg"
_err_msg="\n----- InvokeAI source unpack failed -----\n"
tar -zxf InvokeAI.tgz
_err_exit $? "$_err_msg"

rm -f InvokeAI.tgz

_err_msg="\n----- InvokeAI source copy failed -----\n"
cd InvokeAI-*
cp -r . ..
_err_exit $? "$_err_msg"
cd ..

# cleanup
rm -rf InvokeAI-*/
rm -rf .dev_scripts/ .github/ docker-build/ tests/ requirements.in requirements-mkdocs.txt shell.nix

echo -e "\n***** Unpacked InvokeAI source *****\n"

# Download/unpack/clean up python-build-standalone
_err_msg="\n----- Python download failed -----\n"
curl -L $PYTHON_BUILD_STANDALONE_URL/$PYTHON_BUILD_STANDALONE --output python.tgz
_err_exit $? "$_err_msg"
_err_msg="\n----- Python unpack failed -----\n"
tar -zxf python.tgz
_err_exit $? "$_err_msg"

rm -f python.tgz

echo -e "\n***** Unpacked python-build-standalone *****\n"

# create venv
_err_msg="\n----- problem creating venv -----\n"
./python/bin/python3 -E -s -m venv .venv
_err_exit $? "$_err_msg"
# In reality, the following is ALL that 'activate.bat' does,
# aside from setting the prompt, which we don't care about
export PYTHONPATH=
export PATH=.venv/bin:$PATH

echo -e "\n***** Created Python virtual environment *****\n"

# Print venv's Python version
_err_msg="\n----- problem calling venv's python -----\n"
echo -e "We're running under"
.venv/bin/python3 --version
_err_exit $? "$_err_msg"

_err_msg="\n----- pip update failed -----\n"
.venv/bin/python3 -m pip install --no-cache-dir --no-warn-script-location --upgrade pip
_err_exit $? "$_err_msg"

echo -e "\n***** Updated pip *****\n"

_err_msg="\n----- requirements file copy failed -----\n"
cp installer/py3.10-${OS_NAME}-"${OS_ARCH}"-${CD}-reqs.txt requirements.txt
_err_exit $? "$_err_msg"

_err_msg="\n----- main pip install failed -----\n"
.venv/bin/python3 -m pip install --no-cache-dir --no-warn-script-location -r requirements.txt
_err_exit $? "$_err_msg"

_err_msg="\n----- clipseg install failed -----\n"
.venv/bin/python3 -m pip install --no-cache-dir --no-warn-script-location -e git+https://github.com/invoke-ai/clipseg.git@models-rename#egg=clipseg
_err_exit $? "$_err_msg"

_err_msg="\n----- InvokeAI setup failed -----\n"
.venv/bin/python3 -m pip install --no-cache-dir --no-warn-script-location -e .
_err_exit $? "$_err_msg"

echo -e "\n***** Installed Python dependencies *****\n"

# preload the models
.venv/bin/python3 scripts/preload_models.py
_err_msg="\n----- model download clone failed -----\n"
_err_exit $? "$_err_msg"

echo -e "\n***** Finished downloading models *****\n"

echo -e "\n***** Installing invoke.sh *****\n"
cp installer/invoke.sh .

# more cleanup
rm -rf installer/ installer_files/

read -p "Press any key to exit..."
exit
installer/invoke.bat (new file, 25 lines)
@@ -0,0 +1,25 @@
@echo off

set PATH=c:\windows\system32
set PATH=.venv\Scripts;%PATH%

echo Do you want to generate images using the
echo 1. command-line
echo 2. browser-based UI
echo 3. open the developer console
set /P restore="Please enter 1, 2 or 3: "
IF /I "%restore%" == "1" (
    echo Starting the InvokeAI command-line..
    .venv\Scripts\python scripts\invoke.py
) ELSE IF /I "%restore%" == "2" (
    echo Starting the InvokeAI browser-based UI..
    .venv\Scripts\python scripts\invoke.py --web
) ELSE IF /I "%restore%" == "3" (
    echo Developer Console
    .venv\Scripts\python
    cmd /k
) ELSE (
    echo Invalid selection
    pause
    exit /b
)
installer/invoke.sh (new executable file, 24 lines)
@@ -0,0 +1,24 @@
#!/usr/bin/env bash

set -euo pipefail
IFS=$'\n\t'

PATH=.venv/bin:$PATH

if [ "$0" != "bash" ]; then
    echo "Do you want to generate images using the"
    echo "1. command-line"
    echo "2. browser-based UI"
    echo "3. open the developer console"
    read -p "Please enter 1, 2, or 3: " yn
    case $yn in
        1 ) printf "\nStarting the InvokeAI command-line..\n"; .venv/bin/python scripts/invoke.py;;
        2 ) printf "\nStarting the InvokeAI browser-based UI..\n"; .venv/bin/python scripts/invoke.py --web;;
        3 ) printf "\nDeveloper Console:\n"; file_name=$(basename "${BASH_SOURCE[0]}"); bash --init-file "$file_name";;
        * ) echo "Invalid selection"; exit;;
    esac
else # in developer console
    python --version
    echo "Press ^D to exit"
    export PS1="(InvokeAI) \u@\h \w> "
fi
installer/py3.10-darwin-arm64-mps-reqs.txt (new file, 2069 lines; diff too large to show)
installer/py3.10-darwin-x86_64-cpu-reqs.txt (new file, 2069 lines; diff too large to show)
installer/py3.10-linux-x86_64-cuda-reqs.txt (new file, 2066 lines; diff too large to show)
installer/py3.10-windows-x86_64-cuda-reqs.txt (new file, 2072 lines; diff too large to show)
installer/readme.txt (new file, 17 lines)
@@ -0,0 +1,17 @@
InvokeAI

Project homepage: https://github.com/invoke-ai/InvokeAI

Installation on Windows:
NOTE: You might need to enable Windows Long Paths. If you're not sure,
then you almost certainly need to. Simply double-click the 'WinLongPathsEnabled.reg'
file. Note that you will need to have admin privileges in order to
do this.

Please double-click the 'install.bat' file (while keeping it inside the invokeAI folder).

Installation on Linux and Mac:
Please open the terminal, and run './install.sh' (while keeping it inside the invokeAI folder).

After installation, please run the 'invoke.bat' file (on Windows) or 'invoke.sh'
file (on Linux/Mac) to start InvokeAI.
installer/requirements.in (new file, 24 lines)
@@ -0,0 +1,24 @@
--prefer-binary
--extra-index-url https://download.pytorch.org/whl/cu116
--trusted-host https://download.pytorch.org
albumentations
diffusers
eventlet
flask_cors
flask_socketio
flaskwebgui
getpass_asterisk
imageio-ffmpeg
pyreadline3
realesrgan
send2trash
streamlit
taming-transformers-rom1504
test-tube
torch-fidelity
torchvision==0.13.1 ; platform_system == 'Darwin'
torchvision==0.13.1+cu116 ; platform_system == 'Linux' or platform_system == 'Windows'
transformers
https://github.com/openai/CLIP/archive/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1.zip
https://github.com/TencentARC/GFPGAN/archive/2eac2033893ca7f427f4035d80fe95b92649ac56.zip
https://github.com/invoke-ai/k-diffusion/archive/7f16b2c33411f26b3eae78d10648d625cb0c1095.zip
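This `.in` file is the unpinned input from which the per-platform `py3.10-*-reqs.txt` files appear to be generated. A hedged sketch using pip-tools follows; the exact generation command is not shown in this diff, so treat the flags as illustrative only.

```bash
# Assumption: pip-tools' pip-compile produces the pinned files, as the
# 'feat-install-pip-compile' branch name suggests.
python -m pip install pip-tools
pip-compile installer/requirements.in \
    --output-file installer/py3.10-linux-x86_64-cuda-reqs.txt
```
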
@@ -8,8 +8,8 @@ mkdir -p invokeAI
cp install.sh invokeAI
cp readme.txt invokeAI

zip -r invokeAI-linux.zip invokeAI
zip -r invokeAI-mac.zip invokeAI
zip -r invokeAI-src-installer-linux.zip invokeAI
zip -r invokeAI-src-installer-mac.zip invokeAI

# make the installer zip for windows
rm -rf invokeAI
@@ -17,6 +17,6 @@ mkdir -p invokeAI
cp install.bat invokeAI
cp readme.txt invokeAI

zip -r invokeAI-windows.zip invokeAI
zip -r invokeAI-src-installer-windows.zip invokeAI

echo "The installer zips are ready to be distributed.."
@@ -72,7 +72,6 @@ if not exist ".git" (
    call git config --local init.defaultBranch main
    call git remote add origin %REPO_URL%
    call git fetch
    # call git checkout origin/main -ft
    call git checkout origin/release-candidate-2-1-3 -ft
)

@@ -93,6 +92,9 @@ if "%ERRORLEVEL%" NEQ "0" (
    exit /b
)

cp source_installer/install.bat install.bat
cp source_installer/update.bat update.bat

call conda activate invokeai
@rem preload the models
call python scripts\preload_models.py
@@ -116,6 +116,8 @@ then
    echo "Please visit https://invoke-ai.github.io/InvokeAI/#installation for alternative"
    echo "installation methods"
else
    ln -sf source_installer/install.sh .
    ln -sf source_installer/update.sh .
    conda activate invokeai
    # preload the models
    echo "Calling the preload_models.py script"
@@ -23,6 +23,7 @@ if [ "$0" != "bash" ]; then
        * ) echo "Invalid selection"; exit;;
    esac
else # in developer console
    which python
    python --version
    echo "Press ^D to exit"
    export PS1="(InvokeAI) \u@\h \w> "
fi