With these changes, the Docker image can be built and run successfully on hosts with AMD devices using ROCm acceleration. Previously, a ROCm-enabled version of torch would be installed, but later removed during installation of InvokeAI itself, because InvokeAI required a newer torch version than the one previously installed. The fix consists of multiple components:

- Update the hardcoded versions of torch and torchvision to the versions currently used in `pyproject.toml`, so that a new version need not be installed during installation of InvokeAI.
- Specify `--extra-index-url` when installing InvokeAI so that even if a version mismatch occurs, the correct torch version is still installed. This also necessitates changing `--index-url` to `--extra-index-url` for the Torch repo; otherwise non-torch dependencies would not be found.
- In `run.sh`, build the image for the selected service.
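For illustration, the corrected install step resembles the following sketch (the ROCm index URL and pinned versions here are placeholders, not the exact values in the repo):

```dockerfile
# Sketch only: pin torch/torchvision to the versions in pyproject.toml, and use
# --extra-index-url (not --index-url) so non-torch dependencies still resolve from PyPI.
RUN pip install \
    --extra-index-url https://download.pytorch.org/whl/rocm5.4.2 \
    "torch==2.0.1" "torchvision==0.15.2"
```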
# InvokeAI Containerized
All commands should be run within the `docker` directory: `cd docker`
## Quickstart 🚀
On a known working Linux+Docker+CUDA (Nvidia) system, execute `./run.sh` in this directory. It will take a few minutes - depending on your internet speed - to install the core models. Once the application starts up, open http://localhost:9090 in your browser to Invoke!
For more configuration options (using an AMD GPU, a custom root directory location, etc.): read on.
## Detailed setup
### Linux
- Ensure buildkit is enabled in the Docker daemon settings (`/etc/docker/daemon.json`), as shown in the sketch after this list.
- Install the `docker compose` plugin using your package manager, or follow a tutorial.
  - The deprecated `docker-compose` (hyphenated) CLI continues to work for now.
- Ensure the docker daemon is able to access the GPU.
  - You may need to install nvidia-container-toolkit.
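Enabling buildkit in the daemon settings looks like the following (a minimal sketch; merge the `features` key into your existing `/etc/docker/daemon.json` rather than replacing the file):

```json
{
  "features": {
    "buildkit": true
  }
}
```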
### macOS
- Ensure Docker has at least 16GB RAM
- Enable VirtioFS for file sharing
- Enable `docker compose` V2 support

This is done via Docker Desktop preferences.
### Configure Invoke environment
- Make a copy of `.env.sample` and name it `.env` (`cp .env.sample .env` (Mac/Linux) or `copy .env.sample .env` (Windows)). Make changes as necessary. Set `INVOKEAI_ROOT` to an absolute path to:
  a. the desired location of the InvokeAI runtime directory, or
  b. an existing, v3.0.0 compatible runtime directory.
- Execute `run.sh`
The image will be built automatically if needed.
The runtime directory (holding models and outputs) will be created in the location specified by INVOKEAI_ROOT. The default location is ~/invokeai. The runtime directory will be populated with the base configs and models necessary to start generating.
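Putting the steps together, a typical first-time setup looks like this (the `INVOKEAI_ROOT` path is illustrative):

```bash
cd docker
cp .env.sample .env
# Edit .env and set INVOKEAI_ROOT to an absolute path, e.g.:
#   INVOKEAI_ROOT=/home/me/invokeai
./run.sh
```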
## Use a GPU
- Linux is recommended for GPU support in Docker.
- WSL2 is required for Windows.
- Only `x86_64` architecture is supported.
The Docker daemon on the system must already be set up to use the GPU. On Linux, this involves installing nvidia-docker-runtime and configuring the nvidia runtime as the default. Steps will be different for AMD. Please see the Docker documentation for the most up-to-date instructions for using your GPU with Docker.
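For Nvidia on Linux, configuring the nvidia runtime as the default typically looks like the following in `/etc/docker/daemon.json` (a sketch, assuming nvidia-container-toolkit is already installed; restart the Docker daemon afterwards):

```json
{
  "runtimes": {
    "nvidia": {
      "path": "nvidia-container-runtime",
      "runtimeArgs": []
    }
  },
  "default-runtime": "nvidia"
}
```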
To use an AMD GPU, set `GPU_DRIVER=rocm` in your `.env` file.
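Under the hood, a ROCm-enabled compose service generally passes the AMD GPU device nodes through to the container; a sketch of that approach (device paths assumed to exist on the host):

```yaml
# Sketch: typical device passthrough for ROCm containers
devices:
  - /dev/kfd:/dev/kfd
  - /dev/dri:/dev/dri
```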
## Customize
Check the `.env.sample` file. It contains some environment variables for running in Docker. Copy it, name it `.env`, and fill it in with your own values. Next time you run `run.sh`, your custom values will be used.

You can also set these values in `docker-compose.yml` directly, but `.env` will help avoid conflicts when code is updated.

Values are optional, but setting `INVOKEAI_ROOT` is highly recommended. The default is `~/invokeai`. Example:
```bash
INVOKEAI_ROOT=/Volumes/WorkDrive/invokeai
HUGGINGFACE_TOKEN=the_actual_token
CONTAINER_UID=1000
GPU_DRIVER=nvidia
```
Any environment variables supported by InvokeAI can be set here - please see the Configuration docs for further detail.
## Even Moar Customizing!
See the `docker-compose.yml` file. The `command` instruction can be uncommented and used to run arbitrary startup commands. Some examples below.
### Reconfigure the runtime directory
Can be used to download additional models from the supported model list. In conjunction with `INVOKEAI_ROOT`, it can also be used to initialize a runtime directory.
```yaml
command:
  - invokeai-configure
  - --yes
```
Or install models:
```yaml
command:
  - invokeai-model-install
```
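After uncommenting a `command`, rebuild and restart the stack so it takes effect; for example:

```bash
docker compose up --build
```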