mirror of
https://github.com/All-Hands-AI/OpenHands.git
synced 2026-01-10 07:18:10 -05:00
Revamp docker build process (#1121)

* refactor docker building
* change to buildx
* disable branch filter
* disable tags
* matrix for building
* fix branch filter
* rename workflow
* sanitize ref name
* fix sanitization
* fix source command
* fix source command
* add push arg
* enable for all branches
* logs
* empty commit
* try freeing disk space
* try disk clean again
* try alpine
* Update ghcr.yml
* Update ghcr.yml
* move checkout
* ignore .git
* add disk space debug
* add df h to build script
* remove pull
* try another failure bypass
* remove maximize build space step
* remove df -h debug
* add no-root
* multi-stage python build
* add ssh
* update readme
* remove references to config.toml
@@ -6,37 +6,26 @@ OpenDevin uses LiteLLM for completion calls. You can find their documentation on

## Azure OpenAI configs

When running the OpenDevin Docker image, you'll need to set the following environment variables using `-e`:

```
LLM_BASE_URL="<azure-api-base-url>" # e.g. "https://openai-gpt-4-test-v-1.openai.azure.com/"
LLM_API_KEY="<azure-api-key>"
LLM_MODEL="azure/<your-gpt-deployment-name>"
AZURE_API_VERSION="<api-version>" # e.g. "2024-02-15-preview"
```
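Put together, an invocation might look like the following sketch. The image name `ghcr.io/opendevin/opendevin` is an assumption here; use whichever image the README tells you to run, and substitute your own Azure values.

```shell
# Illustrative only: image tag and all Azure values are placeholders.
docker run -it \
    -e LLM_BASE_URL="https://openai-gpt-4-test-v-1.openai.azure.com/" \
    -e LLM_API_KEY="<azure-api-key>" \
    -e LLM_MODEL="azure/<your-gpt-deployment-name>" \
    -e AZURE_API_VERSION="2024-02-15-preview" \
    ghcr.io/opendevin/opendevin
```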
# 2. Embeddings

OpenDevin uses llama-index for embeddings. You can find their documentation on Azure [here](https://docs.llamaindex.ai/en/stable/api_reference/embeddings/azure_openai/)

## Azure OpenAI configs

The model used for Azure OpenAI embeddings is "text-embedding-ada-002".
You need the correct deployment name for this model in your Azure account.

When running OpenDevin in Docker, set the following environment variables using `-e`:

```
LLM_EMBEDDING_MODEL="azureopenai"
DEPLOYMENT_NAME="<your-embedding-deployment-name>" # e.g. "TextEmbedding...<etc>"
LLM_API_VERSION="<api-version>" # e.g. "2024-02-15-preview"
```
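These are passed the same way as the completion variables above; a minimal sketch, again assuming the `ghcr.io/opendevin/opendevin` image name:

```shell
# Illustrative only; combine with the completion variables from section 1.
docker run -it \
    -e LLM_EMBEDDING_MODEL="azureopenai" \
    -e DEPLOYMENT_NAME="<your-embedding-deployment-name>" \
    -e LLM_API_VERSION="2024-02-15-preview" \
    ghcr.io/opendevin/opendevin
```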
@@ -7,7 +7,7 @@ Linux:

```
curl -fsSL https://ollama.com/install.sh | sh
```

Windows or macOS:

- Download from [here](https://ollama.com/download/)
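Either way, once the ollama server is running you can check that it is reachable; by default it listens on port `11434` and should respond with a short status message:

```shell
# Quick check that the ollama daemon is up on its default port.
curl http://localhost:11434
```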
@@ -60,30 +60,10 @@ sudo systemctl stop ollama
For more info go [here](https://github.com/ollama/ollama/blob/main/docs/faq.md)
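Before starting OpenDevin, make sure the model you plan to use has actually been pulled; the model name below is only an example:

```shell
# Pull a model once so ollama can serve it locally, then confirm it is installed.
ollama pull llama2:13b-chat-q4_K_M
ollama list
```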

## 3. Start OpenDevin

Use the instructions in [README.md](/README.md) to start OpenDevin using Docker.
When running `docker run`, add the following environment variables using `-e`:

```
LLM_API_KEY="ollama"
@@ -92,34 +72,25 @@ LLM_EMBEDDING_MODEL="local"
LLM_BASE_URL="http://localhost:<port_number>"
WORKSPACE_DIR="./workspace"
```
Notes:
- The API key should be set to `"ollama"`
- The base URL needs to be `localhost`
- By default the ollama port is `11434`, unless you set it
- `LLM_MODEL` needs to be the entire model name
  - Example: `LLM_MODEL="ollama/llama2:13b-chat-q4_K_M"`
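Putting the notes above together, the extra flags to `docker run` might look like this sketch; the image tag and port are assumptions, so follow the README for the exact invocation:

```shell
# Illustrative flags only; combine with the rest of the README's docker run command.
docker run -it \
    -e LLM_API_KEY="ollama" \
    -e LLM_MODEL="ollama/llama2:13b-chat-q4_K_M" \
    -e LLM_EMBEDDING_MODEL="local" \
    -e LLM_BASE_URL="http://localhost:11434" \
    -e WORKSPACE_DIR="./workspace" \
    ghcr.io/opendevin/opendevin
```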

You should now be able to connect to `http://localhost:3001/` with your local model running!


## Additional Notes for WSL2 Users:

1. If you encounter the following error during setup: `Exception: Failed to create opendevin user in sandbox: b'useradd: UID 0 is not unique\n'`

You can resolve it by running:
```
export SANDBOX_USER_ID=1000
```

2. If you face issues running Poetry even after installing it during the build process, you may need to add its binary path to your environment:
```
export PATH="$HOME/.local/bin:$PATH"
```

@@ -134,4 +105,4 @@ You can resolve it by running:

```
- Save the `.wslconfig` file.
- Restart WSL2 completely by exiting any running WSL2 instances and executing the command `wsl --shutdown` in your command prompt or terminal.
- After restarting WSL, attempt to execute `make run` again. The networking issue should be resolved.