Add ollama wsl section to LocalLLMs.md (#1318)

Ensure that you have the Ollama server up and running.
For detailed startup instructions, refer to the official documentation [here](https://github.com/ollama/ollama).

This guide assumes you've started ollama with `ollama serve`. If you're running ollama differently (e.g. inside Docker), the instructions might need to be modified. Please note that if you're running in WSL, the default Ollama configuration blocks requests from Docker containers. See [here](#4-configuring-the-ollama-service-wsl).
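If you haven't started it yet, a minimal sketch of that setup looks like this (assuming a default install listening on port 11434):

```bash
# Start the Ollama server (it listens on http://localhost:11434 by default)
ollama serve

# In another terminal, confirm it is up; this should print "Ollama is running"
curl http://localhost:11434
```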
## 1. Pull Models
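For example, pulling and verifying a model might look like this (`codellama:7b` is simply the model used as an example later in this guide):

```bash
# Download a model so Ollama can serve it locally
ollama pull codellama:7b

# Confirm it was pulled
ollama list
```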
Then in the `Model` input, enter `ollama/codellama:7b`, or the name of the model you pulled.
If it doesn’t show up in a dropdown, that’s fine, just type it in. Click Save when you’re done.

And now you're ready to go!
## 4. Configuring the ollama service (WSL)
The default configuration for ollama in WSL only serves localhost, which means you can't reach it from a Docker container (e.g. it won't work with OpenDevin). First, let's test that ollama is running correctly.

```bash
ollama list # get list of installed models
curl http://localhost:11434/api/generate -d '{"model":"[NAME]","prompt":"hi"}'
# ex. curl http://localhost:11434/api/generate -d '{"model":"codellama:7b","prompt":"hi"}'
# ex. curl http://localhost:11434/api/generate -d '{"model":"codellama","prompt":"hi"}' # the tag is optional if there is only one
```
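If the server is reachable, the generate call streams back JSON chunks that look roughly like the following (field values will differ, and the exact shape may vary between Ollama versions):

```
{"model":"codellama","created_at":"...","response":"Hello","done":false}
...
{"model":"codellama","created_at":"...","response":"","done":true}
```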
Once that is done, test that it allows "outside" requests, like those from inside a Docker container.

```bash
docker ps # get list of running docker containers; for the most accurate test, choose the OpenDevin sandbox container.
docker exec [CONTAINER ID] curl http://host.docker.internal:11434/api/generate -d '{"model":"[NAME]","prompt":"hi"}'
# ex. docker exec cd9cc82f7a11 curl http://host.docker.internal:11434/api/generate -d '{"model":"codellama","prompt":"hi"}'
```
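With the default WSL configuration this test usually fails instead of returning JSON; the error typically looks something like this (exact wording depends on your curl version):

```
curl: (7) Failed to connect to host.docker.internal port 11434: Connection refused
```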
### Fixing it
Now let's make it work. Edit `/etc/systemd/system/ollama.service` with sudo privileges. (The path may vary depending on the Linux distribution.)

```bash
sudo vi /etc/systemd/system/ollama.service
```
or

```bash
sudo nano /etc/systemd/system/ollama.service
```
In the `[Service]` section, add these lines:

```
Environment="OLLAMA_HOST=0.0.0.0:11434"
Environment="OLLAMA_ORIGINS=*"
```
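For orientation, after the edit the `[Service]` section might look roughly like the following; the `ExecStart`, `User`, and other existing lines depend on how Ollama was installed, and only the two `Environment` lines are the actual addition:

```
[Service]
ExecStart=/usr/bin/ollama serve
User=ollama
Group=ollama
Restart=always
# Added so requests from Docker containers (and other non-localhost clients) are accepted
Environment="OLLAMA_HOST=0.0.0.0:11434"
Environment="OLLAMA_ORIGINS=*"
```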
Then save, reload the configuration, and restart the service:

```bash
sudo systemctl daemon-reload
sudo systemctl restart ollama
```
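Optionally, you can check from inside WSL that the service is now listening on all interfaces rather than only on localhost (a quick sketch; the `ss` output format varies slightly between distributions):

```bash
# Should show a listener on 0.0.0.0:11434 (or *:11434) instead of 127.0.0.1:11434
ss -tln | grep 11434
```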
Finally, test that ollama is accessible from within the container:

```bash
ollama list # get list of installed models
docker ps # get list of running docker containers; for the most accurate test, choose the OpenDevin sandbox container.
docker exec [CONTAINER ID] curl http://host.docker.internal:11434/api/generate -d '{"model":"[NAME]","prompt":"hi"}'
```