feat(ollama): Add Ollama to enable local model agents (#153)

* feat(ollama): add the ollama package dependency, add two separate deployment Docker Compose files, and add a shell script to toggle between them

add base ollama.ts implementation

add work-in-progress dynamic fetching of Ollama models

fix dynamic Ollama model fetching; models now render in the GUI (see the API sketch below)

fix package.json and package-lock.json to remove the ollama dependency, and add types.ts for Ollama

switch MODEL_PROVIDERS to getModelProviders

make the dynamic Ollama model dropdown update via a zustand store

apply the dynamic Ollama model changes to router.ts and evaluator.ts as well
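
The dynamic model list presumably comes from Ollama's local REST API. A minimal sketch of the request involved (assuming Ollama's default port 11434 and its standard /api/tags endpoint; not code from this commit):

```bash
# Ask the local Ollama daemon which models it currently serves;
# the GUI dropdown presumably renders these same names.
curl -s http://localhost:11434/api/tags | jq -r '.models[].name'
```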

* feat(ollama): fix evaluated options by de-duplicating them

* feat(ollama): update README.md to reflect the local model workflow

* feat(ollama): add a base non-Ollama docker-compose file; add a --local flag to start_simstudio_docker.sh that starts the Ollama service (a sketch of the flag handling follows below)
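
A minimal sketch of how such a --local flag could route to Docker Compose profiles (hypothetical; the actual start_simstudio_docker.sh may differ):

```bash
#!/usr/bin/env bash
set -e

if [[ "${1:-}" == "--local" ]]; then
  # Prefer the GPU profile when an NVIDIA GPU is visible, otherwise fall back to CPU.
  if command -v nvidia-smi >/dev/null 2>&1; then
    docker compose --profile local-gpu up -d --build
  else
    docker compose --profile local-cpu up -d --build
  fi
else
  docker compose up -d --build
fi
```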

* feat(ollama): fix README.md local model instructions

* feat(ollama): remove the de-duplication logic and split getModelProviders into two functions

* fix non-local init and translate.ts

* create a combined docker-compose file and fix the start_simstudio_docker script as well

* update package-lock.json

* feat(ollama): fix README.md instructions and docker compose

---------

Co-authored-by: Arunabh Sharma <arunabh.sharma@supernal.aero>
Author: Arunabh Sharma
Committed by: GitHub
Date: 2025-03-29 13:34:44 -07:00
Parent: 272a486bcc
Commit: fe2c7d8d98
20 changed files with 691 additions and 148 deletions


@@ -39,8 +39,12 @@ cd sim
# Create environment file and update with required environment variables (BETTER_AUTH_SECRET)
cp sim/.env.example sim/.env
# Start the Docker environment
docker compose up -d --build
# or
# Start Sim Studio using the provided script
./start_simstudio_docker.sh
```
After running these commands:
@@ -66,6 +70,36 @@ After running these commands:
docker compose up -d --build
```
#### Working with Local Models
To use local models with Sim Studio, follow these steps:
1. **Pull Local Models**
```bash
# Run the ollama_docker.sh script to pull the required models
./sim/scripts/ollama_docker.sh pull <model_name>
```
2. **Start Sim Studio with Local Models**
```bash
# Start Sim Studio with local model support
./start_simstudio_docker.sh --local
# or
# Start Sim Studio with local model support if you have an NVIDIA GPU
docker compose --profile local-gpu up -d --build
# or
# Start Sim Studio with local model support if you don't have an NVIDIA GPU
docker compose --profile local-cpu up -d --build
```
The application will now be configured to use your local models. You can access it at [http://localhost:3000/w/](http://localhost:3000/w/).
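To verify the local setup, you can check that Ollama is reachable and that your pulled models are listed (assuming the compose file names the service `ollama` and publishes Ollama's default port 11434; adjust to your configuration):

```bash
# List pulled models from inside the Ollama container (service name assumed)
docker compose exec ollama ollama list
# or query Ollama's HTTP API from the host (default port assumed)
curl -s http://localhost:11434/api/tags
```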
### Option 2: Dev Containers
1. Open VS Code or your favorite VS Code fork (Cursor, Windsurf, etc.)