remove openai key assertion, enable alternate embedding models (#231)

* remove openai key assertion

* support different embedding models

* add todo

* add local embeddings

* Make lint happy (#232)

* Include Azure AI embedding model (#239)

* Include Azure AI embedding model

* updated requirements

---------

Co-authored-by: Rohit Rushil <rohit.rushil@honeywell.com>

* Update agenthub/langchains_agent/utils/memory.py

* Update agenthub/langchains_agent/utils/memory.py

* add base url

* add docs

* Update requirements.txt

* default to local embeddings

* Update llm.py

* fix fn

---------

Co-authored-by: Engel Nyst <enyst@users.noreply.github.com>
Co-authored-by: RoHitRushil <43521824+RohitX0X@users.noreply.github.com>
Co-authored-by: Rohit Rushil <rohit.rushil@honeywell.com>
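
Taken together, these changes let OpenDevin run without an OpenAI key. A minimal sketch of a fully local configuration, using only the variables documented in the diff below; the base URL and model name are illustrative assumptions, not values set by this commit:

```bash
# Illustrative local setup; the base URL and model identifier are assumptions,
# not defaults introduced by this commit.
export LLM_BASE_URL="http://localhost:11434"   # e.g. a local Ollama server
export LLM_MODEL="ollama/llama2"               # whatever model that server exposes
export LLM_EMBEDDING_MODEL="local"             # local embeddings, so no OpenAI key is needed
PYTHONPATH=`pwd` python opendevin/main.py -d ./workspace/ -i 100 -t "Write a bash script that prints 'hello world'"
```
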
Author: Robert Brennan
Date: 2024-03-27 14:58:47 -04:00
Committed by: GitHub
parent 9ae903697d
commit 4304aceff3
5 changed files with 57 additions and 15 deletions

@@ -54,9 +54,19 @@ export LLM_API_KEY="your-api-key"
 export LLM_MODEL="claude-3-opus-20240229"
 ```
-### Running on the Command Line
-You can also run OpenDevin from your command line:
+You can also set the base URL for local/custom models:
+```bash
+export LLM_BASE_URL="https://localhost:3000"
+```
+And you can customize which embeddings are used for the vector database storage:
+```bash
+export LLM_EMBEDDING_MODEL="llama2" # can be "llama2", "openai", "azureopenai", or "local"
+```
+### Running on the Command Line
+You can run OpenDevin from your command line:
 ```bash
 PYTHONPATH=`pwd` python opendevin/main.py -d ./workspace/ -i 100 -t "Write a bash script that prints 'hello world'"
 ```
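
For reference, here is a rough sketch of how the `LLM_EMBEDDING_MODEL` switch could map to embedding backends in `agenthub/langchains_agent/utils/memory.py`. It uses llama-index (0.10-style imports) as the vector-store layer; the wiring, the Azure-related variable names (`LLM_DEPLOYMENT_NAME`, `LLM_API_VERSION`), and the default HuggingFace model are assumptions, not the literal code from this commit:

```python
# Sketch only, not the actual memory.py from this commit.
# Assumes llama-index >= 0.10 with the ollama/openai/azure_openai/huggingface
# embedding packages installed; the class names are real, the wiring is illustrative.
import os


def choose_embedding_model():
    model = os.environ.get("LLM_EMBEDDING_MODEL", "local")
    base_url = os.environ.get("LLM_BASE_URL")

    if model == "llama2":
        # Embeddings served by a local Ollama instance.
        from llama_index.embeddings.ollama import OllamaEmbedding
        return OllamaEmbedding(
            model_name="llama2",
            base_url=base_url or "http://localhost:11434",
        )
    if model == "openai":
        from llama_index.embeddings.openai import OpenAIEmbedding
        return OpenAIEmbedding()  # reads OPENAI_API_KEY from the environment
    if model == "azureopenai":
        # The deployment/endpoint env var names below are placeholders (assumptions).
        from llama_index.embeddings.azure_openai import AzureOpenAIEmbedding
        return AzureOpenAIEmbedding(
            model="text-embedding-ada-002",
            deployment_name=os.environ.get("LLM_DEPLOYMENT_NAME", ""),
            api_key=os.environ.get("LLM_API_KEY", ""),
            azure_endpoint=base_url or "",
            api_version=os.environ.get("LLM_API_VERSION", ""),
        )
    # "local": a small HuggingFace model, so no external API key is required.
    from llama_index.embeddings.huggingface import HuggingFaceEmbedding
    return HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")
```

Defaulting to the local branch is what allows the OpenAI-key assertion to be dropped: the vector store works out of the box, and the OpenAI and Azure backends become opt-in.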