Mirror of https://github.com/All-Hands-AI/OpenHands.git (synced 2026-01-10 07:18:10 -05:00)
remove openai key assertion, enable alternate embedding models (#231)
* remove openai key assertion
* support different embedding models
* add todo
* add local embeddings
* Make lint happy (#232)
* Include Azure AI embedding model (#239)
  * Include Azure AI embedding model
  * updated requirements
  ---------
  Co-authored-by: Rohit Rushil <rohit.rushil@honeywell.com>
* Update agenthub/langchains_agent/utils/memory.py
* Update agenthub/langchains_agent/utils/memory.py
* add base url
* add docs
* Update requirements.txt
* default to local embeddings
* Update llm.py
* fix fn

---------

Co-authored-by: Engel Nyst <enyst@users.noreply.github.com>
Co-authored-by: RoHitRushil <43521824+RohitX0X@users.noreply.github.com>
Co-authored-by: Rohit Rushil <rohit.rushil@honeywell.com>
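The commit message above mentions supporting "llama2", "openai", "azureopenai", and "local" embedding backends and defaulting to local embeddings. Below is a minimal sketch of how selection keyed on the LLM_EMBEDDING_MODEL variable might look; the helper name is hypothetical and this is not the actual code in agenthub/langchains_agent/utils/memory.py.

```python
import os

# Hypothetical sketch: pick the embedding backend named by LLM_EMBEDDING_MODEL,
# defaulting to "local" as the commit describes. Not the OpenDevin implementation.

SUPPORTED_EMBEDDINGS = ("llama2", "openai", "azureopenai", "local")

def choose_embedding_backend() -> str:
    """Return the embedding backend name, validated against the supported set."""
    backend = os.environ.get("LLM_EMBEDDING_MODEL", "local").lower()
    if backend not in SUPPORTED_EMBEDDINGS:
        raise ValueError(
            f"Unsupported LLM_EMBEDDING_MODEL {backend!r}; "
            f"expected one of {SUPPORTED_EMBEDDINGS}"
        )
    return backend

if __name__ == "__main__":
    print(f"Embedding backend: {choose_embedding_backend()}")
```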
This commit modifies 1 file:

README.md (14 lines changed)
@@ -54,9 +54,19 @@ export LLM_API_KEY="your-api-key"
 export LLM_MODEL="claude-3-opus-20240229"
 ```
 
-### Running on the Command Line
-
-You can also run OpenDevin from your command line:
+You can also set the base URL for local/custom models:
+
+```bash
+export LLM_BASE_URL="https://localhost:3000"
+```
+
+And you can customize which embeddings are used for the vector database storage:
+
+```bash
+export LLM_EMBEDDING_MODEL="llama2" # can be "llama2", "openai", "azureopenai", or "local"
+```
+
+### Running on the Command Line
+
+You can run OpenDevin from your command line:
 
 ```bash
 PYTHONPATH=`pwd` python opendevin/main.py -d ./workspace/ -i 100 -t "Write a bash script that prints 'hello world'"
 ```
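As a usage illustration of the variables documented in the README diff above, here is a minimal sketch that gathers LLM_MODEL, LLM_API_KEY, LLM_BASE_URL, and LLM_EMBEDDING_MODEL from the environment. The LLMConfig/from_env names are illustrative assumptions, not part of the OpenDevin codebase.

```python
import os
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch: collect the documented environment variables into one
# configuration object. LLMConfig and from_env are illustrative names only.

@dataclass
class LLMConfig:
    model: Optional[str]
    api_key: Optional[str]
    base_url: Optional[str]
    embedding_model: str

    @classmethod
    def from_env(cls) -> "LLMConfig":
        return cls(
            model=os.environ.get("LLM_MODEL"),
            api_key=os.environ.get("LLM_API_KEY"),
            base_url=os.environ.get("LLM_BASE_URL"),
            # The commit notes that embeddings default to "local".
            embedding_model=os.environ.get("LLM_EMBEDDING_MODEL", "local"),
        )

if __name__ == "__main__":
    print(LLMConfig.from_env())
```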