fix: Update requirements.txt with google-generativeai (#315)

* Install the google-generativeai package and update requirements.txt using pip freeze

* Switch to pipenv for package management and add the google-generativeai package as well

* Update README with new installation instructions, refactor a little for better ordering of instructions

* Fix typo

---------

Co-authored-by: Robert Brennan <accounts@rbren.io>
Authored by George Balch on 2024-03-29 12:16:12 -07:00, committed by GitHub
parent 98e7057d53
commit b443c0af29
4 changed files with 4002 additions and 26 deletions

Pipfile (new file, 31 lines added)

@@ -0,0 +1,31 @@
[[source]]
url = "https://pypi.org/simple"
verify_ssl = true
name = "pypi"
[packages]
datasets = "*"
pandas = "*"
litellm = "*"
termcolor = "*"
seaborn = "*"
docker = "*"
fastapi = "*"
uvicorn = {extras = ["standard"], version = "*"}
ruff = "*"
mypy = "*"
langchain = "*"
langchain-core = "*"
langchain-community = "*"
llama-index = "*"
llama-index-vector-stores-chroma = "*"
chromadb = "*"
llama-index-embeddings-huggingface = "*"
llama-index-embeddings-azure-openai = "*"
llama-index-embeddings-ollama = "*"
google-generativeai = "*"
[dev-packages]
[requires]
python_version = "3.10"

Pipfile.lock (generated, 3949 lines added)

File diff suppressed because it is too large.

README.md

@@ -30,22 +30,26 @@ Then pull our latest image [here](https://github.com/opendevin/OpenDevin/pkgs/co
```bash
docker pull ghcr.io/opendevin/sandbox:v0.1
```
Then start the backend:
We manage python packages and the virtual environment with `pipenv`.
Make sure you have Python >= 3.10.
```bash
python -m pip install pipenv
pipenv install -v
pipenv shell
export OPENAI_API_KEY="..."
export WORKSPACE_DIR="/path/to/your/project"
python -m pip install -r requirements.txt
uvicorn opendevin.server.listen:app --port 3000
```
Then in a second terminal:
```bash
cd frontend
npm install
npm start
```
You'll see OpenDevin running at localhost:3001
The virtual environment is now activated, and you should see `(OpenDevin)` in front of your command-line prompt.
### Picking a Model
We use LiteLLM, so you can run OpenDevin with any foundation model, including OpenAI, Claude, and Gemini.
@@ -69,6 +73,20 @@ And you can customize which embeddings are used for the vector database storage:
export LLM_EMBEDDING_MODEL="llama2" # can be "llama2", "openai", "azureopenai", or "local"
```
### Running the app
You should now be able to run the backend:
```bash
uvicorn opendevin.server.listen:app --port 3000
```
Then in a second terminal:
```bash
cd frontend
npm install
npm run start -- --port 3001
```
You'll see OpenDevin running at localhost:3001
### Running on the Command Line
You can run OpenDevin from your command line:
```bash

requirements.txt (deleted, 22 lines removed)

@@ -1,22 +0,0 @@
datasets
pandas
litellm
termcolor
seaborn
docker
fastapi
uvicorn[standard]
ruff
mypy
pytest
# for agenthub/lanchangs_agent
langchain
langchain-core
langchain-community
llama-index
llama-index-vector-stores-chroma
chromadb
llama-index-embeddings-huggingface
llama-index-embeddings-azure-openai
llama-index-embeddings-ollama
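Comparing the deleted requirements.txt against the new Pipfile's `[packages]` table shows what this commit effectively changes in the dependency set. A sketch of that comparison (extras such as `uvicorn[standard]` are normalized to the bare package name, and the commented-out `pytest` question is worth noting: it appears in neither `[packages]` nor `[dev-packages]` in the new Pipfile):

```python
# Package names from the deleted requirements.txt (extras stripped).
requirements = {
    "datasets", "pandas", "litellm", "termcolor", "seaborn", "docker",
    "fastapi", "uvicorn", "ruff", "mypy", "pytest", "langchain",
    "langchain-core", "langchain-community", "llama-index",
    "llama-index-vector-stores-chroma", "chromadb",
    "llama-index-embeddings-huggingface",
    "llama-index-embeddings-azure-openai",
    "llama-index-embeddings-ollama",
}

# Package names from the new Pipfile's [packages] table.
pipfile_packages = {
    "datasets", "pandas", "litellm", "termcolor", "seaborn", "docker",
    "fastapi", "uvicorn", "ruff", "mypy", "langchain", "langchain-core",
    "langchain-community", "llama-index",
    "llama-index-vector-stores-chroma", "chromadb",
    "llama-index-embeddings-huggingface",
    "llama-index-embeddings-azure-openai",
    "llama-index-embeddings-ollama", "google-generativeai",
}

print(sorted(pipfile_packages - requirements))  # added by this commit
print(sorted(requirements - pipfile_packages))  # dropped by this commit
```

The output shows `google-generativeai` added and `pytest` dropped, which matches the commit message for the addition; the `pytest` removal is a side effect of the pipenv migration.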