Update documentation again (#8362)

This commit is contained in:
mamoodi
2025-05-08 15:56:04 -04:00
committed by GitHub
parent b030594646
commit a87bb10bfc
6 changed files with 53 additions and 70 deletions


@@ -1,17 +1,17 @@
# CLI Mode
CLI mode provides a powerful interactive Command-Line Interface (CLI) that lets you engage with OpenHands directly
from your terminal.

This mode is different from the [headless mode](headless-mode), which is non-interactive and better for scripting.
## Getting Started
### Running with Python

1. Ensure you have followed the [Development setup instructions](https://github.com/All-Hands-AI/OpenHands/blob/main/Development.md).
2. Set your model, API key, and other preferences using environment variables or with the [`config.toml`](https://github.com/All-Hands-AI/OpenHands/blob/main/config.template.toml) file.
3. Launch an interactive OpenHands conversation from the command line:
```bash
poetry run python -m openhands.cli.main
```
@@ -19,17 +19,12 @@ poetry run python -m openhands.cli.main
This command opens an interactive prompt where you can type tasks or commands and get responses from OpenHands.
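For reference, the same model and key preferences can be captured in `config.toml`; the snippet below is a minimal sketch with example values, and the exact key names should be verified against [`config.template.toml`](https://github.com/All-Hands-AI/OpenHands/blob/main/config.template.toml):

```toml
# Minimal sketch - verify key names against config.template.toml.
[llm]
model = "anthropic/claude-3-7-sonnet-20250219"  # example model name
api_key = "sk_test_12345"                       # example placeholder key
```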
### Running with Docker

1. Set the following environment variables in your terminal:
   - `SANDBOX_VOLUMES` to specify the directory you want OpenHands to access ([See using SANDBOX_VOLUMES for more info](../runtimes/docker#using-sandbox_volumes))
   - `LLM_MODEL` - the LLM model to use (e.g. `export LLM_MODEL="anthropic/claude-3-7-sonnet-20250219"`)
   - `LLM_API_KEY` - your API key (e.g. `export LLM_API_KEY="sk_test_12345"`)
2. Run the following command:
@@ -53,22 +48,19 @@ This launches the CLI in Docker, allowing you to interact with OpenHands as desc
The `-e SANDBOX_USER_ID=$(id -u)` ensures files created by the agent in your workspace have the correct permissions.
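Putting step 1 together, a typical shell setup might look like the following; the path, model name, and key are example values, not requirements:

```shell
# Example values only - substitute your own directory, model, and API key.
export SANDBOX_VOLUMES=$(pwd)/workspace:/workspace:rw   # directory the agent may read and write
export LLM_MODEL="anthropic/claude-3-7-sonnet-20250219" # LLM to use
export LLM_API_KEY="sk_test_12345"                      # your provider API key
```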
## Interactive CLI Overview
### What is CLI Mode?
CLI mode enables real-time interaction with OpenHands agents. You can type natural language tasks, use interactive
commands, and receive instant feedback—all inside your terminal.
### Starting a Conversation
When you start the CLI, you'll see a welcome message and a prompt (`>`). Enter your first task or type a command to
begin your conversation.
### Available Commands
You can use the following commands whenever the prompt (`>`) is displayed:
@@ -82,30 +74,25 @@ You can use the following commands whenever the prompt (`>`) is displayed:
| `/settings` | View and modify current LLM/agent settings |
| `/resume` | Resume the agent if paused |
#### Settings and Configuration

You can update your model, API key, agent, and other preferences interactively using the `/settings` command. Just
follow the prompts:
- **Basic settings**: Choose a model/provider and enter your API key.
- **Advanced settings**: Set custom endpoints, enable or disable confirmation mode, and configure memory condensation.
Settings can also be managed via the `config.toml` file.
#### Repository Initialization

The `/init` command helps the agent understand your project by creating a `.openhands/microagents/repo.md` file with
project details and structure. Use this when onboarding the agent to a new codebase.

#### Agent Pause/Resume Feature

You can pause the agent while it is running by pressing `Ctrl-P`. To continue the conversation after pausing, simply
type `/resume` at the prompt.
## Tips and Troubleshooting


@@ -39,8 +39,9 @@ OpenHands automatically exports a `GITHUB_TOKEN` to the shell environment if pro
- Minimal Permissions (select `Meta Data = Read-only` for search, plus `Pull Requests = Read and Write` and `Content = Read and Write` for branch creation)
2. **Enter Token in OpenHands**:
- Click the Settings button (gear icon).
- Navigate to the `Git` tab.
- Paste your token in the `GitHub Token` field.
- Click `Save Changes` to apply the changes.
</details>

<details>
@@ -98,9 +99,9 @@ OpenHands automatically exports a `GITLAB_TOKEN` to the shell environment if pro
- Set an expiration date or leave it blank for a non-expiring token.
2. **Enter Token in OpenHands**:
- Click the Settings button (gear icon).
- Navigate to the `Git` tab.
- Paste your token in the `GitLab Token` field.
- Click `Save Changes` to apply the changes.
</details>

<details>
@@ -112,7 +113,6 @@ OpenHands automatically exports a `GITLAB_TOKEN` to the shell environment if pro
- Ensure the token is properly saved in settings.
- Check that the token hasn't expired.
- Verify the token has the required scopes.
- **Access Denied**:
- Verify project access permissions.


@@ -21,13 +21,10 @@ You'll need to be sure to set your model, API key, and other settings via enviro
To run OpenHands in Headless mode with Docker:

1. Set the following environment variables in your terminal:
   - `SANDBOX_VOLUMES` to specify the directory you want OpenHands to access ([See using SANDBOX_VOLUMES for more info](../runtimes/docker#using-sandbox_volumes))
   - `LLM_MODEL` - the LLM model to use (e.g. `export LLM_MODEL="anthropic/claude-3-7-sonnet-20250219"`)
   - `LLM_API_KEY` - your API key (e.g. `export LLM_API_KEY="sk_test_12345"`)
2. Run the following Docker command:
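As in step 1 above, a typical environment setup for a headless run might look like this; all values are examples, and the second, read-only mount (after the comma) is optional:

```shell
# Example values only - adjust paths, model, and key for your setup.
export SANDBOX_VOLUMES=$(pwd)/project:/workspace:rw,/path/to/dataset:/data:ro
export LLM_MODEL="anthropic/claude-3-7-sonnet-20250219"
export LLM_API_KEY="sk_test_12345"
```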


@@ -1,12 +1,15 @@
# Model Context Protocol (MCP)
:::note
This page outlines how to configure and use the Model Context Protocol (MCP) in OpenHands, allowing you to extend the
agent's capabilities with custom tools.
:::
## Overview

Model Context Protocol (MCP) is a mechanism that allows OpenHands to communicate with external tool servers. These
servers can provide additional functionality to the agent, such as specialized data processing, external API access,
or custom tools. MCP is based on the open standard defined at [modelcontextprotocol.io](https://modelcontextprotocol.io).
## Configuration
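As a rough illustration only - the table and key names below are assumptions, so check them against the concrete examples on this page and [`config.template.toml`](https://github.com/All-Hands-AI/OpenHands/blob/main/config.template.toml) - an MCP section in `config.toml` might look like:

```toml
# Hypothetical sketch - verify table and key names against the real examples.
[mcp]
# SSE servers are reached over HTTP at a URL.
sse_servers = ["http://example.com:8080/mcp"]

# Stdio servers are launched locally as subprocesses.
[[mcp.stdio_servers]]
name = "fetch"
command = "uvx"
args = ["mcp-server-fetch"]
```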
@@ -79,13 +82,13 @@ Stdio servers are configured using an object with the following properties:
When OpenHands starts, it:

1. Reads the MCP configuration from `config.toml`.
2. Connects to any configured SSE servers.
3. Starts any configured stdio servers.
4. Registers the tools provided by these servers with the agent.
The agent can then use these tools just like any built-in tool. When the agent calls an MCP tool:

1. OpenHands routes the call to the appropriate MCP server.
2. The server processes the request and returns a response.
3. OpenHands converts the response to an observation and presents it to the agent.


@@ -15,9 +15,11 @@ A useful feature is the ability to connect to your local filesystem. To mount yo
The simplest way to mount your local filesystem is to use the `SANDBOX_VOLUMES` environment variable:

```bash
export SANDBOX_VOLUMES=/path/to/your/code:/workspace:rw
docker run # ...
    -e SANDBOX_USER_ID=$(id -u) \
    -e SANDBOX_VOLUMES=$SANDBOX_VOLUMES \
    # ...
```
@@ -32,23 +34,23 @@ The `SANDBOX_VOLUMES` format is `host_path:container_path[:mode]` where:
You can also specify multiple mounts by separating them with commas (`,`):

```bash
export SANDBOX_VOLUMES=/path1:/workspace/path1,/path2:/workspace/path2:ro
```
Examples:

```bash
# Linux and Mac Example - Writable workspace
export SANDBOX_VOLUMES=$HOME/OpenHands:/workspace:rw

# WSL on Windows Example - Writable workspace
export SANDBOX_VOLUMES=/mnt/c/dev/OpenHands:/workspace:rw

# Read-only reference code example
export SANDBOX_VOLUMES=/path/to/reference/code:/data:ro

# Multiple mounts example - Writable workspace with read-only reference data
export SANDBOX_VOLUMES=$HOME/projects:/workspace:rw,/path/to/large/dataset:/data:ro
```
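To make the `host_path:container_path[:mode]` format concrete, here is an illustrative shell sketch (not OpenHands' actual parsing code) that splits a single entry into its three fields:

```shell
# Illustrative only - OpenHands parses SANDBOX_VOLUMES internally.
entry="$HOME/projects:/workspace:rw"
IFS=: read -r host_path container_path mode <<< "$entry"
mode=${mode:-rw}  # the mode field is optional and defaults to read-write
echo "mount $host_path at $container_path ($mode)"
```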
### Using WORKSPACE_* variables (Deprecated) ### Using WORKSPACE_* variables (Deprecated)


@@ -53,15 +53,9 @@ If `SANDBOX_VOLUMES` is not set, the runtime will create a temporary directory f
Here's an example of how to start OpenHands with the Local Runtime in Headless Mode:
```bash
export RUNTIME=local
export SANDBOX_VOLUMES=/my_folder/myproject:/workspace:rw

poetry run python -m openhands.core.main -t "write a bash script that prints hi"
```